Friday, March 26, 2021

YUM repositories

Introduction
============

In order to access a YUM repository, you must create a .repo file under
/etc/yum.repos.d or define the repository directly inside /etc/yum.conf, though
the former is much preferred. Once a .repo file is created under
/etc/yum.repos.d, and given that all its options are valid and the path to the
repository (baseurl) is accessible, the repository will appear in the
"yum repolist" output. Here is a sample .repo file. Options can be written in
either uppercase or lowercase.
[local]  # this is the repo id
name=My local repo
baseurl=file:///localrepo  # valid locations are http://, ftp://, file:///path, ..
#mirrorlist=file:///root  # alternative to baseurl; can point to a file containing multiple baseurls
enabled=1  # 1 (enabled), 0 (disabled), not specifying this means it is enabled
gpgcheck=0  # 1 (enabled), 0 (disabled); tells yum whether to perform a GPG signature check on the packages
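The sample above is plain INI text. Here is a quick bash sketch (illustrative only; yum itself uses a full INI parser) that pulls the repo id and baseurl out of such a file:

```shell
# Write the sample repo file to a temporary location, then extract fields.
cat > /tmp/local.repo <<'EOF'
[local]
name=My local repo
baseurl=file:///localrepo
enabled=1
gpgcheck=0
EOF

# The repo id is the text between the square brackets.
repoid=$(sed -n 's/^\[\(.*\)\]$/\1/p' /tmp/local.repo)
# The baseurl is everything after "baseurl=".
baseurl=$(sed -n 's/^baseurl=//p' /tmp/local.repo)
echo "$repoid -> $baseurl"
```

Running it prints `local -> file:///localrepo`, the two values yum needs before it can list the repo.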

Here is another example now with yum variables involved.
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Where:
  $releasever = usually the number part in "cat /etc/redhat-release" output
  $basearch = architecture (as seen in "uname -p")
  $infra = whatever inside /etc/yum/vars/infra (mine is set to "stock")

Decoding the URL Paths
======================

So if we decode the mirrorlist URL above for a
"CentOS Linux release 7.3.1611 (Core)" 64-bit server, it becomes:

http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates&infra=stock

Notice that 7.3.1611 was stripped down to 7 in the URL: $releasever expands to
the major release number only. These yum variables can be overridden by
creating a file with the same name under /etc/yum/vars. In CentOS 7.3.1611,
there is only one file inside that directory, infra, whose content is set to
"stock".
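The substitution yum performs can be simulated with a small bash sketch (illustrative only; yum does this expansion internally when it reads the .repo file, and the variable values here are the ones from this server):

```shell
# Values yum would derive on this CentOS 7.3.1611 x86_64 box.
releasever=7
basearch=x86_64
infra=stock

# The raw mirrorlist line from the [updates] repo example.
url='http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra'

# Substitute each $variable, as yum would.
expanded=$(printf '%s' "$url" | sed -e "s/\$releasever/$releasever/" \
                                    -e "s/\$basearch/$basearch/" \
                                    -e "s/\$infra/$infra/")
echo "$expanded"
```

The output is the fully decoded URL you could paste into a browser to see the actual mirror list.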

Common types of repositories you might encounter
================================================

1. Base - packages that came from the installation discs
2. Updates - usually external repos containing updates for packages
             installed from Base
3. EPEL/extras - extra packages that are not included in Base

Advanced options for yum.conf
=============================

Here are some advanced options for yum.conf that are rarely used but helpful.
Updating yum.conf does not require restarting any service; the settings are
read the next time you run a transaction via the "yum" command.

showdupesfromrepos=[0|1]

  - default is 0
  - if you have multiple repos, by default "yum list <package>" will search
    all of those repos and display only the package with the latest version,
    leaving the others hidden from view
  - setting this option to "1" will also list the matching packages from the
    other repos, regardless of version
  - even when set to "1", this doesn't affect "yum install <package>", which
    by default installs the latest package
  - without specifying this in yum.conf, you can get the same behavior per
    transaction by adding the "--showduplicates" option

Sources
=======

yum.conf(5)  - contains options for main config file as well as options for
               custom .repo files under /etc/yum.repos.d
yum(8)  - command line options for "yum"

Thursday, March 25, 2021

Convert GlusterFS Volume from Distributed-Replicated to Replicated

You have an existing gluster volume "gv0" backed by 4 bricks, one from each
server. There are a total of 4 gluster servers.

1. Remove 2 bricks from the volume. The number of bricks to be removed must be
   a multiple of the replica count.
   gluster volume remove-brick gv0 node{3..4}:/bricks/gv0/brick1 start

   The "start" at the end tells gluster to move existing data from the bricks
   being removed to the remaining bricks.

2. Check the status of the data migration. Make sure it is completed, then
   commit the removal before proceeding to the next step.
   gluster volume remove-brick gv0 node{3..4}:/bricks/gv0/brick1 status
   gluster volume remove-brick gv0 node{3..4}:/bricks/gv0/brick1 commit

3. Re-add the 2 bricks to form a 4-way replicated volume.
   gluster volume add-brick gv0 replica 4 node{3..4}:/bricks/gv0/brick1

4. You might notice that the re-added bricks' sizes are smaller than expected.
   Some files may not be present on them yet, but any access to a missing file
   triggers self-heal and makes it appear on the bricks. A simple `ls` is
   enough to make the files reappear.

Wednesday, March 24, 2021

Installing Artifactory on CentOS 7

Host OS: CentOS 7.3.1611 (Core)

Artifactory Pro Version: 4.15.0


  1. Setup Artifactory

        a. download jfrog-artifactory-pro-4.15.0.zip from: https://bintray.com/jfrog/artifactory-pro/download_file?file_path=org%2Fartifactory%2Fpro%2Fjfrog-artifactory-pro%2F4.15.0%2Fjfrog-artifactory-pro-4.15.0.zip

        b. unzip file to your preferred installation directory

unzip jfrog-artifactory-pro-4.15.0.zip -d /opt
        c. setup ARTIFACTORY_HOME
echo 'export ARTIFACTORY_HOME=/opt/artifactory-pro-4.15.0' >> ~/.bash_profile
source ~/.bash_profile


  2. Setup Java

        a. download jdk-8u131-linux-x64.rpm from:

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

        b. install

rpm -ivh jdk-8u131-linux-x64.rpm
        c. setup JAVA_HOME as root:
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_131' >> ~/.bash_profile
source ~/.bash_profile


  3. Setup MySQL

        a. install package as root

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum install mysql-community-server
        b. setup service
systemctl start mysqld
systemctl enable mysqld
mysql_secure_installation
        c. create artifactory database
mysql -uroot -p
mysql> CREATE DATABASE artdb CHARACTER SET utf8 COLLATE utf8_bin;
mysql> GRANT ALL ON artdb.* TO 'artifactory'@'localhost' IDENTIFIED BY 'yourpassword';
mysql> FLUSH PRIVILEGES;
        d. make artifactory use the database
cp $ARTIFACTORY_HOME/misc/db/mysql.properties $ARTIFACTORY_HOME/etc/db.properties
        e. download mysql-connector from: https://dev.mysql.com/downloads/file/?id=470332

        f. extract and copy mysql-connector to $ARTIFACTORY_HOME/tomcat/lib/
tar xvf mysql-connector-java-5.1.42.tar.gz
cp mysql-connector-java-5.1.42/mysql-connector-java-5.1.42-bin.jar $ARTIFACTORY_HOME/tomcat/lib/


  4. Open firewall port and start artifactory
firewall-cmd --add-port=8081/tcp --permanent
firewall-cmd --reload
$ARTIFACTORY_HOME/bin/artifactoryctl start


  5. Validate installation by accessing the following URL

http://<server-ip>:8081

username: admin

default password: password # be sure to change this once you log in


Source

https://www.jfrog.com/confluence/display/RTF/MySQL

Tuesday, March 23, 2021

Types of GlusterFS Volumes

Distributed
- default type
- distributes data evenly across nodes (with 100 files, 50 will be on node1 and 50 will be on node2)
- a file is located on one brick only
- faster than replicated
- no redundancy
- no automatic failover
- volume size is the sum of brick sizes
- sample command:
  gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Replicated
- w/ redundancy
- a file appears on all bricks
- volume size is the size of each brick
- sample command:
  gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
    * "replica 2" means keep 2 copies of data at a given time
Distributed-Replicated
- adjacent bricks are replicas of each other
    * e.g. with 8 bricks, brick1 and brick2 are replicas of each other, brick3
      and brick4 are replicas of each other, and so on
- sample command:
  gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Striped
- a file is divided into chunks
- each chunk is stored on a different brick
- faster access
- no redundancy
- sample command:
  gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2

Monday, March 22, 2021

GlusterFS Quick Setup on Fedora 26

1. Prepare at least 2 nodes

2. Create bricks
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount

3. Install package (both nodes)
yum install glusterfs-server

4. Allow communication (both nodes)
iptables -I INPUT -p all -s <ip-of-other-node> -j ACCEPT

5. Configure trusted pool
gluster peer probe server2 # from server1
gluster peer probe server1 # from server2
gluster peer status # both servers

6. Create bricks to export (both nodes)
mkdir -p /data/brick1/gv0

7. Create gluster volume (from any single node)
gluster volume create gv0 replica 2 transport tcp server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0

8. Mount the volume inside the servers (both nodes)

mount -t glusterfs $(hostname):/gv0 /mnt

Sunday, March 21, 2021

x86 Architecture and Instruction Set Overview

Basic Info

  • 16-bit (needs two clock cycles to transfer all 32 bits) or 32-bit
  • Data types based on size
    • Byte: 8 bits
    • Word: 16 bits
    • Doubleword: 32 bits
    • Quadword: 64 bits
    • Double quadword: 128 bits

Internal Architecture

Instruction Categories

  • Data movement
  • Stack manipulation
  • Arithmetic and logic
  • Conversions
  • Control flows
  • String and flag manipulation
  • Input/Output
  • Protected mode

Other instruction categories

  • Floating-point instructions
  • SIMD instructions
  • SSE3 (Streaming SIMD Extension 3)
  • AES instructions - for AES encryption
  • MPX instructions - for memory protection
  • SMX instructions - safer mode
  • TSX instructions - for transactional synchronization extensions
  • VMX instructions - virtual machine extensions

Common instruction patterns

xor reg, reg ; Set reg to zero
test reg, reg ; Test if reg contains zero
add reg, reg ; Shift reg left by one bit
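These identities can be sanity-checked with shell arithmetic (a sketch; any value of x works):

```shell
# Verify the register-idiom identities using bash arithmetic.
x=13
echo $(( x ^ x ))                 # xor reg,reg zeroes the value -> 0
echo $(( x + x )) $(( x << 1 ))   # add reg,reg doubles it, same as a left shift by 1
```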


Instruction formats

  • Individual instruction size = 1 to 15 bytes

x86 Assembly Language

Here is example assembly code that prints a string and exits in a Windows console.


.386
.model FLAT,C
.stack 400h
.code
includelib libcmt.lib
includelib legacy_stdio_definitions.lib
extern printf:near
extern exit:near
public main
main proc
; Print the message
push offset message
call printf
; Exit the program with status 0
push 0
call exit
main endp
.data
message db "Hello, Computer Architect!",0
end


To run it:

ml /Fl /Zi /Zd hello_x86.asm

Saturday, March 20, 2021

Network Time Protocol (NTP)

Introduction to NTP

  Devices across the network often need an accurate time, which is achieved by synchronizing to a reference clock. One example of a network setup that is very dependent on accurate time is a kerberized environment: Kerberos uses timestamps to determine the validity of a TGT (Ticket Granting Ticket). One way of achieving accurate time is via the Network Time Protocol (NTP), which runs over UDP port 123.

  In a typical setup, a client has NTP installed and synchronizes to a reference clock, which is either a central NTP server within your organization or a publicly accessible NTP server like 0.rhel.pool.ntp.org.

Strata and atomic clocks

In NTP, we have the so-called "strata" (singular: stratum), which is the level at which your reference clock sits. Atomic clocks are the most accurate clocks in the world and are located at stratum 0.

Strata levels:
0   - most accurate clocks: atomic clocks, GPS, mobile phone systems
1   - devices with GPS/atomic clocks attached
2   - syncs from stratum 1; serves lower strata
...
15 - lowest valid stratum
16 - devices located here are in unsynchronized state

The Drift File

  This contains the offset (or difference) in PPM (parts per million) between your system's clock and a reference clock. In RHEL/CentOS specifically, it is /var/lib/ntp/drift. If this file is present when NTP is started/restarted, ntpd will adjust your time based on the content of the file. The existence of the drift file allows ntpd to quickly adjust the time without recomputing the frequency from scratch, so you will see the following line in /var/log/messages:

  Mar 25 21:55:11 rhel6 ntpd[8991]: frequency initialized -37.951 PPM from /var/lib/ntp/drift

If it doesn't exist, the next restart will make ntpd launch in a special mode for 15 minutes and then go back to normal mode. ntpd recreates (rm /var/lib/ntp/drift && touch /var/lib/ntp/drift) the drift file every hour, so it is important that NTP has the correct permissions on the /var/lib/ntp directory.

  A positive value means that your clock is fast compared to the reference and a negative value means it is slow. We have 86,400 s/day, and dividing that by 1,000,000 PPM gives 0.0864 s/day per PPM. That factor converts the content of the drift file into seconds of drift per day.

  Using the above /var/log/messages excerpt as an example, the drift file's content is -37.951, and multiplying it by 0.0864 gives -3.279. It means that your system runs about 3.3 seconds per day slower than the reference clock. That might not sound like a lot, but from time synchronization's perspective that offset is large! Usually acceptable values are in the millisecond range.
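The conversion can be reproduced with a one-liner (-37.951 is the value from the log excerpt above):

```shell
# Convert a drift-file value (in PPM) into seconds of drift per day:
# seconds/day = ppm * 86400 / 1,000,000
awk -v ppm=-37.951 'BEGIN { printf "%.3f\n", ppm * 86400 / 1000000 }'
```

This prints -3.279, i.e. roughly 3.3 seconds of drift per day.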

The NTP daemon (NTPD)

When launched, you will see the following in ps output.

    /usr/sbin/ntpd -u ntp:ntp -p /var/run/ntpd.pid -g

That is true on RHEL 6.X; on RHEL 7.X, the "-p" part is omitted. "-u ntp:ntp" drops ntpd's privileges to the ntp user and ntp group, while "-p /var/run/ntpd.pid" specifies the file holding the PID. By default, ntpd will exit when it sees more than a 1000-second difference between your time and the reference clock; "-g" overrides that default behavior so ntpd continues to operate and syncs to the reference anyway.

Configuring an NTP client

1. Install NTP package
  # yum install -y ntp
2. Add your reference clocks (NTP servers)
  # vi /etc/ntp.conf
  server my.ntp1.server iburst  # iburst speeds up initial synchronization by sending a burst of packets
  server my.ntp2.server
3. Start and enable NTPD at boot
  RHEL 6/SysV-based systems:
  # service ntpd start
  # chkconfig ntpd on
  RHEL 7.X/Systemd-based servers:
  # systemctl start ntpd
  # systemctl enable ntpd
4. Check if your system is synchronizing to the NTP server. NTP may take some time to sync.
  # ntpstat  # keyword to see is "synchronised to X at stratum Y..."
  # ntpq -p

Configuring an NTP server

1. Install NTP package
  # yum install -y ntp
2. Add your reference clocks (NTP servers). Since we are configuring an NTP server to serve time to devices in our internal organization, we need to sync to a publicly accessible NTP pool.
  # vi /etc/ntp.conf
  server 0.centos.pool.ntp.org iburst
  server 1.centos.pool.ntp.org
3. Update ntp.conf to allow access to our clients
  # vi /etc/ntp.conf
  restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
  # hint: the line above is usually present but commented out in a default ntp.conf, so there's no need to memorize the format
4. Start and enable NTPD at boot
  RHEL 6/SysV-based systems:
  # service ntpd start
  # chkconfig ntpd on
  RHEL 7.X/Systemd-based servers:
  # systemctl start ntpd
  # systemctl enable ntpd
5. Open up UDP/port 123 on our firewall (if there is any firewall that is active)
  RHEL 6.X/SysV-based systems:
  # iptables -A INPUT -m state --state NEW -p udp --dport 123 -j ACCEPT
  # service iptables restart
   RHEL 7.X/Systemd-based servers:
  # firewall-cmd --add-service=ntp --permanent
  # firewall-cmd --reload
6. Monitor if you are syncing to an internet NTP pool
  # ntpstat  # keyword to see is "synchronised to X at stratum Y..."
  # ntpq -p

Understanding "ntpq -p" output

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*80.26.104.184   217.127.32.90    2 u   66  256  377  470.247   32.058  33.497
+128.95.231.7    140.142.2.8      3 u  254  256  377  217.646   -3.832   2.734
+64.112.189.11   128.10.252.6     2 u    2  256  377  258.208    2.395  47.246
 127.127.1.0     LOCAL(0)        10 l   56   64  377    0.000    0.000   0.002

Where:
  remote - reference clock/NTP server; 127.127.1.0 refers to yourself
    * = selected as primary time source
    + = included in the average computation
    - = rejected
  refid - NTP server of your reference clock/NTP server
  st - stratum of "remote"
  t - server type (unicast/broadcast/multicast/local)
  when - how long since last poll (in seconds)
  poll - how frequently to query server (in seconds)
  reach - success/failure of the last 8 queries, shown as an octal bitmask
    377 = 11111111 = all of the last 8 queries succeeded
    257 = 10101111 = the last 4 queries succeeded; two older ones failed
    small values such as 5 or 7 = most of the last 8 queries failed
  delay - network round trip time (in milliseconds)
  offset - difference between local clock and remote clock (in milliseconds)
  jitter - difference of successive time values from server (high jitter might mean unstable clock or poor network performance)

Sources

RHEL 6 Deployment Guide
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/Deployment_Guide/Red_Hat_Enterprise_Linux-6-Deployment_Guide-en-US.pdf

Everything about NTP

ntp.conf(5)  - contains options for ntp configuration
ntpd(8)  - NTP daemon