
Friday, April 30, 2021

REGEX (Regular Expressions)

Start and end of line

        #  matches all lines starting with "cat"

egrep '^cat' regex.txt

# matches all lines ending with "cat"

egrep 'cat$' regex.txt

# matches all lines that have "cat" anywhere on the line

egrep 'cat' regex.txt

# matches a line that contains only "cat"

egrep '^cat$' regex.txt

# matches empty lines

egrep '^$' regex.txt

# matches non-empty lines (-v is for negating the output)

egrep -v '^$' regex.txt
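The post never shows regex.txt itself. Here is a minimal sample file you could create to follow along (the contents below are my own invention, not from the original), together with the match counts the patterns above produce on it:

```shell
# build a small test file (contents made up for illustration)
cat > regex.txt << 'EOF'
cat
the cat is furry
concatenate this file
tomcat

gray
grey
EOF

egrep -c '^cat' regex.txt    # 1 line starts with "cat"
egrep -c 'cat$' regex.txt    # 2 lines end with "cat" ("cat" and "tomcat")
egrep -c '^cat$' regex.txt   # 1 line contains only "cat"
egrep -c '^$' regex.txt      # 1 empty line
```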

Single character match

       # there must be any single character between "a" and "c"

egrep 'a.c' regex.txt

Character class

        # matches "gray" or "grey"

egrep 'gr[ae]y' regex.txt

#  combining several character classes

egrep 'sep[ea]r[ea]te' regex.txt

# matches "<H1>", "<H2>", and "<H3>"

egrep '<H[123]>' regex.txt

# same as above ^ (provides a range)

egrep '<H[1-3]>' regex.txt

# matches "<H[-]>" (doesn't provide a range)

egrep '<H[-]>' regex.txt

# multiple ranges are fine

egrep '<H[0123456789abcdefABCDEF]>' regex.txt

# simplified version of the above ^ expression

egrep '<H[0-9a-fA-F]>' regex.txt

# matches a "!", ".", "_", and "?"

egrep '<H[!._?]>' regex.txt

# match if and only if there is something that is not

# "<Hx>" (remember this concept)

egrep '<H[^x]>' regex.txt

# matches all that are not "<H1>", "<H2>",

# or "<H3>"

egrep '<H[^1-3]>' regex.txt

Alternatives

        # matches "gray" or "grey"

egrep 'gray|grey' regex.txt

# same as above ^

egrep 'gr(a|e)y' regex.txt

# matches any line that begins with 'From: ',

# 'To: ', or 'Subject: '

egrep '^(From|To|Subject): ' regex.txt

Word boundaries

        # matches all lines that have a string which starts with "cat"

egrep '\<cat' regex.txt

# matches all lines that have a string which ends with "cat"

egrep 'cat\>' regex.txt

# matches all lines only that have a word "cat" which is not

# embedded within another word (or string). e.g. this will

# match line `the cat is furry` but not `concatenate this file`

egrep '\<cat\>' regex.txt

Optional items

        # matches lines with string "color" or "colour" ("u" is optional)

egrep 'colou?r' regex.txt

# same as `egrep '(July|Jul) (4th|four|4)' regex.txt`

egrep 'July? (four|4(th)?)' regex.txt

# allows one optional space

egrep '<H1 ?>' regex.txt

Quantifiers: repetition

        # matches "<H1>", "<H1 >", "<H1  >", "<H1   >", and

# so on (no space, w/ one space, or w/ more than

# one space after H1)

egrep '<H1 *>' regex.txt

# matches "<H1 >", "<H1  >", "<H1   >", and so on

# (at least one space after H1 is required)

egrep '<H1 +>' regex.txt

# matches "<H>", "<H1>", "<H2>", "<H3>", ... "<H9>"

# (number after "H" is not required)

egrep '<H[0-9]*>' regex.txt

# matches "<H0>", "<H1>", "<H2>", "<H3>", ... "<H9>"

# (number after "H" is required)

egrep '<H[0-9]+>' regex.txt

# matches "o" for atleast once or up to 3

# times ({min,max})

egrep 'co{1,3}l' regex.txt

# matches "o" for exactly 3 times ({min,max})

egrep 'co{3,3}l' regex.txt

# see p 75/780 of "OReilly - Mastering Regular Expressions" book

egrep '<HR +SIZE *= *[0-9]+ *>' regex.txt

Parentheses and backreferences

        # matches all words that are repeated at least

# twice (with a space between repetitions), like

# "the the", "apple apple apple", etc.

# NOTE: not all egrep versions support backreferences (\1) and \< .. \>

egrep '\<([a-zA-Z]+) +\1\>' regex.txt

# same as above, but with the -i option this version also

# matches double words with different capitalization,

# like "The the"

egrep -i '\<([a-zA-Z]+) +\1\>' regex.txt

Escape sequence

        # removes the special function of "." w/c

# is to match any single character

egrep 'www\.facebook\.com' regex.txt

Miscellaneous

        # a shell glob, not a regex: this moves files in the current

# directory whose names contain a dot to the target directory

mv *.* Archive/

Some examples

        # matches a variable name that is allowed to contain only

# alphanumeric characters and underscores, but which may

# not begin with a number

egrep '[a-zA-Z_][a-zA-Z_0-9]*' regex.txt

# a string within doublequotes (see book for explanation)

egrep '"[^"]*"' regex.txt

# dollar amount (with optional cents)

egrep '\$[0-9]+(\.[0-9][0-9])?' regex.txt

# time of day, such as "9:17 am" or "12:30 pm"

egrep '(1[012]|[1-9]):[0-5][0-9] (am|pm)' regex.txt

Metacharacters

- special characters that are used to match and manipulate patterns

^  :  matches start of line

$  :  matches end of line

|  :  provides alternatives

.  :  matches any single character

() :  you can put alternatives inside (separated by |)

?  :  quantifier - optional item (must be placed after the optional item)

*  :  quantifier - matches zero or more of the immediately-preceding item (the item may be absent and the rest of the pattern can still match)

+  :  quantifier - matches one or more of the immediately-preceding item (at least one occurrence is required)

{min,max}  :  interval quantifier - matches the immediately-preceding item at least "min" and at most "max" times

Character Class

[]  :  represents a single character to match

Character Class Metacharacter

- these are special characters put inside character classes

- they have different meanings inside a character class compared to when placed outside a character class

-  :  provides a range of characters (not considered a metacharacter if it is the first character in the class)

^  :  negates the list

Metasequences

- these are used for word boundaries

- use this if you want to search for a particular string that is not embedded in a larger word

- let's say you want to look only for the word "cat" and disregard lines with "catleya", "concatenate", etc.

- this is not supported on all versions of egrep

\< :  the position at the start of a word

\> :  the position at the end of a word

\1 :  refers back to the text matched by the first parenthesized subexpression (used as a backreferencing tool)

EGREP

egrep "^(From|Subject): " --> same as egrep "^From: |^Subject: "

- Not all egrep programs are the same. The supported set of metacharacters, as well as their meanings, are often different—see your local documentation

- The useful -i option discounts capitalization during a match

grep -i 'regular_expression' text_file  ## searches a file for lines matching the regular expression

grep -i '^$' text_file  ## searches for blank lines

grep -i '^$' text_file | wc -l  ## returns the number of blank lines

grep . text_file  ## filters out blank lines (prints only non-empty lines)

Sunday, April 18, 2021

Running Zenity in Crontab

What is Zenity?


  It is a GNOME application that produces pop-up windows that can be used to send messages to the user. It also has file-selection capabilities via the --file-selection option.

  Example usages are:
  zenity --info # sends a simple pop up message
  zenity --info --text="Backup is running, press OK to close this window"  # with user-defined message


How to run this in CRON?


  Since CRON doesn't run X applications by default (it doesn't know which display to use), you need to add the following option:
  zenity --info --text="Hi, I was launched via CRON" --display=:0


Sources


http://promberger.info/linux/2009/01/02/running-x-apps-like-zenity-from-crontab-solving-cannot-open-display-problem/

zenity(1)

Thursday, April 15, 2021

Linux Command Line Basics

When you first log in to a Linux terminal, you are using bash. The Bourne-Again SHell (or bash) is the default shell. A shell is the interpreter of anything you type on the terminal. In this post, I will briefly discuss the basics of the Linux command line.

Kinds of Shells

bash
  - Bourne Again SHell
  - based on earlier version of sh
  - commonly default shell in Linux

sh
  - Bourne shell
  - the shell on which bash was based
  - not often used in Linux but usually a pointer to /bin/bash

tcsh
  - based on earlier C shell (csh)
  - not popular in Linux

csh
  - the original C Shell
  - not commonly used in Linux
  - very similar to tcsh

ksh
  - combination of features of csh and sh

zsh
  - Z Shell
  - evolution of ksh

How to start a shell?

There are several ways and here are the most common ways.

1. Once you log in to a Linux server (either by SSH or via console), your own shell starts automatically
2. In xterm or a GUI, you can right-click on the desktop and open a terminal. In that way, you are also starting a shell

Parts of a shell

Understanding every part of the shell is very important, especially for Linux administrators, because it is what you interact with every time you deal with Linux. As an example, when you log in as a regular user, the shell will provide a prompt where you can type your commands.

$ uname
Linux
$

The "$" represents something. In Linux by default, it means that you are logged in as regular user. You may customized it by changing the PS1 environment variable. A root's prompt starts with "#" so its important to clearly distinguished a regular user's prompt from a root's prompt. That can avoid accidentally running destructive command like "rm -fr *" on /. The command used in the example above is "uname" and its output is "Linux". Depending on the command you type, it may return an output or nor. Or it may even return an error.

Types of commands you can use

The 2 types of commands are "internal" and "external". Internal commands are built into the shell itself, while external commands come from outside sources (e.g. from installing a package). To see the type of a command, execute the following:

$ type cd
cd is a shell builtin
$ type date
date is /bin/date
$
$ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd
$

The first example tells us that "cd" is an internal command, while the second one indicates that "date" is an external command. Some commands you may encounter have both internal and external versions. To see both, add "-a" to the type command as shown in the 3rd example above.

Getting help

Since we are dealing with the command line, we often need more information on a particular command we are using. There are several ways to display command information.

1. Using "man" - perhaps the most important help utility in Linux
  man <command>  # displays man pages of a command
  man -k <string>  # searches the man database for commands (or description) matching the specified string
  man -K <string>  # similar as above but do a full search (takes significantly longer to complete)
  man <section number> <command/config file>  # prints information on a specific manual section (see discussion below)

Man pages are divided into sections, which group the types of information you can see for a particular command. Not all commands have the same manual sections. Here are the manual sections:

Section Number
  1 -- user commands (contains traditional command usage and options)
  2 -- system calls provided by the kernel (not commonly used by sys ads)
  3 -- library calls
  4 -- device files
  5 -- file formats (these are config files like "man 5 fstab")
  6 -- games
  7 -- miscellaneous
  8 -- system administration commands (usually ones that must be run as root)
  9 -- kernel routines

Of the 9 manual sections, 1, 5, and 8 are the most commonly used by system administrators.

2. Using pinfo/info - not all commands support this
  info coreutils 'ls invocation'  # as an example, you can see this command under "SEE ALSO" of ls (1)

3. Exploring /usr/share/doc
  - some packages include README files under this directory
  - you can see examples here
  - sometimes there are also sample configurations

Environment Variables

These are global values read by the shell and inherited by the user (they can also be overridden) during startup. To see all environment variables, type "env". Here are common environment variables in Linux.

PS1 - this is your prompt (e.g [user@host] $)
HOME - contains path to your home directory
HOSTNAME - the hostname of current machine where you are logged in
LOGNAME - the username used to login to the machine

To display the values, use the following command:

echo $HOME

Wednesday, April 14, 2021

Systemd Mounts

INTRODUCTION


Aside from managing services, systemd can also handle filesystem mounts similar to /etc/fstab. In this post, I
will show you how to mount local and remote filesystems using systemd.

MOUNTING A LOCAL FILESYSTEM


1. Create your .mount unit file
cat << EOF >> /etc/systemd/system/test.mount  # unit filename must match the mountpoint
[Unit]
Description=test mount

[Mount]
What=/dev/mapper/test_vg-test_lv
# Where= must match the name of the unit file
# (systemd only allows comments on their own line, not after an option)
# if this directory doesn't exist, systemd will create it with 0755 permissions
Where=/test

[Install]
# if we want to mount the fs on boot, add this Install section
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start test.mount
systemctl enable test.mount  # this mounts filesystem at startup


2. Validate mount
[root@server ~]# df /test
Filesystem                  1K-blocks  Used Available Use% Mounted on
/dev/mapper/test_vg-test_lv   1041060 32944   1008116   4% /test

[root@server ~]#


Now that you have mounted a filesystem using systemd, you can unmount it with "systemctl stop test.mount"
or still the traditional way with "umount /test". The latter will automatically stop test.mount. If the mountpoint
is a nested directory tree (e.g. /test/sub1), change the unit filename to test-sub1.mount and update
the Where= option.
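If you're unsure of the naming, systemd can derive the unit name for you (a quick sketch; assumes the systemd-escape utility, which ships with systemd):

```shell
# print the unit filename systemd expects for a given mountpoint path
systemd-escape -p --suffix=mount /test/sub1
```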

MOUNTING A REMOTE FILESYSTEM


In the previous part, we used a local filesystem. Now let's try using a CIFS share to mount using systemd.

1. Create your .mount unit file
cat << EOF >> /etc/systemd/system/cifs.mount
[Unit]
Description=test mount (CIFS)

[Mount]
What=//192.168.122.11/share
Where=/cifs
# you can specify mount options below (again, comments must be on their own line)
Options=credentials=/root/cifscreds

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start cifs.mount
systemctl enable cifs.mount


2. Validate mount
[root@server ~]# df /cifs
Filesystem            1K-blocks  Used Available Use% Mounted on
//192.168.122.11/share   1041060 32944   1008116   4% /cifs
[root@server ~]#


SOURCES

Man pages:
systemd.mount(5) - contains basic usage and options

Monday, April 12, 2021

Moving user to a new home directory



This is a simple hack if you want to move an existing user's home directory to a new location. It is helpful if you have a new partition created for users' home dirs (e.g. /home2).

1. Execute this one command
usermod -md /home2/bob bob
# /home2 must exist
# /home2/bob will be created automatically by the command
# bob's files on the old home dir will be moved to the new
# the old home dir (together with its contents) will be deleted


2. Test as root
su - bob
# this shouldn't return an error

3. You can also see that the /etc/passwd entry has been updated as well
grep bob /etc/passwd

Friday, April 9, 2021

Installing ClusterSSH in Windows

You need to install Cygwin first. Cygwin provides Linux functionality on Windows-based machines.

PART 1: Installing Cygwin


NOTE:
  - During this setup, it will allow you to manually install the tools you need like ssh-server and perl. Cygwin has no package manager (like rpm, yum, dpkg, etc.) so if you want to install additional packages, you need to run the setup again and choose "Install from Internet (downloaded files will be kept for future re-use)"
  - Make sure you also have an internet connection

1. Download the latest version at: https://cygwin.com/setup-x86.exe
2. Run the downloaded file
3. Choose "Install from Internet (downloaded files will be kep for future re-use)"
4. Root Directory: C:\cygwin
5. Install: All Users (RECOMMENDED)
6. Local Package Directory: C:\Users\merrell\Desktop
7. Direct Connection
8. Choose A Download Site: This is where you will get the packages you want to install. Usually I choose the first one.
9. Select Packages: You can now select the packages you want. Download time will depend on how large the packages are. Since we will install clusterssh, we need to choose the following packages:
perl
make
gcc-core
perl-tk
perl-Test-Pod
perl-Test-Pod-Coverage
perl-Try-Tiny
perl-File-Slurp
perl-File-Which
perl-Readonly
xinit
openssh
curl
wget
10. Once finished, you can now open the Cygwin terminal on which you can run Linux commands

PART 2: Installing clusterssh


NOTE:
  - You need a live internet connection before proceeding on the steps below

1. Download the latest ClusterSSH source tarball
2. Extract the file anywhere you want
3. Open Cygwin terminal and go to the extracted folder
4. Open XWin server: Programs > Cygwin-x (32-bit) > XWin server
5. In Cygwin terminal, execute the following commands in order to install cssh and all required modules:
  $ cpan
    * press enter to all questions *
    cpan[1]> install Module::Build
    cpan[2]> exit
  $ perl Build installdeps
    * press enter to all questions *
  $ perl Build.PL
  $ ./Build
  $ ./Build test
  $ ./Build install
6. Now test cssh by opening 2 terminals at once using "root" as the user
  $ cssh -l root host1 host2

Thursday, April 8, 2021

MRepo tutorial

WHAT IS MREPO?


  It is an open-source tool that creates a repository out of an ISO file (usually your installation disc) or from
public URLs like http://mirror.centos.org/. Those are some of the possible sources; explore the docs (see
sources below this post) for more choices.

  Aside from creating a repository, mrepo has the capability of keeping both sides in sync. When
a package from http://mirror.centos.org/centos-7/7/os/x86_64/Packages/ has been removed, your local
repository will be updated when you launch the mrepo command with the appropriate options (see tutorials
below on how to use the mrepo command). Or if a newer package is available, mrepo will download it for you
and add it to your local repo.

INSTALLATION


1. Download the tarball from the site below. On some distributions, this is also available as an RPM from EPEL.
http://dag.wieers.com/home-made/mrepo/mrepo-0.8.7.tar.bz2

2. Untar it to any place you want.
tar xvf mrepo-0.8.7.tar.bz2

3. Go to the extracted directory and install it
cd mrepo-0.8.7
make install
# the installation also produces a sysV init script since this tool predates systemd. You may convert
# it to a unit file.

TIP: You might want to keep the extracted directory because it contains useful information like sample
    configurations and tutorials

CREATING A REPOSITORY FROM AN ISO FILE


In this part, we will create a repository out of an ISO file which typically is your distribution's installation disc.

1. Make sure the required directory exists
mkdir /var/mrepo


2. Create a subdirectory for your distribution
mkdir /var/mrepo/centos7-x86_64
# I chose the subdirectory name centos7-x86_64 because it is a valid format, which is $dist-$arch.
# $dist and $arch are part of the config file as you will see in step 4. You may also use a format of $dist
# only, but the former has an advantage in terms of containing the packages. If you use $dist only and
# in the future you provide URLs as a source of your repository together with an existing ISO source,
# mrepo will create another directory, /var/mrepo/$dist-$arch, and put RPMs there instead of
# /var/mrepo/$dist. As a result, you will have 2 directories under /var/mrepo: subdir $dist, which contains
# the actual ISO file, and $dist-$arch, which contains the RPMs from the URLs. So to make things cleaner,
# just create /var/mrepo/$dist-$arch at the start, which will contain both the ISO and the URL RPMs w/o
# forcing mrepo to create separate directories.


3. Copy your distribution's ISO file (installation disc) to that directory
cp CentOS-7-x86_64-DVD-1611.iso /var/mrepo/centos7-x86_64/


4. Update the main config file, mrepo.conf, by appending the configurations below
cat << 'EOF' >> /etc/mrepo.conf  # quote EOF so the shell doesn't expand anything below
# [centos7] is the $dist (or tag); part of the subdirectory name in step 2
[centos7]
# mrepo doesn't care what description you put in name
name = Centos 7 (64-bit)
release = 7
# arch is also part of the subdirectory name in step 2
arch = x86_64
# repomd because we are creating an RPM repository (another possible value is apt)
metadata = repomd
# iso must match the filename copied in step 3; mrepo will look for exactly that file
iso = CentOS-7-x86_64-DVD-1611.iso
EOF


TIP: You may also create a separate .conf file under /etc/mrepo.conf.d/ and mrepo will read it as if it is
included in the main config

5. Create the repository
mrepo -ugv  # see mrepo -h for flag uses
# As a summary, that command will perform the following:
#  1. Create /var/www/mrepo/$dist-$arch directory tree
#  2. Look for ISO files under the specified directories in order
#      /var/mrepo/$dist-$arch/$iso
#      /var/mrepo/$tag/$iso
#      /var/mrepo/iso/$iso
#      /var/mrepo/$iso
#  3. Mount ISO file to /var/www/mrepo/$dist-$arch/disc1 as a loop device
#  4. Create the links under /var/www/mrepo/$dist-$arch/


TIP: To see the detailed steps of how it creates the repository, you may increase verbosity with -vvvvv (5 v's!)

6. Once the above command completes, you should see a directory structure similar to mine:

[root@home ~]# ll /var/www/mrepo/centos7-x86_64/  # this is the /var/www/mrepo/$dist-$arch
total 266
drwxr-xr-x. 8 root root   2048 Dec  5 21:20 disc1  # mountpoint of iso under /var/mrepo/$dist/$iso
lrwxrwxrwx. 1 root root     50 Mar 29 20:24 HEADER.shtml -> ../../../../usr/share/mrepo/html/HEADER.repo.shtml
drwxr-xr-x. 2 root root     42 Mar 29 20:24 iso  # contains a symlink to /var/mrepo/$dist/$iso
lrwxrwxrwx. 1 root root     50 Mar 29 20:24 README.shtml -> ../../../../usr/share/mrepo/html/README.repo.shtml
drwxr-xr-x. 2 root root      6 Mar 29 20:24 RPMS.all  # empty
drwxr-xr-x. 3 root root 212992 Mar 29 20:24 RPMS.os  # contains the actual RPMs as well as the repodata that came from the iso file
[root@home ~]#


7. You have now created a repository out of your ISO. But that repository is not yet available to clients
    until you share it via a method like http, ftp, or nfs. Please see the later part of this post, where
    I demonstrate how to share the repository via http.

CREATING REPOSITORIES FROM URLS


In this part, we are going to create and sync a local repository from a public URL repository. As an example, http://mirror.centos.org/centos/7/extras/x86_64/, contains extra packages for Centos 7 so we will use that for this demonstration.

1. Update the mrepo.conf and add the following lines:
cat << 'EOF' >> /etc/mrepo.conf  # quote EOF: $release and $arch are expanded by mrepo, not the shell
[centos7]
name = Centos 7 (64-bit)
release = 7
arch = x86_64
metadata = repomd
base = http://mirror.centos.org/centos/$release/os/$arch/
updates = http://mirror.centos.org/centos/$release/updates/$arch/
extras = http://mirror.centos.org/centos/$release/extras/$arch/
epel = https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$arch
EOF
# base, updates, extras, and epel are strings to describe the corresponding URL repository
# the following directory trees will be created:
#  /var/mrepo/$dist-$arch/{base,updates,extras,epel}
#  /var/www/mrepo/$dist-$arch/RPMS.{base,updates,extras,epel}


2. Install lftp because mrepo will use that to get packages from the URLs
yum install -y lftp


3. Sync the repositories. Since this is the first time, this will download all packages. All succeeding syncs using the
mrepo command will just download new packages or remove packages in your local repo that no longer exist at the URLs.
mrepo -ugv


4. When you are finished downloading all packages, you can now make the repositories accessible to clients.
    See the next section for the steps.

NOTE: While you can use "mirrorlists" in a traditional yum repo, mrepo does not support them.

Making your repositories accessible via HTTP

There are many ways to make your repositories accessible. Some of these are via
NFS and FTP. We will use HTTP for this demonstration since it is by far the most common way.

* On server *

1. Install http
yum install -y httpd


2. Start and enable at boot
RHEL 6.X/sysV systems:
service httpd start
chkconfig httpd on
RHEL 7.X/systemd systems:
systemctl start httpd
systemctl enable httpd


3. Open up port 80 on firewall
IPTABLES:
iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
service iptables save  # "service iptables restart" would reload the saved rules and discard the one just added
FIREWALLD:
firewall-cmd --add-service=http --permanent
firewall-cmd --reload


4. Add link of each repo to DocumentRoot. You may use another approach but this is the easiest for me. :)
ln -s /var/www/mrepo/ /var/www/html/mrepo


5. If selinux is enabled and in enforcing mode, change selinux context of /var/mrepo to allow httpd access
semanage fcontext -at httpd_sys_content_t '/var/mrepo(/.*)?'
restorecon -Rv /var/mrepo


6. Make sure your repository is accessible by testing it on your local browser.
http://localhost/mrepo/centos7-x86_64/RPMS.base

* On client *

1. Create .repo file
cat << EOF >> /etc/yum.repos.d/server.repo
[base]
name=Centos 7 - Base Packages (64-bit)
baseurl=http://server/mrepo/centos7-x86_64/RPMS.base
EOF


2. See if you can see the repository
yum repolist all


3. Validate by installing a package
yum install -y zsh


SOURCES


Official site:
http://dag.wiee.rs/home-made/mrepo/

mrepo command usage (no manual page but this is sufficient):
mrepo -h

Documentations:
mrepo-0.8.7/docs

Sample configurations:
mrepo-0.8.7/configs

Other web tutorial:
https://asenjo.nl/wiki/index.php/Mrepo_centos7

Tuesday, April 6, 2021

Setup NIC Bonding in CentOS 6

1. create bond0 config file
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
--- START EDIT ---
DEVICE=bond0
IPADDR=10.10.10.11
NETWORK=10.10.10.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
--- END EDIT ---

2. edit the following files:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
--- START EDIT ---
DEVICE=eth0
MASTER=bond0
SLAVE=yes
HWADDR=00:0C:29:8D:FB:EF
TYPE=Ethernet
UUID=5606b424-2186-410d-9dbc-dcb65b330bd9
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
--- END EDIT ---

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
--- START EDIT ---
DEVICE=eth1
MASTER=bond0
SLAVE=yes
HWADDR=00:25:B5:0B:B1:00
TYPE=Ethernet
UUID=6353d563-0a85-47d2-b566-37f6c91c8973
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
--- END EDIT ---

3. create bond module file
# vi /etc/modprobe.d/modprobe.conf
--- START EDIT ---
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
--- END EDIT ---

4. restart network
# service network restart

Monday, April 5, 2021

RPM and Debian Package Managers

In this post, we will discuss the 2 common package managers in Linux. They are the following:

1. RPM-based - rpm,yum
2. Debian-based - dpkg,apt-get,dselect,aptitude,apt-cache

RPM

Examples of systems that use RPM-based package managers are Red Hat, Fedora, and Centos. The "rpm" command installs packages directly. It doesn't handle dependency requirements, so you need to install the needed packages (if there are any) before installing the package you want. For example, if you want to install packageA but it needs packageB, you need to install packageB first before proceeding with packageA. Here are the common uses of the "rpm" command.

* Installing/Upgrading *

rpm -i <filename.rpm>  # installs a package (install only if it doesn't exist)
rpm -ivh <filename.rpm>  # same as the above, but increases verbosity (-v) and hashes as progress indicator (-h)
rpm -ivh <filename.rpm> --force  # installs a package even if it exists (package reinstall)
rpm -Fvh <filename.rpm>  # freshens a package (upgrade a package if ONLY an older version exists)
rpm -Uvh <filename.rpm>  # installs a package if that doesn't exist or upgrade a package if it exists
rpm -ivh <filename.rpm> --nodeps  # installs a package ignoring dependency checking (use this with caution because the program you will install might not work)
rpm -i <package name> --test  # do a dry-run instead of installing actual package

* Removing *

rpm -e <package name>  # removes a package

* Querying *

rpm -qi <package name>  # queries a package (-q) by printing information (-i); takes only package name and not filename (file.rpm)
rpm -ql <package name>  # lists files associated in the package (-l)
rpm -qf <file>  # shows you what package the file/directory/script came from; example is rpm -qf $(which cp)
rpm -q <package name> --changelog  # prints the change log of the given package
rpm -q <package name> --scripts  # prints the scripts that run when the package is installed/removed
rpm -q[other query options]p file_name.rpm  # queries a package filename instead (-p) of package name; example is rpm -qip my-program.rpm

YUM

There is also an RPM-based tool that handles dependency conflicts automatically, and that is YUM (Yellowdog Updater, Modified). The "yum" command is the one we use, and it can also do what the "rpm" command can. It is the best package management tool if you are using the Linux distributions mentioned above. Its main disadvantage is that you need a repository before you can do any transactions, whereas rpm doesn't. Repositories are the files inside the /etc/yum.repos.d directory and will be discussed in a separate post on this blog. Commonly used YUM commands are:

* Installing/Upgrading *

yum install <package name(s)>  # installs a package or multiple packages
yum install <package name(s)> -y  # same as above but answers "y" to all installation questions
yum update  # updates all installed packages on the system
yum update <package name(s)>  # updates a single or multiple packages
yum localinstall <filename.rpm>  # installs a local rpm file using your yum repository to resolve dependencies

* Removing *

yum remove <package name(s)>  # removes packages together with any packages that depend on them
yum erase <package name(s)>  # same as above

* Querying *

yum list  # lists ALL installed packages and shows you if there is a newer version for each of those
yum list <package name> # lists installed packages and shows you if there is a newer version for that
yum list available <package name> # lists packages that are available in the repos but not yet installed
yum search *pattern*  # searches all package names that contains the string; prints both installed and available packages
yum search all *pattern*  # deeper search compared to the above command (searches also descriptions, not only package names)
yum provides <file/directory>  # shows what package owns the file/directory
yum whatprovides */<file/directory>  # use this when you don't get a result from the above command
yum info <package name>  # shows info on a package (similar to rpm -qi)
yum check-update  # checks whether there are updates available
yum --disablerepo="*" --enablerepo=<repo ID> list <package name>  # lists a package from a particular repo

DPKG

Aside from RPM-based systems, we also have Debian-based systems. Examples of these are Ubuntu and of course Debian. The package management tools they use are apt-get, aptitude, dselect, and dpkg. "apt-get" is similar to yum in that you need repositories, which are defined in /etc/apt/sources.list; "dpkg" is similar to rpm; "dselect" is a menu-based tool; and aptitude combines dselect's menu interface with apt-get's command-line features. Debian packages by convention end with .deb. Let's start with dpkg. Here are the common commands for dpkg.

* Installing/Upgrading/Configuring*

dpkg -i <filename.deb>  # installs a package
dpkg -i <filename.deb> --ignore-depends=<package>  # ignores a dependency conflict (similar to --nodeps in rpm)
dpkg -i <filename.deb> --no-act  # tests for dependencies only (similar to --test in rpm)
dpkg -iG <filename.deb>  # doesn't install if a newer version of the same package is already installed
dpkg -iE <filename.deb>  # doesn't install if the same version of the package is already installed

* Removing *

dpkg -r <package name>  # removes a package while retaining its configuration files
dpkg -P <package name>  # removes a package and configuration files

* Querying *

dpkg -s <package name>  # displays information about an installed package
dpkg -I <filename.deb>  # displays information about an uninstalled package file
dpkg --get-selections  # displays current installed packages
dpkg -l <pattern>  # list installed packages matching the pattern string
dpkg -L <package name>  # lists files associated with the package (similar to rpm -ql)
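One query the list above doesn't show is mapping a file back to its owning package. A quick sketch (coreutils and /bin/ls are just an example pair; the actual output depends on your system):

```shell
# Which package owns a given file? (similar to rpm -qf)
dpkg -S /bin/ls

# Then list everything that package installed (first few entries only)
dpkg -L coreutils | head
```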

One command that is very useful if you want to return a package to its original state (fresh install with default settings) is "dpkg-reconfigure". For example, the command below will reconfigure the samba package, asking the package's initial installation questions and restarting its daemons.

dpkg-reconfigure samba

APT-GET

Similar to yum, Debian-based systems have a tool called "apt-get" which automatically handles dependency conflicts. Before using it, make sure you have the appropriate sources inside /etc/apt/sources.list. Here are the common usages of that command:

* Installing/Upgrading *

apt-get install <package name>  # installs a package
apt-get install <package name> -y  # installs a package assuming "y" to all questions
apt-get install <package name> -s  # doesn't install, simulates a dry-run
apt-get install <package name> --no-upgrade  # don't upgrade the package if it is already installed
apt-get install <package name> -d  # downloads a package but doesn't install it
apt-get install /path/to/deb/package -f  # installs a .deb package and resolves dependencies through APT repositories
apt-get update  # obtains information on available packages from the sources inside /etc/apt/sources.list
apt-get upgrade  # upgrades all installed packages to newer versions
apt-get dist-upgrade  # similar to above command but performs "smart" conflict resolution

* Removing *

apt-get remove <package name>  # removes a package

* Configuring *

apt-get dselect-upgrade  # performs any changes in package status left undone after running dselect
apt-get clean  # performs housekeeping (like yum clean)
apt-get autoclean  # similar to the above command but removes only package files that can no longer be downloaded

* Querying *

apt-get check  # checks package database for consistency and broken package installations
apt-file search /path/to/file  # searches for package containing a file
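Note that apt-file is a separate package and needs its own index before the first search. A minimal sketch (htop is just an example file to look up):

```shell
# apt-file ships separately from apt-get and keeps its own file index
sudo apt-get install -y apt-file
sudo apt-file update

# Now we can ask which package would provide a file we don't have yet
apt-file search /usr/bin/htop
```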

APT-CACHE

We also have apt-cache. Its only purpose is to provide information about the Debian package database. Here are sample commands:



apt-cache show <package name>  # displays description of a package
apt-cache showpkg <package name>  # same as above but displays dependency information instead
apt-cache stats  # displays package statistics (how many installed, dependencies recorded, etc..)
apt-cache unmet  # displays information about unmet dependencies
apt-cache depends <package name>  # shows which packages the one you specified depends on
apt-cache pkgnames  # displays all package names known to APT
apt-cache pkgnames <string>  # displays the package names matching the string specified

DSELECT

A menu-based package manager also exists for Debian-based systems: "dselect". When invoked, it will display the following menu.

0. [A]ccess Choose the access method to use.
1. [U]pdate Update list of available packages, if possible.
2. [S]elect Request which packages you want on your system.
3. [I]nstall Install and upgrade wanted packages.
4. [C]onfig Configure any packages that are unconfigured.
5. [R]emove Remove unwanted software.
6. [Q]uit Quit dselect.

APTITUDE

A tool that combines dselect's menu-based interface and apt-get's CLI is "aptitude". Here are the common usages:

aptitude search <package name>  # searches for packages related to the one you specified (apt-get has no equivalent; use apt-cache search instead)
aptitude update  # update package list from APT repository
aptitude install <package name>  # installs a package
aptitude install <package name>-  # removes a package (note the trailing dash)
aptitude remove <package name>  # same as above
aptitude full-upgrade  # upgrades all installed packages
aptitude safe-upgrade  # conservative version of the above command
aptitude autoclean  # removes already-downloaded packages that are no longer available
aptitude help  # shows complete options

Sunday, April 4, 2021

Setup NGINX Proxy

Once in a while there are applications that need to run on an unprivileged port
(ports above 1024). What can you do to protect the traffic they serve? Aha!
Use HTTPS.. But how? We can set up another host in front of it (a proxy server) to
accept incoming requests via an encrypted channel (HTTPS, port 443/tcp) and
redirect them to the backend server (or proxied host) via an unencrypted channel.
In this post, we will use Nginx to build that setup.

1. First, let's say we have VM01 that runs an application on port 8081.







2. Now, let's spin up another host, VM02 (CentOS 7.3 w/ SELinux in enforcing
mode), and install Nginx. First, be sure that the nginx repo is enabled.

[root@vm02 ~]# cat /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/x86_64/
gpgcheck=0
enabled=1
[root@vm02 ~]#

3. Install nginx package.

[root@vm02 ~]# yum install -y nginx

4. Start and enable nginx at boot

[root@vm02 ~]# systemctl start nginx
[root@vm02 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@vm02 ~]#

5. Create self-signed certificates. In this part, you may use `genkey` or
`openssl`. I always go the openssl way because it's faster. BTW, don't
memorize the command below. Just be familiar with it because you can always see
it inside `/etc/pki/tls/certs/make-dummy-cert`.

[root@vm02 ~]# /usr/bin/openssl req -newkey rsa:2048 -keyout vm02.key -nodes -x509 -days 365 -out vm02.crt
Generating a 2048 bit RSA private key
.........+++
..........................................................................................+++
writing new private key to 'vm02.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:California
Locality Name (eg, city) [Default City]:Los Angeles
Organization Name (eg, company) [Default Company Ltd]:dummy
Organizational Unit Name (eg, section) []:dummy
Common Name (eg, your name or your server's hostname) []:vm02
Email Address []:dummy@nxdomain.com
[root@vm02 ~]#
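Before moving the files around, we can sanity-check what we just generated (vm02.crt and vm02.key are the files from the command above):

```shell
# Show the subject and validity window of the new certificate
openssl x509 -in vm02.crt -noout -subject -dates

# Confirm the key and certificate actually match by comparing the
# modulus of each (the two md5 sums must be identical)
openssl x509 -noout -modulus -in vm02.crt | md5sum
openssl rsa  -noout -modulus -in vm02.key | md5sum
```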

6. Move the certificates to the correct paths and run `restorecon` to make sure
SELinux contexts are correct.

[root@vm02 ~]# mv *crt /etc/pki/tls/certs
[root@vm02 ~]# mv *key /etc/pki/tls/private/
[root@vm02 ~]# restorecon -Rv /etc/pki/tls/
restorecon reset /etc/pki/tls/certs/vm02.crt context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/pki/tls/private/vm02.key context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:cert_t:s0
[root@vm02 ~]#

7. Update nginx configuration to use SSL, point to the correct certificates
(ssl_certificate_*), and activate reverse proxy (proxy_pass). We will just use
the default config and use the minimal directives needed for simplicity.

[root@vm02 ~]# cat /etc/nginx/conf.d/default.conf
server {
    listen       443;
    server_name  vm02;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/vm02.crt;
    ssl_certificate_key /etc/pki/tls/private/vm02.key;

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    location / {
        proxy_pass   http://vm01:8081;
    }
}
[root@vm02 ~]#
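Before relying on the new config, it's worth validating it, reloading nginx, and testing from a client. A quick sketch (the -k flag tells curl to accept our self-signed certificate):

```shell
# Check the configuration for syntax errors, then reload nginx
nginx -t && systemctl reload nginx

# From any client, hit the proxy over HTTPS; -k skips certificate
# verification since we signed the certificate ourselves
curl -k https://vm02/
```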

8. Open up port 443/tcp on the firewall to allow incoming connections. You may
use firewall-cmd's "--add-service" or "--add-port". Let's use "--add-service"
since there is already an existing service defined for HTTPS.

[root@vm02 ~]# firewall-cmd --add-service=https --permanent
success
[root@vm02 ~]# firewall-cmd --reload
success
[root@vm02 ~]#

9. Activate this SELinux boolean to allow HTTP to forward requests to our
upstream server (VM01).

[root@vm02 ~]# setsebool -P httpd_can_network_connect on
[root@vm02 ~]#

10. Now, our proxy server is ready. Let's try connecting. It should display the
data from the upstream like in #1.







So those are the basic steps in hiding your application behind a proxy server.
This is very important if your application accepts user details like usernames
and passwords. You never want your credentials to be sent in cleartext!

Hope you learned something from this post :)

Saturday, April 3, 2021

Building an RPM package

INTRODUCTION

  In this post, I will teach you how to create an RPM package from source code. This tutorial assumes that
you already have the tarballed source code ready to be unpacked.

  As an example, in our previous post about mrepo, we installed it from source and not by using RPM,
so this is a perfect way to demonstrate the RPM creation.

1. Install the needed packages
yum install -y rpm-build rpmdevtools
# rpm-build is required which contains "rpmbuild" command
# rpmdevtools is optional which is helpful in creating the directory tree


2. Create the directory tree
rpmdev-setuptree
# that command will create /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}


3. Copy the tarballed source file to SOURCES
cp mrepo-0.8.7.tar.bz2 /root/rpmbuild/SOURCES/


4. Copy the spec file to SPECS. It is usually included inside the tarball so unpack the tarball to /tmp first then
get the spec file from there.
tar xvf mrepo-0.8.7.tar.bz2 -C /tmp
cp /tmp/mrepo-0.8.7/mrepo.spec /root/rpmbuild/SPECS


INFO: A spec file describes the software and contains instructions on how to install the software. Creation
and detailed discussion of a spec file is not covered in this post but I might create one soon.

5. Now create the RPMs (src and binary)
rpmbuild -ba /root/rpmbuild/SPECS/mrepo.spec
# that command will create the following files
#  /root/rpmbuild/RPMS/noarch/mrepo-0.8.7-1.noarch.rpm -> the actual rpm you can install
#  /root/rpmbuild/SRPMS/mrepo-0.8.7-1.src.rpm -> contains the original source code and the spec file


6. Validate by installing the rpm
rpm -ivh /root/rpmbuild/RPMS/noarch/mrepo-0.8.7-1.noarch.rpm
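Before (or instead of) installing, you can also inspect the freshly built package directly; a quick sketch using the paths from step 5:

```shell
# Query the metadata of the built package without installing it
rpm -qip /root/rpmbuild/RPMS/noarch/mrepo-0.8.7-1.noarch.rpm

# List the files the package would install
rpm -qlp /root/rpmbuild/RPMS/noarch/mrepo-0.8.7-1.noarch.rpm
```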


SOURCES

Other tutorials:
http://www.tldp.org/HOWTO/RPM-HOWTO/build.html
http://www.thegeekstuff.com/2015/02/rpm-build-package-example/

Friday, April 2, 2021

Understanding Linux load average

We often hear the term “load average” but how much do we know about it? In this post, I will try my best to explain, in as few words as possible, everything we need to know about load average.


First, how do we determine the load average? We can use commands like uptime and top. Both will present you 3 values.


bash-3.2$ uptime
 03:46:39 up 703 days, 11:00,  2 users,  load average: 3.04, 3.10, 3.08
bash-3.2$


From the output above, 3.04, 3.10, and 3.08 are the load averages for the past 1, 5, and 15 minutes respectively. You may also use “cat /proc/loadavg” and will get the same result in the first 3 columns.


Now, what is LOAD AVERAGE? It is the average of all cpu loads on your system.


What is a CPU LOAD? It is the NUMBER of processes using a single cpu core + the NUMBER of processes in queue for that core. It is NOT the CPU usage.


What is CPU USAGE? It tells us how active your cpu cores are.


To determine CPU LOAD on each cpu core, we follow these 2 formulas:


load per cpu core = (load average) / (# of cpu cores)

load per cpu core = (processes using the cpu core) + (processes in queue for that cpu core)


where:

load average = the one reported in /proc/loadavg, uptime, top, etc ..

# of cores = grep -c ^processor /proc/cpuinfo
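The first formula can be computed straight from the shell. A minimal sketch (the load value 3.04 and core count 2 are hard-coded here for illustration; the comments show where to read the live values):

```shell
# 1-minute load average (live system: cut -d' ' -f1 /proc/loadavg)
load=3.04

# number of cpu cores (live system: grep -c ^processor /proc/cpuinfo)
cores=2

# load per cpu core = (load average) / (# of cpu cores)
awk -v l="$load" -v c="$cores" 'BEGIN { printf "load per core = %.2f\n", l / c }'
# -> load per core = 1.52
```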


To better illustrate, look at the scenarios below for a 1-core and 2-core machine.


1-core machine (CPU A only):


0 process using CPU A = load average of 0 (under capacity)

1 process using CPU A = load average of 1 (at max capacity)

1 process using CPU A, 1 process waiting in line = load average of 2 (overcapacity)

1 process using CPU A, 2 processes waiting in line = load average of 3 (overcapacity)

… and so on ..


2-core machine (CPU A and CPU B):


0 process using CPU A, 0 process using CPU B = load average of 0 (under capacity)

1 process using CPU A, 0 process using CPU B = load average of 0.5 (under capacity)

1 process using CPU A, 1 process using CPU B = load average of 1 (max capacity)

1 process using CPU A, 1 process using CPU B, 1 processes waiting in line = load average of 1.5 (slightly overcapacity)

1 process using CPU A, 1 process using CPU B, 2 processes waiting in line = load average of 2 (overcapacity)

… and so on ..


Based on the examples above, a load average of 1 means all of your cpu cores are at max capacity. A load average of 2 means each cpu core has 1 process running and 1 process waiting in line.


There are instances where your load average is high but the CPU usage is low. An example is when you have several hung processes occupying all your cpu cores. Since they are hung, those processes don’t generate CPU activity but they still hold the CPU cores. When all CPU cores are being held, no other processes can use them. So if processes can’t get hold of a CPU, this can slow down your system.


In short, a system with LOW CPU USAGE but HIGH LOAD AVERAGE can still be slow.

Wednesday, March 31, 2021

Pacemaker/Corosync on Ubuntu

One common way of achieving a High-Availability setup is by installing pacemaker
and corosync on your nodes. Pacemaker controls and manages the resources and
depends on Corosync, which controls the communication between nodes.

We will configure an active/passive setup on 2 nodes running Ubuntu Server
17.04 and mimic a real-world scenario in which 1 node goes down. Do note that
most of the commands below need to be executed on both nodes unless the task
is explicitly designated for one (or any) node only.

Configuring the cluster
=======================

1. In any clustering software, time is a critical factor in ensuring
synchronization between the nodes, so let's configure our time/date settings
properly by installing ntp. After installation, wait a few minutes and check
whether the date and time are already correct.

sudo apt-get install ntp
sudo systemctl start ntp
sudo systemctl enable ntp
[...]
date

2. Install our cluster suite. In Ubuntu 17.04, which is the OS of our choice,
"corosync" and other required packages can be installed by just installing
"pacemaker". After installation, make sure the required services are running and
enabled at boot.

sudo apt-get install pacemaker
sudo systemctl start pacemaker corosync
sudo systemctl enable pacemaker corosync

3. (Do this on one node only) Corosync requires an authkey (authorization key)
to be present on all members of the cluster. To create one, install an entropy
package, generate the authkey, and send it to all members of the cluster. In our
case, we will send it to node2. The generated authkey is located in
/etc/corosync/authkey.

sudo apt-get install haveged
sudo corosync-keygen
scp /etc/corosync/authkey node2:/etc/corosync/authkey

4. Back up the default corosync.conf and replace the contents with the config
below. The important items here are the 3 IP addresses - the IPs of each node and
the bind IP which will be used by the cluster itself. You must decide which
bind IP to use. Just make sure that it is not used by any host on your
network. Later, you will see that it will be automatically brought up when we
start our cluster.

sudo cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf.orig
sudo vi /etc/corosync/corosync.conf

--- START ---
totem {
  version: 2
  cluster_name: lbcluster
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: <put bind IP here>
    broadcast: yes
    mcastport: 5405
  }
}
quorum {
  provider: corosync_votequorum
  two_node: 1
}
nodelist {
  node {
    ring0_addr: <put node1 IP>
    name: primary
    nodeid: 1
  }
  node {
    ring0_addr: <put node2 IP>
    name: secondary
    nodeid: 2
  }
}
logging {
  to_logfile: yes
  logfile: /var/log/corosync/corosync.log
  to_syslog: yes
  timestamp: on
}
--- END ---

5. Now, we need to allow pacemaker service in corosync. We do that by creating
the service directory and "pcmk" file inside it. We need to add one more setting
on the default file.

sudo mkdir /etc/corosync/service.d
sudo vi /etc/corosync/service.d/pcmk

--- START ---
service {
  name: pacemaker
  ver: 1
}
--- END ---

echo "START=yes" | sudo tee -a /etc/default/corosync  # plain "sudo echo ... >>" fails since the redirection runs as your user

6. Let's restart corosync and pacemaker to get the configuration we made.

sudo systemctl restart corosync pacemaker

7. Let's verify that the cluster honored the node IPs. We should get output
similar to the one below where both node IPs were detected.

sudo corosync-cmapctl | grep members

* sample output *

runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(10.0.0.1)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(10.0.0.2)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined

8. We can now interact with pacemaker and see the status of our cluster using
the "crm" command. In the output below, you will see that both nodes (primary and
secondary) are online but we still don't have a resource. In the next steps, we
will create a virtual IP resource.

crm status

* sample output *

Stack: corosync
Current DC: primary (version 1.1.16-94ff4df) - partition with quorum
Last updated: Thu Dec 28 20:13:19 2017
Last change: Thu Dec 28 20:10:30 2017 by hacluster via crmd on primary

3 nodes configured
0 resources configured

Node node1: UNCLEAN (offline)
Online: [ primary secondary ]

No resources

9. (Do this on one node only) Before creating our virtual IP resource, let's
disable quorum and fencing settings for simplicity. Whenever we configure any
properties, we can do it on one node only since it will be synchronized to all
members.

sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore

10. (Do this on one node only) Let's create our first resource using the command
below. This will be a virtual IP (or the bind IP) that will represent our
cluster. Meaning, access to the cluster must be done via this IP and not via the
individual IPs of the nodes. Since we are aiming for an active/passive setup, it
is better to add `resource-stickiness="100"` as one of the parameters. That
means that when one node goes offline, the other node will get the bind IP and
keep it assigned to itself from that moment on, even after the first node comes
back to life. Be sure to set this IP the same as the `bindnetaddr` inside corosync.conf.

sudo crm configure primitive virtual_ip \
ocf:heartbeat:IPaddr2 params ip="10.1.1.3" \
cidr_netmask="32" op monitor interval="10s" \
meta migration-threshold="2" failure-timeout="60s" \
resource-stickiness="100"

As with configuring a cluster property, the command above needs to be run
on one node only since it will be synchronized across the members.

11. Once a resource is created, it will immediately appear in the status. Let's
verify. From the output below, you can see that the virtual_ip resource is
started on the primary, which pertains to node1. So if you log in to that
node and inspect the network interfaces, you should see a new one tied to the
bind IP. Also, at this moment, that bind IP is already UP and pingable.

sudo crm status

* sample output *

Stack: corosync                                                                 
Current DC: primary (version 1.1.16-94ff4df) - partition with quorum
Last updated: Thu Dec 28 20:33:39 2017     
Last change: Thu Dec 28 20:32:54 2017 by root via cibadmin on primary
                                           
3 nodes configured   
1 resource configured
                             
Online: [ primary secondary ]           
                 
Full list of resources:
                                       
 virtual_ip     (ocf::heartbeat:IPaddr2):       Started primary
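We can confirm this directly on the node holding the resource; a quick sketch (10.1.1.3 is the example bind IP from step 10):

```shell
# The bind IP should show up as an extra address on one of the interfaces
ip -o addr show | grep 10.1.1.3

# And it should answer pings from any host on the network
ping -c 3 10.1.1.3
```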

Testing High-Availability
=========================

Now that we have a fully working cluster, the best way to appreciate its magic
is by testing!

1. (Do this on node1 only) Let's replicate a real-world scenario where the
primary node goes down. What will happen to the cluster? Will the bind IP go
down also? You may mimic such a scenario by disconnecting the interface or
powering off the server, but for a quicker way, let's use our favorite "crm"
command to switch the primary node into "standby" mode.

sudo crm node standby primary

* sample output *

Stack: corosync
Current DC: primary (version 1.1.16-94ff4df) - partition with quorum
Last updated: Thu Dec 28 20:44:45 2017
Last change: Thu Dec 28 20:40:02 2017 by root via crm_attribute on primary

3 nodes configured
1 resource configured

Node primary: standby
Online: [ secondary ]

Full list of resources:

 virtual_ip     (ocf::heartbeat:IPaddr2):       Started secondary

If you started a continuous ping to the bind IP before this step, you will
notice a 1 - 5 second pause. Our HA is doing its magic. It is moving the bind IP
from the primary (node1) to the secondary (node2). And when you log in to
the secondary, you will see that a new interface is created having the bind IP.
The primary node no longer has that interface.

2. Now, let's remove the primary from standby mode.

sudo crm node online primary

* sample output *

Stack: corosync
Current DC: primary (version 1.1.16-94ff4df) - partition with quorum
Last updated: Thu Dec 28 20:48:28 2017
Last change: Thu Dec 28 20:48:25 2017 by root via crm_attribute on primary

3 nodes configured
1 resource configured

Online: [ primary secondary ]

Full list of resources:

 virtual_ip     (ocf::heartbeat:IPaddr2):       Started secondary

Both nodes are now online but the bind IP is still on the secondary since we
added `resource-stickiness="100"` as one of the parameters when we configured
our resource.

That concludes our post for today. Hope you learned something! :)