Wednesday, April 21, 2021

Replicated Volume via GlusterFS

In this post, we will demonstrate how to create a replicated GlusterFS
volume. We will use 3 hosts, 2 gluster servers and 1 client, all running
CentOS 7.3.

vm01 = gluster server 1
vm02 = gluster server 2
vm03 = client

As an introduction, a replicated GlusterFS volume ensures that data is present
on all bricks (the building blocks of a volume). This is useful in situations
where you want high availability and redundancy. There are other types of
GlusterFS volumes which you can see at Gluster Volumes.
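
To make the distinction concrete, here is an illustrative sketch (the hostnames
and brick paths below are placeholders; the actual command we will run appears
in step 11) contrasting a replicated volume with a plain distributed one:

# Replicated: every brick holds a full copy of every file
gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
# Distributed (the default): files are spread across the bricks, no extra copies
gluster volume create myvol server1:/bricks/b1 server2:/bricks/b1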

Configuring the servers


Please note that most of the commands below need to be run on both servers
unless the step explicitly says to run it on one node only.
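
If you would rather not type each command twice, a small loop like the sketch
below (assuming passwordless SSH as root to both servers, which this tutorial
does not set up) runs the same command on both nodes; otherwise, simply repeat
each step on vm01 and vm02.

# Run the same command on both gluster servers (assumes root SSH access is already in place)
for host in vm01 vm02; do
    ssh root@${host} "yum install -y centos-release-gluster310"
done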

1. Install the GlusterFS server packages on the servers. You have 2 options for
installing the server packages: one is to install from the CentOS Storage SIG
repository and the other is from gluster.org. Let's try the first one by
searching for the latest stable repository and installing it.

[root@vm01 ~]# yum search centos-release-gluster
=================================================================================== N/S matched: centos-release-gluster ====================================================================================
centos-release-gluster310.noarch : Gluster 3.10 (Long Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster312.noarch : Gluster 3.12 (Long Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster313.noarch : Gluster 3.13 (Short Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster36.noarch : GlusterFS 3.6 packages from the CentOS Storage SIG repository
centos-release-gluster37.noarch : GlusterFS 3.7 packages from the CentOS Storage SIG repository
centos-release-gluster38.noarch : GlusterFS 3.8 packages from the CentOS Storage SIG repository
centos-release-gluster39.noarch : Gluster 3.9 (Short Term Stable) packages from the CentOS Storage SIG repository


  Name and summary matches only, use "search all" for everything.
[root@vm01 ~]#
[root@vm01 ~]# yum install -y centos-release-gluster310
<output truncated>
[root@vm01 ~]#
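
Before moving on, you can quickly confirm that the new repository is enabled; a
check like this should list a gluster repo (the exact repo name and output
depend on the release you installed):

yum repolist | grep -i gluster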

2. Now that we have a repository to get the packages from, let's install the
actual server packages.

[root@vm01 ~]# yum install -y glusterfs glusterfs-cli glusterfs-libs glusterfs-server
<output truncated>
[root@vm01 ~]#

3. Start and enable gluster service.

[root@vm01 ~]# systemctl enable --now glusterd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@vm01 ~]#
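
A quick sanity check that the daemon is really up and will come back after a
reboot:

systemctl is-active glusterd
systemctl is-enabled glusterd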

4. Add a separate disk on each node to hold the gluster partitions. This makes
sure it is separate from the OS partition, mimicking a real-world setup. Once
added, make sure it can be seen by your hosts. In my case, I added a 5 GB disk
to each gluster server, which shows up as /dev/sdb inside the hosts.

[root@vm01 ~]# fdisk -l | grep sd
Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    16777215     7339008   8e  Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
[root@vm01 ~]#
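
If a disk that you hot-added to a virtual machine does not show up right away,
listing the block devices and rescanning the SCSI bus usually helps; host0
below is an assumption, repeat the echo for each /sys/class/scsi_host/hostN
present on your system:

# List block devices; the new 5 GB disk should appear (here as sdb)
lsblk
# Force a SCSI bus rescan if it is still missing
echo "- - -" > /sys/class/scsi_host/host0/scan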

5. Let's now create the underlying filesystem.

[root@vm01 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@vm01 ~]# vgcreate gluster_vg /dev/sdb
  Volume group "gluster_vg" successfully created
[root@vm01 ~]# lvcreate -n gluster_lv -l 100%FREE gluster_vg
  Logical volume "gluster_lv" created.
[root@vm01 ~]# mkfs.xfs /dev/mapper/gluster_vg-gluster_lv
meta-data=/dev/mapper/gluster_vg-gluster_lv isize=512    agcount=4, agsize=327424 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1309696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@vm01 ~]#

6. Mount the filesystem and make sure it is persistent across reboots.

[root@vm01 ~]# mkdir /brick
[root@vm01 ~]# echo '/dev/mapper/gluster_vg-gluster_lv /brick xfs defaults 0 0' >> /etc/fstab
[root@vm01 ~]# mount -a
[root@vm01 ~]# df -h /brick/
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/gluster_vg-gluster_lv  5.0G   33M  5.0G   1% /brick
[root@vm01 ~]#
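
As an optional variation, you can reference the filesystem by UUID in
/etc/fstab instead of the device-mapper path; grab the UUID with blkid and use
it in a line like the commented one below (the UUID shown is a placeholder):

blkid /dev/mapper/gluster_vg-gluster_lv
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /brick xfs defaults 0 0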

7. Open up the needed ports - for bricks and for pool communication. In versions
below 3.4, each brick needs a port starting at 24009/tcp. For higher versions,
each brick needs a port starting at 49152/tcp. Opening these brick ports allows
clients to access your volumes. Since we will use only 1 brick, we will open up
port 49152/tcp. In addition to that, pool communication on any version needs
ports 24007/tcp and 24008/tcp open on all servers.

[root@vm01 ~]# firewall-cmd --add-port={24007-24008/tcp,49152/tcp} --permanent
success
[root@vm01 ~]# firewall-cmd --reload
success
[root@vm01 ~]#
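
To verify that the rules took effect in the running configuration:

firewall-cmd --list-ports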

8. (Do this on 1 node only). Gluster servers work via a Trusted Storage Pool, or
"TSP". This pool contains the members sharing the bricks. To add a member, let's
execute the command below on the 1st server. Once executed, you don't need to
run it again on the other node.

[root@vm01 ~]# gluster peer probe vm02       
peer probe: success.                                             
[root@vm01 ~]#
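
One optional follow-up: if vm02's peer list later shows vm01 by IP address
instead of hostname, probing back once from vm02 updates the entry. This is a
one-time step and only needed on vm02:

# Run on vm02 only, and only if vm01 appears by IP in "gluster peer status"
gluster peer probe vm01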

9. After probing for members, let's verify the pool status.

* from vm01 *

[root@vm01 ~]# gluster peer status
Number of Peers: 1

Hostname: vm02
Uuid: 06525146-da28-4081-b6e3-35372ebf269c
State: Peer in Cluster (Connected)
[root@vm01 ~]#


* from vm02 *

[root@vm02 ~]# gluster peer status
Number of Peers: 1

Hostname: vm01
Uuid: 0c77f8b8-5706-413e-9a25-6e3951d738f5
State: Peer in Cluster (Connected)
[root@vm02 ~]#

10. Create the directory that we will export via a gluster volume.

[root@vm01 ~]# mkdir /brick/export1

11. (Do this on 1 node only) Let's create the actual gluster volume and start
it.

[root@vm01 ~]# gluster volume create glustervol01 replica 2 transport tcp vm01:/brick/export1 vm02:/brick/export1
volume create: glustervol01: success: please start the volume to access data
[root@vm01 ~]# gluster volume start glustervol01
volume start: glustervol01: success
[root@vm01 ~]#
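
Depending on the version, the CLI may warn you that replica 2 volumes are prone
to split-brain. If you have a third host available (vm04 below is hypothetical,
it is not part of this setup), an arbiter volume is a common alternative; a
sketch of what you would run instead of the command above:

# Sketch only: needs a third node with its own brick; vm04 is a placeholder
gluster volume create glustervol01 replica 3 arbiter 1 transport tcp \
    vm01:/brick/export1 vm02:/brick/export1 vm04:/brick/export1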

12. Let's verify the volume status. The same output should appear on both servers.

[root@vm01 ~]# gluster volume info

Volume Name: glustervol01
Type: Replicate
Volume ID: bf0b27da-ff4b-4bfd-aaa5-1708c16237e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vm01:/brick/export1
Brick2: vm02:/brick/export1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@vm01 ~]#
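
For a runtime view (brick processes, the TCP ports they listen on, and the
self-heal daemon), you can also check:

gluster volume status glustervol01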

13. Now that our gluster volume "glustervol01" has been set up, let's configure
our client.

Configuring the client


1. Install the packages needed to mount the gluster volume.

[root@vm03 ~]# yum install -y glusterfs glusterfs-fuse attr
<output truncated>
[root@vm03 ~]#

2. Mount the volume. You may mount the volume from vm01 or vm02.
It doesn't make any difference since this is shared storage. Take note that
we mount the volume by its volume name "glustervol01" and not by the path of
the brick (/brick/export1).

[root@vm03 ~]# mkdir /mnt/glustervol01
[root@vm03 ~]# mount -t glusterfs vm01:/glustervol01 /mnt/glustervol01
[root@vm03 ~]# touch /mnt/glustervol01/file-from-vm03
[root@vm03 ~]# ll /mnt/glustervol01/file-from-vm03
-rw-r--r--. 1 root root 0 Jan  8 21:50 /mnt/glustervol01/file-from-vm03
[root@vm03 ~]#
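
The mount above is manual. To have it come back after a reboot, an fstab entry
along these lines is a common approach; treat it as a sketch since the backup
volfile option name varies slightly between glusterfs versions:

# backup-volfile-servers lets the client fetch the volume file from vm02 if vm01 is down at mount time
echo 'vm01:/glustervol01 /mnt/glustervol01 glusterfs defaults,_netdev,backup-volfile-servers=vm02 0 0' >> /etc/fstab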

3. The client has successfully mounted the volume and can immediately start
writing files to it. Once a file has been created, you will notice that it is
present on both bricks (1 on vm01 and the other on vm02), which signifies
replicated storage.

* from vm01 *

[root@vm01 ~]# ls -l /brick/export1/
total 0
-rw-r--r--. 2 root root 0 Jan  8 21:50 file-from-vm03
[root@vm01 ~]#


* from vm02 *

[root@vm02 ~]# ls -l /brick/export1/
total 0
-rw-r--r--. 2 root root 0 Jan  8 21:50 file-from-vm03
[root@vm02 ~]#
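
If you are curious how gluster tracks the replication, the extended attributes
stored with the file on the brick side carry its gfid and AFR changelog
metadata. You can peek at them on vm01 or vm02, directly against the brick
path, assuming getfattr from the attr package is available on the servers:

# Run on a server against the brick path, not on the client mount
getfattr -d -m . -e hex /brick/export1/file-from-vm03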

That concludes our mini tutorial. Hope you enjoyed it and learned a lot from it.
