Monday, February 25, 2019

Cgroups


Basic Information
-----------------

- Kernel feature that limits CPU, memory, disk I/O, and network usage of
  processes.
- More sophisticated than `ulimit`.
- Some resources that can be controlled via cgroups:
    a. CPU shares
    b. which CPUs a process may run on (cpuset)
    c. memory
    d. and many more
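
On a cgroup v1 system (e.g. CentOS 7), each controller shows up as a directory
under /sys/fs/cgroup. A quick way to see what is available on a given machine
(output varies by kernel and distribution):

# List the controllers mounted on this machine
ls /sys/fs/cgroup/
# Show how the cgroup hierarchies are mounted
mount -t cgroup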

Controllers
-----------

memory  - limits memory usage
cpuacct - monitors CPU usage
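
For example, assuming a group named "foo" already exists under each controller
(created as in the tutorials below), current usage can be read straight from the
corresponding control files (cgroup v1 file names):

# Memory currently charged to the "foo" group, in bytes
cat /sys/fs/cgroup/memory/foo/memory.usage_in_bytes
# Total CPU time consumed by tasks in "foo", in nanoseconds
cat /sys/fs/cgroup/cpuacct/foo/cpuacct.usage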

Tutorials
---------

Commands to check
# Check process cgroup membership
ps -o cgroup -p <PID>
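
The same information is also available from /proc, without relying on ps options:

# One line per controller hierarchy the process belongs to
cat /proc/<PID>/cgroup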

Creating cgroups manually (not persistent)
1. Install packages
yum install libcgroup libcgroup-tools
 
2. Create group under memory controller
mkdir /sys/fs/cgroup/memory/foo

3. Create this shell script and save it as test.sh
#!/bin/bash

while true; do
  echo hello > /dev/null
  sleep 1
done

4. Run script in background to emulate memory consuming process
bash test.sh &

5. Apply memory limit under "foo" group
echo 5 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes

6. Get the PID of the script and add it to the "foo" group
echo $(pgrep -f test.sh) > /sys/fs/cgroup/memory/foo/cgroup.procs

7. Notice that script was killed by OOM
journalctl | tail
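
Before the OOM kill happens, you can sanity-check that the limit and the group
membership were applied by reading the group's control files directly (the value
read back may differ from 5, since the kernel tracks the limit in whole pages):

# Effective memory limit of the "foo" group
cat /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
# PIDs currently assigned to the group
cat /sys/fs/cgroup/memory/foo/cgroup.procs
# Number of times the group hit its memory limit
cat /sys/fs/cgroup/memory/foo/memory.failcnt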

Creating cgroups using "libcgroup" (not persistent)
1. Install packages
yum install libcgroup libcgroup-tools
 
2. Create group under memory controller
cgcreate -g memory:foo

3. Create this shell script and save it as test.sh
#!/bin/bash

while true; do
  echo hello > /dev/null
  sleep 1
done

4. Run script in background to emulate memory consuming process
cgexec -g memory:foo bash test.sh &

5. Apply memory limit under "foo" group
echo 5 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes

6. Notice that script was killed by OOM
journalctl | tail
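
With libcgroup-tools installed, step 5 can also be done with cgset instead of
echoing into sysfs, and cgget can verify the result (same "foo" group as above):

# Set the memory limit on the "foo" group
cgset -r memory.limit_in_bytes=5 foo
# Read the limit back
cgget -r memory.limit_in_bytes foo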

Enabling persistent configurations
1. Update /etc/cgconfig.conf
 
group foo {
  cpu {
    cpu.shares = 100;
  }
  memory {
    memory.limit_in_bytes = 5;
  }
}

2. Enable service
systemctl enable --now cgconfig

3. Run a program under cgroup control
cgexec -g memory:foo bash test.sh &
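
If you want processes to be placed into the group automatically instead of
wrapping them with cgexec, libcgroup also provides /etc/cgrules.conf together
with the cgred service. A minimal sketch, assuming a hypothetical user "appuser"
whose processes should land in "foo":

# /etc/cgrules.conf format: <user> <controllers> <destination group>
appuser    cpu,memory    foo

# Apply the rules at boot and right now
systemctl enable --now cgred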


Thursday, February 7, 2019

Setup single-master cluster using kubeadm

In this tutorial, we will set up a Kubernetes cluster with 1 master and 1 worker.
The machines are provisioned using Vagrant (not covered in this tutorial) and
are both running CentOS 7.6. Make sure to add a second IP to each machine; we
will use these as the node IPs.
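
Before starting, it is worth confirming that the second IP is actually present
on each machine (with a Vagrant private network it usually lands on eth1; the
interface name may differ). 192.168.50.101 is the master node IP used in the
kubeadm commands below; the worker's IP is whatever you assigned to kube2.

# Run on each VM and confirm its node IP is listed
ip -4 addr show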

1. Prepare 2 VMs (1 master and 1 worker).
kube1 --> master
kube2 --> node
2. Disable swap on all machines.
swapoff /swapfile
sed -i 's/\/swapfile none swap defaults 0 0/# \/swapfile none swap defaults 0 0/' /etc/fstab
3. Disable firewall on all machines.
systemctl disable --now firewalld
4. Install kubeadm, kubelet, and kubectl on all machines.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
5. Install docker on all machines.
yum install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum update && yum install docker-ce-18.06.1.ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable --now docker
6. Go to the master and initialize it. We will use Calico as our pod network
plugin, so we need to specify the pod CIDR during initialization. We will also
use the node IP of the master to advertise the API server endpoint.
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.50.101
7. You should see output similar to this. Make sure to copy the join command at
the end; you will use it to join other nodes to the cluster.
[...]
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.50.101:6443 --token wwdq15.mu6fjqngw9en8i07 --discovery-token-ca-cert-hash sha256:3ecf97042860331e2cc5df8b72a94f7bdd1c77024aa0ea8ee59c422139a3a86f
8. Still on the master node, set up your kubeconfig file so you can start
interacting with the cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
9. Once the kubeconfig file is set up, check the status of the node and the pods.
There should be only 1 node at this point, and the coredns pods will not be
running yet since we haven't installed Calico.
kubectl get nodes
kubectl get po --all-namespaces
10. Install Calico.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
11. Go to the worker node and join it to the cluster.
kubeadm join 192.168.50.101:6443 --token wwdq15.mu6fjqngw9en8i07 --discovery-token-ca-cert-hash sha256:3ecf97042860331e2cc5df8b72a94f7bdd1c77024aa0ea8ee59c422139a3a86f
12. Wait a few minutes and there will be 2 Kubernetes nodes available.
kubectl get nodes
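
As a final check that the cluster can actually schedule work on the worker, you
can run a small test deployment from the master (nginx is just an example image):

# Create a throwaway deployment and watch where the pod lands
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx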

If you want an automated way, visit this github repo.