Saturday, March 16, 2019

Restic for Deduplicated backups!


Restic
------

- Uses deduplication for fast and efficient backups.
- Written in Go.
- Runs on Linux and Windows.
- Can be used for cloud, remote, or local backups.

Initializes a backup location (repository)
restic -r /opt/backups init
Takes a backup
restic -r /opt/backups --verbose backup /home/john/myfiles
Lists backups as snapshots
restic -r /opt/backups snapshots
Restore a backup to a target directory
restic -r /opt/backups restore <snapshot ID> --target /tmp/restored-data
Restores latest backup
restic -r /opt/backups restore latest --target /tmp/restored-data
Compares differences between 2 snapshots
restic -r /opt/backups diff <snapshot ID 1> <snapshot ID 2>
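As a rough maintenance sketch on top of the commands above (the retention values
here are just an example; check the flags against your restic version):
# Verify repository integrity
restic -r /opt/backups check
# Keep the last 7 daily and 4 weekly snapshots, delete and prune everything else
restic -r /opt/backups forget --keep-daily 7 --keep-weekly 4 --prune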

Monday, February 25, 2019

Cgroups


Basic Information
-----------------

- Kernel feature that limits the CPU, memory, disk I/O, and network usage of
  processes.
- More sophisticated than `ulimit`.
- Some resources that can be controlled via cgroups:
    a. CPU shares
    b. which CPUs a process may run on
    c. memory
    d. and many more

Controllers
-----------

memory  - limits memory usage
cpuacct - monitors CPU usage
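To see which controllers are available on a host, the kernel exposes them in
procfs; lssubsys below comes from libcgroup-tools and is optional:
# Controllers known to the running kernel
cat /proc/cgroups
# Controllers and the hierarchies they are mounted on (libcgroup-tools)
lssubsys -am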

Tutorials
---------

Commands to check
# Check process cgroup membership
ps -o cgroup <PID>
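The same information is also available straight from procfs, without extra tools:
# One line per controller the process belongs to
cat /proc/<PID>/cgroup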
Creating cgroups manually (not persistent)
1. Install packages
yum install libcgroup libcgroup-tools
 
2. Create group under memory controller
mkdir /sys/fs/cgroup/memory/foo

3. Create this shell script
#!/bin/bash

while true; do
  echo hello > /dev/null
  sleep 1
done

4. Run script in background to emulate memory consuming process
bash test.sh &

5. Apply memory limit under "foo" group
echo 5 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes

6. Get PID of script and include it under "foo" group
echo $(pgrep -f test.sh) > /sys/fs/cgroup/memory/foo/cgroup.procs

7. Notice that script was killed by OOM
journalctl | tail
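To confirm the limit is actually in place (cgroup v1 paths, matching the setup
above), the memory controller exposes the limit and current usage as files in the
group directory:
cat /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/foo/memory.usage_in_bytes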
Creating cgroups using "libcgroup" (not persistent)
1. Install packages
yum install libcgroup libcgroup-tools
 
2. Create group under memory controller
cgcreate -g memory:foo

3. Create this shell script
#!/bin/bash

while true; do
  echo hello > /dev/null
  sleep 1
done

4. Run script in background to emulate memory consuming process
cgexec -g memory:foo bash test.sh &

5. Apply memory limit under "foo" group
echo 5 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes

6. Notice that script was killed by OOM
journalctl | tail
Enabling persistent configurations
1. Update /etc/cgconfig.conf
 
group foo {
  cpu {
    cpu.shares = 100;
  }
  memory {
    memory.limit_in_bytes = 5;
  }
}

2. Enable service
systemctl enable --now cgconfig

3. Run a program under cgroup control
cgexec -g memory:foo bash test.sh &
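With libcgroup-tools installed, the values picked up from cgconfig.conf can be
read back with cgget as a quick sanity check (output format varies by version):
cgget -g cpu:foo
cgget -g memory:foo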


Thursday, February 7, 2019

Setup single-master cluster using kubeadm

In this tutorial, we will set up a 1-master, 1-worker Kubernetes cluster.
The machines are provisioned using Vagrant (not covered in this tutorial) and
both run CentOS 7.6. Make sure to add a second IP address on each machine; we
will use these as the node IPs.

1. Prepare 2 VMs (1 master and 1 worker).
kube1 --> master
kube2 --> node
2. Disable swap on all machines.
swapoff /swapfile
sed -i 's/\/swapfile none swap defaults 0 0/# \/swapfile none swap defaults 0 0/' /etc/fstab
3. Disable firewall on all machines.
systemctl disable --now firewalld
4. Install kubeadm and kubelet on all machines.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
5. Install docker on all machines.
yum install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum update && yum install docker-ce-18.06.1.ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable --now docker
6. Go to the master and initialize it. We will use Calico as our pod network
plugin, so we need to specify the pod CIDR during initialization. We will also
use the node IP of the master to advertise the API server endpoint.
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.50.101
7. You should see output similar to the following. Make sure to copy the join
command; you will use it to join other nodes to the cluster.
[...]
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.50.101:6443 --token wwdq15.mu6fjqngw9en8i07 --discovery-token-ca-cert-hash sha256:3ecf97042860331e2cc5df8b72a94f7bdd1c77024aa0ea8ee59c422139a3a86f
8. Still on the master node, set up your kubeconfig file so you can start
interacting with the master.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
9. Once the kubeconfig file is set up, check the status of the nodes and pods.
There should be only 1 node, and the coredns pods should not be running at this
point since we haven't installed Calico yet.
kubectl get nodes
kubectl get po --all-namespaces
10. Install calico.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
11. Go to the worker node and join it to the cluster.
kubeadm join 192.168.50.101:6443 --token wwdq15.mu6fjqngw9en8i07 --discovery-token-ca-cert-hash sha256:3ecf97042860331e2cc5df8b72a94f7bdd1c77024aa0ea8ee59c422139a3a86f
12. Wait a few minutes and there will now be 2 Kubernetes nodes available.
kubectl get nodes
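As an optional smoke test (the nginx image is just an example workload), deploy
something and check that it gets scheduled on the worker node:
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide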

If you want an automated way, visit this github repo.

Friday, January 25, 2019

SYN Flooding


A symptom of this is an increase in the number of sockets in the SYN-RECV state
in the output of ss or netstat. Here is an example for SSH connections.

[...]
tcp    SYN-RECV   0      0      10.147.0.6:22                 144.217.57.63:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.156.179:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.145.254:45914             
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.156.179:45914             
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.158:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.151:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.158:45914
[...]

Normally, the TCP 3-way handshake happens like this:

1. Client sends a SYN packet to the server
2. Server responds with a SYN-ACK packet to the client
3. Client sends an ACK packet to the server
 
In the SYN-RECV state, the client never sends back the final ACK packet. A flood
of these half-open connections may be an example of SYN flooding, a type of
denial-of-service attack.

Several kernel parameters can be configured to defend your server.

net.ipv4.tcp_syncookies = 1  --> prevents valid connections from dropping (best param to configure)
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 45
net.ipv4.conf.all.rp_filter = 1
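A minimal way to apply and persist these on CentOS (standard sysctl paths assumed):
# Append the parameters to /etc/sysctl.conf, then load them into the running kernel
cat >> /etc/sysctl.conf <<EOF
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
EOF
sysctl -p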
 

Saturday, January 12, 2019

Setting up RKE Cluster


Centos 7.5
Docker 17.03
Kubernetes 1.11

Preparation
===========

1. Download the latest RKE binary from https://github.com/rancher/rke/releases/.

2. Create separate partition for docker data.
pvcreate /dev/sdb
vgcreate docker_vg /dev/sdb
lvcreate -n docker_lv -l 100%FREE docker_vg
mkfs.xfs /dev/mapper/docker_vg-docker_lv
echo "/dev/mapper/docker_vg-docker_lv  /var/lib/docker  xfs  defaults  0 0" >> /etc/fstab
mkdir -p /var/lib/docker
mount -a
3. Install docker-ce 17.03.
curl https://releases.rancher.com/install-docker/17.03.sh | sh
systemctl enable --now docker
4. Disable swappiness.
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
5. Disable Network Manager.
systemctl disable --now NetworkManager
6. Allow the following ports in the firewall.
cat << EOF > ports.txt
6443/tcp
2376/tcp
2379/tcp
2380/tcp
8472/tcp
10250/tcp
80/tcp
443/tcp
30000-32767/tcp
EOF
for port in $(cat ports.txt); do firewall-cmd --add-port=$port --permanent; done
firewall-cmd --reload
7. Load needed kernel modules.
cat << EOF > modules.txt
br_netfilter
ip6_udp_tunnel
ip_set
ip_set_hash_ip
ip_set_hash_net
iptable_filter
iptable_nat
iptable_mangle
iptable_raw
nf_conntrack_netlink
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
nf_nat
nf_nat_ipv4
nf_nat_masquerade_ipv4
nfnetlink
udp_tunnel
veth
vxlan
x_tables
xt_addrtype
xt_conntrack
xt_comment
xt_mark
xt_multiport
xt_nat
xt_recent
xt_set
xt_statistic
xt_tcpudp
EOF
for module in $(cat modules.txt); do modprobe $module; done
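These modprobe calls do not survive a reboot. Since modules.txt already lists one
module per line, one way to persist them on CentOS 7 is to drop it into
/etc/modules-load.d/ (the rke.conf file name below is just an example):
cp modules.txt /etc/modules-load.d/rke.conf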
8. Add the following lines in the [Service] section of /usr/lib/systemd/system/docker.service.
[...]
KillMode=process
MountFlags=shared
[...]
systemctl daemon-reload
systemctl restart docker
9. Allow SSH tunneling and forwarding. Update /etc/ssh/sshd_config.
[...]
AllowTcpForwarding yes
PermitTunnel yes
[...]
systemctl restart sshd
10. Get kubectl.
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/
Bootstrap the cluster
=====================

1. Generate the RKE config. This will prompt you to answer a series of
questions about the cluster and will generate "cluster.yml" to be used in the
next step.
rke config
2. Bootstrap cluster.
rke up
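When "rke up" finishes it writes a kubeconfig next to cluster.yml (recent RKE
versions name it kube_config_cluster.yml; adjust if yours differs), which you can
use to verify the cluster:
kubectl --kubeconfig kube_config_cluster.yml get nodes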

Wednesday, January 2, 2019

Rancher Roles and Permissions


Introduction
-----------

Rancher organizes resources in a hierarchy of clusters, projects, and namespaces.



Permission Levels
-----------------

Global Permissions - authorization for Rancher itself (excluding the individual
                     managed Kubernetes clusters)
Cluster and Project Roles - authorization for a specific Kubernetes cluster

Adding Users
------------

Local Users:

A user needs to be made a member of a cluster first, which is done by a Rancher
admin. Without that, even with the correct password, the login will fail with an
error message (observed in Rancher v2.1.1).


Global Permissions
------------------

Default global permissions:

1. Administrator - full control over the entire Rancher system and all clusters within it
2. Standard User - can create new clusters and use them; can also assign other
                   users permissions to their clusters

Default Assignments:

1. External - users are assigned "Standard User" by default
2. Local - global permissions are assigned during user creation

Projects and Namespaces
-----------------------

A project is a collection of namespaces. A resource quota at the project level
serves as the limit for the combined resource quotas of all namespaces in the
project.

Permissions granted at the project level also apply to all namespaces within it.
You can override this by editing a namespace's permissions manually.

Project permissions
-------------------

Project permissions are applied in real time. Users don't need to log out and
log back in for their permissions to take effect - they apply immediately.

A user can gain access to a cluster even without being added at the cluster
level - by being added at the project level.

Kubernetes vs Rancher RoleBindings
----------------------------------

Created by Rancher:

...
  labels:
    authz.cluster.cattle.io/rtb-owner: 25426a95-0291-11e9-b27e-005056b62395
    cattle.io/creator: norman
  name: clusterrolebinding-68fxg
...

Created directly in Kubernetes (via kubectl):

...
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"clusterrolebinding-f7ggc","namespace":"sata-dev"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"admin"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"User","name":"user-f7ggc"}]}
  creationTimestamp: "2019-01-02T03:00:13Z"
...
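A quick way to compare the two side by side is to dump the bindings in a
namespace and inspect the labels and annotations (the sata-dev namespace is
taken from the example above):
kubectl get rolebindings -n sata-dev -o yaml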

Rancher Architecture


Architecture
------------

Rancher 2 is built on Docker and Kubernetes.


Rancher API Server
- sits in front of the Kubernetes API server and etcd
- implements the following:
  a. manage user identities (AD/GitHub/..)
  b. manage access control and security policies
  c. manage projects (groups of namespaces)
  d. keep track of nodes
Cluster Controller
- used for the global Rancher install
- provides access control policies to clusters and projects
- provisions clusters (Docker/RKE/GKE)
Cluster Agents
- manage individual Kubernetes clusters
  a. workload management (pods, deployments, etc.)
  b. applies roles and bindings
  c. communication between the Kubernetes clusters and the Rancher server
Authentication Proxy
- forwards authentication to the individual Kubernetes clusters