Friday, January 25, 2019

SYN Flooding


The symptom is an increase in the number of connections stuck in the
SYN-RECV state in netstat output. Here is an example for SSH connections.

[...]
tcp    SYN-RECV   0      0      10.147.0.6:22                 144.217.57.63:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.156.179:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.145.254:45914             
tcp    SYN-RECV   0      0      10.147.0.6:22                 104.27.156.179:45914             
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.158:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.151:3082              
tcp    SYN-RECV   0      0      10.147.0.6:22                 103.9.179.158:45914
[...]

Normally, the 3-way handshake happens like this.

1. Client sends a SYN packet to the server
2. Server responds with a SYN-ACK packet to the client
3. Client sends an ACK packet back to the server

A connection stays in the SYN-RECV state when the client never sends
that final ACK. A large number of such half-open connections may
indicate SYN flooding, a type of denial-of-service attack.
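A quick way to check whether a spike is real is to count and rank the half-open connections. A sketch using iproute2's `ss`, whose `-an` output matches the listing above (plain `netstat -ant` works similarly, with the state spelled SYN_RECV):

```shell
# Total half-open (SYN-RECV) connections right now:
ss -an | grep -c SYN-RECV

# Rank the peer addresses holding SYN-RECV connections;
# in `ss -an` output the peer address:port is field 6.
ss -an | grep SYN-RECV \
  | awk '{split($6, a, ":"); print a[1]}' \
  | sort | uniq -c | sort -rn | head
```

A handful of entries is normal on a busy host; hundreds from a few sources suggests a flood.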

Several kernel parameters can be configured to defend your server.

net.ipv4.tcp_syncookies = 1  --> keeps valid connections from being dropped when the SYN backlog overflows (best parameter to configure)
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 45
net.ipv4.conf.all.rp_filter = 1
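To make the settings take effect and survive a reboot, they can be appended to /etc/sysctl.conf and reloaded (a sketch; run as root):

```shell
# Persist the settings (run as root), then load them into the running kernel.
# Note: net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv only exists when
# the legacy ip_conntrack module is loaded; newer kernels expose it as
# net.netfilter.nf_conntrack_tcp_timeout_syn_recv.
cat >> /etc/sysctl.conf << 'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
EOF
sysctl -p

# Spot-check a value:
sysctl net.ipv4.tcp_syncookies
```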
 

Saturday, January 12, 2019

Setting up RKE Cluster


Centos 7.5
Docker 17.03
Kubernetes 1.11

Preparation
===========

1. Download the latest RKE binary from https://github.com/rancher/rke/releases/.

2. Create separate partition for docker data.
pvcreate /dev/sdb
vgcreate docker_vg /dev/sdb
lvcreate -n docker_lv -l 100%FREE docker_vg
mkfs.xfs /dev/mapper/docker_vg-docker_lv
echo "/dev/mapper/docker_vg-docker_lv  /var/lib/docker  xfs  defaults  0 0" >> /etc/fstab
mkdir -p /var/lib/docker
mount -a
3. Install docker-ce 17.03.
curl https://releases.rancher.com/install-docker/17.03.sh | sh
systemctl enable --now docker
4. Set swappiness to 0.
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
5. Disable Network Manager.
systemctl disable --now NetworkManager
6. Allow the following ports in the firewall.
cat << EOF > ports.txt
6443/tcp
2376/tcp
2379/tcp
2380/tcp
8472/tcp
10250/tcp
80/tcp
443/tcp
30000-32767/tcp
EOF
for port in $(cat ports.txt); do firewall-cmd --add-port=$port --permanent; done
firewall-cmd --reload
7. Load needed kernel modules.
cat << EOF > modules.txt
br_netfilter
ip6_udp_tunnel
ip_set
ip_set_hash_ip
ip_set_hash_net
iptable_filter
iptable_nat
iptable_mangle
iptable_raw
nf_conntrack_netlink
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
nf_nat
nf_nat_ipv4
nf_nat_masquerade_ipv4
nfnetlink
udp_tunnel
veth
vxlan
x_tables
xt_addrtype
xt_conntrack
xt_comment
xt_mark
xt_multiport
xt_nat
xt_recent
xt_set
xt_statistic
xt_tcpudp
EOF
for module in $(cat modules.txt); do modprobe $module; done
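modprobe only loads these for the current boot; on a systemd host like CentOS 7 the same list can be made persistent (a sketch, assuming the standard systemd-modules-load mechanism):

```shell
# Modules listed in /etc/modules-load.d/*.conf are loaded at boot
# by systemd-modules-load.service (run as root).
cp modules.txt /etc/modules-load.d/rke.conf
```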
8. Add the following lines to the [Service] section of /usr/lib/systemd/system/docker.service.
[...]
KillMode=process
MountFlags=shared
[...]
systemctl daemon-reload
systemctl restart docker
9. Allow SSH tunneling and forwarding. Update sshd_config.
[...]
AllowTcpForwarding yes
PermitTunnel yes
[...]
systemctl restart sshd
10. Get kubectl.
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/
Bootstrap the cluster
=====================

1. Generate the RKE config. This walks you through a series of
questions about the cluster and generates "cluster.yml", which is
used in the next step.
rke config
2. Bootstrap cluster.
rke up
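Once `rke up` finishes, it writes a kubeconfig file next to cluster.yml (named kube_config_cluster.yml in recent RKE releases). A quick smoke test with the kubectl installed earlier might look like:

```shell
# Point kubectl at the freshly bootstrapped cluster and check its health.
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes
kubectl get pods --all-namespaces
```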

Wednesday, January 2, 2019

Rancher Roles and Permissions


Introduction
------------

Hierarchy of cluster, projects and namespaces.



Permission Levels
-----------------

Global Permissions - Authorization for Rancher itself (excluding the individual
                     managed k8 clusters).
Cluster and Project Roles - Authorization for a specific k8 cluster.

Adding Users
------------

Local Users:

A user first needs to be made a member of a cluster, which is done by a
Rancher admin. Without that, login will fail even with the correct
password, showing the following message (seen in Rancher v2.1.1).


Global Permissions
------------------

Default global permissions:

1. Administrator - full control over the entire Rancher system and all clusters within it
2. Standard User - can create new clusters and use them; can also assign other
                   users permissions to their clusters

Default Assignments:

1. External - users are assigned "Standard User" by default
2. Local - global permissions are assigned during user creation

Projects and Namespaces
-----------------------

A project is a collection of namespaces. A resource quota set at the project
level serves as the limit for the combined resource quotas of all its
namespaces.

Permissions set at the project level also apply to all namespaces within it.
You can override this by editing a namespace's permissions manually.

Project permissions
-------------------

Project permissions are applied in real time. Users don't need to log out
and back in for their permissions to take effect - they apply immediately.

A user can gain access to a cluster even without being added at the
cluster level, by being added at the project level.

K8 vs Rancher Rolebindings
--------------------------

Created using Rancher:

...
  labels:
    authz.cluster.cattle.io/rtb-owner: 25426a95-0291-11e9-b27e-005056b62395
    cattle.io/creator: norman
  name: clusterrolebinding-68fxg
...

Created directly with kubectl:

...
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"clusterrolebinding-f7ggc","namespace":"sata-dev"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"admin"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"User","name":"user-f7ggc"}]}
  creationTimestamp: "2019-01-02T03:00:13Z"
...
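The cattle.io/creator label shown above doubles as a filter for finding everything Rancher created (illustrative commands; the label value is taken from the snippet above):

```shell
# List rolebindings created through Rancher (its API server, "norman",
# labels every object it creates).
kubectl get rolebindings --all-namespaces -l cattle.io/creator=norman
kubectl get clusterrolebindings -l cattle.io/creator=norman
```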

Rancher Architecture


Architecture
------------

Rancher 2 is built for Docker and Kubernetes.


Rancher API Server
- sits in front of k8 api server and etcd
- implements the following:
  a. manage user identities (AD/GitHub, etc.)
  b. manage access control and security policies
  c. manage projects (groups of namespaces)
  d. keep track of nodes
Cluster Controller
- used for global rancher install
- provides access control policies to clusters and projects
- provisions clusters (docker/RKE/GKE)
Cluster Agents
- manages individual k8 clusters
  a. workload management (pods, deployments, etc ..)
  b. applies roles and bindings
  c. communication between k8 clusters and rancher server
Authentication Proxy
- forwards authentication to individual k8 clusters