Now let's make a real cluster with 3 nodes (1 master, 2 minions).
Let's try using Fedora 27 Server instead of CentOS to make the setup a bit
different than usual, and let's make our lives easier by installing the
packages from the DNF repos instead of installing from tarballs.
Here is a summary of the versions we will use in this tutorial:
Host OS: Ubuntu 17.04 (Zesty)
|_ Virtualization: VirtualBox 5.2.4 r119785 (Qt5.7.1)
|_ Virtual Machine OS: Fedora Server 27
|_ Kubernetes: 1.7.3
|_ ETCD: 3.2.7
|_ Flannel: 0.7.0
|_ Docker: 1.13.1
Preparation
===========
1. Make sure all nodes can ping and resolve each other's hostnames (see the
example /etc/hosts snippet after this list).
2. Internet connectivity is required to download the packages.
3. If you have a firewall enabled, turn it off on all nodes. If left enabled, it
might cause issues. For example, it will prevent flannel from routing
packets properly. In short, pods will not be pingable from other pods.
systemctl disable --now firewalld
systemctl mask firewalld
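In case you are wondering how step 1 can be satisfied without DNS, plain /etc/hosts entries on every node are enough for a small lab like this. The addresses below are only an example (the master address matches the one that shows up later in the curl output; the node addresses are placeholders), so substitute whatever your VMs actually use:
cat << EOF >> /etc/hosts
192.168.1.45 master
192.168.1.46 node1
192.168.1.47 node2
EOF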
Setup the master
================
1. Install ETCD. This will store all information about your cluster.
[root@master ~]# dnf install -y etcd
[...]
Installed:
etcd.x86_64 3.2.7-1.fc27
Complete!
[root@master ~]#
2. Update etcd configuration.
[root@master ~]# cat << EOF > /etc/etcd/etcd.conf
> ETCD_NAME=default
> ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
> ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
> ETCD_ADVERTISE_CLIENT_URLS="http://master:2379"
> EOF
[root@master ~]#
3. Start and enable ETCD.
[root@master ~]# systemctl enable --now etcd
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
[root@master ~]#
4. Verify etcd is healthy before proceeding to the next step.
[root@master ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://master:2379
cluster is healthy
[root@master ~]#
5. Install the kubernetes package. It enables us to configure kube-apiserver,
kube-scheduler, and kube-controller-manager.
[root@master ~]# dnf install -y kubernetes
[...]
Installed:
kubernetes.x86_64 1.7.3-1.fc27 criu.x86_64 3.6-1.fc27 oci-register-machine.x86_64 0-5.12.git3c01f0b.fc27
oci-systemd-hook.x86_64 1:0.1.15-1.git2d0b8a3.fc27 atomic-registries.x86_64 1.20.1-9.fc27 audit-libs-python3.x86_64 2.7.8-1.fc27
checkpolicy.x86_64 2.7-2.fc27 conntrack-tools.x86_64 1.4.4-5.fc27 container-selinux.noarch 2:2.42-1.fc27
container-storage-setup.noarch 0.8.0-2.git1d27ecf.fc27 docker.x86_64 2:1.13.1-44.git584d391.fc27 docker-common.x86_64 2:1.13.1-44.git584d391.fc27
docker-rhel-push-plugin.x86_64 2:1.13.1-44.git584d391.fc27 kubernetes-client.x86_64 1.7.3-1.fc27 kubernetes-master.x86_64 1.7.3-1.fc27
kubernetes-node.x86_64 1.7.3-1.fc27 libcgroup.x86_64 0.41-13.fc27 libnet.x86_64 1.1.6-14.fc27
libnetfilter_cthelper.x86_64 1.0.0-12.fc27 libnetfilter_cttimeout.x86_64 1.0.0-10.fc27 libnetfilter_queue.x86_64 1.0.2-10.fc27
libsemanage-python3.x86_64 2.7-1.fc27 libyaml.x86_64 0.1.7-4.fc27 oci-umount.x86_64 2:2.3.2-1.git3025b19.fc27
policycoreutils-python-utils.x86_64 2.7-1.fc27 policycoreutils-python3.x86_64 2.7-1.fc27 protobuf-c.x86_64 1.2.1-7.fc27
python3-PyYAML.x86_64 3.12-5.fc27 python3-pytoml.noarch 0.1.14-2.git7dea353.fc27 setools-python3.x86_64 4.1.1-3.fc27
skopeo-containers.x86_64 0.1.27-1.git93876ac.fc27 socat.x86_64 1.7.3.2-4.fc27 subscription-manager-rhsm-certificates.x86_64 1.21.1-1.fc27
systemd-container.x86_64 234-8.fc27 yajl.x86_64 2.1.0-8.fc27
Complete!
[root@master ~]#
6. Update the kubernetes system config /etc/kubernetes/config to point to the
master. This must be the same on all nodes, so we will do the same on the
minions later.
KUBE_MASTER="--master=http://master:8080"
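If you prefer to edit the stock file in place instead of retyping it, a one-liner like the following does the job, assuming the config shipped by the Fedora package already contains a KUBE_MASTER line (it normally points to 127.0.0.1:8080):
[root@master ~]# sed -i 's|^KUBE_MASTER=.*|KUBE_MASTER="--master=http://master:8080"|' /etc/kubernetes/config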
7. Configure the API server. This should be done on the master only.
[root@master ~]# cat << EOF > /etc/kubernetes/apiserver
> KUBE_API_ADDRESS="--address=0.0.0.0"
> KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
> KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
> KUBE_API_ARGS=""
> EOF
[root@master ~]#
8. Start and enable the 3 kubernetes services.
[root@master ~]# systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
[root@master ~]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
[root@master ~]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
[root@master ~]#
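If you do not feel like typing those three commands one by one, a small shell loop does the same thing; this is just a convenience, nothing kubernetes-specific:
[root@master ~]# for svc in kube-apiserver kube-scheduler kube-controller-manager; do systemctl enable --now $svc; done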
9. Verify that the API server is accessible by doing a curl on the endpoint
from all nodes. You should get output similar to the one below. Let's try
executing it from one of the minions.
[root@node1 ~]# curl http://master:8080/api
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "192.168.1.45:6443"
}
]
}[root@node1 ~]#
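As an extra sanity check, you can also ask the API server for the health of the other control-plane components straight from the master. The -s flag simply tells kubectl which server to talk to; the scheduler, the controller-manager, and etcd should all report as Healthy:
[root@master ~]# kubectl -s http://master:8080 get componentstatuses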
10. Create a flannel network by defining it in a json file and storing it in
ETCD.
[root@master ~]# cat flannel-config.json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
[root@master ~]#
[root@master ~]# etcdctl set /coreos.com/network/config < flannel-config.json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
[root@master ~]#
11. Verify that the key exists in ETCD.
[root@master ~]# etcdctl get /coreos.com/network/config
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
[root@master ~]#
Setup the minions
=================
All the steps below need to be done on all minions (node1 and node2).
1. Install the kubernetes package. It enables us to configure kube-proxy, kubelet,
and docker on the minions.
[root@node1 ~]# dnf install -y kubernetes
[...]
Installed:
kubernetes.x86_64 1.7.3-1.fc27 criu.x86_64 3.6-1.fc27 oci-register-machine.x86_64 0-5.12.git3c01f0b.fc27
oci-systemd-hook.x86_64 1:0.1.15-1.git2d0b8a3.fc27 atomic-registries.x86_64 1.20.1-9.fc27 audit-libs-python3.x86_64 2.7.8-1.fc27
checkpolicy.x86_64 2.7-2.fc27 conntrack-tools.x86_64 1.4.4-5.fc27 container-selinux.noarch 2:2.42-1.fc27
container-storage-setup.noarch 0.8.0-2.git1d27ecf.fc27 docker.x86_64 2:1.13.1-44.git584d391.fc27 docker-common.x86_64 2:1.13.1-44.git584d391.fc27
docker-rhel-push-plugin.x86_64 2:1.13.1-44.git584d391.fc27 kubernetes-client.x86_64 1.7.3-1.fc27 kubernetes-master.x86_64 1.7.3-1.fc27
kubernetes-node.x86_64 1.7.3-1.fc27 libcgroup.x86_64 0.41-13.fc27 libnet.x86_64 1.1.6-14.fc27
libnetfilter_cthelper.x86_64 1.0.0-12.fc27 libnetfilter_cttimeout.x86_64 1.0.0-10.fc27 libnetfilter_queue.x86_64 1.0.2-10.fc27
libsemanage-python3.x86_64 2.7-1.fc27 libyaml.x86_64 0.1.7-4.fc27 oci-umount.x86_64 2:2.3.2-1.git3025b19.fc27
policycoreutils-python-utils.x86_64 2.7-1.fc27 policycoreutils-python3.x86_64 2.7-1.fc27 protobuf-c.x86_64 1.2.1-7.fc27
python3-PyYAML.x86_64 3.12-5.fc27 python3-pytoml.noarch 0.1.14-2.git7dea353.fc27 setools-python3.x86_64 4.1.1-3.fc27
skopeo-containers.x86_64 0.1.27-1.git93876ac.fc27 socat.x86_64 1.7.3.2-4.fc27 subscription-manager-rhsm-certificates.x86_64 1.21.1-1.fc27
systemd-container.x86_64 234-8.fc27 yajl.x86_64 2.1.0-8.fc27
Complete!
[root@node1 ~]#
2. Update /etc/kubernetes/config to point to the master.
KUBE_MASTER="--master=http://master:8080"
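The same sed one-liner used on the master works here as well, again assuming the file shipped with the package already has a KUBE_MASTER line:
[root@node1 ~]# sed -i 's|^KUBE_MASTER=.*|KUBE_MASTER="--master=http://master:8080"|' /etc/kubernetes/config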
3. Update the kubelet config so the minion can register itself. Make sure to set
the --hostname-override correctly for each minion.
[root@node1 ~]# cat << EOF >> /etc/kubernetes/kubelet
> KUBELET_ADDRESS="--address=0.0.0.0"
> KUBELET_HOSTNAME="--hostname-override=node1"
> KUBELET_API_SERVER="--api-servers=http://master:8080"
> EOF
[root@node1 ~]#
4. Start and enable the kubelet service. This registers the minion.
[root@node1 ~]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@node1 ~]#
[root@node1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-02-06 20:48:18 +08; 3s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3696 (kubelet)
Tasks: 12 (limit: 4915)
Memory: 39.4M
CPU: 1.127s
CGroup: /system.slice/kubelet.service
├─3580 journalctl -k -f
├─3696 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://master:8080 --address=0.0.0.0 --hostname-override=node1 --allow-privileged=false --cgroup-driver=systemd
└─3765 journalctl -k -f
[...]
Feb 06 20:48:19 node1 kubelet[3696]: I0206 20:48:19.587455 3696 kubelet_node_status.go:85] Successfully registered node node1
[...]
[root@node1 ~]#
Once all minions are registered, they will appear in "kubectl get nodes"
on the master.
Starting kubelet will start docker but will not enable it. Let's do it manually.
[root@node1 ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node1 ~]#
Also, docker 1.13 is known to have issues with flannel 0.7.0. By default, this
version of docker sets the iptables FORWARD policy to DROP, which prevents
inter-pod communication. So we need to manually set it back to ACCEPT by running
this command on the minions.
[root@node1 ~]# iptables -P FORWARD ACCEPT
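Keep in mind that this setting does not survive a reboot or a docker restart by itself. One way to make it stick, sketched below, is a small systemd drop-in that re-applies the policy every time docker starts. The drop-in file name is arbitrary, and /usr/sbin/iptables is the usual location of the binary on Fedora:
[root@node1 ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@node1 ~]# cat << EOF > /etc/systemd/system/docker.service.d/10-forward-accept.conf
> [Service]
> ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
> EOF
[root@node1 ~]# systemctl daemon-reload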
5. Install flannel to allow inter-pod communication.
[root@node1 ~]# dnf install -y flannel
[...]
Installed:
flannel.x86_64 0.7.0-5.fc27
Complete!
[root@node1 ~]#
6. Configure flannel by pointing it to ETCD on the master. That is where it will
get all the network information.
[root@node1 ~]# cat << EOF > /etc/sysconfig/flanneld
> FLANNEL_ETCD_ENDPOINTS="http://master:2379"
> FLANNEL_ETCD_PREFIX="/coreos.com/network"
> EOF
[root@node1 ~]#
7. Start and enable flannel.
[root@node1 ~]# systemctl enable --now flanneld
Created symlink /etc/systemd/system/multi-user.target.wants/flanneld.service → /usr/lib/systemd/system/flanneld.service.
Created symlink /etc/systemd/system/docker.service.requires/flanneld.service → /usr/lib/systemd/system/flanneld.service.
[root@node1 ~]#
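Before testing the cluster, two quick sanity checks are worth doing. On each minion, flanneld should have written the subnet it leased into /run/flannel/subnet.env (its default output file); the subnet you see will be some /24 inside the 18.16.0.0/16 network we stored in etcd. And on the master, both minions should now be listed as Ready nodes:
[root@node1 ~]# cat /run/flannel/subnet.env
[root@master ~]# kubectl get nodes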
Let's test the cluster!
=======================
Now that we are done setting up the master and the minions, let's try creating a
simple nginx deployment with 2 replicas. Execute this on the master.
[root@master ~]# kubectl run nginx --image=nginx --replicas=2
deployment "nginx" created
[root@master ~]#
deployment "nginx" created
[root@master ~]#
Wait a few minutes and our nginx pods should be up and running.
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-935182578-hdvqv 1/1 Running 0 58s 172.17.0.2 node1
nginx-935182578-zd8cd 1/1 Running 0 58s 172.17.0.2 node2
[root@master ~]#
If for some reason the pods are not being created and you see the following
message from "journalctl" on the minions:
unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\"
try doing a "docker pull gcr.io/google_containers/pause-amd64:3.0" manually on
each minion, as shown below. That should do the trick and your pods will then be created.
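In the same form as the other commands in this post, that is simply the following, run on every minion; the docker images line is only there to confirm the image actually landed locally:
[root@node1 ~]# docker pull gcr.io/google_containers/pause-amd64:3.0
[root@node1 ~]# docker images | grep pause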
Notice the pod IP addresses. You should be able to ping them from any minion.
[root@node1 ~]# ping -c 1 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.115 ms
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
[root@node1 ~]#
[root@node2 ~]# ping -c 1 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.115 ms
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
[root@node2 ~]#
We were able to ping the IPs because of flannel. Without it, a pod can only
be reached from the minion where it is located.
Let's finalize our testing by exposing our application outside the cluster via a service.
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service "nginx" exposed
[root@master ~]# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 3d
nginx 10.254.183.3 <nodes> 80:31046/TCP 5s
[root@master ~]#
service "nginx" exposed
[root@master ~]# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 3d
nginx 10.254.183.3 <nodes> 80:31046/TCP 5s
[root@master ~]#
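To confirm the NodePort actually works, curl any node on the port shown in the PORT(S) column (31046 in the output above; the randomly assigned port will be different on your cluster). You should get the default nginx welcome page back:
[root@master ~]# curl http://node1:31046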
And here it is! A static web page served from inside kubernetes.