Before we begin, here is an overview of the machines:
HAProxy machine: 192.168.29.93 (the machine on which the HAProxy load balancer will be installed)
Master1 : 192.168.29.90
Master2 : 192.168.29.91
Master3 : 192.168.29.92
Worker1 : 192.168.29.100
Worker2 : 192.168.29.101
Worker3 : 192.168.29.102
Installing dependencies
- Cloud Flare SSL (cfssl), used to generate the different certificates
- The kubectl client, used to manage the Kubernetes cluster
Installing Cloud Flare SSL
1. Download the cfssl and cfssljson binaries.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
2. Add the execution permission to the binaries.
chmod +x cfssl*
3. Copy the binaries to /usr/local/bin.
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
4. Verify whether the installation is successful.
cfssl version
Installing kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
kubectl version
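Note that with no cluster configured yet, kubectl version prints the client version and then reports that it cannot connect to a server; that is expected at this stage. To check only the client binary:
kubectl version --client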
Installing the HAProxy load balancer
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install haproxy
sudo vim /etc/haproxy/haproxy.cfg
global
    ...
defaults
    ...
frontend kubernetes
    bind 192.168.29.93:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master-0 192.168.29.90:6443 check fall 3 rise 2
    server k8s-master-1 192.168.29.91:6443 check fall 3 rise 2
    server k8s-master-2 192.168.29.92:6443 check fall 3 rise 2
sudo systemctl restart haproxy
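Until the API servers are up, HAProxy will report all backend servers as down, but the frontend should already be listening on port 6443. A quick way to confirm this (assuming the ss utility is available on the machine):
sudo systemctl status haproxy
sudo ss -tlnp | grep 6443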
Generating the TLS certificates
vim ca-config.json
{ "signing": { "default": { "expiry": "8760h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "8760h" } } } }
vim ca-csr.json
{ "CN": "Kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "IE", "L": "Cork", "O": "Kubernetes", "OU": "CA", "ST": "Cork Co." } ] }
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls -la
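You should now see ca.pem, ca-key.pem, and ca.csr in the directory. If openssl is installed, the subject and validity of the new CA certificate can be inspected as a sanity check (not part of the original steps):
openssl x509 -in ca.pem -noout -subject -dates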
Creating the certificate for the Etcd cluster
1. Create the certificate signing request configuration file.
vim kubernetes-csr.json
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "IE", "L": "Cork", "O": "Kubernetes", "OU": "Kubernetes", "ST": "Cork Co." } ] }
2. Generate the certificate and private key.
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=192.168.29.90,192.168.29.91,192.168.29.92,192.168.29.93,127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
3. Verify that the kubernetes.pem and kubernetes-key.pem files were generated.
ls -la
4. Copy the certificates to each node.
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.90:~
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.91:~
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.92:~
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.100:~
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.101:~
scp ca.pem kubernetes.pem kubernetes-key.pem ubuntu@192.168.29.102:~
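To confirm that the load balancer address and all master IPs ended up in the certificate's subject alternative names, the certificate can be inspected with openssl (a sanity check, not part of the original steps):
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'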
Preparing the nodes for kubeadm
Preparing the 192.168.29.90/91/92/100/101/102 machines
Installing the latest Docker version
$ sudo -s
# curl -fsSL https://get.docker.com -o get-docker.sh
# sh get-docker.sh
# usermod -aG docker your-user
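The get.docker.com script should enable and start the Docker service on its own, but it does not hurt to confirm the daemon is running before moving on (still in the root shell):
# systemctl enable docker
# systemctl start docker
# docker version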
Installing kubeadm, kubelet, and kubectl
1- Add the Google repository key.
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
2- Add the Google repository.
# vim /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io kubernetes-xenial main
3- Update the list of packages and install kubelet, kubeadm, and kubectl.
# apt-get update
# apt-get install kubelet kubeadm kubectl
4- Disable the swap.
# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab
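One optional step, not in the original walkthrough: holding the packages prevents a routine apt-get upgrade from bumping the cluster components to an unexpected version.
# apt-mark hold kubelet kubeadm kubectl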
Installing and configuring Etcd on the 192.168.29.90/91/92 machines (all 3 masters)
sudo mkdir /etc/etcd /var/lib/etcd
sudo mv ~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem /etc/etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
tar xvzf etcd-v3.3.13-linux-amd64.tar.gz
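The systemd unit below expects the etcd binary in /usr/local/bin, so move the extracted binaries there (assuming the default layout of the release tarball):
sudo mv etcd-v3.3.13-linux-amd64/etcd etcd-v3.3.13-linux-amd64/etcdctl /usr/local/bin/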
$ sudo vim /etc/systemd/system/etcd.service

[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.29.90 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.29.90:2380 \
  --listen-peer-urls https://192.168.29.90:2380 \
  --listen-client-urls https://192.168.29.90:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.29.90:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.29.90=https://192.168.29.90:2380,192.168.29.91=https://192.168.29.91:2380,192.168.29.92=https://192.168.29.92:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
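The unit above is written for the first master; on 192.168.29.91 and 192.168.29.92 the --name, peer, listen, and advertise URLs must use that node's own IP. Once the file is in place on each master, reload systemd and start the service; after all three members are up, the cluster membership can be checked with etcdctl (the v3 API flags shown are an assumption about how etcdctl is invoked here, not from the original text):
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://192.168.29.90:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem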
Initializing the master nodes
Initializing the Master node 192.168.29.90
$ vim config.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- 192.168.29.93
controlPlaneEndpoint: "192.168.29.93:6443"
etcd:
  external:
    endpoints:
    - https://192.168.29.90:2379
    - https://192.168.29.91:2379
    - https://192.168.29.92:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/24
apiServerExtraArgs:
  apiserver-count: "3"
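With the configuration written, the control plane is initialized from it. The exact command is not shown in the text above, so treat this as the standard kubeadm form for a file named config.yaml; the same command is repeated on the other two masters once their config.yaml (next sections) is in place, and with an external etcd the remaining masters typically also need the certificates kubeadm generates under /etc/kubernetes/pki on the first master copied over before they are initialized.
$ sudo kubeadm init --config=config.yaml
Keep a copy of the kubeadm join command printed at the end of the output; it is needed later to join the worker nodes.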
Initializing the 2nd master node 192.168.29.91
$ vim config.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- 192.168.29.93
controlPlaneEndpoint: "192.168.29.93:6443"
etcd:
  external:
    endpoints:
    - https://192.168.29.90:2379
    - https://192.168.29.91:2379
    - https://192.168.29.92:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/24
apiServerExtraArgs:
  apiserver-count: "3"
Initializing the 3rd master node 192.168.29.92
$ vim config.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- 192.168.29.93
controlPlaneEndpoint: "192.168.29.93:6443"
etcd:
  external:
    endpoints:
    - https://192.168.29.90:2379
    - https://192.168.29.91:2379
    - https://192.168.29.92:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/24
apiServerExtraArgs:
  apiserver-count: "3"
Configuring kubectl on the client machine
2- On the master, temporarily add read permission to the configuration file.
$ sudo chmod +r /etc/kubernetes/admin.conf
3- From the client machine, copy the configuration file.
$ scp ubuntu@192.168.29.90:/etc/kubernetes/admin.conf .
4- Create and configure the kubectl configuration directory.
$ mkdir ~/.kube
$ mv admin.conf ~/.kube/config
$ chmod 600 ~/.kube/config
5- Go back to the SSH session on the master and change back the permissions of the configuration file.
$ sudo chmod 600 /etc/kubernetes/admin.conf
6- Check that you can access the Kubernetes API from the client machine.
$ kubectl get nodes
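Once the kubeconfig is in place, a quick way to confirm the client is talking to the HAProxy frontend rather than a single master:
$ kubectl cluster-info
The API server address reported should be https://192.168.29.93:6443, i.e. the controlPlaneEndpoint configured above.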
Deploying the overlay network
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
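Note that the Calico v3.7 manifest ships with a default pool of 192.168.0.0/16, while podSubnet above is 10.30.0.0/24. A common variant of this step, not from the original text, is to download the manifest, point CALICO_IPV4POOL_CIDR at the pod subnet, and apply the local copy:
wget https://docs.projectcalico.org/v3.7/manifests/calico.yaml
sed -i 's|192.168.0.0/16|10.30.0.0/24|' calico.yaml
kubectl apply -f calico.yaml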
kubectl get pods -n kube-system
$ kubectl get nodes