INSTALL KUBERNETES CLUSTER ON CENTOS 7 WITH KUBEADM

Minimum requirements for the servers in the cluster:

4 GB RAM and 2 CPUs per server.

SERVER    HOSTNAME                RAM     CPU
MASTER    k8smaster.frkcvk.com    4 GB    2
WORKER1   k8sworker1.frkcvk.com   4 GB    2
WORKER2   k8sworker2.frkcvk.com   4 GB    2
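If the hostnames are not set yet, you can set them with hostnamectl (shown for the master here; repeat on each worker with its own name):

sudo hostnamectl set-hostname k8smaster.frkcvk.com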
Update all the servers first:

$ sudo yum update -y

Add Kubernetes repository for CentOS 7 to all the servers.

sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Then install the required packages.

sudo yum -y install epel-release vim git curl wget kubelet kubeadm kubectl --disableexcludes=kubernetes

-- To install a specific Kubernetes version instead:

sudo yum install -y kubelet-1.23.15 kubeadm-1.23.15 kubectl-1.23.15 --disableexcludes=kubernetes

Confirm the installation by checking the version of kubectl (the output will reflect whichever version you installed).

$ kubectl version --client

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}

Disable SELinux, swap, and the firewall. I prefer to run with the firewall and SELinux disabled.

sudo vi /etc/selinux/config

# set SELINUX=disabled, then reboot for the change to take effect

$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
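The kubelet refuses to start with swap enabled by default, so turn it off and make the change persistent across reboots:

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab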

Configure sysctl so that bridged traffic is visible to iptables and IP forwarding is enabled.

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sudo sysctl --system
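You can verify the new values took effect:

$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward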

Install a container runtime. Choose one of the following:

  • Docker
  • Containerd

Installing Docker

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Create required directories
sudo mkdir /etc/docker
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create the daemon.json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

sudo systemctl daemon-reload 
sudo systemctl restart docker
sudo systemctl enable docker
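Docker should now be using the systemd cgroup driver; a quick check:

$ sudo docker info | grep -i "cgroup driver"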

Installing Containerd

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter


# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

# Add the Docker repo (containerd.io is published in the Docker repository)
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd
sudo yum update -y
sudo yum install -y containerd.io

# Configure containerd and start service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

To use the systemd cgroup driver with recent containerd releases, set SystemdCgroup = true in the runc options section of /etc/containerd/config.toml (older releases used plugins.cri.systemd_cgroup = true). When using kubeadm, make sure the kubelet is configured for the same cgroup driver.
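A minimal sketch, assuming the default config generated above (which contains SystemdCgroup = false under the runc options):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd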

Log in to the server that will be used as the master and make sure the br_netfilter module is loaded.

The steps below apply only to the master node.

$ lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  2 br_netfilter,ebtable_broute

Enable the kubelet service.

$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

We now want to initialize the machine that will run the control plane components, which include etcd (the cluster database) and the API server.

Pull the container images:

[root@k8smaster k8suser]# sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.22.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.22.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.22.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.22.1
[config/images] Pulled k8s.gcr.io/pause:3.5
[config/images] Pulled k8s.gcr.io/etcd:3.5.0-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.4

Set a cluster endpoint DNS name or add records to the /etc/hosts file on every node.

sudo vi /etc/hosts

192.168.142.11  k8smaster.frkcvk.com    k8smaster
192.168.142.12  k8sworker1.frkcvk.com   k8sworker1
192.168.142.13  k8sworker2.frkcvk.com   k8sworker2
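A quick way to confirm name resolution works from every node:

$ ping -c 1 k8smaster.frkcvk.com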

Create Cluster

The --pod-network-cidr value below matches the default Calico IP pool applied later.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

-- OUTPUT

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.142.11:6443 --token ny2yod.w2g5d7ptqrglrnmp \
        --discovery-token-ca-cert-hash sha256:56807297d11d137a8533ee1f89fa64ba2367b28496d2dda211253ba1da063d0a
Configure kubectl access for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add the worker nodes to the cluster:

# kubeadm join 192.168.142.11:6443 --token ny2yod.w2g5d7ptqrglrnmp \
>         --discovery-token-ca-cert-hash sha256:56807297d11d137a8533ee1f89fa64ba2367b28496d2dda211253ba1da063d0a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
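Join tokens expire after 24 hours by default. If you add a node later, generate a fresh join command on the master:

sudo kubeadm token create --print-join-command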

Check Cluster

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.142.11:6443
CoreDNS is running at https://192.168.142.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

INSTALL NETWORK PLUGIN (CALICO)

Install the Tigera Calico operator:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml

Download the custom resources manifest; make sure the ipPools cidr below matches the --pod-network-cidr passed to kubeadm init:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml -O

# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

-- Finally, apply the YAML file

# kubectl apply -f custom-resources.yaml
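The Calico pods take a minute or two to start; with the operator-based install they typically run in the calico-system namespace:

$ kubectl get pods -n calico-system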

Confirm that all of the pods are running; wait until every status shows Running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58497c65d5-2fvsv       1/1     Running   0          2m47s
kube-system   calico-node-5slr7                              1/1     Running   0          2m47s
kube-system   calico-node-8n5gr                              1/1     Running   0          2m47s
kube-system   calico-node-hbg4j                              1/1     Running   0          2m47s
kube-system   coredns-78fcd69978-kzg85                       1/1     Running   0          9m14s
kube-system   coredns-78fcd69978-m92hg                       1/1     Running   0          9m14s
kube-system   etcd-k8smaster.frkcvk.com                      1/1     Running   0          9m28s
kube-system   kube-apiserver-k8smaster.frkcvk.com            1/1     Running   0          9m28s
kube-system   kube-controller-manager-k8smaster.frkcvk.com   1/1     Running   0          9m28s
kube-system   kube-proxy-5j86s                               1/1     Running   0          9m15s
kube-system   kube-proxy-kmn4k                               1/1     Running   0          7m1s
kube-system   kube-proxy-lr2vk                               1/1     Running   0          6m37s
kube-system   kube-scheduler-k8smaster.frkcvk.com            1/1     Running   0          9m28s

Confirm the master node is ready:

$ kubectl get nodes -o wide

NAME                    STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8smaster.frkcvk.com    Ready    control-plane,master   10m     v1.22.1   192.168.142.11   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8sworker1.frkcvk.com   Ready    <none>                 8m20s   v1.22.1   192.168.142.12   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8sworker2.frkcvk.com   Ready    <none>                 7m56s   v1.22.1   192.168.142.13   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8

Run the command below on the control plane to see that the worker nodes have joined the cluster:

$ kubectl get nodes
NAME                    STATUS   ROLES                  AGE     VERSION
k8smaster.frkcvk.com    Ready    control-plane,master   11m     v1.22.1
k8sworker1.frkcvk.com   Ready    <none>                 8m43s   v1.22.1
k8sworker2.frkcvk.com   Ready    <none>                 8m19s   v1.22.1
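Optionally, label the workers so the ROLES column shows a role instead of <none>:

kubectl label node k8sworker1.frkcvk.com node-role.kubernetes.io/worker=
kubectl label node k8sworker2.frkcvk.com node-role.kubernetes.io/worker=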

Install Kubernetes Dashboard (Optional)
The Kubernetes Dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources.
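A minimal sketch of installing it from the upstream manifest (v2.7.0 assumed here; check the project's releases page for the version matching your cluster):

# Deploy the dashboard from the recommended manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Then reach it through the API server proxy
kubectl proxy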
