HOW TO INSTALL KUBERNETES ON RHEL8

We will walk step by step through how to set up a Kubernetes cluster. The setup consists of one master node and two worker nodes. The commands below are run on all of the master and worker nodes.

# dnf update -y
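
Before going any further, it helps if every node can resolve the others by name. A minimal sketch, assuming the hostnames k8sn01, k8sn02, and k8sn03 that appear later in this guide; the worker IPs below are placeholders you should replace with your own:

# vi /etc/hosts

134.122.31.205 k8sn01
<worker1-ip> k8sn02
<worker2-ip> k8sn03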

First, we need to install Docker. Installing it also provides containerd, which is what Kubernetes will use as its container runtime (we configure it later in this guide).

Adding the Docker Repository:

# dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Installing Docker:

# dnf install docker-ce docker-ce-cli containerd.io -y

Let's start and enable Docker:

# systemctl start docker
# systemctl enable docker
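
Before moving on, you can quickly confirm the service is up:

# systemctl is-active docker
# docker version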

Adding the Kubernetes Repository and Installing the Packages

The exclude line below keeps a routine dnf update from upgrading the Kubernetes packages unexpectedly; --disableexcludes=kubernetes lifts it for the deliberate install further down.

# tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# dnf clean all

# dnf install -y yum-utils device-mapper-persistent-data lvm2

# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
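
You can verify that all three tools were installed and report a matching version:

# kubeadm version -o short
# kubelet --version
# kubectl version --client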

We need to disable swap. Turn it off now and comment out the swap entry in /etc/fstab so it stays disabled after a reboot:

# swapoff -a
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
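
Once swap is fully disabled, swapon --show prints nothing, and the fstab entry should now be commented out:

# swapon --show
# grep swap /etc/fstab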

Creating the containerd Configuration File:

# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml

Changing the Systemd Cgroup Driver Setting:

In the config.toml file, enable the SystemdCgroup = true setting (it lives under the runc options section of the CRI plugin). This makes containerd use the systemd cgroup driver, matching the kubelet, so Kubernetes works correctly with containerd.

# vi /etc/containerd/config.toml

SystemdCgroup = true
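
If you prefer a non-interactive edit, the same change can be applied with sed; this assumes the default configuration generated above, where the line initially reads SystemdCgroup = false:

# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# grep SystemdCgroup /etc/containerd/config.toml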

Restarting containerd:

Restart the containerd service so the configuration changes take effect, and enable it to start automatically:

# systemctl restart containerd
# systemctl enable containerd
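
A quick check confirms containerd came back up after the restart:

# systemctl is-active containerd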

Starting the kubelet Service

To start the kubelet service now and have it start automatically on every boot:

# systemctl enable --now kubelet
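
Until kubeadm init (or kubeadm join) writes its configuration, the kubelet will keep restarting in a crash loop; that is expected at this stage and can be observed with:

# systemctl status kubelet --no-pager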

Initializing the Master Node

Log in to the master node and run the cluster initialization command. The --pod-network-cidr flag reserves the address range that the pod network add-on (Calico, installed later) will use; 192.168.0.0/16 is Calico's default.

# kubeadm init --pod-network-cidr=192.168.0.0/16
I1031 19:21:45.303540   75061 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.15
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1031 19:21:56.068137   75061 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8sn01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 134.122.31.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8sn01 localhost] and IPs [134.122.31.205 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8sn01 localhost] and IPs [134.122.31.205 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.004465 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8sn01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8sn01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ut7oqz.3mbshdzfqo9xv942
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 134.122.31.205:6443 --token ut7oqz.3mbshdzfqo9xv942 \
        --discovery-token-ca-cert-hash sha256:a3def828427ae98341613438e766198392b51570e8ea237c80186091c67abcc0

Make a note of the kubeadm join command printed in the output; it will be used to add the worker nodes. Run it as root on each worker:

# kubeadm join 134.122.31.205:6443 --token ut7oqz.3mbshdzfqo9xv942 \
        --discovery-token-ca-cert-hash sha256:a3def828427ae98341613438e766198392b51570e8ea237c80186091c67abcc0
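
The bootstrap token expires after 24 hours by default. If you need to add a worker later, you can generate a fresh join command on the master:

# kubeadm token create --print-join-command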

Kubernetes User Configuration:

On the master node, copy the admin kubeconfig so kubectl can be used as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Installing the Network Add-on

On the master node, install the Calico network add-on with the following command so that the nodes can communicate with each other:

# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
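
Pulling and starting the Calico pods can take a few minutes; you can watch them come up in the kube-system namespace:

# kubectl get pods -n kube-system -w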

After this step, we should see the nodes in the Ready state:

# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
k8sn01   Ready    control-plane   13m     v1.28.15
k8sn02   Ready    <none>          8m2s    v1.28.15
k8sn03   Ready    <none>          7m50s   v1.28.15
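
As a quick smoke test (the nginx image is just an illustrative choice), deploy a pod and confirm that it is scheduled onto one of the worker nodes:

# kubectl create deployment nginx --image=nginx
# kubectl get pods -o wide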
