Install Kubernetes Control Plane (master) and Worker Node:

Prerequisites for the Kubernetes Control Plane and Worker Nodes / minions:
Disable SELinux:

Edit the /etc/selinux/config file, set SELINUX=disabled as follows, and reboot the server.
SELINUX=disabled
# getenforce
Disabled
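If you prefer not to edit the file by hand, the same change can be scripted (a sketch, assuming the file currently reads SELINUX=enforcing; setenforce 0 only switches SELinux to permissive mode until the reboot makes the change permanent):
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# setenforce 0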
Disable firewall:

# systemctl stop firewalld
# systemctl disable firewalld
# systemctl status firewalld
Active: inactive (dead)
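Disabling firewalld is the simplest option for a lab build. If you would rather leave the firewall running, an alternative sketch is to open the ports the control plane needs (6443 for the API server, 2379-2380 for etcd, 10250 for the kubelet):
# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --reload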
Set bridge-nf-call-iptables contents:
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
Otherwise, you will get the following error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
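Note that writing to /proc does not survive a reboot, and the bridge-nf-call-iptables entry only exists once the br_netfilter module is loaded. A sketch to load the module and persist the setting via sysctl (the file name k8s.conf is an arbitrary choice):
# modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system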
Disable swap:
# swapoff -a
Otherwise, you will get the following error:
[ERROR Swap]: running with swap on is not supported. Please disable swap
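swapoff -a only disables swap until the next reboot. To keep it off permanently, comment out the swap entry in /etc/fstab as well (a sketch, assuming the entry contains the word swap):
# sed -i '/ swap / s/^/#/' /etc/fstab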
Configure the repository on the control plane and worker node:
Execute the following command to create the file /etc/yum.repos.d/kubernetes.repo. It sets the repository name, base URL, enabled flag, gpgcheck (package signature verification), and GPG keys, so that Kubernetes can be downloaded from the Google Cloud repository for RHEL and CentOS.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
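A quick sanity check that yum picked up the new repository (output will vary):
# yum repolist enabled | grep -i kubernetes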
Install Kubernetes (kubeadm) and Docker on the control plane and worker node:

# yum install kubeadm docker -y
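kubeadm pulls in kubelet and kubectl as dependencies. To confirm what was installed (versions will vary with the repository state):
# kubeadm version -o short
# docker --version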
Start kubelet and enable kubelet on control plane and worker node:
# systemctl restart kubelet && systemctl enable kubelet
Start and enable Docker on the control plane and worker node:
# systemctl restart docker && systemctl enable docker
Initialize the cluster on the control plane:
# kubeadm init
The following message will be displayed on the control plane:
Your Kubernetes control-plane has initialized successfully!
Execute the following commands on the control plane:
I decided to administer the cluster as the root user, so I executed the following commands as root.
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
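At this point kubectl can reach the cluster, but the control-plane node will report NotReady until a pod network add-on is deployed in the next step:
# kubectl get nodes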

Execute the following commands to deploy the pod network:
# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
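A quick check that the Weave pods come up (they may take a minute to reach Running):
# kubectl get pods -n kube-system | grep weave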

Join Worker Node:
Do you remember that while initializing the Kubernetes cluster on the control plane / master you got the following message? It explains how to join worker nodes to the control plane:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <ip-address>:6443 --token 2folj8.j7pq6glnjuy71ch7 \
    --discovery-token-ca-cert-hash sha256:713838b7e8a604030b355dd3afcd5d5d6a7ffca466c3cbd8b911738f02712865

Execute the following command on the worker node / minion:
# kubeadm join <ip-address>:6443 --token 2folj8.j7pq6glnjuy71ch7 \
    --discovery-token-ca-cert-hash sha256:713838b7e8a604030b355dd3afcd5d5d6a7ffca466c3cbd8b911738f02712865
We can expect the following message once the worker node joins the control plane:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
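The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired by the time you add a node, generate a fresh join command on the control plane:
# kubeadm token create --print-join-command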
Check status from control plane:
# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
<hostname>   Ready    master   15m   v1.18.3
<hostname>   Ready    <none>   67s   v1.18.3

Notice that the ROLES column for the worker node is <none>; to learn more, see https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
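If you only want the ROLES column to show a friendly name, a label of the form node-role.kubernetes.io/<role> is enough (a sketch; the role name worker is an arbitrary choice):
# kubectl label node <hostname> node-role.kubernetes.io/worker=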

 

Backup etcd: https://support.coreos.com/hc/en-us/articles/115000323894-Creating-etcd-backup
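For a kubeadm-built cluster, a minimal sketch of taking an etcd snapshot with etcdctl (certificate paths assume kubeadm defaults under /etc/kubernetes/pki/etcd; the snapshot destination is arbitrary):
# ETCDCTL_API=3 etcdctl snapshot save /var/backup/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key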

Note:  
Explore the K8s directory structure and K8s processes on the control plane and node.
Explore K8s installation with detailed messages.
Explore K8s master high availability.
