
Install Kubernetes Control Plane (master) and Worker Node:


Prerequisites for Kubernetes Control Plane and Worker Nodes / minions:
Disable selinux:

Edit the /etc/selinux/config file, set SELINUX=disabled as follows, and reboot the server.
SELINUX=disabled
#getenforce
Disabled
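As an alternative sketch (the sed pattern assumes the file currently reads SELINUX=enforcing), you can change the value from the command line and use setenforce to put SELinux into permissive mode for the current boot; the Disabled state still only takes effect after the reboot:
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# setenforce 0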
Disable firewall:

#systemctl stop firewalld
#systemctl disable firewalld
#systemctl status firewalld
Active: inactive (dead)
Set bridge-nf-call-iptables contents:
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
Otherwise you will get the following error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
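Note that the echo above only lasts until the next reboot. A minimal sketch to make the setting persistent (the file name k8s.conf is an arbitrary choice) is to load the br_netfilter module and add a sysctl drop-in:
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system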
Disable swap:
#swapoff -a
Otherwise you will get the following error:
[ERROR Swap]: running with swap on is not supported. Please disable swap
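swapoff -a only disables swap for the current boot. To keep it disabled after a reboot, comment out the swap entry in /etc/fstab; a one-liner sketch (assuming GNU sed and that the swap line is not already commented) is:
# sed -i '/\sswap\s/ s/^/#/' /etc/fstab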
Configure Repository on control plane and worker node:
Execute the following command to create the file /etc/yum.repos.d/kubernetes.repo with the Kubernetes repository definition. It sets the repository name, base URL, enabled flag, gpgcheck (package signature verification), and gpgkey, so Kubernetes packages can be downloaded from the Google Cloud repository for RHEL and CentOS.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
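To confirm the repository was created correctly, you can list the enabled repositories:
# yum repolist enabled | grep -i kubernetes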
Install Kubernetes and Docker on control plane and worker node:

# yum install kubeadm docker -y
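Once the packages are installed, a quick version check confirms the tools are available:
# kubeadm version -o short
# docker --version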
Start kubelet and enable kubelet on control plane and worker node:
# systemctl restart kubelet && systemctl enable kubelet
Start Docker and Enable Docker on control plane and worker node:
# systemctl restart docker && systemctl enable docker
Initialize cluster on control-plane:
# kubeadm init
The following message will be displayed on the control plane:
Your Kubernetes control-plane has initialized successfully!
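(As an aside, if the control plane server has more than one network interface, kubeadm init also accepts an --apiserver-advertise-address flag to pin the API server to a specific IP; the address below is a placeholder.)
# kubeadm init --apiserver-advertise-address=<control-plane-ip>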
Execute the following commands on the Control Plane:
I have decided to manage the cluster as root, so I executed the following commands as the root user.
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Execute the following commands to deploy the pod network (Weave Net):
# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
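To verify the network plugin came up, check that the weave-net pods reach the Running state:
# kubectl get pods -n kube-system | grep weave-net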

Join Worker Node:
Recall that while initializing the Kubernetes cluster on the control plane / master, you got the following message about joining worker nodes to the control plane:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <ip-address>:6443 --token 2folj8.j7pq6glnjuy71ch7 \
    --discovery-token-ca-cert-hash sha256:713838b7e8a604030b355dd3afcd5d5d6a7ffca466c3cbd8b911738f02712865
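The join token is only valid for a limited time (24 hours by default). If it has expired, a fresh join command can be generated on the control plane with:
# kubeadm token create --print-join-command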

Execute the following command on the worker node / minion:
# kubeadm join <ip-address>:6443 --token 2folj8.j7pq6glnjuy71ch7 \
    --discovery-token-ca-cert-hash sha256:713838b7e8a604030b355dd3afcd5d5d6a7ffca466c3cbd8b911738f02712865
We can expect the following message once the worker node joins the control plane.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Check status from control plane:
# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
<hostname>   Ready    master   15m   v1.18.3
<hostname>   Ready    <none>   67s   v1.18.3

Notice that ROLES for the worker node is <none>. You can assign a role label as sketched below, or explore more at https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
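kubectl derives the ROLES column from node-role.kubernetes.io/* labels, so a label such as the following (the role name worker is an arbitrary choice) will make the worker show a role; replace <hostname> with the worker node name:
# kubectl label node <hostname> node-role.kubernetes.io/worker=worker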

 

Backup etcd: https://support.coreos.com/hc/en-us/articles/115000323894-Creating-etcd-backup
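A minimal sketch of taking an etcd snapshot on a kubeadm-built control plane (assuming etcdctl is installed and the etcd certificates live under /etc/kubernetes/pki/etcd; the snapshot path is an arbitrary choice):
# ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key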

Note:  
Explore K8s directory structure and K8s processes on control plane and node.
Explore K8s installation with detailed messages.
Explore K8s master high availability.
