
Kubernetes directory structure and processes on Control Plane / Master

Key directories, configuration files, and processes on a kubeadm-built Control Plane / Master node:

Kubelet environment file - /var/lib/kubelet/kubeadm-flags.env
Kubelet configuration file - /var/lib/kubelet/config.yaml
Certificate directory - /etc/kubernetes/pki
Kubernetes configuration directory - /etc/kubernetes
Admin kubeconfig file - /etc/kubernetes/admin.conf
Kubelet kubeconfig file - /etc/kubernetes/kubelet.conf
Controller-manager kubeconfig file - /etc/kubernetes/controller-manager.conf
Scheduler kubeconfig file - /etc/kubernetes/scheduler.conf
Cluster store (etcd) data directory - /var/lib/etcd
Static Pod manifest directory - /etc/kubernetes/manifests, which contains:
/etc/kubernetes/manifests/etcd.yaml 
/etc/kubernetes/manifests/kube-apiserver.yaml 
/etc/kubernetes/manifests/kube-controller-manager.yaml 
/etc/kubernetes/manifests/kube-scheduler.yaml
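A quick way to verify this layout on your own control plane (a minimal sketch, assuming a standard kubeadm installation):

# List the kubeconfig files and the certificate directory
ls -l /etc/kubernetes
# List the static Pod manifests that the kubelet watches
ls -l /etc/kubernetes/manifests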
/etc/kubernetes/pki:

Contains the following .crt and .key files, plus one subdirectory for etcd:
apiserver.crt                 apiserver.key
apiserver-etcd-client.crt     apiserver-etcd-client.key
apiserver-kubelet-client.crt  apiserver-kubelet-client.key
ca.crt                        ca.key
front-proxy-ca.crt            front-proxy-ca.key
front-proxy-client.crt        front-proxy-client.key
sa.key                        sa.pub
etcd (directory)

/etc/kubernetes/pki/etcd:
Contains the following files:
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
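To check when these certificates expire (a sketch; on a 1.18-era cluster like this one the kubeadm subcommand still lives under "alpha", newer releases use "kubeadm certs check-expiration"):

# Show the subject and validity window of the API server certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -subject -dates
# Summarize expiry for all kubeadm-managed certificates
kubeadm alpha certs check-expiration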

Containerd directory - /var/lib/containerd
Installation creates the following static Pods in the kube-system Namespace:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • etcd
kube-system Namespace - stores the kubeadm-config and kubelet-config-1.18 ConfigMaps
kube-public Namespace - stores the cluster-info ConfigMap
Bootstrap token: 2folj8.j7pq6glnjuy
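These objects can be inspected with kubectl once the cluster is up (a sketch, assuming /etc/kubernetes/admin.conf is your kubeconfig):

# Control-plane Pods created by the installation
kubectl -n kube-system get pods
# ConfigMaps written by kubeadm during init
kubectl -n kube-system get configmap kubeadm-config -o yaml
kubectl -n kube-public get configmap cluster-info -o yaml
# List bootstrap tokens (run on the control plane)
kubeadm token list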

K8s APISERVER process:
kube-apiserver --advertise-address=<ControlPlane_IP_Address>
--allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379
--insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User
--secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
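To confirm the API server is up (a sketch; Docker is the container runtime on this host, per the moby namespace shown later):

# List the API server container
docker ps --filter name=kube-apiserver
# Probe the health endpoint over the secure port using the admin kubeconfig
kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz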
K8s SCHEDULER process:
kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
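Because --bind-address=127.0.0.1, the scheduler's health endpoint is reachable only from the control-plane host itself (a sketch; on this 1.18-era release the insecure port 10251 is still open, newer releases serve only https on 10259):

curl http://127.0.0.1:10251/healthz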
K8s CONTROLLER-MANAGER process:
kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1
--client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key
--use-service-account-credentials=true
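The controller manager likewise binds only to 127.0.0.1, and with --leader-elect=true the active instance holds a lock object in kube-system (a sketch; on 1.18 the lock is recorded on an Endpoints object, newer releases use a Lease):

# Health endpoint (insecure port 10252 on this release)
curl http://127.0.0.1:10252/healthz
# Inspect the leader-election record
kubectl -n kube-system get endpoints kube-controller-manager -o yaml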
K8s etcd process:
etcd --advertise-client-urls=https://<ControlPlane_IP_Address>:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://<ControlPlane_IP_Address>:2380
--initial-cluster=<ControlPlane_HostName>=https://<ControlPlane_IP_Address>:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://<ControlPlane_IP_Address>:2379
--listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://<ControlPlane_IP_Address>:2380
--name=<ControlPlane_HostName> --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
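Because --client-cert-auth=true, any client must present a certificate signed by the etcd CA. The healthcheck-client pair from /etc/kubernetes/pki/etcd works for ad-hoc queries (a sketch; run it inside the etcd Pod or wherever etcdctl is installed):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  member list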
K8s KUBELET process:
/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
--config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni
--pod-infra-container-image=k8s.gcr.io/pause:3.2
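Unlike the control-plane components above, the kubelet is not a static Pod; it runs as a systemd service, and the flags shown come from its unit drop-in plus the environment file listed earlier:

systemctl status kubelet
cat /var/lib/kubelet/kubeadm-flags.env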
K8s KUBE-PROXY process:
/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=<ControlPlane_Hostname>
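kube-proxy runs as a DaemonSet, and its config.conf is mounted from a ConfigMap (a sketch):

kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get configmap kube-proxy -o yaml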

Containerd processes: one containerd-shim process per running container; the moby namespace indicates Docker is driving containerd.
containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/01a30ed6cf52f098168ff18b0fc4ef5f5a87fd7923e075cb25e74326a2c4095c \
-address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd \
-runtime-root /var/run/docker/runtime-runc
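The same containers can be listed through containerd's own CLI (a sketch; Docker-managed containers live in the moby namespace):

ctr --namespace moby containers list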

K8s Networking processes:
WEAVE Launch Script:
/bin/sh /home/weave/launch.sh
WEAVER:
/home/weave/weaver --port=6783 --datapath=datapath --name=a6:95:c6:85:a6:ef --host-root=/host
--http-addr=127.0.0.1:6784 --metrics-addr=0.0.0.0:6782 --docker-api= --no-dns
--db-prefix=/weavedb/weave-net --ipalloc-range=10.32.0.0/12 --nickname=<ControlPlane_Hostname>
--ipalloc-init consensus=0 --conn-limit=200 --expect-npc
Kube Utils:
/home/weave/kube-utils -run-reclaim-daemon -node-name=<ControlPlane_Hostname> -peer-name=a6:95:c6:85:a6:ef -log-level=debug
WEAVE NPC:
/usr/bin/weave-npc
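Weave Net runs as a DaemonSet and exposes a status endpoint on 127.0.0.1:6784 (the --http-addr above). A quick health check (a sketch; <weave_pod_name> is a placeholder for your actual Pod name):

kubectl -n kube-system get pods -l name=weave-net -o wide
kubectl -n kube-system exec <weave_pod_name> -c weave -- /home/weave/weave --local status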

Kubernetes processes on a worker Node:

WEAVE Launch Script:
/bin/sh /home/weave/launch.sh

WEAVER:
/home/weave/weaver --port=6783 --datapath=datapath --name=3e:ad:e3:5c:0a:67 --host-root=/host
--http-addr=127.0.0.1:6784 --metrics-addr=0.0.0.0:6782 --docker-api= --no-dns
--db-prefix=/weavedb/weave-net --ipalloc-range=10.32.0.0/12 --nickname=<Node_HostName>
--ipalloc-init consensus=1 --conn-limit=200 --expect-npc <ControlPlane_IP_Address>
Kube Utils:
/home/weave/kube-utils -run-reclaim-daemon -node-name=<Node_HostName> -peer-name=3e:ad:e3:5c:0a:67 -log-level=debug
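Note the trailing <ControlPlane_IP_Address> on the node's weaver command: the node peers with the control plane to join the Weave network. Peering can be verified from either side (a sketch; <weave_pod_name> is a placeholder):

kubectl get nodes -o wide
kubectl -n kube-system exec <weave_pod_name> -c weave -- /home/weave/weave --local status peers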
 

 
