
Kubernetes | K8s | Kubernetes Components

Kubernetes:
  • The word Kubernetes derives from the Greek word for helmsman – the person who steers a ship.
  • Open-sourced and handed over to the Cloud Native Computing Foundation (CNCF); written in Go (Golang).
  • Kubernetes is an orchestrator of containerized apps, often written as k8s.
  • It is the leading container orchestrator and lets us manage containerized and microservices apps.
  • Microservices apps are made of lots of small, independent parts.
  • You say, "Hey Kubernetes, here is my app, it consists of these parts, just run it for me," and Kubernetes runs it.
  • You package the app as containers, give them a declarative manifest, and let Kubernetes run it.
  • It is platform agnostic: it runs on bare metal, VMs, cloud instances (private and public), OpenStack – anything with Linux.
  • It does scaling, self-healing, load balancing, rolling updates and more.
  • Lives on GitHub at kubernetes/kubernetes, Twitter - @kubernetesio.
How Kubernetes relates to Docker:
  • Docker is a lower-level technology that is orchestrated and managed by Kubernetes.
  • Kubernetes has also released the Container Runtime Interface (CRI), an abstraction layer that lets it work with container runtimes other than Docker.

Kubernetes and Borg:
  • Google uses in-house frameworks – Borg and Omega – to keep billions of containers in check. That is why some people think Kubernetes is an open-sourced version of either Borg or Omega, but it is not.
Kubernetes components:
  • Kubernetes is made of one or more masters (also referred to as the control plane) and a bunch of nodes. Explore at - https://shrenikp.blogspot.com/2020/03/kubernetes-masters-control-plane.html
  • Application services run on the nodes.
  • Deployment means packaging an application and deploying it on Kubernetes.
  • Deployments are defined via a YAML or JSON manifest file that contains what images to use, ports to expose, networks to join, how to perform updates, how many replicas, etc. We give the file to the Kubernetes master, which deploys it on the cluster, constantly monitors it, and makes sure it is running exactly as requested. If something is not as we asked, Kubernetes tries to fix it. Deployments build on top of ReplicaSets, add an update model, and enable versioned rollbacks. They are first-class REST objects in the Kubernetes API (see the Deployment manifest sketch after this list).
  • Pods - The minimum unit of scaling in Kubernetes; Pods are mortal: they are born, live, and die. In the VM world the atomic unit of deployment is the virtual machine; in the Docker world it is the container; in Kubernetes it is the Pod. Containers always run inside Pods. If a Pod dies, Kubernetes starts another one that smells and feels exactly like the one that died, but with a new ID and a new IP address in the cluster.
    You can run multiple containers inside a single Pod; they share the same environment – IPC namespace, shared memory, volumes, network stack, etc. – and the same IP address. Multiple containers in the same Pod can communicate using localhost, which is good for tightly coupled containers (see the multi-container Pod sketch after this list). To scale the application you add Pods, not containers.
    Pods are deployed via ReplicaSets. A ReplicaSet is a higher-level Kubernetes object that wraps around a Pod and adds features such as self-healing and scaling.
  • Services - Services are fully fledged objects like Pods, ReplicaSets, and Deployments. A Service provides a stable IP address, DNS name, and networking endpoint, supports TCP (default) and UDP, and load-balances across Pods. It sends traffic only to healthy Pods.
  • Labels - Labels are how a Service knows which Pods to load-balance across. Pods are loosely associated with a Service by carrying the same labels the Service selects on (see the Service sketch after this list).
  • ConfigMap:
    1. An API object that stores non-confidential data in key-value pairs.
    2. Pods can use ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume (see the ConfigMap sketch after this list).
    3. It allows you to decouple environment-specific configuration from container images, so applications are easily portable.
    4. It is not designed to hold large chunks of data.
    5. The data stored in a ConfigMap is limited to 1 MiB. Mount a volume or use a separate database or file service to store more data.
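
To make the manifest idea concrete, here is a minimal sketch of a Deployment, assuming a hypothetical web app called "web" served by an nginx image (the names, image, and port are illustrative, not from the original post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy              # hypothetical name
spec:
  replicas: 3                   # how many Pod replicas Kubernetes keeps running
  selector:
    matchLabels:
      app: web                  # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # hypothetical image
        ports:
        - containerPort: 80     # port the container exposes

You would hand this file to the master with kubectl apply -f deploy.yml, and Kubernetes keeps three replicas running, replacing any that die.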
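A minimal sketch of a multi-container Pod, showing two containers sharing the Pod's network stack and talking over localhost (container names and images are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: two-containers          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25           # hypothetical main container
    ports:
    - containerPort: 80
  - name: helper
    image: busybox:1.36         # hypothetical sidecar container
    # both containers share the Pod's IP and network stack,
    # so the sidecar reaches the main container via localhost
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]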

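A minimal sketch of a Service that load-balances across the Pods labelled app: web from the Deployment sketch above (again, names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-svc                 # hypothetical name
spec:
  selector:
    app: web                    # sends traffic to healthy Pods carrying this label
  ports:
  - protocol: TCP               # TCP is the default
    port: 80                    # stable port exposed by the Service
    targetPort: 80              # port on the Pods

Because Pods come and go with new IPs, clients talk to the Service's stable IP and DNS name instead of to individual Pods.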
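A minimal sketch of a ConfigMap and a Pod consuming one of its keys as an environment variable (keys, values, and names are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  LOG_LEVEL: "info"             # plain key-value pairs, non-confidential only
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: LOG_LEVEL           # injected as an environment variable
      valueFrom:
        configMapKeyRef:
          name: app-config      # the ConfigMap defined above
          key: LOG_LEVEL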