
MySQL Custom Docker Container | Docker Compose

Here we are going to explore how to customize a MySQL Docker container using custom parameters in a configuration file, a persistent volume, and Docker Compose:

Create the parameter file my.cnf locally, create a Docker Compose file that mounts the volumes (directories), bring up the container using "docker-compose up -d", connect to the database, and verify everything is as expected.

Create directory for persistent data volume:
Create directory /mysql/mysql8020_data/docker_compose1/data_1 to store the MySQL Docker container's persistent data.
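For example, the data and configuration directories can be created up front with mkdir (a minimal sketch; the paths simply mirror the ones used later in this post):
# mkdir -p /mysql/mysql8020_data/docker_compose1/data_1
# mkdir -p /mysql/mysql8020_data/docker_compose1/conf.d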
Create my.cnf file at /mysql/mysql8020_data/docker_compose1/conf.d/:
[mysqld]
default_authentication_plugin=mysql_native_password
server_id=7777
port=7399
Note: You might be wondering how the my.cnf file in the local conf.d directory is read by the Docker container. The local conf.d directory (not the my.cnf file itself) is mapped to the container's /etc/mysql/conf.d directory, which MySQL reads at startup. After creating the container you can explore it with: docker exec -it <container_name> bash
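To confirm the mapping from inside the container once it is up, you can list the mounted directory (a quick check; <container_name> is whatever name you set in the compose file):
# docker exec -it <container_name> ls -l /etc/mysql/conf.d
# docker exec -it <container_name> cat /etc/mysql/conf.d/my.cnf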

Create Docker Compose file /mysql/docker_compose_1/docker-compose.yml:
version: '3.7'
services:
  <service_name>:
    container_name: <container_name>
    image: mysql:8.0.20
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "<password>"
    volumes:
      - /mysql/mysql8020_data/docker_compose1/data_1:/var/lib/mysql
      - /mysql/mysql8020_data/docker_compose1/conf.d:/etc/mysql/conf.d
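Since my.cnf moves the server port to 7399, you may optionally publish that port to the host as well; this extra ports entry under the same service is just a sketch (the host-side port choice of 7399 is an assumption):
    ports:
      - "7399:7399"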

Bring up MySQL Docker container using Docker Compose:
# docker-compose up -d
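Before connecting, it can help to confirm the service came up cleanly (a quick sanity check, run from the directory containing docker-compose.yml; the container name placeholder is the same as above):
# docker-compose ps
# docker logs <container_name> | tail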
Connect to the MySQL instance and check that the custom parameters are in effect:
# docker exec -it <container_name> mysql -uroot -p
mysql> show variables where variable_name in ('port', 'server_id', 'default_authentication_plugin');
+-------------------------------+-----------------------+
| Variable_name                 | Value                 |
+-------------------------------+-----------------------+
| default_authentication_plugin | mysql_native_password |
| port                          | 7399                  |
| server_id                     | 7777                  |
+-------------------------------+-----------------------+
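The same values can also be checked non-interactively from the host (just an alternative spot check, using the same placeholder container name):
# docker exec -it <container_name> mysql -uroot -p -e "SELECT @@port, @@server_id, @@default_authentication_plugin;"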
Connect to the Docker container's bash shell and inspect the container:
# docker exec -it <container_name> bash
# docker inspect <container_name> | grep -i conf.d
"/mysql/mysql8020_data/docker_compose1/conf.d:/etc/mysql/conf.d:rw"
"Source": "/mysql/mysql8020_data/docker_compose1/conf.d",
"Destination": "/etc/mysql/conf.d",
"/etc/mysql/conf.d": {},

Processes running on OS:
Check Docker Process:
# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
79cbce4c0d05        mysql:8.0.20        "docker-entrypoint.s…"   4 days ago          Up 4 days           3306/tcp, 33060/tcp   <container_name>
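To see the processes running inside the container itself, docker top works against the same container (another quick check with the same placeholder name):
# docker top <container_name>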

# ps -ef | grep docker
root     49814  1171  0 Jun06 ?        00:00:07 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/79cbce4c0d05c93643327bcf4e4f486d87da2eefed48cdf76f2ce5a41e57e667 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
Note: For each running container there is a containerd-shim process, which containerd uses to manage the container's lifecycle on the host OS.
