
Run multiple versions of MySQL containers on the same virtual host

Run multiple versions of MySQL containers on the same virtual host:

Find out the tag for the MySQL version:
Visit https://hub.docker.com/r/mysql/mysql-server/ and find the tag for the appropriate version.
For example, for MySQL release 8 the tags are 8.0.18, 8.0, 8, latest
For MySQL release 5.7 the tags are 5.7.28, 5.7, 5
For MySQL release 5.6 the tags are 5.6.46, 5.6
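If you prefer the command line, the available tags can also be listed through the Docker Hub API (the endpoint below assumes the public v2 repositories API and returns JSON):
# curl -s "https://hub.docker.com/v2/repositories/mysql/mysql-server/tags/?page_size=25"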
Pull the image from Docker Hub:
For the MySQL 8.0.16 Community edition Docker image, execute the following command:
# docker pull mysql/mysql-server:8.0.16
For the MySQL 5.7.25 Community edition Docker image, execute the following command:
# docker pull mysql/mysql-server:5.7.25
For the latest MySQL image:
# docker pull mysql/mysql-server:latest
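List the pulled images to confirm the downloads (the image IDs, dates, and sizes below are from the original host and will differ on yours):
# docker images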
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
mysql/mysql-server     8.0.16              39649194a7e7        8 months ago        289MB
mysql/mysql-server     5.7.25              0dc21157ff24        10 months ago       244MB
mysql/mysql-server     8.0.18              b172b40598f0        2 months ago        350MB
mysql/mysql-server     latest              b172b40598f0        2 months ago        350MB
Run a Docker container for MySQL 5.7.25:
# docker run --name=mysql_5.7.25 -d mysql/mysql-server:5.7.25
Run a Docker container for MySQL 8.0.18:
# docker run --name=mysql_8.0.18_1 -d mysql/mysql-server:8.0.18
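Verify that both containers are up:
# docker ps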
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                            PORTS                 NAMES
bf087eca6842        mysql/mysql-server:5.7.25   "/entrypoint.sh mysq…"   3 seconds ago       Up 3 seconds (health: starting)   3306/tcp, 33060/tcp   mysql_5.7.25
370f5c56cd1a        mysql/mysql-server:8.0.18   "/entrypoint.sh mysq…"   3 seconds ago       Up 3 seconds (health: starting)   3306/tcp, 33060/tcp   mysql_8.0.18_1
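Both containers listen on port 3306 inside their own network namespaces, so they do not conflict with each other. To reach them from the host, each container's 3306 can be mapped to a distinct host port when the container is created; a sketch, in which the container names, the host ports 3357/3380, and the MYSQL_ROOT_HOST setting are illustrative choices rather than requirements:
# docker run --name=mysql_5.7.25_ext -e MYSQL_ROOT_HOST=% -p 3357:3306 -d mysql/mysql-server:5.7.25
# docker run --name=mysql_8.0.18_ext -e MYSQL_ROOT_HOST=% -p 3380:3306 -d mysql/mysql-server:8.0.18
# mysql -h 127.0.0.1 -P 3357 -uroot -p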

Common commands used for MySQL running in containers
Find Docker Version:
# docker -v
Get the generated root password:
# docker logs <container_name> 2>&1 | grep GENERATED
# docker logs mysql_5.7.25 2>&1 | grep GENERATED
List Docker Container:
# docker ps -a
Fetch MySQL logs:
# docker logs <container_name>
Connect to MySQL:
# docker exec -it <container_name> mysql -uroot -p
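Note: the generated root password is created as expired in the mysql/mysql-server image, so the first login will ask you to change it before running anything else; a minimal sketch, with <new_password> as a placeholder:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '<new_password>';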
Connect to the MySQL container's file system:
# docker exec -it <container_name> bash
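Files can also be copied between the host and the container without opening a shell, using docker cp (the /etc/my.cnf path below assumes that is where the image keeps its configuration):
# docker cp <container_name>:/etc/my.cnf ./my.cnf
# docker cp ./my.cnf <container_name>:/etc/my.cnf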
Back up MySQL databases using mysqldump:
# docker exec [MYSQL_CONTAINER] /usr/bin/mysqldump \
-u [MYSQL_USER] --password=[MYSQL_PASSWORD] \
--all-databases > backup.sql
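For example, with the 5.7.25 container started above (the root password placeholder is illustrative):
# docker exec mysql_5.7.25 /usr/bin/mysqldump \
-u root --password=<root_password> \
--all-databases > backup_5.7.25.sql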
Restore MySQL databases:
# cat backup.sql | docker exec -i [MYSQL_CONTAINER] /usr/bin/mysql -u [MYSQL_USER] --password=[MYSQL_PASSWORD] [MYSQL_DATABASE]
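Continuing the example above, restoring the full dump into the 5.7.25 container looks like this (no database name is needed because the dump was taken with --all-databases):
# cat backup_5.7.25.sql | docker exec -i mysql_5.7.25 /usr/bin/mysql -u root --password=<root_password>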
Note: Percona XtraBackup is supported only with the Percona XtraBackup Docker image.
https://www.percona.com/blog/2017/03/20/running-percona-xtrabackup-windows-docker/
