
Docker Swarm Overlay Network and MySQL Docker Container


Initialize Swarm on Node 1
# docker swarm init --advertise-addr=10.86.64.236
Swarm initialized: current node (n7w1dp3ub0illangx4qh96kld) is now a manager.
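If the overlay network needs to span more than one host, additional nodes can join the swarm. A minimal sketch, assuming the manager address above and the default swarm port 2377 (run the first command on the manager; it prints the exact join command to run on each additional node):
# docker swarm join-token worker
# docker swarm join --token <worker_join_token> 10.86.64.236:2377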
# docker network create -d overlay dba_test-overlay
vlhzekq8v3kka6t7vns7dhnvc
Node 1 - Create an attachable overlay network:
# docker network create --driver=overlay --attachable <network_name>
7avvbx3tmfg36ae7bipfq0waf
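To confirm the network was created with the overlay driver and is attachable, it can be inspected (a sketch; <network_name> is the name used above):
# docker network ls --filter driver=overlay
# docker network inspect <network_name> | grep -iE '"driver"|"attachable"'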
Run a Docker container on the overlay network:
# docker run -it --name=mysql_8.0.18_1 \
--network <network_name> \
--volume=/mysql/<mysql_version>/<app_name>/data_1:/var/lib/mysql \
--publish <UserDefinedPortNo>:3306 \
-d mysql/mysql-server:8.0.18
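Once started, the container's attachment to the overlay network can be verified (a sketch using the container name from the command above):
# docker ps --filter name=mysql_8.0.18_1
# docker inspect -f '{{json .NetworkSettings.Networks}}' mysql_8.0.18_1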

Retrieve the generated root password for container mysql_8.0.18_1:
# docker logs mysql_8.0.18_1 2>&1 | grep GENERATED
[Entrypoint] GENERATED ROOT PASSWORD: 0h7eD]UbfUs&ABjuzfuM@Hogw@w
# docker exec -it mysql_8.0.18_1 mysql -uroot -p

mysql> alter user 'root'@'localhost' identified by 'root123#';
Query OK, 0 rows affected (0.00 sec)
mysql> create database shrenik;
Query OK, 1 row affected (0.00 sec)
mysql> grant all on shrenik.* to 'test'@'%' ;
Query OK, 0 rows affected (0.01 sec)
mysql> use shrenik;
Database changed
mysql> create table t1
    -> (
    ->  col_1 integer,
    ->  col_2 char(5)
    -> );
Query OK, 0 rows affected (0.01 sec)
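Note that MySQL 8.0 no longer creates a missing account implicitly from a GRANT statement, so the 'test'@'%' user must already exist for the grant above to succeed. A minimal sketch, to be run before the GRANT, with '<password>' replaced by a real password:
mysql> create user 'test'@'%' identified by '<password>';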
Connect to the server using SQLyog / MySQL Workbench:
Hostname : <host_name>
Port : <UserDefinedPortNo>
User : <UserName>
Password : <Password>
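The same connection can also be tested from the Docker host with the mysql command-line client, if it is installed there (a sketch using the placeholders above):
# mysql -h <host_name> -P <UserDefinedPortNo> -u <UserName> -p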

Spin up another MySQL 8.0.18 Docker container on the same virtual host:
# docker run -it --name=mysql_8.0.18_2 \
--network <Network_Name> \
--volume=/mysql/<MySQLVersion>/<AppName>/data/:/var/lib/mysql \
--publish <UserDefinedPortNo>:3306 \
-d mysql/mysql-server:8.0.18
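Since both containers run on the same host, the second container must publish a different host port than the first (the connect step below uses 4999). The actual mapping can be checked with:
# docker port mysql_8.0.18_2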
Retrieve the generated root password for container mysql_8.0.18_2:
# docker logs mysql_8.0.18_2  2>&1 | grep GENERATED
[Entrypoint] GENERATED ROOT PASSWORD: osWAsYdObAJOklYg3carc0letYvO
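As with the first container, log in as root using the generated password; the image typically marks it as expired, which is why the next step changes it with ALTER USER:
# docker exec -it mysql_8.0.18_2 mysql -uroot -p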

mysql> alter user 'root'@'localhost' identified by 'root123#';
mysql> create database test_8018;
mysql> create user 'test_8018'@'%' identified by '<password>';
mysql> grant all on test_8018.* to 'test_8018'@'%';
mysql> use test_8018;
mysql> create table t1_8018
    -> (
    ->  col_1 integer,
    ->  col_2 varchar(25)
    -> );
Connect to the server using SQLyog / MySQL Workbench:
Hostname : <hostname>
Port : 4999
User : test_8018
Password : <password>
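Finally, the benefit of the attachable overlay network is that the two containers can reach each other directly by container name on port 3306, without going through the published host ports. A quick check from inside the second container (a sketch, assuming the 'test' account created for the first server):
# docker exec -it mysql_8.0.18_2 mysql -h mysql_8.0.18_1 -P 3306 -u test -p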
