
Docker Swarm Overlay Network and MySQL Docker Container


Initialize Swarm on Node 1
# docker swarm init --advertise-addr=10.86.64.236
Swarm initialized: current node (n7w1dp3ub0illangx4qh96kld) is now a manager.
# docker network create -d overlay dba_test-overlay
vlhzekq8v3kka6t7vns7dhnvc
Node 1 - Create an attachable overlay network (a standalone container can only connect to an overlay network created with --attachable):
# docker network create --driver=overlay --attachable <network_name>
7avvbx3tmfg36ae7bipfq0waf
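To confirm the networks were created with the overlay driver, `docker network ls` and `docker network inspect` can be used; a sketch using the dba_test-overlay network from above (output will vary by host):

```shell
# List only the overlay networks on this node
docker network ls --filter driver=overlay

# Inspect a network to see its driver, scope, subnet, and attached containers
docker network inspect dba_test-overlay
```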
Run a Docker container attached to the overlay network:
# docker run -it --name=mysql_8.0.18_1 \
--network <network_name> \
--volume=/mysql/<mysql_version>/<app_name>/data_1:/var/lib/mysql \
--publish <UserDefinedPortNo>:3306 \
-d mysql/mysql-server:8.0.18
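Once the container starts, its state and published port mapping can be checked; a sketch using the container name from above (the port shown will be whatever was passed to --publish):

```shell
# Confirm the container is up and see the host->container port mapping
docker ps --filter name=mysql_8.0.18_1 --format '{{.Names}}\t{{.Status}}\t{{.Ports}}'

# Follow the entrypoint log until MySQL reports "ready for connections"
docker logs -f mysql_8.0.18_1
```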

Retrieve the generated root password for container mysql_8.0.18_1:
# docker logs mysql_8.0.18_1 2>&1 | grep GENERATED
[Entrypoint] GENERATED ROOT PASSWORD: 0h7eD]UbfUs&ABjuzfuM@Hogw@w
# docker exec -it mysql_8.0.18_1 mysql -uroot -p

mysql> alter user 'root'@'localhost' identified by 'root123#';
Query OK, 0 rows affected (0.00 sec)
mysql> create database shrenik;
Query OK, 1 row affected (0.00 sec)
mysql> create user 'test'@'%' identified by '<password>';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all on shrenik.* to 'test'@'%';
Query OK, 0 rows affected (0.01 sec)
mysql> use shrenik;
Database changed
mysql> create table t1
    -> (
    ->  col_1 integer,
    ->  col_2 char(5)
    -> );
Query OK, 0 rows affected (0.01 sec)
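A quick smoke test of the new table can be run non-interactively through docker exec; this sketch assumes the root password set above ('root123#') and the shrenik database:

```shell
# Insert one row and read it back through the container's mysql client
docker exec -i mysql_8.0.18_1 mysql -uroot -p'root123#' shrenik <<'SQL'
INSERT INTO t1 (col_1, col_2) VALUES (1, 'abc');
SELECT * FROM t1;
SQL
```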
Connect to the server using SQLyog / MySQL Workbench:
Hostname: <host_name>
Port: <UserDefinedPortNo>
User: <UserName>
Password: <Password>

Spin up another MySQL 8.0.18 Docker container on the same virtual host (use a different host port and a different host data directory):
# docker run -it --name=mysql_8.0.18_2 \
--network <Network_Name> \
--volume=/mysql/<MySQLVersion>/<AppName>/data/:/var/lib/mysql \
--publish <UserDefinedPortNo>:3306 \
-d mysql/mysql-server:8.0.18
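With both containers running, it can be verified that they are attached to the same overlay network; replace <network_name> with the attachable network created earlier:

```shell
# List the containers attached to the overlay network
docker network inspect <network_name> \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```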
Retrieve the generated root password for container mysql_8.0.18_2:
# docker logs mysql_8.0.18_2  2>&1 | grep GENERATED
[Entrypoint] GENERATED ROOT PASSWORD: osWAsYdObAJOklYg3carc0letYvO

# docker exec -it mysql_8.0.18_2 mysql -uroot -p

mysql> alter user 'root'@'localhost' identified by 'root123#';
mysql> create database test_8018;
mysql> create user 'test_8018'@'%' identified by '<password>';
mysql> grant all on test_8018.* to 'test_8018'@'%';
mysql> use test_8018;
mysql> create table t1_8018
    -> (
    ->  col_1 integer,
    ->  col_2 varchar(25)
    -> );
Connect to the server using SQLyog / MySQL Workbench:
Hostname: <hostname>
Port: 4999
User: test_8018
Password: <password>
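Because both containers sit on the same attachable overlay network, Docker's embedded DNS lets them reach each other by container name. A sketch connecting from the first container to the second; <UserName> stands for an account created with host '%' (MySQL's default root@localhost cannot log in from a remote host):

```shell
# From container 1, open a MySQL session on container 2 using its
# container name as the hostname (resolved by the overlay network's DNS)
docker exec -it mysql_8.0.18_1 mysql -h mysql_8.0.18_2 -u<UserName> -p
```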
