
MySQL Master to Master Replication with AUTO_INCREMENT

Master-to-master replication:

MySQL offers the parameters auto_increment_increment and auto_increment_offset for master-to-master replication. Most of the time folks believe that setting auto_increment_offset to an odd number, such as 1, on server 1 (Master 1) and an even number, such as 2, on server 2 (Master 2) means we are all set. But according to Oracle, these two parameters "can be used to control the operation of AUTO_INCREMENT columns." That means they only protect AUTO_INCREMENT columns: for tables that do not use AUTO_INCREMENT for their primary key, master-to-master replication could still break at some point and require human intervention.
The default for both parameters is 1. Ref. https://dev.mysql.com/doc/refman/8.0/en/replication-options-master.html
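The odd/even split described above is usually set in each server's my.cnf so it survives restarts. A minimal sketch (the values are illustrative assumptions, not a specific site's configuration):

```ini
# Master 1 (my.cnf)
[mysqld]
auto_increment_increment = 2   # step between generated values
auto_increment_offset    = 1   # Master 1 generates 1, 3, 5, ...

# Master 2 (my.cnf)
[mysqld]
auto_increment_increment = 2   # same step on every master
auto_increment_offset    = 2   # Master 2 generates 2, 4, 6, ...
```

With the same auto_increment_increment on both masters and distinct offsets, the two servers generate disjoint AUTO_INCREMENT values, so inserts on both sides cannot collide on an AUTO_INCREMENT primary key. You can verify the running values with SHOW VARIABLES LIKE 'auto_increment%';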

Example: auto_increment_increment = 10 and auto_increment_offset = 5

auto_increment_increment:
This parameter controls the interval between successive AUTO_INCREMENT column values. Given a table definition such as col_name INT NOT NULL AUTO_INCREMENT PRIMARY KEY, after SET @@auto_increment_increment=10, the rows get the values 1, 11, 21, 31, etc. (with the default offset of 1).
auto_increment_offset:
This parameter determines the starting point for the AUTO_INCREMENT column value. With the same table definition, after SET @@auto_increment_offset=5 (and auto_increment_increment=10), the rows get the values 5, 15, 25, 35, etc.
Note: When the value of auto_increment_offset is greater than that of auto_increment_increment, the value of auto_increment_offset is ignored.
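The two settings above can be demonstrated in a single session. A sketch, assuming a fresh, empty table (the table name t and column v are illustrative):

```sql
-- Session-level settings matching the example in the text
SET @@auto_increment_increment = 10;
SET @@auto_increment_offset    = 5;

CREATE TABLE t (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    v  VARCHAR(10)
);

INSERT INTO t (v) VALUES ('a'), ('b'), ('c'), ('d');

-- Generated ids follow offset + k * increment:
SELECT id FROM t;   -- 5, 15, 25, 35
```

Each generated value has the form auto_increment_offset + k * auto_increment_increment, which is why distinct offsets on each master (with a shared increment) keep the sequences from overlapping.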

In an ideal situation, application logic, policies, and procedures need to provide a guarantee along the lines of "if you modify a row on Master 1, apply the same change on Master 2," and that guarantee needs to hold for every row or object in every table to keep master-to-master replication running. Otherwise, use certification-based replication such as Galera, Group Replication, or InnoDB Cluster, which can handle split brain, automatic failover, and other issues.
