
MySQL InnoDB cluster and group replication limitations


MySQL InnoDB cluster limitations:

01. A group can consist of a maximum of 9 servers. Attempting to add another server to a group that already has 9 members causes the join request to be refused.
02. Limitations and issues described for multi-primary mode groups can also apply in single-primary mode clusters during a failover event, while the newly elected primary flushes out its applier queue from the old primary.
03. Individual transactions that result in message contents large enough that they cannot be copied between group members over the network within a 5-second window can cause failures in group communication. To avoid this issue, try to limit the size of your transactions as much as possible; for example, split up files used with LOAD DATA INFILE into smaller chunks (see the first sketch after this list).
04. Group Replication cannot currently make use of replication event checksums.
05. The certification process does not take gap locks, table locks, or named locks into account.
06. Setting the transaction isolation level to SERIALIZABLE causes Group Replication to refuse to commit the transaction when the group is operating in multi-primary mode.
07. Concurrent data definition statements and data manipulation statements executing against the same object, but on different servers, are not supported in multi-primary mode.
08. Multi-primary mode groups (members all configured with group_replication_single_primary_mode=OFF) do not support tables with multi-level foreign key dependencies, specifically tables that have defined CASCADING foreign key constraints.
09. When a group is operating in multi-primary mode, SELECT ... FOR UPDATE statements can result in a deadlock, because the lock is not shared across the members of the group (the second sketch after this list shows a quick way to check which mode a group is running in).
10. Global replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.
11. InnoDB cluster is built on GTID-based replication, so the limitations of GTIDs also apply to InnoDB cluster.
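To illustrate item 03, here is a minimal sketch, not from the original post, of capping the transaction size and chunking an input file before loading it; the size value, file paths, and table name are only placeholders:

mysql> SELECT @@group_replication_transaction_size_limit;  -- current cap in bytes (150000000 by default in MySQL 8.0)
mysql> SET GLOBAL group_replication_transaction_size_limit = 50000000;
$ split -l 500000 /var/lib/mysql-files/big_dump.csv /var/lib/mysql-files/chunk_
mysql> LOAD DATA INFILE '/var/lib/mysql-files/chunk_aa' INTO TABLE demo_table;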
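Several of the items above (07, 08, 09) only apply in multi-primary mode. A quick sanity check, assuming MySQL 8.0 where performance_schema.replication_group_members exposes MEMBER_ROLE, is:

mysql> SELECT @@group_replication_single_primary_mode;
mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_ROLE, MEMBER_STATE
    ->   FROM performance_schema.replication_group_members;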

Limitations of GTIDs:
  • Updates involving non-transactional storage engines such as MyISAM are not supported.
  • CREATE TABLE ... SELECT statements (a GTID-safe rewrite is sketched after this list).
  • ALTER TABLE ... ADD COLUMN, if the new column has an expression default value.
  • When binlog_format is set to STATEMENT, CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE statements cannot be used inside transactions, procedures, functions, and triggers when GTIDs are in use on the server (that is, when the enforce_gtid_consistency system variable is set to ON).
  • sql_slave_skip_counter is not supported when using GTIDs. If you need to skip transactions, use the value of the master's gtid_executed variable instead, or commit an empty transaction for the GTID to be skipped (see the empty-transaction sketch after this list).
  • The IGNORE_SERVER_IDS option of the CHANGE MASTER TO statement is deprecated when using GTIDs, because transactions that have already been applied are automatically ignored.
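The CREATE TABLE ... SELECT restriction is usually worked around by splitting the statement into a DDL step and a DML step, which is GTID-safe; the table names below are only examples:

mysql> CREATE TABLE new_table LIKE source_table;
mysql> INSERT INTO new_table SELECT * FROM source_table;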
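And because sql_slave_skip_counter is unavailable with GTIDs, the usual way to skip a single failing transaction is to commit an empty transaction for that GTID so the replica records it as executed; the UUID and sequence number below are placeholders:

mysql> STOP SLAVE;
mysql> SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1234';
mysql> BEGIN;
mysql> COMMIT;
mysql> SET GTID_NEXT = 'AUTOMATIC';
mysql> START SLAVE;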