MySQL 8.0.20 new features | MySQL 8.0.20 enhancements

MySQL 8.0.20 InnoDB enhancements:
Improvement for CATS:
CATS (Contention-Aware Transaction Scheduling) is improved in MySQL 8.0.20. CATS requires computing a weight for each transaction, and from MySQL 8.0.20 this weight computation is performed entirely on a separate thread, which improves computation performance and accuracy. The FIFO (First In, First Out) algorithm is removed in MySQL 8.0.20; transaction scheduling that used to be performed by FIFO is now handled by the CATS algorithm.
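The 8.0.20 release notes also mention a TRX_SCHEDULE_WEIGHT column in INFORMATION_SCHEMA.INNODB_TRX that exposes the computed weight of waiting transactions. A minimal sketch of inspecting it (the query is only an illustration; the output depends on the workload running on your server):

mysql> SELECT trx_id, trx_state, trx_rows_locked, trx_schedule_weight
    ->   FROM information_schema.innodb_trx
    ->  WHERE trx_state = 'LOCK WAIT'
    ->  ORDER BY trx_schedule_weight DESC;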
Explore more:
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-20.html
https://mysqlserverteam.com/contention-aware-transaction-scheduling-arriving-in-innodb-to-boost-performance/
Storage area for doublewrite buffer:
The doublewrite buffer used to store its pages in the system tablespace; it now has its own storage area in doublewrite files. This gives flexibility in where doublewrite buffer pages are placed, increases throughput, and reduces write latency. The system variables for doublewrite storage are innodb_doublewrite_dir, innodb_doublewrite_files, innodb_doublewrite_pages, and innodb_doublewrite_batch_size.
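For illustration, a minimal sketch of how these variables could be set in my.cnf (the directory path and numbers below are only example assumptions, not recommended values; these settings are read at server startup):

[mysqld]
# Hypothetical directory for the doublewrite files; by default they are created in the data directory
innodb_doublewrite_dir = /data/mysql/dblwr
# Example values only; see the reference manual for the defaults
innodb_doublewrite_files = 2
innodb_doublewrite_pages = 32
innodb_doublewrite_batch_size = 0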
Binary Log Compression:
MySQL 8.0.20 supports compression of binary logs. It is OFF by default; you can enable binary log compression with the system variable binlog_transaction_compression. The zstd algorithm is used to compress binary logs, and you can set the compression level (1 to 22) with the system variable binlog_transaction_compression_level_zstd. Transaction payloads remain compressed on the originator, when they are sent to replication slaves, in the replication stream, and in the relay logs. Binary log compression saves disk space on the originator and the recipients, as well as network bandwidth.
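A minimal sketch of enabling compression at runtime (level 5 is just an example; the default level is 3, and the GLOBAL setting applies to new sessions). Compression statistics can then be checked in the Performance Schema:

mysql> SET GLOBAL binlog_transaction_compression = ON;
mysql> SET GLOBAL binlog_transaction_compression_level_zstd = 5;
mysql> SELECT log_type, compression_type, transaction_counter, compression_percentage
    ->   FROM performance_schema.binary_log_transaction_compression_stats;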

Primary Key Check:
System variable sql_require_primary_key was introduced in MySQL 8.0.13. MySQL 8.0.20 adds the REQUIRE_TABLE_PRIMARY_KEY_CHECK option to the CHANGE MASTER TO statement; the sql_require_primary_key system variable is what actually evaluates the primary key check.
REQUIRE_TABLE_PRIMARY_KEY_CHECK enables a replication slave to select its own policy for primary key checks. When it is ON for a replication channel, the slave uses sql_require_primary_key=ON for that channel's transactions; when it is OFF, the slave uses sql_require_primary_key=OFF. So the slave can apply its own primary key policy. The default value of REQUIRE_TABLE_PRIMARY_KEY_CHECK is STREAM, in which case the slave replicates the sql_require_primary_key value from the master for each transaction. The primary key check might also help with Error 1032 and with the slave falling behind the master when batch jobs run against tables with millions of rows.
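As a sketch, run on the slave (the FOR CHANNEL clause is omitted here for brevity; the SELECT at the end is just one way to confirm the value the applier is configured with):

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO REQUIRE_TABLE_PRIMARY_KEY_CHECK = ON;
mysql> START SLAVE;
mysql> SELECT channel_name, require_table_primary_key_check
    ->   FROM performance_schema.replication_applier_configuration;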
Ref.:
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-20.html
Explore the InnoDB-related bugs fixed in MySQL 8.0.20 at https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-20.html
