
MySQL Percona Toolkit

Kill queries running longer than 60s:
pt-kill --busy-time 60 --kill

Print, do not kill, queries running longer than 60s:
pt-kill --busy-time 60 --print

Print all login processes:
pt-kill --match-state login --print --victims all

Find all tables created more than a day ago, which use the MyISAM engine, and print their names:
pt-find --ctime +1 --engine MyISAM
Find InnoDB tables and convert them to MyISAM:
pt-find --engine InnoDB --exec "ALTER TABLE %D.%N ENGINE=MyISAM"
Find empty tables in the test and junk databases, and delete them:
pt-find --empty junk test --exec-plus "DROP TABLE %s"
Find tables more than five gigabytes in total size:
pt-find --tablesize +5G
Summarize MySQL information nicely:
pt-mysql-summary --user=root
Diff MySQL configuration files and server variables:
Diff host1 config from SHOW VARIABLES against host2:
pt-config-diff h=host1 h=host2

Diff config from [mysqld] section in my.cnf against host1 config:
pt-config-diff /etc/my.cnf h=host1
Diff the [mysqld] section of two option files:
pt-config-diff /etc/my-small.cnf /etc/my-large.cnf
Print deadlocks on host1:
pt-deadlock-logger h=host1
Print deadlocks on host1 once then exit:
pt-deadlock-logger h=host1 --iterations 1
Save deadlocks on host1 to percona_schema.deadlocks on host2:
pt-deadlock-logger h=host1 --dest h=host2,D=percona_schema,t=deadlocks
Analyze MySQL queries from logs, processlist, and tcpdump:
Report the slowest queries from slow.log:
pt-query-digest slow.log

Report the slowest queries from the processlist on host1:
pt-query-digest --processlist h=host1
Capture MySQL protocol data with tcpdump, then report the slowest queries:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt

Archive rows from a MySQL table into another table or a file.
Archive all rows from oltp_server to olap_server and to a file:
pt-archiver --source h=oltp_server,D=test,t=tbl --dest h=olap_server \
--file '/var/log/archive/%Y-%m-%d-%D.%t'                           \
--where "1=1" --limit 1000 --commit-each

Purge (delete) orphan rows from child table:
pt-archiver --source h=host,D=db,t=child --purge \
--where 'NOT EXISTS(SELECT * FROM parent WHERE col=child.col)'

Verify MySQL replication integrity:
Checksum every table, and report the results on every detected replica:
pt-table-checksum
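A more explicit invocation, as a sketch (master_host, checksum_user, and app_db are placeholders; --databases and --replicate are documented options, with --replicate naming the table that stores the checksums):
pt-table-checksum h=master_host,u=checksum_user --databases=app_db --replicate=percona.checksums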

Collect forensic data about MySQL when problems occur:
pt-stalk [OPTIONS]
pt-stalk waits for a trigger condition to occur, then collects data to help diagnose problems.

The tool is designed to run as a daemon with root privileges, so that you can diagnose intermittent problems that you cannot observe directly. You can also use it to execute a custom command, or to collect data on demand without waiting for the trigger to occur.
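As a sketch, run it as a daemon that collects data when Threads_running exceeds a threshold (the value 50 is illustrative; --daemonize, --variable, and --threshold are documented options):
pt-stalk --daemonize --variable Threads_running --threshold 50

Collect data on demand, without waiting for the trigger:
pt-stalk --no-stalk --iterations 1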

Ref.:
https://www.percona.com/doc/percona-toolkit/LATEST/pt-stalk.html
https://www.percona.com/sites/default/files/presentations/5_Percona_Toolkit_Tools_Save_Day_4.3.13.pdf
https://www.percona.com/doc/percona-toolkit/LATEST/pt-kill.html
https://www.percona.com/doc/percona-toolkit/3.0/index.html

Python MySQL Online Schema Change:
https://code.facebook.com/posts/1290069194423954
Percona Online Schema Change (see the example sketch after these links) - https://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html#cmdoption-pt-online-schema-change--alter-foreign-keys-method
Amazon RDS - http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/
https://severalnines.com/blog/online-schema-change-mysql-mariadb-comparing-github-s-gh-ost-vs-pt-online-schema-change
https://signalvnoise.com/posts/3174-taking-the-pain-out-of-mysql-schema-changes
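A minimal pt-online-schema-change sketch, adapted from the tool's documented synopsis (sakila.actor is the documentation's example table; the added column is illustrative). Dry-run first, then execute:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor --dry-run
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor --execute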
