
Set SELinux contexts for MySQL Server: datadir, logdir, error log, PID file, socket, port

How to set SELinux contexts for MySQL Server: the data directory, log directory, error log, PID file, unix-domain socket, and TCP port.

semanage help:

semanage -h
semanage fcontext -h

List the current MySQL contexts:
# semanage fcontext -l | grep -i mysql

List ports available for MySQL:
# semanage port -l | grep mysql
Add a port to the mysqld port type:
# semanage port -a -t mysqld_port_t -p tcp 3375
Add a port range to the mysqld port type:
# semanage port -a -t mysqld_port_t -p tcp 35000-38000
Remove a TCP port from the mysqld port type:
# semanage port -d -t mysqld_port_t -p tcp 3375
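If mysqld is started on a port that has not been labeled yet, SELinux blocks the bind. As a quick check (a sketch that assumes auditd is running; ausearch comes from the audit package), look for recent AVC denials:
# ausearch -m avc -ts recent | grep mysqld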
Set the data directory context:
The default location for the data directory is /var/lib/mysql/; the SELinux context used is mysqld_db_t.
# semanage fcontext -a -t mysqld_db_t "/path/to/my/custom/datadir(/.*)?"
# restorecon -Rv /path/to/my/custom/datadir
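As a quick verification (using the placeholder path from above), confirm the directory now carries the mysqld_db_t label:
# ls -ldZ /path/to/my/custom/datadir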
Set the log directory context:
# semanage fcontext -a -t mysqld_db_t "/path/to/my/custom/logdir(/.*)?"
# restorecon -Rv /path/to/my/custom/logdir
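If this log directory is meant to hold binary logs (an assumption; adjust to your setup), the matching option in my.cnf would look roughly like:
[mysqld]
log_bin=/path/to/my/custom/logdir/mysql-bin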
Set error log file context:
The default location for Red Hat RPMs is /var/log/mysqld.log; the SELinux context used is mysqld_log_t.
# semanage fcontext -a -t mysqld_log_t "/path/to/my/custom/error.log"
# restorecon -Rv /path/to/my/custom/error.log
Set PID file context:
The default location for the PID file is /var/run/mysqld/mysqld.pid, the SELinux context used is mysqld_var_run_t.
# semanage fcontext -a -t mysqld_var_run_t "/path/to/my/custom/pidfile/directory/.*?"

# restorecon -Rv /path/to/my/custom/pidfile/directory
Set the unix-domain socket context:
The default location for the unix-domain socket is /var/lib/mysql/mysql.sock, the SELinux context used is mysqld_var_run_t.

# semanage fcontext -a -t mysqld_var_run_t "/path/to/my/custom/mysql\.sock"
# restorecon -Rv /path/to/my/custom/mysql.sock
Set the TCP port context:
The default TCP port is 3306; the SELinux context used is mysqld_port_t.
# semanage port -a -t mysqld_port_t -p tcp 13306

List the port contexts again to confirm:
# semanage port -l | grep mysql
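Relabeling alone does not move anything; mysqld still has to be pointed at the custom locations. A minimal my.cnf sketch using the placeholder paths and port from the steps above (the file name mysqld.pid is illustrative):
[mysqld]
datadir=/path/to/my/custom/datadir
log_error=/path/to/my/custom/error.log
pid_file=/path/to/my/custom/pidfile/directory/mysqld.pid
socket=/path/to/my/custom/mysql.sock
port=13306
Restart mysqld after changing these settings so the new paths and port take effect.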
Remove a file context rule:
# semanage fcontext -d /path/to/my/custom/error.log
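Deleting the rule only removes it from the policy database; run restorecon again so the file falls back to the default label for its path:
# restorecon -v /path/to/my/custom/error.log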
Required tools (semanage is provided by this package):
# yum install policycoreutils-python
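On RHEL/CentOS 8 and later the package that provides semanage has a different name:
# yum install policycoreutils-python-utils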


Explore more about mysqld_selinux - https://linux.die.net/man/8/mysqld_selinux
