MySQL NDB Cluster Installation for Data, SQL and Management Nodes

Management node: It manages the other nodes within the NDB Cluster: it provides configuration data, starts and stops nodes, and runs backups. Because it manages the configuration of the other nodes, it should be started first, before any other node. Use ndb_mgmd to start it.
It also maintains a log of cluster activities. Management clients can connect to the management server and check the cluster's status.
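As a quick sketch, assuming the cluster configuration file is kept at /var/lib/mysql-cluster/config.ini (an example path; see the config.ini sketch at the end of this post), the management server can be started and the cluster status checked with:
$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
$ ndb_mgm -e show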
Data node: It stores cluster data. There are as many data nodes as there are replicas times the number of fragments. For example, with two replicas, each having two fragments, you need four data nodes. One replica is sufficient for data storage but provides no redundancy; for redundancy and high availability, two or more replicas are recommended. To start a data node, use ndbd or ndbmtd. Tables are normally stored completely in memory, which is why NDB Cluster is sometimes referred to as an in-memory database; some data can also be stored on disk.
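For example, assuming the management node is reachable at 192.168.0.10 (an example address), a data node can be pointed at it like this:
$ ndbd --ndb-connectstring=192.168.0.10
or, using the multi-threaded binary:
$ ndbmtd --ndb-connectstring=192.168.0.10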
SQL node: It accesses the cluster data. It is a traditional MySQL server that uses the NDBCLUSTER storage engine; an SQL node is a mysqld process started with the --ndbcluster and --ndb-connectstring options. It is a type of API node, a term that designates any application which accesses NDB Cluster data. Another example of an API node is the ndb_restore utility, which is used to restore a cluster backup.
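Instead of passing these options on the command line, the same settings usually go into my.cnf. A minimal sketch, with 192.168.0.10 again standing in for the management node address:
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.10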
Note: Use multiple data and SQL nodes. The use of multiple management nodes is also highly recommended in production environments, for redundancy.

SQL node:- Installation of MySQL Cluster
The following command installs MySQL Cluster in the /usr/local directory:
$ tar -C /usr/local -xzvf mysql-cluster-gpl-7.5.7-linux2.6-i686.tar.gz
Create a symbolic link to the installation directory:
$ ln -s /usr/local/mysql-cluster-gpl-7.5.7-linux2.6-i686 /usr/local/mysql

Set up the system databases using mysqld:
$ cd mysql
$ mysqld --initialize
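Note that mysqld --initialize generates a temporary random password for root@localhost and writes it to the server's error log (or to the console when no error log file is configured). Assuming the log ended up at /var/log/mysqld.log, which is an example path only, it could be recovered with:
$ grep 'temporary password' /var/log/mysqld.log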

Set the necessary permissions for the MySQL server and data directories:
$ chown -R root .
$ chown -R mysql data
$ chgrp -R mysql .

Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:
$ cp support-files/mysql.server /etc/rc.d/init.d/
$ chmod +x /etc/rc.d/init.d/mysql.server
$ chkconfig --add mysql.server
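At this point the server can be started manually, using the script just installed, to verify that the installation works:
$ /etc/rc.d/init.d/mysql.server start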

Data nodes:-
Data nodes do not require the mysqld binary. Only the NDB Cluster data node executable ndbd (single-threaded) or ndbmtd (multi-threaded) is required. These binaries can also be found in the .tar.gz archive.

Install the data node binaries:-

Change location to the /var/tmp directory, and extract the ndbd and ndbmtd binaries from the archive into a suitable directory such as /usr/local/bin:
$ cd /var/tmp
$ tar -zxvf mysql-5.7.18-ndb-7.5.7-linux-i686-glibc23.tar.gz
$ cd mysql-5.7.18-ndb-7.5.7-linux-i686-glibc23
$ cp bin/ndbd /usr/local/bin/ndbd
$ cp bin/ndbmtd /usr/local/bin/ndbmtd
Change location to the directory into which you copied the files, and then make both of them executable:
$ cd /usr/local/bin
$ chmod +x ndb*
The preceding steps should be repeated on each data node host.
Note:- The data directory on each machine hosting a data node is /usr/local/mysql/data. This piece of information is essential when configuring the management node.
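Once the management node (described in the next section) is running, each data node can be started against it. The very first start of a data node uses the --initial option; assuming the management server is at 192.168.0.10 (an example address):
$ ndbd --initial --ndb-connectstring=192.168.0.10
On subsequent starts omit --initial, since it erases the files created by earlier ndbd instances on that node.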

Management nodes:-

Management nodes do not require the mysqld binary. Only the NDB Cluster management server (ndb_mgmd) is required; you most likely want to install the management client (ndb_mgm) as well. Both of these binaries can also be found in the .tar.gz archive.
Extract the ndb_mgm and ndb_mgmd binaries from the archive into a suitable directory such as /usr/local/bin:
$ cd /var/tmp
$ tar -zxvf mysql-5.7.18-ndb-7.5.7-linux2.6-i686.tar.gz
$ cd mysql-5.7.18-ndb-7.5.7-linux2.6-i686
$ cp bin/ndb_mgm* /usr/local/bin
Make both of them executable:
$ cd /usr/local/bin
$ chmod +x ndb_mgm*
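With the binaries in place, the management node needs a cluster configuration file (config.ini) before ndb_mgmd can be started. The following is a minimal sketch, not a complete production configuration: all HostName addresses are examples, and DataDir for the data nodes matches the /usr/local/mysql/data directory noted earlier.

[ndbd default]
NoOfReplicas=2
DataDir=/usr/local/mysql/data

[ndb_mgmd]
HostName=192.168.0.10
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=192.168.0.30

[ndbd]
HostName=192.168.0.40

[mysqld]
HostName=192.168.0.20

Once this file exists, start ndb_mgmd with the -f option as shown earlier, then start the data nodes, and finally the SQL nodes.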
