Oracle GoldenGate Classic - Basic Setup

Create environment
01. Source & target - Create the GG tablespace
02. Source & target - Create the GoldenGate schema owner
03. Grant privileges, including DBA, to the GG schema owner
04. Add the schema owner to the global parameter file ./GLOBALS
05. Execute the role setup script to create the role GGS_GGSUSER_ROLE
06. Grant the role GGS_GGSUSER_ROLE to the GG user on both source and target (see the SQL sketch below)
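The steps above are only an outline, so here is a minimal SQL/GGSCI sketch of steps 01-06. The tablespace gg_data, its datafile path and size, and the user gguser are illustrative assumptions, and role_setup.sql is the role script shipped in the GoldenGate Classic installation directory; adjust the names to your environment and run the database steps on both source and target.

-- 01. Create the GG tablespace (name, datafile path, and size are examples)
SQL> CREATE TABLESPACE gg_data DATAFILE '/u01/app/oracle/oradata/gg_data01.dbf' SIZE 500M AUTOEXTEND ON;

-- 02. Create the GoldenGate schema owner
SQL> CREATE USER gguser IDENTIFIED BY <pwd> DEFAULT TABLESPACE gg_data QUOTA UNLIMITED ON gg_data;

-- 03. Grant privileges, including DBA, to the GG schema owner
SQL> GRANT CONNECT, RESOURCE, DBA TO gguser;

-- 04. Add the schema owner to ./GLOBALS (run ggsci from the GoldenGate home)
$ ggsci
GGSCI 1> EDIT PARAMS ./GLOBALS
GGSCHEMA gguser

-- 05. Run the role setup script from the GoldenGate home; it creates GGS_GGSUSER_ROLE
SQL> @role_setup.sql

-- 06. Grant the role to the GG user on both source and target
SQL> GRANT GGS_GGSUSER_ROLE TO gguser;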
Configure GG Extract
01. Source and Target - Configure Manager Parameters
$ ggsci
GGSCI 1> EDIT PARAMS MGR
PORT 7809
DYNAMICPORTLIST 7810-7820
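Optionally, the Manager parameter file is also where housekeeping settings usually go. The two lines below are an example, not part of the original setup: AUTOSTART starts the listed Extract/Replicat processes when Manager starts, and PURGEOLDEXTRACTS purges trail files once every process has checkpointed past them; the trail path and retention window are assumptions.

-- Optional Manager housekeeping (example values)
AUTOSTART ER *
PURGEOLDEXTRACTS /home/oracle/goldengate/dirdat/*, USECHECKPOINTS, MINKEEPHOURS 48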
02. Source - Create the parameter file for the Extract, ex1
$ ggsci
GGSCI 1> EDIT PARAMS ex1
EXTRACT ex1
USERID <id>, PASSWORD <pwd>
EXTTRAIL /home/oracle/goldengate/dirdat/ex
TABLE <Schema>.*;
03. Source - Configure the Data Pump parameters, dp1
$ ggsci
GGSCI 1> EDIT PARAMS dp1
EXTRACT dp1
USERID <id>, PASSWORD <pwd>
RMTHOST <hostname>, MGRPORT 7809
RMTTRAIL /home/oracle/goldengate/dirdat/rt
TABLE <schema>.*;
Target - Create the checkpoint table
$ ggsci
GGSCI 1> DBLOGIN USERID <id>, PASSWORD <pwd>
GGSCI 2> ADD CHECKPOINTTABLE <Schema>.checkpointtable
Add checkpoint table to ./GLOBALS
$ ggsci
GGSCI 1> EDIT PARAMS ./GLOBALS
GGSCHEMA <Schema>
CHECKPOINTTABLE <Schema>.checkpointtable
Configure GG Replicat
Target - Create the parameter file for rep1
$ ggsci
GGSCI 1> EDIT PARAMS rep1
REPLICAT rep1
USERID <id>, PASSWORD <pwd>
ASSUMETARGETDEFS
DISCARDFILE /home/oracle/goldengate/discards, PURGE
MAP <Schema>.*, TARGET <Schema>.*;
Note: You can use APPEND in place of PURGE in the DISCARDFILE clause.
Source server - Configure supplemental logging for all tables that will be replicated
$ ggsci
GGSCI 1> DBLOGIN USERID <id>, PASSWORD <pwd>
GGSCI 2> ADD TRANDATA <Schema>.<TableName>
Source Add the Extract Process
$ ggsci
GGSCI 1> ADD EXTRACT ex1, TRANLOG, BEGIN NOW
Source Add the Extract Trail
$ ggsci
GGSCI 1> ADD EXTTRAIL /home/oracle/goldengate/dirdat/ex, EXTRACT ex1
Source Add the Data Pump Process
$ ggsci
GGSCI 1> ADD EXTRACT dp1, EXTTRAILSOURCE /home/oracle/goldengate/dirdat/ex
Source Add the Data Pump Trail
On the source server, add the Data Pump trail (/home/oracle/goldengate/dirdat/rt).
This trail is created on the target server; however, its name is required in order to set up the Data Pump process on the source server.
$ ggsci
GGSCI 1> ADD RMTTRAIL /home/oracle/goldengate/dirdat/rt, EXTRACT dp1
Target Add the Replicat Process
$ ggsci
GGSCI 1> ADD REPLICAT rep1, EXTTRAIL /home/oracle/goldengate/dirdat/rt
Source Start Manager
$ ggsci
GGSCI 1> START MANAGER
Target Start the Manager
$ ggsci
GGSCI 1> START MANAGER
Source Start Extract Process
$ ggsci
GGSCI 1> START EXTRACT ex1
Verify the Extract Process
$ ggsci
GGSCI > INFO EXTRACT ex1
Source Start the Data Pump Process
$ ggsci
GGSCI 3> START EXTRACT dp1
Verify the Data Pump Process
$ ggsci
GGSCI 2> INFO EXTRACT dp1
Target Start the Replicat Process
$ ggsci
GGSCI 1> START REPLICAT rep1
Verify the Replicat Process
$ ggsci
GGSCI 2> INFO REPLICAT rep1
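Beyond INFO, a simple end-to-end check is to commit a change on the source and confirm it shows up on the target; the table and values below are placeholders, and STATS REPLICAT reports the per-table operation counts applied by rep1.

Source - commit a test change (placeholder table and values)
SQL> INSERT INTO <Schema>.<TableName> VALUES (<values>);
SQL> COMMIT;

Target - confirm the change arrived, then check apply statistics
SQL> SELECT COUNT(*) FROM <Schema>.<TableName>;
$ ggsci
GGSCI 1> STATS REPLICAT rep1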
