
Oracle E-Business Suite Scripts

AutoConfig Scope and Components:
1. In Release 12.2, the application tier is AutoConfig-enabled.
2. The applications context file is located in the INST_TOP, at <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml.
3. The Release 12.2 database tier created via Rapid Install is also AutoConfig-enabled.
4. The database context file is located in the RDBMS ORACLE_HOME, at <RDBMS_ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
AutoConfig components:
Applications context - An XML repository located in the INST_TOP that contains information specific to the APPL_TOP. 
Database context - An XML repository located in the RDBMS ORACLE_HOME that contains information specific to that database tier. 
AutoConfig template files - Files containing named tags that are replaced with instance-specific information from the appropriate context, in the process of instantiation. 
AutoConfig driver files - Every Oracle E-Business Suite product maintains a driver file used by AutoConfig. The driver file lists the AutoConfig file templates and their destination locations. 
AutoConfig scripts - A set of scripts that provide a simplified interface to the AutoConfig APIs.
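Both context files are plain XML keyed by oa_var attributes, so a single variable can be pulled out with standard tools. The sketch below is self-contained: it writes a tiny sample context file (invented values; a real one lives at <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml) and extracts one variable with sed. get_ctx_var is a hypothetical helper, not an EBS-supplied script.

```shell
# Minimal sketch: read an AutoConfig context variable with sed.
# The sample file below stands in for <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml;
# its values are invented.
ctx_file=$(mktemp)
cat > "$ctx_file" <<'EOF'
<oa_context>
  <webport oa_var="s_webport">8000</webport>
  <dbhost oa_var="s_dbhost">dbsrv01</dbhost>
</oa_context>
EOF

# get_ctx_var <file> <oa_var name>: print the value of that context variable.
get_ctx_var() {
  sed -n "s/.*oa_var=\"$2\"[^>]*>\([^<]*\)<.*/\1/p" "$1"
}

port=$(get_ctx_var "$ctx_file" s_webport)
echo "$port"   # prints 8000
```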
E-Business Suite Services:
Note: A particular service can be started or stopped via the adstrtal or adstpall scripts only if the service and its service group are both enabled.
Root Service - Node Manager - adnodemgrctl.sh 
Web Administration - WebLogic Admin Server - adadminsrvctl.sh
Web Entry Point Services:
 Oracle HTTP Server - adapcctl.sh
 Oracle Process Manager - adopmnctl.sh
Web Application Services:
 oacore - admanagedsrvctl.sh
 oafm - admanagedsrvctl.sh
 forms - admanagedsrvctl.sh
 forms-c4ws - admanagedsrvctl.sh
Batch Processing Services:
 Oracle TNS Listener - adalnctl.sh
 Concurrent Manager - adcmctl.sh
 Fulfillment Server - jtffmctl.sh
 Oracle ICSM - ieoicsm.sh
Other Services:
 Forms Server - adformsrvctl.sh
 Oracle MWA Service - mwactlwrpr.sh
Manage Oracle E-Business Suite Service Processes:
The following scripts are located in <INST_TOP>/admin/scripts.
The adstrtal and adstpall scripts can be used to start and stop all the AutoConfig managed application tier services in a single operation.
Administer the individual services separately using their respective service control scripts.

The oacore, oafm, forms and forms-c4ws services can also be managed by starting and stopping the respective managed servers via the WebLogic Server Administration Console.
Start Applications services  - adstrtal.sh
Stop Applications services - adstpall.sh
Start individual service (except those that are part of the Web Application Services service group) - <control_script> start
Stop individual service (except those that are part of the Web Application Services service group)  - <control_script> stop
Start individual managed server (all services that are part of the Web Application Services service group) - admanagedsrvctl.sh start <managed_server_name>
Stop individual managed server (all services that are part of the Web Application Services service group) - admanagedsrvctl.sh stop <managed_server_name>, or admanagedsrvctl.sh abort <managed_server_name>
The 'stop' command shuts down the managed server only after no user sessions remain connected, while the 'abort' command shuts it down immediately.
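The commands above can be sketched as a dry run. svc below is a hypothetical wrapper that only echoes what it would execute, so the sequence can be shown without a running EBS instance; oacore_server1 is a typical default managed-server name, not something the source guarantees.

```shell
# Dry-run sketch of the service-control calls described above.
# svc is a hypothetical helper: it echoes the command instead of running it.
svc() { echo "would run: $*"; }

# Stop all application tier services in one operation, then restart them:
svc adstpall.sh
svc adstrtal.sh

# Manage one ordinary service with its own control script:
svc adcmctl.sh stop
svc adcmctl.sh start

# Manage a Web Application Services managed server
# (oacore_server1 is a typical default name; yours may differ):
svc admanagedsrvctl.sh stop oacore_server1
svc admanagedsrvctl.sh start oacore_server1
```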

Commands for managing processes on the Database tier:
Start database listener process - addlnctl.sh start <SID>
Stop database listener process - addlnctl.sh stop <SID>
Start database server process - addbctl.sh start
Stop database server process - addbctl.sh stop
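Putting the two tiers together, a conventional full bounce stops the application tier first and starts it last. The sketch below is a dry run under assumed names: step only echoes each call, and EBSDB is a made-up SID standing in for <SID>.

```shell
# Dry-run sketch of a full-stack restart in the conventional order
# (application services down first, database brought back first).
# step is a hypothetical helper that echoes rather than executes.
step() { echo "step: $*"; }
sid=EBSDB   # hypothetical SID

# Shutdown
step adstpall.sh                # application tier services
step addbctl.sh stop            # database server process
step addlnctl.sh stop "$sid"    # database listener

# Startup
step addlnctl.sh start "$sid"   # database listener
step addbctl.sh start           # database server process
step adstrtal.sh                # application tier services
```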

Using AutoConfig Tools for System Configuration:
adautocfg.sh - For running AutoConfig.
 Applications Tier: <INST_TOP>/admin/scripts
 Database Tier: <RDBMS_ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
adchkcfg.sh - Run before AutoConfig to preview the changes AutoConfig would make. It generates a report showing the differences between the existing configuration and the configuration that would result from running AutoConfig.
 Execute DB Tier - sh <RDBMS_ORACLE_HOME>/appsutil/bin/adchkcfg.sh contextfile=<CONTEXT_FILE>
 Execute App Tier - sh <AD_TOP>/bin/adchkcfg.sh contextfile=<CONTEXT_FILE>
 On Applications Tier: <AD_TOP>/bin
 On Database Tier: <RDBMS_ORACLE_HOME>/appsutil/bin
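As a sketch of wiring those locations together, the snippet below assembles the applications-tier adchkcfg.sh command line from environment variables; the AD_TOP and CONTEXT_FILE fallback values are invented examples, and the command is printed rather than executed.

```shell
# Assemble (but do not execute) the adchkcfg.sh call for the apps tier.
# Fallback values are hypothetical; on a real host AD_TOP and CONTEXT_FILE
# are set by the EBS environment file.
AD_TOP=${AD_TOP:-/u01/EBSapps/appl/ad/12.0.0}
CONTEXT_FILE=${CONTEXT_FILE:-/u01/inst/apps/EBS_host1/appl/admin/EBS_host1.xml}

cmd="sh $AD_TOP/bin/adchkcfg.sh contextfile=$CONTEXT_FILE"
echo "$cmd"
```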
GenCtxInfRep.pl - Reports detailed information about context variables and the templates in which they are used.
 On Applications Tier: <FND_TOP>/patch/115/bin
 On Database Tier: <RDBMS_ORACLE_HOME>/appsutil/bin
adtmplreport.sh - Reports the location of an AutoConfig template given the location of its instantiated file, and vice versa.
 On Applications Tier: <AD_TOP>/bin
 On Database Tier: <RDBMS_ORACLE_HOME>/appsutil/bin
admkappsutil.pl - Used when applying patches to the database tier. Running this script on the applications tier generates appsutil.zip, which is then copied to the database tier to migrate the patch there.
 On Applications Tier: <AD_TOP>/bin
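The patch-migration sequence can be sketched end to end. run is a hypothetical echo-only wrapper, hostnames and paths are placeholders, and the unzip and AutoConfig steps would actually be executed on the database tier, not the apps tier.

```shell
# Dry-run sketch of migrating appsutil to the database tier.
# run is a hypothetical helper that echoes instead of executing; paths and
# the dbhost name are placeholders.
run() { echo "would run: $*"; }

run perl '$AD_TOP/bin/admkappsutil.pl'           # generates appsutil.zip on the apps tier
run scp '$INST_TOP/admin/out/appsutil.zip' 'dbhost:$ORACLE_HOME/'
run 'cd $ORACLE_HOME && unzip -o appsutil.zip'   # on the database tier
run sh '$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh'
```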
Rolling Back an AutoConfig Session:
Each AutoConfig run creates a rollback script you can use to revert to the previous configuration settings if necessary.
Location - Applications tier: <INST_TOP>/admin/out/<MMDDhhmm>; Database tier: <RDBMS_ORACLE_HOME>/appsutil/out/<CONTEXT_NAME>/<MMDDhhmm>
To roll back an AutoConfig session, execute the following command from the relevant directory:
$ restore.sh
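To make the flow concrete, this self-contained sketch simulates the timestamped out/ directory with a fake restore.sh (the echo stands in for the real restore); on a real system you would change into the actual <INST_TOP>/admin/out/<MMDDhhmm> directory and run the restore.sh found there.

```shell
# Simulated rollback: build a fake session directory with a stub restore.sh,
# pick the most recent session, and run its rollback script.
out_root=$(mktemp -d)                     # stands in for <INST_TOP>/admin/out
mkdir -p "$out_root/04151030"             # fake <MMDDhhmm> session directory
printf '#!/bin/sh\necho restored\n' > "$out_root/04151030/restore.sh"
chmod +x "$out_root/04151030/restore.sh"

# Timestamped names sort lexically, so the last one is the latest session.
latest=$(ls -1 "$out_root" | sort | tail -1)
result=$(cd "$out_root/$latest" && sh restore.sh)
echo "$result"   # prints: restored
```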
Configuration Synchronization - Document 1905593.1, Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2.


Ref.:

https://docs.oracle.com/cd/E26401_01/doc.122/e22953/T174296T589913.htm#6237552
