Tuesday, June 12, 2018

GoldenGate : ADD SCHEMATRANDATA failing with OGG-01790 + ORA-06550

Trying to enable supplemental logging at the schema level on an 11.2.0.4 database failed with the following error:


GGSCI (sev1) 1> dblogin USERID g_user, PASSWORD oracle
Successfully logged into database.

GGSCI (sev1) 2> add schematrandata SCOTT

2018-06-11 09:54:07  ERROR   OGG-01790  Failed to ADD SCHEMATRANDATA on schema SCOTT because of the following SQL error: ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_CAPTURE_ADM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored SQL BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('SCOTT','ALLKEYS_ON'); END;.



The required grants are missing for the GoldenGate user. Run the following to grant them:


SQL> exec dbms_streams_auth.grant_admin_privilege('G_USER');

PL/SQL procedure successfully completed.
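
To spot-check that the privileges are now in place, you can query DBA_TAB_PRIVS for the EXECUTE grants this procedure hands out (a generic check, not part of the original session):

SQL> select table_name, privilege from dba_tab_privs
     where grantee = 'G_USER' and table_name like 'DBMS_CAPTURE%';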


It failed again, this time because the parameter ENABLE_GOLDENGATE_REPLICATION was not set; it must be TRUE to support GoldenGate replication.


GGSCI (sev1) 3> add schematrandata SCOTT

2018-06-11 10:51:21  ERROR   OGG-01790  Failed to ADD SCHEMATRANDATA on schema SCOTT because of the following SQL error: ORA-26947: Oracle GoldenGate replication is not enabled.
ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 1577
ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 1086
ORA-06512: at "SYS.DBMS_CAPTURE_ADM", line 722
ORA-06512: at line 1 SQL BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('SCOTT','ALLKEYS_ON'); END;.

Change the parameter at the database level:


SQL> alter system set enable_goldengate_replication=true scope=both;

System altered.
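
You can verify the setting with a standard SQL*Plus check (not part of the original session):

SQL> show parameter enable_goldengate_replication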

With the parameter set, supplemental logging can now be enabled:


GGSCI (sev1) 4> add schematrandata SCOTT

2018-06-11 10:52:35  INFO    OGG-01788  SCHEMATRANDATA has been added on schema SCOTT.

To check whether it is enabled, run the following:


GGSCI (sev1) 5> info schematrandata SCOTT

2018-06-11 10:57:23  INFO    OGG-01785  Schema level supplemental logging is enabled on schema SCOTT.
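
On the database side, the prepared schema should also now be visible in DBA_CAPTURE_PREPARED_SCHEMAS (a generic verification query, not part of the original session):

SQL> select schema_name, timestamp from dba_capture_prepared_schemas
     where schema_name = 'SCOTT';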


Hope this resolves your issue. 

Tuesday, May 29, 2018

Oracle GoldenGate : ERROR: opening port for REPLICAT (Connection refused)

After the server was restarted, all the replicats we had set up showed the status "Starting...", but none of them was actually doing anything.
Attempting to stop them returned the following error:


GGSCI (serv7) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    STARTING    REPLICAT1    00:00:00      00:35:16    
REPLICAT    STARTING    REPLICAT2    00:00:00      00:35:08    


GGSCI (serv7) 8> stop r*

Sending STOP request to REPLICAT REPLICAT1 ...

ERROR: opening port for REPLICAT REPLICAT1 (Connection refused).

Sending STOP request to REPLICAT REPLICAT2 ...

ERROR: opening port for REPLICAT REPLICAT2 (Connection refused).


Stopping and starting the manager process didn't help either: the replicats still said "Starting" and were unresponsive. Even before I had attempted to start a replicat for the first time, it showed "Starting", and trying to start it gave "ERROR: REPLICAT REPLICAT2 is already running.".


The cause was the replicat process status file, located in the dirpcs folder under the GoldenGate home. There should be one such file for each replicat that is currently running, giving details about its status; when a replicat stops, the file is deleted. Since none of the replicats was doing anything (they were all sitting at the end of the previous trail file), they should have been stopped. I renamed the .pcr files for the affected replicat processes, after which the manager reported them as "ABENDED"; at that point, I was able to start up each replicat without issue.
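
Before renaming anything, it is worth confirming at the OS level that no replicat processes are actually still running (a generic check, not from the original session):

$ ps -ef | grep -i replicat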


prddb1:serv7:prddb1:(391) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ ls -lrt
total 12
-rwxr----- 1 dba oracle 66 May 29 16:49 REPLICAT1.pcr
-rwxr----- 1 dba oracle 66 May 29 16:50 REPLICAT2.pcr
-rwxr----- 1 dba oracle 56 May 29 16:57 MGR.pcm
prddb1:serv7:prddb1:(392) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ mv REPLICAT1.pcr REPLICAT1.pcr.old
prddb1@PRD:serv7:prddb1:(397) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ mv REPLICAT2.pcr REPLICAT2.pcr.old


GGSCI (serv7) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    ABENDED     REPLICAT1    00:00:00      00:38:55    
REPLICAT    ABENDED     REPLICAT2    00:00:00      00:38:47   


GGSCI (serv7) 2> start R*

Sending START request to MANAGER ...
REPLICAT REPLICAT1 starting

Sending START request to MANAGER ...
REPLICAT REPLICAT2 starting

GGSCI (serv7) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REPLICAT1    00:00:00      00:44:48    
REPLICAT    RUNNING     REPLICAT2    00:00:00      00:44:40    

GGSCI (serv7) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REPLICAT1    00:06:56      00:00:00    
REPLICAT    RUNNING     REPLICAT2    00:00:02      00:00:08   

I hope this resolves your issue.

Monday, May 14, 2018

Oracle GoldenGate : OGG-02022 Logmining server does not exist on this Oracle database


While starting the new extract processes, the extracts abended with the following error:


2018-05-10 17:32:00  ERROR   OGG-02022  Logmining server does not exist on this Oracle database.

2018-05-10 17:32:00  ERROR   OGG-01668  PROCESS ABENDING.

Solution :

   There is an easy solution for this: log in to the database from the GGSCI prompt and register the extract with the database. REGISTER EXTRACT creates the logmining server that integrated capture requires, which should get your extracts started.

Example : 


GGSCI > dblogin userid guser password LKJFSDKLJFLASDJLKSDJF
GGSCI > register extract extract1, database 
GGSCI > start extract extract1
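
To confirm that registration created the logmining server, you can query DBA_CAPTURE on the source database (a generic verification query; the capture name is generated by GoldenGate):

SQL> select capture_name, status from dba_capture;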


Hope this resolves your issue.

Thursday, May 3, 2018

Oracle RAC : Change VIP status from INTERMEDIATE state back to ONLINE state

Check current VIP status:

$ crsctl status resource ora.serv2.vip
NAME=ora.serv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=INTERMEDIATE on serv1

Stop the VIP resource:

$ crsctl stop resource ora.serv2.vip
CRS-2673: Attempting to stop 'ora.serv2.vip' on 'serv1'
CRS-2677: Stop of 'ora.serv2.vip' on 'serv1' succeeded

Start the VIP resource:

$ crsctl start resource ora.serv2.vip
CRS-2672: Attempting to start 'ora.serv2.vip' on 'serv2'
CRS-2676: Start of 'ora.serv2.vip' on 'serv2' succeeded

Check current VIP status:

$ crsctl status resource ora.serv2.vip
NAME=ora.serv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on serv2
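
As an alternative to the stop/start pair, clusterware can move the resource in one step with crsctl relocate (available in 11.2 and later; a syntax sketch, so verify it against your version):

$ crsctl relocate resource ora.serv2.vip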

Monday, February 5, 2018

Oracle RAC : ASM instance startup failing with "terminating the instance due to error 482" in alert log


The ASM instance alert log showed the following error while starting the ASM instance on the second node:

Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x1994] [PC:0x43EFF99, kjbmprlst()+1369] [flags: 0x0, count: 1]
Errors in file /opt/app/oragrid/orabase/diag/asm/+asm/+ASM2/trace/+ASM2_lmd0_39620.trc  (incident=224081):
ORA-07445: exception encountered: core dump [kjbmprlst()+1369] [SIGSEGV] [ADDR:0x1994] [PC:0x43EFF99] [Address not mapped to object] []
Incident details in: /opt/app/oragrid/orabase/diag/asm/+asm/+ASM2/incident/incdir_224081/+ASM2_lmd0_39620_i224081.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20180131051633], requested by (instance=2, osid=39620 (LMD0)), summary=[incident=224081].
PMON (ospid: 39577): terminating the instance due to error 482

Fix :

The CLUSTER_DATABASE parameter had been changed to FALSE, which was causing the ASM instance startup failures.
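
You can confirm the offending value in the spfile from the surviving node (a generic check, not from the original session):

SQL> select sid, value from v$spparameter where name = 'cluster_database';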

Please run the following from the running cluster node to correct the parameter for the failing ASM instance:

SQL> alter system set cluster_database=TRUE scope=spfile sid='+ASM2';

System altered.
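
With the spfile corrected, start ASM on the failing node, for example via srvctl (the node name here is hypothetical):

$ srvctl start asm -n serv2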

The ASM instance on the second node should now start cleanly. I hope this resolves your issue.