Tuesday, June 12, 2018

GoldenGate : ADD SCHEMATRANDATA failing with OGG-01790 + ORA-06550

Trying to enable supplemental logging at the schema level on an 11.2.0.4 database failed with the following error:


GGSCI (sev1) 1> dblogin USERID g_user, PASSWORD oracle
Successfully logged into database.

GGSCI (sev1) 2> add schematrandata SCOTT

2018-06-11 09:54:07  ERROR   OGG-01790  Failed to ADD SCHEMATRANDATA on schema SCOTT because of the following SQL error: ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_CAPTURE_ADM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored SQL BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('SCOTT','ALLKEYS_ON'); END;.



The required grants are missing for the GoldenGate user. Run the following to grant them:


SQL> exec dbms_streams_auth.grant_admin_privilege('G_USER');

PL/SQL procedure successfully completed.
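To confirm the grant took effect, the privilege can be checked in the DBA_STREAMS_ADMINISTRATOR view (a quick sanity check, not part of the original session):


SQL> select username, local_privileges, access_from_remote
     from dba_streams_administrator
     where username = 'G_USER';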


It failed again, this time because the parameter enable_goldengate_replication was not set to TRUE; it must be TRUE to support GoldenGate replication.


GGSCI (sev1) 3> add schematrandata SCOTT

2018-06-11 10:51:21  ERROR   OGG-01790  Failed to ADD SCHEMATRANDATA on schema SCOTT because of the following SQL error: ORA-26947: Oracle GoldenGate replication is not enabled.
ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 1577
ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 1086
ORA-06512: at "SYS.DBMS_CAPTURE_ADM", line 722
ORA-06512: at line 1 SQL BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('SCOTT','ALLKEYS_ON'); END;.

Change the parameter at the database level:


SQL> alter system set enable_goldengate_replication=true scope=both;

System altered.
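You can verify the new value before retrying (not part of the original session output):


SQL> show parameter enable_goldengate_replication

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_goldengate_replication        boolean     TRUE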

Supplemental logging can now be enabled successfully:


GGSCI (sev1) 4> add schematrandata SCOTT

2018-06-11 10:52:35  INFO    OGG-01788  SCHEMATRANDATA has been added on schema SCOTT.

To check whether it is enabled, run:


GGSCI (sev1) 5> info schematrandata SCOTT

2018-06-11 10:57:23  INFO    OGG-01785  Schema level supplemental logging is enabled on schema SCOTT.
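On the database side, the prepared schema can also be confirmed from the Streams dictionary, assuming the DBA_CAPTURE_PREPARED_SCHEMAS view (not shown in the original session):


SQL> select schema_name, supplemental_log_data_pk, supplemental_log_data_ui
     from dba_capture_prepared_schemas
     where schema_name = 'SCOTT';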


Hope this resolves your issue. 

Tuesday, May 29, 2018

Oracle GoldenGate : ERROR: opening port for REPLICAT (Connection refused)

After the server was restarted, all the replicats we had set up were stuck in the status "Starting...", but none of them was actually doing anything.
Attempting to stop them returned the following error:


GGSCI (serv7) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    STARTING    REPLICAT1    00:00:00      00:35:16    
REPLICAT    STARTING    REPLICAT2    00:00:00      00:35:08    


GGSCI (serv7) 8> stop r*

Sending STOP request to REPLICAT REPLICAT1 ...

ERROR: opening port for REPLICAT REPLICAT1 (Connection refused).

Sending STOP request to REPLICAT REPLICAT2 ...

ERROR: opening port for REPLICAT REPLICAT2 (Connection refused).


Stopping and starting the manager process didn't help either: the replicats still showed "Starting" and were unresponsive. Even before I attempted to start a replicat for the first time it already showed "Starting", and trying to start it returned "ERROR: REPLICAT REPLICAT2 is already running.".


The cause was the replicat process status files, located in the dirpcs folder under the GoldenGate home. There should be one file for each replicat that is currently running, giving details about its status, and the file is deleted when the replicat stops. Since none of the replicats was doing anything (they were all sitting at the end of the previous trail file), they should have been reported as stopped. I renamed the .pcr files for the affected replicat processes, after which manager reported them as "ABENDED"; at that point I was able to start each replicat without issue.


prddb1:serv7:prddb1:(391) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ ls -lrt
total 12
-rwxr----- 1 dba oracle 66 May 29 16:49 REPLICAT1.pcr
-rwxr----- 1 dba oracle 66 May 29 16:50 REPLICAT2.pcr
-rwxr----- 1 dba oracle 56 May 29 16:57 MGR.pcm
prddb1:serv7:prddb1:(392) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ mv REPLICAT1.pcr REPLICAT1.pcr.old
prddb1@PRD:serv7:prddb1:(397) /dev/prddb1/ggs/12.1.2.1.0/dirpcs
$ mv REPLICAT2.pcr REPLICAT2.pcr.old


GGSCI (serv7) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    ABENDED     REPLICAT1    00:00:00      00:38:55    
REPLICAT    ABENDED     REPLICAT2    00:00:00      00:38:47   


GGSCI (serv7) 2> start R*

Sending START request to MANAGER ...
REPLICAT REPLICAT1 starting

Sending START request to MANAGER ...
REPLICAT REPLICAT2 starting

GGSCI (serv7) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REPLICAT1    00:00:00      00:44:48    
REPLICAT    RUNNING     REPLICAT2    00:00:00      00:44:40    

GGSCI (serv7) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REPLICAT1    00:06:56      00:00:00    
REPLICAT    RUNNING     REPLICAT2    00:00:02      00:00:08   
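
Once the replicats are running and catching up, the remaining lag can be watched with GGSCI's LAG command (not part of the original session):

GGSCI (serv7) 2> lag replicat REPLICAT1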

I hope this resolves your issue.

Monday, May 14, 2018

Oracle GoldenGate : OGG-02022 Logmining server does not exist on this Oracle database


While starting the new extract processes, the extracts abended with the following error:


2018-05-10 17:32:00  ERROR   OGG-02022  Logmining server does not exist on this Oracle database.

2018-05-10 17:32:00  ERROR   OGG-01668  PROCESS ABENDING.

Solution :

There is an easy solution for this. Log in to the database from the GGSCI prompt and register the extract; this should get your extracts started.

Example : 


GGSCI > dblogin userid guser password LKJFSDKLJFLASDJLKSDJF
GGSCI > register extract extract1, database 
GGSCI > start extract1
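
REGISTER EXTRACT creates the logmining (capture) server inside the database, so a quick way to confirm it exists is to query DBA_CAPTURE (a generic check, not from the original post):

SQL> select capture_name, status from dba_capture;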


Hope this resolves your issue.

Thursday, May 3, 2018

Oracle RAC : Change VIP status from INTERMEDIATE state back to ONLINE state

Check current VIP status:

$ crsctl status resource ora.serv2.vip
NAME=ora.serv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=INTERMEDIATE on serv1

The VIP for serv2 shows INTERMEDIATE because it has failed over to serv1; stopping and starting the resource relocates it back to its home node and returns it to ONLINE.

Stop the VIP resource:

$ crsctl stop resource ora.serv2.vip
CRS-2673: Attempting to stop 'ora.serv2.vip' on 'serv1'
CRS-2677: Stop of 'ora.serv2.vip' on 'serv1' succeeded

Start the VIP resource:

$ crsctl start resource ora.serv2.vip
CRS-2672: Attempting to start 'ora.serv2.vip' on 'serv2'
CRS-2676: Start of 'ora.serv2.vip' on 'serv2' succeeded

Check current VIP status:

$ crsctl status resource ora.serv2.vip
NAME=ora.serv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on serv2
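
The same can also be confirmed with srvctl (shown here as an alternative, not part of the original output):

$ srvctl status vip -n serv2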

Monday, February 5, 2018

Oracle RAC : ASM instance startup failing with "terminating the instance due to error 482" in alert log


The ASM instance alert log showed the following error while starting the ASM instance on the second node:

Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x1994] [PC:0x43EFF99, kjbmprlst()+1369] [flags: 0x0, count: 1]
Errors in file /opt/app/oragrid/orabase/diag/asm/+asm/+ASM2/trace/+ASM2_lmd0_39620.trc  (incident=224081):
ORA-07445: exception encountered: core dump [kjbmprlst()+1369] [SIGSEGV] [ADDR:0x1994] [PC:0x43EFF99] [Address not mapped to object] []
Incident details in: /opt/app/oragrid/orabase/diag/asm/+asm/+ASM2/incident/incdir_224081/+ASM2_lmd0_39620_i224081.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20180131051633], requested by (instance=2, osid=39620 (LMD0)), summary=[incident=224081].
PMON (ospid: 39577): terminating the instance due to error 482

Fix :

The cluster_database parameter had been changed to FALSE, which was causing the ASM startup failures.

Run the following from the running cluster node, making the change for the failing ASM instance:

SQL> alter system set cluster_database=TRUE scope=spfile sid='+ASM2';

System altered.
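
You can confirm the spfile entry before restarting the instance (not part of the original session):

SQL> select sid, name, value from v$spparameter where name = 'cluster_database';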

This should start your ASM instance now. I hope this resolves your issue. 

Saturday, December 30, 2017

Oracle : Spatial is INVALID In DBA_REGISTRY after applying patches




Issue : 

COMP_ID        COMP_NAME                            VERSION      STATUS
-------------- ------------------------------------ ------------ --------
OWB            OWB                                  11.2.0.4.0   VALID
APEX           Oracle Application Express           3.2.1.00.12  VALID
EM             Oracle Enterprise Manager            11.2.0.4.0   VALID
AMD            OLAP Catalog                         11.2.0.4.0   VALID
SDO            Spatial                              11.2.0.4.0   INVALID  <--- Invalid
ORDIM          Oracle Multimedia                    11.2.0.4.0   VALID
XDB            Oracle XML Database                  11.2.0.4.0   VALID
CONTEXT        Oracle Text                          11.2.0.4.0   VALID
EXF            Oracle Expression Filter             11.2.0.4.0   VALID
RUL            Oracle Rules Manager                 11.2.0.4.0   VALID
OWM            Oracle Workspace Manager             11.2.0.4.0   VALID
CATALOG        Oracle Database Catalog Views        11.2.0.4.0   VALID
CATPROC        Oracle Database Packages and Types   11.2.0.4.0   VALID
JAVAVM         JServer JAVA Virtual Machine         11.2.0.4.0   VALID
XML            Oracle XDK                           11.2.0.4.0   VALID
CATJAVA        Oracle Database Java Packages        11.2.0.4.0   VALID
APS            OLAP Analytic Workspace              11.2.0.4.0   VALID
XOQ            Oracle OLAP API                      11.2.0.4.0   VALID
RAC            Oracle Real Application Clusters     11.2.0.4.0   VALID

19 rows selected.

Solution :


connect / as sysdba
alter session set current_schema="MDSYS";
@?/md/admin/sdogr.sql
@?/md/admin/prvtgr.plb 

alter session set current_schema="SYS";
set serveroutput on
exec utl_recomp.recomp_serial('MDSYS');
exec sys.VALIDATE_SDO();
select comp_name, version, status from dba_registry where comp_id='SDO';
COMP_NAME  VERSION                 STATUS
---------  ----------------------  ------------------
Spatial    11.2.0.4.0              VALID
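
If the component still shows INVALID, it is worth checking for remaining invalid MDSYS objects (a generic check, not from the original post):

SQL> select object_name, object_type from dba_objects
     where owner = 'MDSYS' and status = 'INVALID';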

Hope this solution works for you.

Sunday, October 8, 2017

Oracle : ORA-10485: Real-Time Query cannot be enabled while applying migration redo

Issue :

After patching, when starting the standby database, the MRP process failed with the error below; the following messages were observed in the standby alert log.

All non-current ORLs have been archived.
Media Recovery Log +PRODDB1_FRA/prodDB1/archivelog/2017_10_08/thread_1_seq_68958.4014.956804163
MRP0: Background Media Recovery terminated with error 10485
Errors in file /opt/app/prodDB1/oracle/diag/rdbms/prodDB1/prodDB1/trace/prodDB1_pr00_44779.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
Managed Standby Recovery not using Real Time Apply
Sun Oct 08 03:07:15 2017
Completed: alter database recover managed standby database using current logfile disconnect
Recovery interrupted!
Block change tracking service stopping.
Stopping background process CTWR
Sun Oct 08 03:07:17 2017
MRP0: Background Media Recovery process shutdown (prodDB1)


Cause :

ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

The Real-Time Query feature was enabled while an attempt was made to recover through migration redo, which is generated during primary database upgrades or downgrades.

Solution :

Start the standby database in mount state and apply logs until all of them are applied. Once the logs are applied, you can open the database in read-only mode and re-enable Real-Time Apply.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 2.1379E+11 bytes
Fixed Size                  2265864 bytes
Variable Size            5.6371E+10 bytes
Database Buffers         1.5730E+11 bytes
Redo Buffers              112762880 bytes
Database mounted.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY disconnect from session;

Database altered.


Alert Log Shows
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY disconnect from session
Attempt to start background Managed Standby Recovery process (prodDB1)
Sun Oct 08 03:10:56 2017
MRP0 started with pid=84, OS id=233034 
MRP0: Background Managed Standby Recovery process started (prodDB1)



Check that all logs have been applied and that there is no gap between the primary and the standby database:
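
The query used to produce the output below is not shown in the post; one common way to compare the last sequence received and applied per thread (a sketch using v$archived_log and v$log_history) is:

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
     from (select thread# thrd, max(sequence#) almax
             from v$archived_log
            where resetlogs_change# = (select resetlogs_change# from v$database)
            group by thread#) al,
          (select thread# thrd, max(sequence#) lhmax
             from v$log_history
            where resetlogs_change# = (select resetlogs_change# from v$database)
            group by thread#) lh
    where al.thrd = lh.thrd;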


    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1             68965            68965
         2             69547            69547

SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
Media recovery complete.
SQL> alter database open read only;

Database altered.

SQL> alter database recover managed standby database using current logfile disconnect;

Database altered.


Alert Log Shows

alter database recover managed standby database using current logfile disconnect
Attempt to start background Managed Standby Recovery process (prodDB1)
Sun Oct 08 03:14:38 2017
MRP0 started with pid=84, OS id=33618 
MRP0: Background Managed Standby Recovery process started (prodDB1)
 started logmerger process
Sun Oct 08 03:14:43 2017
Managed Standby Recovery starting Real Time Apply
Only allocated 127 recovery slaves (requested 128)
Parallel Media Recovery started with 127 slaves


I hope this helps you.