RMAN Recovery Catalog Issues after 12.2 Upgrade

Well, after upgrading the database to 12.2, the next step is to upgrade the RMAN Recovery Catalog to the same version in order to keep running backups against it. The process is pretty straightforward.
– Connect to RMAN using target and catalog
– Execute upgrade catalog
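The connection can look something like this (the catalog connect string here is illustrative):
$ rman target / catalog rcat_owner@rcatdb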
RMAN> upgrade catalog;
This finishes correctly. But when we then try to resync the catalog, we get the error below:
RMAN> resync catalog;
starting full resync of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of resync command on default channel at 06/07/2018 22:15:36
ORA-01403: no data found
This is due to bug 27013146. The solution is to apply this patch to the newly upgraded Oracle Home and then run the catalog upgrade again.
More information can be found in MOS Note 2341947.1.
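A quick way to confirm the fix is in place before re-running the catalog upgrade (assuming the standard OPatch location):
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 27013146
RMAN> upgrade catalog;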
Everything is cool now, right? Well, now the old 12.1.0.2 database backups are failing with the error below:
RMAN-03008: error while performing automatic resync of recovery catalog
For that, the solution is outlined in MOS Doc Id 2291791.1.
Happy 12.2 upgrades!
Thanks,
Alfredo

RMAN restore slow due to ASMB process from ASM to filesystem

I was restoring a database running on ASM to an instance running on filesystem.
The step that catalogs the files was extremely slow, and after taking a look at the alert log I noticed tons of errors related to the ASMB process.
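A cataloging step of this kind usually looks something like this (the ASM path is illustrative):
RMAN> CATALOG START WITH '+DATA/MYDB/';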
Well, it turns out this is an unpublished bug explained in more detail in the MOS note below.
12c RMAN Operations from ASM To Non-ASM Slow Performance (Doc ID 2081537.1)
After applying the suggested patch (19503821) the performance went back to normal and the restore progressed as expected.
Thanks,

Alfredo

ORA-01503: CREATE CONTROLFILE failed on RMAN duplicate between RAC databases

It had been a while since I last performed an RMAN duplicate from one RAC database to another RAC database. You may find this post useless as there’s a lot of information out there about the error I faced, but all my posts are kind of my own little diary.
So, basically, in order to successfully duplicate one database to another you need the following:

    – TNS setup on the server where the database to be duplicated is hosted. You need to be able to tnsping both the target and the auxiliary database.

    – Create a parameter file and a password file for the auxiliary and start up the first instance in nomount.
    – Create a full backup of the target database including archivelogs (see the backup sketch after this list).
    – Connect with RMAN to both the target and the auxiliary database from the auxiliary server:
rman target sys/*****@<target_tns_alias> auxiliary /
    – Issue the duplicate command:

duplicate target database to <database_name>;
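For the backup step above, a minimal sketch run against the target (no specific format, tags or channels assumed):
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP CURRENT CONTROLFILE;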
The duplicate should finish successfully, but instead I got this error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 01/03/2017 18:29:58
RMAN-05501: aborting duplication of target database
RMAN-06136: ORACLE error from auxiliary database: ORA-01503: CREATE CONTROLFILE failed
ORA-12720: operation requires database is in EXCLUSIVE mode
I searched for this error and the culprit was that my parameter file for the auxiliary had this:

cluster_database = true

Per MOS note “Rman duplicate fail with RMAN-06136, ORA-01503, ORA-12720, ORA-00494 enqueue [CF] (Doc ID 1335479.1)”, duplicating from a RAC database to another RAC database this way is not supported, hence we need to set the cluster_database parameter to false in the auxiliary database while the duplicate runs.
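A minimal sketch of that change, assuming the auxiliary was started with a pfile named initAUX1.ora (the file name is illustrative):
# initAUX1.ora used only for the duplicate
*.cluster_database=false
Once the duplicate completes and the new database is running from an spfile, the parameter can be set back for RAC:
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';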
After doing this the duplicate finished correctly.
Thanks,

Alfredo

WAIT_FOR_GAP. How to restore missing archivelogs from backup?

In a Data Guard configuration, Oracle’s RFS (Remote File Server) process writes the redo data received from the primary to the standby. When for any reason this data cannot be written and applied, the MRP (Managed Recovery Process) will wait for the archivelog and show the status “WAIT_FOR_LOG”. This leaves the standby out of sync with the primary database.
Sometimes some archivelogs cannot be transferred from the primary database to the standby at all, leaving a gap in the archivelog sequence. In that case the MRP process will show the status “WAIT_FOR_GAP”.
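You can check what the recovery process is waiting on with a query like this on the standby (illustrative, only the MRP rows are of interest here):
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';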
In order to fix the archivelog gap we have to manually transfer the missing archivelogs.
To find the gap you can query v$archive_gap (gv$archive_gap for RAC).
SELECT INST_ID, THREAD#, HIGH_SEQUENCE#, LOW_SEQUENCE# FROM GV$ARCHIVE_GAP;
INST_ID    THREAD#    HIGH_SEQUENCE#    LOW_SEQUENCE#
-------    -------    --------------    -------------
      2          2               823              811
You can see that we are missing archivelogs from sequence 811 to 823 for thread 2. If these archivelogs are not available in the primary we have to restore them from backup.
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 811 UNTIL SEQUENCE 823 THREAD=2;
Keep in mind that THREAD defaults to 1, so you must specify the thread number when you are trying to restore archivelogs from a different thread.
After restoring these archivelogs on the primary, they should be shipped automatically to the standby, where the RFS process will receive them. Verify that the gap is resolved.
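A quick way to verify is to re-run the gap query and check the last applied sequence per thread on the standby, for example:
SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES' GROUP BY THREAD#;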
Thanks,

Alfredo

How to upgrade the RMAN recovery catalog to 12c?

Some days back I faced an issue while trying to upgrade the RMAN recovery catalog to support 12c databases.


RMAN> upgrade catalog;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-07539: insufficient privileges to create or upgrade the catalog schema

The problem is that for 12c the catalog owner requires additional privileges. The solution is to run the dbmsrmansys.sql script that comes with the 12c binaries.

You have to copy this file from the “$ORACLE_HOME/rdbms/admin” directory of your 12c home to the host where your RMAN recovery catalog database runs and execute it there.
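For example, assuming the catalog database host is reachable as rcat-host (a hypothetical name), the copy can be as simple as this, and you then run the script from wherever you placed it:
$ scp $ORACLE_HOME/rdbms/admin/dbmsrmansys.sql oracle@rcat-host:/tmp/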

SQL> @$ORACLE_HOME/rdbms/admin/dbmsrmansys.sql
The script complains about the lack of two other scripts, but the catalog upgrade runs just fine afterwards. Note that RMAN asks you to enter UPGRADE CATALOG a second time to confirm the upgrade, which is why the command appears twice below.
$ rman CATALOG rman@catalog
recovery catalog database Password:
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;
RMAN> EXIT;
Thanks,

Alfredo

RMAN jobs not working after OEM upgrade to 12.1.0.4

If you are planning to upgrade your OEM to 12.1.0.4 and you have RMAN jobs scheduled in Cloud Control, you should consider applying patch 19519190 to the OMS. I noticed that most of the RMAN jobs were having issues and, even worse, some steps were empty!

Obviously, the jobs were succeeding because the steps were empty. In other words, the jobs were doing nothing.

It looks like this patch is not part of any PSU yet. But having a problem with hundreds of jobs, and especially with RMAN jobs, is very risky.

Take a look at EM 12c: RMAN Step Commands are Being Removed from Multi-step RMAN Script Jobs in Enterprise Manager 12.1.0.4 Cloud Control (Doc ID 1914916.1).

Thanks,

Alfredo