
RMAN : Duplicate database without connecting to target

Using "backup location", I do not need to connect to target database, but how do I know what "until time" is available in the backup pieces? one way is to find the backup piece information from target database by using "list backup...", but what if I do not have the target database at all? while, I can arbitrarily give a time, then RMAN will tell me what is the time in those back pieces, nice!

Steps captured!

1. Create the RAC database using dbca.
2. create spfile='+oradata/bcfw/spfileBCFW.ora' from pfile='/tmp/pfile.ora';
   alter system set cluster_database=false scope=spfile;  -- cluster_database is a static parameter, so scope=spfile
   (a minimal sketch of the pfile appears after the backup script comment below)
3. srvctl start instance -d dbname -i dbname1 -o nomount
4. export NLS_DATE_FORMAT='yyyymmdd hh24:mi:ss'
5. RMAN> duplicate database (see Method 1 and Method 2 below)
/* the backup was done using this:
RUN
{
  ALLOCATE CHANNEL ch11 TYPE DISK MAXPIECESIZE 1G;
  BACKUP as COMPRESSED backupset
  FORMAT '/u01/dba/jsun/%d_D_%T_%u_s%s_p%p'
  DATABASE
  PLUS ARCHIVELOG
  FORMAT '/u01/dba/jsun/%d_A_%T_%u_s%s_p%p';
  RELEASE CHANNEL ch11;
 }
*/
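
For reference, a minimal sketch of what /tmp/pfile.ora could look like for the auxiliary instance; every value below is an assumption, so adjust the names, sizes, and ASM disk groups to the actual environment:

# /tmp/pfile.ora -- minimal auxiliary pfile sketch (all values are placeholders)
db_name='BCFW'
db_block_size=8192
sga_target=2G
pga_aggregate_target=512M
control_files='+oraDATA01/bcfw/control01.ctl','+oraFLASH01/bcfw/control02.ctl'
db_create_file_dest='+oraDATA01'
db_recovery_file_dest='+oraFLASH01'
db_recovery_file_dest_size=100G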

Duplicate the database:
Method 1: get the UNTIL SCN from the target database.

export NLS_DATE_FORMAT='yyyymmdd hh24:mi:ss'
rman>
 list archivelog time between 'sysdate-1' and 'sysdate';
 list backup of archivelog time between 'sysdate-1' and 'sysdate';
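
If SQL*Plus access to the target is still available, the cutoff can also be read straight from the dictionary; a sketch using standard views (which cutoff to pick is a judgment call):

-- highest SCN covered by archived logs that have been backed up
select max(next_change#) from v$archived_log where backup_count > 0;
-- or simply the current SCN, if all archivelogs up to now are in the backup
select current_scn from v$database;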

On the auxiliary instance:
connect auxiliary  /
connect target sys/syspassword@targetdb
RUN
{
  ALLOCATE auxiliary  CHANNEL ch11 TYPE DISK MAXPIECESIZE 1G;
  set until scn 9776681621703;
  duplicate target database to 'BCFW' NOFILENAMECHECK;
  RELEASE   CHANNEL ch11;
 }

Method 2: duplicate the database using BACKUP LOCATION, without connecting to the target database

export NLS_DATE_FORMAT='yyyymmdd hh24:mi:ss'
connect auxiliary  /
RUN
{
  ALLOCATE auxiliary  CHANNEL ch11 TYPE DISK MAXPIECESIZE 1G;
  duplicate database to 'BCFW' NOFILENAMECHECK until time "to_date('04-04-2019 13:31:18','dd-mm-yyyy hh24:mi:ss')" backup location '/u01/dba/jsun';
  RELEASE   CHANNEL ch11;
}

RMAN fails with errors in the memory script:
RMAN-06617: UNTIL TIME (20190404:13:31:18) is ahead of last NEXT TIME in archived logs (20190404:10:56:44)

Bingo! Now we get the correct time from the error: 20190404:10:56:44.

RUN
{
  ALLOCATE auxiliary  CHANNEL ch11 TYPE DISK MAXPIECESIZE 1G;
  duplicate database to 'BCFW' NOFILENAMECHECK until time "to_date('04-04-2019 10:56:44','dd-mm-yyyy hh24:mi:ss')" backup location '/u01/dba/jsun';
  RELEASE   CHANNEL ch11;
}

The database duplication finished successfully!
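
Before converting back to RAC, a quick sanity check on the duplicated database (a sketch, run in SQL*Plus on the auxiliary host):

select name, dbid, open_mode, resetlogs_time from v$database;
select instance_name, status from v$instance;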

6. Convert the new database back to RAC and clean up the redo and undo configuration:
alter system set cluster_database=true scope=spfile;
select group#,member,status from gv$logfile;
select group#,thread#,bytes,status from gv$log;
alter database drop logfile group 21;  -- ... drop the redundant redo log groups
alter database add logfile thread 2 group 4 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database add logfile thread 2 group 5 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database add logfile thread 2 group 6 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database add logfile thread 3 group 7 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database add logfile thread 3 group 8 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database add logfile thread 3 group 9 ('+oraDATA01','+oraFLASH01') size 524288000 reuse;
alter database enable public thread 2;
alter database enable public thread 3;
show parameter undo
show parameter spfile    -- if the instance is using the wrong spfile, correct that
create pfile='/u01/dba/jsun/d' from spfile;  -- check the parameters and make sure they are correct
select tablespace_name,file_name, bytes,autoextensible from dba_data_files where tablespace_name like '%UNDO%';
drop tablespace undotbs5; ...  -- drop the redundant undo tablespaces
-- create undo tablespace UNDOTBS2 datafile '+oraDATA01' size 500M;  (per-instance undo sketch below)
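
If the extra instances need their own undo, a sketch of creating and assigning per-instance undo tablespaces (the tablespace and SID names are assumptions based on the naming above):

create undo tablespace UNDOTBS2 datafile '+oraDATA01' size 500M;
create undo tablespace UNDOTBS3 datafile '+oraDATA01' size 500M;
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='BCFW2';
alter system set undo_tablespace='UNDOTBS3' scope=spfile sid='BCFW3';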

7. srvctl start database -d BCFW; srvctl status database -d BCFW
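
Once srvctl reports the database as running, all instances can be verified in one query (a sketch):

select inst_id, instance_name, host_name, status from gv$instance order by inst_id;
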
8. srvctl add service -d BCFW -s BCFW_TAF -r "BCFW1,BCFW2,BCFW3"
srvctl modify service -d BCFW -s BCFW_TAF -e SELECT -m BASIC  -z 5 -w 0
/* options for srvctl modify service
 -P {NONE | BASIC | PRECONNECT} TAF policy specification
 -l <role> Role of the service (primary, physical_standby, logical_standby, snapshot_standby)
 -y <policy> Management policy for the service (AUTOMATIC or MANUAL)
 -e <Failover type> Failover type (NONE, SESSION, or SELECT)
 -m <Failover method> Failover method (NONE or BASIC)
 -w <integer> Failover delay
 -z <integer> Failover retries
*/
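
After adding the service, it still needs to be started; a sketch of starting and checking it with the standard srvctl subcommands:

srvctl start service -d BCFW -s BCFW_TAF
srvctl status service -d BCFW -s BCFW_TAF
srvctl config service -d BCFW -s BCFW_TAF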
