Sunday, December 19, 2010

Recovery Fundamentals: SCN

This post explains various recovery fundamentals and how recovery works.

We will start by looking at various SCNs and where they are stored.

There are basically three SCNs recorded in the control file:

1. Checkpoint SCN
2. Stop SCN
3. Thread checkpoint SCN

The checkpoint SCN is the SCN recorded for a datafile when a checkpoint happens against that datafile. The same checkpoint SCN is recorded in the datafile header as well.

The stop SCN is the SCN recorded in the control file when a datafile is placed in begin backup mode or taken offline. It marks the checkpoint at the point when the datafile header is frozen.

The thread checkpoint SCN is the one related to the online redo log files. It is recorded whenever transactions get written to the online redo log file.
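
You can observe the thread checkpoint SCN through the v$thread view. A quick illustrative query (output omitted; values differ per system):

SQL> select thread#, checkpoint_change#, checkpoint_time
2 from v$thread;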

When we shut down the database with the normal or immediate option, all these SCNs are synchronized and made equal.

Let's take a quick example:

1) System checkpoint SCN in controlfile

SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
3700901
2) Datafile checkpoint SCN in controlfile

SQL> select name, checkpoint_change# from v$datafile
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
CHECKPOINT_CHANGE#
------------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf
3700901

3) Stop SCN in control file

SQL> select name, last_change# from v$datafile
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
LAST_CHANGE#
------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf

4) Start SCN in datafile header

SQL> select name, checkpoint_change# from v$datafile_header
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
CHECKPOINT_CHANGE#
------------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf
3700901

Shut down the database now and start in mount mode

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 243276104 bytes
Database Buffers 822083584 bytes
Redo Buffers 6397952 bytes
Database mounted.
SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
3722204

SQL> select name, checkpoint_change# , last_change# from v$datafile
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
CHECKPOINT_CHANGE# LAST_CHANGE#
------------------ ------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf
3722204 3722204

All these SCN values come from the control file. Earlier you saw that last_change# in v$datafile was NULL, but when we shut down the database this value got updated to the same value as checkpoint_change#. This last_change# is the stop SCN, and checkpoint_change# is the start SCN. So when we shut down the database cleanly, Oracle runs a checkpoint and makes start SCN = stop SCN.

Let's check the SCN in the datafile header:

SQL> select name, checkpoint_change# from v$datafile_header
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
CHECKPOINT_CHANGE#
------------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf
3722204
So here we see that the datafile header has the same checkpoint SCN as the system checkpoint number.

How does Oracle decide whether recovery is required?

When the database is started, Oracle compares the system SCN stored in the control file with the SCN in each datafile header. If those match, it then checks the start SCN and stop SCN for each datafile; if those are also equal, it opens the database, otherwise it asks for recovery.
Also, as soon as we open the database, the last_change# in v$datafile is set back to NULL.
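
A quick way to verify this (a sketch, not output from the session above; the datafile name is the one used earlier):

SQL> alter database open;
SQL> select name, last_change# from v$datafile
2 where name like '%htmldb%';

Once the database is open, last_change# shows NULL again.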

Now shutdown the database with abort option.

SQL> shut abort
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 243276104 bytes
Database Buffers 822083584 bytes
Redo Buffers 6397952 bytes
Database mounted.
SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
3722206

So we can see that the system checkpoint SCN is 3722206.

SQL> select name, checkpoint_change# , last_change# from v$datafile
2 where name like '%htmldb%';

NAME
--------------------------------------------------------------------------------
CHECKPOINT_CHANGE# LAST_CHANGE#
------------------ ------------
/dy/oracle/product/db10g/dbf/htmldb01.dbf
3722206
Here you can see that the datafile header checkpoint SCN is also 3722206, but the stop SCN in the control file is NULL. If a shutdown checkpoint had happened, it would have updated the stop SCN in the control file. But since we used "shut abort", no checkpoint happened during shutdown. This is the situation called "crash recovery": the start SCN in the datafile header and the stop SCN in the control file do not match. This kind of situation is taken care of automatically by Oracle. When you open the database, Oracle applies the transactions from the redo log files, rolls back uncommitted changes using undo, and recovers the database. The problem happens when the system SCN does not match the datafile header start SCN; that case requires media recovery.

During database startup, stop SCN = NULL => crash recovery is needed (handled automatically at open; see the sketch below)
During database startup, datafile header start SCN != system SCN in control file => media recovery is needed
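
A convenient way to see which files need recovery is v$datafile_header: the RECOVER column tells whether a file needs media recovery, and FUZZY indicates the file was not checkpointed cleanly (as after a shut abort). An illustrative query:

SQL> select file#, status, error, recover, fuzzy, checkpoint_change#
2 from v$datafile_header;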

When doing media recovery, we can have two situations.

1) The datafile header SCN is less than the datafile SCN stored in the control file.

So when you open the database, Oracle compares the SCN of each datafile recorded in the datafile header with the one in the control file. If the SCNs match, it opens the datafile; otherwise it asks for recovery. When it asks for recovery, it looks at the start SCN in the datafile header: from that SCN onwards the file needs recovery, so all the logs containing that SCN and beyond are required for the recovery.
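
You can also ask Oracle directly which files need media recovery and from which SCN onwards, using the v$recover_file view; its change# column is the SCN from which redo must be applied. A sketch, reusing the datafile path from this post:

SQL> select file#, online_status, change#, error from v$recover_file;
SQL> recover datafile '/dy/oracle/product/db10g/dbf/htmldb01.dbf';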

2) The datafile header SCN is greater than the datafile SCN stored in the control file.

This kind of situation happens when you restore a backup control file, or when you are recovering using a "backup controlfile". In such a situation, since the datafile header SCN is higher than the one in the control file, Oracle doesn't really know up to what SCN to recover. So you tell Oracle that you are using a "backup controlfile" and that you will tell it when to stop applying redo by replying "cancel". When Oracle starts recovery, it looks at the datafile headers to find the last time a checkpoint was performed on each datafile, so it knows to start applying recovery from the SCN in the datafile header onwards. But it does not know when to stop, and eventually it applies the redo in all of your archived redo logs. You can then point Oracle at the redo in the online redo logs. When Oracle asks where to find more redo, you tell it to quit applying redo by replying CANCEL.
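
A minimal sketch of such a recovery, assuming the backup control file and the datafiles have already been restored:

SQL> startup mount
SQL> recover database until cancel using backup controlfile;
-- apply the archived (and, if needed, online) redo logs as Oracle suggests them;
-- when there is no more redo to apply, reply CANCEL
SQL> alter database open resetlogs;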

Once we open the database in resetlogs mode, the SCNs in the datafiles and control file are synchronized, and the log sequence number is reset to 1.
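
You can confirm the synchronization and the reset afterwards with a couple of illustrative queries:

SQL> select resetlogs_change#, resetlogs_time from v$database;
SQL> select group#, sequence#, status from v$log;

The log sequence numbers start again from 1.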

Why is archive generation huge in begin backup mode?

Many of you must have heard or experienced that while taking a hot backup of a database, the LGWR process writes aggressively. That is, more redo data gets written to the redo log files and consequently more archive logs get generated.

Here is the common misconception we have in mind. If someone asks why excessive redo and archive logs get generated when we start a hot backup of the database, we quickly answer: it's simple, when we put a tablespace in hot backup mode, Oracle takes a checkpoint of the tablespace and the datafiles belonging to it are frozen. Any user activity on objects in this tablespace won't write data to these datafiles; instead it writes to the redo log files. So obviously there will be more redo log generation.

Well, to some extent this is COMPLETELY WRONG !!!

I will come straight to the point and explain what happens when we put a tablespace in hot backup mode.

Your first assumption, that the datafiles belonging to a tablespace in hot backup mode are frozen, is wrong. The datafiles are not frozen; only the datafile headers are frozen!! So when you put the tablespace in backup mode, Oracle takes a checkpoint and updates the datafile headers with the checkpoint SCN, and thereafter the headers are frozen until we take the tablespace out of backup mode.

The rest of the datafile (everything other than the header) behaves as normal, and data changes continue to be written to it.
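
You can watch this yourself. In the sketch below, USERS is just an example tablespace: v$backup shows ACTIVE for files in backup mode, and checkpoint_change# in v$datafile_header stays frozen for those files even while DML keeps changing their contents:

SQL> alter tablespace users begin backup;
SQL> select file#, status, change# from v$backup;
SQL> select file#, checkpoint_change# from v$datafile_header;
-- run some DML against objects in USERS, then re-run the query:
SQL> select file#, checkpoint_change# from v$datafile_header;
SQL> alter tablespace users end backup;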

Now you may want to ask: "Do you mean to say that datafiles get updated continuously even while we are copying them to the backup location?" The answer is YES. Never think that the datafile you are copying is consistent. No, the datafiles change continuously!!!

You might want to ask a couple more questions then.

1) If the backup file is not consistent and changes continuously, then how is Oracle able to recover the database when we restore that datafile?

2) If data changes are happening continuously on the datafiles anyway, why is there excess redo log generation?

That's it!! Don't ask me more than this. Let me explain the answers to these questions.

Consider a typical case where an Oracle database is installed on a Linux platform. The standard Oracle block size is 8K, and let's say the OS-level block size is 512 bytes. Now when we put the tablespace in "begin backup" mode, a checkpoint happens and the datafile header is frozen. You find the files belonging to this tablespace and start copying them using an OS command. An OS copy works per OS block size. Let's say that when you start copying, the OS gives you 8 blocks to copy; that means you are copying 4K (512 bytes x 8) to the backup location, i.e., half an Oracle block. This copy process can be preempted depending on CPU load. Say that after copying those 8 blocks (4K, half an Oracle block) your process gets preempted, and CPU time is allocated to some other process. Meanwhile, DBWR changes the block that you have copied halfway (since the datafile is not frozen and only the header is frozen, updates continue to be written to the datafile).

After a while your process gets the CPU back and copies the next 8 blocks (the rest of the half Oracle block). Now here is the problem!!! We copied half of the Oracle block as it was at time T0 and the other half as it was at time T1, and in between the block changed. Does this sound consistent? Not to me!! Such a block is called a "fractured block".

Well, since datafiles get copied like this, Oracle has to do something so that recovery won't face any problem.

Normally, for a tablespace that is not in begin backup mode, when a transaction happens Oracle generates redo information and puts it in the redo log file. This is the bare minimum information needed to redo the change; it does not copy the complete block. Whereas in begin backup mode, if a transaction changes any block FOR THE FIRST TIME, Oracle copies the complete block to the redo log file. This happens only the first time; if a subsequent transaction updates the same block again, Oracle does not copy the complete block to redo, it generates only the minimum change information. Because Oracle has to log the complete block the first time it changes in begin backup mode, we say that excess redo gets generated when we put a tablespace in begin backup mode.
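
You can measure this with the 'redo size' session statistic. A sketch, where USERS is an example tablespace and test_tab a hypothetical table in it; compare the redo delta for the same update with and without backup mode:

SQL> select n.name, s.value
2 from v$mystat s, v$statname n
3 where s.statistic# = n.statistic#
4 and n.name = 'redo size';
SQL> alter tablespace users begin backup;
SQL> update test_tab set col1 = col1 where rownum <= 100;
SQL> commit;
-- re-run the redo size query: the first change to each block logged a full block image
SQL> alter tablespace users end backup;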

The question arises: why does Oracle have to copy the complete block to the redo log files? As you have seen above, during the copy of a datafile there can be many fractured blocks, and during restore and recovery Oracle puts those blocks back and tries to recover them. Now assume a block is fractured and Oracle has only the minimal change information in the redo. Under such a condition it would not be possible for Oracle to recover that block. So instead, Oracle simply copies the entire block image from the redo log files back into the datafile during the recovery process. This makes the datafile consistent. So the recovery process is what takes care of all the fractured blocks and makes it possible to recover the database.

I hope this explains the above two questions.

Now you can easily explain why hot backup is not possible when the database is in NOARCHIVELOG mode.

When you take a backup using RMAN, it does not generate excessive redo. The reason is simple: RMAN is intelligent. It does not copy per OS block; it reads datafiles in units of Oracle blocks and verifies each block as it copies (re-reading any block it catches mid-change), so the backed-up blocks are consistent and full block images never need to be logged.
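
So with RMAN there is no need for begin/end backup at all. A minimal sketch (USERS is an example tablespace):

RMAN> # no 'alter tablespace ... begin backup' needed; RMAN reads whole Oracle blocks
RMAN> backup tablespace users;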

Hope this helps !!

Monday, December 13, 2010

Shrinking listener logfile online.

% cd /u01/app/oracle/product/9.2.0/network/log
% lsnrctl set log_status off
% mv listener.log listener.old
% lsnrctl set log_status on
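
If you are not sure which logfile the listener is currently writing to, you can check it first (a quick sketch):

% lsnrctl
LSNRCTL> show log_file
LSNRCTL> exit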


Wednesday, July 28, 2010

RMAN Tablespace point-in-time recovery

In this method, tablespaces containing SYS- or SYSTEM-owned segments can't be skipped. They can be listed with:
======================================================
select tablespace_name, count(*) from dba_extents where owner in ('SYS','SYSTEM') group by tablespace_name;




Tablespace point-in-time recovery:
===================================
08:58:01 SQL> select count(*) from bill;

COUNT(*)
----------
9

==========
Parameter file :
================
*.audit_file_dest='/usr/app/oracle/product/10.2.0/admin/dummy/adump'
*.background_dump_dest='/usr/app/oracle/product/10.2.0/admin/dummy/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA/dummy/controlfile/dummy.ctl'
*.core_dump_dest='/usr/app/oracle/product/10.2.0/admin/dummy/cdump'
*.db_block_size=8192
*.db_name='dummy'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sourceXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.sga_target=167772160
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/usr/app/oracle/product/10.2.0/admin/dummy/udump'
db_file_name_convert='+DATA/source','+DATA/dummy'
log_file_name_convert='+DATA/source','+DATA/dummy'

Source database datafiles list :
====================================
FILE_NAME FILE_ID
--------------------------------------------- ----------
+DATA/source/datafile/users.269.714904761 4
+DATA/source/datafile/sysaux.267.714904759 3
+DATA/source/datafile/undotbs1.268.714904761 2
+DATA/source/datafile/system.266.714904757 1
+DATA/source/datafile/ranji.276.714905559 5
+DATA/source/datafile/tools.277.714905603 6
Archive log file generation history on the source database:
=======================================================
SQL> select name,stamp,completion_time,sequence# from v$archived_log;

NAME STAMP COMPLETION_TIME SEQUENCE#
--------------------------------------------- ---------- -------------------- ----------
+DATA/source/1_1_714904885.dbf 714906459 MAR 29 2010 09:07:39 1
+DATA/source/1_2_714904885.dbf 714906674 MAR 29 2010 09:11:14 2
+DATA/source/1_3_714904885.dbf 714911401 MAR 29 2010 10:30:01 3
+DATA/source/1_4_714904885.dbf 714911868 MAR 29 2010 10:37:48 4
+DATA/source/1_5_714904885.dbf 714912166 MAR 29 2010 10:42:46 5
+DATA/source/1_6_714904885.dbf 714913375 MAR 29 2010 11:02:55 6
+DATA/source/1_7_714904885.dbf 714913652 MAR 29 2010 11:07:32 7

7 rows selected.



RMAN Script:
============
run {
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;
set until time "to_date('Mar 29 2010 09:11:00','MON DD YYYY HH24:MI:SS')";
set newname for datafile 1 to '+DATA/dummy/datafile/system.dbf';
set newname for datafile 2 to '+DATA/dummy/datafile/undotbs1.dbf';
set newname for datafile 3 to '+DATA/dummy/datafile/sysaux.dbf';
set newname for datafile 5 to '+DATA/dummy/datafile/ranji.dbf';
duplicate target database to dummy skip tablespace TOOLS
logfile
group 1('+DATA/dummy/onlinelog/group_1.log') size 10m reuse,
group 2('+DATA/dummy/onlinelog/group_2.log') size 10m reuse,
group 3('+DATA/dummy/onlinelog/group_3.log') size 10m reuse;
}

Output :
=========
RMAN> run {
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;
set until time "to_date('Mar 29 2010 09:11:00','MON DD YYYY HH24:MI:SS')";
set newname for datafile 1 to '+DATA/dummy/datafile/system.dbf';
set newname for datafile 2 to '+DATA/dummy/datafile/undotbs1.dbf';
set newname for datafile 3 to '+DATA/dummy/datafile/sysaux.dbf';
set newname for datafile 5 to '+DATA/dummy/datafile/ranji.dbf';
duplicate target database to dummy skip tablespace TOOLS
logfile
group 1('+DATA/dummy/onlinelog/group_1.log') size 10m reuse,
group 2('+DATA/dummy/onlinelog/group_2.log') size 10m reuse,
group 3('+DATA/dummy/onlinelog/group_3.log') size 10m reuse;
}
2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>
allocated channel: c1
channel c1: sid=156 devtype=DISK

allocated channel: c2
channel c2: sid=155 devtype=DISK

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting Duplicate Db at 29-MAR-10
Datafile 6 skipped by request

contents of Memory Script:
{
set until scn 454075;
set newname for datafile 1 to
"+DATA/dummy/datafile/system.dbf";
set newname for datafile 2 to
"+DATA/dummy/datafile/undotbs1.dbf";
set newname for datafile 3 to
"+DATA/dummy/datafile/sysaux.dbf";
set newname for datafile 4 to
"+DATA/dummy/datafile/users.269.714904761";
set newname for datafile 5 to
"+DATA/dummy/datafile/ranji.dbf";
restore
check readonly
clone database
skip tablespace TOOLS ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 29-MAR-10

channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DATA/dummy/datafile/system.dbf
restoring datafile 00002 to +DATA/dummy/datafile/undotbs1.dbf
restoring datafile 00003 to +DATA/dummy/datafile/sysaux.dbf
restoring datafile 00004 to +DATA/dummy/datafile/users.269.714904761
restoring datafile 00005 to +DATA/dummy/datafile/ranji.dbf
channel c1: reading from backup piece /crs/backups/02l9p6rs.bkp
channel c1: restored backup piece 1
piece handle=/crs/backups/02l9p6rs.bkp tag=TAG20100329T090811
channel c1: restore complete, elapsed time: 00:01:59
Finished restore at 29-MAR-10
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DUMMY" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( '+DATA/dummy/onlinelog/group_1.log' ) SIZE 10 M REUSE,
GROUP 2 ( '+DATA/dummy/onlinelog/group_2.log' ) SIZE 10 M REUSE,
GROUP 3 ( '+DATA/dummy/onlinelog/group_3.log' ) SIZE 10 M REUSE
DATAFILE
'+DATA/dummy/datafile/system.dbf'
CHARACTER SET WE8ISO8859P1


contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=714917673 filename=+DATA/dummy/datafile/undotbs1.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=714917673 filename=+DATA/dummy/datafile/sysaux.dbf
datafile 5 switched to datafile copy
input datafile copy recid=3 stamp=714917673 filename=+DATA/dummy/datafile/ranji.dbf
datafile 4 switched to datafile copy
input datafile copy recid=4 stamp=714917673 filename=+DATA/dummy/datafile/users.285.714917561

contents of Memory Script:
{
set until time "to_date('Mar 29 2010 09:11:00','MON DD YYYY HH24:MI:SS')";
recover
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 29-MAR-10
datafile 6 not processed because file is offline

starting media recovery

archive log thread 1 sequence 2 is already on disk as file +DATA/source/1_2_714904885.dbf
archive log filename=+DATA/source/1_2_714904885.dbf thread=1 sequence=2
media recovery complete, elapsed time: 00:00:01
Finished recover at 29-MAR-10

contents of Memory Script:
{
shutdown clone;
startup clone nomount ;
}
executing Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 167772160 bytes

Fixed Size 1218316 bytes
Variable Size 62916852 bytes
Database Buffers 100663296 bytes
Redo Buffers 2973696 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DUMMY" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( '+DATA/dummy/onlinelog/group_1.log' ) SIZE 10 M REUSE,
GROUP 2 ( '+DATA/dummy/onlinelog/group_2.log' ) SIZE 10 M REUSE,
GROUP 3 ( '+DATA/dummy/onlinelog/group_3.log' ) SIZE 10 M REUSE
DATAFILE
'+DATA/dummy/datafile/system.dbf'
CHARACTER SET WE8ISO8859P1


contents of Memory Script:
{
set newname for tempfile 1 to
"+DATA/dummy/tempfile/temp.274.714904919";
switch clone tempfile all;
catalog clone datafilecopy "+DATA/dummy/datafile/undotbs1.dbf";
catalog clone datafilecopy "+DATA/dummy/datafile/sysaux.dbf";
catalog clone datafilecopy "+DATA/dummy/datafile/users.285.714917561";
catalog clone datafilecopy "+DATA/dummy/datafile/ranji.dbf";
switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed temporary file 1 to +DATA/dummy/tempfile/temp.274.714904919 in control file

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/undotbs1.dbf recid=1 stamp=714917764

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/sysaux.dbf recid=2 stamp=714917765

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/users.285.714917561 recid=3 stamp=714917766

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/ranji.dbf recid=4 stamp=714917766

datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=714917764 filename=+DATA/dummy/datafile/undotbs1.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=714917765 filename=+DATA/dummy/datafile/sysaux.dbf
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=714917766 filename=+DATA/dummy/datafile/ranji.dbf
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=714917766 filename=+DATA/dummy/datafile/users.285.714917561

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
# drop offline and skipped tablespaces
sql clone "drop tablespace TOOLS including contents cascade constraints";
}
executing Memory Script

sql statement: drop tablespace TOOLS including contents cascade constraints
Finished Duplicate Db at 29-MAR-10

RMAN> exit


Recovery Manager complete.


Single tablespace recovery:
=============================
run {
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;
set newname for datafile 1 to '+DATA/dummy/datafile/system.dbf';
set newname for datafile 2 to '+DATA/dummy/datafile/undotbs1.dbf';
set newname for datafile 3 to '+DATA/dummy/datafile/sysaux.dbf';
set newname for datafile 5 to '+DATA/dummy/datafile/ranji.dbf';
set newname for datafile 4 to '+DATA/dummy/datafile/user.dbf';
duplicate target database to dummy skip tablespace TOOLS
logfile
group 1('+DATA/dummy/onlinelog/group_1.log') size 10m reuse,
group 2('+DATA/dummy/onlinelog/group_2.log') size 10m reuse,
group 3('+DATA/dummy/onlinelog/group_3.log') size 10m reuse;
}

Output :
==========
[oracle@localhost dbs]$ rman target sys/source@source auxiliary /

Recovery Manager: Release 10.2.0.1.0 - Production on Mon Mar 29 11:34:01 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: SOURCE (DBID=2833265521)
connected to auxiliary database: DUMMY (not mounted)

RMAN> run {
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;
set newname for datafile 1 to '+DATA/dummy/datafile/system.dbf';
set newname for datafile 2 to '+DATA/dummy/datafile/undotbs1.dbf';
set newname for datafile 3 to '+DATA/dummy/datafile/sysaux.dbf';
set newname for datafile 5 to '+DATA/dummy/datafile/ranji.dbf';
set newname for datafile 4 to '+DATA/dummy/datafile/user.dbf';
duplicate target database to dummy skip tablespace TOOLS
logfile
group 1('+DATA/dummy/onlinelog/group_1.log') size 10m reuse,
group 2('+DATA/dummy/onlinelog/group_2.log') size 10m reuse,
group 3('+DATA/dummy/onlinelog/group_3.log') size 10m reuse;
}
2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=156 devtype=DISK

allocated channel: c2
channel c2: sid=155 devtype=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting Duplicate Db at 29-MAR-10
Datafile 6 skipped by request

contents of Memory Script:
{
set until scn 463518;
set newname for datafile 1 to
"+DATA/dummy/datafile/system.dbf";
set newname for datafile 2 to
"+DATA/dummy/datafile/undotbs1.dbf";
set newname for datafile 3 to
"+DATA/dummy/datafile/sysaux.dbf";
set newname for datafile 4 to
"+DATA/dummy/datafile/user.dbf";
set newname for datafile 5 to
"+DATA/dummy/datafile/ranji.dbf";
restore
check readonly
clone database
skip tablespace TOOLS ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 29-MAR-10

channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00002 to +DATA/dummy/datafile/undotbs1.dbf
restoring datafile 00003 to +DATA/dummy/datafile/sysaux.dbf
restoring datafile 00005 to +DATA/dummy/datafile/ranji.dbf
channel c1: reading from backup piece /crs/backups/0fl9pdjq.bkp
channel c2: starting datafile backupset restore
channel c2: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DATA/dummy/datafile/system.dbf
restoring datafile 00004 to +DATA/dummy/datafile/user.dbf
channel c2: reading from backup piece /crs/backups/0el9pdjq.bkp
channel c1: restored backup piece 1
piece handle=/crs/backups/0fl9pdjq.bkp tag=TAG20100329T110321
channel c1: restore complete, elapsed time: 00:02:05
channel c2: restored backup piece 1
piece handle=/crs/backups/0el9pdjq.bkp tag=TAG20100329T110321
channel c2: restore complete, elapsed time: 00:02:30
Finished restore at 29-MAR-10
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DUMMY" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( '+DATA/dummy/onlinelog/group_1.log' ) SIZE 10 M REUSE,
GROUP 2 ( '+DATA/dummy/onlinelog/group_2.log' ) SIZE 10 M REUSE,
GROUP 3 ( '+DATA/dummy/onlinelog/group_3.log' ) SIZE 10 M REUSE
DATAFILE
'+DATA/dummy/datafile/system.dbf'
CHARACTER SET WE8ISO8859P1


contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=714915496 filename=+DATA/dummy/datafile/undotbs1.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=714915497 filename=+DATA/dummy/datafile/sysaux.dbf
datafile 5 switched to datafile copy
input datafile copy recid=3 stamp=714915497 filename=+DATA/dummy/datafile/ranji.dbf
datafile 4 switched to datafile copy
input datafile copy recid=4 stamp=714915497 filename=+DATA/dummy/datafile/user.dbf

contents of Memory Script:
{
set until scn 463518;
recover
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 29-MAR-10
datafile 6 not processed because file is offline

starting media recovery

archive log thread 1 sequence 7 is already on disk as file +DATA/source/1_7_714904885.dbf
archive log filename=+DATA/source/1_7_714904885.dbf thread=1 sequence=7
media recovery complete, elapsed time: 00:00:10
Finished recover at 29-MAR-10

contents of Memory Script:
{
shutdown clone;
startup clone nomount ;
}
executing Memory Script

database dismounted

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 167772160 bytes

Fixed Size 1218316 bytes
Variable Size 62916852 bytes
Database Buffers 100663296 bytes
Redo Buffers 2973696 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DUMMY" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( '+DATA/dummy/onlinelog/group_1.log' ) SIZE 10 M REUSE,
GROUP 2 ( '+DATA/dummy/onlinelog/group_2.log' ) SIZE 10 M REUSE,
GROUP 3 ( '+DATA/dummy/onlinelog/group_3.log' ) SIZE 10 M REUSE
DATAFILE
'+DATA/dummy/datafile/system.dbf'
CHARACTER SET WE8ISO8859P1


contents of Memory Script:
{
set newname for tempfile 1 to
"+DATA/dummy/tempfile/temp.274.714904919";
switch clone tempfile all;
catalog clone datafilecopy "+DATA/dummy/datafile/undotbs1.dbf";
catalog clone datafilecopy "+DATA/dummy/datafile/sysaux.dbf";
catalog clone datafilecopy "+DATA/dummy/datafile/user.dbf";
catalog clone datafilecopy "+DATA/dummy/datafile/ranji.dbf";
switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed temporary file 1 to +DATA/dummy/tempfile/temp.274.714904919 in control file

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/undotbs1.dbf recid=1 stamp=714915644

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/sysaux.dbf recid=2 stamp=714915647

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/user.dbf recid=3 stamp=714915648

cataloged datafile copy
datafile copy filename=+DATA/dummy/datafile/ranji.dbf recid=4 stamp=714915649

datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=714915644 filename=+DATA/dummy/datafile/undotbs1.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=714915647 filename=+DATA/dummy/datafile/sysaux.dbf
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=714915649 filename=+DATA/dummy/datafile/ranji.dbf
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=714915648 filename=+DATA/dummy/datafile/user.dbf

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script


database opened

contents of Memory Script:
{
# drop offline and skipped tablespaces
sql clone "drop tablespace TOOLS including contents cascade constraints";
}
executing Memory Script

sql statement: drop tablespace TOOLS including contents cascade constraints
Finished Duplicate Db at 29-MAR-10

RMAN>
RMAN>
RMAN>
