
Tuesday, November 6, 2007

9iR2 to 10gR1 DB Manual upgrade

Step 1 - Cross-check the source database: component registry and version.

SQL> column COMP_ID format a20
SQL> column COMP_NAME format a40
SQL> column version format a15
SQL> column status format a15
SQL> select COMP_ID,COMP_NAME,VERSION,status from dba_registry;

COMP_ID COMP_NAME VERSION STATUS
-------------------- ---------------------------------------- --------------- ------------
CATALOG Oracle9i Catalog Views 9.2.0.6.0 VALID
CATPROC Oracle9i Packages and Types 9.2.0.6.0 VALID

SQL> select banner from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE 9.2.0.6.0 Production
TNS for Solaris: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production


Step 2 - Log in to the system as the owner of the ORACLE_HOME directory of the
new Oracle Database 10g release and copy the Upgrade Information Tool,
ORACLE_HOME/rdbms/admin/utlu101i.sql, to a working directory such as /tmp.
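
For example (the 10g home path below is an assumption; substitute your own):

$ cp /oracle/product/10.1.0/rdbms/admin/utlu101i.sql /tmp/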

Then set the environment for the source (9i) database and run the tool from SQL*Plus:

PS1=[$PWD]
export ORACLE_SID=PRODCOPY
export ORACLE_HOME=/oracle/product/9.2.0
export LD_LIBRARY_PATH=/oracle/product/9.2.0/lib32
export LD_LIBRARY_PATH_64=/oracle/product/9.2.0/lib

bash-2.05$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.6.0 - Production on Tue Nov 6 11:43:07 2007

SQL> spool /tmp/utlu101i.txt
SQL> @/tmp/utlu101i.sql
Oracle Database 10.1 Upgrade Information Tool 11-06-2007 11:56:35
.
*************************************************************************
Database:
---------
--> name: PRODCOPY
--> version: 9.2.0.6.0
--> compatibility: 9.2.0
.
*************************************************************************
Logfiles: [make adjustments in the current environment]
----------------------------------------------------
-- The existing log files are adequate. No changes are required.
.
*************************************************************************
Tablespaces: [make adjustments in the current environment]
----------------------------------------------------------
--> SYSTEM tablespace is adequate for the upgrade.
.... owner: SYS
.... minimum required size: 1323 MB
.
*************************************************************************
Options: [present in existing database]
---------------------------------------
--> Partitioning
WARNING: Listed option(s) must be installed with Oracle Database 10.1
.
*************************************************************************
Update Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-------------------------------------------------------------------
WARNING: --> "shared_pool_size" needs to be increased to at least "150944944"
WARNING: --> "pga_aggregate_target" needs to be increased to at least "25165824"
WARNING: --> "large_pool_size" needs to be increased to at least "8388608"
WARNING: --> "java_pool_size" needs to be increased to at least "50331648"
.
*************************************************************************
Deprecated Parameters: [Update Oracle Database 10.1 init.ora or spfile]
-----------------------------------------------------------------------
-- No deprecated parameters found. No changes are required.
.
*************************************************************************
Obsolete Parameters: [Update Oracle Database 10.1 init.ora or spfile]
---------------------------------------------------------------------
-- No obsolete parameters found. No changes are required
.
*************************************************************************
Components: [The following database components will be upgraded or installed]
-----------------------------------------------------------------------------
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
.
*************************************************************************
.
*************************************************************************
SYSAUX Tablespace: [Create tablespace in Oracle Database 10.1 environment]
--------------------------------------------------------------------------
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
Please create the new SYSAUX Tablespace AFTER the Oracle Database
10.1 server is started and BEFORE you invoke the upgrade script.
.
*************************************************************************

Oracle Database 10g: Changes in Default Behavior
------------------------------------------------
This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In some cases the default values of some parameters have changed. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. More detailed information is in the documentation.

SQL OPTIMIZER
The Cost Based Optimizer (CBO) is now enabled by default.
* Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). See Chapter 12, "Introduction to the Optimizer," in the Oracle Database Performance Tuning Guide.
* Collection of optimizer statistics is now performed by default, automatically for all schemas (including SYS), for pre-existing databases upgraded to 10g, and for newly created 10g databases. Gathering optimizer statistics on stale objects is scheduled by default to occur daily during the maintenance window. See Chapter 15, "Managing Optimizer Statistics," in the Oracle Database Performance Tuning Guide.
* See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX, and for behavior changes in SKIP_UNUSABLE_INDEXES.

UPGRADE/DOWNGRADE
* After upgrading to 10g, the minimum supported release to downgrade to is Oracle9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from earlier releases (such as Oracle 8, Oracle 8i, or 9i R1) cannot downgrade to 9i R2 unless they first install 9i R2.
* When upgrading to 10g, by default the database remains at 9i R2 file format compatibility, so the on-disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade. See the Oracle Database Upgrade Guide.
* A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces that you, as a DBA, must maintain.

MANAGEABILITY
* Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace and is used by the database for automatic generation of performance recommendations. See Chapter 5, "Automatic Performance Statistics," in the Oracle Database Performance Tuning Guide.
* If you currently use Statspack for performance data gathering, see section 1 of the Statspack readme (spdoc.txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR.

MEMORY
* Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL). PGA_AGGREGATE_TARGET defaults to 20% of the SGA size, unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide.
* Previously, the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g, the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual.
* SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead.
* The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8 KB (it was typically 2 KB in previous releases).

TRANSACTION/SPACE
* Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows 'undropping' a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrator's Guide.
* Auto-tuning undo retention is on by default. For more information, see Chapter 10, "Managing the Undo Tablespace," in the Oracle Database Administrator's Guide.

CREATE DATABASE
* In addition to the SYSTEM tablespace, a SYSAUX tablespace is always created at database creation, and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces that you, as a DBA, must maintain. See Chapter 2, "Creating a Database," in the Oracle Database Administrator's Guide.
* In 10g, by default all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g-compatible file formats, it is not possible to downgrade this database to prior releases.
* Minimum and default logfile sizes are larger. The minimum is now 4 MB; the default is 50 MB, unless you are using Oracle Managed Files (OMF), when it is 100 MB.

PL/SQL procedure successfully completed.

SQL> spool off


Step 3 - The upgrade will leave all objects (packages, views, ...) invalid, except for tables.
All other objects must be recompiled manually.

List all objects that are not VALID before the upgrade, and spool the list so it
can be compared with the post-upgrade state later:

SQL> spool invalid_pre.lst
SQL> select substr(owner,1,12) owner, substr(object_name,1,30) object,
     substr(object_type,1,30) type, status
     from dba_objects where status <> 'VALID';
SQL> spool off


OWNER OBJECT TYPE STATUS
------------ ------------------------------ ------------------ ---------------
SYSADM PS_REQ_PO_LN_I VIEW INVALID
SYSADM PS_REQ_PO_LN_D VIEW INVALID

SQL>


Step 4 - Ensure ORACLE_SID is set to the instance you want to upgrade:
$ echo $ORACLE_SID
$ echo $ORACLE_HOME


Step 5 - As of Oracle9i, the National Character Set (NLS_NCHAR_CHARACTERSET)
is limited to UTF8 and AL16UTF16.

Note 276914.1: The National Character Set in Oracle 9i and 10g

Any other NLS_NCHAR_CHARACTERSET is no longer supported.
When upgrading, the value of NLS_NCHAR_CHARACTERSET is based
on the value currently used in the older version.
If the NLS_NCHAR_CHARACTERSET is UTF8, then it will stay UTF8.
In all other cases the NLS_NCHAR_CHARACTERSET is changed to AL16UTF16,
and - if used - N-type data (data in columns using NCHAR, NVARCHAR2 or NCLOB)
may need to be converted. The change itself is done by the upgrade script (Step 29).

If you are NOT using N-type columns *for user data*, then simply go to Step 6.
No further action is required.

(So if the following query returns no rows, go to Step 6:)

SQL> select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
     where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB')
     and OWNER not in ('SYS','SYSTEM');

no rows selected

If you have N-type columns *for user data* then check:

SQL> select * from nls_database_parameters where parameter
='NLS_NCHAR_CHARACTERSET';
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_NCHAR_CHARACTERSET AL16UTF16

SQL>

If you are using N-type columns AND your National Characterset
is UTF8 or is in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then also simply go to Step 6.
The conversion of the user data itself will then be done later (Step 34).

If you are using N-type columns AND your National Characterset is NOT
UTF8 or NOT in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

(your current NLS_NCHAR_CHARACTERSET is for example US7ASCII, WE8ISO8859P1, CL8MSWIN1251 ...)
then you have to either:
* change the tables to use CHAR, VARCHAR2 or CLOB instead of the N-types, or
* export the table(s) containing N-type columns and truncate those tables
before the upgrade (the data is imported again afterwards; see Step 34).
The recommended NLS_LANG during export is simply the NLS_CHARACTERSET,
not the NLS_NCHAR_CHARACTERSET.
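
A hedged sketch of such an export (the schema, table, password and character set
names here are examples, not from this system):

$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
$ exp system/manager tables=SCOTT.NTAB file=ntab_pre10g.dmp
SQL> truncate table SCOTT.NTAB;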


Step 6 - Check for corruption in the dictionary. Use the following commands in SQL*Plus, connected as SYS:
Set verify off
Set space 0
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql
Select 'Analyze '||object_type||' '||object_name
||' validate structure;'
from dba_objects
where owner='SYS'
and object_type in ('INDEX','TABLE','CLUSTER');
spool off
This creates a script called analyze.sql. Run it:

SQL> @analyze.sql

The script should not return any errors.


Step 7 - Ensure that all snapshot refreshes have completed successfully,
and that replication is stopped.
$ sqlplus SYS/
SQL> Select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;

no rows selected

SQL>

Step 8 - Stop the listener for the database:
$ lsnrctl
LSNRCTL> stop




Step 9 - Ensure no files need media recovery:

SQL> Select * from v$recover_file;

no rows selected

SQL>


Step 10 - Ensure no files are in backup mode:


SQL> Select * from v$backup where status!='NOT ACTIVE';

no rows selected

SQL>

This should return no rows.

Step 11- Resolve any outstanding unresolved distributed transaction:

SQL> Select * from dba_2pc_pending;

no rows selected

SQL>

If this returns rows, you should do the following ('<local_tran_id>' is a
placeholder for each transaction ID returned by the first query):

SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('<local_tran_id>');
SQL> commit;

Step 12 - Disable all batch and cron jobs.
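
For example, the cron side of this can be sketched as follows (the backup file
name is arbitrary):

$ crontab -l > /tmp/crontab.PRODCOPY.bak    (save the current schedule)
$ crontab -r                                (remove it for the duration of the upgrade)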


Step 13 - Ensure the users sys and system have 'system' as their default tablespace.

SQL> Select username, default_tablespace from dba_users where username
in ('SYS','SYSTEM');
USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
SYSTEM SYSTEM
SYS SYSTEM

SQL>

To modify use:
Alter user sys default tablespace SYSTEM;
Alter user system default tablespace SYSTEM;


Step 14 - Optionally ensure the aud$ is in the system tablespace when auditing is enabled.
SQL> Select tablespace_name from dba_tables where table_name='AUD$';

TABLESPACE_NAME
------------------------------
SYSTEM

SQL>

Step 15 - Note down where all control files are located.

SQL> select name from v$controlfile;

NAME
-------------------------------------------------------------------
/oradata/PRODCOPY/ora_control1

/oradata/PRODCOPY/ora_control2

SQL>

Step 15 (cont.) - Note down all SYSDBA users.

SQL> Select * from v$pwfile_users;

no rows selected

SQL>
(This instance uses OS authentication, which is why no rows are returned.)

If a password file is used, copy it to the new location. On UNIX the default
is $ORACLE_HOME/dbs/orapw<sid>.

On Windows NT it is %ORACLE_HOME%\database\orapw<sid>.
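
For example (the old and new home paths are assumptions):

$ cp /oracle/product/9.2.0/dbs/orapwPRODCOPY /oracle/product/10.1.0/dbs/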

Step 16 - Shutdown the database

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

Step 17 - Change the init.ora file:
- Make a backup of the init.ora file.
- Comment out obsolete parameters.
- Change all deprecated parameters.
- Make sure the COMPATIBLE initialization parameter is properly set for
the new Oracle Database 10gR1 release. The Upgrade Information Tool displays
a warning in the Database section if COMPATIBLE is not properly set.
- If you have set the parameter NLS_LENGTH_SEMANTICS to CHAR, change the value
to BYTE during the upgrade.
- Verify that the parameter DB_DOMAIN is set properly.
- Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to
at least 24 MB.
- Make sure the JAVA_POOL_SIZE initialization parameter is set to at least 48 MB.
- Ensure there is a value for DB_BLOCK_SIZE.
- Comment out the JOB_QUEUE_PROCESSES parameter and add a new entry that
explicitly sets it to zero during the upgrade.
- Comment out the AQ_TM_PROCESSES parameter and add a new entry that
explicitly sets it to zero during the upgrade.
- On Windows operating systems, change the BACKGROUND_DUMP_DEST and USER_DUMP_DEST
initialization parameters that point to RDBMS80 or any other environment variable
to point to the following directories instead:
BACKGROUND_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME
and USER_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME\archive.
- Make sure all path names in the parameter file are fully specified.
You should not have relative path names in the parameter file.
- If you are using a cluster database, set the parameter CLUSTER_DATABASE=FALSE
during the upgrade.
- If you are upgrading a cluster database, then modify the initdb_name.ora
file in the same way that you modified the parameter file.
- Check the MAX_ENABLED_ROLES parameter value, and increase it if the number
of roles in the database is close to its value. When upgrading to higher versions,
new roles might be added. If the database already contains a high number of
roles, the upgrade scripts can fail with error ORA-01925, as described in
Note 261632.1.
- Ensure that SHARED_POOL_SIZE and LARGE_POOL_SIZE are at least 150 MB.
- Ensure that PLSQL_V2_COMPATIBILITY is unset for the upgrade; otherwise this
could cause problems during a later stage. See Note 170282.1.
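
Pulling together the warnings from the Upgrade Information Tool output in
Step 2, the init.ora changes for this database might look like the fragment
below. The values are the minimums the tool reported; treat this as a sketch,
not as tuned settings:

compatible           = 9.2.0       # leave at 9.2.0 to keep a downgrade path
shared_pool_size     = 150944944   # >= value reported by utlu101i.sql
pga_aggregate_target = 25165824
large_pool_size      = 8388608
java_pool_size       = 50331648
job_queue_processes  = 0           # during the upgrade only
aq_tm_processes      = 0           # during the upgrade only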

Step 18 - Check for adequate free space on the archive log destination file systems.
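
For example (the archive destination path is an assumption; use the directory
from your LOG_ARCHIVE_DEST setting):

$ df -k /oraarch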

Step 19 - Ensure the NLS_LANG variable is set correctly:

$ echo $NLS_LANG

Step 20 - If needed, copy the listener.ora and the tnsnames.ora to the new
location (when no TNS_ADMIN environment variable is used):
$ cp $ORACLE_HOME/network/admin/*.ora <new_ORACLE_HOME>/network/admin/

If your operating system is Windows NT, delete your services
with the ORADIM of your old Oracle version.

Stop the OracleServiceSID Oracle service of the database you are upgrading,
where SID is the instance name. For example, if your SID is ORCL, then enter
the following at a command prompt:

C:\> NET STOP OracleServiceORCL

For Oracle 8.0 this is:
C:\> ORADIM80 -DELETE -SID SID

For Oracle 8i or higher this is:
C:\> ORADIM -DELETE -SID SID

And create the new Oracle Database 10g service at a command prompt using the
ORADIM command of the new Oracle Database release:

C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -MAXUSERS USERS
-STARTMODE AUTO -PFILE ORACLE_HOME\DATABASE\INITSID.ORA

Step 21 -Copy configuration files from the ORACLE_HOME of the database being upgraded to the new Oracle Database 10gR1 ORACLE_HOME:
If your parameter file resides within the old environment's ORACLE_HOME,
then copy it to the new ORACLE_HOME. By default, Oracle looks for the parameter
file in ORACLE_HOME/dbs on UNIX platforms and in ORACLE_HOME\database on
Windows operating systems. The parameter file can reside anywhere you wish,
but it should not reside in the old environment's ORACLE_HOME after you
upgrade to Oracle Database 10gR1.

If your parameter file is a text-based initialization parameter file with
either an IFILE (include file) or a SPFILE (server parameter file) entry,
and the file specified in the IFILE or SPFILE entry resides within the old
environment's ORACLE_HOME, then copy the file specified by the IFILE or
SPFILE entry to the new ORACLE_HOME. The file specified in the IFILE or SPFILE
entry contains additional initialization parameters.

If you have a password file that resides within the old environment's
ORACLE_HOME, then move or copy the password file to the new Oracle Database
10g ORACLE_HOME.

The name and location of the password file are operating system-specific.
On UNIX platforms, the default password file is ORACLE_HOME/dbs/orapwsid.
On Windows operating systems, the default password file is
ORACLE_HOME\database\pwdsid.ora. In both cases, sid is your Oracle instance ID.

If you are upgrading a cluster database and your initdb_name.ora file resides
within the old environment's ORACLE_HOME, then move or copy the initdb_name.ora
file to the new ORACLE_HOME.

Note:
If you are upgrading a cluster database, then perform this step on all nodes
in which this cluster database has instances configured.

Step 23 - Update the oratab entry to set the new ORACLE_HOME and disable
automatic startup, using the format SID:ORACLE_HOME:N. For example (the new
home path is an assumption):
PRODCOPY:/oracle/product/10.1.0:N

Step 24 - Update the environment variables like ORACLE_HOME and PATH
$ . oraenv

Step 25 - Make sure the following environment variables point to the new release directories:
- ORACLE_HOME
- PATH
- ORA_NLS10
- ORACLE_BASE
- LD_LIBRARY_PATH
- ORACLE_PATH


Step 26 - Perform a full cold backup!

You can do this either by manually copying the files or by signing on to RMAN.
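
A minimal sketch of either approach (the backup destination and file pattern
are assumptions):

Manual copy, with the database shut down:
$ cp /oradata/PRODCOPY/*.dbf /backup/PRODCOPY/

Or with RMAN:
RMAN> connect target /
RMAN> backup database format '/backup/PRODCOPY/%U';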


Step 27 - Startup upgrade the database:
[/home/oracle]sqlplus "/ as sysdba"

SQL*Plus: Release 10.1.0.2.0 - Production on Tue Nov 6 13:45:56 2007

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 1048576000 bytes
Fixed Size 1305920 bytes
Variable Size 996676288 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.
SQL>

.
Step 28 - The following SQL statement creates a 500 MB SYSAUX tablespace for the database:

SQL>
CREATE TABLESPACE sysaux DATAFILE '/oradata/PRODCOPY/sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Tablespace created.

Step 29 - Spool the output so you can take a look at possible errors after the upgrade.

Run the appropriate script for your version.

From     To      Script to Run
====     ====    =============
7.x.x 10.1 Not supported
8.0.5 10.1 Not supported
8.0.6 10.1 u0800060.sql
8.1.5 10.1 Not Supported
8.1.6 10.1 Not Supported
8.1.7 10.1 u0801070.sql
9.0.1 10.1 u0900010.sql
9.2 10.1 u0902000.sql

SQL> Spool Upgrade.log
SQL>@?/rdbms/admin/u0902000.sql

... (intermediate upgrade script output trimmed) ...


PL/SQL procedure successfully completed.


COMP_ID COMP_NAME STATUS VERSION
---------- ----------------------------------- ----------- ----------
CATALOG Oracle Database Catalog Views VALID 10.1.0.2.0
CATPROC Oracle Database Packages and Types VALID 10.1.0.2.0

2 rows selected.

DOC>##################################################################
DOC>##################################################################
DOC>
DOC> The above query lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP DBUPG_END 2007-11-06 15:38:16

1 row selected.

SQL>


Step 30 - Run utlu101s.sql, specifying the TEXT option
SQL> @?/rdbms/admin/utlu101s.sql

PL/SQL procedure successfully completed.

Enter value for 1: TEXT
Oracle Database 10.1 Upgrade Status Tool 06-NOV-2007 03:46:27
--> Oracle Database Catalog Views Normal successful completion
--> Oracle Database Packages and Types Normal successful completion
No problems detected during upgrade

PL/SQL procedure successfully completed.

SQL>


Step 31 - Restart the database
(do NOT use SHUTDOWN ABORT!):
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.



SQL> startup restrict
ORACLE instance started.

Total System Global Area 1048576000 bytes
Fixed Size 1305920 bytes
Variable Size 996676288 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.

Executing this clean shutdown flushes all caches, clears buffers and performs
other database housekeeping tasks, which is needed if you want to upgrade
specific components.



Step 32 - Run the script to recompile invalid PL/SQL modules:
SQL> @?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2007-11-06 15:50:12

1 row selected.

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2007-11-06 16:16:43

1 row selected.

PL/SQL procedure successfully completed.


If there are still objects that are not valid after running the script, run
the following:

SQL> spool invalid_post.lst
SQL> select substr(owner,1,12) owner,
     substr(object_name,1,30) object,
     substr(object_type,1,30) type, status
     from dba_objects where status <> 'VALID';
SQL> spool off

OWNER OBJECT TYPE STATUS
------------ ------------------------------ ------------------- -------
SYSADM PS_REQ_PO_LN_I VIEW INVALID
SYSADM PS_REQ_PO_LN_D VIEW INVALID

Now compare the invalid objects in the file 'invalid_post.lst' with the invalid
objects in the file 'invalid_pre.lst' you created in Step 3.

No dictionary objects should be invalid.


Step 33 - Shut down the database and start it up in restricted mode.
$ sqlplus /nolog
SQL> Connect sys/passwd_for_sys as sysdba
SQL> Shutdown
SQL> Startup restrict

Step 34 - Convert N-type data if required. (None of the cases below applied to my database.)
A) IF you are NOT using N-type columns for *user* data:

select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS where
DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB') and OWNER not in
('SYS','SYSTEM');
did not return rows in Step 5 of this document,

then simply:
$ sqlplus /nolog
SQL> connect sys/passwd_for_sys as sysdba
SQL> shutdown immediate


B) IF your previous NLS_NCHAR_CHARACTERSET was UTF8:

you can look up your previous NLS_NCHAR_CHARACTERSET using this select:
select * from nls_database_parameters where parameter ='NLS_SAVED_NCHAR_CS';

then simply:
$ sqlplus /nolog
SQL> connect sys/passwd_for_sys as sysdba
SQL> shutdown immediate


C) IF you are using N-type columns for *user* data *AND*
your previous NLS_NCHAR_CHARACTERSET was in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then the N-type columns *data* need to be converted to AL16UTF16:

To upgrade user tables with N-type columns to AL16UTF16 run the
script utlnchar.sql:

$ sqlplus /nolog
SQL> connect sys/passwd_for_sys as sysdba
SQL> @utlnchar.sql
SQL> shutdown immediate



D) IF you are using N-type columns for *user* data *AND *
your previous NLS_NCHAR_CHARACTERSET was *NOT* in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then import the data exported in Step 5 of this document.
The recommended NLS_LANG during import is simply the NLS_CHARACTERSET,
not the NLS_NCHAR_CHARACTERSET.

After the import:
$ sqlplus /nolog
SQL> connect sys/passwd_for_sys as sysdba
SQL> shutdown immediate


Step 35 - Modify the listener.ora file:
For the upgraded instance(s), modify the ORACLE_HOME parameter
to point to the new ORACLE_HOME.

Step 36 - Start the listener
$ lsnrctl
LSNRCTL> start

Step 37 - Re-enable the cron and batch jobs disabled in Step 12.

Change the oratab entry to use automatic startup:
SID:ORACLE_HOME:Y




Appendix A: Deprecated parameters in release 10gR1:
------------------------------------------------

BUFFER_POOL_KEEP (replaced by DB_KEEP_CACHE_SIZE)
BUFFER_POOL_RECYCLE (replaced by DB_RECYCLE_CACHE_SIZE)
GLOBAL_CONTEXT_POOL_SIZE
LOCK_NAME_SPACE
LOG_ARCHIVE_START
MAX_ENABLED_ROLES
PARALLEL_AUTOMATIC_TUNING
PLSQL_COMPILER_FLAGS (replaced by PLSQL_CODE_TYPE and PLSQL_DEBUG)
DRS_START (replaced by DG_BROKER_START)
FAST_START_IO_TARGET (replaced by FAST_START_MTTR_TARGET)
MTS_CIRCUITS (replaced by CIRCUITS)
MTS_DISPATCHERS (replaced by DISPATCHERS)
MTS_MAX_DISPATCHERS (replaced by MAX_DISPATCHERS)
MTS_MAX_SERVERS (replaced by MAX_SHARED_SERVERS)
MTS_SERVERS (replaced by SHARED_SERVERS)
MTS_SESSIONS (replaced by SHARED_SERVER_SESSIONS)
PARALLEL_SERVER (replaced by CLUSTER_DATABASE)
PARALLEL_SERVER_INSTANCES (replaced by CLUSTER_DATABASE_INSTANCES)


Appendix B: Obsolete parameters in 10gR1:
---------------------------------------

DBLINK_ENCRYPT_LOGIN
HASH_JOIN_ENABLED
LOG_PARALLELISM
MAX_ROLLBACK_SEGMENTS
MTS_CIRCUITS
MTS_DISPATCHERS
MTS_LISTENER_ADDRESS
MTS_MAX_DISPATCHERS
MTS_MAX_SERVERS
MTS_MULTIPLE_LISTENERS
MTS_SERVERS
MTS_SERVICE
MTS_SESSIONS
OPTIMIZER_MAX_PERMUTATIONS
ORACLE_TRACE_COLLECTION_NAME
ORACLE_TRACE_COLLECTION_PATH
ORACLE_TRACE_COLLECTION_SIZE
ORACLE_TRACE_ENABLE
ORACLE_TRACE_FACILITY_NAME
ORACLE_TRACE_FACILITY_PATH
PARTITION_VIEW_ENABLED
PLSQL_NATIVE_C_COMPILER
PLSQL_NATIVE_LINKER
PLSQL_NATIVE_MAKE_FILE_NAME
PLSQL_NATIVE_MAKE_UTILITY
ROW_LOCKING
SERIALIZABLE
TRANSACTION_AUDITING
UNDO_SUPPRESS_ERRORS
DISTRIBUTED_TRANSACTIONS
MAX_TRANSACTION_BRANCHES
PARALLEL_BROADCAST_ENABLED
STANDBY_PRESERVES_NAMES
ALWAYS_ANTI_JOIN
ALWAYS_SEMI_JOIN
DB_BLOCK_LRU_LATCHES
DB_BLOCK_MAX_DIRTY_TARGET
DB_FILE_DIRECT_IO_COUNT
GC_DEFER_TIME
GC_RELEASABLE_LOCKS
GC_ROLLBACK_LOCKS
HASH_MULTIBLOCK_IO_COUNT
INSTANCE_NODESET
JOB_QUEUE_INTERVAL
OPS_INTERCONNECTS
OPTIMIZER_PERCENT_PARALLEL
SORT_MULTIBLOCK_READ_COUNT
TEXT_ENABLE
DB_BLOCK_BUFFERS


References:
Complete checklist for manual upgrades to 10gR1 (10.1.0.x) - Metalink Note 263809.1
Complete checklist for manual upgrades to 10gR2 - Metalink Note 316889.1

Wednesday, September 26, 2007

Migrating a Non-ASM Database to ASM

The target database is ESADEMO and it is in the mount stage.
This migration is performed using Recovery Manager (RMAN), even if you are not using RMAN for your primary backup and recovery strategy.

Limitation on ASM Migration with Transportable Tablespaces
The procedure described here does not work for transportable (foreign) tablespaces. Such tablespaces need to be made read-write and imported into the database before they can be migrated into ASM using this procedure.

Determine the names of the database files and control files:

D:\> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 26 12:10:19 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
D:\oracle\product\10.2.0\oradata\esademo\esademo\system01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\undotbs01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\sysaux01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\users01.dbf

SYS@ ESADEMO>show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string d:\oracle\product\10.2.0\orada
ta\esademo\esademo\control01.c
tl, d:\oracle\product\10.2.0\o
radata\esademo\esademo\control
02.ctl, d:\oracle\product\10.2
.0\oradata\esademo\esademo\con
trol03.ctl
SYS@ ESADEMO>


Disk-Based Migration of a Database to ASM

If you have enough disk space that you can have both your entire non-ASM database and your ASM disk group on disk at the same time, you can do the migration directly without using tapes.
The procedure differs slightly between primary and standby databases. A number of the steps described in this procedure apply only in one or the other case. There are also a few steps where the procedure is different depending upon whether you are using a recovery catalog. The steps that vary are identified as necessary in the description of the process.
The process described in this section is intended to minimize downtime for your database. The example assumes that the destination ASM disk group for database files is +ASM_NEW. During the migration process, all flashback logs are discarded. As a result, any guaranteed restore points in the database become unusable. You should drop all guaranteed restore points before performing the migration.
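A quick sketch of checking for and dropping guaranteed restore points before the migration (the restore point name BEFORE_MIG is illustrative):

```sql
-- List restore points; GUARANTEE_FLASHBACK_DATABASE = 'YES' marks the
-- guaranteed ones that must be dropped before migrating.
SELECT name, guarantee_flashback_database FROM v$restore_point;

-- Drop each guaranteed restore point (name is illustrative).
DROP RESTORE POINT before_mig;
```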

To perform the migration, carry out the following steps:
Step – 1 Back up your database files as copies to the ASM disk group.

C:\>set ORACLE_SID=+ASM

C:\>asmcmd
ASMCMD> ls
ASM/
ASM_NEW/
ASMCMD> cd ASM_NEW
ASMCMD> mkdir ESADEMO
ASMCMD> cd ESADEMO
ASMCMD> pwd
+ASM_NEW/ESADEMO
ASMCMD>


C:\>set oracle_sid=ESADEMO

C:\>rman target / catalog rman/password@rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Sep 26 12:28:14 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ESADEMO (DBID=1806048469, not open)
connected to recovery catalog database

RMAN> BACKUP AS COPY INCREMENTAL LEVEL 0 DATABASE FORMAT '+ASM_NEW' TAG 'ORA_ASM_MIGRATION';
Starting backup at 26-SEP-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\SYSTEM01.DBF
output filename=+ASM_NEW/esademo/datafile/system.258.634307507 tag=ORA_ASM_MIGRATION recid=1 stamp=634307554
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\SYSAUX01.DBF
output filename=+ASM_NEW/esademo/datafile/sysaux.257.634307563 tag=ORA_ASM_MIGRATION recid=2 stamp=634307586
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:26
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\UNDOTBS01.DBF
output filename=+ASM_NEW/esademo/datafile/undotbs1.256.634307589 tag=ORA_ASM_MIGRATION recid=3 stamp=634307593
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\USERS01.DBF
output filename=+ASM_NEW/esademo/datafile/users.259.634307597 tag=ORA_ASM_MIGRATION recid=4 stamp=634307597
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+ASM_NEW/esademo/controlfile/backup.260.634307599 tag=ORA_ASM_MIGRATION recid=5 stamp=634307600
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 26-SEP-07
RMAN>


Step – 2 Create a copy of the SPFILE in the ASM disk group. In this example, the SPFILE for the migrated database will be stored as +ASM_NEW/spfile.
RMAN> run {
2> BACKUP AS BACKUPSET SPFILE;
3> RESTORE SPFILE TO "+ASM_NEW/spfile";
4> }

Starting backup at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=157 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 26-SEP-07
channel ORA_DISK_1: finished piece 1 at 26-SEP-07
piece handle=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1 tag=TAG20070926T125327 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 26-SEP-07

Starting restore at 26-SEP-07
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: restoring SPFILE
output filename=+ASM_NEW/spfile
channel ORA_DISK_1: reading from backup piece D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1 tag=TAG20070926T125327
channel ORA_DISK_1: restore complete, elapsed time: 00:00:17
Finished restore at 26-SEP-07


Step – 3 At this point, if you want the option of easily returning the database to non-ASM storage later, make copies of your current control file and all online logs. This command backs up the current control file to a non-ASM location:
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT 'd:\oracle\product\10.2.0\oradata\ESADEMO\pre-ASM-controfile.cf';

Starting backup at 26-SEP-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\PRE-ASM-CONTROFILE.CF tag=TAG20070926T130231 recid=6 stamp=634309352
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 26-SEP-07

RMAN>

Step – 4 Now create an init.ora specifying the location of the new SPFILE, and start the instance with it. For example, create /tmp/pfile.ora with the following contents:

SPFILE=+ASM_NEW/spfile

Now start the database in NOMOUNT

SYS@ ESADEMO>shu immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SYS@ ESADEMO>startup nomount
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 62915940 bytes
Database Buffers 100663296 bytes
Redo Buffers 2945024 bytes

SYS@ ESADEMO>show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ----------------
spfile string +ASM_NEW/spfile
SYS@ ESADEMO>


Step – 5 The next step is to migrate the control file to ASM.
In SQL*Plus, change the CONTROL_FILES initialization parameter using the following command:
SYS@ ESADEMO>alter system set control_files='+ASM_NEW/ct1.f','+ASM_NEW/ct2.f' scope=spfile sid='*';

System altered.

SYS@ ESADEMO>

Step – 6 Shut down and start up in NOMOUNT again so that the changed parameters take effect. (The CONTROL_FILES parameter change only takes effect upon a restart because it is a static parameter.)

Then, use RMAN to actually create the new control files in ASM by restoring from the backup control file created in Step – 3:

C:\>rman target / catalog rman/password@rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Sep 26 13:19:31 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ESADEMO (not mounted)
connected to recovery catalog database

RMAN> shutdown immediate;

Oracle instance shut down

RMAN> startup nomount;

Oracle instance started

Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 62915940 bytes
Database Buffers 100663296 bytes
Redo Buffers 2945024 bytes

RMAN> restore controlfile from 'd:\oracle\product\10.2.0\oradata\ESADEMO\pre-ASM-controfile.cf';

Starting restore at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+ASM_NEW/esademo/controlfile/ct1.f
output filename=+ASM_NEW/esademo/controlfile/ct2.f
Finished restore at 26-SEP-07

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+ASM_NEW/esademo/datafile/system.258.634307507"
datafile 2 switched to datafile copy "+ASM_NEW/esademo/datafile/undotbs1.256.634307589"
datafile 3 switched to datafile copy "+ASM_NEW/esademo/datafile/sysaux.257.634307563"
datafile 4 switched to datafile copy "+ASM_NEW/esademo/datafile/users.259.634307597"

RMAN> recover database;

Starting recover at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

starting media recovery

archive log thread 1 sequence 1 is already on disk as file D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\REDO03.LOG
archive log filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\REDO03.LOG thread=1 sequence=1
media recovery complete, elapsed time: 00:00:01
Finished recover at 26-SEP-07

RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN>


SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 26 13:39:33 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
esademo OPEN


SYS@ ESADEMO>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+ASM_NEW/esademo/datafile/system.258.634307507
+ASM_NEW/esademo/datafile/undotbs1.256.634307589
+ASM_NEW/esademo/datafile/sysaux.257.634307563
+ASM_NEW/esademo/datafile/users.259.634307597

SYS@ ESADEMO>

Step – 7 Moving online redo logs to ASM

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>declare
2 cursor rlc is
3 select group# grp, thread# thr, bytes/1024 bytes_k, 'NO' srl
4 from v$log
5 union
6 select group# grp, thread# thr, bytes/1024 bytes_k, 'YES' srl
7 from v$standby_log
8 order by 1;
9 stmt varchar2(2048);
10 swtstmt varchar2(1024) := 'alter system switch logfile';
11 ckpstmt varchar2(1024) := 'alter system checkpoint global';
12 begin
13 for rlcRec in rlc loop
14 if (rlcRec.srl = 'YES') then
15 stmt := 'alter database add standby logfile thread ' ||
16 rlcRec.thr || ' ''+ASM_NEW'' size ' ||
17 rlcRec.bytes_k || 'K';
18 execute immediate stmt;
19 stmt := 'alter database drop standby logfile group ' || rlcRec.grp;
20 execute immediate stmt;
21 else
22 stmt := 'alter database add logfile thread ' ||
23 rlcRec.thr || ' ''+ASM_NEW'' size ' ||
24 rlcRec.bytes_k || 'K';
25 execute immediate stmt;
26 begin
27 stmt := 'alter database drop logfile group ' || rlcRec.grp;
28 dbms_output.put_line(stmt);
29 execute immediate stmt;
30 exception
31 when others then
32 execute immediate swtstmt;
33 execute immediate ckpstmt;
34 execute immediate stmt;
35 end;
36 end if;
37 end loop;
38 end;
39 /

PL/SQL procedure successfully completed.

SYS@ ESADEMO>select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+ASM_NEW/esademo/onlinelog/group_3.268.634311897
+ASM_NEW/esademo/onlinelog/group_2.266.634311889
+ASM_NEW/esademo/onlinelog/group_1.265.634311883
+ASM_NEW/esademo/onlinelog/group_5.267.634311877

SYS@ ESADEMO>


Step – 8 Moving tempfiles to ASM

SYS@ ESADEMO>select name from v$tempfile;

NAME
------------------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\TEMP01.DBF

RMAN> run {
2> set newname for tempfile 1 to '+ASM_NEW/temp01.dbf';
3> switch tempfile all;
4> }

executing command: SET NEWNAME

renamed temporary file 1 to +ASM_NEW/temp01.dbf in control file
starting full resync of recovery catalog
full resync complete

RMAN>

SYS@ ESADEMO>select name from v$tempfile;

NAME
----------------------------------------------
+ASM_NEW/temp01.dbf

Tuesday, September 25, 2007

Data Pump in Oracle 10g



Some drawbacks of the exp/imp utility

· Operations are difficult to restart
· Execution is client-side and single-threaded
· Dump files can grow enormously
· Tuning mechanisms are limited
· Progress monitoring is difficult
· Database object filtering features are limited


Data Pump Concepts
The DataPump is the next logical progression in a series of improvements that have appeared during the last few Oracle releases. The two most dramatic upgrades involve the ability to read data extremely quickly via direct path unloading, and the capability to write data directly to external tables.
DataPump Components
DataPump consists of three components that work in concert to manage and perform operations:

Command-Line Interface. Like the original Export (EXP.EXE) and Import (IMP.EXE) command-line interfaces, the DataPump provides two command-line interfaces, EXPDP.EXE and IMPDP.EXE, for controlling DataPump Export and Import operations, respectively.

DataPump also expands the command-line interface's capabilities by providing Interactive-Command Mode. This mode allows the DBA to start a DataPump operation and then disconnect from it by simply issuing a Ctrl+C keystroke. Later, the DBA simply opens another DataPump session while specifying the operation's job name and then issues the ATTACH directive to reconnect to the session. The DBA can then issue a series of commands, from taking a pulse of the operation via the STATUS directive to adding more export dump files, or even terminating a long-running operation if desired.
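For example, after detaching from a job with Ctrl+C, the client can reconnect later by naming the job (this uses the connect string and job name from the export session shown later in this post):

```
expdp sysadm/sysadm@FIN10g attach=SYSADM_FULL_EXPORT_SYSADM
```

Once reattached, the interactive prompt accepts STATUS, PARALLEL, STOP_JOB, and the other directives listed in the help output below.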
DBMS_METADATA.

Introduced in Oracle 9i, this PL/SQL supplied package provides methods to extract and format metadata - literally, "information about information" - from an Oracle database. At its core, DBMS_METADATA stores metadata in XML format for ultimate flexibility in its presentation of that metadata.
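As a small illustration, DBMS_METADATA can be called directly from SQL*Plus to regenerate the DDL for an object (the table SCOTT.EMP is just an example):

```sql
-- Widen LONG output so the generated DDL is not truncated,
-- then extract the CREATE TABLE statement for SCOTT.EMP.
SET LONG 10000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;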


DataPump Export uses DBMS_METADATA to gather all metadata about the database objects being processed then stores that metadata in the Export operation's master table. DataPump Import uses the corresponding master table to extract information from the export dump file(s), and then uses DBMS_METADATA to create the DDL statements it needs to create new database objects as part of the Import operation.

DBMS_DATAPUMP. At the heart of the DataPump is the new DBMS_DATAPUMP PL/SQL supplied package. This package contains methods for exporting data from and importing data into any Oracle database. EXPDP and IMPDP are actually making calls to this new package to perform their respective operations.
What is especially powerful about DBMS_DATAPUMP is that a DBA can also utilize it directly, either in anonymous PL/SQL blocks or within a stored procedure, stored function, or package, to create custom DataPump Export and Import jobs. I will demonstrate how to accomplish this in the final article in this series when we look at some of the other advanced features of the DataPump.
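A minimal sketch of driving an export directly through DBMS_DATAPUMP follows; it assumes the XTERN_DATA_DIR directory object created later in this post, and the job, file, and schema names are illustrative:

```sql
-- Schema-mode export of SCOTT driven by the DBMS_DATAPUMP API.
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP_JOB');   -- illustrative name
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'scott_exp.dmp',     -- illustrative file
                         directory => 'XTERN_DATA_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);   -- job keeps running on the server after detach
END;
/
```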
DataPump Jobs, Master Processes, and the Master Table
DataPump also implements major (and in my opinion, elegant!) improvements to the execution and management of Export and Import operations. Each DataPump operation uses a master process to manage and control the export or import session. Each master process is identified by a unique job name, and DataPump automatically assigns one by default if one is not specified at execution time.
The master process controls the DataPump operation by invoking at least one worker process that actually performs the Export or Import. (This is extremely similar to the way Recovery Manager (RMAN) manages backup, restoration and recovery processing.) If the DataPump operation has requested additional parallel processes for multiple-threaded processing, more than one worker process will be created and managed.
The master process also creates a master table as a user table within the user account in which the DataPump job is invoked. This master table is always named the same as the DataPump job, and it is used slightly differently depending on the type of DataPump operation, but it is always created within the same user account that is invoking the operation.
During an Export operation, the master table is created when the job commences, contains all target objects for the Export, and is retained until the job finishes successfully, at which point it is dropped. However, if the job is paused or fails due to error, the Export process uses the master table to determine what objects still need to be exported and how best to resume the job. Once the job completes successfully, the master table's contents are written to the Export file itself for use by an Import process.
On the other hand, an Import operation reads the master table from the Export dump file and then creates the master table under the appropriate user account. During an Import, the master process uses the master table to decide which database objects need to be created and in what order before any data is actually imported.



How Does Data Pump Access Data?
Data Pump supports two access methods to load and unload table row data: direct path and external tables. Because both methods support the same external data representation, data that is unloaded with one method can be loaded using the other method. Data Pump automatically chooses the fastest method appropriate for each table.
Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or reenabled.

Using Data pump
To make full use of Data Pump technology, you must be a privileged user. Privileged users have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Nonprivileged users have neither.
Privileged users can do the following:
· Export and import database objects owned by others
· Export and import nonschema-based objects such as tablespace and schema definitions, system privilege grants, resource plans, and so forth
· Attach to, monitor, and control Data Pump jobs initiated by others
· Perform remapping operations on database datafiles
· Perform remapping operations on schemas other than their own
bash-2.05$ export ORACLE_SID=FIN10g
bash-2.05$ sqlplus "/ as sysdba"


SQL*Plus: Release 10.1.0.2.0 - Production on Wed Sep 19 11:58:33 2007

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> create user dbpumpowner identified by dbpumpowner;

User created.

SQL> grant connect, resource, exp_full_database, imp_full_database to dbpumpowner;

Grant succeeded.

SQL>

External Tables
The Oracle database has provided an external tables capability since Oracle9i that allows reading of data sources external to the database. As of Oracle Database 10g, the external tables feature also supports writing database data to destinations external to the database. Data Pump provides an external tables access driver (ORACLE_DATAPUMP) that reads and writes files. The format of the files is the same format used with the direct path method. This allows for high-speed loading and unloading of database tables.
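For instance, a table can be unloaded through the ORACLE_DATAPUMP driver with a single CREATE TABLE ... AS SELECT (the table and file names are illustrative; the directory object is the one created in the example below):

```sql
-- Unload EMP into a Data Pump format file via the external table driver.
CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL
  ( TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY xtern_data_dir
    LOCATION ('emp_unload.dmp') )
  AS SELECT * FROM emp;
```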

External tables by example
Step – 1 Create a directory at the OS level with appropriate permissions
bash-2.05$ pwd
/home/oracle
bash-2.05$ mkdir ext_data_dir
bash-2.05$ ls -l | grep ext_
drwxr-xr-x 2 oracle dba 512 Sep 19 12:10 ext_data_dir
bash-2.05$
Step – 2 Put the external table's data file in the data directory
C:\>ftp 172.16.8.206
Connected to 172.16.8.206.
220 IGSMALPPS04 FTP server ready.
User (172.16.8.206:(none)): oracle
331 Password required for oracle.
Password:
230 User oracle logged in.
ftp> lcd
Local directory now C:\.
ftp> pwd
257 "/home/oracle" is current directory.
ftp> cd /home/oracle/ext_data_dir/data
250 CWD command successful.
ftp> bin
200 Type set to I.
ftp> pwd
257 "/home/oracle/ext_data_dir/data" is current directory.
ftp> put External_table_test.csv
200 PORT command successful.
150 Opening BINARY mode data connection for External_table_test.csv.
226 Transfer complete.



bash-2.05$ pwd
/home/oracle/ext_data_dir/data
bash-2.05$ ls -l
total 0
-rw-r--r-- 1 oracle dba 633 Sep 19 12:16 External_table_test.csv
bash-2.05$
bash-2.05$ cat External_table_test.csv
3/9/2007,Monday,backup review,System Monitoring ,lunch,System Monitoring
4/9/2007,Tusday,backup review,System Monitoring ,lunch,System Monitoring and processed user requests
5/9/2007,Wednesday,backup review,FINPROD Instance Cloaning for testing ,lunch,System Monitoring
6/9/2007,Thursday,backup review,System Monitoring and processed user requests,lunch,System Monitoring
7/9/2007,Friday,backup review,System Monitoring ,lunch,System Monitoring and processed user requests
10/9/2007,Monday,backup review,System Monitoring ,lunch,System Monitoring
11/9/2007,Tusday,backup review,System Monitoring ,lunch,System Monitoring
bash-2.05$

Step – 3 Create the directory object in the database

bash-2.05$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.1.0.2.0 - Production on Wed Sep 19 12:33:41 2007

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> create or replace directory xtern_data_dir as '/home/oracle/ext_data_dir/data';

Directory created.

SQL> grant read,write on directory xtern_data_dir to scott;

Grant succeeded.

SQL>


Step – 4 Create the external table

SQL> conn scott/tiger@FIN10g
Connected.
SQL> create table ext_emp_daily_rpt
2 ( work_date varchar2(20),
3 day varchar2(20),
4 Activity1 varchar2(50),
5 Activity2 varchar2(50),
6 Activity3 varchar2(10),
7 Activity4 varchar2(70)
8 )
9 organization external
10 ( default directory xtern_data_dir
11 access parameters
12 ( records delimited by newline
13 fields terminated by ','
14 )
15 location ('External_table_test.csv')
16 );

Table created.

Step – 5 Select from the external table, which actually fetches the data from the CSV file.

SQL> select * from ext_emp_daily_rpt;

WORK_DATE DAY ACTIVITY1 ACTIVITY2 ACTIVITY3 ACTIVITY4
---------- ---------- --------------- -------------------------------------------------- ---------- ----------------------------------------------------------
3/9/2007 Monday backup review System Monitoring lunch System Monitoring
4/9/2007 Tusday backup review System Monitoring lunch System Monitoring and processed user requests
5/9/2007 Wednesday backup review FINPROD Instance Cloaning for testing lunch System Monitoring
6/9/2007 Thursday backup review System Monitoring and processed user requests lunch System Monitoring
7/9/2007 Friday backup review System Monitoring lunch System Monitoring and processed user requests
10/9/2007 Monday backup review System Monitoring lunch System Monitoring
11/9/2007 Tusday backup review System Monitoring lunch System Monitoring

7 rows selected.

Direct-Path INSERT

Oracle inserts data into a table in one of two ways:
· During conventional insert operations, Oracle reuses free space in the table, interleaving newly inserted data with existing data. During such operations, Oracle also maintains referential integrity constraints.
· During direct-path INSERT operations, Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. These procedures combined can enhance performance.
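A direct-path INSERT can be requested explicitly with the APPEND hint; a small sketch (the table names are illustrative):

```sql
-- The APPEND hint asks for a direct-path INSERT: rows are written above
-- the high-water mark, bypassing the buffer cache and existing free space.
INSERT /*+ APPEND */ INTO emp_history
  SELECT * FROM emp;
COMMIT;  -- direct-path loaded rows are visible only after the commit
```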

Situations in Which Direct Path Load Is Not Used
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:
· A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
· A domain index exists for a LOB column.
· A table is in a cluster.
· There is an active trigger on a pre-existing table.
· Fine-grained access control is enabled in insert mode on a pre-existing table.
· A table contains BFILE columns or columns of opaque types.
· A referential integrity constraint is present on a pre-existing table.
· A table contains VARRAY columns with an embedded opaque type.
· The table has encrypted columns
· The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
o There is an active trigger
o The table is partitioned
o Fine-grained access control is in insert mode
o A referential integrity constraint exists
o A unique index exists
· Supplemental logging is enabled and the table has at least one LOB column





Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, Data Pump uses the external table method to unload data, rather than direct path:
· Fine-grained access control for SELECT is enabled.
· The table is a queue table.
· The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
· The table contains encrypted columns.
· The table contains a column of an evolved type that needs upgrading.
· The table contains a column of type LONG or LONG RAW that is not last.
Data Pump uses external tables as the data access mechanism in the following situations:
· Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage
· Loading tables with global or domain indexes defined on them, including partitioned object tables
· Loading tables with active triggers or clustered tables
· Loading and unloading tables with encrypted columns
· Loading tables with fine-grained access control enabled for inserts
· Loading tables that are partitioned differently at load time and unload time

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.



New features that provide increased performance in Oracle Data Pump

Monitoring DataPump Operations
· The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See PARALLEL for information about using this parameter in export. See PARALLEL for information about using this parameter in import. (This feature is available only in the Enterprise Edition of Oracle Database 10g.)
· The ability to restart Data Pump jobs. See START_JOB for information about restarting export jobs. See START_JOB for information about restarting import jobs.
· The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job. See ATTACH for information about using this parameter in export. See ATTACH for information about using this parameter in import.
· Support for export and import operations over the network, in which the source of each operation is a remote instance. See NETWORK_LINK for information about using this parameter in export. See NETWORK_LINK for information about using this parameter in import.
· The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. See REMAP_DATAFILE.
· Enhanced support for remapping tablespaces during an import operation. See REMAP_TABLESPACE.
· Support for filtering the metadata that is exported and imported, based upon objects and object types. For information about filtering metadata during an export operation, see INCLUDE and EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE and EXCLUDE.
· Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available in Import's Interactive-Command Mode.
· The ability to estimate how much space an export job would consume, without actually performing the export. See ESTIMATE_ONLY.


· The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. See VERSION for more information about using this parameter in export.
In import jobs, VERSION applies only to operations over the network. This means that VERSION applies to the version of database objects to be extracted from the source database. See VERSION for more information about using this parameter in import.
For additional information about using different versions, see Moving Data Between Different Database Versions.
· Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.) See Default Locations for Dump, Log, and SQL Files for information about some of the implications of server-based operations.



Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS that allow the DBA to monitor the progress of all DataPump operations.
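A simple monitoring query against these views might look like this (run as a privileged user):

```sql
-- One row per active or stopped Data Pump job, with its current state
-- and degree of parallelism.
SELECT owner_name, job_name, operation, job_mode, state, degree
  FROM dba_datapump_jobs;
```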

Data Pump by example

bash-2.05$ expdp sysadm/sysadm@FIN10g tables=psoprdefn directory=xtern_data_dir dumpfile=PSORDEFN.dmp job_name=PSOPRDEFN_EXPORT

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 13:32

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production With the Partitioning, OLAP and Data Mining options
Starting "SYSADM"."PSOPRDEFN_EXPORT": sysadm/********@FIN10g tables=psoprdefn directory=xtern_data_dir dumpfile=PSORDEFN.dmp job_name=PSOPRDEFN_EXPORT
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SYSADM"."PSOPRDEFN" 86.40 KB 465 rows
Master table "SYSADM"."PSOPRDEFN_EXPORT" successfully loaded/unloaded
************************************************************************
Dump file set for SYSADM.PSOPRDEFN_EXPORT is:
/home/oracle/ext_data_dir/data/PSORDEFN.dmp
Job "SYSADM"."PSOPRDEFN_EXPORT" successfully completed at 13:33
bash-2.05$





bash-2.05$ expdp sysadm/sysadm@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 13:42

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSADM"."SYSADM_FULL_EXPORT_SYSADM": sysadm/********@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 2.270 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
^C
Export>

Export>

Export> help
The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description
------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=&lt;number of workers&gt;.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

Export> status
Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_OMC_RGRP_TAO6
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 22,846

Export>

Export> parallel=4

Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_PO_CMFRTMS_T10
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 27,531

Worker 2 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_VCHR_ACCTG_LINE
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 3 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PSAPMSGPUBCERRP
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 4 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PSPCMPROG
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Export>

Export>

Export>


Stop a Running Data Pump Job

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y


bash-2.05$

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
ESTIMATE000028 NOT RUNNING
SYSADM_FULL_EXPORT_SYSADM NOT RUNNING

SQL>



Restart a Stopped Data Pump Job

bash-2.05$ expdp sysadm/sysadm@FIN10g attach=SYSADM_FULL_EXPORT_SYSADM

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 15:26

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production With the Partitioning, OLAP and Data Mining options

Job: SYSADM_FULL_EXPORT_SYSADM
Owner: SYSADM
Operation: EXPORT
Creator Privs: FALSE
GUID: 3A799A403E455650E0440003BAC70D65
Start Time: Wednesday, 19 September, 2007 15:26
Mode: SCHEMA
Instance: FIN10g
Max Parallelism: 4
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND sysadm/********@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY XTERN_DATA_DIR
LOG_FILE_NAME export.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: IDLING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 24,576

Worker 1 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PS_PST_VAT_TMP5
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 31,533

Worker 2 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PS_VCHR_ACCTG_LINE
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 3 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PSAPMSGPUBCERRP
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 4 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PSPCMPROG
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658




Export> start_job


Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 28,672

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_ADB_TREE_SEL016
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 967

Export>

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
SYSADM_FULL_EXPORT_SYSADM EXECUTING
ESTIMATE000028 NOT RUNNING

SQL>


Add a Dump File
At this juncture, a dump file can be added by issuing the ADD_FILE directive at the
Export> prompt. The new dump file is automatically created in the same directory
as the original dump file.

Export> ADD_FILE=ADDITION_@_SYSADM.DMP

Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 28,672
Dump File: /home/oracle/ext_data_dir/data/ADDITION_FIN10g_SYSADM.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_EE_PO5_TMP
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 11,248

Export>



And many more …

Monday, September 17, 2007

Working with Undo Tablespace and Undo Retention

(1) The formula for calculating the optimal UNDO_RETENTION is:
OPTIMAL UNDO RETENTION = ACTUAL UNDO SIZE / ( DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC )


--> The actual undo tablespace size can be obtained from:

select bytes from dba_data_files where tablespace_name like '%UNDO%';


--> The DB_BLOCK_SIZE can be seen in the pfile or queried with:

select name,value from v$parameter where name like '%block%';


--> For UPS (undo blocks per second), use this query:

Select sum(undoblks) / sum((end_time-begin_time)*86400)
from v$undostat;


Here is a single query which does it all. It reports the actual undo size, the configured UNDO_RETENTION, and the optimal undo retention (in seconds) required by the database:


SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) * g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM ( SELECT SUM(a.bytes) undo_size
FROM v$datafile a,v$tablespace b,dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#) d,v$parameter e,v$parameter f,
( SELECT MAX(undoblks/((end_time-begin_time)*3600*24))undo_block_per_sec FROM v$undostat)g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/



The output will be something like this:

ACTUAL UNDO SIZE [MByte]------------------------17000
UNDO RETENTION [Sec] ------------------------18000
OPTIMAL UNDO RETENTION [Sec]--------------------3338


(2) To calculate the required size of the undo tablespace, use the following formula:

UR = undo retention (seconds)
UPS = undo blocks per second
DBS = db block size (bytes)

undo tablespace size [bytes] = (UR * (UPS * DBS)) + (DBS * 24)

UR and DBS can be obtained from the parameter file or from the queries above;
for UPS, use the same query again:

Select sum(undoblks) / sum((end_time-begin_time)*86400)
from v$undostat;
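As a sketch, the whole sizing formula can be evaluated in one statement, in the same spirit as the optimal-retention query above (this assumes the instance has been up long enough for V$UNDOSTAT to be representative of the workload):

```sql
-- Required undo tablespace size: (UR * (UPS * DBS)) + (DBS * 24)
SELECT ROUND((ur.value * ups.blks_per_sec * dbs.value
              + dbs.value * 24) / 1024 / 1024) "NEEDED UNDO SIZE [MByte]"
  FROM (SELECT TO_NUMBER(value) value
          FROM v$parameter WHERE name = 'undo_retention') ur,
       (SELECT TO_NUMBER(value) value
          FROM v$parameter WHERE name = 'db_block_size') dbs,
       (SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400) blks_per_sec
          FROM v$undostat) ups;
```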

Monday, September 3, 2007

Moving Datafiles over ASM diskgroups

Source DiskGroup - +ASM
Target Diskgroup - +DGFAIL


SYS@DEEPDB>ALTER DATABASE DATAFILE '+ASM/deepdb/datafile/users.259.631107019' OFFLINE;

Database altered.

SYS@DEEPDB>
========================================================================
C:\>rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Mon Sep 3 12:06:03 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: DEEPDB (DBID=3027735554)
======================================
RMAN> COPY DATAFILE '+ASM/deepdb/datafile/users.259.631107019' TO '+DGFAIL';

Starting backup at 03-SEP-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=131 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=+ASM/deepdb/datafile/users.259.631107019
output filename=+DGFAIL/deepdb/datafile/users.260.632318885 tag=TAG20070903T120801 recid=2 stamp=632318886
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
Finished backup at 03-SEP-07

RMAN>
====================================================
SYS@DEEPDB>ALTER DATABASE RENAME FILE '+ASM/deepdb/datafile/users.259.631107019' TO '+DGFAIL/deepdb/datafile/users.260.632318885';

Database altered.

SYS@DEEPDB>
=================================================
RMAN> SWITCH DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' TO COPY;

datafile 4 switched to datafile copy "+DGFAIL/deepdb/datafile/users.260.632318885"

RMAN>
=================================================
RMAN> recover datafile '+DGFAIL/deepdb/datafile/users.260.632318885';

Starting recover at 03-SEP-07
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:06

Finished recover at 03-SEP-07

RMAN>

====================================================
SYS@DEEPDB>ALTER DATABASE DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' ONLINE;

Database altered.

SYS@DEEPDB>select name from v$datafile;

NAME
---------------------------------------------------------------------------------------------
+ASM/deepdb/datafile/system.256.631107019
+ASM/deepdb/datafile/undotbs1.258.631107019
+ASM/deepdb/datafile/sysaux.257.631107019
+DGFAIL/deepdb/datafile/users.260.632318885
SYS@DEEPDB>
=====================================================
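For reference, the whole move above boils down to six commands, alternating between SQL*Plus (as SYSDBA) and RMAN; the file names are the ones from this session:

```sql
-- 1. SQL*Plus: take the datafile offline
ALTER DATABASE DATAFILE '+ASM/deepdb/datafile/users.259.631107019' OFFLINE;

-- 2. RMAN: copy it into the target disk group
--      COPY DATAFILE '+ASM/deepdb/datafile/users.259.631107019' TO '+DGFAIL';

-- 3. SQL*Plus: repoint the control file at the copy
ALTER DATABASE RENAME FILE '+ASM/deepdb/datafile/users.259.631107019'
  TO '+DGFAIL/deepdb/datafile/users.260.632318885';

-- 4. RMAN: make the copy the current datafile
--      SWITCH DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' TO COPY;

-- 5. RMAN: apply redo generated while the copy was being taken
--      RECOVER DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885';

-- 6. SQL*Plus: bring the datafile back online
ALTER DATABASE DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' ONLINE;
```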

Friday, August 31, 2007

CREATING PHYSICAL STANDBY DATABASE USING RMAN

CREATING PHYSICAL STANDBY DATABASE USING RMAN


Table of Contents
1.0 Introduction

1.0.1 Purpose
1.0.2 Scope


2.0 working with RMAN for Standby database

2.0.1. What is the standby database
2.0.2. Overview
2.0.3. Initialization parameters and configuration
2.0.4 Pre-requisites
2.0.4.1 Target database must be mounted or open
2.0.4.2 A valid full database backup of the target database
2.0.5 Steps required
2.0.5.1 Creating password file for standby server
2.0.5.2 Creating standby control file
2.0.5.3 Recording Last Sequence number
2.0.5.4 Backing up new archive logs
2.0.5.5 Making RMAN backup available on standby server
2.0.5.6 Creating same directory structure on standby
2.0.5.7 Starting standby instance
2.0.5.8 Ensure Oracle Net Connectivity to Standby (Auxiliary) Database
2.0.5.8 Mount or Open the Target Database
2.0.5.9 Ensuring Necessary Backups and Archived Redo Log Files
2.0.6 Create Standby Database
2.0.7 Put the Standby in Managed Recovery Mode
2.0.8 Verifying from alert log contents
2.0.9 Startup and Shutdown procedure of Primary and Standby DB

1.0 INTRODUCTION
1.0.1 Purpose
This document is intended to aid in understanding how to create a standby database using the RMAN utility.
1.0.2 Scope
This article provides a quick introduction to creating an Oracle standby database with the RMAN DUPLICATE TARGET DATABASE FOR STANDBY command. For this demonstration, we will use RMAN's DUPLICATE command to create the standby database on a different host from the primary database; both hosts run Sun Solaris SunOS 5.9.
Please refer following link for more details
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96566.pdf

2.0 Creating physical standby database using RMAN

2.0.1 - What is the Oracle standby database (Physical and Logical)

The difference between a physical and a logical standby lies in the way changes from the primary are applied.
Both are created as an exact image of the primary database, and both receive redo logs from the primary. A physical standby is mounted (but not open) and applies the received redo just as in media-failure recovery. A logical standby reconstructs SQL statements from the received redo logs and executes them; it is (and must be) open and can, with some limitations, be used for reporting or other purposes. A physical standby is mounted and generally cannot be used for any other purpose, although you can temporarily switch it to read-only mode and query it, in which case synchronization with the primary is paused until you return to recovery mode. In 10g Release 2 you can also open it read-write and then flash it back to its original state.


2.0.2 – Overview

RMAN cannot fully automate creation of the standby database because you must manually create an initialization parameter file for the standby database, start the standby instance without mounting the control file, and perform any Oracle Net setup (tnsnames.ora entries) required before performing the creation of the standby. Also, you must have RMAN backups of all datafiles available as well as a control file backup that is usable as a standby control file. All of these steps will be fully described in this article.
After the standby database is created, RMAN can back up the standby database and archived redo logs as part of your backup strategy. These standby backups are fully interchangeable with primary backups. In other words, you can restore a backup of a standby datafile to the primary database, and you can restore a backup of a primary datafile to the standby database


2.0.3 - Configuration parameters (and Oracle initialization parameters)

Primary database
Operating Environment= SunOS 5.9 Generic_118558-33 sun4u sparc SUNW,Sun-Fire-V240
Server Name= IGSMALPPS04
ORACLE_HOME= /oracle/product/9.2.0
Database SID=ORCL
Target TNS Names Entry=ORCL
Standby TNS Names Entry=SORCL
Archive Log Mode=TRUE
*._pga_max_size=41943040
*.aq_tm_processes=1
*.audit_trail='DB'
*.background_dump_dest='/oracle/admin/ORCL/bdump'
*.compatible='9.2.0.0.0'
*.control_files='/oracle/oradata/ORCL/control01.ctl','/oracle/oradata/ORCL/control02.ctl','/oracle/oradata/ORCL/control03.ctl'
*.core_dump_dest='/oracle/admin/ORCL/cdump'
*.db_block_size=8192
*.db_cache_size=104857600
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='ORCL'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='ORCL'
*.java_pool_size=31457280
*.job_queue_processes=10
*.large_pool_size=31457280
*.log_archive_format='Archive_%s_%t.ARC'
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=20971520
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=104857600
*.sort_area_size=524288
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/admin/ORCL/udump'
*.fal_client= 172.16.8.206
*.fal_server= 172.16.8.209
*.instance_name=ORCL
*.log_archive_dest_1= 'location=/oracle/oradata/ORCL/Archives'
*.log_archive_dest_2= 'service=SORCL optional reopen=15'
*.log_archive_dest_state_1= enable
*.log_archive_dest_state_2= enable
*.standby_archive_dest='/oracle/oradata/ORCL/Archives'
*.standby_file_management = auto


RMAN Configuration on Primary Database

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oracle/oradata/RMAN_BKP/ORCL_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/oradata/RMAN_BKP/RMAN_BKP_%s_%t_%U';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/product/9.2.0/dbs/snapcf_ORCL.f'; # default

Note: A full RMAN backup of the primary database is required on the primary site.

Standby database server
Operating Environment= SunOS 5.9 Generic_118558-33 sun4u sparc SUNW,Sun-Fire-V240
Server Name= igsmalpps01
ORACLE_HOME= /oracle/oracle9i
Database SID=RORCL
Target TNS Names Entry=ORCL
Standby TNS Names Entry=SORCL
*._pga_max_size=41943040
*.aq_tm_processes=1
*.audit_trail='DB'
*.background_dump_dest='/oracle/admin/ORCL/bdump'
*.compatible='9.2.0.0.0'
*.control_files='/oracle/oradata/ORCL/control01.ctl','/oracle/oradata/ORCL/control02.ctl','/oracle/oradata/ORCL/control03.ctl'
*.core_dump_dest='/oracle/admin/ORCL/cdump'
*.db_block_size=8192
*.db_cache_size=104857600
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='ORCL'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='ORCL'
*.java_pool_size=31457280
*.job_queue_processes=10
*.large_pool_size=31457280
*.log_archive_format='Archive_%s_%t.ARC'
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=20971520
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=104857600
*.sort_area_size=524288
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/admin/ORCL/udump'
*.fal_client= 172.16.8.206
*.fal_server= 172.16.8.209
*.instance_name=ORCL
*.log_archive_dest_1= 'location=/oracle/oradata/ORCL/Archives'
*.log_archive_dest_2= 'service=ORCL optional reopen=15'
*.log_archive_dest_state_1= enable
*.log_archive_dest_state_2= enable
*.standby_archive_dest='/oracle/oradata/ORCL/Archives'
*.standby_file_management = auto


2.0.4 - Pre-requisites


2.0.4.1 Target database must be mounted or open

bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 10 14:02:48 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to an idle instance.

SQL> startup open
ORACLE instance started.

Total System Global Area 337087024 bytes
Fixed Size 731696 bytes
Variable Size 218103808 bytes
Database Buffers 117440512 bytes
Redo Buffers 811008 bytes
Database mounted.
Database opened.
SQL>

2.0.4.2 A valid full database backup of the target database

RMAN> list backup summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
1049 B F A DISK 09-AUG-07 1 1 TAG20070809T125511
1087 B A A DISK 09-AUG-07 1 1 TAG20070809T130045
1155 B A A DISK 09-AUG-07 1 1 TAG20070809T163121
1159 B F A DISK 09-AUG-07 1 1 TAG20070809T163126
1175 B A A DISK 09-AUG-07 1 1 TAG20070809T163202
1179 B F A DISK 09-AUG-07 1 1
1188 B A A DISK 10-AUG-07 1 1 TAG20070810T140824
1192 B F A DISK 10-AUG-07 1 1 TAG20070810T140833
1208 B A A DISK 10-AUG-07 1 1 TAG20070810T140919
1212 B F A DISK 10-AUG-07 1 1

RMAN>

2.0.5 - Steps required
2.0.5.1-Creating password file for standby server
bash-2.05$ cd $ORACLE_HOME
bash-2.05$ cd dbs
bash-2.05$ orapwd file=orapwORCL password=deep123 entries=5
bash-2.05$ pwd
/oracle/oracle9i/dbs
bash-2.05$

2.0.5.2 Creating standby control file
bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ rman target / catalog rman/rman@rman
Recovery Manager: Release 9.2.0.6.0 - 64bit Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1155865002)
connected to recovery catalog database
RMAN> backup current controlfile for standby format='/oracle/oradata/RMAN_BKP/standbyORCL.ctl';
Starting backup at 10-AUG-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including standby controlfile in backupset
channel ORA_DISK_1: starting piece 1 at 10-AUG-07
channel ORA_DISK_1: finished piece 1 at 10-AUG-07
piece handle=/oracle/oradata/RMAN_BKP/standbyORCL.ctl comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 10-AUG-07
Starting Control File and SPFILE Autobackup at 10-AUG-07
piece handle=/oracle/oradata/RMAN_BKP/ORCL_c-1155865002-20070810-01 comment=NONE
Finished Control File and SPFILE Autobackup at 10-AUG-07
RMAN>

2.0.5.3 Recording Last Sequence number

bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ sqlplus "/ as sysdba"
SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 10 14:38:48 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> alter system switch logfile;
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
41


2.0.5.4 - Backing up new archive logs

bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ rman target / catalog rman/rman@rman

Recovery Manager: Release 9.2.0.6.0 - 64bit Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: ORCL (DBID=1155865002)
connected to recovery catalog database

RMAN> backup archivelog all delete input;
Starting backup at 10-AUG-07
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=16 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=31 recid=73 stamp=630254339
input archive log thread=1 sequence=32 recid=74 stamp=630254340
input archive log thread=1 sequence=33 recid=75 stamp=630254342
input archive log thread=1 sequence=34 recid=76 stamp=630254343
input archive log thread=1 sequence=35 recid=77 stamp=630254358
input archive log thread=1 sequence=36 recid=78 stamp=630254531
channel ORA_DISK_1: starting piece 1 at 10-AUG-07
channel ORA_DISK_1: finished piece 1 at 10-AUG-07
piece handle=/oracle/oradata/RMAN_BKP/RMAN_BKP_89_630254532_2pip1qu4_1_1
comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
channel ORA_DISK_1: deleting archive log(s)
archive log filename=/oracle/oradata/ORCL/Archives/Archive_31_1.ARC recid=73
stamp=630254339
archive log filename=/oracle/oradata/ORCL/Archives/Archive_32_1.ARC recid=74
stamp=630254340
archive log filename=/oracle/oradata/ORCL/Archives/Archive_33_1.ARC recid=75
stamp=630254342
archive log filename=/oracle/oradata/ORCL/Archives/Archive_34_1.ARC recid=76
stamp=630254343
archive log filename=/oracle/oradata/ORCL/Archives/Archive_35_1.ARC recid=77
stamp=630254358
archive log filename=/oracle/oradata/ORCL/Archives/Archive_36_1.ARC recid=78
stamp=630254531
Finished backup at 10-AUG-07

Starting Control File and SPFILE Autobackup at 10-AUG-07
piece handle=/oracle/oradata/RMAN_BKP/ORCL_c-1155865002-20070810-02
comment=NONE
Finished Control File and SPFILE Autobackup at 10-AUG-07


2.0.5.5 Making RMAN backup available on standby server

bash-2.05$ mkdir /oracle/oradata/RMAN_BKP
bash-2.05$
bash-2.05$ rcp /oracle/oradata/RMAN_BKP/*.* 172.16.8.209:/oracle/oradata/RMAN_BKP
bash-2.05$ rcp /oracle/oradata/RMAN_BKP/* 172.16.8.209:/oracle/oradata/RMAN_BKP
bash-2.05$

2.0.5.6 Creating same directory structure on standby
bash-2.05$ mkdir /oracle/oradata/admin
bash-2.05$ mkdir /oracle/oradata/admin/udump
bash-2.05$ mkdir /oracle/oradata/admin/bdump
bash-2.05$ mkdir /oracle/oradata/admin/cdump
bash-2.05$ mkdir /oracle/oradata/ORCL

2.0.5.7 Starting the standby instance


bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 10 16:30:09 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 337087024 bytes
Fixed Size 731696 bytes
Variable Size 218103808 bytes
Database Buffers 117440512 bytes
Redo Buffers 811008 bytes

SQL>

2.0.5.8 Ensure Oracle Net Connectivity to Standby (Auxiliary) Database
On Primary Database Server


bash-2.05$ lsnrctl services
LSNRCTL for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:23:39
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=IGSMALPPS04)(PORT=2030)))
Services Summary...
Service "ORCL" has 1 instance(s).
Instance "ORCL", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
The command completed successfully
bash-2.05$

bash-2.05$ tnsping ORCL

TNS Ping Utility for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:24:41
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oracle/product/9.2.0/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =
TCP)(HOST = 172.16.8.206)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME =
ORCL)))
OK (10 msec)
bash-2.05$ tnsping SORCL
TNS Ping Utility for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:24:49
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oracle/product/9.2.0/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =
TCP)(HOST = 172.16.8.209)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME =
SORCL)))
OK (0 msec)
bash-2.05$

On Standby Database Server


bash-2.05$ lsnrctl services
LSNRCTL for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:38:52
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Service "finsp1" has 1 instance(s).
Instance "finsp1", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Service "orcl" has 2 instance(s).
Instance "ORCL", status BLOCKED, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
The command completed successfully
bash-2.05$


bash-2.05$ tnsping ORCL
TNS Ping Utility for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:39:58
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oracle/oracle9i/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =
TCP)(HOST = igsmalpps01)(PORT = 1521))) (CONNECT_DATA = (SERVER =
DEDICATED) (SERVICE_NAME = ORCL)))
OK (0 msec)

bash-2.05$ tnsping SORCL
TNS Ping Utility for Solaris: Version 9.2.0.6.0 - Production on 10-AUG-2007 16:40:01
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oracle/oracle9i/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =
TCP)(HOST =172.16.8.209)(PORT = 1521))) (CONNECT_DATA = (SERVER =
DEDICATED) (SERVICE_NAME = SORCL)))
OK (0 msec)
bash-2.05$


2.0.5.8 Mount or Open the Target Database

bash-2.05$ sqlplus "/ as sysdba"
SQL*Plus: Release 9.2.0.6.0 - Production on Fri Aug 10 16:33:47 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to an idle instance.
SQL> startup open
ORACLE instance started.
Total System Global Area 337087024 bytes
Fixed Size 731696 bytes
Variable Size 218103808 bytes
Database Buffers 117440512 bytes
Redo Buffers 811008 bytes
Database mounted.
Database opened.


2.0.5.9 Ensuring Necessary Backups and Archived Redo Log Files
RMAN> list backup summary;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
1049 B F A DISK 09-AUG-07 1 1 TAG20070809T125511
1087 B A A DISK 09-AUG-07 1 1 TAG20070809T130045
1155 B A A DISK 09-AUG-07 1 1 TAG20070809T163121
1159 B F A DISK 09-AUG-07 1 1 TAG20070809T163126
1175 B A A DISK 09-AUG-07 1 1 TAG20070809T163202
1188 B A A DISK 10-AUG-07 1 1 TAG20070810T140824
1192 B F A DISK 10-AUG-07 1 1 TAG20070810T140833
1208 B A A DISK 10-AUG-07 1 1 TAG20070810T140919
1222 B F A DISK 10-AUG-07 1 1
1240 B A A DISK 10-AUG-07 1 1 TAG20070810T144212
1249 B F A DISK 10-AUG-07 1 1



2.0.6 Create Standby Database


bash-2.05$ export ORACLE_SID=ORCL
bash-2.05$ rman target / auxiliary sys/deep123@SORCL

Recovery Manager: Release 9.2.0.6.0 - 64bit Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: ORCL (DBID=1155865002)
connected to auxiliary database: ORCL (not mounted)

RMAN> run{
2> set until sequence = 41 thread = 1;
3> allocate auxiliary channel ch1 type disk;
4> duplicate target database for standby dorecover nofilenamecheck ;
5> }

executing command: SET until clause
using target database controlfile instead of recovery catalog

allocated channel: ch1
channel ch1: sid=14 devtype=DISK

Starting Duplicate Db at 13-AUG-07

printing stored script: Memory Script
{
restore clone standby controlfile to clone_cf;
replicate clone controlfile from clone_cf;
sql clone 'alter database mount standby database';
}
executing script: Memory Script

Starting restore at 13-AUG-07

channel ch1: starting datafile backupset restore
channel ch1: restoring controlfile
output filename=/oracle/oradata/ORCL/control01.ctl
channel ch1: restored backup piece 1
piece handle=/oracle/oradata/RMAN_BKP/stby_cfile.33ip245j_1_1 tag=TAG20070810T171947
params=NULL
channel ch1: restore complete
Finished restore at 13-AUG-07

replicating controlfile
input filename=/oracle/oradata/ORCL/control01.ctl
output filename=/oracle/oradata/ORCL/control02.ctl
output filename=/oracle/oradata/ORCL/control03.ctl

sql statement: alter database mount standby database

printing stored script: Memory Script
{
set until scn 4108109;
set newname for datafile 1 to
"/oracle/oradata/ORCL/system01.dbf";
set newname for datafile 2 to
"/oracle/oradata/ORCL/undotbs01.dbf";
set newname for datafile 3 to
"/oracle/oradata/ORCL/cwmlite01.dbf";
set newname for datafile 4 to
"/oracle/oradata/ORCL/drsys01.dbf";
set newname for datafile 5 to
"/oracle/oradata/ORCL/example01.dbf";
set newname for datafile 6 to
"/oracle/oradata/ORCL/indx01.dbf";
set newname for datafile 7 to
"/oracle/oradata/ORCL/odm01.dbf";
set newname for datafile 8 to
"/oracle/oradata/ORCL/tools01.dbf";
set newname for datafile 9 to
"/oracle/oradata/ORCL/users01.dbf";
set newname for datafile 10 to
"/oracle/oradata/ORCL/xdb01.dbf";
restore
check readonly
clone database
;
}
executing script: Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 13-AUG-07

channel ch1: starting datafile backupset restore
channel ch1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /oracle/oradata/ORCL/system01.dbf
restoring datafile 00002 to /oracle/oradata/ORCL/undotbs01.dbf
restoring datafile 00003 to /oracle/oradata/ORCL/cwmlite01.dbf
restoring datafile 00004 to /oracle/oradata/ORCL/drsys01.dbf
restoring datafile 00005 to /oracle/oradata/ORCL/example01.dbf
restoring datafile 00006 to /oracle/oradata/ORCL/indx01.dbf
restoring datafile 00007 to /oracle/oradata/ORCL/odm01.dbf
restoring datafile 00008 to /oracle/oradata/ORCL/tools01.dbf
restoring datafile 00009 to /oracle/oradata/ORCL/users01.dbf
restoring datafile 00010 to /oracle/oradata/ORCL/xdb01.dbf
channel ch1: restored backup piece 1
piece handle=/oracle/oradata/RMAN_BKP/RMAN_BKP_97_630263937_31ip2441_1_1
tag=TAG20070810T171857 params=NULL
channel ch1: restore complete
Finished restore at 13-AUG-07

printing stored script: Memory Script
{
switch clone datafile all;
}
executing script: Memory Script

datafile 2 switched to datafile copy
input datafilecopy recid=14 stamp=630499130 filename=/oracle/oradata/ORCL/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=15 stamp=630499130 filename=/oracle/oradata/ORCL/cwmlite01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=16 stamp=630499130 filename=/oracle/oradata/ORCL/drsys01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=17 stamp=630499130 filename=/oracle/oradata/ORCL/example01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=18 stamp=630499130 filename=/oracle/oradata/ORCL/indx01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=19 stamp=630499130 filename=/oracle/oradata/ORCL/odm01.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=20 stamp=630499130 filename=/oracle/oradata/ORCL/tools01.dbf
datafile 9 switched to datafile copy
input datafilecopy recid=21 stamp=630499130 filename=/oracle/oradata/ORCL/users01.dbf
datafile 10 switched to datafile copy
input datafilecopy recid=22 stamp=630499130 filename=/oracle/oradata/ORCL/xdb01.dbf

printing stored script: Memory Script
{
set until scn 4108109;
recover
standby
clone database
delete archivelog
;
}
executing script: Memory Script

executing command: SET until clause

Starting recover at 13-AUG-07

starting media recovery

channel ch1: starting archive log restore to default destination
channel ch1: restoring archive log
archive log thread=1 sequence=37
channel ch1: restoring archive log
archive log thread=1 sequence=38
channel ch1: restoring archive log
archive log thread=1 sequence=39
channel ch1: restoring archive log
archive log thread=1 sequence=40
channel ch1: restored backup piece 1
piece handle=/oracle/oradata/RMAN_BKP/RMAN_BKP_101_630497688_35ip98co_1_1
tag=TAG20070813T101448 params=NULL
channel ch1: restore complete
archive log filename=/oracle/oradata/ORCL/Archives/Archive_37_1.ARC thread=1
sequence=37
channel clone_default: deleting archive log(s)
archive log filename=/oracle/oradata/ORCL/Archives/Archive_37_1.ARC recid=4
stamp=630499134
archive log filename=/oracle/oradata/ORCL/Archives/Archive_38_1.ARC thread=1
sequence=38
channel clone_default: deleting archive log(s)
archive log filename=/oracle/oradata/ORCL/Archives/Archive_38_1.ARC recid=2
stamp=630499133
archive log filename=/oracle/oradata/ORCL/Archives/Archive_39_1.ARC thread=1
sequence=39
channel clone_default: deleting archive log(s)
archive log filename=/oracle/oradata/ORCL/Archives/Archive_39_1.ARC recid=1
stamp=630499133
archive log filename=/oracle/oradata/ORCL/Archives/Archive_40_1.ARC thread=1
sequence=40
channel clone_default: deleting archive log(s)
archive log filename=/oracle/oradata/ORCL/Archives/Archive_40_1.ARC recid=3
stamp=630499133
media recovery complete
Finished recover at 13-AUG-07
Finished Duplicate Db at 13-AUG-07
released channel: ch1
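With the duplicate finished, it is worth a quick sanity check on the new standby before enabling recovery. A minimal sketch (columns from v$database; the expected values are assumptions for a physical standby created this way):

```sql
-- Run on the standby: the control file should now be a standby control file
SELECT controlfile_type, database_role FROM v$database;
-- Expected: STANDBY / PHYSICAL STANDBY

-- The highest log applied should sit just below the SET UNTIL sequence (41)
SELECT MAX(sequence#) FROM v$log_history;
```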

2.0.7 Put the Standby in Managed Recovery Mode

SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
ORCL MOUNTED


SQL> alter database recover managed standby database disconnect;

Database altered.

SQL>

RMAN>
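To confirm that managed recovery is actually running after the DISCONNECT command returns, a hedged check on the standby (v$managed_standby is available in 9i and later):

```sql
-- MRP0 should appear, typically with status WAIT_FOR_LOG or APPLYING_LOG
SELECT process, status, sequence# FROM v$managed_standby;
```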

2.0.8 Verifying redo shipping from the alert log contents

ALTER DATABASE OPEN
Mon Aug 13 10:48:56 2007
LGWR: Primary database is in CLUSTER CONSISTENT mode
Thread 1 opened at log sequence 45
Current log# 2 seq# 45 mem# 0: /oracle/oradata/ORCL/redo02.log
Successful open of redo thread 1
Mon Aug 13 10:48:57 2007
SMON: enabling cache recovery
Mon Aug 13 10:48:57 2007
Successfully onlined Undo Tablespace 1.
Mon Aug 13 10:48:57 2007
SMON: enabling tx recovery
Mon Aug 13 10:48:57 2007
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: ALTER DATABASE OPEN
Mon Aug 13 10:49:52 2007
ARC0: Begin FAL archive (thread 1 sequence 42 destination SORCL)
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
ARC0: Complete FAL archive (thread 1 sequence 42 destination SORCL)
Mon Aug 13 10:49:56 2007
ARC1: Begin FAL archive (thread 1 sequence 43 destination SORCL)
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
ARC1: Complete FAL archive (thread 1 sequence 43 destination SORCL)
Mon Aug 13 10:49:56 2007
ARC0: Begin FAL archive (thread 1 sequence 44 destination SORCL)
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
ARC0: Complete FAL archive (thread 1 sequence 44 destination SORCL)
Mon Aug 13 10:51:26 2007
ARC0: Evaluating archive log 2 thread 1 sequence 45
ARC0: Beginning to archive log 2 thread 1 sequence 45
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
Mon Aug 13 10:51:26 2007
Thread 1 advanced to log sequence 46
Current log# 3 seq# 46 mem# 0: /oracle/oradata/ORCL/redo03.log
Mon Aug 13 10:51:26 2007
Creating archive destination LOG_ARCHIVE_DEST_1:
'/oracle/oradata/ORCL/Archives/Archive_45_1.ARC'
ARC0: Completed archiving log 2 thread 1 sequence 45
Mon Aug 13 10:51:28 2007
Thread 1 advanced to log sequence 47
Current log# 1 seq# 47 mem# 0: /oracle/oradata/ORCL/redo01.log
Mon Aug 13 10:51:28 2007
ARC1: Evaluating archive log 3 thread 1 sequence 46
ARC1: Beginning to archive log 3 thread 1 sequence 46
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
Creating archive destination LOG_ARCHIVE_DEST_1:
'/oracle/oradata/ORCL/Archives/Archive_46_1.ARC'
ARC1: Completed archiving log 3 thread 1 sequence 46
Mon Aug 13 10:51:31 2007
Thread 1 advanced to log sequence 48
Current log# 2 seq# 48 mem# 0: /oracle/oradata/ORCL/redo02.log
Mon Aug 13 10:51:31 2007
ARC0: Evaluating archive log 1 thread 1 sequence 47
ARC0: Beginning to archive log 1 thread 1 sequence 47
Creating archive destination LOG_ARCHIVE_DEST_2: 'SORCL'
Creating archive destination LOG_ARCHIVE_DEST_1:
'/oracle/oradata/ORCL/Archives/Archive_47_1.ARC'
ARC0: Completed archiving log 1 thread 1 sequence 47
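The alert log above shows each log being shipped to LOG_ARCHIVE_DEST_2 (SORCL). The same can be cross-checked from SQL on the standby; a sketch, assuming the shipped logs are registered there:

```sql
-- On the standby: sequences 42-47 should appear, with APPLIED turning to YES
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
```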

2.0.9 Startup and Shutdown Procedure for the Primary and Standby Databases


Startup sequence for the primary and standby databases

1. Start up the standby database first
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 337087024 bytes
Fixed Size 731696 bytes
Variable Size 218103808 bytes
Database Buffers 117440512 bytes
Redo Buffers 811008 bytes
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

Database altered.
SQL> alter database recover managed standby database disconnect;

Database altered.

2. Start up the primary database second

Shutdown sequence for the primary and standby databases

1. Shut down the primary database first
2. Shut down the standby database second
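The shutdown commands themselves are not shown above; a typical sequence for the standby side (an assumption, cancelling managed recovery before closing the instance) would be:

```sql
-- On the standby, after the primary has been shut down
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE;
```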