
Wednesday, September 26, 2007

Migrating a Non-ASM Database to ASM


The target database is ESADEMO and is in the MOUNT stage.
This migration is performed using Recovery Manager (RMAN), even if you do not use RMAN for your primary backup and recovery strategy.

Limitation on ASM Migration with Transportable Tablespaces
The procedure described here does not work for transportable (foreign) tablespaces. Such tablespaces need to be made read-write and imported into the database before they can be migrated into ASM using this procedure.
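A minimal sketch of that preparatory step, using a hypothetical transported tablespace named TTS_DATA:

-- run after the transportable tablespace has been imported (plugged in);
-- the tablespace name is illustrative
ALTER TABLESPACE tts_data READ WRITE;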

Determine the Names of the Database Files and Control Files

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 26 12:10:19 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
D:\oracle\product\10.2.0\oradata\esademo\esademo\system01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\undotbs01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\sysaux01.dbf
D:\oracle\product\10.2.0\oradata\esademo\esademo\users01.dbf

SYS@ ESADEMO>show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string d:\oracle\product\10.2.0\orada
ta\esademo\esademo\control01.c
tl, d:\oracle\product\10.2.0\o
radata\esademo\esademo\control
02.ctl, d:\oracle\product\10.2
.0\oradata\esademo\esademo\con
trol03.ctl
SYS@ ESADEMO>


Disk-Based Migration of a Database to ASM

If you have enough disk space that you can have both your entire non-ASM database and your ASM disk group on disk at the same time, you can do the migration directly without using tapes.
The procedure differs slightly between primary and standby databases. A number of the steps described in this procedure apply only in one or the other case. There are also a few steps where the procedure is different depending upon whether you are using a recovery catalog. The steps that vary are identified as necessary in the description of the process.
The process described in this section is intended to minimize downtime for your database. The example assumes that the destination ASM disk group for database files is +ASM_NEW. During the migration process, all flashback logs are discarded. As a result, any guaranteed restore points in the database become unusable. You should drop all guaranteed restore points before performing the migration.
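A quick sketch of checking for and dropping guaranteed restore points before starting (the restore point name BEFORE_MIG is hypothetical):

-- list restore points and whether they are guaranteed
SELECT name, guarantee_flashback_database FROM v$restore_point;

-- drop any guaranteed restore point found (name is illustrative)
DROP RESTORE POINT before_mig;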

To perform the migration, carry out the following steps:
Step – 1 Back up your database files as copies to the ASM disk group.

C:\>set ORACLE_SID=+ASM

C:\>asmcmd
ASMCMD> ls
ASM/
ASM_NEW/
ASMCMD> cd ASM_NEW
ASMCMD> mkdir ESADEMO
ASMCMD> cd ESADEMO
ASMCMD> pwd
+ASM_NEW/ESADEMO
ASMCMD>


C:\>set oracle_sid=ESADEMO

C:\>rman target / catalog rman/password@rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Sep 26 12:28:14 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ESADEMO (DBID=1806048469, not open)
connected to recovery catalog database

RMAN> BACKUP AS COPY INCREMENTAL LEVEL 0 DATABASE FORMAT '+ASM_NEW' TAG 'ORA_ASM_MIGRATION';
Starting backup at 26-SEP-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\SYSTEM01.DBF
output filename=+ASM_NEW/esademo/datafile/system.258.634307507 tag=ORA_ASM_MIGRATION recid=1 stamp=634307554
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\SYSAUX01.DBF
output filename=+ASM_NEW/esademo/datafile/sysaux.257.634307563 tag=ORA_ASM_MIGRATION recid=2 stamp=634307586
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:26
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\UNDOTBS01.DBF
output filename=+ASM_NEW/esademo/datafile/undotbs1.256.634307589 tag=ORA_ASM_MIGRATION recid=3 stamp=634307593
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\USERS01.DBF
output filename=+ASM_NEW/esademo/datafile/users.259.634307597 tag=ORA_ASM_MIGRATION recid=4 stamp=634307597
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+ASM_NEW/esademo/controlfile/backup.260.634307599 tag=ORA_ASM_MIGRATION recid=5 stamp=634307600
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 26-SEP-07
RMAN>


Step – 2 Create a copy of the SPFILE in the ASM disk group. In this example, the SPFILE for the migrated database will be stored as +ASM_NEW/spfile.
RMAN> run {
2> BACKUP AS BACKUPSET SPFILE;
3> RESTORE SPFILE TO "+ASM_NEW/spfile";
4> }

Starting backup at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=157 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 26-SEP-07
channel ORA_DISK_1: finished piece 1 at 26-SEP-07
piece handle=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1 tag=TAG20070926T125327 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 26-SEP-07

Starting restore at 26-SEP-07
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: restoring SPFILE
output filename=+ASM_NEW/spfile
channel ORA_DISK_1: reading from backup piece D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\07ISTI67_1_1 tag=TAG20070926T125327
channel ORA_DISK_1: restore complete, elapsed time: 00:00:17
Finished restore at 26-SEP-07


Step – 3 At this point, if you want the option of easily returning the database to non-ASM storage later, make copies of your current control file and all online logs. This command backs up the current control file to a non-ASM location:
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT 'd:\oracle\product\10.2.0\oradata\ESADEMO\pre-ASM-controfile.cf';

Starting backup at 26-SEP-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\PRE-ASM-CONTROFILE.CF tag=TAG20070926T130231 recid=6 stamp=634309352
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 26-SEP-07

RMAN>

Step – 4 Now create an init.ora specifying the location of the new SPFILE, and start the instance with it. For example, create /tmp/pfile.ora with the following contents:

SPFILE=+ASM_NEW/spfile

Now start the database in NOMOUNT

SYS@ ESADEMO>shu immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SYS@ ESADEMO>startup nomount
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 62915940 bytes
Database Buffers 100663296 bytes
Redo Buffers 2945024 bytes

SYS@ ESADEMO>show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ----------------
spfile string +ASM_NEW/spfile
SYS@ ESADEMO>
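If the pointer pfile is not kept in the default location (ORACLE_HOME\database on Windows), the instance can also be started with an explicit pfile; a sketch, with an illustrative path:

SYS@ ESADEMO>startup nomount pfile='d:\temp\pfile_esademo.ora'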


Step – 5 Migrate the control file to ASM.
In SQL*Plus, change the CONTROL_FILES initialization parameter using the following command:
SYS@ ESADEMO>alter system set control_files='+ASM_NEW/ct1.f','+ASM_NEW/ct2.f' scope=spfile sid='*';

System altered.

SYS@ ESADEMO>

Step – 6 Shut down and start up in NOMOUNT again, so that the changed parameters take effect. (The CONTROL_FILES parameter change only takes effect upon a restart because it is a static parameter.)

Then, use RMAN to create the new control files in ASM by restoring from the non-ASM control file copy made in Step 3:

C:\>rman target / catalog rman/password@rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Sep 26 13:19:31 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ESADEMO (not mounted)
connected to recovery catalog database

RMAN> shutdown immediate;

Oracle instance shut down

RMAN> startup nomount;

Oracle instance started

Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 62915940 bytes
Database Buffers 100663296 bytes
Redo Buffers 2945024 bytes

RMAN> restore controlfile from 'd:\oracle\product\10.2.0\oradata\ESADEMO\pre-ASM-controfile.cf';

Starting restore at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+ASM_NEW/esademo/controlfile/ct1.f
output filename=+ASM_NEW/esademo/controlfile/ct2.f
Finished restore at 26-SEP-07

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+ASM_NEW/esademo/datafile/system.258.634307507"
datafile 2 switched to datafile copy "+ASM_NEW/esademo/datafile/undotbs1.256.634307589"
datafile 3 switched to datafile copy "+ASM_NEW/esademo/datafile/sysaux.257.634307563"
datafile 4 switched to datafile copy "+ASM_NEW/esademo/datafile/users.259.634307597"

RMAN> recover database;

Starting recover at 26-SEP-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

starting media recovery

archive log thread 1 sequence 1 is already on disk as file D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\REDO03.LOG
archive log filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\REDO03.LOG thread=1 sequence=1
media recovery complete, elapsed time: 00:00:01
Finished recover at 26-SEP-07

RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN>


SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 26 13:39:33 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
esademo OPEN


SYS@ ESADEMO>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+ASM_NEW/esademo/datafile/system.258.634307507
+ASM_NEW/esademo/datafile/undotbs1.256.634307589
+ASM_NEW/esademo/datafile/sysaux.257.634307563
+ASM_NEW/esademo/datafile/users.259.634307597

SYS@ ESADEMO>

Step – 7 Moving the online redo logs to ASM

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SYS@ ESADEMO>declare
2 cursor rlc is
3 select group# grp, thread# thr, bytes/1024 bytes_k, 'NO' srl
4 from v$log
5 union
6 select group# grp, thread# thr, bytes/1024 bytes_k, 'YES' srl
7 from v$standby_log
8 order by 1;
9 stmt varchar2(2048);
10 swtstmt varchar2(1024) := 'alter system switch logfile';
11 ckpstmt varchar2(1024) := 'alter system checkpoint global';
12 begin
13 for rlcRec in rlc loop
14 if (rlcRec.srl = 'YES') then
15 stmt := 'alter database add standby logfile thread ' ||
16 rlcRec.thr || ' ''+ASM_NEW'' size ' ||
17 rlcRec.bytes_k || 'K';
18 execute immediate stmt;
19 stmt := 'alter database drop standby logfile group ' || rlcRec.grp;
20 execute immediate stmt;
21 else
22 stmt := 'alter database add logfile thread ' ||
23 rlcRec.thr || ' ''+ASM_NEW'' size ' ||
24 rlcRec.bytes_k || 'K';
25 execute immediate stmt;
26 begin
27 stmt := 'alter database drop logfile group ' || rlcRec.grp;
28 dbms_output.put_line(stmt);
29 execute immediate stmt;
30 exception
31 when others then
32 execute immediate swtstmt;
33 execute immediate ckpstmt;
34 execute immediate stmt;
35 end;
36 end if;
37 end loop;
38 end;
39 /

PL/SQL procedure successfully completed.

SYS@ ESADEMO>select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+ASM_NEW/esademo/onlinelog/group_3.268.634311897
+ASM_NEW/esademo/onlinelog/group_2.266.634311889
+ASM_NEW/esademo/onlinelog/group_1.265.634311883
+ASM_NEW/esademo/onlinelog/group_5.267.634311877

SYS@ ESADEMO>


Step – 8 Moving the tempfiles to ASM

SYS@ ESADEMO>select name from v$tempfile;

NAME
------------------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\ESADEMO\ESADEMO\TEMP01.DBF

RMAN> run {
2> set newname for tempfile 1 to '+ASM_NEW/temp01.dbf';
3> switch tempfile all;
4> }

executing command: SET NEWNAME

renamed temporary file 1 to +ASM_NEW/temp01.dbf in control file
starting full resync of recovery catalog
full resync complete

RMAN>

SYS@ ESADEMO>select name from v$tempfile;

NAME
----------------------------------------------
+ASM_NEW/temp01.dbf

Tuesday, September 25, 2007

Data Pump in Oracle 10g



Some drawbacks of the original exp/imp utilities

· Operations are difficult to restart
· Execution is client-side and single-threaded
· Dump files can grow enormously
· Tuning mechanisms are limited
· Progress monitoring is difficult
· Database object filtering features are limited


Data Pump Concepts
The DataPump is the next logical progression in a series of improvements that have appeared during the last few Oracle releases. The two most dramatic upgrades involve the ability to read data extremely quickly via direct path unloading, and the capability to write data directly to external tables.
DataPump Components
DataPump consists of three components that work in concert to manage and perform operations:

Command-Line Interface. Like the original Export (EXP.EXE) and Import (IMP.EXE) command-line interfaces, the DataPump provides two command-line interfaces, EXPDP.EXE and IMPDP.EXE, for controlling DataPump Export and Import operations, respectively.

DataPump also expands the command-line interface's capabilities by providing Interactive-Command Mode. This mode allows the DBA to start a DataPump operation and then disconnect from it simply by issuing a CTRL+C keystroke. Later, the DBA opens another DataPump session, specifies the operation's job name, and issues the ATTACH directive to reconnect to the session. The DBA can then issue a series of commands, from taking the pulse of the operation via the STATUS directive, to adding more export dump files, to terminating a long-running operation if desired.
DBMS_METADATA. Introduced in Oracle9i, this PL/SQL supplied package provides methods to extract and format metadata - literally, "information about information" - from an Oracle database. At its core, DBMS_METADATA stores metadata in XML format for ultimate flexibility in its presentation of that metadata.


DataPump Export uses DBMS_METADATA to gather all metadata about the database objects being processed then stores that metadata in the Export operation's master table. DataPump Import uses the corresponding master table to extract information from the export dump file(s), and then uses DBMS_METADATA to create the DDL statements it needs to create new database objects as part of the Import operation.

DBMS_DATAPUMP. At the heart of the DataPump is the new DBMS_DATAPUMP PL/SQL supplied package. This package contains methods for exporting data from and importing data into any Oracle database. EXPDP and IMPDP are actually making calls to this new package to perform their respective operations.
What is especially powerful about DBMS_DATAPUMP is that a DBA can also utilize it directly, either in anonymous PL/SQL blocks or within a stored procedure, stored function, or package, to create custom DataPump Export and Import jobs. I will demonstrate how to accomplish this in the final article in this series when we look at some of the other advanced features of the DataPump.
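As a preview, here is a minimal sketch of such a call: a schema-mode export driven directly through DBMS_DATAPUMP. The schema, job name, and file names are illustrative, and the directory object is assumed to already exist.

declare
  h          number;
  job_state  varchar2(30);
begin
  -- create a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP_PLSQL');
  -- dump file and log file go to an existing directory object
  dbms_datapump.add_file(handle    => h,
                         filename  => 'scott_exp.dmp',
                         directory => 'XTERN_DATA_DIR');
  dbms_datapump.add_file(handle    => h,
                         filename  => 'scott_exp.log',
                         directory => 'XTERN_DATA_DIR',
                         filetype  => dbms_datapump.ku$_file_type_log_file);
  -- restrict the job to a single schema
  dbms_datapump.metadata_filter(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, job_state);
  dbms_output.put_line('Job finished with state: ' || job_state);
end;
/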
DataPump Jobs, Master Processes, and the Master Table
DataPump also implements major (and in my opinion, elegant!) improvements to the execution and management of Export and Import operations. Each DataPump operation uses a master process to manage and control the export or import session. Each master process is identified by a unique job name, and DataPump automatically assigns one by default if one is not specified at execution time.
The master process controls the DataPump operation by invoking at least one worker process that actually performs the Export or Import. (This is extremely similar to the way Recovery Manager (RMAN) manages backup, restoration and recovery processing.) If the DataPump operation has requested additional parallel processes for multiple-threaded processing, more than one worker process will be created and managed.
The master process also creates a master table as a user table within the user account in which the DataPump job is invoked. This master table is always named the same as the DataPump job, and it is used slightly differently depending on the type of DataPump operation.
During an Export operation, the master table is created when the job commences, contains all target objects for the Export, and is retained until the job finishes successfully, at which point it is dropped. However, if the job is paused or fails due to error, the Export process uses the master table to determine what objects still need to be exported and how best to resume the job. Once the job completes successfully, the master table's contents are written to the Export file itself for use by an Import process.
On the other hand, an Import operation reads the master table from the Export dump file and then creates the master table under the appropriate user account. During an Import, the master process uses the master table to decide which database objects need to be created and in what order before any data is actually imported.
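While a job is running (or stopped for restart), the master table is visible as an ordinary table in the invoking schema; a quick sketch, assuming the SYSADM_FULL_EXPORT_SYSADM job shown later in this post:

-- the master table carries the same name as the DataPump job
SELECT owner, table_name
FROM   dba_tables
WHERE  table_name = 'SYSADM_FULL_EXPORT_SYSADM';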



How Does Data Pump Access Data?
Data Pump supports two access methods to load and unload table row data: direct path and external tables. Because both methods support the same external data representation, data that is unloaded with one method can be loaded using the other method. Data Pump automatically chooses the fastest method appropriate for each table.
Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or reenabled.
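A sketch of finding and rebuilding a disabled (unusable) unique index before the load; the table and index names are illustrative:

-- find unique indexes that are not usable on the target table
SELECT index_name, status
FROM   dba_indexes
WHERE  table_name = 'EMP_HIST'
AND    uniqueness = 'UNIQUE'
AND    status <> 'VALID';

-- rebuild (re-enable) the index, or drop it before the load
ALTER INDEX emp_hist_uk REBUILD;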

Using Data Pump
To make full use of Data Pump technology, you must be a privileged user. Privileged users have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Nonprivileged users have neither.
Privileged users can do the following:
· Export and import database objects owned by others
· Export and import nonschema-based objects such as tablespace and schema definitions, system privilege grants, resource plans, and so forth
· Attach to, monitor, and control Data Pump jobs initiated by others
· Perform remapping operations on database datafiles
· Perform remapping operations on schemas other than their own
bash-2.05$ export ORACLE_SID=FIN10g
bash-2.05$ sqlplus "/ as sysdba"


SQL*Plus: Release 10.1.0.2.0 - Production on Wed Sep 19 11:58:33 2007

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> create user dbpumpowner identified by dbpumpowner;

User created.

SQL> grant connect, resource, exp_full_database, imp_full_database to dbpumpowner;

Grant succeeded.

SQL>

External Tables
The Oracle database has provided an external tables capability since Oracle9i that allows reading of data sources external to the database. As of Oracle Database 10g, the external tables feature also supports writing database data to destinations external to the database. Data Pump provides an external tables access driver (ORACLE_DATAPUMP) that reads and writes files. The format of the files is the same format used with the direct path method. This allows for high-speed loading and unloading of database tables.

External tables by example
Step – 1 First create a directory at the OS level with appropriate permissions
bash-2.05$ pwd
/home/oracle
bash-2.05$ mkdir ext_data_dir
bash-2.05$ ls -l | grep ext_
drwxr-xr-x 2 oracle dba 512 Sep 19 12:10 ext_data_dir
bash-2.05$
Step – 2 Put the external table's data file in the data directory
C:\>ftp 172.16.8.206
Connected to 172.16.8.206.
220 IGSMALPPS04 FTP server ready.
User (172.16.8.206:(none)): oracle
331 Password required for oracle.
Password:
230 User oracle logged in.
ftp> lcd
Local directory now C:\.
ftp> pwd
257 "/home/oracle" is current directory.
ftp> cd /home/oracle/ext_data_dir/data
250 CWD command successful.
ftp> bin
200 Type set to I.
ftp> pwd
257 "/home/oracle/ext_data_dir/data" is current directory.
ftp> put External_table_test.csv
200 PORT command successful.
150 Opening BINARY mode data connection for External_table_test.csv.
226 Transfer complete.



bash-2.05$ pwd
/home/oracle/ext_data_dir/data
bash-2.05$ ls -l
total 0
-rw-r--r-- 1 oracle dba 633 Sep 19 12:16 External_table_test.csv
bash-2.05$
bash-2.05$ cat External_table_test.csv
3/9/2007,Monday,backup review,System Monitoring ,lunch,System Monitoring
4/9/2007,Tusday,backup review,System Monitoring ,lunch,System Monitoring and processed user requests
5/9/2007,Wednesday,backup review,FINPROD Instance Cloaning for testing ,lunch,System Monitoring
6/9/2007,Thursday,backup review,System Monitoring and processed user requests,lunch,System Monitoring
7/9/2007,Friday,backup review,System Monitoring ,lunch,System Monitoring and processed user requests
10/9/2007,Monday,backup review,System Monitoring ,lunch,System Monitoring
11/9/2007,Tusday,backup review,System Monitoring ,lunch,System Monitoring
bash-2.05$

Step – 3 Create the directory object in the database

bash-2.05$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.1.0.2.0 - Production on Wed Sep 19 12:33:41 2007

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> create or replace directory xtern_data_dir as '/home/oracle/ext_data_dir/data';

Directory created.

SQL> grant read,write on directory xtern_data_dir to scott;

Grant succeeded.

SQL>


Step – 4 Create the external table

SQL> conn scott/tiger@FIN10g
Connected.
SQL> create table ext_emp_daily_rpt
  2  ( work_date varchar2(20),
  3    day varchar2(20),
  4    Activity1 varchar2(50),
  5    Activity2 varchar2(50),
  6    Activity3 varchar2(10),
  7    Activity4 varchar2(70)
  8  )
  9  organization external
 10  ( default directory xtern_data_dir
 11    access parameters
 12    ( records delimited by newline
 13      fields terminated by ','
 14    )
 15    location ('External_table_test.csv')
 16  );

Table created.

Step – 5 Query the external table; the SELECT statement actually fetches data from the CSV file.

SQL> select * from ext_emp_daily_rpt;

WORK_DATE DAY ACTIVITY1 ACTIVITY2 ACTIVITY3 ACTIVITY4
---------- ---------- --------------- -------------------------------------------------- ---------- ----------------------------------------------------------
3/9/2007 Monday backup review System Monitoring lunch System Monitoring
4/9/2007 Tusday backup review System Monitoring lunch System Monitoring and processed user requests
5/9/2007 Wednesday backup review FINPROD Instance Cloaning for testing lunch System Monitoring
6/9/2007 Thursday backup review System Monitoring and processed user requests lunch System Monitoring
7/9/2007 Friday backup review System Monitoring lunch System Monitoring and processed user requests
10/9/2007 Monday backup review System Monitoring lunch System Monitoring
11/9/2007 Tusday backup review System Monitoring lunch System Monitoring

7 rows selected.

Direct-Path INSERT

Oracle inserts data into a table in one of two ways:
· During conventional insert operations, Oracle reuses free space in the table, interleaving newly inserted data with existing data. During such operations, Oracle also maintains referential integrity constraints.
· During direct-path INSERT operations, Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. Together, these behaviors can enhance performance.
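A conventional INSERT becomes a direct-path INSERT when the APPEND hint is used; a small sketch with illustrative table names:

-- data is appended above the high-water mark, bypassing the buffer cache
INSERT /*+ APPEND */ INTO sales_history
SELECT * FROM sales;

-- the table cannot be queried in the same session until the transaction is committed
COMMIT;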

Situations in Which Direct Path Load Is Not Used
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:
· A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
· A domain index exists for a LOB column.
· A table is in a cluster.
· There is an active trigger on a pre-existing table.
· Fine-grained access control is enabled in insert mode on a pre-existing table.
· A table contains BFILE columns or columns of opaque types.
· A referential integrity constraint is present on a pre-existing table.
· A table contains VARRAY columns with an embedded opaque type.
· The table has encrypted columns.
· The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
o There is an active trigger
o The table is partitioned
o Fine-grained access control is in insert mode
o A referential integrity constraint exists
o A unique index exists
· Supplemental logging is enabled and the table has at least one LOB column.





Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, Data Pump uses the external table method to unload data, rather than direct path:
· Fine-grained access control for SELECT is enabled.
· The table is a queue table.
· The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
· The table contains encrypted columns.
· The table contains a column of an evolved type that needs upgrading.
· The table contains a column of type LONG or LONG RAW that is not last.
Data Pump uses external tables as the data access mechanism in the following situations:
· Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage
· Loading tables with global or domain indexes defined on them, including partitioned object tables
· Loading tables with active triggers or clustered tables
· Loading and unloading tables with encrypted columns
· Loading tables with fine-grained access control enabled for inserts
· Loading tables that are partitioned differently at load time and unload time

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.
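For comparison, a sketch of manually unloading data with the ORACLE_DATAPUMP driver (the table and file names are illustrative; the xtern_data_dir directory object from the earlier example is assumed):

-- writes emp's rows to a Data Pump format file in the directory object
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL
( TYPE oracle_datapump
  DEFAULT DIRECTORY xtern_data_dir
  LOCATION ('emp_unload.dmp')
)
AS SELECT * FROM emp;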



New features that provide increased performance and flexibility in Oracle Data Pump
· The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See PARALLEL for information about using this parameter in export. See PARALLEL for information about using this parameter in import. (This feature is available only in the Enterprise Edition of Oracle Database 10g.)
· The ability to restart Data Pump jobs. See START_JOB for information about restarting export jobs. See START_JOB for information about restarting import jobs.
· The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job. See ATTACH for information about using this parameter in export. See ATTACH for information about using this parameter in import.
· Support for export and import operations over the network, in which the source of each operation is a remote instance. See NETWORK_LINK for information about using this parameter in export. See NETWORK_LINK for information about using this parameter in import.
· The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. See REMAP_DATAFILE.
· Enhanced support for remapping tablespaces during an import operation. See REMAP_TABLESPACE.
· Support for filtering the metadata that is exported and imported, based upon objects and object types. For information about filtering metadata during an export operation, see INCLUDE and EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE and EXCLUDE.
· Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available in Import's Interactive-Command Mode.
· The ability to estimate how much space an export job would consume, without actually performing the export. See ESTIMATE_ONLY.


· The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. See VERSION for more information about using this parameter in export.
In import jobs, VERSION applies only to operations over the network. This means that VERSION applies to the version of database objects to be extracted from the source database. See VERSION for more information about using this parameter in import.
For additional information about using different versions, see Moving Data Between Different Database Versions.
· Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.) See Default Locations for Dump, Log, and SQL Files for information about some of the implications of server-based operations.



Monitoring DataPump Operations

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS, that allow the DBA to monitor the progress of all DataPump operations.
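A sketch of using these views to see which jobs exist and which sessions are attached to them:

-- all DataPump jobs known to the database
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs;

-- sessions currently attached to DataPump jobs
SELECT owner_name, job_name, saddr
FROM   dba_datapump_sessions;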

Data Pump by example

bash-2.05$ expdp sysadm/sysadm@FIN10g tables=psoprdefn directory=xtern_data_dir dumpfile=PSORDEFN.dmp job_name=PSOPRDEFN_EXPORT

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 13:32

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production With the Partitioning, OLAP and Data Mining options
Starting "SYSADM"."PSOPRDEFN_EXPORT": sysadm/********@FIN10g tables=psoprdefn directory=xtern_data_dir dumpfile=PSORDEFN.dmp job_name=PSOPRDEFN_EXPORT
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SYSADM"."PSOPRDEFN" 86.40 KB 465 rows
Master table "SYSADM"."PSOPRDEFN_EXPORT" successfully loaded/unloaded
************************************************************************
Dump file set for SYSADM.PSOPRDEFN_EXPORT is:
/home/oracle/ext_data_dir/data/PSORDEFN.dmp
Job "SYSADM"."PSOPRDEFN_EXPORT" successfully completed at 13:33
bash-2.05$





bash-2.05$ expdp sysadm/sysadm@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 13:42

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSADM"."SYSADM_FULL_EXPORT_SYSADM": sysadm/********@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 2.270 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
^C
Export>

Export>

Export> help
The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description
------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

Export> status
Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_OMC_RGRP_TAO6
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 22,846

Export>

Export> parallel=4

Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_PO_CMFRTMS_T10
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 27,531

Worker 2 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_VCHR_ACCTG_LINE
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 3 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PSAPMSGPUBCERRP
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 4 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PSPCMPROG
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Export>

Export>

Export>


Stop a Running data pump Job

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y


bash-2.05$

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
ESTIMATE000028 NOT RUNNING
SYSADM_FULL_EXPORT_SYSADM NOT RUNNING

SQL>



Restart Stopped data pump Job

bash-2.05$ expdp sysadm/sysadm@FIN10g attach=SYSADM_FULL_EXPORT_SYSADM

Export: Release 10.1.0.2.0 - 64bit Production on Wednesday, 19 September, 2007 15:26

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production With the Partitioning, OLAP and Data Mining options

Job: SYSADM_FULL_EXPORT_SYSADM
Owner: SYSADM
Operation: EXPORT
Creator Privs: FALSE
GUID: 3A799A403E455650E0440003BAC70D65
Start Time: Wednesday, 19 September, 2007 15:26
Mode: SCHEMA
Instance: FIN10g
Max Parallelism: 4
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND sysadm/********@FIN10g directory=xtern_data_dir dumpfile=SYSADM_FULL.dmp job_name=SYSADM_FULL_EXPORT_SYSADM
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY XTERN_DATA_DIR
LOG_FILE_NAME export.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: IDLING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 24,576

Worker 1 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PS_PST_VAT_TMP5
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 31,533

Worker 2 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PS_VCHR_ACCTG_LINE
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 3 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PSAPMSGPUBCERRP
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658

Worker 4 Status:
State: UNDEFINED
Object Schema: SYSADM
Object Name: PSPCMPROG
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 41,658




Export> start_job


Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 28,672

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_ADB_TREE_SEL016
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 967

Export>

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
SYSADM_FULL_EXPORT_SYSADM EXECUTING
ESTIMATE000028 NOT RUNNING

SQL>


Add a Dump file
At this juncture, a dump file can be added by issuing the ADD_FILE directive at the Export> prompt. The new dump file will automatically be created in the same directory as our original dump file.

Export> ADD_FILE=ADDITION_FIN10g_SYSADM.DMP

Export> status

Job: SYSADM_FULL_EXPORT_SYSADM
Operation: EXPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/ext_data_dir/data/SYSADM_FULL.dmp
bytes written: 28,672
Dump File: /home/oracle/ext_data_dir/data/ADDITION_FIN10g_SYSADM.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSADM
Object Name: PS_EE_PO5_TMP
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 11,248

Export>



And many more…

Monday, September 17, 2007

Working with Undo Tablespace and Undo Retention

(1) The formula for calculating the optimal UNDO_RETENTION is:
OPTIMAL UNDO RETENTION = ACTUAL UNDO SIZE / ( DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC )


-->Actual undo tablespace size can be seen from:-

select bytes from dba_data_files where tablespace_name like '%UNDO%';


-->Actual DB_BLOCK_SIZE can be seen from pfile or:-

select name,value from v$parameter where name like '%block%';


-->For UPS (undo blocks consumed per second), run the following query:-

Select sum(undoblks) / sum((end_time-begin_time)*86400)
from v$undostat;


Here is the single query which does it all:-

It reports the actual undo tablespace size, the configured undo retention, and the optimal undo retention (in seconds) required by the DB.


SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) * g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM ( SELECT SUM(a.bytes) undo_size
FROM v$datafile a,v$tablespace b,dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#) d,v$parameter e,v$parameter f,
( SELECT MAX(undoblks/((end_time-begin_time)*3600*24))undo_block_per_sec FROM v$undostat)g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/



The output will be something like this:-

ACTUAL UNDO SIZE [MByte]         17000
UNDO RETENTION [Sec]             18000
OPTIMAL UNDO RETENTION [Sec]      3338


(2) Now, to calculate the actual size needed for the undo tablespace, use the following formula:-

UR=undo retention
UPS=undo per second
DBS=data block size

Undo tablespace size [Bytes] = (UR * (UPS * DBS)) + (DBS * 24)

UR and DBS can be obtained from the parameter file or from the queries mentioned above;
for UPS, run the following query:-

Select sum(undoblks) / sum((end_time-begin_time)*86400)
from v$undostat;
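A sketch that stitches the three inputs together and returns the required undo tablespace size in bytes, following the formula above:

SELECT (ur.value * ups.blks_per_sec * dbs.value) + (dbs.value * 24) "UNDO TABLESPACE SIZE [Bytes]"
FROM  (SELECT value FROM v$parameter WHERE name = 'undo_retention') ur,
      (SELECT value FROM v$parameter WHERE name = 'db_block_size')  dbs,
      (SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400) blks_per_sec
       FROM v$undostat) ups;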

Monday, September 3, 2007

Moving Datafiles between ASM Diskgroups

Source DiskGroup - +ASM
Target Diskgroup - +DGFAIL


SYS@DEEPDB>ALTER DATABASE DATAFILE '+ASM/deepdb/datafile/users.259.631107019' OFFLINE;

Database altered.

SYS@DEEPDB>
========================================================================
C:\>rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Mon Sep 3 12:06:03 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: DEEPDB (DBID=3027735554)
======================================
RMAN> COPY DATAFILE '+ASM/deepdb/datafile/users.259.631107019' TO '+DGFAIL';

Starting backup at 03-SEP-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=131 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=+ASM/deepdb/datafile/users.259.631107019
output filename=+DGFAIL/deepdb/datafile/users.260.632318885 tag=TAG20070903T120801 recid=2 stamp=632318886
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
Finished backup at 03-SEP-07

RMAN>
====================================================
SYS@DEEPDB>ALTER DATABASE RENAME FILE '+ASM/deepdb/datafile/users.259.631107019' TO '+DGFAIL/deepdb/datafile/users.260.632318885';

Database altered.

SYS@DEEPDB>
=================================================
RMAN> SWITCH DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' TO COPY;

datafile 4 switched to datafile copy "+DGFAIL/deepdb/datafile/users.260.632318885"

RMAN>
=================================================
RMAN> recover datafile '+DGFAIL/deepdb/datafile/users.260.632318885';

Starting recover at 03-SEP-07
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:06

Finished recover at 03-SEP-07

RMAN>

====================================================
SYS@DEEPDB>ALTER DATABASE DATAFILE '+DGFAIL/deepdb/datafile/users.260.632318885' ONLINE;

Database altered.

SYS@DEEPDB>select name from v$datafile;

NAME
---------------------------------------------------------------------------------------------
+ASM/deepdb/datafile/system.256.631107019
+ASM/deepdb/datafile/undotbs1.258.631107019
+ASM/deepdb/datafile/sysaux.257.631107019
+DGFAIL/deepdb/datafile/users.260.632318885
SYS@DEEPDB>
=====================================================