NAVEEN

Sunday, January 17, 2010

Database hit ratios

Dictionary Cache Hit Ratio

The dictionary hit ratio displays the percentage of memory reads for the data dictionary and other objects.

Query for dictionary hit ratio

SQL> select (1-(sum(getmisses)/sum(gets))) * 100 "Hit Ratio" from v$rowcache;

Library Cache Hit Ratio

The library cache hit ratio reveals the percentage of memory reads for actual statements and PL/SQL objects. Note that a high hit ratio is not always good; see Chapter 4 for a detailed explanation.
Query for library hit ratio

select Sum(Pins) / (Sum(Pins) + Sum(Reloads)) * 100 "Hit Ratio"
from V$LibraryCache

Query to check database performance parameters

set linesize 120
set pages 200
Select * from v$sysmetric;

Script to create a database

create database primary
user sys identified by primary
user system identified by primary
maxinstances 1
maxloghistory 1
maxlogfiles 5
maxlogmembers 5
character set US7ASCII
national character set AL16UTF16
datafile 'G:\oracle\product\10.2.0\oradata\primary\system01.dbf' size 200M
extent management LOCAL
SYSAUX datafile 'G:\oracle\product\10.2.0\oradata\primary\sysaux01.dbf' size 200M
DEFAULT TEMPORARY tablespace temp01
tempfile 'G:\oracle\product\10.2.0\oradata\primary\temp01.dbf' size 100M
UNDO tablespace UNDOTBS1
datafile 'G:\oracle\product\10.2.0\oradata\primary\UNDOTBS1.dbf' size 200M
DEFAULT tablespace users
datafile 'G:\oracle\product\10.2.0\oradata\primary\users01.dbf' size 100M
LOGFILE group 1
('G:\oracle\product\10.2.0\oradata\primary\redo01.log') size 10M,
group 2
('G:\oracle\product\10.2.0\oradata\primary\redo02.log') size 10M;

===============control file creation======================================
CREATE CONTROLFILE SET DATABASE "PRIMARY" RESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 32
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\REDO02.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\SYSTEM01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\UNDOTBS1.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\SYSAUX01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMARY\USERS01.DBF'
CHARACTER SET US7ASCII

Create Logical volume in Linux

Basic LVM commands
Initializing disks or disk partitions
To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:

pvcreate /dev/hda
pvcreate /dev/hdb

If a Linux partition is to be converted make sure that it is given partition type 0x8E using fdisk, then use pvcreate:

pvcreate /dev/hda1

Creating a volume group
Once you have one or more physical volumes created, you can create a volume group from these PVs using the vgcreate command. The following command:

vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 65,536 (2^16) physical extents that can be allocated to logical volumes.
Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then adds that PV to volume_group_one:

pvcreate /dev/hdc
vgextend volume_group_one /dev/hdc

This same PV can be removed from volume_group_one by the vgreduce command:

vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.
Creating a logical volume
We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the volume group to a single linear LV called logical_volume_one with the following LVM command:

lvcreate -n logical_volume_one --size 255G volume_group_one

Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in the volume_group_one:

vgdisplay volume_group_one | grep "Total PE"

which returns

Total PE 65536

Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:

lvcreate -n logical_volume_one -l 65536 volume_group_one

To create a 1500MB linear LV named logical_volume_one and its block device special file /dev/volume_group_one/logical_volume_one use the following command:

lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.
Striped mappings can also be created with lvcreate. For example, to create a 255 GB logical volume with two stripes and a stripe size of 4 KB, the following command can be used:

lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:

lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a 128 GB LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with a stripe size of 4 KB.
An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:

umount /dev/volume_group_one/logical_volume_one
lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:

/dev/<volume group name>/<logical volume name>

so that if we had two volume groups myvg1 and myvg2 and each with three logical volumes named lv01, lv02, lv03, six device special files would be created:

/dev/myvg1/lv01
/dev/myvg1/lv02
/dev/myvg1/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03

Extending a logical volume
An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:

lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while

lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3fs and GFS file systems online, without bringing the system down. (The ext3 file system can be shrunk or expanded offline using the ext2resize command.) To resize ext3fs, the following command

ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.
The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and it will be enlarged to fill the device, by default. If an optional size parameter is specified, then this size will be used instead.

How to get Scheduled jobs information in Oracle?

SQL> select TABLE_NAME from dict where table_name like 'DBA_S%';


DBA_SCHEDULER_PROGRAMS
DBA_SCHEDULER_JOBS
DBA_SCHEDULER_JOB_CLASSES
DBA_SCHEDULER_WINDOWS
DBA_SCHEDULER_PROGRAM_ARGS
DBA_SCHEDULER_JOB_ARGS
DBA_SCHEDULER_JOB_LOG
DBA_SCHEDULER_JOB_RUN_DETAILS
DBA_JOBS


--scheduled job run details--------------------
SQL> set linesize 800
SQL> select LOG_ID,LOG_DATE,OWNER,JOB_NAME,STATUS,ERROR#,REQ_START_DATE,ACTUAL_START_DATE,RUN_DURATION,SESSION_ID,CPU_USED,ADDITIONAL_INFO from DBA_SCHEDULER_JOB_RUN_DETAILS;
--------------------------------------------------

--scheduled job logs------

SQL> select LOG_ID,LOG_DATE,OWNER, JOB_NAME,STATUS,USER_NAME,CLIENT_ID,GLOBAL_UID,ADDITIONAL_INFO from DBA_SCHEDULER_JOB_LOG;

-------------------------DBA JOBS---------------


SQL> SELECT LOG_USER,SCHEMA_USER,LAST_DATE,LAST_SEC,THIS_DATE,THIS_SEC,NEXT_DATE,NEXT_SEC,TOTAL_TIME,BROKEN,INTERVAL,FAILURES,WHAT,INSTANCE FROM DBA_JOBS;

----------------------------------------------------SCHEDULED JOBS--------------


SQL> SELECT OWNER,CLIENT_ID,PROGRAM_OWNER,PROGRAM_NAME,SCHEDULE_OWNER,START_DATE,END_DATE,ENABLED,STATE,RUN_COUNT,LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE,MAX_RUN_DURATION,COMMENTS FROM DBA_SCHEDULER_JOBS;

------------------------------------------------------------------------------

Thursday, January 14, 2010

Clone a Database Using Hot Backup

Introduction

This document gives a detailed flow of how to back up a database in a 24x7 operation and use that backup to clone a development or test database. By switching the log files at the right points, it reduces the number of archive logs to be applied and therefore the recovery time (the goal). Most sites running Oracle databases need the latest production database cloned for the development and test teams to work on, and these clones need to be refreshed periodically from the production database and application. Most DBAs use cold backup procedures to clone a database. Although cold backups are the most common procedure for database cloning, there may be situations where no downtime is available. Hot backups can be performed while the database is up, provided it is in archive log mode.
Scope

This document explains, in detail, how to perform user-managed cloning of an active production database into a test/development environment. It does not cover cloning with the Recovery Manager (RMAN) utility provided by Oracle.
Definitions and Assumptions

Source Database: The active production instance/database to be cloned.
Target Database: The cloned database, typically a test/development database.

Cloning steps
Switching Logs


We need to make sure that all the present online logs are switched and archived, so that the time to do the media recovery to the cloned database is as small as possible.
Query the number of log groups that exists in the database and switch as many times as there are log groups.
Code:
Select group#,archived,status from v$log;

GROUP# ARCHIVED STATUS
1 YES INACTIVE
2 NO CURRENT
As we can see from the example above, we need to switch logs to archive the online logs.
Code:
alter system switch logfile;
System altered.
Make sure that no log group shows a “STALE” status.
Destination Directory structure creation

Create the destination directory structures where you would like to place the data files and redo log files. Also create the dump directories to hold the target database dumps (udump, bdump, cdump, adump, etc.).
Let’s note down the last archive log file from the archive log destination directory or from the alert log and then do another

Alter system switch logfile;

This archives the current log and also triggers a checkpoint. Remember that a log switch always causes a checkpoint, but a checkpoint does not necessarily cause a log switch. The data files therefore become consistent, since the current SCNs are stamped into them. We perform one switch for each log group we have. That means the first archive log file produced might be the full file size, while the subsequent ones will be smaller, depending on the data.
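As a sketch of that bookkeeping (assuming a single-instance database, so thread# = 1), the highest archived sequence can be read from v$archived_log before and after the copy, and ALTER SYSTEM ARCHIVE LOG CURRENT both switches and waits for the archiver:

```sql
-- Highest log sequence archived so far (note this number before the copy).
SELECT MAX(sequence#) AS last_archived_seq
FROM   v$archived_log
WHERE  thread# = 1;

-- Unlike SWITCH LOGFILE, ARCHIVE LOG CURRENT waits until the
-- current log has actually been archived before returning.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```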
Copy source database files to the target

The next step is to copy all the data files of the source database to the destination database directories.
Before starting to copy the files of a particular tablespace, we need to put that tablespace into hot backup mode.
By putting a tablespace into hot backup mode, we will ensure two things.

1) The first time a block is changed in a data file that is in hot backup mode, the entire block is written to the redo log files, not just the changed bytes. Normally only the changed bytes (a redo vector) are written. In hot backup mode, the entire block is logged the first time. This is because you can get into a situation where the process copying the data file and DBWR are working on the same block simultaneously.

2) The data file headers which contain the SCN of the last completed checkpoint are not updated while a file is in hot backup mode. This lets the recovery process understand what archive redo log files might be needed to fully recover this file.
To limit the effect of this additional logging, you should ensure you only place one tablespace at a time in backup mode and bring the tablespace out of backup mode as soon as you have backed it up. This will reduce the number of blocks that may have to be logged to the minimum possible.
List out all the tablespaces present in the system and their corresponding data files.
select tablespace_name,status,contents from dba_tablespaces

TABLESPACE_NAME STATUS CONTENTS
SYSTEM ONLINE PERMANENT
UNDOTBS1 ONLINE UNDO
SYSAUX ONLINE PERMANENT
TEMP ONLINE TEMPORARY
USERS ONLINE PERMANENT
FLIRT ONLINE PERMANENT
6 rows selected.

select tablespace_name,file_name from dba_data_files order by tablespace_name;
TABLESPACE_NAME FILE_NAME
FLIRT C:\FLIRT\DBFILES\FLIRT01.DBF
SYSAUX C:\FLIRT\DBFILES\SYSAUX01.DBF
SYSTEM C:\FLIRT\DBFILES\SYSTEM01.DBF
UNDOTBS1 C:\FLIRT\DBFILES\UNDOTBS01.DBF
USERS C:\FLIRT\DBFILES\USERS01.DBF
5 rows selected
Now put each tablespace into hot backup mode, copy its corresponding data files to the destination directory, and then bring the tablespace out of hot backup mode, one tablespace at a time.

Alter tablespace FLIRT begin backup;
!copy C:\FLIRT\DBFILES\FLIRT01.DBF C:\VALEN\DBFILES\FLIRT01.DBF
Alter tablespace FLIRT end backup;
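If a session is interrupted mid-copy, the files still in backup mode can be listed from v$backup (a sketch; file numbers and names vary per database):

```sql
-- Data files currently in hot backup mode show STATUS = 'ACTIVE'.
SELECT d.file#, d.name, b.status, b.time
FROM   v$backup b
       JOIN v$datafile d ON d.file# = b.file#
WHERE  b.status = 'ACTIVE';
```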

Switch logs again

Once the copy is complete, switch log groups the required number of times again. This way we need to copy over only the archive files from the one noted down before the backup up to the one produced by the switch after the backup. These will be few, so the recovery will also be faster when we apply them. We also have a known set of archive files, so consistency can be assured.

Backup the Control File.

Now we need a control file creation script for the target database. The best way to achieve this is to backup the source control file to trace with the following command.
Code:
alter database backup controlfile to trace;
Database altered.
Copy the appropriate archive logs and online redo logs to the target location.
Prepare Initialization Parameter File

Make appropriate changes to the initialization parameter file of the target db to reflect the new file locations and new database. Create appropriate target dump locations (adump, bdump, udump etc).
Creating Service and control files in target

Set the ORACLE_SID to the target database.
If the target host is Windows, you will need to create the database service with the oradim command. Use the following syntax to create the service:
oradim -new -sid <SID> -intpwd <password> -startmode manual
Start a SQL*Plus session as SYSDBA and start the instance in NOMOUNT mode with the target init file.
Open the control file trace backup. Towards the end, you will find a control file creation statement with the RESETLOGS option. The following is an example:
Quote:
CREATE CONTROLFILE reuse DATABASE "FLIRT" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 2920
LOGFILE
GROUP 1 'C:\FLIRT\REDO1\REDO01.LOG' SIZE 5M,
GROUP 2 'C:\FLIRT\REDO2\REDO02.LOG' SIZE 5M
-- STANDBY LOGFILE
DATAFILE
'C:\FLIRT\DBFILES\SYSTEM01.DBF',
'C:\FLIRT\DBFILES\UNDOTBS01.DBF',
'C:\FLIRT\DBFILES\SYSAUX01.DBF',
'C:\FLIRT\DBFILES\USERS01.DBF',
'C:\FLIRT\DBFILES\FLIRT01.DBF',
'C:\FLIRT\DBFILES\DGOD01.DBF'
CHARACTER SET WE8MSWIN1252
;
Change the first line of the statement to look like this:
Quote:
CREATE CONTROLFILE set DATABASE "VALEN" RESETLOGS ARCHIVELOG ……..
Also change the file locations mentioned in the statement to reflect the target file locations. After all the changes, the statement should look like the following:
Quote:
CREATE CONTROLFILE set DATABASE "VALEN" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 2920
LOGFILE
GROUP 1 'C:\valen\REDO1\REDO01.LOG' SIZE 5M,
GROUP 2 'C:\valen\REDO2\REDO02.LOG' SIZE 5M
-- STANDBY LOGFILE
DATAFILE
'C:\valen\DBFILES\SYSTEM01.DBF',
'C:\valen\DBFILES\UNDOTBS01.DBF',
'C:\valen\DBFILES\SYSAUX01.DBF',
'C:\valen\DBFILES\USERS01.DBF',
'C:\valen\DBFILES\flirt01.DBF',
'C:\valen\DBFILES\DGOD01.DBF'
CHARACTER SET WE8MSWIN1252
;
Go to the SQL*Plus session and execute the above (changed) control file creation statement. The control file should be created without any errors.

Apply Media Recovery

Now you need to apply media recovery to make the database consistent. Execute the following statement.
Quote:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
You may need to apply the online redo logs to make the media recovery complete.
After media recovery is complete, open the database in resetlogs mode.
Code:
Alter database open resetlogs;
Add tempfiles to the temporary tablespaces.
Code:
ALTER TABLESPACE TEMP ADD TEMPFILE
'C:\valen\DBFILES\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND OFF;

Cloning is complete!

Difference Between Truncate table and Drop table

Truncating a Table

TRUNCATE TABLE hr.employees;

• Truncating a table deletes all rows in the table and releases the used space.
• Corresponding indexes are also truncated.
• No undo data is generated, and the command commits implicitly, because TRUNCATE TABLE is a DDL command.
• A table that is being referenced by a foreign key cannot be truncated.
• The delete triggers do not fire when this command is used.

Dropping a Table

DROP TABLE hr.department
CASCADE CONSTRAINTS;

• When a table is dropped, the extents used by the table are released. If they are contiguous, they may be coalesced either automatically or manually at a later stage.
• The CASCADE CONSTRAINTS option is necessary if the table is the parent table in a foreign key relationship.
MANAGING TABLES:

There are several methods for storing user data in an Oracle database:
• Regular tables
• Partitioned tables
• Index-organized tables
• Clustered tables

Regular Table
A regular table (generally referred to as a “table”) is the most commonly used form of storing user data. This is the default table and is the main focus of discussion in this lesson. A
database administrator has very limited control over the distribution of rows in an unclustered
table. Rows can be stored in any order depending on the activity on the table.

Partitioned Table
A partitioned table enables the building of scalable applications. It has the following
characteristics:
• A partitioned table has one or more partitions, each of which stores rows that have been
partitioned using range partitioning, hash partitioning, composite partitioning, or list
partitioning.
• Each partition in a partitioned table is a segment and can be located in a different
tablespace.
• Partitions are useful for large tables that can be queried or manipulated using several
processes concurrently.
• Special commands are available to manage partitions within a table.

Index-Organized Table
An index-organized table is like a heap table with a primary key index on one or more of its columns. However, instead of maintaining two separate storage spaces for the table and a Btree index, an index-organized table maintains a single B-tree containing the primary key of the table and other column values. An overflow segment may exist due to the PCTTHRESHOLD value being set and the result of longer row lengths requiring the overflow area.
Index-organized tables provide fast key-based access to table data for queries involving exact matches and range searches.
Also, storage requirements are reduced because key columns are not duplicated in the table and index. The remaining non-key columns are stored in the index unless the index entry
becomes very large; in that case, the Oracle server provides an OVERFLOW clause to handle
the problem.
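A minimal index-organized table definition might look like the following sketch (the table name, PCTTHRESHOLD value, and tablespace are illustrative, not from the text above):

```sql
CREATE TABLE app_docs (
  doc_id NUMBER PRIMARY KEY,
  title  VARCHAR2(200),
  body   VARCHAR2(4000)
)
ORGANIZATION INDEX           -- rows live in the primary key B-tree
PCTTHRESHOLD 20              -- rows larger than 20% of a block spill over
OVERFLOW TABLESPACE users;   -- overflow segment for the non-key tail
```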

Clustered Table
A clustered table provides an optional method for storing table data. A cluster is made up of a
table or group of tables that share the same data blocks, which are grouped together because
they share common columns and are often used together.

Clusters have the following characteristics:
• Clusters have a cluster key, which is used to identify the rows that need to be stored
together.
• The cluster key can consist of one or more columns.
• Tables in a cluster have columns that correspond to the cluster key.
• Clustering is a mechanism that is transparent to the applications using the tables. Data
in a clustered table can be manipulated as though it were stored in a regular table.
• Updating one of the columns in the cluster key may entail physically relocating the row.
• The cluster key is independent of the primary key. The tables in a cluster can have a
primary key, which may be the cluster key or a different set of columns.
• Clusters are usually created to improve performance. Random access to clustered data
may be faster, but full table scans on clustered tables are generally slower.
• Clusters denormalize the physical storage of tables without affecting the logical
structure.
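As a sketch of the syntax (the names and SIZE value are illustrative), a cluster and its tables could be created like this:

```sql
-- Create a cluster keyed on department number, then two tables in it.
CREATE CLUSTER emp_dept_clu (deptno NUMBER(2))
  SIZE 512;   -- expected bytes per cluster key value

-- A cluster index is required before DML on the clustered tables.
CREATE INDEX idx_emp_dept_clu ON CLUSTER emp_dept_clu;

CREATE TABLE dept_c (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14)
) CLUSTER emp_dept_clu (deptno);

CREATE TABLE emp_c (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2)
) CLUSTER emp_dept_clu (deptno);
```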

Scalar data type:

Fixed-length character

• CHAR and NCHAR
• Upper limit of 2,000 bytes per column value.
• The default is 1 character or 1 byte, depending on the character set.

Variable-length character

• VARCHAR2 and NVARCHAR2
• Can vary in size for each row, up to 4,000 bytes.




Numeric Data

Numbers in an Oracle database are always stored as variable-length data.
They can store up to 38 significant digits. Numeric data types require:
– One byte for the exponent
– One byte for every two significant digits in the mantissa
– One byte for negative numbers if the number of significant digits is less than 38

DATE Data Type

The Oracle server stores dates in fixed-length fields of seven bytes. An Oracle DATE always includes the time.

TIMESTAMP Data Type

This data type stores the date and time, including fractional seconds up to 9 decimal places. TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE can use time zones to factor in items such as Daylight Saving Time. TIMESTAMP and TIMESTAMP WITH LOCAL TIME ZONE can be used in primary keys; TIMESTAMP WITH TIME ZONE cannot.
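For illustration (the table and column names are made up), the three timestamp variants can be declared side by side:

```sql
CREATE TABLE ts_demo (
  created_at  TIMESTAMP(6),                      -- no time zone stored
  created_tz  TIMESTAMP(6) WITH TIME ZONE,       -- zone stored with value
  created_ltz TIMESTAMP(6) WITH LOCAL TIME ZONE  -- normalized to DB zone
);
```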

RAW Data Type

This data type enables the storage of small binary data. The Oracle server does not perform character set conversion when RAW data is transmitted across machines in a network or moved from one database to another using Oracle utilities. Storage is the number of bytes needed for the actual column value, which can vary for each row, up to 2,000 bytes.

Long, Long Raw and Large Object (LOBs) Data Types

Oracle provides six data types for storing LOBs:
• CLOB and LONG for large fixed-width character data
• NCLOB for large fixed-width national character set data
• BLOB and LONG RAW for storing unstructured data
• BFILE for storing unstructured data in operating system files.
LONG and LONG RAW data types were previously used for unstructured data, such as
binary images, documents, or geographical information, and are primarily provided for
backward compatibility. These data types are superseded by the LOB data types. LOB data
types are distinct from LONG and LONG RAW, and they are not interchangeable. LOBs will
not support the LONG application programming interface (API), and vice versa.
Varying Arrays (VARRAY)

Varying arrays are useful to store lists that contain a small number of elements, such as phone numbers for a customer.

VARRAYS have the following characteristics:

• An array is an ordered set of data elements.
• All elements of a given array are of the same data type.
• Each element has an index, which is a number corresponding to the position of the
element in the array.
• The number of elements in an array determines the size of the array.
• The Oracle server allows arrays to be of variable size, which is why they are called
VARRAYs, but the maximum size must be specified when declaring the array type.
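A sketch of declaring and using a VARRAY type (the type, table, and values are illustrative):

```sql
-- A VARRAY of up to 5 phone numbers, used as a column type.
CREATE TYPE phone_list_t AS VARRAY(5) OF VARCHAR2(20);
/
CREATE TABLE customers_v (
  cust_id NUMBER PRIMARY KEY,
  phones  phone_list_t
);

-- The type's constructor builds the array value.
INSERT INTO customers_v VALUES (1, phone_list_t('555-0100', '555-0101'));
```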

Nested Tables

Nested tables provide a means of defining a table as a column within a table. They can be used to store sets that may have a large number of records such as number of items in an order.
Nested tables generally have the following characteristics:

• A nested table is an unordered set of records or rows.
• The rows in a nested table have the same structure.
• Rows in a nested table are stored separate from the parent table with a pointer from the
corresponding row in the parent table.
• Storage characteristics for the nested table can be defined by the database administrator.
• There is no predetermined maximum size for a nested table.
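A sketch of a nested table column (all names here are illustrative); the STORE AS clause names the separate segment that holds the nested rows:

```sql
CREATE TYPE order_item_t AS OBJECT (
  product_id NUMBER,
  quantity   NUMBER
);
/
CREATE TYPE order_item_tab_t AS TABLE OF order_item_t;
/
CREATE TABLE orders_n (
  order_id NUMBER PRIMARY KEY,
  items    order_item_tab_t
)
NESTED TABLE items STORE AS orders_n_items;  -- separate storage segment
```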

Relationship Data Types (REFs)

Relationship types are used as pointers within the database. The use of these types requires the Objects option. As an example, each item that is ordered could point to or reference a row in the PRODUCTS table, without having to store the product code.

Obtaining Undo Segments Information

• Data Dictionary Views

– DBA_ROLLBACK_SEGS

• Dynamic Performance Views

– V$ROLLNAME
– V$ROLLSTAT
– V$UNDOSTAT
– V$SESSION
– V$TRANSACTION

To obtain information about all the undo segments in the database

SQL > SELECT segment_name,owner,tablespace_name,status FROM dba_rollback_segs;

The OWNER column specifies the type of an undo segment:
• SYS refers to a private undo segment.
• PUBLIC refers to a public undo segment.

V$ROLLSTAT and V$ROLLNAME

Join the V$ROLLSTAT and V$ROLLNAME views to obtain the statistics of the undo
segments currently used by the instance.

SELECT n.name, s.extents, s.rssize,s.hwmsize, s.xacts, s.status
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;

What is UNDO_RETENTION ?

UNDO_RETENTION

• Determines how long to retain undo data to provide for consistent reads.
• Defined in seconds.

ALTER SYSTEM SET UNDO_RETENTION=900;

900 Sec = 15 min

Even with UNDO_RETENTION set, if the UNDO tablespace is sized too small, undo data is not retained for the time specified
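One way to sanity-check the sizing is to look at the peak undo generation rate in v$undostat; multiplying that rate by UNDO_RETENTION and the block size gives a rough minimum undo tablespace size (a sketch):

```sql
-- Peak undo blocks generated per second across recent 10-minute windows.
-- Rough minimum undo size = this rate * UNDO_RETENTION * DB_BLOCK_SIZE.
SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS undo_blocks_per_sec
FROM   v$undostat;
```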

Managing Undo Data in Oracle

Managing Undo Data

• Automatic Undo Management
• Manual Undo Management

Undo Segment

• A serial transaction uses only one undo segment to store all of its undo data.
• Many concurrent transactions can write to one undo segment.

Transaction Read Consistency
Read consistency is always provided for a SQL statement. However, you can request read consistency for a read-only transaction by issuing the following command at the beginning of the transaction:

SET TRANSACTION READ ONLY;

Or, you can request read consistency for a transaction performing DML by issuing the following command at the beginning of the transaction:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

In either case, the Oracle server provides data that is read consistent from the start of the transaction. Using SERIALIZABLE can have a negative impact on performance.

Types of Undo Segments

• SYSTEM: Used for objects in the SYSTEM tablespace
• Non-SYSTEM: Used for objects in other tablespaces:
– Auto Mode: Requires an UNDO tablespace
– Manual Mode:
– Private: Acquired by a single instance
– Public: Acquired by any instance
• Deferred: Used when tablespaces are taken offline immediate, temporary, or for recovery

Automatic Undo Management: Concepts

Undo segments are created with the naming convention:
_SYSSMUn$
For example:
_SYSSMU1$
_SYSSMU2$
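These generated names can be confirmed by querying v$rollname (a sketch):

```sql
-- Undo segment numbers and their system-generated names.
SELECT usn, name
FROM   v$rollname;
```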
Automatic Undo Management: Configuration

• Configure two parameters in the initialization file:
– UNDO_MANAGEMENT
– UNDO_TABLESPACE

EX:
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=UNDOTBS

UNDO_MANAGEMENT

 UNDO_MANAGEMENT cannot be changed dynamically after the database starts

UNDO_TABLESPACE

 altered dynamically using the ALTER SYSTEM

ALTER SYSTEM SET undo_tablespace = UNDOTBS;
Creating an undo tablespace:

SQL > CREATE UNDO TABLESPACE undo1 DATAFILE 'undo1db01.dbf' SIZE 20M;

• During database creation, if the UNDO_MANAGEMENT parameter is set to AUTO and you omit the UNDO tablespace clause from the CREATE DATABASE statement, then the Oracle server creates an UNDO tablespace with the name SYS_UNDOTBS.
• The default data file for the SYS_UNDOTBS tablespace is named 'dbu1.dbf' and located in $ORACLE_HOME/dbs; its size is operating system dependent, and AUTOEXTEND is ON.

SQL > ALTER TABLESPACE undotbs ADD DATAFILE 'undotbs2.dbf' SIZE 30M AUTOEXTEND ON;

Dropping an UNDO Tablespace

• To drop an active UNDO tablespace:
– Switch to a new UNDO tablespace
– Drop the tablespace after all current transactions are complete

SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS1;

SQL > DROP TABLESPACE UNDO1 including contents and datafiles cascade constraints;

Segments

Segments

Types of segments:

1. Table

• A table segment stores the data for a table that is neither clustered nor partitioned.
• All the data in a table segment must be stored in one tablespace.

2. Table Partition

• When scalability and availability are major concerns, the data within a table may be stored in several partitions, each of which resides in a different tablespace.
• The Oracle server currently supports partitioning by a range of key values or by a hashing algorithm.

3. Cluster

• A cluster may contain one or more tables.
• Tables in a cluster belong to the same segment and share the same storage characteristics.
• The rows in a clustered table can be accessed with an index or hashing algorithm.

4. Index

• All the entries for a particular index are stored within one index segment.
• If a table has three indexes, three index segments are used.
5. Index-Organized Table

• In an index-organized table, data is stored within the index based on the key value.
• An index-organized table does not need a table lookup, because all the data can be retrieved directly from the index tree.
6. Index Partition

• An index can be partitioned and spread across several tablespaces.
• In this case, each partition in the index corresponds to a segment and cannot span multiple tablespaces.
• The primary use of a partitioned index is to minimize contention by spreading index I/O.

7. Undo Segment

• An undo segment is used by a transaction that is making changes to a database.
• Before changing the data or index blocks, the old value is stored in the undo segment. This allows a user to undo changes made.

8. Temporary Segment

• When a user executes commands such as CREATE INDEX, SELECT DISTINCT, and SELECT GROUP BY, the Oracle server tries to perform sorts in memory.
• When a sort needs more space than the space available in memory, intermediate results are written to the disk.
• Temporary segments are used to store these intermediate results.

9. LOB Segment

• A LOB segment can be used to store large objects (LOBs) such as text documents, images, or videos.
• The table itself contains only a locator, or pointer, to the location of the corresponding LOB data.

10. Nested Table

• A column in a table may be made up of a user-defined table, as in the case of items within an order.
• In such cases, the inner table, which is known as a nested table, is stored as a separate segment.

11. Bootstrap Segment

• A bootstrap segment, also known as a cache segment, is created by the sql.bsq script when a database is created.
• This segment helps to initialize the data dictionary cache when the database is opened by an instance.
• The bootstrap segment cannot be queried or updated and does not require any maintenance by the database administrator.

Obtaining Tablespace Information

• Tablespace information:

– DBA_TABLESPACES
– V$TABLESPACE
• Data file information:

– DBA_DATA_FILES
– V$DATAFILE
• Temp file information:

– DBA_TEMP_FILES
– V$TEMPFILE
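For example, the following queries (run as a DBA user) summarize the tablespaces and their data files:

SELECT tablespace_name, status, contents
FROM dba_tablespaces;

SELECT file_name, tablespace_name, bytes/1024/1024 "Size (MB)"
FROM dba_data_files;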



Manage Tablespace in Oracle

Dropping Tablespaces

DROP TABLESPACE userdata INCLUDING CONTENTS AND DATAFILES;

Enabling Automatic Extension

ALTER DATABASE DATAFILE '/u01/oradata/userdata02.dbf' SIZE 200M
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

Resizing datafiles:

ALTER DATABASE DATAFILE '/oradata/imstest/users01.dbf' RESIZE 50M;

Dropping a datafile:

ALTER DATABASE DATAFILE '/orasoft/10.2.0/dbs/undo102.dbf' OFFLINE DROP;

Adding Data Files to a Tablespace

ALTER TABLESPACE users ADD DATAFILE '/oradata/imstest/users02.dbf'
SIZE 200M;

Read-Only Tablespaces


ALTER TABLESPACE userdata READ ONLY;

• Tablespace available only for read operations
• Objects can be dropped from tablespace
• Making tablespaces read-only causes a checkpoint on the data files of the tablespace

You can drop items, such as tables and indexes, from a read-only tablespace because the DROP command updates only the data dictionary, not the physical files that make up the tablespace. For locally managed tablespaces, the dropped segment is changed to a temporary segment, so that the bitmap does not need to be updated.
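A short worked sequence illustrating this; the table name old_data is hypothetical and assumed to reside in the userdata tablespace:

ALTER TABLESPACE userdata READ ONLY;
DROP TABLE old_data;            -- succeeds: only the data dictionary is updated
ALTER TABLESPACE userdata READ WRITE;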

Temp tablespace


The sort segment expands by allocating extents until the segment size is equal to or greater than the total storage demands of all of the active sorts running on that instance.

CREATE TEMPORARY TABLESPACE temp
TEMPFILE '/oradata/imstest/temp01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

Temporary Segments
• Tempfiles are always set to NOLOGGING mode.
• You cannot make a tempfile read-only.
• You cannot rename a tempfile.
• You cannot create a tempfile with the ALTER DATABASE command.
• Tempfiles are required for read-only databases.
• Media recovery does not recover tempfiles.
• BACKUP CONTROLFILE does not generate any information for tempfiles.
• CREATE CONTROLFILE cannot specify any information about tempfiles.

Note: When a database is created without a default temporary tablespace, the SYSTEM tablespace is assigned as the temporary tablespace for any user created without a TEMPORARY TABLESPACE clause.

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
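To confirm the current default temporary tablespace, you can query DATABASE_PROPERTIES (available from Oracle 9i):

SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';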

TABLESPACES


• A tablespace may consist of zero or more segments.
• Except for the SYSTEM tablespace or a tablespace with an active undo segment,
tablespaces can be taken offline, leaving the database running.
• Each segment is made up of one or more extents.
The standard data block size for an Oracle database is specified by the
DB_BLOCK_SIZE initialization parameter when the database is created.
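To check the block size of a running database:

SQL> SHOW PARAMETER db_block_size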

Tablespace creation:

CREATE TABLESPACE userdata
DATAFILE '/oradata/imstest/userdata01.dbf' SIZE 100M
AUTOEXTEND ON NEXT 5M MAXSIZE 200M;

MINIMUM EXTENT ensures that every used extent size in the tablespace is
a multiple of the integer. Use K or M to specify this
size in kilobytes or megabytes.
LOGGING specifies that, by default, all tables, indexes, and partitions
within the tablespace have all changes written to redo.
LOGGING is the default.
NOLOGGING specifies that, by default, all tables, indexes, and partitions
within the tablespace do not have all changes written to redo.
NOLOGGING affects only some DML and DDL commands,
for example, direct loads.
DEFAULT specifies the default storage parameters for all objects created
in the tablespace.
OFFLINE makes the tablespace unavailable immediately after creation.
PERMANENT specifies that the tablespace can be used to hold permanent
objects.
TEMPORARY specifies that the tablespace be used only to hold temporary
objects; for example, segments used by implicit sorts caused
by an ORDER BY clause.
extent_management_clause specifies how the extents of the tablespace are
managed. This clause is discussed in a subsequent section of this lesson.
datafile_clause :== filename
[SIZE integer[K|M] [REUSE] | REUSE] [autoextend_clause]
where:
filename is the name of a data file in the tablespace.
SIZE specifies the size of the file. Use K or M to specify the size in
kilobytes or megabytes.
REUSE allows the Oracle server to reuse an existing file.
autoextend_clause enables or disables the automatic extension of the data
file. This clause is discussed in a subsequent section of this lesson.

Space Management in Tablespaces

• Locally managed tablespaces:
– Free extents recorded in bitmap
– Each bit corresponds to a block or group of blocks
– Bit value indicates free or used
• Dictionary-managed tablespaces:
– Default method
– Free extents recorded in data dictionary tables
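The management mode of each tablespace can be checked as follows:

SELECT tablespace_name, extent_management, allocation_type
FROM dba_tablespaces;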



Locally Managed Tablespaces

CREATE TABLESPACE userdata
DATAFILE '/oradata/imstest/userdata01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

• Reduced contention on data dictionary tables
• No undo generated when space allocation or deallocation occurs
• No coalescing required

DICTIONARY specifies that the tablespace is managed using dictionary
tables.
LOCAL specifies that the tablespace is locally managed with a bitmap.
If you specify LOCAL, you cannot specify DEFAULT storage_clause,
MINIMUM EXTENT, or TEMPORARY.
AUTOALLOCATE specifies that the tablespace is system managed; users
cannot specify an extent size. This is the default.
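A sketch of a system-managed (AUTOALLOCATE) locally managed tablespace; the tablespace name appdata and the file path are hypothetical, following the examples above:

CREATE TABLESPACE appdata
DATAFILE '/oradata/imstest/appdata01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;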

Redo Log Files


 The Oracle server needs a minimum of two online redo log file groups
 The maximum and default values for MAXLOGFILES and MAXLOGMEMBERS depend on your operating system.
 The minimum size of an online redo log file is 50 KB.
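The current groups, members, and sizes can be inspected with:

SELECT group#, members, bytes/1024 "KB", status FROM v$log;

SELECT group#, member FROM v$logfile;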


Adding new group:

ALTER DATABASE ADD LOGFILE GROUP 4
('C:\oracle\product\10.1.0\oradata\bank\REDO04a.rdo',
'C:\oracle\product\10.1.0\oradata\bank\REDO04b.rdo')
SIZE 10M;

Adding members to existing groups:

ALTER DATABASE ADD LOGFILE MEMBER
'C:\oracle\product\10.1.0\oradata\bank\REDO01a.rdo','C:\oracle\product\10.1.0\oradata\bank\REDO01b.rdo' TO GROUP 1,
'C:\oracle\product\10.1.0\oradata\bank\REDO02a.rdo' TO GROUP 2,
'C:\oracle\product\10.1.0\oradata\bank\REDO03a.rdo' TO GROUP 3;

Dropping a Redo Log Group

ALTER DATABASE DROP LOGFILE GROUP 3;

• An instance requires at least two groups of online redo log files.
• An active or current group cannot be dropped.
• When an online redo log group is dropped, the operating system files are not deleted.
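Because an active or current group cannot be dropped, a typical sequence is to check the group status, force a switch if needed, and then drop:

SQL> SELECT group#, status FROM v$log;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;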


Dropping Online Redo Log Members

ALTER DATABASE DROP LOGFILE MEMBER 'C:\oracle\product\10.1.0\oradata\bank\REDO03a.rdo';

• If the member you want to drop is the last valid member of the group, you cannot drop
that member.
• If the group is current, you must force a log file switch before you can drop the
member.
• If the database is running in ARCHIVELOG mode and the log file group to which the
member belongs is not archived, then the member cannot be dropped.
• When an online redo log member is dropped, the operating system file is not deleted.

Control File

Control File Contents

A control file contains the following entries:
• Database name and identifier
• Time stamp of database creation
• Tablespace names
• Names and locations of data files and redo log files
• Current redo log file sequence number
• Checkpoint information
• Begin and end of undo segments
• Redo log archive information
• Backup information

Controlfile views:

 V$CONTROLFILE
 V$CONTROLFILE_RECORD_SECTION
 SHOW PARAMETERS CONTROL_FILES
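For example, to list the control files and see how full one of the record sections is:

SELECT name FROM v$controlfile;

SELECT type, records_total, records_used
FROM v$controlfile_record_section
WHERE type = 'DATAFILE';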


Alert Log File


The alert log file keeps a record of the following information:
• When the database was started or shut down.
• A list of all non-default initialization parameters
• The startup of background processes
• The thread being used by the instance
• The log sequence number LGWR is writing to
• Information regarding a log switch
• Creation of tablespaces and undo segments
• Alter statements that have been issued
• Information regarding error messages such as ORA-600 and extent errors.
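The alert log is written as alert_<SID>.log in the directory pointed to by the BACKGROUND_DUMP_DEST parameter; to locate it:

SQL> SHOW PARAMETER background_dump_dest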

Create Standby Database - Dataguard

Step by step document to create Standby Database

Standby databases are critical for disaster recovery. This article takes you step by step through setting up an Oracle standby database.
1 - Data Guard Operational Prerequisites
• The same Oracle software release must be used for both primary and standby databases. The operating system running at the primary and standby locations must be the same, but the operating system release need not be the same.
• The Primary Database must run in ARCHIVELOG mode.
• The hardware and operating system architecture at the primary and standby locations must be the same.
• Each primary and standby database must have its own control file.
• If primary and standby databases are placed on the same system, initialization parameters must be adjusted correctly.

• Primary database must be FORCE LOGGING mode.
2 - Preparing Primary Database for Standby Database creation
2.1 Ensure the primary database is in ARCHIVELOG mode

Verify that the primary database is in ARCHIVELOG mode using the following command.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\oracle\ora92\database\archive\ORCLC
Oldest online log sequence 9
Next log sequence to archive 11
Current log sequence 11

2.2 Enable database in FORCE LOGGING mode
Place the primary database in FORCE LOGGING mode using the following SQL statement:
SQL> alter database force logging;
Database altered.
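You can verify the new mode from V$DATABASE (the FORCE_LOGGING column exists from release 9.2):

SQL> SELECT force_logging FROM v$database;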

3 - Identify the primary database Datafiles
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------

C:\ORACLE\ORADATA\ORCLC\SYSTEM01.DBF
C:\ORACLE\ORADATA\ORCLC\UNDOTBS01.DBF
C:\ORACLE\ORADATA\ORCLC\CWMLITE01.DBF
C:\ORACLE\ORADATA\ORCLC\DRSYS01.DBF
C:\ORACLE\ORADATA\ORCLC\EXAMPLE01.DBF
C:\ORACLE\ORADATA\ORCLC\INDX01.DBF
C:\ORACLE\ORADATA\ORCLC\ODM01.DBF
C:\ORACLE\ORADATA\ORCLC\TOOLS01.DBF
C:\ORACLE\ORADATA\ORCLC\USERS01.DBF
C:\ORACLE\ORADATA\ORCLC\XDB01.DBF

10 rows selected.

4 - Make a copy of Primary Database
Make a closed backup copy of primary database by performing following steps:

4.1 Shutdown the Primary Database

Issue the following statement to shutdown the primary database.
SQL> shutdown immediate;

4.2 Copy the Datafiles to standby location

Copy the redo log files and datafiles identified in section 3 to the standby location.
Note: The primary database must remain shut down while copying the files.

5 - Restart the Primary Database

Execute following command to restart the Primary Database.
SQL> startup;
6 - Create Control file for Standby Database

Issue the following command on primary database to create control file for the standby database.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS
2 'D:\oracle\oradata\stby\control_sb01.ctl';

Database altered.
The filename of the newly created standby control file must be different from that of the current control file of the primary database. Also, the control file for the standby database must be created after the last timestamp of the backup datafiles.

7 - Create a pfile for the standby database from the primary database
Create a pfile from the server parameter file of the primary database; this pfile can be copied to the standby location and modified.
SQL> CREATE PFILE='C:\oracle\ora92\database\initstby.ora' from spfile;

File created.
8 - Set initialization parameters on physical standby database
Although most of the initialization parameter settings in the text initialization parameter file that you copied from the primary system are also appropriate for the physical standby database, some modifications need to be made. Edit created pfile from primary database.

db_name - Not modified. The same name as the primary database.

compatible - Not modified. The same as the primary database, 9.2.0.0.0.

control_files - Specify the path name and filename for the standby control file.

log_archive_start - Not modified. The same as the setting for the primary database, TRUE

standby_archive_dest - Specify the location of the archived redo logs that will be received from the primary database.

db_file_name_convert - Specify the location of the primary database datafiles followed by the standby location of the datafiles. This parameter will convert the filename of the primary database datafiles to the filename of the standby datafile filenames. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site then this parameter is required.

log_file_name_convert - Specify the location of the primary database logs followed by the standby location of the logs. This parameter will convert the filename of the primary database log to the filenames of the standby log. If the standby database is on the same system as the primary database or if the directory structure where the logs are located on the standby site is different from the primary site then this parameter is required.

log_archive_dest_1 - Specify the location where the redo logs are to be archived on the standby system. (If a switchover occurs and this instance becomes the primary database, then this parameter will specify the location where the online redo logs will be archived.)

standby_file_management - Set to AUTO.

remote_archive_enable - Set to TRUE.

instance_name - If this parameter is defined, specify a different value for the standby database than the primary database when the primary and standby databases reside on the same host.

lock_name_space - Specify the standby database instance name. Use this parameter when you create the physical standby database on the same system as the primary database. Change the INSTANCE_NAME parameter to a value other than its primary database value, and set this LOCK_NAME_SPACE initialization parameter to the same value that you specified for the standby database INSTANCE_NAME initialization parameter.

Also change the values of the parameters background_dump_dest, core_dump_dest and user_dump_dest to specify location of the standby database.

(Refer Annexure for initialization parameter settings for primary and standby database.)
9 - Create a Windows service
If the standby database is running on a Windows system, the oradim utility is used to create a Windows service. Issue the following command from a command prompt window:
C:\>oradim -new -sid stby -intpwd stby -startmode manual
10 - Configure listener and tnsnames entries for standby and primary databases
Configure the listener in listener.ora as follows:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = oraserver)(PORT = 1521))
)
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = D:\oracle\ora92)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = Orcl)
(ORACLE_HOME = D:\oracle\ora92)
(SID_NAME = Orcl)
)
(SID_DESC =
(GLOBAL_DBNAME = stby)
(ORACLE_HOME = D:\oracle\ora92)
(SID_NAME = stby)
)
)
Restart the listener using the LSNRCTL utility.
% lsnrctl stop
% lsnrctl start
Also make an entry in tnsnames.ora for the standby database.
stby =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = oraserver)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stby)
)
)
11 - Start the physical standby database
Start up the standby database using the following commands:
C:\>set oracle_sid=stby

C:\>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Mon Apr 25 17:13:26 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\ora92\database\initstby.ora' nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes

SQL> alter database mount standby database;
Database altered.
12 - Enabling archiving to Physical Standby Database
To configure archive logging from the primary database to the standby site, the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n parameters must be defined.

Issue the following commands from the primary database session:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stby' SCOPE=BOTH;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;

System altered.
13 - Initiate Log apply services
The example includes the DISCONNECT FROM SESSION option so that log apply services run in a background session.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> alter database open read only;
alter database open read only
*
ERROR at line 1:
ORA-01154: database busy. Open, close, mount, and dismount not allowed now

SQL> recover managed standby database cancel;
Media recovery complete.

Now go to primary database prompt

SQL> alter system switch logfile;

Go to stand by database prompt

SQL> alter database open read only;

Database altered.
14 - Verifying the Standby Database
On the standby database, query the V$ARCHIVED_LOG view to verify that redo logs are being received.
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME NEXT_TIME
-------------- ------------------- ----------------------
14 25-APR-05 16:50:34 25-APR-05 16:50:42
15 25-APR-05 16:50:42 25-APR-05 16:50:47
16 25-APR-05 16:50:47 25-APR-05 16:51:52

Archive the current log on the primary database using the following statement.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
On the standby database, query the V$ARCHIVED_LOG view again:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME NEXT_TIME
-------------- ------------------- ----------------------
14 25-APR-05 16:50:34 25-APR-05 16:50:42
15 25-APR-05 16:50:42 25-APR-05 16:50:47
16 25-APR-05 16:50:47 25-APR-05 16:51:52
17 25-APR-05 16:51:52 25-APR-05 17:34:00
Now connect as scott/tiger@orclc on the primary database and create a table or insert a row into any table.

Now connect as SYS on the primary database and execute the following SQL statement:
SQL> alter system switch logfile;
On the standby database, execute the following SQL statements:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> recover managed standby database cancel;
Media recovery complete.

SQL> alter database open read only;

Database altered.
Then check whether the changes were applied on the standby database.
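On the standby side, the state of the managed recovery and log transport processes can also be checked with:

SQL> SELECT process, status, sequence# FROM v$managed_standby;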
15 - Annexure

15.1 Parameter file for Primary Database
*.aq_tm_processes=1
*.background_dump_dest='D:\oracle\admin\Orclc\bdump'
*.compatible='9.2.0.0.0'
*.control_files='C:\oracle\oradata\Orclc\CONTROL01.CTL',
'C:\oracle\oradata\Orclc\CONTROL02.CTL','C:\oracle\oradata\Orclc\CONTROL03.CTL'
*.core_dump_dest='D:\oracle\admin\Orclc\cdump'
*.db_block_size=8192
*.db_cache_size=25165824
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='Orclc'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OrclcXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='Orclc'
*.java_pool_size=33554432
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_archive_dest_1='LOCATION=C:\oracle\ora92\database\archive\ORCLC MANDATORY'
*.log_archive_dest_2='SERVICE=stby'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_start=true
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=50331648
*.sort_area_size=524288
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\admin\Orclc\udump'
15.2 Parameter file for Standby Database
*.aq_tm_processes=1
*.background_dump_dest='D:\oracle\admin\stby\bdump'
*.compatible='9.2.0.0.0'
*.control_files='D:\oracle\oradata\stby\CONTROL_SB01.CTL'
*.core_dump_dest='D:\oracle\admin\stby\cdump'
*.db_block_size=8192
*.db_cache_size=25165824
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='Orclc'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OrclcXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='stby'
*.java_pool_size=33554432
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_archive_dest_1='LOCATION=D:\oracle\admin\stby\archive'
*.log_archive_dest_state_1=enable
*.log_archive_start=true
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='NONE'
*.shared_pool_size=50331648
*.sort_area_size=524288
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\admin\stby\udump'
*.standby_archive_dest='C:\oracle\ora92\database\archive\ORCLC'
*.db_file_name_convert='C:\oracle\oradata\Orclc','D:\oracle\oradata\stby'
*.log_file_name_convert='C:\oracle\oradata\Orclc','D:\oracle\oradata\stby'
*.standby_file_management=AUTO
*.remote_archive_enable=TRUE
*.lock_name_space=stby

Wednesday, January 13, 2010

Remove Oracle from Linux

Standard Operating Procedure For Removing Oracle from UNIX


Procedure:
Completely remove 8i / 9i Database and Software from UNIX

Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 9.2.0.7
HP-UX Itanium
AIX5L Based Systems (64-bit)
Red Hat Advanced Server
Linux Itanium
Red Hat Enterprise Linux Advanced Server Itanium
Red Hat Enterprise Linux Advanced Server x86-64 (AMD Opteron Architecture)
Solaris Operating System (SPARC 64-bit)
Linux x86
HP-UX PA-RISC (64-bit)
HP Tru64 UNIX
UnitedLinux (32-bit)

--- Identify all instances associated with the Oracle home.
--- Remove the Oracle Database.
--- Shut down processes.
--- Remove the Oracle software.
--- Cleanup OUI files (Optional).

Identifying All Instances

To identify all instances associated with the Oracle home that you want to remove, enter one of the following commands:

Solaris:
$ more /var/opt/oracle/oratab

Other operating systems:
$ more /etc/oratab

The output for this command contains entries similar to the following:
CUST:/u01/app/oracle/product/9.2.0:N

This entry shows that the CUST Oracle database instance is associated with the
/u01/app/oracle/product/9.2.0 Oracle home directory.





Remove an Oracle Database

To completely remove Oracle Database software, you must remove any installed databases.

To remove an Oracle database:
Note: Removing an Oracle database deletes all of the data in the database. If you want to keep this data, make sure that you back up the database before deleting it.
Log in as the oracle user:
$ su - oracle

Run the oraenv or coraenv script to set the environment for the database that you want to remove, for example:

Bourne, Bash, or Korn shell:
$ . /usr/local/bin/oraenv

C shell:
% source /usr/local/bin/coraenv

At the prompt, specify the SID for the database that you want to remove.
Start the Database Configuration Assistant (DBCA):

$ dbca (or dbassist for 8i)
-- Step 1 The Welcome window appears.
-- Step 2 Click Next.
-- Step 3 The Operations window appears.
-- Step 4 Select Delete a Database, and then click Next.
-- Step 5 Select the database that you want to delete, then click Finish.
-- Step 6 In the window that appears, confirm that you want to delete the database.
-- Step 7 When the DBCA removes the database, you are prompted to choose whether you want to perform another operation.
-- Step 8 Click Yes to return to the Operations screen or click No to exit from DBCA.
If you want to remove another database, click Yes and repeat steps 6 to 8.

Shut down Processes

Stop any processes running in this Oracle home:
Process Name Command :
Database Control $ORACLE_HOME/bin/emctl stop dbconsole
Oracle Net listener $ORACLE_HOME/bin/lsnrctl stop
iSQL*Plus $ORACLE_HOME/bin/isqlplusctl stop
Ultra Search $ORACLE_HOME/bin/searchctl stop

To confirm that all processes have been stopped, use the following command as root:
# ps -ef | grep $ORACLE_HOME
Where $ORACLE_HOME above is the Oracle Home of the instance to be stopped.

If the above command still returns running processes, kill them using the following command as root:
# kill -9 [pid]

Remove the Oracle software

Use the Oracle Universal Installer (OUI) to de-install the installation. The following steps describe how to use the Installer to remove Oracle software from an Oracle home:
Note: Always use the Oracle Universal Installer to remove Oracle software.
Do not delete any Oracle home directories without first using the Installer to remove the software.
Log in as the oracle user:
$ su - oracle
Set the ORACLE_HOME environment variable to specify the path of the Oracle home directory that you want to remove:

Bourne, Bash, or Korn shell:
$ ORACLE_HOME=/u01/app/oracle/product/9.2.0
$ export ORACLE_HOME

C shell:
% setenv ORACLE_HOME /u01/app/oracle/product/9.2.0

Start the Installer as follows:
-- $ORACLE_HOME/oui/bin/runInstaller
or:
$ cd /opt/oracle/oui/install
$ sh runInstaller.sh

--
-- In the Welcome window, click Deinstall Products.
-- The Inventory window appears, listing all of the Oracle homes on the system.
-- In the Inventory window, select the Oracle home and the products that you want to remove, then click Remove.
-- When the Installer displays a confirmation window asking you to confirm that you want to deinstall the products and their dependent components, click Yes.

The Installer displays a progress indicator as it removes the software. When the products have been deleted, click Cancel to exit from the Installer, then click Yes.

Manually remove the Oracle Home of the de-installed installation.
$ cd /u01/app/oracle/product/
$ rm -rf 9.2.0

Also, remove any reference to the Oracle Home from all environment variables.
Be sure to verify that any Oracle environment is not set in the .login or .profile, nor any other custom startup scripts.

Cleanup OUI files (Optional)

CAUTION: The following step should be taken only if no other Oracle products exist on the machine. Removing the following files and directories while other Oracle products exist will render ALL Oracle products on the system unusable.

To identify the inventory location on Solaris:
$ more /var/opt/oracle/oraInst.loc

Other operating systems:
$ more /etc/oraInst.loc

inventory_loc=/u01/app/oracle/product/oraInventory
inst_group=oracle

$ cd /u01/app/oracle/product
$ rm -rf oui
$ rm -rf oraInventory
$ rm -rf jre

As the root user, do the following:
# rm -rf /usr/local/bin/dbhome
# rm -rf /usr/local/bin/oraenv
# rm -rf /usr/local/bin/coraenv

Solaris:
# cd /var/opt/oracle
# rm -rf oracle

Other operating systems:
# rm -rf /etc/oratab
# rm -rf /etc/oraInst.loc


Completely remove 10g Database and Software from UNIX

Oracle Universal Installer - Version: 10.1.0.2 to 10.1.0.2
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.1.0.2
Solaris Operating System (SPARC 64-bit)
Linux x86
HP-UX PA-RISC (64-bit)
HP Tru64 UNIX
HP-UX Itanium
IBM S/390 Based Linux
AIX5L Based Systems (64-bit)
Red Hat Advanced Server
Linux Itanium
Red Hat Enterprise Linux Advanced Server

To completely remove all Oracle databases, instances, and software from an Oracle home directory, you must:

• Identify all instances associated with the Oracle home.
• Remove database and ASM instances.
• Shut down processes.
• Reconfigure the Oracle Cluster Synchronization Services Daemon, if necessary.
• Remove the Oracle software.

Identifying All Instances

To identify all instances associated with the Oracle home that you want to remove, enter one of the following commands:
Solaris:
$ more /var/opt/oracle/oratab
Other operating systems:
$ more /etc/oratab

The output for this command contains entries similar to the following:
+ASM:/u01/app/oracle/product/10.1.0/db_1:N
CUST:/u01/app/oracle/product/10.1.0/db_1:N

These entries show that the +ASM Automatic Storage Manager instance and the CUST Oracle database instance are associated with the /u01/app/oracle/product/10.1.0/db_1 Oracle home directory.

Removing an Oracle Database
To completely remove Oracle Database software, you must remove any installed databases. To remove an Oracle database:
Note: Removing an Oracle database deletes all of the data in the database. If you want to keep this data, make sure that you back up the database before deleting it.
Log in as the oracle user:
$ su - oracle

Run the oraenv or coraenv script to set the environment for the database that you want to remove, for example:
Bourne, Bash, or Korn shell:
$ . /usr/local/bin/oraenv
C shell:
% source /usr/local/bin/coraenv

At the prompt, specify the SID for the database that you want to remove.
Start the Database Configuration Assistant (DBCA):
$ dbca

The Welcome window appears.
Click Next.
The Operations window appears.
Select Delete a Database, then click Next.
Select the database that you want to delete, then click Finish.
In the window that appears, confirm that you want to delete the database.
When the DBCA removes the database, you are prompted to choose whether you want to perform another operation. Click Yes to return to the Operations screen or click No to exit from DBCA. If you want to remove another database, click Yes and repeat steps 6 to 8.

Removing an ASM Instance
To completely remove Oracle database software, you must also remove any ASM instances running in the Oracle home. To remove an ASM instance:
If necessary, log in as the oracle user:
$ su - oracle

Run the oraenv or coraenv script to set the environment for the ASM instance that you want to remove, for example:
Bourne, Bash, or Korn shell:
$ . /usr/local/bin/oraenv
C shell:
% source /usr/local/bin/coraenv

At the prompt, specify the SID for the ASM instance that you want to remove.
Connect to the ASM instance as the SYS user:
$ sqlplus "SYS/SYS_PASSWORD AS SYSDBA"

Enter the following command to determine whether any Oracle database instance is using the ASM instance:

SQL> SELECT INSTANCE_NAME FROM V$ASM_CLIENT;

This command lists all of the database instances that are using this ASM instance.
Note: This command only lists database instances that are running. It is possible that other instances are associated with the ASM instance, but they are not currently running. If you removed a database from this Oracle home but the output from the command shows that this ASM instance is supporting a database instance in another Oracle home, do not remove the ASM instance or the Oracle home.

If there are no database instances associated with this ASM instance, drop the disk group associated with this instance as follows:
Note: Dropping the ASM disk group makes the disk device available for use with another ASM instance if required. However, all data in the disk group is lost. Make sure that no other database instance requires any data from this disk group before you drop it.

Identify the disk groups associated with the ASM instance:
SQL> SELECT NAME FROM V$ASM_DISKGROUP;

For each disk group that you want to delete, enter a command similar to the following:
SQL> DROP DISKGROUP name INCLUDING CONTENTS;

Enter the following command to shut down the ASM instance:
SQL> SHUTDOWN

Remove the entry for the ASM instance from the /etc/oratab file (or from the /var/opt/oracle/oratab file on Solaris).

Reconfiguring Oracle Cluster Synchronization Services

Oracle Cluster Synchronization Services (CSS) is a daemon process that is configured by the root.sh script when you install Oracle Database 10g for the first time. It is configured to start every time the system boots. This daemon process is required to enable synchronization between Oracle ASM and database instances. It must be running if an Oracle database is using ASM for database file storage.
Note: On cluster systems with Oracle Real Application Clusters (RAC) installations, the CSS daemon is configured during the Oracle Cluster Ready Services (CRS) installation. If the system is running Oracle CRS, see the Oracle Real Application Clusters Installation and Configuration Guide for information about removing RAC or CRS.
Before you remove an Oracle Database 10g Oracle home, you must determine whether the CSS daemon is running from that Oracle home and whether any other Oracle Database 10g Oracle homes exist on the system:
If the Oracle Database 10g Oracle home that you want to remove is the only Oracle Database 10g installation on the system, you can delete the CSS daemon configuration.
If the CSS daemon is running from the Oracle Database 10g Oracle home that you want to remove and other Oracle Database 10g installations exist on the system, you must reconfigure the CSS daemon to run from another Oracle Database 10g Oracle home.
The following subsections describe how to complete these tasks.
Identifying Oracle Database 10g Oracle Homes
To identify all of the Oracle Database 10g Oracle home directories, enter one of the following commands:
Solaris:
$ more /var/opt/oracle/oratab
Other operating systems:
$ more /etc/oratab

From the output, identify any Oracle home directories where Oracle Database 10g is installed. Oracle homes that contain Oracle Database 10g typically have paths similar to the following. However, they might use different paths.
/mount_point/app/oracle/product/10.1.0/db_n

If there is only one Oracle home directory that contains Oracle Database 10g, see the "Deleting the Oracle CSS Daemon Configuration" section for information about deleting the Oracle CSS daemon configuration.
If you identify more than one Oracle Database 10g Oracle home directory, see the following section for information about reconfiguring the Oracle CSS daemon.
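The oratab inspection above can be scripted. A minimal sketch, assuming the standard SID:ORACLE_HOME:Y|N oratab format (the function name is ours, not Oracle's):

```shell
# list_oracle_homes: print the unique ORACLE_HOME paths recorded in an oratab file.
# oratab lines have the form SID:ORACLE_HOME:Y|N; '#' starts a comment line.
list_oracle_homes() {
  grep -v '^#' "$1" | grep -v '^[[:space:]]*$' | cut -d: -f2 | sort -u
}

# Example: list_oracle_homes /etc/oratab   (or /var/opt/oracle/oratab on Solaris)
```

Each printed path is a candidate Oracle home to inspect before removing any software.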


Reconfiguring the Oracle CSS Daemon

To reconfigure the Oracle CSS daemon so that it runs from an Oracle home that you are not removing, follow these steps:

In all Oracle home directories on the system, stop all Oracle ASM instances and any Oracle Database instances that use ASM for database file storage.
Switch user to root.
Depending on your operating system, enter one of the following commands to identify the Oracle home directory being used to run the CSS daemon:
AIX and Linux:
# more /etc/oracle/ocr.loc
Other operating systems:
# more /var/opt/oracle/ocr.loc

The output from this command is similar to the following:
ocrconfig_loc=/u01/app/oracle/product/10.1.0/db_1/cdata/localhost/local.ocr
local_only=TRUE

The ocrconfig_loc parameter specifies the location of the Oracle Cluster Registry (OCR) used by the CSS daemon. The path up to the cdata directory is the Oracle home directory where the CSS daemon is running (/u01/app/oracle/product/10.1.0/db_1 in this example).
Note: If the value for the local_only parameter is FALSE, Oracle CRS is installed on this system. See the Oracle Real Application Clusters Installation and Configuration Guide for information about removing RAC or CRS.

If this Oracle home directory is not the Oracle home that you want to remove, you can continue to the "Removing Oracle Software" section.

Change directory to the Oracle home directory for an Oracle Database 10g installation that you are not removing.

Set the ORACLE_HOME environment variable to specify the path to this Oracle home directory:
Bourne, Bash, or Korn shell:


# ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_2; export ORACLE_HOME

C shell:
# setenv ORACLE_HOME /u01/app/oracle/product/10.1.0/db_2

Enter the following command to reconfigure the CSS daemon to run from this Oracle home:
# $ORACLE_HOME/bin/localconfig reset $ORACLE_HOME

The script stops the Oracle CSS daemon, reconfigures it in the new Oracle home, and then restarts it. When the system boots, the CSS daemon starts automatically from the new Oracle home.

To remove the original Oracle home directory, see the "Removing Oracle Software" section.

Deleting the Oracle CSS Daemon Configuration

To delete the Oracle CSS daemon configuration, follow these steps:
Note: Delete the CSS daemon configuration only if you are certain that no other Oracle Database 10g installation requires it.
Remove any databases or ASM instances associated with this Oracle home. See the preceding sections for information about how to complete these tasks.
Switch user to root.
Change directory to the Oracle home directory that you are removing.
Set the ORACLE_HOME environment variable to specify the path to this Oracle home directory:
Bourne, Bash, or Korn shell:
# ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1; export ORACLE_HOME
C shell:
# setenv ORACLE_HOME /u01/app/oracle/product/10.1.0/db_1

Enter the following command to delete the CSS daemon configuration from this Oracle home:
# $ORACLE_HOME/bin/localconfig delete

The script stops the Oracle CSS daemon, then deletes its configuration. When the system boots, the CSS daemon no longer starts.

Removing Oracle Software
The following steps describe how to use the Installer to remove Oracle software from an Oracle home:
Note: Always use the Oracle Universal Installer to remove Oracle software. Do not delete any Oracle home directories without first using the Installer to remove the software.
If necessary, log in as the oracle user:
$ su - oracle

Set the ORACLE_HOME environment variable to specify the path of the Oracle home directory that you want to remove:
Bourne, Bash, or Korn shell:
$ ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
$ export ORACLE_HOME
C shell:
$ setenv ORACLE_HOME /u01/app/oracle/product/10.1.0/db_1

Remove any databases or ASM instances associated with this Oracle home and delete or reconfigure the Oracle CSS daemon.
See the preceding sections for information about how to complete these tasks.

Stop any processes running in this Oracle home:

Process Name Command
Database Control $ORACLE_HOME/bin/emctl stop dbconsole
Oracle Net listener $ORACLE_HOME/bin/lsnrctl stop
iSQL*Plus $ORACLE_HOME/bin/isqlplusctl stop
Ultra Search $ORACLE_HOME/bin/searchctl stop

Start the Installer as follows:
$ $ORACLE_HOME/oui/bin/runInstaller

In the Welcome window, click Deinstall Products. The Inventory window appears, listing all of the Oracle homes on the system. In the Inventory window, select the Oracle home and the products that you want to remove, then click Remove.

Note: If you choose to remove Oracle JVM, the Oracle Universal Installer removes all installed products that depend on Oracle JVM, including Oracle Database.

When the Installer displays a confirmation window asking you to confirm that you want to deinstall the products and their dependent components, click Yes. The Installer displays a progress indicator as it removes the software. When the products have been deleted, click

Different types of backups in Oracle.

1. Cold backup:


If the database is in NOARCHIVELOG mode, we must take a cold (offline) backup. To take a cold backup, follow these steps:

1. Shut down the database.

SQL> shutdown immediate;

2. Manually copy the data files, control files, and redo log files to the backup destination.

3. Start the database.

SQL> startup;

2. Hot backup:

If the database is in ARCHIVELOG mode, we can take a hot (online) backup without any database outage.

1. If you want to capture the hot backup session in a separate log file, spool the SQL*Plus output first (the file name is illustrative):

SQL> spool hot_backup.log

2. Start the hot backup:

SQL> alter database begin backup;

3. Manually copy the data files without shutting down the database. (Do not copy the control file or online redo logs at the OS level during a hot backup; back up the control file with ALTER DATABASE BACKUP CONTROLFILE instead.)

4. End the hot backup:

SQL> alter database end backup;

5. Switch the log file after completing the hot backup:

SQL> alter system switch logfile;

Copy the archived log files; they are needed to recover the database.

To recover the database, issue:

SQL> recover database using backup controlfile until cancel;
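The hot-backup steps above can be gathered into one SQL*Plus sequence. A sketch only, assuming ARCHIVELOG mode — the control file backup path is illustrative, and the data files themselves are copied with OS commands while the database stays open:

```sql
ALTER DATABASE BEGIN BACKUP;        -- freeze datafile checkpoints for the copy
-- ... copy the data files at the operating system level here ...
ALTER DATABASE END BACKUP;
ALTER SYSTEM SWITCH LOGFILE;        -- force the current redo log to be archived
ALTER DATABASE BACKUP CONTROLFILE TO 'c:\backup\control.bkp';  -- illustrative path
```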

3. Import – Export Backup:

Oracle Data Pump is a feature in Oracle 10g that helps transfer data from one location to another at high speed.

It consists of two utilities:

1. Export utility
2. Import utility

1. Export utility: stores database objects on storage media, such as disks, outside the database.
2. Import utility: loads objects from storage media back into the database.

Export operation:
1. Create a directory

C:\>mkdir c:\backup

2. Connect to the database and create a directory object for the backup.

SQL> create directory backup as 'c:\backup';

Directory created.

3. Now move to host mode.
SQL> host

4. Go to the directory C:\oracle\product\10.2.0\db_1\BIN
C:\>cd C:\oracle\product\10.2.0\db_1\BIN

5. Execute the export command.
C:\oracle\product\10.2.0\db_1\BIN> expdp sys/test directory=backup dumpfile=bfull.dmp logfile=bfull.log full=y

For the import operation, execute the following command.

C:\oracle\product\10.2.0\db_1\BIN> impdp sys/test directory=backup dumpfile=bfull.dmp logfile=bfull.log full=y
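The Data Pump options can also be kept in a parameter file instead of being typed on the command line. A sketch, reusing the directory object and file names from the example above (full_exp.par is our name for the file, not an Oracle default):

```
# full_exp.par -- Data Pump parameter file
DIRECTORY=backup
DUMPFILE=bfull.dmp
LOGFILE=bfull.log
FULL=Y
```

It is then invoked as expdp sys/test parfile=full_exp.par (or impdp with the same file).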


4 RMAN Backup:
Target database: the database that we are going to back up.
Recovery catalog: a separate schema that holds metadata about the target database and its backups.

1. RMAN Without recovery catalog:

Requirements for RMAN backup:

a. The database must be in ARCHIVELOG mode.

A. From the operating system prompt, issue the following command to connect to the target database without a recovery catalog. When connecting to a target or auxiliary database, you must have the SYSDBA privilege. For example:

C:\> rman target / nocatalog

B. To take a complete backup of the database, issue the following command.

1. RMAN> backup database;

2. To take a backup of a tablespace, use the following command.

RMAN> backup tablespace tablespace_name;

3. To take a backup of a datafile, issue the following command.

RMAN> backup datafile 'C:\oracle\product\10.2.0\oradata\orcl\datafile_name';

By default, these backups are stored in the flash recovery area (DB_RECOVERY_FILE_DEST).

Recovery operation using RMAN :

1. To view the list of backups, use this command.

RMAN> list backupset;

2. It is not possible to recover from every backup set, so validate the backup set first. To check whether a backup set is valid, issue the following command (using the backup set key from the LIST output).

RMAN> validate backupset backupset_key;

Shut down the database and start it in MOUNT mode, then restore and recover:

RMAN> restore database;
RMAN> recover database;
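The recovery can also be driven from an RMAN run block, which keeps the channel allocation and the recovery steps together. A sketch only — the channel name is illustrative, and the database must be mounted:

```sql
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;   -- c1 is an illustrative channel name
  RESTORE DATABASE;                       -- copy files back from the backup sets
  RECOVER DATABASE;                       -- apply redo to make the files consistent
  ALTER DATABASE OPEN;
}
```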

Clone a database from cold backup.

Procedure:


STEPS:

1 Backup the Source Database

Before attempting to clone the source database, it is necessary to perform a full backup of the database. This will ensure that no data will be lost if the copying of the database is unsuccessful.

2 Backup the Source Database's Control File to a Trace File

Connect to the source database PROD via SQL*Plus using an account that has the SYSDBA, SYSOPER, or DBA role granted to it. Perform a backup of the control file to trace using the following command:

SQL> alter database backup controlfile to trace;

Examples:

On Windows Operating System:

C:\> set ORACLE_SID=prod
C:\> sqlplus "sys/password as sysdba"   (replace password with the SYS password)
SQL> alter database backup controlfile to trace;

On Unix Operating System:

$ export ORACLE_SID=prod
$ sqlplus "sys/change_on_install as sysdba"
SQL> alter database backup controlfile to trace;

Note: This will create a trace file that we'll use later. Depending on the operating system, the file name has the format ora_<spid>.trc, <sid>_ora_<spid>.trc, or ora<spid>.trc, where <spid> is the number of the operating system process that created the file. The trace file is created in the location specified by the initialization parameter USER_DUMP_DEST. To find the value of this parameter, type the following query:


SQL> select name, value
From v$parameter
Where name='user_dump_dest';

Note: Go to the directory selected above, identify the most recent .trc file and rename it to c1.sql.

A listing of an example control file trace follows:

Dump file e:\oracle\admin\prod\udump\ORA03552.TRC
Mon Aug 26 15:04:15 2002
ORACLE V8.1.7.2.1 - Production vsnsta=0
vsnsql=f vsnxtr=3
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Oracle9i Enterprise Edition Release 9.2.0.8.1 - Production
With the Partitioning option
Server Release 9.2.0.8.1 - Production
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Instance name: PROD
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 3552, image: ORACLE.EXE
*** SESSION ID:(12.554) 2002-08-26 15:04:15.796
*** 2002-08-26 15:04:15.796
# The following commands will create a new control file and use it
# to open the database.
# The contents of online logs will be lost and all backups will
# be invalidated. Use this only if online logs are damaged.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 96
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORADATA\PROD\REDO01.LOG' SIZE 10M,
GROUP 2 'E:\ORADATA\PROD\REDO02.LOG' SIZE 10M
DATAFILE
'E:\ORADATA\PROD\SYSTEM01.DBF',
'E:\ORADATA\PROD\RBS01.DBF',
'E:\ORADATA\PROD\TEMP01.DBF',
'E:\ORADATA\PROD\USERS01.DBF',
'E:\ORADATA\PROD\TOOLS01.DBF',
'E:\ORADATA\PROD\OCL_TEMP1.DBF',
'E:\ORADATA\PROD\OCL_GLIB.DBF',
'E:\ORADATA\PROD\OCL_GLIB_IDX.DBF',
'E:\ORADATA\PROD\OCL_LI.DBF',
'E:\ORADATA\PROD\OCL_LI_IDX.DBF',
'E:\ORADATA\PROD\OCL_DCMQ.DBF',
'E:\ORADATA\PROD\OCL_DCMQ_IDX.DBF',
'E:\ORADATA\PROD\OCL_DCD.DBF',
'E:\ORADATA\PROD\OCL_DCD_IDX.DBF',
'E:\ORADATA\PROD\OCL_RESP.DBF',
'E:\ORADATA\PROD\OCL_RESP_IDX.DBF',
'E:\ORADATA\PROD\OCL_DISC.DBF',
CHARACTER SET WE8ISO8859P1
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
# Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
# No tempfile entries found to add.

You need to modify the trace file script for the new database “TEST” by doing the following:


Line #s 1 through 19: Delete the header information and the comments in the script file.
Line # 20: Modify the startup command statement to include the inittest.ora parameter file as follows:

STARTUP NOMOUNT PFILE=e:\oracle\admin\TEST\pfile\inittest.ora

Line # 21: Modify the control file command statement to change the database name to the new database, as follows:

CREATE CONTROLFILE SET DATABASE "test" RESETLOGS NOARCHIVELOG

Line #s 28 and 29: Modify the filenames to point to the names of the redo log files for test.

Line #s 31 through 62: Modify the names of the data files to point to the correct names of the data files for test.

Line #s 65, 66, and 67: Remove the lines containing the RECOVER DATABASE command and its associated comments.

Line #s 68 and 70: Delete these lines.

The following shows the modified file listing:

STARTUP NOMOUNT PFILE=e:\oracle\admin\TEST\pfile\initTEST.ora
CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 96
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORADATA\TEST\REDO01.LOG' SIZE 10M,
GROUP 2 'E:\ORADATA\TEST\REDO02.LOG' SIZE 10M

DATAFILE
'E:\ORADATA\TEST\SYSTEM01.DBF',
'E:\ORADATA\TEST\RBS01.DBF',
'E:\ORADATA\TEST\TEMP01.DBF',
'E:\ORADATA\TEST\USERS01.DBF',
'E:\ORADATA\TEST\TOOLS01.DBF',
'E:\ORADATA\TEST\OCL_TEMP1.DBF',
'E:\ORADATA\TEST\OCL_APP.DBF',
'E:\ORADATA\TEST\OCL_APP_IDX.DBF',
'E:\ORADATA\TEST\OCL_DEF.DBF',
'E:\ORADATA\TEST\OCL_DEF_IDX.DBF',
'E:\ORADATA\TEST\OCL_GLIB.DBF',
'E:\ORADATA\TEST\OCL_GLIB_IDX.DBF',
'E:\ORADATA\TEST\OCL_LI.DBF',
'E:\ORADATA\TEST\OCL_LI_IDX.DBF',
CHARACTER SET WE8ISO8859P1
;

3 Make a List of All of your Data and Redo Log Files

SQL> select name from v$datafile;
SQL> select member from v$logfile;

4 Perform a Clean Shutdown of the Source Database

SQL> shutdown immediate;

You must connect as a user (i.e. SYS) who has privileges (i.e. SYSDBA) to shut down the database.

5 Make a Copy of the Database

Copy all data and redo log files noted in step “Make a List of All of your Data and Redo Log Files” to their new location making sure you preserve ownership and permissions. Do not copy any control file.
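The copy step can be scripted from the file list produced in the previous step. A minimal sketch — the function name and directory layout are ours; it copies every listed file that lives under the source directory into the target directory, preserving permissions with cp -p:

```shell
# clone_copy: copy each file named in a list file into a new directory tree.
# Usage: clone_copy file_list /u01/oradata/PROD /u01/oradata/TEST
clone_copy() {
  list="$1"; src="$2"; dst="$3"
  mkdir -p "$dst"
  while IFS= read -r f; do
    case "$f" in
      "$src"/*) cp -p "$f" "$dst/$(basename "$f")" ;;  # -p preserves permissions/timestamps
    esac
  done < "$list"
}
```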

Note: You must first create the directories for the target database TEST, to save the administrative files, the data and log files. Issue the following commands:


On Windows Operating System:

Create administrative directories:

e:>mkdir e:\oracle\admin\TEST
e:>mkdir e:\oracle\admin\TEST\bdump
e:>mkdir e:\oracle\admin\TEST\udump
e:>mkdir e:\oracle\admin\TEST\cdump
e:>mkdir e:\oracle\admin\TEST\pfile





Create data file directory:

e:>mkdir e:\oradata\TEST

Copy Database Files

C:> copy e:\oradata\PROD\*.dbf e:\oradata\TEST\
C:> copy e:\oradata\PROD\*.log e:\oradata\TEST\
C:> copy e:\oracle\admin\PROD\pfile\initPROD.ora e:\oracle\admin\TEST\pfile\initTEST.ora

On Unix Operating System:

Create administrative directories:

$mkdir /u01/app/oracle/admin/TEST
$ mkdir /u01/app/oracle/admin/TEST/bdump
$mkdir /u01/app/oracle/admin/TEST/udump
$ mkdir /u01/app/oracle/admin/TEST/cdump
$ mkdir /u01/app/oracle/admin/TEST/pfile

Create data file directory:

$mkdir /u01/oradata/TEST

Copy Database Files

$cp –p /u01/oradata/PROD/*.dbf /u01/oradata/TEST/
$cp –p /u01/oradata/PROD/*.log /u01/oradata/TEST/
$cp –p /u01/app/oracle/admin/PROD/pfile/initPROD.ora /u01/app/oracle/admin/TEST/pfile/initTEST.ora

Create symbolic link for the parameter file:

$ln –s /u01/app/oracle/admin/TEST/pfile/initTEST.ora $ORACLE_HOME/dbs/initTEST.ora

Where $ORACLE_HOME is the full path of the RDBMS home installation for PROD (for example, /u01/app/oracle/product/9.2.0.8.0)

6 Modify the Parameter File for the Cloned Database

Using a text editor, revise the initTEST.ora parameter file to reflect the new values for the target database.
On Windows Operating System:
e:\oracle\admin\TEST\pfile\initTEST.ora
On Unix Operating System:
/u01/app/oracle/admin/TEST/pfile/initTEST.ora
This will involve changing the paths indicated in parameters such as CONTROL_FILES, USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and any other parameter that specifies a path/file location.

A listing of an example initTEST.ora follows.

db_name = PROD
db_domain = us.oracle.com
Control_files = ("e:\oradata\PROD\control01_PROD.ctl", "e:\oradata\PROD\control02_PROD.ctl")
db_block_buffers = 10000
shared_pool_size = 25000000
java_pool_size = 20000000
Processes = 50
log_buffer = 163840
audit_trail = false
timed_statistics = true
log_archive_start = true
log_archive_dest_1 = "location=e:\oracle\admin\PROD\arch"
log_archive_format = %t_%s.dbf
# If using private rollback segments, list them here with the rollback_segments parameter
global_names = true
oracle_trace_enable = true
Background_dump_dest = e:\oracle\admin\PROD\bdump
Core_dump_dest = e:\oracle\admin\PROD\cdump
user_dump_dest = e:\oracle\admin\PROD\udump
db_block_size = 8192
remote_login_passwordfile = exclusive
job_queue_processes = 2
compatible = 8.1.7.2.0
open_cursors=200
db_files=96
nls_date_format=dd-mon-rrrr
sort_area_size=3000000

You need to modify the initTEST.ora file for the new database TEST by doing the following:

Replace PROD with TEST
db_name = TEST
db_domain = us.oracle.com
Control_files = ("e:\oradata\TEST\control01.ctl", "e:\oradata\TEST\control02.ctl")
db_block_buffers = 10000
shared_pool_size = 25000000
java_pool_size = 20000000
Processes = 50
log_buffer = 163840
audit_trail = false
timed_statistics = true
log_archive_start = true
log_archive_dest_1 = "location=e:\oracle\admin\TEST\arch"
log_archive_format = %t_%s.dbf
# If using private rollback segments, list them here with the rollback_segments parameter
global_names = true
oracle_trace_enable = true
Background_dump_dest = e:\oracle\admin\TEST\bdump
Core_dump_dest = e:\oracle\admin\TEST\cdump
user_dump_dest = e:\oracle\admin\TEST\udump
db_block_size = 8192
remote_login_passwordfile = exclusive
job_queue_processes = 2
compatible = 8.1.7.2.0
open_cursors=200
db_files=96
nls_date_format=dd-mon-yyyy
sort_area_size=3000000


7 Modify the Trace Output Script File

You must modify the c1.sql trace script file from Step Backup the Source Database's Control File to a Trace File

The modified controlfile will look like this:

CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 96
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORADATA\TEST\REDO01.LOG' SIZE 10M,
GROUP 2 'E:\ORADATA\TEST\REDO02.LOG' SIZE 10M

DATAFILE
'E:\ORADATA\TEST\SYSTEM01.DBF',
'E:\ORADATA\TEST\RBS01.DBF',
'E:\ORADATA\TEST\TEMP01.DBF',
'E:\ORADATA\TEST\USERS01.DBF',
'E:\ORADATA\TEST\TOOLS01.DBF',
'E:\ORADATA\TEST\OCL_TEMP1.DBF',
'E:\ORADATA\TEST\OCL_APP.DBF',
'E:\ORADATA\TEST\OCL_APP_IDX.DBF',
'E:\ORADATA\TEST\OCL_DEF.DBF',
'E:\ORADATA\TEST\OCL_DEF_IDX.DBF',
'E:\ORADATA\TEST\OCL_GLIB.DBF',
'E:\ORADATA\TEST\OCL_GLIB_IDX.DBF',
'E:\ORADATA\TEST\OCL_LI.DBF',
'E:\ORADATA\TEST\OCL_LI_IDX.DBF',
CHARACTER SET WE8ISO8859P1
;


8 Create the Oracle Service for the New Database (Windows Operating System Only)

C:\> set ORACLE_SID=TEST

Check whether the initialization parameter REMOTE_LOGIN_PASSWORDFILE in the initTEST.ora file is set to exclusive or shared. If it is set, then the original database PROD was using a password file. Use the oradim command to create a new Oracle SID service for the new database TEST with a password file:

C:\> oradim -new -sid TEST -intpwd sys -startmode auto


9 Update Network Files and Restart the Listener

Edit the listener.ora and tnsnames.ora files to mention the newly created instance TEST. Stop and restart the listener.

For Unix only, add an entry for the TEST database in the oratab file. On Tru64 and HP-UX, oratab is located in the /etc directory. On Solaris, the oratab file is located in the /var/opt/oracle directory. The line will be similar to this:

TEST:/u01/app/oracle/product/9.2.0.8.0:N

NOTE: The instance name is case sensitive. Make sure to match the instance name with the one in the listener.ora, tnsnames.ora and oratab files.


10 Recreate the Control File, and Open the Database

Start the instance and recreate the control file:

On Unix Operating System:

Set the Oracle environment to the new database TEST in the Operating System. Define the following environment variables:

Example:

$export ORACLE_SID=TEST

$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwTEST password=sys_password entries=5   (replace sys_password with the SYS password)

Connect to the target database’s idle instance via SQL*Plus using an account that has the SYSDBA, SYSOPER, or DBA role granted to it.

Example:

$ sqlplus "sys/sys as sysdba"

SQL> STARTUP NOMOUNT pfile='/u01/app/oracle/admin/TEST/pfile/initTEST.ora'

SQL> @c1.sql
SQL> alter database open resetlogs;
SQL> create spfile from pfile='/u01/app/oracle/admin/TEST/pfile/initTEST.ora';
SQL> shutdown immediate;
SQL> startup;


On Windows Operating System:


Example:

C:\> set ORACLE_SID=TEST

Connect to the target database’s idle instance via SQL*Plus using an account that has the SYSDBA, SYSOPER, or DBA role granted to it.

Example:

C:\> sqlplus “sys/sys as sysdba”

SQL> STARTUP NOMOUNT PFILE='e:\oracle\admin\TEST\pfile\initTEST.ora'
SQL> @c1.sql
SQL> alter database open resetlogs;
SQL> create spfile from pfile='e:\oracle\admin\TEST\pfile\initTEST.ora';
SQL> shutdown immediate;
SQL> startup;


11 Change the Global Name of the Target Database

SQL> alter database rename global_name to TEST.us.oracle.com;

Manual Database creation

Creating an Oracle 10g database from the command line only
This article is the successor to Creating an Oracle 9i database on NT from the command line only. There are basically three ways to create an Oracle database:

•Using the Database Configuration Assistant (DBCA)
•With the SQL create database statement
•Through upgrading an existing database.

This article focuses on the second option. It can be completed on the command line only, that is, without any GUI tool. Also, the article is a bit biased towards Windows and its command prompt (cmd.exe: Start -> Run -> cmd).

Specifying the Instance's SID

There can be more than one Oracle instance on a single machine. In order to distinguish these instances, Oracle uses a SID (System Identifier), which is a string. The SID can be set through the ORACLE_SID environment variable.

D:\oracle\product\10.1.0>set ORACLE_SID=ORA10
Creating an Oracle Service
On Windows, each instance requires a Windows service. This service must first be created with oradim:

D:\oracle\product\10.1.0\Db_1>oradim -new -sid %ORACLE_SID% -intpwd MYSECRETPASSWORD -startmode M
Instance created.

It can be verified that a Windows service was created by typing services.msc into the console. A service named OracleServiceORA10 (ORA10 = %ORACLE_SID%) will be found. Also, the startup type is manual, as was requested by -startmode M. Oracle also created a password file under %ORACLE_HOME%\database:

D:\oracle\product\10.1.0\Db_1>dir database
Volume in drive D has no label.

As can be seen, the SID is in the password file's name.

Creating the initialization parameter file

When an Oracle instance starts up, it requires either an initialization parameter file (init.ora) or an SPFILE. SPFILEs have binary content and must be created from init.ora files. Therefore, the init.ora file (which is an ordinary text file) is created first. Here's a minimal init.ora (under $ORACLE_HOME/dbs on Unix, or %ORACLE_HOME%\database on Windows) just to demonstrate how the control files are found. Of course, you will add more init parameters to the init.ora file.

D:\oracle\product\10.1.0\Db_1\database\initORA10.ora
control_files = (d:\oracle\databases\ora10\control01.ora,
d:\oracle\databases\ora10\control02.ora,
d:\oracle\databases\ora10\control03.ora)
undo_management = auto
db_name = ora10
db_block_size = 8192
The undo_management parameter is necessary if we want to use automatic undo management. Although the above seems to be the bare required minimum, you probably also want to define background_dump_dest, core_dump_dest, and user_dump_dest.

Starting the instance

Now that we have created an Oracle service and the init.ora file, we're ready to start the instance:

D:\oracle\product\10.1.0\Db_1>sqlplus /nolog

SQL*Plus: Release 10.1.0.2.0 - Production on Sat Mar 5 16:05:15 2005

Copyright (c) 1982, 2004, Oracle. All rights reserved.

SQL> connect sys/MYSECRETPASSWORD as sysdba
Connected to an idle instance.
SQL*Plus tells us that we're connected to an idle instance. That means that it is not yet started. So, let's start the instance. We have to start the instance without mounting (nomount), as there is no database we could mount at the moment.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 113246208 bytes
Fixed Size 787708 bytes
Variable Size 61864708 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes
This created the SGA (System Global Area) and the background processes.

Creating the database

We're now ready to finally create the database:

SQL> create database ora10
logfile group 1 ('D:\oracle\databases\ora10\redo1.log') size 10M,
group 2 ('D:\oracle\databases\ora10\redo2.log') size 10M,
group 3 ('D:\oracle\databases\ora10\redo3.log') size 10M
character set WE8ISO8859P1
national character set utf8
datafile 'D:\oracle\databases\ora10\system.dbf'
size 50M
autoextend on
next 10M maxsize unlimited
extent management local
sysaux datafile 'D:\oracle\databases\ora10\sysaux.dbf'
size 10M
autoextend on
next 10M
maxsize unlimited
undo tablespace undo
datafile 'D:\oracle\databases\ora10\undo.dbf'
size 10M
default temporary tablespace temp
tempfile 'D:\oracle\databases\ora10\temp.dbf'
size 10M;
If something goes wrong with the creation, Oracle will write an error into the alert.log. The alert log is normally found in the directory specified by background_dump_dest. If this parameter was not specified (as is the case in our minimal init.ora), the alert.log will be written into %ORACLE_HOME%\RDBMS\trace. If an ORA-01031: insufficient privileges is returned, it most likely means that the current user is not in the dba group (on Unix) or the ORA_DBA group (on Windows). If the init.ora file is not at its default location and has not been pointed to with the pfile attribute, an ORA-01078: failure in processing system parameters and an LRM-00109: could not open parameter file '/appl/oracle/product/9.2.0.2/dbs/initadpdb.ora' error is issued. The create database command also executes a file whose name is determined by the (hidden) init parameter _init_sql_file (which seems to default to sql.bsq). After the creation of the database, it can be mounted and opened for use.

Completing the DB creation

In order to complete the database creation, the following scripts must be run as SYS:
•%ORACLE_HOME%/rdbms/admin/catalog.sql
•%ORACLE_HOME%/rdbms/admin/catproc.sql

SQL*Plus provides a shortcut to refer to the ORACLE_HOME directory: the question mark (?). Therefore, these scripts can be called like so:

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
catalog.sql creates the data dictionary. catproc.sql creates all structures required for PL/SQL.

Now the database is ready for use!