NAVEEN

Thursday, May 13, 2010

ASM in Windows Server

Preparing Disks for ASM: Windows NT

To simulate an ASM implementation in the Windows NT environment, I first created raw disk partitions on some of my hard drives. Fortunately, I had extra space on three of the four disks in my test server, so I created several 100MB primary partitions on each of those three drives using the Windows disk partitioning utility (DISKPART.EXE):

C:\> diskpart

DISKPART> select disk 2
Disk 2 selected.
DISKPART> create partition primary size=100
Partition created.
DISKPART> create partition primary size=100
Partition created.
DISKPART> create partition primary size=100
Partition created.
DISKPART> select disk 3
Disk 3 selected.
DISKPART> create partition primary size=100
Partition created.

Once the partitions were created, I used the ASMTOOLG.EXE utility to "stamp" each partition with an ASM label so that Oracle
can recognize these partitions as candidate disks for the ASM instance. I ran ASMTOOLG.EXE from the \bin
directory of the Oracle home for my Windows NT database. Figure 2.1.2 shows the initial screen that this GUI tool
presented, and Figure 2.1.3 shows the screen that confirmed the creation of the ASM labels. Once the labels were assigned,
I re-invoked ASMTOOLG to confirm them (see Figure 2.1.4). I will also use ASMTOOLG to remove the labels before
deleting these partitions once my simulation is complete.
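
For scripted setups, ASMTOOLG's command-line sibling ASMTOOL can apply the same labels. Here is a minimal sketch; the device path and label are illustrative and depend on your actual disk layout:

REM Stamp a raw partition as an ASM candidate disk (illustrative values)
asmtool -add \Device\Harddisk2\Partition1 ORCLDISKDATA0

REM List the existing ASM labels to confirm the stamp
asmtool -list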

Creating an ASM Instance Using DBCA in Linux
I used this same methodology to create an ASM instance in Linux using DBCA. The only significant difference was the contents of the Disk Selection window (Figure 2.2.9), which instead showed the candidate disks I had previously created on that server for the Linux environment. In either of these cases, the end result is the same: Once I clicked on the Cancel button on the last screen, Oracle dismounted the ASM disk group I had created and then shut down the ASM instance as well.

Creating an ASM Instance without Using DBCA
Of course, I don't have to use DBCA to create an ASM instance. Personally, I prefer command scripts to create my databases: it is easier to customize the scripts to create other ASM instances in the future, and it gives me complete control over what Oracle is doing behind the scenes. An ASM instance is also extremely easy to create, because no CREATE DATABASE script is required, just an initialization parameter file like the one shown in Listing 2.2. Besides the usual initialization parameters for trace file directories, only a few additional parameters are required to create an ASM instance:

ASM Initialization Parameters

INSTANCE_TYPE
Defines the instance as an ASM instance. This is the only parameter required to identify an ASM instance; the remainder can be left at their defaults.

DB_UNIQUE_NAME
Defines the service provider name for which this ASM instance manages disk groups. +ASM is the default value and should not be modified unless multiple ASM instances run on the same node.

ASM_POWER_LIMIT
Controls rebalance operation speed. Values range from 0 to 11, with 11 being the fastest and 0 disabling rebalancing; if omitted, this value defaults to 1. The number of rebalance slaves is derived from the parallelization level specified in a manual rebalance command (POWER) or from this parameter.

ASM_DISKSTRING
An operating-system-dependent value used by ASM to limit the set of disks considered for discovery.

ASM_DISKGROUPS
The list of ASM disk group names to be mounted by the ASM instance at startup, or whenever the ALTER DISKGROUP ALL MOUNT command is issued.

LARGE_POOL_SIZE
The size of the large pool. It must be set to a minimum of 8MB, but Oracle recommends setting it to 16MB.


To create the ASM instance without using DBCA, I first made sure that I had created the directories specified for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, and USER_DUMP_DEST. I also created a password file for the instance using the ORAPWD utility. (This is important: when you attempt to connect to the ASM instance from Enterprise Manager, it expects the instance to have the REMOTE_LOGIN_PASSWORDFILE initialization parameter set to EXCLUSIVE so that the instance can be contacted remotely, and that means a password file is required.)

ORAPWD file=c:\oracle\app\product\10.1.0\db_1\database\PWD+ASM.ora password=oracle
After I had made sure that the Oracle Cluster Synchronization Services service was started in my Windows NT environment - it is usually named OracleCSService in the list of Windows services - I simply pointed my MS-DOS command window at the instance by setting ORACLE_SID to +ASM, started a SQL*Plus session, created an SPFILE from the parameter file, and then started the ASM instance in NOMOUNT mode:

C:\> set ORACLE_SID=+ASM
C:\> sqlplus "sys as sysdba"
Connected to an idle instance.
SQL> create spfile from pfile='c:\init+asm.ora';
File created.
SQL> startup nomount;
ASM instance started

# Standard ASM instance initialization parameters
*.asm_power_limit = 5
*.db_unique_name = '+ASM'
*.instance_type = 'asm'
*.large_pool_size = 16M

# Background, core, and user trace file directories. These will need
# to exist before starting the instance and configured for the server's
# operating system as appropriate
*.background_dump_dest = '/u01/app/oracle/admin/+ASM/bdump'
*.core_dump_dest = '/u01/app/oracle/admin/+ASM/cdump'
*.user_dump_dest = '/u01/app/oracle/admin/+ASM/udump'

# This requires a password file for the ASM instance, needed by
# Enterprise Manager for a connection as SYS:
*.remote_login_passwordfile = EXCLUSIVE

# ASM candidate disk search string. It's not required, but it does help
# Oracle find the candidate disks more easily
*.asm_diskstring = '\\.\ORCLDISK*'  # For Windows environments
# For Red Hat Linux environments, use this instead (only one ASM_DISKSTRING
# setting may be active at a time):
# *.asm_diskstring = '/u03/asmdisks/*', '/u04/asmdisks/*'

# ASM Mountable Disk Group(s). These can be activated after the ASM
# Instance has been created.
# *.asm_diskgroups = 'DGROUP1'

-- Create ASM disk group in NT environment
CREATE DISKGROUP dgroup1
NORMAL REDUNDANCY
FAILGROUP ctlr1
DISK
'\\.\ORCLDISKDATA0'
, '\\.\ORCLDISKDATA2'
FAILGROUP ctlr2
DISK
'\\.\ORCLDISKDATA1'
,'\\.\ORCLDISKDATA3'
;

-- Create ASM disk group in Linux environment
CREATE DISKGROUP dgroup1
NORMAL REDUNDANCY
FAILGROUP ctlr1
DISK
'/u03/asmdisks/disk0'
,'/u04/asmdisks/disk2'
FAILGROUP ctlr2
DISK
'/u03/asmdisks/disk1'
,'/u04/asmdisks/disk3'
;
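
Either way, the result can be verified from the ASM instance itself; the V$ASM_DISKGROUP and V$ASM_DISK dynamic views show the new disk group and its member disks (the column lists below are just a convenient subset):

-- Verify the new disk group and its member disks
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> select path, failgroup, mount_status from v$asm_disk;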


On a Windows system without spare raw partitions, the command-line ASMTOOL utility can instead create a file-backed ASM disk; for example, the following creates a 250MB disk file:

ASMTOOL -create c:\asmdisks\asmdisk1 250

Oracle Streams Concepts & Configuration

Abstract

An Oracle database upgrade is the process of transforming an existing Oracle database
into a later or the current release of Oracle Database. This upgrade process
normally requires a significant amount of database downtime. The task becomes
even more difficult for businesses that need to be operational 24x7, where any
outage negatively affects business operations and customer service. In 24x7
organizations such as banks, database availability is so crucial that downtime is
negotiated in minutes.
Minimizing downtime is a challenge every organization faces. Oracle Streams
replication enables us to perform an upgrade with near-zero downtime. This is
accomplished by configuring Oracle Streams on the existing Oracle 9i database to
capture all the changes on the source database and replicate them to the
target (Oracle 10g) database. The rest of the document presents a step-by-step
approach to setting up Oracle Streams replication and performing the upgrade to
an Oracle 10g database.

Introduction
This document provides a step-by-step approach to upgrading an existing Oracle 9i
database to an Oracle 10g database using Oracle Streams, attaining near-zero
downtime. In this document, I create a new Oracle 9i database with an application
schema and upgrade it to an Oracle 10g database.
Bi-directional replication is accomplished using Oracle Streams; both the Oracle 9i
and Oracle 10g databases will be in Active-Active mode. Although both databases will
be in Active-Active mode, only one database will be serving users at any given time.
Bi-directional replication is required so that we can fall back to Oracle 9i should we
have to.
Setting up Bi-Directional Replication
For the upgrade of the Oracle 9i database, I followed the steps listed below:
1. Create an Oracle 9i database.
2. Create an application schema with a few objects.
3. Create an Oracle 10g database.
4. Configure a Streams Administrator on both the Oracle 9i and Oracle 10g databases.
5. Configure Oracle Streams on both databases.
6. Export the Oracle 9i application schema(s) to the Oracle 10g database.
7. Start Oracle Streams.
8. Test the data transfer from the source to the target database.
In this document, although bi-directional replication is configured, I refer to the
Oracle 9i database as the source database and the Oracle 10g database as the
destination database. I created both the source and destination databases
on a single machine; the details are as follows:
                     Source                          Destination
Host Name            Test                            Test
Operating System     MS Windows XP                   MS Windows XP
                     Version 5.1 Service Pack 2      Version 5.1 Service Pack 2
Database Version     Oracle9i Enterprise Edition     Oracle 10g Enterprise Edition
                     Release 9.2.0.8.0               Release 10.2.0.1.0
Oracle Home          C:\ora9i                        C:\oracle\ora10g
Instance Name        DB9I                            DB10G
Database Role        Source / Destination            Source / Destination
Archiving            ARCHIVELOG Mode                 ARCHIVELOG Mode
Pfile                C:\db9i\pfile                   C:\db10g\pfile
Data Files           C:\db9i\data                    C:\db10g\data
Control Files        C:\db9i\control1                C:\db10g\control1
                     C:\db9i\control2                C:\db10g\control2
Redo Logs            C:\db9i\redo                    C:\db10g\redo
Archived Redo Logs   C:\db9i\arch                    C:\db10g\arch
Create an Oracle 9i database
I created a blank Oracle 9i database using the following procedure.
(i) Create an Initialization Parameter File
Apart from the standard initialization parameters, the following parameters must be
configured to use Oracle Streams effectively.

GLOBAL_NAMES = TRUE
Specifies whether a database link is required to have the same name as the database
to which it connects.

JOB_QUEUE_PROCESSES = 10
Specifies the number of Jnnn job queue processes for each instance.
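
For reference, here is a hypothetical initDB9I.ora sketch with these settings; the standard parameters are abbreviated, and the paths simply follow the directory layout shown earlier:

# Hypothetical initDB9I.ora (illustrative; adjust to your environment)
db_name=DB9I
compatible=9.2.0
control_files=('C:\db9i\control1\control01.ctl','C:\db9i\control2\control02.ctl')
# Oracle 9i still needs automatic archiving switched on explicitly
log_archive_start=TRUE
log_archive_dest_1='LOCATION=C:\db9i\arch'
undo_management=AUTO
undo_tablespace=UNDOTBS1
# Streams-related settings from the table above
global_names=TRUE
job_queue_processes=10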
(ii) Create Password File and Windows Service
A password file is created using the orapwd utility, and a Windows service is
created using the oradim utility from the Oracle 9i home.
-- Create a Password File
C:\Ora9i\bin> orapwd file=C:\Ora9i\database\PWDdb9i.ora password=syssys
-- Create a Windows Service
C:\Ora9i\bin> oradim -new -sid db9i -pfile c:\db9i\pfile\initDB9I.ora
(iii)Create the source database
Below are the steps I used to create the source database in Oracle 9i. Once the
database was created, I also created the listener service using the Oracle Net
Configuration Assistant.
C:\Ora9i\bin> sqlplus "/as sysdba"
SQL> startup nomount pfile=c:\db9i\pfile\initDB9I.ora
SQL> Create database DB9I
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
character set AR8MSWIN1256
logfile group 1 ('C:\db9i\redo\redo01.log') size 64M,
group 2 ('C:\db9i\redo\redo02.log') size 64M,
group 3 ('C:\db9i\redo\redo03.log') size 64M
datafile 'C:\db9i\data\system01.dbf' size 257M autoextend on
maxsize unlimited extent management local
undo tablespace undotbs1 datafile 'C:\db9i\data\undotbs01.dbf' size 257M
default temporary tablespace temp tempfile 'C:\db9i\data\temp01.dbf' size 257M;
SQL> @?\rdbms\admin\catalog.sql
SQL> @?\rdbms\admin\catproc.sql
SQL> connect system/manager
SQL> @?\sqlplus\admin\pupbld
Enable the source database (DB9I) in Archivelog Mode
Oracle Streams requires the source database to run in ARCHIVELOG mode when a
capture process is configured to capture changes. The following commands enable
ARCHIVELOG mode.
SQL> conn /as sysdba
SQL> shutdown immediate;
SQL> startup mount pfile=c:\db9i\pfile\initdb9i.ora
SQL> alter database archivelog;
SQL> alter database open;
Create the destination Oracle 10g Database
(i) Create an Initialization Parameter File
Apart from the standard parameters, the following initialization parameters must be
configured to use Oracle Streams.

STREAMS_POOL_SIZE = 256M
Specifies (in bytes) the size of the Streams pool, which contains buffered queue
messages. In addition, the Streams pool is used for internal communication during
parallel capture and apply.

GLOBAL_NAMES = TRUE
Specifies whether a database link is required to have the same name as the database
to which it connects.

JOB_QUEUE_PROCESSES = 10
Specifies the number of Jnnn job queue processes for each instance.

UNDO_RETENTION = 10800
Specifies (in seconds) the amount of committed undo information to retain in the
database. For a database running one or more capture processes, make sure this
parameter specifies an adequate undo retention period.
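
For reference, a hypothetical initDB10G.ora sketch; again, the standard parameters are abbreviated and the paths follow the directory layout shown earlier:

# Hypothetical initDB10G.ora (illustrative; adjust to your environment)
db_name=DB10G
compatible=10.2.0
control_files=('c:\db10g\control1\control01.ctl','c:\db10g\control2\control02.ctl')
log_archive_dest_1='LOCATION=c:\db10g\arch'
undo_management=AUTO
undo_tablespace=UNDOTBS1
# Streams-related settings from the table above
streams_pool_size=256M
global_names=TRUE
job_queue_processes=10
undo_retention=10800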
(ii) Create Password File and Windows Service
A password file is created using the orapwd utility, and a Windows service is
created using the oradim utility from the Oracle 10g home.
-- Create a Password File
C:\oracle\ora10g> orapwd file=C:\Oracle\Ora10g\database\PWDdb10g.ora password=syssys
-- Create a Windows Service
C:\oracle\ora10g> oradim -new -sid db10g -pfile c:\db10g\pfile\initDB10G.ora
(iii)Create the destination database
The script below creates the destination database in Oracle 10g. After the
database creation, also create and start the listener.
C:\oracle\ora10g>sqlplus "/as sysdba"
SQL> startup nomount pfile=c:\db10g\pfile\initDB10G.ora
SQL> create database DB10G
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
character set AR8MSWIN1256
logfile group 1 ('c:\db10g\redo\redo01.log') size 128M,
group 2 ('c:\db10g\redo\redo02.log') size 128M,
group 3 ('c:\db10g\redo\redo03.log') size 128M
datafile 'c:\db10g\data\system01.dbf' size 257M autoextend on
maxsize unlimited extent management local
sysaux datafile 'c:\db10g\data\sysaux01.dbf' size 257M
autoextend on maxsize unlimited
undo tablespace undotbs1 datafile 'c:\db10g\data\undotbs01.dbf' size 257M
default temporary tablespace temp tempfile 'c:\db10g\data\temp01.dbf' size 257M;
SQL> @?\rdbms\admin\catalog.sql
SQL> @?\rdbms\admin\catproc.sql
SQL> conn system/manager
SQL> @?\sqlplus\admin\pupbld
Enable the destination database (DB10G) to run in Archivelog Mode
Because DB10G will act as the primary database if we have to fall back to Oracle
9i, we must also configure DB10G to run in ARCHIVELOG mode.
SQL> conn /as sysdba
SQL> shutdown immediate;
SQL> startup mount pfile=c:\db10g\pfile\initdb10g.ora
SQL> alter database archivelog;
SQL> alter database open;
Create a sample Application Schema on the Source Database (DB9I).
Using the following code snippet, I create a separate tablespace for the application
along with an application schema named APPS, and grant the required privileges to
the APPS schema.
SQL> create tablespace appsts datafile 'c:\db9i\data\appsts01.dbf' size 100M;
SQL> create user apps identified by apps default tablespace appsts temporary
tablespace temp;
SQL> grant connect, resource to apps;
Create objects and insert sample data in the APPS Schema on DB9I
I create two simple tables in the APPS schema and populate them with test data.
Below is the code:
SQL> connect apps/apps
SQL> create table dept (
deptno number(10) not null,
dname varchar2(20) not null,
db_site varchar2(10) not null);
SQL> alter table dept add constraint dept_pk primary key (deptno);
SQL> insert into dept values (1, 'IT','db9i');
SQL> insert into dept values (2, 'HR', 'db9i');
SQL> insert into dept values (3, 'LOGISTICS','db9i');
SQL> commit;
SQL> create table emp (
empno number(10) not null,
ename varchar2(20) not null,
site varchar2(10) not null,
job varchar2(10),
deptno number(10));
SQL> alter table emp add constraint emp_pk primary key (empno);
SQL> alter table emp add constraint emp_fk foreign key (deptno) references dept
(deptno);
SQL> insert into emp values (101, 'SCOTT','db9i','CLERK',1);
SQL> insert into emp values (102, 'TIGER','db9i','ANALYST',2);
SQL> insert into emp values (103, 'MARK','db9i','ENGINEER',3);
SQL> commit;
Create Streams Administrator on the source database (DB9I)
By this time, I have an Oracle 9i source database containing the application
schema accessed by end users for their daily processing, and an empty Oracle 10g
database. To manage the Streams environment, I will create a new user (called
strmadmin) with the appropriate privileges. We cannot use the SYS or SYSTEM user as
the Streams Administrator, and the Streams Administrator should not use the SYSTEM
tablespace as its default tablespace.
SQL> conn sys/syssys as sysdba
SQL> create tablespace streamsts datafile 'c:\db9i\data\streamsts01.dbf' size 100M;
SQL> create user strmadmin identified by strmadmin default tablespace streamsts
temporary tablespace temp;
SQL> grant connect, resource, dba, select_catalog_role to strmadmin;
SQL> BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
SQL> BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
SQL> GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
SQL> GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
SQL> GRANT EXECUTE ON DBMS_APPLY_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_CAPTURE_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_PROPAGATION_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_FLASHBACK to strmadmin;
SQL> GRANT SELECT_CATALOG_ROLE to strmadmin;
SQL> GRANT SELECT ANY DICTIONARY to strmadmin;
SQL> GRANT ALL PRIVILEGES TO strmadmin;
Create a Heartbeat Table on the source database (DB9I)
Implementing a heartbeat table provides quick feedback on the health of the
Streams replication. I will create a heartbeat table with date and site columns,
and a database job that inserts a record into the heartbeat table every 10
seconds.
SQL> conn apps/apps
SQL> create table heartbeat (a int, at_time date, db_site varchar2(30));
SQL> create sequence seq9i;
SQL> set serveroutput on
SQL> declare
l_job_no NUMBER;
begin
dbms_job.submit(job => l_job_no,
what => 'begin insert into heartbeat values (seq9i.nextval, sysdate,
''DB9I''); commit; end;',
next_date => sysdate,
interval =>'sysdate +10/86400');
commit;
dbms_output.put_line('Job No : '|| to_char(l_job_no));
end;
/
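
Once replication is up and running (and the database links created in the later steps exist), the heartbeat table gives a quick lag check. A sketch, assuming it is run from DB9I after the import step:

-- Compare the newest heartbeat record on both sites
SQL> select max(at_time) from apps.heartbeat;
SQL> select max(at_time) from apps.heartbeat@db10g;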
Enable Supplemental Logging on the source database (DB9I)
Supplemental logging writes additional column information into the redo logs for row
identification. Streams requires this because the capture process mines the redo
logs for DDL and DML changes.
SQL> conn sys/syssys as sysdba
SQL> alter database add supplemental log data
(PRIMARY KEY, UNIQUE INDEX) columns;
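
To confirm that supplemental logging is now enabled, the relevant flags are exposed in V$DATABASE (available from Oracle 9i Release 2 onwards):

SQL> select supplemental_log_data_pk, supplemental_log_data_ui from v$database;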
Create Application User Schema on the destination database (DB10G)
I will create tablespaces and schemas in the destination database similar to
those in the source database.
SQL> conn sys/syssys as sysdba
SQL> create tablespace appsts datafile 'c:\db10g\data\appsts01.dbf' size 100M;
SQL> create user apps identified by apps default tablespace appsts temporary
tablespace temp;
SQL> grant connect,resource to apps;
Create Streams Administrator on DB10G
As on the source database, I have to configure a Streams Administrator on the
destination site to manage the Streams environment. The following code creates the
user strmadmin and grants the appropriate privileges.
SQL> conn /as sysdba
SQL> create tablespace streamsts datafile 'c:\db10g\data\streamsts01.dbf' size
100M;
SQL> create user strmadmin identified by strmadmin default tablespace streamsts
temporary tablespace temp;
SQL> grant dba,select_catalog_role to strmadmin;
SQL> exec dbms_streams_auth.grant_admin_privilege('strmadmin',true);
SQL> GRANT ALL PRIVILEGES TO strmadmin;
Create database link on the source database (DB9I)
As the Streams Administrator on the source database, create a private database link
to the destination. The link is required to ship the changes from the source to the
destination. Test the link by selecting the SYSDATE from the remote database.
Modify the %ORACLE_HOME%\network\admin\tnsnames.ora file to include an entry
that points to the DB10G database.
SQL> conn strmadmin/strmadmin
SQL> create database link db10g connect to strmadmin identified by strmadmin
using 'db10g';
SQL> select sysdate from dual@db10g;
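
For completeness, a sketch of the kind of tnsnames.ora entry this relies on; the host name comes from the environment table above, and the listener port is assumed to be the default 1521:

DB10G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = Test)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB10G))
  )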
Create database link on the destination database (DB10G)
Similarly, on the destination database create a private database link to the source
database. Modify the %ORACLE_HOME%\network\admin\tnsnames.ora file to
include an entry that points to the DB9I database.
SQL> conn strmadmin/strmadmin
SQL> create database link db9i connect to strmadmin identified by strmadmin using
'db9i';
SQL> select sysdate from dual@db9i;
Create Streams queues on the source database (DB9I)
In a Streams environment, queues stage the LCRs (logical change records) that
encapsulate captured messages. These queues are used by capture processes,
propagations, and apply processes as an LCR moves through a stream from the source
database to the destination database.
As the Streams Administrator, I create a capture queue, capture_queue_9i, and
assign capture_table as the queue table to be used by the capture process.
Similarly, I create an apply queue named apply_queue_9i and associate it with the
apply_table queue table on the source database.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_table',
queue_name => 'capture_queue_9i',
queue_user => 'strmadmin');
end;
/
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_table',
queue_name => 'apply_queue_9i',
queue_user => 'strmadmin');
end;
/
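
A quick way to verify that both queues were created is to query USER_QUEUES as strmadmin (a sketch):

SQL> select name, queue_table, enqueue_enabled, dequeue_enabled from user_queues;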
Create Streams queues on the destination database (DB10G)
Similarly, I create the capture and apply queues on the destination database.
The capture queue will be used to stage all the changes captured on the DB10G
database, and the apply queue will be used to apply the LCRs captured on the DB9I
database.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_table',
queue_name => 'capture_queue_10g',
queue_user => 'strmadmin');
end;
/
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_table',
queue_name => 'apply_queue_10g',
queue_user => 'strmadmin');
end;
/
Configure Capture process on the source database (DB9I)
As the Streams Administrator on the source database, create and configure a capture
process. The capture process is configured to capture all DML and DDL changes
on the entire database.
The ADD_GLOBAL_RULES procedure enables us to capture changes at the database
level. By passing TRUE for the INCLUDE_DML and INCLUDE_DDL parameters, we
instruct the capture process to capture all DDL and DML changes.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'capture',
streams_name => 'capture_stream_9i',
queue_name => 'capture_queue_9i',
include_dml => true,
include_ddl => true);
end;
/
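
The capture process is created in a DISABLED state and will be started in a later step; in the meantime, its state can be checked from DBA_CAPTURE (a sketch):

SQL> select capture_name, queue_name, status from dba_capture;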
Configure Apply process on the source database (DB9I)
The apply process dequeues the LCR events from DB10G and applies them to the
respective application schemas on the source database. Create an apply process by
logging in as the Streams Administrator, as shown below.
This step is optional and is needed only to support a fallback to Oracle 9i. The
procedure below attaches the apply_queue_9i queue to the apply process and instructs
the apply process to apply all DDL and DML changes from the DB10G database.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'apply',
streams_name => 'apply_src_db10g',
queue_name => 'apply_queue_9i',
include_dml => true,
include_ddl => true,
source_database => 'db10g');
end;
/
Configure Propagation process on the Source Database (DB9I)
Configuring a propagation process is similar to configuring the capture and apply
processes: we add rules to the positive rule set and specify the source queue name
and the destination queue name. Changes are propagated from the source queue to the
destination queue.
Here the source queue is the capture queue (capture_queue_9i) at the source
database (DB9I), and the destination queue is the apply queue (apply_queue_10g)
at the destination (DB10G).
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_propagation_rules (
streams_name => 'db9i_to_db10g',
source_queue_name => 'capture_queue_9i',
destination_queue_name => 'apply_queue_10g@db10g',
include_dml => true,
include_ddl => true,
source_database => 'db9i');
end;
/
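
To confirm that the propagation was registered with the intended source and destination queues, DBA_PROPAGATION can be queried (a sketch):

SQL> select propagation_name, source_queue_name, destination_queue_name from dba_propagation;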
Configure a Capture process on the destination database (DB10G)
A capture process must also be configured at the destination database to capture all
the changes made to the Oracle 10g database. This process is optional and is needed
only if we have to fall back to the Oracle 9i database even after transactions
have been performed on DB10G. This advantage is not available in any of the other
supported upgrade methods: we can fall back immediately, or even after running the
applications against DB10G for a couple of days.
As on the Oracle 9i database, I use the same ADD_GLOBAL_RULES procedure
to create the capture process.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'capture',
streams_name => 'capture_stream_10g',
queue_name => 'capture_queue_10g',
include_dml => true,
include_ddl => true);
end;
/
Set the global instantiation SCN for the destination database (DB10G)
Setting the global instantiation SCN for DB10G at DB9I ensures that only changes
committed on DB10G after this instantiation SCN are applied at DB9I.
SQL> connect strmadmin/strmadmin
SQL> declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_global_instantiation_scn@db9i (
source_database_name => 'db10g',
instantiation_scn => v_scn,
recursive => true);
end;
/
Configure Apply process on the destination database (DB10G)
An apply process must be created on the destination to apply all the changes made
and captured at the source database. The script below creates an apply process and
instructs it to apply all DDL and DML changes captured at the DB9I database.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'apply',
streams_name => 'apply_src_db9i',
queue_name => 'apply_queue_10g',
include_dml => true,
include_ddl => true,
source_database => 'db9i');
end;
/
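
Like the capture process, the apply process is created disabled and is started in a later step; its state can be checked from DBA_APPLY (a sketch):

SQL> select apply_name, queue_name, status from dba_apply;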
Configure Propagation process on the destination database (DB10G)
Creating this propagation process is optional and is needed only to support the
fallback strategy. Changes are propagated from the DB10G database to the DB9I
database.
Here the source queue is the capture queue (capture_queue_10g) at DB10G, and the
destination queue is the apply queue (apply_queue_9i) at DB9I.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_propagation_rules (
streams_name => 'db10g_to_db9i',
source_queue_name => 'capture_queue_10g',
destination_queue_name => 'apply_queue_9i@db9i',
include_dml => true,
include_ddl => true,
source_database => 'db10g');
end;
/
Set the global instantiation SCN for the source database (DB9I)
This is a very important step. It marks the SCN at the source database (DB9I) from
which all changes are to be captured, propagated, and applied at the DB10G
database.
SQL> connect strmadmin/strmadmin
SQL> declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_global_instantiation_scn@db10g (
source_database_name => 'db9i',
instantiation_scn => v_scn,
recursive => true);
end;
/
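
If you want to confirm the instantiation SCN that was just recorded at the destination, the DBA_APPLY_INSTANTIATED_GLOBAL view on DB10G should show it (a sketch, queried here through the database link):

SQL> select source_database, instantiation_scn from dba_apply_instantiated_global@db10g;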
Export Application schema from the source database (DB9I)
Now that the instantiation SCN is marked, I have to move the existing objects along
with their data from DB9I to DB10G. The only supported method of moving the existing
data from an Oracle 9i database to an Oracle 10g database is the EXP/IMP
utilities. Below is the schema-level export from the Oracle 9i database.
exp system/manager consistent=y file=c:\db9i\expimp\apps.dmp owner=(apps)
Import Application schema to the destination database (DB10G)
As I have already prepared a similar environment on the DB10G database
(tablespaces and schemas), I simply import the application schema. The
STREAMS_INSTANTIATION parameter specifies that any Streams instantiation
metadata present in the export dump file should be imported. The import session
sets its Streams tag to '00' so that the changes made by the import are not
captured and cycled back.
imp system/manager fromuser=apps touser=apps file=c:\db9i\expimp\apps.dmp
STREAMS_INSTANTIATION=Y
Remove Heartbeat Job from DB10G
After importing the application schema, I make sure that no database job exists
on the destination database that inserts records into the heartbeat table. If
one exists, I remove it (in my case, job number 1, as passed to DBMS_JOB.REMOVE
below).
SQL> conn system/manager
SQL> select job, what from dba_jobs;
SQL> exec dbms_job.remove(1);
Start Capture and Apply processes on the destination database (DB10G)
I now start the capture and apply processes on the destination database using
the procedure below:
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_apply_adm.set_parameter (
apply_name => 'apply_src_db9i',
parameter => 'disable_on_error',
value => 'N');
end;
/
SQL> begin
dbms_apply_adm.start_apply (
apply_name => 'apply_src_db9i');
end;
/
SQL> begin
dbms_capture_adm.start_capture (
capture_name => 'capture_stream_10g');
end;
/
Start Capture and Apply processes on the source database (DB9I)
Similar to starting the Capture and Apply processes on DB10G, I start these
processes on DB9I.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_apply_adm.set_parameter (
apply_name => 'apply_src_db10g',
parameter => 'disable_on_error',
value => 'N');
end;
/
SQL> begin
dbms_apply_adm.start_apply (
apply_name => 'apply_src_db10g');
end;
/
SQL> begin
dbms_capture_adm.start_capture (
capture_name => 'capture_stream_9i');
end;
/
Test Drive the Streams Replication
At this point, the bi-directional Streams replication is in place and working. Let
us test the replication by making some changes on both databases and verifying that
they are replicated. Also, check whether the heartbeat table on DB10G is in sync
with DB9I.
-- Make DML and DDL changes on DB9I
SQL> insert into apps.dept values (21, 'new dept from db9i', 'db9i');
SQL> commit;
SQL> create table apps.db9i as select rownum a from all_objects where rownum <5;
-- Make DML and DDL changes on DB10G
SQL> insert into apps.dept values (31, 'new dept from db10g', 'db10g');
SQL> commit;
SQL> create table apps.db10g as select rownum a from all_objects where rownum
<5;
-- Test the replication of the heartbeat table (from DB9I to DB10G)
SQL> Select * from apps.heartbeat;
Disconnect users from the source database (DB9I)
After successfully testing the replication, we proceed with this step. The true
downtime begins now: prevent all application users from logging in to the source
database (DB9I). Wait a couple of minutes for all the transactions from DB9I to be
replicated to DB10G. Query the heartbeat table on DB9I and make sure the most
recent record has been applied to the DB10G database. Also check for errors in the
apply process on DB10G by querying the DBA_APPLY_ERROR view.
-- On DB10G
SQL> Select error_message from DBA_APPLY_ERROR;
Reconnect users to the new Oracle 10g database (DB10G)
The downtime ends by diverting all users to the new Oracle 10g database (DB10G).
We now have our upgraded Oracle 10g database functional, achieved with near-zero
downtime.
Removing Streams replication
Once the new database is found to be reliable and up to standard, we may proceed
to remove the replication. The following is the procedure for removing the
replication from Oracle 10g:
-- Removing Streams Components
SQL>EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
-- Removing Supplemental Logging
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA
(PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
-- Removing Streams Administrator
SQL> DROP USER strmadmin CASCADE;
Restrictions/Drawbacks
During replication, ensure that no Oracle-supplied PL/SQL package subprograms are
invoked that modify both user data and dictionary metadata at the same time; the
DBMS_RLS, DBMS_STATS, and DBMS_JOB packages contain such subprograms.
It may still be possible to perform such actions if you ensure that the same
actions are performed on both the source database and the destination database.

Conclusions/Summary

We have extended our success with Oracle Streams by achieving near-zero downtime.
Using Oracle Streams, upgrading an Oracle release becomes a straightforward
exercise. Any crucial database running 24x7 can be upgraded to Oracle 10g with
near-zero downtime using the procedure described above.
Oracle Streams is the only supported upgrade path that fully supports a fallback
even after days of operation: we may fall back at any time after the upgrade,
whereas the traditional upgrade paths support only an immediate fallback strategy.
With the advent of Oracle 10g, Oracle Streams has improved a great deal.