NAVEEN

Thursday, August 2, 2018

Collect Database User Metadata information 


Connect to the database as an administrator and use the script below to extract the metadata (DDL) for database users.

set echo off
SET LONG 2000000
set pagesize 0
set heading off
set linesize 2000
set feed off
set tab off
set trimout on
set trims on
COL RFILE FORMAT A32000

prompt Creating user DDL script: create_users.sql
SET TERMOUT OFF
SET SERVEROUTPUT ON

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
spool create_users.sql

SELECT to_char(DBMS_METADATA.GET_DDL('USER', USERNAME)) FROM dba_users where default_tablespace='USERS' and account_status='OPEN' order by username asc;

declare
    no_grant exception;
    role_grants varchar2(255);
    pragma exception_init( no_grant, -31608 );
begin
    for c in (SELECT * FROM dba_users where default_tablespace='USERS' and account_status='OPEN' order by username asc)
    loop
        begin
           role_grants := to_char(DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', c.USERNAME));
        exception when no_grant then
           role_grants := '-- no role grants for user ' || c.username;
        end;
        dbms_output.put_line(role_grants);
    end loop;
end;
/

-- SELECT to_char(DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', USERNAME)) FROM dba_users where default_tablespace='USERS' and account_status='OPEN' order by username asc;

spool off
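
To use it, save the script under a name of your choice (the file name below is only an example) and run it as a privileged user; it writes the generated CREATE USER and GRANT statements to create_users.sql:

sqlplus / as sysdba @gen_create_users.sql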

Different types of backups in Oracle.

1. Cold backup:


If the database is in NOARCHIVELOG mode, we have to take a cold backup. To take a cold backup, follow these steps:

1. Shut down the database.

SQL> shutdown immediate;

2. Manually copy the data files, control files, and redo log files to the backup destination.

3. Start the database.

SQL> startup;
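
To know exactly which files to copy, the file names can be listed from the dynamic views before the shutdown; a minimal sketch:

SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;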

2. Hot backup:

If database is in archive log mode, then we can take hot backup without any database outage.

1. If you want to record the output of the hot backup session in a separate log file, execute the following command.
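
The command itself is missing from the original post; if the intent was simply to capture the SQL*Plus output of the backup session in its own file, a minimal sketch is:

SQL> spool hot_backup.log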


2. Start hot backup by executing following command.

SQL> alter database begin backup;

3. Manually copy data files, control files and redo log files without shutting down the database.

4. Stop hot backup by executing following command.

SQL> alter database end backup;

5. Switch log file after completing hot backup.

SQL> alter system switch logfile;

Copy the archived redo log files as well; they are needed to recover the database.

To recover the database, issue this command:

SQL> recover database using backup controlfile until cancel;

3. Import – Export Backup:

Data Pump is a feature introduced in Oracle 10g that transfers data between databases and locations at high speed.

It consists of two utilities:

1. Export utility
2. Import utility

1. The Export utility writes database objects to storage media (such as disk files) outside the database.
2. The Import utility loads those objects back into a database from the storage media.

Export operation:
1. Create a directory

C:\>mkdir c:\backup

2. Connect to database and create a directory for backup

SQL> create directory bkup as 'c:\backup';

Directory created.

3. Now move to host mode.
SQL> host

4. Go to the directory C:\oracle\product\10.2.0\db_1\BIN
C:\>cd C:\oracle\product\10.2.0\db_1\BIN

5. Execute export command.
C:\oracle\product\10.2.0\db_1\BIN> expdp sys/test directory=bkup dumpfile=bfull.dmp logfile=bfull.log full=y

For import operation execute the following command.

C:\oracle\product\10.2.0\db_1\BIN> impdp sys/test directory=bkup dumpfile=bfull.dmp logfile=bfull.log full=y


4. RMAN Backup:
Target database: database of which we are going to take backup.
Recovery catalog: is a destination which holds information about the target database.

1. RMAN Without recovery catalog:

Requirements for RMAN backup:

a. The database must be in ARCHIVELOG mode.

A. From the operating system prompt, issue the following command to connect to the target database without a recovery catalog. When connecting to a target or auxiliary database, you must have the SYSDBA privilege.
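
The command is not shown in the original post; a typical way to connect locally as SYSDBA without a catalog is:

C:\> rman target / nocatalog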

B. To take a complete backup of the database, issue the following command.

1. RMAN> backup database;

2. To back up a tablespace, use the following command.

RMAN> backup tablespace tablespace_name;

3. To back up a data file, issue the following command.

RMAN> backup datafile 'C:\oracle\product\10.2.0\oradata\orcl\datafile_name';

By default, these backups are stored in the flash recovery area.

Recovery operation using RMAN :

1. To view the list of backups, use this command.

RMAN> list backupset;

2. Not every backup set may be usable for recovery, so validate it before relying on it. To check whether a backup set is valid, issue the following command.

RMAN> validate backupset backupset_number;

3. Shut down the database and start it in MOUNT mode, then recover it.

RMAN> recover database;
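
A full recovery from an RMAN backup normally also needs a restore step before the recovery and an open afterwards; a minimal sketch of the whole sequence:

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;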




Thursday, May 13, 2010

ASM in Windows server

Preparing Disks for ASM: Windows NT

To simulate implementation of ASM in the Windows NT environment, I first created raw disk partitions on some of my hard drives. Fortunately, I had extra space on three of the four disks in my test server, so I created several primary disk partitions sized at 100MB on each of those three drives using the Windows Disk Partitioning utility (DISKPART.EXE):

C:\> diskpart
DISKPART> select disk 2
Disk 2 selected.
DISKPART> create partition primary size=100
Partition created.
DISKPART> create partition primary size=100
Partition created.
DISKPART> create partition primary size=100
Partition created.
DISKPART> select disk 3
Disk 3 selected.
DISKPART> create partition primary size=100
Partition created.

Once the partitions are completed, I used the ASMTOOLG.EXE utility to "stamp" each partition with an ASM label so that Oracle
can recognize these partitions as candidate disks for the ASM instance. I executed the ASMTOOLG.EXE program from the /bin
directory of the Oracle home path for my Windows NT database. Figure 2.1.2 shows the initial screen that this GUI tool
presented, and Figure 2.1.3 shows the screen that confirmed the creation of the ASM labels. Once the labels were assigned,
I then re-invoked ASMTOOLG to confirm them (see Figure 2.1.4). I will also use ASMTOOLG to remove the labels prior to
removing these partitions once my simulation is completed.

Creating an ASM Instance Using DBCA in Linux
I used this same methodology to create an ASM instance in Linux using DBCA. The only significant difference was the contents of the Disk Selection window (Figure 2.2.9), which instead showed the candidate disks I had previously created on that server for the Linux environment. In either of these cases, the end result is the same: Once I clicked on the Cancel button on the last screen, Oracle dismounted the ASM disk group I had created and then shut down the ASM instance as well.

Creating an ASM Instance without Using DBCA
Of course, I don't have to use DBCA to create an ASM instance. Personally, I prefer to use command scripts to create my database because it is easier to customize the scripts to create other ASM instances in the future, and it also gives me complete control over what Oracle is doing "behind the scenes." In addition, an ASM instance is extremely easy to create because no CREATE DATABASE script is required, just an initialization parameter file like the one shown in Listing 2.2. Besides the usual initialization parameters for trace file directories, there are only a few additional ones required to create an ASM instance:

ASM Initialization Parameters

INSTANCE_TYPE: Defines the instance as an ASM instance. This is the only parameter required to identify an ASM instance; the remainder can be left at their defaults.

DB_UNIQUE_NAME: Defines the service provider name for which this ASM instance manages disk groups. +ASM is the default value and should not be modified unless multiple ASM instances are on the same node.

ASM_POWER_LIMIT: Controls rebalance operation speed. Values range from 1 to 11, with 11 being the fastest; if omitted, this value defaults to 1. The number of rebalance slaves is derived from the parallelization level specified in a manual rebalance command (POWER) or from the ASM_POWER_LIMIT parameter.

ASM_DISKSTRING: An operating-system-dependent value used by ASM to limit the set of disks considered for discovery.

ASM_DISKGROUPS: The list of ASM disk group names to be mounted by an ASM instance at startup, or whenever the ALTER DISKGROUP ALL MOUNT command is used.

LARGE_POOL_SIZE: The large pool size. This must be set to a minimum of 8MB, but Oracle recommends setting it to 16MB.


To create the ASM instance without using DBCA, I first made sure that I had created the directories I specified for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, and USER_DUMP_DEST. I also created a password file for the instance using the ORAPWD utility. (This is important, because when you attempt to connect to the ASM instance from Enterprise Manager, it will expect the instance to have the REMOTE_LOGIN_PASSWORDFILE initialization parameter set to EXCLUSIVE so that the instance can be contacted remotely and that means a password file will be required.)

ORAPWD file=c:\oracle\app\product\10.1.0\db_1\database\PWD+ASM.ora password=oracle
After I had made sure that the Oracle Cluster Synchronization Services (CSS) service was started in my Windows NT environment - it is usually named OracleCSService in the list of Windows Services that I can start - I then simply pointed my MS-DOS command window at the database instance by setting the value for ORACLE_SID to +ASM, started a SQL*Plus session, created an SPFILE from the parameter file, and then started the ASM instance in NOMOUNT mode:

C:\> set ORACLE_SID=+ASM
C:\> sqlplus "sys as sysdba"
Connected to an idle instance.
SQL> create spfile from pfile='c:\init+asm.ora';
File created.
SQL> startup nomount;
ASM instance started

# Standard ASM instance initialization parameters
*.asm_power_limit = 5
*.db_unique_name = '+ASM'
*.instance_type = 'asm'
*.large_pool_size = 16M

# Background, core, and user trace file directories. These will need
# to exist before starting the instance and configured for the server's
# operating system as appropriate
*.background_dump_dest = '/u01/app/oracle/admin/+ASM/bdump'
*.core_dump_dest = '/u01/app/oracle/admin/+ASM/cdump'
*.user_dump_dest = '/u01/app/oracle/admin/+ASM/udump'

# This requires a password file for the ASM instance, needed by
# Enterprise Manager for a connection as SYS:
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE

# ASM candidate disk search string. It's not required, but it does help
# Oracle find the candidate disks more easily
*.asm_diskstring = '\\.\ORCLDISK*'                          # For Windows environments
# *.asm_diskstring = '/u03/asmdisks/*', '/u04/asmdisks/*'   # For Red Hat Linux environments (keep only the line matching the server's OS)

# ASM Mountable Disk Group(s). These can be activated after the ASM
# Instance has been created.
#asm_diskgroups='DGROUP1'

-- Create ASM disk group in NT environment
CREATE DISKGROUP dgroup1
NORMAL REDUNDANCY
FAILGROUP ctlr1
DISK
'\\.\ORCLDISKDATA0'
, '\\.\ORCLDISKDATA2'
FAILGROUP ctlr2
DISK
'\\.\ORCLDISKDATA1'
,'\\.\ORCLDISKDATA3'
;

-- Create ASM disk group in Linux environment
CREATE DISKGROUP dgroup1
NORMAL REDUNDANCY
FAILGROUP ctlr1
DISK
'/u03/asmdisks/disk0'
,'/u04/asmdisks/disk2'
FAILGROUP ctlr2
DISK
'/u03/asmdisks/disk1'
,'/u04/asmdisks/disk3'
;
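
Once a disk group has been created, it can be checked from the ASM instance and registered so that it mounts automatically at the next startup; a minimal sketch, assuming the SPFILE created earlier is in use:

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> alter system set asm_diskgroups = 'DGROUP1' scope=spfile;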


Command-line alternative: create a 250 MB file-backed ASM disk with ASMTOOL (instead of the ASMTOOLG GUI):

ASMTOOL -create c:\asmdisks\asmdisk1 250

Oracle Streams Concept & Configurations

Abstract

An Oracle database upgrade is a process of transforming an existing Oracle database
into a later or the current release of the Oracle Database. This upgrade process
normally requires a significant amount of database down time. This task of upgrade
becomes more difficult for businesses which need to be 24*7 operational and any
outage negatively affects business operations and customer service. In 24*7
organizations such as banks, database availability is so crucial that downtime is
negotiated in minutes.
Minimizing downtime is a challenge every organization faces. Oracle Streams
replication enables us to perform an upgrade with near-zero downtime. This is
accomplished by configuring Oracle Streams on the existing Oracle 9i database which
captures all the changes on the source database and replicates the changes to the
target database (Oracle 10g). The rest of the document presents a step-by-step
approach to setting up Oracle Streams replication and performing an upgrade to an
Oracle 10g database.

Introduction
This document provides a step-by-step approach to upgrade your existing Oracle 9i
database to Oracle 10g database using Oracle Streams attaining a near-zero
downtime. In this document, I create a new Oracle 9i database with an application
schema and upgrade this to Oracle 10g database.
A bi-directional replication is accomplished using Oracle Streams; both Oracle 9i and
Oracle 10g Databases will be in Active-Active mode. Although both databases will be
in Active-Active mode, only one database will be serving the users. A bi-directional
replication is required in order to fallback to Oracle 9i in case we have to.
Setting up Bi-Directional Replication
For the upgrade of Oracle 9i database, I have followed the below listed steps:
1. Create an Oracle 9i Database.
2. Create an application schema with a few objects.
3. Create an Oracle 10g Database.
4. Configure Streams Administrator on both Oracle 9i and Oracle 10g databases.
5. Configure Oracle Streams on both the databases.
6. Export Oracle 9i Application Schema(s) to Oracle 10g database.
7. Start Oracle Streams.
8. Test the data transfer from the source to the target database.
In this document, although a bi-directional replication is configured, I refer to the
Oracle 9i database as the Source Database and Oracle 10g database as the
destination database. I have created both the source and destination databases
on a single machine and following are the details:
                          DB9I (source)                     DB10G (destination)
Host Name                 Test                              Test
Operating System          MS Windows XP                     MS Windows XP
                          Version 5.1 Service Pack 2        Version 5.1 Service Pack 2
Database Version          Oracle9i Enterprise Edition       Oracle 10g Enterprise Edition
                          Release 9.2.0.8.0                 Release 10.2.0.1.0
Oracle Home               C:\ora9i                          C:\oracle\ora10g
Instance Name             DB9I                              DB10G
Database Role             Source / Destination              Source / Destination
Archiving                 ARCHIVELOG mode                   ARCHIVELOG mode
Directory Structure
  Pfile                   C:\db9i\pfile                     C:\db10g\pfile
  Data Files              C:\db9i\data                      C:\db10g\data
  Control Files           C:\db9i\control1                  C:\db10g\control1
                          C:\db9i\control2                  C:\db10g\control2
  Redo Logs               C:\db9i\redo                      C:\db10g\redo
  Archived Redo Logs      C:\db9i\arch                      C:\db9i\arch
Create an Oracle 9i database
I have created a blank Oracle 9i database using the following procedure.
(i) Create an Initialization Parameter File
Apart from the standard initialization parameters, following initialization parameters
are to be configured to effectively use Oracle Streams.
Parameter=Value Description of the Parameter
GLOBAL_NAMES = TRUE Specifies whether a database link is required to have
the same name as the database to which it connects
JOB_QUEUE_PROCESSES = 10 Specifies the number of Jn job queue processes for
each instance.
(ii) Create Password File and Windows Service
A password file is created using the “orapwd” utility and a Windows Service is
created using “oradim” utility from Oracle 9i home.
-- Create a Password File
C:\Ora9i\bin> orapwd file=C:\Ora9i\database\PWDdb9i.ora password=syssys
-- Create a Windows Service
C:\Ora9i\bin> oradim -new -sid db9i -pfile c:\db9i\pfile\initDB9I.ora
(iii) Create the source database
Below are the steps I have used to create the source database in Oracle 9i. Once the
database is created, I also create the listener service using Oracle Net Configuration
Assistant.
C:\Ora9i\bin> sqlplus "/as sysdba"
SQL> startup nomount pfile=c:\db9i\pfile\initDB9I.ora
SQL> Create database DB9I
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
character set AR8MSWIN1256
logfile group 1 ('C:\db9i\redo\redo01.log') size 64M,
group 2 ('C:\db9i\redo\redo02.log') size 64M,
group 3 ('C:\db9i\redo\redo03.log') size 64M
datafile 'C:\db9i\data\system01.dbf' size 257M autoextend on
maxsize unlimited extent management local
undo tablespace undotbs1 datafile 'C:\db9i\data\undotbs01.dbf' size 257M
default temporary tablespace temp tempfile 'C:\db9i\data\temp01.dbf' size 257M;
SQL> @?\rdbms\admin\catalog.sql
SQL> @?\rdbms\admin\catproc.sql
SQL> connect system/manager
SQL> @?/sqlplus/admin/pupbld
Enable the source database (DB9I) in Archivelog Mode
Oracle Streams requires that the source database must be running in ARCHIVELOG
mode when a capture process is configured to capture the changes. Following code
enables the database in an ARCHIVELOG mode.
SQL> conn /as sysdba
SQL> shutdown immediate;
SQL> startup mount pfile=c:\db9i\pfile\initdb9i.ora
SQL> alter database archivelog;
SQL> alter database open;
Create the destination Oracle 10g Database
(i) Create an Initialization Parameter File
Following initialization parameters are to be configured to use Oracle Streams apart
from the standard parameters.
Parameter=Value Description of the Parameter
STREAMS_POOL_SIZE=256M Specifies (in bytes) the size of the Streams pool. The
Streams pool contains buffered queue messages. In
addition, the Streams Pool is used for internal
communication during parallel capture and apply.
GLOBAL_NAMES = TRUE Specifies whether a database link is required to have
the same name as the database to which it
connects.
JOB_QUEUE_PROCESSES = 10 Specifies the number of Jn job queue processes for
each instance.
UNDO_RETENTION=10800 Specifies (in seconds) the amount of committed
undo information to retain in the database. For a
database running one or more capture processes,
make sure this parameter is set to specify an
adequate undo retention period.
(ii) Create Password File and Windows Service
A password file is created using the “orapwd” utility and a Windows Service is
created using the “oradim” utility from the Oracle 10g home.
-- Create a Password File
C:\oracle\ora10g> orapwd file=C:\Oracle\Ora10g\database\PWDdb10g.ora password=syssys
-- Create a Windows Service
C:\oracle\ora10g> oradim -new -sid db10g -pfile c:\db10g\pfile\initDB10G.ora
(iii) Create the destination database
The script below creates the destination database in Oracle 10g. After the
database creation, also create and start the listener.
C:\oracle\ora10g>sqlplus "/as sysdba"
SQL> startup nomount pfile=c:\db10g\pfile\initDB10G.ora
SQL> create database DB10G
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
character set AR8MSWIN1256
logfile group 1 ('c:\db10g\redo\redo01.log') size 128M,
group 2 ('c:\db10g\redo\redo02.log') size 128M,
group 3 ('c:\db10g\redo\redo03.log') size 128M
datafile 'c:\db10g\data\system01.dbf' size 257M autoextend on
maxsize unlimited extent management local
sysaux datafile 'c:\db10g\data\sysaux01.dbf' size 257M
autoextend on maxsize unlimited
undo tablespace undotbs1 datafile 'c:\db10g\data\undotbs01.dbf' size 257M
default temporary tablespace temp tempfile 'c:\db10g\data\temp01.dbf' size 257M;
SQL> @?\rdbms\admin\catalog.sql
SQL> @?\rdbms\admin\catproc.sql
SQL> conn system/manager
SQL> @?/sqlplus/admin/pupbld
Enable the destination database (DB10G) to run in Archivelog Mode
As DB10G will be acting as a Primary database in case we have to fallback to Oracle
9i, we have to configure “DB10G” to run in ARCHIVELOG mode.
SQL> conn /as sysdba
SQL> shutdown immediate;
SQL> startup mount pfile=c:\db10g\pfile\initdb10g.ora
SQL> alter database archivelog;
SQL> alter database open;
Create a sample Application Schema on the Source Database (DB9I).
Using the following code snippet, I create a separate tablespace for the application
along with an application schema named “APPS”. Finally, I grant the required
privileges to the “APPS” schema.
SQL> create tablespace appsts datafile 'c:\db9i\data\appsts01.dbf' size 100M;
SQL> create user apps identified by apps default tablespace appsts temporary
tablespace temp;
SQL> grant connect, resource to apps;
Create objects and insert sample data in the APPS Schema on DB9I
I create two simple tables in the APPS schema and populate them with the test data.
Below is the code:
SQL> connect apps/apps
SQL> create table dept (
deptno number(10) not null,
dname varchar2(20) not null,
db_site varchar2(10) not null);
SQL> alter table dept add constraint dept_pk primary key (deptno);
SQL> insert into dept values (1, 'IT','db9i');
SQL> insert into dept values (2, 'HR', 'db9i');
SQL> insert into dept values (3, 'LOGISTICS','db9i');
SQL> commit;
SQL> create table emp (
empno number(10) not null,
ename varchar2(20) not null,
site varchar2(10) not null,
job varchar2(10),
deptno number(10));
SQL> alter table emp add constraint emp_pk primary key (empno);
SQL> alter table emp add constraint emp_fk foreign key (deptno) references dept
(deptno);
SQL> insert into emp values (101, 'SCOTT','db9i','CLERK',1);
SQL> insert into emp values (102, 'TIGER','db9i','ANALYST',2);
SQL> insert into emp values (103, 'MARK','db9i','ENGINEER',3);
SQL> commit;
Create Streams Administrator on the source database (DB9I)
By this time, I have a source database in Oracle 9i which contains application
schema accessed by end users for their daily processing and an empty Oracle 10g
database. To manage the Streams environment, I will create a new user (called
strmadmin) with appropriate privileges. We cannot use the SYS or SYSTEM user as
Streams Administrator, and the Streams Administrator should not use the SYSTEM
tablespace as its default tablespace.
SQL> conn sys/syssys as sysdba
SQL> create tablespace streamsts datafile 'c:\db9i\data\streamsts01.dbf' size 100M;
SQL> create user strmadmin identified by strmadmin default tablespace streamsts
temporary tablespace temp;
SQL> grant connect, resource, dba, select_catalog_role to strmadmin;
SQL> BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
SQL> BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
SQL> GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
SQL> GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
SQL> GRANT EXECUTE ON DBMS_APPLY_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_CAPTURE_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_PROPAGATION_ADM to strmadmin;
SQL> GRANT EXECUTE ON DBMS_FLASHBACK to strmadmin;
SQL> GRANT SELECT_CATALOG_ROLE to strmadmin;
SQL> GRANT SELECT ANY DICTIONARY to strmadmin;
SQL> GRANT ALL PRIVILEGES TO strmadmin;
Create a Heartbeat Table on the source database (DB9I)
Implementing a heartbeat table provides quick feedback on the health of the
Streams replication. I will create a heartbeat table with date and site columns. I
will also create a database job that inserts a record into the heartbeat table every 10
seconds.
SQL> conn apps/apps
SQL> create table heartbeat (a int, at_time date, db_site varchar2(30));
SQL> create sequence seq9i;
SQL> set serveroutput on
SQL> declare
l_job_no NUMBER;
begin
dbms_job.submit(job => l_job_no,
what => 'Begin insert into heartbeat values (seq9i.nextval, sysdate, ''DB9I''); commit; End;',
next_date => sysdate,
interval => 'sysdate + 10/86400');
commit;
dbms_output.put_line('Job No : ' || to_char(l_job_no));
end;
/
Enable Supplemental Logging on the source database (DB9I)
Supplemental logging writes additional column information into the redo logs for row
identification. This is required by Streams because the Capture process mines the redo
logs for DDL and DML changes.
SQL> conn sys/syssys as sysdba
SQL> alter database add supplemental log data
(PRIMARY KEY, UNIQUE INDEX) columns;
Create Application User Schema on the destination database (DB10G)
I will create tablespaces and schemas in the destination database similar to
those of the source database.
SQL> conn sys/syssys as sysdba
SQL> create tablespace appsts datafile 'c:\db10g\data\appsts01.dbf' size 100M;
SQL> create user apps identified by apps default tablespace appsts temporary
tablespace temp;
SQL> grant connect,resource to apps;
Create Streams Administrator on DB10G
Similar to the source database, I have to configure a Streams Administrator on the
destination site to manage the Streams environment. Following code will create a
user “strmadmin” and grant appropriate privileges to the user.
SQL> conn /as sysdba
SQL> create tablespace streamsts datafile 'c:\db10g\data\streamsts01.dbf' size
100M;
SQL> create user strmadmin identified by strmadmin default tablespace streamsts
temporary tablespace temp;
SQL> grant dba,select_catalog_role to strmadmin;
SQL> exec dbms_streams_auth.grant_admin_privilege('strmadmin',true);
SQL> GRANT ALL PRIVILEGES TO strmadmin;
Create database link on the source database (DB9I)
As the Streams Administrator on the source database, create a private database link
to the destination. The link is required to ship the changes from the source to the
destination. Test the link by selecting the SYSDATE from the remote database.
Modify the %ORACLE_HOME%\network\admin\tnsnames.ora file to include an entry
that points to the DB10G database.
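
The tnsnames.ora entry itself is not shown in the original document; a minimal sketch, assuming the default listener port 1521 on the host "Test" used in this setup:

DB10G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = db10g))
  )

A matching DB9I entry is needed in the destination home's tnsnames.ora for the reverse link created in the next step.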
SQL> conn strmadmin/strmadmin
SQL> create database link db10g connect to strmadmin identified by strmadmin
using 'db10g';
SQL> select sysdate from dual@db10g;
Create database link on the destination database (DB10G)
Similarly, on the destination database create a private database link to the source
database. Modify the %ORACLE_HOME%\network\admin\tnsnames.ora file to
include an entry that points to the DB9I database.
SQL> conn strmadmin/strmadmin
SQL> create database link db9i connect to strmadmin identified by strmadmin using
'db9i';
SQL> select sysdate from dual@db9i;
Create Streams queues on the source database (DB9I)
In a Streams environment, queues stage LCRs that encapsulate captured changes.
These queues are used by capture processes, propagations, and apply processes as
an LCR moves through a stream from a source database to a destination database.
As the Streams Administrator, I create a capture queue, “capture_queue_9i” and
assign “capture_table” as the table name to be used by the Capture process.
Similarly, I create an apply queue named “apply_queue_9i” and associate it with
“apply_table” table on the source database.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_table',
queue_name => 'capture_queue_9i',
queue_user => 'strmadmin');
end;
/
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_table',
queue_name => 'apply_queue_9i',
queue_user => 'strmadmin');
end;
/
Create Streams queues on the destination database (DB10G)
Similarly, I will create the capture and apply queues on the destination database.
The capture queue will be used to capture all the changes on DB10G database and
the apply queue will be used to apply the LCR’s captured on the DB9I database.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_table',
queue_name => 'capture_queue_10g',
queue_user => 'strmadmin');
end;
/
SQL> begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_table',
queue_name => 'apply_queue_10g',
queue_user => 'strmadmin');
end;
/
Configure Capture process on the source database (DB9I)
As a Streams Administrator on the source database, create and configure a capture
process. The Capture process is configured to capture all the DML and DDL changes
on the entire database.
The ADD_GLOBAL_RULES procedure enables us to capture changes at the database
level. By passing a value of ‘TRUE’ to the parameters ‘INCLUDE_DML’, and
‘INCLUDE_DDL’ we inform the Capture process to capture all the DDL and DML
changes.
SQL> conn strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'capture',
streams_name => 'capture_stream_9i',
queue_name => 'capture_queue_9i',
include_dml => true,
include_ddl => true);
end;
/
Configure Apply process on the source database (DB9I)
The Apply process dequeues the LCR events from DB10G and applies them to the
respective application schemas on the source database. Create an Apply process by
logging in as a Streams Administrator as shown below.
This step is optional and is required to support a fallback to Oracle 9i. The below
procedure attaches the “apply_queue_9i” queue with the apply process and instructs
the Apply process to apply all the DDL’s and DML’s from the ‘DB10G’ database.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'apply',
streams_name => 'apply_src_db10g',
queue_name => 'apply_queue_9i',
include_dml => true,
include_ddl => true,
source_database => 'db10g');
end;
/
Configure Propagation process on the Source Database (DB9I)
The configuration of a Propagation process is similar to the Capture and Apply
processes. We add rules to the positive rule set and specify the source queue name
and destination queue name. Changes are propagated from the source queue to the
destination queue.
The source queue being the capture queue (‘capture_queue_9i’) at the source
database (DB9I) and the destination queue being the apply queue
(‘apply_queue_10g’) at the destination (DB10G).
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_propagation_rules (
streams_name => 'db9i_to_db10g',
source_queue_name => 'capture_queue_9i',
destination_queue_name => 'apply_queue_10g@db10g',
include_dml => true,
include_ddl => true,
source_database => 'db9i');
end;
/
Configure a Capture process on the destination database (DB10G)
A Capture process is to be configured at the destination database to capture all the
changes made on the Oracle 10g database. This process is optional and is required in
case we have to fall back to the Oracle 9i database even after transactions have
actually been performed on DB10G. This advantage is not available in any of the
other supported upgrade methods: we can fall back immediately, or even after running
the applications against DB10G for a couple of days.
Similar to the Oracle 9i database, I use the same procedure “ADD_GLOBAL_RULES”
to create a Capture process.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'capture',
streams_name => 'capture_stream_10g',
queue_name => 'capture_queue_10g',
include_dml => true,
include_ddl => true);
end;
/
Set the schema instantiation SCN on the destination database (DB10G)
Setting the global instantiation SCN for DB10G at DB9I ensures that only changes
after the instantiation SCN from DB10G are applied to DB9I.
SQL> connect strmadmin/strmadmin
SQL> declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_global_instantiation_scn@db9i (
source_database_name => 'db10g',
instantiation_scn => v_scn,
recursive => true);
end;
/
Configure Apply process on the destination database (DB10G)
An Apply process is to be created on the destination to apply all the changes made
and captured at the source database. Below script creates an Apply process and
instructs it to apply all the DDL and DML changes captured at the DB9I database.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_rules (
streams_type => 'apply',
streams_name => 'apply_src_db9i',
queue_name => 'apply_queue_10g',
include_dml => true,
include_ddl => true,
source_database => 'db9i');
end;
/
Configure Propagation process on the destination database (DB10G)
The creation of Propagation process is optional and is required to support the fallback
strategy. Changes are propagated from the DB10G to DB9I database.
The source queue being the capture queue (‘capture_queue_10g’) at DB10G and the
destination queue being the apply queue (‘apply_queue_9i’) DB9I.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_streams_adm.add_global_propagation_rules (
streams_name => 'db10g_to_db9i',
source_queue_name => 'capture_queue_10g',
destination_queue_name => 'apply_queue_9i@db9i',
include_dml => true,
include_ddl => true,
source_database => 'db10g');
end;
/
Set the schema instantiation SCN on the source database (DB9I)
This is a very important step. It marks the SCN at the source database (DB9I) from
where all the changes are to be captured, propagated and applied at the DB10G
database.
SQL> connect strmadmin/strmadmin
SQL> declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_global_instantiation_scn@db10g (
source_database_name => 'db9i',
instantiation_scn => v_scn,
recursive => true);
end;
/
Export Application schema from the source database (DB9I)
Now that the instantiation is marked, I have to move the existing objects along with
its data from DB9I to DB10G. The only supported method of moving the existing
data from Oracle 9i database to Oracle 10g database is through the use of EXP/IMP
utilities. Below is the schema-level export of the Oracle 9i database.
exp system/manager consistent=y file=c:\db9i\expimp\apps.dmp owner=(apps)
Import Application schema to the destination database (DB10G)
As I have already prepared a similar environment on the DB10G database (such as
tablespaces and schemas), I will simply import the application schema. The
STREAMS_INSTANTIATION parameter tells import to load any Streams instantiation
metadata present in the export dump file. The import session sets its
Streams tag to ‘00’ to avoid recycling the changes made by the import.
imp system/manager fromuser=apps touser=apps file=c:\db9i\expimp\apps.dmp
STREAMS_INSTANTIATION=Y
Remove Heartbeat Job from db10g
After importing the application schema, I make sure that there exists no database
job on the destination database which inserts a record into the heartbeat table. If
one exists then I remove the job.
SQL> conn system/manager
SQL> select job, what from dba_jobs;
SQL> exec dbms_job.remove(1);
Start Capture and Apply processes on the destination database(DB10G)
Finally, I start the Capture and Apply processes on the destination database using
the below procedure:
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_apply_adm.set_parameter (
apply_name => 'apply_src_db9i',
parameter => 'disable_on_error',
value => 'N');
end;
/
SQL> begin
dbms_apply_adm.start_apply (
apply_name => 'apply_src_db9i');
end;
/
SQL> begin
dbms_capture_adm.start_capture (
capture_name => 'capture_stream_10g');
end;
/
Start Capture and Apply processes on the source database (DB9I)
Similar to starting the Capture and Apply processes on DB10G, I start these
processes on DB9I.
SQL> connect strmadmin/strmadmin
SQL> begin
dbms_apply_adm.set_parameter (
apply_name => 'apply_src_db10g',
parameter => 'disable_on_error',
value => 'N');
end;
/
SQL> begin
dbms_apply_adm.start_apply (
apply_name => 'apply_src_db10g');
end;
/
SQL> begin
dbms_capture_adm.start_capture (
capture_name => 'capture_stream_9i');
end;
/
Test Drive the Streams Replication
At this point, the Streams bi-directional replication is in place and working. Let us
test the replication by making some changes on both the databases and verifying the
replication. Also, check whether the heartbeat table on DB10G is in sync with the
DB9I.
-- Make DML and DDL changes on DB9I
SQL> insert into apps.dept values (21, 'new dept from db9i', 'db9i');
SQL> commit;
SQL> create table apps.db9i as select rownum a from all_objects where rownum <5;
-- Make DML and DDL changes on DB10G
SQL> insert into apps.dept values (31, 'new dept from db10g', 'db10g');
SQL> commit;
SQL> create table apps.db10g as select rownum a from all_objects where rownum
<5;
-- Test the replication of the heartbeat table on DB10G (from DB9I to DB10G)
SQL> Select * from apps.heartbeat;
Disconnect users from the source database (DB9I)
After successfully testing the replication, we proceed with this step. The true
downtime begins now: prevent all application users from logging in to the source
database (DB9I). Wait a couple of minutes for all the transactions from DB9I to be
replicated to DB10G. Query the “heartbeat” table on DB9I and make sure its most recent
record has been applied on the DB10G database. Also check for errors in the Apply
process on DB10G by querying the DBA_APPLY_ERROR view.
-- On DB10G
SQL> Select error_message from DBA_APPLY_ERROR;
Reconnect users to the new Oracle 10g database (DB10G)
The downtime ends by diverting all the users to the new Oracle 10g (DB10g)
database. We have our new upgraded Oracle 10g database functional in near-zero
downtime.
Removing Streams replication
Once the new database is found to be reliable and adhering to the standards, we
may proceed to remove the replication. The following is the procedure for removing
the replication from Oracle 10g:
-- Removing Streams Components
SQL>EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
-- Removing Supplemental Logging
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA
(PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
-- Removing Streams Administrator
SQL> DROP USER strmadmin CASCADE;
Restrictions/Drawbacks
During the process of replication, ensure that no Oracle-supplied PL/SQL package
subprograms are invoked during the upgrade process that modify both user data and
dictionary metadata at the same time. DBMS_RLS, DBMS_STATS, and DBMS_JOB
packages contain subprograms that modify both user data and dictionary metadata
at the same time.
It might be possible to perform such actions on the database if you ensure that the
same actions are performed on the source database and destination database.

Conclusions/Summary

We have expanded our success with Oracle Streams by achieving a near-zero
downtime upgrade. Using Oracle Streams, upgrading an Oracle release becomes
straightforward. Any crucial database that runs 24*7 can be upgraded to Oracle 10g
with near-zero downtime using the procedure described above.
Oracle Streams is the only supported upgrade path that fully supports a fallback
even after days of working on the new release; we may fall back at any time after the
upgrade, whereas the traditional upgrade paths only support an immediate fallback
strategy. With the advent of Oracle 10g, Oracle Streams has improved a great deal.

Wednesday, April 7, 2010

Dataguard Broker Concepts and Configuration

Oracle Data Guard Overview

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the primary database. If the primary database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, recovery, and cluster techniques, as well as the Flashback Database feature to provide a high level of data protection and data availability.


Oracle Data Guard Configuration Overview

A Data Guard configuration consists of one primary database and up to nine standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located if they can communicate with each other. For example, you can have a standby database on the same system as the primary database, along with two standby databases on another system.

The Data Guard broker logically groups these primary and standby databases into a broker configuration that allows the broker to manage and monitor them together as an integrated unit. You can manage them using the broker's graphical user interface (GUI) that is integrated with Oracle Enterprise Manager or using a command-line interface (CLI) called DGMGRL.


Oracle Data Guard Broker Overview

The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following are some of the operations that the broker automates and simplifies:

•Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, log transport services, and log apply services, where any of the databases could be Real Application Clusters (RAC) databases.

•Adding up to 8 additional new or existing (physical or logical, RAC, or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the same configuration.

•Managing an entire Data Guard configuration, including all databases, log transport services, and log apply services, through a client connection to any database in the configuration.

•Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.

•Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.

You can perform all management operations locally or remotely through the broker's easy-to-use interfaces: the Data Guard web pages of Oracle Enterprise Manager, which is the broker's graphical user interface (GUI), and the Data Guard command-line interface (CLI) called DGMGRL.

Benefits of Data Guard Broker

The broker's interfaces improve usability and centralize management and monitoring of the Data Guard configuration. Available as a feature of the Enterprise Edition and Personal Edition of the Oracle database, the broker is also integrated with the Oracle database. These broker attributes result in the following benefits:

Automated switchover and failover operations:

Only one command is required to initiate complex role changes for switchover or failover operations across all databases in the configuration. The broker automates switchover and failover to a specified standby database in the broker configuration.


Data Guard Broker Components

The Oracle Data Guard broker consists of the following components:

•Data Guard GUI

•Data Guard Command-Line Interface (DGMGRL)

•Data Guard Monitor

The Data Guard graphical user interface, tightly integrated with Oracle Enterprise Manager, and the Data Guard command-line interface are the broker client interfaces that help you define and manage a configuration consisting of a collection of primary and standby databases.


The Data Guard monitor is the broker server-side component that is integrated with the Oracle database. Data Guard monitor is composed of the DMON process and broker configuration files that allow you to control the databases of that configuration, modify their behavior at runtime, monitor the overall health of the configuration, and provide notification of other operational characteristics.

The following example lists the available commands:

DGMGRL> HELP

The following commands are available:

add Add a standby database into the broker configuration
connect Connect to an Oracle instance
create Create a broker configuration
disable Disable broker control of a configuration or database
edit Edit a configuration, database or instance
enable Enable broker control of a configuration or database
exit Exit the program
failover Change a standby database to be the primary database
help Display description and syntax for a given command
quit Exit the program
rem Comment to be ignored by DGMGRL
remove Remove a configuration, database or instance
show Display information of a configuration, database or instance
shutdown Shut down a currently running Oracle instance
startup Start an Oracle database instance
switchover Switch roles between the primary database and a standby database

C:\>set oracle_sid=prod

C:\>dgmgrl

DGMGRL for 32-bit Windows: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.


DGMGRL> connect sys/prod

Connected.


Create Configuration:

DGMGRL> create configuration 'DGConfig1' as

> primary database is 'prod'

> connect identifier is prod;

Configuration "DGConfig1" created with primary database "prod"


Add standby database to broker:

DGMGRL> add database 'stand' as connect identifier is stand maintained as physical;
Database "stand" added


DGMGRL> SHOW DATABASE 'prod' 'StatusReport';

Error: ORA-16548: object not enabled



DGMGRL> show configuration;



Configuration

Name: DGConfig1

Enabled: NO

Protection Mode: MaxPerformance

Fast-Start Failover: DISABLED

Databases:

prod - Primary database

stand - Physical standby database



Current status for "DGConfig1":

DISABLED



DGMGRL> enable configuration;

Enabled.

DGMGRL> show configuration;



Configuration

Name: DGConfig1

Enabled: YES

Protection Mode: MaxPerformance

Fast-Start Failover: DISABLED

Databases:

prod - Primary database

stand - Physical standby database



Current status for "DGConfig1":

SUCCESS



DGMGRL> SHOW DATABASE 'prod' 'StatusReport';

STATUS REPORT

INSTANCE_NAME SEVERITY ERROR_TEXT



DGMGRL> SHOW DATABASE 'stand' 'StatusReport';

STATUS REPORT

INSTANCE_NAME SEVERITY ERROR_TEXT



DGMGRL> SHOW DATABASE VERBOSE 'prod' ;



Database

Name: prod

Role: PRIMARY

Enabled: YES

Intended State: ONLINE

Instance(s):

prod



Properties:

InitialConnectIdentifier = 'prod'

LogXptMode = 'ASYNC'

Dependency = ''

DelayMins = '0'

Binding = 'OPTIONAL'

MaxFailure = '0'

MaxConnections = '1'

ReopenSecs = '300'

NetTimeout = '180'

LogShipping = 'ON'

PreferredApplyInstance = ''

ApplyInstanceTimeout = '0'

ApplyParallel = 'AUTO'

StandbyFileManagement = 'AUTO'

ArchiveLagTarget = '0'

LogArchiveMaxProcesses = '30'

LogArchiveMinSucceedDest = '1'

DbFileNameConvert = 'C:\oracle\product\oradata\stand, C:\oracle\product\10.2.0\oradata\prod\prod'

LogFileNameConvert = 'C:\oracle\product\oradata\stand, C:\oracle\product\10.2.0\oradata\prod\prod, C:\oracle\product\oradata\arch_stand, C:\oracle\product\10.2.0\oradata\arch_prod'

FastStartFailoverTarget = ''

StatusReport = '(monitor)'

InconsistentProperties = '(monitor)'

InconsistentLogXptProps = '(monitor)'

SendQEntries = '(monitor)'

LogXptStatus = '(monitor)'

RecvQEntries = '(monitor)'

HostName = 'IGTEWF9000'

SidName = 'prod'

LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=IGTEWF9000)(PORT=1521))'

StandbyArchiveLocation = 'C:\oracle\product\10.2.0\oradata\arch_prod'

AlternateLocation = ''

LogArchiveTrace = '0'

LogArchiveFormat = 'ARC%S_%R.%T.arc'

LatestLog = '(monitor)'

TopWaitEvents = '(monitor)'



Current status for "prod":

SUCCESS



DGMGRL> edit database prod set property 'logxptmode'='SYNC';

Property "logxptmode" updated

DGMGRL> SHOW DATABASE VERBOSE 'prod' ;



Database

Name: prod

Role: PRIMARY

Enabled: YES

Intended State: ONLINE

Instance(s):

prod



Properties:

InitialConnectIdentifier = 'prod'

LogXptMode = 'SYNC'

Dependency = ''

DelayMins = '0'

Binding = 'OPTIONAL'

MaxFailure = '0'

MaxConnections = '1'

ReopenSecs = '300'

NetTimeout = '180'

LogShipping = 'ON'

PreferredApplyInstance = ''

ApplyInstanceTimeout = '0'

ApplyParallel = 'AUTO'

StandbyFileManagement = 'AUTO'

ArchiveLagTarget = '0'

LogArchiveMaxProcesses = '30'

LogArchiveMinSucceedDest = '1'

DbFileNameConvert = 'C:\oracle\product\oradata\stand, C:\oracle\product\10.2.0\oradata\prod\prod'

LogFileNameConvert = 'C:\oracle\product\oradata\stand, C:\oracle\product\10.2.0\oradata\prod\prod, C:\oracle\product\oradata\arch_stand, C:\oracle\product\10.2.0\oradata\arch_prod'

FastStartFailoverTarget = ''

StatusReport = '(monitor)'

InconsistentProperties = '(monitor)'

InconsistentLogXptProps = '(monitor)'

SendQEntries = '(monitor)'

LogXptStatus = '(monitor)'

RecvQEntries = '(monitor)'

HostName = 'IGTEWF9000'

SidName = 'prod'

LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=IGTEWF9000)(PORT=1521))'

StandbyArchiveLocation = 'C:\oracle\product\10.2.0\oradata\arch_prod'

AlternateLocation = ''

LogArchiveTrace = '0'

LogArchiveFormat = 'ARC%S_%R.%T.arc'

LatestLog = '(monitor)'

TopWaitEvents = '(monitor)'



Current status for "prod":

SUCCESS



DGMGRL> SHOW DATABASE VERBOSE 'stand' ;



Database

Name: stand

Role: PHYSICAL STANDBY

Enabled: YES

Intended State: ONLINE

Instance(s):

stand



Properties:

InitialConnectIdentifier = 'stand'

LogXptMode = 'ASYNC'

Dependency = ''

DelayMins = '0'

Binding = 'OPTIONAL'

MaxFailure = '0'

MaxConnections = '1'

ReopenSecs = '300'

NetTimeout = '180'

LogShipping = 'ON'

PreferredApplyInstance = ''

ApplyInstanceTimeout = '0'

ApplyParallel = 'AUTO'

StandbyFileManagement = 'AUTO'

ArchiveLagTarget = '0'

LogArchiveMaxProcesses = '30'

LogArchiveMinSucceedDest = '1'

DbFileNameConvert = 'C:\oracle\product\10.2.0\oradata\prod\prod, C:\oracle\product\oradata\stand'

LogFileNameConvert = 'C:\oracle\product\10.2.0\oradata\prod\prod, C:\oracle\product\oradata\stand, C:\oracle\product\10.2.0\oradata\arch_prod, C:\oracle\product\oradata\arch_stand'

FastStartFailoverTarget = ''

StatusReport = '(monitor)'

InconsistentProperties = '(monitor)'

InconsistentLogXptProps = '(monitor)'

SendQEntries = '(monitor)'

LogXptStatus = '(monitor)'

RecvQEntries = '(monitor)'

HostName = 'IGTEWF1635'

SidName = 'stand'

LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=IGTEWF1635)(PORT=1521))'

StandbyArchiveLocation = 'C:\oracle\product\oradata\arch_stand'

AlternateLocation = ''

LogArchiveTrace = '0'

LogArchiveFormat = 'ARC%S_%R.%T.arc'

LatestLog = '(monitor)'

TopWaitEvents = '(monitor)'



Current status for "stand":

SUCCESS



DGMGRL> edit database stand set property 'logxptmode'='SYNC';

Property "logxptmode" updated



Perform the switchover operation with this command:


DGMGRL> switchover to stand;

Performing switchover NOW, please wait...

Operation requires shutdown of instance "prod" on database "prod"

Shutting down instance "prod"...

ORA-01109: database not open



Database dismounted.

ORACLE instance shut down.

Operation requires shutdown of instance "stand" on database "stand"

Shutting down instance "stand"...

ORA-01109: database not open



Database dismounted.

ORACLE instance shut down.

Operation requires startup of instance "prod" on database "prod"

Starting instance "prod"...

ORACLE instance started.

Database mounted.

Operation requires startup of instance "stand" on database "stand"

Starting instance "stand"...

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is "stand"


Now the standby is acting as the primary database.

DGMGRL>
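
To confirm the new roles after the switchover, and to switch back once testing is complete, the same broker commands can be reused; a brief sketch:

DGMGRL> show configuration;
DGMGRL> switchover to prod;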

Monday, March 22, 2010

Guaranteed restore point - new feature of 10g

2 Document Overview
This document explains the best practice to recover the database with minimum outage and data loss with high availability of the database through Guaranteed Restore Point.

2.1 Scope
The Flashback Recovery includes the following activities

1) Guaranteed Restore Point overview
2) Requirement for Guaranteed Restore Point
3) Creation of Guaranteed Restore Point
4) DML Tasks
5) Restoration

2.2 Objectives
The objectives of a Guaranteed Restore Point are to

• Recover the database with point-in-time recovery
• Minimize the outage of Production and DR
• Avoid having to enable Flashback Database logging

3 Guaranteed Restore Point
3.1 Guaranteed Restore Point overview

Like normal restore points, guaranteed restore points can be used as aliases for SCNs in recovery operations. However, they also provide specific functionality related to the use of the Flashback Database feature.

Creating a guaranteed restore point at a particular SCN enforces the requirement that you can perform a Flashback Database operation to return your database to its state at that SCN, even if flashback logging is not enabled for your database. If flashback logging is enabled, creating a guaranteed restore point enforces the retention of flashback logs required for Flashback Database back to any point in time after the creation of the earliest guaranteed restore point.

A guaranteed restore point can be used to revert a whole database to a known good state days or weeks ago, as long as there is enough disk space in flash recovery area to store the needed logs. As with Flashback Database, even the effects of NOLOGGING operations like direct load inserts can be reversed using guaranteed restore points.

Note:
Some limitations apply to guaranteed restore points: for example, shrinking a datafile or dropping a tablespace can prevent flashing back the affected datafiles to the guaranteed restore point.
3.2 Requirement for Guaranteed Restore Point


• The database must be running in ARCHIVELOG mode.
• A flash recovery area should be configured.
• The database version (COMPATIBLE setting) should be 10.2 or higher.
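
These prerequisites can be checked with a couple of quick queries; a minimal sketch:

SQL> archive log list;
SQL> show parameter db_recovery_file_dest;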

3.3 Creation of Guaranteed Restore Point
• Creation of a table for testing.
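
The original screenshot is not included here; a minimal sketch of such a test table, using the RSTR_EMP name referenced in section 3.4 (the columns and values are illustrative assumptions):

SQL> conn / as sysdba
SQL> create table rstr_emp (empno number, ename varchar2(20));
SQL> insert into rstr_emp values (1, 'SCOTT');
SQL> commit;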

• To create a Guaranteed Restore Point, we need the SYSDBA privilege.

• To create the first guaranteed restore point while Flashback Database is off, we first need to start the database in MOUNT state after a consistent shutdown; otherwise the CREATE RESTORE POINT command fails with an error.



• After starting the database in MOUNT state, we can create the first guaranteed restore point as shown below.
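
The original screenshots are missing; a minimal sketch of the sequence, assuming a restore point named before_upgrade (the name used later in section 3.5):

SQL> shutdown immediate;
SQL> startup mount;
SQL> create restore point before_upgrade guarantee flashback database;
SQL> alter database open;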

• Subsequent guaranteed restore points can be created while the database is open. To see the available guaranteed restore points, you can use the select below.
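
The original screenshot of the query is not included; a minimal sketch against the V$RESTORE_POINT view:

SQL> select name, scn, time, guarantee_flashback_database from v$restore_point;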

3.4 DML operations done for the test case

Some DML actions are performed on the created table RSTR_EMP, and restore points are created for them as shown below for the test.
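
The screenshots are missing; a minimal sketch of the kind of DML and restore-point creation intended (the inserted values and the restore point name after_insert are illustrative assumptions):

SQL> insert into rstr_emp values (2, 'TIGER');
SQL> commit;
SQL> create restore point after_insert guarantee flashback database;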

• Checking the number of restore points created.
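
A simple check, again using V$RESTORE_POINT:

SQL> select count(*) from v$restore_point;
SQL> select name, time from v$restore_point;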

3.5 Restoration

• Set the database in MOUNT stage.
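
The screenshot is missing; this step corresponds to the usual sequence:

SQL> shutdown immediate;
SQL> startup mount;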



• Restore the database to the before_upgrade restore point.
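
A minimal sketch of the flashback operation to the before_upgrade guaranteed restore point:

SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;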



• Checking for success of the restoration.
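
One simple check, assuming the RSTR_EMP test table from section 3.3: its contents should match the state at the time the restore point was created.

SQL> select * from rstr_emp;
SQL> select name from v$restore_point;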

Database upgrade from 10g to 11g - Proposal document

2. Requirement Analysis:

DETAILED SYSTEM STUDY ON THE EXISTING ENVIRONMENT


Existing System Environment
Operating System: Windows Server 2003 Enterprise Edition - 64 bit
Release: Service Pack 2
CPU: Intel Xeon® CPU X5365
Processor Speed: 3 GHz
RAM Size: 11.7 GB
PF Size Min: 30 GB
PF Size Max: 42 GB
Computer Name: MELEXHTRAX01
IP Address: 172.23.201.84
Oracle Software: Oracle 10g Enterprise Edition x64 Bit


Drive Details
Drive Name               Total Size (GB)    Free Size (GB)
Local Disk (C Drive)     39.9               14.0
Swap Volume (D Drive)    39.9               9.22
Data Volume (E Drive)    249                82.9


3. FEASIBILITY STUDY ON UPGRADE:
A feasibility study’s main goal is to assess the economic viability of the proposed approach. The outcome of the feasibility study will indicate whether or not to proceed with the proposed venture.
From technical view, following points are to be considered.
1. Cost involved for DB Migration/Upgrade
2. Time and effort involved
3. Technical Benefits of DB Migration/Upgrade

Why Upgrade to Oracle Database 11g?
The innovation continues with Oracle Database 11g Release 2 to lower IT costs and deliver a higher quality of service by:
• Consolidating business applications onto fast, reliable, and scalable low-cost servers and storage
• Maximizing availability and eliminating idle data center redundancy
• Compressing data onto low-cost storage partitions for faster performance
• Securely protecting information and enabling compliance
• Doubling DBA productivity and reducing the risk of change
The key value propositions of Oracle Database 11g and the faster, smoother upgrade process make it compelling for businesses to consider upgrading older Oracle databases. Upgrading provides businesses with access to new functionality and ensures that an Oracle database is fully supported by Oracle.



Oracle Database 11g Release 1 (11.1) new features
Overall, Oracle Database 11g makes database infrastructure far more efficient, resilient, and manageable. For example, very compelling new features in the realm of partitioning ease the
design and management of partitioned tables immensely.

1) SQL Performance Analyzer (SPA)
2) Database Replay
3) Automatic Memory Tuning
4) Case sensitive password
5) Virtual columns and indexes
6) Interval Partition and System Partition
7) Feature based patching
8) RMAN Undo bypass
9) New default Audit setting
10) Faster DML triggers
11) Parallel backup and restore for very large files
12) Improved block media recovery performance
13) Enhanced Index rebuild online: Online index build with NO pause to DML.
14) No recompilation of dependent objects
15) Fast incremental backups on physical standby database

Upgrade Methods

1. Database Upgrade Assistant
The Database Upgrade Assistant (DBUA) interactively steps you through the upgrade process and configures the database for the new Oracle Database 11g release. The DBUA automates the upgrade process by performing all of the tasks normally performed manually. The DBUA makes appropriate recommendations for configuration options such as tablespaces and redo logs. The DBUA provides support for Real Application Clusters (RAC) and Automatic Storage Management (ASM).
2. Manual Upgrade
A manual upgrade consists of running SQL scripts and utilities from a command line to upgrade a database to the new Oracle Database 11g release. While a manual upgrade gives you finer control over the upgrade process, it is more susceptible to error if any of the upgrade or pre-upgrade steps are either not followed or are performed out of order.
3. Export/Import Method
Unlike the DBUA or a manual upgrade, the Export/Import utilities physically copy data from the current database to a new database. We can use either the Oracle Data Pump Export and Import utilities (available as of Oracle Database 10g) or the original Export and Import utilities to perform a full or partial export from the database, followed by a full or partial import into a new Oracle Database 11g database. Export/Import can copy a subset of the data in a database, leaving the source database unchanged.




4. PROPOSED APPROACH
IMPLEMENTATION APPROACH (Manual/DBUA Upgrade)
A manual upgrade consists of running SQL scripts and utilities from a command line to upgrade a database to Oracle Database 11g Release 1 (11.1). While a manual upgrade gives you finer control over the upgrade process, it is more susceptible to error if any of the upgrade or pre-upgrade steps are either not followed or are performed out of order.



*RAT – REAL APPLICATION TESTING
Before the Upgrade
When manually upgrading a database, perform the following pre-upgrade steps:
• Analyze the database using the Pre-Upgrade Information Tool. The Upgrade Information Tool is a SQL script that ships with Oracle Database 11g Release 1 (11.1), and must be run in the environment of the database being upgraded.
The Upgrade Information Tool displays warnings about possible upgrade issues with the database. It also displays information about required initialization parameters for Oracle Database 11g Release 1 (11.1).
• Prepare the new Oracle home.
• Perform a backup of the database.
Depending on the release of the database being upgraded, you might be required to perform additional pre-upgrade steps (adjust the parameter file for the upgrade, remove obsolete initialization parameters and adjust initialization parameters that might cause upgrade problems).
After the Upgrade
Review the upgrade spool log file and use the Post-Upgrade Status Tool. The Post-Upgrade Status Tool is a SQL script that ships with Oracle Database 11g Release 1 (11.1), and should be run in the environment of the new release.
APPROACH FLOWCHART




5. IMPLEMENTATION PLAN

Prerequisites for Implementation

Hardware requirements for the upgrade

Hardware requirement for the upgrade (recommended)
CPU: Intel Extended Memory (EM64T) or AMD64
Processor speed: 3 GHz
RAM Size: 15 GB
PF Size Min: 30 GB
PF Size Max: 45 GB

Software requirements for the upgrade

Software requirement (recommended)
Operating System: Windows Server 2008 Enterprise Edition - 64 bit
Oracle: Oracle server software 10.2.0.2.0
Oracle Patch: 10.2.0.4.0 patch set
Oracle: Oracle server software 11.1.0.7.0


Drive Details
Local Disk (C Drive): 40 GB total, 40 GB required
Swap Volume (D Drive): 40 GB total, 40 GB required
Data Volume (E Drive): 250 GB total, 250 GB required

Network Requirements

The computer on which you want to install Oracle Database should be connected to the network.





Tools / Resources Required
This section deals with the tools and the resources that are required to do the migration or upgrade of the Database that has been identified. This may contain the following:

1. The way in which the original data can be transferred to the new server, e.g. an export dump or a cold backup of the database.
2. Tools that need to be used, e.g. Oracle client, Import utility, DBUA, etc.
3. Two new servers are required (Primary and Standby).
4. Servers should be connected to network.
5. IGATE IMS Team should be able to login to new servers through CITRIX.
6. Administrator rights should be provided for IGATE IMS team login ids.
7. Oracle server software and patch set specified in the above section should be downloaded, and the location of the software should be shared.
Pre-upgrade and post-upgrade activities

This includes following activities:
1. Timing tests
2. Data dictionary growth observations after the upgrade.
3. Database resource usage observations, such as undo and temporary segment usage

Collecting this information will help to compare the current database (10.2.0.4.0) with the new Oracle Database (11.1.0.7.0).

Installation steps

1 What is the database name?

SQL> Select name from v$database;

2 What version is running?

SQL> Select * from v$version;

3 What options are installed?

SQL>Select VERSION, COMP_NAME, STATUS from dba_registry;


4 What is the count of objects in the database?

SQL> Select count(*) from dba_objects;

5 Which objects are invalid in the database?

SQL> Select object_name, object_type from dba_objects where status='INVALID';

6 Install oracle 10.2.0.2.0 server software on the new test server.

7 Download 10.2.0.4.0 patch and upgrade Oracle to 10.2.0.4.0

8 Copy parameter file (init file), control files, Redo log files and Data files to the corresponding location of new server.

9 Make the required modifications in the init file (change the locations of the control files, datafiles, directories, etc.).

10 Create new TRAXMP instance and password file in new server.

C:\>ORADIM -NEW -SID TRAXMP
C:\>ORAPWD FILE=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\PWDTRAXP.ORA PASSWORD=******** ENTRIES=100

11 Start up the Oracle instance and mount the database using the init file.

SQL> STARTUP NOMOUNT PFILE='C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\INITTRAXP.ORA';

SQL>ALTER DATABASE MOUNT;

12 In mount mode, change the locations of the datafiles and redo log files in the control file.

SQL> ALTER DATABASE RENAME FILE 'OLD_LOCATION\OLD_FILE_NAME' TO
'NEW_LOCATION\NEW_FILE_NAME';

13 Open the database.

SQL>ALTER DATABASE OPEN;

14 Perform a health check (confirm that the DB is running 10.2.0.4.0).
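
For example, re-run the version check used earlier:

SQL> SELECT * FROM V$VERSION;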

15 Install Oracle 11.1.0.7.0 Server software in new server.

16 Run the pre-upgrade information script (utlu111i.sql, present in the 11g ORACLE_HOME\RDBMS\ADMIN directory) against the database running under 10.2.0.4.0.

SQL>SPOOL C:\PREUPGRADE.TXT
SQL>@utlu111i.sql;
SQL>SPOOL OFF;

17 Make required modifications to database as recommended by pre-upgrade script.

18 Change the ORACLE_HOME path to point to 11.1.0.7.0.

C:\>SET ORACLE_HOME=C:\ORACLE\PRODUCT\11.1.0\DB_1

19 Create the directory structure for 11.1.0.7.0 and copy the data files, log files and control files.

Create the adump, bdump, cdump, pfile and dump directories under C:\ORACLE\PRODUCT\11.1.0\admin

20 Delete old instance and recreate new instance.

C:\>ORADIM -DELETE -SID SID_NAME   (to delete the old instance)
C:\>ORADIM -NEW -SID SID_NAME

21 Edit pfile (compatibility and directory locations).

Set COMPATIBLE = 11.1.0.7.0

22 Startup nomount database using pfile.

SQL> STARTUP NOMOUNT PFILE='C:\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\INITTRAXP.ORA';

23 Keep database in mount mode.

SQL>ALTER DATABASE MOUNT;

24 Change the locations of the datafiles and redo log files in the control files.

SQL> ALTER DATABASE RENAME FILE 'OLD_LOCATION\OLD_FILE_NAME' TO
'NEW_LOCATION\NEW_FILE_NAME';

25 Shutdown database.

SQL>SHUTDOWN IMMEDIATE;

26 Startup database in upgrade mode.

SQL>STARTUP UPGRADE;

27 Run the catupgrd.sql script, which is in the ORACLE_HOME\RDBMS\ADMIN directory, to perform the upgrade.

SQL>@C:\ORACLE\PRODUCT\11.1.0\DB_1\RDBMS\ADMIN\CATUPGRD.SQL;

28 Shutdown database.

SQL>SHUTDOWN IMMEDIATE;

29 Start up the database normally.

C:\>SET ORACLE_SID=SID_NAME
SQL>STARTUP;

30 Run the utlrp.sql script to recompile invalid objects (the script is in the ORACLE_HOME\RDBMS\ADMIN directory).

SQL>@C:\ORACLE\PRODUCT\11.1.0\DB_1\RDBMS\ADMIN\UTLRP.SQL;

31 If any invalid objects observed, recompile such objects.

SQL>SELECT OBJECT_NAME, OBJECT_TYPE FROM DBA_OBJECTS WHERE STATUS='INVALID';

SQL> ALTER OBJECT_TYPE OBJECT_NAME COMPILE;
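For example (the object name here is only an illustration):

SQL> ALTER PROCEDURE SCOTT.MY_PROC COMPILE;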

32 Perform a complete health check of the database.

Check the database hit ratios.
Check tablespace usage details (a sample query is shown below).
Check application-level compatibility with 11g.
Check database performance metrics.
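
A sample tablespace free-space check, using the standard DBA_FREE_SPACE view:

SQL> SELECT TABLESPACE_NAME, ROUND(SUM(BYTES)/1024/1024) FREE_MB
     FROM DBA_FREE_SPACE
     GROUP BY TABLESPACE_NAME;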

Once the actual migration or upgrade is completed, the following things need to be done:

1. Compare the number of objects in the old database and the new database (in case a migration has been done to a different server).
2. Check for objects in the INVALID state and compile them.
3. Check for disabled constraints and enable them. Also, check and enable the triggers.
4. Reorganize the tables or indexes if required.
5. Analyze the schemas (gather statistics) in order to get good performance.
6. Check the application access and the performance of the database.


6. EFFORT ESTIMATION


S.No. Activity Resource/Effort Requirement Responsibility Comments/Status
1 Taking a cold Backup of the old Database
2 Setting up and installation of the new Oracle Server
3 Transferring of the original Production Data
4 Creating an Empty Database(optional)
5 Imports/Upgrade Activity
6 Checking of Invalid Objects and compiling those objects
7 User testing of the application

7. Rollback Strategy
Any migration or upgrade needs a rollback strategy in case the new system does not fulfil the expectations of the users or the project team. A rollback strategy therefore needs to be identified and kept in place. Should there be an upgrade failure, it will be necessary to revert to the previous version. It is more difficult and time consuming to roll back from an in-place upgrade; a side-by-side upgrade is simpler to roll back from.

The rollback strategy should ensure that either the database can be restored to its old state on the same server, or a correct copy of the database taken before the migration is available, so that the old system can be made available to the users in a short span of time, reducing the outage.

Procedure:

1. Take a Complete backup of the Oracle Home, Oracle Inventory and Database before the activity.
2. If the upgrade activity fails, it should be rolled back to 10.2.0.4.0 using the backup.

8. Risks and Risk Mitigation Process

1. Risk is low; if the upgrade fails or application compatibility fails, it can be rolled back to 10.2.0.4.0 using the backup.

No activity or migration is risk free as there is always a risk associated with any kind of migration that is being done. The following are some of the risks that might be faced during the migration:

1. Getting the correct time window or downtime of the Production databases for doing the migration.
2. Identifying the Proper tools or methodology to do the migration
3. Availability of the Proper disk space for migrating the database
4. Suitable hardware identification and sizing
5. Risks involved in getting the original data because of network connectivity.

9. Testing
Application tests and performance tests should be planned and implemented to confirm in advance that no major compatibility or performance problems will arise in the new system. This reduces the risk of migration failure and other risks following migration.

Minimum tests

All or parts of the applications are transferred from the current system to the new system (or a test environment). Tests are performed without enabling new functions. Minimum tests cannot discover all potential problems, but at least problems related to application launching and invocation can be checked.

Functional tests

Functional tests include testing of all database, network, and application components. These are conducted after the system is upgraded. Functional tests confirm that existing system components work as they did before the upgrade and that new functionality works properly.

Integration tests

Integration tests check interactions among system components. Specifically, these tests assess the following points:
• Confirm that applications (Pro*C/C++, JDBC, ODBC) and other software cause no problems on the new software.
• Analyze whether changes in data types and data dictionary contents affect front-end applications.
• Check connectivity when applications are connected through SQL*Net, Net8, or Oracle Net services.

Performance tests

To check performance, plan and carry out the following performance tests. Before performing the tests, you must obtain performance information on the current system based on the performance requirements of the new system. This will help determine whether the performance requirements of the new system are adequate and help identify factors leading to performance degradation, based on comparisons.

Confirming concurrent performance

To confirm concurrent performance, multiple factors comprising actual business processes are used to generate loads similar to those in the actual system, and the performance of the overall system is analyzed. If the system fails to meet the required unit performance, confirming concurrent performance will not help improve performance. Additionally, the following problems, which do not appear during confirmation of unit performance, can emerge during confirmation of concurrent performance.

• High CPU usage, CPU wait
• High disk usage, disk wait
• Exclusive control (lock) for files and databases
• Process wait due to inter-process communication

As described above, migration to a new system requires planning and confirmation of compatibility of the current system (applicability, extent of results of any incompatibilities, countermeasures), application tests, and performance tests. This increases required man-hours, entails higher costs, and results in system failures and other problems in various work processes. In our verification testing, we attempted to verify whether use of application tests, Database Replay to confirm concurrent performance, and SQL Performance Analyzer to confirm unit performance would reduce risks and facilitate the process of migration.

Once the migration or the upgrade of the database is completed, we need to perform rigorous testing on the new environment that has been set up. This involves active participation from the end users as well. During the testing phase, it is important that we check the following things:

1. Performance of the new database
2. Validity of the objects in the database
3. Correctness and matching of the data in the tables, number of objects in the old and the new database
4. Access and the working of the application when compared to the old environment.