Friday, December 11, 2015

Oracle HASH Joins

HASH joins are the usual choice of the Oracle optimizer when the memory is set up to accommodate them. In a HASH join, Oracle accesses one table (usually the smaller of the joined results) and builds a hash table on the join key in memory. It then scans the other table in the join (usually the larger one) and probes the hash table for matching rows. Oracle uses a HASH join efficiently only if the parameter PGA_AGGREGATE_TARGET is set to a large enough value. If MEMORY_TARGET is used, the PGA_AGGREGATE_TARGET is included in the MEMORY_TARGET, but you may still want to set a minimum.
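As a quick check (a hedged sketch; the 2G value is purely illustrative, and scope=both assumes an spfile is in use), the relevant memory parameters can be inspected and, if necessary, raised from SQL*Plus:

-- show the current memory settings
show parameter pga_aggregate_target
show parameter memory_target

-- set a minimum PGA target even when MEMORY_TARGET is in use (value is illustrative)
alter system set pga_aggregate_target = 2G scope=both;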

If you set the SGA_TARGET, you must set the PGA_AGGREGATE_TARGET as well, because the SGA_TARGET does not include the PGA (unless you use MEMORY_TARGET as just described). The HASH join is similar to a NESTED LOOPS join in the sense that a nested loop still occurs: Oracle first builds the hash table to facilitate the operation and then loops through the other row source, probing the hash table for each row. When using an ORDERED hint, the first table in the FROM clause is the table used to build the hash table.
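As a hedged illustration (the dept and emp tables and the deptno join column are only examples), the ORDERED and USE_HASH hints can be combined so that the smaller table listed first in the FROM clause becomes the in-memory hash table:

SELECT /*+ ORDERED USE_HASH(e) */ d.dname, e.ename
  FROM dept d, emp e           -- dept is listed first, so it is used to build the hash table
 WHERE d.deptno = e.deptno;    -- equijoin condition: HASH joins work only on equality

With the ORDERED hint in place, swapping the order of dept and emp in the FROM clause would swap which table is hashed.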
HASH joins can be effective when the lack of a useful index renders NESTED LOOPS joins inefficient. The HASH join might be faster than a SORT-MERGE join, in this case, because only one row source needs to be sorted, and it could possibly be faster than a NESTED LOOPS join because probing a hash table in memory can be faster than traversing a b-tree index.
As with SORT-MERGE joins and CLUSTER joins, HASH joins work only on equijoins. As with SORT-MERGE joins, HASH joins use memory resources and can drive up I/O in the temporary tablespace if the sort memory is not sufficient (which can cause this join method to be extremely slow).
Finally, HASH joins are available only when cost-based optimization is used (which should be 100 percent of the time for your application running on Oracle 11g).
Table 1 illustrates the method of executing the query shown in the listing that follows when a HASH join is used.

Table 1. HASH join


Tuesday, November 17, 2015

Configure EMC Storage Plugin for OEM12c

You will have to download the EMC Storage plugin and install it in your OEM.
This is a one-time activity. Once it is done, you can proceed as below to add the target VNX array to your Grid monitoring.



1. From the Setup menu, select Add Target > Add Targets Manually.
2. In the Add Targets Manually page, select Add Non-Host Targets by Specifying Target Monitoring Properties.
3. In the Target Type field, select EMC VNX Block Storage.



4. To the right of the Monitoring Agent field, click the magnifying glass icon, and then select the required agent or agents from the list.


5. Click Add Manually.

6. In the Add EMC VNX Block Storage page, enter the required properties for the target.
• Enter any string for the Target Name, which is used to identify the target being created.
• Type Yes to accept the End User License Agreement that is packaged with the plug-in.
• Enter the path to the directory where the inq utility is located (typically, /opt). Use quotation marks if the path (such as on Windows) contains spaces. The inq utility is included in the plug-in download package.
• If you use PowerBroker, enter the PB role that has privileges to run the command listed earlier.
• Type 0 for normal logging. EMC Support may ask you to change this value to 1 to help troubleshoot any issues you may have.
• Enter the path to the Solutions Enabler bin directory. Be sure to use double quotation marks around this path if it contains spaces.
• Enter the EMC Array ID, which uniquely identifies the VNX storage system.




7. Click on Test Connection.



8. Click OK and your target will be saved. You can repeat the same steps for all the arrays in your configuration.

Monday, November 16, 2015

Configure RAC cluster in OEM

Here are the step-by-step instructions to add a RAC database to 12c Grid. There is a catch in adding clustered databases: when you try to discover (and add) a cluster database the way you do a standalone database, 12c Grid does not show you an error, but the process does not complete. This is because a required step is missing.
Let's start.
Deploy agents on all the nodes.
After the agents are deployed to all nodes, we will add "Oracle Cluster and High Availability Service".

Select Add Targets --> Add Targets manually



Select “Add Targets Using Guided Process” and then select “Oracle Cluster and High Availability Service”.

Select one of the nodes that is part of your cluster. There is no need to select all hosts.

Click on Discover Target

If you get an error - "No Cluster Was Found On The Host"

Workaround I: Execute the following commands on all cluster nodes, then try to discover the CRS target from the EM console:
  1. <AGENT_HOME>/agent_inst/bin/emctl stop agent
  2. export CRS_HOME=<CRS_HOME>
  3. <AGENT_HOME>/agent_inst/bin/emctl start agent
Workaround II: Discover the cluster target manually by specifying target properties
  1. Setup > Add Target > Add Targets Manually
  2. Add Targets Declaratively by Specifying Target Monitoring Properties
  3. Target Type: Cluster
  4. Monitoring Agent: choose one of the nodes of the cluster
  5. Click on "Add Manually"
  6. Enter the following details:

        Target Name: <enter the name of the cluster target*>
        Oracle Home: <oracle home of the cluster*>
        Scan Name: <enter scan name*>
        Scan Port: <enter scan port*>
        ONS Port: <enter ONS port*>
  7. In the section "Cluster Host and High Availability Services Targets", the node that was chosen will be shown. Click on "Add" and choose the other nodes of the cluster.
  8. Click on "Save".



It’ll discover the other nodes, and show you the findings so you may edit them if required. Click “Next”, review and then save the cluster.

 Now we have the cluster information, so we can add our clustered (RAC) database.
Go to “Add Targets Manually” page again, select “Add Targets Using Guided Process” and select  “Oracle Database, Listener and Automatic Storage Manager”.

Now you need to select the “cluster” target which was set up earlier. Click “Next”.

It will discover the database, instances, and ASM. Then you need to select the checkboxes of the discovered targets, enter the required passwords, and click “Next”.

You’ll be able to review the result and add the database to EM12c by clicking “Save”. So we have successfully added our RAC database(s) to EM12c.


How to Configure EMC VNX storage Plugin - Configure EMC Plugin for OEM

Tuesday, September 15, 2015

Archive log sync on Standby Database.

Situation
Your company’s primary database ships archive logs to a physical standby database using Oracle Data Guard, where they are then applied. A nightly RMAN backup is performed and then all applied archive logs are purged from the archive log directory.

Problem

The RMAN backup is failing because the flash recovery area is full, and the RMAN job is not purging the archive logs because they still have not been applied to the standby database.  Upon investigation you find that someone has brought the standby database online and opened it; because of this, the standby will not apply the archive logs that have been shipped to it.  You have tried shutting down the standby database, bringing it up in a mounted state, and starting managed recovery.  However, some of the archive logs needed by the standby are missing.

Solution

This scenario could have disastrous consequences if it is not taken care of immediately.  There is a simple solution to resolve this problem; however, it can be a little scary for a less experienced Oracle DBA.  What needs to happen to remedy the problem is to re-sync the standby database with the primary database and get the archive logs applying to the standby again.  The simple solution is described below.

Tips

Connect to the primary database and perform a full RMAN backup including the archive logs and the control files.  Since the flash recovery area is full, you will need to pass in a different directory to write to.  Here is an example of the RMAN backup script.
Set the Oracle environment to the production database and start RMAN:

$ rman
connect target /
run {
  change archivelog all crosscheck;
  allocate channel ch1 type disk;
  backup incremental level 0 database
    format 'C:\oracle\orabase\backupfile\bk_inc0_%s_%p'
    setsize=8250000
    include current controlfile for standby;
  sql "alter system archive log current";
  backup archivelog all format 'Your Location\al_%s_%p';
  release channel ch1;
}

Once this is complete, copy the files from the location of your backup to the same location on the standby server.  Once the files are copied, you will need to connect to the standby database and start RMAN to re-create the standby.  Below is an example of the script you will need to use; modify it to fit your environment.



$ rman
connect target sys/password@primary
connect auxiliary /
run {
  allocate auxiliary channel ch1 type disk;
  duplicate target database for standby dorecover nofilenamecheck;
  release channel ch1;
}

The restore of the standby database is now complete.  RMAN has started the standby database in mounted standby mode, which is correct for us to continue. For reference, the commands to start a standby database manually are:
SQL> startup nomount
SQL> alter database mount standby database;

Once the database is mounted, connect to the standby database and put it in managed recovery mode so the archive logs from the primary will apply automatically after they have been shipped over.

The command to do this is:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Once you have issued this command, give the process a few minutes and then check to see if the archive logs are being shipped and applied to the standby.  An easy way to tell is to query the V$ARCHIVE_DEST_STATUS view.

The command to do this is:
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#,
APPLIED_SEQ# FROM V$ARCHIVE_DEST_STATUS;

This will give you a result similar to the example below:

ARCHIVED_THREAD# ARCHIVED_SEQ# APPLIED_THREAD# APPLIED_SEQ#
---------------- ------------- --------------- ------------
               1         14596               0            0
               1         14596               0            0
               1         14595               1        14595
               0             0               0            0
               0             0               0            0
               0             0               0            0
               0             0               0            0
               0             0               0            0
               0             0               0            0
               0             0               0            0

10 rows selected.

As we can see from the query output in this example, the most recently archived log on the primary is sequence 14596 and the most recently applied log on the standby is sequence 14595.

Once the logs are correctly applying from primary to standby, go into RMAN on the primary database server, clean up the applied archive logs, and run another full backup.  Once that is complete, Oracle will free up the flash recovery area and all RMAN errors will be resolved; you now have a syncing standby database and good backups in case of an outage or disaster.
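A hedged sketch of that cleanup, written as a small RMAN command file run on the primary (it reuses the same 'Your Location' placeholder as the backup script above; adjust the format strings and retention to your own environment):

connect target /
run {
  # crosscheck so logs already removed from disk are not counted again
  change archivelog all crosscheck;
  allocate channel ch1 type disk;
  # full backup plus archive logs; DELETE INPUT removes each archive log once it is backed up
  backup database format 'Your Location\bk_full_%s_%p'
    plus archivelog format 'Your Location\al_%s_%p' delete input;
  release channel ch1;
}

Save it as, say, cleanup.rman (the file name is illustrative) and run it with rman @cleanup.rman.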

Manual steps to upgrade database from 9i to 10g

Step 1:

Log in to the system as the owner of the new 10gR2 ORACLE_HOME and copy the following file from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the /tmp directory on your system:

ORACLE_HOME/rdbms/admin/utlu102i.sql
Make a note of the new location of this file.
Step 2:

Change to the temporary directory that you copied files to in Step 1.

Start SQL*Plus and connect to the 9i database instance as a user with SYSDBA privileges. Then run and spool the utlu102i.sql file. 
sqlplus '/as sysdba'

SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off 
Then check the spool file and examine the output of the Upgrade Information Tool. The sections that follow describe the output of the Upgrade Information Tool (utlu102i.sql).
Database:

This section displays global database information about the current database such as the database name, release number, and compatibility level. A warning is displayed if the COMPATIBLE initialization parameter needs to be adjusted before the database is upgraded. 
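A hedged example of checking it from SQL*Plus before the upgrade (9.2.0 is the minimum value generally required for a direct upgrade to 10gR2; follow the tool's actual recommendation, and note that changing COMPATIBLE requires a restart):

SQL> show parameter compatible
SQL> alter system set compatible = '9.2.0' scope=spfile;

If the database still uses a text init.ora rather than an spfile, edit the COMPATIBLE entry in that file instead.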

Logfiles:

This section displays a list of redo log files in the current database whose size is less than 4 MB. For each log file, the file name, group number, and recommended size is displayed. New files of at least 4 MB (preferably 10 MB) need to be created in the current database. Any redo log files less than 4 MB must be dropped before the database is upgraded.
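A hedged query to spot undersized log files (any group smaller than 4 MB must be dropped and re-created with a larger size before the upgrade):

-- redo log groups and their sizes in MB
select group#, thread#, bytes/1024/1024 as size_mb, status
from v$log
order by group#;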

Tablespaces:

This section displays a list of tablespaces in the current database. For each tablespace, the tablespace name and minimum required size is displayed. In addition, a message is displayed if the tablespace is adequate for the upgrade. If the tablespace does not have enough free space, then space must be added to the tablespace in the current database. Tablespace adjustments need to be made before the database is upgraded.
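A hedged companion query that shows the current free space per tablespace, which can then be compared with the tool's minimum required sizes:

-- free space per tablespace, in MB
select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
from dba_free_space
group by tablespace_name
order by tablespace_name;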

Update Parameters: 

This section displays a list of initialization parameters in the parameter file of the current database that must be adjusted before the database is upgraded. The adjustments need to be made to the parameter file after it is copied to the new Oracle Database 10g release.

Deprecated Parameters: 

This section displays a list of initialization parameters in the parameter file of the current database that are deprecated in the new Oracle Database 10g release. 

Obsolete Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that are obsolete in the new Oracle Database 10g release. Obsolete initialization parameters need to be removed from the parameter file before the database is upgraded.
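As a hedged cross-check of the tool's report, V$OBSOLETE_PARAMETER lists the parameters the currently running release already treats as obsolete, together with whether they are still set:

-- obsolete parameters that are still specified in the parameter file
select name from v$obsolete_parameter where isspecified = 'TRUE';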

Components:

This section displays a list of database components in the new Oracle Database 10g release that will be upgraded or installed when the current database is upgraded.

Miscellaneous Warnings:

This section provides warnings about specific situations that may require attention before and/or after the upgrade.

SYSAUX Tablespace: 

This section displays the minimum required size for the SYSAUX tablespace, which is required in Oracle Database 10g. The SYSAUX tablespace must be created after the new Oracle Database 10g release is started and BEFORE the upgrade scripts are invoked. 

Step 3:

Check for the deprecated CONNECT Role 

After upgrading to 10gR2, the CONNECT role will only have the CREATE SESSION privilege; the other privileges granted to the CONNECT role in earlier releases will be revoked during the upgrade. To identify which users and roles in your database are granted the CONNECT role, use the following query:
SELECT grantee FROM dba_role_privs
WHERE granted_role = 'CONNECT' and
grantee NOT IN (
'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
'LOGSTDBY_ADMINISTRATOR', 'ORDSYS', 
'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER', 
'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

If users or roles require privileges other than CREATE SESSION, then grant the specific required privileges prior to upgrading. The upgrade scripts adjust the privileges for the Oracle-supplied users.
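For example (a hedged sketch; APPUSER is a hypothetical account that previously relied on the CONNECT role for these privileges):

-- re-grant only the privileges the account actually needs before upgrading
grant create table, create view, create synonym to appuser;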

In Oracle 9.2.x and 10.1.x, the CONNECT role includes the following privileges:

SELECT GRANTEE, PRIVILEGE FROM DBA_SYS_PRIVS
WHERE GRANTEE = 'CONNECT';

GRANTEE PRIVILEGE 
------------------------------ ---------------------------
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK

In Oracle 10.2 the CONNECT role includes only the CREATE SESSION privilege.

Step 4:

Check for invalid objects in the database:
spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <> 'VALID';
spool off 

Run the following script as a user with SYSDBA privs using SQL*Plus and then requery invalid objects: 

% sqlplus '/as sysdba'
SQL> @?/rdbms/admin/utlrp.sql

This last query returns a list of all objects that cannot be recompiled before the upgrade in the file 'invalid_pre.lst'.
If you are upgrading from Oracle9iR2 (9.2), verify that the view dba_registry contains data. If the view is empty, run the following scripts from the 9.2 home:
% sqlplus '/as sysdba' 
SQL> @?/rdbms/admin/catalog.sql 
SQL> @?/rdbms/admin/catproc.sql 
SQL> @?/rdbms/admin/utlrp.sql 
and verify that the dba_registry view now contains data. 
Step 5:

Stop the listener for the database:
$ lsnrctl 
LSNRCTL> stop 

Ensure no files need media recovery: 

$ sqlplus '/ as sysdba' 
SQL> select * from v$recover_file; 

This should return no rows. 
Step 6:

Ensure no files are in backup mode: 

SQL> select * from v$backup where status!='NOT ACTIVE'; 

This should return no rows.
Step 7:

Resolve any outstanding distributed transactions: 

SQL> select * from dba_2pc_pending; 

If this returns rows you should do the following: 

SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('<local_tran_id>'); 
SQL> commit;

Step 8:

Disable all batch and cron jobs. 
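A hedged way to see what is scheduled inside the database and to stop the job queue for the upgrade window (cron entries are disabled separately at the OS level):

-- list scheduled database jobs
select job, what, next_date from dba_jobs;

-- prevent job queue jobs from running during the upgrade window
alter system set job_queue_processes = 0;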
Step 9:

Ensure the users sys and system have 'system' as their default tablespace. 

SQL> select username, default_tablespace from dba_users 
where username in ('SYS','SYSTEM'); 

To modify use:

SQL> alter user sys default tablespace SYSTEM; 
SQL> alter user system default tablespace SYSTEM; 

Step 10:

Ensure that the AUD$ table is in the SYSTEM tablespace when auditing is enabled.

SQL> select tablespace_name from dba_tables where table_name='AUD$';

Step 11:

Note down where all control files are located.

SQL> select * from v$controlfile;
Step 12:

Shutdown the database 

$ sqlplus '/as sysdba' 
SQL> shutdown immediate; 

Step 13:
Perform a full cold backup

Upgrading to the New Oracle Database 10g Release 2

Step 14:

Update the init.ora file: 

- Make a backup of the old init.ora file
On Unix/Linux, the default location of the file is the $ORACLE_HOME/dbs directory

- Comment out any obsolete parameters. 

- Change all deprecated parameters. 
Step 15:
Copy configuration files from the ORACLE_HOME of the database being upgraded 
to the new Oracle Database 10g ORACLE_HOME:

If your parameter file resides within the old environment's ORACLE_HOME, 
then copy it to the new ORACLE_HOME. By default, Oracle looks for the parameter
file in ORACLE_HOME/dbs on UNIX platforms and in ORACLE_HOME\database on 
Windows operating systems. The parameter file can reside anywhere you wish,
but it should not reside in the old environment's ORACLE_HOME after you
upgrade to Oracle Database 10g.

If your parameter file is a text-based initialization parameter file with 
either an IFILE (include file) or a SPFILE (server parameter file) entry, 
and the file specified in the IFILE or SPFILE entry resides within the old 
environment's ORACLE_HOME, then copy the file specified by the IFILE or 
SPFILE entry to the new ORACLE_HOME. The file specified in the IFILE or SPFILE
entry contains additional initialization parameters.

If you have a password file that resides within the old environment's 
ORACLE_HOME, then move or copy the password file to the new Oracle Database
10g ORACLE_HOME.

The name and location of the password file are operating system-specific. 
On UNIX platforms, the default password file is ORACLE_HOME/dbs/orapw<SID>. 
Step 16:

Update the oratab entry to set the new ORACLE_HOME and disable automatic startup:

SID:ORACLE_HOME:N

Step 17:

Update the environment variables like ORACLE_HOME and PATH 

$ . oraenv

Step 18:

Make sure the following environment variables point to the new release (10g) directories: 
- ORACLE_HOME 
- PATH 
- ORA_NLS10
- ORACLE_BASE 
- LD_LIBRARY_PATH 
- LD_LIBRARY_PATH_64 (Solaris only)
- LIBPATH (AIX only)
- SHLIB_PATH (HPUX only)
- ORACLE_PATH 

$ env | grep ORACLE_HOME 
$ env | grep PATH 
$ env | grep ORA_NLS10
$ env | grep ORACLE_BASE 
$ env | grep LD_LIBRARY_PATH 
$ env | grep ORACLE_PATH


Step 19:

Startup upgrade the database:

$ cd $ORACLE_HOME/rdbms/admin 
$ sqlplus / as sysdba 
Use Startup with the UPGRADE option:
SQL> startup upgrade

Step 20:

Create a SYSAUX tablespace. In Oracle Database 10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were separate in previous releases.

The SYSAUX tablespace must be created with the following mandatory attributes:

- ONLINE
- PERMANENT
- READ WRITE
- EXTENT MANAGEMENT LOCAL
- SEGMENT SPACE MANAGEMENT AUTO

The Upgrade Information Tool (utlu102i.sql, run in Step 2) provides an estimate of the minimum required size for the SYSAUX tablespace in its SYSAUX Tablespace section.

The following SQL statement would create a 500 MB SYSAUX tablespace for the database:

SQL> CREATE TABLESPACE sysaux DATAFILE 'path/sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Step 21:
Run the catupgrd.sql script, spooling the output so you can check whether any errors occurred and investigate them:

SQL> spool upgrade.log
SQL> @catupgrd.sql

The catupgrd.sql script determines which upgrade scripts need to be run and then runs 
each necessary script. You must run the script in the new release 10.2 environment.

The upgrade script creates and alters certain data dictionary tables. It also upgrades or installs, in the new release 10.2 database, the database components listed in the Components section of the Upgrade Information Tool output.

When catupgrd.sql completes, turn spooling off:

SQL> spool off
Then, check the spool file and verify that the packages and procedures compiled successfully. You named the spool file earlier in this step; the suggested name was upgrade.log. Correct any problems you find in this file and rerun the appropriate upgrade script if necessary. You can rerun any of the scripts described in this note as many times as necessary.

Step 22:

Run utlu102s.sql, specifying the TEXT option:

SQL> @utlu102s.sql TEXT

This is the Post-upgrade Status Tool; it displays the status of the database components in the upgraded database.

Step 23: 

Restart the database:
SQL> shutdown immediate (DO NOT use "shutdown abort" !!!) 
SQL> startup restrict

Step 24: 

Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql 

Verify that all expected packages and classes are valid:

If there are still objects that are not valid after running the script, run the following: 
spool invalid_post.lst 
select substr(owner,1,12) owner, 
substr(object_name,1,30) object, 
substr(object_type,1,30) type, status 
from dba_objects where status <> 'VALID'; 
spool off 

Now compare the invalid objects in the file 'invalid_post.lst' with the invalid objects in the file 'invalid_pre.lst' you created in step 4.

After Upgrading a Database

Step 25:

Modify the listener.ora file: 
For the upgraded instance(s), modify the ORACLE_HOME parameter to point to the new ORACLE_HOME.

Step 26:

Start the listener 
$ lsnrctl 
LSNRCTL> start 

Step 27:

Enable cron and batch jobs 

Step 28:

Change oratab entry to use automatic startup 
SID:ORACLE_HOME:Y