Sunday, 10 November 2013

TRAININGS WE OFFER


We offer the following trainings to learn Oracle:

  • Database Administration 10g, 11g
  • Oracle RAC 10g, 11g Administration
  • Oracle Data Guard 11g
  • Database RMAN Backups 11g
  • Linux Shell Scripting


I have 11+ years of experience with Oracle Database and currently work at a multinational company.

If any fresher is interested in learning the Oracle DBA program, you can reach me at 02.tarun@gmail.com.


Saturday, 2 October 2010

How to display Swedish characters from an application

Issue: When we execute the report query, Swedish characters are not displayed.

They display correctly from SQLPLUSW, TOAD, and SQL Developer,

but not from the command-line sqlplus window.

Solution: To display Swedish characters, set the NLS_LANG environment variable.

export NLS_LANG=SWEDISH_SWEDEN.WE8MSWIN1252   -- UNIX/GUI
set NLS_LANG=SWEDISH_SWEDEN.WE8PC850          -- Windows DOS prompt (use set, not export)

If you do not want dates in Swedish, the below setting alone is sufficient.

export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252   -- UNIX/GUI
set NLS_LANG=AMERICAN_AMERICA.WE8PC850          -- Windows DOS prompt
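The same setting can be scripted before launching the client. A minimal sketch for a UNIX shell (the verification step simply echoes the variable back; any sqlplus started from this shell would inherit it):

```shell
# Set NLS_LANG (language_territory.characterset) for the client session.
export NLS_LANG=SWEDISH_SWEDEN.WE8MSWIN1252

# Verify what sqlplus will see
echo "$NLS_LANG"
# prints SWEDISH_SWEDEN.WE8MSWIN1252
```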

Friday, 24 September 2010

How to find the top CPU processes on a UNIX server

Below are commands to get the top 10 processes by CPU usage on UNIX.

1) Sun OS (Solaris)

ps -eo pcpu,pid,user,args | sort -rn -k1 | head -10

2) IBM AIX

ps uaxw | sort -r +2 | head -10
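On Linux (not covered above), GNU ps can sort by CPU itself; a minimal sketch:

```shell
# 3) Linux (GNU ps): sort by CPU descending, keep the header plus the top 10
ps -eo pcpu,pid,user,comm --sort=-pcpu | head -11
```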

Thursday, 23 September 2010

NO_INVALIDATE in DBMS_STATS

What does the NO_INVALIDATE parameter signify in DBMS_STATS, Oracle 10g onwards?

Prior to Oracle 10g, DBMS_STATS would gather statistics on the schema/tables and immediately invalidate the dependent cursors in the shared pool, so the next execution would use a new plan based on the fresh statistics.
This causes hard parsing, which consumes a lot of CPU.

From Oracle 10g onwards we can control whether dependent cursors are invalidated, via the
NO_INVALIDATE parameter of the DBMS_STATS package.

Following are the details of the NO_INVALIDATE parameter.

NO_INVALIDATE determines whether to invalidate dependent cursors.
Default: DBMS_STATS.AUTO_INVALIDATE
Possible values:


  • DBMS_STATS.AUTO_INVALIDATE - Oracle decides when to invalidate dependent cursors (the default)
  • TRUE - Does not invalidate the dependent cursors
  • FALSE - Invalidates dependent cursors
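For illustration, gathering statistics with an explicit NO_INVALIDATE setting might look like this (the schema and table names here are hypothetical):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'SCOTT',   -- hypothetical schema
    tabname       => 'EMP',     -- hypothetical table
    no_invalidate => FALSE);    -- FALSE: invalidate dependent cursors immediately
END;
/
```

With TRUE the cursors are left untouched; with the default DBMS_STATS.AUTO_INVALIDATE, Oracle spreads the invalidations out over time to avoid a hard-parse spike.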
Hope this helps!!!
Tarun C

Execute to parse %

What does the Execute to Parse % signify in a Statspack/AWR report?

If this percentage is very low, e.g. 2%, it means a lot of parsing is going on in the database. You then have to check whether it is soft parsing or hard parsing.

Check the soft parse ratio. If the soft parse ratio is quite good, say above 90%, there is little hard parsing in the database but a lot of soft parsing; and since the execute-to-parse ratio is very low, almost every execution is preceded by a (soft) parse.

To avoid soft parsing, check the application behaviour: the application should not close cursors too early.
The only way to improve the execute-to-parse ratio is to alter the two variables used in the formula: you can change either (1) the number of times you parse or (2) the number of times you execute.

Below is the formula to calculate Execute to Parse %:

100 - (total parses / total executions) * 100
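For example, with hypothetical figures read off an AWR snapshot (say 5,000 parse calls against 100,000 executions), the formula works out as:

```shell
# Execute to Parse % = 100 - (parses / executions) * 100
# Example figures (hypothetical): 5,000 parses, 100,000 executions
awk 'BEGIN { parses = 5000; executions = 100000;
             printf "%.2f\n", 100 - (parses / executions) * 100 }'
# prints 95.00
```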

Hope this helps

Tarun C.

How to find the SQL queries consuming the top CPU resources

This topic is very useful for Oracle DBA beginners to find the top SQL statements that are consuming high CPU resources on the Oracle server.

Here are the steps to find the top SQL queries in Oracle 10.2.0.4.

1. Run the top command on the server; if top is not available, use the prstat -a command on Sun OS.

Get the top 5 PIDs from the above output.

2. Now you have to find which queries those top 5 PIDs are running. Below is the SQL to fetch them.

select sql_text
  from v$sqlarea
 where sql_id in (select sql_id
                    from v$session
                   where paddr in (select addr
                                     from v$process
                                    where spid in (<UNIX TOP 5 PID>)));
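If you just want the heaviest statements regardless of OS PID, a simpler alternative sketch is to sort V$SQLAREA by cumulative CPU time directly (cpu_time is in microseconds):

```sql
-- Top 5 SQL by CPU time consumed since the cursor was loaded
select *
  from (select sql_id, cpu_time, executions, sql_text
          from v$sqlarea
         order by cpu_time desc)
 where rownum <= 5;
```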

Hope this helps
Tarun C
02.tarun@gmail.com



ORACLE PARALLEL DML

There is a myth that if the table is not partitioned, then parallel DML will not work. This is no
longer true. Several years ago I had updated 35 million rows table (size 40GB) using parallel
DML option in 8i.  The parallel update ran 6X faster than procedural update using PL/SQL.

In this paper, you will learn how to update a BIG Table using parallel DML option.

All the tests given below are conducted in 10gR2.

First, I created a table, T1 using ALL_OBJECTS.  The table has 10,000 rows.

One of the quickest ways to find out the effectiveness (throughput) of parallelism in your system
is to run a query (SELECT) with the parallel option, and measure the throughput by querying
the V$PQ_TQSTAT view.
(Note: V$PQ_TQSTAT will be renamed to V$PX_TQSTAT in 11g.)
alter session set db_file_multiblock_read_count = 128;

select /*+ FULL(t1) parallel(t1,8) */ *  from tamil.t1 ;

select process, num_rows, bytes  from v$pq_tqstat order by process;

PROCESS    NUM_ROWS      BYTES
-------- ---------- ----------
P000           1300     664452
P001           1300     664551
P002           1300     664551
P003           1300     664551
P004           1200     613428
P005           1200     613428
P006           1200     613428
P007           1200     613428
QC            10000    5111817   -- Query coordinator
If you see the number of rows evenly distributed among the parallel slave processes, then it
indicates that your system is well configured for parallel processing.
Case # 1 - Simple update on NON-PARTITIONED Table
I have built  a table, T1 that has 3.11 Million rows.

SQL> desc t1
Name            Null?    Type
--------------- -------- ------------
OWNER           NOT NULL VARCHAR2(30)
OBJECT_NAME     NOT NULL VARCHAR2(30)
SUBOBJECT_NAME           VARCHAR2(30)
OBJECT_ID       NOT NULL NUMBER
DATA_OBJECT_ID           NUMBER
OBJECT_TYPE              VARCHAR2(19)
CREATED         NOT NULL DATE
LAST_DDL_TIME   NOT NULL DATE
TIMESTAMP                VARCHAR2(19)
STATUS                   VARCHAR2(7)
TEMPORARY                VARCHAR2(1)
GENERATED                VARCHAR2(1)
SECONDARY                VARCHAR2(1)

SQL> select count(*) from t1;

COUNT(*)
----------
3111872

SQL> select table_name, num_rows, blocks, empty_blocks, last_analyzed
       from user_tables
      where table_name = 'T1';

TABLE_NAME   NUM_ROWS     BLOCKS EMPTY_BLOCKS LAST_ANALYZ
---------- ---------- ---------- ------------ -----------
T1            3116183      43085            0 05-MAR-2007
[I used the DBMS_STATS.GATHER_TABLE_STATS procedure to gather table statistics with
estimate_percent NULL (meaning 100 percent), but it reported a higher number of rows, as you
see in USER_TABLES.]
Let me do the first test.

SQL> alter session set db_file_multiblock_read_count = 128;

Session altered.

Elapsed: 00:00:00.01

SQL> update tamil.t1 set object_name = lower(object_name);
3111872 rows updated.

Elapsed: 00:09:34.33
SQL> commit;

Commit complete.

Elapsed: 00:00:00.13

A single process ran for 9 Min and 34 seconds.
Case # 2 - Parallel Update on NON-PARTITIONED TABLE
I have shut down the instance and restarted it.

SQL> alter session enable parallel dml ;

Session altered.

Elapsed: 00:00:00.00

SQL> alter session force parallel dml parallel 8;

Session altered.

Elapsed: 00:00:00.01

SQL> alter session set db_file_multiblock_read_count = 128;

Session altered.

Elapsed: 00:00:00.00

SQL> update /*+ full(t1) parallel(t1,8) */
2         tamil.t1 set object_name = lower(object_name);
3111872 rows updated.

Elapsed: 00:00:28.03
SQL> commit;

Commit complete.

Elapsed: 00:00:01.12
Conclusion:
The parallel update took just 28 seconds, versus 9 min 34 sec for the single-process update.
By setting DB_FILE_MULTIBLOCK_READ_COUNT to a high value and enabling "PARALLEL DML"
and "FORCE PARALLEL DML" with the correct degree of parallelism at the session level, you can
update a big table in a much shorter time.

Do not attempt to process a small table in parallel. In fact, it will run longer than a single
process would take.
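The session settings used in Case #2 can be condensed into a single sketch (degree 8 is only an example; choose a degree that fits your CPU count):

```sql
alter session enable parallel dml;
alter session force parallel dml parallel 8;
alter session set db_file_multiblock_read_count = 128;

update /*+ full(t1) parallel(t1,8) */ tamil.t1
   set object_name = lower(object_name);

-- A commit (or rollback) is required before this session
-- can read or modify the table again after parallel DML.
commit;
```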
Hope, this helps.