Evidence for successful #Oracle Performance Tuning

This article shows an easy way to determine whether your Oracle Database Performance Tuning task has been successful – or not. In the end, it boils down to what the Online Documentation says: "The objective for tuning an Oracle system could be stated as reducing the time that users spend in performing some action on the database, or simply reducing DB time." The best proof would be confirmation from the end users that their run time got reduced; second best is a proof of reduced DB time, which is discussed here.

A tuning task should always end with such a proof; your gut feeling or high confidence is not sufficient – or, as I like to say: "Don't believe it, test it!" 🙂

The demo scenario: On an Oracle Database Enterprise Edition 11.2.0.3, an application uses these commands to delete rows:

SQL> create table t as select * from dual where 1=2;

Table created.

SQL> begin
       for i in 1..100000 loop
         execute immediate 'delete from t where dummy='||to_char(i);
       end loop;
     end;
     /

I am pretty sure that this code is not optimal, because it uses Literals where Bind Variables would be appropriate. Before I implement an improved version of the code, I take a baseline with Automatic Workload Repository (AWR) snapshots. On my demo system, snapshots are taken automatically every ten minutes.
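Should you not want to wait for the next automatic snapshot, a snapshot can also be taken manually to mark the start of the baseline period – a minimal sketch, assuming EXECUTE privilege on DBMS_WORKLOAD_REPOSITORY:

SQL> -- take an AWR snapshot right now instead of waiting for the next automatic one
SQL> exec dbms_workload_repository.create_snapshot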

[Screenshot: EM Homepage at 8:51 am]

The code with Literals was just called – now, about 10 minutes later, on the Enterprise Manager (EM) Performance page:

[Screenshot: CPU peak on the EM Performance page]

As the baseline, I take an AWR report for this first snapshot range – either with EM or with awrrpt.sql.
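From SQL*Plus, generating a text report looks roughly like this – just a sketch; the script prompts interactively for the report type, the snapshot range and the report name:

SQL> -- awrrpt.sql ships in $ORACLE_HOME/rdbms/admin; ? expands to ORACLE_HOME in SQL*Plus
SQL> @?/rdbms/admin/awrrpt.sql

The baseline report shows the following: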

[Screenshot: AWR report baseline DB time]

Notice especially the poor Library Cache hit ratio – a telltale sign of not using Bind Variables – and the meaninglessly high Buffer Cache hit ratio 🙂
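By the way, the Library Cache hit ratio can also be checked at any time outside of an AWR report – a minimal sketch against V$LIBRARYCACHE:

SQL> -- a GETHITRATIO close to 1 means parse calls usually find their cursor in the Library Cache
SQL> select namespace, gethitratio, pinhitratio
     from v$librarycache
     where namespace = 'SQL AREA';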

[Screenshot: AWR baseline with low Library Cache hit ratio]

Starting after 9:00 am, my improved code that uses Bind Variables runs:

SQL> begin
       for i in 1..100000 loop
         delete from t where dummy=to_char(i);
       end loop;
     end;
     /
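Static SQL inside PL/SQL uses Bind Variables automatically – the PL/SQL variable i is passed as a bind, so the DELETE is parsed only once. If dynamic SQL had to be kept for some reason, the same effect could be achieved with an explicit bind in EXECUTE IMMEDIATE; a sketch that is not part of the original demo:

SQL> begin
       for i in 1..100000 loop
         -- the placeholder :d is bound at run time, so all iterations share one cursor
         execute immediate 'delete from t where dummy = :d' using to_char(i);
       end loop;
     end;
     /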

The EM Performance page shows no peak during the next 10 minutes, which represent my comparison period after the tuning task:

[Screenshot: No peak on the EM Performance page after the tuning task]

Let's look at the AWR report of the second snapshot range, after the tuning task:

[Screenshot: AWR report after the tuning task shows reduced DB time]

Same wall clock time, same application load, but reduced DB time – I was successful! I could stop here, but here are some more details:

[Screenshot: AWR report shows improved Library Cache hit ratio]

The Library Cache hit ratio – especially important for OLTP systems – is now very good. A very convenient way to compare the two snapshot ranges is the 'AWR Compare Periods' feature in EM, or the awrddrpt.sql script on the command line.
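The command-line variant works much like awrrpt.sql, except that it prompts for two snapshot ranges – again just a sketch:

SQL> -- awrddrpt.sql also lives in $ORACLE_HOME/rdbms/admin
SQL> @?/rdbms/admin/awrddrpt.sql

The comparison shows us instructively: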

[Screenshot: AWR Compare Periods with less DB time]

Although CPU was the top event in both periods (also in % DB time), it took much less time in total in the 2nd period:

[Screenshot: AWR Compare Periods with less CPU time for the 2nd period]

The Time Model Statistics confirm a strongly reduced Parse Time for the 2nd period.
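Outside of the reports, the same time model figures can also be watched live in V$SYS_TIME_MODEL – a minimal sketch (the values are cumulative since instance startup, in microseconds):

SQL> select stat_name, round(value/1e6, 1) as seconds
     from v$sys_time_model
     where stat_name in ('DB time', 'DB CPU', 'parse time elapsed', 'hard parse elapsed time');

In the comparison report, the Time Model section looks like this: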

[Screenshot: Time Model Statistics show reduced Parse Time for the 2nd period]

In particular, we see a striking improvement in the run time of the code with Bind Variables: from about 82 seconds down to about 3 seconds!

[Screenshot: SQL comparison section shows much less run time in the 2nd period]

This kind of proof (less DB time) can also be used in cases where the reduction of run time for a single statement is not as obvious as in my example. If 1,000 users had each done 100 deletes, each of them would have seen little difference in run time – but the parse time adds up and impacts overall performance just as seen here. If you would like to see the three original AWR reports from which I took the screenshots above, they are here as PDFs:

AWR_Report_baseline

AWR_report_after_tuning_task

AWR_Compare_Periods
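If you prefer plain SQL over the formatted reports, the DB time per snapshot interval can also be pulled directly from the underlying AWR table – a sketch against DBA_HIST_SYS_TIME_MODEL (which, like AWR itself, requires the Diagnostics Pack license) and assuming consecutive snapshot IDs without gaps:

SQL> -- DB time is recorded cumulatively in microseconds; the self-join computes the delta per snapshot interval
SQL> select e.snap_id,
            round((e.value - b.value)/1e6) as db_time_seconds
     from dba_hist_sys_time_model b
     join dba_hist_sys_time_model e
       on e.dbid = b.dbid
      and e.instance_number = b.instance_number
      and e.stat_name = b.stat_name
      and e.snap_id = b.snap_id + 1
     where e.stat_name = 'DB time'
     order by e.snap_id;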

Conclusion: You will – and should – be able to prove the effectiveness of your Oracle Database Tuning task with a reduction of DB time, shown by an AWR report comparison. After all, you don't want to waste your efforts, do you? 🙂

Addendum: This posting was published in the Oracle University EMEA Newsletter May 2013

 


  1. #1 by Max on April 16, 2013 - 14:33

    Great article as usual!!!

    I would say that the initial design is poor 😀 – the dummy column should be NUMBER, so delete from t where dummy between 1 and 100000; would do a great job. (joking)

  2. #2 by Uwe Hesse on April 16, 2013 - 14:40

    You are absolutely right – I just did not realize that at first. Thank you for pointing that out!

  3. #3 by Amit Saraswat on April 16, 2013 - 16:46

    Great article Uwe!!

  4. #4 by goutham on April 22, 2013 - 12:56

    hi Uhesse.
    Good explanation. Thanks for sharing the document.

    thanks.

  5. #5 by Jonathan Timajo on April 23, 2013 - 15:34

    SQL> create table t as select * from dual;

    Table created.

    SQL> begin
           for i in 1..100000 loop
             execute immediate 'delete from t where dummy='||to_char(i);
           end loop;
         end;

    how would you code it, assuming (i) is > a million or unknown?

  6. #6 by Uwe Hesse on April 23, 2013 - 17:37

    Jonathan,
    when using an anonymous PL/SQL block like I did here, you may just replace 100000 with &n – you then get prompted for the desired number

