Thursday, September 5, 2013

Using automatic SQL tuning and SQL Profiles for fast ETL performance troubleshooting

In the previous post we showed how to use SQL Profiles, together with a script downloadable from Metalink, to fix the execution plan of a query to a specific plan with a better elapsed time, chosen from the plans available in AWR (or the Library Cache).

In this post, we will continue our discussion of SQL Profiles and show another way to exploit them in order to troubleshoot SQL tuning issues very quickly, without having to write code, deploy new code into production, and so on.

Saturday, August 31, 2013

Using SQL Profiles for Forcing a Specific Execution Plan

In a previous post, we briefly discussed SQL Profiles. In this post, we will discuss using SQL Profiles to quickly remedy an SQL statement that has suddenly switched to the "wrong" execution plan, causing its performance to deteriorate dramatically. There can be many reasons why an execution plan changes (statistics, changes in the schema, an upgrade to the optimizer code, etc.), and the root cause of the change is certainly something that must be investigated. However, sometimes (and especially in production environments) the first priority is to cure the problem, and only then to find out what happened and provide a more long-term solution.

SQL Profiles provide exactly this fast remedy in the case of a problematic execution plan. As long as you know the SQL statement (i.e., the SQL_ID) and an old good plan (i.e., its plan_hash_value), it is a piece of cake to revert to the old plan. Please note that this is completely transparent to the end users - no deployments, no bug fixing, no coding, nothing. Just create the SQL Profile for the specific SQL_ID and the optimizer is forced to use the old plan. Let's start with a small intro.

[Update]: In a newer post about reverting to an older plan using SQL Plan Baselines, we describe how one can achieve the same thing with an SQL Plan Baseline.
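To preview where this is going, the typical workflow with the Metalink script from the previous post looks roughly like the sketch below. The SQL_ID and plan_hash_value are made up, and the exact name of the generated script may differ by script version:

```
-- Generate a profile-creation script for a known good plan
-- (plan taken from the cursor cache or AWR):
SQL> @coe_xfr_sql_profile.sql 7h35uxf5uhmm1 3724264953

-- Then run the generated script to create the profile:
SQL> @coe_xfr_sql_profile_7h35uxf5uhmm1_3724264953.sql
```

From that point on, every execution of SQL_ID 7h35uxf5uhmm1 should use the plan with hash value 3724264953, with no change to the application.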

Friday, August 30, 2013

Cardinality Feedback

Cardinality Feedback (CFB) is a feature of the Oracle Optimizer for correcting cardinality estimates (i.e., the number of rows returned by each operation) in execution plans. Rather than going to endless complications to determine the right cardinality up front, the optimizer simply records the actual row counts of each step of the execution plan in the shared pool and references them on subsequent executions, in the hope that this information gives a good idea of how well it did the last time.

CFB is a feature of Oracle 11gR2. For more details read the following post.

In this post, we will show CFB in action.
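As a quick way to see CFB at work, run a statement twice and check the Note section of the cursor's actual plan; on the re-optimized execution it should flag the feedback. A minimal sketch (the exact Note wording may vary slightly by patch level):

```sql
-- Fetch the plan of the last executed statement in this session:
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

-- On the second execution the Note section should show something like:
-- Note
-- -----
--    - cardinality feedback used for this statement

-- You can also see which child cursors were created due to feedback:
SELECT sql_id, child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';   -- substitute your SQL_ID
```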

Wednesday, August 28, 2013

Adaptive Cursor Sharing

Adaptive Cursor Sharing is a feature introduced in Oracle Database 11g that tries to cure the following problem: when a cursor uses bind variables on columns with skewed data, a single shared plan may be suboptimal for some bind values. Adaptive cursors have the ability to become "bind-aware" and choose a different plan for different bind values.
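You can watch adaptive cursor sharing in action from V$SQL; the two flag columns below show whether a cursor is a candidate and whether it has actually become bind-aware (the SQL_ID is a substitution variable you supply):

```sql
SELECT child_number,
       plan_hash_value,
       is_bind_sensitive,  -- 'Y': plan may depend on the peeked bind values
       is_bind_aware,      -- 'Y': cursor picks a plan based on bind selectivity
       executions
FROM   v$sql
WHERE  sql_id = '&sql_id'
ORDER  BY child_number;
```

Typically a cursor starts out bind-sensitive only; after executions with bind values of very different selectivity, new bind-aware children appear with their own plans.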

Check out this great article by Maria Colgan to learn what it is and how it works. Also check out the relevant Oracle Performance Tuning Guide paragraph.

If you want a refresher on what cursor sharing is and the meaning of the values of the CURSOR_SHARING parameter, check out our relevant post. To understand how CURSOR_SHARING combines with Adaptive Cursor Sharing, read this article.

Enjoy!
Oracle Learner.   

Tuesday, August 27, 2013

Find all available Execution Plans for a specific SQL statement


When you have a specific SQL statement that has been running in your database (e.g., it might be part of a nightly ETL flow used for loading your Data Warehouse), how can you get a list of all the execution plans that the optimizer has chosen for this specific statement?

Each statement is uniquely identified by an SQL_ID, and whenever the statement undergoes a hard parse, an execution plan is generated, which is uniquely identified by a hash value, called the plan_hash_value.
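So the task boils down to collecting all plan_hash_values for an SQL_ID, both from the cursor cache and from AWR history. A sketch (assuming AWR is licensed and the statement was captured in snapshots):

```sql
-- Plans still present in the library cache:
SELECT DISTINCT sql_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- Plans captured in AWR, with average elapsed seconds per execution,
-- so you can immediately spot the "good" and the "bad" plan:
SELECT plan_hash_value,
       SUM(executions_delta) AS executions,
       ROUND(SUM(elapsed_time_delta) /
             NULLIF(SUM(executions_delta), 0) / 1e6, 2) AS avg_etime_secs
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
GROUP  BY plan_hash_value
ORDER  BY avg_etime_secs;
```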

Monday, August 26, 2013

Child Cursors

If you want to understand what a child cursor is (like the ones you see when you query V$SQL, each identified by a unique (SQL_ID, CHILD_NUMBER) pair) and why child cursors are created, then check out these two great explanations by Tom Kyte:

1. From the asktom site
2. From Oracle Magazine

Keep in mind that:
  • A parent cursor essentially represents the text of an SQL statement and is uniquely identified by an SQL_ID (prior to 10g it was identified by address and hash_value). Parent cursors are externalized via the dynamic view V$SQLAREA.
  • A child cursor represents the execution plan as well as the execution environment for a specific SQL_ID (i.e., a parent cursor) and is uniquely identified by SQL_ID and CHILD_NUMBER (address, hash_value and child_number in older versions). Child cursors are externalized via V$SQL.

    From the 2nd point we infer that whenever the execution plan changes for a specific SQL statement (i.e., a parent cursor), a new child cursor is created. However, this is true per database instance. In a RAC environment (e.g., Exadata) with multiple instances, if you query the dynamic view GV$SQL (which covers all available instances, unlike V$SQL), you might see that a new execution plan is executed by the same child cursor but in a different instance. In other words, for the same (SQL_ID, child_number) pair you might see more than one plan_hash_value (i.e., different execution plans), but each such child cursor will be loaded in the library cache of a different instance (therefore, the value in the GV$SQL.INST_ID column will differ).
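This is easy to see for yourself on a RAC database with a query like the following, which returns one row per child cursor per instance (substitute your own SQL_ID):

```sql
SELECT inst_id,
       sql_id,
       child_number,
       plan_hash_value,
       executions
FROM   gv$sql
WHERE  sql_id = '&sql_id'
ORDER  BY child_number, inst_id;
```

If the same (sql_id, child_number) shows up with different plan_hash_values, check the inst_id column: each of those rows comes from the library cache of a different instance.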

Enjoy!
Oracle Learner.

Wednesday, August 7, 2013

Code Instrumentation

Here is a very nice introduction to code instrumentation from David Njoku. Instrumentation is very important for your code, and this article explains why. It is, of course, well known that the Oracle database is one of the most thoroughly instrumented pieces of software in the world, but instrumentation is equally important for software of any scale.

The article does not give many details on the "how" of instrumentation, as it puts more weight on why it is necessary, although it does give a brief outline.

An open source solution for instrumenting your PL/SQL code is Logger. This is extremely useful for APEX applications, where debugging can be really difficult if your code does not leave the appropriate traces behind to reveal what has happened (i.e., if it has not been instrumented!). Check it out.
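To give a flavor of how lightweight this is, here is a minimal sketch of a PL/SQL block instrumented with Logger (procedure names are from the Logger API; the messages themselves are made up):

```sql
BEGIN
   logger.log('ETL step 1 starting');                 -- debug-level entry
   -- ... do the actual work here ...
   logger.log_information('ETL step 1 completed');    -- info-level entry
EXCEPTION
   WHEN OTHERS THEN
      logger.log_error('ETL step 1 failed: ' || SQLERRM);
      RAISE;
END;
/
```

All entries land in Logger's own table, so when something goes wrong at 3 a.m. you can reconstruct what the code was doing without re-running it.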

Enjoy!
Oracle Learner

Friday, July 26, 2013

Oracle Magazine useful Links

The following two links contain all the Oracle Magazine articles published by Tom Kyte and Steven Feuerstein. The third one leads to all technical articles (check out Arup Nanda's Performance Tuning Series), and the last link points to the full issues archive.




Enjoy!
Oracle Learner

Friday, July 12, 2013

Oracle Internals Presentations

Check out this great set of Oracle Internals presentations by Julian Dyke. The presentations are all very comprehensive and I really like the animations that help you easily understand the sequence of events going on in the database.

Enjoy
Oracle Learner.

Thursday, July 11, 2013

Hints from an SQL Profile

A SQL profile is a set of auxiliary information specific to a SQL statement. It is an object containing information that helps the query optimizer find an efficient execution plan for a specific SQL statement. It provides information about the execution environment, object statistics, and corrections related to the estimations performed by the query optimizer.

One of its main advantages is the ability to influence the query optimizer without modifying the SQL statement or the execution environment of the session executing it. In other words, it is transparent to the application connected to the database engine.

Conceptually, a SQL profile is to a SQL statement what statistics are to a table or index. The database can use the auxiliary information to improve execution plans.
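The "corrections" a profile carries are in fact stored as hints. One well-known way to list them is to query the internal SYS tables that back SQL management objects; this is a sketch for 11g, using undocumented tables whose structure may change between versions (the profile name is a substitution variable):

```sql
SELECT extractvalue(value(h), '.') AS hint
FROM   sys.sqlobj$data od,
       sys.sqlobj$ so,
       table(xmlsequence(
             extract(xmltype(od.comp_data), '/outline_data/hint'))) h
WHERE  so.name      = '&profile_name'
AND    so.signature = od.signature
AND    so.category  = od.category
AND    so.obj_type  = od.obj_type
AND    so.plan_id   = od.plan_id;
```

Typical output contains hints like OPT_ESTIMATE or scaling adjustments rather than the access-path hints (INDEX, FULL, etc.) you would write by hand.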

Friday, June 28, 2013

Exadata Storage Indexes

Storage Indexes are one of the most significant features of Exadata, running at the storage cells. However, they are also the feature that is least (if at all) controllable by the user (developer/DBA). You cannot control when they will be built, which columns they will be built for, or which columns they will continue to cover as the database workload evolves. So they are a true "black box" and totally different from what we are used to with good old database indexes.
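About the only direct visibility the database gives you is a session statistic reporting how much physical I/O the storage indexes saved. A quick check for your own session:

```sql
SELECT s.name,
       ROUND(m.value / 1024 / 1024) AS mb_saved
FROM   v$statname s,
       v$mystat   m
WHERE  s.statistic# = m.statistic#
AND    s.name = 'cell physical IO bytes saved by storage index';
```

A value of zero after a smart-scanned query simply means no storage index helped; a large value is your only proof that one was used.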

The following links from Richard Foote's Oracle Blog are very insightful on how storage indexes really work.
First comes the introductory series of posts:



And then is the comparison series between SIs and database indexes (check out the last one on the "magic" 8-column limit):



Enjoy,
Oracle Learner.

Wednesday, June 19, 2013

Oracle Trace Files

In this post I will give an overview of Oracle trace files. The following is a collection of information from several books (especially [1]). Check out the References section at the end of this post.

Oracle trace files are text files that contain session information for the process that created them. Trace files can be generated by the Oracle background processes, through the use of trace events, or by user server processes. These trace files can contain useful information for performance tuning and system troubleshooting.
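For user sessions, the most common way to produce a trace file is extended SQL trace. A minimal sketch for tracing your own session (the tracefile identifier is an arbitrary tag of your choosing):

```sql
-- Tag the trace file name so it is easy to find:
ALTER SESSION SET tracefile_identifier = 'my_test';

-- Enable extended SQL trace, including wait events and bind values:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

-- ... run the statements of interest ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- Locate the resulting file (11g):
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```

The raw file can then be summarized with tkprof, or read directly once you know what to look for.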

Saturday, June 15, 2013

Big Data Intro

Here is a great post from Arup Nanda with introductory material on Big Data and all the relevant buzzwords, aimed at Oracle professionals!

Enjoy!
Oracle Learner

Thursday, April 11, 2013

Parallel Hint with No Degree Specified

In this post we try to investigate what degree of parallelism (DOP) Oracle assigns to DML operations when a parallel hint is used.
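The case in question looks like the following sketch (the table is hypothetical, and the resulting DOP depends on the Oracle version and on parameters such as parallel_degree_policy):

```sql
-- Parallel DML must be enabled explicitly for the session:
ALTER SESSION ENABLE PARALLEL DML;

-- No degree in the hint: Oracle chooses the DOP itself (typically the
-- default DOP, derived from CPU_COUNT x PARALLEL_THREADS_PER_CPU,
-- times the number of instances):
UPDATE /*+ parallel */ sales
SET    status = 'PROCESSED'
WHERE  status = 'NEW';

-- Explicit degree, for comparison:
UPDATE /*+ parallel(sales 8) */ sales
SET    status = 'PROCESSED'
WHERE  status = 'NEW';
```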

Tuesday, January 15, 2013

Happy Life KPI

If you want to measure how happy your life is, then ...you can use the following KPI! Of course, you can modify the script to fit your own life priorities ;-)
I have created two versions of the query. The first one uses the analytic function ratio_to_report to generate the percentage needed for the KPI, which makes the query a bit more "advanced" ;-)

select result, round(ratio_to_report(count(result)) over() * 100) ||'%' happy_life_KPI
from (
select  decode(time_spent, 'FAMILY', 'enjoy', 'FRIENDS', 'enjoy', 'ORACLE', 'enjoy', 'deadly dull') result
from mylife.days
start with yesterday is null and status = 'BORN'
connect by prior day = yesterday and status <> 'DEAD'
)
group by result
order by 2 desc
/



select round(count(decode(result,'enjoy',1,null))/count(*) *100) ||'%' happy_life_KPI
from (
select  decode(time_spent, 'FAMILY', 'enjoy', 'FRIENDS', 'enjoy', 'ORACLE', 'enjoy', 'deadly dull') result
from mylife.days
start with yesterday is null and status = 'BORN'
connect by prior day = yesterday and status <> 'DEAD'
)
/

Enjoy
OL

Update your stale statistics

Object statistics are very important for your data warehouse. Tables and indexes must have up-to-date statistics in order for the query optimizer to choose the best execution plan for each query. So you must continuously gather object statistics (especially after ETL jobs have significantly modified the data).

The easiest way to do this is to utilize Oracle's automatic statistics gathering task. This can be easily configured to run at a specific maintenance window (that you will define, unless you want to use the default) from Enterprise Manager (Administration --> Automated Maintenance Tasks --> Optimizer Statistics Gathering).

Note that you should use an appropriate degree of parallelism so that the stats gathering process runs fast enough to fit in the maintenance window. You may have to experiment a bit with this, in order to find what is appropriate for your case.

But how can you monitor whether the automated task has successfully managed to gather statistics within the defined maintenance window?
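Two checks usually answer this, sketched below (the schema name is a substitution variable; view availability assumes 11g):

```sql
-- Did the stats-collection runs in recent windows complete?
SELECT client_name, window_name,
       jobs_created, jobs_started, jobs_completed
FROM   dba_autotask_client_history
WHERE  client_name = 'auto optimizer stats collection'
ORDER  BY window_start_time DESC;

-- Which tables of a schema are still flagged as having stale statistics?
SELECT owner, table_name, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = '&schema'
AND    stale_stats = 'YES';
```

If jobs are started but not completed, the window is probably too short for the workload; that is where tuning the degree of parallelism mentioned above comes in.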

Sunday, January 13, 2013

Don't treat your Data Warehouse as a warehouse!

   It is very common in a large organization for the enterprise Data Warehouse to be treated as a "warehouse" in the literal sense: just like a typical warehouse, it becomes the place where we store anything that is useless but cannot simply be discarded (since we need to store it someplace), anything we don't know where else to store, or anything we just want to get rid of. In the same manner, whenever people have some tables that they need in order to generate some reports, and these tables for whatever reason cannot be stored in some transactional system, they choose the enterprise data warehouse as the best place to store the data. Of course, this means treating your data warehouse as if it were the "recycle bin" of your IT systems!

Friday, January 4, 2013

How to connect to Exadata

How can one connect to an Exadata? In our example we will assume a half-rack Exadata, which means that at the database layer we have a 4-node RAC database.

At the server side, we need to install a local listener on each database node. So we have 4 listeners "listening" for Exadata connections.

At the client side we assume the so-called "Local Naming Method" for name resolution, which simply means that the client has a tnsnames.ora file with an appropriate tns entry. But what is an appropriate tns entry for connecting to our Exadata?
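One straightforward answer is an address list covering all four node VIPs, with client-side load balancing and failover enabled. A sketch (host names, alias and service name are made up; with 11gR2 you would more commonly list the single SCAN address instead):

```
EXA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb02-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb03-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb04-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb)
    )
  )
```

LOAD_BALANCE makes the client pick an address at random, spreading connections across the four listeners; FAILOVER makes it try the next address if one node is down.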

Wednesday, January 2, 2013

Connecting to Oracle bypassing the listener (aka a "bequeath connection")

Whenever you connect to Oracle from a client running on the same machine as the database, without specifying a net service name in the connect string, you bypass the listener. This is called a bequeath connection. In other words, when you don't specify @<net service name>, you connect to Oracle without going through a listener. In the following example I have set the environment variable ORACLE_SID, so as to connect to a specific instance:
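A minimal sketch of such a session (instance name and credentials are hypothetical):

```
$ export ORACLE_SID=orcl        # pick the local instance to attach to
$ sqlplus scott/tiger           # no @<net service name>: bequeath connection
```

Because no listener is involved, this works even when the listener is down, which is exactly why DBAs rely on it for local maintenance connections.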