Channel: Mohamed Houri’s Oracle Notes

Window SORT and Window BUFFER


I used to think that I would see a WINDOW SORT operation in the execution plan of every statement containing an over() clause, whatever its partition by and order by options. For example:

SQL> select
        id
       ,n_5000
       ,lead(id) over (partition by n_5000 order by id)
     from t1
     where n_5000 = 1778;

---------------------------------------------------------------------
| Id  | Operation             | Name   | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------
|   0 | SELECT STATEMENT      |        |       |       |    94 (100)|
|   1 |  WINDOW SORT          |        |    22 |   572 |    94   (6)|
|*  2 |   INDEX FAST FULL SCAN| MY_IND |    22 |   572 |    93   (5)|
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N_5000"=1778)

But it suffices to have (1) an index that starts with the order by column (id) or with the partition by column (n_5000) and (2) a predicate on that same column added to the original query, and the WINDOW SORT is transformed into a less costly WINDOW BUFFER.

SQL> select
        id
       ,n_5000
       ,lead(id) over (partition by n_5000 order by id)
    from t1
    where id = 1778;

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |       |       |     3 (100)|          |
|   1 |  WINDOW BUFFER    |        |     1 |    26 |     3  (34)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN| MY_IND |     1 |    26 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ID"=1778)

That’s pretty straightforward. The index MY_IND (id, n_5000, n_10000) has been used to avoid the WINDOW SORT operation. This refines my initial thinking to: “I will see a WINDOW SORT operation in the execution plan of every statement containing an over() clause, whatever its partition by and order by options, unless the CBO finds a suitable index that permits bypassing the SORT operation.”
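If you want to play with the lead() semantics involved here without an Oracle instance at hand, they can be reproduced in any engine with window-function support. Below is a minimal sketch using Python's bundled sqlite3 module (SQLite ≥ 3.25 is required for window functions); the table and values are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, n_5000 INTEGER);
    INSERT INTO t1 VALUES (1, 10), (2, 10), (4, 10), (3, 20);
""")

# lead(id) returns the id of the "next" row inside each n_5000
# partition, ordered by id; the last row of a partition gets NULL.
rows = conn.execute("""
    SELECT id, n_5000,
           LEAD(id) OVER (PARTITION BY n_5000 ORDER BY id) AS next_id
    FROM t1
    ORDER BY n_5000, id
""").fetchall()

for r in rows:
    print(r)
```

Whether the engine sorts or merely buffers the partition is an internal decision, as the plans above show; the result set is the same either way.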

But does this mean that I will never see a parent WINDOW BUFFER operation without a child index scan operation below it?

SQL> select
       id
      ,n_5000
      ,padding
      ,sum(id) over (partition by n_5000)
     from t1
     where n_5000 = 444;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   478 (100)|          |
|   1 |  WINDOW BUFFER     |      |    20 |  2200 |   478   (2)| 00:00:02 |
|*  2 |   TABLE ACCESS FULL| T1   |    20 |  2200 |   477   (2)| 00:00:02 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N_5000"=444)

Here I got rid of the order by option in the over() clause and added a predicate on the partition by column.

And maybe I can add this:

“I will NOT see a WINDOW SORT operation in the execution plan of an over() clause that contains ONLY a partition by option, when the containing query includes the partition by column in its predicate part.”

Bottom line: from now on, when I see a statement with an over() clause I will pay more attention to the partition by clause, checking whether its column appears in the predicate part or not. It may explain why I get a WINDOW SORT instead of a WINDOW BUFFER.

Footnote: I have to warn that the above conclusions, correct as they might be, are nevertheless sourced from a one-day experiment. As such, you should consider them with careful attention before taking them as definitely demonstrated. Being published does not make them correct.



Redundant Indexes


I very often warn that dropping redundant indexes in production is not 100% safe. Instead, I have always advocated paying careful attention at design time to avoid creating redundant indexes in the first place. In my professional experience I have found that it is very often when creating indexes to cover the locking threat of unindexed foreign key constraints that developers unintentionally create redundant indexes. It irritated me so much that I wrote a script which checks whether a given foreign key is already indexed, before creating a supplementary, unwanted index that would damage the DML part of the application.
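The matching rule behind such a check is simple: Oracle avoids the table-lock threat on parent DML as long as the foreign key columns appear, in any order, as the leading columns of at least one index. The real script naturally queries the dictionary views (dba_constraints, dba_cons_columns, dba_ind_columns); the function below is only a hypothetical Python illustration of the rule itself, with made-up index names:

```python
def fk_is_covered(fk_columns, index_columns_by_name):
    """Check whether a foreign key is already covered by an index.

    Oracle considers a FK indexed as long as the FK columns appear,
    in any order, as the LEADING columns of at least one index.

    fk_columns:            list of column names in the constraint
    index_columns_by_name: dict mapping index name -> ordered list
                           of its columns
    """
    wanted = {c.upper() for c in fk_columns}
    for cols in index_columns_by_name.values():
        leading = {c.upper() for c in cols[: len(wanted)]}
        if leading == wanted:
            return True
    return False

# Covered: dept_id leads IX_EMP_DEPT even with a trailing extra column.
print(fk_is_covered(["dept_id"],
                    {"IX_EMP_DEPT": ["DEPT_ID", "HIRE_DATE"]}))   # True
# Not covered: dept_id is only the 2nd column of the index.
print(fk_is_covered(["dept_id"],
                    {"IX_EMP_NAME": ["LAST_NAME", "DEPT_ID"]}))   # False
```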

Having said that, I still have not defined what redundant indexes are.

Indexes ind_1 and ind_2 are said to be redundant when the columns of one index form a leading prefix of the columns of the other.

For example:

ind_1(a,b) and ind_2(a,b,c) are redundant because ind_2 contains ind_1.

If you are at design time, it is obvious that you should not create index ind_1. Once in production, however, it is not 100% safe to drop index ind_1 without any impact. There are, for sure, occasions where the clustering factor of ind_2 is so dramatic compared to that of ind_1 that, were the latter index dropped, the optimizer would opt for a full table scan, traumatizing the queries that were perfectly happy with the dropped redundant index.
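As a hypothetical sketch of the definition above, here is the prefix test in a few lines of Python (the rest of this post shows why this textual rule alone is not a safe drop criterion):

```python
def is_redundant(ind_1_cols, ind_2_cols):
    """True when ind_1's columns are a leading prefix of ind_2's,
    i.e. ind_2 'contains' ind_1 in the sense defined above."""
    return (len(ind_1_cols) <= len(ind_2_cols)
            and list(ind_2_cols[: len(ind_1_cols)]) == list(ind_1_cols))

print(is_redundant(["a", "b"], ["a", "b", "c"]))  # True: ind_2 contains ind_1
print(is_redundant(["a", "c"], ["a", "b", "c"]))  # False: (a,c) is not a prefix
```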

I can also show you another type of index pair that people might consider redundant while it isn’t. Consider the following model, where I have created a range-partitioned table (mho_date being the partition key, with 1,493 partitions) and the two indexes shown below:

desc partitioned_tab

Name                            Null?    Type
------------------------------- -------- ------------
MHO_ID                          NOT NULL NUMBER(10)
MHO_DATE                        NOT NULL DATE
MHO_CODE                        NOT NULL VARCHAR2(1)
MHO_TYP_ID                      NOT NULL NUMBER(10)

create index local_ind_1 on partitioned_tab (mho_typ_id,mho_code) local;

create index global_ind_1 on partitioned_tab (mho_typ_id);

I am going to execute a simple query against the above partitioned table.

select * from partitioned_tab where mho_typ_id = 0;

Which, in the presence of the above two indexes, is honored via the following execution plan:

----------------------------------------------------------------------
Plan hash value: 3042058313
----------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name            | Rows  |Cost (%CPU)| Pstart| Pstop |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                 |  1493 | 1496   (0)|       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PARTITIONED_TAB |  1493 | 1496   (0)| ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                         | GLOBAL_IND_1    |  1493 |    4   (0)|       |       |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("MHO_TYP_ID"=0)

Statistics
-------------------------------------------------------
48    recursive calls
0     db block gets
3201  consistent gets
0     physical reads
0     redo size
51244 bytes sent via SQL*Net to client
1632  bytes received via SQL*Net from client
101   SQL*Net roundtrips to/from client
7     sorts (memory)
0     sorts (disk)
1493  rows processed

As you might already know, I am a fan of the SQLT tool developed by Carlos Sierra. Here below is what this tool said about redundant indexes in this particular case:

(SQLT screenshot: Redundant Indexes section)

It is clearly suggesting that we consider dropping the redundant index global_ind_1.

So let’s follow this advice and see what happens. Thankfully, with recent Oracle releases, I can first make the index invisible (by the way, that is a good suggestion for Carlos Sierra and Mauro Pagano: have the tool advise setting the index invisible first before considering dropping it):

alter index global_ind_1 invisible;

select * from partitioned_tab where mho_typ_id = 0;

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name            | Rows  | Bytes | Cost (%CPU)| Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                 |  1493 | 23888 |  2985   (1)|       |       |
|   1 |  PARTITION RANGE ALL                       |                 |  1493 | 23888 |  2985   (1)|     1 |  1493 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PARTITIONED_TAB |  1493 | 23888 |  2985   (1)|     1 |  1493 |
|*  3 |    INDEX RANGE SCAN                        | LOCAL_IND_1     |  1493 |       |  1492   (0)|     1 |  1493 |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("MHO_TYP_ID"=0)

Statistics
--------------------------------------------------------
48    recursive calls
0     db block gets
4588  consistent gets
0     physical reads
0     redo size
47957 bytes sent via SQL*Net to client
1632  bytes received via SQL*Net from client
101   SQL*Net roundtrips to/from client
7     sorts (memory)
0     sorts (disk)
1493  rows processed

By making the “redundant” index invisible, we went from one smooth global index range scan with a cost of 4 to 1,493 local index range scans with a cost of 1,492, plus an additional 1,387 consistent gets.

Bottom line: you should always consider avoiding (dropping) redundant indexes at design time. Once in production, consider making them invisible first and think carefully before dropping them.

PS: if you want to create the table, the script can be found here: partitioned table


DBMS_SCHEDULER and CALL statement


This is a brief reminder for those making intensive use of the Oracle dbms_scheduler package to schedule the launch of stored PL/SQL procedures. Recently I was investigating a wide-ranging performance problem via a 60-minute AWR snapshot when the following action, appearing in the TOP SQL ordered by Gets, captured my attention:

SQL ordered by Gets

Buffer Gets  Executions  Gets per Exec   %Total  Elapsed (s)  %CPU  %IO  SQL Id         SQL Module      SQL Text
858,389,984           2  429,194,992.00   89.58     3,606.79  97.2  1.9  6cjn7jnzpc160  DBMS_SCHEDULER  call xxx_PA_REDPC.P_EXPORT_DA…
542,731,679      32,021       16,949.24   56.64     3,201.40  98.4   .6  4bnh2nc4shkz3  w3wp.exe        SELECT WAGD_ID FROM S1

It is not the enormous logical I/O done by this scheduled stored procedure that caught my attention, but rather the appearance of the call statement in the corresponding SQL text.

Where does this come from?

Let’s get the DDL of the corresponding program

 SELECT dbms_metadata.get_ddl('PROCOBJ','XXX_PARSE_MESSAGE_PRG', 'SXXX') from dual;

Which gives this

 BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM
    (program_name         => 'XXX_PARSE_MESSAGE_PRG'
    ,program_type         => 'STORED_PROCEDURE'
    ,program_action       => 'XXX_PA_REDPC.P_EXPORT_DA.P_xxxx'
    ,number_of_arguments  => 0
    ,enabled              => FALSE
    ,comments             => NULL);
 COMMIT;
 END;
 /
 

I have slightly rearranged the generated DDL script for clarity.

Now things become clear.

When you define your program using

     program_type         => 'STORED_PROCEDURE'
 

Then your job will be executed using the SQL command call

    call XXX_PA_REDPC.P_EXPORT_DA..'
 

This is in contrast to when you define your program using

  program_type         => 'PLSQL_BLOCK'
 

which has the consequence that your job is executed using an anonymous PL/SQL block:

 BEGIN
    XXX_PA_REDPC.P_EXPORT_DA..
 END;
 

And now the question: how would you prefer your scheduled stored procedure to be executed?

  1. via the  SQL call statement
  2. via the anonymous PL/SQL block

Well, after a brief search on My Oracle Support I found a bug that seems closely related to it:

DBMS_SCHEDULER Is Suppressing NO_DATA_FOUND Exceptions for Jobs that Execute Stored Procedures (Doc ID 1331778.1)

In fact, there is one fundamental threat when opting for the call statement. Consider this

SQL> create or replace procedure p1 as
     begin
       insert into t values (1,2);
       raise no_data_found;
       commit;
     end;
     /
Procedure created.

SQL> select count(1) from t;

  COUNT(1)
----------
         0

I am going to invoke this procedure using the call statement; normally, the no_data_found exception would be raised and the data inserted into table t would be rolled back.

SQL> call p1();

Call completed.

SQL> select * from t;

        N1         N2
---------- ----------
         1          2

Despite the raise of the no_data_found exception, the inserted data has been committed and the exception silently ignored. This would not have happened had I executed the stored procedure from an anonymous PL/SQL block, as shown below:

SQL> truncate table t;

Table truncated.

SQL> begin
p1();
end;
/
begin
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "XXX.P1", line 4
ORA-06512: at line 2

SQL> select count(1) from t;

COUNT(1)
----------
0
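The transactional consequence of the two invocation styles can be mimicked outside Oracle. Here is a hedged sqlite3-based analogy (not Oracle code; table and procedure names are made up): the "call-style" driver swallows the exception and commits, while the "block-style" driver lets it propagate and rolls back:

```python
import sqlite3

def p1(conn):
    # stands in for the stored procedure: insert, then raise
    conn.execute("INSERT INTO t VALUES (1, 2)")
    raise LookupError("no data found")   # plays the role of ORA-01403

def run_call_style(conn):
    # 'call'-style driver: the exception is swallowed at the top
    # level and the work done so far ends up committed anyway.
    try:
        p1(conn)
    except LookupError:
        pass
    conn.commit()

def run_block_style(conn):
    # anonymous-block-style driver: the exception propagates and
    # the uncommitted insert is rolled back.
    try:
        p1(conn)
        conn.commit()
    except LookupError:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n1 INTEGER, n2 INTEGER)")

run_call_style(conn)
rows_after_call = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

conn.execute("DELETE FROM t")
conn.commit()

try:
    run_block_style(conn)
except LookupError:
    pass
rows_after_block = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(rows_after_call, rows_after_block)  # 1 0
```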

So, please be informed :-)


Troubleshooting a performance issue


Here is a case of troubleshooting a performance issue, taken from a real-life production system, that I wanted to share with you. An Oracle stored procedure is continuously activated via a scheduled job. Its goal is to parse incoming XML messages before accepting them into, or rejecting them from, a production application. 80% of the XML messages are treated in an acceptable execution time, but the other 20%, when injected into the queue, hang and slow down the database, with a side effect on the execution time of the normal messages. My strategy was to ask for one of those problematic XML messages from the PRODUCTION instance and execute it in the TEST database with a 10046 trace. So I did, and among many observations I stopped at the following select, for which I wanted to share the method I followed to troubleshoot it:

SELECT *
FROM
 ( SELECT WD.WAGD_ID,WH.WAGH_ID,WD.WAG_ID,WH.ARRIVAL_DATE,WH.DEPARTURE_DATE,
  WD.CHECK_REPERAGE ,ROW_NUMBER() OVER (PARTITION BY WH.WAGD_ID ORDER BY
  WH.ARRIVAL_DATE) RN FROM SRI_ORDER_DETAILS WD ,SRI_ORDER_HISTORIC WH
  WHERE WD.FROM_COM_ID = :B1 AND WD.WAGD_ID = WH.WAGD_ID AND WH.PDES_ID IS
  NULL ) WHERE RN = 1 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute   3124      0.28       0.27          0          0          0           0
Fetch     3124     31.79      32.05          0   10212356          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     6250     32.07      32.33          0   10212356          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 151     (recursive depth: 1)
Number of plan statistics captured: 2

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  VIEW  (cr=3269 pr=0 pw=0 time=14483 us cost=709 size=1245 card=15)
         0          0          0   WINDOW SORT PUSHED RANK (cr=3269 pr=0 pw=0 time=14480 us cost=709 size=660 card=15)
         0          0          0    MERGE JOIN  (cr=3269 pr=0 pw=0 time=14466 us cost=708 size=660 card=15)
         0          0          0     TABLE ACCESS BY INDEX ROWID SRI_ORDER_DETAILS (cr=3269 pr=0 pw=0 time=14464 us cost=532
     15139      15139      15139       INDEX FULL SCAN SRI_WAGD_PK (cr=2822 pr=0 pw=0 time=2305 us cost=80 size=0
         0          0          0     SORT JOIN (cr=0 pr=0 pw=0 time=0 us cost=176 size=320528 card=12328)
         0          0          0      TABLE ACCESS FULL SRI_ORDER_HISTORIC (cr=0 pr=0 pw=0 time=0 us cost=67 size=320528
********************************************************************************

Notice that the above select underwent 3,124 executions, which necessitated more than ten million logical reads only to end up producing 0 rows. There is certainly a way to reduce this enormous waste of time and energy spent on unnecessary millions of logical I/Os. My experience in troubleshooting performance issues shows me that this kind of excessive logical I/O can very often be linked to two situations:

  •  Use of an imprecise index to access a table via index rowid, throwing away the majority of the rows
  •  The first operation of the plan being the one that manipulates the biggest number of rows in the entire row source execution plan

Coincidentally, the first operation in my row source plan is an INDEX FULL SCAN, and it is the operation generating the maximum number of rows (15,139). I can now say that in order to tune this select I first need a more precise access path to the SRI_ORDER_DETAILS table. Here is how I managed to solve this issue:

1. Get a real time execution plan from memory

The desired execution plan, confirming the row source stats from the TKPROF file, is given here below:

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                     |      1 |        |      1 |00:00:00.03 |    3469 |
|*  1 |  VIEW                          |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|*  2 |   WINDOW SORT PUSHED RANK      |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|   3 |    MERGE JOIN                  |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|*  4 |     TABLE ACCESS BY INDEX ROWID| SRI_ORDER_DETAILS   |      1 |     13 |      1 |00:00:00.02 |    3269 |
|   5 |      INDEX FULL SCAN           | SRI_WAGD_PK         |      1 |  15139 |  15139 |00:00:00.01 |    2822 |
|*  6 |     SORT JOIN                  |                     |      1 |  13410 |      1 |00:00:00.01 |     200 |
|*  7 |      TABLE ACCESS FULL         | SRI_ORDER_HISTORIC  |      1 |  13410 |  13410 |00:00:00.01 |     200 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"=1)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "WH"."WAGD_ID" ORDER BY "WH"."ARRIVAL_DATE")<=1)
   4 - filter("WD"."FROM_COM_ID"=3813574)
   6 - access("WD"."WAGD_ID"="WH"."WAGD_ID")
       filter("WD"."WAGD_ID"="WH"."WAGD_ID")
   7 - filter("WH"."PDES_ID" IS NULL)

We identify a lot of data (15,139 rows) by FULL SCANNING the primary key index SRI_WAGD_PK. The rows identified via the single-column PK index (wagd_id, operation n°5) are sent back to their parent operation (n°4), which wastes a lot of effort throwing away almost the entire set of generated rows (all except one row, for the bind variable I used during my test) by applying filter n°4 (FROM_COM_ID). Notice also how the CBO has made a complete mess of the cardinality estimate for the SORT JOIN operation n°6.

Now that the symptoms have been clearly identified, the solution can be presented.

2. Create a new precise index

The solution that immediately came to mind was to create an index including the FROM_COM_ID column. I was just wondering how best to design it:

  •  Using a single-column index
  •  Or a composite index
  •  And, in the case of a composite index, should FROM_COM_ID be placed at the beginning of the index or not?

Since the single-column PK index already contains the WAGD_ID column, I tended towards a composite index starting with FROM_COM_ID, something which resembles this:

SQL> create index mho_test on SRI_ORDER_DETAILS(FROM_COM_ID, wagd_id);

Index created.

SQL> exec dbms_stats.gather_table_stats(user, 'SRI_ORDER_DETAILS', cascade => true, no_invalidate=> false);

PL/SQL procedure successfully completed.

When I re-queried again, the new row source execution plan looked like:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                           |      1 |        |      1 |00:00:00.01 |       7 |
|*  1 |  VIEW                           |                           |      1 |     13 |      1 |00:00:00.01 |       7 |
|*  2 |   WINDOW SORT PUSHED RANK       |                           |      1 |     13 |      1 |00:00:00.01 |       7 |
|   3 |    NESTED LOOPS                 |                           |      1 |        |      1 |00:00:00.01 |       7 |
|   4 |     NESTED LOOPS                |                           |      1 |     13 |      1 |00:00:00.01 |       6 |
|   5 |      TABLE ACCESS BY INDEX ROWID| SRI_ORDER_DETAILS         |      1 |     13 |      1 |00:00:00.01 |       3 |
|*  6 |       INDEX RANGE SCAN          | MHO_TEST                  |      1 |     13 |      1 |00:00:00.01 |       2 |
|*  7 |      INDEX RANGE SCAN           | SRI_WAGH_ARRIVAL_DATE_UK  |      1 |      1 |      1 |00:00:00.01 |       3 |
|*  8 |     TABLE ACCESS BY INDEX ROWID | SRI_ORDER_HISTORIC        |      1 |      1 |      1 |00:00:00.01 |       1 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"=1)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "WH"."WAGD_ID" ORDER BY "WH"."ARRIVAL_DATE")<=1)
   6 - access("WD"."FROM_COM_ID"=3813574)
   7 - access("WD"."WAGD_ID"="WH"."WAGD_ID")
   8 - filter("WH"."PDES_ID" IS NULL)

Thanks to this new precise index (MHO_TEST), we have drastically reduced the total number of logical reads (from 3,469 to 7), and the CBO estimations are now almost acceptable, which made the CBO prefer a more convenient NESTED LOOPS over the previous inadequately estimated SORT JOIN operation.

3. Conclusion

When troubleshooting a performance issue in a real-life production system, look carefully at INDEX access operations that generate a lot of rows and for which the parent table access operation discards the majority of those rows. In such a case there is certainly a way to create a precise index which will not only avoid that waste of effort but might also lead the CBO to better estimations and hence an optimal plan.


Why my query is not using my unique index


Here’s a point that I have been asked to explain and that I failed to answer. It is a simple select on a table with an equality predicate on two columns. These two columns are indexed via a two-column unique index. So it seems quite obvious that selecting from this table using these two columns should (a) return a unique row and (b) use that unique index for the purpose. However, the CBO seems to make an illogical decision regarding the choice of index access when there are no statistics on the table (num_rows and last_analyzed are null) while the indexes have zero statistics. Things will become clear with a concrete example.

As very often, I will use one of Jonathan Lewis’s table creation and population scripts.

 create table t1
   (id number,
    n_1000 number,
    n_5000 number,
    n_10000 number,
    small_vc varchar2(20),
    padding  varchar2(100)
   );

create unique index a_unq_ind on t1(id, n_1000);
create index b_non_unq_ind on t1(n_1000, n_5000);

Spot how I have managed to name my indexes (unique and non-unique) in alphabetical order, so that when this specific order matters the CBO will choose the unique index, as it starts with the letter ‘a’.

Now that I have created my table and my two indexes, I can populate the table with data without transmitting statistics to the indexes.

 insert into t1
  with generator as (
   select   --+ materialize
    rownum id
   from dual
  connect by
  rownum <= 10000
)
select
    rownum                    id,
    mod(rownum,1000) n_1000,
    mod(rownum,5000) n_5000,
    mod(rownum,10000) n_10000,
    lpad(rownum,10,'0')       small_vc,
    rpad('x',100)             padding
from
    generator        v1,
    generator        v2
where
rownum <= 100000
;

commit;

Before running my select, let me show you the actual table and index statistics:


SQL> select table_name, num_rows, last_analyzed from user_tables where table_name = 'T1';

TABLE_NAME   NUM_ROWS LAST_ANALYZED
---------- ---------- -----------------
T1

SQL> select index_name, num_rows, clustering_factor from user_indexes where table_name = 'T1';

INDEX_NAME        NUM_ROWS CLUSTERING_FACTOR
--------------- ---------- -----------------
B_NON_UNQ_IND            0                 0
A_UNQ_IND                0                 0

And now my desired select and its corresponding execution plan (statistics_level has been set to all beforehand):

SQL> select * from t1
     where id = 1
     and   n_1000 = 1;

ID     N_1000     N_5000    N_10000 SMALL_VC             PADDING
---------- ---------- ---------- ---------- -------------------- -----------
1          1          1          1 0000000001           x

SQL_ID  29tnq7b69swdr, child number 0
-------------------------------------
select * from t1  where id = 1  and   n_1000 = 1

Plan hash value: 3790258116

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |      1 |        |      1 |00:00:00.01 |     103 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1            |      1 |     15 |      1 |00:00:00.01 |     103 |
|*  2 |   INDEX RANGE SCAN                  | B_NON_UNQ_IND |      1 |    615 |    100 |00:00:00.01 |       3 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID"=1)
2 - access("N_1000"=1)

It is clear, by comparing the E-Rows and A-Rows columns, that the statistics do not reflect reality.

And when I force the use of the unique index, the CBO seems to make good estimations and generates fewer buffer gets:

SQL> select /*+ index( t1 A_UNQ_IND ) */ * from t1  where id = 1  and   n_1000= 1;

ID     N_1000     N_5000    N_10000 SMALL_VC             PADDING
---------- ---------- ---------- ---------- --------------------
1          1          1          1 0000000001           x

SQL_ID  bt11jwur90xg0, child number 0
-------------------------------------
select /*+ index( t1 A_UNQ_IND ) */ * from t1  where id = 1  and n_1000= 1;

Plan hash value: 3585360496
---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |      1 |00:00:00.01 |       3 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1        |      1 |      1 |      1 |00:00:00.01 |       3 |
|*  2 |   INDEX UNIQUE SCAN         | A_UNQ_IND |      1 |      1 |      1 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ID"=1 AND "N_1000"=1)

While this unique index execution plan seems optimal, with correct estimations and fewer buffer gets, it is nevertheless not the index the CBO desires to use.

Anyway, you are going to object that we would never run a real-life application without correct statistics gathered. Believe me, if I am writing about this case here it is because it happened in a real-life production application. Having said that, let’s collect statistics and see the new reaction of the CBO:

SQL> exec dbms_stats.gather_table_stats(user ,'t1');

PL/SQL procedure successfully completed.

SQL> select * from t1
where id = 1
and   n_1000 = 1;

ID     N_1000     N_5000    N_10000 SMALL_VC             PADDING
---------------------------------------------------------------------------------------
1          1          1          1 0000000001           x

SQL_ID  29tnq7b69swdr, child number 1
-------------------------------------
select * from t1  where id = 1  and   n_1000 = 1

Plan hash value: 3585360496

---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |      1 |00:00:00.01 |       3 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1        |      1 |      1 |      1 |00:00:00.01 |       3 |
|*  2 |   INDEX UNIQUE SCAN         | A_UNQ_IND |      1 |      1 |      1 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ID"=1 AND "N_1000"=1)

Note
-----
- statistics feedback used for this statement

After having collected statistics, the CBO now chooses the right index.

By the way, I am doing this test case on a 12c database (you might have already guessed that from the presence of the word BATCHED in the execution plan), and seeing this “statistics feedback used” note pushed me to flush the shared pool and re-run my query, in order to be sure that the use of the unique index is not due to the statistics feedback feature:

SQL> alter system flush shared_pool;

System altered.

SQL> select * from t1
where id = 1
and   n_1000 = 1;

ID     N_1000     N_5000    N_10000 SMALL_VC             PADDING
--------- ---------- ---------- ---------- -------------------- ---------
1          1          1          1 0000000001           x

SQL_ID  29tnq7b69swdr, child number 0
-------------------------------------
select * from t1  where id = 1  and   n_1000 = 1

Plan hash value: 3585360496
---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |      1 |00:00:00.01 |       3 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1        |      1 |      1 |      1 |00:00:00.01 |       3 |
|*  2 |   INDEX UNIQUE SCAN         | A_UNQ_IND |      1 |      1 |      1 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ID"=1 AND "N_1000"=1)

That is the whole story. If you want to learn more about this particular situation then you should read the related OTN thread.

Bottom line: always make sure you have adequate statistics, so that the CBO has every chance to produce an adequate plan
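A quick way to keep an eye on that (a sketch using the standard user_tab_statistics dictionary view, available from 11g onwards) is to look for tables whose statistics are missing or flagged stale:

```sql
-- Tables with missing or stale optimizer statistics; these are
-- candidates for a dbms_stats gathering run.
select table_name, last_analyzed, stale_stats
from   user_tab_statistics
where  stale_stats = 'YES' or last_analyzed is null;
```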

PS: I started out by trying to explain why the CBO didn’t choose the unique index, and while writing this article I ended up with two questions

1: why didn’t dynamic sampling occur? Martin Press gave a clue that only needs to be tested

2: when the CBO uses the index range scan, the table is accessed via the new 12c TABLE ACCESS BY INDEX ROWID BATCHED operation, which has been explained by Timur Akhmadeev, but when it uses the unique index scan the CBO opts for the classical TABLE ACCESS BY INDEX ROWID


Index design


Here is a very brief discussion I had with one of my colleagues about index design

Colleague: what kind of index would you suggest to cover the following query?

SELECT
     rowid
    ,a.*
FROM  message_out a
WHERE sms_status    in (700, 707)
AND  (scheduled_time is null
      OR scheduled_time   <= :1)
AND   provider_id     in (0,0)
ORDER BY
      priority_level desc,
      creation_time asc;

----------------------------------------------------------------------------
| Id | Operation         | Name        | Rows | Bytes |TempSpc| Cost (%CPU)|
----------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |             | 5529 | 1376K |       | 4769 (1)   |
| 1  | SORT ORDER BY     |             | 5529 | 1376K | 1856K | 4769 (1)   |
|* 2 |  TABLE ACCESS FULL| MESSAGE_OUT | 5529 | 1376K |       | 4462 (1)   |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(("SMS_STATUS"=700 OR "SMS_STATUS"=707)
AND ("SCHEDULED_TIME" IS NULL OR "SCHEDULED_TIME"<=:1)
AND "PROVIDER_ID"=0)

Me: and what have you come up with so far?

Colleague: here are my suggested index and the related execution plan

CREATE INDEX virtual_index ON MESSAGE_OUT(sms_status,scheduled_time,provider_id) ;

---------------------------------------------------------------------------------------
| Id | Operation                   | Name          | Rows | Bytes |TempSpc| Cost (%CPU)|
---------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT            |               | 5529 | 1376K |       | 446 (1)    |
| 1  | SORT ORDER BY               |               | 5529 | 1376K | 1856K | 446 (1)    |
| 2  | INLIST ITERATOR             |               |      |       |       |            |
|* 3 |  TABLE ACCESS BY INDEX ROWID| MESSAGE_OUT   | 5529 | 1376K |       | 140 (0)    |
|* 4 |   INDEX RANGE SCAN          | VIRTUAL_INDEX | 5529 |       |       | 6 (0)      |
---------------------------------------------------------------------------------------

Me: I would not have created that index

Me: here is the index I would have created (after several questions regarding the data distribution, the table data volume, the use of bind variables, etc.)

create index mho_ind on MESSAGE_OUT (sms_status, provider_id, scheduled_time);

Me: and if sms_status contains repetitive values then I would also have added the COMPRESS option to that index creation
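Such a creation would look like the following (a sketch; the prefix length of 2 compresses on the two leading columns and is an assumption to be validated against the real data distribution):

```sql
-- Compress on the two leading columns (sms_status, provider_id),
-- which are assumed to contain many repeated values.
create index mho_ind on message_out (sms_status, provider_id, scheduled_time)
  compress 2;
```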

Colleague: there is no difference in the execution plan either by using my index or your index

------------------------------------------------------------------------------------------
| Id | Operation                     | Name          | Rows | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT              |               | 5529 | 1376K |       | 446 (1)    |
| 1  | SORT ORDER BY                 |               | 5529 | 1376K | 1856K | 446 (1)    |
| 2  |  INLIST ITERATOR              |               |      |       |       |            |
|* 3 |   TABLE ACCESS BY INDEX ROWID | MESSAGE_OUT   | 5529 | 1376K |       | 140 (0)    |
|* 4 |    INDEX RANGE SCAN           | VIRTUAL_INDEX | 5529 |       |       | 6 (0)      |
------------------------------------------------------------------------------------------

Me: no, it is not the same plan. Please always consider the predicate part

Me: what is the predicate part of the plan using your index

Colleague: this is my index predicate part


Predicate Information (identified by operation id):
 ---------------------------------------------------
 3 - filter("SCHEDULED_TIME" IS NULL OR "SCHEDULED_TIME"<=:1)
 4 - access(("SMS_STATUS"=700 OR "SMS_STATUS"=707) AND "PROVIDER_ID"=0)
     filter("PROVIDER_ID"=0) --> additional filter operation

Colleague: and this is your index predicate part

Predicate Information (identified by operation id):
---------------------------------------------------
 3 - filter("SCHEDULED_TIME" IS NULL OR "SCHEDULED_TIME"<=:1)
 4 - access(("SMS_STATUS"=700 OR "SMS_STATUS"=707) AND "PROVIDER_ID"=0)
--> no additional filter operation

Me: and have you spotted the difference, or not yet?

Colleague: no, same plan, same cost and same execution time

Me: there is a fundamental difference between your plan and mine. In your plan there is a double operation on your engineered index: “ACCESS + FILTER”, while my engineered index needs only one precise operation: “ACCESS”

Me: and when it comes to performance you should always prefer a precise index ACCESS operation to that double ACCESS and FILTER operation.

Me: your engineered index has a second column on which an inequality predicate is applied

SCHEDULED_TIME <= :1

You should always start your index with the columns on which an equality predicate is applied. In my case, I put the SCHEDULED_TIME column at the trailing edge of my index, and by doing so I avoided a costly filter operation on my index, while your engineered index was subject to that costly filter operation

If you want to test this behaviour then below is an example to play with. I hope you will enjoy it

SQL> create table t1
     (id number,
     n_1000 number,
     n_5000 number,
     n_10000 number,
     small_vc varchar2(20),
     padding varchar2(100)
     );

Table created.

SQL> insert into t1
  with generator as (
  select --+ materialize
  rownum id
  from dual
  connect by
  rownum <= 10000
  )
  select
    rownum id,
    mod(rownum,1000) n_1000,
    mod(rownum,5000) n_5000,
    mod(rownum,10000) n_10000,
    lpad(rownum,10,'0') small_vc,
    rpad('x',100) padding
  from
    generator v1,
    generator v2
  where
  rownum <= 100000
  ;

SQL> create index my_ind on t1(id, n_5000, n_1000);

Index created.

SQL> create index colleague_ind on t1(id, n_1000, n_5000);

Index created.

SQL> alter index my_ind invisible;

Index altered.

SQL> exec dbms_stats.gather_table_stats(user, 't1');

PL/SQL procedure successfully completed.

SQL> select
        a.*
  from t1 a
  where id in (112,120)
  and (n_1000 is null
  or n_1000 <= 3000)
  and n_5000 in (120);

Statistics
------------------------------------------------------
 65 recursive calls
 0 db block gets
 95 consistent gets ---> spot this
 0 physical reads
 0 redo size
 1005 bytes sent via SQL*Net to client
 543 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 6 sorts (memory)
 0 sorts (disk)
 1 rows processed

SQL_ID 7d6ag1m1ztpgr, child number 1
-------------------------------------
select a.* from t1 a where id in (112,120) and (n_1000 is null
or n_1000 <= 3000) and n_5000 in (120)

Plan hash value: 3644584748
-------------------------------------------------------------------------------------------
| Id  | Operation                            | Name          | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |               |       |       |     4 (100)|
|   1 |  INLIST ITERATOR                     |               |       |       |            |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1            |     2 |   258 |     4   (0)|
|*  3 |    INDEX RANGE SCAN                  | COLLEAGUE_IND |     2 |       |     3   (0)|
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 2 - filter(("N_1000"<=3000 OR "N_1000" IS NULL))
 3 - access((("ID"=112 OR "ID"=120)) AND "N_5000"=120)
 filter("N_5000"=120) ---> spot this

SQL> alter index colleague_ind invisible;

Index altered.

SQL> alter index my_ind visible;

Index altered.

SQL> select
       a.*
  from t1 a
  where id in (112,120)
  and (n_1000 is null
  or n_1000 <= 3000)
  and n_5000 in (120);

Statistics
------------------------------------------------------
 33 recursive calls
 0 db block gets
 49 consistent gets --> spot the reduction
 0 physical reads
 0 redo size
 1005 bytes sent via SQL*Net to client
 543 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 6 sorts (memory)
 0 sorts (disk)
 1 rows processed

SQL_ID 7d6ag1m1ztpgr, child number 1
-------------------------------------
select a.* from t1 a where id in (112,120) and (n_1000 is null
or n_1000 <= 3000) and n_5000 in (120)

Plan hash value: 4286547933
------------------------------------------------------------------------------------
| Id  | Operation                            | Name   | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |        |       |       |     4 (100)|
|   1 |  INLIST ITERATOR                     |        |       |       |            |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1     |     2 |   258 |     4   (0)|
|*  3 |    INDEX RANGE SCAN                  | MY_IND |     2 |       |     3   (0)|
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 2 - filter(("N_1000"<=3000 OR "N_1000" IS NULL))
 3 - access((("ID"=112 OR "ID"=120)) AND "N_5000"=120)
     --> spot the absence of filter on index

NLS_SORT and SPM reproducibility


I wrote a blog article whose main aim was to show which optimizer parameters the CBO will use to reproduce a SPM baseline plan: the current environment parameters, or the parameters in force during the SPM plan baseline capture? Having tested in that article only the influence of the optimizer_mode (all_rows versus first_rows), I ended up with the conclusion that the CBO uses the optimizer_mode parameter stored during the SPM plan capture. I also added a careful warning that this is not a conclusion one can extend to other CBO parameters without testing, particularly as I had already seen an OTN thread dealing with the inability to reproduce a SPM plan because of a change in the _optim_peek_user_binds hidden parameter. Then, a post on an Oracle mailing list about the non-reproducibility of a SPM baseline following an upgrade from 10gR2 to 11gR2 prompted me to investigate the influence the NLS_SORT parameter can have on the reproducibility of a SPM baseline plan. Below are my investigations and findings:

First, the model

CREATE TABLE t
(c1 VARCHAR2(64), c2 CHAR(15), d1 DATE);

INSERT INTO t
SELECT
    mod(ABS(dbms_random.random),3)+ 1||chr(ascii('Y')) ,
    dbms_random.string('L',dbms_random.value(1,5))||rownum ,
    to_date(TO_CHAR(to_date('01/01/1980','dd/mm/yyyy'),'J') + TRUNC(dbms_random.value(1,11280)),'J')
FROM dual
CONNECT BY level <= 2e6;

ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (c1,c2) USING INDEX;

EXEC dbms_stats.gather_table_stats (USER, 't', CASCADE => true, method_opt => 'FOR ALL COLUMNS SIZE 1');

Second, the preliminaries

SQL> select * from dba_sql_plan_baselines;

no rows selected  --  no baseline yet

SQL> show parameter nls_sort

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------
nls_sort                             string      BINARY

SQL> alter session set optimizer_capture_sql_plan_baselines = TRUE;

SQL> SELECT  c1
FROM t
GROUP BY c1
ORDER BY c1 ASC NULLS LAST;

C1
------
1Y
2Y
3Y

SQL> /

C1
------
1Y
2Y
3Y

SQL> alter session set optimizer_capture_sql_plan_baselines = FALSE;

SQL> select plan_name from dba_sql_plan_baselines;

PLAN_NAME
------------------------------
SQL_PLAN_90sg67694zwyj641607ca  -- one SPM plan baseline

So far I have engineered a model against which I executed a query returning data in a certain order under the classical binary NLS_SORT setting. I then captured a SPM baseline plan for it, so that any “resembling” query will use that SPM plan. That is plan stability. This SPM plan looks like:

SQL> select * from table(dbms_xplan.display_sql_plan_baseline(plan_name => 'SQL_PLAN_90sg67694zwyj641607ca', format => 'ADVANCED'));

--------------------------------------------------------------------------------
SQL handle: SQL_9061e639924ff3d1
SQL text: SELECT  c1     FROM t     GROUP BY c1    ORDER BY c1 ASC NULLS LAST
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_90sg67694zwyj641607ca         Plan id: 1679165386
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Outline Data from SMB:
/*+
BEGIN_OUTLINE_DATA
INDEX(@"SEL$1" "T"@"SEL$1" ("T"."C1" "T"."C2"))
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.2.0.3')
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/
--------------------------------------------------------------------------------
Plan hash value: 2111031280
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     3 |     9 |  2069   (5)| 00:00:06 |
|   1 |  SORT GROUP BY NOSORT|      |     3 |     9 |  2069   (5)| 00:00:06 |
|   2 |   INDEX FULL SCAN    | T_PK |  2000K|  5859K|  2069   (5)| 00:00:06 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$1
2 - SEL$1 / T@SEL$1

Outline Data
------------
/*+
BEGIN_OUTLINE_DATA
INDEX(@"SEL$1" "T"@"SEL$1" ("T"."C1" "T"."C2"))
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.2.0.3')
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) "C1"[VARCHAR2,256]
2 - "C1"[VARCHAR2,256]

An index full scan with which the CBO has avoided the order by sort operation.

Now, for my “tranquility”,  I will execute my query and check if the SPM plan is used or not.

SQL> SELECT  c1
     FROM t
     GROUP BY c1
     ORDER BY c1 ASC NULLS LAST;

C1
----
1Y
2Y
3Y

SQL_ID  28dazsm20sbw6, child number 2
-------------------------------------
SELECT  c1     FROM t     GROUP BY c1    ORDER BY c1 ASC NULLS LAST
Plan hash value: 2111031280
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |  2069 (100)|          |
|   1 |  SORT GROUP BY NOSORT|      |     3 |     9 |  2069   (5)| 00:00:06 |
|   2 |   INDEX FULL SCAN    | T_PK |  2000K|  5859K|  2069   (5)| 00:00:06 |
-----------------------------------------------------------------------------

Note
-----
- SQL plan baseline SQL_PLAN_90sg67694zwyj641607ca used for this statement

Great it is used.

Third, the issue presentation and discussion

What happens if, in my current environment, I change the NLS_SORT parameter?

SQL> alter session set nls_sort=french; -- I altered my current environment when compared to the SPM capture time environment

SQL> SELECT  c1
FROM t
GROUP BY c1
ORDER BY c1 ASC NULLS LAST;

C1
----
1Y
2Y
3Y

SQL_ID  28dazsm20sbw6, child number 2
-------------------------------------
SELECT  c1     FROM t     GROUP BY c1    ORDER BY c1 ASC NULLS LAST

Plan hash value: 1760210272
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |       |       |  2451 (100)|          |
|   1 |  SORT ORDER BY        |      |     3 |     9 |  2451  (20)| 00:00:07 |
|   2 |   SORT GROUP BY NOSORT|      |     3 |     9 |  2451  (20)| 00:00:07 |
|   3 |    INDEX FULL SCAN    | T_PK |  2000K|  5859K|  2069   (5)| 00:00:06 |
------------------------------------------------------------------------------

See how my query is no longer using the SPM baseline plan. It is using a new plan in which the order by sort operation has not been eliminated by the CBO. If my query is not using the SPM plan, it is because the CBO was not able to reproduce the stored baseline plan under the changed nls_sort parameter. If this means anything, it means that when trying to reproduce the SPM plan the CBO uses the current nls_sort value.
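The reason becomes clear once you remember that under a linguistic sort Oracle effectively orders by NLSSORT(c1), a byte string that the binary-ordered index T_PK cannot supply pre-sorted. A minimal sketch of the difference, assuming the same table t:

```sql
-- Under NLS_SORT=BINARY the index order matches the ORDER BY order,
-- so the sort can be skipped.
select c1 from t order by c1;

-- Under NLS_SORT=FRENCH the ORDER BY is equivalent to ordering by the
-- linguistic sort key below, which the binary-ordered index cannot
-- serve, hence the extra SORT ORDER BY step in the plan.
select c1 from t order by nlssort(c1, 'NLS_SORT=FRENCH');
```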

The new CBO plan has been added to the SPM baseline for an eventual evolution

SQL> select plan_name from dba_sql_plan_baselines;

PLAN_NAME
------------------------------
SQL_PLAN_90sg67694zwyj297df088
SQL_PLAN_90sg67694zwyj641607ca

If I alter my current nls_sort parameter again, so that it matches the one stored against the baseline plan, then my query goes back to its initial use of the SPM plan

 SQL> alter session set nls_sort=binary;

 SQL> SELECT  c1
      FROM t
      GROUP BY c1
      ORDER BY c1 ASC NULLS LAST;

SQL_ID  5hrfv0352fzdr, child number 0
-------------------------------------
SELECT  c1 FROM t GROUP BY c1     ORDER BY c1 ASC NULLS LAST

Plan hash value: 2111031280
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |  2069 (100)|          |
|   1 |  SORT GROUP BY NOSORT|      |     3 |     9 |  2069   (5)| 00:00:06 |
|   2 |   INDEX FULL SCAN    | T_PK |  2000K|  5859K|  2069   (5)| 00:00:06 |
-----------------------------------------------------------------------------

Note
-----
- SQL plan baseline SQL_PLAN_90sg67694zwyj641607ca used for this statement

Bottom line: in contrast to the optimizer_mode parameter, when it comes to the NLS_SORT (and NLS_LANG) parameters the CBO seems to use the current environment value (and not the one that existed at baseline capture time) to reproduce the stored SPM plan baseline.
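So before relying on an NLS-sensitive baseline, it is worth checking what the current session is actually running with (a sketch using the standard nls_session_parameters view):

```sql
-- Show the sort-related NLS settings of the current session; if NLS_SORT
-- differs from the value in force at baseline capture time, an ORDER BY
-- plan stored in the baseline may no longer be reproducible.
select parameter, value
from   nls_session_parameters
where  parameter in ('NLS_SORT', 'NLS_COMP', 'NLS_LANGUAGE');
```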


parallel insert


Do you know that a direct path insert can be silently ignored when the inserted table has an insert trigger?

SQL> create table t1 as select rownum n1 from dual connect by level <=1e5;

SQL> create table t2 as select * from t1 where 0 = 1;

SQL> create or replace trigger t2_trg
      before insert on t2
      for each row
    begin
       null;
    end;
   /

SQL> insert /*+ append */ into t2 select * from t1;

100000 rows created.

SQL> select count(1) from t2;

COUNT(1)
---------
100000

SQL> alter trigger t2_trg disable;

SQL> insert /*+ append */ into t2 select * from t1;

100000 rows created.

SQL> select count(1) from t2;

select count(1) from t2
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

It is only when the trigger has been disabled that the insert has been done via a direct path load.

But are you aware that a parallel insert can also be silently ignored in the presence of an insert trigger?

SQL> alter session enable parallel dml;

SQL> insert /*+ parallel(t2) */ into t2 select * from t1;

100000 rows created.

SQL_ID  5npb49pus3wtr, child number 0
-------------------------------------
insert /*+ parallel(t2) */ into t2 select * from t1

Plan hash value: 2315600204
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT        |          |       |       |    47 (100)|          |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10001 |   100K|  1269K|    47   (7)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT       |          |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE          |          |   100K|  1269K|    47   (7)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND ROUND-ROBIN| :TQ10000 |   100K|  1269K|    47   (7)| 00:00:01 |        | S->P | RND-ROBIN  |
|   6 |       TABLE ACCESS FULL | T1       |   100K|  1269K|    47   (7)| 00:00:01 |        |      |            |
-----------------------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)

SQL> select count(1) from t2;

select count(1) from t2
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

SQL> rollback;

SQL> alter trigger t2_trg enable;

SQL> insert /*+ parallel(t2) */ into t2 select * from t1;

100000 rows created.

SQL_ID  5npb49pus3wtr, child number 0
-------------------------------------
insert /*+ parallel(t2) */ into t2 select * from t1

Plan hash value: 3617692013
---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |       |       |    47 (100)|          |
|   1 |  LOAD TABLE CONVENTIONAL |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | T1   |   100K|  1269K|    47   (7)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)

SQL> select count(1) from t2;

COUNT(1)
----------
300000

When the trigger was disabled the insert went through a parallel load path, in contrast to when the same trigger was enabled, where the insert went through a classical conventional load.

A classical conventional load, did I say?

Yes. If you look closely at the case where the parallel insert has been honored, you will realize that not only did the insert run in parallel, but it was also done via a direct path load.

Tell me: does this mean that a parallel insert (when honored) is always accomplished via a direct path load?

Yes it is.
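A convenient way to verify which load path was taken, instead of provoking the ORA-12838 error with a follow-up select, is to look at the load operation in the cursor's plan (a sketch; the sql_id below is the one from this test and is an assumption in your own environment):

```sql
-- LOAD AS SELECT means direct path; LOAD TABLE CONVENTIONAL means the
-- direct path load was silently ignored.
select id, operation, object_name
from   v$sql_plan
where  sql_id = '5npb49pus3wtr'
and    operation like 'LOAD%';
```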

So is the append hint in the following instruction redundant?


SQL> insert /*+ append parallel(t2) */ into t2 select * from t1;

No, it is not always redundant.

Simply because a parallel run and a direct path load share several common blocking reasons, such as the presence of insert triggers or foreign key integrity constraints (yes, I have tested this case as well), but each also has its own. While the direct path load is not a CBO decision, the parallel run is. So what do you think will happen when the CBO decides that the parallel insert is not the best path to follow and you have already removed the append hint from your insert statement? The direct path, even when possible, will not be followed.

If I simulate a blocked parallel run by disabling parallel dml, and remove the append hint from my insert statement, then the direct path load does not happen

SQL> alter session disable parallel dml;

SQL> insert /*+ parallel(t2) */ into t2 select * from t1;

100000 rows created.

SQL> select count(1) from t2;

COUNT(1)
----------
100000

However if, in the same situation, I add the append hint to my insert statement, then the insert follows a direct path load, as demonstrated below:

SQL> insert /*+ parallel(t2) append */ into t2 select * from t1;

100000 rows created.

SQL> select count(1) from t2;
select count(1) from t2
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

This is the otn thread that prompted this blog post

Bottom line: I learnt yesterday that a parallel insert, when honored by the CBO, will use a direct path load, which is not always the best solution. I learnt also that a parallel insert suffers from the same blocking reasons as the direct path load does: the presence of insert triggers and foreign key integrity constraints. And I learnt that to secure your direct path load you should add the append hint to your insert statement even when this statement already contains a parallel hint
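One more sanity check worth adding to the toolbox (a sketch using the standard v$pq_sesstat view) is to confirm, right after the insert, whether the DML was actually parallelized in the current session:

```sql
-- LAST_QUERY is incremented when a DML statement of the current session
-- actually runs in parallel; 0 means it silently fell back to serial.
select statistic, last_query, session_total
from   v$pq_sesstat
where  statistic = 'DML Parallelized';
```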



SPM reproducibility: changing the _optimizer_skip_scan_enabled value


I have had several articles about SQL Plan Management stability and evolution. I have also recently been focusing my attention on the criteria, scenarios and circumstances that prevent an enabled and accepted SPM plan from being reproduced. Those reasons are summarized below:

  1. Changing the index name
  2. Changing the index type (with particular situations for function based and reverse indexes)
  3. Changing the index leading column(s)

Then I embarked on investigating how a SPM plan reacts, at query execution time, to a change in one of the environment parameters that were in force during the SPM plan capture. I have investigated two situations in this context

  1. One for a change in an optimizer parameter (optimizer mode) value
  2. And the other one for a NLS parameter (nls_sort) change

For the optimizer mode I came to the conclusion that a change of that mode in the current environment has no effect on the reproducibility of the SPM plan, because the plan is reproduced using the optimizer mode stored against the SPM plan.

However, a nls_sort parameter change turns out to be a serious threat to SPM reproducibility, because in that case the CBO uses the current nls_sort value, which may make it impossible to reproduce the stored SPM plan.

Despite the above investigations, there is one thing whose effect on the reproducibility of a SPM plan I still wanted to investigate: a change in the value of an undocumented optimizer parameter. This post is for that purpose. Follow me, please

 create table t1
    (col1  number
    ,col2  varchar2(50)
    ,flag  varchar2(2));

 insert into t1
    select rownum
          ,lpad('X',50,'X')
          ,case when rownum = 1
            then 'Y1'
               when rownum = 2
            then 'Y2'
               when mod(rownum,2) = 0
            then 'N1'
            else 'N2'
           end
    from   dual
connect by rownum <= 100000;

create index i1 on t1(col1,flag);

exec dbms_stats.gather_table_stats(user ,'t1');

A simple model with an index which, as we will see later, has been engineered so that it will be skip scanned. The next lines of code capture a SPM plan for a given query using default optimizer parameter values

SQL> var n varchar2(2);
SQL> exec :n := 'Y1'

SQL> alter session set optimizer_capture_sql_plan_baselines=TRUE;

SQL> select count(1) from t1 where flag = :n;

SQL> /

SQL> alter session set optimizer_capture_sql_plan_baselines=FALSE;

SQL> select count(1) from t1 where flag = :n;

SQL_ID  5k94675mwqz5j, child number 0
-------------------------------------
select count(1) from t1 where flag = :n

Plan hash value: 2775586896
-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    67 (100)|          |
|   1 |  SORT AGGREGATE  |      |     1 |     3 |            |          |
|*  2 |   INDEX SKIP SCAN| I1   | 25000 | 75000 |    67   (2)| 00:00:01 |
-------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$1
2 - SEL$1 / T1@SEL$1

Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
DB_VERSION('11.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_SS(@"SEL$1" "T1"@"SEL$1" ("T1"."COL1" "T1"."FLAG"))
END_OUTLINE_DATA
*/

Peeked Binds (identified by position):
--------------------------------------
1 - :N (VARCHAR2(30), CSID=873): 'Y1'

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("FLAG"=:N)
   filter("FLAG"=:N)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) COUNT(*)[22]

Note
-----
- SQL plan baseline SQL_PLAN_4njm93y44xuu38bc11ac1 used for this statement

A simple query constrained by a SPM plan using index skip scan access.

Having presented the model, I can now start my real investigation, which can be summarized by this simple question: if I alter my current environment so as to disable the use of index skip scans, will my query still use the SPM plan? Or, in clearer words: will the SPM plan be reproducible under such a change of an undocumented parameter value?

And the answer is : see below

SQL> alter session set "_optimizer_skip_scan_enabled"=FALSE;

SQL> alter session set optimizer_use_sql_plan_baselines = FALSE;

I have altered the hidden parameter and put the use of SQL plan baselines on hold for a moment, in order to show you that under these circumstances the CBO comes up with a new plan, as shown below:

SQL> select count(1) from t1 where flag = :n;

SQL_ID  5k94675mwqz5j, child number 3
-------------------------------------
select count(1) from t1 where flag = :n

Plan hash value: 129980005
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |       |       |    71 (100)|          |
|   1 |  SORT AGGREGATE       |      |     1 |     3 |            |          |
|*  2 |   INDEX FAST FULL SCAN| I1   | 25000 | 75000 |    71   (6)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("FLAG"=:N)

There is now a new plan using an INDEX FAST FULL SCAN since I have disabled the use of INDEX SKIP SCAN. It is a clear indication that, with this undocumented parameter changed, the CBO is not able to reproduce the index skip scan plan. Let’s see then what happens to our query once the SQL plan baseline is re-enabled.

SQL> alter session set optimizer_use_sql_plan_baselines = TRUE;

SQL> select count(1) from t1 where flag = :n;

---------------------------------------------------
SQL_ID  5k94675mwqz5j, child number 5
-------------------------------------
select count(1) from t1 where flag = :n

Plan hash value: 2775586896
-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    67 (100)|          |
|   1 |  SORT AGGREGATE  |      |     1 |     3 |            |          |
|*  2 |   INDEX SKIP SCAN| I1   | 25000 | 75000 |    67   (2)| 00:00:01 |
-------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$1
2 - SEL$1 / T1@SEL$1

Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
DB_VERSION('11.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_SS(@"SEL$1" "T1"@"SEL$1" ("T1"."COL1" "T1"."FLAG"))
END_OUTLINE_DATA
*/

Peeked Binds (identified by position):
--------------------------------------
1 - :N (VARCHAR2(30), CSID=873): 'Y1'

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("FLAG"=:N)
   filter("FLAG"=:N)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) COUNT(*)[22]

Note
-----
- SQL plan baseline SQL_PLAN_4njm93y44xuu38bc11ac1 used for this statement

The CBO has been able to reproduce the SPM INDEX SKIP SCAN plan.

Bottom line: the CBO was able to reproduce the SPM INDEX SKIP SCAN plan even though _optimizer_skip_scan_enabled was disabled in the query execution environment. This means the CBO uses the value of that hidden parameter stored during the SPM plan capture, and not the value the parameter has in the current execution environment.
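
A quick way to check this for yourself is to display the outline stored with the baseline: the INDEX_SS directive is recorded inside it, and it is that stored outline the CBO replays, whatever the session currently thinks about skip scans. A sketch (the sql_handle value below is a hypothetical placeholder; take yours from dba_sql_plan_baselines):

```sql
-- Display the outline stored with the captured baseline
-- (the sql_handle value is a placeholder, not taken from this article)
select t.plan_table_output
from   table(dbms_xplan.display_sql_plan_baseline(
               sql_handle => 'SQL_48d4d43f2113eb43',
               format     => 'basic +outline')) t;
```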


Indexing Foreign Key: the proof of the FK index use


If you have been looking for material proof that Oracle uses the foreign key index created on the child table while deleting from the parent table, here it is:

drop table t2;
drop table t1;

create table t1
(col1 number primary key);

create table t2
(col1    number primary key
,status  varchar2(12) not null
,col2    number
,col2v   number generated always as (case when status = 'ACTIVE' then col2 end) VIRTUAL
,constraint t2_fk foreign key (col2v) references t1(col1)
,constraint t2_ck check (status in ('ACTIVE','INACTIVE') and (status = 'INACTIVE' or col2 is not null))
);

create index t2_ind_fk on t2(col2v);

insert into t1
 select rownum
from dual
connect by level <=100;

commit;

insert into t2 (col1, status, col2) values (1, 'ACTIVE',50);

alter session set skip_unusable_indexes = false;

alter index t2_ind_fk unusable; -- implicit commit

I have created a parent-child pair of tables (t1 and t2) and an index on the foreign key of the child table t2, set this index to an unusable state, and changed the skip_unusable_indexes parameter from its default to false so that unusable indexes will not be skipped.

Now, in the next PL/SQL anonymous block, I am going to simulate a delete from the parent table using an autonomous transaction in order to mimic a different session (in fact, a different transaction within the same session):

declare
 pragma autonomous_transaction;
begin
 delete from t1 -- deleting from the parent table
 where col1 = 99;
 commit;
end;
/

declare
*
ERROR at line 1:
ORA-01502: index 'XXX.T2_IND_FK' or partition of such index is in unusable state
ORA-06512: at line 4

See how deleting from the parent table (t1) triggered an error on the index of the foreign key constraint created on the child table (t2). This is a simple way to show the mechanism Oracle uses to avoid a child table lock (and, eventually, a deadlock situation): it uses the index on the foreign key.


Window SORT and Window BUFFER


I used to think that I would see a WINDOW SORT operation in an execution plan for every over() clause with its own combination of partition by and order by options. For example:

SQL> select
        id
       ,n_5000
       ,lead(id) over (partition by n_5000 order by id)
     from t1
     where n_5000 = 1778;

---------------------------------------------------------------------
| Id  | Operation             | Name   | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------
|   0 | SELECT STATEMENT      |        |       |       |    94 (100)|
|   1 |  WINDOW SORT          |        |    22 |   572 |    94   (6)|
|*  2 |   INDEX FAST FULL SCAN| MY_IND |    22 |   572 |    93   (5)|
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N_5000"=1778)

But it suffices to (1) have an index that starts with the order by column (id) or with the partition by column (n_5000) and (2) add a predicate on that order by or partition by column to the original query, and the WINDOW SORT is transformed into a less costly WINDOW BUFFER.

SQL> select
        id
       ,n_5000
       ,lead(id) over (partition by n_5000 order by id)
    from t1
    where id = 1778;

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |       |       |     3 (100)|          |
|   1 |  WINDOW BUFFER    |        |     1 |    26 |     3  (34)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN| MY_IND |     1 |    26 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ID"=1778)

That’s pretty straightforward. The index MY_IND (id, n_5000, n_10000) has been used to avoid the WINDOW SORT operation. This changes my initial thinking to: “I will see a WINDOW SORT operation in an execution plan for every over() clause with its own combination of partition by and order by options, unless the CBO finds a suitable index that permits bypassing the SORT operation”.

But does this mean that I will never see a parent WINDOW BUFFER operation without a child index scan operation?

SQL> select
       id
      ,n_5000
      ,padding
      ,sum(id) over (partition by n_5000)
     from t1
     where n_5000 = 444;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   478 (100)|          |
|   1 |  WINDOW BUFFER     |      |    20 |  2200 |   478   (2)| 00:00:02 |
|*  2 |   TABLE ACCESS FULL| T1   |    20 |  2200 |   477   (2)| 00:00:02 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N_5000"=444)

I got rid of the order by option in the over() clause and added a predicate on the partition by column: the sort disappeared, and a WINDOW BUFFER shows up above a full table scan.

And maybe I can add this:

“I will NOT see a WINDOW SORT operation in an execution plan for an over() clause that contains ONLY a partition by option and where the containing query includes the partition by column in the predicate part”.

Bottom Line: from now on, when I see an over() clause I will pay more attention to the partition by clause, to see whether its column appears in the predicate part or not. It might explain why I have a WINDOW SORT instead of a WINDOW BUFFER.

FootNote: I have to warn that the above conclusions, although they might be correct, are nevertheless drawn from a one-day experiment. As such, you should consider them with careful attention before taking them as definitively demonstrated. The fact that they have been published does not make them correct.


Redundant Indexes


I am very often warning: dropping redundant indexes in production is not 100% safe. Instead, I have always advocated paying careful attention at design time to avoid creating redundant indexes in the first place. In my professional experience I have realized that it is very often when creating indexes to cover the lock threat of unindexed foreign key constraints that developers unintentionally create redundant indexes. It has irritated me so much that I have created a script which checks whether a given foreign key is already indexed, before supplementary unwanted indexes are created that damage the DML part of the application.
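
That script is not reproduced here, but the idea behind such a check can be sketched with a data dictionary query of this kind (a simplified sketch that only matches a foreign key column against the leading column of existing indexes; a complete script must also handle multi-column foreign keys and column order):

```sql
-- Foreign key columns of the current schema that are not the leading
-- column of any existing index (simplified sketch)
select c.constraint_name, cc.table_name, cc.column_name
from   user_constraints c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    not exists (select null
                   from   user_ind_columns ic
                   where  ic.table_name      = cc.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = 1);
```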

Having said that, I still have not defined what a redundant index is.

Indexes ind_1 and ind_2 are said to be redundant when the leading columns of one index are a superset of the leading columns of the other.

For example

ind_1(a,b) and ind_2(a,b,c) are redundant because ind_2 contains ind_1.

If you are at design time, it is obvious that you should not create index ind_1. However, once in production, it is not 100% safe to drop index ind_1 without any impact. There are, for sure, occasions where the clustering factor of ind_2 is so dramatic compared to that of ind_1 that, if the latter index is dropped, the optimizer will opt for a full table scan, traumatizing the queries that were perfectly happy with the dropped redundant index.

I can also show you another type of index that people might consider redundant while it isn’t. Consider the following model, where I have created a range partitioned table (mho_date being the partition key, with 1,493 partitions) and two indexes as shown below:

desc partitioned_tab

Name                            Null?    Type
------------------------------- -------- ------------
MHO_ID                          NOT NULL NUMBER(10)
MHO_DATE                        NOT NULL DATE
MHO_CODE                        NOT NULL VARCHAR2(1)
MHO_TYP_ID                      NOT NULL NUMBER(10)

create index local_ind_1 on partitioned_tab (mho_typ_id,mho_code) local;

create index global_ind_1 on partitioned_tab (mho_typ_id);

I am going to execute a simple query against the above engineered partitioned table.

select * from partitioned_tab where mho_typ_id = 0;

Which, in the presence of the above two indexes, is honored via the following execution plan:

----------------------------------------------------------------------
Plan hash value: 3042058313
----------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name            | Rows  |Cost (%CPU)| Pstart| Pstop |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                 |  1493 | 1496   (0)|       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PARTITIONED_TAB |  1493 | 1496   (0)| ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                         | GLOBAL_IND_1    |  1493 |    4   (0)|       |       |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("MHO_TYP_ID"=0)

Statistics
-------------------------------------------------------
48    recursive calls
0     db block gets
3201  consistent gets
0     physical reads
0     redo size
51244 bytes sent via SQL*Net to client
1632  bytes received via SQL*Net from client
101   SQL*Net roundtrips to/from client
7     sorts (memory)
0     sorts (disk)
1493  rows processed

As you might already know, I am a fan of the SQLT tool developed by Carlos Sierra. And here below is what this tool said about redundant indexes in this particular case:

Redundant_indexes

It clearly suggests considering dropping the redundant index global_ind_1.

So let’s follow this advice and see what happens. Thanks to the recent Oracle releases, I will first make the index invisible (by the way, that’s a good suggestion for Carlos Sierra and Mauro Pagano: the tool could propose setting the index invisible first before considering dropping it):

alter index global_ind_1 invisible;

select * from partitioned_tab where mho_typ_id = 0;

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name            | Rows  | Bytes | Cost (%CPU)| Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                 |  1493 | 23888 |  2985   (1)|       |       |
|   1 |  PARTITION RANGE ALL                       |                 |  1493 | 23888 |  2985   (1)|     1 |  1493 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PARTITIONED_TAB |  1493 | 23888 |  2985   (1)|     1 |  1493 |
|*  3 |    INDEX RANGE SCAN                        | LOCAL_IND_1     |  1493 |       |  1492   (0)|     1 |  1493 |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("MHO_TYP_ID"=0)

Statistics
--------------------------------------------------------
48    recursive calls
0     db block gets
4588  consistent gets
0     physical reads
0     redo size
47957 bytes sent via SQL*Net to client
1632  bytes received via SQL*Net from client
101   SQL*Net roundtrips to/from client
7     sorts (memory)
0     sorts (disk)
1493  rows processed

By making the ”redundant” index invisible we went from a single smooth global index range scan with a cost of 4 to 1,493 index range scans with a cost of 1,492, at the price of an additional 1,387 logical reads.

Bottom line: you should always consider dropping (or, better, avoiding) redundant indexes at design time. Once in production, consider making them invisible first, and think carefully before dropping them.

PS: if you want to create the table, the script can be found here: partitioned table


DBMS_SCHEDULER and CALL statement


This is a brief reminder for those who intensively use the Oracle dbms_scheduler package to schedule the launch of a stored PL/SQL procedure. Recently I was investigating a wide-ranging performance problem via a 60-minute AWR snapshot, when the following action, appearing in the top SQL ordered by Gets, captured my attention:

SQL ordered by Gets

Buffer Gets  Executions  Gets per Exec   %Total  Elapsed Time (s)  %CPU  %IO  SQL Id         SQL Module      SQL Text
858,389,984           2  429,194,992.00   89.58          3,606.79  97.2  1.9  6cjn7jnzpc160  DBMS_SCHEDULER  call xxx_PA_REDPC.P_EXPORT_DA…
542,731,679      32,021       16,949.24   56.64          3,201.40  98.4   .6  4bnh2nc4shkz3  w3wp.exe        SELECT WAGD_ID FROM S1

It is not the enormous logical I/O done by this scheduled stored procedure that retained my attention; it is the appearance of the call statement in the corresponding SQL text.

Where does this come from?

Let’s get the DDL of the corresponding program

 SELECT dbms_metadata.get_ddl('PROCOBJ','XXX_PARSE_MESSAGE_PRG', 'SXXX') from dual;

Which gives this

 BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM
    (program_name         => 'XXX_PARSE_MESSAGE_PRG'
    ,program_type         => 'STORED_PROCEDURE'
    ,program_action       => 'XXX_PA_REDPC.P_EXPORT_DA.P_xxxx'
    ,number_of_arguments  => 0
    ,enabled              => FALSE
    ,comments             => NULL);
 COMMIT;
 END;
 /
 

I have arranged the generated DDL script a little for clarity.

Now things become clear.

When you define your program using

     program_type         => 'STORED_PROCEDURE'
 

Then your job will be executed using the SQL call command:

    call XXX_PA_REDPC.P_EXPORT_DA..'
 

This is in contrast to when you define your program using

  program_type         => 'PLSQL_BLOCK'
 

which has the consequence of making your job being executed using an anonymous PL/SQL block

 BEGIN
    XXX_PA_REDPC.P_EXPORT_DA..
 END;
 

And now the question: how would you prefer your scheduled stored procedure to be executed?

  1. via the  SQL call statement
  2. via the anonymous PL/SQL block

Well, after a brief search on My Oracle Support, I found a bug that seems closely related to it:

DBMS_SCHEDULER Is Suppressing NO_DATA_FOUND Exceptions for Jobs that Execute Stored Procedures (Doc ID 1331778.1)

In fact, there is one fundamental threat in opting for the call statement. Consider this:

 SQL> create or replace procedure p1 as
    begin
     insert into t values (1,2);
      raise no_data_found;
      commit;
    end;
 /
Procedure created.

SQL> select count(1) from t;   

   COUNT(1)
   ----------        
    0

I am going to call this procedure using the call statement; normally it should raise a no_data_found exception, and the data inserted into table t should be rolled back:

SQL> call p1();

Call completed.

SQL> select * from t;

        N1         N2
---------- ----------
         1          2

Despite the raising of the no_data_found exception, the inserted data has been committed and the exception ignored. This would not have happened had I executed the stored procedure via an anonymous PL/SQL block, as shown below:

SQL> truncate table t;

Table truncated.

SQL> begin
p1();
end;
/
begin
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "XXX.P1", line 4
ORA-06512: at line 2

SQL> select count(1) from t;

COUNT(1)
----------
0

So, please be informed :-)


Trouble shooting a performance issue


Here is a case of troubleshooting a performance issue, taken from a real-life production system, that I wanted to share with you. An Oracle stored procedure is continuously activated via a scheduled job. Its goal is to parse incoming XML messages before accepting them into, or rejecting them from, a production application. 80% of the XML messages are treated in an acceptable execution time. But 20% of those messages, when injected into the queue, hang and slow down the database, with a side effect on the execution time of the normal messages. My strategy was to ask for one of those problematic XML messages from the PRODUCTION instance and execute it in the TEST database with a 10046 trace. So I did, and among many observations, I stopped at the following select, for which I want to share the method I followed to troubleshoot it:

SELECT *
FROM
 ( SELECT WD.WAGD_ID,WH.WAGH_ID,WD.WAG_ID,WH.ARRIVAL_DATE,WH.DEPARTURE_DATE,
  WD.CHECK_REPERAGE ,ROW_NUMBER() OVER (PARTITION BY WH.WAGD_ID ORDER BY
  WH.ARRIVAL_DATE) RN FROM SRI_ORDER_DETAILS WD ,SRI_ORDER_HISTORIC WH
  WHERE WD.FROM_COM_ID = :B1 AND WD.WAGD_ID = WH.WAGD_ID AND WH.PDES_ID IS
  NULL ) WHERE RN = 1 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute   3124      0.28       0.27          0          0          0           0
Fetch     3124     31.79      32.05          0   10212356          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     6250     32.07      32.33          0   10212356          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 151     (recursive depth: 1)
Number of plan statistics captured: 2

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  VIEW  (cr=3269 pr=0 pw=0 time=14483 us cost=709 size=1245 card=15)
         0          0          0   WINDOW SORT PUSHED RANK (cr=3269 pr=0 pw=0 time=14480 us cost=709 size=660 card=15)
         0          0          0    MERGE JOIN  (cr=3269 pr=0 pw=0 time=14466 us cost=708 size=660 card=15)
         0          0          0     TABLE ACCESS BY INDEX ROWID SRI_ORDER_DETAILS (cr=3269 pr=0 pw=0 time=14464 us cost=532
     15139      15139      15139       INDEX FULL SCAN SRI_WAGD_PK (cr=2822 pr=0 pw=0 time=2305 us cost=80 size=0
         0          0          0     SORT JOIN (cr=0 pr=0 pw=0 time=0 us cost=176 size=320528 card=12328)
         0          0          0      TABLE ACCESS FULL SRI_ORDER_HISTORIC (cr=0 pr=0 pw=0 time=0 us cost=67 size=320528
********************************************************************************

Notice that the above select underwent 3,124 executions, which necessitated more than ten million logical reads, only to finally produce 0 records. There is surely a way to reduce this enormous waste of time and energy spent manipulating unnecessary millions of logical I/Os. My experience in troubleshooting performance issues has shown me that this kind of excessive logical I/O can very often be linked to two situations:

  •  Use of an imprecise index to access a table by index rowid, throwing away the majority of rows
  •  The first operation of the plan being the one that manipulates the largest number of rows in the entire row source execution plan

Coincidentally, the first operation in my row source plan is an INDEX FULL SCAN, and it is the operation generating the maximum number of rows (15,139). Well, now I can say that in order to tune this select I first need a more precise access path to the SRI_ORDER_DETAILS table. Here is how I managed to solve this issue.

1. Get a real time execution plan from memory

The desired execution plan, confirming the row source stats from the TKPROF file, is given here below:

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                     |      1 |        |      1 |00:00:00.03 |    3469 |
|*  1 |  VIEW                          |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|*  2 |   WINDOW SORT PUSHED RANK      |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|   3 |    MERGE JOIN                  |                     |      1 |     13 |      1 |00:00:00.03 |    3469 |
|*  4 |     TABLE ACCESS BY INDEX ROWID| SRI_ORDER_DETAILS   |      1 |     13 |      1 |00:00:00.02 |    3269 |
|   5 |      INDEX FULL SCAN           | SRI_WAGD_PK         |      1 |  15139 |  15139 |00:00:00.01 |    2822 |
|*  6 |     SORT JOIN                  |                     |      1 |  13410 |      1 |00:00:00.01 |     200 |
|*  7 |      TABLE ACCESS FULL         | SRI_ORDER_HISTORIC  |      1 |  13410 |  13410 |00:00:00.01 |     200 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"=1)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "WH"."WAGD_ID" ORDER BY "WH"."ARRIVAL_DATE")<=1)
   4 - filter("WD"."FROM_COM_ID"=3813574)
   6 - access("WD"."WAGD_ID"="WH"."WAGD_ID")
       filter("WD"."WAGD_ID"="WH"."WAGD_ID")
   7 - filter("WH"."PDES_ID" IS NULL)

We identify a lot of data (15,139 rows) by FULL SCANNING the primary key index SRI_WAGD_PK. The rows identified via the single-column PK index (wagd_id, operation n°5) are sent back to their parent operation (n°4), which wastes a lot of effort throwing away almost the entire set of generated rows (all but one row for the bind variable value I used during my test) by applying filter n°4 (FROM_COM_ID). Notice also how the CBO has made a complete mess of the cardinality estimate for the SORT JOIN operation n°6.

Now that the symptoms have been clearly identified, the solution is presented next.

2. Create a new precise index

The solution that immediately came to my mind was to create an index on the FROM_COM_ID column. I was just wondering how best to design it:

  •  Using a single column index
  •  Or a composite index
  •  And, in the case of a composite index, should FROM_COM_ID be put at the beginning of the index or not?

Since the single-column PK index already contains the WAGD_ID column, I tended towards creating a composite index starting with FROM_COM_ID, something resembling this:

SQL> create index mho_test on SRI_ORDER_DETAILS(FROM_COM_ID, wagd_id);

Index created.

SQL> exec dbms_stats.gather_table_stats(user, 'SRI_ORDER_DETAILS', cascade => true, no_invalidate=> false);

PL/SQL procedure successfully completed.

When I ran the query again, the new row source execution plan looked like this:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                           |      1 |        |      1 |00:00:00.01 |       7 |
|*  1 |  VIEW                           |                           |      1 |     13 |      1 |00:00:00.01 |       7 |
|*  2 |   WINDOW SORT PUSHED RANK       |                           |      1 |     13 |      1 |00:00:00.01 |       7 |
|   3 |    NESTED LOOPS                 |                           |      1 |        |      1 |00:00:00.01 |       7 |
|   4 |     NESTED LOOPS                |                           |      1 |     13 |      1 |00:00:00.01 |       6 |
|   5 |      TABLE ACCESS BY INDEX ROWID| SRI_ORDER_DETAILS         |      1 |     13 |      1 |00:00:00.01 |       3 |
|*  6 |       INDEX RANGE SCAN          | MHO_TEST                  |      1 |     13 |      1 |00:00:00.01 |       2 |
|*  7 |      INDEX RANGE SCAN           | SRI_WAGH_ARRIVAL_DATE_UK  |      1 |      1 |      1 |00:00:00.01 |       3 |
|*  8 |     TABLE ACCESS BY INDEX ROWID | SRI_ORDER_HISTORIC        |      1 |      1 |      1 |00:00:00.01 |       1 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"=1)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "WH"."WAGD_ID" ORDER BY "WH"."ARRIVAL_DATE")<=1)
   6 - access("WD"."FROM_COM_ID"=3813574)
   7 - access("WD"."WAGD_ID"="WH"."WAGD_ID")
   8 - filter("WH"."PDES_ID" IS NULL)

Thanks to this new precise index (MHO_TEST) we have drastically reduced the total number of logical reads (from 3,469 to 7), and the CBO’s estimations are now almost acceptable, which made the CBO prefer a more convenient NESTED LOOPS join over the previously inadequately estimated SORT JOIN operation.

3. Conclusion

When troubleshooting a performance issue in a real-life production system, look carefully at INDEX access operations that generate a lot of rows and for which the parent table access operation discards the majority of those rows. In such a case there is almost certainly a way to create a precise index which will not only avoid that waste of effort but might also lead the CBO to better estimations and hence an optimal plan.


ORA-04030 when parsing XML messages


Two days ago I was asked to look at the following error, which occurred in a PRE-PRODUCTION database just a few weeks before going live:

ORA-04030: out of process memory when trying to allocate 4032 bytes (qmxdGetElemsBy,qmemNextBuf:alloc)

The aim of this article is to explain how (with the help of my colleagues) I succeeded to circumvent this ORA-04030 error.

First, I was aware that ORA-04031 is linked to an issue with the SGA, while the current ORA-04030 is linked to abnormal consumption of PGA memory. Unfortunately, having no prior acquaintance with this application made the issue hard to solve. All I was told is that when the corresponding PRE-PRODUCTION application was triggered to parse 10 XML messages of 3MB each, it crashed during the parsing of the 6th XML message with the above error.

I asked, nevertheless, whether treating 10 x 3MB XML messages simultaneously is a situation that can happen in their real-life PRODUCTION application, or whether they were just trying to stress the database with extreme situations. And the answer was, to my disappointment, YES: this kind of XML load is very plausible once the application goes live.

I asked (again) if I could have those 10 XML messages in order to inject them into a DEVELOPMENT database. Once they handed me a copy of the desired messages, I tried to parse them in the DEVELOPMENT instance which, fortunately (yes, fortunately), crashed with the same error.

There is an ORA-04030 diagnostic tool developed by Oracle Support which, if you feed it the corresponding alert log file, will generate a set of recommendations for your system that, if implemented, might solve the issue.

This tool, when provided with the corresponding trace file, suggested, among many other things, to:

  1. Set the hidden parameter _realfree_heap_pagesize_hint
  2. Use bulk collect with the limit clause, with a link showing how to do that following Steven Feuerstein's article published in Oracle Magazine
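
For reference, the LIMIT pattern the second suggestion refers to caps the number of rows held in PGA at any one time. A generic sketch (table and column names here are purely illustrative, not taken from the application):

```sql
declare
  type t_id_tab is table of number;
  l_ids t_id_tab;
  cursor c is select mho_id from some_big_table;  -- illustrative cursor
begin
  open c;
  loop
    fetch c bulk collect into l_ids limit 100;    -- at most 100 rows in PGA
    exit when l_ids.count = 0;
    for i in 1 .. l_ids.count loop
      null;  -- process l_ids(i) here
    end loop;
  end loop;
  close c;
end;
/
```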

Taking into account my ignorance of the application's business logic and of how exactly the code parses the XML messages, I was looking for a tactical fix, one which would be safe, would necessitate a minimum of testing, and would have a good chance of having no side effects. Since I was in a development database, I instrumented the PL/SQL code (added extra logging) until I had clearly isolated the piece of code causing the error:


BEGIN
   -- New parser
   lx_parser := xmlparser.newParser;

   -- Parse XML file
   xmlparser.parseClob(lx_parser, pil_xml_file);

   -- Get the DOM document
   lxd_docxml := xmlparser.getDocument(lx_parser);

   -- Get all elements
   lxd_element := xmldom.getDocumentElement(lxd_docxml);

   FOR li_metadata_id IN 1..pit_metadata.COUNT
   LOOP
     IF pit_metadata(li_metadata_id).MXTA_FATHER_ID IS NULL
     THEN
       gi_num_row_table := 1;
       P_PARSE_XML_ELEMENTS(lxd_element
                           ,NULL
                           ,pit_metadata
                           ,li_metadata_id
                           ,pit_metadata(li_metadata_id).TAG_PATH
                           );
     END IF;
   END LOOP;
   …./….
END;

The above piece of code is called in a loop and is traumatizing the PGA memory.

With the help of a colleague, we decided on a tactical fix which consists of freeing up the PGA memory each time an XML document is processed, using the Oracle xmldom.freeDocument API. In other words, I added the following piece of code at the end of the above stored procedure:


     END IF;
   END LOOP;
   …./….

   xmldom.freeDocument(lxd_docxml); -- added this
END;

And guess what?

The ORA-04030 ceases to disturb us.

Bottom Line: When troubleshooting an issue in a real-life production application, first try to reproduce it in a TEST environment. Then instrument the PL/SQL code in this TEST environment until you have narrowed the error down close to its calling program. Then look for a tactical fix that does not need a lot of non-regression testing and that has limited side effects.



Disjunctive subquery


Here is a SQL query, called from a .NET web service, which is timing out and which I have been asked to fix.

SELECT      d.col_id,
            d.col_no,
            d.col_dost_id,
            d.col_r_id,
            d.xxx,
            d.yyy,
            ……..
            d.zzz
      FROM table_mho d
      WHERE (d.COL_UK = 'LRBRE-12052014'
          OR EXISTS (select 1
                     from table_mho d1
                     where d1.col_id = d.col_id
                       and exists (select 1
                                   from table_mho d2
                                    where d2.COL_UK = 'LRBRE-12052014'
                                     and d1.master_col_id = d2.col_id
                                     and d2.col_type = 'M' )
                       and d1.col_type = 'S'
                       )
              )
    order by d.col_id;

Looking carefully at the content of this query, I immediately got a clue about what might be happening here: a disjunctive subquery.

A disjunctive subquery is a subquery that appears in an OR predicate (a disjunction). And the above query does indeed have an OR predicate followed by an EXISTS clause:

          OR EXISTS (select 1
                     from table_mho d1
                     where d1.col_id = d.col_id
                       and exists (select 1
                                   from table_mho d2
                                    where d2.COL_UK = 'LRBRE-12052014'
                                     and d1.master_col_id = d2.col_id
                                     and d2.col_type = 'M' )
                       and d1.col_type = 'S'
                       )

I am not going to dig into the details of disjunctive subqueries and the CBO's inability to unnest them in releases prior to 12c. I will (I hope) be writing a general article in the near future in which disjunctive subqueries are explained and popularized via a reproducible model. The goal of this brief blog post is just to show how I successfully troubleshot the above web service performance issue by transforming the disjunctive subquery into a UNION ALL statement, giving the CBO an opportunity to choose an optimal plan.

Here is the sub-optimal plan for the original query:

---------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------
|   1 |  SORT ORDER BY                 |                  |      1 |  46854 |      1 |00:00:10.70 |
|*  2 |   FILTER                       |                  |      1 |        |      1 |00:00:10.70 |
|   3 |    TABLE ACCESS FULL           | TABLE_MHO        |      1 |    937K|    937K|00:00:00.94 |
|   4 |    NESTED LOOPS                |                  |    937K|      1 |      0 |00:00:07.26 |
|*  5 |     TABLE ACCESS BY INDEX ROWID| TABLE_MHO        |    937K|      1 |     60 |00:00:06.65 |
|*  6 |      INDEX UNIQUE SCAN         | COL_MHO_PK       |    937K|      1 |    937K|00:00:04.14 |
|*  7 |     TABLE ACCESS BY INDEX ROWID| TABLE_MHO        |     60 |      1 |      0 |00:00:00.01 |
|*  8 |      INDEX UNIQUE SCAN         | COL_MHO_UK       |     60 |      1 |     60 |00:00:00.01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("D"."COL_UK"='LRBRE-12052014' OR  IS NOT NULL))
   5 - filter(("D1"."MASTER_COL_ID" IS NOT NULL AND "D1"."COL_TYPE"='S'))
   6 - access("D1"."COL_ID"=:B1)
   7 - filter(("D2"."COL_TYPE"='M' AND "D1"."MASTER_COL_ID"="D2"."COL_ID"))
   8 - access("D2"."COL_UK"='LRBRE-12052014')

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
    3771234  consistent gets
      22748  physical reads
          0  redo size
       1168  bytes sent via SQL*Net to client
        244  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

Note the appearance of the inefficient FILTER operation (operation 2). One of its dramatic consequences is that the NESTED LOOPS operation (operation 4) has been started 937,000 times without producing any rows, while nevertheless generating almost 4 million buffer gets. Because of this disjunctive subquery, Oracle is unable to unnest the subquery and merge it with the rest of the query in order to consider another, more optimal, access path.

There is a simple technique to rewrite the above query and get rid of the disjunctive subquery: use a UNION ALL, as I did for my original query (bear in mind that in my actual case the COL_UK column is NOT NULL):

SELECT ww.*
FROM
(SELECT     d.col_id,
            d.col_no,
            d.col_dost_id,
            d.col_r_id,
            d.xxx,
            d.yyy,
            ……..
            d.zzz
      FROM table_mho d
      WHERE d.COL_UK = 'LRBRE-12052014'
UNION ALL
SELECT      d.col_id,
            d.col_no,
            d.col_dost_id,
            d.col_r_id,
            d.xxx,
            d.yyy,
            ……..
            d.zzz
      FROM table_mho d
      WHERE d.COL_UK != 'LRBRE-12052014'
      AND EXISTS (select 1
                     from table_mho d1
                     where d1.col_id = d.col_id
                       and exists (select 1
                                   from table_mho d2
                                   where d2.COL_UK = 'LRBRE-12052014'
                                     and d1.master_col_id = d2.col_id
                                     and d2.col_type = 'M' )
                       and d1.col_type = 'S'
                     )
) ww
 order by ww.col_id;

And here is the corresponding new optimal plan:

------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Starts | E-Rows | A-Rows |   A-Time   |
------------------------------------------------------------------------------------------------------------
|   1 |  SORT ORDER BY                     |                       |      1 |      2 |      1 |00:00:00.01 |
|   2 |   VIEW                             |                       |      1 |      2 |      1 |00:00:00.01 |
|   3 |    UNION-ALL                       |                       |      1 |        |      1 |00:00:00.01 |
|   4 |     TABLE ACCESS BY INDEX ROWID    | TABLE_MHO             |      1 |      1 |      1 |00:00:00.01 |
|*  5 |      INDEX UNIQUE SCAN             | COL_MHO_UK            |      1 |      1 |      1 |00:00:00.01 |
|   6 |     NESTED LOOPS                   |                       |      1 |      1 |      0 |00:00:00.01 |
|   7 |      VIEW                          | VW_SQ_1               |      1 |      1 |      0 |00:00:00.01 |
|   8 |       HASH UNIQUE                  |                       |      1 |      1 |      0 |00:00:00.01 |
|   9 |        NESTED LOOPS                |                       |      1 |      1 |      0 |00:00:00.01 |
|* 10 |         TABLE ACCESS BY INDEX ROWID| TABLE_MHO             |      1 |      1 |      0 |00:00:00.01 |
|* 11 |          INDEX UNIQUE SCAN         | COL_MHO_UK            |      1 |      1 |      1 |00:00:00.01 |
|* 12 |         TABLE ACCESS BY INDEX ROWID| TABLE_MHO             |      0 |      1 |      0 |00:00:00.01 |
|* 13 |          INDEX RANGE SCAN          | COL_COL_MHO_FK_I      |      0 |     62 |      0 |00:00:00.01 |
|* 14 |      TABLE ACCESS BY INDEX ROWID   | TABLE_MHO             |      0 |      1 |      0 |00:00:00.01 |
|* 15 |       INDEX UNIQUE SCAN            | COL_MHO_PK            |      0 |      1 |      0 |00:00:00.01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("D"."COL_UK"='LRBRE-12052014')
  10 - filter("D2"."COL_TYPE"='M')
  11 - access("D2"."COL_UK"='LRBRE-12052014')
  12 - filter("D1"."COL_TYPE"='S')
  13 - access("D1"."MASTER_COL_ID"="D2"."COL_ID")
       filter("D1"."MASTER_COL_ID" IS NOT NULL)
  14 - filter("D"."COL_UK"<>'LRBRE-12052014')
  15 - access("COL_ID"="D"."COL_ID")

Statistics
------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1168  bytes sent via SQL*Net to client
        403  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

I went from a massive 3,771,234 consistent gets to just 8 logical I/Os. The subquery has been transformed into an in-line view (VW_SQ_1), while the not-always-good FILTER operation disappeared, giving way to fast operations accurately estimated by the CBO, without any statistics having been re-gathered in between.

Bottom Line: Troubleshooting a query performance problem can sometimes be achieved by reformulating the query, giving the CBO a way to circumvent a transformation it cannot do with the original query.


OTN Mena Tour 2014


This is a brief note to say that it was a great pleasure for me to participate in the first Oracle Technology Network Middle East and North Africa tour in Tunisia in May 2014. I had prepared a presentation on Adaptive Cursor Sharing in English when, a few minutes before the start of my presentation, the organising committee asked me to deliver it in French. The majority of the audience seemed not to be completely at ease with the language of Shakespeare. So it was my pleasure to do it in the language of Molière, with a few words in English for what seemed to be the minority at this conference. The moral of the story is that the French-speaking Oracle community is growing, so it increasingly needs experts who can speak French.

 


DDL optimization : french version


I wrote an article about DDL optimization which I submitted to Oracle OTN and which will hopefully be published very soon. As you might also know, I have been invited to join the Oracle World team in order to spread the use of Oracle technology in Latin America. This is why the original English article has been translated into Portuguese and published on OTN Latin America. The Spanish version will be published very soon as well.

I have also translated the original article into French and looked, unsuccessfully, for a well-known place to publish it so that it would reach readers in France, Belgium, Luxembourg, Switzerland, North Africa and several other countries. Having been unable to find such a place, I decided to publish it here on my blog. So here it is: DDL Optimization


Execution plan of a cancelled query


I was reminded by this OTN thread that I have to write something about the usefulness of an execution plan pulled from memory for a query that has been cancelled because of its long execution time. Several months ago I was confronted with the same issue, where one of my queries was running so desperately slowly that I couldn't resist the temptation to cancel it.

It is of course legitimate, in such a situation, to ask about the validity of the corresponding execution plan. Should we wait until the query has finished before starting to investigate where things went wrong? What if the query takes more than 20 minutes to complete? Should we wait 20 minutes each time before getting an exploitable execution plan?

The answer, of course, is no. You shouldn't wait until the end of the query. Once the query has started you can get its execution plan from memory using its sql_id. And you may be surprised to find that even this ''partial'' execution plan already gives a clue about what is wrong with your query.
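
Before pulling the plan you need the sql_id of the (still running or just cancelled) statement. A possible sketch of how one might find it from another session; the WHERE clause is an assumption and has to be adapted to single out your session:

```sql
-- identify the sql_id of the statement a session is currently executing
select s.sid, s.sql_id, s.sql_child_number, q.sql_text
from   v$session s
       join v$sql q
         on  q.sql_id       = s.sql_id
         and q.child_number = s.sql_child_number
where  s.status   = 'ACTIVE'
and    s.username = 'XXX';   -- adapt this filter to locate your session
```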

Let me model an example and show you what I mean.

    create table t1
       as
       with generator as (
           select  --+ materialize
               rownum id
           from dual
           connect by
               level <= 10000)
       select
          rownum                  id,
          trunc(dbms_random.value(1,1000))    n1,
          lpad(rownum,10,'0') small_vc,
          rpad('x',100)       padding
      from
          generator   v1,
          generator   v2
      where
          rownum <= 1000000;

   create index t1_n1 on t1(id, n1);

   create table t2
       as
       with generator as (
           select  --+ materialize
               rownum id
           from dual
           connect by
               level <= 10000)
       select
          rownum                  id,
          trunc(dbms_random.value(10001,20001))   x1,
          lpad(rownum,10,'0') small_vc,
          rpad('x',100)       padding
      from
          generator   v1,
          generator   v2
      where
          rownum <= 1000000;
    
   create index t2_i1 on t2(x1);

   exec dbms_stats.gather_table_stats(user, 't2', method_opt => 'FOR ALL COLUMNS SIZE 1');
   exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'FOR ALL COLUMNS SIZE 1');
   

I have two simple indexed tables t1 and t2 against which I engineered the following query

    select *
       from t1
        where id in
                (Select id from t2 where x1 = 17335)
       order by id ;
   

The above query contains an ORDER BY clause on an indexed column (id). In order to make this query run slowly enough that I can cancel it, I will change the optimizer mode from all_rows to first_rows, so that the CBO prefers an index full scan that avoids the sort operation, whatever the cost of that index full scan.

   SQL> alter session set statistics_level=all;

   SQL> alter session set optimizer_mode=first_rows;

   SQL> select *
       from t1
        where id in
                (Select id from t2 where x1 = 17335)
       order by id ;

   107 rows selected.

   Elapsed: 00:02:14.70 -- more than 2 minutes

   SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

   SQL_ID  9gv186ag1mz0c, child number 0
  -------------------------------------
  Plan hash value: 1518369540
  ------------------------------------------------------------------------------------------------
  | Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
  ------------------------------------------------------------------------------------------------
  |   0 | SELECT STATEMENT             |       |      1 |        |    107 |00:02:39.85 |     108M|
  |   1 |  NESTED LOOPS SEMI           |       |      1 |    100 |    107 |00:02:39.85 |     108M|
  |   2 |   TABLE ACCESS BY INDEX ROWID| T1    |      1 |   1000K|   1000K|00:00:01.03 |   20319 |
  |   3 |    INDEX FULL SCAN           | T1_N1 |      1 |   1000K|   1000K|00:00:00.31 |    2774 |
  |*  4 |   TABLE ACCESS BY INDEX ROWID| T2    |   1000K|      1 |    107 |00:02:37.70 |     108M|
  |*  5 |    INDEX RANGE SCAN          | T2_I1 |   1000K|    100 |    106M|00:00:30.73 |    1056K|
  ------------------------------------------------------------------------------------------------

  Predicate Information (identified by operation id):
  ---------------------------------------------------
     4 - filter("ID"="ID")
     5 - access("X1"=17335)
 

And the plan_hash_value2 (PHV2) of this query is:

  SQL> @phv2 9gv186ag1mz0c

  SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER       PHV2
  ------------- --------------- ------------ ----------
  9gv186ag1mz0c      1518369540            0 2288661314
  

The query took more than 2 minutes to complete and, for the purpose of this blog, I didn't cancel it, so as to get the final execution plan shown above. Now I am going to execute it again and cancel it after a few seconds.

    SQL> select *
       from t1
        where id in
                (Select id from t2 where x1 = 17335)
       order by id ;

   ^C
   C:\>
   

And here is the corresponding partial plan pulled from memory:

  SQL> select * from table(dbms_xplan.display_cursor('9gv186ag1mz0c',0,'ALLSTATS LAST'));

  SQL_ID  9gv186ag1mz0c, child number 0
  -------------------------------------
  Plan hash value: 1518369540

  ------------------------------------------------------------------------------------------------
  | Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
  ------------------------------------------------------------------------------------------------
  |   0 | SELECT STATEMENT             |       |      1 |        |     17 |00:00:18.13 |      13M|
  |   1 |  NESTED LOOPS SEMI           |       |      1 |    100 |     17 |00:00:18.13 |      13M|
  |   2 |   TABLE ACCESS BY INDEX ROWID| T1    |      1 |   1000K|    129K|00:00:00.12 |    2631 |
  |   3 |    INDEX FULL SCAN           | T1_N1 |      1 |   1000K|    129K|00:00:00.04 |     361 |
  |*  4 |   TABLE ACCESS BY INDEX ROWID| T2    |    129K|      1 |     17 |00:00:18.26 |      13M|
  |*  5 |    INDEX RANGE SCAN          | T2_I1 |    129K|    100 |     13M|00:00:03.63 |     136K|
  ------------------------------------------------------------------------------------------------

  Predicate Information (identified by operation id):
  ---------------------------------------------------
     4 - filter("ID"="ID")
     5 - access("X1"=17335)

  SQL>  @phv2 9gv186ag1mz0c

  SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER       PHV2
  ------------- --------------- ------------ ----------
  9gv186ag1mz0c      1518369540            0 2288661314
  

The main observations are:

  1.   Both executions produced the same plan hash value
  2.   Both have the same PHV2
  3.   Both execution plans contain the same operations in the same execution order
  4.   Both execution plans contain the same estimations (E-Rows), a clear indication that these are computed at hard parse time
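
The @phv2 script used above is a small helper whose source is not shown here. A minimal sketch of the kind of query such a script might run, assuming PHV2 is extracted from the plan_hash_2 entry stored in V$SQL_PLAN.OTHER_XML:

```sql
select sql_id,
       plan_hash_value,
       child_number,
       extractvalue(xmltype(other_xml),
                    '/*/info[@type="plan_hash_2"]') phv2
from   v$sql_plan
where  sql_id = '9gv186ag1mz0c'
and    other_xml is not null;
```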

From either the complete plan or the ''partial'' one, we get the same clue about what is going wrong here (even though, in this particular case, the performance penalty is due to the first_rows mode, which tends to avoid a sort on an indexed column at any price). Indeed, operation 4 is the most time-consuming operation and the place where the deviation between the CBO estimations and the actual rows is largest.

So there is nothing that prevents hurried developers from cancelling a query and getting its execution plan from memory. They can extract valuable information from the cancelled query's plan, just as from the complete plan.

This article is also a reminder for experienced Oracle DBA performance specialists that there are cases where a difference between the CBO estimations and the actual rows obtained during execution does not necessarily imply an absence of fresh and representative statistics. The difference could be due to a plan coming from a cancelled query.

By the way, I know of another situation where an excellent, optimal plan shows a value of Starts * E-Rows diverging greatly from A-Rows even though the statistics are perfect. That will be explained, I hope, in a different blog article.


On table constraint execution order


Introduction

So as not to be left behind due to my infrequent blogging activity these days, here is a simple note about the order of execution of constraints and triggers placed on a given table. Basically, according to my very recent tests done on 11.2.0.3 and shown below, I am inclined to claim that this order of execution obeys the following steps:

  1. The before (insert) trigger fires first
  2. Then the CHECK or UNIQUE constraints follow (we will see below in what order)
  3. Then it is the turn of the after (insert) trigger
  4. And finally the foreign key integrity constraint is checked at the trailing edge of the process

The execution order of the check and unique constraints seems to be closely related to the order, within the table, of the columns on which the constraints have been implemented, as shown by Karthick_App in the above-mentioned OTN thread.

Details

This is the model on which I have made the above observations:


drop table c purge;
drop table p purge;

create table p as select rownum n1, trunc((rownum-1)/3) n2
from dual connect by level <= 10;

alter table p add constraint p_pk primary key (n1);

create table c (m1 number not null, m2 number, m3 number);

alter table c add constraint child_fk foreign key (m2) references p(n1);

alter table c add constraint child_ck check (m3 in (1,2));

alter table c add constraint child_uk unique (m1);
  
create or replace trigger c_bfi_trg before insert on c
for each row
begin
    dbms_output.put_line('Trigger before insert fired');
end;
/

create or replace trigger c_afi_trg after insert on c
for each row
begin
    dbms_output.put_line('Trigger after insert fired');
end;
/       

The following select shows the order of the columns within the table:


select table_name, column_name, column_id
from user_tab_columns
where table_name = 'C'
order
by 1, 3;

TABLE_NAME                     COLUMN_NAME                     COLUMN_ID
------------------------------ ------------------------------ ----------
C                              M1                                      1
C                              M2                                      2
C                              M3                                      3

Note particularly that I have:

  • A NOT NULL check constraint on the first column (m1), since it has been declared NOT NULL
  • A UNIQUE constraint (child_uk) on the first column (m1)
  • A FOREIGN KEY constraint on the second column (m2), since it references the parent table p
  • A CHECK constraint on the third column (m3) allowing only 1 and 2 as possible values

A check constraint (be it a unique constraint or a check on possible values), as opposed to a foreign key constraint, is executed according to the column order in the table. For example, the NOT NULL check constraint is the first to be verified because it has been implemented on column m1, the leading column of the table:

SQL> set serveroutput on

SQL> insert into c values (null, 11, 3);

Trigger before insert fired
insert into c values (null, 11, 3)
                      *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("XXX"."C"."M1")

Note that in the above insert there are three reasons why it ought to be refused:

  • Inserting a null value into m1, which is NOT NULL
  • Inserting a non-existing parent key (m2 = 11)
  • Inserting a non-acceptable value into m3 (only 1 and 2 are allowed)

Among all those reasons for refusal, it is ORA-01400 that fired, because the NOT NULL constraint is implemented on the first column of the table (m1). The same observation can be made when the unique constraint placed on that first column is violated, as shown below:

SQL> insert into c values (1, 10,1);
Trigger before insert fired
Trigger after insert fired
1 row created.

SQL> insert into c values (1, 11,3);
Trigger before insert fired
insert into c values (1, 11,3)
*
ERROR at line 1:
ORA-00001: unique constraint (XXX.CHILD_UK) violated

Once the NOT NULL and unique refusal reasons on m1 are worked around and the statement is re-inserted, this time it is the check constraint m3 IN (1,2) that fires:

SQL> insert into c values (1, 11,3);
Trigger before insert fired
insert into c values (1, 11,3)
*
ERROR at line 1:
ORA-02290: check constraint (XXX.CHILD_CK) violated

Although the foreign key is placed on the second column of the table, it is the check constraint on the third column that fires first.

And when all check constraints are satisfied, the foreign key integrity constraint is finally verified, as shown below:

SQL> insert into c values (1, 11,1);

Trigger before insert fired
Trigger after insert fired

insert into c values (1, 11,1)
*
ERROR at line 1:
ORA-02291: integrity constraint (XXX.CHILD_FK) violated - parent key not found

Conclusion

I have very often asked myself (or been asked) the same question when confronted with this kind of situation: what is the order of constraint/trigger execution? Each time, I was obliged to spend a few minutes creating a simple model in order to come up with a reliable answer. From now on, I am sure I will not redo this work, as I know for certain that I have written the answer down in an article published somewhere on the net.

In passing, note why I encourage people to write:

  • They will definitely not redo the same work again
  • They will easily find what they have done in this context
  • They will let other readers validate or correct their answer
