Find the Explain Plan from Trace File [message #199457] Tue, 24 October 2006 08:34
bellpk
Messages: 2
Registered: October 2006
Location: Helsinki
Junior Member

I’m trying to determine the execution plan for a particular statement from a rather big trace file (330MB). The traced session issued a CREATE TABLE AS SELECT whose query references remote databases, and it creates a table 7.5GB in size. The local db is 10g R2. The remote db is 9.2.0.7.

I ran the trace file through TKPROF (see attached), but the resulting plan info is not in a readable form: it seems to be using internal object IDs rather than the table names.

Question 1: Is there an easy way to get the plan in a readable form from the raw trace file?
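One thing that might help (a sketch only — the file names and the schema/password are placeholders, and the explain= option makes TKPROF connect and resolve object names against the dictionary rather than report raw object IDs):

```shell
# Re-run TKPROF with explain= so object names are resolved via the
# data dictionary; sys=no suppresses recursive SYS statements and
# sort= puts the most expensive statements first.
tkprof bigtrace.trc bigtrace.prf explain=eget/password sys=no waits=yes sort=prsela,exeela,fchela
```

Note that explain= re-explains the statement at TKPROF time, so if the environment has changed since the trace was taken, the plan shown may differ from the one actually used; the STAT lines in the raw trace reflect the plan that really ran.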

The reason I ask is that the plan used for the traced session resulted in an acceptable execution time. Since then, both local and remote databases have been changed (PGA aggregate target sizes, CPU added, memory added) and the plan Oracle now uses is less than optimal (terrible).


Note ****** Actually, the query has also changed (it now differs from the one in the trace file), but I was able to get the new query to execute in < 2 hours before the db changes – unfortunately, I did not enable sql trace for that test.

I now see that the stats for the tables on the remote site are inaccurate, so I used dbms_stats.set_table_stats to correct them in the hope that Oracle would choose the right plan. See below:


begin
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_05',numrows=>33319003);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_06',numrows=>33894737);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_07',numrows=>36661852);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_08',numrows=>37233878);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_09',numrows=>3423957);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_2006_10',numrows=>0);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MTM',partname=>'WO_WALLET_MTM_MAX',numrows=>0);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_05',numrows=>33319003);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_06',numrows=>33894737);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_07',numrows=>36661852);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_08',numrows=>37233878);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_09',numrows=>3423957);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_2006_10',numrows=>0);
  dbms_stats.set_table_stats(ownname=>'EGET',tabname=>'WO_WALLET_MASTER_TRANS',partname=>'WO_WALLET_MT_MAX',numrows=>0);
end;
/





Question 2: Is this enough? When I get the explain plan (see below) from the local site, the cardinality for the tables listed above does not seem to add up (I expect in the region of 100 million+ rows from each table).

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1414190613

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OU
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 168M| 30G| | 18E(100)| | |
|* 1 | HASH JOIN | | 168M| 30G| 3314M| 18E(100)| | |
| 2 | REMOTE | WO_WALLET_MTM | 54M| 2693M| | 387K (8)| 01:17:25 | EGETO~ | R->S
|* 3 | HASH JOIN | | 6722M| 889G| 2430M| 830K (15)| 02:46:03 | |
| 4 | REMOTE | WO_WALLET_MASTER_TRANS | 54M| 1810M| | 334K (8)| 01:06:49 | EGETO~
|* 5 | HASH JOIN | | 27M| 2805M| 13M| 123K (2)| 00:24:38 | |
| 6 | TABLE ACCESS FULL | WO_GAMING_ACCOUNT_BALANCE | 470K| 7817K| | 344 (5)| 00:00:
|* 7 | HASH JOIN | | 13M| 1179M| 8776K| 55218 (2)| 00:11:03 | |
| 8 | REMOTE | WO_WALLET | 236K| 6003K| | 105 (7)| 00:00:02 | EGETO~ | R->S
| 9 | MERGE JOIN CARTESIAN| | 13M| 839M| | 26662 (84)| 00:05:20 | |
|* 10 | TABLE ACCESS FULL | T_MERGE_VALUE_MAP | 62 | 2356 | | 3 (0)| 00:00:0
| 11 | BUFFER SORT | | 221K| 5634K| | 26659 (84)| 00:05:20 | |
| 12 | REMOTE | WO_ACCOUNT | 221K| 5634K| | 78 (8)| 00:00:01 | EGETO~ | R-
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("WWM"."MONETARY_RESOURCE_ID"="TVM"."X_COLUMN_VALUE" AND
"WWMT"."PK_WALLET_MASTER_TRANS_ID"="WWM"."FK_WALLET_MASTER_TRANS_ID")
filter("WWM"."CREATED_TIMESTAMP">="WWMT"."CREATED_TIMESTAMP")
3 - access("WW"."PK_WALLET_ID"="WWMT"."FK_WALLET_ID")
5 - access("WGAB"."FK_GAMING_ACCOUNT_ID"="WA"."PK_ACCOUNT_ID")
7 - access("WW"."FK_PLAYER_ID"="WA"."FK_PLAYER_ID")
10 - filter("TVM"."X_COLUMN_VALUE" IS NOT NULL AND "TVM"."COLUMN_NAME"='TRANS_TYPE' AND
"TVM"."TABLE_NAME"='WO_EVENT' AND "TVM"."OWNER"='EGET')

Remote SQL Information (identified by operation id):
----------------------------------------------------

2 - SELECT "FK_WALLET_MASTER_TRANS_ID","AMOUNT_TYPE","VERSION","STATUS","AMOUNT","CREATED_TIMESTA
STAMP","TRANS_INDEX","MONETARY_RESOURCE_ID" FROM "WO_WALLET_MTM" "WWM" WHERE "CREATED_TIMESTA
'EGETOLD2.PAFEGAMDB1.EGET.FI' )

4 - SELECT "PK_WALLET_MASTER_TRANS_ID","FK_WALLET_ID","CREATED_TIMESTAMP" FROM "WO_WALLET_MASTER_
WHERE "CREATED_TIMESTAMP">:1 (accessing 'EGETOLD2.PAFEGAMDB1.EGET.FI' )

8 - SELECT "PK_WALLET_ID","FK_PLAYER_ID" FROM "WO_WALLET" "WW" (accessing 'EGETOLD2.PAFEGAMDB1.EG

12 - SELECT "PK_ACCOUNT_ID","FK_PLAYER_ID" FROM "WO_ACCOUNT" "WA" (accessing 'EGETOLD2.PAFEGAMDB1.




The 2 tables above (on the remote site) are range-partitioned on the created_timestamp column. They are joined (using >=) on this partition key and on a PK / FK key. Here’s the query:







SELECT mwwm.amount,
       mwwm.created_timestamp,
       mwwm.direction,
       wgab.pk_gaming_account_balance_id fk_gaming_account_balance_id,
       mwwm.fk_wallet_master_trans_id,
       wgab.fk_monetary_resource_id,
       tvm.targ_value fk_trans_type_id,
       mwwm.modified_timestamp,
       0 resulting_balance,
       mwwm.status,
       0 total_resulting_balance,
       mwwm.trans_index,
       mwwm.VERSION,
       mwwm.me_created_timestamp
FROM   mig.T_MERGE_VALUE_MAP tvm,
       eget.wo_gaming_account_balance wgab,
       (SELECT wwm.amount,
               wwm.created_timestamp,
               amount_type direction,
               wwm.fk_wallet_master_trans_id,
               wwm.modified_timestamp,
               wwm.status,
               wwm.trans_index,
               wwm.VERSION,
               wwmt.created_timestamp me_created_timestamp,
               wa.pk_account_id,
               wwmt.trans_type,
               wwm.monetary_resource_id
        FROM   wo_wallet_mtm@egetold2 wwm,
               wo_wallet_master_trans@egetold2 wwmt,
               wo_wallet@egetold2 ww,
               wo_account@egetold2 wa
        WHERE  wwmt.created_timestamp > TO_DATE('30.MAY.2006 00:00:00', 'dd.MON.yyyy HH24:MI:SS')
        AND    wwm.created_timestamp >= wwmt.created_timestamp
        AND    ww.pk_wallet_id = wwmt.fk_wallet_id
        AND    wwmt.pk_wallet_master_trans_id = wwm.fk_wallet_master_trans_id
        AND    ww.fk_player_id = wa.fk_player_id) mwwm
WHERE  wgab.fk_gaming_account_id = mwwm.pk_account_id
AND    tvm.owner = 'EGET'
AND    tvm.table_name = 'WO_EVENT'
AND    tvm.column_name = 'TRANS_TYPE'
AND    mwwm.monetary_resource_id = tvm.x_column_value
AND    mwwm.status = tvm.x_src_value
AND    tvm.src_value = mwwm.trans_type
AND    mwwm.monetary_resource_id <> wgab.fk_monetary_resource_id


The other two remote tables (wo_wallet and wo_account) contain around 230,000 rows each.

T_MERGE_VALUE_MAP contains 300 rows.
wo_gaming_account_balance contains 230,000 rows.


Question 3: Does the plan suggested by Oracle look reasonable, or would it be better to use hints to get Oracle to do the right thing? Could someone suggest a suitable execution plan based on the information contained herein? (A SORT MERGE for the two big tables, then HASH JOINs for the subsequent joins with the smaller tables?)
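For what it's worth, the sort-merge/hash-join idea in the question could be expressed with hints along these lines — purely illustrative, the hint names (LEADING, USE_MERGE, USE_HASH) are real Oracle hints but the join order shown is an assumption, not a tested recommendation:

```sql
-- Sketch: force a sort-merge between the two big remote tables,
-- then hash joins with the smaller tables. Aliases must match the
-- query's own aliases for the hints to be honoured.
SELECT /*+ LEADING(wwmt wwm) USE_MERGE(wwm) USE_HASH(ww wa wgab tvm) */
       mwwm.amount, mwwm.created_timestamp  -- ... rest of the select list unchanged
FROM   ...
```

Hints inside the inline view apply to the view's own query block, so in the real statement they would need to go inside the mwwm subquery (or use the 10g query-block syntax).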





Thanks in advance

Re: Find the Explain Plan from Trace File [message #200224 is a reply to message #199457] Sun, 29 October 2006 13:10
kimant
Messages: 201
Registered: August 2006
Location: Denmark
Senior Member
Hi

Why do You set the statistics, instead of gathering them?
There is much more information that the optimizer can gather, so this may be why You do not get the performance You need.
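Gathering, rather than setting, would look something like this — a sketch only (the sampling and granularity choices are illustrative, not prescriptive), run on the remote database for each of the two big tables:

```sql
-- Gather full statistics (all partitions, plus column and index
-- stats via cascade) instead of only setting row counts:
begin
  dbms_stats.gather_table_stats(
    ownname          => 'EGET',
    tabname          => 'WO_WALLET_MTM',
    granularity      => 'ALL',
    cascade          => true,
    estimate_percent => 10);   -- sample to keep runtime down on 100M+ rows
end;
/
```

set_table_stats only supplies the numbers You give it (here just numrows); gather_table_stats also collects block counts, average row length, and column histograms, all of which feed the cardinality estimates.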

Try replacing the mwwm part with a view on the remote database, so that the remote database returns the result of the inner view instead.
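That suggestion could look like this — the view name mwwm_v is hypothetical, and the column list is lifted straight from the inline view in the question (the date filter stays in the local query, applied to me_created_timestamp, which is wwmt.created_timestamp):

```sql
-- On the remote (9.2.0.7) database: push the four-table join there,
-- so the local optimizer ships one remote query instead of four.
CREATE OR REPLACE VIEW mwwm_v AS
SELECT wwm.amount, wwm.created_timestamp, wwm.amount_type direction,
       wwm.fk_wallet_master_trans_id, wwm.modified_timestamp, wwm.status,
       wwm.trans_index, wwm.version,
       wwmt.created_timestamp me_created_timestamp,
       wa.pk_account_id, wwmt.trans_type, wwm.monetary_resource_id
FROM   wo_wallet_mtm wwm, wo_wallet_master_trans wwmt,
       wo_wallet ww, wo_account wa
WHERE  wwm.created_timestamp >= wwmt.created_timestamp
AND    ww.pk_wallet_id = wwmt.fk_wallet_id
AND    wwmt.pk_wallet_master_trans_id = wwm.fk_wallet_master_trans_id
AND    ww.fk_player_id = wa.fk_player_id;
```

The local query then references mwwm_v@egetold2 and adds the me_created_timestamp > TO_DATE(...) predicate, which the remote optimizer can use for partition pruning with its own (accurate) statistics.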

You also have a MERGE JOIN CARTESIAN. That *could* be what is slowing You down.


Hints:
Hints are a last resort - they limit the optimizer.

If You upgrade to, say, 11g, do You then go through all Your code to find the hints that are not needed anymore, or that maybe even slow You down now? (I have not heard of anyone doing this.)

(Try to) use hints only for finding out what is wrong, then fix it, and remove the hints.


Performance:
Look at Your session's waits and at v$session_longops to find out what You are waiting for.
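For example (the :sid bind is a placeholder for the session You are investigating):

```sql
-- Cumulative wait events for the session: where is the time going?
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :sid
ORDER  BY time_waited DESC;

-- Long-running operations (full scans, hash joins, sorts) with progress:
SELECT opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) pct_done
FROM   v$session_longops
WHERE  sid = :sid
AND    totalwork > 0;
```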

Br
Kim Anthonisen
Re: Find the Explain Plan from Trace File [message #200242 is a reply to message #200224] Sun, 29 October 2006 23:27
bellpk
Messages: 2
Registered: October 2006
Location: Helsinki
Junior Member

ok, thanks for the message - I'll try the view.

I used dbms_stats.set_table_stats just as a quick fix - the stats were wrong, so I wanted to see if setting them would help (give the optimizer better information than it had before), and I didn't want to wait!

I agree - hints are the last resort, but this was the last resort! I put them in for the purpose of the test.

This code is for a one-off data migration project - I would never do this in day-to-day production code.