Picking up from the end of the last example, I immediately generated a log file dump as follows. (You'll need to look back at Part 5 to see that the ROWID, log file name etc. match up, or just trust me that these steps are a continuation of the same SYS session used in that test.)
First I'm going to work out the file and block number for the block containing the locked row in the test. That's why I selected the ROWID before I locked it.
SYS@TEST1020> select file_id, file_name, dbms_rowid.rowid_block_number('&&my_rowid')
2 from dba_data_files
3 where file_id = dbms_rowid.rowid_to_absolute_fno('&&my_rowid','TESTUSER','TEST_TAB1');
Enter value for my_rowid: AAAN2NAAKAAAaBLAEX
old 1: select file_id, file_name, dbms_rowid.rowid_block_number('&&my_rowid')
new 1: select file_id, file_name, dbms_rowid.rowid_block_number('AAAN2NAAKAAAaBLAEX')
old 3: where file_id = dbms_rowid.rowid_to_absolute_fno('&&my_rowid', 'TESTUSER',
new 3: where file_id = dbms_rowid.rowid_to_absolute_fno('AAAN2NAAKAAAaBLAEX','TESTUSER',
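As a cross-check on what dbms_rowid is doing under the covers (not part of the original test), the file and block numbers can be decoded from the extended ROWID by hand. Here's a Python sketch, assuming the standard 18-character extended ROWID layout and Oracle's base-64 alphabet:

```python
# Decode an Oracle extended ROWID by hand. The 18 base-64 characters are
# laid out as OOOOOOFFFBBBBBBRRR: data object id (6), relative file
# number (3), block number (6), row slot (3).
# Oracle's base-64 alphabet: A-Z=0..25, a-z=26..51, 0-9=52..61, +=62, /=63.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+/")

def b64_to_int(s):
    n = 0
    for ch in s:
        n = n * 64 + ALPHABET.index(ch)
    return n

rowid = "AAAN2NAAKAAAaBLAEX"
data_obj = b64_to_int(rowid[0:6])    # -> 56717
rel_fno  = b64_to_int(rowid[6:9])    # -> 10
block    = b64_to_int(rowid[9:15])   # -> 106571
row_slot = b64_to_int(rowid[15:18])
print(data_obj, rel_fno, block)      # -> 56717 10 106571
```

Note that the ROWID itself carries the *relative* file number, which is why dbms_rowid.rowid_to_absolute_fno needs the owner and table name to translate it into an absolute file number - the two happen to be the same here.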
Now that I have a file and block number, I'll dump the redo log file but restrict the dump to entries related to that block.
SYS@TEST1020> alter session set max_dump_file_size=unlimited;
SYS@TEST1020> alter system dump logfile '&&my_member'
2 dba min 10 106571 dba max 10 106571;
old 1: alter system dump logfile '&&my_member'
new 1: alter system dump logfile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1020\REDO01.LOG'
The trace file is created in user_dump_dest and I've uploaded it here. Near the start of the file, I can check that block range.
CLS: 1 is the block class (an ordinary data block) and OP:11.19 is the redo operation code - layer 11, opcode 19, an update of multiple rows (an array update).
To verify that OBJ:56717 is TEST_TAB1 -
TESTUSER@TEST1020> select object_name, object_type
2 from user_objects
3 where object_id=56717;
SYS@TEST1020> select to_char(dbms_utility.make_data_block_address(10,106571),
2 'XXXXXXXX') from dual;
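For reference, the DBA (data block address) that make_data_block_address returns is just the file number and block number packed into a single 32-bit value - the file number in the top 10 bits and the block number in the low 22 bits. A quick sketch of the arithmetic:

```python
def make_dba(file_no, block_no):
    # File number occupies the top 10 bits, block number the low 22 bits.
    return (file_no << 22) | block_no

dba = make_dba(10, 106571)
print(format(dba, 'X'))  # -> 281A04B
```

That hex value is the form DBAs take in the dump, so the conversion is a handy way of matching trace entries back to a known file and block.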
Transaction ID 0x000a.024.00001864 matches XID 0A00240064180000 from V$TRANSACTION during the test, albeit in the rearranged format.
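The rearrangement is just byte ordering. The raw XID from V$TRANSACTION holds the undo segment number (2 bytes), slot (2 bytes) and sequence (4 bytes), each stored little-endian on this platform - a sketch (the little-endian assumption is mine, based on the x86 Windows test machine):

```python
import struct

# Unpack the raw 8-byte XID from V$TRANSACTION: usn (2 bytes),
# slot (2 bytes), sqn (4 bytes), all little-endian on this platform.
usn, slot, sqn = struct.unpack('<HHI', bytes.fromhex('0A00240064180000'))
print(f"{usn:#06x}.{slot:03x}.{sqn:08x}")  # -> 0x000a.024.00001864
```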
So this is definitely the blocking transaction and I can see the new value for the OBJECT_NAME column below ...
col 1: [ 9] 53 45 53 53 49 4f 4e 20 31
... which is the hexadecimal representation of the ASCII string 'SESSION 1'. Further on still, we'll see it being updated to 'SESSION 2'.
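Decoding the column bytes is straightforward - each pair of hex digits is one ASCII character:

```python
# The col 1 bytes as they appear in the dump.
col_1 = "53 45 53 53 49 4f 4e 20 31"
text = bytes.fromhex(col_1.replace(" ", "")).decode("ascii")
print(text)  # -> SESSION 1
```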
I could wade through that dump all day (and it might be a subject for future posts) but a couple of things should be clear.
1) The redo log file contains plenty of information about the row updates, so if Log Miner is returning a single COMMIT action, either it's not working properly or I'm doing something wrong.
2) The contents of the redo log file might give me clues about the nature of the change that caused the blocking problem, but they won't tell me what the actual SQL statement was.
At this stage, I would say that ASH is starting to look more effective (in those cases where it can be used) than the alternatives being proposed. Remember, the whole point with ASH is that I don't need to predict when these problems might occur, so any solution that needs me to switch on traces is limited.