Earlier, I ran an experiment to see whether we get what I consider to be true read consistency on a table.  Initially, admittedly, I didn’t think it was possible without first locking the entire table, and I still believe that’s the case with updates as I envisioned them initially.

I was, however, thrown for a loop when I discovered that the SCN of the update being performed came after the SCN of an update that had started after it.  That led me to a theory: if an update reaches a row that has been changed since the start of the statement, the read consistency of that row is foiled, and the statement has to restart from the beginning to achieve true read consistency.

Today, I’ll prove that theory either true or false.

We’ll start with a table very similar to the one I created yesterday.  This time, however, I’ve added a few extra columns.
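The original DDL isn’t shown here, but based on the columns referenced later in the post, a sketch of the table might look like this (the table name, datatypes, and the ID column are my assumptions; only the other column names appear in the post):

```sql
-- Hypothetical reconstruction: only DATA, FTS_UPDATE_TS, FTS_UPDATE_SCN,
-- INDEX_UPDATE_TS, and INDEX_UPDATE_SCN are named in the post;
-- everything else here is assumed.
CREATE TABLE kaley.read_consistency_test (
  id               NUMBER,      -- assumed key for the index-driven update
  data             TIMESTAMP,   -- SYSTIMESTAMP captured by each update
  fts_update_ts    TIMESTAMP,   -- timestamp read from KALEY.UPDATED_TIMESTAMP
  fts_update_scn   NUMBER,      -- CURRENT_SCN from V$DATABASE
  index_update_ts  TIMESTAMP,   -- set by the index-driven update
  index_update_scn NUMBER
);

CREATE INDEX kaley.rct_id_ix ON kaley.read_consistency_test (id);
```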

We’ll load it full of data just like before:
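The load script isn’t reproduced here; a minimal stand-in (row count and generator are assumptions) could be:

```sql
-- Assumed row count; any reasonably large table will do for this test.
INSERT INTO kaley.read_consistency_test (id)
SELECT level FROM dual CONNECT BY level <= 1000000;
COMMIT;
```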

We’ll also have a table that will be updated frequently; it will contain one row holding the current timestamp.
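A minimal sketch of that table (the TS column name is an assumption):

```sql
CREATE TABLE kaley.updated_timestamp (ts TIMESTAMP);

INSERT INTO kaley.updated_timestamp VALUES (SYSTIMESTAMP);
COMMIT;
```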

There will be 3 processes at play.

First, a process that will continually be updating the timestamp in the UPDATED_TIMESTAMP table.
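Something along these lines, assuming the single-column table sketched above (the sleep interval is arbitrary):

```sql
-- Keeps the one row in KALEY.UPDATED_TIMESTAMP fresh; the short sleep
-- just avoids spinning flat out.
BEGIN
  LOOP
    UPDATE kaley.updated_timestamp SET ts = SYSTIMESTAMP;
    COMMIT;  -- commit so the other sessions can see the new value
    DBMS_LOCK.SLEEP(0.01);
  END LOOP;
END;
/
```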

Second will be a full table scan update that will record three different pieces of information:

  • SYSTIMESTAMP, the time when the update kicked off.
  • The time listed in the KALEY.UPDATED_TIMESTAMP table.
  • The CURRENT_SCN from the V$DATABASE view.
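One possible shape for that update, using the names from the sketches above (scalar subqueries fetch the timestamp and the current SCN):

```sql
-- Updating every row with no usable predicate forces a full table scan.
UPDATE kaley.read_consistency_test t
   SET t.data           = SYSTIMESTAMP,
       t.fts_update_ts  = (SELECT ts FROM kaley.updated_timestamp),
       t.fts_update_scn = (SELECT current_scn FROM v$database);
```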

Lastly will be a PL/SQL block that will perform intermittent updates on the table using the index, starting about halfway down the table, continuing until the full table scan finally “catches up” with it.  At that point, it’ll hang until the full table scan commits, then it will hurry through the rest of the process and not update anything.
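A rough shape for that block, assuming the ID column and the million-row load from the sketches above (the halfway starting point and the per-row commit cadence are guesses):

```sql
BEGIN
  -- Start about halfway down and walk toward the end via the index.
  FOR i IN 500000 .. 1000000 LOOP
    UPDATE kaley.read_consistency_test
       SET index_update_ts  = SYSTIMESTAMP,
           index_update_scn = (SELECT current_scn FROM v$database)
     WHERE id = i;  -- indexed single-row access
    COMMIT;         -- each committed row is a potential conflict for the FTS
  END LOOP;
END;
/
```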

The plan is to have the script that’s updating the timestamp continually running in the background while I kick off the full table scan update in one session, and almost immediately afterwards, kick off the PL/SQL block in another session.  My theory is this:  Oracle starts the full table scan update.  The index update starts changing rows that would’ve been updated by the full table scan update, and quickly works its way down the line.  Eventually, the full table scan will come to a row that has been updated by the index update and go “drat! my read-consistency is ruined!” since the row has been changed since the start of the full table scan update.  I’m expecting that the query will go through the rest of the table at that point, continuing to lock rows until everything is finally locked, and then go back and re-apply the update to all of the rows.

If this is true, then:

  • I would expect there to be a large discrepancy between the timestamp recorded in the DATA column (systimestamp) vs the timestamp recorded in the FTS_UPDATE_TS column (which would come from KALEY.UPDATED_TIMESTAMP).
  • I would expect there to be practically no discrepancy between the timestamp recorded in the DATA column and the INDEX_UPDATE_TS column.
  • I would expect the full table scan update to have about twice as many logical reads as an uninterrupted update.

One additional piece: last time, I cleared the buffer cache to make the full table scan slower, which I’ll do again here.

Something else I didn’t think of, which would naturally slow down the full table scan update: decreasing the multiblock read count parameter in the session that’s going to perform it.  I put it at about 1/10th of what it was.
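Taken together, the two throttles might look like this (the assumed starting value of 128 is mine; 10g-or-later syntax):

```sql
-- Run as a privileged user before the test:
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- In the session that will run the full table scan update:
-- roughly 1/10th of an assumed default of 128.
ALTER SESSION SET db_file_multiblock_read_count = 12;
```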

Let’s give it a whirl!

Here’s what the table looks like afterwards:

You can see there’s almost a 3 minute difference between the SYSTIMESTAMP (recorded in DATA) and the read-consistent timestamp stored in FTS_UPDATE_TS from KALEY.UPDATED_TIMESTAMP.  So that’s completely as expected.  The index information has thrown a bit of a curveball into my theory, though.

You can see there’s practically no difference between the SYSTIMESTAMP recorded in DATA and the INDEX_UPDATE_TS column, so that’s as expected.  However, notice that when comparing the FTS_UPDATE_SCN to the INDEX_UPDATE_SCN, the FTS_UPDATE_SCN is before any of the index updates???

Update: 2017-05-04

The above piece where FTS_UPDATE_SCN was *before* any of the index updates really threw me for a loop there for a while!  About a month after doing the initial experiment, though, I stumbled across a piece in the documentation:

“Dynamic performance views are based on virtual tables built from database memory structures. Thus, they are not conventional tables stored in the database. Read consistency is not guaranteed for the views because the data is updated dynamically.”

So basically what happened is: Oracle started the update and got the information from the V$DATABASE view, then went on updating.  The read consistency was foiled by the index update, so Oracle kept “chugging,” continuing to lock rows.  Then, when Oracle restarted the update, it said “peh, I’m not even going to bother trying to be read-consistent with V$DATABASE, because it’s not guaranteed to be read-consistent anyways.”

Apparently a great deal of overhead goes into making things read-consistent, and much of the data in V$ views changes *very* rapidly; so, in the interest of efficiency, Oracle says “we’ll omit all the locks and latches that would be needed to make sure these views are read-consistent.”

Mystery solved!  Although I might do a third post in this series that explores other methods of getting the current SCN (such as DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER).
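For reference, that alternative is a one-liner:

```sql
-- Returns the current SCN via DBMS_FLASHBACK rather than V$DATABASE.
SELECT dbms_flashback.get_system_change_number FROM dual;
```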