Reading Jonathan Lewis’ book, or Martin Widlake’s blog, you’ll see they both talk about “adjusted” or “modified” multi-block read counts.

What is this number?  Where does it come from?

Basically, what’s previously been said about this number is that Oracle “adjusts” your multi-block read count to account for the fact that a multi-block read takes longer than a single-block read.

A Little Background

Oracle has two ways of performing a physical IO…either single-block reads (reading each block one at a time) or multi-block reads (reading multiple blocks in a single read).

Typically, “traditional index reads” (range scan, unique scan, index full scan) are done one block at a time (single-block reads), and full table scans are done multiple blocks at a time (multi-block reads).  I say typically…there are cases where indexes and index access to a table can be read using multi-block reads, and there are cases where reading a portion of a table might only be done with single-block reads…but if we ignore the finer details and paint with a broad stroke for simplicity’s sake, we’ll just say that table scans are multi-block reads, and traditional index use is single-block reads.

So when Oracle comes up with the cost to full-scan a table, it recognizes that multi-block reads will get through the data more quickly than single-block reads.  If we have 1000 blocks to read, Oracle can use multi-block reads to process them more efficiently than single-block reads.  Oracle has a parameter called db_file_multiblock_read_count that determines the maximum number of blocks that can be read in one multi-block read (Oracle sets this itself and recommends not touching it in later (11g+) versions; it’s usually set to whatever 1 MB worth of blocks is).

So if DB_FILE_MULTIBLOCK_READ_COUNT is set to 128, then Oracle is capable of reading up to 128 blocks in a single read, as opposed to just 1 block.  Notice that reading the table 128 blocks at a time will not be 128x faster than reading the table 1 block at a time.
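
(If you want to check what your own database is using, the parameter is visible through V$PARAMETER, or just run SHOW PARAMETER in SQL*Plus.)

-- Check the current multi-block read count setting
SELECT name, value
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';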

Which brings us to costing!

When Oracle gives us the cost of a specific plan, Oracle is trying to estimate the amount of time it *thinks* the query will take.  The cost will only be accurate if Oracle has a good idea of “what’s going on.”  Part of the cost comes from how long the plan will burn CPU, and part of the cost comes from how long Oracle will spend doing I/O (and btw, Oracle doesn’t differentiate between costing a “physical” IO and a “logical” IO with no physical IO component…it simply estimates “IO”.  Or, alternatively, you could say that the CBO assumes all IO will be physical IO).

So.  Let’s “pretend” that we have a table with a billion blocks, and tell Oracle to cost a full-scan on the table.
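
If you want to follow along at home, here’s (roughly) how you can fake it: create a small dummy table, use DBMS_STATS.SET_TABLE_STATS to tell the optimizer it has a billion blocks, and explain a full scan of it.  (The table name and the row count below are just made up for illustration.)

-- Create a throwaway table and lie to the optimizer about its size
CREATE TABLE big_fake_tab (id NUMBER, padding VARCHAR2(100));

BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'BIG_FAKE_TAB',
    numrows => 100000000000,   -- made-up row count
    numblks => 1000000000      -- one billion blocks
  );
END;
/

EXPLAIN PLAN FOR
SELECT /*+ FULL(t) */ * FROM big_fake_tab t;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);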

 

So if we look at the “Cost” column, Oracle estimates a cost of around 173 million.  Also notice that Oracle expects that roughly 1% of this cost will be CPU based…meaning 99% of it is expected to come from IO.  We can actually see the exact values if we query the underlying SYS.PLAN_TABLE$ directly.
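
If you want to pull those exact values yourself, a query along these lines will do it (in current versions, PLAN_TABLE is just a public synonym pointing at SYS.PLAN_TABLE$; the ID = 0 filter grabs the top line of the plan):

-- COST, IO_COST and CPU_COST for the statement we just explained
SELECT id, operation, cost, io_cost, cpu_cost
FROM   plan_table
WHERE  id = 0;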

 

 

So the total “COST” is a sum of CPU_COST and IO_COST.

CPU_COST is the number of CPU operations Oracle thinks it will do (and what exactly counts as a CPU operation?  I don’t know).

IO_COST is the estimated I/O workload, expressed (roughly speaking) in units of single-block reads.

These two numbers (CPU_COST and IO_COST) make up the COST.  The CPU_COST, in the end, gets converted to IO_COST units, and then gets added to IO_COST to form the total cost (or just plain cost).

So

COST = (CPU_COST / [some number representing how many CPU operations can be done in the time of a single IO]) + IO_COST
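
Just to make that conversion concrete: the exact internals aren’t exposed anywhere I can query, so every number below is made up purely for illustration, but the conversion is commonly described in terms of the CPU speed statistic (CPUSPEEDNW here) and the single-block read time (SREADTIM), i.e. how many CPU operations fit into the time of one single-block read.

-- Hypothetical numbers, purely to show the shape of the conversion:
--   CPU_COST   = 2,000,000,000  (CPU operations, from the plan)
--   CPUSPEEDNW = 2,000          (millions of CPU operations per second)
--   SREADTIM   = 12             (milliseconds per single-block read)
-- CPU operations per single IO = CPUSPEEDNW * 1000 * SREADTIM
SELECT ROUND(2000000000 / (2000 * 1000 * 12)) AS cpu_cost_in_io_units
FROM   dual;   -- roughly 83 "IO equivalents" to add onto IO_COST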

Now, for this exercise, my DB_FILE_MULTIBLOCK_READ_COUNT was set to 128.  But you can see:

1,000,000,000 blocks / 128 (db_file_multiblock_read_count) is 7,812,500…which is *WAY* lower than our predicted IO cost (173,177,085).  So why are these numbers so different?  Well, it comes down to a couple of concepts.

  1. A multi-block read of 128 blocks will not last the same amount of time as a single-block read. (as mentioned earlier)
  2. Not every read done to this table will be a 128-block read.  (Oracle won’t perform physical I/O on blocks that are already in the buffer cache, an extent may not have 128 blocks remaining in it to be read, etc.)

So Oracle “adjusts” this by a (seemingly) arbitrary number.  It makes sense that this number wouldn’t be as big as 128, right? (A 128-block multiblock read won’t read data 128x faster than a single-block read…said differently, a 128-block read takes longer than a single-block read, so each of its blocks isn’t coming back at single-block speed.)  But it shouldn’t be as small as 1, either (reading 128 blocks one block at a time will take longer than reading 128 blocks in a single multi-block read).  So our adjusted, or “modified,” multiblock read count should be somewhere between 1 and 128 (our actual db_file_multiblock_read_count).  This adjusted multiblock read count represents the ratio of how much faster we can read an arbitrary amount of data using multi-block reads than we could read the same amount of data using single-block reads.

And btw, feel free to read and re-read those last few paragraphs many times over…there’s a lot to break down…make sure you get a clear mental picture.  But if it’s not totally clear, here’s a picture you can try to envision.

Let’s say we’ve got a table.  It’s 64,000 blocks.

We could read this table using single block reads. It would take 64,000 single block reads.

Or, we could read this table using multi-block reads.  Let’s assume everything aligned exactly perfectly, and we were able to read the 64,000 blocks using multiblock reads, 128 blocks at a time each time.  It would take 64,000 blocks / 128 = 500 multiblock reads.

Let’s suppose we start two different processes with the task of reading 64,000 blocks.  One uses single-block reads.  One uses multiblock reads.  We’re racing them…we start them both at the same time.

When will the multiblock read process complete?

Well…it will probably finish first, before the single-block process completes.

What block number will the single-block process be on when the multi-block process finishes?  Well, the single-block process will probably be further along than 500 blocks.  After all, the multi-block process did 500 reads…and those reads would be longer than the single-block reader, since he’s reading more data with each read.  So it would be more than 500.  But at the same time, it would be less than 64,000…the total number of blocks in the table.

So that’s what the adjusted multi-block read count is supposed to represent…the speed ratio between reading a given amount of data using single-block reads vs multi-block reads.  Most likely, it will always be greater than 1 (since we should be able to read data faster using multi-block reads) and less than the actual DB_FILE_MULTIBLOCK_READ_COUNT parameter (since a multi-block read will, on average, take longer than a single-block read).

Hopefully this is clear by now (moving on…feel free to re-read as many times as necessary).

So here, in our case, Oracle says the IO cost of our query will be 173,177,085.  So if Oracle expects to read 1 billion blocks, and the cost is 173,177,085, then our adjusted multiblock read count would be: 1,000,000,000 blocks / 173,177,085 = roughly 5.774436.

So 5.774436 is > 1, and it’s < 128, as expected.  We’re anticipating that, with a cap of 128 blocks in a single read, we can read data approximately 5.77 times faster using multi-block reads than we could read the same amount of data using single-block reads.
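
Or, letting the database do that arithmetic for us:

-- blocks in the table / IO_COST of the full scan = the adjusted MBRC
SELECT ROUND(1000000000 / 173177085, 6) AS adjusted_mbrc
FROM   dual;   -- 5.774436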

So that whole introduction gets us to the crux of this blog post…how the heck did Oracle come up with 5.774436 as our adjusted multi-block read count?

Notice, by the way, that if you’ve been following along and doing the exercise on your own database, there’s a possibility that you *DIDN’T* come up with the same cost as me…there’s a possibility that you DON’T have 5.774436 as your adjusted multi-block read count…even if you set your DB_FILE_MULTIBLOCK_READ_COUNT parameter to be the same as mine.

So what’s going on here?  Why is this 5.774436 on my machine? Different on your machine? Where did Oracle come up with this number?

 

How Did Oracle Calculate My Adjusted MBRC?

So there are two different types of system statistics that Oracle keeps about your whole database.  There’s “workload” statistics, and there’s “noworkload” statistics.  And if you’re on Exadata, there’s even a specific type of system statistics catered to Exadata, but we’ll not talk about that here.

You can only ever use one or the other at a given point in time…EITHER workload statistics OR noworkload statistics.  Workload statistics have to be gathered while your database is undergoing its normal workload.  If you don’t gather workload statistics, Oracle will use noworkload statistics.  Which type of statistics you should be using is a hotbed of debate over which many holy wars have been fought; I’m not interested in going down that path.  What I will say is that for my given system at this point in time, I’m using noworkload statistics.

How can you tell which one your database is using?  Well, query SYS.AUX_STATS$.
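
Something like this will show you everything the optimizer has to work with:

-- System statistics as the optimizer sees them.
-- If MBRC, SREADTIM, MREADTIM, etc. come back NULL, you're on noworkload statistics.
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN'
ORDER  BY pname;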

 

Here, you can see the only populated values are CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED (oh, and FLAGS).  Had I gathered workload statistics, you wouldn’t see a bunch of nulls for the remaining rows in PVAL1, you would see numbers.  Basically, for noworkload statistics, Oracle measures the CPU speed (CPUSPEEDNW), defaults the IO seek time (IOSEEKTIM) to 10 milliseconds, and defaults the IO transfer speed (IOTFRSPEED) to 4,096 bytes per millisecond.

Notice that MREADTIM (multi-block read time) isn’t populated.  Ideally, though, this would be the amount of time it takes to perform a multi-block read.

As an example of what I mean here, let’s go back to my system.  My adjusted multi-block read count is 5.774436.  Meaning, on average, I can read a given amount of data using multi-block reads approximately 5.774436x faster than I could read that same amount of data using single-block reads.

The FORMULA for the adjusted multi-block read count is:

AMBRC = MBRC * SREADTIM (the time it takes to do a single-block read) / MREADTIM (the time it takes to do a multi-block read)

But as we know, we’re using NOWORKLOAD statistics, so those values aren’t captured in our system…they’re calculated.

MREADTIM is calculated as:

MREADTIM = IOSEEKTIM + (db block size * MBRC / IOTFRSPEED)

SREADTIM is calculated as:

SREADTIM = IOSEEKTIM + (db block size / IOTFRSPEED)
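
Just to put concrete numbers on those two read times, here’s what they work out to with the noworkload defaults (IOSEEKTIM = 10, IOTFRSPEED = 4096), an 8kb block size, and an MBRC of 128:

-- SREADTIM = 10 + 8192/4096      = 12 ms per single-block read
-- MREADTIM = 10 + 8192*128/4096  = 266 ms per 128-block read
SELECT 10 + 8192 / 4096       AS sreadtim,
       10 + 8192 * 128 / 4096 AS mreadtim
FROM   dual;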

So let’s plug these two formulas in to the Adjusted multi-block read formula:

First, we’ll restate the AMBRC formula with no changes:
AMBRC = MBRC  * SREADTIM (the time it takes to do a single-block read) / MREADTIM (time it takes to do a multi-block read)

Next, we’ll plug in the value for MREADTIM:
AMBRC = MBRC  * SREADTIM / (IOSEEKTIM + (db block size * MBRC  / IOTFRSPEED))

Next, we’ll plug in the formula for SREADTIM:
AMBRC = MBRC  * (IOSEEKTIM + (db block size / IOTFRSPEED)) / (IOSEEKTIM + (db block size * MBRC  / IOTFRSPEED))

Let’s go ahead and plug in all the “default” values for noworkload statistics, assuming an 8kb block size.

We know the IOSEEKTIM is defaulted to 10:
AMBRC = MBRC  * (10 + (db block size / IOTFRSPEED)) / (10 + (db block size * MBRC  / IOTFRSPEED))

We’re assuming an 8kb (8,192 byte) block size:
AMBRC = MBRC  * (10 + (8192 / IOTFRSPEED)) / (10 + (8192 * MBRC  / IOTFRSPEED))

IOTFRSPEED is defaulted to 4096:
AMBRC = MBRC  * (10 + (8192 / 4096)) / (10 + (8192 * MBRC  / 4096))

For simplicity’s sake, let’s change this: (8192 * MBRC / 4096) …to this: (8192 / 4096 * MBRC) …same expression, right? Just phrased differently.
AMBRC = MBRC * (10 + (8192 / 4096)) / (10 + (8192 / 4096 * MBRC))

We’ll simplify 8192/4096 to 2:
AMBRC = MBRC  * (10 + 2) / (10 + (2 * MBRC))

Add 10+2:
AMBRC = 12MBRC / (10 + 2MBRC)

That’s as far down as I know how to reduce that formula.  So let’s plug in our known MBRC (128) and see if it gets us anywhere near 5.774436

12*128 / (10 + 2(128)) =

1536 / (10 + 256) =

1536 / 266 =

5.77443609022556390977443609022556…

Nailed it.

So, let’s define a quick function for finding AMBRC based on block size (for the more common block sizes):

8kb:  AMBRC = 12MBRC / (10 + 2MBRC)
16kb: AMBRC = 14MBRC/(10 + 4MBRC)
32kb: AMBRC = 18MBRC/(10 + 8MBRC)
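
If you’d rather have that as an actual function, here’s a minimal sketch (the function name and the parameter defaults are mine, and it assumes noworkload statistics with the default IOSEEKTIM and IOTFRSPEED):

-- AMBRC = MBRC * SREADTIM / MREADTIM, using the noworkload formulas above
CREATE OR REPLACE FUNCTION ambrc (
  p_mbrc       NUMBER,              -- db_file_multiblock_read_count
  p_block_size NUMBER DEFAULT 8192, -- db block size in bytes
  p_ioseektim  NUMBER DEFAULT 10,   -- noworkload default seek time (ms)
  p_iotfrspeed NUMBER DEFAULT 4096  -- noworkload default transfer speed (bytes/ms)
) RETURN NUMBER IS
BEGIN
  RETURN p_mbrc * (p_ioseektim + p_block_size / p_iotfrspeed)
                / (p_ioseektim + p_block_size * p_mbrc / p_iotfrspeed);
END ambrc;
/

-- e.g. SELECT ambrc(128) FROM dual;   -- returns roughly 5.774436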

If these formulas don’t reproduce your adjusted multiblock read count, it could be because:

  • Your block size isn’t 8kb, 16kb, or 32kb (I’ve never run across a database with a 2kb or 4kb block size before)
  • You’ve gathered workload system statistics, or exadata statistics, etc.

An interesting point: the smaller your block size, the more difference a large DB_FILE_MULTIBLOCK_READ_COUNT makes (for an 8kb block size, you can get an AMBRC of around 6 before it starts to plateau).  A larger block size means you’ll end up with a smaller AMBRC, so a larger DB_FILE_MULTIBLOCK_READ_COUNT makes less of a difference (i.e. if your block size is 16kb, your AMBRC caps out at around 3.5, from what I can see).  This makes sense: with a bigger block size, even a “single-block” read is already a fairly large read, so there’s less seek overhead left for multi-block reads to eliminate.
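
And you can see the plateau for yourself by running the 8kb and 16kb formulas across a range of MBRC values (this query is just for illustration):

-- AMBRC for increasing MBRC values, 8kb vs 16kb block sizes
SELECT mbrc,
       ROUND(12 * mbrc / (10 + 2 * mbrc), 3) AS ambrc_8k,
       ROUND(14 * mbrc / (10 + 4 * mbrc), 3) AS ambrc_16k
FROM  (SELECT POWER(2, LEVEL + 2) AS mbrc
       FROM   dual
       CONNECT BY LEVEL <= 6);   -- MBRC = 8, 16, 32, 64, 128, 256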