There are lots of resources that will tell you about the “access” and “filter” lines you see in the Predicates section of your execution plan.  Most resources will tell you: “access is more efficient than filter.” And, generally, that’s true.  Access tells us where to go in our index (thereby eliminating rows before we ever reach them), whereas filter tells us, once we’ve arrived at a row, that it’s no good and must be discarded.

But you may have noticed times when there seems to be an echo in your plan: the same predicate is repeated in both the FILTER and ACCESS sections of your execution plan.  What then?  Why does Oracle need both?  Shouldn’t just one be enough?

Let’s start by making a table with employees, their hire dates, and their favorite colors.
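The original CREATE TABLE listing isn’t reproduced here, so here’s a hedged sketch. The table name (EMPLOYEES) and the datatypes are assumptions; only the two indexed column names appear later in the post.

```sql
-- Sketch only: table name and datatypes are assumptions.
CREATE TABLE employees (
  employee_id             NUMBER,
  employee_hire_date      DATE,
  employee_favorite_color VARCHAR2(30)
);

-- The original index: range column (hire date) first, equality column second.
CREATE INDEX employees_hd_fc_idx
  ON employees (employee_hire_date, employee_favorite_color);
```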


We’ll gather stats.
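Something along these lines (the table name is assumed):

```sql
-- Gather optimizer statistics for the current schema's table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'EMPLOYEES');  -- table name assumed
END;
/
```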

Suppose we write a query to figure out how many employees were hired on May 1st, 2017 who have a favorite color of red.
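The query itself isn’t shown above; assuming a table named EMPLOYEES with the two columns discussed in this post, it would look something like this:

```sql
-- Assumed table/column names.  Hire dates carry a time component,
-- hence the BETWEEN covering the whole day.
SELECT COUNT(*)
FROM   employees
WHERE  employee_hire_date
         BETWEEN TO_DATE('2017-05-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
             AND TO_DATE('2017-05-01 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
AND    employee_favorite_color = 'Red';
```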

Let’s run explain plan on this query:
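A typical way to get a plan with its Predicate Information section (the table name and query shape here are assumptions):

```sql
EXPLAIN PLAN FOR
SELECT COUNT(*)
FROM   employees
WHERE  employee_hire_date
         BETWEEN TO_DATE('2017-05-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
             AND TO_DATE('2017-05-01 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
AND    employee_favorite_color = 'Red';

-- Display the plan, including the access/filter predicate section.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```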

SWEET! Our query uses *ALL* of our predicates as ACCESS PREDICATES!  And we know ACCESS is more efficient than FILTER.  Woo-hoo! Pat ourselves on the back, because we’re awesome at indexing and making things run super-efficiently.  Buuuut what’s up with EMPLOYEE_FAVORITE_COLOR='Red' being listed twice…once under access, once under filter?  Is there an echo in my explain plan?

Eh, who cares, right?  It’s an access predicate, therefore it’s efficient, we win at life.  Maybe that’s just Oracle being super-cautious, double-checking itself, making doubly sure that all rows do in fact meet the filter criteria. 😉

Here’s why you should care:  Even though it’s listed as an access predicate, you’re not getting all the efficiencies you might expect from a “regular” access predicate.  The fact that Oracle listed it twice, in both the filter section and the access section, tells us that something is up.   One way we can tell it’s not giving us the same efficiencies as a “regular” access predicate: if we run our query as it stands, it takes 323 session logical reads, but if we remove the EMPLOYEE_FAVORITE_COLOR predicate, it uses only 321 session logical reads.  (You read that right: removing the filter makes our query more efficient by 2 session logical reads.)

The whole reason access predicates are deemed more efficient is that they tell Oracle where to start (and stop) an index range scan.  Let’s suppose we try a new query.

The important difference is that we’re no longer using a BETWEEN clause…we’re using an equality predicate instead.  Notice that this new plan has no filters in its predicate information.  Everything is access.  This query takes a whopping 3 logical reads to complete.
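Assuming the same table and column names as before, the equality-only version would look something like this:

```sql
-- Both predicates are now equalities, so both can fully bound the
-- index range scan (table/column names assumed).
SELECT COUNT(*)
FROM   employees
WHERE  employee_hire_date = DATE '2017-05-01'
AND    employee_favorite_color = 'Red';
```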

Both queries have the same columns in their access predicates, so how could one be so much more efficient than the other?  Let’s look at some of the data:

Suppose this were the index (or, at least, a small portion of it).
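The original illustration isn’t reproduced here. A purely hypothetical slice of the index, sorted by hire date first and then by favorite color within each timestamp, might look like this (the specific values below are invented for illustration):

```
EMPLOYEE_HIRE_DATE      EMPLOYEE_FAVORITE_COLOR
2017-05-01 00:00:00     Blue
2017-05-01 00:00:00     Fuchsia
2017-05-01 03:41:17     Green
2017-05-01 09:45:02     Red
2017-05-01 14:22:41     Blue
2017-05-01 23:58:03     Red
```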




With a predicate like:

employee_hire_date = date '2017-05-01' and employee_favorite_color = 'Red'

It’s easy for Oracle to swoop in and grab all (if any) necessary rows (in this case, there are none).

With the BETWEEN, though, Oracle has to scan all rows whose hire dates fall between 2017-05-01 00:00:00 and 2017-05-01 23:59:59.

Well, the index is, first and foremost, sorted on EMPLOYEE_HIRE_DATE.  So we have to start with the earliest date (2017-05-01 00:00:00) and scan down to the latest (2017-05-01 23:59:59).

That’s going to include a lot of records that don’t meet the EMPLOYEE_FAVORITE_COLOR='Red' criteria.  So we have to filter those out.  Hence the filter("EMPLOYEE_FAVORITE_COLOR"='Red') in the execution plan.

“But Kaley!” you say, “it was also listed as an access predicate!! How is it an access predicate as well?”

Well, normally, Oracle would have to start at the very first record at 2017-05-01 00:00:00.

But, because we specified an additional column (EMPLOYEE_FAVORITE_COLOR), we might be wasting a few buffer visits if we started with the very first 2017-05-01 00:00:00 entry.  So in our example, because we specified COLOR='Red', when we arrive at the second branch block…

….even though the topmost block is the first block containing midnight of May 1st, we know that we don’t have to visit it, since the last entry on that block is Fuchsia, and “Red” comes alphabetically after that.  So we’re able to start our scan from just a smidgen narrower starting point.

Most of the time, though, this isn’t saving you very much.  It depends on how the data is laid out, but often these kinds of access predicates save you only a tiny amount of index scanning.  The majority of the records are being taken care of by the filter operation.  But since we’re able to possibly eke out an extra buffer (or two, or three, or ten…again, it just depends on how the data’s laid out), and since it does, in fact, guide the starting point of the index range scan, it is technically an access predicate.  Albeit not as good as it could be.

What’s the lesson?  Any time you’re using an index for a combination of an equality predicate and a range predicate (BETWEEN, >, >=, <, <=, LIKE…anything index-accessible that’s not an equality), you’re usually much better off putting your equality predicates first in the index.  In other words, the index column order should have been flipped:
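In SQL terms, a flipped index would look something like this (the index and table names are assumptions):

```sql
-- Equality column first, range column second.
CREATE INDEX employees_fc_hd_idx
  ON employees (employee_favorite_color, employee_hire_date);
```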

Any columns after a BETWEEN, >, <, >=, <=, or LIKE column will only get the limited access-predicate-plus-filter-predicate combination we saw above, which usually means FILTER is doing most of the work, not ACCESS.

With the new index created, our plan gets much more efficient.  No more filter predicates, all access predicates.  And our plan takes 58 session logical reads, as opposed to 323 session logical reads with the old index.