ARC reactor also caches?

The fictional arc reactor in Iron Man’s suit was the epitome of coolness for us geeks. In the latest edition of Oracle Magazine, Iron Man is on the cover, as well as the other 5 Avengers in a limited edition series (see below).

Just about the same time, I am reading up on the ARC (Adaptive Replacement Cache) that is adopted in ZFS. I am learning in depth how ZFS caching works, as opposed to the more popular LRU (Least Recently Used) caching algorithm that is used in most storage cache memory. Having said that, most storage vendors employ a modified LRU algorithm, with the intention of keeping the most recently accessed pages in memory as long as possible. This is true in NetApp’s Data ONTAP (maybe not ONTAP GX, with which I have little experience) and EMC FLARE OE. ONTAP goes further by keeping the most frequently accessed pages permanently in memory. Storage folks would describe the most recently accessed pages as exhibiting temporal locality, while the most frequently accessed pages are a different beast altogether.

Why is ZFS using ARC and what is ARC?

In the previous paragraph, I referred to the most recently accessed pages in the cache and the most frequently accessed pages in the cache. Here is an idea of how an LRU algorithm works in a read cache:

Think of LRU as a FIFO (first in, first out) process.
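To make the FIFO-like behaviour concrete, here is a minimal, hypothetical Python sketch of an LRU read cache (a toy model, not how any storage array actually implements its cache):

```python
from collections import OrderedDict

# A minimal LRU read cache sketch. On a hit, the page moves to the
# most-recently-used end; on a miss with a full cache, the page at the
# least-recently-used end is evicted first -- FIFO-like behaviour.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # oldest (LRU) entry first

    def get(self, key):
        if key not in self.pages:
            return None
        self.pages.move_to_end(key)   # refresh recency on a hit
        return self.pages[key]

    def put(self, key, value):
        if key in self.pages:
            self.pages.move_to_end(key)
        self.pages[key] = value
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used

cache = LRUCache(3)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
cache.get("a")            # "a" becomes most recently used
cache.put("d", "D")       # evicts "b", the least recently used
print(list(cache.pages))  # → ['c', 'a', 'd']
```

The only bookkeeping is recency of access — nothing tracks how often a page has been read, which is exactly the gap discussed next.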

These 2 types of localities (most recently accessed and most frequently accessed) are not necessarily the same thing. In the popular LRU caching algorithm, the most recently used pages are cached and preferred over the most frequently used pages. Therefore, the likely more relevant, most frequently accessed pages may be discarded from the cache when a large set of sequential pages is read into the cache, even though those recently read sequential pages contribute little to read performance. In this case, there is a cache hit, but the cache hit is not very useful in the context of why the read cache exists in the first place.
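This scan-pollution effect is easy to demonstrate with a toy simulation (hypothetical page names and a deliberately tiny 4-page cache, purely for illustration):

```python
from collections import OrderedDict

# Hypothetical workload: page "hot" is read constantly, then a large
# sequential scan streams through the LRU cache once.
CAPACITY = 4
cache = OrderedDict()

def touch(page):
    if page in cache:
        cache.move_to_end(page)        # cache hit: refresh recency
    else:
        cache[page] = True             # cache miss: insert page
        if len(cache) > CAPACITY:
            cache.popitem(last=False)  # evict least recently used

for _ in range(100):
    touch("hot")                       # the genuinely valuable page

for block in range(10):
    touch(f"scan-{block}")             # one-off sequential reads

print("hot" in cache)  # → False: the scan flushed the hot page
```

One pass of sequential reads is enough to push the most frequently used page out, because LRU only sees recency.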

To counter the disadvantages of LRU, there are also variants of LFU (Least Frequently Used) caching algorithms, but LFU has its own weaknesses. The Adaptive Replacement Cache (ARC) balances the idiosyncrasies of both LRU and LFU, improving the quality of the cache hits rather than the quantity, so that the most relevant pages stay in the cache. Typically ARC takes up a 10-15% time overhead to set up, but the average cache hit relevance is about 2x better than the LRU/LFU algorithms.
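One classic LFU weakness mentioned above can also be shown with a toy sketch (a naive, hypothetical LFU — real implementations add aging to counter exactly this):

```python
from collections import Counter

# A naive LFU sketch: eviction picks the page with the lowest access
# count. A page that was hot long ago keeps its high count forever, so
# it can never be displaced, even if it is never read again.
CAPACITY = 3
cache, counts = set(), Counter()

def touch(page):
    counts[page] += 1
    if page in cache:
        return
    if len(cache) == CAPACITY:
        victim = min(cache, key=lambda q: counts[q])  # evict least frequently used
        cache.discard(victim)
    cache.add(page)

for _ in range(50):
    touch("old-hot")               # heavily read in the past, then goes cold

for page in ["x", "y", "z", "w", "v"]:
    touch(page)                    # current working set keeps churning

print("old-hot" in cache)  # → True: the stale page is never evicted
```

So LRU is blind to frequency and naive LFU is blind to recency — which is the gap ARC sets out to close.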

ARC does this by splitting the cache mechanism into 2 lists. One list tracks the most frequently used (MFU) pages and the other tracks the most recently used (MRU) pages. A simplified diagram of the ARC implementation is shown below:

In the diagram above, there are 2 additional lists at both ends of the cache directory/mechanism, which are the “ghost lists” for the MFU and MRU respectively. Even when a page is discarded from the cache from either the MFU or the MRU end, it is still tracked by the respective ghost list.

The reason for the ghost lists keeping track of discarded pages is that the likelihood of recalling a discarded page is high. When a discarded page is recalled, this constitutes a phantom cache hit, and the MRU or MFU cache list adapts to include the recalled page by growing its target size by one. The diagram below shows an example of the MRU/MFU “count” increasing by one to include a recalled discarded page. Thus the cache lists (MRU and MFU) adapt, growing or shrinking, to the changes in how pages are being discarded and recalled. Hence, the adaptive nature of ARC.
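For the curious, the adaptation described above can be sketched in code. This is a condensed, hypothetical rendering of the original ARC algorithm (t1/t2 as the MRU/MFU sides of the cache, b1/b2 as their ghost lists), not the actual ZFS implementation; data values are omitted and only keys are tracked, to keep the sketch short:

```python
from collections import OrderedDict

# Condensed ARC sketch. A phantom hit in ghost list b1 grows the target
# size `p` of the recency side t1; a phantom hit in b2 shrinks it --
# the "adaptive" behaviour described above.
class ARC:
    def __init__(self, c):
        self.c, self.p = c, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # real cache lists
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost lists

    def _replace(self, in_b2):
        # Evict from t1 or t2, remembering the victim in the ghost list.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = True
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = True

    def access(self, k):
        if k in self.t1 or k in self.t2:           # real cache hit
            self.t1.pop(k, None); self.t2.pop(k, None)
            self.t2[k] = True                      # promote to frequent side
        elif k in self.b1:                         # phantom hit: grow t1 target
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False); del self.b1[k]
            self.t2[k] = True
        elif k in self.b2:                         # phantom hit: shrink t1 target
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True); del self.b2[k]
            self.t2[k] = True
        else:                                      # complete miss
            total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
            if len(self.t1) + len(self.b1) == self.c:
                if len(self.t1) < self.c:
                    self.b1.popitem(last=False)    # trim ghost list b1
                    self._replace(False)
                else:
                    self.t1.popitem(last=False)    # b1 empty: drop t1's LRU
            elif total >= self.c:
                if total == 2 * self.c:
                    self.b2.popitem(last=False)    # trim ghost list b2
                self._replace(False)
            self.t1[k] = True                      # new page enters the recency side
```

Calling `ARC(4).access(page)` over a workload shows pages read once sitting in t1 while repeatedly read pages migrate to t2, with `p` shifting as ghost hits occur — a sketch of the balancing act, not production code.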

ZFS employs a modified version of the patented true ARC concept as described by the IBM creators. Some differences are listed below (which I got from this link – courtesy of c0t0d0s0.org):

  • the ZFS ARC is variable in size and can react to the available memory. It can grow in size when memory is available, or decrease in size when memory is needed for other things
  • the ZFS ARC can work with multiple block sizes. The original implementation assumes an equal size for every block
  • you can lock pages in the cache to exempt them from eviction. This prevents the cache from evicting pages that are currently in use. The original implementation does not have this feature, thus the algorithm to choose pages for eviction is slightly more complex in the ZFS ARC. It chooses the oldest evictable page for eviction.

While the ARC implementation lives in memory, ZFS also extends it further with the Level 2 ARC (L2ARC), utilizing faster disks and SSDs as an extension of the ARC read cache (aka Readzilla).

The ARC caching algorithm is, in the view of many, a better-performing algorithm for read caching, and the IBM DS8000 is one of the few storage platforms that employs it. ZFS employs a modified version of ARC, and it certainly has a leg up over its competition.

Perhaps ZFS’s ARC is just as cool as Iron Man’s ARC reactor. Ain’t that something?

About cfheoh

I am a technology blogger with 20+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objective of getting readers to *know the facts*, and use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association) and as of October 2013, I have been appointed as the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I was previously the Chairman of SNIA Malaysia until Dec 2012. I have recently joined Hitachi Data Systems as an Industry Manager for Oil & Gas in Asia Pacific. The position does not require me to be super-technical (which is what I love) but it helps develop another facet of my career, which is building communities and partnerships. I think this is crucial and more wholesome than just being technical alone. Given my present position, I am not obligated to write about HDS and its technology, but I am indeed subject to the Social Media Guidelines of the company. Therefore, I would like to make a disclaimer that what I write is my personal opinion, and mine alone. I am responsible for what I say and write, and this statement indemnifies my employer from any damages.
