ARC reactor also caches?

The fictional arc reactor in Iron Man’s suit was the epitome of coolness for us geeks. In the latest edition of Oracle Magazine, Iron Man is on the cover, along with the other five Avengers in a limited edition cover series.

Just about the same time, I am reading up on the ARC (Adaptive Replacement Cache) that is adopted in ZFS. I am learning in depth how ZFS caching works, as opposed to the more popular LRU (Least Recently Used) caching algorithm used in most storage cache memory. Having said that, most storage vendors employ a modified LRU algorithm, with the intention of keeping the most recently accessed pages in memory for as long as possible. This is true in NetApp’s Data ONTAP (maybe not ONTAP GX, with which I have little experience) and EMC FLARE OE. ONTAP goes further by keeping the most frequently accessed pages permanently in memory. In caching parlance, favouring the most recently accessed pages exploits temporal locality, while spatial locality refers to reading pages adjacent to those just accessed.

Why is ZFS using ARC and what is ARC?

In the previous paragraph, I referred to the most recently accessed pages in the cache and the most frequently accessed pages in the cache. Here is an idea of what an LRU algorithm looks like in a read cache:

Think of LRU as a FIFO (first in, first out) process.
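To make the idea concrete, here is a minimal sketch of an LRU read cache in Python. It is my own illustration (not any vendor's code), using an ordered dictionary as the recency queue:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU read cache: evicts the least recently used page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # keys ordered oldest -> newest

    def read(self, page_id, fetch_from_disk):
        if page_id in self.pages:
            # Cache hit: move the page to the "most recently used" end.
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        # Cache miss: fetch, insert, and evict the oldest page if full.
        data = fetch_from_disk(page_id)
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # drop the least recently used page
        return data
```

Every access pushes a page to the "most recently used" end of the queue, and when the cache is full the page at the opposite end is dropped, just like a FIFO that gets reshuffled on every hit.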

These two access patterns (most recently used and most frequently used) are not necessarily the same thing. In the popular LRU caching algorithm, the most recently used pages are cached and preferred over the most frequently used pages. Therefore, the likely more relevant, most frequently accessed pages may be discarded from the cache when a large set of sequential pages is read in, even though those sequential pages contribute little to future read performance. In that case there is a cache hit, but the hit is not very useful in the context of why the read cache exists in the first place.
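Using the sketch above, here is a hypothetical demonstration of that weakness: a page that has been read a hundred times is evicted the moment one sequential scan sweeps through a small cache:

```python
cache = LRUCache(capacity=4)
fetch = lambda page_id: f"data-{page_id}"   # stand-in for a disk read

for _ in range(100):
    cache.read("hot-page", fetch)            # very frequently accessed page

for page_id in range(1, 5):                  # one sequential scan of 4 pages
    cache.read(page_id, fetch)

print("hot-page" in cache.pages)             # False -- the hot page was evicted
```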

To counter the disadvantages of LRU, there are also variants of LFU (Least Frequently Used) caching algorithms, but LFU has its own weaknesses. The Adaptive Replacement Cache (ARC) balances the idiosyncrasies of both LRU and LFU, improving the quality of the cache hits rather than just the quantity, so that the most relevant pages stay in the cache. ARC reportedly adds a modest bookkeeping overhead (the figure I have seen quoted is 10-15%), but the average cache hit relevance can be about 2x better than the LRU or LFU algorithm.
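For completeness, an equally minimal LFU sketch (again my own illustration, not a production algorithm): each page carries a reference count and the page with the lowest count is evicted. Its weakness is the mirror image of LRU's: a page that was hot long ago keeps its high count and can crowd out pages that are relevant right now.

```python
class LFUCache:
    """Minimal LFU read cache: evicts the least frequently used page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}    # page_id -> data
        self.counts = {}   # page_id -> reference count

    def read(self, page_id, fetch_from_disk):
        if page_id in self.pages:
            self.counts[page_id] += 1          # hit: bump the frequency count
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:
            coldest = min(self.counts, key=self.counts.get)
            del self.pages[coldest], self.counts[coldest]
        self.pages[page_id] = fetch_from_disk(page_id)
        self.counts[page_id] = 1
        return self.pages[page_id]
```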

ARC does this by splitting the cache mechanism into 2 lists. One list tracks the most frequently used (MFU) pages and the other tracks the most recently used (MRU) pages.

At both ends of the cache directory there are 2 additional lists, the “ghost lists” for the MFU and MRU sides respectively. Even when a page is discarded from the cache, from either the MRU or the MFU end, its identity is still tracked by the corresponding ghost list.

The reason the ghost lists keep track of discarded pages is that the likelihood of recalling a discarded page is high. When a discarded page is requested again, this constitutes a phantom cache hit, and the corresponding cache list (MRU or MFU) adapts to include the recalled page by increasing its target size by one. Thus each cache list (MRU or MFU) grows or shrinks in response to how pages are being discarded and recalled. Hence the adaptive nature of ARC; the sketch below illustrates the idea.
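Below is a much-simplified, hypothetical Python sketch of how the four lists and the adapting target size could fit together. It is my own illustration of the concept, not ZFS code and not the full patented IBM algorithm, which handles the ghost-list bookkeeping and eviction choices more carefully:

```python
from collections import OrderedDict

class SimpleARC:
    """Much-simplified sketch of the ARC idea (not the full IBM algorithm).

    t1 holds recently used pages (the MRU side) and t2 holds frequently used
    pages (the MFU side). b1 and b2 are the ghost lists: they remember only
    the IDs of pages evicted from t1 and t2. A hit in a ghost list is a
    "phantom hit" and nudges the target size p towards the side that would
    have produced a real hit.
    """

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                                        # target size of t1
        self.t1, self.t2 = OrderedDict(), OrderedDict()   # cached pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()   # ghost lists (IDs only)

    def _evict_one(self):
        """Evict one cached page, from t1 or t2 depending on the target p."""
        from_t1 = bool(self.t1) and (len(self.t1) > self.p or not self.t2)
        if from_t1:
            page_id, _ = self.t1.popitem(last=False)      # oldest page in t1
            self.b1[page_id] = None                       # remember its ID as a ghost
        else:
            page_id, _ = self.t2.popitem(last=False)      # oldest page in t2
            self.b2[page_id] = None
        for ghost in (self.b1, self.b2):                  # keep the ghost lists bounded
            while len(ghost) > self.c:
                ghost.popitem(last=False)

    def read(self, page_id, fetch_from_disk):
        if page_id in self.t1:                            # real hit on the MRU side
            data = self.t1.pop(page_id)
            self.t2[page_id] = data                       # second hit => frequent side
            return data
        if page_id in self.t2:                            # real hit on the MFU side
            self.t2.move_to_end(page_id)
            return self.t2[page_id]

        destination = self.t1                             # brand-new pages enter on the MRU side
        if page_id in self.b1:                            # phantom hit: grow the MRU target
            self.p = min(self.c, self.p + 1)
            del self.b1[page_id]
            destination = self.t2                         # seen before, so treat as frequent
        elif page_id in self.b2:                          # phantom hit: grow the MFU target
            self.p = max(0, self.p - 1)
            del self.b2[page_id]
            destination = self.t2

        data = fetch_from_disk(page_id)
        if len(self.t1) + len(self.t2) >= self.c:
            self._evict_one()
        destination[page_id] = data
        return data
```

The point to notice is that nothing is tuned by hand: the target p drifts towards whichever side has been producing phantom hits, which is the adaptive behaviour described above.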

ZFS employs a modified version of the patented ARC concept as described by its IBM creators. Some differences are listed below (which I got from this link – courtesy of c0t0d0s0.org):

  • the ZFS ARC is variable in size and can react to the available memory. It can grow when memory is available, or shrink when memory is needed for other things
  • the ZFS ARC can work with multiple block sizes. The original implementation assumes an equal size for every block
  • you can lock pages in the cache to exempt them from eviction. This prevents the cache from evicting pages that are currently in use. The original implementation does not have this feature, so the algorithm for choosing pages to evict is slightly more complex in the ZFS ARC: it chooses the oldest evictable page for eviction

While the ARC itself lives in memory, ZFS also extends it further with the Level 2 ARC (L2ARC), which uses SSDs or other fast devices as an extension of the ARC read cache (aka Readzilla).
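On an OpenZFS-on-Linux system you can actually watch this behaviour, because the module exposes its counters in /proc/spl/kstat/zfs/arcstats (the path and the exact field names are assumptions here; they vary across ZFS ports and versions, and illumos/Solaris exposes similar counters via kstat -n arcstats instead). A small, hypothetical helper to print a few of them:

```python
# Hypothetical helper: dump a few ARC counters from the OpenZFS kstat file.
# The path and field names below are assumptions based on ZFS on Linux;
# other platforms expose these counters differently.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3 and parts[2].isdigit():   # "name type data" rows
                name, _type, value = parts
                stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC size     : {s['size'] / gib:.2f} GiB (target {s['c'] / gib:.2f} GiB)")
    print(f"ARC hits     : {s['hits']}  misses: {s['misses']}")
    print(f"MRU/MFU hits : {s['mru_hits']} / {s['mfu_hits']}")
    print(f"Ghost hits   : {s['mru_ghost_hits']} / {s['mfu_ghost_hits']}")
    if "l2_size" in s:
        print(f"L2ARC size   : {s['l2_size'] / gib:.2f} GiB, hits: {s['l2_hits']}")
```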

The ARC caching algorithm, in the view of many, is a better-performing algorithm for read caching, and the IBM DS8000 is one of the few storage platforms that employs it. ZFS uses a modified version of ARC, and that certainly gives it a leg up over its competition.

Perhaps ZFS’s ARC is just as cool as Iron Man’s ARC reactor. Ain’t that something?


