The Prophet has arrived

Early last week, I caught up with a friend who was excited to share news of the new company he had just joined: ProphetStor. It was a catchy name, and after our conversation, I decided to spend a bit of my weekend afternoon finding out more about the company and its technology.

I had known of this company for several months through another friend at FalconStor. Ex-FalconStor executives ventured out to found ProphetStor and build a next-generation storage resource orchestration engine, and it has found a very interesting tack to differentiate itself from the many would-be leaders of so-called “software-defined storage”. ProphetStor made an early appearance at the OpenStack Summit in Hong Kong back in November last year, positioning several key technologies, including OpenStack Cinder, SNIA CDMI (Cloud Data Management Interface) and SMI-S (Storage Management Initiative Specification), to provide federated storage resource discovery, provisioning and automation.

The federation of storage resources and services solution is aptly called the ProphetStor Federator. The diagram I picked up from the El Reg article presents the Federator working with the different OpenStack initiatives quite nicely.

There are 3 things that attracted me to the uniqueness of ProphetStor.

1. The underlying storage resources, be it files, objects, or blocks, can be presented and exposed as Cinder-style volumes.

2. The ability to define the different performance capabilities and SLAs (IOPS, throughput and latency) of the underlying storage resources and match them to the right application requirements (see the sketch a little further below).

3. The use of SNIA SMI-S and CDMI.

Needless to say, the Federator software will abstract the physical and logical structures of any storage brand or storage architecture, giving it a very strong validation of the “software-defined storage (SDS)” concept.
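To make points 1 and 2 concrete, here is a minimal sketch in plain Python. This is entirely my own invention, not ProphetStor's API; the class names and SLA numbers are hypothetical, but it shows the idea of volume classes tagged with performance profiles being matched to application requirements:

```python
# A minimal sketch, NOT ProphetStor's actual API: Cinder-style volume
# classes, each tagged with a hypothetical performance profile and SLA.

VOLUME_CLASSES = {
    "gold":   {"max_iops": 50000, "throughput_mbps": 800, "latency_ms": 1},
    "silver": {"max_iops": 10000, "throughput_mbps": 300, "latency_ms": 5},
    "bronze": {"max_iops": 2000,  "throughput_mbps": 100, "latency_ms": 20},
}

def pick_class(required_iops, required_latency_ms):
    """Match an application requirement to the cheapest class that satisfies it."""
    for name in ("bronze", "silver", "gold"):   # cheapest first
        spec = VOLUME_CLASSES[name]
        if spec["max_iops"] >= required_iops and spec["latency_ms"] <= required_latency_ms:
            return name
    return None   # no class can meet the requirement

print(pick_class(required_iops=8000, required_latency_ms=10))   # -> "silver"
```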

While the SDS definition is still being moulded in the marketplace (and I know that SNIA already has a draft SDS paper out), the ProphetStor SDS concept does indeed look similar to the route taken by EMC ViPR. The split between the control plane (the ProphetStor Federator) and the data plane (the underlying physical and logical storage resources) is obvious.

I wrote about ViPR many moons ago in my blog, and I see ProphetStor as another hat in the SDS ring. I grabbed the screenshot (below) from the ProphetStor website, which I thought beautifully explains what ProphetStor is from a 10,000-foot view.

ProphetStor: How It Works

The Cinder-style volume is a smart move. It preserves the sanctity of the many enterprise applications that still need block storage volumes, but now with a twist: these block storage volumes come with different capability and performance profiles, tagged with the relevant classifications and SLAs.

And this is where the SNIA SMI-S discovery component is critical, because SMI-S mines these storage characteristics and presents them to the ProphetStor Federator for storage resource classification. For storage vendors that do not have SMI-S support, ProphetStor can customize the relevant interfaces to the proprietary APIs to discover the storage characteristics.
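Here is a toy sketch of what that classification step might look like once the characteristics have been mined. The field names and thresholds are my assumptions, not ProphetStor's:

```python
# Hypothetical sketch of the classification step: given per-pool
# characteristics mined via SMI-S (or a vendor API), tag each pool
# with a service class. Field names and thresholds are assumptions.

discovered_pools = [
    {"name": "array1/pool0", "media": "SSD",  "measured_iops": 45000},
    {"name": "array2/poolA", "media": "SAS",  "measured_iops": 9000},
    {"name": "array3/poolX", "media": "SATA", "measured_iops": 1500},
]

def classify(pool):
    if pool["media"] == "SSD" or pool["measured_iops"] >= 20000:
        return "gold"
    if pool["measured_iops"] >= 5000:
        return "silver"
    return "bronze"

for pool in discovered_pools:
    pool["class"] = classify(pool)
    print(pool["name"], "->", pool["class"])
```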

On the north end, SNIA CDMI works with the ProphetStor Federator’s Offer & Provisioning functions to bundle and wrap the various storage resources for the cloud and other traditional storage network architectures.
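CDMI itself is just RESTful HTTP. As a hedged illustration (the endpoint URL, credentials and metadata tag are hypothetical, while the headers and media types come from the SNIA CDMI specification), creating an SLA-tagged container on the north end could look something like this:

```python
# A sketch of a CDMI call on the north end. The URL and credentials
# are hypothetical; the headers and media types are per the SNIA CDMI spec.
import requests  # third-party: pip install requests

resp = requests.put(
    "https://federator.example.com/cdmi/tenant1/backups/",   # hypothetical endpoint
    headers={
        "X-CDMI-Specification-Version": "1.0.2",
        "Content-Type": "application/cdmi-container",
        "Accept": "application/cdmi-container",
    },
    json={"metadata": {"sla_class": "silver"}},               # tag the container
    auth=("admin", "secret"),
)
print(resp.status_code)   # per the spec, the response body carries an objectID
```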

I have asked my friend for more deep-dive technology materials on ProphetStor (he has yet to reply) to verify what I have just written. (Simon, you have to respond to me!)

These are indeed very exciting times, with ProphetStor emerging as one of the early leaders in the SDS space, and I would like to see ProphetStor go far with this.

Now let us pray … because the prophet has arrived.

The future is intelligent objects

We are used to the block-based approach and the file-based approach to data. The 2 diagrams below show the basics of how we access data on the storage device, first block-based, then file-based.


For block-based access, the blocks are stored merely as arrays of unrelated contiguous blocks. For file-based access, as seen below,


there is another layer of abstraction called the file system. But if you look at both diagrams above, there are some random numbers in light blue, and those represent the hard disk drive’s export of “containers” to the file system or to the application accessing the storage device. This is usually LBA (Logical Block Addressing), which is basically an addressing scheme that defines locations on the hard disk drive. LBA tells you where the data is stored. For more information about LBA, check out the Wikipedia definition. But the whole idea is that LBA is dumb. It is pretty much static, exported to file systems and applications so that they can do something with it.
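To show just how dumb LBA is, here is the classic arithmetic that converts the older Cylinder-Head-Sector (CHS) coordinates of a disk into a flat LBA number. It is pure geometry; the address says nothing about the data it holds:

```python
# LBA is just a flat, zero-based numbering of sectors. The classic
# CHS-to-LBA conversion shows how mechanical the mapping is.

def chs_to_lba(cylinder, head, sector, heads_per_cylinder=16, sectors_per_track=63):
    # Sectors are 1-based in CHS, hence the (sector - 1).
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

print(chs_to_lba(0, 0, 1))    # first sector on the disk -> LBA 0
print(chs_to_lba(2, 5, 10))   # (2*16 + 5)*63 + 9 -> LBA 2340
```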

Something has been brewing in the background since 1994, and it is one of the many efforts to make storage devices intelligent. This new object-based interface was part of a research project at Carnegie Mellon University (CMU). Initially it was known as Network Attached Secure Disks (NASD), but it eventually made its way to a working group in SNIA, which developed it into an ANSI T10 INCITS standard (ANSI T10 is the guardian of all SCSI standards). This is the Object Storage Device (OSD). The SCSI architecture diagram below shows the layer where OSD resides.


The motivation for this is simple: to make today’s storage devices do more computational work, in particular I/O, relieving the hosts and the local systems to concentrate on other computational processing work. At the same time, the local systems must retain some level of interactivity and management between the storage objects and the computational hosts.

In the diagram below, which compares block-based access and OSD,


you can see that the file system management interface, which sits in the kernel space of the local host/system, is replaced by an OSD management interface at the storage device.

What does this all mean? It means that the LBA-style addressing we are familiar with in block-based and file-based storage is no longer the way to go because, as I mentioned before, LBA is dumb.

OSD, in some ways, replaces the LBA with OIDs (Object IDs). The existing local system and/or its file system interacts with the storage device using OIDs, and each OID links to its respective storage object. The object carries a lot of metadata describing it, and that metadata gives the object its intelligence and manageability.
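Here is a toy sketch of that shift from LBA to OIDs. The names are mine, not from the T10 OSD specification (which actually uses 64-bit partition and object IDs), but it shows the idea of an object carrying its own descriptive metadata:

```python
# A toy sketch (names are mine, not the T10 OSD spec): the host asks
# for an object by ID, and the object carries metadata describing it.
import uuid

object_store = {}

def create_object(data, **metadata):
    oid = uuid.uuid4().hex   # stand-in for a real T10 object ID
    object_store[oid] = {"data": data, "metadata": metadata}
    return oid

oid = create_object(b"...payload...",
                    owner="app01", created="2012-07-30",
                    qos="streaming")
# The device itself can now act on attributes like qos, not just an address.
print(oid, object_store[oid]["metadata"]["qos"])
```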



The prominence of metadata in OSD means that we can build much more intelligent systems in the future. The OIDs and objects can be grouped together in a flat design, or organized and categorized in a virtual, hierarchical model.


Object storage is an intelligent evolution of disk drives that can store and serve objects rather than simply place data on tracks and sectors. It can bring the following benefits:

  • Intelligent space management in the storage layer
  • Data aware pre-fetching and caching
  • Robust shared access by multiple clients
  • Scalable performance using off-loaded data path
  • Reliability and security

Several vendors, such as EMC and NetApp, already support OSD.

Novell Filr (How do you pronounce this?)

Let me let you in on a little secret … I am a great admirer of Novell’s technology.

OK, OK, they aren’t what they used to be anymore (remember the great heydays of NetWare, ZENworks and GroupWise?), and some of their business decisions didn’t win them many fans either. Notable ones in recent years were the joint patent agreement with Microsoft (November 2006) and their claim to the Unix operating system rights. Though Novell did finally protect the Unix community by being affirmed as the rightful owner of the Unix OS rights, the negativity from the lawsuit and counter-lawsuit between SCO and Novell soured the relationship with the Unix faithful. In the end, Novell was acquired by Attachmate late last year.

However, I have been picking up bits of Novell technology knowledge for the past 3-4 years. Somehow, despite the negative perception that most people I know have of Novell, I strongly believe the ideas and thinking that go into their solutions and products are smart and innovative.

So, when a buddy (and ex-housemate) of mine, Mr. Ong Tee Kok, the Country Manager of Novell Malaysia, asked me to evaluate a new solution from Novell (it hasn’t even been released yet), I jumped at the chance.

Novell will soon announce a solution called Novell Filr. I really don’t know how to pronounce the name, but the concept of Novell Filr makes a lot of sense. I cannot say that it is disruptive, but it is coming to meet the changing world of how users store and access their files, while balancing that with the needs of enterprise file management and access.

Yes, Novell Filr is a file virtualization solution. It sits between the users and their files. Previously, in a network-attached environment, files were presented to the users via the typical file-sharing protocols: CIFS for Windows and NFS for Unix/Linux. These protocols have been around for ages, with some recent advances in the last few years in SMB 2.0 and NFS version 4. However, the updates to these protocols address the greater needs of organizations and the enterprise rather than the needs of the users.

And because of this, users have been flocking to cloud-centric solutions out there such as Dropbox and Windows Live SkyDrive. These solutions cater to users who want to access their files anywhere, with any device. Unfortunately, the simplicity of file access the “cloud way” is not there when the users are on the office network. They have to be routinely reminded by the system administrator to keep their files in some special directory so that those files get backed up. Otherwise, they will be ostracized by the IT department, and their straying files will not be backed up.

Well, Novell will be introducing Novell Filr soon, and they have released a video of the solution. Check it out.

I shall be spending some time this week looking deeper into their solution, and I hope to see a demo soon. I have great confidence in Novell’s solutions, and I intend to share more about them later.

Copy-on-Write and SSDs – A better match than other file systems?

We have been taught that file systems are like folders, sub-folders and, eventually, files. The criteria in designing file systems are to ensure a few key features:

  • Ease of storing, retrieving and organizing files (sounds like a fridge, doesn’t it?)
  • Simple naming convention for files
  • Performance in storing and retrieving files – hence our write and read I/Os
  • Resilience in restoring all or part of a file when there are discrepancies

In file system performance design, one of the most important factors is locality. By locality, I mean that the data blocks of a particular file should be as close to one another as possible. Hence, most file system designs that originated from the Berkeley Fast File System (FFS) require the file system to seek out the data block to be modified in order to preserve locality, i.e. you try not to break up the contiguity of the data blocks. The seek to find the required data block takes time, but you are compensated with faster reads, because the read-ahead feature lets you read extra blocks ahead in anticipation that the neighbouring data blocks are related.

In Copy-on-Write file systems (also known as shadow-paging file systems), the seek portion is usually not there, because a modified block is written somewhere else, not to the present location of the original block. This is the foundation of Copy-on-Write file systems such as NetApp’s WAFL and Oracle Solaris ZFS. Because the new data blocks are written somewhere else, the write operation is faster: it eliminates the seek and skips the read-modify-write to the original location of the data block.
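A toy model makes the difference obvious. This is purely illustrative, not WAFL or ZFS internals: update-in-place overwrites the block at its original spot, while copy-on-write appends the new block elsewhere and just repoints the block map:

```python
# A toy model of the two write paths. Purely illustrative, not WAFL/ZFS.

disk = ["A0", "B0", "C0"]        # three logical blocks, stored contiguously
block_map = {0: 0, 1: 1, 2: 2}   # logical block -> physical location

def write_in_place(logical, data):
    disk[block_map[logical]] = data      # seek to and overwrite the original spot

def write_cow(logical, data):
    disk.append(data)                    # the new block lands somewhere else
    block_map[logical] = len(disk) - 1   # just repoint the map

write_cow(1, "B1")
print(disk)        # ['A0', 'B0', 'C0', 'B1']  -- the stale 'B0' is left behind
print(block_map)   # {0: 0, 1: 3, 2: 2}        -- logical block 1 lost locality
```

Note how, after a single COW write, the file's blocks are no longer contiguous: that is exactly the fragmentation discussed next.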

However, reads will be slower, because when you want to read a file, the file system has to go hunting for the data blocks, which now lack locality. Therefore, as a COW file system ages, it tends to have higher file system fragmentation. I wrote about this in a previous blog post. It is a case of ENJOY-FIRST/SUFFER-LATER. I am not writing this to say that COW file systems are bad. Obviously, NetApp and Oracle have done enough homework to make their file systems some of the better storage file systems in the market.

So, that’s Copy-on-Write file systems. But what about SSDs?

Solid State Drives (SSDs) make enemies of file systems that prefer locality. Remember that some file systems prefer their data blocks to be contiguous? Well, SSDs employ “wear-leveling”, which requires writes to be spread out as much as possible across the device to prolong its life and reduce “wear-and-tear”. That’s not good news, because the SSD has just told the file system, “I don’t like locality, and I will spread out the data blocks”.

NAND Flash SSDs (the common ones we find in the market, as opposed to DRAM-based SSDs) are funny creatures. When you overwrite data on an SSD, it must ERASE first, then WRITE AGAIN. This is the part that creates the wear-and-tear on the device. What I mean by ERASE first, WRITE AGAIN is described below:

  • Writing 1 –> 0 (OK, no problem)
  • Writing 0 –> 1 (not OK, because NAND Flash can’t do that)

So, what does the SSD do? It ERASES first, resetting every bit in the affected erase block to 1, and then programs some of those bits back to 0. Crazy, isn’t it? The firmware in the SSD controller also spreads the erase-then-write operations across the entire device to avoid concentrating them on one small location or dataset. This is the “wear-leveling” we often hear about.
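Here is a toy model of that behaviour (the page and block sizes are made up for illustration): programming can only flip bits from 1 to 0, and getting back to 1 means erasing the whole erase block:

```python
# A toy model of NAND behaviour: program can only flip bits 1 -> 0;
# going back to 1 requires erasing the whole block. Sizes are made up.

PAGES_PER_BLOCK = 4
block = [0b1111] * PAGES_PER_BLOCK   # a freshly erased block: all bits are 1

def program(page, value):
    # Any bit that is 1 in `value` but already 0 in the page needs 0 -> 1.
    if value & ~block[page]:
        raise ValueError("needs a 0 -> 1 transition: erase the block first")
    block[page] = value

def erase():
    for i in range(PAGES_PER_BLOCK):
        block[i] = 0b1111            # erase resets every bit to 1

program(0, 0b1010)                   # fine: only 1 -> 0 transitions
try:
    program(0, 0b1110)               # one bit would need 0 -> 1: not allowed
except ValueError as e:
    print(e)
erase()                              # the whole block goes back to all 1s
program(0, 0b1110)                   # now it works
```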

Since SSDs shun locality and avoid keeping data blocks nearby, and Copy-on-Write file systems already do this by nature, writing new data blocks somewhere else, the combination of a COW file system and SSDs seems like a very good fit. It even looks symbiotic, a case of “I help you; you help me”.

From this perspective, the benefits of COW file systems on SSDs extend beyond the resiliency of the SSD device to performance as well. Since the data blocks are spread out at different locations in the SSD device, the resulting parallelism inadvertently helps with COW’s performance. Makes sense, doesn’t it?

I have not studied how other file systems behave with SSDs, but it is pretty clear that Copy-on-Write file systems work well with Solid State Drives. Have a good week ahead :-)!