Server way of locked-in storage

It is kind of interesting that every vendor out there claims to be as open as can be, but the reality is this: the competitive nature of the game is forcing storage vendors to talk open, while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that is forcing customers to be locked in with a certain storage vendor. I am beginning to feel that customers are given fewer choices, especially when the brand of server they select for their applications will have implications on the brand of storage they will be locked into.

And surprise, surprise, SSDs are the pawns in this new cloak-and-dagger game. How? Well, I have been observing this for quite a while now, and when HP announced the SMART portfolio for its storage, I knew it was time for me to say something.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.

It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware.

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess on the HP Connect blog.

Similarly, Dell announced its ExpressFlash solution, which ties its 12th generation PowerEdge servers to its flagship (what else?) Dell Compellent storage.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?
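
To make the lock-in mechanics concrete, here is a deliberately simplified sketch in Python of how a host-side caching driver could gate its acceleration on the storage array it detects. Every name in it is hypothetical; this illustrates the pattern, not any vendor's actual software.

```python
# Hypothetical sketch only: how a server-side flash caching driver could
# enforce lock-in by enabling acceleration solely for the "blessed" combo.

SUPPORTED_ARRAYS = {"VendorX-Array"}   # the one approved storage brand


def acceleration_enabled(server_model: str, array_vendor: str) -> bool:
    """Enable flash caching only for the approved server + array pairing."""
    if not server_model.startswith("VendorX"):
        return False   # the card only ships in this brand of server
    if array_vendor not in SUPPORTED_ARRAYS:
        return False   # caching refuses to run against rival arrays
    return True


# A customer with the "right" server but a rival array gets no benefit:
print(acceleration_enabled("VendorX-G8", "RivalArray"))     # False
print(acceleration_enabled("VendorX-G8", "VendorX-Array"))  # True
```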

HP is doing it with its ProLiant servers; Dell is doing it with its ExpressFlash; EMC's VFCache, while not advocating any brand of servers, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Oracle Exadata. Oracle's enterprise database works best with Oracle's own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers, which we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn't be surprised if IBM and Fujitsu already have something in store (or perhaps I missed the announcement).

NetApp has been slow in the game, but we hope to see them come out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has performance bottlenecks, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming that SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones somewhere else.
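
One way to see both the promise and the limit of SSD caching is simple blended-latency arithmetic. The sketch below (Python) uses ballpark latencies I am assuming for illustration, not any vendor's specification:

```python
# Assumed ballpark latencies for illustration; real devices vary widely.
HDD_LATENCY_MS = 8.0   # mechanical disk read, order of magnitude
SSD_LATENCY_MS = 0.1   # flash read, order of magnitude


def blended_latency_ms(hit_rate: float) -> float:
    """Average latency when a fraction hit_rate of I/Os is served from flash."""
    return hit_rate * SSD_LATENCY_MS + (1.0 - hit_rate) * HDD_LATENCY_MS


for rate in (0.0, 0.5, 0.9, 0.99):
    print(f"cache hit rate {rate:4.0%}: {blended_latency_ms(rate):4.2f} ms average")

# cache hit rate   0%: 8.00 ms average
# cache hit rate  50%: 4.05 ms average
# cache hit rate  90%: 0.89 ms average
# cache hit rate  99%: 0.18 ms average
```

Even at a 90% hit rate, the remaining 10% of misses still dominate the average. The bottleneck moves from the disks to the cache's hit rate (and to the write path); it does not simply disappear.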

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage for no vendor lock-in? Going to the cloud brings cloud service provider lock-in as well, but that's another story.




14 Responses to Server way of locked-in storage

  1. Pingback: Server way of locked-in storage « Storage Gaga

  2. JW says:

    One word.

    FalconStor

    • cfheoh says:

      Hi JW

      Thanks for your comment. FalconStor is a great company with great products. And the local guys, who are buddies of mine, are doing well.
      /Chin-Fah

  3. Hi Gaga,

    HP is leveraging the tech off LSI (HP Smart Array cards are really OEM'd LSI 92xx cards) and the caching software in the card to do so. The cache works with SAS or SATA SSDs rather than a faster straight-through PCIe-based card.

    As for VFCache, as I understand it, it's vendor/array agnostic for the time being.

    Aus storage guy

    • cfheoh says:

      Hi Ausstorageguy,

      Thanks for sharing your insights.

      As I understand from you, the HP card has a SAS/SATA interface rather than PCIe. What is the reason behind the design? Would you care to share?

      For the VFCache, the EMC folks in Malaysia are saying that the card works with EMC arrays only. I will have to ask them again but your input has given me the incentive to dig deeper.

      Thanks a million!
      /Chin-Fah

      • Hi Chin-Fah,

        Sorry for the delayed response, a lot going on at the moment.

        Essentially, the HP or LSI card is basically a RAID controller card with SAS/SATA ports on a PCIe card, and as such it utilizes SAS or SATA SSD drives to act as a cache over and above the DRAM cache.
        In turn, this HP/LSI RAID card is limited to the port speed of the controller card and drive (6Gb/s, or about 600MB/s at best) and is subject to the overheads of SATA and SAS, as well as cable length and any SAS switching in the drive array.

        Whereas VFCache is based on the Micron P320h PCIe SSD card, which runs straight through from the PCIe bus to the controller chip and on to the flash chips, with no protocol conversion or other bottlenecks to the data.
        Thus, the VFCache/Micron P320h can deliver data at 3GB/s for reads and 2GB/s for writes, versus a best rate of 600MB/s (1200MB/s with 2 drives) with the HP/LSI Smart Array PCIe card, and it does not suffer performance losses due to the protocol conversion and SCSI overhead of SAS. This also means that it can do so with lower latency and deliver higher IOPS than a SATA/SAS solution.

        Having said that, I personally have the LSI 9265-8i card for my own use, with SATA SSD drives as part of the cache, and I can tell you it's a fantastic card.

        I'd also like to confirm it's vendor agnostic; you can just do a Google search to find out, if you don't have a Powerlink account.

        Regards,

        Ausstorageguy
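
The throughput figures in the comment above line up with simple line-encoding arithmetic. Here is a minimal sketch in Python; the 8b/10b encoding overhead of a 6Gb/s SAS/SATA link is the one assumption added to the numbers quoted in the comment:

```python
# Back-of-envelope check of the figures quoted above.
# 6Gb/s SAS/SATA links use 8b/10b line encoding, so only 8 of every
# 10 bits on the wire carry payload (assumption added here).

line_rate_bps = 6e9
payload_mb_per_s = line_rate_bps * (8 / 10) / 8 / 1e6   # bits -> bytes -> MB

print(f"one 6Gb/s SAS/SATA lane: {payload_mb_per_s:.0f} MB/s")      # ~600
print(f"two drives             : {2 * payload_mb_per_s:.0f} MB/s")  # ~1200

pcie_read_mb_per_s = 3000   # Micron P320h read figure from the comment
print(f"PCIe card vs one lane  : {pcie_read_mb_per_s / payload_mb_per_s:.0f}x")
```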

        • cfheoh says:

          Hi Ausstorageguy,

          Very, very sorry for the late reply. There are a lot of things going on.

          Thanks for sharing more info about the HP/LSI card. The comparison between the HP offering and EMC's VFCache is important, because that is the whole point of what I am doing here: to share correct information as best I can, with the objective to educate.

          I believe you are very much like that too, wanting to educate the people out there to make the right decisions based on the right information.

          Thanks once again and have a great week ahead.

          /Chin-Fah

  4. jgan says:

    Bro, with razor-thin profits nowadays, if you do not lock in, where's the profit or year-end bonus? 🙂

  5. Sam Lucido says:

    Good article. Just one correction: EMC’s VFCache PCIe card is storage agnostic and virtualization agnostic too.

    Keep up the excellent work.
