Storage Facebook likes

There is a mini revolution going on, and Facebook is the main force driving it.

It is the Open Compute Project (OCP), and its mission is to redesign the modern-day data center and drive open hardware and architecture designs and specifications, including storage. The overall goals are to drive greater data center efficiency, flexibility, energy savings and cost effectiveness in a new class of “hyperscale” datacenters. Facebook, Google and Amazon are examples of hyperscale datacenter operators, whose businesses rely on massive computing power, extreme storage performance and racks and racks of computing infrastructure to drive their web-computing or cloud-computing services.

Some of the cool technology innovations in mind include systems that support CPUs from any vendor, including Intel and AMD. We may even see both processor brands running on the same motherboard. The Open Common Slots component for processors is based on PCIe. Intel has pledged its Decathlete motherboard specification to OCP, and likewise AMD has produced its Roadrunner motherboard series specification for the project as well. The ARM processor could also be supported in the near future under this “mix-and-match” OCP ideal.

Other proposed changes include OpenRack specifications, “sleds”, and of course, the Open Vault project for storage (aka “Knox”).

Can VSA help NetApp?

Almost a year ago, I had an interview with VMware Malaysia for a Senior SE position. They wanted a pre-sales guy who knew Oil & Gas and had a strong technology background. I had a strong storage background, and I had been involved in Oil & Gas upstream since my NetApp and EMC days.

I thought I was their guy, having been led to believe (mostly by my own self-belief) that I was. I didn’t get the job, and I never found out why I lost the opportunity. But I remember well that I brashly mentioned to the Australian interviewer over the phone that VMware could become the next “storage technology” company. At that time, VMware had just launched vSphere 5.0 and, along with it, their vSphere Storage Appliance (VSA). This was a turning point in the virtual storage appliance space.

My friend, whose company is a VMware partner, said that the list price for the vSphere VSA was USD5,000.00 a pop. The price wasn’t too bad for the small and medium enterprise businesses in Malaysia, minus the hardware and storage capacity cost. But what intrigued me back then was that this virtual storage appliance concept was disruptive.

VMware could potentially take large JBOD farms, spread across a minimum of 3 physical ESXi nodes, and build shared storage out of them using the vSphere Storage Appliance (VSA). Who needs shared iSCSI or Fibre Channel LUNs anymore if VMware has its way?

But VMware still depended very much on their storage partners, especially its master, EMC, and so I believe VMware held back on pushing the VSA to allow its storage partner ecosystem to thrive. For that same reason, the vSphere Storage APIs such as VAAI and VASA have been developed since vSphere 4 to allow deeper integration of these storage vendors’ technologies into the VMware world.

But of course, long before VMware’s VSA venture, HP LeftHand already had one on the cards. The LeftHand Virtual SAN Appliance (also VSA) was already getting rave comments from their partners and customers, impressed with how it showcased the HP LeftHand storage solution and technology so brilliantly. Eventually, HP recognized the prowess of the LeftHand VSA and started marketing it as the HP StoreVirtual VSA. I don’t hear much about the HP LeftHand (since renamed P4000) VSA nowadays, with the HP guys in Malaysia preferring to pitch the physical storage rather than the virtual storage software.

NetApp, back in Q1 of 2012, also decided to go down the path of the virtual storage appliance, announcing ONTAP-v to the world here. It was initially resold through the Fujitsu partnership, but the Q1 announcement expands ONTAP-v to a larger set of server vendors, as shown below. The key requirement is to have a qualified RAID controller in each of the server vendors’ machines.


The marriage in the cloud

Admit it! You are a terabyte junkie! I am sure many of us have one terabyte or more of our personal “stuff” at home. Heck, I even heard from a friend that he has almost 20TB of high definition movies (thank you Torrent!) at home! That’s crazy!

And what does the typical Malaysian consumer do after he or she runs out of hard disk space? In KL (our beloved capital city, Kuala Lumpur), they would throng the Low Yat IT mall or extensions of it, like Digital Mall in PJ Section 14. In other towns and cities in Malaysia, PC fairs are popular, as consumers try to get the best price possible. (We Malaysians are good at squeezing the most out of a deal.)

It is difficult for the not-so-IT-literate consumer to differentiate which brand is the best: Buffalo, Iomega, DLink, Western Digital, and so on. But the tide is changing, because these vendors want to tie you down for the rest of your digital life. You see, buying a small NAS for the home now comes with a big carrot, an incentive that keeps you wanting more, and yet you can’t unbind yourself from the tether once you are hooked.

Cloud storage didn’t take off in a big way last year. But many cloud storage vendors know there are plenty of opportunities out there, so how do they get consumers to upload their files, photos and whatever stuff they might have to cloud storage? Ingeniously, they work together with the smaller NAS storage players and use these vendors’ product offerings as bait. They bundle a significantly large FREE capacity or data protection offering in the cloud storage as the carrot, and once the consumer decides to put their files in the cloud, boom, they are ensnared into becoming a long-term ATM for the cloud storage provider.

Sneaky? No, I call this good, smart marketing. You have a market of opportunities out there, but cloud storage isn’t catching on. You have small NAS vendors reaching out to the consumer market, but it’s a brutal, competitive arena and margins are razor thin. It’s a win-win situation for both sides.

And this trend is catching on. When I first read about Drobo (a high-end consumer NAS storage vendor) partnering with Carbonite (a remote backup vendor now repackaged as a cloud storage backup provider), I thought it was a pretty darn good idea. It was a marriage that happened in the cloud. Late last year, another consumer NAS company, QNAP, paired up with Symform, a cloud storage and backup vendor.

This trend scratches a real itch in the market. Consumers want reliable backup too, but consumer-grade disk drives fail every so often, laptops get stolen, and files get infected by viruses. The list goes on, but the point is that the cloud storage providers may have found a silver lining in getting consumers to leap into the cloud. And the whole small-NAS-vendor-plus-big-cloud-backup dynamic duo idea just got a big endorsement last night. Guess who has decided to dip its grubby hands into the pie?

EMC, the 800-pound gorilla of the information and storage world, through its Iomega subsidiary, wants your money! They have just married Iomega with EMC Atmos. It was quoted:

“EMC subsidiary and data protection specialist Iomega announced the integration between Iomega network storage solutions and EMC Atmos, extending Atmos cloud-based data protection and sharing to Iomega’s network storage product offerings. The new integration gives small and midsize businesses (SMBs), remote offices and distributed enterprises access to any Atmos powered cloud around the world.”

Surprised? Not really, but I guess EMC needs to breathe new life into Atmos and this marriage just extended Atmos’ life support system.

We raid vRAID

I took a bit of time off to read through Violin’s vRAID technology because I realized that vRAID (other than Violin’s vXM architecture) is the other most important technology that differentiates Violin Memory from the other upstarts. I blogged at a high level about Violin a few entries ago, and we are continuing with Violin’s impressive entrance with a storage technology that has been around for almost 25 years – RAID. Incidentally, I found this picture of the original RAID paper (see below):

Has RAID evolved with solid state storage? Evidently not, because I have not read of any vendor (so far) touting any RAID revolution in their solid state offerings. There has been a lot of negative talk about RAID, but RAID has been the cornerstone and foundation of storage since the beginning. With the onslaught of very large capacity HDDs, the demands of packing more bits per inch and the insatiable need for reliability, however, RAID is slowly beginning to show its age. Cracks in the armour, I would say. And there are many newer, slightly more refined versions of RAID, from the Network RAID-style of HP P4000 or Dell EqualLogic, to the RAID-X of IBM XIV, to the declustered RAID innovations of Panasas. (Interestingly, one of the authors of the original RAID paper, Garth Gibson, is a founder of Panasas.)

And the new vRAID from Violin Memory doesn’t stray much from the good ol’ RAID, but it has been adapted to address the issues of solid state devices.

Solid state devices (notably NAND Flash, since everyone is using them) are very different from the usual spinning disks of HDDs. They behave differently, and pairing solid state devices with the present implementations of RAID can be like mixing oil and water. I am not saying that present-day RAID cannot work with solid state devices, but has RAID adapted to the idiosyncrasies of Flash?

It is like putting an old crankshaft into a new car. It might work for a while, but in the long run, it could damage the car. Similarly, conventional RAID can have detrimental performance and availability impacts on solid state devices. And we have hardly seen storage vendors come out to say that their RAID technology has been adapted to the solid state devices they are selling. This silence likely means that they are just adapting to market requirements and not changing their RAID code very much to take advantage of Flash or other solid state storage for that matter. Violin Memory has boldly come forward to meet that requirement, and vRAID is their answer.

Violin argues that there are bottlenecks at the external RAID controller or software RAID level, as well as in the use of legacy disk drive interfaces. And this is indeed true, because this very common RAID implementation squeezes out performance at the expense of other components such as CPU cycles.

Furthermore, there are plenty of idiosyncrasies in Flash, such as its erase-first-then-write mechanism. The nature of NAND Flash, unlike DRAM, requires a block to be erased before a write to that block is allowed. There is no in-place “modify” per se, unlike the read-modify-write operation that parity-based RAID-5 and RAID-6 rely on. Because of this, the cycle is more like read-erase-write, and while the erase of the block is occurring, read operations are stalled. That is why most SSDs have impressive read latency (in microseconds) but very poor write latency (in milliseconds). The write penalty of parity-based RAID can further aggravate the situation when typical RAID technology is applied to NAND Flash solid state storage.
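
To put some rough numbers on that write penalty, here is a back-of-the-envelope sketch (my own simplification, ignoring controller caching, full-stripe writes and the flash translation layer):

```python
# Back-of-the-envelope RAID write penalty arithmetic (a simplification of mine,
# not any vendor's sizing formula). For a small random write:
#   RAID-10: write the data twice                                        = 2 I/Os
#   RAID-5 : read old data + read old parity + write data + write parity = 4 I/Os
#   RAID-6 : as RAID-5, but with two parity blocks to read and write     = 6 I/Os
WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def effective_write_iops(backend_iops: float, raid_level: str) -> float:
    """Host-visible random-write IOPS once the RAID write penalty is paid."""
    return backend_iops / WRITE_PENALTY[raid_level]

# Example: a set of devices that can collectively sustain 100,000 backend write IOPS
for level in ("RAID-10", "RAID-5", "RAID-6"):
    print(f"{level}: ~{effective_write_iops(100_000, level):,.0f} host write IOPS")
```

On spinning disks this penalty is mostly a performance tax; on NAND Flash, every extra parity write is also another program/erase cycle, so the same penalty chips away at endurance as well.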

As the blocks in the NAND Flash fill up, the accumulation of read-erase-write cycles will not only reduce the lifespan of the blocks in the NAND Flash, it will also reduce the IOPS to a state we call the normalized Steady State. I wrote about this in my blog, “Not all SSDs are the same”, some moons ago. In that post, citing the SNIA Solid State Storage Performance Test Specification (SSS-PTS), I described 3 distinct phases of a typical NAND Flash SSD:

  • Fresh Out of the Box (FOB)
  • Transition
  • Steady State

This performance degradation is part of what vendors call the “Write Cliff”, where there is a sudden drop in IOPS performance as the NAND Flash SSD ages. Here’s a graph that shows the performance drop.
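
For the curious, the notion of “steady state” itself can be sketched in a few lines of code. The window length and thresholds below are from my recollection of the SNIA material (roughly: the IOPS in a rolling window must neither scatter nor drift by more than a small fraction of the window average), so treat this as an illustration rather than the official test procedure:

```python
def is_steady_state(iops_per_round, window=5, max_scatter=0.20, max_drift=0.10):
    """Rough check, in the spirit of the SNIA SSS-PTS, of whether the last
    `window` measurement rounds have settled into steady state.
    The window size and thresholds here are illustrative, not the official spec."""
    if len(iops_per_round) < window:
        return False
    y = iops_per_round[-window:]
    avg = sum(y) / window
    # Criterion 1: the rounds must not scatter too widely around the average
    if (max(y) - min(y)) > max_scatter * avg:
        return False
    # Criterion 2: the best-fit line through the window must not drift too much
    x = list(range(window))
    x_mean = sum(x) / window
    slope = (sum((xi - x_mean) * (yi - avg) for xi, yi in zip(x, y))
             / sum((xi - x_mean) ** 2 for xi in x))
    return abs(slope) * (window - 1) <= max_drift * avg

# A freshly secure-erased SSD starts fast (FOB), transitions, then settles
rounds = [90_000, 72_000, 55_000, 41_000, 35_000, 34_500, 34_800, 34_600, 34_700]
print(is_steady_state(rounds[:5]))   # False - still falling off the write cliff
print(is_steady_state(rounds))       # True  - the last five rounds have flattened out
```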

Violin’s vRAID, implemented within its switched vXM architecture and using proprietary high-performance, flash-optimized controllers, is able to deliver sustained IOPS throughout the lifespan of the flash SSD, as shown below:

To understand vRAID, we have to understand the building blocks of the Violin storage array. Eight 4GB NAND Flash chips are packed into a 32GB Flash Package, and 16 of these 32GB Flash Packages are then consolidated into a 512GB VIMM (Violin Inline Memory Module). The VIMM is the starting block and can be thought of as a “disk”, since we are used to the concept of “disk” in the storage networking world. 5 of these VIMMs form a RAID group of 4+1 (four data and one parity), giving redundancy, performance and capacity similar to RAID-5.

The block size used is 4K, and each 4K block is striped across the RAID group as 1K pages, one on each of the VIMMs in the group. Each of these 1K pages is managed independently and can be placed anywhere in any flash block in the VIMMs, spread out for the lowest possible latency and the best possible bandwidth. This contributes to the “spike-free latency” of Violin Memory. Additionally, there is ECC protection within each 1K page to correct flash bit errors.

To protect against metadata corruption, there is an additional, built-in RAID check bit to correct VIMM errors. Lastly, one important feature addresses the read-erase-write weakness of NAND Flash: vRAID ensures that the slow erases never block a read or a write. This architectural feature enables spike-free latency in mixed read/write environments.
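
Tallying up those building blocks (my own arithmetic from the figures quoted above, not anything out of Violin’s documentation):

```python
# Adding up the Violin building blocks described in this post. The figures are
# the ones quoted above; the actual controller logic is of course far more involved.
chip          = 4                      # GB per NAND Flash chip
flash_package = 8 * chip               # 8 chips per Flash Package   -> 32 GB
vimm          = 16 * flash_package     # 16 packages per VIMM        -> 512 GB
raid_group    = 5 * vimm               # 5 VIMMs in a 4+1 RAID group -> 2,560 GB raw

print(f"Flash Package : {flash_package} GB")
print(f"VIMM          : {vimm} GB")
print(f"4+1 RAID group: {raid_group} GB raw, {4 * vimm} GB usable (4 data + 1 parity)")

# Each 4K block is striped as 1K pages, one page per VIMM in the 4+1 group
block_kb, page_kb = 4, 1
data_pages   = block_kb // page_kb     # 4 x 1K data pages
parity_pages = 1                       # 1 x 1K parity page
print(f"A {block_kb}K block lands as {data_pages} data pages + {parity_pages} parity page across 5 VIMMs")
```

In other words, a single 4+1 group works out to roughly 2TB usable, and every 4K write lands as five 1K pages, one per VIMM.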

Here’s a quick overview of Violin’s vRAID architecture:

I still feel that we need a radical move away from traditional RAID, and vRAID is moving in the right direction to evolve RAID to meet the demands of the data storage market. Revolutionary and radical it may not be, but then again, is the market ready for anything else?

As I said, so far Violin is the only all-Flash vendor that has boldly come forward to meet the storage latency problem head-on, and they have been winning customers very quickly. Well done!

Is Dell Fluid Enough?

Dell made a huge splash 2 weeks ago in London at their inaugural Dell Storage Forum. They dubbed their storage and management lineup the “Fluid Data Architecture”, offering customers the ability to quickly adapt and automate their business when it comes to storage networking and, more importantly, data management.

In the London show, they showcased several key innovations and product developments. Here’s a list of their jewels:

  • DR4000 – an inline, content optimized backup deduplication appliance (based on the acquired technology of Ocarina Networks)
  • Compellent Storage Center 6.0 – a major software release
  • Compellent key technology integration with VMware
  • Optimized object storage for Microsoft SharePoint with the DX6000 Object Storage Platform – DX6000 is an OEM from Caringo
  • Broader support for Dell Force10, PowerConnect and their partner Brocade’s switches

The technology from Ocarina Networks is fantastic and I have always admired Ocarina. I have written about Ocarina in the past in my previous blog. But I was a bit perplexed as to why Dell chose to enter the secondary dedupe market with a backup dedupe appliance in the DR4000. They are a latecomer to the secondary deduplication game; I thought even HP was late with its StoreOnce.

They could have used Ocarina’s technology to trailblaze the primary deduplication market. In my previous blog, I mentioned that primary deduplication hasn’t really taken off in a big way, and Dell with the technology from Ocarina could set the standard and establish themselves as the leader of the primary deduplication market space. I was disappointed that they didn’t, not just yet.

The Compellent Storage Center 6.0 release was a major one and it, for better or for worse, coincided with the departure of Phil Soran, the founder and CEO of Compellent. Phil felt that he could let his baby go, and Dell is certainly making the best of what they can do with Compellent as their flagship data storage product.

The major release included 64-bit support for greater performance and scalability, and also several key VMware technologies that other vendors already have. The technologies included:

  • VMware vStorage API for Array Integration (VAAI)
  • Storage Replication Adapter plug-in for VMware Site Recovery Manager (SRM)
  • vSphere 5 client plug-in
  • Integration of Enterprise Manager and vSphere

Other storage-related releases (I am not going to talk about Force10 or their PowerConnect solutions here) included Dell offering 16Gbps Fibre Channel switches from Brocade and also their DX6000 Object Storage Platform optimized for Microsoft SharePoint.

I think it is fantastic that Dell is adapting and evolving into a business-oriented, enterprise solution provider, and their acquisitions in the past 3 years – EqualLogic, Exanet, Ocarina Networks, Force10 and Compellent – prove that Dell aims to take market share in the storage networking and data management market. They have key initiatives with CommVault, Symantec, VMware and Microsoft as well. And Michael Dell is becoming quite a celebrity lately, giving Dell the boost it needs to battle in this market.

But the question is, “Is their Fluid Data Architecture fluid enough?” If I were a customer, would I bite?

As a customer, I look for completeness in the total solution, and I cannot fault Dell for having most of the pieces in the solution stack. They have networking in their PowerConnect, Force10 and Brocade. They have SAN in both Compellent and EqualLogic but their unified storage story is still a bit lacking. That’s because we have not seen Dell’s NAS storage yet. Exanet was a scale-out NAS and we have seen little rah-rah about this product.

From a data management perspective, their data protection story gels well with the CommVault and Symantec partnerships, but I feel that Dell sales and SEs (at least in Malaysia) spend too much time touting Compellent Automated Storage Tiering. I have spoken to folks who have listened to the Dell guys’ pitches and they are too one-dimensional. It’s always about storage tiering and little else about other Compellent technology.

At this point in time, the story that Dell sells here in Malaysia is still disjointed, but they are getting better, and the fluidity (pun intended ;-)) of their Fluid Data Architecture will improve.

How will Dell fare in 2012? They have taken a beating in the past two quarters of IDC’s storage market tracker, losing some percentage points in market share, but I think Dell will continue to tinker to get it right.

2012 will be their watershed year.

Storage Architects no longer required

I picked up a new article this afternoon from SearchStorage – titled “Enterprise storage trends: SSDs, capacity optimization, auto tiering”. I cannot help but notice that it echoes some of the things I have been writing about: VMware being the storage killer, and the rise of cloud computing taking away our jobs.

I did receive some feedback about what I wrote in the past, and after reading the SearchStorage article, I can’t help but feel justified. In the sidebar, it reads:


The rise of virtual machine-specific and cloud storage suggest that other changes are imminent. In both cases …. and would no longer require storage architects and managers.

Things are changing at an extremely fast pace and for those of us still languishing in the realms of NAS and SAN, our expertise could be rendered obsolete pretty quickly.

But all is not lost, because it is easier for a storage engineer, who already has the foundation, to move into the virtualization space than for a server virtualization engineer to come down and learn the storage fundamentals. We can either choose to be dinosaurs or to be the species of the next generation.

More specialized appliances at Oracle OpenWorld

I was reading the news from Oracle OpenWorld and a slew of specialized appliances is on the menu.

Oracle added the Big Data Appliance and the Oracle Exalytics Business Intelligence Machine to its previous numero uno, the Exadata Database Machine. EMC also announced its Greenplum Data Computing Appliance and its VNX Unified Storage for Oracle.

As quoted:

The EMC VNX Unified Storage for Oracle is a VNX system that has 
Oracle installed in a VMware vSphere virtual machine environment. 
The system is meant to unify all Oracle environments--database over 
Oracle Direct NFS, application servers over NFS, and testing and 
development over NFS--resulting in less disk space used and faster 
testing. EMC says this configuration was made because 50% of Oracle 
customers are virtualizing their systems today.

The VNX Unified Storage for Oracle includes EMC's Fully Automated 
Storage Tiering (FAST) technology, which migrates most frequently 
used data between a primary Fibre Channel drive and solid state drives 
and migrates less frequently used data to Serial ATA (SATA) drives and 
its FAST Cache. In an Oracle environment, FAST is well-suited to 
database applications that generate a large number of random 
inputs-outputs, that experience sudden bursts in user query activity, 
or a high number of user loads and where the entire working set can 
be contained in the solid state drive cache.

Based on testing carried out on an Oracle Real Application Clusters 
(RAC) 11g database that was configured to access the VNX7500 file 
storage over the Network File System (NFS), using the Oracle 
Direct NFS (dNFS) client, results showed an 100% improvement in 
transactions per minute (TPM), 170% improvement in IOPS, and 
a 79% decrease in response time, the company said.
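
Strip away the product names and what FAST automates is essentially a placement policy driven by access frequency. Here is a toy sketch of that idea (my own illustration, not EMC’s algorithm; the extent counts, tier names and capacities are made up):

```python
# Toy auto-tiering: place the most frequently accessed extents on the fastest
# tier until it fills, then spill down to slower tiers. This only illustrates
# the idea behind FAST, not EMC's actual algorithm or data structures.
def place_extents(access_counts, tier_capacities):
    """access_counts: {extent_id: I/Os observed over a period};
    tier_capacities: ordered {tier_name: extents it can hold}, fastest first."""
    placement = {}
    hottest_first = sorted(access_counts, key=access_counts.get, reverse=True)
    tiers = iter(tier_capacities.items())
    tier, room = next(tiers)
    for extent in hottest_first:
        while room == 0:                 # current tier is full, drop to the next one
            tier, room = next(tiers)
        placement[extent] = tier
        room -= 1
    return placement

extents = {"e1": 9500, "e2": 120, "e3": 4800, "e4": 15, "e5": 2300}
tiers   = {"SSD": 1, "FC 15k": 2, "SATA": 10}        # capacities in extents (made up)
print(place_extents(extents, tiers))
# {'e1': 'SSD', 'e3': 'FC 15k', 'e5': 'FC 15k', 'e2': 'SATA', 'e4': 'SATA'}
```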

As for Greenplum, EMC was quoted:

The company also is showing off the EMC Greenplum Data Computing 
Appliance(DCA) for Big Data Analytics configuration, which provides 
a new migration path to Greenplum for Oracle Data Warehouse. This 
system includes the Greenplum Data Computing Appliance, EMC's 
Global Data Warehouse, and EMC's IT Business Intelligence Grid 
infrastructure. The EMC Greenplum DCA consists of 8 to 16 segment 
servers running Red Hat Enterprise Linux. Each segment server 
contains 96 to 192 processor cores, with 384 GB to 768 GB of 
memory per segment server. The DCA includes 12 600-GB Serial 
Attached SCSI (SAS) 15K RPM drives for a total useable and 
compressed capacity of 73 TB to 144 TB. The DCA competes with 
Oracle's Exadata Database Machine.

In tests performed with this server/storage configuration and a 
15-TB Oracle Data Warehouse, the DCA processed a 99 million rows 
query in less than 28 seconds vs. seven minutes in a traditional 
Oracle environment and data loads decreased from six days to 29 
minutes.

It is getting pretty obvious that specialized appliances are making waves at Oracle OpenWorld, but what’s more interesting is the return of a combined and integrated environment of compute and storage, as I have mentioned in my previous blog. And I foresee that these specialized appliances will be one of the building blocks of cloud computing, together with general purpose platforms such as x86, JBODs and the glue to all of these, virtualization, notably VMware.

The rise of the specialized appliance

Compute and storage are 2 components within the IT infrastructure that are surely converging. SAN and NAS are facing their greatest adversary yet, and could be made insignificant if the cloud and virtualization game has its way. This is giving rise to a new breed of solution, a specialized appliance where both compute and storage are ONE. Rising from the ashes of shared storage (SAN and NAS, take note), we are beginning to see things going back to the way of direct, internal storage.

There were some scuffles in the bushes about 5 years ago, when Sun (now Oracle) was ahead of its game. The Sun Fire X4500 (aka Thumper) was one of the strong candidates to challenge the SAN/NAS duopoly in this networked storage period. The X4500 integrated both the server and the storage components together, using ZFS as the file system and volume manager to deliver very high throughput across all the JBOD disks very efficiently. ZFS acted as the RAID, so there was no need for specialized RAID hardware. This proved that a very high performance storage solution could be easily integrated using standard off-the-shelf infrastructure components and the x86 architecture. By combining both compute and storage together, there were hints that the industry was about to rise up to Direct-Attached Storage (DAS) again, despite its perceived weakness against SAN and NAS.

Unfortunately, the applications were not ready for DAS then. Besides ZFS, applications such as databases, email and file servers were not ready to jump onto the DAS bandwagon and ride into the sunset. But the fairy tale seems to be retold again, and this time, the evidence that DAS could rise again is much stronger.

The catalyst to this disruptive force? Virtualization!

I mentioned that VMware is the silent storage killer a few blogs ago. Needless to say, that ruffled a few feathers among the readers. I have no doubt that virtualization is changing how we storage guys look at SAN and NAS. In a traditional setup, the SAN or NAS is set up to provision LUNs or mount points as the data storage for VMFS volumes in the VMware environment. It is then the storage array that provides snapshots, replication, thin provisioning and so on.

Perhaps VMware is nitpicking that managing storage arrays for VMFS volumes is difficult. From the VMware administrator’s view, they are right. They don’t want to know what’s going on below the VM level. All they want is storage, any kind of storage, and VMware will manage the volumes, snapshots, replication and thin provisioning. Indeed, they have been doing that since the vStorage APIs were introduced. In the new vSphere 5.0 release, the ante has been upped even higher, making networked storage less and less significant.

If you want to know about vStorage API and stuff, below is a diagram of the integration of the various components at the VMware API level.


VMware can now make direct, internal storage look like shared storage. The Virtual Storage Appliance (VSA) does just that. VMware already has a thriving market from the community and hobbyists for VMware appliances.

The appliance market has now evolved into new infrastructure too. Using x86 architecture, off-the-shelf infrastructure components (sounds familiar?), companies such as Nutanix and Tintri are taking advantage of this booming trend to introduce specialized VMware appliances as shown in their advertisements on their respective web sites.

Here’s the Nutanix Ad:


Here’s the Tintri Ad:


Both Tintri and Nutanix are a new breed of appliances – specialized appliances for VMware.

At the same time, other application vendors are building these specialized appliances as well. I have mentioned Oracle Exadata many times in the past, and Oracle Exadata is the perfect example of a fine-tuned, hardcore database engine built to make Oracle run at the best performance possible.

Likewise, HP has announced its E5000 Messaging System for Microsoft Exchange. The E5000 is a specialized appliance optimized and well-tuned for Microsoft Exchange Server 2010. In the words of HP:

“HP E5000 Messaging System is the industry’s first fully self-contained platform built for the next-generation of Microsoft Exchange to deliver enterprise-class messaging to businesses of all sizes. Built as a turnkey solution that can be up and running in a few hours vs. days, the HP E5000 Messaging System gives business users the experience they want most: large mailboxes, centralized archiving of mailboxes files and 24×7 access from any device. IT staffs benefit the solutions simplicity to setup, scale and manage and to meet new demands affordably. Ideal for multi-site enterprises as well as branch office and remote office environments, each HP Messaging System delivers greater simplicity and accelerates deployment with preconfigured solutions starting at 500 mailboxes up to 3000 mailboxes, while delivering large, 1 to 2.5GB mailbox sizes. Clients can grow by adding storage capacity or more appliances within the environment up from hundreds to thousands of mailboxes.”

What are the specs of this E5000 box, you say? Here you go:


And look at Row#2 in the table above … Direct, Internal Disks! Look at Row #4, Xeon CPUs! Both Compute and Storage in the same appliance!

While the HP E5000 announcement was recent, Hitachi Data Systems was already in the game early with their Unified Compute Platform and their Converged Platform for Microsoft Exchange, with relatively the same idea – specialized appliances.

Perhaps the HDS solutions aren’t exactly direct, internal storage, but the concept is still the same – specialized appliance. The HDS Unified Compute Platform (UCP) has these components:


The HDS Converged Platform for MS Exchange provides their specialized “appliance” with Reference Architectures that can support up to 68,000 Microsoft Exchange mailboxes. Here’s an architecture diagram of their “appliance”:


There’s no denying that the networked storage landscape is changing. So are the computing platforms. We are already seeing the compute and storage components being integrated together, tighter than ever. The wave is rising for specialized appliances and it can only get more intense from now on.

No wonder HP’s Converged Infrastructure vision is betting on x86 architecture, simple storage platforms with SAS/SATA disks, and virtualization. Other vendors are doing the same as well – Cisco, NetApp and VMware with their FlexPod solution, and EMC with their Vblocks of VMware, Cisco and EMC storage.

Hail to the Rise of the Specialized Appliance!