Is Dell Fluid Enough?

Dell made a huge splash 2 weeks ago in London at their inaugural Dell Storage Forum. They dubbed their storage and management lineup the “Fluid Data Architecture”, offering customers the ability to quickly adapt and automate their business when it comes to storage networking and, more importantly, data management.

At the London show, they showcased several key innovations and product developments. Here’s a list of their jewels:

  • DR4000 – an inline, content optimized backup deduplication appliance (based on the acquired technology of Ocarina Networks)
  • Compellent Storage Center 6.0 – a major software release
  • Compellent key technology integration with VMware
  • Optimized object storage for Microsoft Sharepoint with the DX6000 Object Storage Platform – DX6000 is an OEM from Caringo
  • Broader support for Dell Force10, PowerConnect and their partner Brocade

The technology from Ocarina Networks is fantastic and I have always admired Ocarina; I have written about Ocarina in my previous blog. But I was a bit perplexed why Dell chose to enter the secondary dedupe market with a backup dedupe appliance in the DR4000. They are already a latecomer to the secondary deduplication game, and I thought even HP was late with their StoreOnce.

They could have used Ocarina’s technology to trailblaze the primary deduplication market. In my previous blog, I mentioned that primary deduplication hasn’t really taken off in a big way, and Dell, with the technology from Ocarina, could set the standard and establish themselves as the leader of the primary deduplication market space. I was disappointed that they didn’t, not just yet.

The Compellent Storage Center 6.0 release was a major one and, for better or worse, it coincided with the departure of Phil Soran, the founder and CEO of Compellent. Phil felt that he could let his baby go, and Dell is certainly making the best of what they can do with Compellent as their flagship data storage product.

The major release included 64-bit support for greater performance and scalability, and also included several key VMware integrations that other vendors already have. The technologies included:

  • VMware vStorage API for Array Integration (VAAI)
  • Storage Replication Adapter plug-in for VMware Site Recovery Manager (SRM)
  • vSphere 5 client plug-in
  • Integration of Enterprise Manager and vSphere

Other storage-related releases (I am not going to talk about Force10 or their PowerConnect solutions here) included Dell offering 16Gbps Fibre Channel switches from Brocade and also their DX6000 Object Storage Platform optimized for Microsoft SharePoint.

I think it is fantastic that Dell is adapting and evolving into a business-oriented, enterprise solution provider, and their acquisitions in the past 3 years – EqualLogic, Exanet, Ocarina Networks, Force10 and Compellent – prove that Dell aims to take market share in the storage networking and data management market. They have key initiatives with CommVault, Symantec, VMware and Microsoft as well. And Michael Dell is becoming quite a celebrity lately, giving Dell the boost it needs to battle in this market.

But the question is, “Is their Fluid Data Architecture fluid enough?” If I were a customer, would I bite?

As a customer, I look for completeness in the total solution, and I cannot fault Dell for having most of the pieces in the solution stack. They have networking in PowerConnect, Force10 and Brocade. They have SAN in both Compellent and EqualLogic, but their unified storage story is still a bit lacking. That’s because we have not seen Dell’s NAS storage yet. Exanet was a scale-out NAS, and we have seen little rah-rah about this product.

From a data management perspective, their data protection story gels well with the CommVault and Symantec partnerships, but I feel that Dell sales and SEs (at least in Malaysia) spend too much time touting Compellent Automated Storage Tiering. I have spoken to folks who have listened to the Dell guys’ pitches and it’s too one-dimensional. It’s always about storage tiering and little else about other Compellent technology.

At this point in time, the story that Dell sells here in Malaysia is still disjointed, but they are getting better. And eventually, the fluidity (pun intended ;-)) of their Fluid Data Architecture will improve.

How will Dell fare in 2012? They have taken a beating in the past 2 quarters of IDC’s storage market tracker, losing some percentage points in market share, but I think Dell will continue to tinker to get it right.

2012 will be their watershed year.

Lightning about to strike

Watch out for February 6th, 2012 folks! The Lightning is about to strike!

Yes, it is likely that EMC will be announcing their server-based, 8-lane PCIe Flash memory card in the early week of February. The PCIe card was dubbed “Project Lightning” when it was first announced at EMC World in May last year. It represents EMC’s first foray into products that sit on the server side, giving the impression that EMC could be entering the server business. I blogged about this way back in September last year. As explained by the EMC folks, they are not going into the server business but rather “extending” their performance tiering into the server space. Think of it like an umbilical cord that sucks the server’s CPU processing power to give a maximum performance boost for the EMC storage.

The card will sport solid state storage from the LSI WarpDrive and comes in 100/200/300GB capacities. Here’s a picture of how the Lightning card would look:

The SSD is SLC (Single Level Cell) and is capable of delivering 150,000 random read IOPS based on 4K blocks and 190,000 random write IOPS. It can squeeze out 1.4GB/sec in read throughput. While it is not on par with the performance of Fusion-io, it can definitely do well leveraging EMC’s huge customer base. Furthermore, PCIe-based Flash memory cards such as Fusion-io’s will not be able to take advantage of the bridge that links the server and the storage, making them confined to the server’s resources. The advantage definitely goes to EMC when you explore the possibilities.
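As a quick back-of-the-envelope check (using only the figures quoted above, not any vendor-verified specs), random IOPS at a small block size and raw read throughput measure different things. The hypothetical sketch below converts the quoted 4K IOPS numbers into bandwidth, which suggests the 1.4GB/sec figure comes from larger, sequential I/O rather than 4K random reads.

```typescript
// Back-of-the-envelope: convert random IOPS at a given block size into MB/s.
// The IOPS figures are the ones quoted in this post, not verified specs.
const KiB = 1024;

function iopsToMBps(iops: number, blockSizeBytes: number): number {
  return (iops * blockSizeBytes) / (1024 * 1024);
}

console.log(`4K random read  ~ ${iopsToMBps(150000, 4 * KiB).toFixed(0)} MB/s`); // ~586 MB/s
console.log(`4K random write ~ ${iopsToMBps(190000, 4 * KiB).toFixed(0)} MB/s`); // ~742 MB/s
// The quoted 1.4GB/sec read throughput is therefore a large-block /
// sequential figure, not something you get from 4K random reads alone.
```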

Here’s a view of a slide from Virtual Geek summarizing the Project Lightning:

The Lightning card is aimed at customers who demand the highest performance, even higher than Tier 0. It will be integrated with EMC’s FAST (Fully Automated Storage Tiering) technology and will be available for the VNX and VMAX platforms.

So watch out folks, because Lightning is about to strike soon!

Amazon makes it easy

I like the way Amazon is building their Cloud Computing services. Amazon Web Services (AWS) is certainly on track to become the most powerful Cloud Computing company in the world. In fact, AWS might already be. And they are certainly not resting on their laurels, having launched 2 new services in as many weeks – Amazon DynamoDB (last week) and Amazon Storage Gateway (this week).

I am particularly interested in the Amazon Storage Gateway, because it addresses one of the biggest fears of Cloud Computing head-on. A lot of large corporations are still adamant about keeping their data on-premise, where it is private and secure. Many large corporations remain very skeptical even though Cloud Computing is changing the IT landscape in a massive way. The barrier to entry for large corporations is not trivial, but Amazon is adapting to get more IT divisions and departments to try out Cloud Computing in a less disruptive way.

The new service is really about data storage and data backup for large corporations. This is important because large corporations have plenty of requirements for data storage and for data to be backed up. And as we know, a large portion of the data stored does not need to be transactional or to be accessed frequently. This set of data is usually less frequently used, kept for archiving or regulatory compliance reasons, particularly in the banking and healthcare industries.

In data backup operations, the reason data is backed up is to provide a data recovery mechanism when disaster strikes. Large corporations back up tons of data every day, week or month, and this data only has value when there is a situation that requires data relevance, data immediacy or data recovery. Otherwise, it is just plenty of data taking up storage space, be it on disk or on tape.

Both data storage and data backup cost a lot of money, in both CAPEX and OPEX. On the CAPEX side, you are constantly pressured to buy more storage to store the ever-growing data. This leads to greater management and administration costs, both contributing heavily to OPEX. And I have not included the OPEX costs of floor space, power and cooling, and people (training, salary, time and so on), which typically add up to 3-5x the capital investment in operating costs. Such a model of IT operations related to storage cannot continue forever, and storage in the Cloud offers an alternative.

These 2 scenarios – data storage and data backup – are exactly the type of market AWS is targeting. In order to simplify and pacify large corporations, AWS introduced the Amazon Storage Gateway, which eases large corporations into taking some of their IT storage operations to the Cloud in the form of Amazon S3.

The video below shows the Amazon Storage Gateway:

The Amazon Storage Gateway is a software “appliance” that is installed on-premise in the large corporation’s data center. It seamlessly integrates into the LAN and provides an SSL (Secure Socket Layer) connection to Amazon S3. The data being transferred to S3 is also encrypted with AES (Advanced Encryption Standard) 256-bit. Both SSL and AES-256 give customers a sense of security, and AWS claims that the implementation meets the data storage and data recovery standards used in the banking and healthcare industries.

The data storage and backup service regularly protects the customer’s data in snapshots, giving the customer a rapid recovery platform should they experience on-premise data corruption or data disruption. At the same time, the snapshot copies in Amazon S3 can also be loaded into Amazon EBS (Elastic Block Store), and testing or development environments can be spun up and evaluated with Amazon EC2 (Elastic Compute Cloud). The simplicity of sharing and combining different Amazon services will, no doubt, give customers peace of mind, easing their adoption of Cloud Computing with AWS.

This new service starts with a 60-day free trial and moves on to a USD$125.00 (about MYR$400.00) per gateway per month subscription fee. The data storage (inclusive of the backup service) costs only 14 cents per gigabyte per month. For 1TB of data, that is approximately MYR$450 per month. Therefore, minus the initial setup costs, that comes to a total of about MYR$850 per month, or slightly over MYR$10,000 per year.
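For those who like to check the arithmetic, here is a rough sketch of that monthly cost using the prices quoted above; the USD-to-MYR exchange rate of about 3.2 is my own assumption, not an AWS figure.

```typescript
// Rough monthly/yearly cost estimate for the Amazon Storage Gateway figures above.
const usdToMyr = 3.2;               // assumed exchange rate, not an official figure
const gatewayFeeUsd = 125;          // per gateway, per month
const storagePerGbMonthUsd = 0.14;  // quoted storage price (inclusive of backup)
const storedGb = 1024;              // 1TB of data

const monthlyUsd = gatewayFeeUsd + storagePerGbMonthUsd * storedGb;
const monthlyMyr = monthlyUsd * usdToMyr;

console.log(`Monthly: ~USD ${monthlyUsd.toFixed(0)} (~MYR ${monthlyMyr.toFixed(0)})`);
console.log(`Yearly:  ~MYR ${(monthlyMyr * 12).toFixed(0)}`);
// ~USD 268 (~MYR 859) per month, which is a little over MYR 10,000 per year.
```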

At this point, I’d like to relate an experience I had a year ago when implementing a so-called private cloud for an oil-and-gas customer in KL. They were using HP EVS (Electronic Vaulting Service) to an undisclosed HP data center hosting site in the Klang Valley. HP EVS, which was an OEM of Asigra, was not an easy solution to implement, but what was more perplexing was the fact that the customer had a poor understanding of their objectives and of their 5-year plan for keeping the data protected.

When the first 3-4TB of data storage and backup capacity was almost used up, the customer asked for a quotation for an additional 1TB of the EVS solution. The subscription for 1TB was MYR$70,000 per year. That is 7x more than the AWS cost of roughly MYR$10,000 per year! I have to salute the HP sales rep. It must have been a damn good, convincing sell!

In the long run, the customer would be better off running their storage and backup on-premise with their HP EVA4400; adding an additional 1TB (and hiring another IT administrator) would have cost a whole lot less.

Amazon Web Services has already been operating in Singapore for the past 2 years, and I am sure they are eyeing Malaysia as part of their regional market. Unless and until Malaysian companies offering Cloud Services learn to use economies of scale to capitalize on the Cloud Computing market, AWS is always going to be a big threat to CSP companies in Malaysia and a boon for any company seeking cloud computing services anywhere in the world.

I urge customers in Malaysia to start questioning their so-called Cloud Service Providers on whether they can do what AWS is doing. I have low confidence in what most local “cloud computing” companies can deliver right now. I hope they stop window dressing their service offerings and start giving real cloud computing services to customers. And for customers, you must continue to research and find out which cloud services meet your business objectives. Don’t be dazzled by the fancy jargon or technical idealism thrown at you. Always, always find out more, because your business cost is at stake. Don’t be like the customer who paid MYR$70,000 for 1TB per year.

AWS is always innovating and the Amazon Storage Gateway is just another easy-to-adopt step in their quest for world domination.

Not all SSDs are the same

Happy Lunar New Year! The Chinese around the world have just ushered in the Year of the Water Dragon yesterday. To all my friends and family, and readers of my blog, I wish you a prosperous and auspicious Chinese New Year!

Over the holidays, I have been keeping up with the progress of Solid State Drives (SSDs). I am sure many of us are mesmerized by SSDs, and the storage vendors are touting the best that SSDs have to offer. But let me tell you one thing – you are probably getting the least of what the best SSDs have to offer. You might be puzzled why I say things like this.

Let me share a common sales pitch. Most (if not all) storage vendors will tout performance (usually IOPS) as the greatest benefit of SSDs. The performance numbers have to be compared to something, and that something is your regular spinning Hard Disk Drive (HDD). The slowest SSDs are easily more than 10x faster than HDDs in terms of IOPS. A single SSD can churn out at least 5,000 IOPS, compared to the fastest 15,000 RPM HDDs, which churn out about 200 IOPS (depending on the HDD vendor). By that measure, even the slowest SSDs can be 20-25x faster than the fastest HDDs.

But the intent of this blogger is to share with you more about SSDs. There’s more to know, because SSDs are not all built the same. There are write-biased SSDs and read-biased SSDs; there are SLC (single level cell) and MLC (multi level cell) SSDs, and so on. How do you differentiate them if Vendor A touts their SSDs and Vendor B touts theirs as well? You are not comparing SSDs and HDDs anymore. How do you know what questions to ask when they show you their performance statistics?

SNIA has recently released a methodology called the “Solid State Storage (SSS) Performance Testing Specifications (PTS)” that helps customers evaluate and compare SSD performance from a vendor-neutral perspective. There is also a whitepaper related to the SSS PTS. This is very important, because we have to continue to educate the community about what is right and what is wrong.

In a recent webcast, the presenters from the SNIA SSS TWG (Technical Working Group) mentioned a few questions that I think we, as vendors and customers, should think about when faced with an SSD sales pitch. I thought I’d share them with you:

  • Was the performance testing done at the SSD device level or at the file system level?
  • Was the SSD pre-conditioned before the testing? If so, how?
  • Were the performance results taken at a steady state?
  • How much data was written during the testing?
  • Where was the data written to?
  • What data pattern was tested?
  • What was the test platform used to test the SSDs?
  • What hardware or software package(s) were used for the testing?
  • Was the HBA bandwidth, queue depth and other parameters sufficient to test the SSDs?
  • What type of NAND Flash was used?
  • What is the target workload?
  • What was the percentage weight of the mix of Reads and Writes?
  • Are there warranty life design issues?

I thought that these questions were very relevant to understanding SSD performance. I also got to know that SSDs behave differently throughout the life stages of the device. From a performance point of view, there are 3 distinct performance life stages:

  • Fresh out of the box (FOB)
  • Transition
  • Steady State

 

As you can see from the graph below, an SSD fresh out of the box (FOB) displays impressive performance numbers. Over a period of time (the graph shows minutes), it transitions into a mezzanine stage of lower IOPS and finally normalizes into the stage called Steady State. The Steady State is the desirable test range that will give the most accurate IOPS numbers. Therefore, it is important that your storage vendor’s performance numbers are taken during this life stage.
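To make the idea of Steady State concrete, here is a rough sketch of the kind of check the PTS describes: over a trailing measurement window, the IOPS must stay within a band around the average and must not still be trending up or down. The 20% excursion and 10% slope thresholds below are my reading of the spec, so treat this as an illustration rather than the official test.

```typescript
// Sketch of a steady-state check in the spirit of the SNIA SSS PTS:
// IOPS over a trailing window must stay within a band around the average
// and must not be trending. Thresholds here are my approximation of the spec.

function linearSlope(y: number[]): number {
  const n = y.length;
  const xMean = (n - 1) / 2;
  const yMean = y.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (y[i] - yMean);
    den += (i - xMean) ** 2;
  }
  return num / den; // IOPS change per test round
}

function isSteadyState(windowIops: number[]): boolean {
  const avg = windowIops.reduce((a, b) => a + b, 0) / windowIops.length;
  const excursion = Math.max(...windowIops) - Math.min(...windowIops);
  const driftOverWindow = Math.abs(linearSlope(windowIops)) * (windowIops.length - 1);
  return excursion <= 0.2 * avg && driftOverWindow <= 0.1 * avg;
}

// FOB numbers still falling -> not steady; a flat tail -> steady.
console.log(isSteadyState([42000, 35000, 29000, 25000, 23000])); // false
console.log(isSteadyState([21000, 20600, 20900, 20700, 20800])); // true
```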

Another consideration when interpreting an SSD’s performance numbers is the type of test used. The test could be done at the file system level or at the device level. As shown in the diagram below, the test numbers could be taken from many different elements along the data path.

 

Performance for cached data would give impressive numbers, but it is not accurate. File system performance is not useful either, because the data travels through different layers, masking the true performance capability of the SSDs. Therefore, SNIA’s performance testing is based on a synthetic, device-level test to achieve consistency and more accurate IOPS numbers.

There are many other factors used to determine the most relevant performance numbers. The SNIA PTS has 4 main test suites that address different aspects of an SSD’s performance. They are:

  • Write Saturation test
  • Latency test
  • IOPS test
  • Throughput test

The SSS PTS would be able to reveal which is the better SSD. Here’s a sample report on latency.

Once again, it is important to know SSDs and not to take vendors’ numbers verbatim. As the SSD market continues to grow, the responsibility lies on both sides of the fence – the vendor and the customer.

 

SSDs rising in the flood crisis

The Thailand flood last year spelled disaster for the storage industry. We have already seen several big boys, the likes of HP, EMC and NetApp, announcing price rises because of the flood.

NetApp’s announcement is here; EMC is here; and HP is here, if you want to read about it. Below is a nice and courteous EMC letter to their customers.

But the Chinese character for “crisis” (below) also spells opportunity; opportunity for Solid State Drives (SSDs), that is.

For those of us close to the ground, the market for spinning hard disk drives (HDDs) has certainly been challenging for the past few months, especially for smaller system providers like us. Without the leveraging power of the bigger boys, we practically had to beg to buy HDDs, not to mention the fact that the price has practically doubled.

Before the Thailand flood crisis, the price of a 2TB HDD was 0.325 Malaysian ringgit per GB. That’s about 33 cents per GB. Today, the price is about 55 cents per GB. In comparison, at least from my experience, the per-GB price of SSDs has gone down from $5.83 to $4.99.

I know some of you might pooh-pooh the price difference between a 2TB SATA/SAS drive and a 120GB SSD, partly because the SSD seems so expensive. But when you do the math, the SSD is likely to be 50x faster (at worst) to 200x faster (at best) for applications requiring IOPS, which could mean that transactional applications complete on average 100x faster, with better response times and lower latency. This has a domino effect on other related applications, making the entire service request perform and complete faster. When we put a price on those transactional hours, for example $10 per hour of work, we can see the cost savings that come from using SSDs in the storage.
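Here is a rough sketch of that math, using the ballpark per-GB prices above and the rough IOPS figures from the earlier SSD post (about 200 IOPS for a fast HDD, 5,000 IOPS for even a slow SSD). The capacities, the IOPS values and treating both prices as MYR per GB are my own assumptions, purely for illustration.

```typescript
// Cost per GB vs. cost per IOPS, using the ballpark numbers in this post.
interface Drive { name: string; capacityGb: number; pricePerGb: number; iops: number; }

const hdd: Drive = { name: "2TB HDD",   capacityGb: 2000, pricePerGb: 0.55, iops: 200 };  // 200 IOPS is generous for a 2TB drive
const ssd: Drive = { name: "120GB SSD", capacityGb: 120,  pricePerGb: 4.99, iops: 5000 };

for (const d of [hdd, ssd]) {
  const price = d.capacityGb * d.pricePerGb;
  console.log(`${d.name}: ~MYR ${price.toFixed(0)}, ` +
              `MYR ${d.pricePerGb.toFixed(2)}/GB, ` +
              `MYR ${(price / d.iops).toFixed(2)} per IOPS`);
}
// Per GB the HDD wins by roughly 9x, but per IOPS the SSD wins by roughly 45x,
// which is why IOPS-hungry transactional workloads can justify the SSD premium.
```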

Interestingly, a friend of mine asked me about the prominence of all-SSD storage systems. I have written about all-SSD systems in the past, and also did a high-level overview of Pure Storage some time back. A very interesting fact I recalled was these systems having massive amounts of IOPS. Having plenty of IOPS helps because you can do away with Automated Storage Tiering (AST): you don’t have to tier your data, and you don’t have to pay for such a feature.

Yes, all-SSD pure-play storage systems are gaining prominence and it’s time to take notice. Nimbus beat NetApp and HP 3PAR last year to win eBay with an all-SSD storage solution, and other players such as Violin Memory Systems, Pure Storage, SolidFire and, of course, Texas Memory Systems (aka RAMSAN) are in the game as well. And they are attracting big names into their management teams and getting VC dollars, of course.

The Thailand flood aftermath will probably take 6 months or more before production returns to pre-crisis capacity, and SSDs can use this window of opportunity in the crisis to surge ahead. And if this flood is going to be an annual thing for Thailand (God bless Thailand), the HDD market is going to have a perennial problem. And SSDs are going to rise even faster.

 

Joy(ent) to the World

When someone as important and as prominent as Jason Hoffman reads and follows your blog, you tend to stand up and take notice. When I found out last week that Jason Hoffman, Founder and CTO of Joyent, was doing just that, I was deeply honoured and elated.

My Asian values started kicking in and I felt that I should reciprocate his gracious visits with a piece on Joyent. I have known about Joyent thanks to Bryan Cantrill, their VP of Engineering, because I am bloody impressed with his work on DTrace. And I have followed Joyent’s announcements every now and then, even recommending to a buddy, a couple of months ago, a Service Delivery Manager (Asia Pacific) job posted on Joyent’s website. He’s one of the best Solaris engineers I have ever worked with, but the problem with techies is that they tend to wait for everything to fall into place before they do the next thing. Too methodical!

I took some time over the weekend to understand a bit more about Joyent and their solution offerings. They are doing some mighty cool stuff, and if you are a Unix/Linux buff/bigot like me, you would be damn impressed. For those who have experienced Unix, and especially Solaris, there is an unexplained element that describes the fire and the passion of such a techie. I was feeling all the good vibes all over again.

Unfortunately, Joyent is not well known in this part of the world, but I am aware of their partnership with a local company called XyBase, announced in June last year. XyBase, through its vehicle called Anise Asia, entered into the partnership to resell Joyent’s SmartCenter solution. For those who have worked with XyBase in Malaysia, let’s not go there. 😉

Enough chitter-chatter! What’s Joyent about?

Well, for Malaysian IT followers, we are practically drowning in VMware. VMware does a seminar every 1.5 months or so, and they get invited to other vendors’ events ever so frequently as well. My buddy, Mr. Ong Kok Leong, who was an early employee at VMware Malaysia, has been elevated to superstardom thanks to his presence in everything VMware. It’s a good thing, and kudos to VMware for taking advantage of their first-to-market, super gung-ho approach in the last 3 years or so. They have built a sizable lead in the local market and competitors like Citrix Xen and Microsoft Hyper-V are being left in the dust. I believe only RedHat’s KVM is making a bit of a dent, but they are primarily confined to their own RedHat space. Furthermore, most of VMware’s competitors do not have a strong portfolio and a complete software stack to challenge VMware and what they have been churning out.

Here’s my take … consider Joyent, because I see Joyent having a very, very strong portfolio that can give VMware a run for its money. Publicly listed VMware has deep pockets to continue their marketing blitz and, because of where they are right now, they have gotten very pricey and complicated. And this blogger intends to level the playing field a bit by sharing more about Joyent and their solutions.

I see Joyent having 4 very strong technologies that differentiate them from the others. These technologies (in no particular order) are:

  • node.js
  • ZFS
  • DTrace
  • KVM

These technologies have been proven in the field because Joyent has been deploying them, stress-testing them and improving on them in their own cloud offering, Joyent Cloud, for the last few years. This is true “eating your own dogfood” and putting your money where your mouth is. This is a very important consideration when building a Cloud Computing offering, especially in the public cloud space. You need something that is proven, and Joyent Cloud is a testimonial to Joyent’s technology.

So let’s start with a diagram of the Joyent Cloud Software Stack.

 

Key to the performance of Joyent Cloud is node.js.

node.js, as quoted on its website: “Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.” The key here is being event-driven and asynchronous: cloud solutions developed using node.js are able to go faster, scale bigger and respond better. The event-based model follows a programming approach in which the flow of the program is determined by the events that occur.

A simple analogy is when you (in Malaysia) are at McDonald’s. In the past, the McDonald’s staff would take and fulfill your order before serving the next customer, and so on. That was the flow of the past. Some time last year, McDonald’s decided that their front staff would take your order, send you to a queue and then take the order of the next customer. The back-end support staff would then fulfill your order, putting that burger and drink on your tray. That is why they are able to serve (take your money) faster and get more things done. This is what I understand about event-driven, when it is applied in a programming context.

node.js has been touted as the new “Ruby on Rails” and it is all about low latency and concurrency in applications, especially cloud applications. Here’s a video introducing node.js by Joyent’s very own Ryan Dahl, the creator of node.js.
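If you want to see what “event-driven and non-blocking” looks like in practice, here is a minimal, hypothetical sketch using nothing more than Node’s built-in http module (nothing Joyent-specific): the single-threaded “front counter” keeps taking requests while earlier ones are still being fulfilled by callbacks.

```typescript
// Minimal event-driven sketch: the server registers callbacks and the event
// loop fires them as requests arrive, instead of blocking on each request.
import * as http from "http";

const server = http.createServer((req, res) => {
  // Simulate a slow back-end task (the "kitchen") without blocking the
  // front counter; this callback fires when the timer event occurs.
  setTimeout(() => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`Order served for ${req.url}\n`);
  }, 100);
});

// The front counter keeps taking new orders while earlier ones are cooking.
server.listen(8080, () => console.log("Taking orders on http://localhost:8080"));
```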

Besides performance, you also need a strong and robust file system to ensure security, data integrity and protection of data as it scales. ZFS is a 128-bit, enterprise-class file system that was developed at Sun more than 10 years ago, and I am a big admirer of the ZFS technology. I have written about ZFS in the past, comparing it with NetApp’s Data ONTAP, and also about ZFS’s self-healing properties in dealing with Silent Data Corruption. In fact, my buddy (him being the more technical one) and I have been developing storage solutions with ZFS.

Cloud Computing is complex and you have to know what’s happening in the Cloud. The Cloud Service Provider must know the real-time behaviour of the cloud’s properties, whether for performance, resource consumption and contention, bottlenecks, application characteristics, or even for finding problems as quickly as possible. Customers, for their part, must have the ability to monitor, understand and report what they are consuming and using in the Cloud.

The regularly used buzzword is Analytics, and DTrace is the framework behind Joyent’s Cloud Analytics. When it comes to analytics, nothing comes close to what DTrace can do. Most vendors (including VMware) will provide APIs for 3rd-party ISVs to develop cloud analytics, but nothing beats having the creator of the cloud technology give you the tools they use internally. That is what Joyent is giving the customer: DTrace, a tool that they use internally themselves. Here’s a screenshot of DTrace in action in Joyent’s SmartDataCenter.

 

I have always said that you’ve got to see it to know it. Cloud visibility is crucial for the optimal operational efficiency of the cloud.

Joyent already had Solaris Zones technology in its offering, but the missing piece was a bare-metal hypervisor, and last year Joyent added that final piece. KVM (Kernel-based Virtual Machine) was ported to Joyent’s platform, and KVM is more secure and faster than the traditional VMware approach, which relies on binary translation. KVM means that the virtualization kernel interacts and communicates directly with the native x86 virtualization on processors that support hardware virtualization extensions. There is a whole religious debate about native virtualization, paravirtualization and binary translation on the web. You can read one here, and as I said, KVM is native virtualization.

There is lots more to know about Joyent, but you’ve got to spend some time learning about it. It is not well known (yet) in this part of the world; my intention in this blog entry is to disseminate information so that you, the readers, don’t have to be droned into one thing only.

There are choices, and in the virtualization space it is not always about VMware. VMware deserves to be where they are, but when one comes into power (like VMware), one tends to become less friendly to work with. A customer should not be subjected to this new order of oppression, because businesses exist only because there are customers. And as customers, there are always choices, and Joyent is one good choice.

Primary Dedupe where are you?

I am a bit surprised that primary storage deduplication has not taken off in a big way, unlike the buzz when deduplication first came into being about 4 years ago.

When deduplication solutions first came out, they were aimed particularly at the backup data space. Now more popularly known as secondary data deduplication, the technology reduced the inefficiencies of backup and helped spark the frenzy of adulation for companies like Data Domain, Exagrid, Sepaton and Quantum a few years ago. The software vendors were not left out either. Symantec, CommVault, and everyone else in town had data deduplication for backup and archiving.

It was no surprise that EMC battled NetApp and finally won the right to acquire Data Domain for USD$2.4 billion in 2009. Today, in my opinion, the landscape of secondary data deduplication has pretty much settled and matured. Practically everyone has some sort of secondary data deduplication technology or solution in place.

But talk of primary data deduplication hardly causes a ripple these days compared to a few years ago, especially here in Malaysia. Yeah, the IT crowd is pretty fickle that way, because most tend to follow the trend of the moment. Last year it was Cloud Computing and now the big buzzword is Big Data.

We are here to look at technologies to solve problems, folks, and primary data deduplication technology solutions should be considered in any IT planning. And it is our job as storage networking professionals to continue to advise customers about what is relevant to their business and addressing their pain points.

I get a bit cheesed off that companies like EMC or HDS continue to spend their marketing dollars on hyping the trends of the moment rather than using some of their funds to promote good technologies such as primary data deduplication that solve real-life problems. The same goes for most IT magazines, publications and other communications media, which rarely give space to technologies that solve problems on the ground, and just harp on hype, fuzz and buzz. It gets a bit too ordinary (and mundane) when they are trying too hard to be extraordinary, because everyone is basically talking about the same freaking thing at the same time, over and over again. (Hmmm … I think I am speaking off topic now … I better shut up!)

We are facing an avalanche of data. The other day, the CEO of Nexenta used the phrase “data tsunami”, but whatever term is used does not matter. There is too much data. Secondary data deduplication solved one part of the problem and now it’s time to talk about the other part, which is data in primary storage, hence primary data deduplication.

What is out there? Who’s doing what in terms of primary data deduplication?

NetApp has had their A-SIS (now NetApp Dedupe) for years and they are good in my books. They talk to customers about the benefits of deduplication on their FAS filers. (Side note: I am seeing more benefit in using data compression in primary storage, but I am not going to go there in this entry.) EMC had primary data deduplication in their Celerra years ago but they hardly talked much about it. It’s on their VNX as well, but again, nobody in EMC ever speaks about their primary deduplication feature.

I have always loved Ocarina Networks’ ECO technology, and Dell hasn’t given much of a hoot about Ocarina since the acquisition in 2010. The technology surfaced a few months ago in the Dell DX6000G Storage Compression Node for its Object Storage Platform, but then again, all Dell talks about is their Fluid Data Architecture from the Compellent division. Hey Dell, you guys are so one-dimensional! Ocarina is a wonderful gem in their jewel case, and yet all their storage guys talk about are Compellent and EqualLogic.

Moving on … I ought to knock Oracle on the head too. ZFS has great data deduplication technology that is meant for primary data, and a couple of years back, Greenbytes took that and made a solution out of it. I don’t follow what Greenbytes is doing nowadays, but I do hope that the big wave of primary data deduplication will rise for companies such as Greenbytes to take off in a big way. No thanks to Oracle for ignoring another gem in ZFS and wasting their resources on pre-sales (in Malaysia) and partners (in Malaysia) that hardly know much about the immense power of ZFS.

But an unexpected source, Microsoft, could help trigger greater interest in primary data deduplication. I have just read that the next version of the Windows Server OS will have primary data deduplication integrated into NTFS. The feature will be available in Windows 8, and the architectural view is shown below:

The primary data deduplication in NTFS will be a feature add-on for Windows Server users. It is implemented as a filter driver on a per-volume basis, with each volume a complete, self-describing unit. It is cluster-aware, and fully crash-consistent on all operations.

The technology is Microsoft’s own, built from scratch, and it will work to position Hyper-V as a strong enterprise choice in its battle with VMware for the server virtualization space. Mind you, VMware already has a big, big lead, and this is something Microsoft must do or die trying as Hyper-V plays catch-up. Otherwise, the gap between Microsoft and VMware in the server virtualization space will become even greater.

I don’t have the full details of this, but I read that the NTFS primary deduplication chunk sizes will be between 32KB and 128KB and that it will be post-process.
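Conceptually, post-process deduplication boils down to carving stored data into chunks, fingerprinting each chunk and keeping only one copy of each unique fingerprint. The sketch below is purely illustrative – it is not how the Windows Server/NTFS feature is implemented, and it uses a simple fixed 64KB chunk size instead of the 32KB-128KB variable chunks mentioned above – but it shows how a post-process pass would estimate the dedupe ratio.

```typescript
// Illustrative post-process dedupe pass: chunk files, hash each chunk, and
// count how many chunks map to a fingerprint we have already seen.
// Not the NTFS implementation -- just the general idea, with fixed-size chunks.
import { createHash } from "crypto";
import { readFileSync } from "fs";

const CHUNK_SIZE = 64 * 1024; // fixed 64KB chunks, for simplicity

function dedupeStats(paths: string[]): { total: number; unique: number } {
  const seen = new Set<string>();
  let total = 0;
  for (const path of paths) {
    const data = readFileSync(path);
    for (let off = 0; off < data.length; off += CHUNK_SIZE) {
      const chunk = data.subarray(off, off + CHUNK_SIZE);
      seen.add(createHash("sha256").update(chunk).digest("hex"));
      total++;
    }
  }
  return { total, unique: seen.size };
}

// Example (hypothetical file names): only unique chunks need to be stored.
// const s = dedupeStats(["vm1.vhd", "vm2.vhd"]);
// console.log(`dedupe ratio ~ ${(s.total / s.unique).toFixed(2)}:1`);
```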

With Microsoft introducing their technology soon, I hope primary data deduplication will get some deserved accolades, because I think most companies are really not doing justice to the great technologies they have in their jewel cases. And I hope Microsoft, with all its marketing savviness and adeptness, will do some justice to a technology that solves real-life data problems.

I bid you good luck – Primary Data Deduplication! You deserve better.

Phoenix rising from OpenSolaris ashes

I got a little nostalgic over the weekend. As I was working on Solaris 11 x86 over the past few weeks, I got a little bit peeved about how much Oracle has changed the OS.

Commands like ifconfig don’t appear to be very functional anymore; instead, ipadm has taken over most of the configuration options. And when I was working with Jumpstart (damn!), it does not work the way I know anymore. And now AI (Automated Install) has taken over Jumpstart and I’ve got to relearn the whole what-cha-ma-call-it. Dang!

I remember the day when Solaris x86 first came out in the early 90s. I was ecstatic because I could finally test and run Solaris on the x86 platform. I could get things running at home and have fun with it. Drivers were limited then (and still are, though things have gotten much better), but I was happily hacking away together with other Linux distros as the open source revolution was just beginning. After I joined NetApp, things started to change and I abandoned Solaris in favour of Linux, as my job, as well as my interest, was on Linux, especially RedHat. I eventually got my RHCE and completely lost touch with Solaris. By 2005, when OpenSolaris was announced under the CDDL (Common Development and Distribution License), I was no longer well versed in the developments of Solaris and OpenSolaris.

Enough about my nostalgia, because I am beginning to see a young phoenix (a mythical firebird) rising from the mess of what Oracle did with OpenSolaris! Since Oracle purchased Sun in 2010, Oracle has practically burned OpenSolaris to ashes. On August 13, 2010, Oracle announced the end of OpenSolaris in an internal memo, which read:

Solaris Engineering,

Today we are announcing a set of decisions regarding the path to
Solaris 11, and answering key pending questions on open source, open
development, software and binary licenses, and how developers and
early adopters will be able to use Solaris 11 technology before its
release in 2011.

As you all know, the term “OpenSolaris” has been used colloquially to
refer to any or all of a collection of source code, a development
model, a web site, a logo, a binary release, a source license, a
community, and many other related things. So it’s taken a while to go
over each issue from an organizational and business perspective, and
align on the correct next step. Therefore, please take the time to
read all of the detail here carefully. We’ll discuss our strategy
first, and then the decisions and changes to our policies and
processes that implement that strategy.

If you want the entire memo (and all the fa-lah-lah that goes with it), go to Steven Stallion’s blog. Incidentally, Steven Stallion was the OpenSolaris kernel developer who leaked the memo into the open.

It became pretty obvious that Oracle’s business-suit culture and “is this going to make money?” ways were suffocating the talent and innovation of the Sun engineering tribes. Some of the high-profile leavers were James Gosling (father of Java) and Jeff Bonwick (father of RAID-Z, who led the ZFS development team at Sun). And there was an exodus of top talent within 90-120 days of the Oracle acquisition.

The key technologies that went into OpenSolaris (and Solaris) were slowly but surely deprived of their inventors’ and maintainers’ nourishment. These technologies were:

  • ZFS (Project Pacific)
  • DTrace
  • Zones (aka Solaris Containers, aka Project Kevlar)
  • Fault Management Architecture (FMA)
  • Service Management Facility (SMF)
  • Advanced Network Virtualization (Project Crossbow)
  • Least-privilege

and many more. Some of these technologies were already open under the CDDL license, but some were still very much proprietary to Sun (I mean, Oracle). It was difficult to use what was available under the OpenSolaris CDDL license to rebuild again, especially when the inventors, talents and maintainers are now scattered across companies like Delphix, Nexenta, Greenbytes, Joyent and so on.

At the end of last year, shortly before Solaris 11 was announced by Oracle, the people who are passionate about OpenSolaris (and Solaris) got together in full force again. Dubbed “Project Illumos”, the key people who had developed for Sun convened to build a new open-source, Solaris-based operating environment. The proprietary bits that are closely guarded by Oracle are going to be either rebuilt from scratch or ported from BSD into the last OpenSolaris kernel before Oracle killed it. That kernel was Solaris Nevada, which was supposed to be the successor to Solaris 10.

The Illumos team already has a bootable and working operating environment and new development is going on at a frantic pace. In the words of Bryan Cantrill (father of DTrace), now VP of Engineering at Joyent:

“illumos was not designed to be a fork, but rather an entirely open downstream repository of OpenSolaris”

And the talents congregating around the Illumos project (like moths to a flame) are super-stellar. Just have a look at this list:

  • ZFS –> Matt Ahrens, Eric Schrock, George Wilson, Adam Leventhal, Bill Pijewski and Brendan Gregg
  • SMF –> Dan McDonald and Sumit Gupta
  • DTrace –> Bryan Cantrill, Adam Leventhal, Brendan Gregg, Eric Schrock, Dave Pacheco
  • Zones & Jumpstart –> Jerry Jelinek
  • and many, many more.

KVM (the Linux kernel-based virtual machine) is being added into the Illumos operating environment, giving it the final piece of the puzzle.

I cannot help but feel extremely proud that OpenSolaris (and Solaris) is not dead yet; it’s alive and rising. Oracle cannot lay claim to the source code and the rights of Illumos (according to Bryan Cantrill) without itself abiding by the CDDL licensing and distribution scheme that it had killed off a year ago.

And this is indeed the young phoenix rising!

Why VAAI?

This is Part 2 of my previous blog about VAAI (vStorage API for Array Integration), with more details about VAAI. VAAI offloads some of the I/O-related functions to the VAAI-enabled storage array, giving the hypervisor more compute and memory resources for its other functions. And the storage array, upon receiving a VAAI command, will execute whatever is required of it.

Why is VAAI important? What does it do that makes it so useful and important to the hypervisor?

VAAI is about a set of new SCSI commands. And there are 3 important ones:

  • WRITE-SAME
  • XSET
  • ATS

What exactly do these SCSI commands do?

WRITE-SAME is a SCSI command that instructs the storage array to zero the virtual VMDK disks or VMFS LUNs. This usually happens when a guest OS requires a brand new set of virtual disks and the virtual disks need to be initialized. In the past (before VAAI), the hypervisor had to repetitively send zeroes to the storage to perform the disk zeroing. As shown in the diagram below, you can see that each zero operation is sent from the hypervisor to the storage.

This back-and-forth of sending zeroes and acknowledgments between the hypervisor and the storage is not efficient. With VAAI, the WRITE-SAME command is sent from the hypervisor to the storage array and the storage array does the zeroing on the disks and LUNs. The hypervisor does not intervene in the process until it gets an acknowledgment of its completion. See the diagram below of how VAAI helps in bulk-zeroing of disks and LUNs in the storage array.

The animated GIFs are taken from Luke Reed’s blog, a fantastic read.
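Here is a conceptual sketch (not real SCSI or VMware code – the interface below is hypothetical) of why this matters: without the offload, the host pushes every zeroed block over the fabric; with WRITE-SAME, it sends one small command and the array repeats the pattern itself.

```typescript
// Conceptual contrast between host-side zeroing and an offloaded WRITE-SAME.
const BLOCK_SIZE = 512;

// Hypothetical interface standing in for the storage array.
interface StorageArray {
  writeBlock(lba: number, data: Buffer): void;
  writeSame(lba: number, count: number, pattern: Buffer): void;
}

function zeroWithoutVaai(array: StorageArray, startLba: number, count: number): void {
  const zeroes = Buffer.alloc(BLOCK_SIZE);
  // count round trips, and count * 512 bytes of zeroes pushed over the network
  for (let i = 0; i < count; i++) {
    array.writeBlock(startLba + i, zeroes);
  }
}

function zeroWithVaai(array: StorageArray, startLba: number, count: number): void {
  // One command: the array repeats the 512-byte pattern count times itself.
  array.writeSame(startLba, count, Buffer.alloc(BLOCK_SIZE));
}
```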

The second VAAI SCSI command is XSET and it performs hardware-accelerated full copy. This command is also known as XCOPY and it offloads the process of copying the blocks of data that make up a VMDK file. Such copying operations occur when the hypervisor is doing things like VM cloning, Storage vMotion or VM creation from templates (bulk copying to create many similar VMs in one go).

Again, courtesy of Luke Reed’s animated GIFs, the diagram below shows a full copy without VAAI

and after implementing VAAI, where the full, bulk copy operation is offloaded to the storage array to execute.

The third and last SCSI command of VAAI is ATS, or hardware-assisted locking. ATS stands for Atomic Test and Set, and the command allows the hypervisor to lock only the required blocks rather than the entire LUN.

Without VAAI, the entire LUN could temporarily be locked by the numerous VMFS operations of one single hypervisor, and this prevents other hypervisors from accessing the shared LUNs. The ATS primitive offloads lock management from the host to the storage array and keeps the LUN available by locking only the required blocks, not the entire VMFS file system. Please see the pleasing diagrams below:

(without VAAI ATS)

(with VAAI ATS)
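Conceptually, ATS behaves like a compare-and-write on a small on-disk lock record: the array updates it only if it still contains the value the host last read, so a competing host simply retries instead of the whole LUN being reserved. The sketch below is just that idea in miniature, not actual VMFS or SCSI code.

```typescript
// Toy model of the atomic test-and-set (compare-and-write) idea behind ATS.
class LockBlock {
  private owner = ""; // stands in for a small metadata block on the LUN

  // Atomically: if the current contents equal `expected`, replace with `update`.
  compareAndWrite(expected: string, update: string): boolean {
    if (this.owner !== expected) return false; // another host changed it first
    this.owner = update;
    return true;
  }
}

const lock = new LockBlock();
console.log(lock.compareAndWrite("", "host-A")); // true  - host A acquires the lock
console.log(lock.compareAndWrite("", "host-B")); // false - host B must retry
console.log(lock.compareAndWrite("host-A", "")); // true  - host A releases the lock
```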

And if you want to see the VAAI Hardware Accelerated Full Copy (aka XSET) in action, here’s a little video showing how it is done in an EMC environment.

The primary significance and noticeable benefit is definitely performance. The secondary benefit, though not so obvious, is allowing VMware and its hypervisor to scale because it does not get bogged down by some of the I/O functions that it is not meant to do.

There were some new additions to VAAI in vSphere 5.0. From its FAQ: in ESX 5.0, support for NAS hardware acceleration is included, with the following primitives:

  • Full File Clone – Like the Full Copy VAAI primitive provided for block arrays, this Full File Clone primitive enables virtual disks to be cloned by the NAS device.
  • Native Snapshot Support – Allows creation of virtual machine snapshots to be offloaded to the array.
  • Extended Statistics – Enables visibility to space usage on NAS datastores and is useful for Thin Provisioning.
  • Reserve Space – Enables creation of thick virtual disk files on NAS.

So, there you have it folks. Why VAAI? Here’s why.

VAAI to go!

First of all, let me apologize. I am guilty of not updating my blog as regularly as I did in the past. Things got a bit crazy after Christmas and I had to juggle several things that demanded more of my attention, but I am confident things will sort themselves out soon enough.

Today’s topic is VMware’s VAAI (vSphere vStorage API for Array Integration). This feature was announced more than 3 years ago but was only introduced in vSphere 4.1 in July 2010, and it now comes with newer enhancements in the latest release, vSphere 5.0.

What is this VAAI and what does this mean from a storage perspective?

When VMware came into prominence in the version 3.0/3.5 days, the whole world revolved around the ESX hypervisor. It tried to do everything on its own, in its own proprietary way. Given its nascent existence then, ESX had to do what it had to do and control everything within its hypervisor universe. Yes, it was a good move then and it did what it was supposed to do. This was back when server virtualization was in its infancy and resource requirements were less demanding.

Hence, when VMware wanted to initialize VMs, create VMDK files on the datastore, create clones or snapshots, or even execute vMotion and Storage vMotion, it tended to execute them at the hypervisor level. For example, when creating virtual disks with VMFS, most of the commands to initialize the disks were issued at the VMFS level. Zeroing the virtual disks meant sending zeroing commands to the actual physical disks on the shared storage. And this would go on back and forth, taxing the CPU cycles and memory at the hypervisor layer, and sending wasteful and unnecessary zeroes over the network to the storage array. This was very inefficient and wasteful, and it degraded performance tremendously, especially at the hypervisor layer (compute and memory).

There were also other operations, such as virtual disk locking, that locked up the entire LUN housing several datastores. Again, not good.

But VMware took off like a rocket and quickly established itself as a Tier 1, enterprise server virtualization solution addressing the highest demands of the enterprise. It is also defining the future of Cloud Computing, building exorbitant requirements as it pushes forward. And VMware began to realize that if the hypervisor is to scale, it needs to leave the I/O operations to the “experts”, with the “experts” here being the storage arrays themselves.

So, in version 4.1, VAAI (vStorage API for Array Integration) was introduced as an API suite, following 3 earlier APIs – vStorage API for Site Recovery Manager (SRM), vStorage API for Data Protection and vStorage API for Multipathing.

In a nutshell, as I have mentioned before, VAAI offloads I/O and storage related operations to the VAAI-capable storage array (leave it to the experts) as shown in the diagram below:

 

Of course, the storage vendors themselves have to rework their array OS layer to integrate with the VAAI APIs. You can say that the VAAI primitives are “hooks” that enhance the storage array’s connectivity and communication with vSphere’s hypervisor. But then again, if you look at it from the other angle, vSphere needs the storage vendors in order for its universe to scale. Good thing VMware has a big, big market share. Imagine if there were no takers for the VAAI APIs. That would be a strange predicament indeed.

What is the big deal that we get from VAAI? The significant and noticeable benefit is increased performance. By offloading the I/O functionality and operations to the storage array itself, the hypervisor’s compute and memory resources are not bogged down, resulting in higher performance and better response times to serve its VMs and other VM operations.

I am going off to another meeting and I shall write about VAAI in more detail later. Until the next entry, adios and have a great year ahead.