It’s all about executing the story

I have been in hibernation mode, with a bit of “writer’s block”.

I woke up in Bangalore, India at 3am, not having adjusted myself to the local timezone. Plenty of things were on my mind, but I couldn’t help thinking about what’s happening in the enterprise storage market after the Gartner Worldwide External Controller-Based report for 4Q12 came out last night. Below is the consolidated table from Gartner:

Just a few weeks ago, it was IDC with its Worldwide Disk Storage Tracker and below is their table as well:

(more…)

VMware in step 1 breaking big 6 hegemony

Happy Lunar New Year! This is the Year of the Water Snake, which just commenced 3 days ago.

I have always maintained that VMware has the power to become a storage killer. I mentioned that it was a silent storage killer in my blog post many moons ago.

And this week, VMware is not so silent anymore. Earlier this week, VMware acquired Virsto, a storage hypervisor technology company. News of the acquisition is plentiful on the web and can be found here and here. VMware is seriously pursuing its “Software-Defined Data Center (SDDC)” agenda, and having completed its software-defined networking component with the acquisition of Nicira back in July 2012, the acquisition of Virsto represents another bedrock component of SDDC: software-defined storage.

Who is Virsto and what do they do? Well, in a nutshell, they abstract the underlying storage architecture and present a single, global namespace for storage, a big storage pool for VM datastores. I got to know about their presence last year, when I was researching the topic of storage virtualization.

I was looking at Datacore first, because I was familiar with Datacore. I got to know Roni Putra, Datacore’s CTO, through a mutual friend, when he was back in Malaysia. There was a sense of pride knowing that Roni is a Malaysian. That was back in 2004. But Datacore isn’t the only player in the game, because the market is teeming with folks like Tintri, Nutanix, IBM, HDS and many more. It just so happens that Virsto caught the eye of VMware as it embarked on its first high-profile step into the storage game (the one where VMware literally steps on the toes of the Storage Big 6). The Big 6 are EMC, NetApp, IBM, HP, HDS and Dell (maybe I should include Fujitsu as well, since it has been taking market share of late).

Virsto installs as a VSA (virtual storage appliance) into ESXi, and in version 2.0, it plugs right in as an almost-native feature of ESXi, not a vCenter tab like most other storage. It looks and feels very much like vSphere functionality, and this blurs the lines of storage and VM management. The only times a vSphere administrator needs to be involved in storage administration are when provisioning storage or expanding it. Those are the only 2 common “touch-points” a vSphere administrator has with storage. This, therefore, simplifies the administration and management job.

Here’s a look at the Virsto Storage Hypervisor architecture (credits to Google Images):

What Virsto does, as I understand it at a high level, is take any commodity storage, provide a virtual storage layer and consolidate it all into a very large storage pool. The storage pool is called vSpace (previously known as LiveSpace?) and “allocates” Virsto vDisks to each VM. Each Virsto vDisk looks like a native zeroed-thick VMDK, with the space efficiency of Linked Clones, but without the performance penalty of provisioning them. The Virsto vDisks are presented as NFS exports to each VM.

Another important component is the asynchronous write to Virsto vLogs. This is configured at the deployment stage, and it is basically a software-based write cache, quickly acknowledging all writes for write optimization and, in the background, asynchronously de-staging them to the vSpace. Obviously it will have its own “secret sauce” to optimize the writes.
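Conceptually, this kind of log-structured write cache can be sketched in a few lines. This is a toy model of my own, not Virsto’s actual code: writes are acknowledged the moment they land in the fast log, and a background destage step drains them into the slower pool later.

```python
import threading

class WriteLog:
    """Toy model of an async write log: writes are acknowledged as soon
    as they hit the log; a destage step moves them to the backing pool."""

    def __init__(self):
        self.log = []                 # fast, sequential log (the "vLog" idea)
        self.pool = {}                # slower backing pool (the "vSpace" idea)
        self.lock = threading.Lock()

    def write(self, block, data):
        # Fast path: append to the log and acknowledge immediately
        with self.lock:
            self.log.append((block, data))
        return "ACK"

    def destage(self):
        # Background path: drain the log into the pool in order
        with self.lock:
            pending, self.log = self.log, []
        for block, data in pending:
            self.pool[block] = data   # coalescing/ordering tricks would go here

cache = WriteLog()
assert cache.write(7, b"hello") == "ACK"   # acknowledged before destage
assert 7 not in cache.pool                 # data not yet in the pool
cache.destage()
assert cache.pool[7] == b"hello"           # now persisted to the pool
```

The point of the sketch is only the separation of the acknowledgement path from the destage path; the real product’s “secret sauce” (write coalescing, ordering, crash consistency of the log) is exactly the part this toy leaves out.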

Within the vSpace, organized as disk clone groups internal to Virsto, storage-related features such as tiering, thin provisioning, cloning and snapshots come as part and parcel. Other strong features of Virsto are its workflow wizard for storage provisioning, and its intuitive built-in performance and management console.

As with most technology acquisitions, the company will eventually come to a fork where they have to decide which way to go. VMware has experienced it before with its Nicira acquisition. It had to decide between VxLAN (an IETF standard popularized by Cisco) or Nicira’s own STT (Stateless Transport Tunneling). There is no clear winner because choosing one over the other will have its rewards and losses.

Likewise, the Virsto acquisition will have to be packaged in a friendly manner by VMware. It does not want to step on all the toes of its storage Big 6 partners (yet). It still has to abide by some industry “co-opetition” game rules, but it has started the ball rolling.

And I see 2 critical disruptive points in this acquisition:

  1. It has endorsed the software-defined storage/storage hypervisor/storage virtualization technology and started the commodity storage hardware wave. This could be the beginning of the end of proprietary storage hardware. This is also helped by other factors such as the Open Compute Project by Facebook. Read my blog post here.
  2. It is pushing VMware into a monopoly a la the Microsoft of yesteryear. But this time around, Microsoft Hyper-V could be the beneficiary of the VMware agenda. No wonder VMware needs to restructure and streamline its business. News of VMware laying off about 900 staff can be read here. The unfavourable news of its shares going down can be read here.

I am sure the Storage Big 6 are on alert and are probably already building other technologies and partnerships beyond VMware. It is the natural thing to do, but there is no stopping VMware if it wants to step on the Big 6’s toes now!

Is there no one to challenge EMC?

It’s been a busy, busy month for me.

And when the IDC Worldwide Quarterly Disk Storage Systems Tracker for 3Q12 came out last week, I was reading in awe how impressive EMC’s figures were. But most impressive of all is how the storage market continues to grow despite very challenging and uncertain business conditions. With the Eurozone crisis, China experiencing lower economic growth numbers and the uncertainty in the US economic sectors, it is unbelievable that the storage market grew 24.4% y-o-y. And for the first time, 7,104PB were shipped! Yes folks, more than 7 exabytes shipped during that period!

In the Top 5 external disk storage market based on revenue, only EMC and HDS recorded respectable growth, at 8.7% and 13.8% respectively. NetApp, my “little engine that could”, seems to be running out of steam, earning only 0.9% growth. The rest of the field, IBM and HP, recorded negative growth. Here’s a look at the Top 5 and the rest of the pack:

HP’s 11% decline is shocking to me, and given the woes HP has been experiencing, it has not seen the bottom yet. Let’s hope that the new slew of HP storage products and technologies announced at HP Discover 2012 will lift them up. It also looked like a total rebranding of the HP storage products, with a big play on the word “Store”. They have names like StoreOnce, StoreServ, StoreAll, StoreVirtual, StoreEasy and perhaps more coming.

The Open SAN market, which includes iSCSI, has EMC again at number 1 with 29.8%, followed by IBM (14%), HDS (12.2%) and HP (11.8%). In the combined NAS + Open SAN market, EMC has 33.5% while NetApp has 13.7%.

Of course, it is not just about external storage, because the direct-attached storage numbers count too. With those included, the server vendors IBM, HP and Dell are still placed behind EMC. Here’s a look at that table from IDC:

There’s a highlight of Dell in the table above. Dell actually grew by 4.0%, gaining 0.1% market share, compared to the declines at HP and IBM. However, their numbers seem too tepid, and that led to the exit of Darren Thomas, Dell’s storage group head honcho. News of Darren’s exit was on TheRegister.

I also want to note that NAS growth numbers actually outpaced Open SAN numbers including iSCSI.

This leads me to say that there is a dire need for NAS technical and technology expertise in the local storage market. With the adoption of NFSv4 under way and SMB 2.0 and 3.0 coming into the picture, I urge all storage networking professionals who are more pro-SAN to step out of their comfort zone and look into NAS as well. The world is changing and it is no longer SAN vs NAS anymore. And NFSv4.1 is blurring the lines even more with the concept of layouts.

But back to the subject of the storage market: is there no one out there challenging EMC in a big way? NetApp, some years ago, recorded double-digit growth and challenged EMC neck-and-neck, but that mantle seems to have been taken over by HDS. But both have a long way to go to get close to EMC.

Kudos to the EMC team for damn good execution!

Did Dell buy a dud?

In the past few weeks, I certainly have had an axe to grind with Dell, notably their acquisition of Quest Software. I have been full of praise for how Dell was purchasing the right companies in the past and how the companies Dell acquired were important chess pieces that would propel Dell into the enterprise space. Until now …

Since its first significant acquisition into the enterprise with EqualLogic in 2008, there were PerotSystems, Kace, Scalent, Boomi, Compellent, Exanet, Ocarina Networks, Force10, SonicWall, Wyse Technologies, AppAssure, and RNA Networks. (I might have missed one or two.) To me, all these were good buys; these were solid companies with a strong future in their technology and offerings. Until Dell decided to acquire Quest Software.

At the back of my mind: what the heck is Dell buying Quest Software for? And for a ballistic USD2.4 billion! That’s a hell of a lot of money to spend on a company which does not have a strong portfolio of solutions and is not exactly a leader in its respective disciplines, barring Quest’s Foglight and TOAD. A quick check into Quest’s website revealed that they are in the following disciplines:

 

  (more…)

The reports are out!

It’s another quarter and both Gartner and IDC reports on disk storage market are out.

What does it take to slow down EMC, which is like a behemoth beast mowing down its competition? EMC has again topped both charts. IDC Worldwide Disk Storage Tracker for Q1 of 2012 puts EMC at 29.0% of the market share, followed by NetApp at 14.1%, and IBM at 11.4%. In fourth place is HP with 10.2%, and HDS is placed fifth with 9.4%.

In the Gartner report, EMC has the lead of 32.5%, followed by NetApp at 12.7% and IBM with 11.0%. HDS held fourth place at 9.5% and HP is fifth with 9.0%. (more…)

Dell acquires Wyse Technology

There is no stopping Dell. It is in the news again, this time, acquiring privately owned Wyse Technology.

The name Wyse certainly brings back memories of the days of Wyse terminals alongside the VT100s and VT220s. They were also one of the early leaders in thin client computing, where an X Windows server provided client applications on “dumb” workstations running an X Window Manager. They used to compete with companies like NCD (Network Computing Devices) and HummingBird. My first company, CSA, was a distributor of NCD clients, and I remember Sime Darby was the distributor of Wyse thin clients.

Wyse as quoted:

Wyse Technology is the global leader in Cloud Client Computing. The Wyse portfolio includes industry-leading thin, zero and cloud PC client solutions with advanced management, desktop virtualization and cloud software supporting desktops, laptops and next generation mobile devices. Wyse has shipped more than 20 million units and has over 200 million people interacting with their products each day, enabling the leading private, public, hybrid and government cloud implementations worldwide. Wyse works with industry-leading IT vendors, including Cisco®, Citrix®, IBM®, Microsoft, and VMware® as well as globally-recognized distribution and service providers. Wyse is headquartered in San Jose, California, U.S.A., with offices worldwide.

The Dell acquisition of Wyse shows that Dell is serious about Virtual Desktop Infrastructure (VDI) type technology, especially in the client cloud computing space. And the VDI space is going to heat up as many vendors push hard to get the market going.

Dell, for better or for worse, has just added another acquisition that fits into the jigsaw puzzle that it is trying to build. Wyse looks like a good buy, with mature technology and a legacy in the thin client space. I hope Dell will energize the Wyse Technology team, but while acquisition is easy, the tough part will be the implementation. How well Dell mobilizes the Wyse Technology team will depend on how well Wyse blends into Dell’s culture.

Server way of locked-in storage

It is kind of interesting that every vendor out there claims to be as open as they can be, but the reality is, the competitive nature of the game is forcing storage vendors to talk open while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that is forcing customers to be locked in with a certain storage vendor. I am beginning to feel that customers are given fewer choices, especially when the brand of the server they select for their applications will have implications on the brand of storage they will be locked into.

And surprise, surprise, SSDs are the pawns of this new cloak-and-dagger game. How? Well, I have been observing this for quite a while now, and when HP announced the SMART portfolio for their storage, it was time for me to say something.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.

It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first-generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

Similarly, Dell announced their ExpressFlash solution that ties up its 12th generation PowerEdge servers with their flagship (what else), Dell Compellent storage.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with its ExpressFlash; EMC’s VFCache, while not advocating any brand of servers, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Oracle Exadata: Oracle Enterprise database works best with Oracle’s own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers that we rarely see in Malaysia), has already had such a technology for the last 2 years. I wouldn’t be surprised if IBM and Fujitsu already have something in store (or probably I missed the announcement).

NetApp has been slow in the game, but we hope to see them coming out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has a performance bottleneck, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones somewhere else.

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage for no vendor lock-in? Going to the cloud brings cloud service provider lock-in as well, but that’s another story.

 

Gartner WW ECB 4Q11

The Gartner Worldwide External Controller Based Disk Storage market numbers were out last night, and perennially follows IDC Disk Storage System Tracker.

The numbers posted little surprise, after a topsy-turvy year for vendors like IBM, HP and especially NetApp. Overall, the positions did not change much, but we can see that the 3 vendors I mentioned are facing very challenging waters ahead. Here’s a look at the overall 2011 numbers:

EMC is unstoppable, gaining 3.6% market share, while IBM lost 0.2% despite strong sales of their XIV and Storwize V7000 solutions. This could be due to lower than expected numbers from their jaded DS-series. IBM needs to ramp up.

HP stayed stagnant, even though their 3PAR numbers have been growing well. They were hit by poor numbers from the EVA (now renumbered as the P6000s), and surprisingly their P4000s as well. Looks like they are short-lefthanded (pun intended), and given the C-level upheavals of the past year, things are not looking good for HP.

Meanwhile, Dell is unable to shake off its EMC divorce alimony, losing 0.8% market share. We know that Dell has been pushing very, very hard with Compellent, EqualLogic, and the other technologies it acquired, but somehow things are not working well yet.

HDS has been the one to watch, with its revenue numbers growing in double digits like NetApp and EMC. Their market share gain was 0.6%, which is very good by HDS standards. NetApp gained 0.8% market share but seems vulnerable after 2 poor quarters.

The 4th quarter for 2011 numbers are shown below:

I did not blog about the IDC QView numbers, which report storage software market share, but here they are to give this entry a bit of perspective from a software point of view. From the charts of The Register, EMC has been gaining market share at the expense of competitors like Symantec, IBM and NetApp.

Tabulated differently, here’s another set of data:

On all fronts, EMC is firing on all cylinders. Like a well-oiled V12 engine, EMC is going at it with so much momentum right now. Who is going to stop EMC?

One smart shopper

Dell had just acquired AppAssure earlier this week, adding the new company into its stable of Compellent, EqualLogic, Perot Systems, Scalent, Force10, RNA Networks, Ocarina Networks, and ExaNet (did I miss anyone?). This is not including the various partnerships Dell has with the likes of CommVault, VMware, Caringo, Citrix, Kaminario etc.

From 10,000 feet, Dell is building a force to be reckoned with. With its PC business waning, Dell is making all the moves to secure the datacenter space from various angles. And I like what I see. Each move is seen as a critical cog, moving Dell forward.

But the question is “Can Dell deliver?” It had just missed Wall Street’s revenue expectations last week, but the outlook of Dell’s business, especially in storage, is looking bright. I caught this piece in Dell’s earnings call transcript, which said:

"Server and networking revenue increased 6%. Total storage 
declined 13% while Dell-owned IP storage growth accelerated 33% 
to $463 million, led by continued growth in all of our Dell IP 
categories including Compellent, which saw over 60% sequential 
revenue growth."

Those are healthy numbers, but what’s most important is how Dell executes in the next 12-18 months. Dell has done very well with both Compellent and EqualLogic and is slowly bringing out its Exanet and Ocarina Networks technology in new products such as the EqualLogic FS7500 and the DR4000 respectively. Naturally, the scale-out engine from Exanet and the deduplication/compression engine from Ocarina will find their way into the Dell Compellent line in the months to come. And I am eager to see how the “memory virtualization” technology of RNA Networks fits into Dell’s Fluid Data Architecture.

The technologies from Scalent and AppAssure will push Dell into the forefront of the virtualization space. I have no experience with either product, but by the looks of things, these are solid products which Dell can easily and seamlessly plug into its portfolio of solutions.

The challenge for Dell is their people in the field. Dell has been pretty much a PC company, and still is. The mindset of a consumer-based PC company versus that of a datacenter-centric enterprise company is very different.

Dell Malaysia has been hiring good people. These are enterprise-minded people. They have been moulded by the fires of the datacenters, and they were hired to give Dell Malaysia the enterprise edge. But the challenge for Dell Malaysia remains, and that is changing the internal PC-minded culture.

Practices such as dropping prices (disguised as discounts) at the first sign of competition, or giving away high-end storage solutions at an almost-free price, are, to me, not good strategies. Selling enterprise products on just speeds and feeds, articulating a product’s features and benefits while lacking regard for the customer’s requirements and pain points, is missing the target altogether. This kind of mindset, aiming for a quick sell, is not what Dell should want. Yes, we agree that quarterly numbers are important, but pounding the field sales teams for daily updates and forecasts will only lead to unpleasant endings.

Grapevines aside, I am still impressed with how Dell is getting the right pieces to build its datacenter juggernaut.

The last bastion – Memory

I have been in this industry for almost 20 years. March 2, 2012 will be my 20th year, to be exact. I was never in the mainframe era, dabbled a bit in the minicomputer era during my university days, and managed to ride the waves of client-server, the Internet explosion in the early WWW days, the dot-com crash, and now Cloud Computing.

In those 20 years, I have seen the networking wars (in which TCP/IP and Cisco prevailed), the OS wars and the Balkanization of Unix (Windows NT came out the winner), the CPU wars (SPARC, PowerPC, in which x86 came out tops) and now data and storage. Yet the last war of the IT industry has yet to begin. Or has it?

In the storage wars, it was pretty much the competition between NAS and SAN and the religious camps of storage in the early 2000s, but now that I have been in the storage networking industry for a while, every storage vendor is beginning to look pretty much the same to me, albeit with some slight differentiating factors once in a while.

In the wars I described, there is a vendor for the product(s) being peddled, but what about memory? We never question what brand of memory we put in our servers and storage, do we? In the enterprise world, it has got to be ECC DDR2/3 memory DIMMs and that’s about it. Why????

Even in server virtualization, the RAM and the physical or virtual memory are exactly just that – memory! Sure, VMware differentiates it with a cool name called vRAM, but the logical and virtual memory is pretty much confined to what’s inside the physical server.

In clustering, architectures such as SMP and NUMA do use shared memory. Oracle RAC shares its hosts’ memory for the purpose of Oracle database scalability and performance. Such aggregated memory architectures, in one way or another, serve the purpose of a specific application’s functionality rather than having the memory shared in a pool for all general applications.

What if some innovative company came along and decided to do just that? Pool all the physical memory of all servers into a single, cohesive and integrated memory pool, so that every application on each server can use the “extended” memory in an instant, without some sort of clustering software or parallel database. One company has done it using RDMA (Remote Direct Memory Access), and their concept is shown below:
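The idea can be sketched in a few lines. This is a purely hypothetical toy model of my own, not RNA’s implementation: each server lends pages into a global pool, and any server can then allocate from the aggregate, beyond what any single host physically holds.

```python
class GlobalMemoryPool:
    """Toy model: each server lends part of its RAM into a shared pool,
    and any server can allocate from the aggregate."""

    def __init__(self):
        self.lenders = {}             # node -> number of pages lent
        self.free_pages = []          # (node, page_id) addresses

    def lend(self, node, pages):
        # A node contributes `pages` of its local RAM to the pool
        start = self.lenders.get(node, 0)
        self.lenders[node] = start + pages
        self.free_pages += [(node, start + i) for i in range(pages)]

    def total_pages(self):
        return sum(self.lenders.values())

    def allocate(self, n):
        # In the real thing, touching a remote page goes over RDMA;
        # here we just hand back (node, page) addresses from the pool.
        grant, self.free_pages = self.free_pages[:n], self.free_pages[n:]
        return grant

pool = GlobalMemoryPool()
pool.lend("server-a", 4)              # server A lends 4 pages
pool.lend("server-b", 4)              # server B lends 4 pages
assert pool.total_pages() == 8        # the pool is the aggregate
grant = pool.allocate(6)              # one allocation can exceed any single host
assert len(grant) == 6
```

All the hard parts — cache coherency, low-latency remote access, failure of a lending node — are precisely what the RDMA fabric and the vendor’s software have to solve; the sketch only shows the pooling arithmetic.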

 

I have been a big fan of RDMA ever since NetApp came out with DAFS some years ago, though I only know a little about RDMA because I didn’t spend a lot of time on it. But I know RDMA’s key strength is in networking, and when news came up of this little company called RNA Networks using RDMA to create a Memory Cloud, it piqued my interest.

RNA innovated with its Memory Virtualization Acceleration (MVX), which is layered on top of 10 Gigabit Ethernet or InfiniBand networks with RDMA. Within the MVX, there are 2 components of interest – RNAcache and RNAmessenger. This memory virtualization technology allows hundreds of server nodes to lend their memory into the Memory Cloud, thus creating a very large and very scalable memory pool.

As quoted:

RNA Networks then plunks a messaging engine, an API layer, and a pointer updating algorithm
on top of the global shared memory infrastructure, with the net effect that all nodes in the
cluster see the global shared memory as their own main memory.

The RNA code keeps the memory coherent across the server, giving all the benefits of an SMP
or NUMA server without actually lashing the CPUs on each machine together tightly so they
can run one copy of the operating system.

The performance gains, as claimed by RNA Networks, were enormous. In a published test, running MVX showed a significant performance gain over SSDs, as shown in the results below:

This test was done in 2009/2010, so there were no comparisons with present day server-side PCIe Flash cards such as FusionIO. But even without these newer technologies, the performance gains were quite impressive.

In the earlier version 2.5, the MVX technology introduced 3 key features:

  • Memory Cache
  • Memory Motion
  • Memory Store

The Memory Cache, as the name implied, turned the memory pool into a cache for NAS and file systems linked to the servers. At the time, the only NAS protocol supported was NFS. The cache stored frequently accessed data sets used by the servers. Each server could have simultaneous access to a data set in the pool, with MVX handling the contention issues.

The Memory Motion feature gave OSes and physical servers (including hypervisors) access to shared pools of memory that act as a giant swap device during page-out/swap-out scenarios.

Lastly, the Memory Store was the most interesting to me. It turned the memory pool into a collection of virtual block devices, similar in concept to RAMdisks. These RAMdisks presented very fast disks to the server nodes and OSes; one server node could mount multiple instances of these virtual RAMdisks, and similarly, multiple server nodes could mount a single virtual RAMdisk for shared-disk purposes.
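As a toy illustration (my own sketch, not RNA’s code), a Memory Store-style virtual RAMdisk is essentially a block device carved out of the pool that several nodes can mount at once, with writes by one node visible to the others:

```python
class MemoryPool:
    """Toy model of a pooled 'Memory Store': contributed RAM is carved
    into virtual RAMdisks that one or more nodes can mount."""

    def __init__(self):
        self.ramdisks = {}            # name -> list of blocks

    def create_ramdisk(self, name, blocks):
        self.ramdisks[name] = [None] * blocks

    def mount(self, name):
        # Every mount returns a handle onto the SAME underlying blocks
        return RamDiskHandle(self.ramdisks[name])

class RamDiskHandle:
    """A node's view of a virtual RAMdisk; multiple nodes can hold
    handles onto the same disk for shared-disk access."""

    def __init__(self, blocks):
        self.blocks = blocks          # shared with every other handle

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]

pool = MemoryPool()
pool.create_ramdisk("vdisk0", blocks=8)
node_a = pool.mount("vdisk0")         # two nodes mount the same RAMdisk
node_b = pool.mount("vdisk0")
node_a.write(3, b"shared")
assert node_b.read(3) == b"shared"    # a write by one node is visible to the other
```

In the real MVX, of course, the “shared list of blocks” is remote memory reached over RDMA, and keeping the two nodes’ views coherent is the hard part; the sketch only captures the shared-disk semantics.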

The RNA Networks MVX scaled to hundreds of server nodes and supported architectures such as 32/64-bit x86, PowerPC, SPARC and Itanium. At the time, MVX was available for Unix and Linux only.

What RNA Networks was doing was a perfect example of how RDMA can be implemented. Before this, memory was just memory, but this technology takes the last bastion of IT – the memory – out into the open. As Cloud Computing matures, memory is going to be THE component that defines the next leap forward, which is to make the Cloud work like one giant computer. Extending memory, both on-premise on the host side and in the cloud, into a fast, low-latency memory pool would complete the holy grail of Cloud Computing as one giant computer.

RNA Networks was quietly acquired by Dell in July 2011 for an undisclosed sum and got absorbed into Dell Fluid Architecture’s grand scheme of things. One blog, Juku, captured an event from Dell Field Tech Day back in 2011, and it posted:

The leitmotiv here is "Fluid Data". This tagline, that originally was used by Compellent
(the term was coined by one of the earlier Italian Compellent customer), has been adopted
for all the storage lineup, bringing the fluid concept to the whole Dell storage ecosystem,
by integrating all the acquired tech in a single common platform: Ocarina will be the
dedupe engine, Exanet will be the scale-out NAS engine, RNA networks will provide an interesting
cache coherency technology to the mix while both Equallogic and Compellent have a different
targeted automatic tiering solution for traditional block storage.

Dell is definitely quietly building something, and this could go on for some years. But for the author to quote – “Ocarina will be the dedupe engine, Exanet will be the scale-out NAS engine; RNA Networks will provide cache coherency technology …” – means that Dell is determined to out-innovate some of the storage players out there.

How does it all play in Dell’s Fluid Architecture? Here’s a peek:

It will be interesting to see how RNA Networks technology gels the Dell storage technologies together, but one thing’s for sure: memory will be the last bastion that cements Cloud Computing into the IT foundation of the next generation.