Cohesity SpanFS – a foundational shift

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Cohesity SpanFS impressed me. Their filesystem was designed from the ground up to meet the demands of voluminous, cloud-scale data, and yes, the sheer magnitude of data everywhere needs to be managed.

We all know that primary data is always the more important piece of the data landscape, but there is a growing need to address the secondary data segment as well.

Like a floating iceberg, the piece sticking out of the water is the more important primary data, but the larger piece beneath the surface, the secondary data, is becoming more valuable. Applications such as file shares, archiving, backup, test and development, and analytics and insights are maturing as foundational data management frameworks and are fast becoming the bedrock of businesses.

The ability of businesses to bounce back after a disaster; the relentless testing of large data sets to develop new competitive advantages; the affirmation and insight gained from analyzing data to reduce risk in decision making; all of these are powerful back-end engines that thrust businesses forward. Even the ability to search for the right information in a sea of data for regulatory and compliance reasons is part of the organization's data management practice.

Continue reading

Magic happening

[Preamble: I am a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The magic is happening.

Dropbox, the magical disruptor, is going IPO.

When Dropbox first entered the market that eventually came to be termed BYOD (Bring Your Own Device), it was a phenomenon. There was nothing else that matched its simplicity and ease of use. A file uploaded into the cloud was instantaneously available on tablets and smartphones. It was on every storage vendor's presentation slides, with Dropbox used as the perennial name-drop to win end users' buy-in.

Dropbox was more than that, and it went on to define a whole new market segment known as Enterprise File Synchronization and Sharing (EFSS), alongside players such as Box, Easishare (they are here in South East Asia) and many others. And the executive team at Dropbox knew they were special too, so much so that they rejected a buyout attempt by Apple in 2011.

Today, Dropbox is beyond BYOD and EFSS. They are a full-fledged collaboration platform that includes project management, project workflows, file versioning, secure file transfer, smart file synchronization and Dropbox Paper. And they offer comprehensive plans from Basic, Plus and Professional to Business and Enterprise. Their upcoming IPO, I am sure, will give them far greater capital to expand and realize their full potential as the foremost content-based collaboration platform in the world.

Dropbox began their exodus from AWS a couple of years ago. They wanted to control their own destiny and have since moved more than 500PB of customer data into their own private data centers. That was half an exabyte, people! And two years later, they had saved $75 million in operating costs after exiting AWS. Today, they hold more than 1 exabyte of customer data! That is just incredible.

And Dropbox's storage architecture started with a simple foundational design called "Magic Pocket". Magic Pocket is a "fixed-length, immutable" block storage layer.

Blocks are fixed at 4MB chunks (for parallel performance and service resumption reasons), compressed and deduplicated (for capacity savings), encrypted (for security) and replicated (for high availability).
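
To make that concrete, here is a minimal, hypothetical sketch in Python of what a fixed-length, immutable chunk pipeline of this kind could look like. Only the 4MB chunk size comes from Dropbox's description; the function names, the use of SHA-256 and zlib, and the dict-based stores are my own illustrative assumptions, not Magic Pocket's actual implementation.

```python
# A minimal, hypothetical sketch of a Magic-Pocket-style ingest pipeline:
# fixed 4MB chunks, dedupe by content hash, compression, then fan-out to
# replicas. Names and details are illustrative, not Dropbox's actual code.
import hashlib
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed-length, immutable 4MB chunks


def ingest(stream, chunk_store, replicas):
    """Split a byte stream into immutable 4MB chunks and store them."""
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        key = hashlib.sha256(chunk).hexdigest()  # content hash used for dedupe
        if key in chunk_store:                   # already stored: dedupe hit
            continue
        payload = zlib.compress(chunk)           # capacity savings
        # encryption (e.g. AES) would happen here before the chunk leaves
        # the ingest node; omitted to keep this sketch dependency-free
        for replica in replicas:                 # replicate for availability
            replica[key] = payload
        chunk_store[key] = payload
```

Because every chunk is the same fixed size and is never modified in place, reads can be parallelized across chunks and a failed transfer only ever has to resume a single 4MB unit, which lines up with the parallel performance and service resumption reasons above.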

Continue reading

DellEMC SC progressing well

[Preamble: I was a delegate of Storage Field Day 14. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I had not had a preview of the Compellent technology for a long time. My buddies at Impact Business Solutions were the first to introduce the Compellent technology called Data Progression to the local Malaysian market, and I was invited to a preview back then. Around the same time, I also recall a rather similar preview invitation by PTC Singapore for the 3PAR technology called Adaptive Provisioning (it is called Adaptive Optimization now).

Storage tiering was on the rise around 2009-2010. Compellent and 3PAR were neck and neck, leading the conversation and mind share on storage tiering, while IBM easyTIER and EMC FAST (Fully Automated Storage Tiering) were nowhere to be seen or heard. I vividly recall that the Compellent Data Progression technology was much more elegant than 3PAR's. While both intelligent storage tiering technologies were equally good, I put it down to the fact that the 3PAR founders were ex-Sun Microsystems folks, and Unix folks sucked at UX. In this case, Compellent's Data Progression was definitely a leg up on 3PAR.

History aside, this week I had the chance to get a fresh preview of the Compellent technology. Compellent has now been rebranded as the SC series and is positioned as DellEMC's mid-range storage array line. Together with the other Storage Field Day 14 delegates, I had the pleasure of experiencing the latest SC Data Progression technology update, as well as their latest SC All-Flash.

In Data Progression, one interesting feature which caught my attention was RAID Tiering. This is a dynamic, auto-expanding and auto-contracting set of RAID tiers: RAID 10 and RAID 5/6 in the Fast Tier, and RAID 5/6 in the Lower Tier. RAID 10, RAID 5 and RAID 6 coexist on the same set of drives (including SSDs), and depending on the "hotness" of the data, the data blocks switch between the several RAID tiers within the Fast Tier. Over a longer period, the data blocks relocate transparently from the Fast Tier to the Capacity Tier.

The Data Progression technology is extremely efficient. The movement of data between the RAID Tiers and between the Performance and Capacity Tiers is done in pages instead of blocks, keeping the write penalty and tiering bandwidth to a negligible minimum.
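
To illustrate the idea (and only the idea), here is a hypothetical, much-simplified sketch of hotness-driven page relocation. The tier names mirror the post; the Page class, the counters and the thresholds are my own assumptions, not Dell EMC's actual Data Progression algorithm.

```python
# A hypothetical, much-simplified illustration of hotness-driven page tiering.
# Tier names mirror the post; the Page class, counters and thresholds are
# my own assumptions, not Dell EMC's actual Data Progression algorithm.
from dataclasses import dataclass


@dataclass
class Page:
    page_id: int
    tier: str = "raid10_fast"   # new writes land in the write-optimized RAID 10 tier
    reads_last_cycle: int = 0


def progression_cycle(pages, hot_threshold=100, cold_threshold=5):
    """One tiering pass: keep hot pages on RAID 10, demote cooler ones."""
    for p in pages:
        if p.reads_last_cycle >= hot_threshold:
            p.tier = "raid10_fast"       # hot: stays mirrored on the Fast Tier
        elif p.reads_last_cycle >= cold_threshold:
            p.tier = "raid5_fast"        # warm: parity RAID, still on fast media
        else:
            p.tier = "raid6_capacity"    # cold: relocated down to the Capacity Tier
        p.reads_last_cycle = 0           # reset the counter for the next cycle
```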

The Storage Field Day 14 delegates were also privileged to be among the first to get a deep dive into the new All-Flash SC, just days after its announcement. The All-Flash SC redefines and refines Data Progression to the next level. Among the new optimizations, the NAND flash in the SC (both SLC and MLC, write-intensive and read-intensive) brings the Data Progression default page size down from 2MB to 512KB. These smaller 512KB pages reduce the bandwidth needed for tiering between the write-intensive and the read-intensive tiers.
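
As a rough back-of-envelope (the page count is mine and purely illustrative), shrinking the page from 2MB to 512KB cuts the data shuffled per relocation cycle by 4x:

```python
# Illustrative arithmetic only: how page size scales tiering bandwidth.
pages_moved = 10_000                     # hypothetical pages relocated in one cycle
old_bytes = pages_moved * 2 * 1024**2    # 2MB pages   -> ~19.5 GiB moved
new_bytes = pages_moved * 512 * 1024     # 512KB pages -> ~4.9 GiB moved
print(old_bytes / new_bytes)             # 4.0x less data shuffled between tiers
```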

I did not manage to get the latest SC family photos, but I grabbed a screenshot of The Register's announcement of the new DellEMC SC Series.

I was very encouraged by the DellEMC Midrange Storage presentation. Besides giving us a fantastic deep dive into the DellEMC SC All-Flash storage, I was also very impressed by the candid and straightforward attitude of the team, led by their VP of Product Management, Pierluca Chiodelli. An EMC veteran, he took the onslaught of hard questions from the SFD14 delegates like a pro. His team's demeanour was critical in instilling confidence and trust in how the bloggers and analysts viewed the Dell EMC merger, and how the SC and the Unity series would pan out in the technology roadmap.

Unlike the fiasco I went through with DellEMC Forum 2017 in Malaysia, where I was disturbed by 3 calls on 3 consecutive days from DellEMC Malaysia, I was left with a profound respect for this DellEMC Storage team. They strongly supported their position within the DellEMC storage universe, and imparted their confidence in their technology solutions in the marketplace.

Without a doubt, from my point of view, this DellEMC Mid-Range Storage team was the best I enjoyed at Storage Field Day 14. Thank you.

Commvault UDI – a new CPUU

[Preamble: I am a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am here at Commvault GO 2017. Bob Hammer, Commvault's CEO, is on stage right now. He shares his wisdom and the message is clear: IT to DT. IT to DT? Yes, Information Technology to Data Technology. It is all about the DATA.

The data landscape has changed. The cloud has changed everything. And data is everywhere. This omnipresence of data presents new complexity and new challenges. It is great to see Commvault acknowledging and accepting this change and the challenges that come along with it, and introducing their HyperScale technology and their secret sauce – the Universal Dynamic Index.

Continue reading

Commvault calling again

[Preamble: I will be a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event]

I am off to the US again next Monday. I am attending Storage Field Day 14 and it will be a 20+ hour long-haul flight. But this SFD has a special twist, because I will be in Washington DC first for the Commvault GO 2017 conference. And I can't wait.

My first encounter with Commvault goes way back to early 2001. I recall they had their Galaxy version, but in terms of market share, they were relatively small compared to Veritas and IBM at the time. I was with NetApp back then, and customers in Malaysia had hardly heard of them, except for the people at Shell IT International (SITI). For those of us in the industry, we all knew that SITI worldwide had an exclusive Commvault fork just for them.

Continue reading

Considerations of Hadoop in the Enterprise

I am guilty. I have not been tending to this blog for quite a while now, but it feels good to be back. What have I been doing? Since leaving NetApp 2 months or so ago, I have been active in the scene again. This time I am more aligned towards data analytics and its burgeoning impact on the storage networking segment.

I was intrigued by an article posted by a friend of mine on Facebook. The article (circa 2013) was titled "Never, ever do this to Hadoop". It described the author's gripe with the SAN bigots. I have encountered storage professionals who throw in the SAN solution every time, because that is all they know. NAS, to them, was like that old relative who smelled of camphor oil, and they avoided NAS like the plague. Similarly, DAS was frowned upon, but how things have changed. The pendulum has swung back to DAS, and new market segments such as VSANs and Hyper Converged platforms have been dominating the scene for the past 2 years. I highlighted this in my blog, "Praying to the Hypervisor God", almost 2 years ago.

I agree with the author, Andrew C. Oliver. The “locality” of resources is central to Hadoop’s performance.

Consider these 2 models:

[Figure: Moving Data to Compute vs Moving Compute to Data]

In the model on the left (Moving Data to Compute), the delivery process from Storage to Compute is HEAVY. That is because data has dependencies; data has gravity. However, in the model on the right (Moving Compute to Data), delivering data processing to the storage layer is much lighter. Compute, or data processing, is transient, and the data in the compute layer is volatile. Once the compute's power is turned off, everything starts again from a clean slate, hence its volatile state.
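
A quick back-of-envelope comparison (the numbers are mine, purely illustrative) shows why shipping the compute to the data wins:

```python
# Illustrative only: moving data to compute vs moving compute to data.
dataset_bytes = 10 * 1024**4         # a hypothetical 10TB dataset on the storage nodes
job_bytes = 50 * 1024**2             # a ~50MB application/JAR shipped to the data
link_bytes_per_s = 10e9 / 8          # a 10Gbps network link, in bytes per second

print(dataset_bytes / link_bytes_per_s / 3600)  # ~2.4 hours just to drag the data across
print(job_bytes / link_bytes_per_s)             # ~0.04 seconds to ship the compute instead
```

Either way the job still has to run, but only one of the two models pays the transfer cost of the full dataset every time.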

Continue reading

Solid in the Fire

December 22 2015: I kept this blog in draft for 6 months. Now I am releasing it as NetApp acquires SolidFire.

真金不怕紅爐火

The above is an old Chinese adage which means "True Gold fears no Fire". That is how I would describe my revisited view and assessment of SolidFire, a high-performance All-Flash array vendor which is starting to make its presence felt in South Asia.

I first blogged about SolidFire 3 years ago, and I have been following the company closely as more and more All-Flash array players entered the market over those 3 years. Many rode on the hype and momentum of flash storage, and as a result, muddied and convoluted the market's understanding of storage infrastructure. It seems to me that spin marketing ruled the day, and users could not tell the difference between vendor A and vendor B, and C and D, and so on.

I am often asked which is the best All-Flash array today. I have always hesitated to answer because there is not much to say, except for 2-3 well-entrenched vendors. Pure Storage and EMC XtremIO come to mind, but the one that had stayed under the enterprise storage radar was SolidFire, until now.


Continue reading

Don’t get too drunk on Hyper Converged

I hate the fact that I am bursting the big bubble brewing around Hyper Convergence (HC). I urge all to look past the hot air and the hype frenzy that is going on, because in the end, HC platforms have to be aligned and congruent with the organization's data architecture and business plans.

The announcement of Gartner's latest Magic Quadrant on Integrated Systems (read: hyper convergence) has put Nutanix as the leader of the pack as of August 2015. Clearly, many of us get caught up because it is the "greatest feeling in the world". However, this faux feeling is not reality, because there are many factors that make the leaders of the pack in the Magic Quadrant (MQ).

[Figure: Gartner Magic Quadrant for Integrated Systems, August 2015]

First of all, the MQ is about market perception. There is no doubt that the pack leaders in the Leaders Quadrant have earned their right to be there. Each company's revenue, market share, gross margin and profitability have helped put it among the leaders of the pack. However, the MQ is also measured by branding, marketing, market perception and acceptance, and other intangible factors.

Secondly, VMware EVO: Rail has split the market, with EMC having 3 HC solutions in VCE, ScaleIO and EVO: Rail. Cisco wanted to do their own HC piece with Whiptail (between the 2014 MQ and 2015 MQ reports), and closed down Whiptail when their new CEO came on board. NetApp chose EVO: Rail and also has the ever-popular FlexPod. That is why, in this latest MQ report, NetApp and Cisco are evaluated independently, whereas in last year's report it was Cisco/NetApp. Market forces changed, and perception changed.

Continue reading

The transcendence of Data Fabric

The Register wrote a damning piece about NetApp a few days ago. I felt it was irresponsible because it was akin to kicking a man when he is down. It is easy to do that. The writer was clearly missing the forest for the trees. He was targeting NetApp's Clustered Data ONTAP (cDOT) and missing the entire philosophy of NetApp's mission and vision in Data Fabric.

I have always been a strong believer that you must treat data like water. Just as Jeff Goldblum famously said in Jurassic Park, "Life finds a way", data, as it moves through its lifecycle, will find its way into the cloud and back.

And every storage vendor today has a cloud story to tell. It is exciting to listen to everyone sharing theirs. Cloud makes sense when it addresses different workloads, such as the sharing of folders across multiple devices, backing up and archiving data to the cloud, tiering to the cloud, and the different cloud service models of IaaS, PaaS, SaaS and XaaS.

Continue reading