Huawei Dorado – All about Speed

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Three weeks after Storage Field Day 15, the thoughts of the session with Huawei still lingered. And one word came to describe Huawei Dorado V3, their flagship All-Flash storage platform: SPEED.

My conversation with Huawei actually started the night before our planned session at their Santa Clara facility. We had an evening get-together at Bourbon Levi’s Stadium. I was with my buddy, Ammar Zolkipli, who coincidentally was in Silicon Valley for work. Ammar is from Hitachi Vantara Japan, and has been a good friend of mine for over 17 years now.

Shortly after, the Huawei team arrived to join the camaraderie. We introduced ourselves to Chun Liu, not knowing that he is the Chief Architect at Huawei, and a big part of that evening was our conversation with him. Ammar and I had immersed ourselves in Oil & Gas E&P (Exploration & Production) data management and petrotechnical applications, he at Schlumberger and later at a NetApp reseller, while I was a Consulting Engineer with NetApp back then. So, the 2 of us started blabbering (yeah, that would be us when we get together to talk technology).

I observed that Chun was very interested to learn about real-world application use cases that would push storage performance to its limits. My guess was that the most demanding I/O characteristics would be small-block, random I/O, billions of them, with near-real-time latency. After that evening I did some research and could only think of a few, such as deep analytics or applications that need Monte Carlo simulations. Oh well, maybe I would share that with Chun the following day.

The moment the session started, it was already about the speed prowess of Huawei Storage. It was like the greyhounds unleashed going after the rabbit. In the lineup, the Dorado series stood out.

Continue reading

Magic happening

[Preamble: I am a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation are paid for by GestaltIT, the organizer and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The magic is happening.

Dropbox, the magical disruptor, is going IPO.

When Dropbox first entered the market that would eventually be termed BYOD (Bring Your Own Device), it was a phenomenon. There was nothing else that matched its simplicity and ease of use. A file uploaded into the cloud was instantaneously available on tablets and smartphones. It was on every storage vendor’s presentation slides, with Dropbox as the perennial name-dropping tactic to get end users’ buy-in.

Dropbox was more than that, and it went on to define a whole new market segment known as Enterprise File Synchronization and Sharing (EFSS), alongside the likes of Box, Easishare (they are here in South East Asia) and many others. And the executive team at Dropbox knew they were special too, so much so that they rejected a buyout attempt by Apple in 2011.

Today, Dropbox is beyond BYOD and EFSS. They are a full-fledged collaboration platform that includes project management, project workflow, file versioning, secure file transfer, smart file synchronization and Dropbox Paper. And they offer comprehensive plans from Basic, Plus and Professional to Business and Enterprise. Their upcoming IPO, I am sure, will give them far greater capital to expand and realize their full potential as the foremost content-based collaboration platform in the world.

Dropbox began their exodus from AWS a couple of years ago. They wanted to control their own destiny and have moved more than 500PB of customer data into their own private data centers. That was half an exabyte, people! And two years later, they had saved $75 million in operating costs after exiting AWS. Today, they have more than 1 exabyte of customer data! That is just incredible.

And Dropbox’s storage architecture started with a simple foundational design called “Magic Pocket”. Magic Pocket is a “fixed-length, immutable” block storage layer.

The blocks are fixed at 4MB chunks (for parallel performance and service resumption reasons), compressed and deduped (for capacity savings reasons), encrypted (for security reasons) and replicated (for high availability reasons).
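To make the “fixed-length, immutable” idea a little more concrete, here is a minimal sketch in Python. Only the 4MB chunk size comes from the description above; the SHA-256 content hash used as a dedupe key, the XOR stand-in for encryption and the 3 replicas are my own illustrative assumptions, not Dropbox’s actual Magic Pocket design:

    # Purely illustrative sketch of a fixed-length, immutable block store.
    # Not Dropbox's Magic Pocket code; the SHA-256 dedupe key, the XOR
    # "encryption" stand-in and the 3 replicas are assumptions for the example.
    import hashlib
    import zlib

    CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4MB chunks, as described above

    def split_into_chunks(data: bytes):
        # Fixed-length chunking: every block is exactly 4MB except the last.
        return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    def store_chunk(chunk: bytes, block_store: dict, replicas: int = 3) -> str:
        key = hashlib.sha256(chunk).hexdigest()     # content hash doubles as the dedupe key
        if key in block_store:                      # dedupe: identical chunk already stored
            return key
        payload = zlib.compress(chunk)              # compression for capacity savings
        payload = bytes(b ^ 0x5A for b in payload)  # placeholder for real encryption
        block_store[key] = [payload] * replicas     # replication for high availability
        return key

    # Usage: a "file" is just an ordered list of immutable chunk keys (a manifest).
    block_store = {}
    with open(__file__, "rb") as f:
        manifest = [store_chunk(c, block_store) for c in split_into_chunks(f.read())]

The gist: once written, a chunk never changes; an edited file simply yields new chunks and a new manifest.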

Continue reading

DellEMC SC progressing well

[Preamble: I was a delegate of Storage Field Day 14. My expenses, travel and accommodation were paid for by GestaltIT, the organizer and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I haven’t had a preview of the Compellent technology for a long time. My buddies at Impact Business Solutions were the first to introduce the Compellent technology called Data Progression to the local Malaysian market and I was invited to a preview back then. Around the same time, I also recalled another rather similar preview invitation by PTC Singapore for the 3PAR technology called Adaptive Provisioning (it is called Adaptive Optimization now).

Storage tiering was on the rise in the 2009-2010 years. Compellent and 3PAR were neck and neck, leading the conversation and mind share on storage tiering, while IBM Easy Tier and EMC FAST (Fully Automated Storage Tiering) were nowhere to be seen or heard. I vividly recall that the Compellent Data Progression technology felt much more elegant than the 3PAR technology. While both intelligent storage tiering technologies were equally good, I took it that the 3PAR founders were ex-Sun Microsystems folks, and Unix folks sucked at UX. In this case, Compellent’s Data Progression was definitely a leg up on 3PAR.

History aside, this week I had the chance to get a new preview of the Compellent technology again. Compellent has been rebranded as the SC series and is positioned as the mid-range storage array line of DellEMC. Together with the other Storage Field Day 14 delegates, I had the pleasure of experiencing the latest SC Data Progression technology update, as well as their latest SC All-Flash.

In Data Progression, one interesting feature which caught my attention was the RAID Tiering. This is a dynamic, auto-expanding and auto-contracting set of RAID tiers: RAID 10 and RAID 5/6 in the Fast Tier, and RAID 5/6 in the Lower Tier. RAID 10, RAID 5 and RAID 6 coexist on the same set of drives (including SSDs), and depending on the “hotness” of the data, the data blocks switch between the RAID tiers within the Fast Tier. Over a longer period, the data blocks relocate transparently from the Fast Tier to the Capacity Tier.

The Data Progression technology is extremely efficient. The movement of data between the RAID tiers and between the Performance and Capacity Tiers is done in pages instead of blocks, keeping the write penalty and the bandwidth consumed by tiering to a negligible minimum.
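As a thought experiment only, and emphatically not Dell EMC’s actual implementation, the behaviour described above can be pictured as a periodic sweep that re-evaluates each page against simple hotness rules. The tier names follow the paragraphs above; the counters and thresholds are invented for the illustration, and the 2MB page size is the classic Data Progression default mentioned further down:

    # Assumed, illustrative thresholds -- not Dell EMC's actual Data Progression logic.
    PAGE_SIZE = 2 * 1024 * 1024  # classic Data Progression page size (2MB)

    def place_page(writes_per_day: int, reads_per_day: int, age_days: int) -> str:
        """Pick a tier for one page based on how 'hot' it is (toy rules)."""
        if age_days > 14:                # cold pages drift down to the Capacity Tier
            return "Capacity Tier (RAID 5/6)"
        if writes_per_day > 100:         # hot, write-heavy pages stay on RAID 10
            return "Fast Tier (RAID 10)"
        if reads_per_day > 100:          # warm, read-mostly pages sit on RAID 5/6
            return "Fast Tier (RAID 5/6)"
        return "Capacity Tier (RAID 5/6)"

    # A periodic sweep would re-evaluate every page and relocate only those whose
    # ideal tier has changed; the data moved is (pages relocated x PAGE_SIZE).
    pages_to_move = 1000
    print(place_page(writes_per_day=500, reads_per_day=50, age_days=1))   # Fast Tier (RAID 10)
    print(place_page(writes_per_day=0, reads_per_day=1, age_days=30))     # Capacity Tier (RAID 5/6)
    print(f"Relocating {pages_to_move} pages moves {pages_to_move * PAGE_SIZE // 2**20} MB")

The real Data Progression obviously weighs far more than three counters, but the sweep-and-relocate idea is the essence of how blocks drift between RAID 10, RAID 5/6 and the Capacity Tier over time.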

The Storage Field Day 14 delegates were also privileged to be the first to get a deep dive into the new All-Flash SC, just days after its announcement. The All-Flash SC redefines and refines Data Progression to the next level. Among the new optimizations, for NAND flash in the SC (both SLC and MLC, write-intensive and read-intensive), the Data Progression default page size has been reduced from 2MB to 512KB. These smaller 512KB pages reduce the bandwidth needed for tiering between the write-intensive and the read-intensive tiers.

I didn’t get the latest SC family photos yet, but I managed to grab a screenshot of the announcement from The Register of the new DellEMC SC Series.

I was very encouraged by the DellEMC Midrange Storage presentation. Besides giving us a fantastic deep dive into the DellEMC SC All-Flash storage, I was also very impressed by the candid and straightforward attitude of the team, led by their VP of Product Management, Pierluca Chiodelli. An EMC veteran, he took the onslaught of hard questions from the SFD14 delegates like a pro. His team’s demeanour was critical in instilling confidence and trust in how the bloggers and the analysts viewed the Dell EMC merger, and how the SC and the Unity series would pan out in the technology roadmap.

Unlike the fiasco I went through with the DellEMC Forum 2017 in Malaysia, where I was disturbed by 3 calls on 3 consecutive days from DellEMC Malaysia, this time I was left with a profound respect for this DellEMC Storage team. They strongly supported their position within the DellEMC storage universe, and imparted their confidence in their technology solution in the marketplace.

Without a doubt, in my point of view, this DellEMC Mid-Range Storage team was the best I have enjoyed in Storage Field Day 14. Thank you.

The changing face of storage

No, we are not a storage company anymore. We are a data management company now.

I was reading a Forbes article interviewing NetApp’s CIO, Bill Miller. It was titled:

NetApp’s CIO Helps Drive Company’s Shift From Data Storage To Data Management

I was fairly surprised by how long it took for that mindset-shift messaging, from storage to data management, to come out. I am sure NetApp has been doing that internally for years.

To me, the writing has been on the wall for years. But the weak perception of storage, at least in this part of Asia, still lingers: that clunky, noisy box full of hard disk drives, behind glass walls and in crufty closets, lodged with snakes and snakes of orange, turquoise or white cables. 😉

The article may come as a revelation to some, but the world of storage has changed indefinitely. The blurring of the lines began when software-defined storage, or even earlier, storage virtualization, took form. I even came up with my own definition of this changing face of storage a couple of years ago. Instead of calling it data management, I called the new storage framework the Data Services Platform.

So, this is my version of the storage technology platform of today. This is the Data Services Platform I have been touting to many for the last couple of years. It is not just storage technology anymore; it is much more than that.

Continue reading

Let’s smoke the storage peace pipe

NVMe (Non-Volatile Memory Express) is upon us. And in the next 2-3 years, we will see a slew of new storage solutions and technology based on NVMe.

Just a few days ago, The Register released an article, “Seventeen hopefuls fight for the NVMe Fabric array crown”, and it was timely. I, for one, cannot be more excited about the development and advancement of NVMe and the upcoming NVMeF (NVMe over Fabrics).

This is it. This is the one that will end the wars of DAS, NAS and SAN and unite the warring factions between server-based SAN (the sexy name differentiating old DAS and new DAS) and the networked storage of SAN and NAS. There will be PEACE.

Remember this?

nutanix-nosan-bunting

Nutanix popularized the “No SAN” movement, which later led to VMware VSAN and other server-based SAN solutions, hyperconverged techs such as PernixData (acquired by Nutanix), DataCore and EMC ScaleIO, and the same approach also operated in the hyperscalers, the likes of Facebook and Google. The hyperconverged solutions and server-based SAN blurred the lines of storage, but still, they are not the usual networked storage architectures of SAN and NAS. I blogged about this, mentioning how the pendulum has swung back to favour DAS, or to put it more appropriately, server-based SAN. There was always a “Great Divide” between the 2 modes of storage architectures. Continue reading

Don’t get too drunk on Hyper Converged

I hate the fact that I am bursting the big bubble brewing about Hyper Convergence (HC). I urge all to look past the hot air and hype frenzy that are going on, because in the end, the HC platforms have to be aligned and congruent to the organization’s data architecture and business plans.

The announcement of Gartner’s latest Magic Quadrant on Integrated Systems (read: hyper convergence) has put Nutanix as the leader of the pack as of August 2015. Clearly, many of us get caught up because it is the “greatest feeling in the world”. However, this faux feeling is not reality, because there are many factors that made the pack leaders in the Magic Quadrant (MQ).

Gartner MQ Integrated Systems Aug 2015

First of all, the MQ is about market perception. There is no doubt that the pack leaders in the Leaders Quadrant have earned their right to be there. Each company’s revenue, market share, gross margin and profitability have helped put them at the head of the pack. However, the MQ is also measured by branding, marketing, market perception and acceptance, and other intangible factors.

Secondly, VMware EVO: Rail has split the market, with EMC having 3 HC solutions in VCE, ScaleIO and EVO: Rail. Cisco wanted to do their own HC piece with Whiptail (between the 2014 MQ and 2015 MQ reports), then closed down Whiptail when their new CEO came on board. NetApp chose EVO: Rail and also has the ever popular FlexPod. That is why in this latest MQ report, NetApp and Cisco are evaluated independently, whereas in last year’s report it was Cisco/NetApp. Market forces changed, and perception changed. Continue reading

Really? Disk is Dead? From Violin?

A catchy email from one of the forums I subscribe to caught my attention. It goes something like “…Grateful … Disk is Dead”. Here is the blog from Kevin Doherty, a Senior Account Manager at Violin Memory.

Coming from Violin Memory, this is pretty obvious, because they have an agenda against HDDs. They don’t use any disks at all … in any form factor. They use VIMMs (Violin Inline Memory Modules), something no other vendor in the industry uses today.

violin-memory4

I recalled my blog in 2012, titled “Violin pulling the strings”. It came up here in South Asia with much fanfare, lots of razzmatazz, and there was plenty of excitement. I was even invited to their product training at Ingram Micro in Singapore and met their early SE, Mike Thompson. Mike is still there, I believe, but the EMC veteran in Singapore whom I mentioned in my previous blog left almost a year after joining. So did the ex-Sun General Manager of Violin Memory in Singapore.

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which was only able to handle 16 devices and throughput up to 80 (later on 160 and 320) MB/sec.

NAS, defined by the CIFS/SMB and NFS protocols, was happily chugging along the 100 Mbit/sec network, and occasionally getting sucked into the arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were going on, waiting for me. NetApp (or Network Appliance as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”. Continue reading

Why demote archived data access?

We are all familiar with the concept of data archiving. Passive data gets archived from production storage and migrated to a slower and often cheaper storage medium such as tapes or SATA disks. Hence the terms nearline and offline data were created. With that, IT constantly reminds users that archived data is infrequently accessed, and therefore, they have to accept the slower access to passive, archived data.

The business conditions have certainly changed, because the need for data to be 100% online is becoming more relevant. The new competitive nature of businesses dictates that data must be at the fingertips, because speed and agility are the new competitive advantage. Often the total amount of data, production and archived, runs into hundreds of TBs, even into petabytes!

The industries I am familiar with – Oil & Gas, and Media & Entertainment – are facing this situation. These industries have a deluge of files and unstructured data in their archives, much of it dormant, inactive and sitting on old tapes of a bygone era. Yet, these files and unstructured data have the most potential to be explored, mined and analyzed to realize their value to the organization. In short, the archived data and files must be democratized!

The flip side is, when the archived files and unstructured data are coupled with a slow access interface or an unreliable storage infrastructure, the value of the archived data is downgraded, because access becomes a painful friction between the applications and the business requirements. How would organizations value archived data more if the access path to it is so damn hard???!!!

An interesting solution fell upon my lap some months ago, and putting A and B together (A + B), I believe the access path to archived data can be of unbelievably high performance, simple and transparent, and most importantly, it can remove the BLOODY PAIN of FILE AND DATA MIGRATION!  For storage administrators and engineers familiar with data migration, especially if the size of the migration is into hundreds of TBs or even PBs, you know what I mean!

I have known this solution for some time now, because I have been avidly following its development after its founders left NetApp following their Spinnaker venture to start Avere Systems.

avere_220

Continue reading

Praying to the hypervisor God

I was reading a great article by Frank Denneman about storage intelligence moving up the stack. It was pretty much in line with what I have been observing in the past 18 months or so, about the storage pendulum having swung back to DAS (direct attached storage). To be more precise, the DAS form factor I am referring to is physical server hardware that houses many disk drives.

Like it or not, the hypervisor has become the center of the universe in the IT space. VMware has become the indomitable force in hypervisor technology, with Microsoft Hyper-V playing catch-up. The seismic shift toward these 2 hypervisor technologies is leading storage vendors to place them on the altar and revere them as deities. The others, the likes of Xen and KVM, and to a lesser extent Solaris Containers, aren’t really worth mentioning.

This shift, as the pendulum swings from networked storage back to internal “direct-attached” storage, is dictated by 4 main technology factors:

  • The x86 server architecture
  • Software-defined
  • Scale-out architecture
  • Flash-based storage technology

Anyone remember Thumper? Not the Disney character from the Bambi movie!

thumper-bambi-cartoon-character

When the SunFire X4500 (aka Thumper) was first released in (intermission: checking Wiki for the right year) 2006, I felt it inflicted a significant wound on the networked storage industry. Instead of the usual 4-8 hard disk drives in all the industry servers at the time, the X4500 4U chassis housed 48 hard disk drives. The design and architecture were so astounding to me that I even went and bought a 1U SunFire X4150 for my personal server collection. Such was my adoration for Sun’s technology at the time.

Continue reading