FlashForward to Beyond

The flash frenzy has reached its zenith in 2016. We are no longer interested in listening to storage technology vendors touting the power of solid state storage (NAND Flash included) over spinning drives.

The capacity of 3D NAND Flash SSDs has reached a whopping 15.3TB (that is even bigger than the 12TB 7200RPM HDDs of today), and with deduplication and compression, the storage efficiency has reached a conservative 4:1 or 5:1. Effective capacity of most mid-end storage arrays can easily reach 1-2 Petabytes.
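As a quick back-of-the-envelope sketch of that claim (the 24-drive shelf below is my own assumption, not any vendor's configuration), the arithmetic looks roughly like this:

```python
# Rough effective-capacity arithmetic for an all-flash shelf.
# Drive count is an illustrative assumption; the SSD size and the
# 4:1 reduction ratio come from the figures quoted above.

RAW_SSD_TB = 15.3          # largest 3D NAND SSD mentioned above
DRIVES_PER_SHELF = 24      # assumed shelf size
REDUCTION_RATIO = 4.0      # conservative dedupe + compression (4:1)

raw_tb = RAW_SSD_TB * DRIVES_PER_SHELF
effective_tb = raw_tb * REDUCTION_RATIO

print(f"Raw capacity      : {raw_tb:,.1f} TB")
print(f"Effective capacity: {effective_tb:,.1f} TB (~{effective_tb / 1000:.2f} PB)")
```

A single assumed shelf already lands in the 1-2 Petabyte effective range mentioned above.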

And flash and hybrid platforms have reached maturity in these few short years. So what is next?

The landscape has obviously changed. The performance landscape, the capacity landscape and all the related storage data points have changed. The speed of SSDs, together with the up-and-coming NVMe and NVDIMM technologies in new storage array controllers, is also shifting the data bottleneck to other parts of the architecture. The development of I/O communications and interfaces has to change as well, to take advantage of asynchronous I/Os in storage tiering and caching using NAND Flash.

With this mature and well understood landscape, it is time to take Flash to the next level. This next level comes in the form of an exciting end-user conference in Singapore on 25th April 2017. It is called FlashForward.

The 2016 FlashForward event in Europe has already garnered great support from the cream of storage technologists around the world, and received fantastic feedback from the end-user attendees. That inaugural FlashForward event also saw the birth of an international business and technology exchange. Yes, it is time to learn from the field experts, and it is time to build on the Flash platform for new data services.

From the sponsorship package brochure I have received, it is definitely an event not to be missed.

The FlashForward Conference in Singapore is exquisitely produced by Evito Ltd, under the stewardship of Mr. Paul Talbut. Paul is a seasoned veteran on the global circuit as an SNIA director of several initiatives. He has been immensely involved in the development of several SNIA chapters around the world, including South Asia, Malaysia, India, China, and even Brazil. He also leads by example on the SNIA Global Steering Committee (GSC); he is the SNIA Global Education Director and was at one time the SNIA DPCO (Data Protection & Capacity Optimization) global proctor.

I have had the honour of working with Paul for almost 8 years now, and I am sure he will lead the FlashForward Conference with valuable insights and experience.

This is probably the greatest period for the industry and end users to get involved in the FlashForward Conference. For one, it is endorsed by SNIA, the vendor-neutral association which has been the growth beacon of the storage networking industry.

Secondly, it is the perfect opportunity for technology vendors to build their mindshare with end users and customers. And with the endorsement of the independent field experts and technology practitioners, end users would have a field day garnering approvals for their decisions, as well as learning the best practices to build upon the Flash technology they have implemented in their data center space.

The sponsorship packages are listed below, and I do encourage technology vendors, especially the All-Flash vendors to use the FlashForward conference as a platform to build their mindshare, and most of all, their branding. Continue reading

Let’s smoke the storage peace pipe

NVMe (Non-Volatile Memory Express) is upon us. And in the next 2-3 years, we will see a slew of new storage solutions and technology based on NVMe.

Just a few days ago, The Register released an article, “Seventeen hopefuls fight for the NVMe Fabric array crown“, and it was timely. I, for one, could not be more excited about the development and advancement of NVMe and the upcoming NVMeF (NVMe over Fabrics).

This is it. This is the one that will end the wars of DAS, NAS and SAN and unite the warring factions between server-based SAN (the sexy name differentiating old DAS and new DAS) and the networked storage of SAN and NAS. There will be PEACE.

Remember this?

Nutanix No SAN bunting

Nutanix popularized the “No SAN” movement, which later led to VMware VSAN and other server-based SAN solutions, hyperconverged techs such as PernixData (acquired by Nutanix), DataCore and EMC ScaleIO; the model is also operated by hyperscalers – the likes of Facebook and Google. The hyperconverged solutions and server-based SAN have blurred the lines of storage, but still, they are not the usual networked storage architectures of SAN and NAS. I blogged about this, mentioning how the pendulum has swung back to favour DAS, or to put it more appropriately, server-based SAN. There has always been a “Great Divide” between the 2 modes of storage architectures. Continue reading

Can CDMI emancipate an interoperable medical records cloud ecosystem?

PREFACE: This is just a thought, an idea. I am by no means an expert in this area. I have researched this to inspire a thought process of how we can bring together 2 disparate worlds of medical records and imaging with the emerging cloud services for healthcare.

Healthcare has been moving out of its archaic shell in the past few years, and digital healthcare technology and services are booming. And this movement is part of the digital transformation which could eventually lead to a secure and compliant distribution and collaboration of health data, medical imaging and electronic medical records (EMR).

It is a blessing that today’s medical imaging industry has consolidated around the DICOM (Digital Imaging and Communications in Medicine) standard. DICOM dictates how medical imaging information and pictures are used, stored, printed, transmitted and exchanged. It is also a communication protocol which runs over TCP/IP, and links up different service class providers (SCPs) and service class users (SCUs), and backend systems such as PACS (Picture Archiving & Communications Systems) and RIS (Radiology Information Systems).

Another well accepted standard is HL7 (Health Level 7), a similar Layer 7, application-level communication protocol for transferring and exchanging clinical and administrative data.
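To make the idea in the title a little more concrete, here is a minimal sketch of how a DICOM study’s metadata might be pushed into a CDMI-compliant cloud container over plain HTTP. The endpoint URL, container path and metadata fields are hypothetical; only the content type and version header come from the SNIA CDMI specification.

```python
# Minimal sketch: storing a DICOM study's metadata as a CDMI data object.
# The endpoint, container path and metadata keys are hypothetical;
# the content type and version header follow the SNIA CDMI specification.
import json
import requests

CDMI_ENDPOINT = "https://cloud.example-hospital.org/cdmi"   # hypothetical
CONTAINER = "radiology/studies"                             # hypothetical

study = {
    "mimetype": "application/dicom",
    "metadata": {                      # illustrative DICOM attributes
        "PatientID": "ANON-000123",
        "StudyInstanceUID": "1.2.840.99999.1.1",
        "Modality": "CT",
    },
    "value": "",                       # pixel data would live elsewhere
}

resp = requests.put(
    f"{CDMI_ENDPOINT}/{CONTAINER}/study-000123",
    headers={
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
        "X-CDMI-Specification-Version": "1.1",
    },
    data=json.dumps(study),
)
resp.raise_for_status()
```

The attraction of CDMI here is that the same RESTful calls and metadata model work regardless of which cloud sits behind the endpoint, which is exactly the interoperability question this post is asking.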

The diagram below shows a self-contained ecosystem involving the front-end HIS (Hospital Information Systems), and the integration of healthcare, medical systems and other DICOM modalities.

Hospital Enterprise

(Picture courtesy of Meddiff Technologies)

Continue reading

Solid in the Fire

December 22 2015: I kept this blog in draft for 6 months. Now I am releasing it as NetApp acquires SolidFire.

真金不怕紅爐火

The above is an old Chinese adage which means “True Gold fears no Fire“. That is how I would describe my revisited view and assessment of SolidFire, a high performance All-Flash array vendor which is starting to make its presence felt in South Asia.

I first blogged about SolidFire 3 years ago, and I have been following the company closely as more and more All-Flash array players entered the market over those 3 years. Many rode on the hype and momentum of flash storage, and as a result, muddied and convoluted the market’s understanding of storage infrastructure. It seems to me that spin marketing ruled the day and users could not tell the difference between vendor A and vendor B, and C and D, and so on….

I have often been asked which is the best All-Flash array today. I have always hesitated to say which is the best because there isn’t much to say, except for 2-3 well entrenched vendors. Pure Storage and EMC XtremIO come to mind, but the one that had stayed under the enterprise storage radar was SolidFire, until now.

SolidFire Logo

Continue reading

Don’t get too drunk on Hyper Converged

I hate the fact that I am bursting the big bubble brewing about Hyper Convergence (HC). I urge all to look past the hot air and hype frenzy that are going on, because in the end, the HC platforms have to be aligned and congruent with the organization’s data architecture and business plans.

The announcement of Gartner’s latest Magic Quadrant on Integrated Systems (read hyper convergence) has put Nutanix as the leader of the pack as of August 2015. Clearly, many of us get caught up because it is the “greatest feeling in the world”. However, this faux feeling is not reality because there are many factors that made them the pack leaders in the Magic Quadrant (MQ).

Gartner MQ Integrated Systems Aug 2015

First of all, the MQ is about market perception. There is no doubt that the pack leaders in the Leaders Quadrant have earned their right to be there. Each company’s revenue, market share, gross margin and profitability have helped put them at the head of the pack. However, the MQ is also measured by branding, marketing, market perception and acceptance, and other intangible factors.

Secondly, VMware EVO: Rail has split the market, with EMC having 3 HC solutions in VCE, ScaleIO and EVO: Rail. Cisco wanted to do their own HC piece with Whiptail (between the 2014 MQ and 2015 MQ reports), and closed down Whiptail when their new CEO came on board. NetApp chose EVO: Rail and also has the ever popular FlexPod. That is why in this latest MQ report, NetApp and Cisco are evaluated independently, whereas in last year’s report it was Cisco/NetApp. Market forces changed, and perception changed.  Continue reading

Oops, excuse me but your silo is showing

It is the morning of the SNIA Global Steering Committee reporting session, which starts soon. I am in the office extremely early, waiting for my turn to share the happenings in SNIA Malaysia.

And of late, I have been getting a lot of calls to catch up on hot technologies, notably All-Flash storage arrays and hyper-converged infrastructure. Even though I am now working for Interica, a company that focuses on Oil & Gas exploration and production software, my free coffee sessions with folks from the IT side have not diminished. And I recall a week back in mid-March when I had a coffee overdose!

Flash storage and hyperconvergence are HOT! Despite the hype and frenzy around both flash storage and hyperconvergence, I still believe that integrating either, or both, has an effect that many IT managers overlook. That effect is a data silo.

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which was only able to handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.

NAS, defined by the CIFS/SMB and NFS protocols, was happily chugging along the 100 Mbit/sec network, and occasionally getting sucked into the arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer“. Continue reading

Why demote archived data access?

We are all familiar with the concept of data archiving. Passive data gets archived from production storage and is migrated to a slower and often cheaper storage medium such as tapes or SATA disks. Hence the terms nearline and offline data were created. With that, IT constantly reminds users that archived data is infrequently accessed, and therefore, they have to accept slower access to passive, archived data.
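In practice, the demotion is usually driven by a simple age-based policy along these lines. This is a minimal sketch; the 90-day threshold and the mount point are assumptions for illustration, not any particular product’s behaviour.

```python
# Minimal sketch of an age-based archive policy: files not accessed
# for N days become candidates for demotion to nearline/offline storage.
# The threshold and directory path are illustrative assumptions.
import time
from pathlib import Path

ARCHIVE_AFTER_DAYS = 90
PRODUCTION_DIR = Path("/mnt/production")    # hypothetical mount point

cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400

candidates = [
    p for p in PRODUCTION_DIR.rglob("*")
    if p.is_file() and p.stat().st_atime < cutoff   # last access time
]

print(f"{len(candidates)} files eligible for archiving to nearline storage")
```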

The business conditions have certainly changed, because the need for data to be 100% online is becoming more relevant. The new competitive nature of businesses dictates that data must be at the fingertips, because speed and agility are the new competitive advantage. Often the total amount of data, production and archived, runs into hundreds of TBs, even into petabytes!

The industries I am familiar with – Oil & Gas, and Media & Entertainment – are facing this situation. These industries have a deluge of files and unstructured data in their archives, much of it dormant, inactive and sitting on old tapes of a bygone era. Yet, these files and unstructured data have the most potential to be explored, mined and analyzed to realize their value to the organization. In short, the archived data and files must be democratized!

The flip side is, when the archived files and unstructured data are coupled with a slow access interface or an unreliable storage infrastructure, the value of the archived data is downgraded, because access becomes a constant aggravation for applications and business requirements. How would organizations value archived data more if the access path to it is so damn hard???!!!

An interesting solution fell upon my lap some months ago, and putting A and B together (A + B), I believe the access path to archived data can be of unbelievably high performance, simple, transparent and, most importantly, remove the BLOODY PAIN of FILE AND DATA MIGRATION! For storage administrators and engineers familiar with data migration, especially if the size of the migration runs into hundreds of TBs or even PBs, you know what I mean!

I have known this solution for some time now, because I have been avidly following its development after its founders left NetApp following their Spinnaker venture to start Avere Systems.

Avere Systems logo

Continue reading

Hail Hydra!

The last stop of Storage Field Day 6 on November 7th took me and the other delegates to NEC. There was an obvious, yet eerie, silence among everyone about this visit. NEC? Are you kidding me?

NEC isn’t exactly THE exciting storage company in the Silicon Valley, yet I was pleasantly surprised with their HydraStor prowess. It is indeed quite a beast, with published numbers of 4PB/hour backup throughput, and it scales to 100PB of capacity. Most impressive indeed, and HydraStor deserves this blogger’s honourable architectural dissection.

HydraStor is NEC’s grid-based, scale-out storage platform with an object storage backend. The technology is powered by the DynamicStor™ software, a distributed file system laid over the HydraStor grid architecture. At the same time, it has the DataRedux™ technology that provides global in-line deduplication as the HydraStor ingests data for data protection, replication, archiving and WORM purposes. It is a massive data consolidation platform, storing gazillion loads of data (100PB, you say?) for short-term and long-term retention and recovery.

The architecture is indeed solid, and its data availability goes beyond traditional RAID-level resiliency. HydraStor employs their proprietary erasure coding, called Distributed Resilient Data™. The resiliency knob can be configured to withstand up to 6 concurrent disk or node failures, but is configured by default with a resiliency level of 3.

We can quickly deduce that DynamicStor™, DataRedux™ and Distributed Resilient Data™ are the technology pillars of HydraStor. How do they work, and how do they work together?

Let’s look a bit deeper into the HydraStor architecture.

HydraStor is made up of 2 types of nodes:

  • Accelerator Nodes
  • Storage Nodes

The Accelerator Nodes (AN) are the access nodes. They interface with the HydraStor front end, which could be CIFS, NFS or OST (Open Storage Technology). The AN nodes chunk the incoming data and perform in-line deduplication at a very high speed. They can reach speeds of 300TB/hour, which is blazingly fast!

The AN nodes also run DynamicStor™, handling the performance heavy-lifting portion of HydraStor. The chunked data from the AN nodes is then passed on to the Storage Nodes (SN), where it is further “deduped in-line” to determine if the chunks are unique or not. It is a two-step inline deduplication process. Below is a diagram showing the ANs built above the SNs in the HydraStor grid architecture.

NEC AN & SN grid architecture
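A rough conceptual sketch of such a two-step check is shown below. The fixed chunk size, SHA-256 fingerprints and the in-memory “global table” are my simplifications for illustration, not NEC’s implementation.

```python
# Conceptual sketch of a two-step inline dedupe check (NOT NEC's code):
# the "AN" side chunks and fingerprints incoming data, the "SN" side
# keeps a global table of known fingerprints and reports duplicates.
import hashlib

CHUNK_SIZE = 64 * 1024   # fixed-size chunks for simplicity

def an_chunk_and_fingerprint(data: bytes):
    """Accelerator-node step: split the stream and fingerprint each chunk."""
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

class StorageNode:
    """Storage-node step: a global fingerprint table decides what is unique."""
    def __init__(self):
        self.known = set()

    def ingest(self, fingerprint: str, chunk: bytes) -> bool:
        if fingerprint in self.known:
            return False          # duplicate chunk, nothing written to disk
        self.known.add(fingerprint)
        # ... a real system would write the unique chunk to disk here ...
        return True

sn = StorageNode()
stream = b"X" * (CHUNK_SIZE * 4)          # four identical chunks
written = sum(sn.ingest(fp, c) for fp, c in an_chunk_and_fingerprint(stream))
print(f"chunks seen: 4, unique chunks written: {written}")   # -> 1
```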

 

The HydraStor grid architecture is also very scalable, allowing dynamic scale-in and scale-out of both ANs and SNs. AN nodes and SN nodes can be added to or removed from the system, auto-configuring and auto-optimizing while everything stays online. This capability further strengthens the reliability and resiliency of the HydraStor.

NEC Hydrastor dynamic topology

Moving on to DataRedux™. DataRedux™ is HydraStor’s global in-line data deduplication technology. It performs dedupe at the sub-file level, with a variable-length window. This is performed at the AN node and SN node levels, chunking the data and creating unique hash values. All unique chunks are further compressed with a modified LZ compression algorithm, shrinking the data to its optimized footprint on the disk storage. To maintain global in-line deduplication, the hash table is available across the HydraStor cluster.

NEC Deduplication & Compression
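To illustrate the compression half of the pipeline, here is a small sketch using Python’s zlib, an LZ77-family compressor standing in for NEC’s modified LZ algorithm purely for illustration; the sample chunks are made up.

```python
# Sketch: compress the surviving unique chunks and measure the footprint.
# zlib (LZ77/DEFLATE) stands in for NEC's modified LZ algorithm here.
import zlib

unique_chunks = [
    b"seismic trace header " * 500,   # illustrative, highly compressible
    b"\x00" * 8192,
    bytes(range(256)) * 32,
]

raw = sum(len(c) for c in unique_chunks)
on_disk = sum(len(zlib.compress(c)) for c in unique_chunks)

print(f"raw: {raw} bytes, compressed: {on_disk} bytes "
      f"({raw / on_disk:.1f}:1 reduction)")
```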

The unique data chunks resulting from deduplication and compression are then written to disks using the configured Distributed Resilient Data™ (DRD) algorithm, at its set resiliency level.

At the junction of DRD, with erasure coding parity, the data is broken up into multiple fragments and a parity is assigned to each grouping of fragments. If the resiliency level is set to 3 (the default), the data is broken into 12 pieces: 9 data fragments + 3 parity fragments. The 3 parity fragments correspond to the resiliency level of 3. See the diagram below of the 12 fragments spread across a group of selected disks in the storage pool of the Storage Nodes.

NEC DRD erasure coding on Storage Nodes
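In simple arithmetic terms, and assuming the data fragment count stays at 9 while the parity count tracks the resiliency level (that assumption is mine, extrapolated from the default 9+3 layout above), the trade-off looks like this:

```python
# Fragment-count and overhead arithmetic for the DRD erasure coding.
# Assumption (mine): 9 data fragments, one parity fragment per resiliency level.
DATA_FRAGMENTS = 9

for resiliency in range(1, 7):
    parity = resiliency
    total = DATA_FRAGMENTS + parity
    overhead = parity / DATA_FRAGMENTS
    print(f"resiliency {resiliency}: {DATA_FRAGMENTS} data + {parity} parity "
          f"= {total} fragments, capacity overhead {overhead:.0%}, "
          f"survives {parity} concurrent disk/node failures")
```

At the default level of 3, the capacity overhead works out to about 33%, which is the price paid for surviving 3 concurrent disk or node failures.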

 

If the HydraStor experiences a failure in the disks or nodes that results in the loss of a fragment or fragments, the DRD self-healing function will auto-rebuild and auto-reconfigure the recovered fragments on another set of disks, maintaining the level of 3 parities.

The resiliency level, as mentioned earlier, can be set up to 6, boosting HydraStor’s ability to survive 6 disk or node failures in the grid. See below how the autonomous DRD recovery works:

NEC Autonomous Data recovery

Despite lacking the razzle dazzle of most Silicon Valley storage startups and upstarts, credit must be given where credit is due. NEC HydraStor is indeed a strong showstopper.

However, in a market that is as fickle as storage, deduplication solutions such as HydraStor, EMC Data Domain, and HP StoreOnce are being superseded by Copy Data Management technology, touted by Actifio. It was rumoured that EMC restructured their entire BURA (Backup Recovery Archive) division into DPAD (Data Protection and Availability Division) to go after the burgeoning copy data management market.

It would be good if NEC can take notice and turn their HydraStor “supertanker” towards the Copy Data Management market. That would be something special to savour.

P/S: NEC. Sorry about the title. I just couldn’t resist it 😉