Industry 4.0 secret gem with Dell

[Preamble: I have been invited by Dell Technologies as a delegate to their upcoming Dell Technologies World from Apr 30-May 2, 2018 in Las Vegas, USA. My expenses, travel and accommodation will be paid for by Dell Technologies, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

This may seem a little strange. How does Industry 4.0 relate to Dell Technologies?

Recently, I was involved in an Industry 4.0 consortium called Data Industry 4.0 (di 4.0). The objective of the consortium is to combine the foundations of the 5S (seiri, seiton, seiso, seiketsu, and shitsuke), QRQC (Quick Response Quality Control) and Kaizen methodologies with the nine pillars of Industry 4.0, with a strong focus on data insight.

Industry 4.0 is the latest technology trend in the manufacturing world. It is taking the manufacturing industry by storm, led by the nine pillars of Industry 4.0:

  • Horizontal and Vertical System Integration
  • Industrial Internet of Things
  • Simulation
  • Additive Manufacturing
  • Cloud Computing
  • Augmented Reality
  • Big Data and Analytics
  • Cybersecurity
  • Autonomous Robots

Continue reading

Own the Data Pipeline

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am a big proponent of Go-to-Market (GTM) solutions. Technology does not stand alone. It must be in an ecosystem, and in each industry, in each segment of each respective industry, every ecosystem is unique. And when we amalgamate data, the storage infrastructure technologies and the data management into the ecosystem, we reap the benefits in that ecosystem.

Data moves in the ecosystem, from system to system, north to south, east to west and vice versa, random, sequential, ad-hoc. Data acquires different statuses, different roles, different relevances in its lifecycle through the ecosystem. From it, we derive the flow, a workflow of data creating a data pipeline. The Data Pipeline concept has been around since the inception of data.

To illustrate my point, I created one for the Oil & Gas – Exploration & Production (E&P) upstream segment some years ago.

Continue reading

Cohesity SpanFS – a foundational shift

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Cohesity SpanFS impressed me. Their filesystem was designed from the ground up to meet the demands of voluminous cloud-scale data, and yes, the sheer magnitude of data everywhere needs to be managed.

We all know that primary data is always the more important piece of the data landscape, but there is a growing need to address the secondary data segment as well.

Like an iceberg, the piece sticking out of the water is the more important primary data, but the larger piece beneath the surface, the secondary data, is becoming ever more valuable. Applications such as file shares, archiving, backup, test and development, and analytics and insights are maturing as foundational data management frameworks, fast becoming the bedrock of businesses.

The ability of businesses to bounce back after a disaster; the relentless testing of large data sets to develop new competitive advantages; the affirmation and insight gained from analyzing data to reduce risk in decision making; all these are the powerful back-end engines that thrust businesses forward. Even the ability to search for the right information in a sea of data for regulatory and compliance reasons is part of an organization's data management applications.

Continue reading

Magic happening

[Preamble: I am a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The magic is happening.

Dropbox, the magical disruptor, is going IPO.

When Dropbox first entered the market that would eventually be termed BYOD (Bring Your Own Device), it was a phenomenon. There was nothing else that matched its simplicity and ease of use. A file uploaded into the cloud was instantaneously available on tablets and smartphones. It was on every storage vendor's presentation slides, with Dropbox as the perennial name-dropping tactic to get end-user buy-in.

Dropbox was more than that, and it went on to define a whole new market segment known as Enterprise File Synchronization and Sharing (EFSS), alongside the likes of Box, Easishare (they are here in South East Asia) and many others. And the executive team at Dropbox knew they were special too, so much so that they rejected a buyout attempt by Apple in 2011.

Today, Dropbox is beyond BYOD and EFSS. They are a full-fledged collaboration platform that includes project management, project workflow, file versioning, secure file transfer, smart file synchronization and Dropbox Paper. And they offer comprehensive plans, from Basic, Plus and Professional to Business and Enterprise. Their upcoming IPO, I am sure, will give them far greater capital to expand and realize their full potential as the foremost content-based collaboration platform in the world.

Dropbox began their exodus from AWS a couple of years ago. They wanted to control their own destiny, and they moved more than 500PB of customer data into their own private data centers. That is half an exabyte, people! Two years later, they had saved US$75 million in operating costs by exiting AWS. Today, they hold more than 1 exabyte of customer data! That is just incredible.

And Dropbox's storage architecture started with a simple foundational design called "Magic Pocket". Magic Pocket is a "fixed-length, immutable" block storage layer.

The blocks are fixed at 4MB chunks (for parallel performance and service resumption reasons), compressed and deduped (for capacity savings), encrypted (for security) and replicated (for high availability).
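
Out of curiosity, I sketched what such a layer might look like. This is purely my own illustration, not Dropbox's actual code: fixed 4MB chunks, compressed with zlib, deduplicated by content hash, and fanned out to replicas, with encryption only noted in a comment. All names here are made up.

```python
import hashlib
import zlib

BLOCK_SIZE = 4 * 1024 * 1024  # fixed 4MB chunks, as per the design described above

class BlockStore:
    """One replica target. Blocks are immutable: written once, never updated in place."""
    def __init__(self):
        self.blocks = {}  # content hash -> compressed payload

    def put(self, key, payload):
        if key not in self.blocks:  # dedupe: identical content is stored only once
            self.blocks[key] = payload

def store_file(data, replicas):
    """Chunk a byte stream into fixed 4MB blocks and fan each block out to every replica.
    Returns the ordered list of block hashes, i.e. the 'recipe' to rebuild the file."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        chunk = data[offset:offset + BLOCK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()  # content addressing enables dedupe
        payload = zlib.compress(chunk)           # capacity savings
        # (encryption would happen here; omitted to keep the sketch standard-library only)
        for replica in replicas:                 # replication for high availability
            replica.put(key, payload)
        recipe.append(key)
    return recipe

replicas = [BlockStore() for _ in range(3)]
recipe = store_file(b"hello magic pocket " * 1_000_000, replicas)
print(len(recipe), "blocks stored on each of", len(replicas), "replicas")
```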

Continue reading

Of Object Storage, Filesystems and Multi-Cloud

Data storage silos everywhere. The early clarion call was to eliminate IT data storage silos by moving to the cloud. Fast forward to the present. Data storage silos are still everywhere, but this time, they are in the clouds. I blogged about this.

Object Storage was all the rage when it first started. AWS, with its S3 (Simple Storage Service) offering, started the cloud storage frenzy. It was highly available, globally distributed, simple to access, and fitted superbly into the entire AWS ecosystem. Quickly, a smorgasbord of S3-compatible, S3-like object-based storage emerged: OpenStack Swift, HDS HCP, EMC Atmos, Cleversafe (which became IBM Cloud Object Storage), Inktank Ceph (which became Red Hat Ceph), Bycast (acquired by NetApp to become StorageGRID), Quantum Lattus, Amplidata, and many more. For a period of a few years, it looked to me as if the popularity of object storage with an S3-compatible front end had overtaken that of distributed file systems.

What's not to like? Object stores are distributed, they are metadata rich (at a certain structural level), they are immutable (hence secure, from a certain point of view), and some even claim to be self-healing (depending on data protection policies). But one thing object storage rarely touted was dominance in high-performance I/O. There were some cases, but they were either fronted by a file system (e.g. NFSv4.1 with pNFS extensions) or used some host-based SAN-client agent (e.g. StorNext or Intel Lustre). Object-based storage, in its native form, has not been positioned as high-performance I/O storage.
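
That simplicity of access is easy to show. Here is a small sketch using the AWS boto3 SDK: one call stores an object with user-defined metadata attached, another reads the metadata back without fetching the payload. The bucket name and object key are made up for the example.

```python
import boto3

s3 = boto3.client("s3")  # credentials and region come from the environment

# PUT an object with user-defined metadata attached to it
s3.put_object(
    Bucket="my-example-bucket",         # hypothetical bucket name
    Key="seismic/survey-0001.segy",     # hypothetical object key
    Body=b"...survey payload bytes...",
    Metadata={"project": "example-project", "acquired": "2017-06-01"},
)

# HEAD returns the metadata without fetching the payload itself
head = s3.head_object(Bucket="my-example-bucket", Key="seismic/survey-0001.segy")
print(head["Metadata"])  # {'project': 'example-project', 'acquired': '2017-06-01'}
```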

A few weeks ago, I read an article by Dave Raffo at Storage Soup. When I read it, it felt oxymoronic. SwiftStack had just been named a Visionary in the Gartner Magic Quadrant for Distributed File Systems and Object Storage. But according to Dave's article, SwiftStack did not want to be "associated" with object storage that much, even though SwiftStack's technology underpinning was all object storage. Strange.

Continue reading

DellEMC SC progressing well

[Preamble: I was a delegate of Storage Field Day 14. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I haven't had a preview of the Compellent technology for a long time. My buddies at Impact Business Solutions were the first to introduce the Compellent technology called Data Progression to the local Malaysian market, and I was invited to a preview back then. Around the same time, I also recall a rather similar preview invitation from PTC Singapore for the 3PAR technology called Adaptive Provisioning (it is called Adaptive Optimization now).

Storage tiering was on the rise in the 2009-2010 years. Compellent and 3PAR were neck and neck, leading the conversation and mind share on storage tiering, while IBM Easy Tier and EMC FAST (Fully Automated Storage Tiering) were nowhere to be seen or heard. I vividly recall that the Compellent Data Progression technology was much more elegant than the 3PAR technology. While both intelligent storage tiering technologies were equally capable, I took it that the 3PAR founders were ex-Sun Microsystems folks, and Unix folks sucked at UX. In this case, Compellent's Data Progression was definitely a leg up on 3PAR.

History aside, this week I had the chance to get a new preview of the Compellent technology again. Compellent has now been rebranded as the SC series and is positioned as the mid-range storage array line of DellEMC. Together with the other Storage Field Day 14 delegates, I had the pleasure of experiencing the latest SC Data Progression technology update, as well as their latest SC All-Flash.

In Data Progression, one interesting feature that caught my attention was RAID tiering. This is a dynamic, auto-expanding and auto-contracting set of RAID tiers: RAID 10 and RAID 5/6 in the Fast Tier, and RAID 5/6 in the Lower Tier. RAID 10, RAID 5 and RAID 6 coexist on the same set of drives (including SSDs), and depending on the "hotness" of the data, data blocks switch between the several RAID tiers within the Fast Tier. Over a longer period, the data blocks relocate transparently from the Fast Tier to the Capacity Tier.

The Data Progression technology is extremely efficient. The movement of data between the RAID tiers and between the Performance and Capacity tiers is done in pages instead of blocks, reducing the write penalty and the bandwidth consumed to a negligible minimum.
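
As a thought experiment, here is a toy model of my own (not DellEMC's actual algorithm) of how page-based tiering by data "hotness" could work. The tier names mirror the description above; the threshold and cycle logic are my assumptions.

```python
from dataclasses import dataclass

HOT_THRESHOLD = 100  # hypothetical access-count cutoff per progression cycle

@dataclass
class Page:
    page_id: int
    accesses: int = 0
    tier: str = "RAID10-Fast"  # new writes land on the fastest RAID tier

def progression_cycle(pages):
    """Keep hot pages on RAID 10; step cooling pages down one tier; reset the counters."""
    for p in pages:
        if p.accesses >= HOT_THRESHOLD:
            p.tier = "RAID10-Fast"       # hot data stays on (or returns to) RAID 10
        elif p.tier == "RAID10-Fast":
            p.tier = "RAID5/6-Fast"      # cooling data re-tiers on the same drives
        else:
            p.tier = "RAID5/6-Capacity"  # cold data sinks to the capacity tier
        p.accesses = 0

pages = [Page(i) for i in range(4)]
for _ in range(2):               # two progression cycles
    pages[0].accesses = 500      # page 0 stays hot throughout
    progression_cycle(pages)

for p in pages:
    print(p.page_id, p.tier)     # page 0 on RAID10-Fast, the rest on RAID5/6-Capacity
```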

The Storage Field Day 14 delegates were also privileged to be the first to get a deep dive into the new All-Flash SC, just days after its announcement. The All-Flash SC redefines and refines Data Progression to the next level. Among the new optimizations, for NAND flash in the SC (both SLC and MLC, read-intensive and write-intensive), the default Data Progression page size has been reduced from 2MB to 512KB. These smaller 512KB pages reduce the bandwidth needed for tiering between the write-intensive and the read-intensive tiers.

I didn’t get the latest SC family photos yet, but I managed to grab a screenshot of the announcement from The Register of the new DellEMC SC Series.

I was very encouraged by the DellEMC Midrange Storage presentation. Besides giving us a fantastic deep dive into the DellEMC SC All-Flash storage, I was also very impressed by the candid and straightforward attitude of the team, led by their VP of Product Management, Pierluca Chiodelli. An EMC veteran, he took the onslaught of hard questions from the SFD14 delegates like a pro. His team's demeanour was critical in instilling confidence and trust in how the bloggers and analysts viewed the Dell EMC merger, and how the SC and Unity series would pan out on the technology roadmap.

Unlike the fiasco I went through at the DellEMC Forum 2017 in Malaysia, where I was pestered with 3 calls on 3 consecutive days by DellEMC Malaysia, I was left with a profound respect for this DellEMC storage team. They strongly supported their position within the DellEMC storage universe, and conveyed their confidence in their technology solutions in the marketplace.

Without a doubt, in my point of view, this DellEMC Mid-Range Storage team was the best I have enjoyed in Storage Field Day 14. Thank you.

Commvault UDI – a new CPUU

[Preamble: I am a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am here at Commvault GO 2017. Bob Hammer, Commvault's CEO, is on stage right now. He shares his wisdom, and the message is clear: IT to DT. IT to DT? Yes, Information Technology to Data Technology. It is all about the DATA.

The data landscape has changed. The cloud has changed everything. And data is everywhere. This omnipresence of data presents new complexity and new challenges. It is great to see Commvault acknowledging and accepting this change, and the challenges that come along with it, and introducing their HyperScale technology and their secret sauce – the Universal Dynamic Index.

Continue reading

Commvault calling again

[Preamble: I will be a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event]

I am off to the US again next Monday. I am attending Storage Field Day 14, and it will be a 20+ hour long-haul flight. But this SFD has a special twist, because I will be in Washington DC first for the Commvault GO 2017 conference. And I can't wait.

My first encounter with Commvault goes way back to early 2001. I recall they had their Galaxy version then, but in terms of market share, they were relatively small compared to Veritas and IBM at the time. I was with NetApp back then, and customers in Malaysia had hardly heard of them, except for the people at Shell IT International (SITI). For those of us in the industry, we all knew that SITI worldwide had an exclusive Commvault fork just for them.

Continue reading

Pure Electric!

I didn't get a chance to attend the Pure Accelerate event last month. From the blogs and tweets of my friends, Pure Accelerate was an awesome event. When I got the email invitation for the localized Pure Live! event in Kuala Lumpur, I told myself that I had to attend.

The event was yesterday, and I was not disappointed. Coming off a strong fiscal Q1 2018, it appears that Pure Storage has gotten many things together, chugging at full steam on all fronts.

When Pure Storage first came out, I was one of the early bloggers who took a fancy to them. My 2011 blog mentioned the storage luminaries on their team. Since then, they have come a long way. And it was apt that on the same morning yesterday, the latest Gartner Magic Quadrant for Solid State Arrays 2017 was released.

Continue reading