Bridges to the clouds and more – NetApp NDAS

[Preamble: I have been invited by GestaltIT as a delegate to their Storage Field Day 18 event from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The NetApp Data Fabric Vision

The NetApp Data Fabric vision has always been clear to me. Maybe it is because of my 2 stints with them, during which I got well soaked in their culture. 3 simple points define the vision.

  • The Data Fabric is THE data singularity. Data can be anywhere – on-premises, the clouds, and more.
  • Build bridges, paths and workflow management to the data, to move it to wherever it needs to be.
  • Work with technology partners to build tools and data systems that elevate the value of the data.

That is how I see it. I wrote about the Transcendence of the Data Fabric vision 3+ years ago, and I emphasized the importance of the Data Pipeline in another NetApp blog almost a year ago. The introduction of NetApp Data Availability Services (NDAS) at the recently concluded Storage Field Day 18 was no different, as NetApp constructs data bridges and paths to the AWS Cloud.

NetApp Data Availability Services

The NDAS feature is only available with ONTAP 9.5. With fewer than 5 clicks, data from ONTAP primary systems can be backed up to a secondary ONTAP target (running the NDAS proxy and the Copy to Cloud API), and then on to AWS S3 buckets in the cloud.
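
Assuming the target is an ordinary S3 bucket in one's own AWS account, standard S3 tooling can at least enumerate what has been copied up. Here is a minimal boto3 sketch; the bucket name and prefix are hypothetical placeholders, not anything NDAS mandates:

```python
# Minimal sketch: enumerate objects in the S3 bucket used as a copy-to-cloud
# target. The bucket name and prefix below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
for page in paginator.paginate(Bucket="my-ndas-target-bucket", Prefix="ontap-copies/"):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        print(f'{obj["LastModified"]:%Y-%m-%d %H:%M}  {obj["Size"]:>14,}  {obj["Key"]}')

print(f"Total copied to the cloud: {total_bytes / 1024**3:.2f} GiB")
```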

Continue reading

Minio – the minimalist object storage technology

The Marie Kondo KonMari fever is sweeping the world. Her methods of decluttering and organizing the home are leading to a new way of life – minimalism.

Complicated Storage Experience

Storage technology and its architecture are complex. We pile layer upon layer of abstraction and virtualization into storage designs until, at some stage, choke points lead to performance degradation and management becomes difficult.

I recall a particular training course I attended back in 2006. I had just joined Hitachi Data Systems for the Shell GUSto project, and I was in Baltimore for the Hitachi NAS course. This was not their HNAS (from their later BlueArc acquisition) but their home-grown NAS based on Linux. In the training, we were setting up the NFS service. There were 36 steps required to set up and provision NFS, and if there was a misstep, you started from the first command again. Coming from NetApp at the time, it was horrendous. NetApp ONTAP NFS setup and provisioning probably took 3 commands, while this Hitachi NAS setup and configuration was so much more complex. In the end, the experience was just surreal for me.
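
Minio sits at the other end of that spectrum. As a small taste of the minimalism, here is a sketch against a local Minio server over its S3-compatible API; the endpoint and credentials are placeholders for a lab setup, assuming Minio is listening on its default port 9000:

```python
# Minimal sketch: create a bucket and store an object on a local Minio server
# through its S3-compatible API. Endpoint and credentials are lab placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:9000",      # default Minio API port
    aws_access_key_id="MINIO_ACCESS_KEY",      # placeholder credential
    aws_secret_access_key="MINIO_SECRET_KEY",  # placeholder credential
)

s3.create_bucket(Bucket="demo-bucket")                     # one call to provision
s3.upload_file("report.pdf", "demo-bucket", "report.pdf")  # one call to store

for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

A bucket and an object in a handful of calls, versus 36 steps.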

Introducing Minio to my world, to Malaysia

Continue reading

Quantum Corp should spin off StorNext

What’s happening at Quantum Corporation?

I picked up the latest development news about Quantum Corporation. Last month, in December 2018, they secured a USD 210 million financial lifeline to support their deflating business and their debts. And if you follow their development, they are now on their 3rd CEO in the past 12 months, which is quite extraordinary. What is happening at Quantum Corp?

Quantum Logo (PRNewsFoto/Quantum Corp.)

StorNext – The Swiss Army knife of Data Management

I have known Quantum since 2000, when they were very focused on the DLT tape library business. At that time, prior to the coming of LTO, DLT and its successor, SuperDLT, dominated the tape market together with IBM. In 2006, they acquired ADIC, another tape vendor, and became one of the largest tape library vendors in the world. From the ADIC acquisition, Quantum also got the rights to StorNext, a high-performance scale-out file system. I was deeply impressed with StorNext, and I once called it the Swiss Army knife of Data Management. The versatility of StorNext addressed many of the required functions within the data management lifecycle and workflows, and thus it made its name in the Media and Entertainment space.

Jack of all trades, master of none

However, Quantum has never reached great heights, in my opinion. They try to be everything to everybody, a jack of all trades, master of none. They do backup with their tape libraries and the DXi series, archive and tiering with Lattus, hybrid storage with QXS, and scale-out file systems with StorNext. If they had good business run rates and a healthy pipeline, having a broad product line would be fine and dandy. But Quantum has been changing CEOs like a turning turnstile, and amid “a few” accounting missteps and a 2018 CEO who lasted only 5 months, they had better steady their rocking boat quickly.

Continue reading

The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise to me, because when AWS struck their partnership with VMware in 2016, the undercurrents were there for AWS services to come right to the doorstep of any datacenter. In my mind, AWS has built so far out in the cloud that eventually, the only way to grow is to come back to the core of IT services – the Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. Whatever swings out to the left or the right eventually comes back to the centre. History has proven that time and time again.

SAN and NAS coming back?

A friend of mine casually brought up the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn’t hide my excitement at hearing of the return, but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS have probably been closeted away by the younger generation of storage engineers and storage architects, who are more adept with S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS, if one still prefers that name) and NFS are of lesser prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even the iSCSI IQN are probably alien to many of them.

What is Amazon Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, ranging from a single server to large racks, will be shipped to a customer’s datacenter or any hosting location, packaged with popular AWS compute and storage services, and optionally with VMware technology for virtualized compute resources.

Taken from https://aws.amazon.com/outposts/

In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer just a public cloud computing company. They have just become a hybrid cloud computing company.

Continue reading

Sexy HPC storage is all the rage

HPC is sexy

There is no denying it. HPC is sexy. HPC Storage is just as sexy.

Looking at the latest buzz from the Supercomputing Conference 2018 (SC18), which happened in Dallas 2 weeks ago, the number of storage-related vendors participating was staggering. Panasas, Weka.io, Excelero and BeeGFS are the ones I know of, because I had friends posting their highlights. Then there are the perennial vendors like IBM, Dell, HPE, NetApp, Huawei, Supermicro and many more. A quick check on the SC18 website showed that there were 391 exhibitors on the floor.

And this is driven by the relentless demand for higher and higher computing performance, and along with it, the demand for faster and faster storage performance. The commercialization of Artificial Intelligence (AI) and Deep Learning (DL), together with newer applications and workloads on top of the traditional HPC workloads, is driving these ever-increasing requirements. However, contrary to what many have been led to believe, most enterprise storage platforms were not designed to meet the demands of this new generation of applications and workloads. Why so?

I had a couple of conversations with a few well-known vendors around the topic of HPC storage. Several of the responses thrown back were to throw Flash and NVMe at the high demands of HPC storage performance. In my mind, these responses were too trivial, too irresponsible. So I wanted to write this blog to share my views on HPC storage, and not just about its performance.

The HPC lines are blurring

I picked up this video (below) a few days ago. It was insideHPC’s Rich Brueckner interviewing Dr. Goh Eng Lim, HPE CTO and renowned HPC expert, about the convergence of traditional and commercial HPC applications and workloads.

I liked the conversation in the video because it addressed the 2 different approaches. And I welcomed Dr. Goh’s invitation to the Commercial HPC community to work with the Traditional HPC vendors to help push the envelope towards Exascale SuperComputing.

Continue reading

Is Pure Play Storage good?

I post storage- and cloud-related articles to my unofficial SNIA Malaysia Facebook community (you are welcome to join) every day. It is a community I started over 9 years ago, and there is active, lively banter over the posts of the day. Keeping things casual and personal was the original reason I started the community on Facebook rather than on LinkedIn, and I have been curating it religiously for the longest time.

The Big 5 of Storage (it was Big 6 before this)

Looking back 8-9 years, the storage vendor landscape of today has not changed much. The Big 5 hegemony is still there, still dominating the Gartner Magic Quadrant for enterprise and mid-range arrays, and still there in the all-flash quadrant as well, despite the presence of Pure Storage in that market.

The Big 5 of today – Dell EMC, NetApp, HPE, IBM and Hitachi Vantara – were the Big 6 of 2009-2010, consisting of EMC, NetApp, Dell, HP, IBM and Hitachi Data Systems. The all-flash market, or Solid State Arrays (SSA) as Gartner calls it, was still an afterthought, and Pure Storage had just been founded. Pure Storage did not appear on my radar until 2 years later, when I blogged about Pure Storage’s presence in the market.

Here’s a look at the Gartner Magic Quadrant for 2010:

We see pure-play storage vendors in the likes of EMC, NetApp, Hitachi Data Systems (before they brought UCP into the fold), 3PAR, Compellent, Pillar Data Systems, BlueArc, Xiotech, Nexsan, DDN and Infortrend. And when we compare that to the 2017 Magic Quadrant (I have not seen the 2018 one yet) below:

Continue reading

Disaggregation or hyperconvergence?

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

There is an argument about NetApp‘s HCI (hyperconverged infrastructure). It is not really a hyperconverged product at all, according to one school of thought. Maybe NetApp is just riding on the hyperconvergence marketing coattails and just wants to be associated with the HCI hot streak. On the same spectrum of the argument, Datrium decided to call their technology open convergence, clearly trying not to be lumped in with hyperconvergence.

Hyperconvergence has been enjoying a renaissance for a few years now. Leaders like Nutanix, VMware vSAN, Cisco HyperFlex and HPE SimpliVity have been dominating the scene, touting great IT benefits and the elimination of IT inefficiencies. But in these technologies, performance and capacity are tightly intertwined. In each of the individual hyperconverged nodes, typically starting with a trio of nodes, the processing power and the storage capacity come together. You have to accept both resources as a node. If you want more processing power, you also get the additional storage capacity that comes with that node. If you want more storage capacity, you get more processing power whether you like it or not. This means you end up with underutilized resources over time, and a cluster that is definitely not right-sized for the job.
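
A quick back-of-the-envelope sketch illustrates the mismatch; the node specifications and workload numbers below are made up purely for illustration, not any particular vendor's SKU:

```python
# Back-of-the-envelope sketch: sizing an HCI cluster where compute and capacity
# scale together. All node specs and workload numbers are illustrative only.
import math

CORES_PER_NODE = 32   # hypothetical cores per HCI node
TB_PER_NODE = 20      # hypothetical usable capacity per HCI node (TB)
MIN_NODES = 3         # typical starting trio of nodes

needed_tb = 200       # what the workload actually needs
needed_cores = 64

nodes = max(math.ceil(needed_tb / TB_PER_NODE),
            math.ceil(needed_cores / CORES_PER_NODE),
            MIN_NODES)

print(f"Nodes required       : {nodes}")
print(f"Capacity provisioned : {nodes * TB_PER_NODE} TB for {needed_tb} TB needed")
print(f"Cores provisioned    : {nodes * CORES_PER_NODE} cores for {needed_cores} needed")
```

In this made-up case, satisfying the capacity drags along five times the compute the workload asked for, which is exactly the underutilization described above. Separate compute and storage nodes let each dimension scale on its own.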

And here in Malaysia, we have seen vendors throw in hyperconverged infrastructure solutions for every single requirement. That was why I wrote a piece about some zealots of hyperconverged solutions 3+ years ago. When you think you have a magical hammer, every problem is a nail. 😉

On my radar, NetApp and Datrium are the only 2 vendors that offer separate nodes for compute and for storage capacity and still fall within the hyperconverged space. This approach obviously benefits the IT planners and the IT architects, and the customers too, because they get what they want for their business. However, the disaggregation of compute and storage fuels the argument over whether these 2 companies belong in the hyperconverged infrastructure category.

Continue reading

My first TechFieldDay

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation were paid by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I have attended a bunch of Storage Field Days over the years, but I have never attended a Tech Field Day. This coming week, I will be attending their 17th edition, Tech Field Day 17, which will be my first. I have always enjoyed Storage Field Days. Every time I joined as a delegate, there were new things to discover, and almost always, serendipity happened.

Continue reading

The Commvault 5Ps of change

[Preamble: I have been invited by Commvault via GestaltIT as a delegate to their Commvault GO conference from Oct 9-11, 2018 in Nashville, TN, USA. My expenses, travel and accommodation were paid by Commvault, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am a delegate at Commvault GO 2018, happening now in Nashville, Tennessee. I was also a delegate at Commvault GO 2017, held at National Harbor, Washington D.C. Because of scheduling last year, I only managed to stay about a day and a half before flying off to the West Coast. This year, I was given the opportunity to experience the full conference at Commvault GO 2018. And I was able to savour the energy, the mindset and the culture of Commvault this time around.

Make no mistake, folks, BIG THINGS are happening at Commvault. I can feel it with their people, their partners and their customers at the GO conference. How so?

For one, Commvault is making big changes across People, Process, Pricing, Products and Perception (that’s 5 Ps). Starting with Products, they have consolidated from 20+ products into 4, simplifying how the industry perceives Commvault. The diagram below shows the 4-product portfolio.

Continue reading