Quantum Corp should spin off Stornext

What’s happening at Quantum Corporation?

I picked up the latest development news about Quantum Corporation. Last month, in December 2018, they secured a USD 210 million financial lifeline to support their deflating business and their debts. And if you follow their developments, they are on their 3rd CEO in the past 12 months, which is quite extraordinary. What is happening at Quantum Corp?


Stornext – The Swiss Army knife of Data Management

I have known Quantum since 2000, when they were very focused on the DLT tape library business. At that time, prior to the coming of LTO, DLT and its successor, SuperDLT, dominated the tape market together with IBM. In 2006, they acquired ADIC, another tape vendor, and became one of the largest tape library vendors in the world. From the ADIC acquisition, Quantum also got the rights to Stornext, a high-performance scale-out file system. I was deeply impressed with Stornext, and I once called it the Swiss Army knife of Data Management. The versatility of Stornext addresses many of the required functions within the data management lifecycle and workflows, and thus it has made its name in the Media and Entertainment space.

Jack of all trades, master of none

However, Quantum has never reached great heights, in my opinion. They are everything to everybody, a Jack of all trades, master of none. They do backup with their tape libraries and DXi series, archive and tiering with Lattus, hybrid storage with QXS, and scale-out file systems with Stornext. If they had a good business run rate and a healthy pipeline, having a broad product line would be fine and dandy. But Quantum has been churning through CEOs like a turnstile, and amid “a few” accounting missteps and a 2018 CEO who only lasted 5 months, they had better steady their rocking boat quickly.

Storage and Data Management Planning crucial for Malaysian SMBs

Hybrid IT for 2019 and beyond

2019 is here.

I am especially buoyed by the strong network storage industry footing in 2018, as reported by The Register last week. 2018 was certainly a blowout year for storage infrastructure and storage software, both on-premises and on cloud computing platforms. The AWS Outposts announcement over a month ago also affirmed that the new world is Hybrid IT. And there is plenty to look forward to in 2019.

Malaysian Economic Doldrums

Things are not as rosy for the Malaysian economy in 2019. It will be a challenging year, as reported by The Edge, a local business publication. GDP (gross domestic product) growth shrank from 5.9% in 2017 to 4.65% in the first half of 2018, and is estimated at 4.9% for 2019. With an inexperienced new government, a weak currency, and more competitive economies emerging in ASEAN, Malaysian small and medium businesses (SMBs) could be challenged.

The knee-jerk reaction would be to cut IT spending and revert to buying on price. This has happened too often, because there are always other operating costs that may be more pressing. Furthermore, many SMBs are still aimless when it comes to transforming their businesses for the digital data era, groping in the dark and sputtering to get their money’s worth from their IT investments. Often, many are misinformed and stumble, resulting in much higher wastage and costs.

There is a local saying here:

Good thing No Cheap; Cheap thing No Good

And the saying is very apt in describing that there is value in investing well, and that the price factor should not always be the main criterion when buying IT infrastructure, software and services.

Many of these SMBs also lack experienced IT staff to manage their IT environment. There is also a pressing urgency to modernize IT, because a well-planned and executed IT strategy and operations would definitely increase their Competitive Advantage.

From the past to the future

2019 beckons. The year 2018 is coming to a close, and I am looking back at what I blogged in past years to reflect on what the future holds.

The evolution of the Data Services Platform

In late 2017, I blogged about the Data Services Platform. Storage is no longer just the storage infrastructure we know, but has evolved into a platform from which a plethora of data services are served. The face of storage is continually changing as the IT industry changes. I take this opportunity to reflect on what I have written since I started blogging years ago, and to look at the articles that are shaping the landscape today, as well as some duds.

Some good ones …

One of the most memorable ones is about the memory cloud. I wrote the article when Dell acquired a small company by the name of RNA Networks. I vividly recall what was going through my mind when I wrote that blog. With SAN, NAS and DAS, and even FAN (File Area Network), happening during that period, the first thing that came to mind was the System Area Network, the original objective of InfiniBand and RDMA. I believed the final pool where storage would end up was memory, hence I called it “The Last Bastion – Memory”. RNA’s technology became part of the Dell Fluid Architecture.

True enough, the present technologies of Storage Class Memory and SNIA’s NVDIMM are along the lines of the memory cloud I espoused years ago.

What about Fibre Channel over Ethernet (FCoE)? It wasn’t a compelling enough technology for me when it came into the game. Reduced port and cable counts, and reduced power consumption, were what the FCoE folks were pitching, but the cost of putting in the FC switches and the HBAs was just too great an investment. In the end, we could see the cracks in the FCoE story, and I wrote a premature eulogy for FCoE in my 2012 blog. I got some unsavoury comments for writing that blog back then, but fast forward to the present, and FCoE isn’t a force anymore.

Weeks ago, Amazon Web Services (AWS) became a hybrid cloud service provider/vendor with the Outposts announcement. It didn’t surprise me, but it may have shaken the traditional systems integrators. I took that stance 2 years ago, when AWS partnered with VMware, and juxtaposed it with the philosophical quote from the 1993 Jurassic Park movie – “Life will not be contained, … Life finds a way”.


Is Pure Play Storage good?

I post storage and cloud related articles to my unofficial SNIA Malaysia Facebook community (you are welcome to join) every day. It is a community I started over 9 years ago, and there is active, lively banter over the posts of the day. Keeping things casual and personal was the original reason why I started the community on Facebook rather than on LinkedIn, and I have been curating it religiously for the longest time.

The Big 5 of Storage (it was Big 6 before this)

Looking back 8-9 years, the storage vendor landscape of today has not changed much. The Big 5 hegemony is still there, still dominating the Gartner Magic Quadrant for Enterprise and Mid-range Arrays, and still there in the All-Flash quadrant as well, albeit with the presence of Pure Storage in that market.

The Big 5 of today – Dell EMC, NetApp, HPE, IBM and Hitachi Vantara – were the Big 6 of 2009-2010, consisting of EMC, NetApp, Dell, HP, IBM and Hitachi Data Systems. The All-Flash market, or Solid State Arrays (SSA) as Gartner calls it, was still an afterthought, and Pure Storage had just been founded. Pure Storage did not appear on my radar until 2 years later, when I blogged about Pure Storage’s presence in the market.

Here’s a look at the Gartner Magic Quadrant for 2010:

We see Pure Play Storage vendors, the likes of EMC, NetApp, Hitachi Data Systems (before they folded UCP into their portfolio), 3PAR, Compellent, Pillar Data Systems, BlueArc, Xiotech, Nexsan, DDN and Infortrend. And when we compare that to the 2017 Magic Quadrant (I have not seen the 2018 one yet) below:


Disaggregation or hyperconvergence?

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

There is an argument about NetApp’s HCI (hyperconverged infrastructure). It is not really a hyperconverged product at all, according to one school of thought. Maybe NetApp is just riding on the hyperconvergence marketing coattails, and just wanted to be associated with the HCI hot streak. In the same spectrum of the argument, Datrium decided to call their technology open convergence, clearly trying not to be related to hyperconvergence.

Hyperconvergence has been enjoying a period of renaissance for a few years now. Leaders like Nutanix, VMware vSAN, Cisco HyperFlex and HPE SimpliVity have been dominating the scene, touting great IT benefits and the elimination of IT inefficiencies. But in these technologies, performance and capacity are tightly intertwined. That means that in each of the individual hyperconverged nodes, typically starting with a trio of nodes, the processing power and the storage capacity come together. You have to accept both resources as a node. If you want more processing power, you get the additional storage capacity that comes with that node. If you want more storage capacity, you get more processing power whether you like it or not. This means you end up with underutilized resources over time, and a deployment that is definitely not rightsized for the job.
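To make the coupling concrete, here is a small back-of-the-envelope sketch. All the node specifications and workload numbers below are hypothetical, not taken from any vendor datasheet; they only illustrate how buying compute and capacity in fixed bundles strands resources:

```python
import math

# Hypothetical node specification: every HCI node bundles both resources.
CORES_PER_NODE = 16
TB_PER_NODE = 8

def nodes_needed(cores_required: int, tb_required: float) -> int:
    """Because compute and capacity ship together, we must buy enough
    nodes to satisfy whichever resource demands more."""
    return max(math.ceil(cores_required / CORES_PER_NODE),
               math.ceil(tb_required / TB_PER_NODE))

def stranded_resources(cores_required: int, tb_required: float) -> dict:
    """Resources bought but left idle once the demand is met."""
    n = nodes_needed(cores_required, tb_required)
    return {
        "nodes": n,
        "idle_cores": n * CORES_PER_NODE - cores_required,
        "stranded_tb": n * TB_PER_NODE - tb_required,
    }

# A compute-heavy workload: 60 cores of processing but only 12 TB of data.
print(stranded_resources(60, 12))
```

For this made-up workload, satisfying 60 cores forces the purchase of 4 nodes, which drags along 20 TB of capacity that sits idle; a disaggregated design could have paired those compute nodes with only the capacity actually needed.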

And here in Malaysia, we have seen vendors throw in hyperconverged infrastructure solutions for every single requirement. That was why I wrote a piece about some zealots of hyperconverged solutions 3+ years ago. When you think you have a magical hammer, every problem is a nail. 😉

On my radar, NetApp and Datrium are the only 2 vendors that offer separate nodes for compute processing and storage capacity and still fall within the hyperconverged space. This approach obviously benefits the IT planners and the IT architects, and the customers too, because they get what they want for their business. However, the disaggregation of compute processing and storage leads to the argument of whether these 2 companies belong in the hyperconverged infrastructure category.


The Commvault 5Ps of change

[Preamble: I have been invited by Commvault via GestaltIT as a delegate to their Commvault GO conference from Oct 9-11, 2018 in Nashville, TN, USA. My expenses, travel and accommodation are paid by Commvault, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

I am a delegate of Commvault GO 2018 happening now in Nashville, Tennessee. I was also a delegate of Commvault GO 2017 held at National Harbor, Washington D.C. Because of scheduling last year, I only managed to stay about a day and a half before flying off to the West Coast. This year I was given the opportunity to experience the full conference at Commvault GO 2018. And I was able to savour the energy, the mindset and the culture of Commvault this time around.

Make no mistake, folks, BIG THINGS are happening at Commvault. I can feel it with their people, their partners and their customers at the GO conference. How so?

For one, Commvault is making big changes across People, Process, Pricing, Products and Perception (that’s 5 Ps). Starting with Products, they have consolidated from 20+ products into 4, simplifying the perception of how the industry sees Commvault. The diagram below shows the 4-product portfolio.


The Malaysian Openstack storage conundrum

The Openstack blips on my radar have ratcheted up this year. I have been asked to put together an IaaS design several times, with either the RedHat or the Ubuntu flavour, and it is a good thing to see the Openstack interest level going up in the Malaysian IT scene. Coming into its 8th year, Openstack has become a mature platform, but my observations tell me that its storage-related projects are still not well known.

I was one of the speakers at the Openstack Malaysia 8th Summit over a month ago. I started my talk with a question – “Can anyone name the 4 Openstack storage projects?“. The response from the floor was “Swift, Cinder, Ceph and … (nobody knew the 4th one)”. It took me by surprise that the floor almost unanimously agreed that Ceph is one of the Openstack projects, when we know that Ceph isn’t one. Ceph? An Openstack storage project?

Besides Swift and Cinder, there is Glance (depending on how you look at it) and the least known of them all … Manila.

I have also been following many Openstack Malaysia discussions and discussion groups for a while. That Ceph response showed the lack of awareness and knowledge of the Openstack storage projects among the Malaysian IT crowd, and it is a difficult issue to tackle. The storage conundrum continues to perplex me, because many whom I have spoken to seem to avoid talking about storage, viewing it like a dark art or some voodoo thingy.

I view storage as the cornerstone of the 3 infrastructure pillars – compute, network and storage – of Openstack, or any software-defined infrastructure stack for that matter. So it is important to get an understanding of the Openstack storage projects, especially Cinder.

Cinder is the abstraction layer that gives management and control of the block storage beneath it. In a nutshell, it allows Openstack VMs and applications to consume block storage in a consistent and secure way, regardless of the storage infrastructure or technology beneath. This is achieved through the cinder-volume service, for which most storage vendors write a driver to integrate with (as shown in the diagram below).

Diagram in slides is from Mirantis found at https://www.slideshare.net/mirantis/openstack-architecture-43160012


cinder-volume, together with cinder-api and cinder-scheduler, forms the Block Storage services for Openstack. There is another service, cinder-backup, which integrates with Openstack Swift, but at my last check this service is not as popular as cinder-volume, which is widely supported by many storage vendors with both Fibre Channel and iSCSI implementations, and in a few vendors, with NFS and SMB as well.
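To show how a backend plugs into cinder-volume, below is a minimal sketch of a cinder.conf backend stanza using the in-tree LVM/iSCSI reference driver. The backend name, the volume group name and the backend label are my own placeholders, and option names can vary slightly between Openstack releases:

```ini
[DEFAULT]
# Backends the cinder-volume service should load
enabled_backends = lvm-1

[lvm-1]
# In-tree reference driver; storage vendors supply their own driver class here
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
volume_backend_name = LVM_iSCSI
```

A vendor array is wired in the same way: its driver class goes into volume_driver, and its driver-specific options go into the backend stanza.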

Industry 4.0 secret gem with Dell

[Preamble: I have been invited by Dell Technologies as a delegate to their upcoming Dell Technologies World from Apr 30-May 2, 2018 in Las Vegas, USA. My expenses, travel and accommodation will be paid by Dell Technologies, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

This may seem a little strange. How does Industry 4.0 relate to Dell Technologies?

Recently, I was involved in an Industry 4.0 consortium called Data Industry 4.0 (di 4.0). The objective of the consortium is to combine the foundations of the 5S (seiri, seiton, seiso, seiketsu, and shitsuke), QRQC (Quick Response Quality Control) and Kaizen methodologies with the 9 pillars of Industry 4.0, with a strong data-insight focus.

Industry 4.0 has been the latest trend in new technologies in the manufacturing world. It is taking the manufacturing industry segment by storm, led by the nine pillars of Industry 4.0:

  • Horizontal and Vertical System Integration
  • Industrial Internet of Things
  • Simulation
  • Additive Manufacturing
  • Cloud Computing
  • Augmented Reality
  • Big Data and Analytics
  • Cybersecurity
  • Autonomous Robots


Own the Data Pipeline

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am a big proponent of Go-to-Market (GTM) solutions. Technology does not stand alone. It must be in an ecosystem, and in each industry, in each segment of each respective industry, every ecosystem is unique. And when we amalgamate data, the storage infrastructure technologies and the data management into the ecosystem, we reap the benefits in that ecosystem.

Data moves in the ecosystem, from system to system, north to south, east to west and vice versa, random, sequential, ad-hoc. Data acquires different statuses, different roles, different relevances in its lifecycle through the ecosystem. From it, we derive the flow, a workflow of data creating a data pipeline. The Data Pipeline concept has been around since the inception of data.

To illustrate my point, I created one for the Oil & Gas – Exploration & Production (E&P) upstream segment some years ago.



NetApp and IBM gotta take risks

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Storage Field Day 15 was full of technology. There were a few avant-garde companies in the line-up which I liked, but unfortunately NetApp and IBM were the 2 companies that came in at the least interesting end of the spectrum.

IBM presented their Spectrum Protect Plus. The data protection space, especially backup, isn’t exactly my forte when it comes to solution architecture, but I know enough to get by. However, as IBM presented, there were so many questions racing through my mind. I kept interrupting myself because almost everything presented wasn’t new to me. “Wait a minute … didn’t Company X already have this?” or “Company Y had this years ago” or “Isn’t this…?”

I was questioning myself, trying to validate my understanding of the backup tech shared by the IBM Spectrum Protect Plus team. They presented with such passion and gusto that it made me wonder if I was wrong in the first place. Maybe my experience and knowledge in the backup software space weren’t good enough. But then the chatter in the SFD15 Slack channel started pouring in. The comments, unfortunately, were mostly negative, and jibes became jokes. One comment, in particular, nailed it: “This is Veeam 0.2”, and then someone else downgraded it to version 0.1.
