The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise for me, because when AWS struck their partnership with VMware in 2016, the undercurrents were already there for AWS services to arrive right at the doorstep of any datacenter. In my mind, AWS has built so far out into the cloud that eventually, the only way to grow is to come back to the core of IT services: the Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. Whatever swings out to the left or right eventually comes back to the centre again. History has proven that, time and time again.

SAN and NAS coming back?

A friend of mine casually spoke about the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn’t hide my excitement at hearing of the return but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS have probably been closeted by the younger generation of storage engineers and storage architects, who are more adept with S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS if one still prefers it) and NFS are of lesser prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even the iSCSI IQN are probably alien to many of them.
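For those who have only ever spoken S3, here is a minimal sketch of the gap, with hypothetical paths and bucket names: NFS advisory locking is cooperative byte-range locking via fcntl, something an S3 PUT never has to think about.

```python
# Old school vs new school, side by side. The NFS path and the S3 bucket below
# are made up for illustration.
import fcntl
import boto3

# Old school: take an advisory (cooperative) byte-range lock on a file sitting
# on an NFS mount; other processes must also call lockf() for it to mean anything.
with open("/mnt/nfs_share/ledger.dat", "r+b") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # exclusive advisory lock
    f.write(b"update")              # ... the protected update ...
    fcntl.lockf(f, fcntl.LOCK_UN)   # release

# New school: no mounts, no locks, just an authenticated HTTP PUT via the S3 API.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="ledger/update-001", Body=b"update")
```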

What is AWS Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, from a single server up to large racks, will be shipped to a customer’s datacenter or any hosting location, packaged with popular AWS compute and storage services, and optionally with VMware technology for virtualized computing resources.

Taken from https://aws.amazon.com/outposts/
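The pitch, as I understand it, is that the same AWS APIs and tooling will work against the on-premises hardware. If that holds, provisioning capacity in your own datacenter could look exactly like provisioning it in the region. A hypothetical boto3 sketch, with every ID made up:

```python
# Hypothetical sketch: assuming Outposts exposes the same EC2 API as the region,
# launching a VM on hardware sitting in your own datacenter is the same boto3
# call you already use today. All IDs below are invented for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # a subnet homed to the on-premises Outpost
)
print(response["Instances"][0]["InstanceId"])
```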

In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer just a public cloud computing company. They have just become a hybrid cloud computing company. Continue reading

Sexy HPC storage is all the rage

HPC is sexy

There is no denying it. HPC is sexy. HPC Storage is just as sexy.

Looking at the latest buzz from the Supercomputing Conference 2018 (SC18), which happened in Dallas 2 weeks ago, the number of storage-related vendors participating was staggering. Panasas, Weka.io, Excelero and BeeGFS are the ones I know of because friends were posting their highlights. Then there are the perennial vendors like IBM, Dell, HPE, NetApp, Huawei, Supermicro, and so many more. A quick check on the SC18 website showed that there were 391 exhibitors on the floor.

And this is driven by the unrelenting demand for higher and higher computing performance, and along with it, the demand for faster and faster storage performance. Commercialization of Artificial Intelligence (AI), Deep Learning (DL) and newer applications and workloads, together with the traditional HPC workloads, is driving these ever increasing requirements. However, contrary to what many have been led to believe, most enterprise storage platforms were not designed to meet the demands of this new generation of applications and workloads. Why so?

I had a couple of conversations with a few well-known vendors around the topic of HPC storage. Several of the responses thrown back were simply to throw Flash and NVMe at the high demands of HPC storage performance. In my mind, these responses were too trivial, too irresponsible. So I wanted to write this blog to share my views on HPC storage, and not just about its performance.

The HPC lines are blurring

I picked up this video (below) a few days ago. It was insideHPC’s Rich Brueckner interviewing Dr. Goh Eng Lim, HPE CTO and renowned HPC expert, about the convergence of traditional and commercial HPC applications and workloads.

I liked the conversation in the video because it addressed the two different approaches. And I welcomed Dr. Goh’s invitation to the commercial HPC community to work with the traditional HPC vendors to help push the envelope towards exascale supercomputing.

Continue reading

Disaggregation or hyperconvergence?

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

There is an argument about NetApp’s HCI (hyperconverged infrastructure). It is not really a hyperconverged product at all, according to one school of thought. Maybe NetApp is just riding on the hyperconvergence marketing coattails, wanting to be associated with the HCI hot streak. On the same spectrum of the argument, Datrium decided to call their technology open convergence, clearly trying not to be associated with hyperconvergence.

Hyperconvergence has been enjoying a renaissance for a few years now. Leaders like Nutanix, VMware vSAN, Cisco HyperFlex and HPE SimpliVity have been dominating the scene, touting great IT benefits and the elimination of IT inefficiencies. But in these technologies, performance and capacity are tightly intertwined. In each of the individual hyperconverged nodes, typically starting with a trio of nodes, the processing power and the storage capacity come together. You have to accept both resources as a node. If you want more processing power, you get the additional storage capacity that comes with that node. If you want more storage capacity, you get more processing power whether you like it or not. This means you end up with underutilized resources over time, definitely not rightsized for the job.
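A back-of-the-envelope sketch of that rightsizing problem, using made-up node sizes of 32 cores and 20 TB each:

```python
# Illustration only: the node sizes and the workload are hypothetical.
import math

CORES_PER_NODE = 32   # made-up node size
TB_PER_NODE = 20      # made-up node size

def nodes_needed(cores_required, tb_required):
    """Nodes are the only unit of growth, so buy for the larger of the two demands."""
    return max(math.ceil(cores_required / CORES_PER_NODE),
               math.ceil(tb_required / TB_PER_NODE))

# A compute-heavy workload: 200 cores but only 60 TB of data.
n = nodes_needed(200, 60)
print(f"nodes required    : {n}")                          # 7 nodes to cover compute
print(f"stranded capacity : {n * TB_PER_NODE - 60} TB")    # 80 TB bought but unused
```

Seven nodes to satisfy the compute demand leaves 80 TB of capacity paid for and sitting idle; with disaggregated compute and storage nodes, you would only have bought the storage you needed.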

And here in Malaysia, we have seen vendors throw hyperconverged infrastructure solutions at every single requirement. That was why I wrote a piece about some zealots of hyperconverged solutions 3+ years ago. When you think you have a magical hammer, every problem is a nail. 😉

On my radar, NetApp and Datrium are the only 2 vendors that offer separate nodes for compute processing and storage capacity and still fall within the hyperconverged space. This approach obviously benefits the IT planners, the IT architects, and the customers too, because they get what they want for their business. However, the disaggregation of compute processing and storage leads to the argument of whether these 2 companies belong in the hyperconverged infrastructure category.

Continue reading

Pondering Red Hat’s future with IBM

I woke up yesterday morning to a shocker of a news item. IBM announced that they were buying Red Hat for USD34 billion. Never did it cross my mind that Red Hat would sell, but I guess USD190.00 per share was too tempting. Red Hat (RHT) was trading at USD116.68 at the previous Friday’s close.

Red Hat is one of my favourite technology companies. I love their Linux development and progress, and I use a lot of Fedora and CentOS in my hobbies. I started with Red Hat back in 2000, when I became obsessed with getting my RHCE (Red Hat Certified Engineer). I recall spending almost every weekend (Saturday and Sunday) back in 2002 in the office, learning Red Hat and hacking scripts to get really good at it. I got certified on RHCE 4 with a 96% passing mark, and I was very proud of my certification.

One of my regrets was not joining Red Hat in 2006. I was offered an SE job by Josep Garcia, the very first such position in Malaysia. Instead, I took up the Hitachi Data Systems job to helm the project implementation and delivery for the Shell GUSto project. Things might have turned out differently if I had.

The IBM acquisition of Red Hat left a poignant feeling in me. In many ways, Red Hat has been the shining star of Linux. They are the only significant player left leading the charge for open source. They are the largest contributor to the OpenStack projects and continue to support the project strongly, whilst early protagonists like HPE, Cisco and Intel have reduced their support. They have, of course, been a perennial top-3 contributor to the Linux kernel since the very early days. And Red Hat continues to contribute to projects such as containers and Kubernetes, and made that commitment deeper with their acquisition of CoreOS a few months back.

Continue reading

Oracle Cloud Infrastructure to prove skeptics wrong

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

The much-maligned Oracle Cloud is getting a fresh reboot, starting with their Oracle Cloud Infrastructure (OCI), and significant enhancements and technology updates were announced at Oracle OpenWorld this week. I had the privilege of hearing about Oracle Cloud’s new attack plan when they presented at Tech Field Day 17 last week.

Oracle Cloud has not had the best of days in recent months. Thomas Kurian’s resignation as President of Product Development was highly publicized, stemming from a disagreement with CTO and founder Larry Ellison over cloud software strategy. Then there is an ongoing lawsuit alleging that Oracle misrepresented its cloud revenue growth, which puts Oracle in a bad light.

On the local front here in Malaysia, I have heard through the grapevine about the aggressive nature of Oracle personnel pushing partners and customers to adopt their cloud services, using legal scare tactics around database licensing. A buddy of mine, who was previously the cloud business development manager at CTC Global, also shared Oracle Cloud’s shortcomings compared to Amazon Web Services and Microsoft Azure a year ago.

The Oracle Cloud Infrastructure team aimed to turn around those bad perceptions, starting with the delegates of Tech Field Day 17, yours truly included. Their strategy was clear: Oracle Cloud Infrastructure runs the highest-performance, most enterprise-grade Infrastructure-as-a-Service (IaaS), bar none. Unlike IBM Cloud, which in my opinion is a wishy-washy cloud service platform, Oracle Cloud’s ambition is solid.

They did a demo of the JD Edwards EnterpriseOne application, and they continued to demonstrate their prowess in delivering the highest-performance computing experience for enterprise-grade workloads. That enterprise pedigree is clear.

Just this week, Amazon Prime Day had an outage. Amazon is in the process of weaning itself off the Oracle database across its entire ecosystem by 2020, and this outage clearly showed that the Oracle database and its enterprise applications run best on Oracle Cloud Infrastructure.

Continue reading

The Network is Still the Computer

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

Sun Microsystems coined the phrase “The Network is the Computer”. It became one of the most powerful ideologies in the computing world, but over the years, many technology companies have tried to emulate and practise the mantra, only to fall short.

I had never heard of DriveScale. It wasn’t on my radar until the legendary NFS guru, Brian Pawlowski, joined them in April this year. Beepy, as he is known, was CTO at NetApp and later at Pure Storage, and held many technology leadership roles, including leading the development of NFSv3 and v4.

Prior to Tech Field Day 17, I was given some “homework”. Stephen Foskett, Chief Cat Herder (as he is known) of Tech Field Days and Storage Field Days, highly recommended DriveScale and asked the delegates to pick up some notes on their technology. Going through a couple of the videos, DriveScale’s message and philosophy resonated well with me. Perhaps it was their Sun Microsystems DNA? Many of the DriveScale team members were from Sun, and I was previously from Sun as well. I was drinking Sun’s Kool-Aid by the bucketload even before I graduated in 1991, so what DriveScale preached made a lot of sense to me.

DriveScale is all about scale-out architecture at webscale, to address the massive scale of data processing. To understand it more deeply, we must think about “Data Locality” and “Data Mobility”. I frequently use these two points of discussion in my consulting practice when architecting and designing data center infrastructure. The gist of data locality is simple: the closer the data is to the processing, the cheaper, lighter and more efficient things get. Moving data, the data mobility part, is expensive.
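To put a rough number on that, here is a quick, hypothetical calculation, with made-up link speeds and dataset size: moving a 100 TB working set across a 10 Gbps network path versus streaming it off local NVMe.

```python
# Illustration only: the dataset size and throughput figures are assumptions.
DATASET_TB = 100
TB = 10**12            # bytes in a terabyte (decimal)

wan_gbps = 10          # a 10 Gbps network path between the data and the compute
local_gbps = 24        # roughly 3 GB/s from a single local NVMe device

def transfer_hours(size_bytes, gbps):
    return size_bytes * 8 / (gbps * 10**9) / 3600

print(f"over the wire : {transfer_hours(DATASET_TB * TB, wan_gbps):.1f} hours")
print(f"local NVMe    : {transfer_hours(DATASET_TB * TB, local_gbps):.1f} hours")
# ~22 hours across the network versus ~9 hours locally, before any protocol
# overhead; keeping the processing next to the data avoids the move entirely.
```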

Continue reading

The Dell EMC Data Bunker

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

Another new announcement graced the Tech Field Day 17 delegates this week. The Dell EMC Data Protection group announced their Cyber Recovery solution. The Cyber Recovery vault solution and services are touted as “the last line of data protection defense against cyber-attacks” for the enterprise.

Security breaches and ransomware attacks have been rampant, and they are wreaking havoc on organizations everywhere. These breaches and attacks cost businesses tens, or even hundreds, of millions, and are capable of bringing these businesses to their knees. One known practice is to corrupt backup metadata or catalogs, rendering operational recovery helpless, before the perpetrators attack the primary data source. And there are times when the malicious and harmful agent dwells in the organization’s network or servers for long periods of time, launching attacks and infecting primary images or gold copies of corporate data at the opportune moment.

The Cyber Recovery (CR) solution from Dell EMC focuses on recovery from an isolated copy of the data. The solution isolates strategic and mission-critical secondary data and preserves the integrity and sanctity of that secondary data copy. Think of the CR solution as the data bunker after doomsday has descended.

The CR solution is based on the Data Domain platforms. As described in the diagram below, backups occur in the corporate network to a Data Domain appliance acting as the backup repository. This is just the usual daily backup, and is for operational recovery.

Diagram from Storage Review. URL Link: https://www.storagereview.com/dell_emc_releases_cyber_recovery_software
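To make the vault idea concrete, here is a conceptual sketch of the cycle an isolated copy might go through. This is not Dell EMC’s Cyber Recovery API; the classes and retention period are invented purely to illustrate the open-the-air-gap, sync, lock, close pattern:

```python
# Conceptual sketch only: stand-in classes, not Dell EMC's actual software.
from datetime import datetime, timedelta

class AirGapLink:
    """Stand-in for the replication link between production and the vault."""
    def __init__(self):
        self.open = False
    def enable(self):
        self.open = True
    def disable(self):
        self.open = False

class VaultCopy:
    """Stand-in for one point-in-time copy held inside the vault."""
    def __init__(self, taken_at):
        self.taken_at = taken_at
        self.locked_until = None
    def apply_retention_lock(self, days):
        self.locked_until = self.taken_at + timedelta(days=days)

def vault_cycle(link, retention_days=30):
    link.enable()                                  # open the air gap briefly
    try:
        copy = VaultCopy(datetime.utcnow())        # sync the latest backup copy
        copy.apply_retention_lock(retention_days)  # make it immutable for a while
        return copy                                # integrity analysis would follow
    finally:
        link.disable()                             # close the air gap again

copy = vault_cycle(AirGapLink())
print(f"vault copy locked until {copy.locked_until:%Y-%m-%d}")
```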

Continue reading

The Commvault 5Ps of change

[Preamble: I have been invited by Commvault via GestaltIT as a delegate to their Commvault GO conference from Oct 9-11, 2018 in Nashville, TN, USA. My expenses, travel and accommodation are paid for by Commvault, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

I am a delegate at Commvault GO 2018, happening now in Nashville, Tennessee. I was also a delegate at Commvault GO 2017, held at National Harbor, Washington D.C. Because of scheduling last year, I only managed to stay about a day and a half before flying off to the West Coast. This year I was given the opportunity to experience the full conference, and I was able to savour the energy, the mindset and the culture of Commvault this time around.

Make no mistake, folks: BIG THINGS are happening with Commvault. I can feel it with their people, their partners and their customers at the GO conference. How so?

For one, Commvault is making big changes across People, Process, Pricing, Products and Perception (that’s the 5 Ps). Starting with Products, they have consolidated from 20+ products into 4, simplifying how the industry perceives Commvault. The diagram below shows the 4-product portfolio.

Continue reading

Let there be light with Commvault Activate

[Preamble: I have been invited by Commvault via GestaltIT as a delegate to their Commvault GO conference from Oct 9-11, 2018 in Nashville, TN, USA. My expenses, travel and accommodation are paid for by Commvault, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

Nobody sees well in the dark.

My interest is piqued and I want to know more about Commvault Activate. The conversation started after lunch yesterday as the delegates were walking back to the Gaylord Opryland Convention Center. I was walking next to Patrick McGrath, one of Commvault’s marketing folks, and we struck up a conversation in the warm breeze. Patrick started sharing a bit about Commvault Activate, what it could do, and the many relevant business cases possible for the solution.

There was a déjà vu moment, bringing my thoughts back to mid-2009. I had just been invited by a friend to join him in restructuring his company, Real Data Matrix (RDM). They were a NetApp distributor, then a Platinum reseller, in the early and mid-2000s, and they had fallen into hard times. Most of their technical team had left, putting them in a difficult spot to retain one of the largest NetApp support contracts in Malaysia at the time.

I wanted to expand on their NetApp DNA, and I started to seek out complementary solutions to build on that DNA. Coming out of my gig at EMC, there was an interesting solution which tickled my fancy: VisualSRM. So I went about seeking the most comprehensive SRM (storage resource management) solution for RDM, one with the widest storage platform support. I found Tek-Tools Software and moved to have RDM sign up as their reseller. We got their SE/developer, Aravind Kurapati, over from India to train the RDM engineers. We were ready to hit the market in late 2009/early 2010, but a few weeks later, Tek-Tools was acquired by SolarWinds.

Long story short, my mindset about SRM was “if you can’t see your storage resources, you can’t manage your storage”. Resource visibility is so important in SRM, and the same philosophy applies to data as well. That’s where Commvault Activate comes in. More than ever, data insights are the biggest differentiator in the data-driven transformation of any modern business today. Commvault Activate is the data insights layer that shines a light on all the data in every organization.

After that casual chat with Patrick, more details came up in the early access to Commvault’s embargoed announcements later that afternoon, and the Commvault Activate announcement appeared in my Twitter feed.

Commvault Activate has a powerful, dynamic index engine called the Commvault 4D Index, responsible for searching, discovering and learning about different types of data, data context and relationships within the organization. I picked up more information as the conference progressed and found out that the technology behind Commvault Activate is based on the Apache Solr (Lucene) enterprise search and indexing platform, courtesy of Lucidworks’ technology. Suddenly I had a recall moment: I had posted about the Commvault and Lucidworks partnership a few months back in my SNIA Malaysia Facebook community. The dots connected. You can read about the news of the partnership here at Forbes.
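Commvault has not published Activate’s internals in detail, so here is only a generic illustration of what a Solr/Lucene search layer provides underneath: a faceted query against a hypothetical “files” collection, with made-up field names. This is plain Solr, not Commvault’s API:

```python
# Generic Solr query sketch; the collection name and fields (content, modified,
# owner) are assumptions for illustration, not Commvault's schema.
import requests

params = {
    "q": 'content:"national identity number"',   # find documents mentioning a term
    "fq": "modified:[NOW-1YEAR TO NOW]",         # filter to files touched this year
    "facet": "true",
    "facet.field": "owner",                      # break the hits down by data owner
    "rows": 10,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/files/select", params=params)
results = resp.json()
print(results["response"]["numFound"], "matching documents")
# Solr returns facets as a flat [value, count, value, count, ...] list.
owner_facets = results["facet_counts"]["facet_fields"]["owner"]
for owner, count in zip(owner_facets[::2], owner_facets[1::2]):
    print(f"  {owner}: {count}")
```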

Continue reading