The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise to me, because when AWS struck their partnership with VMware in 2016, the undercurrents were already there for AWS services to arrive right at the doorstep of any datacenter. In my mind, AWS has built so far out in the cloud that eventually, the only way to grow is to come back to the core of IT services – the Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. Whatever swings out to the left or the right eventually comes back to the centre again. History has proven that, time and time again.

SAN and NAS coming back?

A friend of mine casually mentioned the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn’t hide my excitement at the thought of their return but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS are probably closeted by the younger generation of storage engineers and storage architects, who are more adept with S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS if one still prefers it) and NFS have faded in prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even the iSCSI IQN are probably alien to many of them.
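To show the generational gap, here is a hedged sketch (the bucket, key and IQN values are made-up examples, not from any real system): the S3-era path to storage is a few lines of boto3, while the block-storage path still begins with naming and login rituals.

```python
import boto3  # AWS SDK for Python

# The "new" way: object storage via the S3 API
# (bucket and key names here are hypothetical)
s3 = boto3.client("s3")
s3.put_object(Bucket="my-demo-bucket", Key="reports/q4.csv", Body=b"hello world")

# The "old" way begins with naming conventions like the iSCSI Qualified Name:
#   iqn.<yyyy-mm>.<reversed-domain>:<unique-name>
#   e.g. iqn.2004-10.com.example:diskarray01  (a made-up target IQN)
# and, on Fibre Channel, fabric logins (FLOGI) and port logins (PLOGI)
# before a single block is served.
```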

What is AWS Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, from a single server to full racks, will be shipped to a customer’s datacenter or any hosting location, packaged with popular AWS compute and storage services, and optionally, with VMware technology for virtualized computing resources.

Taken from https://aws.amazon.com/outposts/

In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer just a public cloud computing company. They have just become a hybrid cloud computing company. Continue reading

Pondering Redhat’s future with IBM

I woke up yesterday morning to a shocker of a news item. IBM announced that they were buying Redhat for USD34 billion. Never did it cross my mind that Redhat would sell, but I guess that USD190.00 per share was too tempting. Redhat (RHT) was trading at USD116.68 at the previous Friday’s close.

Redhat is one of my favourite technology companies. I love their Linux development and progress, and I use a lot of Fedora and CentOS in my hobbies. I started with Redhat back in 2000, when I became obsessed with getting my RHCE (Redhat Certified Engineer). I recall spending almost every weekend (Saturday and Sunday) back in 2002 in the office, learning Redhat and hacking scripts to get really good at it. I got certified on RHCE 4 with a mark of 96%, and I was very proud of my certification.

One of my regrets was not joining Redhat in 2006. I was offered an SE job by Josep Garcia – the very first such position in Malaysia. Instead, I took up the Hitachi Data Systems job to helm project implementation and delivery for the Shell GUSto project. Things might have turned out differently if I had.

The IBM acquisition of Redhat left a poignant feeling in me. In many ways, Redhat has been the shining star of Linux. They are the only significant player left leading the charge of open source. They are the largest contributor to the Openstack projects and continue to support them strongly, whilst early protagonists like HPE, Cisco and Intel have scaled back their support. They have, of course, been among the perennial top 3 contributors to the Linux kernel since the very early days. And Redhat continues to contribute to container and Kubernetes projects, a commitment they deepened with their acquisition of CoreOS a few months back.

Continue reading

Oracle Cloud Infrastructure to prove skeptics wrong

[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

The much-maligned Oracle Cloud is getting a fresh reboot, starting with Oracle Cloud Infrastructure (OCI), and significant enhancements and technology updates were announced at Oracle OpenWorld this week. I had the privilege of hearing about Oracle Cloud’s new attack plan when they presented at Tech Field Day 17 last week.

Oracle Cloud has not had the best of days in recent months. Thomas Kurian’s resignation as President of Product Development was highly publicized, stemming from a disagreement with CTO and founder Larry Ellison over cloud software strategy. Then there was an ongoing lawsuit alleging that Oracle misrepresented their cloud revenue growth, which put Oracle in a bad light.

On the local front here in Malaysia, I have heard through the grapevine about the aggressive nature of Oracle personnel pushing partners and customers to adopt their cloud services, using legal scare tactics around database licensing. A buddy of mine, previously the cloud business development manager at CTC Global, also shared Oracle’s cloud shortcomings compared to Amazon Web Services and Microsoft Azure a year ago.

The Oracle Cloud Infrastructure team aimed to turn around those bad perceptions, starting with the delegates of Tech Field Day 17, including yours truly. Their strategy was clear: Oracle Cloud Infrastructure runs the highest-performance, most enterprise-grade Infrastructure-as-a-Service (IaaS), bar none. Unlike the IBM Cloud, which in my opinion is a wishy-washy cloud service platform, Oracle Cloud’s ambition is solid.

They did a demo of the JD Edwards EnterpriseOne application, and they continued to demonstrate their prowess in delivering the highest-performance computing experience ever, for enterprise-grade workloads. That enterprise pedigree is clear.

Just this week, Amazon Prime Day had an outage. Amazon is in the process of weaning the Oracle database out of their entire ecosystem by 2020, and this outage clearly showed that the Oracle database and enterprise applications run best on Oracle Cloud Infrastructure.

Continue reading

The Network is Still the Computer

[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

Sun Microsystems coined the phrase “The Network is the Computer”. It became one of the most powerful ideologies in the computing world, and over the years, many technology companies have tried to emulate and practise the mantra, but fell short.

I had never heard of Drivescale. They weren’t on my radar until the legendary NFS guru, Brian Pawlowski, joined them in April this year. Beepy, as he is known, was CTO of NetApp and later of Pure Storage, and has held many technology leadership roles, including leading the development of NFSv3 and v4.

Prior to Tech Field Day 17, I was given some “homework”. Stephen Foskett, Chief Cat Herder (as he is known) of Tech Field Days and Storage Field Days, highly recommended Drivescale and asked the delegates to pick up some notes on their technology. Going through a couple of the videos, Drivescale’s message and philosophy resonated well with me. Perhaps it was their Sun Microsystems DNA? Many of the Drivescale team members came from Sun, and I was previously from Sun as well. I was drinking Sun’s Kool-Aid by the bucketload even before I graduated in 1991, so what Drivescale preached made a lot of sense to me.

Drivescale is all about Scale-Out Architecture at webscale, to address the massive scale of data processing. To understand it more deeply, we must think about “Data Locality” and “Data Mobility”. I frequently use these 2 points of discussion in my consulting practice when architecting and designing data center infrastructure. The gist of data locality is simple – the closer the data is to the processing, the cheaper, lighter-weight and more efficient it gets. Moving data – the data mobility part – is expensive.
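A quick back-of-envelope sketch makes the point about data mobility. All numbers here are my own illustrative assumptions, not Drivescale figures:

```python
# Back-of-envelope: why moving data is expensive (illustrative numbers only)
DATASET_TB = 100
TB_TO_BITS = 8 * 10**12            # decimal terabytes to bits

link_gbps = 10                      # a 10 GbE pipe between racks or sites
effective = 0.7                     # assume ~70% sustained link utilisation

seconds = (DATASET_TB * TB_TO_BITS) / (link_gbps * 10**9 * effective)
print(f"Moving {DATASET_TB} TB over {link_gbps} GbE: ~{seconds / 86400:.1f} days")
# ~1.3 days of pure transfer time -- before retries, checksums or reruns.
# Shipping the compute to the data avoids this cost entirely.
```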

Continue reading

The Malaysian Openstack storage conundrum

The Openstack blips on my radar have ratcheted up this year. I have been asked to put together IaaS designs several times, in either the RedHat or Ubuntu flavours, and it is a good thing to see the Openstack interest level going up in the Malaysian IT scene. Coming into its 8th year, Openstack has become a mature platform, but my observation is that its storage-related projects are still not well known.

I was one of the speakers at the Openstack Malaysia 8th Summit over a month ago. I started my talk with a question – “Can anyone name the 4 Openstack storage projects?”. The response from the floor was “Swift, Cinder, Ceph and … (nobody knew the 4th one)”. It took me by surprise that the floor almost unanimously agreed that Ceph is one of the Openstack projects, but we know that Ceph isn’t one. Ceph? An Openstack storage project?

Besides Swift and Cinder, there is Glance (depending on how you look at it) and the least known of them all … Manila.

I have also been following many Openstack Malaysia discussions and discussion groups for a while. That Ceph response showed the lack of awareness and knowledge of the Openstack storage projects among the Malaysian IT crowd, and it is a difficult issue to tackle. The storage conundrum continues to perplex me, because many whom I have spoken to seem to avoid talking about storage, viewing it like a dark art or some voodoo thingy.

I view storage as the cornerstone of the 3 infrastructure pillars – compute, network and storage – of Openstack, or of any software-defined infrastructure stack for that matter. So it is important to get an understanding of the Openstack storage projects, especially Cinder.

Cinder is the abstraction layer that provides management and control of the block storage beneath it. In a nutshell, it allows Openstack VMs and applications to consume block storage in a consistent and secure way, regardless of the storage infrastructure or technology beneath it. This is achieved through the cinder-volume service, which exposes a driver interface that most storage vendors integrate with (as shown in the diagram below).

Diagram in slides is from Mirantis, found at https://www.slideshare.net/mirantis/openstack-architecture-43160012
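To make that “consistent way” point concrete, here is a minimal sketch of consuming a Cinder volume through the openstacksdk Python library. The cloud profile, volume size and server name are made-up placeholders, and exact SDK signatures vary slightly between releases:

```python
import openstack  # pip install openstacksdk

# Connect using a cloud entry from clouds.yaml ("mycloud" is a placeholder)
conn = openstack.connect(cloud="mycloud")

# Ask Cinder for a 10 GB volume -- the same call whether cinder-volume
# is fronting a Fibre Channel array, an iSCSI target or something else
vol = conn.block_storage.create_volume(size=10, name="demo-vol")
conn.block_storage.wait_for_status(vol, status="available")

# Attach the volume to a running instance; Nova and Cinder negotiate
# the actual transport underneath
server = conn.compute.find_server("demo-server")
conn.compute.create_volume_attachment(server, volume_id=vol.id)
```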

Cinder-volume, together with cinder-api and cinder-scheduler, forms the Block Storage Services of Openstack. There is another service, cinder-backup, which integrates with Openstack Swift, but at my last check this service is not as popular as cinder-volume, which is widely supported by many storage vendors with both Fibre Channel and iSCSI implementations, and at a few vendors, with NFS and SMB as well.
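On the backend side, a vendor’s driver is wired in through cinder.conf. A simplified, representative fragment for the reference LVM/iSCSI backend might look like this (the backend name is made up, and some option names differ across Openstack releases):

```ini
[DEFAULT]
enabled_backends = lvm-iscsi-1

[lvm-iscsi-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes        # LVM volume group backing the volumes
target_protocol = iscsi              # "iscsi_protocol" in older releases
target_helper = lioadm               # "iscsi_helper" in older releases
volume_backend_name = lvm-iscsi-1
```

A vendor’s Fibre Channel or iSCSI driver slots into the same enabled_backends scheme, which is what keeps the consumer-side experience consistent. Continue reading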

Own the Data Pipeline

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

I am a big proponent of Go-to-Market (GTM) solutions. Technology does not stand alone. It must live in an ecosystem, and in each industry, in each segment of each respective industry, every ecosystem is unique. And when we amalgamate data, the storage infrastructure technologies and data management into that ecosystem, we reap the benefits across the ecosystem.

Data moves in the ecosystem – from system to system, north to south, east to west and vice versa; random, sequential, ad hoc. Data acquires different statuses, different roles and different levels of relevance through its lifecycle in the ecosystem. From this we derive the flow – a workflow of data that forms a data pipeline. The data pipeline concept has been around since the inception of data.
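In code terms, the same idea is just data flowing through staged transformations, acquiring a new status at each step. This is a toy sketch of mine, with all names invented:

```python
# A toy data pipeline: each stage transforms the data and updates its status
def acquire(source):
    for record in source:              # ingest: data enters as "raw"
        yield {"payload": record, "status": "raw"}

def process(stream):
    for item in stream:                # compute: data becomes "derived"
        item["payload"] = item["payload"].upper()
        item["status"] = "derived"
        yield item

def archive(stream):
    for item in stream:                # retention: data becomes "archived"
        item["status"] = "archived"
        yield item

for item in archive(process(acquire(["seismic-trace-001", "well-log-17"]))):
    print(item)
```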

To illustrate my point, I created one for the Oil & Gas – Exploration & Production (E&P) – upstream segment some years ago.

 

Continue reading

Cohesity SpanFS – a foundational shift

[Preamble: I was a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

Cohesity SpanFS impressed me. Their filesystem was designed from the ground up to meet the demands of voluminous cloud-scale data, and yes, the sheer magnitude of data everywhere needs to be managed.

We all know that primary data is always the more important piece of the data landscape, but there is a growing need to address the secondary data segment as well.

Like a floating iceberg, the piece sticking out of the water is the more important primary data, but the larger piece beneath the surface – the secondary data – is becoming more valuable. Applications such as file shares, archiving, backup, test and development, and analytics and insights are maturing as foundational data management frameworks, fast becoming the bedrock of businesses.

The ability of businesses to bounce back after a disaster; the relentless testing of large data sets to develop new competitive advantages; the affirmation and insight gained from analyzing data to reduce risk in decision making – all these are the powerful back-end engines that thrust businesses forward. Even the ability to search for the right information in a sea of data for regulatory and compliance reasons is part of an organization’s data management practice.

Continue reading

Storage dinosaurs evolving too

[Preamble: I am a delegate of Storage Field Day 15 from Mar 7-9, 2018. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

I have been called a dinosaur. We storage networking professionals and storage technologists have been called dinosaurs. It wasn’t offensive or anything like that, and I knew it was coming because the writing was on the wall … or was it?

The cloud and the breakneck pace of all the technologies that came along have made us, the storage networking professionals, look like relics. The storage guys have been pigeonholed into a sunset segment of the IT industry. SAN and NAS, according to the non-practitioners, are no longer relevant. And the cloud has clouted (pun intended) us out of the park.

I don’t see us that way. I see the storage dinosaurs evolving as well, and our foundational storage knowledge and experience are more relevant than ever. The greatest asset that we, the storage networking professionals, have is our deep understanding of data.

A little over a year ago, I changed the term Storage in my universe to Data Services Platform, and here is the blog I wrote. I blogged again just before the year 2018 began.

 

Continue reading