Data Renaissance in Oil and Gas

The Oil and Gas industry, especially in the upstream Exploration and Production (EP) sector, has been enjoying renewed vigour in the past few years. I have kept in touch with developments on the EP side because I have always had a soft spot for the industry. I worked on infrastructure and solutions on the petrotechnical side in my days at Sun Microsystems back in the late 90s. The engagements with EP intensified in my first stint at NetApp, where I wore the regional Oil & Gas consulting engineer hat here in South Asia for almost 6 years. Then, with Interica in 2014, I was dealing with subsurface data and seismic interpretation technology. EP is certainly an exciting sector to cover because there is so much technical work involved, and the technologies, especially the non-IT ones, are breathtaking.

I have been an annual registrant at the Digital Energy Journal events since 2013, except last year, and I have always enjoyed their newsletter. This week I attended the 2-day Digital Energy conference again, and I was taken in by the exciting times in EP. Here are a few of my views and observations on trends in this data renaissance.

Continue reading

Thinking small to solve Big

[This article was posted on my LinkedIn at https://www.linkedin.com/pulse/thinking-small-solve-big-chin-fah-heoh/ on Sep 9th 2019]

The world’s economy has certainly turned, and organizations, especially the SMEs, are demanding more. There was a time when many technology vendors and their tier-1 systems integrators could get away with plenty of high-level hobnobbing, showering the prospect with their marketing wow-factor. But those fancy-schmancy days are drying up, and SMEs now do a lot of research and demand a more elaborate, more comprehensive technology solution to their requirements.

The SMEs face the same problems as the larger organizations. They want more data stored, protected and recoverable, and they want to maximize the value of that data. However, their risk factors are much higher than those of the larger enterprises, because a disruption or a simple breakdown could affect their business and operations far more than it would a larger organization. In most situations, they have no safety net.

So, over the past 3-odd years, I have learned that, as a technology solution provider and a systems integrator to SMEs, I have to be on the ball with their pains all the time. And I have to always remember that they do not have deep pockets, especially when the economy in Malaysia has been soft for years.

That is why I have gravitated to technology solutions that matter to the SMEs and are gentle on their pockets as well. Take, for instance, a small company called Itxotic that I discovered earlier this year. Itxotic is a 100% Malaysian home-grown technology startup, focusing on customized industry intelligence, notably computer vision AI. Their prominent technology includes defect detection on manufacturing production lines.


At the enterprise level, it is easy for large technology providers like Hitachi, GE or Siemens to peddle similar high-tech solutions for SME requirements. But these would come with price tags of hundreds of thousands of ringgit, and SMEs will balk at such a large investment because a price tag like that is simply out of reach for SME factories. That is why I gravitated to the small thinking of Itxotic, where their small yet powerful technology solves big problems for the SMEs.

And this came about as more Industry 4.0 opportunities started to come onto my radar. Similarly, I was also approached to look into an edge-network data analytics technology to be integrated with PLCs (programmable logic controllers). At present, the industry consultants who invited me are peddling a foreign technology solution, and the technology costs RM13,000 per CPU core. On a typical 4-core processor IPC (industrial PC), that is a whopping RM52,000, excluding the hardware and integration services. This can easily drive the selling price to over RM100K, again a price tag that will trigger a mini heart attack among the SMEs.

The industry consultants have tasked me with designing a more cost-friendly, aka cheaper, solution, and today we are already building an alternative with Apache Kafka, its connectors and Grafana for visual reporting. I think the cost of building this alternative will probably be 70-80% lower than the one they are reselling now. The “think small, solve Big” mantra is beginning to take hold, and I am excited about it.
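To make the shape of such an alternative a little more concrete, here is a minimal sketch of the kind of data flow involved: an industrial PC polls the PLC and publishes readings to a Kafka topic, from which the data can be landed somewhere Grafana can chart it. This is only an illustration, not the actual build; the broker address, topic name and the read_plc_register() helper are all hypothetical stand-ins.

```python
# Minimal sketch: publish PLC sensor readings to a Kafka topic.
# Assumes kafka-python is installed and a broker is running at localhost:9092;
# read_plc_register() is a hypothetical stand-in for the real PLC driver.
import json
import time

from kafka import KafkaProducer

def read_plc_register() -> dict:
    # Hypothetical placeholder for a Modbus/OPC-UA read on the IPC.
    return {"line": "L1", "temperature_c": 74.2, "vibration_mm_s": 1.8,
            "timestamp": time.time()}

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    producer.send("factory.sensors", value=read_plc_register())
    time.sleep(1)  # 1 Hz polling; tune to the PLC scan rate
```

From there, a Kafka Connect sink into a time-series store would give Grafana something to query for the visual reporting.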

By “small” in the mantra, I also mean being intimate and humble with the end users. One lesson I have learned over the past years is that the SMEs count on their technology partners to be with them. They have no room for failure, because a costly failure is likely to be devastating to their operations and business. Know the technology you are pitching well, so that the SMEs are confident that you can deliver, rather than giving some over-the-top, high-level technology pitch. Look deep into how the technology integrates with their existing technology and operations, and carefully and meticulously craft and curate a well-mapped plan for them. Commit to their journey to ensure their success.

I have often seen technology vendors and resellers leave SMEs high and dry when something falls outside their scope, and this has been painful to watch. That is why it was not a downgrade for me when I started working with the SMEs more often in the past 3 years, even though I have served the enterprise for more than 25 years. This invaluable lesson has been an upgrade for me, one that lets me serve my SME customers better.

Continue reading

Perils of avoiding BC and DR

The news in recent months has been unfavourable, even to the point of poignancy. Maybe I did not have all the details to form my opinion, but it appears that the organizations in these recent events have neglected the practice of BC (business continuity) and DR (disaster recovery).

The recent bad news

The most recent is one close to home. KLIA (Kuala Lumpur International Airport) and KLIA2 operations were disrupted quite significantly for 4 days due to a “network switch” failure. I followed the news and comments quite intently during those bad days, and I did not see a single comment discussing BC or DR. Had BC and DR been in place at the airports, operations would have been restored within minutes or hours, not days. Investigations are still ongoing to find out what really happened in the KLIA/KLIA2 incident.

Continue reading

Hybrid is the new Black

It is hard for the enterprise to let go of IT, isn’t it?

For years, we have seen the cloud computing juggernaut relentlessly pushing enterprises to put their IT into public clouds. Some of the biggest banks have put their faith in public cloud service providers. Close to home, Singapore’s United Overseas Bank (UOB) is one that has jumped on the bandwagon, signing up for VMware Cloud on AWS. But none comes bigger than the US government’s Joint Enterprise Defense Infrastructure (JEDI) project, where AWS and Azure are the last 2 bidders for the USD10 billion contract.

Confidence or lack of it

Those 2 examples should be big enough to usher enterprises into confidently embracing public cloud services, yet many enterprises have been holding back. What gives?

In the past, it was a matter of confidence and FUD (fear, uncertainty and doubt). News about security breaches and massive outages was widely spread and amplified to sensationalize the effects and consequences of cloud services. But then again, we get the same thing in poorly managed data centers in enterprises and government agencies, often with much less fanfare. We shrug our shoulders and say, “Oh well!”

The lack-of-confidence factor, I think, has been overcome. The “Cloud First” strategy in enterprises in recent years speaks volumes of the growing and maturing confidence in cloud services. The poor-performance and high-latency objections, which were once an Achilles’ heel of cloud services, are diminishing. HPC-as-a-Service is becoming real.

Confidence in cloud services is strong. Then why is on-premises IT suddenly a cool thing again? Why is hybrid cloud getting all the attention now?

Hybrid is coming back

Even AWS wants on-premises IT; its Outposts offering outlines that ambition. A couple of years earlier, Azure Stack had already made its beachhead on-premises through partnerships with many server vendors. VMware is in both the on-premises and public cloud worlds, with strong business and technology integration with AWS and Azure. With IBM Cloud, Big Blue is thinking hybrid as well. 2 months ago, Dell jumped in too, announcing Dell Technologies Cloud with plenty of razzmatazz, making all the right moves with its strong on-premises infrastructure portfolio and the crown jewel of the federation, VMware. Continue reading

Storage Performance Considerations for AI Data Paths

The hype around Deep Learning (DL), Machine Learning (ML) and Artificial Intelligence (AI) has reached an unprecedented frenzy. Every infrastructure vendor, from servers to networking to storage, has a word to say or a play to make in DL/ML/AI. This prompted me to explore this hyped ecosystem from a storage perspective, notably from a storage performance requirement point of view.

One question on my mind

There are plenty of questions on my mind. One stood out, and it is related to storage performance requirements.

Reading and learning from one storage technology vendor after another, everyone’s pitch against their competitors seems to be, “They are archaic, they are legacy. Our architecture is built from the ground up, modern, NVMe-enabled”. There is more of this juxtaposing, but you get the picture: “We are better, no doubt”.

Are the data patterns and behaviours of AI different? How do they affect the storage design as the data moves through the workflow, the data paths and the lifecycle of the AI ecosystem?
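As a rough way of framing that question (this is my own simplification, not any vendor’s characterization), here is how I would sketch the stages of a typical AI data path and the dominant I/O behaviour I would expect at each stage:

```python
# A rough, personal framing of the AI data path stages and the storage
# I/O pattern that tends to dominate each stage. Illustrative only.
AI_DATA_PATH = [
    ("ingest",        "large sequential writes of raw data"),
    ("data prep/ETL", "mixed random reads and writes, metadata heavy"),
    ("training",      "high-throughput, largely random reads of many small files"),
    ("checkpointing", "bursty sequential writes of model state"),
    ("inference",     "latency-sensitive small reads"),
    ("archive",       "cold, sequential writes for long-term retention"),
]

for stage, io_pattern in AI_DATA_PATH:
    print(f"{stage:>14}: {io_pattern}")
```

If a sketch like this holds, no single storage performance profile fits the whole lifecycle, which is exactly why the question matters.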

Continue reading

The Heart of Digital Transformation is …

Businesses have taken up Digital Transformation in different ways and at a different pace. In Malaysia, company boardrooms are accepting Digital Transformation as a core strategic initiative, crucial to developing competitive advantage in their respective industries. Time and time again, we are reminded that Data is the lifeblood, and that Data fuels the Digital Transformation initiatives.

The rise of CDOs

In line with the rise of the Digital Transformation buzzword, I have seen several unique job titles emerge over the past few years. Among them, “Chief Digital Officer“, “Chief Data Officer“ and “Chief Experience Officer” are some eye-catching ones. I have met a few of these officers, and so far, those I met were outward facing, customer facing. In most of my conversations with them, they projected a front that their organization, their business and their operations had been digitally transformed, and that they were ready to help their customers transform. Are they?

Tech vendors add more fuel

The technology vendors have an agenda: to sell their solutions and their services. They paint aesthetically pleasing stories of how their solutions and wares can digitally transform any organization, and customers latch on to this ‘shiny’ tech. End users become fixated on the idea that technology is the core of Digital Transformation. They are wrong.

Missing the Forest

As I gather more insights through observation, more conversations and more experience, I think most of the “digital transformation ready” organizations are not adopting the right approach to Digital Transformation.

Digital Transformation is not tactical. It is not a one-time, big-bang action that shifts an organization from not-digitally-transformed to digitally-transformed in a moment. It is not a sprint; it is a marathon, a journey that takes time to mature. IDC was spot on with its Digital Transformation MaturityScape framework when it first released the framework years ago.

IDC Digital Transformation Maturityscape

Continue reading

Scaling new HPC with Composable Architecture

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 conference from Apr 29-May 1, 2019 in Las Vegas, USA. Tech Field Day Extra was an included activity as part of Dell Technologies World. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views.]

Deep Learning, Neural Networks, Machine Learning and, by extension, Artificial Intelligence (AI) are the new generation of applications and workloads for commercial HPC systems. They are different from the traditional, more scientific and engineering HPC workloads, and I have written about the new dawn of supercomputing and the attractive posture of commercial HPC.

Don’t be idle

From the business perspective, the investment in HPC systems is high most of the time, and justifying it to the executives and the investors is not easy. Therefore, it is critical to keep feeding the HPC systems and to significantly minimize the idle time of compute, GPUs, network and storage.

However, almost all HPC systems today are inflexible. Once assigned to a project, the resources pretty much stay with that project, even when the project’s workload processing is idle and waiting. Of course, we have to bear in mind that not all resources are fully abstracted, virtualized and software-defined in a way that lets you carve out pieces of the hardware and deliver a percentage of that resource. A case in point is the CPU, where you cannot assign certain clock cycles to one project and the rest to another; the technology isn’t there yet. Certain resources like the GPU are going down the path of virtual GPU, and into the realm of resource disaggregation. Eventually, all resources of the HPC system (CPU, memory, FPGA, GPU, PCIe channels, NVMe paths, IOPS, bandwidth, burst buffers and so on) should be disaggregated and pooled for disparate applications and workloads, based on the demands of usage, time and performance.

Hence we are beginning to see disaggregated HPC system resources composed and built up to meet the diverse mix and needs of HPC applications and workloads. This is even more acute when an AI project might grow cold, but the training of AI/ML/DL workloads continues to stay hot.
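As a toy illustration of the composability idea (my own simplification, not Liqid’s or any other vendor’s API), think of the disaggregated resources as a shared pool from which a logical node is composed for a hot workload and to which everything is returned the moment the workload goes cold:

```python
# A toy model of composing a logical node from a disaggregated resource
# pool and releasing it when the workload goes idle. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    gpus: int = 16
    fpgas: int = 8
    nvme_paths: int = 32
    allocations: dict = field(default_factory=dict)

    def compose(self, job: str, gpus: int, fpgas: int, nvme_paths: int) -> None:
        # Carve resources out of the shared pool for this job.
        assert gpus <= self.gpus and fpgas <= self.fpgas and nvme_paths <= self.nvme_paths
        self.gpus -= gpus; self.fpgas -= fpgas; self.nvme_paths -= nvme_paths
        self.allocations[job] = (gpus, fpgas, nvme_paths)

    def decompose(self, job: str) -> None:
        # Return the job's resources to the pool as soon as it goes idle.
        gpus, fpgas, nvme_paths = self.allocations.pop(job)
        self.gpus += gpus; self.fpgas += fpgas; self.nvme_paths += nvme_paths

pool = ResourcePool()
pool.compose("dl-training", gpus=8, fpgas=0, nvme_paths=8)  # workload runs hot
pool.decompose("dl-training")                               # project goes cold
```

The point of composability is that the decompose step is cheap and software-driven, so GPUs and NVMe paths are not stranded with an idle project.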

Liqid, the early leader in Composable Architecture

Continue reading

Dell go big with Cloud

[Disclaimer: I have been invited by Dell Technologies as a delegate to their Dell Technologies World 2019 conference from Apr 29-May 1, 2019 in Las Vegas, USA. My expenses, travel and accommodation are covered by Dell Technologies, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views.]

Talk about big. Dell Technologies just went big with the Cloud.

The Microsoft Factor

Day 1 of Dell Technologies World 2019 (DTW19) started with a big surprise to many, including yours truly, when Michael Dell, together with Pat Gelsinger, invited Microsoft CEO Satya Nadella on stage.

There was nothing new about Microsoft working with Dell Technologies. Both have been great partners since the PC days, but when they announced Azure VMware Solutions to the 15,000+ attendees of the conference, there was a second of disbelief, followed by an ovation of euphoria.

VMware solutions will run natively on the Microsoft Azure cloud. The spread of vSphere, vSAN, vCenter, NSX-T and the VMware tools and environment will run on Azure Bare Metal Infrastructure at multiple Azure locations. How big is that? Continue reading

Figuring out storage for Kubernetes and containers

Oops! I forgot about you!

To me, containers and container orchestration (CO) engines such as Kubernetes, Mesos and Docker Swarm are fantastic. They scale effortlessly and are truly designed for cloud native applications (CNA).

But one thing irks me: storage management for containers and COs. It is as if, when containers and the CO engines were designed and constructed, the considerations of storage and storage management were forgotten, at least the persistent part of storage.

Over a year ago, I was in two minds about persistent storage, especially when it came to the transient nature of the microservices that were so prevalent and inundating the cloud native application landscape. I was searching for answers in my blog. The decentralization of microservices in containers means mass deployment at the edge, but having the pre-processed and post-processed data stick to persistent storage at the edge device is a challenge. The operative word here is “STICK”.

Two different worlds

Containers were initially designed and built for lightweight applications such as microservices. The runtime, libraries, configuration files and dependencies are all in one package. They were meant to do simple tasks quickly and to scale to thousands easily. They could be brought up and torn down in little time, and did not have to bother with the persistent data stored by the host. The state of the containers was also not important to the application tasks at hand.

Today, container platforms like Docker have matured to run enterprise applications, and the state of the container is important. The applications must know the state and the health of the container. The container could be online, online but not accepting data, suspended, paused, interrupted, quiesced or halted. Each mode or state of the container is important to the running applications, and the container can easily be brought up or down with a single command. The stateful nature of the containers and applications is critical for the business. The same situation applies to container orchestration engines such as Kubernetes.
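For a feel of what “knowing the state and the health” looks like in practice, here is a small sketch using the docker-py SDK; the container name is made up, and the health field only appears if the image defines a HEALTHCHECK:

```python
# Minimal sketch: poll a container's state and health with docker-py.
# "billing-app" is a hypothetical container name for illustration.
import docker

client = docker.from_env()
container = client.containers.get("billing-app")

container.reload()  # refresh cached attributes from the Docker daemon
print("status:", container.status)  # e.g. running, paused, exited

# Health status is only present when the image declares a HEALTHCHECK.
health = container.attrs["State"].get("Health", {}).get("Status")
print("health:", health or "no healthcheck defined")
```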

Container and Kubernetes Storage

Docker provides 3 methods for local storage, as described in the diagram below:
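As far as I recall from the Docker documentation, those 3 methods are volumes, bind mounts and tmpfs mounts. Here is a small sketch of each using the docker-py SDK; the volume name and host path are illustrative only:

```python
# A minimal sketch of Docker's three local storage methods (named volumes,
# bind mounts and tmpfs mounts), using the docker-py SDK.
import docker

client = docker.from_env()

# 1. Named volume: managed by Docker, survives container removal.
client.containers.run("alpine", "touch /data/keep.me", remove=True,
                      volumes={"myvol": {"bind": "/data", "mode": "rw"}})

# 2. Bind mount: maps a host directory straight into the container.
client.containers.run("alpine", "ls /hostdir", remove=True,
                      volumes={"/tmp/hostdir": {"bind": "/hostdir", "mode": "ro"}})

# 3. tmpfs mount: memory-backed, gone when the container stops.
client.containers.run("alpine", "df -h /scratch", remove=True,
                      tmpfs={"/scratch": "size=64m"})
```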

Continue reading

WekaIO controls their performance destiny

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views.]

I was first introduced to WekaIO back at Storage Field Day 15. I did not blog about them back then, but I have followed their progress quite attentively throughout 2018. Two Storage Field Days and a year later, they were back for Storage Field Day 18 with a new CTO, Andy Watson, and several performance benchmark records.

Blowout year

2018 was a blowout year for WekaIO. They experienced over 400% growth, placed #1 in the Virtual Institute for I/O’s IO-500 10-node performance challenge, and also became #1 in the SPEC SFS 2014 performance and latency benchmark. (Note: this record was broken by NetApp a few days later, but at a higher cost per client.)

The IO-500 10-node performance challenge was particularly interesting because it pitted WekaIO against the Oak Ridge National Laboratory (ORNL) Summit supercomputer, and WekaIO won. Details of the challenge were covered in Blocks and Files, and the WekaIO Matrix filesystem became the fastest parallel file system in the world to date.

Control, control and control

I studied WekaIO’s architecture prior to this Field Day, and I spent quite a bit of time digesting and understanding their data paths, I/O paths and control paths, in particular the diagram below:

Starting from the top right corner of the diagram, applications run on the Linux client (which runs the Weka Client software), and WekaIO presents itself to the Linux client as a POSIX-compliant file system. The Linux client interacts with the WekaIO kernel-based VFS (virtual file system) driver, which coordinates between the Front End (the grey box in the upper right corner) and the Linux client. Other client-based protocols such as NFS, SMB, S3 and HDFS are also supported. The Front End then interacts with the NIC (which can be 10/100G Ethernet, InfiniBand or NVMe-oF) through SR-IOV (single root I/O virtualization), bypassing the Linux kernel for maximum throughput. This is done with WekaIO’s own networking stack in user space. Continue reading