Open Source and Open Standards open the Future

[Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors' technologies presented at this event. The content of this blog is of my own opinions and views]

Western Digital dove into Storage Field Day 19 in full force, as they did in Storage Field Day 18, with a series of high-impact presentations, each curated for the diverse requirements of the audience. Several open source initiatives were shared, all built on open standards to address present inefficiencies, and designed and developed for a greater future.

Zoned Storage

One of the initiatives is to increase the efficiencies around SMR and SSD zoning capabilities and to remove the complexities and overlaps of both mediums. This is the Zoned Storage initiative, a technical working proposal to the existing NVMe standards. The outcome will give applications in user space more control over the placement of data blocks on zone-aware devices and zoned SSDs, known collectively as Zoned Block Devices (ZBDs). The implementation in the Linux user and kernel space is shown below:
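To give a feel for what zoned behaviour means for applications, here is a conceptual sketch, not the actual NVMe or Linux ZBD API. The class and method names are hypothetical; it only models the core rule that writes within a zone must be sequential at the zone's write pointer, and that a zone reset rewinds that pointer.

```python
class Zone:
    """A fixed-size zone that only accepts sequential writes (conceptual model)."""

    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0   # next block that may be written

    def write(self, start_block, num_blocks):
        # Writes must begin exactly at the write pointer and fit in the zone.
        if start_block != self.write_pointer:
            raise ValueError("write must be sequential at the write pointer")
        if self.write_pointer + num_blocks > self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_blocks

    def reset(self):
        # Resetting a zone rewinds the write pointer, invalidating its data.
        self.write_pointer = 0

zone = Zone(size_blocks=256)
zone.write(0, 64)    # OK: starts at write pointer 0
zone.write(64, 32)   # OK: continues sequentially; write pointer is now 96
# zone.write(0, 8)   # would raise ValueError: random writes are not allowed
```

On a real zoned block device, user-space software discovers and manages zones through the kernel's zoned block device interfaces rather than a class like this; the sketch only captures the placement discipline the initiative standardizes.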

Continue reading

Data Renaissance in Oil and Gas

The Oil and Gas industry, especially in the upstream Exploration and Production (EP) sector, has been enjoying a renewed vigour in the past few years. I have kept in touch with the developments of the EP side because I have always had a soft spot for the industry. I engaged in infrastructure and solutions on the petrotechnical side in my days at Sun Microsystems back in the late 90s. The engagements with EP intensified in my first stint at NetApp, wearing the regional Oil & Gas consulting engineer hat here in South Asia for almost 6 years. Then, with Interica in 2014, I was dealing with subsurface data and seismic interpretation technology. EP is certainly an exciting sector to cover because there is so much technical work involved, and the technologies, especially the non-IT ones, are breathtaking.

I have been an annual registrant to the Digital Energy Journal events since 2013, except last year, and I have always enjoyed their newsletter. This week I attended the 2-day Digital Energy conference again, and I was taken in by the exciting times in EP. Here are a few of my views and trend observations in this data renaissance.

Continue reading

Thinking small to solve Big

[This article was posted in my LinkedIn at https://www.linkedin.com/pulse/thinking-small-solve-big-chin-fah-heoh/ on Sep 9th 2019]

The world's economy has certainly turned. And organizations, especially the SMEs, are demanding more. There was a time when many technology vendors and their tier 1 systems integrators could get away with plenty of high-level hobnobbing, showering the prospect with their marketing wow-factor. But those fancy-schmancy days are drying up, and SMEs now do a lot of research and demand a more elaborate and comprehensive technology solution to their requirements.

The SMEs have the same problems faced by the larger organizations. They want more data stored, protected and recoverable, and to maximize the value of that data. However, their risk factors are much higher than the larger enterprises', because a disruption or a simple breakdown could affect their business and operations far more severely than it would a larger organization. In most situations, they have no safety net.

So, over the past 3-odd years, I have learned that as a technology solution provider, as a systems integrator to SMEs, I have to be on the ball with their pains all the time. And I have to always remember that they do not have deep pockets, especially when the economy in Malaysia has been soft for years.

That is why I have gravitated to technology solutions that matter to the SMEs and are gentle on their pockets as well. Take for instance a small company called Itxotic, which I discovered earlier this year. Itxotic is a 100% Malaysian home-grown technology startup, focusing on customized industry intelligence, notably computer vision AI. Their prominent technologies include defect detection in a manufacturing production line.

At the Enterprise level, it is easy for large technology providers like Hitachi or GE or Siemens to peddle similar high-tech solutions for SMEs' requirements. But these would come with a price tag of hundreds of thousands of ringgit. SMEs will balk at such a large investment, because a price tag like that is simply out of reach for the SME factories. That is why I gravitated to the small thinking of Itxotic, whose small yet powerful technology solves big problems for the SMEs.

And this came about when more Industry 4.0 opportunities started to come onto my radar. Similarly, I was also approached to look into an edge-network data analytics technology to be integrated into PLCs (programmable logic controllers). At present, the industry consultants who invited me are peddling a foreign technology solution, and the technology costs RM13,000 per CPU core. In a typical 4-core processor IPC (industrial PC), that is a whopping RM52,000, excluding the hardware and integration services. This can easily drive the selling price to over RM100K, again, a price tag that will trigger a mini heart attack in the SMEs.

The industry consultants have tasked me with designing a more cost-friendly, aka cheaper, solution, and today we are already building an alternative with Apache Kafka, its connectors and Grafana for visual reporting. I think the cost to build this alternative technology will probably be 70-80% lower than the one they are reselling now. The "think small, solve Big" mantra is beginning to take hold, and I am excited about it.
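At a high level, the Kafka-based alternative amounts to serialising PLC readings into messages and letting Kafka connectors and Grafana handle the downstream plumbing. The sketch below only shows the serialisation step; the tag name, topic name and payload schema are illustrative assumptions, not the actual design, and the real producer call is indicated only in a comment since it needs a running Kafka broker.

```python
import json
import time

def plc_reading_to_message(tag, value, ts=None):
    """Serialise one PLC sensor reading into a JSON payload for a Kafka topic."""
    return json.dumps({
        "tag": tag,                                  # hypothetical PLC tag name
        "value": value,
        "timestamp": ts if ts is not None else time.time(),
    }).encode("utf-8")

payload = plc_reading_to_message("line1.temperature", 74.2, ts=1567990000.0)
# A real pipeline would hand this payload to a Kafka producer, e.g.:
#   producer.send("plc-telemetry", payload)   # topic name is illustrative
# with a Kafka connector feeding a datastore that Grafana charts from.
```

The appeal of this design is that every piece is open source, so the cost sits mostly in integration work rather than per-core licensing.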

By the "small" in that mantra, I mean being intimate and humble with the end users. One lesson I have learned over the past years is that the SMEs count on their technology partners to be with them. They have no room for failure, because a costly failure is likely to be devastating to their operations and business. Know the technology you are pitching well, so that the SMEs are confident you can deliver; do not give some over-the-top, high-level technology pitch. Look deep into the technology's integration with their existing technology and operations, and carefully and meticulously craft and curate a well-mapped plan for them. Commit to their journey to ensure their success.

I have often seen technology vendors and resellers leave SMEs high and dry when it comes to something outside their scope, and this has been painful to watch. That is why working with the SMEs more often over the past 3 years has not been a downgrade for me, even though I have served the enterprise for more than 25 years. This invaluable lesson is an upgrade for me to serve my SME customers better.

Continue reading

Intel IoT Revolution for Malaysia Industry 4.0

Intel rocks!

I have been following Intel for a few years now, in large part for their push of the 3D XPoint technology. Under the Optane brand, Intel has several media types, addressing persistent memory, storage-class memory and solid state storage. Intel, in recent years, has been more at the forefront with their larger technology portfolio, and it is not just about their processors anymore. One of the bright areas I am seeing myself getting more engrossed in (and involved in) is their IoT (Internet of Things) portfolio, and it has been very exciting so far.

Intel IoT and Deep Learning Frameworks

The efforts of the Intel IoTG (Internet of Things Group) in Asia Pacific are rapidly gaining recognition. The drive of the Industry 4.0 revolution is strong. And I saw the brightest spark of the Intel folks pushing the Industry 4.0 message on home ground in Malaysia.

After the large showing by Intel at the Semicon event 2 months ago, they turned it up a notch in Penang at their own Intel IoT Summit 2019, which concluded last week.

At the event, Intel brought out their solid engineering geeks. There were plenty of talks and workshops on Deep Learning, AI and Neural Networks, with chatter on Nervana, Nauta and Saffron. Despite all the technology and engineering prowess Intel was showcasing, there was a worrying gap.

Continue reading

Storage Performance Considerations for AI Data Paths

The hype of Deep Learning (DL), Machine Learning (ML) and Artificial Intelligence (AI) has reached an unprecedented frenzy. Every infrastructure vendor, from servers to networking to storage, has a word to say or a play to make in DL/ML/AI. This prompted me to explore this hyped ecosystem from a storage perspective, notably from a storage performance requirement point of view.

One question on my mind

There are plenty of questions on my mind. One stood out, and it is related to storage performance requirements.

Reading and learning from one storage technology vendor to another, the gist of everyone's play against their competitors seems to be "They are archaic, they are legacy. Our architecture is built from the ground up, modern, NVMe-enabled". There is more such juxtaposing, but you get the picture: "We are better, no doubt".

Are the data patterns and behaviours of AI different? How do they affect the storage design as the data moves through the workflow, the data paths and the lifecycle of the AI ecosystem?

Continue reading

Dell go big with Cloud

[Disclaimer: I have been invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. My expenses, travel and accommodation are covered by Dell Technologies, the organizer, and I am not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

Talk about big. Dell Technologies just went big with the Cloud.

The Microsoft Factor

Day 1 of Dell Technologies World 2019 (DTW19) started with a big surprise to many, including yours truly, when Michael Dell, together with Pat Gelsinger, invited Microsoft CEO Satya Nadella on stage.

There was nothing new about Microsoft working with Dell Technologies. Both have been great partners since the PC days, but when they announced Azure VMware Solutions to the 15,000+ attendees of the conference, there was a second of disbelief, followed by an ovation of euphoria.

VMware solutions will run natively on Microsoft Azure Cloud. The spread of vSphere, vSAN, vCenter, NSX-T and the VMware tools and environment will run on Azure Bare Metal Infrastructure at multiple Azure locations. How big is that? Continue reading

Sexy HPC storage is all the rage

HPC is sexy

There is no denying it. HPC is sexy. HPC Storage is just as sexy.

Looking at the latest buzz from the Supercomputing Conference 2018, which happened in Dallas 2 weeks ago, the number of storage-related vendors participating was staggering. Panasas, Weka.io, Excelero and BeeGFS are the ones I know of because I have friends posting their highlights. Then there are the perennial vendors like IBM, Dell, HPE, NetApp, Huawei, Supermicro, and so many more. A quick check on the SC18 website showed that there were 391 exhibitors on the floor.

And this is driven by the unrelenting demand for higher and higher computing performance, and along with it, the demand for faster and faster storage performance. The commercialization of Artificial Intelligence (AI), Deep Learning (DL) and newer applications and workloads, together with the traditional HPC workloads, is driving these ever-increasing requirements. However, most enterprise storage platforms were not designed to meet the demands of this new generation of applications and workloads, despite what many have been led to believe. Why so?

I had a couple of conversations with a few well-known vendors around the topic of HPC storage. And several of the responses thrown back were to throw Flash and NVMe at the high demands of HPC storage performance. In my mind, these responses were too trivial, too irresponsible. So I wanted to write this blog to share my views on HPC storage, and not just about its performance.

The HPC lines are blurring

I picked up this video (below) a few days ago. It was insideHPC's Rich Brueckner interviewing Dr. Goh Eng Lim, HPE CTO and renowned HPC expert, about the convergence of traditional and commercial HPC applications and workloads.

I liked the conversation in the video because it addressed the 2 different approaches. And I welcomed Dr. Goh's invitation to the commercial HPC community to work with the traditional HPC vendors to help push the envelope towards Exascale supercomputing.

Continue reading

Disaggregation or hyperconvergence?

[Preamble: I have been invited by GestaltIT as a delegate to their TechFieldDay from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

There is an argument about NetApp's HCI (hyperconverged infrastructure). It is not really a hyperconverged product at all, according to one school of thought. Maybe NetApp is just riding on the hyperconvergence marketing coattails, and just wanted to be associated with the HCI hot streak. In the same spectrum of argument, Datrium decided to call their technology open convergence, clearly trying not to be related to hyperconvergence.

Hyperconvergence has been enjoying a period of renaissance for a few years now. Leaders like Nutanix, VMware vSAN, Cisco Hyperflex and HPE SimpliVity have been dominating the scene, touting great IT benefits and the elimination of IT inefficiencies. But in these technologies, performance and capacity are tightly intertwined. That means that in each of the individual hyperconverged nodes, typically starting with a trio of nodes, the processing power and the storage capacity come together. You have to accept both resources as a node. If you want more processing power, you get the additional storage capacity that comes with that node. If you want more storage capacity, you get more processing power whether you like it or not. This means you end up with underutilized resources over time, definitely not rightsized for the job.
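A quick back-of-the-envelope calculation makes the coupling concrete. The node specs here are purely illustrative, not any vendor's actual sizing: assume each HCI node bundles 32 cores with 40 TB of capacity, and you only need 80 TB more storage.

```python
# Illustrative, hypothetical node specs to show coupled compute/storage scaling.
CORES_PER_NODE = 32
CAPACITY_TB_PER_NODE = 40

def nodes_for_capacity(extra_tb):
    """Number of whole nodes needed to add extra_tb of capacity (rounded up)."""
    return -(-extra_tb // CAPACITY_TB_PER_NODE)  # ceiling division

extra_tb = 80                      # we only want more capacity...
nodes = nodes_for_capacity(extra_tb)
unwanted_cores = nodes * CORES_PER_NODE
print(nodes, unwanted_cores)       # 2 nodes also bring 64 cores we never asked for
```

Those 64 cores are paid for, powered and licensed whether the workload needs them or not, which is exactly the underutilization described above.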

And here in Malaysia, we have seen vendors throw in hyperconverged infrastructure solutions for every single requirement. That was why I wrote a piece about some zealots of hyperconverged solutions 3+ years ago. When you think you have a magical hammer, every problem is a nail. 😉

On my radar, NetApp and Datrium are the only 2 vendors that offer separate nodes for compute processing and storage capacity and still fall within the hyperconverged space. This approach obviously benefits the IT planners and the IT architects, and the customers too, because they get what they want for their business. However, the disaggregation of compute processing and storage leads to the argument of whether these 2 companies belong in the hyperconverged infrastructure category.

Continue reading