StorageGRID gets gritty

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors' technologies presented at the event. The content of this blog is of my own opinions and views ]

NetApp® presented StorageGRID® Webscale (SGWS) at Storage Field Day 19 last month. The timing was apt because, in my humble opinion, the general purpose object storage market was becoming disillusioned and close to stripping away the very value it was supposed to deliver.

"Cheap and deep" and "race to zero" were some of the less flattering labels I came across when discussing object storage, and they really devalued the merits of object storage as vendors touted the superficial glory of being in the IDC MarketScape for Object-Based Storage 2019.

Almost every single conversation I had in the past 3 years was either me explaining what object storage is, or fielding the question, "That is cheap storage, right?"


Tiger Bridge extending NTFS to the cloud

[Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors' technologies to be presented at this event. The content of this blog is of my own opinions and views]

The NTFS file system has been around for almost 3 decades. It has been the most important piece of the Microsoft Windows universe, although Microsoft has been positioning ReFS (Resilient File System) as its replacement since Windows Server 2012. Despite Microsoft's best efforts, issues with ReFS remain, and thus NTFS is still the most reliable, go-to file system in Windows.

First reaction to Tiger Technology

When Tiger Technology was first announced as a sponsor of Storage Field Day 19, I was excited about a company with such a cool name. Soon after, I realized that I had encountered the name before, in the media and entertainment space.



ZFS Replication and Recovery with FreeNAS

We get requests to recover data from a secondary platform all the time. An RPO (recovery point objective) of 30 minutes can be challenging for small and medium sized companies, especially if there is an SLA (service level agreement) to meet.

This week, my team and I took some time to create a FreeNAS replication demo for a potential client. I thought I would document the whole ZFS replication exercise – the key steps to set it up and how recovery is done.

ZFS Snapshots

ZFS replication relies on periodic ZFS snapshots. A ZFS snapshot is an inherent feature of the ZFS file system, often used as a point-in-time copy of the existing ZFS file system tree in memory. Once a snapshot is triggered, either manually or on a schedule (periodic), the file system tree and its metadata in memory are committed to disk, ensuring an updated and consistent state of the file system at all times.
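As a simple illustration, here is a minimal sketch of taking a timestamped snapshot by wrapping the underlying ZFS command in Python. The dataset name is a placeholder, and on FreeNAS this is normally handled by a Periodic Snapshot Task in the GUI rather than by hand.

```python
#!/usr/bin/env python3
# Minimal sketch: take a timestamped ZFS snapshot of a dataset.
# "tank/projects" is a placeholder dataset name.
import subprocess
from datetime import datetime

dataset = "tank/projects"
snapname = datetime.now().strftime("auto-%Y%m%d-%H%M")

# Equivalent to: zfs snapshot tank/projects@auto-YYYYMMDD-HHMM
subprocess.run(["zfs", "snapshot", f"{dataset}@{snapname}"], check=True)
print(f"Created snapshot {dataset}@{snapname}")
```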

To start, a snapshot policy running on a schedule must be in place. This snapshot policy can be on a specific dataset or zvol, or even the entire zpool. Yeah, I am using quite a few ZFS terms here – zpool, zvol, dataset. You can read more about each of these structures and more here.
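For those newer to the terms, the sketch below shows how the three structures relate, again expressed as the underlying ZFS commands wrapped in Python. The pool, disk and volume names are placeholders.

```python
# Sketch only: how zpool, dataset and zvol relate. Pool, device and volume names are placeholders.
import subprocess

def run(*cmd):
    subprocess.run(list(cmd), check=True)

# zpool: the pool of physical disks that provides the storage
run("zpool", "create", "tank", "mirror", "/dev/ada1", "/dev/ada2")
# dataset: a POSIX file system carved out of the pool
run("zfs", "create", "tank/projects")
# zvol: a raw block device carved out of the pool (e.g. for iSCSI), 100GB here
run("zfs", "create", "-V", "100G", "tank/vol1")
```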

Once the ZFS replication task has been set up, every snapshot taken under the snapshot policy is automatically duplicated and copied to the target ZFS dataset. Usually, the target ZFS dataset is on a secondary FreeNAS storage server, serving as a disaster recovery platform. Sending and receiving the data in the snapshots relies on the SSH service.
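Under the hood, this is essentially an incremental "zfs send" piped over SSH into a "zfs receive" on the secondary server. Here is a sketch of that plumbing; the host, dataset and snapshot names are placeholders, and FreeNAS automates all of this through its Replication Tasks.

```python
# Sketch: incremental ZFS replication over SSH (what a FreeNAS replication task automates).
# Hostnames, datasets and snapshot names below are placeholders.
import subprocess

src = "tank/projects"
dst_host = "freenas-dr.example.com"
dst = "backup/projects"
prev_snap, new_snap = "auto-20200101-0000", "auto-20200101-0030"

# zfs send -i <previous snapshot> <new snapshot>  |  ssh <dr host> zfs receive -F <target dataset>
send = subprocess.Popen(
    ["zfs", "send", "-i", f"{src}@{prev_snap}", f"{src}@{new_snap}"],
    stdout=subprocess.PIPE,
)
subprocess.run(["ssh", dst_host, "zfs", "receive", "-F", dst],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```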

This is the network diagram explaining the FreeNAS ZFS replication setup.


Commvault coming all together

[Disclosure: I was invited by Commvault as a Media person and Social Ambassador to their Commvault GO 2019 Conference, and also as a Tech Field Day eXtra delegate, from Oct 13-17, 2019 in Denver, CO, USA. My expenses, travel, accommodation and conference fees were covered by Commvault, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

This trip to the Commvault GO conference was pretty much a mission to find answers about their Hedvig acquisition just a month earlier. It was an unprecedented move for Commvault, and I, as an industry observer and pundit, took the news positively. I wrote in my blog about Commvault's big bet, and I liked the boldness of their approach.

But the news did not bode well back here in Malaysia. The local technology news portal, Data Storage Asean, picked up the news in a rather unconvinced tone. Two long-time Commvault partners I spoke to were clearly unhappy, because the acquisition made little sense to them coming on the back of the closure of the Commvault Malaysia office just weeks earlier, along with more unsettling rumours about the Commvault team in Asia Pacific. The broken trust, and the fear of what the future held for Commvault customers in Malaysia and in the region, were riding along with me on this trip.

But I had seen the beginning of the Commvault transformation at the Commvault GO conferences I have attended since 2017. This was my 3rd Commvault GO, and I ended Day 1 with good vibes.

Here are some of my highlights from the first day.

Data Renaissance in Oil and Gas

The Oil and Gas industry, especially in the upstream Exploration and Production (EP) sector, has been enjoying a renewed vigour in the past few years. I have kept in touch with developments on the EP side because I have always had a soft spot for the industry. I engaged with infrastructure and solutions on the petrotechnical side in my days at Sun Microsystems back in the late 90s. The engagements with EP intensified in my first stint at NetApp, where I wore the regional Oil & Gas consulting engineer hat here in South Asia for almost 6 years. Then, with Interica in 2014, I was dealing with subsurface data and seismic interpretation technology. EP is certainly an exciting sector to cover because there is so much technical work involved, and the technologies, especially the non-IT ones, are breathtaking.

I have been an annual registrant to the Digital Energy Journal events since 2013, except last year, and I have always enjoyed their newsletter. This week I attended the 2-day Digital Energy conference again, and I was taken in by the exciting times in EP. Here are a few of my views and trend observations on this data renaissance.


Commvault big bet

I woke up at 2.59am on the morning of Sept 5th, a bit discombobulated, and quickly jumped onto the Commvault call. The damn alarm had rung and I slept through it, but I got up just in time for the 3am call.

As I was going through the motions of getting onto the UberConference call organized by GestaltIT, I was already sensing something big. On the call, it hit me: Commvault was acquiring Hedvig. My drowsy self snapped to attention at the big news, and I saw a few guys from Veritas and Cohesity in my social media groups making gestures about the acquisition.

I spent the rest of the week thinking about the acquisition. What is good? What is bad? How is Commvault going to move forward? This was set against the stark background of the rumour mill here in South Asia, where, just a week before the acquisition news, I heard that the entire Commvault teams in Malaysia and Asia Pacific had been released. I couldn't confirm the news for Asia Pacific, but the source of the news from Malaysia was a strong and reliable one.

What is good?

It is a big win for Hedvig. Nestled among several scale-out primary storage vendors with little competitive differentiation, this Commvault acquisition is Hedvig's pay day.


Scaling new HPC with Composable Architecture

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. Tech Field Day Extra was an included activity as part of Dell Technologies World. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

Deep Learning, Neural Networks, Machine Learning and, subsequently, Artificial Intelligence (AI) are the new generation of applications and workloads for commercial HPC systems. They differ from the traditional, more scientific and engineering HPC workloads, and I have written about the new dawn of supercomputing and the attractive posture of commercial HPC.

Don’t be idle

From the business perspective, the investment in HPC systems is high most of the time, and justifying it to the executives and the investors is not easy. Therefore, it is critical to keep feeding the HPC systems and to significantly minimize the idle time of the compute, GPUs, network and storage.

However, almost all HPC systems today are inflexible. Once assigned to a project, the resources pretty much stay with the project, even when the workload processing of the project is idle and waiting. Of course, we have to bear in mind that not all resources are fully abstracted, virtualized and software-defined such that you can carve out pieces of the hardware and deliver a percentage of that resource. A case in point is the CPU, where you cannot assign half of its clock cycles to one project and the other half to another; the technology isn't there yet. Certain resources like the GPU are going down the path of virtual GPUs, and into the realm of resource disaggregation. Eventually, all resources of the HPC systems – CPU, memory, FPGA, GPU, PCIe channels, NVMe paths, IOPS, bandwidth, burst buffers etc. – should be disaggregated and pooled for disparate applications and workloads based on demands of usage, time and performance.

Hence we are beginning to see disaggregated HPC system resources composed and built up to meet the diverse mix and needs of HPC applications and workloads. This is even more acute when an AI project might grow cold, but the training of AI/ML/DL workloads continues to stay hot.

Liqid, the early leader in Composable Architecture


Dell go big with Cloud

[Disclaimer: I have been invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. My expenses, travel and accommodation are covered by Dell Technologies, the organizer, and I am not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

Talk about big. Dell Technologies just went big with the Cloud.

The Microsoft Factor

Day 1 of Dell Technologies World 2019 (DTW19) started with a big surprise to many, including yours truly, when Michael Dell, together with Pat Gelsinger, invited Microsoft CEO Satya Nadella on stage.

There was nothing new about Microsoft working with Dell Technologies. Both have been great partners since the PC days, but when they announced Azure VMware Solutions to the 15,000+ attendees of the conference, there was a second of disbelief, followed by an ovation of euphoria.

VMware solutions will run natively on the Microsoft Azure cloud. The spread of vSphere, vSAN, vCenter, NSX-T and the VMware tools and environment will run on Azure Bare Metal Infrastructure at multiple Azure locations. How big is that?

The full force of Western Digital

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

3 weeks after Storage Field Day 18, I was still trying to wrap my head around the 3-hour session we had with Western Digital. I was like a kid in a candy store for a while, because there was too much to chew on and I couldn't munch it all.

From "Silicon to Systems"

Not many storage companies in the world can claim that mantra – "From Silicon to Systems". Western Digital is probably one of 3 companies I know of at present (the other 2 being Intel and NVIDIA) which develop vertical innovation and integration, end to end, from components to platforms to systems.

For a long time, we have known Western Digital as a hard disk company. It owns HGST and SanDisk, providing the hard drives, the flash and the CompactFlash for both the consumer and enterprise markets. However, in recent years, through 2 eyebrow-raising acquisitions, Western Digital has been moving itself up the infrastructure stack. In 2015, it acquired Amplidata. 2 years later, it acquired Tegile Systems. At the time, I wondered why a hard disk manufacturer was buying storage technology companies that were not its usual bread-and-butter business.


WekaIO controls their performance destiny

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

I was first introduced to WekaIO back at Storage Field Day 15. I did not blog about them then, but I followed their progress quite attentively throughout 2018. Two Storage Field Days and a year later, they were back for Storage Field Day 18 with a new CTO, Andy Watson, and several performance benchmark records.

Blowout year

2018 was a blowout year for WekaIO. They experienced over 400% growth, placed #1 in the Virtual Institute IO-500 10-node performance challenge, and also became #1 in the SPEC SFS 2014 performance and latency benchmark. (Note: this record was broken by NetApp a few days later, but at a higher cost per client.)

The Virtual Institute for I/O IO-500 10-node performance challenge was particularly interesting because it pitted WekaIO against the Oak Ridge National Laboratory (ORNL) Summit supercomputer, and WekaIO won. Details of the challenge were covered in Blocks & Files, and the WekaIO Matrix file system became the fastest parallel file system in the world to date.

Control, control and control

I studied WekaIO's architecture prior to this Field Day, and I spent quite a bit of time digesting and understanding their data paths, I/O paths and control paths, in particular the diagram below:

Starting from the top right corner of the diagram, applications run on the Linux client (which runs the Weka client software), and WekaIO presents itself to the Linux client as a POSIX-compliant file system. Through the network, the Linux client interacts with the WekaIO kernel-based VFS (virtual file system) driver, which coordinates the Front End (the grey box in the upper right corner) with the Linux client. Other client-based protocols such as NFS, SMB, S3 and HDFS are also supported. The Front End then interacts with the NIC (which can be 10/100G Ethernet, InfiniBand or NVMe-oF) through SR-IOV (single root I/O virtualization), bypassing the Linux kernel for maximum throughput. This is done with WekaIO's own networking stack in user space.
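To make the POSIX point concrete, here is a small, purely illustrative sketch: assuming the Weka file system is mounted at a placeholder path such as /mnt/weka, applications need no special API, because ordinary file I/O is handled by the Weka client and its user-space networking stack transparently.

```python
# Illustration only: WekaFS presents as a POSIX-compliant file system,
# so plain file I/O is all an application needs. /mnt/weka is a placeholder mount point.
from pathlib import Path

mount_point = Path("/mnt/weka")
sample = mount_point / "results" / "run-001.dat"

sample.parent.mkdir(parents=True, exist_ok=True)   # ordinary POSIX mkdir
sample.write_bytes(b"\x00" * 4096)                 # ordinary POSIX write
print(sample.stat().st_size)                       # ordinary POSIX stat
```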