OpenZFS 2.0's exciting new future

The OpenZFS (virtual) Developer Summit wrapped up a little over a week ago. I stayed up a bit (not much) to listen to some of the talks because they started at midnight my time, running till 5am on the first day and 2am on the second. Like a giddy schoolboy, I was excited, not because I now work for iXsystems™, but because I have been a fan and a follower of the ZFS file system for a long time.

History-wise, ZFS was born at Sun Microsystems, first shipping in 2005. I started working with ZFS by reselling Nexenta in 2009 (my first venture into business with my company nextIQ) after EMC let me go early that year. I bought a Sun X4150 from one of Sun's distributors and started building a lab server. I didn't like the workings of NexentaStor (and NexentaCore) very much, and it was priced in 8TB increments. Later, I started my second company with a partner, and it was he who showed me the elegance and beauty of ZFS through the command line. The credo of ZFS being a volume manager and a file system at the same time, all through the CLI, had an effect on me. I was in love.
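To make that duality concrete, here is a minimal sketch of the kind of CLI sequence that won me over (the pool, dataset and device names are placeholders; da0 and da1 would be disks on a FreeBSD box):

zpool create tank mirror da0 da1        # one command: the "volume" layer, mirrored
zfs create tank/projects                # a dataset: instantly a mounted file system
zfs set compression=lz4 tank/projects   # per-dataset properties; no mkfs, no fstab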

OpenZFS Developer Summit 2020 Logo

Exciting developments

Among the many talks shared at the OpenZFS Developer Summit 2020, there were a few ideas and developments which were exciting to me. Here are 3 that I liked, with some commentary about them and a quick CLI taste after the list.

  • Block Reference Table
  • dRAID (declustered RAID)
  • Persistent L2ARC
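For a flavour of the last two, here is a hedged sketch using the CLI syntax that landed in OpenZFS releases after the summit (dRAID shipped in OpenZFS 2.1; the geometry and device names here are placeholders, not a recommendation):

# dRAID vdev: double parity, 8 data disks per group, 1 distributed spare, 11 children
zpool create tank draid2:8d:11c:1s da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10
# Persistent L2ARC: from OpenZFS 2.0, a cache device survives reboots
zpool add tank cache nvd0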

Continue reading

Kubernetes Persistent Storage Managed Well

[ Disclosure: This is a StorPool Storage sponsored blog ]

StorPool Storage – Distributed Storage

There is rapid adoption of Kubernetes in the enterprise and in the cloud. The push for digital transformation to modernize businesses for a cloud native world in the next decade has lifted both containerized applications and the Kubernetes container orchestration platform to an unprecedented level. The application landscape, especially in the enterprise, is looking at Kubernetes to address these key areas:

  • Scale
  • High performance
  • Availability and Resiliency
  • Security and Compliance
  • Controllable Costs
  • Simplicity

The Persistent Storage Question

Enterprise applications such as relational databases and email servers, and even cloud native ones like NoSQL databases and analytics engines, demand a single source of truth for their data. Fundamental properties such as ACID (Atomicity, Consistency, Isolation, Durability) and BASE (Basically Available, Soft state, Eventually consistent) have to have persistent storage as the foundational repository for the data. And thus, persistent storage has rallied under the Container Storage Interface (CSI), which is fast becoming a de facto standard for Kubernetes. At last count, there are more than 80 CSI drivers from 60+ storage and cloud vendors, each providing block-level storage to Kubernetes pods.
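As a generic sketch of how that looks from the Kubernetes side (the driver name csi.example.com and the object names are placeholders, not any particular vendor's CSI driver): a StorageClass points at a CSI driver, and a PersistentVolumeClaim asks that class for a volume a pod can mount.

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com        # placeholder CSI driver
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]    # block-style, single-node access
  storageClassName: fast-block
  resources:
    requests:
      storage: 100Gi
EOF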

However, at this juncture, Kubernetes is still very engineering-centric. Persistent storage is just as challenging, despite all the new developments and hype around it.

Continue reading

Storage in a shiny multi-cloud space

The multi-cloud era for infrastructure-as-a-service (IaaS) is not here (yet). That is what the technology marketers want you to think. The hype, the vapourware, the frenzy. It is what they do. The same goes for technology analysts, who describe visions and futures, and the high-level constructs and strategies to get there. The hype of multi-cloud is often thought of as running applications and infrastructure services seamlessly across several public clouds, such as Amazon AWS, Microsoft® Azure and Google Cloud Platform, and linking them to on-premises data centers and private clouds. Hybrid is the new black.

Multi-cloud connectivity to public cloud providers and on-premises private clouds

And the aspiration of multi-cloud is the right one, when it is truly ready. Gartner® wrote a high-level article titled “Why Organizations Choose a Multicloud Strategy”. Taking advantage of each individual cloud's strengths and resiliency in its respective geographies makes good business sense, but there are many other considerations that cannot be an afterthought. In this blog, we look at a few of them from a data storage perspective.

In the beginning there was … 

For this storage dinosaur, data storage and compute have always been coupled as one. In the mainframe DASD days, the two were together. Even with the rise of networking architectures and protocols, from IBM SNA, DECnet, Ethernet & TCP/IP, and Token Ring FC-SAN (sorry, this is just a joke), the SANs and the filers stayed close to the servers, albeit with a network-buffered layer.

A decade ago, when the public clouds started appearing, data storage and compute were mostly inseparable. There was a clear demarcation between public clouds and private clouds. The notion of hybrid clouds meant that public and private clouds could intermix with on-premises computing and data storage, but in almost all cases this was confined to a single public cloud provider. Then these public cloud providers realized they could not convincingly entice the larger enterprises to move their IT out of their on-premises data centers into the cloud. So they reversed their strategy and peddled their cloud services back to on-prem. Today, Amazon AWS has Outposts; Microsoft® Azure has Arc; and Google Cloud Platform launched Anthos.

Continue reading

The instant value of Open Source Storage

[ Full disclosure: I work for iXsystems™ . Opinions and views are mine. ]

TrueNAS Open Storage logo

The story began …

It was 2011. A friend of a friend called me out of the blue. He was rambling about his company's storage needs. I recall vividly that he wanted 100TB, and Dell and HP (before HPE) were hopeless at doing NAS (network attached storage) in an Apple environment. They had assembled a Frankenstein-ish NAS and plastered a price of over MYR$100K on it.

In his environment, the Apple workstations were connected to dozens of WD Cloud Book storage units (whatever they were called back then), daisy-chained to each other via FireWire. I recall one workstation had 3 WD “books” daisy-chained together. That absorbed their exploding storage needs, but performance sucked. With every 2nd or 3rd concurrent user, file access slowed to a snail's pace, sometimes taking more than 2 minutes to open a file.

At that time, my old colleague at Sun was fervently talking about ZFS and OpenSolaris™. I told him about this opportunity, and so we began. It was he who used the word “crafter”. “We are not building”, he said, “we are crafting”. He was right.

OpenSolaris logo

Continue reading

The True Value of TrueNAS CORE

A funny thing came up on my Twitter feed last week. There was an ongoing online voting battle pitting FreeNAS™ (now to be known as TrueNAS® CORE) against Unraid. I wasn't aware of it before that, and I will not comment on Unraid because I have no experience with the software. But let me share with you my philosophy and my thoughts on why I would choose TrueNAS® CORE over Unraid, and of course TrueNAS® Enterprise along with it. Bear in mind that TrueNAS® SCALE is in development and will arrive next year, in 2021.

The new TrueNAS CORE logo

The real proving grounds

I have been in enterprise storage for a long time. If I were to count from the day I entered the industry, that was more than 28 years ago. When people talk about their first PC (personal computer), they would say an Atari or a Commodore 64, or something retro that was meant for home use. Not me.

The first computer I was affiliated with was a Sun SPARCstation® 2 (SS2). I took it home (from the company I was working for), took it apart, and learned about the SBus. My computing life started with technology that was meant for businesses, for the enterprise. Heck, I even installed and supported a few Sun E10000s for 2 years when I was with Sun Microsystems. Since that SS2, my pursuit of knowledge, experience and worldview has revolved around storage technologies for the enterprise.

Open source software has also always interested me. I have tried a few file systems, including Lustre®, the parallel file system that powers some of the world's supercomputers, and I am a certified BeeGFS® Systems Engineer too. In the end, for me and for many, the real proving ground isn't personal and home use. It is a storage system and an OS built for the enterprise.

Continue reading

A Dialogue between 2 Drives

I was talking to an end user who was slowly getting exposed to the cloud amid this Covid-19 pandemic. The whole work-from-home thing was not new to him, but the scale of the practice suddenly escalated when more than 80 of his staff had to work from wherever they were stuck during the past 6 weeks. Initially, all of his staff had to take turns accessing folders and files because their SonicWall® Global VPN Client and SSL VPN client licenses were inadequate. Even after they upgraded the licenses, the performance of getting folders and files through the Z: drive was poor, and the network was choked up. I told them that, regardless, the SMB protocol behind the NAS shared folders is chatty and generates a lot of traffic on the VPN, compounded by the inadequacies of running it over the wide-area Internet. Staff productivity obviously nosedived.

We are now exploring putting their work in the cloud while maintaining a consistent, synchronized set of folders and files at all times. Wasabi® Cloud has emerged as the most attractive, with its price per GB per month and no egress or API request fees.

NAS Drive talking to Cloud Drive like 2 buddies

Now here is a story of 2 Drives

The end user is not an IT-savvy one. They were unfamiliar with cloud storage other than the free personal services like Google Drive or Dropbox. They have more than 200TB, and I introduced them to Wasabi® Cloud. They were very familiar with their Z:, their NAS Drive. So I introduced them to the Cloud Drive.

NAS: Hey, how’s it going?

Cloud: Not bad. My boss and your boss are talking about bringing me and Wasabi® Cloud to join your gang. Hope you are OK with that.

Continue reading

Cloud Sync Prowess of FreeNAS

The COVID-19 situation has driven technology to find new ways to adapt to the new digital workspace. Difficulty with remote access to content files and media assets has disrupted the workflow of practitioners in many business segments. Many are trying to find ways to get files and folders onto their home computers and laptops to do work, when they are used to getting them from the regular NAS shared drives.

These challenges have pushed hybrid cloud file sharing to the forefront, making it the best possible option for accessing the NAS folders and files inside and outside the boundaries of the company's network. However, end users feel pressured to invest in new technologies to adjust to this new normal. It does not have to be this way, because FreeNAS™ (and in that respect TrueNAS®) has plenty of cloud help to offer. Most of the features are Free!

TrueNAS CORE replacing FreeNAS™ in version 12.0

[ Note: FreeNAS™ will become TrueNAS® CORE in release 12. The news was announced 2 months ago ]

FreeNAS™ Cloud Sync

One of the underrated features of FreeNAS™ is Cloud Sync. It was released in version 11.1, and it is invaluable in extending hybrid cloud file sharing to the masses. Cloud Sync makes the shares available to public cloud services such as AWS S3, Dropbox, Google Cloud Storage, Google Drive, Microsoft® Azure Blob Storage, Microsoft® OneDrive, pCloud, Wasabi™ Cloud and more. This means that the files and folders used within the NAS space on the LAN can be synchronized with, and used through, the public cloud services mentioned.

There are 2 steps to set up Cloud Sync, with a rough CLI equivalent sketched after the list:

  • Add the Cloud Credentials for the cloud provider to use
  • Create the Cloud Sync Task
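Under the hood, Cloud Sync is driven by rclone, so a rough CLI equivalent of a “push” task to Wasabi™ might look like this sketch (the remote name wasabi, the bucket and the dataset path are placeholders; the remote would be set up beforehand with rclone config as an S3 remote pointing at s3.wasabisys.com):

# preview the changes first, then push the NAS dataset to the bucket
rclone sync /mnt/tank/projects wasabi:my-bucket/projects --dry-run
rclone sync /mnt/tank/projects wasabi:my-bucket/projects --progress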

Continue reading

FalconStor Software-Defined Data Preservation for the Next Generation

FalconStor® Software is gaining momentum. Given its arduous climb back to the fore, it is beginning to soar again.

Tape technology and Digital Data Preservation

I have mentioned before that long-term digital data preservation is a segment within the data lifecycle which has merit and prominence. SNIA® has shown that this is a strong, growing market segment through its 2007 and 2017 “100 Year Archive” surveys. The 3 critical challenges of this long, long-term digital data preservation are to keep the archives:

  • Accessible
  • Undamaged
  • Usable

For the longest time, tape technology has been the king of the hill for digital data preservation. The technology is cheap and mature, and many enterprises have built their long-term strategy around it. And the pulse of the tape technology market is still very healthy.

The challenges of tape remain. Every 5 years or so, companies have to consider moving the data on their existing tape technology to the next generation. It is widely known that an LTO drive can read tapes from the previous 2 generations and write to tapes one generation back; an LTO-7 drive, for example, can read LTO-5 and LTO-6 tapes and write to LTO-6 media. This tape transcription process of migrating digital data for the sake of preservation is bad because it puts the structural integrity and the quality of the data's content at risk.

In my time covering Oil & Gas subsurface data management, I saw NOCs (national oil companies) with 500,000 tapes of all generations, from 1/2″ to DDS, DAT to SDLT, 3590 to LTO 1-7. Millions are spent to transcribe these tapes every few years, and folks like Katalyst DM, Troika and more hover over this landscape for their fill.

Continue reading