My 2-day weekend with Nextcloud on FreeNAS

In recent weeks, I have been asked by friends and old customers how to extend their NAS shared drives to support work-from-home, the new reality. Malaysia went into a full lockdown on June 1st, just several days ago.

I have written about file synchronization stories before, but I have never done a Nextcloud blog. I have little experience with the TrueNAS® CORE Nextcloud plugin, and this was a good weekend to build it up from scratch in VirtualBox with FreeNAS™ 11.2U5 (because my friend was using that version).

[ Note ] FreeNAS™ 11.2U5 has been EOLed.

Nextcloud login screen

So, here is how it went for my little experiment. FYI, this is not a how-to guide. That will come later, after I have put all my notes together with screenshots and all. This is just a collection of my thoughts while setting up Nextcloud on FreeNAS™.

Dropbox® is expensive

Using cloud storage with file sync and share capability is not exactly cheap, especially when you are a small or medium-sized business, a school or a charity organization. Here is the pricing table for Dropbox® for Business:

Dropbox for business pricing

I am using Dropbox® as the example here, but the same can be said of OneDrive, Google Drive and others. The costs can quickly add up when the price is calculated per user per month.
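
To put a number on it, here is a minimal sketch of how a per-user, per-month subscription scales with headcount. The USD$15 per user per month figure is an assumption purely for illustration; check the vendor's current price list for real numbers.

```python
# A minimal sketch of per-user, per-month subscription costs over a year.
# The price below is an assumed figure for illustration only.
PRICE_PER_USER_PER_MONTH = 15.00  # assumed list price in USD

def annual_cost(users: int, price: float = PRICE_PER_USER_PER_MONTH) -> float:
    """Yearly subscription cost for a given number of users."""
    return users * price * 12

for headcount in (10, 50, 200):
    print(f"{headcount:>4} users -> USD${annual_cost(headcount):,.2f} per year")
```

Even at a modest headcount, the bill climbs into the tens of thousands of dollars a year, which is the point of the pricing table above.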

Continue reading

Is Software Defined right for Storage?

George Herbert Leigh Mallory, mountaineer extraordinaire, was once asked “Why do you want to climb Mount Everest?”, to which he replied “Because it’s there”. That retort demonstrated the indomitable human spirit, and probably best exemplified the human desire to conquer the physical limits of nature. The software of humanity versus the hardware of planet Earth.

Juxtaposing the two, similarities can be drawn between software and hardware in computer systems, and in storage technology in particular. There are a few schools of thought when it comes to delivering storage services, with the notable ones being the storage appliance model and the software-defined storage model.

There are arguments, of course. Some are genuinely partisan, but many a time these arguments are simply the flavour of the moment. I have seen my past companies tout the storage appliance model very strongly in the beginning, only to switch to a “software company” chorus years later. That is what I mean by the “flavour of the moment”.

Software Defined Storage

Continue reading

The hot cold times of HCI

Hyperconverged Infrastructure (HCI) is a hot technology. It has been for the past decade, ever since Nutanix™ took the first mover advantage from the Converged Infrastructure (CI) technology segment and made it pretty much its own for a while.

Hyper Converged Infrastructure

But the HCI market (not the technology) is a strange one. It is hot. It is cold. The perennial leader, Nutanix™, has yet to eke out a profitable year. VMware® is strong in the market. Cisco™, which was hot with its HyperFlex solution in 2019, was also stopped short by a dismal decline in the IDC Worldwide HCI 2Q2020 tracker below:

IDC Worldwide Hyperconverged Infrastructure Tracker – 2Q2020

dHCI = Disaggregated or discombobulated? 

dHCI is known as disaggregated HCI. The disaggregated part refers to the hardware, especially the storage. Vendors like HPE® with Nimble Storage, Hitachi Vantara, NetApp® and a few more have touted the disaggregation of performance and capacity, the separation of storage and compute, as a value proposition. But on closer inspection, it is just another marketing ploy to attach SAN storage to servers. It is marketing old wine in a new bottle, as rightly pointed out by my friend Charles Chow of Commvault® in his blog.

Continue reading

Layers in Storage – For better or worse

Storage arrays and storage services are built upon layers and layers within their architectures. The physical components, hard disk drives and solid state drives, are abstracted into RAID volumes and virtualized into other storage constructs before they are exposed as shares/exports, LUNs or objects to the network.

Everyone in the storage networking industry is cognizant of these layers; they are the foundation of our knowledge and experience. The public cloud storage services side is the same, albeit more opaque. Nevertheless, both have layers.

In the early 2000s, the SNIA® Technical Council outlined a blueprint of the SNIA® Shared Storage Model, a framework describing the layers and properties of a storage system and its services. It was similar to the OSI 7-layer model for networking. The framework helped many industry professionals and practitioners shape their understanding and develop knowledge in their respective fields. The layering scheme of the SNIA® Shared Storage Model is shown below:

SNIA Shared Storage Model – The layering scheme

Storage vendors layering scheme

While the SNIA® storage layers were generic and open, each storage vendor had its own proprietary implementation of storage layers. Some of these architectures are simple, but some I find a bit too complex and convoluted.

Here is an example of the layers of the Automated Volume Management (AVM) architecture of the EMC® Celerra®.

EMC Celerra AVM Layering Scheme

I would often scratch my head about AVM. Disks were grouped into RAID groups, which were presented as LUNs (Logical Unit Numbers). These were then defined as Celerra® dvols (disk volumes), and stripes of the dvols were consolidated into a storage pool.

From the pool, a piece of storage capacity called a slice volume was combined with other slice volumes into a metavolume, which was eventually presented as a file system to the network and the respective NAS clients. Explaining this took some effort, and I was the IP Storage product manager for EMC® between 2007 and 2009. It was a far cry from the simplicity of the NetApp® ONTAP 7 architecture of RAID groups and volumes, and the WAFL® (Write Anywhere File Layout) filesystem.
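
For those who never worked with it, a plain walk-up of the AVM stack as described above might help. This is purely illustrative Python; the layer names follow my description, and nothing here talks to an actual Celerra®.

```python
# An illustrative walk up the EMC Celerra AVM stack described above.
# Layer names follow the prose; nothing here talks to a real array.
avm_layers = [
    ("physical disks", "grouped into RAID groups"),
    ("RAID groups",    "presented as LUNs"),
    ("LUNs",           "defined as Celerra dvols (disk volumes)"),
    ("dvol stripes",   "consolidated into a storage pool"),
    ("storage pool",   "carved into slice volumes"),
    ("slice volumes",  "combined into a metavolume"),
    ("metavolume",     "presented as a file system to NAS clients"),
]

for level, (layer, action) in enumerate(avm_layers, start=1):
    print(f"Layer {level}: {layer:<15} -> {action}")
```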

Another complicated layered framework I often gripe about is Ceph. Here is a look at how the layers of CephFS are constructed.

Ceph Storage Layered Framework

I work with the OpenZFS filesystem a lot. It is something I am rather familiar with, and the layered structure of the ZFS filesystem is considerably simpler.
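
To show what I mean by simpler, here is a minimal sketch of standing up the OpenZFS layers from the standard CLI, driven from Python. The pool, disk and dataset names are hypothetical, and it assumes a root shell on a host with OpenZFS installed.

```python
# A minimal sketch of the OpenZFS layering: disks -> pool (vdev) -> dataset -> share.
# Pool, disk and dataset names are hypothetical; requires root and OpenZFS installed.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One command builds the pool straight from raw disks (a RAID-Z2 vdev here).
run(["zpool", "create", "tank", "raidz2", "da0", "da1", "da2", "da3"])

# Datasets are carved out of the pool with no LUN or volume plumbing in between.
run(["zfs", "create", "tank/projects"])

# Sharing is just a dataset property, exposed directly to NFS clients.
run(["zfs", "set", "sharenfs=on", "tank/projects"])
```

Three commands take you from raw disks to an NFS export, which is the elegance that won me over.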

Storage architecture mixology

Engineers are bizarre when they get too creative. They have a can-do attitude that sometimes transcends the boundaries of practicality, and boggles many minds. This is what happens when they have their own mixology ideas.

Recently I spoke to two magnanimous persons who had the idea of providing Ceph iSCSI LUNs to the ZFS filesystem in order to use the simplicity of the NAS file sharing capabilities in TrueNAS® CORE. In their own words, the Ceph NAS capabilities sucked. I had to draw their whole idea out in PowerPoint, and this is the architecture I got from the conversation.

There are 3 different storage subsystems here just to provide NAS. As if the Ceph layers were not complicated enough, the iSCSI LUNs from Ceph are presented as Cinder volumes to the KVM hypervisor (or VMware® ESXi) through the Cinder driver. Cinder is the persistent storage volume subsystem of the OpenStack® project. The Cinder volumes/hypervisor datastores are virtualized as vdisks to the respective VMs installed with TrueNAS® CORE and the OpenZFS filesystem. From TrueNAS® CORE, shares and exports are provisioned via the SMB and NFS protocols to Windows and Linux clients respectively.
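
To make the layering obvious, here is a purely descriptive sketch of the path a single file write takes through that stack. Nothing is queried live; the hop names simply mirror the drawing described above.

```python
# A descriptive trace of a file write through the stack described above.
# The hop names mirror the prose; nothing here talks to the real systems.
write_path = [
    "Windows/Linux client (SMB or NFS)",
    "TrueNAS CORE VM - OpenZFS dataset",
    "Virtual disk (vdisk) on the hypervisor datastore",
    "KVM or ESXi hypervisor",
    "OpenStack Cinder volume (via the Cinder driver)",
    "Ceph iSCSI gateway LUN",
    "Ceph RBD image on the OSDs",
]

print(f"A single write crosses {len(write_path)} hops across 3 storage subsystems:")
for hop, layer in enumerate(write_path, start=1):
    print(f"  hop {hop}: {layer}")
```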

It works! As I was told, it worked!

A.P.P.A.R.M.S.C. considerations

Continuing from the layered framework described above for NAS, other aspects besides the technical work have to be considered, even when it works technically.

I often use a set of diligent data storage focal points when considering a good storage design and implementation. This is the A.P.P.A.R.M.S.C. Take, for instance, Protection as one of the points, with snapshots as the technology to use.

Snapshots can be executed at the ZFS level on the TrueNAS® CORE subsystem. Snapshots can be triggered at the volume level in the OpenStack® subsystem and, likewise, as rbd snapshots at the Ceph subsystem. The question is, which snapshot at which storage subsystem is the most valuable to the operations and the business? Do you run all 3 snapshots? How do you execute them in succession in a scheduled policy?
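
Just to illustrate what “running all 3 snapshots in succession” would actually involve, here is a minimal sketch that drives the three standard CLIs in order. The dataset, volume and image names are hypothetical, and it assumes the zfs, openstack and rbd command-line tools are installed, reachable and authenticated from wherever the scheduler runs (in reality the zfs call would likely go over SSH to the TrueNAS® CORE VM).

```python
# A minimal sketch of triggering snapshots in succession across the 3 subsystems.
# Dataset, volume and image names are hypothetical; assumes the zfs, openstack
# and rbd CLIs are available and authenticated from the scheduling host.
import subprocess
from datetime import datetime

STAMP = datetime.now().strftime("%Y%m%d-%H%M%S")

def snap(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Top of the stack: the OpenZFS dataset inside the TrueNAS CORE VM.
snap(["zfs", "snapshot", f"tank/shares@auto-{STAMP}"])

# 2. The Cinder volume backing the VM's vdisk, via the OpenStack CLI.
snap(["openstack", "volume", "snapshot", "create",
      "--volume", "truenas-vdisk-vol", f"auto-{STAMP}"])

# 3. The RBD image underneath it all, at the Ceph layer.
snap(["rbd", "snap", "create", f"rbd/truenas-vdisk@auto-{STAMP}"])
```

Three schedulers, three retention policies and three failure modes, for what a single storage subsystem would normally do with one.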

In terms of performance, can it truly maximize its potential? Can it churn out the best IOPS, and deliver at wire speed? What is the latency we can expect with so many layers from 3 different storage subsystems?

And supporting this said architecture would be a nightmare. Where do you even start the troubleshooting?

Those are just a few of the considerations and questions to think about when such a layered storage architecture comes along. IMHO, such a design is over-engineered. I was tempted to say, “Just because you can, doesn’t mean you should”.

Elegance in Simplicity

Einstein (I think) once said:

Einstein’s quote on simplicity and complexity

I am not saying that having too many layers is wrong. A heavily layered architecture works for many storage solutions out there, where it is often masked with a simple and intuitive UI. But in yours truly’s point of view, as a storage architecture enthusiast and connoisseur, there is beauty and elegance in simple designs.

The purpose here is to promote better understanding of the storage layers, and how they integrate and interact with each other to deliver the data services to the network. In the end, that is how most storage architectures are built.


Fueling the Flywheel of AWS Storage

It was bound to happen. It happened. AWS Storage is the Number 1 Storage Company.

The telltale signs were there when Silicon Angle reported that AWS Storage revenue was around USD$6.5-7.0 billion last year and will reach USD$10 billion by the end of 2021. That news was just a month ago. Last week, IT Brand Pulse went a step further, declaring AWS Storage the Number 1 in terms of revenue. Both have the numbers to back it up.

AWS Logo

How did it become that way? How did AWS Storage become numero uno?

Flywheel juggernaut

I became interested in the Flywheel concept some years back. It was conceived in Jim Collins’ book “Good to Great” almost 20 years ago, and since then, Amazon.com has become the real-life enactment of the Flywheel concept.

Amazon.com Flywheel – How each turn becomes sturdier, brawnier.

Every turn of the flywheel requires the same amount of effort, although in the beginning the noticeable effect is minuscule. But as each turn gains momentum, the returns scale greater and greater relative to the fixed effort of operating a single turn.

Continue reading

Storageless shan’t be thy name

Storageless??? What kind of tech jargon is that???

This latest jargon irked me. Storage vendor NetApp® (through its acquisition of Spot) and Hammerspace, a metadata-driven, storage-agnostic orchestration technology company, have begun touting the “storageless” tech jargon in the hope that it will become an industry buzzword. Once again, the hype cycle jargon junkies are hard at work.

Clear, nondescript storage containers

It is obvious that the storageless jargon wants to ride on the hype of serverless computing, an abstraction method of computing resources where the allocation and consumption of resources are defined by pieces of programmatic code of the running application. The “calling” of the underlying resources is based on the application’s code, thus rendering the computing resources invisible, insignificant and not sexy.

My stand

Among the 3 main infrastructure technologies of compute, network and storage, storage technology is a bit of a science and a bit of dark magic. It is complex, and that is what makes storage technology so beautiful. The constant innovation and technology advancement continue to make storage, as a data services platform, relentlessly interesting.

Cloud, Kubernetes and many data-as-a-service platforms require strong persistent storage. As defined by the NIST Definition of Cloud Computing, 4 of the 5 tenets (on-demand self-service, resource pooling, rapid elasticity, measured service) demand that storage be abstracted. Therefore, I am all for the abstraction of storage resources from the data services platform.

But the storageless jargon is doing a great disservice. It is not helping. It does not lend its weight to glorifying the innovations of storage. In fact, IMHO, it feels like a weighted anchor sinking storage into the deepest depths, invisible, insignificant and not sexy. I am here dutifully to promote and evangelize storage innovations, and I am duly unimpressed with such jargon.

Continue reading

OpenZFS 2.0 exciting new future

The OpenZFS (virtual) Developer Summit ended just over a week ago. I stayed up a bit (not much) to listen to some of the talks, because it started at midnight my time and ran till 5am on the first day, and 2am on the second day. Like a giddy schoolboy, I was excited, not because I am working for iXsystems™ now, but because I have been a fan and a follower of the ZFS file system for a long time.

History-wise, ZFS was conceived at Sun Microsystems in 2005. I started working on ZFS in 2009, reselling Nexenta (my first venture into business with my company nextIQ), after I was professionally released by EMC early that year. I bought a Sun X4150 from one of Sun’s distributors and started building a lab server. I did not like the workings of NexentaStor (and NexentaCore) very much, and it was priced per 8TB increment. Later, I started my second company with a partner, and it was he who showed me the elegance and beauty of ZFS through the command line. The creed of ZFS, a volume manager and a file system at the same time, experienced through the CLI, had an effect on me. I was in love.

OpenZFS Developer Summit 2020 Logo

Exciting developments

Among the many talks shared at the OpenZFS Developer Summit 2020, there were a few ideas and developments which were exciting to me. Here are 3 which I liked, with some commentary on each; a short sketch of the persistent L2ARC setup follows the list.

  • Block Reference Table
  • dRAID (declustered RAID)
  • Persistent L2ARC
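
As a quick taste of why persistent L2ARC excites me, here is a hedged sketch of attaching a cache device and checking the rebuild tunable so the cache contents survive a reboot. The pool and device names are hypothetical, and the Linux sysfs path for the l2arc_rebuild_enabled parameter is an assumption about a typical OpenZFS 2.0 install.

```python
# A small sketch around persistent L2ARC: add a cache vdev and check the rebuild
# tunable. Pool/device names are hypothetical; the sysfs path is Linux-specific.
import subprocess
from pathlib import Path

# Attach an NVMe device to an existing pool as an L2ARC cache vdev.
subprocess.run(["zpool", "add", "tank", "cache", "nvme0n1"], check=True)

# With persistent L2ARC, the cache is rebuilt from the device's on-disk log after
# a reboot instead of starting cold. Confirm the tunable is on (default is 1).
tunable = Path("/sys/module/zfs/parameters/l2arc_rebuild_enabled")
if tunable.exists():
    print("l2arc_rebuild_enabled =", tunable.read_text().strip())
```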

Continue reading

Kubernetes Persistent Storage Managed Well

[ Disclosure: This is a StorPool Storage sponsored blog ]

StorPool Storage – Distributed Storage

There is rapid adoption of Kubernetes in the enterprise and in the cloud. The push for digital transformation to modernize businesses for a cloud native world in the next decade has lifted both containerized applications and the Kubernetes container orchestration platform to an unprecedented level. The application landscape, especially in the enterprise, is looking at Kubernetes to address these key areas:

  • Scale
  • High performance
  • Availability and Resiliency
  • Security and Compliance
  • Controllable Costs
  • Simplicity

The Persistent Storage Question

Enterprise applications such as relational databases and email servers, and even cloud native ones like NoSQL databases and analytics engines, demand a single source of truth for data. Fundamental properties such as ACID (atomicity, consistency, isolation, durability) and BASE (Basic Availability, Soft state, Eventual consistency) have to have persistent storage as the foundational repository for the data. And thus, persistent storage has rallied under the Container Storage Interface (CSI), which is fast becoming a de facto standard for Kubernetes. At last count, there are more than 80 CSI drivers from 60+ storage and cloud vendors, each providing block-level storage to Kubernetes pods.
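
To make the CSI point a little more concrete, here is a minimal sketch of how a workload asks for that block-backed persistent storage: a PersistentVolumeClaim created with the official Kubernetes Python client. The StorageClass name "csi-block-sc" is hypothetical; substitute whatever class your CSI driver registers.

```python
# A minimal sketch: request CSI-provisioned block storage via a PersistentVolumeClaim.
# The StorageClass name "csi-block-sc" is hypothetical - use your CSI driver's class.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-block-sc",   # hypothetical CSI-backed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("PVC 'demo-data' requested; the CSI driver provisions the backing volume.")
```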

However, at this juncture, Kubernetes is still very engineering-centric. Persistent storage is equally challenging, despite all the new developments and the hype around it.

Continue reading

Storage in a shiny multi-cloud space

The era of multi-cloud for infrastructure-as-a-service (IaaS) is not here (yet), whatever the technology marketers want you to think. The hype, the vapourware, the frenzy: it is what they do. The same goes for technology analysts, who describe visions and futures, and the high-level constructs and strategies to get there. Multi-cloud is often thought of as running applications and infrastructure services seamlessly across several public clouds, such as Amazon AWS, Microsoft® Azure and Google Cloud Platform, and linking them to on-premises data centers and private clouds. Hybrid is the new black.

Multi-Cloud, on-premises, public and hybrid clouds

And the aspiration of multi-cloud is the right one, when it is truly ready. Gartner® wrote a high-level article titled “Why Organizations Choose a Multicloud Strategy”. Taking advantage of each individual cloud’s strengths and resiliency in its respective geographies makes good business sense, but there are many other considerations that cannot be an afterthought. In this blog, we look at a few of them from a data storage perspective.

In the beginning there was … 

For this storage dinosaur, data storage and compute have always been coupled as one. In the mainframe DASD days, these 2 were together. Even with the rise of networking architectures and protocols, from IBM SNA, DECnet, Ethernet & TCP/IP, and Token Ring FC-SAN (sorry, this is just a joke), the SANs and the filers stayed close to the servers, albeit with a buffered network layer in between.

A decade ago, when the public clouds started appearing, data storage and compute were still mostly inseparable. There was a demarcation between public clouds and private clouds. The notion of hybrid clouds meant that public clouds and private clouds could intermix with on-premises computing and data storage, but in almost all cases this was confined to a single public cloud provider. That was until the public cloud providers realized they could not convincingly entice the larger enterprises to move their IT out of their on-premises data centers and into the cloud. So these public cloud providers reversed their strategy and peddled their cloud services back to on-prem. Today, Amazon AWS has Outposts; Microsoft® Azure has Arc; and Google Cloud Platform has launched Anthos.

Continue reading

The instant value of Open Source Storage

[ Full disclosure: I work for iXsystems™ . Opinions and views are mine. ]

TrueNAS Open Storage logo

The story began …

It was 2011. A friend of a friend called me out of the blue. He was rambling about his company’s storage needs. I recall vividly that he wanted 100TB, and Dell and HP (before HPE) were hopeless at doing NAS (network attached storage) in an Apple environment. They assembled a Frankenstein-ish NAS and plastered a price of over MYR$100K on it.

In his environment, the Apple workstations were connected to dozens of WD Cloud Book storage devices (whatever they were called back then), daisy-chained to each other via FireWire. I recall one workstation had 3 WD “books” daisy-chained together. That addressed their exploding storage needs, but performance sucked. With every 2nd or 3rd concurrent user, file access slowed to a snail’s pace, sometimes taking more than 2 minutes to open a file.

At that time, my old colleague at Sun was fervently talking about ZFS and OpenSolaris™. I told him about this opportunity, and so we began. It was he who used the word “crafter”. “We are not building”, he said, “we are crafting”. He was right.

OpenSolaris logo

Continue reading