Ridding consumer storage mindset for Enterprise operations

I have cut my teeth in Enterprise Storage for 3 decades. On and off, I get the opportunity to work on Cloud Storage as well, mostly the more structured storage infrastructure services such as block and file, in cloud offerings on AWS, Azure and Alibaba Cloud. I am familiar with S3 operations (mostly the CRUD operations and HTTP headers stuff) too, although I have yet to go deep with the S3 RESTful API. And I really want to work with S3 Select when the opportunity arises. (Note: Homelab project to-do list)

Along with the experience comes the enterprise mindset of designing and crafting storage infrastructure and data management practices that revolve around data. Understanding the characteristics of data and the behaviours of data in motion is part of my skills repertoire, and I continue to have conversations with organizations, small and large alike, every day of the week.

This week’s blog was triggered by a Tech Republic® article – Jack Wallen’s interview with Fedora project leader Matthew Miller. I have been craning my neck waiting for the full release of Fedora 36 (which has now been pushed to May 10th 2022), and the Tech Republic® article, “The future of Linux: Fedora project leader weighs in”, touched me. Let me set the context of my expanded commentaries here.

History of my open source experience – bringing Enterprise to the individual

I have been working with open source software for a long time. My first Linux experience was Soft Landing Linux in the early 90s. It was a bunch of diskettes I purchased online while dabbling with FreeBSD® on the side. Even though my day job was on SunOS, and later Solaris®, getting the opportunity to build stuff and learn the enterprise ways with Sun Microsystems® hardware and software was difficult in my homelab. I did bring home a SPARCstation® 2 once, but the CRT monitor almost broke my computer table at the time.

Having open source software on the 386i (before x86) architecture was great (no matter how buggy it was) because I got to learn hardcore enterprise technology at home. I am a command line person, so the desktop experience does not bother me much because my OS foundation is there. Open source gave me a world in which I could master my skills as an individual. And for an individual like me, my mindset has always been on the Enterprise.

The Tech Republic interview and my reflections

I know the journey open source OSes have taken at the server (aka Enterprise) level. They are great, and they are getting better and better. But at the desktop (aka consumer) level, the Linux desktop experience has been an arduous one, even though the open source Linux desktop experience is so much better now. This interview reflected on that.

A few significant points were brought up. The poignant moments were the discussions about free software in open source projects, and about how consumers glaze over the cosmetics of open source software (if I understood Matthew Miller correctly) without grasping the deeper, more meaningful objectives of the software. Those left me feeling empty. Many assume that just because software is open source, it should be free or low cost, and they continue to apply a consumer mindset to the delivery and the capability of the software.

Case in point: I have been seeing many TrueNAS®/FreeNAS™ individuals downloading the free software and using it in consumer ways. That is perfectly fine, but when they want to migrate their consumer experience with the TrueNAS® software to their critical business operations, things suddenly do not look so rosy anymore. From my experience of having built enterprise-grade storage solutions with open source software like ZFS on OpenSolaris/OpenIndiana, FreeNAS™ and TrueNAS® for over a decade, plus plenty of experience on many proprietary and software-defined storage platforms over this 30-year career, consumer mindsets do not work well in enterprise missions.

And over the years, I have seen this newer generation of infrastructure people taking less and less interest in learning the enterprise ways or diving deep into the workings of the open source platforms I have mentioned. Yet they have lofty enterprise expectations while carrying a consumer mindset. More and more, I am seeing a greying crew of storage practitioners with enterprise experience dealing with a new generation of organizations and end users with consumer practices and mindsets.

Open Source Word Cloud

Continue reading

Is Software Defined right for Storage?

George Herbert Leigh Mallory, mountaineer extraordinaire, was once asked “Why did you want to climb Mount Everest?”, to which he replied “Because it’s there”. That retort demonstrated the indomitable human spirit and probably best exemplified the human desire to conquer the physical limits of nature. The software of humanity versus the hardware of the planet Earth.

Juxtaposing the two, similar parallels can be drawn between software and hardware in computer systems, and in storage technology in particular. There are a few schools of thought when it comes to delivering storage services, with the notable ones being the storage appliance model and the software-defined storage model.

There are arguments, of course. Some are genuinely partisan, but many a time these arguments come in the form of the flavour of the moment. I have experienced past companies of mine touting the storage appliance model very strongly in the beginning, only to switch to a “software company” chorus years after that. That is what I meant by the “flavour of the moment”.

Software Defined Storage

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which could only handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.

NAS – defined by the CIFS/SMB and NFS protocols – was happily chugging along on the 100 Mbit/sec networks of the day, and occasionally getting sucked into the arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were already going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”. Continue reading

Praying to the hypervisor God

I was reading a great article by Frank Denneman about storage intelligence moving up the stack. It was pretty much in line with what I have been observing in the past 18 months or so: the storage pendulum has swung back to DAS (direct attached storage). To be more precise, the DAS form factor I am referring to is physical server hardware that houses many disk drives.

Like it or not, the hypervisor has become the center of the universe in the IT space. VMware has become the indomitable force in hypervisor technology, with Microsoft Hyper-V playing catch-up. The seismic shift toward these 2 hypervisor technologies is leading storage vendors to place them on the altar and revere them as deities. The others, the likes of Xen and KVM, and to a lesser extent Solaris Containers, aren’t really worth mentioning.

This shift, as the pendulum swings from networked storage back to internal “direct-attached” storage, is dictated by 4 main technology factors:

  • The x86 server architecture
  • Software-defined
  • Scale-out architecture
  • Flash-based storage technology

Anyone remember Thumper? Not the Disney character from the Bambi movie!


When the SunFire X4500 (aka Thumper) was first released (intermission: checking Wiki for the right year) in 2006, I felt it inflicted a significant wound on the networked storage industry. Instead of the usual 4-8 hard disk drives in all the industry servers of the time, the X4500 4U chassis housed 48 hard disk drives. The design and architecture were so astounding to me that I even went and bought a 1U SunFire X4150 for my personal server collection. Such was my adoration for Sun’s technology at the time.

Continue reading

ARC reactor also caches?

The fictional arc reactor in Iron Man’s suit was the epitome of coolness for us geeks. In the latest edition of Oracle Magazine, Iron Man is on the cover, as are the other 5 Avengers in a limited edition series of covers.

At just about the same time, I have been reading up on ARC (Adaptive Replacement Cache), the caching algorithm adopted in ZFS. I am learning in depth how ZFS caching works as opposed to the more popular LRU (Least Recently Used) caching algorithm used in most storage cache memory. Having said that, most storage vendors employ a modified LRU algorithm, with the intention of keeping the most recently accessed pages in memory for as long as possible. This is true of NetApp’s Data ONTAP (maybe not ONTAP GX, with which I have little experience) and EMC’s FLARE OE. ONTAP goes further by keeping the most frequently accessed pages permanently in memory. EMC folks would probably refer to most recently accessed as spatial locality while most frequently accessed as temporal locality.
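If you want to see the ARC at work, Solaris and illumos-based systems running ZFS expose the ARC counters through the kernel statistics framework. Here is a quick sketch (the exact statistic names vary between releases, so treat the egrep pattern as illustrative rather than definitive):

# kstat -m zfs -n arcstats | egrep "size|c_max|mru_hits|mfu_hits"

Watching how the MRU (recency) hits and MFU (frequency) hits shift over time gives a feel for how the ARC balances the two lists, which is exactly the adaptive behaviour a plain LRU cache does not have.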

Why is ZFS using ARC and what is ARC? Continue reading

Phoenix rising from OpenSolaris ashes

I got a little nostalgic over the weekend. As I was working on Solaris 11 x86 over the past few weeks, I got a little bit peeved about how much Oracle has changed the OS.

Commands like ifconfig do not appear to be very functional anymore; instead, ipadm has taken over most of the configuration options. And when I was working with Jumpstart (damn!), it did not work the way I knew anymore. Now AI (Automated Installer) has taken over from Jumpstart, and I have to relearn the whole what-cha-ma-callit. Dang!
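To give a flavour of the change, here is roughly how bringing up an interface looks with ipadm in Solaris 11, compared to the old one-line ifconfig plumb-and-address habit. This is just a sketch; the interface name net0 and the address are placeholders for illustration:

# ipadm create-ip net0
# ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
# ipadm show-addr

It is logical once you get used to it, but the muscle memory from two decades of ifconfig does not retrain itself overnight.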

I remember the day when Solaris x86 first came out in the early 90s. I was ecstatic because I could finally test and run Solaris on the x86 platform. I could get things running at home and have fun with it. Drivers were limited then (and still are, though things have gotten much better), but I was happily hacking away alongside other Linux distros as the open source revolution was just beginning. After I joined NetApp, things started to change and I abandoned Solaris in favour of Linux, as my job, as well as my interest, was on Linux, especially RedHat. I eventually got my RHCE and completely lost touch with Solaris. By 2005, when OpenSolaris was announced under the CDDL (Common Development and Distribution License), I was no longer well versed with the developments of Solaris and OpenSolaris.

Enough about my nostalgia, because I am beginning to see a young phoenix (a mythical firebird) rising from the mess of what Oracle did with OpenSolaris! Since Oracle purchased Sun in 2010, Oracle has practically burned OpenSolaris to ashes. On August 13, 2010, Oracle announced the end of OpenSolaris in an internal memo, which read:

Solaris Engineering,

Today we are announcing a set of decisions regarding the path to
Solaris 11, and answering key pending questions on open source, open
development, software and binary licenses, and how developers and
early adopters will be able to use Solaris 11 technology before its
release in 2011.

As you all know, the term “OpenSolaris” has been used colloquially to
refer to any or all of a collection of source code, a development
model, a web site, a logo, a binary release, a source license, a
community, and many other related things. So it’s taken a while to go
over each issue from an organizational and business perspective, and
align on the correct next step. Therefore, please take the time to
read all of the detail here carefully. We’ll discuss our strategy
first, and then the decisions and changes to our policies and
processes that implement that strategy.

If you want the entire memo (and all the fa-lah-lah that goes with it), go to Steven Stallion’s blog. Incidentally, Steven Stallion was the OpenSolaris kernel developer who leaked the memo into the open.

It became pretty obvious that Oracle’s business-suit culture and its “is this going to make money?” ways were suffocating the talents and innovations of the Sun engineering tribes. Some of the high profile leavers were James Gosling (father of Java) and Jeff Bonwick (father of RAID-Z, who led the ZFS development team at Sun). And there was an exodus of many top talents within 90-120 days of the Oracle acquisition.

The key technologies that went into OpenSolaris (and Solaris) were slowly but surely deprived of their inventors’ and maintainers’ nourishment. These technologies were:

  • ZFS (Project Pacific)
  • DTrace
  • Zones (aka Solaris Containers, aka Project Kevlar)
  • Fault Management Architecture (FMA)
  • Service Management Facility (SMF)
  • Advanced Network Virtualization (Project Crossbow)
  • Least-privilege

and many more. Some of these technologies were already open under the CDDL license, but some were still very much proprietary to Sun (I mean, Oracle). It was difficult to use what was available under the OpenSolaris CDDL license to rebuild again, especially when the inventors, talents and maintainers were now all scattered across companies like Delphix, Nexenta, Greenbytes, Joyent and so on.

At the end of last year, shortly before Solaris 11 was announced by Oracle, the people who are passionate about OpenSolaris (and Solaris) got together in full force again. Dubbed “Project Illumos”, the key people who had developed for Sun convened to build a new open-source, Solaris-based operating environment. The proprietary bits closely guarded by Oracle are to be either rebuilt from scratch or ported from BSD into the last OpenSolaris kernel before Oracle killed it. That kernel was Solaris Nevada, which was supposed to be the successor of Solaris 10.

The Illumos team already has a bootable and working operating environment, and new developments are going on at a frantic pace. In the words of Bryan Cantrill (father of DTrace), now VP of Engineering at Joyent,

“illumos was not designed to be a fork, but rather an entirely open downstream repository of OpenSolaris”

And the talents congregating around the Illumos project (like moths to a flame) are super-stellar. Just have a look at this list:

  • ZFS –> Matt Ahrens, Eric Schrock, George Wilson, Adam Leventhal, Bill Pijewski and Brendan Gregg
  • SMF –> Dan McDonald and Sumit Gupta
  • DTrace –> Bryan Cantrill, Adam Leventhal, Brendan Gregg, Eric Schrock, Dave Pacheco
  • Zones & Jumpstart –> Jerry Jelinek
  • and many, many more.

KVM (the Linux kernel-based virtual machine) is being added into the Illumos operating environment, giving it the final piece of the puzzle.

I cannot help but feel extremely proud that OpenSolaris (and Solaris) is not dead yet; it is alive and rising. Oracle cannot lay claim to the source code and the rights of Illumos (according to Bryan Cantrill) without itself abiding by the CDDL licensing and distribution scheme that it killed off a year ago.

And this is indeed the young phoenix rising!

Solaris virgin again!

This week I went off the beaten track to get back to my first love – Solaris. Now that Oracle owns it, it shall be known as Oracle Solaris. I am working on a small project based on (Oracle) Solaris Containers and I must say, I am intrigued by it. And I felt good punching the good ol’ command lines in Solaris again.

Oracle actually offers a lot of virtualization technologies – Oracle VM, Oracle VM Dynamic Domains, Oracle Solaris Logical Domains (LDOMs), Oracle Solaris Containers (aka Zones) and Oracle VirtualBox. Other than VirtualBox, these VE (Virtualized Environment) solutions are enterprise offerings, but unfortunately they lack the pizzazz of VMware at this point in time. From my perspective, they are also very Oracle/Solaris-centric, making them less appealing to the industry at this moment.

Here’s an old Sun diagram of Sun’s virtualization solutions:

What I am working on this week is Solaris Containers, or Zones. The Containers solution is rather similar to VMware’s gamut of Tier-2, host-based virtualization solutions. Solutions that fall into this category are VMware Server, VMware Workstation, VMware Player, VMware ACE and VMware Fusion for MacOS. Therefore, a host OS is required to run the Solaris Containers.

I did not have the Solaris Resource Manager software to run the GUI stuff, so I had to get back to basics with the CLI, which is good for me. In fact, I liked it even more, and with the CLI I could pretty much create zones with ease. And given that the host OS is Solaris 10, I could instantly feel the robustness, the performance, the stability and the power of Solaris 10, unlike the flakiness of Windows hosting VMware’s host-based virtualization solutions or the iffiness of Linux.

A more in-depth look at Solaris Containers/Zones is shown below.

At first touch, 2 things impressed me:

  • The isolation of each Container and its global master domain is very well defined. What can be done and what cannot be done, what can be configured and what cannot, is very clear, and the configurability of each parameter is quickly acknowledged and controlled by the Solaris kernel. From what I read, Solaris Containers has achieved the highest level of security with its Trusted Extensions component, which is a re-implementation of Trusted Solaris. Solaris 10 has received the highest commercial level of Common Criteria Certification, known as EAL4+, which has been accepted by the U.S. DoD (Department of Defense).
  • Its simplicity in administering compute and memory resources for the Containers. I will share that in the CLI examples later.

To start, we acknowledge that there is a global zone that was created when Solaris 10 was first installed.

 

Creating a zone and configuring it with the CLI is pretty straightforward. Here’s a glimpse of what I did yesterday.

# zonecfg -z perf-rac1

Use ‘create’ to configure the zone:

zonecfg:perf-rac1> create

zonecfg:perf-rac1> set zonepath=/rpool/perfzones/perf-rac1

zonecfg:perf-rac1> set autoboot=true

zonecfg:perf-rac1> remove inherit-pkg-dir dir=/lib

zonecfg:perf-rac1> remove inherit-pkg-dir dir=/sbin

zonecfg:perf-rac1> remove inherit-pkg-dir dir=/usr

zonecfg:perf-rac1> remove inherit-pkg-dir dir=/usr/local

zonecfg:perf-rac1> add net

zonecfg:perf-rac1:net> set address=<input from parameter>

zonecfg:perf-rac1:net> set physical=<bge0|or correct Ethernet interface>

zonecfg:perf-rac1:net> end

zonecfg:perf-rac1> add dedicated-cpu

zonecfg:perf-rac1:dedicated-cpu> set ncpus=2-4 (or any potential cpus on sun box)

zonecfg:perf-rac1:dedicated-cpu> end

zonecfg:perf-rac1> add capped-memory

zonecfg:perf-rac1:capped-memory> set physical=4g

zonecfg:perf-rac1:capped-memory> set swap=1g

zonecfg:perf-rac1:capped-memory> set locked=1g

zonecfg:perf-rac1:capped-memory> end

zonecfg:perf-rac1> verify

zonecfg:perf-rac1> commit

zonecfg:perf-rac1> exit

The command zonecfg -z <zonename> drops me into a configuration prompt where I run create to create the zone. I set the zonepath to specify where the zone files will be contained, and set autoboot=true so that the zone will automatically start after a reboot.

Solaris Containers are pretty cool in that a zone can either inherit (share) common directories such as /usr, /lib and /sbin from the global zone, or create its own set of directories separate from the global root directory tree. Here I chose to remove the inheritance and allow the Solaris instance in the Container to have its own independent directories (a whole root zone rather than a sparse root zone).

The command add net sends me into another sub-scope where I can configure the network interface as well as the network address. Nothing spectacular there. I end that part of the configuration and then do a couple of cool things related to resource management.

I have added dedicated-cpu with ncpus=2-4, and also capped-memory with physical=4g, swap=1g and locked=1g. What I have done is allocate a minimum of 2 CPU resources and a maximum of 4 CPU resources (if resources permit) to the zone called perf-rac1. Additionally, I have allowed it a memory cap of at most 4GB of RAM, with 1GB of RAM locked (assured). Swap space is set at 1GB.

This resource management allows me to build a high performance Solaris Container for Oracle 11g RAC. Of course, you are free to create as many containers as you like, as long as the system resources allow it. Note that I did not include the shared memory and semaphore parameters required for Oracle 11g RAC, but go ahead and consult your favourite Oracle DBA (have fun doing so!).
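Before going further, it does not hurt to read back what has been committed. zonecfg can print the zone’s configuration, which makes for a quick sanity check of the CPU and memory settings above:

# zonecfg -z perf-rac1 info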

After the perf-rac1 zone/container has been created (and configured), I just need to run the following:

# zoneadm -z perf-rac1 install

# zoneadm -z perf-rac1 boot

These 2 commands install the zone and then boot it up. The install copies all the packages from the global zone and runs the installation as per normal. Once the “installation” is complete, there will be the usual Solaris configuration screens where information such as timezone, IP address, root login/password and so on is entered. That will take about 20-40 minutes, depending on the amount of things to be installed and, of course, the power of the Sun system. I am running an old Sun V210 with 512MB, so it took a while.
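Along the way, zoneadm can show where the zone is in its lifecycle as it moves from configured to installed to running:

# zoneadm list -cv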

When it’s done, we can just log into the zone with the command

# zlogin -C perf-rac1

and I get into another Solaris OS in the Solaris Container.
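Back in the global zone, a quick way to see whether the CPU and memory caps are behaving is prstat with its per-zone view (run it while the zone is doing some work). A small sketch:

# prstat -Z
# prstat -z perf-rac1

The first gives a summary line per zone; the second restricts the display to the processes running inside perf-rac1.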

What I liked was the fact that Solaris Containers are rather simple to understand, yet the flexibility in configuring computing resources for them is pretty impressive. It’s fun working on this stuff again after years away from Solaris. (This was after I took my RedHat RHCE certification, and I had pretty much left Sun Solaris for quite a while.)

More testing to be done, but overall I am quite happy to be back as a Solaris virgin again.