Down the rabbit hole with Kubernetes Storage

Kubernetes is on fire. Last week VMware® released the State of Kubernetes 2020 report, which surveyed companies with 1,000 employees and above. The results were not surprising, as the adoption of this nascent technology is booming. But persistent storage remains a nagging concern for Kubernetes as it serves infrastructure resources to application instances running in the containers of a pod in a cluster.

The standardization of storage resources has settled with CSI (Container Storage Interface). Storage vendors have almost, kind of, sort of agreed that API objects such as PersistentVolumes, PersistentVolumeClaims and StorageClasses, along with their parameters, would be the way to request storage resources, including pre-provisioned volumes, via the CSI driver plug-in. There are already more than 50 vendor-specific CSI drivers on GitHub.
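To make these API objects concrete, here is a minimal sketch of how an application would claim storage through a CSI-backed StorageClass, using the official Kubernetes Python client. The StorageClass name vendor-csi-sc and the claim name demo-claim are hypothetical placeholders for whatever a vendor's CSI driver registers in your cluster.

```python
# Minimal sketch: requesting storage with a PersistentVolumeClaim bound to a
# (hypothetical) StorageClass "vendor-csi-sc" served by a vendor CSI driver.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vendor-csi-sc",  # assumed vendor CSI StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Once the claim is bound, a pod simply references demo-claim in its volume spec, and the CSI driver handles the provisioning underneath.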

Kubernetes and CSI initiative

Kubernetes and the CSI (Container Storage Interface) logos

The CSI plug-in method is the only way for Kubernetes to scale and keep its dynamic, loadable storage resource integration with external 3rd-party vendors, all clamouring to grab a piece of this burgeoning demand, both in the cloud and in the enterprise.

Continue reading

Falconstor Software Defined Data Preservation for the Next Generation

Falconstor® Software is gaining momentum. Given its arduous climb back to the fore, it is beginning to soar again.

Tape technology and Digital Data Preservation

I mentioned that long-term digital data preservation is a segment within the data lifecycle which has merits and prominence. SNIA® has shown that this is a strong, growing market segment through its 2007 and 2017 “100 Year Archive” surveys. The 3 critical challenges of this long, long-term digital data preservation are to keep the archives

  • Accessible
  • Undamaged
  • Usable

For the longest time, tape technology has been the king of the hill for digital data preservation. The technology is cheap and mature, and many enterprises have built their long-term strategy around it. And the pulse of the tape technology market is still very healthy.

The challenges of tape remain. Every 5 years or so, companies have to consider moving the data on their existing tape technology to the next generation. It is widely known that an LTO drive can read tapes from the previous 2 generations, and write to tapes one generation back. The tape transcription process of migrating digital data for the sake of data preservation is a bad one, because it puts the structural integrity and the quality of the content of the data at risk.
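To see why that read/write window forces periodic transcription, here is a small sketch that simply encodes the rule stated above (read back 2 generations, write back 1); the generation numbers are illustrative.

```python
# Sketch of the LTO compatibility rule stated above: a generation-N drive
# reads tapes from N-2 to N and writes tapes from N-1 to N (assumed rule,
# for illustration only).
def can_read(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 2 <= tape_gen <= drive_gen

def can_write(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 1 <= tape_gen <= drive_gen

# An LTO-5 archive is still readable on an LTO-7 drive, but falls out of the
# window once the installed base moves to LTO-8; hence the recurring
# transcription projects every few generations.
assert can_read(7, 5) and not can_read(8, 5)
assert can_write(7, 6) and not can_write(7, 5)
```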

In my time covering Oil & Gas subsurface data management, I have seen NOCs (national oil companies) with 500,000 tapes of all generations, from 1/2″ to DDS, DAT to SDLT, 3590 to LTO 1-7. Millions are spent to transcribe these tapes every few years, and we have folks like Katalyst DM, Troika and more hovering over this landscape for their fill.

Continue reading

Reap at low tide

[ Note: This article was published on LinkedIn more than 6 months ago. Here is the original link to the article ]

[ Update (Apr 13 2020): Amid the COVID-19 pandemic and restricted movement globally, we can turn our pessimistic outlook into an opportunistic one ]

Nature has a way of teaching us. What works and what doesn’t are often hidden in plain sight, but we humans are mostly too occupied to notice the things that work.

Why are they not spending?

This news appeared in my LinkedIn feed. It read “Malaysian Banks Don’t Spend Enough on Tech“. It irked me immensely, because in a soft economic climate (the low tide), our Malaysian financial institutions should be spending more on technology (reaping the opportunity) to get ahead.

Why are the storks and the egrets in my page photo above waiting and wading in the knee-deep waters? Because at low tide, when the waves ebb, food is exposed to them abundantly. They scurry for shrimps, small crabs, cockles, mussels and more. This is nature’s way.

From the report, the technology spending average among the Malaysian banks is pathetic.


The negative domino effect on SMEs

When the banks are not spending on technology, other industries, especially the SMEs (small and medium enterprises), follow suit. The “penny pinching” and “tightening purse strings” effect permeates across industries, slowly but surely sending tech spending into a volatile, negative spin-cycle.

From a macro-economic point of view, spending slows down. Buying less means less demand and, effectively, lower supply, and it rolls on. The law of supply and demand just got dumped into an abyss.

A great opportunity for those who see it

When I was an engineer at Sun Microsystems more than 2 decades ago, I read a comment delivered by one of the executives. It said “When times are bad, those who know will get the best parts“. I took his comment to heart because what he said holds true, even today.

This downturn the country is experiencing is the best time. When competitors are holding back, and may be reeling from the negative effects of the economy, the banks are in the best position to grab the best deals. This is the time to gain market share, while the competition holds back for fear that the economy will soften further.

Furthermore, with low interest rates across the board, there is no better time than the present to step up tech spending. Banks should know this very well, yet I am perplexed that they hold back.

That is why the Malaysian banks must kick-start their tech spending campaign now. The SMEs will follow, overturning the downturn with demand for the best “parts”. The supply “factories” will be fired up again, leading to positive growth in the economy.

Bank Negara RMiT is that one opportunity

One thing which has been looming is the RMiT (Risk Management in Technology) framework from Bank Negara, Malaysia’s central bank. A new version was released in July 2019, and to me as an outsider, it is a great opportunity to grab the best parts. Some of these standards will come into effect in January 2020.

Bank Negara is strongly encouraging banks to improve the security of, and the confidence in, the country’s financial industry, and the RMiT framework is really a prod to increase tech spending. Unfortunately, in some of my business interactions with a few of the banks, foot-dragging is prevalent.

Nature’s lesson

The best time to have your best pick is at low tide. This is nature’s lesson for us. What are we waiting for?

Iconik Content Management Solutions with FreeNAS – Part 2

[ Note: This is still experimental and should not be taken as production material. I took a couple of days over the weekend to “muck” around with the new iconik plug-in in FreeNAS™ to prepare it as a possible future solution. ]

This part is the continuation of Part 1 posted earlier.

iconik partnered with iXsystems™ almost a year ago. iconik is a cloud-based media content management platform. Its storage repository has many integrations with public cloud storage services such as Google Cloud, Wasabi® Cloud and more. The on-premises storage integration is made through the iconik storage gateway, which presents itself to FreeNAS™ and TrueNAS® via plug-ins.

For a limited time, you get free access to iconik via this link.

iconik – The Application setup

[ Note: A lot of the implementation details come from this iXsystems™ documentation by Joe Dutka. This is an updated version for the latest 11.3 U1 release ]

iconik is feature rich, and navigating it to set up the storage gateway can be daunting. Fortunately, the iXsystems™ documentation was extremely helpful. It also helps to think of this as a 2-step approach so that you won’t get overwhelmed by what is happening.

  • Set up the Application section
    • Get Application ID
    • Get Authorization Token
  • Set up the Storage section
    • Get Storage ID

These 3 credentials (Application ID, Authorization Token, Storage ID) are required to set up the iconik Storage Gateway in the FreeNAS™ iconik plug-in, and can be sanity-checked as sketched below.
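Once you have the 3 credentials, a quick way to verify them is to call the iconik REST API directly before touching the plug-in. This is a hypothetical sketch in Python: the base URL, endpoint path and the App-ID/Auth-Token header names are my assumptions about iconik’s API conventions, so check the official iconik API documentation before relying on them.

```python
# Hypothetical sketch: verify the 3 credentials against the iconik REST API.
# Base URL, endpoint path and header names are assumptions; confirm them
# against the official iconik API documentation.
import requests

APP_ID = "<your Application ID>"
AUTH_TOKEN = "<your Authorization Token>"
STORAGE_ID = "<your Storage ID>"

resp = requests.get(
    f"https://app.iconik.io/API/files/v1/storages/{STORAGE_ID}/",
    headers={"App-ID": APP_ID, "Auth-Token": AUTH_TOKEN},
)
print(resp.status_code)  # 200 suggests all 3 credentials line up
print(resp.json())
```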

Continue reading

Iconik Content Management Solutions with FreeNAS – Part 1

[ Note: This is still experimental and should not be taken as production material. I took a couple of days over the weekend to “muck” around with the new iconik plug-in in FreeNAS™ to prepare it as a possible future solution. ]

The COVID-19 situation goes on unabated. A couple of my customers asked about working from home and accessing their content files; coincidentally, both are animation studios. Meanwhile, another opportunity came up asking about a content management solution that would work with the FreeNAS™ storage system we were proposing. Over the weekend, I searched for a solution that would combine both content management and cloud access, and work with both FreeNAS™ and TrueNAS®, and I was glad to find the iconik and TrueNAS® partnership.

In this blog (and Part 2 later), I document the key steps to set up the iconik plug-in with FreeNAS™. I am using FreeNAS™ 11.3U1.

Dataset 777

A ZFS dataset is assigned to be the storage repository for the “Storage Target” in iconik. Since iconik has a different IAM (identity access management) scheme than the user/group permissions in FreeNAS™, we have to give the ZFS dataset Read/Write access to all. That is the 777 permission in Unix speak. Note that there is a new ACL manager in version 11.3, and the permissions/access rights screenshot is shown here.

Take note that this part is important. We have to assign @everyone to have Full Control, because the credentials at iconik are tied to the permissions we set for @everyone. Missing this part will prevent the iconik storage gateway scanner from perusing this folder, and the status will remain “Inactive”.  We will discuss this part more in Part 2.
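The ACL manager in the FreeNAS™ web UI is how I set this up, but for illustration, here is what the equivalent recursive 777 looks like as a small Python sketch. The dataset mount point /mnt/tank/iconik is a hypothetical example path.

```python
# Illustration only: recursively grant read/write/execute to everyone (777)
# on the dataset mount point. The path is a hypothetical example; in practice
# the FreeNAS web UI ACL manager is the documented way to set this.
import os

DATASET = "/mnt/tank/iconik"  # assumed ZFS dataset mount point

os.chmod(DATASET, 0o777)
for root, dirs, files in os.walk(DATASET):
    for name in dirs + files:
        os.chmod(os.path.join(root, name), 0o777)
```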

Continue reading

StorageGRID gets gritty

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in the Silicon Valley USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies presented at the event. The content of this blog represents my own opinions and views ]

NetApp® presented StorageGRID® Webscale (SGWS) at Storage Field Day 19 last month. It was timely, as the general-purpose object storage market, in my humble opinion, was getting disillusioned and was about to deprive itself of the value it was supposed to deliver.

“Cheap and deep“ and “Race to Zero” were some of the less storied calls I have come across when discussing object storage, and they really devalued the merits of object storage as vendors touted their superficial glory of being in the IDC Marketscape for Object-based Storage 2019.

Almost every single conversation I had in the past 3 years was either explaining what object storage is, or fielding the question “That is cheap storage, right?“

Continue reading

Paradigm shift of Dev to Storage Ops

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in the Silicon Valley USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies presented at the event. The content of this blog represents my own opinions and views ]

A funny photo (below) came up on my Facebook feed a couple of weeks back. In an honest way, it depicted how a developer would think (or not think) about the storage infrastructure designs and models for their applications and workloads. It also reminded me of how DBAs used to diss storage engineers: “I don’t care about storage, as long as it is RAID 10“. That was aeons ago 😉

The world of developers and the world of infrastructure people are vastly different. Since the birth of cloud computing, both worlds have collided, and programmable infrastructure-as-code (IaC) has become part and parcel of cloud native applications. Of course, there is no denying that there is friction.

Welcome to DevOps!

The Kubernetes factor

Containerized applications are quickly defining the cloud native application landscape. The container orchestration machinery has one dominant engine – Kubernetes.

In the world of software development and delivery, DevOps has taken a liking to containers. Containers make it easier to host and manage the life cycle of web applications inside a portable environment. They package up application code and other dependencies into building blocks to deliver consistency, efficiency and productivity. To scale to multi-application, multi-cloud environments with thousands and even tens of thousands of microservices in containers, the Kubernetes factor comes into play. Kubernetes handles tasks like auto-scaling, rolling deployments, compute resources, volume storage and much, much more, and it is designed to run on bare metal, in the data center, in the public cloud or even in a hybrid cloud.
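As a small taste of that automation, here is a minimal sketch using the official Kubernetes Python client to scale a deployment; Kubernetes then rolls the change out across the cluster on its own. The deployment name web and the namespace default are hypothetical.

```python
# Minimal sketch: scale a (hypothetical) "web" deployment to 10 replicas.
# Kubernetes schedules the extra pods and rolls the change out on its own.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

client.AppsV1Api().patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 10}},
)
```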

Continue reading

DellEMC Project Nautilus Re-imagine Storage for Streams

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in the Silicon Valley USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies presented at this event. The content of this blog represents my own opinions and views ]

Cloud computing will have challenges processing data at the outer reach of its tentacles. Edge Computing, as it melds with the Internet of Things (IoT), needs a different approach to data processing and data storage. Data generated at source has to be processed at source, to respond to the event or events which have happened. Cloud computing, even with 5G networks, does not have latency low enough for an autonomous vehicle to react to pedestrians on the road at speed, for a sprinkler system to be activated in a fire, or for a fraud detection system to flag money laundering activities as they occur.

Furthermore, not all sensors, devices and IoT end-points are connected to the cloud at all times. To understand this new way of data processing and data storage, have a look at this video by Jay Kreps, CEO of Confluent (the company behind Apache Kafka®), for this new perspective.

Data is continuously and infinitely generated at source, and this data has to be compiled, controlled and consolidated with nanosecond precision. At Storage Field Day 19, an interesting open source project, Pravega, was introduced to the delegates by DellEMC. Pravega is an open source storage framework for streaming data and is part of Project Nautilus.

Rise of streaming time series data

Processing data at source has a lot of advantages, and this has popularized time series analytics. Many time series and streams-based databases such as InfluxDB, TimescaleDB and OpenTSDB have sprouted over the years, along with open source projects such as Apache Kafka®, Apache Flink and Apache Druid.

The data generated at source (end-points, sensors, devices) is serialized, timestamped (as the event occurs), continuous and infinite. These are the properties of a time series data stream, and to make sense of the streaming data, new data formats such as Avro, Parquet and ORC pepper the landscape, along with the more mature JSON and XML, each with its own strengths and weaknesses.
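To make those properties concrete, here is a minimal sketch of a timestamped sensor event serialized to JSON and published to a Kafka topic with the kafka-python client. The broker address, topic name and sensor fields are assumptions for illustration.

```python
# Minimal sketch: a serialized, timestamped sensor event pushed to a Kafka
# topic. Broker address, topic and event fields are illustrative assumptions.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {
    "sensor_id": "pump-42",       # hypothetical end-point at source
    "timestamp": time.time_ns(),  # timestamped as the event occurs
    "reading": 3.17,
}
producer.send("sensor-readings", event)
producer.flush()  # events keep arriving; the stream is continuous and infinite
```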

You can learn more about these data formats in the 2 links below:

DIY is difficult

Many time series projects started as DIY projects, and in many organizations they remain DIY projects in production systems as well. They depend on tribal knowledge, and these databases are tied to unmanaged storage which is not congruent with the properties of streaming data.

At the storage end, the technologies today still rely on the SAN and NAS protocols and, in recent years, S3 with object storage. Block, file and object storage introduce layers of abstraction which may not be a good fit for streaming data.

Continue reading

Hadoop is truly dead – LOTR version

[Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in the Silicon Valley USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies to be presented at this event. The content of this blog represents my own opinions and views]

This blog was not planned; it was not my intention to write it. But a string of events happened in the Storage Field Day 19 week, and I have the fodder to share my thoughts. Hadoop is indeed dead.

Warning: There are Lord of the Rings references in this blog. You might want to do some research. 😉

Storage metrics never happened

The fellowship of Arjan Timmerman, Keiran Shelden, Brian Gold (Pure Storage) and myself started at the office of Pure Storage in downtown Mountain View, much like Frodo Baggins, Samwise Gamgee, Peregrin Took and Meriadoc Brandybuck forging their journey vows at Rivendell. The podcast was supposed to be on the topic of storage metrics, but was unanimously swung to talk about Hadoop under the stewardship of Mr. Stephen Foskett, our host of Tech Field Day. I saw Stephen as Elrond Half-elven, the Lord of Rivendell, moderating the podcast as he would the council that planned to destroy the One Ring in Mount Doom.

So there we were talking about Hadoop, or maybe Sauron, or both.

The photo of the Oliphaunt below seemed apt to describe the industry’s attacks on Hadoop.

Continue reading

AI needs data we can trust

[ Note: This article was published on LinkedIn on Jan 21st 2020. Here is the link to the original article ]

In 2020, the intensity around the topic of Artificial Intelligence will further escalate.

One piece of news which came out last week terrified me. The Sarawak courts want to apply Artificial Intelligence to mete out judgment and punishment, perhaps on a small scale.

Continue reading