Enterprise Storage is not just a Label

I have many anecdotes around the topic of Enterprise Storage, but the conversations in the past 2 weeks made it important for me to share this.

Enterprise Storage is …

Amusing, painful, infuriating

I get riled up whenever people do not want to be educated about Enterprise Storage. Here are a few incidents from the last 2 weeks.

[ Story #1 ]

A guy was building his own storage for cryptocurrency. He was informed by his supplier that the RAID card was enterprise-grade, and that he could get the best performance using “Enterprise” RAID-0.

  • Well, the “Enterprise” RAID-0 volume crashed, and he lost all his data. RAID-0 stripes data across drives with no redundancy whatsoever, so a single drive failure takes out the entire volume (see the quick probability sketch below). Painfully, he said he lost a hefty sum financially.
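To put rough numbers on why RAID-0 is a gamble rather than an enterprise feature, here is a minimal Python sketch. The 5% annualized failure rate per drive is a hypothetical figure for illustration only; real drives vary.

```python
# Why an "Enterprise" RAID-0 volume is still a gamble: RAID-0 stripes data
# with zero redundancy, so the volume survives only if every member drive
# survives. The 5% annualized failure rate (AFR) below is hypothetical.

def raid0_annual_loss_probability(num_drives: int, drive_afr: float = 0.05) -> float:
    """Probability of losing the whole RAID-0 volume within a year."""
    prob_all_drives_survive = (1.0 - drive_afr) ** num_drives
    return 1.0 - prob_all_drives_survive

for n in (2, 4, 8):
    print(f"{n}-drive RAID-0: ~{raid0_annual_loss_probability(n):.1%} chance of total loss per year")
```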

[ Story #2 ]

A media company complained about the reliability of its previous storage vendor. The GM was shopping around and was told that there are “Enterprise” SATA drives whose reliability is as good as, if not better than, SAS drives.

  • The company wanted a fully reliable Enterprise Storage system with 99.999% availability (see the quick downtime arithmetic below), yet the SATA interface was never meant for building highly reliable enterprise storage. The GM still insisted on using “Enterprise” SATA drives instead of SAS for his “enterprise” storage system.
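For context on what 99.999% availability actually demands, here is a quick back-of-envelope calculation (a sketch only, using a 365.25-day year):

```python
# What each level of availability allows in downtime per year.
# 99.999% ("five nines") works out to barely 5 minutes a year.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> about {downtime:,.1f} minutes of downtime per year")
```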

[ Story #3 ]

The IT admin of a manufacturing company claimed that they had been running an “Enterprise Storage” system for a few years, and could not figure out why his hard disk drives kept dying every 12-15 months.

  • He figured out that the drives supplied by his vendor were consumer SATA drives, even though he was told it was an “Enterprise Storage” system when he bought the system.

Continue reading

Multicloud is sprouting Storage Silos


We are getting an avalanche of multicloud pitches from storage vendors. We are promised the benefits of multicloud, but from whose point of view?

Multicloud is multiple premises

This is an overly simplistic example of how I created 3 copies of the same spreadsheet yesterday. I have a quotation on Google Sheets, a fairly complicated one. Someone wanted it in Excel format, but the layout and the formulas were all messed up when I tried to download it as XLSX. What I had to do was download the Google Sheet in ODS (OpenDocument Spreadsheet) format to my laptop, upload the LibreOffice file to my OneDrive account, then use Excel Online to open the ODS file and save it as XLSX. In one fell swoop, I had the same spreadsheet in Google Drive, on my laptop and in OneDrive. 3 copies in 3 different premises.

As we look at the behaviour of data creation, data acquisition, data sharing and data movement, the central repository is the gold image, the most relevant copy of the data. However, for business reasons, data has to be moved to where the applications are, whether that is cloud A, cloud B, cloud C or on-premises. The processed output from cloud A is stored in cloud A, the output from cloud B in cloud B, and so on.

To get the most significant and relevant copy, data from all premises must be consolidated, which means moving it to a centralized data storage repository. But intercloud data movement is bogged down by egress fees, latency, data migration challenges (such as formats and encoding), security, data clearance policies and many other hoops and hurdles.
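As a rough illustration of the egress problem alone, here is a minimal sketch that totals the fee for pulling every remote dataset back to a central repository. The dataset sizes and per-GB rates are placeholders, not actual provider pricing.

```python
# Back-of-envelope estimate of the egress bill for consolidating data from
# several clouds into one central repository. Sizes (GB) and per-GB rates
# are placeholders; substitute each provider's actual pricing.

def consolidation_egress_cost(dataset_sizes_gb, egress_rate_per_gb):
    """Sum the egress fees for pulling every remote dataset back home."""
    return sum(size * egress_rate_per_gb[cloud]
               for cloud, size in dataset_sizes_gb.items())

sizes = {"cloud_a": 50_000, "cloud_b": 20_000, "cloud_c": 5_000}   # GB held in each cloud
rates = {"cloud_a": 0.09, "cloud_b": 0.08, "cloud_c": 0.11}        # USD per GB, hypothetical

print(f"Estimated one-time egress cost: USD {consolidation_egress_cost(sizes, rates):,.2f}")
```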

With all these questions and concerns in mind, the big question is “Is multicloud really practical?” For a storage guy like me who loves a great data management story, the answer is “It is not. Multicloud creates storage silos”.

Continue reading

Storage in a shiny multi-cloud space

The multi-cloud era for infrastructure-as-a-service (IaaS) is not here (yet), even though the technology marketers want you to think it is. The hype, the vapourware, the frenzy. It is what they do. The same goes for the technology analysts, who describe visions and futures, and the high level constructs and strategies to get there. Multi-cloud is often hyped as running applications and infrastructure services seamlessly across several public clouds such as Amazon AWS, Microsoft® Azure and Google Cloud Platform, and linking them to on-premises data centers and private clouds. Hybrid is the new black.

[ Image: Multi-cloud connectivity across public cloud providers, on-premises data centers and private clouds ]

And the aspiration of multi-cloud is the right one, when it is truly ready. Gartner® wrote a high level article titled “Why Organizations Choose a Multicloud Strategy”. Taking advantage of each individual cloud’s strengths and resiliency in its respective geography makes good business sense, but there are many other considerations that cannot be an afterthought. In this blog, we look at a few of them from a data storage perspective.

In the beginning there was … 

For this storage dinosaur, data storage and compute have always been coupled as one. In the mainframe DASD days, these 2 were together. Even with the rise of networking architectures and protocols, from IBM SNA, DECnet, Ethernet & TCP/IP, and Token Ring FC-SAN (sorry, this is just a joke), the SANs and the filers stayed close to the servers, albeit with a buffered network layer in between.

A decade ago, when the public clouds started appearing, data storage and compute were mostly inseparable. There was a clear demarcation between public clouds and private clouds. The notion of hybrid clouds meant public clouds and private clouds could intermix with on-premises computing and data storage, but in almost all cases this was confined to a single public cloud provider. That lasted until these public cloud providers realized they were not able to convincingly entice the larger enterprises to move their IT out of their on-premises data centers to the cloud. So, these public cloud providers decided to reverse their strategy and peddle their cloud services back to on-prem. Today, Amazon AWS has Outposts; Microsoft® Azure has Arc; and Google Cloud Platform launched Anthos.

Continue reading

Falconstor Software Defined Data Preservation for the Next Generation

Falconstor® Software is gaining momentum. Given its arduous climb back to the fore, it is beginning to soar again.

Tape technology and Digital Data Preservation

I mentioned that long term digital data preservation is a segment within the data lifecycle which has merits and prominence. SNIA® has shown through its 2007 and 2017 “100 Year Archive” surveys that this is a strong, growing market segment. The 3 critical challenges of this long, long-term digital data preservation are to keep the archives

  • Accessible
  • Undamaged
  • Usable

For the longest time, tape technology has been the king of the hill for digital data preservation. The technology is cheap, mature, and many enterprises have built their long term strategy around it. And the pulse of the tape technology market is still very healthy.

The challenges of tape remain. Every 5 years or so, companies have to consider moving the data on the existing tape technology to the next generation. It is widely known that an LTO drive can read tapes from the previous 2 generations and write to tapes one generation back. The tape transcription process of migrating digital data for the sake of data preservation is bad because it puts the structural integrity and quality of the data content at risk.
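The commonly cited compatibility rule can be captured in a few lines. This is a sketch of the rule of thumb described above, not a vendor compatibility matrix; always check the matrix for your specific drive and media generations.

```python
# Rule of thumb from the paragraph above: an LTO generation-N drive reads
# tapes from the previous 2 generations and writes tapes one generation back.

def lto_can_read(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 2 <= tape_gen <= drive_gen

def lto_can_write(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 1 <= tape_gen <= drive_gen

drive = 7
for tape in range(1, drive + 1):
    print(f"LTO-{drive} drive, LTO-{tape} tape: "
          f"read={lto_can_read(drive, tape)}, write={lto_can_write(drive, tape)}")
```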

In my time covering Oil & Gas subsurface data management, I have seen NOCs (national oil companies) with 500,000 tapes of all generations, from 1/2″ to DDS, DAT to SDLT, 3590 to LTO 1-7. Millions are spent to transcribe these tapes every few years, and we have folks like Katalyst DM, Troika and more hovering over this landscape for their fill.

Continue reading

The Falcon to soar again

One of the historical feats which had me mesmerized for a long time was the 14-year journey China’s imperial treasures took to escape the Japanese invasion in the early 1930s, sandwiched between rebellions and civil wars in China. More than 20,000 pieces of the imperial treasures took a perilous journey to the west and back again. Divided across 3 routes over those 14 years, not a single piece of treasure was broken or lost. All in the name of preservation.

Today, those 20,000-plus pieces live in perpetuity in 2 palaces – the Beijing Palace Museum in China and the National Palace Museum Taipei in Taiwan.

Digital data preservation

Digital data preservation sits at the other end of the data lifecycle spectrum. More often than not, it is not the part that many pay attention to. In the past 2 decades, digital data has grown so much that it is now paramount to keep the data forever. Mind you, this is not the data hoarding kind; it is about preserving the knowledge and wisdom held in the digital content of the data.

[ Note: If you are interested to know more about Data -> Information -> Knowledge -> Wisdom, check out my 2015 article on LinkedIn ]

SNIA (Storage Networking Industry Association) conducted 2 surveys – one in 2007 and another in 2017 – called the 100 Year Archive, and found that the requirement for preserving digital data had grown many times over in those 10 years. In the end, the final goal is to ensure that the perpetual digital contents are

  • Accessible
  • Undamaged
  • Usable

All at an affordable cost. Therefore, SNIA has the vision that the digital content must transcend the storage medium, the storage system and the technology that holds it.

The Falcon reemerges

A few weeks ago, I had the privilege to speak with Falconstor® Software‘s David Morris (VP of Global Product Strategy & Marketing) and Mark Delsman (CTO). It was my first engagement with Falconstor® in almost 9 years! I wrote a piece on Falconstor® in my blog in 2011.

Continue reading

Iconik Content Management Solutions with FreeNAS – Part 2

[ Note: This is still experimental and should not be taken as production material. I took a couple of days over the weekend to “muck” around with the new iconik plug-in in FreeNAS™ to prepare it as a possible future solution. ]

This part is the continuation of Part 1 posted earlier.

iconik partnered with iXsystems™ almost a year ago. iconik is a cloud-based media content management platform. Its storage repository has many integrations with public cloud storage such as Google Cloud, Wasabi® Cloud and more. The on-premises storage integration is made through the iconik storage gateway, which presents itself to FreeNAS™ and TrueNAS® via plugins.

For a limited time, you get free access to iconik via this link.

iconik – The Application setup

[ Note: A lot of the implementation details come from this iXsystems™ documentation by Joe Dutka. This is an updated version for the latest 11.3 U1 release ]

iconik is feature rich, and navigating it to set up the storage gateway can be daunting. Fortunately, the iXsystems™ documentation was extremely helpful. It also helps to treat this as a 2-step approach so that you won’t get overwhelmed by what is happening.

  • Set up the Application section
    • Get Application ID
    • Get Authorization Token
  • Set up the Storage section
    • Get Storage ID

The 3 credentials (Application ID, Authorization Token, Storage ID) are required to set up the iconik Storage Gateway in the FreeNAS™ iconik plug-in setup, as sketched below.
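As a simple illustration, the 3 credentials can be kept together and sanity-checked before you paste them into the plug-in. The class and field names below are my own for illustration; the actual values come from the iconik admin console, as per the iXsystems™ documentation.

```python
# The 3 credentials collected from the iconik Application and Storage setup,
# bundled for the FreeNAS iconik plug-in. The class and its validation are
# illustrative only; the real values come from the iconik admin console.

from dataclasses import dataclass

@dataclass
class IconikGatewayCredentials:
    application_id: str    # from the Application section
    auth_token: str        # Authorization Token generated for that Application
    storage_id: str        # from the Storage section ("Storage Target")

    def validate(self) -> None:
        """Fail early if any credential was left blank."""
        for name, value in vars(self).items():
            if not value or not value.strip():
                raise ValueError(f"Missing iconik credential: {name}")

creds = IconikGatewayCredentials(
    application_id="<paste Application ID here>",
    auth_token="<paste Authorization Token here>",
    storage_id="<paste Storage ID here>",
)
creds.validate()   # raises ValueError if anything was left blank
```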

Continue reading

Iconik Content Management Solutions with FreeNAS – Part 1

[ Note: This is still experimental and should not be taken as production material. I took a couple of days over the weekend to “muck” around with the new iconik plug-in in FreeNAS™ to prepare it as a possible future solution. ]

The COVID-19 situation goes on unabated. A couple of my customers asked about working from home and accessing their content files, and coincidentally both are animation studios. Meanwhile, another opportunity came in asking about a content management solution that would work with the FreeNAS™ storage system we were proposing. Over the weekend, I searched for a solution that combined content management and cloud access and that worked with both FreeNAS™ and TrueNAS®, and I was glad to find the iconik and TrueNAS® partnership.

In this blog (and Part 2 later), I document the key steps to set up the iconik plug-in with FreeNAS™. I am using FreeNAS™ 11.3U1.

Dataset 777

A ZFS dataset is assigned to be the storage repository for the “Storage Target” in iconik. Since iconik has a different IAM (identity and access management) scheme than the user/group permissions in FreeNAS™, we have to give the ZFS dataset Read/Write access for everyone. That is the 777 permission in Unix speak. Note that there is a new ACL manager in version 11.3, and the permissions/access rights screenshot is shown here.
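For readers who prefer the scripted view of that 777 setting, here is the equivalent expressed as a short Python sketch. The dataset mountpoint is a made-up example; on a real system you would normally set this through the FreeNAS™ permissions/ACL screen or a plain chmod.

```python
# Equivalent of "chmod 777" on the dataset's mountpoint, expressed in Python.
# The path below is a made-up example of where such a dataset might be mounted.

import os
import stat

DATASET_MOUNTPOINT = "/mnt/tank/iconik"   # hypothetical dataset path

# rwx for owner, group and others == octal 0777
mode = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
os.chmod(DATASET_MOUNTPOINT, mode)

print(f"{DATASET_MOUNTPOINT} now has mode 0{mode:o} (read/write/execute for everyone)")
```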

Take note that this part is important. We have to assign @everyone Full Control because the credentials at iconik are tied to the permissions we set for @everyone. Missing this part will prevent the iconik storage gateway scanner from perusing this folder, and the status will remain “Inactive”. We will discuss this part more in Part 2.

Continue reading

NetApp double stitching Data Fabric

Is NetApp® Data Fabric breaking at the seams, such that it chose to acquire Talon Storage a few weeks ago?

It was a surprise move, and the first thing that came to my mind was “Who is Talon Storage?” I had seen that name appear in TechTarget and CRN last year but never took the time to go in depth into their technology. I took a quick look at their FAST™ software technology with the video below:

It was reminiscent of the Andrew File System, something I worked on briefly in the 90s, and of WAFS (Wide Area File System), a technology buzzword of the early to mid-2000s led by Tacit Networks, a company I almost joined with a fellow NetApp-ian back then. The WAFS DNA appears ingrained in Talon Storage; as it turns out, Talon’s CEO and Founder, Shirish Phatak, was the architect of Tacit Networks 20 years ago.

Continue reading

StorageGRID gets gritty

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the vendors’ technologies presented at the event. The content of this blog is of my own opinions and views ]

NetApp® presented StorageGRID® Webscale (SGWS) at Storage Field Day 19 last month. It was timely, because the general purpose object storage market, in my humble opinion, was getting disillusioned and was almost about to strip itself of the value it was supposed to deliver.

“Cheap and deep” and “Race to Zero” were some of the less storied calls I have come across when discussing object storage, and they were really devaluing the merits of object storage as vendors touted the superficial glory of being in the IDC MarketScape for Object-Based Storage 2019.

Almost every single conversation I had in the past 3 years was either me explaining what object storage is, or someone asking, “That is cheap storage, right?”

Continue reading