Is AI my friend?

I am sorry, Dave …

Let’s start this story with 2 supposed friends – Dave and Hal.

How do we become friends?

We have friends and we have enemies. We become friends when trust is established. Trust is established when there is an unsaid pact, a silent agreement that I can rely on you to keep my secrets private. I know full well that you will protect my personal details with strong conviction. Your decisions and your actions towards me are in my best interest, unbiased, and benefit both of us.

I feel secure with you.

AI is my friend

When the walls of uncertainty and falsehood are broken down, we trust our friends more and more. We share deeper secrets with our friends when we believe that our privacy and safety are safeguarded and protected. We know well that we can rely on them, and that their decisions and actions towards us are reliable and unbiased.

AI, can I count on you to protect my privacy and give me the assurance that my personal data will not be abused in the hands of a privileged few?

AI, can I rely on you to be ethical, unbiased and give me the confidence that your decisions and actions are for the benefit and the good of me, myself and I?

My AI friends (maybe)

As I have said before, I am not a skeptic. When there is plenty of relevant, unbiased data fed into the algorithms of AI, the decisions are fair. People accept these AI decisions when the degree of accuracy is very close to the Truth. The higher the accuracy, the greater the Truth. The greater the Truth, the more confidence people have in the AI system.

Here are some AI “friends” in the news:

But we have to be careful here as well. Accuracy can be subjective, paradoxical and enigmatic. When ethics are violated, we terminate the friendship and we reject the "friend". We categorically label him or her as an enemy. We constantly have to check, just like we might, once in a while, investigate our friends too.

In Conclusion

AI, can we be friends now?

[Apology: sorry about the Cyberdyne link 😉 ]

[This blog was posted in LinkedIn on Apr 19th 2019]

Quantum Corp should spin off Stornext

What’s happening at Quantum Corporation?

I picked up the latest development news about Quantum Corporation. Last month, in December 2018, they secured a USD 210 million financial lifeline to support their deflating business and their debts. And if you follow their development, they are on their 3rd CEO in the past 12 months, which is quite extraordinary. What is happening at Quantum Corp?

Quantum Logo (PRNewsFoto/Quantum Corp.)

Stornext – The Swiss Army knife of Data Management

I have known Quantum since 2000, when they were very focused on the DLT tape library business. At that time, prior to the coming of LTO, DLT and its successor, SuperDLT, dominated the tape market together with IBM. In 2006, they acquired ADIC, another tape vendor, and became one of the largest tape library vendors in the world. From the ADIC acquisition, Quantum also got the rights to Stornext, a high-performance scale-out file system. I was deeply impressed with Stornext, and I once called it the Swiss Army knife of Data Management. The versatility of Stornext addressed many of the required functions within the data management lifecycle and workflows, and thus it made its name in the Media and Entertainment space.

Jack of all trades, master of none

However, Quantum has never reached great heights, in my opinion. They are everything to everybody, like a Jack of all trades, master of none. They do backup with their tape libraries and DXi series, archive and tiering with Lattus, hybrid storage with QXS, and scale-out file systems with Stornext. If they had good business run rates and a healthy pipeline, having a broad product line would be fine and dandy. But Quantum has been having CEO changes like a turning turnstile, and amid "a few" accounting missteps and a 2018 CEO who only lasted 5 months, they had better steady their rocking boat quickly. Continue reading

From the past to the future

2019 beckons. The year 2018 is coming to a close, and I look back upon what I blogged in past years to reflect on what the future holds.

The evolution of the Data Services Platform

In late 2017, I blogged about the Data Services Platform. Storage is no longer the storage infrastructure we know, but has evolved into a platform where a plethora of data services are served. The changing face of storage continues to evolve as the IT industry changes. I take this opportunity to reflect on what I have written since I started blogging years ago, and to look at the articles that are shaping the landscape today, as well as some duds.

Some good ones …

One of the most memorable ones is about the memory cloud. I wrote the article when Dell acquired a small company by the name of RNA Networks. I vividly recall what was going through my mind when I wrote the blog. With SAN, NAS and DAS, and even FAN (File Area Network), happening during that period, the first thing that came to mind was the System Area Network, the original objective of InfiniBand and RDMA. I believed the final pool where storage would reside is memory, hence I called it "The Last Bastion – Memory". RNA's technology became part of the Dell Fluid Architecture.

True enough, the present technology of Storage Class Memory and SNIA's NVDIMM are along the lines of the memory cloud I espoused years ago.

What about Fibre Channel over Ethernet (FCoE)? It wasn't a compelling enough technology for me when it came into the game. Reduced port and cable counts, and reduced power consumption, were what the FCoE folks were pitching, but the cost of putting in the switches and HBAs was just too great an investment. In the end, we could see the cracks in the FCoE story, and I wrote the premature eulogy of FCoE in my 2012 blog. I got some unsavoury comments for writing that blog back then, but fast forward to the present, and FCoE isn't a force anymore.

Weeks ago, Amazon Web Services (AWS) became a hybrid cloud service provider/vendor with the Outposts announcement. It didn't surprise me, but it may have shaken the traditional systems integrators. I took that stance 2 years ago when AWS partnered with VMware, and juxtaposed it with the philosophical quote from the 1993 Jurassic Park movie – "Life will not be contained, … Life finds a way".

Continue reading

The Dell EMC Data Bunker

[Preamble: I have been invited by GestaltIT as a delegate to their TechFieldDay from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Another new announcement graced the Tech Field Day 17 delegates this week. The Dell EMC Data Protection group announced their Cyber Recovery solution. The Cyber Recovery Vault solution and services are touted as "The Last Line of Data Protection Defense against Cyber-Attacks" for the enterprise.

Security breaches and ransomware attacks have been rampant, and they are wreaking havoc on organizations everywhere. These breaches and attacks cost businesses tens of millions, or even hundreds of millions, and are capable of bringing these businesses to their knees. One known practice is to corrupt backup metadata or catalogs, rendering operational recovery helpless, before the perpetrators attack the primary data source. And there are times when the malicious and harmful agent dwells in the organization's network or servers for long periods of time, launching attacks and infecting primary images or gold copies of corporate data at the opportune moment.

The Cyber Recovery (CR) solution from Dell EMC focuses on the Recovery of an Isolated Copy of the Data. The solution isolates strategic and mission-critical secondary data and preserves the integrity and sanctity of the secondary data copy. Think of the CR solution as the data bunker after doomsday has descended.

The CR solution is based on the Data Domain platforms. As described in the diagram below, data backup occurs in the corporate network to a Data Domain appliance platform as the backup repository. This is just the usual daily backup, and is for operational recovery.

Diagram from Storage Review. URL Link: https://www.storagereview.com/dell_emc_releases_cyber_recovery_software
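
To make the isolation idea concrete, here is a conceptual sketch of an air-gapped vault sync cycle: the vault's replication link comes up only for a short window, a copy is pulled in and locked, and the link goes dark again. This is entirely my own illustration of the principle; the class and method names are hypothetical, not Dell EMC's implementation.

```python
# A conceptual sketch of an air-gapped vault sync cycle -- my own
# illustration of the principle, not Dell EMC's actual Cyber Recovery code.

import datetime

class VaultLink:
    """Stand-in for the replication link between production and the vault."""
    def __init__(self):
        self.enabled = False
        self.locked_copies = []

    def open_window(self):
        self.enabled = True              # the only time the vault is reachable

    def replicate(self, backup_id):
        assert self.enabled, "vault is air-gapped outside the sync window"
        copy_id = f"{backup_id}@{datetime.date.today()}"
        self.locked_copies.append(copy_id)  # retention-locked, immutable copy
        return copy_id

    def close_window(self):
        self.enabled = False             # vault goes dark again

link = VaultLink()
link.open_window()                       # briefly open the air gap
link.replicate("daily-backup-042")       # pull and lock an isolated copy
link.close_window()                      # close it; attackers cannot reach the copy
```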

Continue reading

Commvault UDI – a new CPUU

[Preamble: I am a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I am here at Commvault GO 2017. Bob Hammer, Commvault's CEO, is on stage right now. He shares his wisdom and the message is clear. IT to DT. IT to DT? Yes, Information Technology to Data Technology. It is all about the DATA.

The data landscape has changed. The cloud has changed everything. And data is everywhere. This omnipresence of data presents new complexity and new challenges. It is great to see Commvault acknowledging and accepting this change and the challenges that come along with it, and introducing their HyperScale technology and their secret sauce – the Universal Dynamic Index.

Continue reading

Disaster Recovery has changed

Simple and affordable Disaster Recovery? Sounds oxymoronic, right?

I have been immersed in the small and medium business (SMB) space in the past few months. I have seen many SMBs resort to the cheapest solution they can get their hands on. It could be a Synology here or a QNAP there, and that's their backup plan. That's their DR plan. When disaster strikes, they just shrug their shoulders and accept their fate. It could be human error, accidental data deletion, virus infection, data corruption and recently, RANSOMware! But these SMBs do not have the IT resources to deal with the challenges these "disasters" bring.

Recently I attended a Business Continuity Institute forum organized by the Malaysian Chapter. Several vendors and practitioners spoke about organizational preparedness and readiness for DR. And I would like to stress the words "preparedness" and "readiness". In the infrastructure world, we often build redundancy into DR planning, and this means additional cost. SMBs cannot afford this redundancy. Furthermore, larger organizations have BC and DR coordinators who are dedicated to the purpose of BC and DR. An SMB probably has one person who doubles up as the IT administrator.

However, for IT folks, virtualization and cloud technologies are beginning to germinate a new generation of DR solutions – DR solutions that deliver the simplicity of replication and backup and, at the same time, are affordable. Many are beginning to offer DR-as-a-Service, and indeed, DR-as-a-Service has become a Gartner Magic Quadrant category. Here's a look at the 2016 Gartner Magic Quadrant for DR-as-a-Service.

Gartner Magic Quadrant for DR-as-a-Service 2016

And during these few months, I have encountered 3 vendors in this space, all sitting in the Visionaries quadrant. One came to town and started smashing laptops to jazz up their show (I am not going to name that vendor). Another kept sending me weird emails, sounding kind of sleazy: "Got time for a quick call?"

Continue reading

Technology prowess of Riverbed SteelFusion

The Riverbed SteelFusion (aka Granite) impressed me the moment it was introduced to me 2 years ago. I remember that genius light-bulb moment well, in December 2012 to be exact, and it has left its mark on me. Like I said last week in my previous blog, the SteelFusion technology is unique in the industry so far and has differentiated itself from its WAN optimization competitors.

To further understand the ability of Riverbed SteelFusion, a deeper inspection of the technology is essential. I am fortunate to be given the opportunity to learn more about SteelFusion’s technology and here I am, sharing what I have learned.

What does the technology of SteelFusion do?

Riverbed SteelFusion takes SAN volumes from supported storage vendors in the central datacenter and projects the storage volumes (aka LUNs) to applications and hosts at the remote branches. The technology requires a paired relationship between the SteelFusion Core (in the centralized datacenter) and the SteelFusion Edge (at the branch). Both the SteelFusion Core and Edge are respectively fronted by a Riverbed SteelHead WAN optimization device to deliver the performance required.
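
To picture the pairing before looking at the diagram, here is a toy model of the projection relationship. The classes and methods are my own hypothetical sketch of the concept, not Riverbed's software.

```python
# A toy model of the Core/Edge pairing -- my own sketch of the concept,
# not Riverbed's software. Core sits in the datacenter beside the SAN;
# Edge presents the projected LUN to branch hosts.

class SteelFusionCore:
    """Datacenter side: owns the authoritative SAN LUNs."""
    def __init__(self, san_luns):
        self.san_luns = dict(san_luns)       # lun_id -> {block_no: data}

class SteelFusionEdge:
    """Branch side: serves the projected LUN to local hosts."""
    def __init__(self, core, lun_id):
        self.core = core
        self.lun_id = lun_id                 # paired with exactly one Core LUN
        self.working_set = {}                # hot blocks cached at the branch

    def read_block(self, block_no):
        if block_no not in self.working_set:             # cache miss: fetch
            lun = self.core.san_luns[self.lun_id]        # over the (SteelHead-
            self.working_set[block_no] = lun[block_no]   # optimized) WAN
        return self.working_set[block_no]

core = SteelFusionCore({"lun0": {0: b"boot", 1: b"data"}})
edge = SteelFusionEdge(core, "lun0")
print(edge.read_block(0))   # first read crosses the WAN; repeats are local
```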

The diagram below gives an overview of what the entire SteelFusion network architecture looks like:

Riverbed SteelFusion Overall Solution

Continue reading

APIs that stick in Storage

The competition in storage networking and data management is forever going to get fiercer. And there is always going to be the question of having either open standards APIs or proprietary APIs, because storage networking and data management technologies constantly have to balance gaining a competitive advantage with proprietary APIs against getting greater market acceptance with open standards APIs.

The flip side is that proprietary APIs could limit and stunt the growth of the solution, even as they give much better integration and interoperability with complementary solutions. Open standards APIs could make the entire market a plain, vanilla one where there is little difference between technology A or B or C or X, and in the long run, could give less incentive for technology innovation.

I am not an API guy. I do not code or do development work on APIs, but I do like APIs (Application Programming Interfaces). I have had my fair share of APIs, which can be considered open or proprietary depending on who you talk to. My understanding is that an API might be more open if there are many ISVs, developers and industry supporters endorsing it, each with a valid (and usually profit-related) agenda to make the API open.

I can share some experience with APIs I have worked with in the past, and give my views on some cool present-day APIs related to storage networking and data management.

One of the API-related works I did was with the EMC Centera. I was working with Schlumberger to create a file-level archiving/lifecycle management solution for the GeoFrame seismic files with the EMC Centera. This was back in 2008.

EMC Centera does not present itself as a NAS box (even though, I believe, IDC lumps Centera sales numbers into worldwide NAS market figures, unless I am no longer chronologically correct), but rather integrates through ISVs and application-level integration with the EMC Centera API. Here's a high-level look at how applications talk to the EMC Centera through the API.

Note: EMC Centera can also present a NAS integration interface through NFS, CIFS, HTTP and FTP protocols, but the customer must involve (may have to purchase) the EMC Centera Universal Access software appliance. This is for applications that do not have the level of development and integration to interface with the EMC Centera API. 
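
For a feel of what application-level integration with a content-addressed store looks like, here is a minimal sketch of the CAS write/read pattern. This is my own illustration of the general concept; the actual Centera SDK has its own calls and clip/tag structures, and SHA-256 here merely stands in for whatever digest the real system uses.

```python
# A minimal sketch of the content-addressed storage (CAS) pattern that
# Centera popularized -- my own illustration, not the actual Centera SDK
# (which has its own clip/tag calls).

import hashlib

store = {}   # stand-in for the CAS repository

def cas_write(content: bytes) -> str:
    address = hashlib.sha256(content).hexdigest()  # address derived from content
    store[address] = content                       # same content -> same address
    return address             # the application keeps this "content address"

def cas_read(address: str) -> bytes:
    content = store[address]
    # the address doubles as a checksum, so integrity can be verified on read
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = cas_write(b"GeoFrame seismic file, archived 2008")
print(addr[:16], cas_read(addr))
```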

Continue reading

Apple chomps Anobit

A few days ago, Apple paid USD 500 million to buy an Israeli startup, Anobit, a maker of flash storage technology.

Obviously, one of the reasons Apple did so is to move up a notch, to differentiate itself from the competition and position itself as a premier technology innovator. It won the MP3 war with its iPod, but in the smartphone, tablet and notebook space, Apple is being strongly challenged.

Today, flash storage technology is prevalent, and the demand to pack more capacity into the small real estate of flash will eventually lead to reliability issues. The most common type of NAND flash storage is MLC (multi-level cell), versus the more expensive type called SLC (single-level cell).

But physically, the internal build of MLC and SLC is exactly the same, except that in SLC, one cell contains 1 bit of data, while in MLC, 2 or more bits occupy one cell. That is the only difference in the physical structure of NAND flash. However, as you can see from the diagram below, SLC has advantages over MLC.


NAND flash uses electrical voltage to program a cell, and it is always a challenge to store bits of data in a very, very small cell. If you apply too little voltage, the bit in the cell does not register, resulting in something unreadable or an error. If you apply too much voltage, the adjacent cells are disturbed, resulting in errors in the flash. Voltage leakage is not uncommon.

The demand to pack more and more data (i.e. more bits) into one cell geometry results in greater unreliability. Though the reliability of NAND flash storage is predictable, i.e. we roughly know when it will fail, we will eventually reach a point where the reliability of MLC is no longer desirable if we continue the trend of packing in more and more capacity.
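
To put rough numbers on why more bits per cell hurt reliability, here is a simple back-of-the-envelope sketch of my own; the 3.0 V voltage window is an assumed, illustrative figure, not a datasheet value.

```python
# Back-of-the-envelope arithmetic, my own illustration: each extra bit per
# cell doubles the number of voltage levels a cell must distinguish,
# shrinking the margin between adjacent levels.

VOLTAGE_WINDOW = 3.0   # assumed usable voltage range in volts

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("3-bit MLC", 3)]:
    levels = 2 ** bits_per_cell                 # distinct charge states needed
    margin = VOLTAGE_WINDOW / (levels - 1)      # spacing between adjacent states
    print(f"{name}: {levels} levels, ~{margin:.2f} V between levels")

# SLC: 2 levels, ~3.00 V between levels
# MLC: 4 levels, ~1.00 V between levels
# 3-bit MLC: 8 levels, ~0.43 V between levels
```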

That’s when Anobit comes in. Anobit has designed and implemented architectural changes to the way NAND flash storage is used. In layman’s terms, the technology comes in 2 stages.

  1. Error reduction – by understanding what causes flash impairment. This could be cross-coupling, read disturbs, data retention impairments, program disturbs, or endurance impairments.
  2. Error Correction and Signal Processing – advanced ECC (error-correcting code), plus the patented (and other patents pending) Memory Signal Processing (TM), to improve the reliability and performance of the NAND flash storage, as shown in the diagram below (a toy ECC sketch also follows this list):
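
To give a feel for what error correction buys, here is a toy single-error-correcting code, the classic Hamming(7,4). It is my own illustration of what plain ECC does, not Anobit's advanced ECC or Memory Signal Processing.

```python
# A toy single-error-correcting code (classic Hamming(7,4)) -- my own
# illustration of what ECC does, not Anobit's technology.

def hamming74_encode(d):           # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]        # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]        # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]        # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def hamming74_correct(c):          # c: 7-bit codeword, at most 1 flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # points at the flipped position (1..7)
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]   # recover the 4 data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                      # simulate a read-disturb bit flip
assert hamming74_correct(codeword) == [1, 0, 1, 1]   # data survives intact
```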

In a nutshell, Anobit’s new and innovative approach will result in:

  • More reliable MLCs
  • Better performing MLCs
  • Cheaper NAND Flash technology

This will indeed extend NAND flash technology into greater innovation in flash storage in the near future. Whatever Apple will do with Anobit’s technology is anybody’s guess, but one thing is certain: it’s going to propel Apple to new heights.

Silent Data Corruption (SDC) … it’s more prevalent than you think

Have you heard about Silent Data Corruption (SDC)? It’s everywhere and yet in the storage networking world, you can hardly find a storage vendor talking about it.

I did a paper for the MNCC (Malaysian National Computer Confederation) a few years ago, and one of the examples I used was what they found at CERN. CERN, the European Organization for Nuclear Research, published a paper in 2007 describing the issue of SDC. Later, in 2008, they found approximately 38,000 files corrupted in the 15,000TB of data they had generated. SDC is therefore very real, and yet to the people in the storage networking industry, where data matters the most, it is one of the least talked-about issues.
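
A quick division on those published figures puts the rate in perspective:

```python
# Quick arithmetic on CERN's published figures above.
corrupted_files = 38_000
total_terabytes = 15_000
print(f"~{corrupted_files / total_terabytes:.1f} corrupted files per TB generated")
# ~2.5 corrupted files per TB generated
```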

What is Silent Data Corruption? Every computer component that we use is NOT perfect. It could be the memory; it could be the network interface cards (NICs); it could be the hard disk; it could also be the bus, the file system, or the data block structure. Any computer component, whether hardware or software, that deals with the bits of data is subject to the concern of SDC.

Data corruption happens all the time. It is when a bit or a set of bits is changed unintentionally due to various reasons. Some of the reasons are listed below:

  • Hardware errors
  • Data transfer noise
  • Electromagnetic Interference (EMI)
  • Firmware bugs
  • Software bugs
  • Poor electrical current distribution
  • Many more …

And that is why there are published statistics for some hardware components such as memory, NICs and hard disks, and even protocols such as Fibre Channel. These published statistics talk about the BER, or bit error rate, which is the occurrence of an erroneous bit in every billion or trillion bits transferred or processed.
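
To get a feel for what such a statistic means in practice, here is a quick calculation; the BER figure is an assumed, illustrative one (consumer disks are often quoted at around one unrecoverable error per 10^14 bits read):

```python
# What a published BER means in practice. The BER here is an assumed,
# illustrative figure, not a specific vendor's specification.

BER = 1e-14                  # assumed: 1 bad bit per 1e14 bits
terabytes_read = 10          # say we read 10 TB off the drive
bits_read = terabytes_read * 1e12 * 8

print(f"Reading {terabytes_read} TB -> ~{bits_read * BER:.1f} expected bad bits")
# Reading 10 TB -> ~0.8 expected bad bits
```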

And it is also why there are inherent mechanisms within these channels to detect data corruption. We see them all the time in things such as checksums (CRC32, SHA1, MD5 …), parity and ECC (error correction code). Because we can detect them, we see errors and warnings about their existence.
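
Here is a minimal sketch of that detection at work, flipping a single bit and watching the checksums disagree:

```python
# A minimal sketch of detection at work: checksums computed before and
# after a transfer disagree if any bit flipped along the way.

import hashlib
import zlib

data = bytearray(b"a block of data heading for the disk platter")

crc_before = zlib.crc32(data)
md5_before = hashlib.md5(data).hexdigest()

data[10] ^= 0x01   # flip a single bit "in transit"

assert zlib.crc32(data) != crc_before               # CRC32 catches the flip
assert hashlib.md5(data).hexdigest() != md5_before  # so does MD5
```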

However, SILENT data corruption does not appear as errors and warnings, and it does OCCUR! This problem is getting more and more prevalent in modern-day disk drives, especially solid state drives (SSDs). As disk manufacturers come out with more compact, higher-capacity, higher-performance drives, the cell geometry of SSDs becomes smaller and smaller. This means each cell has a smaller area to contain the electrical charge and maintain the bit value, either a 0 or a 1. At the same time, the smaller cell is more sensitive and susceptible to noise, electrical charge leakage and interference from nearby cells, as some SSDs have different power modes to address green requirements.

When such things happen, a 0 can look like a 1 or vice versa, and if the error is undetected, this becomes silent data corruption.

Most common storage networking technologies, such as RAID and file systems, were introduced in the ’80s and ’90s, when disks were 9GB or 18GB and Fast Ethernet was the standard for networking. Things have changed at a very fast pace, and data growth has been phenomenal. We need to look at storage vendors’ technology more objectively now and get more in-depth about issues such as SDC.

SDC is very real, but until and unless we learn and equip ourselves with the knowledge, don’t take things from vendors verbatim. Find out … and be in control of what you are putting into your IT environment.