Swiss army of data management

Back in 2000, before I joined NetApp, I bought one of my first storage technology books. It was “The Holy Grail of Data Storage Management” by Jon William Toigo. The book served me very well, because it opened my eyes to the storage networking and data management world.

I mean, I had been doing storage for 7 years before the year 2000, but I was an implementation and support engineer. I installed my first storage arrays in 1993, the trusty but sometimes odd SPARCstorage Array 1000. These “antiques” were running 0.25Gbps Fibre Channel, and that nationwide bank project gave me my first taste of, and insights into, SAN. Point-to-point, but SAN nonetheless.

Then at Sun from 1997-2000, I was implementing the old Storage Disk Packs with Fast/Wide SCSI, moving on to the A5000 Photons (remember these guys?), and was trained on the A7000, which came from Sun’s acquisition of Encore way back in the late nineties. Then there was “Purple”, the T300s, which I believe came from the acquisition of MaxStrat.

The implementation and support experience was good, but my world opened up when I joined NetApp in mid-2000. And from Jon Toigo’s book, I learned one of the most important lessons that I have carried with me to this day – “Data Storage Management is 3x more expensive than the data storage equipment itself“. Given the complexity of the data today compared to the early 2000s, I would say that it is likely to be 4-5x more expensive.

And yet, I am still perplexed that many customers and prospects cannot see the importance and the gravity of data storage management, and more precisely, of data management itself.

A couple of months ago, I had the opportunity to work on an RFP for a project in Singapore. The customer had thousands of tapes storing digital media files, in addition to tens of TBs running on IBM N-series storage (which translates to a NetApp FAS3xxx). They wanted to revamp their architecture, and invited several vendors in Singapore to propose. I was working for a friend, who is an EMC reseller. But when I saw that tapes figured heavily in their environment, and that the other resellers were proposing EMC Isilon and NetApp C-Mode, I thought that these resellers were just trying to stuff a square peg into a round hole. They had not addressed the customer’s issues and problems at all, and were merely proposing storage for the sake of storage capacity. Sure, EMC Isilon is great for the media and entertainment business, but EMC Isilon is not the data management solution for this customer’s situation. Neither was NetApp with the C-Mode solution.

What the customer needed was a data management solution, one that involved:

  • A single namespace for video editors and programmers, regardless of whether the data sat on online disk storage or archived tape storage
  • Transparent, automated storage tiering that matches the value of the data to the storage media
  • A backup tier which kept a minimum of 2 recent copies for file restoration in case of disasters
  • An archive tier which they could share with their counterparts in other regions
  • A transparent replication tier which would allow them to implement a simplified disaster recovery mechanism with their counterparts in Japan and China

And these were the key issues that needed to be addressed – not scale-out or the usual snapshot mechanisms. Those features are good for a primary, production storage infrastructure, but this customer’s business operations had about 70-80% of its data and files offline on tapes. I took the liberty of advising my friend to look into Quantum StorNext, because that solution could solve the business problem, NOT just address it from an IT point of view.
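To make the tiering idea concrete, here is a rough sketch of value-based placement across the tiers above. This is a generic illustration in Python, not StorNext’s actual policy engine or syntax; the tier names and age thresholds are assumptions for the example.

```python
# Hypothetical value-based tiering across a single namespace.
# Thresholds and tier names are illustrative, not StorNext config.
from datetime import datetime, timedelta

def place(last_access: datetime, now: datetime) -> str:
    """Map a file to a tier by how recently it was used; with a single
    namespace, editors see the same path wherever the file lives."""
    age = now - last_access
    if age < timedelta(days=30):
        return "online-disk"    # hot: active editing projects
    if age < timedelta(days=180):
        return "backup-disk"    # warm: recent copies kept for restores
    return "tape-archive"       # cold: shareable with other regions

now = datetime.now()
print(place(now - timedelta(days=5), now))    # -> online-disk
print(place(now - timedelta(days=400), now))  # -> tape-archive
```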

Whitewashing Cloudsh*t

Pardon my French, but I have just about had enough of it!

I was invited to attend the Internet Alliance Association’s event today at OneWorld Hotel. It was aptly titled “Global Trends on Cloud Technology”. I don’t know much about the Internet Alliance, but I was intrigued by the event because I wanted to know what the Malaysian hosting and service providers are doing in the cloud. I had not been in touch with the hosting provider landscape for a few years, so I was an eager beaver, raring to learn more.

After registration, I quickly went to the first booth behind the front counter. The chap there said he was a cloud consultant, so I asked what his company does. He said they provide IaaS, PaaS and so on. I asked him if I could purchase IaaS with a credit card, and what the turnaround time was to get a normal server with Windows 2008 running.

He obliged with a yes. They accept credit card purchases. But the turnaround to have the virtual server ready was 1 day. It would take 24 hours before I got a virtual server running Windows. So, I assumed the entire process was manual, and I told him that. He assured me that the whole process was automatic. At the back of my mind I wondered: if this was automatic, would it take 24 hours? Reality set in when I realized I was dealing with a Malaysian company. Ah, I see.

A few more sentences were exchanged. He told me that they are hosted at AIMS, a popular choice. I inquired about their disaster recovery. They don’t have any. More perplexity for me. Hmmm …

In the end, I was kinda turned off by his “story” about how great they are, better than Freenet and AIMS and so on. If they are better than AIMS, why host their cloud at AIMS?

I went to another booth which had a sign that read “1-Nimbus”. The “1” is the usual 1Malaysia logo, with the word “Nimbus” next to it.

It was the word “Nimbus” that captured my attention. I thought, “Wow, is this really Nimbus?” Apparently not. Probably some Malaysian company borrowed that name … we are smart that way. “1-Nimbus, Cloud Backup”, it read. I asked the chap (another consultant) who gave me the brochure, “How does it work? Does it require any agent?”

“Err, actually, I am not really technical. Let me refer you to my colleague.” A bespectacled chap popped over and introduced himself as a technical guy. I asked again, “How does this cloud backup work?”. His reply … “Err, it’s not really our product. Go check out the website”, and he gave me another brochure. Damn!

From then on, there were more excuses as I kept repeating the same question from one booth to another – tell me, what do you do in the cloud? At that point, I decided to do a pie chart of how I assessed the exhibition lobby floor.

I went on. There were about 15 booths. With the exception of FalconStor, only one booth managed to tell me some decent stuff. That was KumoWorks, and the guy spoke well about their Cloud Desktop with Citrix and IGEL thin clients. And they are from Singapore. It figures!

I cannot help but feel nauseated by most of the booths at the OneWorld Hotel exhibition lobby. If this is the state of our “Cloud Service Providers”, I think we are in deep sh*t. Whitewashing and overusing the word “Cloud” everywhere is one thing, but these guys don’t even know what they are talking about. It is about time we admit that the Singaporeans are better than us. Even if they don’t know their stuff well, at least they know how to package the whole thing and BS to me intelligently!

And I learned a new “as-a-Service” today. One cloud consultant introduced me to “Application-as-a-Service”. I was so tempted to call it “Ass“.

Atempo – 3 gals, 1 guy and 1 LB handbag

I have known of Atempo for years, and even contacted them once when I was at NetApp several years ago. But I didn’t know much about them until a friend recently took up the master resellership of Atempo here in Malaysia. And when people ask me “Atempo who?”, I reply “3 gals, 1 guy and 1 LB handbag”.

Atempo is a company that specializes in data protection and archiving solutions, and it has been around for almost 20 years. They compete with Symantec NetBackup, CommVault Simpana and BakBone NetVault, and I have seen their solutions. They are pretty decent and attractively priced as well. Perhaps they don’t market themselves as strongly as some of the bigger data protection companies, but I would recommend them to anyone, any day. If you need more information, contact me.

But the usual puzzled faces will soon go away once people start recognizing Atempo’s solutions, because that is where my usual Atempo introduction comes from – their solutions.

Atempo has 5 key products:

  • Time Navigator (TINA)
  • Live Navigator (LINA)
  • Atempo Digital Archive (ADA)
  • Atempo Digital Archive for Messaging (ADAM)
  • Live Backup (LB)
Wow, with a cool one like ADAM, 3 hotties in TINA, LINA and ADA, plus LV, err, I mean LB, what more can you ask for? So, before you get into kinky ideas (a foursome?), Atempo is attempting (pun intended ;-)) to take up your mindshare when it comes to data backup and data archiving.

I am planning to find out more about Atempo in the coming months. Things have been hectic for me, but my good buddy, now the master reseller of Atempo in Malaysia, will make sure that I focus on Atempo more.

Later – guy, gals and a nice handbag. :D

Can snapshots replace traditional backups?

Backup is a necessary evil. In IT, every operator, administrator, engineer, manager and C-level executive knows that you have got to have backups. When it comes to the protection of data and information in a business, backup is the only way.

Backup has also become the bane of IT operations. Every product out there in the market is trying to cram as much production data into the backup as possible, just to fit within the backup window. We only have 24 hours in a day, so there is no way the backup window can be increased unless:

  • You reduce the size of the primary data to be backed up – think compression, deduplication, archiving
  • You replicate the primary data to a secondary device and back up the secondary device – which is ironic, because when you replicate, you are creating a copy of the primary data, which technically is a backup. So you are backing up a backup
  • You speed up the transfer of primary data to the backup device

Either way, IT operations are trying to overcome the challenges of the backup window. And the whole purpose of backup is to be cock-sure that data can be restored when it comes to recovery. It’s like insurance. You pay the premium so that you are able to use the insurance facility to recover in times of need. We have heard that analogy many times before.
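To put the backup window problem in concrete terms, here is a rough back-of-the-envelope sketch. The data size, throughput and reduction ratio below are illustrative assumptions, not figures from any particular environment.

```python
# Rough backup-window arithmetic (illustrative numbers only).

def backup_hours(data_tb: float, throughput_mb_s: float,
                 reduction_ratio: float = 1.0) -> float:
    """Hours needed to move data_tb terabytes at throughput_mb_s MB/s,
    after an optional data-reduction ratio (compression/dedupe)."""
    data_mb = data_tb * 1024 * 1024           # TB -> MB
    effective_mb = data_mb / reduction_ratio  # less to move after reduction
    return effective_mb / throughput_mb_s / 3600

# 50 TB of primary data over a single ~100 MB/s conduit:
print(backup_hours(50, 100))       # ~145.6 hours -- blows any backup window
# The same data after a 10:1 dedupe/compression ratio:
print(backup_hours(50, 100, 10))   # ~14.6 hours -- tight, but conceivable
```

The arithmetic is the point: shrinking the data or widening the pipe are the only levers, which is exactly what the three options above boil down to.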

On the flip side of the coin, a snapshot is also a backup. Snapshots are point-in-time copies of the primary data, and many a time, snapshots are taken and then used as the source of a “true” backup to a secondary device, be it disk-based or tape-based. However, snapshots have suffered the perception of being a pseudo-backup, until the last couple of years.

Here is some food for thought …

WHAT IF we eliminate backing up data to a secondary device?

WHAT IF IT operations are ready to embrace snapshots as the true backup?

WHAT IF we rely on snapshots for backup and replicated snapshots for disaster recovery?

First of all, it would solve the perennial issues of backing up to a “secondary device”. The operative phrase here is “secondary device”, because that secondary device is usually external to the primary storage.

Tape subsystems and tape media are constantly ridiculed as the culprits of missed backup windows. Duplication after duplication of the same set of files in every backup set triggered the adoption of deduplication solutions from Data Domain, Avamar, PureDisk, ExaGrid, Quantum and so on. Networks are also blamed, because network backup runs through the LAN. LAN-free backup uses another conduit, usually Fibre Channel, to transport data to the secondary device.
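As a toy illustration of what those deduplication solutions do, here is a minimal content-hash sketch. It is a simplification of block-level dedupe in general, not any vendor’s actual implementation.

```python
# Toy block-level dedupe: identical blocks are stored once and
# referenced by their content hash, so repeated backup sets of the
# same files consume almost no extra space.
import hashlib

store = {}        # content hash -> block bytes (each unique block stored once)
backup_sets = []  # each backup set is just a list of hash references

def backup(data: bytes, block_size: int = 4096) -> list:
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # new blocks only; duplicates are free
        refs.append(digest)
    return refs

data = b"same file content " * 1000
backup_sets.append(backup(data))  # first backup stores the unique blocks
backup_sets.append(backup(data))  # second backup stores nothing new
print(len(store))                 # unchanged after the second backup
```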

If we eliminate the “secondary device” and perform backup on the primary storage itself, then networks are no longer part of the backup. There is no need for a separate deduplication device either, because the data could already have been deduplicated and compressed in the primary storage.

Note that what I have suggested is to back up, compress and dedupe, AND also restore, from the primary storage. There is no secondary storage device involved in the backup, compression, dedupe or restore.

Wouldn’t that be a better way of doing backup?

Snapshots would be the only backup mechanism. Snapshots are quick, usually taking minutes, some even seconds. Most snapshot implementations today are space efficient, consuming storage only for the delta changes. The primary device would compress and dedupe, depending on the data’s characteristics.
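To show why snapshots consume storage only for the deltas, here is a toy copy-on-write sketch. It is a deliberate simplification of what real arrays do, not any vendor’s implementation.

```python
# Toy copy-on-write snapshots: a snapshot is free at creation and only
# accumulates the old versions of blocks that change after it is taken.

class Volume:
    def __init__(self):
        self.blocks = {}     # live data: block number -> contents
        self.snapshots = []  # each snapshot holds only preserved old blocks

    def snapshot(self) -> int:
        self.snapshots.append({})  # empty at creation: near-zero space
        return len(self.snapshots) - 1

    def write(self, block: int, data: str):
        # Preserve the old contents in the latest snapshot before overwriting.
        if self.snapshots and block in self.blocks:
            self.snapshots[-1].setdefault(block, self.blocks[block])
        self.blocks[block] = data

    def read_snapshot(self, snap_id: int, block: int):
        # A snapshot's view: the first preserved version at or after it,
        # falling back to the live data if the block never changed.
        for snap in self.snapshots[snap_id:]:
            if block in snap:
                return snap[block]
        return self.blocks.get(block)

vol = Volume()
vol.write(0, "v1")
s = vol.snapshot()              # instant "backup"
vol.write(0, "v2")              # only now is the old block preserved
print(vol.read_snapshot(s, 0))  # -> v1 (restore from the snapshot)
print(vol.blocks[0])            # -> v2 (live data is untouched)
```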

For DR, snapshots are shipped to a remote storage of equal prowess at the DR site, where the snapshots can be rebuilt and kept in a ready state to become the primary data when required. NetApp SnapVault is one example. ZFS snapshot replication is another.
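For the ZFS case, a minimal sketch of the snapshot-and-ship cycle might look like the following, driven from Python via subprocess. The pool, dataset and host names (tank/data, drpool/data, dr-host) are made-up examples, and error handling is kept to a bare minimum.

```python
# Sketch: ship ZFS snapshots to a DR host with zfs send/receive.
import subprocess

def replicate(dataset, snap, dr_host, dr_dataset, prev_snap=None):
    # Take the point-in-time snapshot (near-instant).
    subprocess.run(["zfs", "snapshot", f"{dataset}@{snap}"], check=True)
    # Full send the first time, incremental (-i) on subsequent runs.
    send = ["zfs", "send"]
    if prev_snap:
        send += ["-i", f"@{prev_snap}"]
    send.append(f"{dataset}@{snap}")
    recv = ["ssh", dr_host, "zfs", "receive", "-F", dr_dataset]
    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    subprocess.run(recv, stdin=sender.stdout, check=True)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")

# First run ships everything; later runs ship only the deltas.
replicate("tank/data", "backup1", "dr-host", "drpool/data")
replicate("tank/data", "backup2", "dr-host", "drpool/data",
          prev_snap="backup1")
```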

And when it comes to recovery, quick restores of primary data would come from snapshots. If the primary storage goes down, clients and host initiators can be quickly rerouted to the DR device for services to resume.

I believe that with the convergence of multi-core processing power, 10GbE networks, SSDs and very large capacity drives, we could be seeing a shift in the backup design model, and possibly in the entire IT landscape. Snapshots could very likely replace traditional backups in the near future, and the secondary backup device may become a thing of the past.