Swiss army of data management

Back in 2000, before I joined NetApp, I bought one of my first storage technology books. It was “The Holy Grail of Data Storage Management” by Jon William Toigo. The book served me very well, because it opened my eyes to the storage networking and data management world.

I mean, I had been doing storage for 7 years before the year 2000, but I was an implementation and support engineer. I installed my first storage arrays in 1993, the trusty but sometimes odd SPARCstorage Array 1000. These “antiques” ran 0.25Gbps Fibre Channel, and that nationwide bank project gave me my first taste of and insights into SAN. Point-to-point, but SAN nonetheless.

Then at Sun from 1997-2000, I was implementing the old Storage Disk Packs with FastWide SCSI, moving on to the A5000 Photons (remember these guys?), and I was trained on the A7000, which came from Sun’s acquisition of Encore way back in the late nineties. Then there was “Purple”, the T300s, which I believe came from the acquisition of MaxStrat.

The implementation and support experience was good, but my world opened up when I joined NetApp in mid-2000. And from Jon Toigo’s book, I learned one of the most important lessons that I have carried with me to this day – “Data Storage Management is 3x more expensive than the data storage equipment itself”. Given the complexity of the data today compared to the early 2000s, I would say that it is likely to be 4-5x more expensive.

And yet, I am still perplexed that many customers and prospects cannot see the importance and the gravity of data storage management, and more precisely, data management itself.

A couple of months ago, I had the opportunity to work on an RFP for a project in Singapore. The customer had thousands of tapes storing digital media files, in addition to tens of TBs running on IBM N-series storage (essentially a rebadged NetApp FAS3xxx). They wanted to revamp their architecture, and invited several vendors in Singapore to propose. I was working for a friend, who is an EMC reseller. But when I saw that tapes figured heavily in their environment, and that the other resellers were proposing EMC Isilon and NetApp C-Mode, I thought that these resellers were just trying to stuff a square peg into a round hole. They had not addressed the customer’s issues and problems at all, and were merely proposing storage for the sake of storage capacity. Sure, EMC Isilon is great for the media and entertainment business, but EMC Isilon is not the data management solution for this customer’s situation. Neither was NetApp with the C-Mode solution.

What the customer needed was a data management solution, one that involved:

  • A single namespace for video editors and programmers, regardless of whether the data sat on online disk storage or archived tape storage
  • Transparent and automated storage tiering that matched the value of the data to the storage media
  • A backup tier which kept a minimum of 2 recent copies for file restoration in case of disasters
  • An archived tier which they could share with their counterparts in other regions
  • A transparent replication tier which would allow them to implement a simplified disaster recovery mechanism with their counterparts in Japan and China

And these were the key issues that needed to be addressed, not the usual scale-out and snapshot mechanisms. Those features are good for a primary, production storage infrastructure, but this customer’s business operations had about 70-80% of their data and files offline on tapes. I took the liberty of advising my friend to look into Quantum StorNext, because that solution could solve the business problem, not merely the IT problem. Continue reading
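To make the tiering requirement a little more concrete, here is a minimal sketch of the kind of policy-driven data movement that a solution like StorNext is built around. The thresholds, function name and tier labels are my own assumptions for illustration, not StorNext’s actual policy engine.

```python
import os
import time

# Hypothetical tiering thresholds -- purely illustrative, not StorNext's
# actual policy engine. The idea: data value (here, approximated by age)
# decides which storage media a file should live on.
ONLINE_DAYS = 30      # recently used media stays on primary disk
NEARLINE_DAYS = 180   # older material moves to a cheaper disk tier

def pick_tier(path, now=None):
    """Return the tier a file belongs to, based on its last access time."""
    now = now or time.time()
    age_days = (now - os.path.getatime(path)) / 86400
    if age_days <= ONLINE_DAYS:
        return "online-disk"
    elif age_days <= NEARLINE_DAYS:
        return "nearline-disk"
    else:
        return "tape-archive"

# A single namespace means the editor still sees /media/projectX/clip.mov;
# only the physical placement (disk vs. tape) changes underneath.
```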

Expensive hard disk is good

No, I don’t mean to be bad, but spinning HDD prices will remain high even as post-Thailand-flood production returns to normal.

According to IHS iSuppli, a market research intelligence firm, prices will continue to hold steady and will not fall to pre-flood levels until 2014. The reason is simple. The prices of hard disk drives are pretty much dictated by the only 2 real remaining hard disk companies in the world – Seagate and Western Digital. These guys control more than 85% of the hard disk market, and as demand for HDDs outstrips supply, the current hard disk prices are hitting the bottom line hard for just about everyone.

But the bad news is turning into good news for solid state storage devices. NAND-Flash based devices are driving a new clan of storage start-ups, the likes of Violin Memory, Kaminario, Pure Storage and Virident. The EMC acquisition of XtremIO was a strong endorsement that flash will be a cornerstone of the enterprise storage arrays to come. Even The Register predicted that the EMC VMAX will be the last primary storage array before the flash tsunami.

NAND-Flash solid state, in its multi-level cell (MLC), single level cell (SLC) and even triple level cell (TLC) flavours, is going through birth, puberty and adolescence extremely fast, because the demand for ever faster IOPS, higher throughput and lower latency is hitting at full speed. And it is likely that all the xLCs (SLCs, MLCs and TLCs) could go through their cycle in an extremely short lifespan, because there is a new class of solid state that is pushing the price-performance envelope closer and closer to the speed of DRAM, but at the price of Flash. This new type of solid state is Storage Class Memory (SCM). Continue reading

The reports are out!

It’s another quarter, and both the Gartner and IDC reports on the disk storage market are out.

What does it take to slow down EMC, which is like a behemoth beast mowing down its competition? EMC has again topped both charts. The IDC Worldwide Disk Storage Tracker for Q1 of 2012 puts EMC at 29.0% of the market share, followed by NetApp at 14.1% and IBM at 11.4%. In fourth place is HP with 10.2%, and HDS is placed fifth with 9.4%.

In the Gartner report, EMC has the lead at 32.5%, followed by NetApp at 12.7% and IBM at 11.0%. HDS holds fourth place at 9.5% and HP is fifth with 9.0%. Continue reading

“I want to put in my own hard disk”

“I want to put in my own hard disk”.

If a customer ever utters that sentence, it will trigger a storage vendor meltdown. Panic buttons, alarm bells, and everything else that will lead a salesman to go berserk. That’s a big NO, NO!

For decades, storage vendors have relied on proprietary hardware to keep customers in line, and to have customers continue to sign hefty maintenance contracts until the next tech refresh. The maintenance contract, with support, software upgrades and hardware spares replacement, defines the storage networking industry that we are in. Even as some vendors have commoditized their hardware onto x86 platforms and onto standard enterprise hard disk drives (HDDs), NICs and HBAs, the openness and savings of that commodity hardware are usually not passed on to the customers.

It is easy to explain to customers that keeping their enterprise data on reliable, high-performance storage hardware with performance optimization and special firmware is paramount, and that any unwarranted and unvalidated hardware would put the customer’s data at high risk.

There is a choice now. The ripple of enterprise-grade, open storage kernel and file system has just started its first ring, and we hope that this small ripple will reverberate across the storage industry in the next few years.

Continue reading

Xtreme future?

EMC’s acquisition of XtremIO sent shockwaves across the industry. The news of the acquisition, reported to have cost EMC USD$430 million, can be found here, here and here.

The news of EMC’s would-be acquisition was an open secret a few weeks ago, and rumour has it that NetApp was eyeing XtremIO as well. Looks like EMC has beaten NetApp to it yet again.

The interesting part was, of course, the price. USD$430 million is a very high price to pay for a stealthy, 2-year-old company which has had 2 rounds of funding totaling USD$25 million. Why such a large amount?

XtremIO has a talented team of engineers, the notable ones being Yaron Segev and Shahar Frank. They have their background in InfiniBand, and Shahar Frank was the chief architect of the Exanet scale-out NAS (which was acquired by Dell). As quoted by the 451 Group, XtremIO is building an all-flash SAN array that “provides consistently high performance, high levels of flash endurance, and advanced functionality around thin provisioning, de-dupe and space-efficient snapshots”.

Furthermore, XtremIO has developed a real-time inline deduplication engine that does not degrade performance. It does this by spreading the write I/Os over the entire array. There is little information about this deduplication engine, but I bet XtremIO has developed a real-time, inherent deduplication file system that spreads all the I/Os to balance the wear-leveling as well as scale performance. I bet XtremIO will dedupe everything that it stores, with a B+ tree, copy-on-write file system and a super-duper efficient hashing algorithm for address mapping (pointers) in this deduplication file system. Ok, ok, I am getting carried away here, because it is likely that I will be wrong, but I can imagine, can’t I? Continue reading
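For what it’s worth, here is a toy sketch of what generic inline, content-addressed deduplication looks like: fingerprint each incoming block, store unique blocks once, and keep logical addresses as pointers to fingerprints. This is only a generic illustration under my own assumptions (the class and method names are made up), not XtremIO’s actual engine or data layout.

```python
import hashlib

class DedupeStore:
    """A toy content-addressed block store -- a generic illustration of
    inline deduplication, not XtremIO's actual design."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> block data (the "physical" store)
        self.refs = {}      # fingerprint -> reference count
        self.map = {}       # logical address -> fingerprint (the pointer table)

    def write(self, lba, block):
        fp = hashlib.sha256(block).hexdigest()   # fingerprint the incoming block
        if fp in self.blocks:
            self.refs[fp] += 1                   # duplicate: just add a pointer
        else:
            self.blocks[fp] = block              # unique: store it once
            self.refs[fp] = 1
        self.map[lba] = fp

    def read(self, lba):
        return self.blocks[self.map[lba]]

store = DedupeStore()
store.write(0, b"A" * 4096)
store.write(1, b"A" * 4096)   # identical block is deduplicated
print(len(store.blocks))      # 1 -- only one physical copy is kept
```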

ARC reactor also caches?

The fictional arc reactor in Iron Man’s suit was the epitome of coolness for us geeks. In the latest edition of Oracle Magazine, Iron Man is on the cover, as well as the other 5 Avengers in a limited edition series (see below).

Just about the same time, I was reading up on the ARC (Adaptive Replacement Cache) that is adopted in ZFS. I am learning in depth how ZFS caching works, as opposed to the more popular LRU (Least Recently Used) caching algorithm that is used in most storage cache memory. Having said that, most storage vendors employ a modified LRU algorithm, with the intention of keeping the most recently accessed pages in memory as long as possible. This is true of NetApp’s Data ONTAP (maybe not ONTAP GX, of which I have little experience) and EMC FLARE OE. ONTAP goes further by keeping the most frequently accessed pages permanently in memory. In caching terms, favouring recently accessed pages exploits temporal locality, while prefetching their neighbouring blocks exploits spatial locality.
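To make the recency-versus-frequency distinction concrete, here is a much-simplified two-list cache in Python. It only illustrates ARC’s central idea (a recency list plus a frequency list); the real ARC algorithm also keeps ghost lists and adapts the split between the two lists, and this is not how ZFS or ONTAP actually implement their caches.

```python
from collections import OrderedDict

class TwoListCache:
    """A much-simplified sketch of ARC's core idea: a recency list (T1)
    and a frequency list (T2). Plain LRU would only have T1."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.t1 = OrderedDict()   # pages seen once recently
        self.t2 = OrderedDict()   # pages seen more than once (frequent)

    def get(self, key):
        if key in self.t1:                # second hit: promote to frequency list
            self.t2[key] = self.t1.pop(key)
            return self.t2[key]
        if key in self.t2:                # refresh position in frequency list
            self.t2.move_to_end(key)
            return self.t2[key]
        return None

    def put(self, key, value):
        if key in self.t1 or key in self.t2:
            self.get(key)                 # hit: promote/refresh, key is now in t2
            self.t2[key] = value
            return
        if len(self.t1) + len(self.t2) >= self.capacity:
            victim = self.t1 if self.t1 else self.t2
            victim.popitem(last=False)    # evict the coldest entry first
        self.t1[key] = value              # new pages enter the recency list
```

A one-off scan pollutes only the recency list here, while pages hit repeatedly survive in the frequency list; that is the behaviour plain LRU cannot give you.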

Why is ZFS using ARC and what is ARC? Continue reading

SAP wants to kill Oracle

It’s not new. SAP has been trying to do it for years, but with little success. SAP applications and their modules still very much rely on the Oracle database as the core engine, but all that could change within the next few years. SAP has HANA now.

I thought it befitting to use the movie poster of “Hanna” (albeit with an extra “N” in the spelling) to portray SAP, which clearly has Oracle in its sights now, with a sharpened arrowhead aimed at the jugular of the Oracle beast. (If you haven’t watched the movie, the girl Hanna uses a bow and arrow to hunt a large reindeer.)

What is HANA anyway? It was previously an analytics appliance, SAP HANA 1.0SP2. Its key component is the HANA in-memory database (IMDB), and it was not aimed at the general-purpose, relational database market yet. Or perhaps that’s what SAP wants Oracle to believe. Continue reading

SMP than VMware

VMware is not a panacea for all your server virtualization requirements, but because they do fantastic marketing (not to mention running 1 small seminar every 1.5-2 months here in Malaysia last year), everyone thinks they are the only choice for server virtualization.

Efforts from Citrix Xen, Microsoft Hyper-V and RedHat virtualization do not seem to make a dent in VMware’s armour, and it is beginning to feel as if VMware is the only choice for server virtualization. However, every new server virtualization proposal ends up with the customer buying a brand new, much more powerful server. More CPUs, more cores, and more RAM (I am not going into the VMware vRAM licensing issues here, but customers know they are caged in).

You see, VMware’s style of server virtualization is an in-system virtualization. The physical resources within the system are pooled, virtualized and shared with the virtual machines (VMs) in the physical chassis. With the exception of distributed vSwitches (dvSwitch), CPUs, processing cores and RAM are pretty much confined to what’s available in the physical box in most server virtualization environments. You can envision the concept of VMware’s in-system virtualization in the diagram below:

So, the consolidation (and virtualization) phase of older physical servers would involve packing tons of CPU cores and tons of RAM into a newer, high-end server.

I just visited a prospect a few days ago. For about 30 users of an ERP system and perhaps 100 users of Zimbra mailboxes, he lamented that he had to invest in 2 Dell R710 servers with 64GB of RAM each, sporting 2 x 8-core Intel Xeons. That sounded like overkill, but that is what is happening here in this part of the world. Customers are given the perception, and the doubt, of inadequacy when they virtualize their servers. “What if I don’t have enough cores? What if I don’t have enough RAM?” That in itself is the typical Malaysian (and Singaporean) kiasu mentality. Check out the Wikipedia definition of kiasu here.

Such a high-end server costs a lot of moolah. And furthermore, the scalability and performance of the virtualized servers in the VMs are trapped within how much these servers can scale physically. If the server is maxed out at 16 cores and 128GB of RAM, then the customer has to upgrade again with a server forklift. That’s not good.

And one more thing. VMware server virtualization is not ready for High Performance Computing (HPC) …yet.

Let’s look at this another way. Let’s assume that you can look at the server virtualization approach in an outward manner, rather than the inward-looking, in-system kind of thinking of the VMware method.

What if you could invest in lower-end x86 servers with 1 x quad-core CPU and 8GB of RAM? What if you could aggregate many of these lower-end servers together and build a large cluster of lower-end x86 servers into a huge symmetric multiprocessing server farm that supports 1,024 CPUs, 16,384 cores and 64TB of RAM? Have a look at this video that explains what I just mentioned:

ScaleMP video

Yeah, yeah .. it’s a marketing video from ScaleMP. But I am looking beyond the company and at the possibility of this out-system type of server virtualization. The ability to pool the CPU processing power of many physical servers and to aggregate the physical RAM of all the combined servers into a single shared memory architecture unleashes the true power of server virtualization. This is THE next generation symmetric multiprocessing (SMP) architecture, and it breaks free from the limitations and scalability constraints of the inward virtualization of physical servers.

In the past, SMP systems relied on heavy programmability of the applications to scale. Applications didn’t necessarily scale on-the-fly with SMP systems, and some level of configuration and programming had to be applied to address the proprietary SMP methods and interconnects. ScaleMP’s vSMP Foundation hypervisor solution removes the proprietary nature of SMP and brings x86 server virtualization to meet the demands of HPC.

Here’s a look at the high level architecture of ScaleMP vSMP:

This type of architecture bears similarity to the RNA Networks solution that I blogged about some time ago. RNA Networks, which was acquired by Dell late last year, based its solution on RDMA technology and protocols, and was more about enhancing scalability and performance with memory pooling via its Memory Cloud. ScaleMP’s patent-pending technology is more than that. It pools both memory and processing cores, giving it greater scalability and performance, the much-needed resources for the demands of HPC environments.

The folks at ScaleMP contacted me a couple of weeks back and shared some of their marketing datasheets and whitepapers. While the information passed to me was OK, I wish it had a deeper dive into the technology and implementation. I hope they can share that, and I don’t mind signing an NDA.

Well, this is done pro bono, because I want everyone to know the choices and possibilities out there. It is my worldly cause to have people educated, because only by being informed do we make better choices. The server virtualization world isn’t always about VMware, you know.

4TB disks – the end of RAID

Seriously? 4 freaking terabyte disk drives?

The enterprise SATA/SAS disks have just grown larger, up to 4TB now. Just a few days ago, Hitachi boasted the shipment of the first 4TB HDD, the 7,200 RPM Ultrastar™ 7K4000 Enterprise-Class Hard Drive.

And just weeks ago, Seagate touted that their Heat-Assisted Magnetic Recording (HAMR) technology will bring forth 6TB hard disk drives in the near future, with 60TB HDDs not far over the horizon. 60TB is a lot of capacity, but a big, big nightmare for disk availability and data backup. My NetApp Malaysia friend joked that the RAID reconstruction of a 60TB HDD would probably finish by the time his daughter finishes college – and his daughter is still in primary school!

But the joke reflects something very serious we are facing, as HDD capacities grow into something that could become unmanageable if the traditional implementation of RAID does not change to meet such monstrous capacities.

Yes, RAID has changed since 1988, as every vendor approaches RAID differently. NetApp was always about RAID-4 and later RAID-DP, and I remember the days when EMC had RAID-S. There was even a vendor in the past who marketed RAID-7, but it was proprietary and wasn’t an industry standard. Fundamentally, though, RAID did not change in a revolutionary way, and it continued to withstand the ever-ballooning capacities (and pressures) of the HDDs. RAID-6 was introduced when the first 1TB HDDs came out, to address the risk of a possible second disk failure in a parity-based RAID like RAID-4 or RAID-5. But today, the 4TB HDDs could be the last straw that breaks the camel’s back, or in this case, RAID’s back.

RAID-5 is obviously dead. Even RAID-6 might be considered insufficient now. Having a 3rd parity drive (3P) is an option, and the only commercial technology that I know of which supports 3 parity drives is ZFS (raidz3). But 3P causes additional overhead in performance and usable capacity – in a 14-drive group, for instance, triple parity leaves only 11 drives’ worth of usable capacity. Will the fickle customer ever accept such compromises?

Note that 3P is not RAID-7. RAID-7 is a trademark of an old company called Storage Computer Corporation, and RAID-7 is not a standard definition of RAID.

One of the biggest concerns is rebuild times. If a 4TB HDD fails, the rebuild could take days at average rebuild speeds. The failure of a second HDD could push the rebuild time out to a week or so … and the array is vulnerable while the disks are being rebuilt.
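Some back-of-envelope arithmetic shows why. Here I assume a sustained rebuild rate of 50 MB/s – a hypothetical figure picked purely for illustration, since real rebuild rates vary with array load and design:

```python
# Back-of-envelope rebuild times at an assumed, sustained rebuild rate of
# 50 MB/s (hypothetical -- real rates vary with array load and design).
TB = 10**12
rebuild_rate = 50 * 10**6          # bytes per second

for capacity_tb in (1, 4, 60):
    seconds = capacity_tb * TB / rebuild_rate
    print(f"{capacity_tb:>2} TB drive: ~{seconds / 3600:.0f} hours "
          f"({seconds / 86400:.1f} days) to rebuild")

#  1 TB drive: ~6 hours (0.2 days) to rebuild
#  4 TB drive: ~22 hours (0.9 days) to rebuild
# 60 TB drive: ~333 hours (13.9 days) to rebuild
```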

There is a lot of talk about declustered RAID, and I think it is about time we learned about this RAID technology. At the same time, we should demand this technology before we even consider buying storage arrays with 4TB hard disk drives!

I have said this before: I am still trying to wrap my head around declustered RAID. So I invite the gurus on this matter to comment on the concept, but here is my understanding of the subject of declustered RAID.

Panasas’ founder, Dr. Garth Gibson, is one of the people who proposed RAID declustering way back in 1999. He is a true visionary.

One of the issues of traditional RAID today is that we still treat the hard disk component in a RAID domain as a whole device. Traditional RAID is designed to protect whole disks with block-level redundancy. An array of disks is treated as a RAID group, or protection domain, that can tolerate one or more failures and still recover a failed disk from the redundancy encoded on the other drives. The RAID recovery requires reading all the surviving blocks on the other disks in the RAID group to recompute the blocks lost on the failed disk. In short, the recovery, in the event of a disk failure, is on the whole device, and therefore an entire 4TB HDD has to be recovered. This is not good.

The concept of RAID declustering is to break away from the whole-device idea and apply RAID at a more granular scale. IBM GPFS works with logical tracks, and RAID is applied at the logical track level. Here’s an overview of how it compares to traditional RAID:

The logical tracks are algorithmically spread out across all physical HDDs, and the RAID protection layer is applied at the track level, not at the HDD device level. So, when a disk actually fails, the RAID rebuild is applied at the track level. This significantly improves the rebuild times of the failed device, and does not affect the performance of the entire RAID volume much. The diagram below shows declustered RAID’s time and performance impact when compared to a traditional RAID:
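Here is a small sketch of why that helps, using pseudo-random stripe placement as a stand-in for declustered placement. This is my own illustration, not GPFS’s actual placement algorithm: when a disk fails, the strips needed for the rebuild are scattered across every surviving disk in the pool, instead of hammering the few disks of one fixed RAID group.

```python
import random
from collections import Counter

NUM_DISKS = 20       # disks in the declustered pool
STRIPE_WIDTH = 8     # e.g. 6 data + 2 parity strips per stripe
NUM_STRIPES = 10000

random.seed(42)
# Place each stripe on a pseudo-random subset of the pool -- a stand-in for
# declustered placement, not the actual GPFS algorithm.
stripes = [random.sample(range(NUM_DISKS), STRIPE_WIDTH) for _ in range(NUM_STRIPES)]

failed = 0
# Rebuild work: every stripe that touched the failed disk must be read back
# from its surviving strips, which live all over the pool.
work = Counter(d for s in stripes if failed in s for d in s if d != failed)

print(f"{len(work)} of {NUM_DISKS - 1} surviving disks share the rebuild load")
print(f"heaviest disk reads ~{max(work.values())} strips, versus the same "
      f"~{sum(work.values())} strips hitting only {STRIPE_WIDTH - 1} disks "
      f"in a traditional, fixed RAID group")
```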

While the IBM GPFS approach to declustered RAID is applied at a semi-device level, the future is leaning towards OSD. OSD, or object storage device, is the next generation of storage, and I blogged about it some time back. Panasas is the leader when it comes to OSD, and their radical approach is to apply RAID at the object level. They call this Object RAID.

With object RAID, data protection occurs at the file-level. The Panasas system integrates the file system and data protection to provide novel, robust data protection for the file system.  Each file is divided into chunks that are stored in different objects on different storage devices (OSD).  File data is written into those container objects using a RAID algorithm to produce redundant data specific to that file.  If any object is damaged for whatever reason, the system can recompute the lost object(s) using redundant information in other objects that store the rest of the file.

The above is a quote from the blog of Brent Welch, Panasas’ Director of Software Architecture. As mentioned, the RAID protection of the objects in Panasas’ OSD architecture occurs at the file level, and the file or files constitute the object. Therefore, the recovery domain in Object RAID is at the file level, confining the risk and damage of data loss to the file level and not the entire device level. Consequently, the speed of recovery is much, much faster, even for 4TB HDDs.
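As a toy illustration of file-level protection – my own sketch, not Panasas’ actual Object RAID algorithm or layout – imagine splitting a file into fixed-size chunks, computing one XOR parity chunk, and placing each chunk in a different object on a different OSD. Losing one OSD then only requires reading this file’s surviving chunks to rebuild.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stripe_file(data, chunk_size=4):
    """Split a file into fixed-size chunks plus one XOR parity chunk -- a toy
    stand-in for file-level (object) RAID, not Panasas' actual Object RAID."""
    chunks = [data[i:i + chunk_size].ljust(chunk_size, b"\0")
              for i in range(0, len(data), chunk_size)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks, parity

chunks, parity = stripe_file(b"a small media file")
# Each chunk (and the parity) would land in a different object on a different OSD.
lost = chunks[1]                        # pretend the OSD holding chunk 1 died
survivors = [c for i, c in enumerate(chunks) if i != 1]
rebuilt = parity
for c in survivors:
    rebuilt = xor_bytes(rebuilt, c)
print(rebuilt == lost)                  # True -- only this file's data is touched
```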

Reliability is the key objective here. Without reliability, there is no availability. Without availability, there are no performance factors to consider. Therefore, the system’s reliability is paramount when it comes to keeping the data protected. RAID has been the guardian all these years. It’s time for a revolutionary approach to safeguard reliability and ensure data availability.

So, how many vendors can claim they have declustered RAID?

Panasas is a big YES, and they apply their intelligence in large HPC (high performance computing) environments. Their technology is tried and tested. IBM GPFS is another. But where are the rest?

 

Dell acquires Wyse Technology

There is no stopping Dell. It is in the news again, this time, acquiring privately owned Wyse Technology.

The name Wyse certainly brings back memories of the days of the VT100 and VT220 terminals. Wyse was also one of the early leaders in thin client computing, in the days when “dumb” workstations ran an X server and window manager to display client applications from remote hosts. They used to compete with companies like NCD (Network Computing Devices) and Hummingbird. My first company, CSA, was a distributor of NCD clients, and I remember Sime Darby was the distributor of Wyse thin clients.

Wyse, as quoted:

Wyse Technology is the global leader in Cloud Client Computing. The Wyse portfolio includes industry-leading thin, zero and cloud PC client solutions with advanced management, desktop virtualization and cloud software supporting desktops, laptops and next generation mobile devices. Wyse has shipped more than 20 million units and has over 200 million people interacting with their products each day, enabling the leading private, public, hybrid and government cloud implementations worldwide. Wyse works with industry-leading IT vendors, including Cisco®, Citrix®, IBM®, Microsoft, and VMware® as well as globally-recognized distribution and service providers. Wyse is headquartered in San Jose, California, U.S.A., with offices worldwide.

The Dell acquisition of Wyse shows that Dell is serious about Virtual Desktop Infrastructure (VDI) type technology, especially in the client cloud computing space. And the VDI space is going to heat up as many vendors push hard to get the market going.

Dell, for better or for worse, has just added another acquisition that fits into the jigsaw puzzle they are trying to build. Wyse looks like a good buy, as it has mature technology and a legacy in the thin client space. I hope Dell will energize the Wyse Technology team, but while the acquisition is easy, the tough part will be the implementation. How well Dell mobilizes the Wyse Technology team will depend on how well Wyse blends into Dell’s culture.