Mr. Black divorces Miss Purple

The writing’s on the wall. The relationship has been on the rocks since Mr. Black decided to take on 2 new wives (one in 2007 and one in 2010), though Miss Purple had a good run while things were hot.

Why Black and Purple? For a while, within the local circle of EMC Malaysia, Dell’s EMC CLARiiONs were known as “Black” while EMC’s own CLARiiON was “Purple”. These were the colours of the bezels of the respective storage boxes. The relationship, which Dell signed with EMC in 2003, was supposed to last 10 years, but today Dell has decided to end it 2 years early. Here is one of the news reports at eWeek.com.

The “divorce” was inevitable. Gaps started showing up in the relationship when Dell acquired EqualLogic in 2007, and it reached a point of no return when Dell started pursuing 3PAR back in 2010. Dell eventually lost 3PAR to HP and got Compellent instead. It was bound to happen, sooner or later.

Storage is becoming a very important strategy for Dell. As server virtualization grows, the demand for Dell servers wanes, but storage demand keeps growing. That is why it makes sense for Dell to have their own storage technology. In addition to Compellent and EqualLogic, Dell also acquired Exanet and Ocarina Networks in 2010.

It has been a good run for both companies, especially EMC, which was able to make use of Dell’s aggressive sales force to increase its market penetration for CLARiiON. And given the market dynamics, it is crucial that a company like Dell, with little innovation in the past, change its approach of reselling other people’s products and start owning and developing its own technology.

Novell Fil(e)r … Files, my way

I took a bit of time out of my busy schedule this week to learn a bit more about the Novell Filr.

Firstly, it is a F-I-L-E-R, spelled “Filr”, something like Tumblr or Razr. I think it’s pretty inventive, but putting marketing aside, I learned a little about how the idea behind the concept works. Right now, my evaluation is pretty much on the surface because I am still working out the timing for a real-life demo and hands-on later on.

As I mentioned in my previous blog, the idea behind Novell Filr is to allow users to access their files anywhere, on any device. The importance of this concept is that it lets users stay in their comfort zone. This simple concept, of keeping the users comfortable, is something we should not overlook, because it brings together the needs of the enterprise and the IT organization and the needs of the individual users in a subtle yet powerful way. It allows the behavioural patterns of the “lazy” users to be corralled into what IT wants them to do, that is, to have the users’ files secured, protected and under IT’s control. OK, that was my usual blunt way of saying it, but I believe this is a huge step forward in addressing the issues at hand. And I am sorry for calling the users “lazy”, but that’s what the IT guys would say.

What are the issues usually faced when it comes to dealing with user files? Let me count the ways:

  • Users don’t put their files in the backup folders as they were told, and then they blame IT for not backing up their files
  • Users keep several copies of their files and share them through email, thumb drives and so on with friends and colleagues. IT gets blamed for the ever-growing storage capacity needs and, even worse, for breaching the security of the organization when internal files are shared with outsiders
  • Users want to get their files on iPads, iPhones, Android tablets, BlackBerrys and other smart devices, and say IT is too archaic. Users say they are less productive if they can’t get their files anywhere. IT gets the blame again
  • Users have little discipline to change their habits and to think about file security and ownership of the company’s private and confidential data. They share files happily, and IT gets the blame

These points, from the IT point of view, are exactly the challenges faced daily. That is why users are flocking to Box.net, DropBox and Windows Live SkyDrive: they want simplicity; they want freedom; they want IT to get off their backs. But all these “confrontations” are compromising the integrity of the organization’s files and data.

Novell Filr is likely to be one of the earliest solutions to address this problem. It attempts to marry the DropBox-style simplicity and freedom for the users with an IT backend, where the organization’s files are stored, in which IT runs a tight ship over the users’ AAA (authentication, authorization and auditing) and, at the same time, includes the Novell File Management Suite. As shown below, the Novell File Management Suite consists of 3 main solutions.

I will probably talk more about the File Management Suite in another blog entry, but meanwhile, how does the Novell Filr work?

First of all, it sits in the conversation between the users’ devices (typically a Windows computer accessing a network drive via CIFS) and the central file storage. You know, the usual file sharing concept. But this traditional approach limits the users to computers only, not smart devices such as smartphones and tablets.

In the spirit of DropBox, I believe a Novell Filr client (on computers, smart devices, etc.) speaks with the Novell Filr “middleware” through a standard RESTful API over HTTP. I still need to ascertain this because I have not had any engagement with Novell yet, nor have I seen the product. In the slides given to me, the explanation at 10,000 feet is shown below.

I will share more details later once I have more information.
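In the meantime, here is a purely hypothetical sketch (in Python) of what such a client-to-middleware exchange might look like. The server URL, paths, credentials and JSON fields are all my own placeholders for illustration; they do not describe Novell’s actual API.

# Hypothetical REST-over-HTTP exchange between a Filr-style client and its
# middleware. Everything below (URL, paths, fields) is an invented placeholder.
import requests

FILR_SERVER = "https://filr.example.com/api/v1"      # placeholder middleware endpoint

# Authenticate as the user; the middleware would map this to the backend AAA
session = requests.Session()
session.auth = ("jdoe", "secret")

# List the user's files on the central file storage, then fetch one of them
listing = session.get(FILR_SERVER + "/files", params={"path": "/home/jdoe"})
for entry in listing.json():
    print(entry["name"], entry["size"])

response = session.get(FILR_SERVER + "/files/home/jdoe/report.doc")
with open("report.doc", "wb") as f:
    f.write(response.content)

The point of the sketch is simply that a thin HTTP/REST layer is what would let smartphones and tablets, not just CIFS-speaking PCs, reach the same centrally stored and centrally controlled files.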

At the same time, I cannot help but notice this changing trend in NAS. It seems to me that many of the traditional NAS ideas are going the way of REST, especially for object-based “file” access. In fact, the definition of a “file” may also be changing into that of a web object. The tide is certainly rising on this subject, and we shall see how it pans out as SMB 2.0 and NFS version 4.0 start making inroads to replace the older NAS protocols of CIFS 1.1 and NFS version 3.0.

As I mentioned previously, this is not disruptive to me, and I know of several vendors that already have similar developments. But the fundamental shift of user behaviour towards Web 3.0-style data, file and information access might be addressed well by the Novell Filr.

I can’t wait for the hands-on and demo, knowing that much can be addressed in the enterprise file management space by changing user habits in a subtle but definitely more effective way.

Novell Filr (How do you pronounce this?)

I’ll let you in on a little secret … I am a great admirer of Novell’s technology.

Ok, ok, they aren’t what they used to be anymore (remember the great heydays of Netware, ZenWorks and Groupwise?), and some of their business decisions didn’t win them a lot of fans either. Some notable ones in recent years were the joint patent agreement with Microsoft (November 2006) and their ownership of the Unix operating system rights. Though Novell did, in the end, protect the Unix community by establishing itself as the rightful owner of the Unix OS rights, the negativity from the lawsuit and counter-lawsuit between SCO and Novell soured the relationship with the Unix faithful. In the end, they were acquired by Attachmate late last year.

However, I have been picking up bits of Novell technology knowledge for the past 3-4 years. Somehow, despite the negative perception that most people I know have about Novell, I strongly believe the ideas and thinking that go into their solutions and products are smart and innovative.

So, when a buddy (and ex-housemate) of mine, Mr. Ong Tee Kok, the Country Manager of Novell Malaysia, asked me to evaluate a new solution from Novell (it has not even been released yet), I jumped at the chance.

Novell will soon be announcing a solution called Novell Filr. I really don’t know how to pronounce the name, but the concept of Novell Filr makes a lot of sense. I cannot say that it is disruptive, but it is coming to meet the changing world of how users store and access their files, and to balance that with the needs of enterprise file management and access.

Yes, Novell Filr is a file virtualization solution. It comes between the users and their files. Previously, in a network attached environment, files were presented to the users via the typical file sharing protocols: CIFS for Windows and NFS for Unix/Linux. These protocols have been around for ages, with some recent advances in the last few years in SMB 2.0 and NFS version 4. However, the updates to these protocols address the greater needs of the organization and the enterprise rather than the needs of the users.

And because of this, users have been flocking to cloud-centric solutions out there such as DropBox, Box.net and Windows Live SkyDrive. These solutions cater to users who want to access their files anywhere, with any device. Unfortunately, the simplicity of file access the “cloud way” is not there when the users are in the office network. They have to be routinely reminded by the system administrator to keep their files in some special directory to have them backed up. Otherwise, they will be ostracized by the IT department and their straying files will not be backed up.

Well, Novell will be introducing their Novell Filr soon and they have released a video of their solution. Check this out.

I shall be spending some time this week looking deeper into their solution and hoping to see a demo soon. I have great confidence in Novell’s solutions, and I intend to share more about them later.

A great has passed on – Dennis Ritchie

We pay tribute to another great, perhaps even greater than Steve Jobs in his contribution to the computer industry. Dennis Ritchie, the creator of the C programming language and co-developer of the Unix operating system, has passed away at 70.

If you think about how his work has influenced and spawned other programming languages such as C++, Java and the other C variants, as well as the ideas and foundations of Linux, Solaris, HP-UX, FreeBSD and so on, that’s massive.

It was he who made me a Unix bigot, a true believer that technology should be shared, because sharing means giving life to ideas and innovations.

In my book collection, these are 2 of my most coveted titles, and Dennis Ritchie was very much part of the contents and ideas in them.

 

I would like to share a few excerpts from the book “A Quarter Century of Unix” by Peter H. Salus (ISBN 0-201-54777-5). On page 48,

Mike Mahoney asked Dennis Ritchie about designing C:

“It was an adaptation of B that was pretty much Ken’s. B actually started out as system FORTRAN… Anyway, it took him about a day to realize that he didn’t want to do a FORTRAN compiler after all. So he did this very simple language called B and got it going on the PDP-7.  …”

“The basic construction of the compiler – of the code generator of the compiler – was based on an idea I’d heard about; some at the [Bell] Labs at Indian Hill. I never actually did find or read the thesis, but I had the ideas in it explained to me, and some of the code generator for NB, which became C, was based on this Ph.D. thesis. It was also the technique used in the language called EPL, which was used for switching systems and ESS machines; it stood for ESS Programming Language. So that the first phase of C was really these two phases in short succession of, first, some language changes from B, really, adding the type syntax structure without too much change in the syntax and doing the compiler.”

“The combination of things caused Ken to give up that summer. Over the year, I added structures and probably made the compiler somewhat better – better code – and so over the next summer, we made the concerted effort and actually did redo the whole operating system in C”

This was 1971-1973, at Bell Telephone Labs (BTL), where some of the most important chains of events happened. In the summer of 1972, the hardware arrived:

DEC PDP-11/20 processor
56 Kbytes of core memory
High-speed paper tape reader/punch
ASR-33 Teletype - console
DECtape - twin drive
RK11/RK05 disk (2) - 2.4 Mbytes
RF11 fixed head disk (2 at first, 3 more added later)
DC11 (6 lines) for local terminals
DM11 16-line multiplexers (3)

This was the machine on which Unix history was written. This was the machine that ran the Unix that was completely rewritten in C. Ken Thompson, Dennis Ritchie and Joe Ossanna were all part of that Unix history.

Going back earlier, to 1970, Dennis Ritchie recounted the history of Unix:

“Unix came up in two stages. Ken got it going before there was a disk; he divided the memory up into two chunks and got the operating system going in one piece and used the other piece for a sort of RAM disc. To try it out, you’d first load this paper tape that initialized the disk and then load the operating system. So there was a cp [copy file], a cat [catenate files], and an ls [list files] actually running before there was a disc.”

Classic stuff!

My last bow of respect to Dr. Dennis Ritchie, the creator of the C Programming Language and co-developer of the Unix Operating System (with Dr. Ken Thompson).

Playing with NetApp … After Rightsizing

It has been a tough week for me, and that’s why I haven’t been writing much. So, right now, right after dinner, I am back on the keyboard again, continuing where I left off with NetApp’s usable capacity.

A blog and a half ago, I wrote about the journey of getting to NetApp’s usable capacity, stopping at the point of the disk capacity after rightsizing. We ended with the table below.

Manufacturer Marketing Capacity | NetApp Rightsized Capacity
36GB                            | 34.0/34.5GB*
72GB                            | 68GB
144GB                           | 136GB
300GB                           | 272GB
600GB                           | 560GB
1TB                             | 847GB
2TB                             | 1.69TB
3TB                             | 2.48TB

* The 34.5GB size applied to the Fibre Channel zoned checksum mechanism of 512 bytes per sector, employed prior to ONTAP version 6.5. From ONTAP 6.5 onwards, block checksums of 520 bytes per sector were employed for greater data integrity protection and resiliency.

At this stage, the next variable to consider is RAID group sizing. NetApp’s ONTAP employs 2 RAID levels – RAID-4 and the default RAID-DP (a unique implementation of RAID-6, employing 2 dedicated parity disks for double parity).

Before all the physical hard disk drives (HDDs) are pooled into a logical construct called an aggregate (the construct on which ONTAP’s FlexVols are built), the HDDs are grouped into RAID groups. A RAID group is also a logical construct, in which the HDDs are assigned as data or parity disks. The RAID group is the building block of the aggregate.

So why a RAID group? Well, first of all (although it is likely possible), it is not prudent to group a large number of HDDs into a single group with only 2 parity drives supporting the RAID. Even though one could maximize the allowable, aggregated capacity from the HDDs, the data reconstruction or resilvering operation following an HDD failure (disks are supposed to fail once in a while, remember?) would slow the RAID operations to a trickle because of the large number of HDDs the operation has to address. Therefore, it is best to spread them out into multiple RAID groups with a recommended, fixed number of HDDs per RAID group.

The RAID group is important because it is used to balance a few considerations:

  • Performance in recovery if there is a disk reconstruction or resilvering
  • Combined RAID performance and availability through a Mean Time Between Data Loss (MTBDL) formula

Different ONTAP versions (and different disk types) have different recommended numbers of HDDs per RAID group. For ONTAP 8.0.1, the table below shows its recommendations.

 

So, given a large pool of HDDs, the NetApp storage administrator has to figure out the best layout and the optimal number of HDDs to get the capacity he/she wants. There is also a best practice of setting aside 2 HDDs as hot spares for every 30 or so HDDs in a RAID-DP configuration, and it is best practice to take the default recommended RAID group size most of the time.

I presume this is all getting very confusing, so let me show it with an example. Let’s use the common 2TB SATA HDD and assume the customer has just bought a FAS6000 with 100 HDDs. From the table above, the default (and recommended) RAID group size is 14. The customer also wants maximum usable capacity. In a step-by-step guide,

  1. Consider the hot sparing best practice. The customer wants to ensure that there will always be enough spares, so using the rule of thumb of 2 HDDs per 30 HDDs, 6 disks are set aside as hot spares. That leaves 94 HDDs from the initial 100.
  2. There is a root volume, rootvol, and it is recommended to put it into an aggregate of its own so that it gets maximum performance and availability. To standardize, the storage administrator configures 3 HDDs as 1 RAID group to create the rootvol aggregate, aggr0. Even though the total capacity used by the rootvol is just a few hundred GBs, it is not recommended to place data into the rootvol. Of course, this cannot be avoided in most of the FAS2000 series, where smaller HDD counts are sold and implemented. With 3 HDDs used for the rootvol, the customer now has 91 HDDs.
  3. With 91 HDDs and the default RAID group size of 14, for the next aggregate, aggr1, the storage administrator can configure 6 full RAID groups of 14 HDDs (6 x 14 = 84) and 1 partial RAID group of 7 HDDs (91/14 = 6 remainder 7), and 84 + 7 = 91 HDDs.
  4. RAID-DP requires 2 disks per RAID group to be used as parity disks. Since there are a total of 7 RAID groups from the 91 HDDs, 14 HDDs are parity disks, leaving 77 HDDs as data disks.

This is where the rightsized capacity comes back into play. 77 x 2TB HDDs is really 77 x 1.69TB = 130.13TB, down from the initial 100 x 2TB = 200TB.
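To make the arithmetic easy to replay, here is a minimal sketch in Python. It is purely illustrative, not a NetApp sizing tool; the disk counts, rules of thumb and the 1.69TB rightsized figure are simply the ones from the example above.

# Back-of-the-envelope usable capacity for the example: 100 x 2TB SATA HDDs,
# RAID-DP, RAID group size of 14, rightsized capacity of 1.69TB per disk.
import math

def usable_capacity(total_disks=100, rightsized_tb=1.69, raid_group_size=14,
                    spares=6, rootvol_disks=3, parity_per_group=2):
    data_pool = total_disks - spares - rootvol_disks       # 100 - 6 - 3 = 91
    raid_groups = math.ceil(data_pool / raid_group_size)   # 91 / 14 -> 7 groups
    parity_disks = raid_groups * parity_per_group          # 7 x 2 = 14
    data_disks = data_pool - parity_disks                  # 91 - 14 = 77
    return data_disks, data_disks * rightsized_tb          # 77 x 1.69TB = 130.13TB

disks, tb = usable_capacity()
print(disks, "data disks, about", round(tb, 2), "TB usable before further overheads")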

If you intend to create more aggregates (in our example here, we have only 2 aggregates – aggr0 and aggr1), there will be more RAID group sizing and parity disk considerations, further reducing the usable capacity.

This is just part 2 of our “Playing with NetApp Capacity” series. We have not arrived at the final usable capacity yet and I will further share that with you over the weekend.

Solaris virgin again!

This week I went off the beaten track to get back to my first love – Solaris. Now that Oracle owns it, it shall be known as Oracle Solaris. I am working on a small project based on (Oracle) Solaris Containers and I must say, I am intrigued by it. And it felt good punching the good ol’ command lines in Solaris again.

Oracle actually offers a lot of virtualization technologies – Oracle VM, Oracle VM Dynamic Domains, Oracle Solaris Logical Domains (LDOMs), Oracle Solaris Containers (aka Zones) and Oracle VirtualBox. Other than VirtualBox, these VE (Virtualized Environment) solutions are enterprise solutions, but unfortunately they lack the pizazz of VMware at this point in time. From my perspective, they are also very Oracle/Solaris-centric, making them less appealing to the industry at this moment.

Here’s an old Sun diagram of Sun’s virtualization solutions:

What I am working on this week is Solaris Containers, or Zones. To me, the Containers solution is rather similar to VMware’s gamut of host-based, Tier-2 virtualization solutions – VMware Server, VMware Workstation, VMware Player, VMware ACE and VMware Fusion for MacOS – in that it requires a host OS to run the Solaris Containers.

I did not have the Solaris Resource Manager software to run the GUI stuff, so I had to get back to basics with the CLI, which is good for me. In fact, I liked it even more, and with the CLI I could pretty much create zones with ease. And given that the host OS is Solaris 10, I could instantly feel the robustness, the performance, the stability and the power of Solaris 10, unlike the flaky Windows hosts running VMware’s host-based virtualization solutions, or the iffiness of Linux.

A more in depth look of Solaris Containers/Zones is shown below.

At first touch, 2 things impressed me:

  • The isolation between each Container and its global master domain is very well defined. What can be done and what cannot, what can be configured and what cannot, is very clear, and the configurability of each parameter is quickly acknowledged and controlled by the Solaris kernel. From what I have read, Solaris Containers have achieved the highest level of security with the Trusted Extensions component, which is a re-implementation of Trusted Solaris. Solaris 10 has received the highest commercial level of Common Criteria Certification, known as EAL4+, and has been accepted by the U.S. DoD (Department of Defense).
  • Its simplicity in allocating compute and memory resources to the Containers. I will share that in the CLI with you later.

To start, we acknowledge that there is a global zone that was created when Solaris 10 was first installed.

 

Creating a zone and configuring it with the CLI is pretty straightforward. Here’s a glimpse of what I did yesterday.

# zonecfg -z perf-rac1

Use ‘create’ to start configuring the zone:

zonecfg:perf-rac1> create
zonecfg:perf-rac1> set zonepath=/rpool/perfzones/perf-rac1
zonecfg:perf-rac1> set autoboot=true
zonecfg:perf-rac1> remove inherit-pkg-dir dir=/lib
zonecfg:perf-rac1> remove inherit-pkg-dir dir=/sbin
zonecfg:perf-rac1> remove inherit-pkg-dir dir=/usr
zonecfg:perf-rac1> remove inherit-pkg-dir dir=/usr/local
zonecfg:perf-rac1> add net
zonecfg:perf-rac1:net> set address=<IP address for the zone>
zonecfg:perf-rac1:net> set physical=<bge0 or the correct Ethernet interface>
zonecfg:perf-rac1:net> end
zonecfg:perf-rac1> add dedicated-cpu
zonecfg:perf-rac1:dedicated-cpu> set ncpus=2-4 (or any CPU range the Sun box can offer)
zonecfg:perf-rac1:dedicated-cpu> end
zonecfg:perf-rac1> add capped-memory
zonecfg:perf-rac1:capped-memory> set physical=4g
zonecfg:perf-rac1:capped-memory> set swap=1g
zonecfg:perf-rac1:capped-memory> set locked=1g
zonecfg:perf-rac1:capped-memory> end
zonecfg:perf-rac1> verify
zonecfg:perf-rac1> commit
zonecfg:perf-rac1> exit

The command zonecfg -z <zonename> brings up a configuration prompt, where I run create to create the zone. I set the zonepath to specify where the zone’s files will be contained and set autoboot=true so that the zone will automatically start when the system reboots.

Solaris Containers are pretty cool in that a zone can either inherit (share) the common directories such as /usr, /lib, /sbin and others from the global zone, or create its own set of directories separate from the global root directory tree. Here I chose to remove the inheritance and let the Solaris instance in the Container have its own independent directories.

The add net command sends me into another sub-scope where I can configure the network interface as well as the network address. Nothing spectacular there. I end that configuration and then do a couple of cool things related to resource management.

I have added add dedicated-cpu with set ncpus=2-4, and also add capped-memory with physical=4g, swap=1g and locked=1g. What I have done is allocate a minimum of 2 and a maximum of 4 CPU resources (if resources permit) to the zone called perf-rac1. Additionally, I have capped its memory at 4GB of RAM, with 1GB of locked memory guaranteed to it, and swap space set at 1GB.

This resource management allows me to build a high-performance Solaris Container for Oracle 11g RAC. Of course, you are free to create as many containers as the system resources allow. Note that I did not include the shared memory and semaphore parameters required for Oracle 11g RAC, but go ahead and consult your favourite Oracle DBA (have fun doing so!)

After the perf-rac1 zone/container has been created (and configured), I just need to run the following

# zoneadm -z perf-rac1 install

# zoneadm -z perf-rac1 boot

The first command installs the zone, copying all the packages from the global zone, and the second boots it. Once the “installation” is complete, there will be the usual Solaris configuration prompts where information such as timezone, IP address and root login/password is entered. That takes about 20-40 minutes, depending on the amount of things to be installed and, of course, the power of the Sun system. I am running an old Sun V210 with 512MB, so it took a while.

When it’s done, we can just log into the zone with the command

# zlogin -C perf-rac1

and I get into another Solaris OS in the Solaris Container.

What I liked was the fact that Solaris Containers are rather simple to understand, yet the flexibility in assigning computing resources to them is pretty impressive. It’s fun working on this stuff again after years away from Solaris. (This was after I took my RedHat RHCE certification, and I pretty much left Sun Solaris for quite a while.)

More testing to be done, but overall I am quite happy to be back as a Solaris virgin again.

Storage Architects no longer required

I picked up a new article this afternoon from SearchStorage, titled “Enterprise storage trends: SSDs, capacity optimization, auto tiering”. I cannot help but notice that it echoes some of the things I have been writing about – VMware being the storage killer and the rise of cloud computing taking away our jobs.

I did receive some feedback about what I wrote in the past, and after reading the SearchStorage article, I can’t help but feel justified. In the sidebar, it wrote:

 

The rise of virtual machine-specific and cloud storage suggest that other changes are imminent. In both cases …. and would no longer require storage architects and managers.

Things are changing at an extremely fast pace, and for those of us still languishing in the realms of NAS and SAN, our expertise could be rendered obsolete pretty quickly.

But all is not lost, because it would be easier for a storage engineer, who already has the foundation, to move into the virtualization space than for a server virtualization engineer to come down and learn the storage fundamentals. We can either choose to be dinosaurs or be the species of the next generation.

Playing with NetApp … (Capacity) BR

Much has been said about usable disk storage capacity and, unfortunately, many of us take the marketing capacity number given by the manufacturer verbatim. For example, 1TB does not really equate to 1TB in usable terms, and that is something you engineers out there should be explaining to the customers.

NetApp, ever since the beginning, has been subjected to scrutiny from customers and competitors alike about their usable capacity, and I intend to correct this misconception. The key to correcting it is to understand what the capacity is before rightsizing (BR) and after rightsizing (AR).

(Note: Rightsizing in the NetApp world is well documented, although it is viewed differently by different people. It is part of how WAFL uses the disks, but one has to be aware that not many other storage vendors publish their rightsizing process, if they have any.)

Before Rightsizing (BR)

First of all, we have to know that there are 2 systems of unit prefixes:

  • Base-10 (decimal) – fit for human understanding
  • Base-2 (binary) – fit for computer understanding

So, according to the International System of Units, the SI prefixes for Base-10 are

Text Factor Unit
kilo 10^3   1,000
mega 10^6   1,000,000
giga 10^9   1,000,000,000
tera 10^12  1,000,000,000,000

In the computer context, where the binary Base-2 system is relevant, the corresponding prefixes for Base-2 are

Text      Factor Unit
kilo-byte 2^10   1,024
mega-byte 2^20   1,048,576
giga-byte 2^30   1,073,741,824
tera-byte 2^40   1,099,511,627,776

And we must know that the storage capacity is in Base-2 rather than in Base-10. Computers are not humans.

With that in mind, the next issue is the disk manufacturers. We should have an axe to grind with them for misrepresenting the actual capacity. When they say their HDD is 1TB, they are using the Base-10 system, i.e. 1TB = 1,000,000,000,000 bytes. THIS IS WRONG!

Let’s see how that 1TB works out to be in Gigabytes in the Base-2 system:

1,000,000,000,000/1,073,741,824 = 931.3225746154785 Gigabytes

Note: 2^30 = 1,073,741,824

That 1TB, when converted, is only about 931GB! So the disk manufacturers aren’t exactly giving you what they have advertised.
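Here is the same conversion as a quick Python sketch (the only inputs are the Base-10 marketing capacity in bytes and the Base-2 factor from the table above):

# Convert a Base-10 "marketing" 1TB into Base-2 gigabytes.
marketing_1tb_bytes = 1_000_000_000_000   # 1TB as the disk vendors count it (10^12)
base2_gigabyte = 2 ** 30                  # 1,073,741,824 bytes

print(marketing_1tb_bytes / base2_gigabyte)   # ~931.32 "real" gigabytes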

Thirdly, and this is the most important factor in the BR (Before Rightsizing) phase, is how WAFL handles the actual capacity before the disk is presented for WAFL/ONTAP operations. Note that all of this is done before any of the logical structures of aggregates, volumes and LUNs are created.

At this point, WAFL formats the actual disks (just like NTFS formats new disks), and this reduces the usable capacity even further. As a starting point, WAFL uses 4K (4,096 bytes) per block.

For Fibre Channel disks, WAFL formats them at 520 bytes per sector. Therefore, for each block, 8 sectors (520 x 8 = 4,160 bytes) make up 1 x 4K block, with the remaining 64 bytes (4,160 – 4,096 = 64 bytes) used as the checksum of that 4K block. This additional 64 bytes per block for the checksum is not displayed by WAFL or ONTAP and is not accounted for in the usable capacity.

SATA/SAS disks are formatted at 512 bytes per sector, and each 4K block consumes 9 sectors (9 x 512 = 4,608 bytes). 8 sectors are used for WAFL’s 4K block (4,096/512 = 8 sectors), and the remaining sector (the 9th) of 512 bytes is used partially for the 64-byte checksum. Again, the leftover 448 bytes (512 – 64 = 448 bytes) are not displayed and are not part of the usable capacity of WAFL and ONTAP.
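The per-block arithmetic for both disk types can be sketched like this (illustrative Python only, using the sector sizes quoted above):

# WAFL block and checksum layout, per 4K block, as described above.
WAFL_BLOCK = 4096                 # 4K bytes per WAFL block
CHECKSUM = 64                     # checksum bytes per block

fc_raw = 8 * 520                  # Fibre Channel: 8 x 520-byte sectors = 4,160 bytes
print(fc_raw - WAFL_BLOCK)        # 64 -> exactly the checksum, nothing wasted

sata_raw = 9 * 512                # SATA/SAS: 9 x 512-byte sectors = 4,608 bytes
print(sata_raw - WAFL_BLOCK - CHECKSUM)   # 448 bytes of the 9th sector left unused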

WAFL also compensates for the ever-so-slight irregularities between hard disk drives, even when they are labelled with the same marketing capacity. That is to say, 1TB from Seagate and 1TB from Hitachi will differ in actual capacity. In fact, a 1TB Seagate HDD with firmware 1.0a (for ease of clarification) and a 1TB Seagate HDD with firmware 1.0b (note the ‘a’ and ‘b’) could differ in actual capacity even though both ship with a 1.0TB marketing capacity label.

So, with all these things in mind, WAFL does what it needs to do – right size – to ensure that nothing gets screwed up when WAFL uses the HDDs in its aggregates and volumes. All for the right reason – data integrity – but often criticized for its “wrongdoing”. Think of WAFL as your vigilante superhero, wanted by the law for doing good for the people.

In the end, what you are likely to get Before Rightsizing (BR) from NetApp for each particular disk capacity would be:

Manufacturer Marketing Capacity | NetApp Rightsized Capacity | Percentage Difference
36GB                            | 34.0/34.5GB*               | 5%
72GB                            | 68GB                       | 5.55%
144GB                           | 136GB                      | 5.55%
300GB                           | 272GB                      | 9.33%
600GB                           | 560GB                      | 6.66%
1TB                             | 847GB                      | 11.3%
2TB                             | 1.69TB                     | 15.5%
3TB                             | 2.48TB                     | 17.3%

* The 34.5GB size applied to the Fibre Channel zoned checksum mechanism of 512 bytes per sector, employed prior to ONTAP version 6.5. From ONTAP 6.5 onwards, block checksums of 520 bytes per sector were employed for greater data integrity protection and resiliency.

From the table, the percentage of “lost” capacity is shown, and to the uninformed, this could look significant. But since the percentage is computed against the manufacturer’s inflated Base-10 marketing capacity, it overstates the real loss. Therefore, competitors should not use these figures as FUD, and NetApp should use them as a way to properly inform their customers.
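To illustrate with the 2TB disk (a quick sketch; I am assuming the 1.69TB rightsized figure in the table is a Base-2 number):

# Compare the "lost" percentage against the Base-10 marketing capacity
# versus the Base-2 capacity the disk actually delivers.
marketing_tb = 2.0                              # 2TB as advertised
base2_tb = 2 * 10**12 / 2**40                   # ~1.82TB in Base-2 terms
rightsized_tb = 1.69                            # from the table above

print((marketing_tb - rightsized_tb) / marketing_tb * 100)   # ~15.5% vs marketing capacity
print((base2_tb - rightsized_tb) / base2_tb * 100)           # ~7.1% vs what the disk really holds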

You have been informed about NetApp capacity before Right Sizing.

I will follow up on another day with what happens after rightsizing and the final actual usable capacity presented to the users and operations. This will be called After Rightsizing (AR). Till then, I am going out for an appointment.

IDC EMEA External Disk Storage Systems 2Q11 trends

Europe is the worst-hit region in the present economic crisis. We have seen countries such as Greece, Portugal and Ireland being among the worst hit, and Italy was just downgraded last week by S&P. Last week also saw the release of the 2Q2011 External Disk Storage Systems figures from IDC, and the poor economic sentiment is reflected in the IDC figures as well.

Overall, factory revenue for Western Europe grew 6% compared to the year before, but declined 5% compared to 1Q2011. As I was reading a summary of the report, 2 very interesting trends were clear:

  • The high-end market of above USD250,000 AND the lower-end market of less than USD50,000 increased, while the mid-range market in the USD50,000-100,000 price range declined
  • Sentiment revealed that storage buyers are increasingly looking for platforms that are quick to deploy and easy to manage

As older systems are refreshed, larger companies are definitely consolidating into larger, higher-end systems to support the consolidation of their businesses and operations. Fundamentals such as storage consolidation, centralized data protection, disaster recovery and server virtualization are likely to be the key initiatives by larger organizations to cut operational costs and maximize storage economics. This has translated into the EMEA market spending more on higher-end storage solutions from EMC, IBM, HDS and HP.

NetApp, which has always been very strong in the mid-range market, did well to increase its market share and factory revenue at IBM’s and HP’s expense, as those two vendors’ sales were flattish. Dell, while transitioning from its partnership with EMC to its Dell Compellent boxes, was the worst hit.

The lower-end storage solution market, according to the IDC figures, increased between 10% and 25% across the USD5,000, USD10,000 and USD15,000 price bands. This could mean a few things, but the obvious call would be the economic situation of most Western European SMBs/SMEs. It could also mean that the mid-range market is on the decline because many of the lower-end systems are good enough to do the job. One thing the economic crisis can teach us is to be very prudent with our spending, and I believe the Western European companies are taking that path to control their costs and maximize their investments.

The second trend was more interesting to me. The requirement of “quick to deploy and easy to manage” is definitely pushing the market towards more off-the-shelf and open components. From the standpoint of HP’s Converged Infrastructure, the x86 strategy for their storage solutions makes good sense, because I believe there will be less need for proprietary hardware from traditional storage vendors like EMC, NetApp and others (HP included). Likewise, storage solutions such as VSAs (Virtual Storage Appliances) and storage appliance software that runs on x86 platforms, such as Nexenta and Gluster, could spell the next wave in the storage networking industry. To keep things easy, the specialized appliances which I have spoken much of lately hit the requirement of “quick to deploy and easy to manage” right on the dot.

The overall fundamentals of the external disk storage systems market remain strong. Below are the present standings in the EMEA market, as reported by IDC.