FalconStor – soaring to 7th heaven

I was invited to the FalconStor version 7.0 media launch this morning at Sunway Resort Hotel.

I must admit that I am a fan of FalconStor from a business perspective because they have nifty solutions. Many big boys OEMed FalconStor's VTL solutions, such as EMC with its CDL (CLARiiON Disk Library) and Sun Microsystems with its virtual tape library solutions. Things have been changing. There are still OEM partnerships with HDS (FalconStor VTL and FDS solutions), HP (FalconStor NSS solution) and a few others, but FalconStor has been taking a more aggressive stance with its new business model. They are definitely more direct in their approach and hence, it is high time we in the industry recognize FalconStor's prowess.

What was launched today is the FalconStor version 7.0 suite of data recovery and storage enhancement solutions. Note that while the theme of their solutions is data protection, I use the term data recovery, simply because the true objective of their solutions is data recovery, doing what matters most to the business – RECOVERY.

The FalconStor version 7.0 family of products is divided into 3 pillars:

  • Storage Virtualization – with FalconStor Network Storage Server (NSS)
  • Backup & Recovery – with FalconStor Continuous Data Protector (CDP)
  • Deduplication – with FalconStor Virtual Tape Library (VTL) and File-Interface Deduplication System (FDS)

NSS virtualizes heterogeneous storage platforms and sits in the data path between the application servers (or virtualized servers) and the storage it virtualizes. It simplifies disparate storage platforms by consolidating volumes and provides features such as thin provisioning and snapshots. In the new version, NSS supports up to 1,000 snapshots per volume, up from the previous limit of 255 snapshots – roughly a 4x increase, as the demand for data protection is greater than ever. This allows the protection granularity to be in the minutes, meeting the RPO (Recovery Point Objective) requirements of the most demanding customers.
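
As a back-of-the-envelope illustration of what the higher snapshot count means for recovery granularity (the 15-minute interval below is my own assumption, not a FalconStor default):

# Back-of-the-envelope snapshot retention, purely illustrative.
def retention_days(max_snapshots, interval_min=15):
    """How far back a volume can be rolled at a given snapshot interval."""
    return max_snapshots * interval_min / 60.0 / 24.0

print("255 snapshots  -> %.1f days of 15-minute recovery points" % retention_days(255))
print("1000 snapshots -> %.1f days of 15-minute recovery points" % retention_days(1000))
# 255 -> ~2.7 days; 1,000 -> ~10.4 days, or the same window at much finer granularity.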

The NSS also replicates the snapshots to a secondary NSS platform at a DR site, extending the company's data resiliency and improving the business continuance factor for the organization.

With a revamped algorithm in version 7.0, the MicroScan technology used in replication is now more potent and delivers higher performance. For the uninformed, MicroScan, as described in the datasheet, is:

MicroScan™, a patented FalconStor technology, minimizes the
amount of data transmitted by eliminating redundancies at the
application and file system layers. Rather than arbitrarily
transmitting entire blocks or pages (as is typical of other
replication solutions), MicroScan technology maps, identifies, and
transmits only unique disk drive sectors (512 bytes), reducing
network traffic by as much as 95%, in turn reducing remote
bandwidth requirements.
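
To make the idea concrete, here is a minimal sketch of sector-level delta detection. This is my own illustration of the general concept, not FalconStor's MicroScan code, and the 64KB comparison block size is just an assumption.

SECTOR = 512               # disk sector size in bytes
BLOCK = 64 * 1024          # assumed replication block size for comparison

def changed_sectors(old, new):
    """Return indices of 512-byte sectors that differ between two images."""
    return [i for i in range(0, max(len(old), len(new)), SECTOR)
            if old[i:i+SECTOR] != new[i:i+SECTOR]]

old = bytes(BLOCK)                      # a 64 KB block of zeros
new = bytearray(old)
new[1000:1010] = b"0123456789"          # a tiny change inside one sector

dirty = changed_sectors(old, bytes(new))
print("ship %d sector(s) = %d bytes, instead of the whole %d-byte block"
      % (len(dirty), len(dirty) * SECTOR, BLOCK))
# 1 sector (512 bytes) versus 65,536 bytes -- the same principle MicroScan uses
# to cut replication traffic.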

Another very strong feature of NSS is RecoverTrac, an automated DR technology. In business, business continuity (BC) and disaster recovery (DR) usually go hand-in-hand. Unfortunately, triggering either BC or DR, or both, is an expensive and resource-consuming exercise. But organizations have to be prepared and therefore, a proper DR process must be tested and tested again.

I am a certified Business Continuity Planner, so I am fully aware of the value RecoverTrac brings to an organization. The ability to run non-intrusive, simulated DR tests and find out the weak points of recovery is crucial, and RecoverTrac brings that confidence in DR testing to the table. Furthermore, well-tested, automated DR processes also eliminate human errors during recovery. RecoverTrac also has the ability to track the logical relationships between different applications and computing resources, making this technology an invaluable tool in the DR coordinator's arsenal.

The diagram below shows the NSS solution:

 

And NSS is touted as the one true any-storage-platform-to-any-storage-platform, any-protocol replication solution. Most vendors support either FC, iSCSI or NAS protocols, but I believe that, so far, only FalconStor offers all protocols in one solution.

Item #2 on the upgrade list is FalconStor's CDP solution. Continuous Data Protection (CDP) is a very interesting area of data protection. CDP provides a near-zero RTO/RPO solution on disk, and yet not many people are aware of the power of CDP.

About 5-6 years ago, CDP was hot and there were many start-ups in this area. Companies such as Kashya (bought by EMC to become RecoverPoint), Mendocino, Revivio (gobbled up by Symantec) and StoneFly have either gone belly up or been gobbled up by the bigger boys in the industry. Only a few remain, and FalconStor CDP is one of the true survivors in this area.

CDP should be given more credit because there is always demand for very granular data protection. In fact, I sincerely believe that CDP, snapshots and snapshot replication are the real flagships of data protection today and in the future, because data protection using the traditional backup method, in a periodic and less frequent manner, is no longer adequate. And the fact that backup generates more and more data to keep is truly not helping.

FalconStor CDP has the HyperTrac™ Backup Accelerator (HyperTrac), which works in conjunction with FalconStor Continuous Data Protector (CDP) and FalconStor Network Storage Server (NSS) to increase tape backup speed, eliminate backup windows and offload processing from application servers. A quick glimpse of the HyperTrac technology is shown below:

 

In the Deduplication pillar, there were upgrades to both FalconStor VTL and FalconStor FDS. As I said earlier, CDP, snapshots and snapshot replication are already becoming the data protection methods of this new generation of storage solutions. Coupled with deduplication, data protection becomes even more significant, because it is far smarter to keep just one copy of the same old files than to store them over and over again.

FalconStor File-Interface Deduplication System (FDS) addresses the requirement to store data more effectively, efficiently and economically. Its Single Instance Repository (SIR) technology has now been enhanced into a global deduplication repository, giving it the ability to truly store a single copy of an object. Previously, FDS was not able to recognize duplicated objects on a different controller. FDS has also improved its algorithms, driving performance up to 30TB/hour and delivering a higher deduplication ratio.
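
As a toy illustration of what a global single-instance repository means (my own sketch of the concept, not FalconStor's SIR implementation), the fingerprint index below is shared by every node, so a chunk already stored by one controller is never stored again by another:

import hashlib

CHUNK = 8 * 1024             # assumed fixed chunk size, purely for illustration
global_index = {}            # fingerprint -> chunk, shared across ALL controllers

def store(data, node):
    """Keep only the chunks that no node has ever stored before."""
    written = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i+CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in global_index:        # a duplicate seen by ANY node is skipped
            global_index[fp] = chunk
            written += len(chunk)
    return written

backup = b"the same old file " * 4000     # ~72 KB of redundant data
print("controller A stored", store(backup, "A"), "bytes")
print("controller B stored", store(backup, "B"), "bytes")   # 0 -- already in the global index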

 

In addition to the NAS interface, the FDS solution now has tighter integration with the Symantec OpenStorage Technology (OST) protocol.

The FalconStor VTL is widely OEMed by many partners and remains one of the most popular VTL solutions in the market. The VTL has also been enhanced significantly in this upgrade and, not surprisingly, the FalconStor VTL solution is strengthened by its near-seamless integration with the other solutions in the stable. The VTL solution now supports up to 1 petabyte of usable capacity.

 

FalconStor has always been very focused on the backup and data recovery space and has fared favourably with Gartner. In January 2011, Gartner released its Magic Quadrant report for Enterprise Disk-Based Backup and Recovery, and FalconStor was positioned as one of the Visionaries in this space. Below is the Magic Quadrant:

 

As their business model changes to a more direct approach, it won't be long before you see FalconStor move into the Leaders quadrant. They will be soaring, like a falcon.

Performance benchmarks – the games that we play

First of all, congratulations to NetApp for beating EMC Isilon in the latest SPECsfs2008 benchmark for NFS IOPS. The news is everywhere, and here's one report.

EMC Isilon was blowing its horn several months ago when it hit 1,112,705 IOPS from a 140-node S200 cluster with 3,360 disk drives and an overall response time of 2.54 msecs. Last week, NetApp became top dog, pounding its chest with 1,512,784 IOPS on a 24-node FAS6240 cluster with an overall response time of 1.53 msecs. There were 1,728 450GB, 15,000 RPM disk drives, and the FAS6240s were fitted with Flash Cache.

And with each benchmark that you and I have seen before and will see again, every storage vendor tries to best the others, and when they do, their horns blare, the fireworks come out and they pound their chests like Tarzan, saying "Who's your daddy?" The euphoria usually doesn't last long, as performance records are broken all the time.

However, performance benchmark results are not to be taken verbatim because they are not true representations of real-life production environments. Two years ago, the now-defunct Byte and Switch (which is now part of Network Computing) covered a nine-year study on file system and storage benchmarking. In a very interesting manner, it revealed that a lot of the time, benchmark results are reduced to single graphs which carry little information about the details of how the benchmark was conducted, how long the benchmark took and so on.

The paper, entitled "A Nine Year Study of File System and Storage Benchmarking" and published by Avishay Traeger and Erez Zadok from Stony Brook University and Nikolai Joukov and Charles P. Wright from the IBM T.J. Watson Research Center, studied 415 file system and storage benchmarks from 106 published results, and the article quoted:

Based on this examination the paper makes some very interesting observations and 
conclusions that are, in many ways, very critical of the way “research” papers have 
been written about storage and file systems.

 

The paper therefore scrutinized the way the benchmarks were done and the way the benchmark results were reported, and judging by the strong title of the online article that reviewed the study ("Lies, Damn Lies and File Systems Benchmarks"), benchmarks are not quite the pictures that say a thousand words.

Be it TPC-C, SPC-1 or SPECsfs benchmarks, I have gone through some interesting experiences myself, and there are certain tricks of the trade, just like in a magic show. Some of the very common ones I come across are:

  • Short stroking – formatting or partitioning a drive so that only the outer tracks of the disk platter are used to store data. This practice is done in I/O-intensive environments to increase performance.
  • Shortened tests – performance tests that run for only several minutes to achieve the numbers, rather than for prolonged periods (which mimic real life)
  • Reporting aggregated numbers – note the number of nodes or controllers used to achieve the numbers. It is not ONE controller that achieves the result, but an aggregated performance figure multiplied up by the number of controllers (see the quick arithmetic after this list)
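
A quick division using the SPECsfs2008 figures quoted earlier shows how different the per-node picture looks from the headline aggregate:

# Headline aggregate vs per-node IOPS, using the SPECsfs2008 figures above.
results = {
    "NetApp, 24 x FAS6240":   (1512784, 24),
    "EMC Isilon, 140 x S200": (1112705, 140),
}
for name, (iops, nodes) in results.items():
    print("%-24s %9d IOPS aggregate, ~%5d IOPS per node" % (name, iops, iops // nodes))
# NetApp: ~63,032 IOPS per node; Isilon: ~7,947 IOPS per node. Neither figure tells
# you what a single controller will deliver in YOUR production environment.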

Hence, getting to the published benchmark numbers in real life is usually impractical and very expensive. Unfortunately, customers are less educated about the way benchmarks are performed and published. We, as storage professionals, have to disseminate this information.

OK, this sounds contradictory, because if I were working for NetApp, why would I tell a truth that could actually hurt NetApp sales? But I don't work for NetApp now, and I think it is important for me to do my duty and share more information. Either way, many people switch jobs every now and then, so if you want to keep your reputation, be honest up front. It could save you a lot of work.

A cloud economy emerges … somewhat

A few hours ago, Rackspace announced the first "productized" Rackspace Private Cloud solution based on OpenStack. According to OpenStack.org:

OpenStack is a global collaboration of developers and cloud computing 
technologists producing the ubiquitous open source cloud computing platform for 
public and private clouds. The project aims to deliver solutions for all types of 
clouds by being simple to implement, massively scalable, and feature rich. 
The technology consists of a series of interrelated projects delivering various 
components for a cloud infrastructure solution.

Founded by Rackspace Hosting and NASA, OpenStack has grown to be a global software 
community of developers collaborating on a standard and massively scalable open 
source cloud operating system. Our mission is to enable any organization to create 
and offer cloud computing services running on standard hardware. 
Corporations, service providers, VARS, SMBs, researchers, and global data centers 
looking to deploy large-scale cloud deployments for private or public clouds 
leveraging the support and resulting technology of a global open source community.
All of the code for OpenStack is freely available under the Apache 2.0 license. 
Anyone can run it, build on it, or submit changes back to the project. We strongly 
believe that an open development model is the only way to foster badly-needed cloud 
standards, remove the fear of proprietary lock-in for cloud customers, and create a 
large ecosystem that spans cloud providers.

And OpenStack has just turned 1 year old.

So, what’s this Rackspace private cloud about?

In the existing cloud economy, customers subscribe to a cloud service provider. The customer pays a (usually monthly) subscription fee in a pay-as-you-use model. And in my previous blog, I courageously predicted that the new cloud economy will drive the middle tier (i.e. IT distributors, resellers and system integrators) out of the IT ecosystem. Before I lose the plot, Rackspace is now providing the ability for customers to install an OpenStack-ready, Rackspace-approved private cloud architecture in their own datacenter, not in Rackspace Hosting.

This represents a tectonic shift in the cloud economy, putting control and power back into the customers' hands. For too long, there were questions about data integrity, security, control, cloud service provider lock-in and so on, but with the new Rackspace offering, customers can build their own private cloud ecosystem or get professional services from Rackspace cloud systems integrators. Furthermore, once they have built their private cloud, they can either manage it themselves or get Rackspace to manage it for them.

How does Rackspace do it?

From their vast experience in building OpenStack clouds, Rackspace Cloud Builders have created a free reference architecture. Currently OpenStack focuses on two key components: OpenStack Compute, which offers computing power through virtual machine and network management, and OpenStack Object Storage, which is software for redundant, scalable object storage capacity.

In the OpenStack architecture, there are 3 major components – Compute, Storage and Images.
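
For storage folks, the Object Storage component (Swift) is the most interesting of the three. Below is a rough, hedged sketch of how an application talks to it over its documented REST interface; the endpoint, token and file names are placeholders, not a working Rackspace configuration.

import requests   # third-party HTTP library, used only to illustrate the REST calls

# Placeholder values -- in a real deployment these come from the authentication service.
STORAGE_URL = "https://swift.example.com/v1/AUTH_myaccount"
HEADERS = {"X-Auth-Token": "replace-with-a-real-auth-token"}

# Create a container, then upload an object into it (basic Swift PUT semantics).
requests.put(STORAGE_URL + "/backups", headers=HEADERS)
with open("db_dump.tar.gz", "rb") as f:
    requests.put(STORAGE_URL + "/backups/db_dump.tar.gz", headers=HEADERS, data=f)

# List what the container now holds.
print(requests.get(STORAGE_URL + "/backups", headers=HEADERS).text)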

More information about the OpenStack architecture can be found here. And with 130 partners in the OpenStack alliance (which includes Dell, HP, Cisco, Citrix and EMC), customers have plenty to choose from, lessening the impact of lock-in.

What does this represent to storage professionals like us?

This Rackspace offering is game-changing and could perhaps spark an economy for partners to work with cloud service providers. It definitely addresses some key customer concerns related to security and the freedom to choose, and even change, service providers. It seems to offer the best of both worlds (for now), but Rackspace is not looking at this for immediate gains. We still do not know how this economic pie will grow and how it will affect the cloud economy. And this does not negate the fact that we storage professionals have to dig deeper and learn more, nor does it change the fact that we have to evolve to compete against the best in the world.

Rackspace has come out beating its chest, predicting that the cloud computing API space will boil down to these 3 players – Rackspace OpenStack, VMware and Amazon Web Services (AWS). Interestingly, Red Hat Aeolus (previously known as Deltacloud) was not deemed worthy of a mention by Rackspace. Some pooh-poohing going on?

Data Deduplication – Dell is first and last

A very interesting report surfaced in front of me today. It is InformationWeek's IT Pro ranking of data deduplication vendors, made available just a few weeks ago, and it gives an overview of the dedupe market so far.

It surveyed over 400 IT professionals from various industries, with companies ranging from fewer than 50 employees to over 10,000 employees and revenues of less than USD5 million to USD1 billion. Overall, it had a good mix of respondents. But the results were quite interesting.

It surveyed 2 segments:

  1. Overall performance – product reliability, product performance, acquisition costs, operations costs etc.
  2. Technical features – replication, VTL, encryption, iSCSI and FCoE support etc.

When I saw the results (shown below), surprise, surprise! Here’s the overall performance survey chart:

Dell/Compellent scored the highest in this survey while EMC/Data Domain ranked the lowest. However, the difference between the first-place and last-place vendor is only 4%, which suggests that EMC/Data Domain was just about as good as the Dell/Compellent solution, but it scored poorly in the areas that matter most to the customer. In fact, as we drill down into the requirements of the overall performance one by one, as shown below,

there is little difference among the 7 vendors.

However, when it comes to Technical Features, Dell/Compellent is ranked last – the complete opposite. As you can see from the survey chart below, IBM ProtecTIER, NetApp and HP are all ranked #1.

The details, as per the technical requirements of the customers, are shown below:

These figures show that the competition between the vendors is very, very stiff, with little to differentiate one from another. But what I was more interested in were the following findings, because these figures tell a story.

In the survey, only 34% of the respondents said they have implemented some data deduplication solution, while the rest are evaluating or planning to evaluate. This means that the overall market is not saturated and there is still a window of opportunity for the vendors. However, the speed at which the data deduplication market has matured, from the early adopters perhaps 4-5 years ago to overall market adoption today, surprised many, because the storage industry tends to be a bit less trendy than most areas of IT. At the rate data deduplication is going, it will very much be a standard feature of all storage vendors in the very near future.

The second figure, which is probably not so surprising, is that of the customers who have already implemented a data deduplication solution, almost 99% are satisfied or somewhat satisfied with it. Therefore, the likelihood of these customers switching vendors and replacing their gear is very low, perhaps partly because of the reliability of the solutions as well as the products performing as they should.

The InformationWeek IT Pro survey probably reflects well where the deduplication market is going, and there isn't much difference in terms of technical and technology features from vendor to vendor. Customers will have to choose beyond the usual technology pitch, and look for other (and perhaps more important) subtleties such as customer service, price and flexibility in doing business. EMC/Data Domain, being king of the hill, has not been the best of vendors when it comes to price, quality of post-sales support and service innovation. Let's hope they are not like the EMC sales folks of the past, carrying the "take it or leave it" tag as they develop relationships with their future customers. And it will not help if word-of-mouth goes around the industry about EMC's arrogance in its dominance. It may not be true, and let's hope it is not true, because the EMC of today has changed plenty compared to the Symmetrix days. EMC/Data Domain is now part of their Backup Recovery Service (BRS) team, and I have good friends there at EMC Malaysia and Singapore. They are good guys, but remember guys, the customer is still king!

Dell, fresh from their acquisitions of Compellent and Ocarina Networks, seems very eager to win the business, and kudos to them as well. In fact, I heard from a little birdie that Dell is "giving away" several units of Compellent to selected customers in Malaysia. I did not and cannot ascertain if this is true, but if it is, that's what I call thinking out of the box, given that Dell is a latecomer to the storage game. Well done!

One thing to note is that the survey took in 17 vendors, including ExaGrid, FalconStor, Quantum, Sepaton and so on, but only the top 7 shown in the charts qualified.

In the end, I believe the deduplication vendors had better scramble to grab as much as they can in the coming months, because this market will be going, going, gone pretty soon, with nothing much left to grab after that, unless there is a disruptive innovation in deduplication technology.

Novell Filr product insight being arranged

Hello reader,

I can see that there is a lot of interest in the Novell Filr, and let me assure you that I am already speaking with Novell to introduce this solution as soon as it becomes available next year.

I am hoping to get a front-row seat and, even better, be the first in Malaysia to test this product extensively. I can't make any promises at this point, but the Novell Country Manager for Malaysia and South Asia will be in Australia this month to help get my enthusiasm across to their corporate people. (Fingers crossed.)

I thank you for your support.

Thank you
/storagegaga 🙂

The Greening of Storage

Gartner, at its recent Symposium/ITxpo, laid out the top 10 IT trends for 2012. Here's their top 10 (in no particular order):

  1. Virtualization
  2. Big Data, patterns and analytics
  3. Energy efficiency and monitoring
  4. Context-aware applications
  5. Staff retention and retraining
  6. Social Networks
  7. Consumerization
  8. Cloud Computing
  9. Compute per square foot
  10. Fabric stacks

For those who read a lot of IT news, we are mostly aware of all 10 of them. But one of them strikes me in a different sort of way – energy efficiency and monitoring. There's been a lot of talk about it, and I believe every vendor is doing something about Green IT/Computing, but to what magnitude are they really doing it? A lot of them may be doing this thing called "greenwashing", which is basically taking advantage of the circumstances and promoting themselves as green without putting much effort into it. How many times have we as consumers heard that this is green or that is green, without knowing how these companies actually derive and label themselves as green? We can pooh-pooh some of these claims because there is little basis to them.

One of the good things about IT is that it is measurable. You know how green a piece of computer equipment is by measuring its power and cooling intake, how much of that power is consumed, how much energy is derived from that power and how much work it does, usually over a period of time. It's measurable, and that's good.
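
As a trivial example of that measurability (all the input numbers below are made up, but the ratios are the kind of metrics the SNIA Emerald program, discussed later, reports):

# Illustrative storage energy-efficiency ratios; the inputs are invented numbers.
raw_capacity_gb = 48 * 2000        # e.g. 48 x 2 TB drives
idle_power_w    = 550.0            # measured power draw at ready idle
active_iops     = 25000            # sustained IOPS during an active test
active_power_w  = 700.0            # measured power draw during that test

print("Idle efficiency:   %.1f GB/W"   % (raw_capacity_gb / idle_power_w))
print("Active efficiency: %.1f IOPS/W" % (active_iops / active_power_w))
# Comparable ratios across arrays are what turn "green" claims into testable numbers.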

Unfortunately for storage, we as data creators and data consumers tend to be overly paranoid about our data. We make redundant copies, and we have every right to do so because we fear the unexpected. Storage technology is not perfect. As shown in an SNIA study from some years ago,

from a single copy of "App Data" on the left of the chart, we mirror the data, doubling the amount of data and increasing the capacity. Then we overprovision, to prepare for a rainy day. Then we back up, once, twice … thrice! In case of disaster, we replicate, and for regulatory compliance, we archive and keep and keep and keep, so that the lawyers can make plenty of money from any foul-ups with the rules and regulations.

That single copy of "App Data" grows to around 10x its size by the end of the chart.
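
The arithmetic behind that growth is easy to sketch; the multipliers below are my own rough reading of the chart, not SNIA's exact figures:

# How one copy of "App Data" balloons; the factors are my approximation of the chart.
app_data_tb = 1.0
extra_copies = {
    "mirror":           1.0,   # mirrored copy
    "overprovisioning": 1.0,   # spare capacity for a rainy day
    "backups":          3.0,   # once, twice ... thrice
    "DR replica":       1.0,   # replicated to the DR site
    "archive":          3.0,   # compliance copies, kept and kept and kept
}
total = app_data_tb * (1 + sum(extra_copies.values()))
print("1 TB of application data ends up occupying about %.0f TB" % total)   # ~10 TB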

The growth of data is synonymous with the growth of power consumption, as shown in an IDC study below.

 

The more data you create, copy, share, keep and keep some more, the more power is drawn to make that data continuously available to you, you and you!

And there are storage technologies available today, from different storage vendors and in different capacities, that alleviate the data capacity pain. These technologies reduce the capacity required to store data by eliminating redundancies or by packing more bits per block of data with compression, as well as other techniques. SNIA summarized this beautifully in the chart below.

But with all these technologies, vendors tend to oversell their green features, and customers do not always have a way to make an informed choice. We do not have a proper tool to define how green a piece of storage equipment is, or at least a tool that is vendor-neutral and provides an unbiased view of storage.

For several years, the SNIA Green Storage Initiative's Technical Working Group (TWG) has been developing a set of test metrics to measure and publish energy consumption and efficiency in storage equipment. Through its SNIA Emerald program, it released a set of guidelines and a user guide in October 2011, with the intention of giving a fair, apples-to-apples comparison when it comes to green.

From the user guide, the basic testing criteria are pretty straightforward. I pinched the following from the user guide to share with my readers.

The testing criteria for all storage solutions are basically the same, as follows:

1. The System Under Test (SUT) is run through a SUT Conditioning Test to get it into a
known and stable state.
2. The SUT Conditioning Test is followed by a defined series of Active Test phases that
collect data for the active metrics, each with a method to assure stability of each metric
value.
3. The Active Test is followed by the so-called Ready Idle Test that collects data for the
capacity metric.
4. Lastly, the Capacity Optimization Test phases are executed which demonstrate the
storage system’s ability to perform defined capacity optimization methods.

For each of the categories of storage, there will be different workloads and run times
depending on the category characteristics.

After the testing, the following test data metrics are collected and published:

And there are already published results, with IBM and HP taking the big brother lead: from IBM, the IBM DS3400, and from HP, the HP P6500.

Hoping that I have read the SNIA Emerald Terms of Use correctly (lawyers?), I want to state that what I am sharing is not for commercial gain. So here's the link: SNIA Emerald published results for IBM and HP.

The greening of storage is very new and likely to evolve over time, but what's important is that it is the first step towards a more responsible planet. And this could be the next growth engine for storage professionals like us.

Atempo – 3 gals, 1 guy and 1 LB handbag

I have known of Atempo for years and even contacted them once when I was at NetApp several years ago. I didn't know much about them until a friend recently took up the master resellership of Atempo here in Malaysia. And when people ask me "Atempo who?", I reply "3 gals, 1 guy and 1 LB handbag".

Atempo is a company that specializes in data protection and archiving solutions and has been around for almost 20 years. They compete with Symantec NetBackup, CommVault Simpana and BakBone NetVault, and I have seen their solutions. They are pretty decent, and attractively priced as well. Perhaps they don't market themselves as strongly as some of the bigger data protection companies, but I would recommend them to anyone, any day. If you need more information, contact me.

But the usual puzzled faces will soon go away once people start recognizing Atempo's solutions, because that is where my usual Atempo introduction comes from – their solutions.

Atempo has 5 key products:

  • Time Navigator (TINA)
  • Live Navigator (LINA)
  • Atempo Digital Archive (ADA)
  • Atempo Digital Archive for Messaging (ADAM)
  • Live Backup (LB)

Wow, with a cool one like ADAM, 3 hotties in TINA, LINA and ADA, plus LV, err, I mean LB, what more can you ask for? So, before you get any kinky ideas (a foursome?), Atempo is attempting (pun intended ;-)) to take up more of your mindshare when it comes to data backup and data archiving.

I am planning to find out more about Atempo in the coming months. Things have been hectic for me, but my good buddy, now the master reseller of Atempo in Malaysia, will make sure that I focus on Atempo more.

Later – guy, gals and a nice handbag. 😀

iSCSI old CHAP

Folks working on iSCSI, especially the typical implementation engineers, like to have things easy. "Let's get this thing working so that I can go home", and the job is usually done without the ever-important CHAP (Challenge Handshake Authentication Protocol) enabled and configured.

We are quite lax when it comes to storage security and have always assumed that storage security is inherent in most setups, especially Fibre Channel. Well, let me tell you something, buddy. IT'S NOT! Even Fibre Channel has inherent vulnerabilities; it's just that not many technical folks know about the 5 layers of Fibre Channel, and that doesn't mean Fibre Channel is secure.

As the world turns to more iSCSI implementations, the fastest and easiest way to get an iSCSI connection going is to do it without CHAP on the LAN, and CHAP authentication is not enabled by default. And this is happening in the IP world, not Fibre Channel – a world where there are far more sniffers and hackers lurking. But even with CHAP applied, there are ways that CHAP can be broken and iSCSI security compromised easily. Below is a typical Windows iSCSI connection screenshot.

First of all, the CHAP exchange goes back and forth across the network in clear text, and the packets are easily captured. The hacker can then take his own sweet time brute-forcing the capture to obtain the CHAP username, challenge and encrypted password.

iSCSI communication happens over the popular TCP port 3260. This gives the hacker a good idea of what he or she is able to do. They could sniff the packets going through the wire from their own computer, but a hacker probably won't do that. They would use another computer, one that has been compromised and is trusted in the network.

From this compromised computer, the hackers would initiate a man-in-the-middle (MITM) attack. They can easily redirect the iSCSI packets to the compromised computer to further their agenda. I found a nice diagram from SearchStorage about the iSCSI MITM attack and have shared it below.

A highly popular utility used in MITM attacks is one called Cain and Abel. Using a technique called ARP cache poisoning, or ARP Poison Routing (APR), the compromised computer is able to intercept the iSCSI communication between the iSCSI initiator and the iSCSI target. The intercepted iSCSI packets can then be analyzed with Wireshark, the free and open source packet analyzer.

As Wireshark captures and analyzes the iSCSI packets, all the iSCSI communication happening between the initiator and the target can be read in clear text. The IQN and the username are in clear text as well. As Wireshark follows the TCP stream, the hacker will be looking out for a variable called "CHAP_N=iscsisecurity", followed by "CHAP_R", which equates to the encrypted password in the CHAP authentication. It will be in hexadecimal and begins with "0x…".

Voilà, your encrypted iSCSI password, which can now be brute-forced offline. It's that easy, folks!
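
To show why the offline attack works, here is a minimal sketch of how the CHAP response is computed (RFC 1994: an MD5 hash over the identifier, the shared secret and the challenge) and how a sniffed challenge/response pair can be run against a wordlist. The captured values below are made up for illustration.

import hashlib

def chap_response(chap_id, secret, challenge):
    """RFC 1994 CHAP response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([chap_id]) + secret + challenge).hexdigest()

# What a MITM attacker would have sniffed off the wire (invented for this sketch).
captured_id        = 1
captured_challenge = bytes.fromhex("a3f1c2d4e5b60718293a4b5c6d7e8f90")
captured_response  = chap_response(captured_id, b"Passw0rd!", captured_challenge)

# Offline dictionary attack -- no further packets to the target are needed.
for candidate in [b"secret", b"iscsi123", b"Passw0rd!", b"letmein"]:
    if chap_response(captured_id, candidate, captured_challenge) == captured_response:
        print("CHAP secret recovered:", candidate.decode())
        break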

Either way, having CHAP enabled is still better than no authentication at all (which is what most of us are likely to do during iSCSI setup). There are other ways to make iSCSI communication more secure, and IPsec is one of the considerations. But usually, we techies have to balance security against performance, and we end up choosing performance, relaxing the security bit.

But the exposure of iSCSI in the IP world is something we should think more about. Instead of taking the easy way out, at least enable CHAP, old chap. OK?

3TB Seagate – a performance sloth

I can't get home. I am stuck here at a coffee shop, waiting out the traffic jam after the heavy downpour an hour ago.

It has been an interesting week for me, which began last week when we were testing the new Seagate 3TB Constellation ES.2 hard disk drives. It doesn't matter whether they were SAS or SATA because the disks are 7,200 RPM and basically built the same; SAS or SATA is merely the conduit to the disks, and not the issue we were maneuvering around.

Here's an account of the testing done by my team. My team tested the drives meticulously, using every trick in the book to milk performance from the Seagate drives. In the end, it wasn't performance we got; the drives turned out to be more like duds from Seagate where this type of drive is concerned.

How did the tests go?

We were using a Unix operating system to test sequential writes on different partitions of the disks, each with a sizable number of GB per partition. In one test, we used 100GB per partition. With each partition, we were testing from the outer cylinders to the inner cylinders, and as the storage gurus will tell you, the outer tracks transfer data at a faster rate than the inner tracks.

We thought it could be the file system we were using, so we switched the sequential writes to raw disks. We tweaked the OS parameters and tried various combinations of block sizes and so on. And what we discovered was a big surprise.
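
For readers who want to try something similar, below is a simplified sketch of the kind of sequential-write measurement we were running. The device path, block size and offsets are placeholders, and the test is destructive, so only point it at a disk whose data you can afford to lose.

import os, time

DEVICE     = "/dev/rdsk/cXtYdZs0"    # raw device under test (placeholder path)
BLOCK_SIZE = 1024 * 1024             # 1 MB writes; we tried several block sizes
TEST_BYTES = 10 * 1024**3            # amount written per measured region

def sequential_write_mb_s(path, offset):
    """Write TEST_BYTES sequentially starting at offset; return throughput in MB/sec."""
    buf = os.urandom(BLOCK_SIZE)
    fd = os.open(path, os.O_WRONLY | os.O_SYNC)    # synchronous writes to dodge caching
    os.lseek(fd, offset, os.SEEK_SET)
    start, written = time.time(), 0
    while written < TEST_BYTES:
        written += os.write(fd, buf)
    os.close(fd)
    return written / (time.time() - start) / 1024**2

# Compare an outer region (start of disk) with an inner region (towards the end of a 3 TB disk).
print("outer region: %.1f MB/sec" % sequential_write_mb_s(DEVICE, 0))
print("inner region: %.1f MB/sec" % sequential_write_mb_s(DEVICE, 2800 * 1024**3))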

The throughput we got from the sequential writes was horrible. It started out almost 25% lower in MB/sec than a 2TB Western Digital RE4 disk, and as the test moved towards the inner tracks, the throughput went down to single-digit MB/sec. According to reliable sources, the slowest published figures from Seagate were in the high 60s of MB/sec, but what we got was closer to 20+ MB/sec. The Western Digital RE4 gave consistent throughput numbers throughout the test. We couldn't believe it!

We scoured the forums looking for similar issues, but did not find much about this. This could be a firmware bug. We are in the midst of opening an escalation channel to Seagate to seek an explanation. Meanwhile, I would like to share what we have discovered, and the issue can be easily reproduced. For customers who have purchased storage arrays with 2TB or 3TB Seagate Constellation ES.2 drives, please take note. We were disappointed with the disks, but thanks to my team for the diligent approach that resulted in this discovery.

Brocade is ripe again

Like seasonal fruit, Brocade is ripe to be plucked from the Fibre Channel tree (again). A few years ago, it put itself up for sale. There were suitors, but no one offered to take Brocade up. Over the last few days, the rumour mill has been at it again, and while Brocade did not comment, the talk is happening once more.

Why is Brocade up for sale? One can only guess. Their stock has been pounded over the past months and, as of last Friday, stood at USD4.51. The news mentioned that Brocade's market capitalization is around USD2.7-2.8 billion, low enough to be acquired.

Brocade has been a fantastic Fibre Channel company in the past, and still pretty much is. They survived the first Fibre Channel shake-up, while companies like Vixel, Gadzoox and Ancor are no longer on the Fibre Channel industry map. They thrived throughout, until the Cisco MDS started to make dents in Brocade's armour.

Today, a big portion of their business still relies on Fibre Channel to drive revenues and profits. Back in 2008, they acquired Foundry Networks, a Gigabit Ethernet company, and it was the right move as the world was converging towards 10 Gigabit Ethernet. However, it is only in the past 2-3 years that Brocade has come out with a more direct approach, rather than spending most of their time on their OEM business in this region. Perhaps this laggard approach and their inaction in the past have cost them their prime position, and now they are primed to be snapped up by probable suitors.

Who will the probable suitors be now? IBM, Oracle, Juniper and possibly even Cisco could be strong candidates. IBM makes a lot of sense because I believe IBM wants to own technology, and Brocade has a lot of technology and patents to offer. Oracle, hmm … they are not a hardware company. It is true that they bought Sun, but from my internal sources, Oracle is not keen on hardware innovation. They just want to sell more Oracle software licenses, keeping R&D and innovation on a short leash and keeping R&D costs on Sun's hardware low.

Juniper makes sense too, because they have a sizeable Ethernet business. I was a tad disappointed when I learned that Juniper had started selling entry-level Gigabit switches, because I have always placed them at lofty heights with their routers. But I guess, as far as business goes, Juniper did the only natural thing – if there is money to be made, why not? If Juniper takes up Brocade, they would have 2 formidable storage networking businesses, Fibre Channel and Data Center Ethernet (DCE). The question now is – does Juniper want the storage business?

If Cisco buys Brocade, that would mean alarm bells everywhere. It would trigger US regulators to look into the anti-competitive implications of the purchase. Unfortunately, Cisco has become a stagnant giant, and John Chambers, their CEO, is dying to revive the networking juggernaut. There were also rumours of Cisco breaking up to unlock the value of the many, many companies and technologies they have acquired in the past. I believe buying Brocade does not help Cisco, because as they have found with other acquisitions, there are too many technology similarities to extract Brocade's value.

We do not know how Brocade will fare in 2012, suitors or not, because they are indeed profitable. Unfortunately, the stock options scandal last year, plus the poor track record of their acquisitions such as NuView, Silverback and even Foundry Networks, are not helping to put Brocade in a different light.

If the rumours are true, putting itself up for sale only cheapens the Brocade image. Quid proxima, Brocade?