SMB Witness Protection Program

No, no, FBI is not in the storage business and there are no witnesses to protect.

However, SMB 3.0 has introduced an RPC-based mechanism to inform clients of any state change in the SMB servers. Microsoft calls it the Service Witness Protocol (SWP), and its objective is to provide a much faster notification service that allows SMB 3.0 clients to fail over quickly. In SMB 1.0 and even in SMB 2.x, the SMB clients relied on time-outs. The time-outs, whether SMB or TCP, could take as much as 30-45 seconds, and this creates a high latency that is disruptive to enterprise applications.

SMB 3.0, as mentioned in my previous post, had a total revamp, and is now enterprise ready. In what Microsoft calls “Continuously Available” File Service, SMB 3.0 supports clustered or scale-out file servers. The SMB shares must be shared as “Continuously Available” shares and mapped to SMB 3.0 clients, as shown in the diagram below (provided by SNIA’s webinar).

SMB 3.0 CA Shares

Client A maps to a share on Server 1 (\\srv1\CAshr). Client A has a share “handle” that establishes a connection with a corresponding state of the session. The state of the session is synchronously kept consistent with a corresponding state in Server 2.

The Service Witness Protocol is not responsible for the synchronization of the states in the SMB file server cluster. Microsoft has left the HA/cluster/scale-out capability to the proprietary technology of the NAS vendor. However, SWP regularly observes the status of all services under its watch.
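To make the idea concrete, here is a hedged sketch of the register-then-notify pattern that SWP uses. The class and method names are my own illustration (loosely echoing the MS-SWN operations), not real RPC stubs, and the cluster state is faked with a dictionary:

```python
# Illustrative sketch of the Witness register / async-notify pattern.
# Names are hypothetical, not the actual MS-SWN RPC interface.

class WitnessClient:
    def __init__(self, witness_server, share):
        self.witness_server = witness_server  # cluster node acting as witness
        self.share = share                    # e.g. r"\\srv1\CAshr"
        self.registration = None

    def register(self):
        # Analogous to WitnessrRegister: tell the witness which
        # continuously-available resource to watch on our behalf
        self.registration = {"share": self.share, "state": "AVAILABLE"}
        return self.registration

    def await_notification(self, event):
        # Analogous to WitnessrAsyncNotify: a long-poll style call that the
        # witness answers the moment the resource changes state, instead of
        # the client waiting out a 30-45 second SMB/TCP timeout
        if event == "RESOURCE_UNAVAILABLE":
            return self.failover()
        return "no-op"

    def failover(self):
        # The SMB 3.0 client reconnects to the surviving node; the session
        # state was kept consistent there by the NAS vendor's cluster logic
        return "reconnect to surviving node"
```

The point of the pattern is that the client is told of the failure, rather than discovering it by timing out.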

Has Object Storage become the everything store?

I picked up a copy of Brad Stone’s latest book, “The Everything Store: Jeff Bezos and the Age of Amazon”, at the airport on my way to Beijing last Saturday. I have been reading it the whole time I have been in Beijing, in awe of the turbulent ups and downs of Amazon.com.

The Everything Store cover

In its own serendipitous ways, Object-based Storage Devices (OSDs) have been floating in my universe in the past few weeks. Seems like OSDs have been getting a lot of coverage lately and suddenly, while in the shower, I just had an epiphany!

Are storage vendors now positioning Object-based Storage Devices (OSDs) as the Everything Store?


The Storage Compass

I am sure many people in IT get pissed with IT jargon and terminology. More so if it is a customer, especially one who is not well versed in the fundamental concepts behind the technology architecture.

Even after 20 years, most of it in storage, I have a hard time switching from one vendor’s jargon to another (sometimes). But it has gotten harder for me lately, since I teach ONTAP courses for NetApp and EMC Cloud Infrastructure, and do my work with the ZFS stuff. Soon, I will take on EMC VNX, Information Storage Management (ISM) and Big Data courses as well, and I also plan to do some Nexenta training too.

Who would know that an ONTAP NAS volume would be known as a file system in EMC VNX for File (aka Celerra), and a dataset in ZFS? Or that an ONTAP aggregate is almost like a ZFS pool but with some differences, or that a clone might be called a replica in HDS, and so on …

In fact, all the definitions above could be wrong because I am getting confused. ;-) You would be too if you had to switch from one vendor’s jargon to another. And the poor EMC pre-sales guy who has not been with any vendor except EMC all his career would have a hard time rewiring his brain if he joined another vendor like NetApp. Or IBM, or Dell, or Oracle, or anyone for that matter. No wonder the customers are pissed.

Not all SSDs are the same

Happy Lunar New Year! The Chinese around the world have just ushered in the Year of the Water Dragon yesterday. To all my friends and family, and readers of my blog, I wish you a prosperous and auspicious Chinese New Year!

Over the holidays, I have been keeping up with the progress of Solid State Drives (SSDs). I am sure many of us are mesmerized by SSDs, and the storage vendors are touting the best that SSDs have to offer. But let me tell you one thing – you are probably getting the least of what the best SSDs have to offer. You might be puzzled why I say things like this.

Let me share a common sales pitch with you. Most (if not all) storage vendors will tout performance (usually IOPS) as the greatest benefit of SSDs. The performance numbers have to be compared to something, and that something is your regular spinning Hard Disk Drives (HDDs). A single SSD can churn out at least 5,000 IOPS, while the fastest 15,000 RPM HDDs churn out about 200 IOPS (depending on the HDD vendor). Therefore, even the slowest SSDs can be some 25x faster than the fastest HDDs, when measured in IOPS.
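The arithmetic of that pitch is easy to sanity-check; the figures are the illustrative ones quoted above, not any vendor’s datasheet:

```python
# Sanity-checking the sales pitch numbers quoted above
ssd_iops = 5000   # a slow SSD, roughly
hdd_iops = 200    # a fast 15,000 RPM HDD (vendor dependent)

print(f"{ssd_iops / hdd_iops:.0f}x")  # → 25x
```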

But the intent of this blogger is to share more about SSDs with you. There’s more to know, because SSDs are not all built the same. There are write-biased SSDs and read-biased SSDs; there are SLC (single level cell) and MLC (multi level cell) SSDs, and so on. How do you differentiate them when Vendor A touts their SSDs and Vendor B touts theirs as well? You are not comparing SSDs and HDDs anymore. How do you know what questions to ask when they show you their performance statistics?

SNIA has recently released a methodology called the “Solid State Storage (SSS) Performance Test Specification (PTS)” that helps customers evaluate and compare SSD performance from a vendor-neutral perspective. There is also a whitepaper related to the SSS PTS. This is very important, because we have to continue to educate the community about what is right and what is wrong.

In a recent webcast, the presenters from the SNIA SSS TWG (Technical Working Group) mentioned a few questions that I think we, as vendors and customers, should ask when faced with an SSD sales pitch. I thought I’d share them with you.

  • Was the performance testing done at the SSD device level or at the file system level?
  • Was the SSD pre-conditioned before the testing? If so, how?
  • Were the performance results taken at a steady state?
  • How much data was written during the testing?
  • Where was the data written to?
  • What data pattern was tested?
  • What test platform was used to test the SSDs?
  • What hardware or software package(s) were used for the testing?
  • Were the HBA bandwidth, queue depth and other parameters sufficient to test the SSDs?
  • What type of NAND Flash was used?
  • What is the target workload?
  • What was the percentage mix of reads and writes?
  • Are there warranty or design-life issues?

I thought these questions were very relevant to understanding SSD performance. I also got to know that SSDs behave differently throughout the life stages of the device. From a performance point of view, there are 3 distinct performance life stages:

  • Fresh out of the box (FOB)
  • Transition
  • Steady State

 

As you can see from the graph below, an SSD fresh out of the box (FOB) displays considerable performance numbers. Over a period of time (the graph shows minutes), it transitions into a mezzanine stage of lower IOPS and finally normalizes into the state called the Steady State. The Steady State is the desirable test range that will give the most accurate IOPS numbers. Therefore, it is important that your storage vendor’s performance numbers are taken during this life stage.
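To show what “taken at a steady state” might mean in practice, here is a hedged sketch that scans a series of per-minute IOPS samples for a window where the numbers have settled. The 5-round window and 10% band are my own illustrative choices, loosely modeled on the PTS idea that the measured value must hold within a small band around its average over consecutive rounds; consult the actual SSS PTS for the real criteria:

```python
# Hedged sketch: find where a stream of IOPS samples settles into a
# steady state (window size and tolerance band are illustrative).

def steady_state_window(samples, window=5, band=0.10):
    for i in range(len(samples) - window + 1):
        win = samples[i:i + window]
        avg = sum(win) / window
        # steady if every sample in the window stays within the band
        if all(abs(s - avg) <= band * avg for s in win):
            return i  # index where the steady-state window begins
    return None

# FOB numbers are high, then the transition, then the settle
iops = [40000, 38000, 25000, 15000, 10200, 10050, 9900, 10100, 9950]
print(steady_state_window(iops))  # → 4, i.e. steady from the 5th sample on
```

Numbers quoted from the FOB region of such a curve will flatter the device; only the settled tail is comparable between vendors.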

Another consideration when reading SSD performance numbers is the type of test used. The test could be done at the file system level or at the device level. As shown in the diagram below, the test numbers could be taken from many different elements along the stack of the data path.

 

Performance for cached data would give impressive numbers, but it is not accurate. File system performance will not be useful either, because the data travels through different layers, masking the true performance capability of the SSDs. Therefore, SNIA’s PTS is based on a synthetic, device-level test to achieve consistency and more accurate IOPS numbers.

There are many other factors used to determine the most relevant performance numbers. The SNIA PTS has 4 main test suites that address different aspects of an SSD’s performance. They are:

  • Write Saturation test
  • Latency test
  • IOPS test
  • Throughput test

The SSS PTS would be able to reveal which is the better SSD. Here’s a sample report on latency.

Once again, it is important to know, and not to take vendors’ numbers verbatim. As the SSD market continues to grow, the responsibility lies on both sides of the fence – the vendor and the customer.

 

The recipe for storage performance modeling

Good morning, afternoon, evening, Ladies & Gentlemen, wherever you are.

Today, we are going to learn how to bake, errr … I mean, make a storage performance model. Before we begin, allow me to set the stage.

Don’t you just hate it when you are asked to do storage performance sizing and you don’t have a freaking idea how to get started? A typical techie would probably say, “Aiya, just use the capacity lah!”, and usually, they will proceed to size the storage according to capacity. In fact, sizing by capacity is the worst way to do storage performance modeling.

Bear in mind that storage is not a black box, although some people wished it was. It is not black magic when it comes to performance sizing because things can be applied in a very scientific and logical manner.

SNIA (Storage Networking Industry Association) has made a storage performance modeling methodology (that’s quite a mouthful), and basically simplified it into these few key ingredients. This recipe is for storage performance modeling in general, and I am advising you guys out there to engage your storage vendors’ professional services. They will know their storage solutions best.

And I am going to say to you – don’t be cheap; engage professional services – to get to the experts out there. I was having a chat with a consultant just now at McDonald’s. I have known this friend of mine for about 6-7 years now and his name is Sugen Sumoo, the Director of DBORA Consulting. They specialize in Oracle and database performance tuning and performance forecasting, something a typical DBA can’t do, because DBORA Consulting is the professional service that brings expertise and value to Oracle customers. Likewise, you have to engage your respective storage professional services as well.

In a cookbook or a cooking show, you are presented with the ingredients used, and in this recipe for storage performance modeling, the ingredients (in no particular order) are:

  • Application block size
  • Read and Write ratio
  • Application access patterns
  • Working set size
  • IOPS or throughput
  • Demand intensity

Application Block Size

First of all, the storage is there to serve applications. We always have to look from the applications’ point of view, not the storage’s point of view. Different applications have different block sizes. Databases typically range from 8K-64K, and backup applications usually deal with larger block sizes. Video applications can have 256K block sizes or higher. It all depends.
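Block size matters because it is the conversion factor between the two yardsticks we will meet later in this recipe, IOPS and throughput. A quick illustrative calculation (the workloads and figures are made up for the example):

```python
# Block size links the two performance yardsticks:
#   throughput (MB/s) = IOPS x block size

def throughput_mb_s(iops, block_size_kb):
    return iops * block_size_kb / 1024

# An 8K OLTP database doing 5,000 IOPS moves far less data
# than a 256K video stream doing only 1,000 IOPS
print(throughput_mb_s(5000, 8))    # → 39.0625 MB/s
print(throughput_mb_s(1000, 256))  # → 250.0 MB/s
```

This is why sizing by IOPS alone, or by MB/s alone, without knowing the block size, tells you very little.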

The best way is to find out from the DBA, email administrator or application developers. The unfortunate thing is that most so-called technical people or administrators in Malaysia don’t have a clue about the applications they manage. So, my advice to you storage professionals: do your research on the application and take its default value, because these clueless fellas are likely to have kept the default.

Read and Write ratio

Applications behave differently at different times of the day, and at different times of the month (no, it’s not PMS). At the end of the financial year or calendar year, there are some extra tasks that these applications perform. But on a typical day, there is a different weightage, or percentage, of read operations versus write operations.

Most OLTP (online transaction processing)-based applications tend to be read heavy and write light, but we need to find out the ratio. Typically, it can be a 2:1 ratio or 60%:40%, but it is best to speak to the application administrators about it. DSS (Decision Support Systems) and data warehousing applications could have much higher reads than writes, while a seismic-analysis application can have multiple writes during the analysis periods. It all depends.

To counter the “clueless” administrators, ask lots of questions. Find out the workflow of several key tasks and ask what each task does at different checkpoints of the application’s processing. If you are lazy (please don’t be lazy, because it degrades your value as a storage professional), use a rule of thumb.

Application access patterns

Applications behave differently in general. They can be sequential, like backup or video streaming. They can be random, like emails and databases at certain times of the day, and so on. All these behavioral patterns affect how we design and size the disks in the storage.

Some RAID levels tend to work well with sequential access and others, with random access. It is not difficult to find out about the applications’ pattern and if you read more about the different RAID-levels in storage, you can easily identify the type of RAID levels suitable for each type of behavioral patterns.
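One concrete way the RAID choice shows up in sizing is the write penalty: each frontend write costs several backend I/Os, depending on the RAID level. The penalties below are the commonly quoted rule-of-thumb figures, not any particular vendor’s implementation, so treat this as an illustrative sketch:

```python
# Commonly quoted RAID write penalties (backend I/Os per frontend write);
# illustrative rules of thumb - check your vendor's implementation.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops, read_pct, raid_level):
    # reads pass straight through; writes pay the penalty
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * WRITE_PENALTY[raid_level]

# The same 10,000 IOPS workload at 60% reads costs very different
# backend IOPS on RAID5 versus RAID10
print(backend_iops(10000, 0.6, "RAID5"))   # → 22000.0
print(backend_iops(10000, 0.6, "RAID10"))  # → 14000.0
```

This is why a write-heavy random workload on RAID5 needs noticeably more spindles than the frontend IOPS number alone suggests.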

Working set size

This variable is a bit more difficult to determine. The working set is the chunk of the application’s data that has to be loaded into a working area, usually memory and cache memory, to be used and abused by the application users.

Unless someone is well versed with the application, one would not be able to determine how much of it would be placed in memory and in cache memory. Typically, this can only be determined after the application has been running for some time.

The flexibility of having SSDs, especially the DRAM-type SSDs, is very useful for ensuring that there is sufficient “working space” for these applications.

IOPS or Throughput

According to the SNIA model, for I/O sizes smaller than 64K, IOPS should be used as the yardstick for storage performance modeling. For anything larger, use throughput, measured in MB/sec.

The application guy would be able to tell you what kind of IOPS their application is expecting or what kind of throughput they want. Again, ask a lot of questions, because this will help you determine the type of disks and the kind of performance you give to the application guys.

If the application guy is clueless again, ask someone more senior or ask the vendor. If the vendor engineers cannot give you an answer, then they should not be working for the vendor.

Demand intensity

This part is usually overlooked when it comes to performance sizing. Demand intensity refers to how intense the I/O requests are. They could come from one channel or one part of the application, or from several parts of the application in parallel. It is as if the storage is being ‘bombarded’ by applications, and this is the part that is hard to determine as well.

In some applications, the degree of intensity or parallelism can be tuned and to find out, ask the application administrator or developer. If not, ask the vendor. Also do a lot of research on the application’s architecture.

And one last thing. What I have learned is to add buffers to the storage performance model. Typically I would add about 10-20% extra, but you never know. As a storage professional, I would strongly encourage you to engage professional services, because it is worthwhile, especially in the early stages of sizing. It is usually a more expensive affair to size it after the applications have been installed and running.
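To show how the ingredients combine, here is a hedged back-of-envelope sketch of the whole recipe. Everything here is an illustrative assumption, not a vendor sizing tool: the 200 IOPS per disk, the RAID penalty passed in by hand, and the 20% default buffer are all placeholder figures of the kind discussed above:

```python
import math

# Hedged back-of-envelope sketch of the recipe; all figures, names
# and rules of thumb below are illustrative assumptions.

def size_storage(block_kb, frontend_iops, read_pct, raid_penalty,
                 per_disk_iops=200, buffer_pct=0.20):
    # SNIA rule of thumb from above: model I/O below 64K in IOPS,
    # anything larger in MB/s throughput
    metric = "IOPS" if block_kb < 64 else "MB/s"

    # Backend load: reads pass straight through, writes pay the RAID penalty
    backend = frontend_iops * read_pct + \
              frontend_iops * (1 - read_pct) * raid_penalty

    # Add the 10-20% buffer mentioned above, then divide by per-disk IOPS
    backend *= (1 + buffer_pct)
    return metric, math.ceil(backend / per_disk_iops)

# 8K OLTP workload, 10,000 IOPS, 60% reads, RAID5 (write penalty 4)
print(size_storage(8, 10000, 0.6, 4))
```

A real engagement would refine every one of these inputs with the application owners, which is exactly why the professional services are worth the money.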

“Failure to plan is planning to fail”.  The recipe isn’t that difficult. Go figure it out.

The Greening of Storage

Gartner, in its recent Symposium and IT/Expo, laid out the top 10 IT trends for 2012. Here’s their top 10 (in no particular order):

  1. Virtualization
  2. Big Data, patterns and analytics
  3. Energy efficiency and monitoring
  4. Context-aware applications
  5. Staff retention and retraining
  6. Social Networks
  7. Consumerization
  8. Cloud Computing
  9. Compute per square foot
  10. Fabrics stacks

For those who read IT news a lot, we are mostly aware of all 10 of them. But one of them strikes me in a different sort of way – energy efficiency and monitoring. There’s been a lot of talk about it, and I believe every vendor is doing something about Green IT/Computing, but to what magnitude are they doing it? A lot of them may be doing this thing called “Green Washing”, which is basically taking advantage of the circumstances and promoting themselves as green without putting much effort into it. How many times have we as consumers heard that this is green or that is green, without knowing how these companies derive their green claims and label themselves as green? We can pooh-pooh some of these claims because there is little basis to them.

One of the good things about IT is that it is measurable. You know how green a piece of computer equipment is by measuring the power and cooling it ingests, how much of that power is consumed, and how much work it does with the energy derived from that power, usually over a period of time. It’s measurable, and that’s good.

Unfortunately for storage, we as data creators and data consumers tend to be overly paranoid about our data. We make redundant copies, and we have every right to do so because we fear the unexpected. Storage technology is not perfect. As shown in an SNIA study some years ago,

from a single copy of “App Data” on the left of the chart, we mirror the data, doubling the amount of data and the capacity needed. Then we overprovision, to prepare for a rainy day. Then we back up, once, twice … thrice! In case of disaster, we replicate, and for regulatory compliance, we archive and keep and keep and keep, so that the lawyers can make plenty of money from any foul-ups with the rules and regulations.

That single copy of “App Data” just grew 10x more by the end of the chart.
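The arithmetic behind that 10x is easy to follow; the multipliers below are my own illustrative guesses at each step, not the figures from the SNIA chart itself:

```python
# Rough, illustrative arithmetic behind the "grew 10x" claim;
# the per-step multipliers are assumptions, not the SNIA chart's.
app_data    = 1.0
mirrored    = app_data * 2       # mirroring doubles it
provisioned = mirrored + 1.0     # overprovision for a rainy day
backups     = provisioned + 3.0  # backup once, twice ... thrice
replicated  = backups + 2.0      # DR replica of the mirrored data
archived    = replicated + 2.0   # compliance archives, kept and kept
print(archived)  # → 10.0 copies' worth of the original
```

Whatever the exact split, every one of those copies sits on spinning (powered) media somewhere.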

The growth of data goes hand in hand with the growth of power consumption, as shown in an IDC study below.

 

The more data you create, copy, share, keep and keep some more, the more power is drawn to make the data continuously available to you, you and you!

And there are also storage technologies today, from different storage vendors and in different capacities, that alleviate the data capacity pain. These technologies reduce the capacity required to store the data by eliminating redundancies, or maximize the ability to compact more bits per block of data with compression, as well as other techniques. SNIA summarized this beautifully in the chart below.

But with all these technologies, vendors tend to oversell their green features, and customers do not always have a way to make an informed choice. We do not have a proper tool to define how green a piece of storage equipment is, or at least a tool that is vendor-neutral and provides an unbiased view of storage.

For several years, the SNIA Green Storage Initiative’s Technical Working Group (TWG) has been developing a set of test metrics to measure and publish the energy consumption and efficiency of storage equipment. Through its SNIA Emerald program, it released a set of guidelines and a user guide in October 2011, with the intention of giving a fair, apples-to-apples comparison when it comes to green.

From the user guide, the basic testing criteria are pretty straightforward. I pinched the following from the user guide to share with my readers.

The testing criteria for all storage solutions are basically the same, as follows:

  1. The System Under Test (SUT) is run through a SUT Conditioning Test to get it into a known and stable state.
  2. The SUT Conditioning Test is followed by a defined series of Active Test phases that collect data for the active metrics, each with a method to assure stability of each metric value.
  3. The Active Test is followed by the so-called Ready Idle Test that collects data for the capacity metric.
  4. Lastly, the Capacity Optimization Test phases are executed, which demonstrate the storage system’s ability to perform defined capacity optimization methods.

For each of the categories of storage, there will be different workloads and run times depending on the category characteristics.
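The published numbers boil down to power-efficiency ratios: useful work per watt. Here is a hedged illustration of the kind of metrics involved, with capacity per watt for the idle phase and operations per watt for the active phase; the function names and figures are my own example, not the Emerald specification’s exact definitions:

```python
# Hedged illustration of Emerald-style power-efficiency ratios;
# names and numbers are made up for the example, not from the spec.

def idle_metric(raw_capacity_gb, idle_watts):
    # Ready Idle Test flavor: capacity served per watt while idle
    return raw_capacity_gb / idle_watts

def active_metric(iops, active_watts):
    # Active Test flavor: operations per second per watt under load
    return iops / active_watts

print(idle_metric(48000, 600))   # → 80.0 GB per watt
print(active_metric(25000, 750))  # roughly 33.3 IOPS per watt
```

Ratios like these are what make an apples-to-apples comparison between two arrays possible, regardless of vendor.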

After the testing, a set of test data metrics is collected and published.

And there are already published results, with IBM and HP taking the big brother lead: from IBM, the IBM DS3400, and from HP, the HP P6500.

Hoping that I have read the SNIA Emerald Terms of Use correctly (lawyers?), I want to state that what I am sharing is not for commercial gain. So here’s the link: SNIA Emerald published results for IBM and HP.

The greening of storage is very new, and likely to evolve over time, but what’s important is that it is a first step towards a more responsible planet. And this could be the next growth engine for storage professionals like us.