I had the privilege of attending HP’s D2D workshop yesterday, thanks to the invitation of my old friend, Mr. CC Chung, Malaysia’s HP StorageWorks Division Country Manager.
I am allowed to assess their D2D solution without fear or favour (I think), and the plush sling bag door gift has nothing to do with my assessment (what do you think? Ha, ha). So here goes.
I based my assessment on these criteria (something I picked up when I was mucking around with Data Domain for 3 months at MTech Security some years ago). The criteria are:
- Hash-based chunking granularity vs Single Instance Store (a la EMC Centera)
- Inline or post-processing
- Source-based or target-based deduplication
- Forward or reverse referencing (though it has little significance – for now)
- Global or Local Deduplication
First of all, most people would ask how well it dedupes, and the technical guy’s answer would be “It depends …“. The sales guy would probably say “YMMV” (can anyone tell me what this acronym stands for?). I believe the advertised rate is 20:1, which is pretty realistic because, as we know in the deduplication world, the longer the data is retained, the higher the ratio can get. It also depends on the type of data being deduped.
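To see why retention drives the ratio up, here is a back-of-the-envelope sketch. The numbers are purely illustrative (my own assumptions, not HP’s): weekly full backups of a 1 TB data set where 5% of the data is new and unique each week.

```python
# Illustrative only: hypothetical numbers, not HP's advertised figures.
logical_tb = 1.0      # size of each full backup, in TB
change_rate = 0.05    # fraction of new, unique data per backup

def dedupe_ratio(num_backups):
    # Logical data = everything the backup app "wrote" to the target.
    logical = logical_tb * num_backups
    # Physical data = first full, plus only the changed blocks after that.
    physical = logical_tb + logical_tb * change_rate * (num_backups - 1)
    return logical / physical

for weeks in (4, 13, 26, 52):
    print(f"{weeks:2d} weeks retained -> {dedupe_ratio(weeks):.1f}:1")
```

At these assumed numbers, the ratio climbs from roughly 3.5:1 after a month to about 14.6:1 after a year, which is why “how long do you retain?” matters as much as “what data type?” when someone quotes 20:1.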
And of course, one of the participants (there are always skeptics) was bickering about how his customer was complaining that the deduplication ratio for a SQL database was lower than advertised. My take on this matter: both the customer and the reseller are at fault! The customer took what the sales/pre-sales guy said verbatim and expected fantastic results. The reseller did not know the D2D solution well enough, and so set the customer up with numbers that were realistic only for the wrong data type.
To me, as Justin (the HP Solution Architect) was presenting the HP D2D solution, I was ticking my check boxes for these criteria, and in my opinion, the HP D2D solution does the job. HP was telling the attendees that they would be surprised by the end pricing for the D2D solution. I never got to know the figures and I never asked, but compared to the king of deduplication devices, Data Domain, it is likely to be lower.
So, here are the ticks for the HP D2D solution:
- In-line deduplication
- Target-based (of course)
- Hash-based chunking with variable length for deduplication granularity
- Local Deduplication
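For readers unfamiliar with the chunking criterion above, here is a minimal sketch of how variable-length, content-defined chunking works in general. This is my own toy, not HP’s algorithm: real systems use a proper rolling hash (e.g. Rabin fingerprints), and a simple byte-wise hash stands in for it here.

```python
import hashlib

# Toy content-defined chunking sketch -- NOT HP's implementation.
# Chunk boundaries depend on the data itself, so inserting bytes early
# in a file only shifts nearby boundaries, not every chunk after it.
MIN_CHUNK = 16          # never cut a chunk shorter than this
CUT_MASK = 0x3FF        # cut when low 10 bits are set: ~1 KiB average chunk

def chunks(data: bytes):
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # toy content-dependent hash
        if (h & CUT_MASK) == CUT_MASK and i - start + 1 >= MIN_CHUNK:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def fingerprints(data: bytes):
    # Each chunk is identified by its hash; identical chunks anywhere in
    # the backup stream dedupe down to a single stored copy.
    return [hashlib.sha1(c).hexdigest() for c in chunks(data)]
```

The fingerprint list is what the dedupe index works against: a chunk whose hash is already known is referenced, not stored again. This is also why variable-length chunking tends to beat fixed-block schemes on shifted data.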
They have several models, ranging from the entry-level 2500 series to the 4100 and 4300 series. Beyond that, HP has a separate deduplication solution meant for the higher-end market, called the VLS, which was not presented in the workshop.
The D2D can be both a VTL and a NAS-target dedupe device, and the browser-based management GUI was simple and uncluttered. What interested me most, though, was the HP StoreOnce technology, which I did not get to dig deeper into. I found a nice video (below) of a whiteboarding session on HP StoreOnce.
I promised to look deeper into it in a few days’ time. This week has been such a muck for me, but overall, things have been turning out well at the end of the day.
Another interesting thing was its sparse indexing of the hashes; some dedupe vendors are already doing the same thing. But if you know me, I will research this for the knowledge and benefit of all.
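In broad strokes, sparse indexing sidesteps the problem of keeping every chunk hash in RAM by sampling: only hashes matching a sampling condition become “hooks” in the in-memory index, and each hook points back at the on-disk container it came from. The sketch below is my own toy illustration of that idea (the names `is_hook`, `index_container`, and so on are mine), not HP’s implementation.

```python
# Toy sketch of the sparse-indexing idea -- not HP's actual code.
SAMPLE_MASK = 0x1F  # sample ~1 in 32 chunk hashes as "hooks"

def is_hook(chunk_hash: int) -> bool:
    # Only hashes whose low bits are zero make it into the RAM index.
    return (chunk_hash & SAMPLE_MASK) == 0

def index_container(sparse_index: dict, container_id: int, hashes: list):
    # Record which container each sampled hook was seen in.
    for h in hashes:
        if is_hook(h):
            sparse_index.setdefault(h, set()).add(container_id)

def candidate_containers(sparse_index: dict, incoming_hashes: list) -> set:
    # Containers sharing a hook with the incoming segment are the most
    # promising places to look for duplicates; only their full manifests
    # need to be fetched from disk.
    found = set()
    for h in incoming_hashes:
        if is_hook(h):
            found |= sparse_index.get(h, set())
    return found
```

The trade-off is that dedupe becomes approximate: a duplicate chunk in a container that shares no hook with the incoming segment may be stored twice, in exchange for a memory index that is a small fraction of the full hash set.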
After the workshop, HP was kind enough to give me an update on their Converged vision: how LeftHand, IBRIX, and 3PAR fit into their strategy and, more importantly, their story for the storage market. I will speak more about this in the future. Of course, I will not reveal what’s in store for the future of the D2D solution, but all I can say is that I left the workshop feeling the solution will do what it is supposed to, nothing more, nothing less. And I mean that in a good way.
I still reserve my opinions about HP because a lot of their storage business is still attached to the server side, but hopefully, with the P4000 and P6000 workshops coming up, my opinions may change a little.
Quantum has had the same solution for quite a few years now: the DXi series of products.
Just as a comparison to HP:
– In-line deduplication (DXi2.0) (DXi1.X – option given for inline or deferred)
– Quantum developed the pioneering patent for variable-length block data deduplication
– Local Deduplication (in the near future we have hybrid dedupe link: http://blog.quantum.com/index.php/2011/09/who-benefits-from-hybrid-mode-dedupe/)
Somehow the name Quantum doesn’t appear that much, sad to say =(
Thanks for your update. Actually, a few months ago, CM Tan invited me to meet up with your team to learn about the DXi; it just never materialized. I would be glad to meet up to learn more. In fact, I learned about Quantum’s dedupe technology when I was at EMC, which was an OEM of the DX with the EDL product.
I’d appreciate it if you could join our upcoming meeting.
CM Tan? Hmm… must be from Quantum South Asia at Phileo Damansara.
Oh yes, the DL3D300, etc. Most of the support is from our Sustaining engineering team direct to EMC engineering in India.
I won’t be able to make it to the upcoming meeting, will be travelling to China. Leisure trip.
Will try to make it to the next one.
How’s your time next week? Perhaps, we can catch up for a drink? Never met you before.
Probably won’t be able to make it. I have 2 groups of friends asking for a meet up next week.
Pingback: HP StoreOnce – Further Depth « Storage Gaga