[Preamble: I was a delegate of Storage Field Day 12. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event.]
On Day 2 of Storage Field Day 12, the other delegates and I were shuttled to NetApp's Sunnyvale campus headquarters. That was a homecoming for me, and a bit ironic too.
Just 8 months ago, I was NetApp Malaysia Country Manager. That country sales lead role was my second stint with NetApp, and I lasted almost 1 year.
17 years ago, my first stint with NetApp began as employee #2 in Malaysia, working as an SE. That stint went by quickly over 5 1/2 years, and I loved that time. Those Fall Classics NetApp used to have at the Batcave and the Fortress of Solitude left a mark on me, and the experiences are still as vivid as ever.
Despite what has happened in both stints, and even outside that circle, I am still one of NetApp's active cheerleaders in the Asia Pacific region. I even got accused of being biased as a community leader on the SNIA Malaysia Facebook page (unofficial but recognized by SNIA), because I was supposed to be neutral. I have put in 10 years promoting the storage technology community with SNIA Malaysia. [To the guy named Stanley, my response was "Too bad, pick a religion".]
The highlight of the SFD12 NetApp visit was, of course, having lunch with Dave Hitz, one of the co-founders and the only one still remaining. But throughout the presentations, I was unimpressed.
For me, the only one that stood out was CloudSync. I have been reading about CloudSync since NetApp Insight 2016, and yes, it's a nice little data shipping service between on-premises storage and the AWS cloud.
Here's what CloudSync looks like:
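CloudSync itself is a managed service with its own interface, so purely to make the idea of "data shipping" concrete, here is a minimal sketch of the kind of incremental on-premises-to-S3 copy it automates. This is my own illustration using boto3, not the CloudSync API; the mount path, bucket name and mtime-based change detection are all my own assumptions.

```python
# Minimal sketch of on-premises-to-S3 data shipping, purely to illustrate
# the kind of job CloudSync automates. NOT the CloudSync API: the mount
# path, bucket name and mtime-based change detection are my assumptions.
import os
import boto3

SOURCE_DIR = "/mnt/nfs_export"   # hypothetical on-premises NFS mount
BUCKET = "my-cloudsync-target"   # hypothetical S3 bucket

s3 = boto3.client("s3")

def sync(last_sync_epoch=0.0):
    """Ship every file modified since the last sync up to S3."""
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_sync_epoch:
                key = os.path.relpath(path, SOURCE_DIR)
                s3.upload_file(path, BUCKET, key)
                print(f"shipped {path} -> s3://{BUCKET}/{key}")

if __name__ == "__main__":
    sync()
```

The real service does far more (scheduling, continuous incremental updates, reporting), but the core job is exactly this loop: detect what changed on-premises and ship it to an AWS target.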
PREFACE: This is just a thought, an idea. I am by no means an expert in this area. I have researched this to inspire a thought process about how we can bring together the 2 disparate worlds of medical records and imaging with the emerging cloud services for healthcare.
Healthcare has been moving out of its archaic shell in the past few years, and digital healthcare technology and services are booming. This movement is part of the digital transformation that could eventually lead to secure, compliant distribution and collaboration of health data, medical imaging and electronic medical records (EMR).
It is a blessing that today's medical imaging industry has consolidated around the DICOM (Digital Imaging and Communications in Medicine) standard. DICOM dictates how medical imaging information and pictures are used, stored, printed, transmitted and exchanged. It is also a communication protocol that runs over TCP/IP, linking up service class providers (SCPs) and service class users (SCUs) with backend systems such as PACS (Picture Archiving & Communications Systems) and RIS (Radiology Information Systems).
Another well-accepted standard is HL7 (Health Level 7), a similar Layer 7, application-level protocol for transferring and exchanging clinical and administrative data.
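To make the DICOM side a little more concrete, here is a minimal sketch of reading a DICOM object with the open-source pydicom library. This is my own illustration of the standard's tagged-element structure, not something from any particular PACS vendor, and the file name is hypothetical.

```python
# Minimal sketch of reading a DICOM object with the open-source pydicom
# library -- my own illustration; the .dcm file name is hypothetical.
from pydicom import dcmread

ds = dcmread("sample_ct_slice.dcm")

# DICOM keeps patient/study metadata and the image together as tagged
# data elements in one object.
print("Patient :", ds.PatientName)
print("Modality:", ds.Modality)          # e.g. CT, MR, US
print("Study   :", ds.StudyDescription)

# Decode the pixel data into a NumPy array (requires numpy installed).
pixels = ds.pixel_array
print("Image shape:", pixels.shape)
```

The point of the sketch: because metadata and imaging travel together in one standardized object, any DICOM-aware system, whether a modality, a PACS or a cloud service, can consume the same file.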
The diagram below shows a self-contained ecosystem involving the front-end HIS (Hospital Information Systems), and the integration of healthcare, medical systems and other DICOM modalities.
(Picture courtesy of Meddiff Technologies)
I hate the fact that I am bursting the big bubble brewing around Hyper Convergence (HC). I urge all to look past the hot air and hype frenzy going on, because in the end, an HC platform has to be aligned with the organization's data architecture and business plans.
The announcement of Gartner's latest Magic Quadrant on Integrated Systems (read: hyper convergence) has put Nutanix at the head of the pack as of August 2015. Clearly, many of us get caught up in it because it is the "greatest feeling in the world". However, that feeling is not reality, because many factors go into making the leaders of the Magic Quadrant (MQ).
First of all, the MQ is about market perception. There is no doubt that the companies in the Leaders Quadrant have earned their right to be there; each one's revenue, market share, gross margin and profitability helped put it among the leaders. However, the MQ also measures branding, marketing, market perception and acceptance, and other intangible factors.
Secondly, VMware EVO: Rail has split the market, with EMC carrying 3 HC solutions in VCE, ScaleIO and EVO: Rail. Cisco wanted to do its own HC piece with Whiptail (acquired between the 2014 and 2015 MQ reports), then closed Whiptail down when its new CEO came on board. NetApp chose EVO: Rail and also has the ever-popular FlexPod. That is why, in this latest MQ report, NetApp and Cisco are rated independently, whereas last year's report listed Cisco/NetApp together. Market forces changed, and perception changed.
I am dusting off the cobwebs of my blog. After almost 3 months of inactivity (and trying to stay within the Social Media Guidelines of my present company), I have mustered enough energy to start writing again. I am tired, and I am finishing off my previous engagements prior to joining HDS. But I am glad those are coming to an end, with the last job in Beijing next week.
So officially, I will be at HDS as of November 4, 2013. And to get into my employer's good books, I think I should start with something with which HDS has proved many critics wrong: the notion that HDS is poor with NAS solutions has been dispelled by a recent SPECsfs benchmark report, especially when it comes to NFS file performance. HDS has never been much of a big shouter about HNAS, even back in the days of the OEM relationship with BlueArc. The quiet period after the BlueArc acquisition was also, in my opinion, the gestation period for this kick-ass announcement a couple of weeks ago. Here is one of the news reports circulating on the web, from the ever-trusty El Reg.
HDS has never done the big shouting of the likes of EMC and NetApp, who have plenty of marketing dollars to spend. EMC Isilon and NetApp C-Mode have always touted their mighty SPECsfs numbers, usually with a high number of controllers or nodes behind the benchmarks. More often than not, readers focus on the NFSops/sec figures rather than on the number of heads required to generate them.
Unaware of this HDS announcement, I was already asking myself that question of NFSops/sec per SINGLE controller head. So, on September 26, 2013, I did a table comparing some key participants in the SPECsfs2008_nfs.v3 results, and here is the table:
In the last columns of the 2 halves (which I have highlighted in red), the NFSops/sec per single controller head numbers are shown. I hope that readers will view the performance numbers more objectively after reading this. I let you make your own decisions, but ultimately, the numbers are what they are. One should not be over-mesmerized by the super million NFSops/sec until one looks under the hood. Secondly, one should look at things more holistically, such as $/NFSops/sec, $/ORT (overall response time), $/GB managed and other relevant indicators of the systems sold. For those who want the per-head arithmetic spelled out, there is a small sketch below.
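A quick sketch of that per-head arithmetic; the two systems and their figures below are made up for illustration, not rows from my actual table.

```python
# Illustrative arithmetic for NFSops/sec per single controller head.
# These two entries are MADE-UP figures for illustration only, not
# results from the actual SPECsfs2008_nfs.v3 submissions.
results = [
    # (system, total NFSops/sec, controller heads/nodes)
    ("Vendor A scale-out", 1_000_000, 20),
    ("Vendor B dual-head",   200_000,  2),
]

for system, total_ops, heads in results:
    per_head = total_ops / heads
    print(f"{system}: {total_ops:,} ops/sec over {heads} heads "
          f"= {per_head:,.0f} NFSops/sec per head")
```

On these made-up numbers, the million-ops scale-out system delivers 50,000 NFSops/sec per head, while the modest dual-head system delivers 100,000, which is exactly why the headline figure alone can mislead.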
But I do not want to take the thunder away from HDS' HNAS platforms in this recent benchmark. In summary:
To reach a respectable number of 607,647 NFSops/sec with a sub-millisecond response time is quite incredible. The ORT of 0.59 msec should not be taken lightly, because shaving off even 0.1 msec is not easy. Therefore, staying below 0.6 millisecond is pretty awesome.
This is my first blog post in 3 months. I am glad to be back, and hopefully, with the monkey off my back (I am referring to my outstanding engagements), I can concentrate on writing good stuff again. I know, I know … I still owe some people some entries. It's great to be back 🙂
Never bet against Ethernet!
I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called would-be "Ethernet killers". The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job, I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.
That was more than 10 years ago, and 10 years ago, ATM was a hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Today, ATM is reduced to a niche telecommunications protocol, and it does not participate much in the LAN technology space.
That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based technology of SCSI in the enterprise. And Fibre Channel has grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds of up to 16 Gbit/sec today.
When the networking world and the storage networking world collided (I mean, combined) in the Fibre Channel over Ethernet (FCoE) technology some years back, something had to give. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?
Welcome to the world of IT hypes! FCoE's benefit? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!
In the past few weeks, I have certainly had an axe to grind with Dell, notably over their acquisition of Quest Software. I had been full of praise for how Dell was purchasing the right companies, and how the companies Dell acquired were important chess pieces that would propel Dell into the enterprise space. Until now …
Since its first significant enterprise acquisition with EqualLogic in 2008, there have been PerotSystems, Kace, Scalent, Boomi, Compellent, Exanet, Ocarina Networks, Force10, SonicWall, Wyse Technologies, AppAssure and RNA Networks. (I might have missed one or two.) To me, all of these were good buys: solid companies with a strong future in their technology and offerings. Until Dell decided to acquire Quest Software.
At the back of my mind: what the heck is Dell buying Quest Software for? And for a ballistic USD$2.4 billion! That's a hell of a lot of money to spend on a company which does not have a strong portfolio of solutions and is not exactly a leader in its respective disciplines, barring Quest's Foglight and TOAD. A quick check of Quest's website revealed that they are in the following disciplines:
The news of EMC's would-be acquisition of XtremIO a few weeks ago was an open secret, and rumour has it that NetApp was eyeing XtremIO as well. It looks like EMC has beaten NetApp to it yet again.
The interesting part was, of course, the price. USD$430 million is a very high price to pay for a stealthy, 2-year-old company with 2 rounds of funding totaling USD$25 million. Why such a large amount?
XtremIO has a talented team of engineers, the notable ones being Yaron Segev and Shahar Frank. They have a background in InfiniBand, and Shahar Frank was the chief architect of the Exanet scale-out NAS (which was acquired by Dell). As quoted by the 451 Group, XtremIO is building an all-flash SAN array that "provides consistently high performance, high levels of flash endurance, and advanced functionality around thin provisioning, de-dupe and space-efficient snapshots".
Furthermore, XtremIO has developed a real-time inline deduplication engine that does not degrade performance. It does this by spreading the write I/Os over the entire array. There is little information about this deduplication engine, but I bet XtremIO has developed a real-time, inherently deduplicating file system that spreads all the I/Os to balance wear-leveling as well as scale performance. I bet XtremIO will dedupe everything it stores, with a B+ tree, copy-on-write file system and a super-duper efficient hashing algorithm for address mapping (pointers) in this deduplication file system. Ok, ok, I am getting carried away here, because it is likely that I will be wrong, but I can imagine, can't I? A toy sketch of what I am imagining follows below.
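To be clear, the sketch below is just my speculation made concrete, not XtremIO's actual engine: hash every incoming block, keep one copy per unique fingerprint, and let the fingerprint double as an address that spreads blocks (and flash wear) across the nodes.

```python
# Toy sketch of inline, content-hash deduplication -- my own speculation
# made concrete, NOT XtremIO's actual engine. Node count is hypothetical.
import hashlib

NUM_NODES = 4        # hypothetical number of nodes in the array
block_store = {}     # fingerprint -> block; one copy per unique block

def write_block(block: bytes) -> str:
    """Dedupe on write: store a block only if its fingerprint is new."""
    fp = hashlib.sha256(block).hexdigest()
    is_new = fp not in block_store
    if is_new:
        block_store[fp] = block
    # The fingerprint doubles as the address: its leading bytes pick the
    # node, spreading writes (and hence flash wear) across the array.
    node = int(fp[:8], 16) % NUM_NODES
    print(f"node {node}: {'stored' if is_new else 'deduped'} {fp[:12]}...")
    return fp

first = write_block(b"A" * 4096)    # new content -> stored
second = write_block(b"A" * 4096)   # identical content -> deduped
assert first == second
```

The nice property of this design is that dedupe and wear-leveling fall out of the same hash: identical blocks collapse into one copy, and a good hash scatters unique blocks evenly, so no single node or flash device takes a disproportionate beating.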
It's not new. SAP has been trying to do it for years, with little success. SAP applications and modules still rely very much on the Oracle database as their core engine, but all that could change within the next few years. SAP has HANA now.
I thought it befitting to use the movie poster of "Hanna" (albeit with an extra "N" in the spelling) to portray SAP, who clearly has Oracle in its sights now, with a sharpened arrowhead aimed at the jugular of the Oracle beast. (If you haven't watched the movie, you will see the girl Hanna using a bow and arrow to hunt a large reindeer.)
What is HANA anyway? It started out as an analytics appliance in SAP HANA 1.0 SP2. Its key component is the HANA in-memory database (IMDB), and it is not aimed at the general-purpose, relational database market yet. Or perhaps that's what SAP wants Oracle to believe.
There is no stopping Dell. It is in the news again, this time acquiring privately owned Wyse Technology.
The name Wyse certainly brings back memories of the times when Wyse made VT100- and VT220-class terminals. Wyse was also one of the early leaders in thin client computing, where the "dumb" workstation ran an X server to display client applications served from elsewhere. They used to compete with companies like NCD (Network Computing Devices) and Hummingbird. My first company, CSA, was a distributor of NCD clients, and I remember Sime Darby was the distributor of Wyse thin clients.
Wyse as quoted:
Wyse Technology is the global leader in Cloud Client Computing. The Wyse portfolio includes industry-leading thin, zero and cloud PC client solutions with advanced management, desktop virtualization and cloud software supporting desktops, laptops and next generation mobile devices. Wyse has shipped more than 20 million units and has over 200 million people interacting with their products each day, enabling the leading private, public, hybrid and government cloud implementations worldwide. Wyse works with industry-leading IT vendors, including Cisco®, Citrix®, IBM®, Microsoft, and VMware® as well as globally-recognized distribution and service providers. Wyse is headquartered in San Jose, California, U.S.A., with offices worldwide.
The Dell acquisition of Wyse shows that Dell is serious about Virtual Desktop Infrastructure (VDI) technology, especially in the client cloud computing space. And the VDI space is going to heat up, as many vendors are pushing hard to get the market going.
Dell, for better or for worse, has just added another acquisition that fits into the jigsaw puzzle it is trying to build. Wyse looks like a good buy, with mature technology and a long legacy in the thin client space. I hope Dell will energize the Wyse Technology team, but while acquisition is easy, implementation is the tough part. How well Dell mobilizes the Wyse Technology team will depend on how well Wyse blends into Dell's culture.