Hybrid is the new Black

It is hard for enterprises to let go of IT, isn’t it?

For years, we have seen the cloud computing juggernaut unrelenting in getting enterprises to put their IT into public clouds. Some of the biggest banks have put their faith in public cloud service providers. Close to home, Singapore’s United Overseas Bank (UOB) is one that has jumped on the bandwagon, signing up for VMware Cloud on AWS. But none comes bigger than the US government’s Joint Enterprise Defense Infrastructure (JEDI) project, where AWS and Azure are the last 2 bidders for the USD 10 billion contract.

Confidence or lack of it

Those 2 examples should be big enough to usher enterprises into confidently embracing public cloud services, but many enterprises have been holding back. What gives?

In the past, it was a matter of confidence and FUD (fear, uncertainty, doubt). News about security breaches and massive outages has been widely spread and amplified to sensationalize the effects and consequences of cloud services. But then again, we get the same thing in poorly managed data centers in enterprises and government agencies, often with much less fanfare. We shrug our shoulders and say “Oh well!“.

The lack-of-confidence factor, I think, has been overthrown. The “Cloud First” strategy in enterprises in recent years speaks volumes of the growing and maturing confidence in cloud services. Poor performance and high latency, once the Achilles’ heel of cloud services, are diminishing concerns. HPC-as-a-Service is becoming real.

The confidence in cloud services is strong. Then why is on-premises IT suddenly a cool thing again? Why is hybrid cloud getting all the attention now?

Hybrid is coming back

Even AWS wants on-premises IT. Its Outposts offering outlines that ambition. A couple of years earlier, Azure Stack had already made a beachhead on-premises through partnerships with many server vendors. VMware is in both on-premises and the public clouds, with strong business and technology integration with AWS and Azure. IBM Cloud has Big Blue thinking hybrid as well. 2 months ago, Dell jumped in too, announcing Dell Technologies Cloud with plenty of razzmatazz, making all the right moves with its strong on-premises infrastructure portfolio and the crown jewel of the federation, VMware. Continue reading

Connecting ideas and people with Dell Influencers

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

I just got home from Vegas yesterday after attending my 2nd Dell Technologies World as one of the Dell Luminaries. The conference was definitely bigger than last year’s, with more than 15,000 attendees. And there was a frenzy of announcements, from Dell Technologies Cloud to new infrastructure solutions, and more. The big one for me, obviously, was Azure VMware Solutions, officiated by Microsoft CEO Satya Nadella and VMware CEO Pat Gelsinger, with Michael Dell bringing the union together. I blogged about Dell jumping into the cloud in a big way.

AI Tweetup

In the razzmatazz, the most memorable moments were the Tweetups organized by Dr. Konstanze Alex (Konnie) and her team, and Tech Field Day Extra.

Tweetups were alien to me. I didn’t know how the concept worked, so I googled “tweetup” beforehand. There were a few tweetups on the topics of data protection and 5G, but the one that stood out for me was the AI tweetup.


Continue reading

Data Privacy First before AI Framework

A few days ago, I discovered that Malaysia already had plans for a National Artificial Intelligence (AI) Framework. It is led by the Malaysia Digital Economy Corporation (MDEC) and will be ready by the end of 2019. A Google search revealed a lot of news and announcements, with a few dating back to 2017, but little information about the framework itself. Then again, Malaysia likes to take the “father knows best” approach, and assumes that what it is doing shouldn’t be questioned (much). I will leave this part as it is, because perhaps the details of the framework are under the OSA (Official Secrets Act).

Are we AI responsible or are we responsible for AI?

But I would like to highlight the data privacy part that is likely to figure strongly in the AI Framework, because the ethical use of AI is paramount. It will have economic, social and political impact on Malaysians, and everybody else too. I have written a few articles on LinkedIn about ethics, data privacy, data responsibility, and the impact of AI. You can read about them in the links below:

I may sound like a skeptic of AI. I am not. I believe AI will benefit mankind, and bring far-reaching developments to society as a whole. But we have to be careful, and this is my MAIN concern when I speak about AI. I continue to question the human ethics and the human biases that go into the algorithms that define AI. This has always been the crux of my gripes, my concerns, my skepticism of everything we call AI. I am not against AI, but I am against the human flaws that shape the algorithms of AI.

Everything is a Sheep (or a Giraffe)

A funny story was shared with me last year. It was about Microsoft Azure’s computer vision algorithm for recognizing objects in photos. Apparently the algorithm of Microsoft Azure’s neural network was fed with some overzealous data of sheep (or giraffes), and the AI system started to claim that every spot it “saw” was a sheep, and anything long and vertical was a giraffe.

In the photo below, there were a bunch of sheep in a tree. Check out the tags/comments in the red rectangle published by the AI neural network software below and see what both Microsoft Azure and NeuralTalk2 “saw” in the photo. You can read more about the funny story here.

This proves my point: if you feed the learning system and the AI behind it with biased and flawed data, the result can be funny (as in this case) or disastrous. Continue reading
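The overzealous-sheep failure mode is easy to reproduce in miniature. The sketch below is not Azure’s algorithm; it is a hypothetical toy nearest-neighbour classifier on a made-up 1-D feature. But it shows how a training set flooded with one label outvotes everything else, no matter what sample arrives:

```python
from collections import Counter

def nearest_label(sample, training_set, k=3):
    """Label a sample by majority vote of its k nearest training examples."""
    ranked = sorted(training_set, key=lambda ex: abs(ex[0] - sample))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 1-D "feature" (say, blob elongation). The training data is
# overwhelmingly sheep, so the vote is rigged before any photo is seen.
training = [(0.9, "sheep"), (1.0, "sheep"), (1.1, "sheep"),
            (1.2, "sheep"), (5.0, "giraffe")]

print(nearest_label(1.05, training))  # a sheep-like blob -> "sheep"
print(nearest_label(5.0, training))   # even a true giraffe gets outvoted -> "sheep"
```

The fix is in the data, not the code: rebalancing the training set, rather than tweaking the classifier, is what stops everything from becoming a sheep.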

Microsoft desires Mellanox

My lazy Thursday morning was spurred by a posting by Stephen Foskett, Chief Organizer of Tech Field Day: “Microsoft mulls the acquisition of Mellanox“.

The AWS factor

The quick reaction is that it seems a strange one. Microsoft, of all people, buying a chip company? Does it make sense? However, digging deeper, it starts to make some sense. And I believe the desire was spurred by Amazon Web Services’ announcement of their Graviton processor at AWS re:Invent last month.

AWS acquired Annapurna Labs in early 2015. From the sources, Annapurna was working on low-powered, high-performance networking chips for the mid-range market. The key words – low-powered, high-performance, mid-range – are certainly the musical notes to the AWS opus. And that would mean the ability for AWS to control its own destiny, even at the edge. Continue reading

Pure Electric!

I didn’t get a chance to attend the Pure Accelerate event last month. From the blogs and tweets of my friends, Pure Accelerate was an awesome event. When I got the email invitation for the localized Pure Live! event in Kuala Lumpur, I told myself that I had to attend.

The event was yesterday, and I was not disappointed. Coming off a strong fiscal Q1 2018, it appears that Pure Storage has gotten many things together, chugging full steam on all fronts.

When Pure Storage first came out, I was one of the early bloggers who took a fancy to them. My 2011 blog mentioned the storage luminaries in their team. Since then, they have come a long way. And it was apt that on the same morning yesterday, the latest Gartner Magic Quadrant for Solid State Arrays 2017 was released.

Continue reading

Ryussi MoSMB – High performance SMB

I am back in the Silicon Valley as a Storage Field Day 12 delegate.

One of the early presenters was Ryussi, who was sharing a proprietary SMB server implementation for Linux and Unix systems. The first thing that came to my mind was: why not SAMBA? It’s free; it works; it has 25 years of maturity. But my experience with SAMBA, even in the present 4.x, does have its quirks and challenges, especially in the performance of large file transfers.

One of my customers uses our FreeNAS box. It’s a 50TB box for computer graphics artists and a rendering engine. After running the box for about 3 months, one case escalated to us was that the SMB shares couldn’t be mapped all of a sudden. All the clients were running Windows 10. Our investigation led us to look at the performance of SMB in the SAMBA 4 of FreeNAS.

This led to other questions, such as vfs_aio_pthread, the FreeBSD/FreeNAS implementation of asynchronous I/O to overcome the performance weaknesses of the POSIX AsyncIO interface. The FreeNAS forums are flooded with sightings of the SMB service going missing during large file transfers. Without getting too deep into the SMB performance issue, we decided to set the “Server Minimum Protocol” and “Server Maximum Protocol” to SMB 2.1. The FreeNAS box at the customer has been stable for the past 5 months.
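For what it’s worth, the equivalent settings in a stock SAMBA 4 smb4.conf look something like this. FreeNAS applies them through its UI rather than the file directly, and the parameter names and the SMB2_10 value follow standard SAMBA conventions, so treat this as a sketch that your particular build may vary from:

```ini
# Pin the negotiated protocol range to SMB 2.1 to avoid the
# SMB 3.x code paths implicated in the large-transfer stalls.
[global]
    server min protocol = SMB2_10
    server max protocol = SMB2_10
```

The trade-off is deliberate: clients lose SMB 3.x features such as multichannel and encryption, but the share stays mapped through heavy rendering workloads.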

Continue reading

Disaster Recovery has changed

Simple and affordable Disaster Recovery? Sounds oxymoronic, right?

I have thronged the small medium business (SMB) space in the past few months. I have seen many SMBs resort to the cheapest form of backup they can get their hands on. It could be a Synology here or a QNAP there, and that’s their backup plan. That’s their DR plan. When disaster strikes, they just shrug their shoulders and accept their fate. It could be human error, accidental data deletion, virus infection, data corruption and recently, ransomware! But these SMBs do not have the IT resources to deal with the challenges these “disasters” bring.

Recently I attended a Business Continuity Institute forum organized by the Malaysian Chapter. Several vendors and practitioners spoke about organizations’ preparedness and readiness for DR. And I would like to stress the words “preparedness” and “readiness”. In the infrastructure world, we often put redundancy into the DR planning, and this means additional cost. SMBs cannot afford this redundancy. Furthermore, larger organizations have BC and DR coordinators who are dedicated to the purpose of BC and DR. An SMB probably has one person who doubles up as the IT administrator.

However, for IT folks, virtualization and cloud technologies are beginning to germinate a new generation of DR solutions – solutions which are able to address the simplicity of replication and backup, and at the same time are affordable. Many are beginning to offer DR-as-a-Service, and indeed, DR-as-a-Service has become a Gartner Magic Quadrant category. Here’s a look at the 2016 Gartner Magic Quadrant for DR-as-a-Service.


And during these few months, I have encountered 3 vendors in this space, all sitting in the Visionaries quadrant. One came to town and started smashing laptops to jazz up their show (I am not going to name that vendor). Another kept sending me weird emails, sounding kind of sleazy, like “Got time for a quick call?”

Continue reading

The transcendence of Data Fabric

The Register wrote a damning piece about NetApp a few days ago. I felt it was irresponsible because this is akin to kicking a man when he’s down. It is easy to do that. The writer is clearly missing the forest for the trees. He was targeting NetApp’s Clustered Data ONTAP (cDOT) and missing the entire philosophy of NetApp’s mission and vision in Data Fabric.

I have always been a strong believer that you must treat data like water. Just as Jeff Goldblum famously said in Jurassic Park, “Life finds a way“, data, as it moves through its lifecycle, will find its way into the cloud and back.

And every storage vendor today has a cloud story to tell. It is exciting to listen to everyone sharing their cloud story. Cloud makes sense when it addresses different workloads such as the sharing of folders across multiple devices, backup and archiving data to the cloud, tiering to the cloud, and the different cloud service models of IaaS, PaaS, SaaS and XaaS.

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which was only able to handle 16 devices at throughput of up to 80 (later 160 and 320) MB/sec.

NAS, defined by the CIFS/SMB and NFS protocols, was happily chugging along on 100 Mbit/sec networks, and occasionally getting sucked into the arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer“. Continue reading