Data Privacy First before AI Framework

A few days ago, I discovered that Malaysia already has plans for a National Artificial Intelligence (AI) Framework. It is led by the Malaysia Digital Economy Corporation (MDEC) and will be ready by the end of 2019. A Google search revealed a lot of news and announcements, a few dating back to 2017, but little information on the framework itself. Then again, Malaysia likes to take the “father knows best” approach, and assumes that what it is doing shouldn’t be questioned (much). I will leave this part as it is, because perhaps the details of the framework are under the OSA (Official Secrets Act).

Are we AI responsible or are we responsible for AI?

But I would like to highlight the data privacy part, which is likely to figure strongly in the AI Framework, because the ethical use of AI is paramount. It will have an economic, social and political impact on Malaysians, and on everybody else too. I have written a few articles on LinkedIn about ethics, data privacy, data responsibility and the impact of AI. You can read them in the links below:

I may sound like a skeptic of AI. I am not. I believe AI will benefit mankind and bring far-reaching developments to society as a whole. But we have to be careful, and this is my MAIN concern when I speak up about AI. I continue to question the human ethics and the human biases that go into the algorithms that define AI. This has always been the crux of my gripes, my concerns, my skepticism about everything we call AI. I am not against AI; I am against the human flaws that shape the algorithms of AI.

Everything is a Sheep (or a Giraffe)

A funny story was shared with me last year. It was about the Microsoft Azure computer vision algorithm and how it recognizes objects in photos. Apparently Microsoft Azure’s neural network was fed an overzealous amount of sheep (and giraffe) data, and the AI system started to declare that every spot it “saw” was a sheep, and anything long and vertical was a giraffe.

In the photo below, there was a bunch of sheep in a tree. Check out the tags/comments in the red rectangle published by the AI neural network software, and see what both Microsoft Azure and NeuralTalk2 “saw” in the photo. You can read more about the funny story here.

This proves my point: if you feed the learning system and the AI behind it biased and flawed information, the result can be funny (as in this case) or disastrous. Continue reading
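
The mechanics are worth a quick illustration. Below is a deliberately naive sketch, not Azure’s actual neural network or API, just a toy classifier that only learns the label frequencies of a hypothetical, overzealously “sheep-heavy” training set. It shows how skewed training data alone can make a system see sheep everywhere:

```python
import random

random.seed(42)

# Hypothetical training labels: 90% "sheep", a sprinkle of everything else.
# This stands in for the "overzealous data of sheep" in the story above.
training_labels = ["sheep"] * 90 + ["giraffe"] * 5 + ["rock"] * 5

def predict(image_name):
    """A prior-only 'classifier': it ignores the image completely and
    simply samples from the label distribution it was trained on."""
    return random.choice(training_labels)

# None of these hypothetical photos contain a sheep...
for img in ["empty_field.jpg", "tall_tree.jpg", "white_rock.jpg", "cloud.jpg"]:
    print(f"{img} -> {predict(img)}")
# ...yet almost every prediction comes back as "sheep".
# Biased data in, biased (funny, or disastrous) results out.
```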

Microsoft desires Mellanox

My lazy Thursday morning was spurred by a posting by Stephen Foskett, Chief Organizer of Tech Field Day: “Microsoft mulls the acquisition of Mellanox”.

The AWS factor

My quick reaction was that it seemed a strange move. Microsoft, of all companies, buying a chip company? Does it make sense? However, looking deeper, it starts to make some sense. And I believe the desire was spurred by Amazon Web Services’ announcement of their Graviton processor at AWS re:Invent last month.

AWS acquired Annapurna Labs in early 2015. From various sources, Annapurna was working on low-powered, high-performance networking chips for the mid-range market. The key words – low-powered, high-performance, mid-range – are certainly the musical notes to the AWS opus. And that would mean the ability for AWS to control their own destiny, even at the edge. Continue reading

Pure Electric!

I didn’t get a chance to attend the Pure Accelerate event last month. From the blogs and tweets of my friends, Pure Accelerate was an awesome event. When I got the email invitation for the localized Pure Live! event in Kuala Lumpur, I told myself that I had to attend.

The event was yesterday, and I was not disappointed. Coming off a strong fiscal Q1 2018, it appears that Pure Storage has gotten many things together and is chugging at full steam on all fronts.

When Pure Storage first came out, I was one of the early bloggers who took a fancy to them. My 2011 blog mentioned the storage luminaries on their team. Since then, they have come a long way. And it was apt that on the same morning yesterday, the latest Gartner Magic Quadrant for Solid State Arrays 2017 was released.

Continue reading

Ryussi MoSMB – High performance SMB

I am back in Silicon Valley as a Storage Field Day 12 delegate.

One of the early presenters was Ryussi, who was sharing a proprietary SMB server implementation for Linux and Unix systems. The first thing that came to my mind was: why not SAMBA? It’s free; it works; it has 25 years of maturity. But in my experience, SAMBA, even in the present 4.x releases, has its quirks and challenges, especially in the performance of large file transfers.

One of my customers uses our FreeNAS box. It’s a 50TB box for computer graphics artists and a rendering engine. After running the box for about 3 months, one case escalated to us: the SMB shares suddenly could not be mapped. All the Windows clients were running Windows 10. Our investigation led us to look at the performance of SMB in the SAMBA 4 implementation within FreeNAS.

This led to other questions, such as vfs_aio_pthread, the FreeBSD/FreeNAS implementation of asynchronous I/O that works around the performance weaknesses of the POSIX AsyncIO interface. The FreeNAS forum is flooded with sightings of the SMB service going missing during large file transfers. Without getting too deep into the SMB performance issue, we decided to set the “Server Minimum Protocol” and “Server Maximum Protocol” to SMB 2.1. The FreeNAS box at the customer’s site has been stable for the past 5 months.
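
For anyone hitting the same symptoms, here is roughly what that workaround looks like in smb.conf terms. This is a sketch, not the exact FreeNAS configuration we deployed (we set it through the SMB service options in the FreeNAS UI); the parameter names and values below are the standard Samba ones, so verify them against your own Samba version before applying:

```
[global]
    # Pin protocol negotiation to SMB 2.1 on both ends, so Windows 10
    # clients do not negotiate SMB3 during large file transfers
    server min protocol = SMB2_10
    server max protocol = SMB2_10
```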

Continue reading

Disaster Recovery has changed

Simple and affordable Disaster Recovery? Sounds oxymoronic, right?

I have been immersed in the small and medium business (SMB) space in the past few months. I have seen many SMBs resort to the cheapest solution they can get their hands on. It could be a Synology here or a QNAP there, and that’s their backup plan. That’s their DR plan. When disaster strikes, they just shrug their shoulders and accept their fate. It could be human error, accidental data deletion, virus infection, data corruption and, recently, ransomware! But these SMBs do not have the IT resources to deal with the challenges these “disasters” bring.

Recently I attended a Business Continuity Institute forum organized by the Malaysian Chapter. Several vendors and practitioners spoke about organizational preparedness and readiness for DR. And I would like to stress the words “preparedness” and “readiness”. In the infrastructure world, we often build redundancy into DR planning, and this means additional cost. SMBs cannot afford this redundancy. Furthermore, larger organizations have BC and DR coordinators who are dedicated to the purpose of BC and DR. An SMB probably has one person who doubles up as the IT administrator.

However, for IT folks, virtualization and cloud technologies are beginning to germinate a new generation of DR solutions: DR solutions that bring simplicity to replication and backup, and at the same time are affordable. Many vendors are beginning to offer DR-as-a-Service, and indeed DR-as-a-Service has become a Gartner Magic Quadrant category. Here’s a look at the 2016 Gartner Magic Quadrant for DR-as-a-Service.

[Image: Gartner Magic Quadrant for DR-as-a-Service, 2016]

And during these few months, I have encountered 3 vendors in this space, all sitting in the Visionaries quadrant. One came to town and started smashing laptops to jazz up their show (I am not going to name that vendor). Another kept sending me weird, kind of sleazy emails like “Got time for a quick call?”

Continue reading

The transcendence of Data Fabric

The Register wrote a damning piece about NetApp a few days ago. I felt it was irresponsible, because it was akin to kicking a man when he’s down. It is easy to do that. The writer is clearly missing the forest for the trees. He was targeting NetApp’s Clustered Data ONTAP (cDOT) and missing the entire philosophy of NetApp’s mission and vision of Data Fabric.

I have always been a strong believer that you must treat data like water. Just as Jeff Goldblum famously said in Jurassic Park, “Life finds a way”: data, as it moves through its lifecycle, will find its way into the cloud and back.

And every storage vendor today has a cloud story to tell. It is exciting to listen to everyone sharing their cloud story. Cloud makes sense when it addresses different workloads, such as sharing folders across multiple devices, backing up and archiving data to the cloud, tiering to the cloud, and the different cloud service models of IaaS, PaaS, SaaS and XaaS.

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which could only handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.

NAS – defined by the CIFS/SMB and NFS protocols – was happily chugging along on 100 Mbit/sec networks, occasionally getting sucked into arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”. Continue reading

Praying to the hypervisor God

I was reading a great article by Frank Denneman about storage intelligence moving up the stack. It was pretty much in line with what I have been observing in the past 18 months or so: the storage pendulum has swung back to DAS (direct attached storage). To be more precise, the DAS form factor I am referring to is physical server hardware that houses many disk drives.

Like it or not, the hypervisor has become the center of the universe in the IT space. VMware has become the indomitable force in hypervisor technology, with Microsoft Hyper-V playing catch-up. The seismic shift caused by these 2 hypervisor technologies is leading storage vendors to place them on the altar and revere them as deities. The others – the likes of Xen and KVM, and to a lesser extent Solaris Containers – aren’t really worth mentioning.

This shift, as the pendulum swings from networked storage back to internal “direct-attached” storage, is dictated by 4 main technology factors:

  • The x86 server architecture
  • Software-defined
  • Scale-out architecture
  • Flash-based storage technology

Anyone remember Thumper? Not the Disney character from the Bambi movie!


When the SunFire X4500 (aka Thumper) was first released (intermission: checking Wiki for the right year) in 2006, I felt it inflicted a significant wound on the networked storage industry. Instead of the usual 4-8 hard disk drives found in industry servers at the time, the X4500’s 4U chassis housed 48 hard disk drives. The design and architecture were so astounding to me that I even went and bought a 1U SunFire X4150 for my personal server collection. Such was my adoration for Sun’s technology at the time.

Continue reading

No Flash in the pan

The storage networking market is now teeming with flash solutions. Consumers are probably sick to their stomachs trying to gain a better insight into which flash solution they should be considering. There is so much hype, fuzz and buzz, like a swarm of bees, and yet in the chaos of the moment there is actually a calm and discernible pattern slowly, but surely, emerging. Storage networking guys would probably know this well, but for the benefit of other readers, how we view flash (and other solid-state storage) becomes clear with the picture below:

Flash performance gap

(picture courtesy of  http://electronicdesign.com/memory/evolution-solid-state-storage-enterprise-servers)

Right at the top, we have the CPU/Memory complex (labelled as Processor). Our applications, or at least bytes and pieces of them, run in this CPU/Memory complex.

Therefore, we can see Pattern #1 showing up. Continue reading