The Big Elephant in IoT Storage

It has been on my mind for a long time and I have been avoiding it too. But it is time to face the inevitable and just talk about it. After all, the more open the discussions, the more answers (and questions) will arise, and that is a good thing.

Yes, it is the big elephant in the room called Data Security. And the concern is going to get much worse as the proliferation of edge devices, fog computing and IoT technobabble goes nuclear.

I have been involved in numerous discussions on IoT (Internet of Things) and Industrial Revolution 4.0. For the past 10 months I have been part of a consortium, discussing with several experts in their respective fields how to face the future with IR4.0. Malaysia just announced its National Policy for Industry 4.0 last week, known as Industry4WRD. A policy is just a policy, but there are many thoughts on implementing IoT devices, edge and fog computing. And the thing that has been bugging me is related to, of course, storage, most notably data security in storage.

Storage on edge devices is likely to be ephemeral, and the data in that storage, transient. We can discuss persistence in edge storage another day, because what I would like to address is the data security of these storage components. That is the Big Elephant in the room I was referring to.

The more I work with IoT devices and the different frameworks (there are so many of them), the more convinced I become of the need to address data security. The proliferation and exponential multiplication of IoT devices, now and in the coming years, have increased the attack vectors manyfold. Many IoT devices are simplified components lacking the guards of data security and are easily exposed. These components are designed with simplicity and efficiency in mind. Things such as I/O performance, storage management and data security are probably the least important factors, because every single manufacturer and every single vendor is slogging to make their mark and presence in this wild, wild west of a world.
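To make the concern a little more concrete, here is a minimal sketch of one basic safeguard: encrypting readings before they ever land on the device's local flash. This is an illustration only, assuming the Python cryptography package is available on the device; the file path and key handling are hypothetical, and in practice the key would be provisioned into a secure element or TPM, never stored beside the data.

```python
# Minimal sketch: encrypt sensor readings before writing them to the edge
# device's local (ephemeral) storage, so a stolen or compromised unit does
# not leak data at rest. Key handling and paths are illustrative only.
import json
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # illustrative; normally provisioned per device
cipher = Fernet(key)

reading = {"sensor": "temp-01", "ts": time.time(), "value": 27.4}
token = cipher.encrypt(json.dumps(reading).encode())

with open("/tmp/telemetry.bin", "wb") as f:    # hypothetical staging file
    f.write(token)

# Later, before forwarding upstream to the fog or cloud tier:
with open("/tmp/telemetry.bin", "rb") as f:
    restored = json.loads(cipher.decrypt(f.read()))
print(restored)
```

It does nothing for key management, secure boot or transport security, but it shows how little code stands between "plaintext on flash" and "encrypted at rest", even on a constrained device.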

Picture from https://fcw.com/articles/2018/08/07/comment-iot-physical-risk.aspx

Continue reading

The Network is Still the Computer

[Preamble: I have been invited by GestaltIT as a delegate to their TechFieldDay from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are covered by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

Sun Microsystems coined the phrase “The Network is the Computer“. It became one of the most powerful ideologies in the computing world. Over the years, many technology companies have tried to emulate and practise the mantra, but fell short.

I had never heard of Drivescale. It wasn’t on my radar until the legendary NFS guru, Brian Pawlowski, joined them in April this year. Beepy, as he is known, was CTO of NetApp and later of Pure Storage, and has held many technology leadership roles, including leading the development of NFSv3 and v4.

Prior to Tech Field Day 17, I was given some “homework”. Stephen Foskett, Chief Cat Herder (as he is known) of Tech Field Days and Storage Field Days, highly recommended Drivescale and asked the delegates to pick up some notes on their technology. Going through a couple of the videos, I found that Drivescale’s message and philosophy resonated well with me. Perhaps it was their Sun Microsystems DNA? Many of the Drivescale team members were from Sun, and I was previously from Sun as well. I was drinking Sun’s Kool-Aid by the bucket load even before I graduated in 1991, so what Drivescale preached made a lot of sense to me.

Drivescale is all about Scale-Out Architecture at the webscale level, to address the massive scale of data processing. To understand it more deeply, we must think about “Data Locality” and “Data Mobility“. I frequently use these 2 “points of discussion” in my consulting practice when architecting and designing data center infrastructure. The gist of data locality is simple: the closer the data is to the processing, the cheaper, lighter and more efficient it gets. Moving data, the data mobility part, is expensive.
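To put some rough numbers on that point, here is a back-of-envelope sketch using my own illustrative assumptions (not measurements, and not Drivescale's figures): the time to read a working set from local drives versus the time just to move it across a 10GbE link before any processing can even start.

```python
# Back-of-envelope "data locality vs data mobility" comparison.
# All rates and sizes below are illustrative assumptions, not measurements.

dataset_tb = 100                       # size of the working set
local_read_gbps = 4 * 8                # a handful of local drives at ~4 GB/s aggregate
network_move_gbps = 10 * 0.7           # one 10GbE pipe at ~70% sustained utilisation

def hours_to_transfer(size_tb, rate_gbps):
    # 1 TB = 8000 Gbit
    return size_tb * 8000 / rate_gbps / 3600

print(f"Read {dataset_tb} TB locally     : ~{hours_to_transfer(dataset_tb, local_read_gbps):.1f} hours")
print(f"Move {dataset_tb} TB over the LAN: ~{hours_to_transfer(dataset_tb, network_move_gbps):.1f} hours")
```

With these made-up but not unreasonable numbers, simply shipping the data to the compute takes several times longer than reading it where it already sits, which is the whole argument for keeping processing close to the data.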

Continue reading

My first TechFieldDay

[Preamble: I have been invited by GestaltIT as a delegate to their TechFieldDay from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation are paid by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

I have attended a bunch of Storage Field Days over the years, but I have never attended a Tech Field Day. This coming week, I will be attending the 17th edition, TechFieldDay 17, which will be my first. I have always enjoyed Storage Field Days. Every time I joined as a delegate, there were new things to discover, and almost always, serendipity happened.

Continue reading

The Commvault 5Ps of change

[Preamble: I have been invited by Commvault via GestaltIT as a delegate to their Commvault GO conference from Oct 9-11, 2018 in Nashville, TN, USA. My expenses, travel and accommodation are paid by Commvault, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

I am a delegate of Commvault GO 2018 happening now in Nashville, Tennessee. I was also a delegate of Commvault GO 2017 held at National Harbor, Washington D.C. Because of scheduling last year, I only managed to stay about a day and a half before flying off to the West Coast. This year I was given the opportunity to experience the full conference at Commvault GO 2018. And I was able to savour the energy, the mindset and the culture of Commvault this time around.

Make no mistake, folks, BIG THINGS are happening with Commvault. I can feel it with their people, with their partners and with their customers at the GO conference. How so?

For one, Commvault is making big changes across People, Process, Pricing, Products and Perception (that’s 5 Ps). Starting with Products, they have consolidated from more than 20 products into 4, simplifying how the industry perceives Commvault. The diagram below shows the 4-product portfolio.

Continue reading

Commvault UDI – a new CPUU

[Preamble: I am a delegate of Storage Field Day 14. My expenses, travel and accommodation are paid for by GestaltIT, the organizer, and I am not obligated to blog or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]

I am here at Commvault GO 2017. Bob Hammer, Commvault’s CEO, is on stage right now. He shares his wisdom and the message is clear: IT to DT. IT to DT? Yes, Information Technology to Data Technology. It is all about the DATA.

The data landscape has changed. The cloud has changed everything, and data is everywhere. This omnipresence of data presents new complexity and new challenges. It is great to see Commvault acknowledging and accepting this change and the challenges that come with it, and introducing their HyperScale technology and their secret sauce, the Universal Dynamic Index.

Continue reading

Cloud silos after eliminating silos

I love cloud computing. I love the economics and the agility of the cloud and how it changed IT forever. The cloud has solved some of the headaches of IT, notably the silos in operations, the silos in development and the silos in infrastructure.

The virtualization and abstraction of rigid infrastructures and on-premises operations have given birth to X-as-a-Service and Cloud Services. Along with this come cloud orchestration, cloud automation, policies, DevOps and plenty more. IT has responded well, and thus public cloud services like Amazon Web Services, Microsoft Azure and Google Cloud Platform are dominating the landscape. Other cloud vendors like Rackspace, SoftLayer and Alibaba Cloud are following the leading pack, offering public, private, hybrid and specialized services as well.

In this pile, we can now see certain “camps” emerging. Many love Azure Stack and many adore AWS Lambda. Google just had their summit here in Malaysia yesterday, appealing to a green field and looking for new adopters. What we are seeing is customers and end users adopting various public cloud service providers, their services, their ecosystems, their tools, their libraries and so on. We also know that many customers and end users have several applications on AWS and some on Azure, and are perhaps looking for better deals with yet another cloud vendor. Multi-cloud is becoming the flavour of the season, and the word keeps appearing in presentations and conversations.

Yes, multi-cloud is a good thing. Customers and end users would love it because they can get the most bang for their buck, if only … it wasn’t so complicated. There aren’t many “multi-cloud” platforms out there yet. Continue reading

Don’t get too drunk on Hyper Converged

I hate the fact that I am bursting the big bubble brewing about Hyper Convergence (HC). I urge all to look past the hot air and hype frenzy going on, because in the end, the HC platforms have to be aligned and congruent with the organization’s data architecture and business plans.

The announcement of Gartner’s latest Magic Quadrant on Integrated Systems (read: hyper convergence) has put Nutanix as the leader of the pack as of August 2015. Clearly, many of us get caught up because it is the “greatest feeling in the world”. However, this faux feeling is not reality, because there are many factors that made them the pack leaders in the Magic Quadrant (MQ).

Gartner MQ Integrated Systems Aug 2015

First of all, the MQ is about market perception. There is no doubt that the pack leaders in the Leaders Quadrant have earned their right to be there. Each company’s revenue, market share, gross margin and profitability have helped put it among the leaders of the pack. However, the MQ also measures branding, marketing, market perception and acceptance, and other intangible factors.

Secondly, VMware EVO:RAIL has split the market, with EMC having 3 HC solutions in VCE, ScaleIO and EVO:RAIL. Cisco wanted to do their own HC piece with Whiptail (between the 2014 MQ and 2015 MQ reports), and closed down Whiptail when their new CEO came on board. NetApp chose EVO:RAIL and also has the ever popular FlexPod. That is why in this latest MQ report, NetApp and Cisco are assessed independently, whereas in last year’s report it was Cisco/NetApp. Market forces changed, and perception changed.  Continue reading

Praying to the hypervisor God

I was reading a great article by Frank Denneman about storage intelligence moving up the stack. It was pretty much in line with what I have been observing over the past 18 months or so: the storage pendulum has swung back to DAS (direct-attached storage). To be more precise, the DAS form factor I am referring to is physical server hardware that houses many disk drives.

Like it or not, the hypervisor has become the center of the universe in the IT space. VMware has become the indomitable force in hypervisor technology, with Microsoft Hyper-V playing catch-up. The seismic shift towards these 2 hypervisor technologies is leading storage vendors to place them on the altar and revere them as deities. The others, the likes of Xen and KVM, and to a lesser extent Solaris Containers, aren’t really worth mentioning.

This shift, as the pendulum swings from networked storage back to internal “direct-attached” storage, is dictated by 4 main technology factors:

  • The x86 server architecture
  • Software-defined
  • Scale-out architecture
  • Flash-based storage technology

Anyone remember Thumper? Not the Disney character from the Bambi movie!


When the SunFire X4500 (aka Thumper) was first released (intermission: checking Wiki for the right year) in 2006, I felt it inflicted a significant wound on the networked storage industry. Instead of the usual 4-8 hard disk drives in the industry’s servers at the time, the X4500’s 4U chassis housed 48 hard disk drives. The design and architecture were so astounding to me that I even went and bought a 1U SunFire X4150 for my personal server collection. Such was my adoration for Sun’s technology at the time.

Continue reading

Supercharging Ethernet … with a PAUSE

It’s been a while since I wrote. I had just finished a 2-week stint in Melbourne, conducting 2 Data ONTAP classes and had a blast.

But after almost 3 1/2 months of doing little except teaching NetApp classes, the stint is ending. I wanted it that way, to take a break and also to take on a new challenge. I will be taking a job with Hitachi Data Systems, going back to the industry that I have termed the “wild, wild west”. After a 4 1/2-year hiatus, I think that industry still behaves the way it did: brash, exclusive, rich! The oligarchy of the oilmen are still laughing their way to the bank. And it will be my job to sell storage (and cloud) solutions to them.

In my NetApp (and EMC) engagements over the past 6 months, I have seen greater adoption of iSCSI over Fibre Channel, and many have predicted that 10 Gigabit Ethernet will be the inflection point where iSCSI can finally stand shoulder-to-shoulder with Fibre Channel. After all, 10 Gigabit/sec is definitely faster than 8 Gigabit/sec Fibre Channel, right? WRONG! (I am perfectly aware there is 16 Gigabit/sec Fibre Channel, but can’t you see I am trying to start an argument here?)

Delivering a SCSI payload over iSCSI on 10 Gigabit/sec Ethernet does not necessarily mean it will be faster than delivering the same payload over 8 Gigabit/sec Fibre Channel. This statement can be viewed in many different ways, and hence the favourite IT reply would be … “It depends“.
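A quick, hedged back-of-envelope shows why the wire numbers alone do not settle the argument. 8Gb FC uses 8b/10b encoding while 10GbE uses 64b/66b, and iSCSI rides on Ethernet, IP and TCP headers; the figures below are simplified assumptions and deliberately ignore the things that decide real-world results, such as host CPU load, TCP retransmissions and congestion.

```python
# Simplified wire-level payload estimates (illustrative assumptions only).

def fc_8g_gbps():
    # 8GFC signals at 8.5 GBaud with 8b/10b encoding: 8 of every 10 bits are data.
    # FC frame headers/CRC shave a little more off; ignored here for simplicity.
    return 8.5 * (8 / 10)                      # ~6.8 Gbit/s of FC frames

def iscsi_10ge_gbps(mtu=1500):
    # 10GbE uses 64b/66b encoding (~10 Gbit/s of Ethernet frames), but every
    # frame also carries Ethernet/IP/TCP/iSCSI headers plus preamble, FCS and
    # inter-frame gap. Header sizes below are typical, simplified values.
    raw = 10.3125 * (64 / 66)
    payload = mtu - 20 - 20 - 48               # IPv4 + TCP + iSCSI basic header
    on_wire = mtu + 14 + 4 + 8 + 12            # Ethernet hdr + FCS + preamble + IFG
    return raw * payload / on_wire

print(f"8G FC, wire payload   : ~{fc_8g_gbps():.1f} Gbit/s")
print(f"iSCSI 10GbE, MTU 1500 : ~{iscsi_10ge_gbps(1500):.1f} Gbit/s")
print(f"iSCSI 10GbE, MTU 9000 : ~{iscsi_10ge_gbps(9000):.1f} Gbit/s")
```

On paper 10GbE has headroom; in practice TCP processing, dropped frames and congestion are what make the honest answer “it depends”.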

I will leave the performance argument for another day. Today we are going to talk about some of the key additions that supercharge 10 Gigabit Ethernet for data delivery in a storage networking capacity. In addition, 10 Gigabit Ethernet is the primary transport for Fibre Channel over Ethernet (FCoE), and it is absolutely critical that 10 Gigabit Ethernet be close to as reliable as Fibre Channel for data delivery in a storage network.

Ethernet is a non-deterministic protocol, and therefore its delivery results depend on many factors; 10 Gigabit Ethernet has inherited that trait. The delivery of data over Ethernet can be lossy, i.e. packets can get lost, and the upper-layer application protocols have to detect the dropped packets and ensure the lost packets are redelivered to complete the consignment. But delivering data in a storage network cannot be lossy, and in most SANs the requirement is for data to arrive in the sequence it was sent. The SAN fabric (especially with the common services of Layer 3 of the FC protocol stack) and the deterministic nature of the Fibre Channel protocol are the reasons many have relied on Fibre Channel SAN technology for more than a decade. How can 10 Gigabit Ethernet respond?
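One of its answers is the PAUSE of the title: IEEE 802.3x flow control (and its per-priority successor, Priority Flow Control), where a congested receiver tells the sender to stop transmitting before its buffers overflow. The toy model below is a rough illustration with made-up rates, buffer sizes and thresholds, not a faithful simulation of any switch; it contrasts a plain lossy link with one where the receiver asserts a PAUSE at a high-water mark.

```python
# A toy discrete-time model of a fast sender feeding a slower receiver,
# with and without an 802.3x-style PAUSE. Rates, buffer sizes and the
# high-water mark are made-up numbers, purely for illustration.

def run(use_pause, ticks=1000, send_rate=10, drain_rate=8,
        buffer_size=100, pause_threshold=80):
    buffered, delivered, dropped = 0, 0, 0
    for _ in range(ticks):
        # Receiver asserts PAUSE when its buffer crosses the high-water mark.
        paused = use_pause and buffered >= pause_threshold
        arriving = 0 if paused else send_rate
        # Frames that do not fit in the receive buffer are simply lost (lossy Ethernet).
        accepted = min(arriving, buffer_size - buffered)
        dropped += arriving - accepted
        buffered += accepted
        drained = min(drain_rate, buffered)
        buffered -= drained
        delivered += drained
    return delivered, dropped

for use_pause, label in ((False, "no flow control"), (True, "with PAUSE")):
    delivered, dropped = run(use_pause)
    print(f"{label:16}: delivered={delivered} frames, dropped={dropped} frames")
```

In the flow-controlled run nothing is dropped; the sender is simply throttled to the receiver's drain rate, which is the lossless behaviour a SAN expects. In the lossy run, the dropped frames would have to be noticed and retransmitted by the upper layers, which is exactly what storage traffic cannot afford.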

Continue reading

The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called would-be “Ethernet killers”. The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job, I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.

That was more than 10 years ago, and 10 years ago, ATM was hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Today, ATM is reduced to a niche telecommunication protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based SCSI technology in the enterprise. And Fibre Channel has also grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds of up to 16 Gbit/sec today.

When the networking world and the storage networking world collided (I mean combined) in Fibre Channel over Ethernet (FCoE) technology some years back, something had to give sooner or later. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hype! FCoE’s benefit? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!

Continue reading