[ This is part two of “Where are your files living now?”. You can read Part One here ]
“Data locality, data mobility” is a term I like to use a lot when describing data consolidation, and it led to my mention of files and folders, and where they live, in my previous blog. The thought that files and folders can now be everywhere, spread across a plethora of premises, stretches the premise of SSOT (Single Source of Truth). And this expatriation of files with minimal checks and balances disturbs me.
A year ago, just before I joined iXsystems, I was given embargoed news from Google®, probably a week before they announced BigQuery Omni. I was then interviewed by Enterprise IT News, a local Malaysian technology news portal, for an opinion quote. This was what I said:
“The ‘data warehouse in the cloud’ managed service of BigQuery is underpinned by Google® Anthos, its hybrid cloud infra and service management platform based on GKE (Google® Kubernetes Engine). The containerised applications, both on-prem and in the multi-clouds, would allow Anthos to secure and orchestrate infra, services and policy management under one roof.”
I further quoted: “The data repositories remaining in each cloud is good for addressing data sovereignty and data security concerns, but it did not mention how it addresses ‘single source of truth’ across multi-clouds.”
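To make that tangible, here is a minimal sketch of what querying a BigQuery dataset from Python looks like with the google-cloud-bigquery client. The project, dataset and table names are my own hypothetical placeholders, and this is not Omni-specific; it only illustrates the single query interface that the Anthos-managed infrastructure would sit beneath.

```python
# A minimal sketch of querying BigQuery from Python with the official
# google-cloud-bigquery client. The project, dataset and table names are
# hypothetical placeholders, and this is not Omni-specific; it only shows
# the single query interface under which the data repositories would sit.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

sql = """
    SELECT region, COUNT(*) AS events
    FROM `my-project.telemetry.events`
    GROUP BY region
"""

# The repositories stay where they are; the query (and its results) travel.
for row in client.query(sql).result():
    print(row.region, row.events)
```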
Single Source of Truth – regardless of repositories
[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the vendors’ technologies presented at this event. The content of this blog is of my own opinions and views ]
Cloud computing will have challenges processing data at the outer reaches of its tentacles. Edge computing, as it melds with the Internet of Things (IoT), needs a different approach to data processing and data storage. Data generated at source has to be processed at source, to respond to the event or events that have happened. Cloud computing, even with 5G networks, does not have latency low enough for an autonomous vehicle reacting to pedestrians on the road at speed, for a sprinkler system being activated in a fire, or even for a fraud detection system signalling money laundering activities as they occur.
Furthermore, not all sensors, devices and IoT end-points are connected to the cloud at all times. To understand this new way of data processing and data storage, have a look at this video by Jay Kreps, CEO of Confluent, the company behind Apache Kafka®.
Data is continuously and infinitely generated at source, and this data has to be compiled, controlled and consolidated with nanosecond precision. At Storage Field Day 19, an interesting open source project, Pravega, was introduced to the delegates by Dell EMC. Pravega is an open source storage framework for streaming data and is part of Project Nautilus.
The data generated at source (end-points, sensors, devices) is serialized, timestamped (as each event occurs), continuous and infinite. These are the properties of a time series data stream, and to make sense of the streaming data, newer data formats such as Avro, Parquet and ORC pepper the landscape, alongside the more mature JSON and XML, each with its own strengths and weaknesses.
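To see why a format like Avro suits serialized, timestamped, continuous data better than verbose JSON or XML, here is a minimal sketch using the fastavro library. The schema and field names are my own hypothetical illustration, not from any of the projects mentioned here.

```python
# A minimal sketch of serializing a timestamped sensor event stream to Avro
# with the fastavro library. The schema and field names are hypothetical.
import io
import time

from fastavro import reader, writer

schema = {
    "type": "record",
    "name": "SensorEvent",
    "fields": [
        {"name": "sensor_id", "type": "string"},
        {"name": "timestamp_ns", "type": "long"},   # event time at source
        {"name": "reading", "type": "double"},
    ],
}

events = [
    {"sensor_id": "s-001", "timestamp_ns": time.time_ns(), "reading": 21.4},
    {"sensor_id": "s-002", "timestamp_ns": time.time_ns(), "reading": 19.8},
]

# Avro writes compact, schema-tagged binary, unlike verbose JSON or XML.
buf = io.BytesIO()
writer(buf, schema, events)

buf.seek(0)
for event in reader(buf):
    print(event)
```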
You can learn more about these data formats in the 2 links below:
Many time series projects started as DIY efforts in organizations, and many remain DIY projects even in production systems. They depend on tribal knowledge, and these databases are tied to unmanaged storage that is not congruent with the properties of streaming data.
At the storage end, today’s technologies still rely on the SAN and NAS protocols and, in recent years, on S3 with object storage. Block, file and object storage introduce layers of abstraction which may not be a good fit for streaming data.
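Here is a hedged sketch of that mismatch: landing stream data through the S3 API (via boto3, with hypothetical bucket and key names) forces every micro-batch into a whole new immutable object, because object storage has no append.

```python
# A minimal sketch of landing stream data on object storage through the
# S3 API with boto3. The bucket and key names are hypothetical. Note the
# mismatch: S3 has no append, so every micro-batch becomes a whole new
# immutable object rather than a continuation of a stream.
import json
import time

import boto3

s3 = boto3.client("s3")

batch = [{"sensor_id": "s-001", "ts_ns": time.time_ns(), "reading": 21.4}]

# One object per micro-batch; a purpose-built streaming store would append.
s3.put_object(
    Bucket="edge-telemetry",              # hypothetical bucket
    Key=f"events/{time.time_ns()}.json",  # one key per batch
    Body=json.dumps(batch).encode("utf-8"),
)
```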
I have been following Intel for a few years now, in large part for their push of the 3D XPoint technology. Under the Optane brand, Intel has several forms of media, addressing everything from persistent memory to storage class memory and solid state storage. In recent years, Intel has been more at the forefront with its larger technology portfolio, and it is not just about their processors anymore. One of the bright areas I am seeing myself getting more engrossed in (and involved with) is their IoT (Internet of Things) portfolio, and it has been very exciting so far.
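For a flavour of what persistent memory programming looks like in app-direct mode, here is a rough sketch using plain mmap against a hypothetical DAX-mounted file. Real deployments would use Intel’s PMDK libraries rather than this, but the idea of loads and stores without a block I/O stack is the same.

```python
# A rough sketch of byte-addressable access to persistent memory exposed
# as a file on a DAX-mounted filesystem. The path is hypothetical, and
# production code would use Intel's PMDK libraries rather than plain mmap,
# but the idea is the same: loads and stores against the media, with no
# block I/O stack in between.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/log"   # hypothetical DAX mount point
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:5] = b"hello"       # a store, not a write() system call
    pmem.flush()               # push the change out to the media

os.close(fd)
```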
Intel IoT and Deep Learning Frameworks
The efforts of the Intel IoTG (Internet of Things Group) in Asia Pacific are rapidly gaining recognition. The drive of the Industry 4.0 revolution is strong, and I saw the brightest sparks of the Intel folks pushing the Industry 4.0 message on home ground in Malaysia.
After Intel’s large showing at the Semicon event 2 months ago, they turned it up a notch in Penang at their own Intel IoT Summit 2019, which concluded last week.
At the event, Intel brought out their solid engineering geeks. There were plenty of talks and workshops on Deep Learning, AI and Neural Networks, with chatter about Nervana, Nauta and Saffron. Despite all the technology and engineering prowess Intel was showcasing, there was a worrying gap.
[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day event, Storage Field Day 18, from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]
3 weeks after Storage Field Day 18, I was still trying to wrap my head around the 3-hour session we had with Western Digital. I was like a kid in a candy store for a while, because there was too much to chew on and I couldn’t munch it all.
From “Silicon to Systems”
Not many storage companies in the world can claim that mantra: “From Silicon to Systems”. Western Digital is probably one of 3 companies (the other 2 being Intel and NVIDIA) I know of at present that develop vertical innovation and integration, end to end, from components to platforms to systems.
For a long time, we have known Western Digital as a hard disk company. It owns HGST and SanDisk, providing the drives, the flash and the CompactFlash cards for both the consumer and the enterprise markets. However, in recent years, through 2 eyebrow-raising acquisitions, Western Digital has been moving itself up the infrastructure stack. In 2015, it acquired Amplidata; 2 years later, it acquired Tegile Systems. At the time, I wondered why a hard disk manufacturer was buying storage technology companies that were not its usual bread-and-butter business.
The hyperconverged platform for secondary data, or is it?
When Cohesity came onto the scene, they were branded the latest unicorn alongside Rubrik. Both were gunning to be the top hyperconverged platform for secondary data. Crazy money was pouring into that segment: Cohesity got USD250 million in June 2018; Rubrik received USD261 million in Jan 2019, making the market for hyperconverged platforms for secondary data red-hot.
It has been on my mind for a long time and I have been avoiding it too. But it is time to face the inevitable and just talk about it. After all, the more open the discussions, the more answers (and questions) will arise, and that is a good thing.
Yes, it is the big elephant in the room called Data Security. And the concern is going to get much worse as the proliferation of edge devices, fog computing and IoT technobabble goes nuclear.
I have been involved in numerous discussions on IoT (Internet of Things) and Industrial Revolution 4.0. I have been in a consortium for the past 10 months, discussing with several experts in their fields how to face the future with IR4.0. Malaysia just announced its National Policy for Industry 4.0 last week, known as Industry4WRD. Whilst the policy is just that, a policy, there are many thoughts on the implementation of IoT devices and edge and fog computing. And the thing that has been bugging me is related to, of course, storage, most notably storage and data security.
Storage on edge devices is likely to be ephemeral, and the data in that storage, transient. We can discuss persistence in storage at the edge another day, because what I would like to address is the data security of these storage components. That is the big elephant in the room I was referring to.
The more I work with IoT devices and their different frameworks (there are so many of them), the more enlightened I become about the need to address data security. The proliferation and exponential multiplication of IoT devices, now and in the coming future, have increased the attack vectors many times over. Many IoT devices are simplified components lacking the guardrails of data security, and are easily exposed. These components are designed with simplicity and efficiency in mind. Things such as I/O performance, storage management and data security are probably the least important factors, because every single manufacturer and vendor is slogging to make their mark and presence in this wild, wild west of a world.
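Even a constrained edge device can do the basics before data leaves it. Here is a minimal sketch, assuming the Python cryptography library and deliberately naive key handling, of authenticated encryption for a sensor reading.

```python
# A minimal sketch of what even a constrained edge device could do before
# data leaves it: authenticated encryption using the cryptography library's
# Fernet recipe. The key handling here is deliberately naive; a real device
# would need secure key provisioning and storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice: provisioned and stored securely
f = Fernet(key)

reading = b'{"sensor_id": "s-001", "reading": 21.4}'
token = f.encrypt(reading)     # ciphertext with built-in integrity check

assert f.decrypt(token) == reading
```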
Picture from https://fcw.com/articles/2018/08/07/comment-iot-physical-risk.aspx