[ Note: This is still experimental and should not be taken as production material. I took a couple of days over the weekend to “muck around” with the new iconik plug-in in FreeNAS™, to prepare it as a possible future solution. ]
This part is the continuation of Part 1 posted earlier.
iconik partnered with iXsystems™ almost a year ago. iconik is a cloud-based media content management platform. Its storage repository has many integrations with public cloud storage services such as Google Cloud, Wasabi® Cloud and more. The on-premises storage integration is made through the iconik storage gateway, which presents itself to FreeNAS™ and TrueNAS® via plug-ins.
For a limited time, you can get free access to iconik via this link.
iconik – The Application setup
[ Note: A lot of the implementation details come from this iXsystems™ documentation by Joe Dutka. This is an updated version for the latest 11.3 U1 release ]
iconik is feature-rich, and navigating it to set up the storage gateway can be daunting. Fortunately, the iXsystems™ documentation was extremely helpful. It also helps to treat this as a 2-step approach so that you won’t get overwhelmed by what is happening.
1. Set up the Application section
– Get Application ID
– Get Authorization Token
2. Set up the Storage section
– Get Storage ID
The 3 credentials (Application ID, Authorization Token, Storage ID) are required to set up the iconik Storage Gateway in the FreeNAS™ iconik plug-in setup.
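Before keying these into the plug-in, it is worth sanity-checking the 3 credentials directly against iconik’s REST API. Here is a minimal sketch in Python; the `App-ID`/`Auth-Token` header names and the storages endpoint path are my assumptions based on iconik’s public API documentation, so verify them against your own account:

```python
# A minimal sketch (not from the iXsystems™ guide): verify the 3 iconik
# credentials before pasting them into the FreeNAS™ plug-in setup.
# The header names and endpoint path below are assumptions to check
# against iconik's current API documentation.
import requests

APP_ID     = "your-application-id"       # placeholder
AUTH_TOKEN = "your-authorization-token"  # placeholder
STORAGE_ID = "your-storage-id"           # placeholder

resp = requests.get(
    f"https://app.iconik.io/API/files/v1/storages/{STORAGE_ID}/",
    headers={"App-ID": APP_ID, "Auth-Token": AUTH_TOKEN},
)

# 200 means all 3 credentials line up; 401 points at the App ID or token,
# while 404 suggests the Storage ID is wrong.
print(resp.status_code, resp.reason)
```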
The COVID-19 situation goes on unabated. A couple of my customers asked about working from home and accessing their content files, and coincidentally both are animation studios. Meanwhile, another opportunity came up, asking about a content management solution that would work with the FreeNAS™ storage system we were proposing. Over the weekend, I searched for a solution that would combine both content management and cloud access and that worked with both FreeNAS™ and TrueNAS®, and I was glad to find the iconik and TrueNAS® partnership.
In this blog (and Part 2 later), I document the key steps to set up the iconik plug-in with FreeNAS™. I am using FreeNAS™ 11.3 U1.
Dataset 777
A ZFS dataset is assigned to be the storage repository for the “Storage Target” in iconik. Since iconik has a different IAM (identity and access management) scheme than the user/group permissions in FreeNAS™, we have to give the ZFS dataset Read/Write access for all. That is the 777 permission in Unix speak. Note that there is a new ACL manager in version 11.3, and the permissions/access rights screenshot is shown here.
Take note that this part is important. We have to assign @everyone to have Full Control because the credentials at iconik are tied to the permissions we set for @everyone. Missing this part will prevent the iconik storage gateway scanner from traversing this folder, and the status will remain “Inactive”. We will discuss this part more in Part 2.
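For illustration, here is a minimal sketch (my own, not from the iXsystems™ documentation) of applying that 777 permission recursively over the dataset’s mount point; the path is a hypothetical placeholder for your own pool and dataset:

```python
# A sketch of the "777 for @everyone" requirement, applied recursively.
# /mnt/pool0/iconik is a hypothetical mount point -- substitute your own.
import os

DATASET = "/mnt/pool0/iconik"

for root, dirs, files in os.walk(DATASET):
    for name in dirs + files:
        os.chmod(os.path.join(root, name), 0o777)  # rwx for owner, group, other
os.chmod(DATASET, 0o777)

print(f"{DATASET} is world readable/writable; the iconik scanner can traverse it")
```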
[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies presented at the event. The content of this blog is of my own opinions and views. ]
“Cheap and deep”, “Race to Zero” were some of the less flattering labels I have come across when discussing object storage, and they really devalued the merits of object storage even as vendors touted the superficial glory of being in the IDC MarketScape for Object-Based Storage 2019.
Almost every single conversation I had in the past 3 years was either me explaining what object storage is, or fielding the remark, “That is cheap storage, right?”
A funny photo (below) came up on my Facebook feed a couple of weeks back. In an honest way, it depicted how a developer would think (or not think) about the storage infrastructure designs and models for their applications and workloads. It also reminded me of how DBAs used to diss storage engineers: “I don’t care about storage, as long as it is RAID 10“. That was aeons ago 😉
The world of developers and the world of infrastructure people are vastly different. Since the birth of cloud computing, both worlds have collided, and programmable infrastructure-as-code (IaC) has become part and parcel of cloud-native applications. Of course, there is no denying that there is friction.
Containerized applications are quickly defining the cloud native applications landscape. The container orchestration machinery has one dominant engine – Kubernetes.
In the world of software development and delivery, DevOps has taken a liking to containers. Containers make it easier to host and manage the life cycle of web applications inside a portable environment. They package up application code and other dependencies into building blocks to deliver consistency, efficiency, and productivity. To scale to multi-application, multi-cloud environments with thousands and even tens of thousands of microservices in containers, the Kubernetes factor comes into play. Kubernetes handles tasks like auto-scaling, rolling deployments, compute resources, volume storage and much, much more, and it is designed to run on bare metal, in the data center, the public cloud or even a hybrid cloud.
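As a small illustration of the kind of task Kubernetes automates, here is a minimal sketch using the official Kubernetes Python client to scale out a Deployment. The “web” deployment name, the “default” namespace and a reachable cluster are all assumptions for the example:

```python
# A minimal scaling sketch with the official Kubernetes Python client.
# Assumes cluster credentials in ~/.kube/config and a hypothetical
# Deployment named "web" in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()  # load cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Read the current scale of the deployment, then add 2 replicas.
scale = apps.read_namespaced_deployment_scale(name="web", namespace="default")
scale.spec.replicas += 2
apps.replace_namespaced_deployment_scale(name="web", namespace="default", body=scale)

print(f"web scaled to {scale.spec.replicas} replicas")
```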
Cloud computing will have challenges processing data at the outer reach of its tentacles. Edge Computing, as it melds with the Internet of Things (IoT), needs a different approach to data processing and data storage. Data generated at source has to be processed at source, to respond to the event or events which have happened. Cloud Computing, even with 5G networks, carries latency that cannot match how an autonomous vehicle must react to pedestrians on the road at speed, how a sprinkler system is activated in a fire, or how a fraud detection system must signal money laundering activities as they occur.
Furthermore, not all sensors, devices, and IoT end-points are connected to the cloud at all times. To understand this new way of data processing and data storage, have a look at this video by Jay Kreps, CEO of Confluent, the company behind Kafka®, for this new perspective.
Data is continuously and infinitely generated at source, and this data has to be compiled, controlled and consolidated with nanosecond precision. At Storage Field Day 19, an interesting open source project, Pravega, was introduced to the delegates by DellEMC. Pravega is an open source storage framework for streaming data and is part of Project Nautilus.
The data generated at source (end-points, sensors, devices) is serialized, timestamped (as each event occurs), continuous and infinite. These are the properties of a time series data stream, and to make sense of the streaming data, new data formats such as Avro, Parquet and ORC pepper the landscape, along with the more mature JSON and XML, each with its own strengths and weaknesses.
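To make those properties concrete, here is a minimal sketch (my own illustration, not Pravega code) of an event record that is serialized and timestamped at source as it is appended to an ordered stream. JSON is used for readability; Avro would add a schema, and Parquet a columnar layout:

```python
# A toy event record with the stream properties described above:
# serialized, timestamped at source, appended in order, never updated.
import json
import time

def make_event(sensor_id: str, reading: float) -> bytes:
    event = {
        "sensor": sensor_id,
        "reading": reading,
        "ts_ns": time.time_ns(),  # nanosecond timestamp as the event occurs
    }
    return json.dumps(event).encode("utf-8")  # serialized for the stream

# Events arrive continuously; a stream store only ever appends them.
stream = []
for i in range(3):
    stream.append(make_event("sensor-42", 20.0 + i))

print(stream[0])
```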
You can learn more about these data formats in the 2 links below:
Many time series projects started as DIY efforts, and many of them remain DIY projects in production systems as well. They depend on tribal knowledge, and these databases are tied to unmanaged storage which is not congruent with the properties of streaming data.
At the storage end, the technologies today still rely on the SAN and NAS protocols and, in recent years, S3 with object storage. Block, file and object storage introduce layers of abstraction which may not be a good fit for streaming data.
Western Digital dived into Storage Field Day 19 in full force, as they did at Storage Field Day 18, with a series of high-impact presentations, each curated for the diverse requirements of the audience. Several open source initiatives were shared, all open standards addressing present inefficiencies and designed and developed for a greater future.
Zoned Storage
One of the initiatives is to increase the efficiencies around SMR and SSD zoning capabilities and remove the complexities and overlaps of both mediums. This is the Zoned Storage initiative, a technical working proposal to the existing NVMe standards. The resulting outcome will give applications in the user space more control over the placement of data blocks on zone-aware devices and zoned SSDs, collectively known as Zoned Block Devices (ZBD). The implementation in the Linux user and kernel space is shown below:
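Separate from that diagram, here is a toy model (my own sketch, not Western Digital’s code) of the core constraint a zoned block device imposes on the host: each zone is append-only behind a write pointer, and must be reset as a whole before its space can be reused:

```python
# A toy model of a single zone on a Zoned Block Device (ZBD).
class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0  # next writable block within the zone

    def append(self, n_blocks: int) -> int:
        """Sequential-only write; returns the starting block offset."""
        if self.write_pointer + n_blocks > self.size:
            raise IOError("zone full - the host must pick another zone")
        start = self.write_pointer
        self.write_pointer += n_blocks  # the pointer only moves forward
        return start

    def reset(self):
        """Whole-zone reset: the only way to reclaim space in the zone."""
        self.write_pointer = 0

zone = Zone(size_blocks=256)
print(zone.append(64))  # 0  -- host-controlled placement, no random writes
print(zone.append(64))  # 64
zone.reset()            # reclaim the entire zone at once
```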
This blog was not planned; it was not on my list to write. But a string of events in the Storage Field Day 19 week gave me the fodder to share my thoughts. Hadoop is indeed dead.
Warning: There are Lord of the Rings references in this blog. You might want to do some research. 😉
Storage metrics never happened
The fellowship of Arjan Timmerman, Keiran Shelden, Brian Gold (Pure Storage) and myself started at the office of Pure Storage in downtown Mountain View, much like Frodo Baggins, Samwise Gamgee, Peregrin Took and Meriadoc Brandybuck forging their journey vows at Rivendell. The podcast was supposed to be on the topic of storage metrics but unanimously swung to Hadoop under the stewardship of Mr. Stephen Foskett, our host at Tech Field Day. I saw Stephen as Elrond Half-elven, the Lord of Rivendell, moderating the podcast as he would have moderated the council that planned to destroy the One Ring in Mount Doom.
So there we were talking about Hadoop, or maybe Sauron, or both.
The photo of the Oliphaunt below seemed apt to describe the industry attacks on Hadoop.
I am eager to find out what is coming from Western Digital. They have tons of storage technologies that I have yet to encounter, and this anticipation is keeping me excited for the Western D session at Storage Field Day 19.
For a few years, I have been keen on a few of Western D’s technologies that were moving up the value chain. They are:
Symbotics Design™ (although I think they changed their marketing messaging)
In my patch, the signals of the 3 Western D technologies had gone weak in the past year. However, there is a lot of momentum right now behind Zoned Storage and Zoned Namespaces, and I believe this could be what is in store for storage propeller-heads like us at Storage Field Day 19.
It is from one of my FreeNAS™ customer’s daily security run logs, emailed to our support@katanalogic.com alias. Something out there is attempting a brute force attack, trying to crack the authentication barrier via the exposed SSH port.
The installation was completed months ago, and just days after, a bot began running IP port scans on our system and found the SSH port open (we use it for remote support). It has been trying ever since, and we have been observing the source IP addresses.
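For illustration, here is a minimal sketch (my own, not the customer’s actual tooling) of tallying failed SSH logins per source IP from sshd log lines; the sample lines assume the common OpenSSH “Failed password” message format:

```python
# Tally failed SSH logins per source IP from sshd log output.
import re
from collections import Counter

sample_log = """\
Mar 14 02:11:09 freenas sshd[6121]: Failed password for root from 203.0.113.7 port 51122 ssh2
Mar 14 02:11:13 freenas sshd[6121]: Failed password for root from 203.0.113.7 port 51131 ssh2
Mar 14 02:12:40 freenas sshd[6140]: Failed password for invalid user admin from 198.51.100.23 port 40100 ssh2
"""

failed = Counter(
    m.group(1)
    for m in re.finditer(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)", sample_log)
)

for ip, count in failed.most_common():
    print(f"{ip}: {count} failed attempts")
```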
The new Ransomware attack vector
This is not surprising to me. Ransomware has become more sophisticated and more damaging than ever, because the monetary returns from ransomware are far more effective and lucrative than other cybersecurity threats so far. And the easiest prey is the weakest link in the People, Process and Technology chain. Phishing breaches through social engineering emails are the most common attack vector, but there are vishing (via voice calls) and smishing (via SMS) out there too. Of course, we do not discount other attack vectors such as malvertising sites, or exploits and so on. Anything to deliver the ransomware payload.
This is NOT an advertisement for coloured balls.
This is the vendors’ license to brag for the next 2 weeks or so, as we approach the 2020 new year. This, of course, is the latest 2019 IDC MarketScape for Object-Based Storage, released last week.
My object storage mentions
I have written extensively about Object Storage since 2011. With different angles and perspectives, here are some of them: