Storage Gaga
Going Ga-ga over storage networking technologies ….

Tag Archives: metadata server

TrueNAS SCALE Clustered SMB to the fore

By cfheoh | July 4, 2022 - 8:00 am |July 4, 2022 API, Appliance, Business Continuity, CIFS, Clusters, Containers, Data Availability, Data Management, Disaster Recovery, Filesystems, FreeNAS, Gluster, High Performance Computing, iXsystems, Microsoft, NAS, NFS, Ryuusi, Scale-out architecture, SCSI, SMB, Software Defined Storage, TrueNAS, Unified Storage, Virtualization
Leave a comment

iXsystems™ released the second iteration of the TrueNAS® SCALE software just over a week ago. It is known as version 22.02.2, or Angelfish.2, and its most notable upgrades are HA (High Availability) for SCALE and Clustered SMB capabilities. This is the perfect excuse for me to learn about Clustered SMB and share what I have learned.

TrueNAS SCALE

For the uninformed, Clustered SMB brings highly available SMB file sharing services to mission critical environments. More importantly, Clustered SMB is high availability in a scale-out clustered architecture.

My view beyond HA SMB

I am not familiar with Clustered SMB in a NAS (Network Attached Storage). The world I am more familiar with is either having CIFS/SMB file services on a dual-controller storage appliance, or running Windows File Sharing on a Microsoft® Cluster Service (MSCS). In both of these HA SMB services, the scale-up architecture requires shared access to a consolidated storage volume. Behind the scenes, there are many mechanisms at play to ensure that one, and only one, storage controller or HA host has write access at any one time. The most common mechanism is SCSI-3 Persistent Reservation, sometimes known as SCSI fencing, which uses the SPC-3 (SCSI Primary Commands) primitives. The whole objective is to prevent 2 nodes or hosts from writing to the shared storage volume at the same time, and to avoid issues like split-brain.
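To make the fencing behaviour concrete, here is a minimal toy model of the SCSI-3 Persistent Reservation (Write Exclusive) semantics described above. This is an illustration only, not a real SCSI stack; all the class and method names are my own invention (on Linux, the real commands are issued with `sg_persist` from sg3_utils).

```python
# Toy model of SCSI-3 Persistent Reservation (Write Exclusive) semantics:
# every node registers a key, but only the reservation holder may write.

class SharedLun:
    def __init__(self):
        self.registered = set()   # keys registered via PERSISTENT RESERVE OUT / REGISTER
        self.holder = None        # key holding the Write Exclusive reservation

    def register(self, key):
        self.registered.add(key)

    def reserve(self, key):
        # PERSISTENT RESERVE OUT / RESERVE: succeeds only for a registered key
        # when no other node already holds the reservation
        if key in self.registered and self.holder in (None, key):
            self.holder = key
            return True
        return False

    def preempt(self, key, victim):
        # PERSISTENT RESERVE OUT / PREEMPT: on failover, the surviving node
        # fences its failed peer by ejecting its key and taking the reservation
        if key in self.registered:
            self.registered.discard(victim)
            self.holder = key
            return True
        return False

    def write(self, key):
        # only the reservation holder gets write access; everyone else is fenced
        return key == self.holder

lun = SharedLun()
lun.register("node-A")
lun.register("node-B")
lun.reserve("node-A")
assert lun.write("node-A") and not lun.write("node-B")   # node-B is fenced
lun.preempt("node-B", victim="node-A")                   # failover: B preempts A
assert lun.write("node-B") and not lun.write("node-A")
```

The point of the sketch is the invariant: at every moment at most one key holds the reservation, which is exactly what keeps two HA hosts from writing to the shared volume at once.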

Continue reading →

Tagged Clustered Samba, Clustered Trivial Database, CTDB, distributed clustering, federated clustering, High Availability, metadata server, Microsoft Cluster Server, Samba, scale-out file system, SCSI fencing, SCSI persistent reservation, Trivial Data Base, TrueCommand, windows file sharing

Glusterific!

By cfheoh | May 25, 2020 - 9:30 am |May 22, 2020 Acquisition, Algorithm, Appliance, Ceph, CIFS, Cloud, Containers, Disks, Filesystems, FreeNAS, Gluster, High Performance Computing, Hyperconvergence, IBM, Infiniband, Intel, Isilon, iXsystems, Linux, Lustre, NAS, NetApp, NFS, Object Storage, Openstack, Panasas, Quantum Corporation, RAID, RDMA, Redhat, Scale-out architecture, Server SAN, SMB, Software Defined Storage, Storage Optimization, TrueNAS, Virtualization
Leave a comment

A conversation with a storage executive last week brought up Gluster, a clustered file system I have not explored in many years. I had one interaction with it months before its acquisition by Red Hat® in 2011.

I remember the Gluster demo at Jaring over a video call, because I was the lead consultant pitching the scale-out NAS solution. It did not go well, and there were “bugs” which made the Head of IT flinch in her seat. Despite Jaring being Malaysia’s technology trailblazer, the impression Gluster left was forgettable. I followed the GlusterFS architecture for a little while, and then it dropped off my radar.

Gluster Scale-Out NAS

But after the conversation last week, I am elated to revive my interest in Gluster, knowing that something big and impressive is coming to the fore very soon. Studying the architecture (again!), there are 2 parts of Gluster that excite me. One is the Brick, and the other is the lack of a metadata service.
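The lack of a metadata service is worth a small sketch. Gluster’s distribute translator (DHT) hashes a file’s name to choose the brick that holds it, so any client can compute a file’s location on its own, with no central metadata server to consult. The snippet below is a simplified stand-in: real GlusterFS uses an elastic hash with per-directory hash ranges, not a plain modulo over MD5.

```python
# Why Gluster needs no metadata server: file placement is a pure function
# of the file name, so every client computes the same answer independently.
import hashlib

BRICKS = ["server1:/brick1", "server2:/brick1", "server3:/brick1"]

def brick_for(filename: str) -> str:
    # hash the file name (not a lookup in a central table) into the brick set
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return BRICKS[h % len(BRICKS)]

# any two clients agree on placement without talking to a metadata service
assert brick_for("report.pdf") == brick_for("report.pdf")
```

Removing the metadata server removes both a bottleneck and a single point of failure, which is a big part of why this design appeals in scale-out NAS.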

Continue reading →

Tagged Brick, CephFS, cluster, Clustered parallel file system, Directory, distributed namespace, file system designs, GlusterFS, HCI, Hyperconverged Infrastructure, metadata server, Node, Noobaa, Redhat Summit, replication factor, scale out, Volume, XFS, ZFS

The power of E8

By cfheoh | November 21, 2017 - 4:22 pm |November 21, 2017 Analytics, API, Big Data, Data Availability, Data Fabric, Data Management, E8 Storage, Filesystems, High Performance Computing, Hyperconvergence, Infiniband, NVMe, PCIe, Performance Benchmark, Performance Caching, RDMA, Scale-out architecture, Server SAN, Software Defined Storage, Solid State Devices, Storage Optimization
2 Comments

[Preamble: I was a delegate of Storage Field Day 14 from Nov 8-10, 2017. My expenses, travel and accommodation were paid for by GestaltIT, the organizer and I was not obligated to blog or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

E8 Storage’s technology update at Storage Field Day 14 was impressive. Of the several next-generation NVMe storage technologies I have explored so far, E8 came out as the most complete. It was no surprise that they won “Best of Show” at the Flash Memory Summit for “Most Innovative Flash Memory Technology” in 2016 and “Most Innovative Flash Memory Enterprise Business Application” in 2017.

Who is E8 Storage?

They came out of stealth in August 2016 and have been making waves with very impressive stats. When E8 was announced, their numbers were more than 10 million IOPS, with 100µsec reads and 40µsec writes. And in the SFD14 demo, they reached and surpassed the 10 million IOPS mark.

The design philosophy of E8 Storage is different from the traditional dual-controller scale-up storage architecture and from the multi-node scale-out cluster design. In fact, from a 30,000-foot view, it is quite similar to the “SAN-client” design advocated by Lustre, leveraging a very high throughput, low latency network.
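The “SAN-client” idea can be sketched as a split between a thin control path and a direct data path: a central controller hands each host agent a map of where its data lives, and the host then performs I/O straight against the storage over the low-latency network, with the controller out of the I/O path. The sketch below is purely conceptual; the class names and the stripe-to-drive layout are my own illustration, not E8’s actual design or API.

```python
# Conceptual control-path / data-path split in a "SAN-client" design:
# the controller is consulted once for layout, then reads go direct.

class Controller:
    """Control path: owns the volume-to-drive layout, touched rarely."""
    def __init__(self, drives):
        self.drives = drives

    def volume_map(self, volume, stripes=4):
        # tell the client which drive holds each stripe of the volume
        return {s: self.drives[s % len(self.drives)] for s in range(stripes)}

class HostAgent:
    """Data path: caches the layout, then accesses drives directly."""
    def __init__(self, controller, volume):
        self.layout = controller.volume_map(volume)   # one control-path call

    def read(self, stripe):
        # in the real design this would be an RDMA read straight to the drive
        return f"read stripe {stripe} direct from {self.layout[stripe]}"

ctrl = Controller(["nvme0", "nvme1"])
agent = HostAgent(ctrl, "vol1")
assert "nvme1" in agent.read(1)   # the controller is not in the I/O path
```

Keeping the controller out of the data path is what lets this class of design chase tens of millions of IOPS: the per-I/O latency is set by the network and the drives, not by a controller in the middle.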

Continue reading →

Tagged controllers, COTS, E8 Storage, high performance, high throughput, host agent, Infiniband, IOPS, low latency, Lustre, metadata server, patented, RDMA, ROCEv2
Storage Gaga | Powered by Mantra & WordPress.