Storage Gaga
Going Ga-ga over storage networking technologies ….

Tag Archives: zettabytes

And great AI starts with good Data Management

By cfheoh | November 20, 2023

Processing data has become more expensive.

Somewhere along the way, a misconception took hold that data processing is cheap. It stems from the well-known pricing of public cloud storage capacity, at a fraction of a cent per GB per month. But data in storage has to be worked on; it has to be built up and protected to increase its value. Data has to be processed, moved, shared, and used by applications. Data induces workloads. Nobody keeps data stored forever, never to be used again. Nobody buys storage for capacity alone.

We have a great saying in the industry: no matter where the data moves, it will land in a storage somewhere. So it is clear that data does not exist in the ether. And yet I often see how little attention, prudence, and care go into data infrastructure and data management technologies, the very components that are foundational to great data.

Great data management for Great AI

AI is driving up costs in data processing

A few recent articles drew my focus to the cost of data processing.

Here is one posted by a friend on Facebook. It is titled “The world is running out of data to feed AI, experts warn.”

My first reaction was, “How can we run out of data?” We have so much data in the world today that the 175 zettabytes IDC predicted we would reach by 2025 might be grossly inaccurate. According to Exploding Topics, it is estimated that we create 328.77 million terabytes of data per day, or 120 zettabytes per year. While I cannot vouch for the accuracy of the numbers, the numbers are humongous.
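As a quick sanity check on those two figures (a minimal sketch in Python; the daily and yearly numbers are simply the Exploding Topics estimates quoted above, in decimal units where 1 ZB = 10^9 TB):

    # Does 328.77 million TB/day line up with 120 ZB/year?
    TB_PER_DAY = 328.77e6   # Exploding Topics daily estimate, in terabytes
    TB_PER_ZB = 1e9         # decimal zettabyte = one billion terabytes

    zb_per_year = TB_PER_DAY * 365 / TB_PER_ZB
    print(f"{zb_per_year:.1f} ZB/year")   # prints 120.0 ZB/year

The two estimates are consistent with each other, which at least tells us they came from the same arithmetic, even if the underlying measurement is anyone's guess.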

Continue reading →
