Last week, Qlik™ issued a press release about a study it sponsored by TechTarget®‘s Enterprise Strategy Group (ESG) on the state of responsible AI practices across industries. The study highlighted critical gaps in approaches to responsible AI, ethical AI practices, and AI regulatory compliance. Drawing from the study, Qlik™ emphasizes having a solid data foundation. To get to that bedrock foundation, we must trust the data, and we must be responsible for the kinds of data that build that foundation. Hence, Data Trust and Data Responsibility.
There is an AI boom right now. Last year alone, the AI machine and its hype added USD$2.4 trillion in market cap to US tech companies. Five months into 2024, AI is still supernova hot. And many remain fixated on the fables and tales of AI’s pompous, supposedly infallible splendour. It is this blind faith that leads many users and vendors alike to sidestep the realities of AI in its present state.
AI is not always responsible. That begs the question: “Are we really working with a responsible set of AI applications and ecosystems?”
Unfortunately, AI still hallucinates. How an AI application arrives at a conclusion and a recommended decision is often opaque. What if you had a conversation with ChatGPT and it said that you are dead? Well, that was exactly what happened when Tom’s Guide writer, Tony Polanco, found out from ChatGPT that he had passed away in September 2021.
Perhaps it was a one-off thing, or perhaps it was the prompt questions Tony asked, or perhaps the free version, ChatGPT 3.5, was using outdated information. Either way, it was shocking. Now imagine that Tony tried to get a bank loan and an AI application informed the bank clerk not to engage with Tony because he was already “not alive”. Can you imagine the devastation of such a falsehood?
Cart before the horse. Where is the data trust?
I wrote this LinkedIn® article 5 years ago, titled “Data privacy first before AI framework“. It was my response to the usual way the Malaysian government likes to approach shiny new technology: always putting the cart before the horse.
In my article I wrote, “I see Data Privacy as the bedrock of Ethical Use of Information. Now my question to the authorities, to the responsible agencies, and to MDEC is ‘If our Data Privacy is treated poorly, can we expect Good Ethics to prevail in the National AI Framework?’” That article, in turn, pointed to an even older LinkedIn article of mine, Data Responsibility.
The Malaysian PDPA (Personal Data Protection Act) was enacted in 2010. Fourteen years later, I still see my personally identifiable information, such as my home address, my IC (identity card) number, and my phone number, handled carelessly and dangerously by telemarketers and scammers. There is little enforcement against the misuse and abuse of my personal details, and I am sure every one of us in Malaysia can attest to similar experiences.
Why would we willingly hand over our personal details to so-called “responsible AI” applications? In fact, even before any kind of sovereign Malaysian AI application, many Malaysians are avoiding registration with PADU, a central hub database. There is no trust.
Despite how the present Malaysian government may spin the tepid PADU reception, there is a trust deficit. We simply do not want to trust, because of the lack of transparency about what might happen once the government gets hold of our personal data. A LinkedIn acquaintance of mine, Kathirgugan Kathirasen, even speculated that PADU may be a precursor to a social credit system.
Is there data trust? Not in the way I see it. Which points to another five-year-old LinkedIn article of mine, “Malaysia, when will you take data privacy seriously?”
Responsible AI? Not yet.
Much has to be done before users, citizens, and consumers can trust AI applications, or anything with an AI tag on it. Yes, you may call me a skeptic, but not because I am skeptical about the future of AI. I am not. AI in Malaysia, and sovereign AI especially, will have a great future, if only …
I am skeptical of how “AI” is presented to all of us. I am wary of the “truth” in AI outcomes because the data, the fuel that feeds the algorithms of AI, may not have come from data we can trust, data that was managed and governed responsibly.
I am starting to explore Data Governance more deeply. I am still learning to structure the designs of data governance within the frameworks of data infrastructure and data management, and to help organizations handle the data they possess, curate, and share responsibly, with the deepest integrity and a high degree of trustworthiness. Only then can we talk about responsible AI.