AI needs data we can trust

[ Note: This article was published on LinkedIn on Jan 21st 2020. Here is the link to the original article ]

In 2020, the intensity of the debate around Artificial Intelligence will only escalate.

One news item that came out last week terrified me. The Sarawak courts want to apply Artificial Intelligence to mete out judgment and punishment, perhaps on a small scale.

Quoted in the news article: “Courts in Sarawak are set to use Artificial Intelligence (AI) to provide judges with guidelines and analysis in their duty to mete out suitable jail penalties and fines.”

Another quote: “the application was timely because in the past there were complaints in regards to disparity and inconsistency of penalties passed by magistrates or judges.” The intention is a noble one.

In my mind, many, many questions bubbled up, and they are still boiling over.

Setting the wrong precedent

An Artificial Intelligence system is only as good as its data input. If the data fed into the AI is well-intentioned and pertinent, we as human beings can trust its judgment to be fair. What if the data is wrong, biased, unreliable, or irrelevant?

We know the adage well: “Garbage in, garbage out”. What if the AI system were conditioned to fit an agenda?

Data Privacy trod and trodden

We have seen this before. Every day our personal details are out there: used, misused and abused. The Malaysia Personal Data Protection Act (PDPA) came into force in January 2013. Seven years on, it remains a toothless and useless law which has failed to protect the most valuable asset of every Malaysian citizen in this digital world: their Personally Identifiable Information (PII).

If anything, data privacy, or the total lack of it, has gotten worse, much worse. Do you feel safe when someone from a “state magistrate” calls you up, claims you owe summonses, and threatens to send you to jail? It has happened to me.

Opening Pandora’s box

I have been extremely vocal about this. I wrote the article “Data Privacy First before AI Framework” to voice my concerns. We are opening a can of worms with extreme implications when we “AI this” and “AI that” without first properly establishing the most fundamental pillar of AI: the sanctity of our private data and how we decide it can be used. Where is our right to decide?

We have not fixed Data Privacy at all. How then can we set the precedent for an AI system to judge us?

Is AI my friend?

I hate to paint a bleak picture. I wrote “Is AI my friend?” around the same time I wrote the other article.

But we can be much more if we do it right. Fix Data Privacy first; only then can we trust AI.


