Mounting concerns over the use of AI have prompted many countries to work toward basic standards of AI regulation. The proposed EU rules, which are expected to change the course of the AI debate, aim to identify and manage the risks of using such systems. Who is affected, and how can organizations navigate the regulatory compliance risks? Here’s a rundown of all the proposed changes.
Remember “Nosedive”? It was an episode of Black Mirror that depicted a society where people rated their interactions with one another through a mobile app and, in doing so, cumulatively affected each other’s socioeconomic status. Though played partly for comedy, even back in 2016 the story had eerie undertones that left little hope for privacy in the future. Now the scenario doesn’t seem too far-fetched, does it? Think of China’s Social Credit System, the app Peeple, and similar technology that uses sensitive data to predict human behavior or rate people’s lives. We all have very good reasons to be skeptical of the Artificial Intelligence (AI) that lies at the heart of such technology.
Consider a recent investigation by Wired into NarxCare, a drug monitoring and analytics tool for doctors, pharmacies, and hospitals that instantly estimates a patient’s risk of misusing opioids. NarxCare uses Machine Learning (ML) and Artificial Intelligence (AI) algorithms to mine state drug registries for red flags that might indicate suspicious behavior, such as ‘drug shopping,’ and automatically assigns each patient a unique, comprehensive Overdose Risk Score. The software looks at the number of pharmacies a patient visits, the distances they travel to get medication or receive healthcare, and the combinations of prescriptions they use.
While the idea behind the tool seems sound – after all, the US government has spent years and millions of dollars trying to contain the opioid crisis and rein in the number of prescribed controlled substances – the implementation is far from perfect. The problem is that NarxCare, which has wide-ranging access to patients’ sensitive data, relies on a proprietary mechanism, meaning there’s no way to look under the hood and inspect its data for errors and biases that might (even unintentionally) slip into the AI’s output. As a result, many patients, particularly the most vulnerable, have been mistakenly flagged for suspicious behavior and denied healthcare and medication that might have improved their quality of life.
NarxCare is just one example of an algorithmic engine that, despite benign intentions at its core, can produce erroneous results, amplifying the biases of its underlying data. Even when AI seems inherently good, as when it’s used to spot carcinomas, human oversight is necessary to ensure equity and racial diversity in representation, and accuracy in output. Clearly, AI regulation has been a long time coming.