When the topic of Artificial Intelligence comes up in a room of techies, whether amateur or veteran, opinions galore emerge on its impact, both positive and negative. Even though we are already in an AI era, human beings have imagined the possible extremes of a technology that is still in the works.
These opinions play out in movie scripts offering futuristic visions, most of which assume that the human being will finally be eradicated. I like to say it's a wait-and-see situation.
What is AI anyway? It is, pretty much, intelligence demonstrated outside the human mind, by a machine. Artificial Intelligence is achieved through machine learning, based on the idea that systems can learn from data, identify patterns and make decisions with minimal human assistance. We already see this with Google Assistant, Siri and Alexa, as well as website popups ready to assist you with a purchase or navigation.
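To make "learn from data, identify patterns, make decisions" concrete, here is a minimal sketch in plain Python. It is a toy nearest-centroid classifier, not any production system, and all the data, labels and feature names are made up for illustration.

```python
# Toy machine learning: average each class's examples (the learned
# "pattern"), then classify new data by the closest class average.

def train(samples, labels):
    """Learn one centroid (average feature vector) per class."""
    sums, counts = {}, {}
    for features, label in zip(samples, labels):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Decide by picking the class whose centroid is nearest."""
    def dist(label):
        return sum((a - b) ** 2
                   for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical usage data: [hours online per day, messages per day]
samples = [[1, 5], [2, 8], [9, 40], [8, 35]]
labels = ["light user", "light user", "heavy user", "heavy user"]

model = train(samples, labels)
print(predict(model, [7, 30]))  # classify a new, unseen data point
```

The "intelligence" here is nothing mystical: the program extracts a statistical regularity from past examples and applies it to new ones, with no human telling it the rule directly. Real systems use far richer models, but the learn-then-decide loop is the same.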
As voice and facial recognition continue to evolve, AI is intensely influencing industries and sectors as machine learning algorithms get smarter by the day. Interestingly, not only will AI heavily influence your business and personal life through everyday interactions with technology, but it may also determine your court sentence.
As insane as it might sound, AI will use automated data-analysis systems to make decisions about your life. The same way your phone buzzes and, voilà! a message from your fridge reminds you to purchase milk, algorithm makers are coming thick and fast, and no one knows whether they are doing it appropriately.
If we thought we already have enough judicial hiccups, wait until the Government of Kenya introduces AI in our courtrooms. Meanwhile, courts in the USA have been using AI to determine bail, parole, sentencing and the probability that a person will commit another crime, or even whether they will appear for court hearings. This has received a lot of backlash, with legitimate claims that Artificial Intelligence operates outside the judicial system and yet no regulatory steps have been put in place.
Eric Loomis, a man charged for his role in a drive-by shooting, was sentenced to six years after his answers to a few questions were entered into COMPAS, software used by US courts to assess the likelihood of a defendant repeating an offense. It is pretty much a black-box risk-assessment tool. Loomis is now suing the court since he was not allowed to examine the algorithm after a ‘high-risk’ score sent him to jail.
The bone of contention here is that the AI system violates the defendant’s right to due process, because defendants are prevented from challenging the scientific validity or accuracy of such a test. The system also took into account Loomis’s race and gender, and the possibility that he would repeat the same crime, as elements informing the sentencing. Yet hearing the case fairly would require looking inside a risk-assessment instrument whose workings are protected as a trade secret.
Even if the engineer behind the program were made available in court, they would only be in a position to explain how the network was designed, what inputs were entered and what outputs were produced. One thing the engineer cannot explain, though, is the decision-making process in between.
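A tiny hypothetical model (emphatically not COMPAS, whose internals are secret) shows why. Below, the architecture, inputs and output are all fully visible, yet the learned weights carry no human-readable rationale; the numbers and features are invented for illustration.

```python
# A hypothetical risk-score model: a logistic function over weights
# that were "learned" from historical data. Everything is visible,
# yet nothing here answers the question "why this score?".
import math

# Weights as a model might learn them; the values themselves encode
# no explanation a court could interrogate.
weights = [0.83, -1.42, 0.07, 2.19]
bias = -0.5

def risk_score(features):
    """Return a score in (0, 1) from a defendant's feature vector."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# The engineer can state the inputs...
defendant = [1.0, 0.0, 3.0, 1.0]  # made-up, anonymised features

# ...and report the output:
print(round(risk_score(defendant), 3))  # prints 0.939
# But "why 2.19 on the last feature and not 0.2?" has no answer
# beyond "that is what minimised error on the training data".
```

With four weights this is merely awkward; with millions of learned parameters, as in modern systems, the gap between "here are the inputs and outputs" and "here is the reasoning" becomes the heart of the due-process complaint.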
As exciting as the idea of intelligence outside the human mind may be, these systems will deny you mortgages and loans and give you a low credit score just because the algorithm makers said so.