News of AI systems beating humans at what they do best went mainstream in 1996 when Deep Blue, a computer designed by IBM specifically to play chess, beat the then reigning world chess champion Garry Kasparov in the first game of a six-game match. Kasparov, however, won three of the games and drew two to take the match 4 to 2. In May 1997, an upgraded Deep Blue managed to beat Kasparov in a rematch, but IBM refused to give Kasparov another rematch after he accused the company of cheating.
In terms of complexity, chess may be considered a simple game, one that can be broken down into mathematical calculations. If someone could calculate and memorise, in good time, more than 20 moves in advance, that person could technically beat everyone else at chess. A typical human being, however, can calculate around 3 to 5 moves in advance in strategic positions, or up to 15 moves ahead in “forced move” situations, whereas modern chess engines like Stockfish or Komodo can calculate many moves ahead in no time while simultaneously evaluating the strengths and weaknesses of the resulting positions – that’s why AI systems beating humans in chess is no longer news. News should be AI systems beating humans in games that require vast knowledge and interpretation of human language, a game like Jeopardy. But then again, computers beating humans in Jeopardy is no longer news either.
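To make the idea of “calculating moves ahead” concrete, here is a minimal sketch of the basic game-tree technique chess engines build on: minimax search with alpha-beta pruning. This is purely illustrative and nothing like how Stockfish or Komodo actually work internally (they add evaluation heuristics, move ordering, transposition tables and much more); the toy game used here is Nim, where players take 1 to 3 stones and whoever takes the last stone wins.

```python
# Minimal minimax with alpha-beta pruning on the toy game Nim.
# Illustration only: real chess engines add far more machinery.

def legal_moves(stones):
    """A player may take 1, 2 or 3 stones, but not more than remain."""
    return [m for m in (1, 2, 3) if m <= stones]

def minimax(stones, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    best = float("-inf") if maximizing else float("inf")
    for move in legal_moves(stones):
        score = minimax(stones - move, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune branches that cannot change the result
            break
    return best

def best_move(stones):
    """Pick the move with the best minimax outcome for the player to move."""
    return max(legal_moves(stones),
               key=lambda m: minimax(stones - m, maximizing=False))
```

Even this toy searcher “sees” all the way to the end of the game: `best_move(5)` returns 1, leaving the opponent the losing count of four stones.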
Jeopardy is a game many thought AI systems could never master. The game gives contestants general knowledge clues in the form of answers, and they must phrase their responses in the form of questions. To complicate matters further, Jeopardy requires contestants to master almost all branches of knowledge, including “history and current events, the sciences, the arts, popular culture, literature, and languages”, as Wikipedia puts it. This complexity, however, proved to be a walk in the park for Watson, a computer developed by IBM to answer questions posed in natural language, which in 2011 received the first place prize of $1 million after beating former champions Brad Rutter and Ken Jennings.
After Watson beat Brad Rutter and Ken Jennings, the AI system received a number of upgrades, until recently it was able to act as a teaching assistant to students enrolled in Georgia Tech’s online artificial intelligence course. Operating as Jill Watson, it offered teaching guidance to hundreds of students, and by the end of the three-month course none of the students had realised that Jill Watson wasn’t a human being.
Before news of Watson acting as a teaching assistant rocked the Internet, Google’s artificial intelligence programme AlphaGo occupied tech headlines after beating world champion Lee Sedol at the game of Go. Go is particularly important because it differs from both chess and Jeopardy: it requires the player not only to calculate, but also to be creative and intuitive. TechCrunch explained it this way: “Unlike previous AI victories — such as Deep Blue’s defeat of chess grandmaster Garry Kasparov in 1997, or IBM Watson’s Jeopardy triumph in 2011 — DeepMind programmed AlphaGo to be capable of teaching itself, not just carrying out a set of fixed moves or activities”.
On the significance of AlphaGo outperforming Go grandmaster Lee Sedol, Michael Nielsen writes in his Quanta Magazine article Is AlphaGo Really Such a Big Deal?, “I see AlphaGo not as a revolutionary breakthrough in itself, but rather as the leading edge of an extremely important development: the ability to build systems that can capture intuition and learn to recognize patterns”. Sam Byford, writing in The Verge, adds, “Ultimately the Google unit thinks its machine learning techniques will be useful in robotics, smartphone assistant systems, and healthcare; last month [February] DeepMind announced that it had struck a deal with the UK’s National Health Service”.
After AlphaGo’s victory over Lee Sedol, the Internet went somewhat quiet on new artificial intelligence breakthroughs, except on two occasions: when AI combined with fMRI was able to read people’s minds, and today, when an AI dubbed ALPHA managed to down an expert human fighter pilot in simulated dogfights. I will come back to AI’s ability to read people’s minds in a few minutes.
The news that ALPHA has managed to down an expert human pilot, U.S. Air Force Colonel Gene “Geno” Lee, may not be as big as the news that AlphaGo beat Lee Sedol at Go, but it is still significant. ALPHA did not just manoeuvre a complex 3D environment to beat Colonel Lee, a man who has controlled or flown in thousands of air-to-air intercepts as mission commander or pilot over several decades; it beat him every single time, alongside beating other competing AIs, which shows that ALPHA is capable of “thinking” like a human. Popular Science explains:
The A.I., dubbed ALPHA, was developed by Psibernetix, a company founded by University of Cincinnati doctoral graduate Nick Ernest, in collaboration with the Air Force Research Laboratory. According to the developers, ALPHA was specifically designed for research purposes in simulated air-combat missions.
The secret to ALPHA’s superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the A.I. can calculate the best maneuvers in a complex, dynamic environment, over 250 times faster than its human opponent can blink.
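For a flavour of the fuzzy logic Popular Science mentions, here is a minimal, purely illustrative fuzzy-inference sketch. Psibernetix’s genetic fuzzy tree is far more elaborate, cascading many small fuzzy systems whose rules and membership functions are tuned by a genetic algorithm; the toy controller below simply maps one made-up input, distance to an opponent, to an “aggressiveness” score.

```python
# A toy fuzzy-inference controller: fuzzify a crisp input into
# overlapping linguistic sets ("near", "medium", "far"), apply simple
# rules, and defuzzify with a weighted average. Illustration only --
# not Psibernetix's actual system.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aggressiveness(distance_km):
    # Fuzzify: how strongly the distance belongs to each linguistic set.
    near = tri(distance_km, -1, 0, 4)
    medium = tri(distance_km, 2, 5, 8)
    far = tri(distance_km, 6, 10, 11)
    # Rules: near -> attack (0.9), medium -> probe (0.5), far -> evade (0.1).
    weights = [near, medium, far]
    outputs = [0.9, 0.5, 0.1]
    total = sum(weights)
    # Defuzzify: weighted average of the rule outputs.
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.5
```

Because the sets overlap, an input between two sets blends the rules smoothly rather than switching abruptly, which is what lets a fuzzy system make fast, human-like graded decisions from only a few relevant variables.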
To date, applications of machine learning and deep learning algorithms have given AI the ability to recognise faces in pictures, dream in pictures, produce full colour versions of pictures taken in black and white, add captions to pictures, and most recently recreate pictures from human thoughts.
The ability of AI to recreate images a human being is thinking about means that, right now, an AI system working in conjunction with a mind-reading machine like an fMRI scanner can dig out what humans are thinking.
Roughly a week ago, a team of researchers from the University of Oregon reported that they had built a system that can read people’s thoughts via brain scans and reproduce the images of those thoughts on a screen. “We can take someone’s memory – which is typically something internal and private – and we can pull it out from their brains,” Brice Kuhl, a neuroscientist who was part of the research, told Vox.
To test their system, which comprised an fMRI unit and an AI unit, ScienceAlert wrote, “the researchers selected 23 volunteers, and compiled a set of 1,000 colour photos of random people’s faces. The volunteers were shown these pictures while hooked up to an fMRI machine, which detects subtle changes in the blood flow of the brain to measure their neurological activity”.
Hooked up to the fMRI unit was the AI programme that interpreted the signals the fMRI read. The researchers assigned 300 numbers to certain physical features on the faces to help the AI ‘see’ them as code. “The machine managed to reconstruct each face based on activity from two separate regions in the brain: the angular gyrus (ANG), which is involved in a number of processes related to language, number processing, spatial awareness, and the formation of vivid memories; and the occipitotemporal cortex (OTC), which processes visual cues.”
When a person was shown an image of a face, the fMRI recorded the brain activity and sent the information, in real time, to the AI unit. The AI unit then used the 300 numbers to reconstruct the image and print the output on a screen. The results of the experiment are shown in the image below: the top row contains the original images shown to the participants, the middle row the AI’s reconstructions from the OTC region, and the bottom row the AI’s reconstructions from the ANG region of the brain.
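The decoding pipeline described above can be sketched roughly as follows. This is an assumption-laden illustration, not the researchers’ actual code: the simulated data, voxel counts, and the simple ridge-regression decoder are all stand-ins, with only the idea preserved that each face is a vector of numeric features (the “300 numbers”) and that a mapping is learned from brain activity back to those features.

```python
# Hedged sketch: learn a linear decoder from simulated "voxel activity"
# back to numeric face features, then reconstruct a held-out face's
# features from its brain response alone. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, n_features = 800, 500, 300

# Simulated ground truth: face features evoke noisy voxel activity
# through an unknown linear response matrix B.
true_B = rng.normal(size=(n_features, n_voxels))
face_features = rng.normal(size=(n_train, n_features))
voxel_activity = face_features @ true_B + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit the decoder with ridge regression (voxels -> face features).
lam = 1.0
X, Y = voxel_activity, face_features
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# "Reconstruct" a new face from its brain response alone.
new_face = rng.normal(size=n_features)
new_scan = new_face @ true_B + 0.1 * rng.normal(size=n_voxels)
reconstruction = new_scan @ W

# The decoded feature vector should correlate strongly with the truth.
corr = np.corrcoef(reconstruction, new_face)[0, 1]
print(f"feature correlation: {corr:.2f}")
```

In the real study the recovered feature vector would then be rendered back into a face image; here the correlation between decoded and true features stands in for that final step.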
In 2015, Popular Science reported on a study published in Frontiers in Neuroscience in which a mind-reading system was able to convert audio thoughts into text. Typically, when you read words like those in this article, there is a sound in your brain that associates the text on the page with the sound of the word. The researchers were able to extract these thought-sounds, convert them into text, and print them out. Way back in 2011, Jack Gallant, a neuroscientist at U.C. Berkeley, reconstructed a video by reading the brain scans of someone who had watched that video.
When the abilities of mind-reading systems to read sounds in the brain, reconstruct experiences into videos, and read images are combined with the power of AI to reproduce those thoughts, then in the near future no one will be able to harbour private thoughts.
The application of mind-reading technologies will allow governments and criminals to easily extract information from witnesses, suspects, spies, or anyone else holding information they are bound to keep secret. For example, in situations where investigators do not have an image of a criminal, witnesses to a crime could be subjected to brain scans from which investigators would be able to reconstruct the criminal’s face.
When it comes to experiences, we’ll just have to pray hard that spouses don’t acquire these technologies to reconstruct the events of the previous night. Team mafisi and sponsors, be warned.