Deconstructing the sensationalization of AIs

By Kohava Mendelsohn

Image source: https://www.pexels.com/photo/yelling-formal-man-watching-news-on-laptop-3760778/ 

Up until a couple of years ago, whenever I heard the words “artificial intelligence”, something inside me would switch off. I would think, “Artificial intelligences are too complicated for me to understand.” Code that can beat world-class chess masters? Perform flawless surgery? Analyze datasets larger than the human mind can conceptualize? It seemed impossibly beyond the limits of my understanding. Even though I knew the programming language Python and had taken courses in data structures and algorithms, I was still scared of AIs.

I don’t think I was to blame for my trepidation, nor do I think I was alone. In the media, AIs appear as human-level intelligences and miracle cures. Science fiction portrays them as extraordinarily advanced and human-like, from C-3PO in Star Wars to HAL 9000 in 2001: A Space Odyssey.

The news portrays AIs as the ultimate human achievement and the cure for every technological, business, and financial problem. AIs can supposedly solve the climate crisis, predict the stock market, and eventually transcend us by developing the ability to create better AIs. Achieving any of that seemed beyond my capabilities. 

The thought of creating or coding a complicated, unintelligible, human-level intelligence was terrifying. I did not believe I would even be able to understand how one worked.

I don’t think this view of AIs as complicated and impossible to understand is uncommon. It created a mental barrier for me, and will likely create barriers for other people as well. But these obstacles are dangerous.  

AIs are presently integrated into many aspects of our lives: when you unlock your devices with facial recognition, that’s an AI. When you talk to Siri, Alexa, or your smart home devices, that’s an AI. When Amazon recommends a new product or Netflix suggests a new show for you to watch, that’s an AI. If we don’t understand AIs, we don’t understand the impact they are having on our lives.

As Bristows, a law firm known for its work in technology, puts it, there is a “dichotomy between AI hype and AI reality”. We have a right to understand what artificial intelligences are, where we encounter them, and the impact they can have on our lives.

If you, like me, are intimidated by the words “artificial intelligence”, and are unsure of where to start learning more, keep on reading. This article is for you. I will explain the general mechanism, current issues, and future of Artificial Intelligence. 

Artificial intelligence is actually an umbrella term, usually meaning a machine that can accomplish any task we define as requiring “intelligence”. What counts as intelligence has changed over time. Today, almost all AIs are statistical models that make predictions: they take in data, then produce remarkably good guesses using probability and math.
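To make “statistics that predict things” concrete, here is a minimal sketch of one of the simplest possible “AIs”: a nearest-centroid classifier that learns only averages from labeled examples and predicts by distance. The cat/dog data and all the names here are made up for illustration; real systems use far larger datasets and far more elaborate models, but the principle is the same.

```python
# A toy "AI": a nearest-centroid classifier built from nothing but
# averages and distances -- prediction via plain statistics.
# (Illustrative sketch only; the data and labels are hypothetical.)

def train(examples):
    """Compute the average (centroid) of each label's feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Predict the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical data: [height_cm, weight_kg] of cats vs. dogs.
data = [([25, 4], "cat"), ([23, 5], "cat"),
        ([60, 30], "dog"), ([55, 25], "dog")]
model = train(data)
print(predict(model, [24, 4]))   # a small animal -> "cat"
```

There is no “understanding” anywhere in this code, just arithmetic on past data, which is the point: more and better data makes the averages, and therefore the predictions, better.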

The most accurate AIs are those with access to large datasets that allow them to make better predictions. For example, ImageNet is a dataset of 14,197,122 labeled images, which lets AIs built to classify images be very accurate, because they have so much data to learn from. This is also how AI art tools such as Craiyon (formerly DALL·E Mini) came about: at a very basic level, they reverse the mapping, generating an image from a text label instead of a label from an image.

The least accurate AIs are ones trying to predict the future from bad or irrelevant data. An AI is just a framework for making predictions, and it will find patterns even where none exist. This is especially worrying when the data an AI is built on is biased. AIs are not impartial, perfect machines; they depend on the data you feed them. Biased data will create biased results. We have seen this in hiring AIs that discriminate based on race, gender, or disability, and in the predictive algorithms justice systems use to flag likely repeat offenders. If an AI’s creators and historical data are biased, the AI will be as well.
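The mechanism behind “biased data in, biased results out” can be sketched in a few lines. This toy “hiring model” simply learns the historical hire rate for each group and then recommends candidates above a threshold; the past decisions, group names, and threshold are all hypothetical, and real hiring models are far more complex, but they can reproduce historical bias in exactly this way.

```python
# Minimal sketch: a model trained on biased history reproduces the bias.
# All data here is hypothetical and purely illustrative.
from collections import defaultdict

def train(history):
    """Learn the past hire rate for each group from (group, was_hired) pairs."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in history:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def predict_hire(model, group, threshold=0.5):
    """Recommend hiring if the group's historical hire rate clears the threshold."""
    return model[group] >= threshold

# Hypothetical past decisions in which group "A" was favored.
past = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = train(past)
print(model)                     # {'A': 0.8, 'B': 0.2}
print(predict_hire(model, "A"))  # True  -- the old bias is reproduced
print(predict_hire(model, "B"))  # False
```

The model never “decides” to discriminate; it faithfully extrapolates the pattern in its training data, which is why auditing that data matters.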

But will AIs ever reach the point of C3PO and HAL? AI is developing so quickly, is it going to become human-like soon? 

To start, we need to understand the distinction between Narrow and General AI. Currently, all the artificial intelligences in the world are Narrow AIs, programmed to carry out specific tasks. A General AI would be a more human-like intelligence: flexible, capable of learning new tasks and applying knowledge in new contexts. According to Dr. Gary Marcus, a Professor of Psychology and Neural Science at New York University, “Humans can be super flexible – they can learn something in one context and apply it in another. Machines can’t do that.”

At least, not yet. In a 2013 survey, AI researchers gave a median estimate of a 90% chance that Artificial General Intelligence will be achieved by 2075. That could mean AIs that can learn new things and communicate at a human level.

When it comes to AIs, the line between fact and myth has been blurred by all forms of media, from science fiction to reputable news sources. But at the end of the day, what we have designed is code that uses statistics to make predictions from volumes of data we could not otherwise handle. AIs have proven to be incredibly useful tools, which means they aren’t going away anytime soon. Like most cutting-edge science, they can be surrounded by controversy and mystery. Taking the time to appreciate what the technology can really do not only demystifies innovation, but helps us establish informed opinions on how we want it to be integrated into our daily lives.