Artificial Intelligence (AI) is a misleading term. It sounds like it parallels human intelligence, except faster, all-knowing, and never forgetting. But that just isn’t what it means when people attach the term to their innovations.
It’s important we set the record straight on what Artificial Intelligence actually is before we continue slapping the label on every computer innovation.
Ladies and Gents…Artificial Intelligence
Let’s start with what Artificial Intelligence isn’t. It’s not some all-knowing system. It’s not an evil initiative that will take your job. In fact, Artificial Intelligence isn’t an “it” at all. Theoretically, Artificial Intelligence doesn’t exist.
Artificial intelligence (AI) is the general label for a field of study. Specifically, the study of whatever might answer the question “What is required for a machine to exhibit intelligence?”
In using this blanket term to describe everything from IBM’s cancer-detecting algorithms to Netflix’s “you should watch this next” software, we begin to believe that there is some overarching system somewhere learning how to do all of these things.
Yet each “Artificial Intelligence” innovation we hear about is really a separate piece of software learning to automate one very specific task. They aren’t all working together toward world domination.
For instance, many great businesspeople credit their strategy and foresight to playing chess. However, the “AI” that beat reigning world chess champion Garry Kasparov, IBM’s Deep Blue, would have a very hard time translating its mastery of chess strategy into running a successful business. Autonomous companies have a long way to go.
In reality, “AI” is like a childhood nickname you can’t get rid of, one that hardly describes who you are as a person anymore.
Although terms such as “software” or “automation” are more fitting for most AI breakthroughs, you and I will probably continue using the term Artificial Intelligence for the time being. It’s a buzzword.
Of course, not all “AI” is created equal. While most of it amounts to advancements that improve daily tasks, the “AI” that has Elon Musk scared and Mark Zuckerberg hopeful is something completely different.
Independent Thinkers
Both of these visionaries are referring to a time when software begins to exhibit human-like cognitive abilities. Meaning, not only could it recognize the patterns that underlie our economic markets, it could also devise strategies for influencing the free market. Not only could it detect early warning signs of pancreatic cancer, it could also formulate hypotheses on why most cases aren’t caught until it’s too late.
To achieve this level of independent thinking in machines, often called artificial general intelligence (and whose arrival is popularly dubbed the Singularity), researchers build neural networks: software structures that loosely mimic the neural connections of the human brain.
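To make “neural network” a little less abstract, here’s a minimal sketch in Python of a single artificial neuron, the building block these networks stack by the millions. The weights and inputs are made-up illustrative values, not anything a real system learned:

```python
import math

def sigmoid(x: float) -> float:
    # Squashes any number into the range (0, 1), loosely
    # analogous to a neuron "firing" more or less strongly.
    return 1 / (1 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # A single artificial neuron: weigh each incoming signal,
    # sum the results, and pass the total through an activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Illustrative values only: three input signals and three "learned" weights.
print(neuron([0.5, 0.9, 0.1], [0.4, -0.2, 0.7], bias=0.1))
```

A real network chains thousands or millions of these neurons into layers and adjusts the weights automatically during training. That’s the whole “brain-like” part: lots of simple units, densely connected.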
Realistically, this monstrous feat has more to do with understanding our own consciousness and intelligence than it does with the actual machines.
For this reason, the European Union launched a $1.3 billion initiative to digitally map the human brain, aptly named the Human Brain Project (coordinated from Switzerland). Here in the US, a similar initiative was launched with an even less creative name: the BRAIN Initiative, Brain Research through Advancing Innovative Neurotechnologies.
Experiments such as the Cognitron Intelligence Test, which is free and fun to take, examine how different forms of intelligence (e.g., spatial reasoning, facial recognition) relate to one another in each of us.
These initiatives, and many more, aim to understand how our brains analyze the world, predict outcomes, and create things, in the hope of someday replicating those abilities in a machine.
On this road, however, there is a major challenge. Ingrained in our own intelligence is bias.
A New Hope for Bias?
Almost nothing we do is completely free from bias. The moment you read this article’s title, every emotion you hold about AI rose to the top of your mind. Every sensory input, from what we see to what we hear, carries a personal connotation that steers our actions.
Oftentimes, the data these AI algorithms learn from are riddled with our social biases.

Facial-recognition apps that diagnose disorders from a single picture have proven more effective for white patients. The risk-assessment algorithms courts use to assist in sentencing have been found to be racially biased. And the image-recognition algorithms at Facebook and Google reinforce sexist gender stereotypes.
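To see how the bias sneaks in, here’s a toy sketch in plain Python with hypothetical, made-up numbers: a naive “model” trained on skewed historical decisions simply learns to repeat them.

```python
from collections import Counter

# Hypothetical "historical decisions" a model might learn from.
# The outcomes are skewed against group "B" -- not because of anything
# about B, but because past human decisions went that way.
history = (
    [("A", "approved")] * 80 + [("A", "denied")] * 20
    + [("B", "approved")] * 30 + [("B", "denied")] * 70
)

def train(data):
    # A naive "model": for each group, memorize the most common
    # historical outcome and predict it forever after.
    outcomes = {}
    for group, outcome in data:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': 'approved', 'B': 'denied'} -- the old skew, now automated
```

Nothing in that code mentions race or gender, yet the model faithfully automates the skew in its training data. The real systems above fail in essentially the same way, just at a much larger scale.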
If these problems aren’t fixed, many of the same biases and prejudices that have existed for the past 300 years will persist for the next 300.
And if you ask me, that’s not progress.
Fortunately, people are recognizing this as a big problem, and moves are being made. For instance, AI Now is an initiative to better understand the social impacts of AI and to formulate ideas for designing socially conscious automation.
I suppose recognizing our own biases is the first step toward diminishing them.
Clean Slate Mindset
I’ve always tried to approach every interaction and every task with a clean slate. A tabula rasa, as John Locke would say. I find I learn a lot more when I’m not trying to predict what I will learn.
Within the first 30 seconds of meeting someone new, we’ve already decided whether they’re worth continuing to talk to. Instantly, our minds begin plotting how to exit the situation or what to say next. But meeting new people shouldn’t be treated like speed dating.
Ralph Waldo Emerson said, “Every man I meet is in some way my superior, and in that, I can learn of him.”
Treat every new person you meet as if they have something unique to teach you, and you’ll find your social and cultural biases melt away.
I appreciate you taking the time to learn what I have to teach you in this week’s Quick Theories. Let me know if your concept of AI has changed or what some of your favorite applications of AI are.