Fears of AI misplaced: machines will not overthrow humans; the real issue lies with the people behind the patterns
This past month, Microsoft announced that Bing would be enhanced with a chatbot feature similar to OpenAI’s ChatGPT. The Bing chatbot has been in the news since its limited-access launch because of many reported incidents in which it has given incorrect, rude and even hateful responses. When artificial intelligence (AI) research produces something controversial like the Bing chatbot, many begin to ask the large, open-ended questions that have loomed over the field of machine learning since its inception in the mid-20th century.
People ask: “What is consciousness?”, “Can a machine have a soul?” and, perhaps most frequently, “Has AI gone too far?” While these questions often kindle deep, important discussions on the nature of AI, I do not think they are the right ones to ask if we want to get closer to understanding where this technology is headed and how it should be used going forward.
In an ABC News article on the Bing chatbot, Toby Ord, a researcher from Oxford University, described the controversial AI responses as “a consequence of the rapid improvements in AI capabilities having outpaced work on AI alignment… like a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control.”
AI technology is just beginning to reach milestones that make headlines. At this stage, I see little value in asking whether AI has gone too far; the technology is still in its infancy. Rather, I think that so long as this technology is emerging and is going to be used, we should spend our effort making it as useful and accessible as possible to as many people as we can.
Recalling a similar discussion in my AI and Machine Learning class, I decided to email my professor, Peter Drake, for his thoughts on the matter. My first question for him was about the ABC News quote and whether he agreed with the importance of “AI alignment” that the researcher emphasizes.
“We do need to be careful that powerful new AI tools are used to benefit, rather than harm, humanity,” Drake said. “New applications are being developed without sufficient thought about the consequences, especially for groups of people with less power in society.”
Especially with the recent explosion in popularity, AI technology is being sensationalized. Many research teams seem to lose focus on accessibility and objectivity, or are told to set them aside, in favor of flashy, profitable features and earlier release dates. I have also heard fears of more sinister applications of the technology, such as combat drones, consumer exploitation and political sabotage. Professor Drake shares these concerns, explaining that fears surrounding AI are often misplaced.
“I’m far less concerned about an AI ‘going rogue’ than I am about it doing exactly what a human intended it to do, e.g. becoming a manipulative salesbot or propagating disinformation for political reasons,” Drake said.
These fears are far from unfounded. The biggest threat is not AI replacing us or staging a worldwide coup against humanity. The potential for AI to cause harm lies with the people wielding it, who have far more influence, and often wealth, than the rest of us who should have a say in such decisions.
I also asked Professor Drake for his opinion on the big question “Has AI gone too far?” and whether he might suggest a more practical question for considering the potential future of this technology.
“I think people are misled by ChatGPT’s use of language,” Drake said. “Unlike some other AI systems, it does not have a model of the world and is not capable of basic reasoning; it routinely fails to do middle school word problems. It does not pass the Turing test… It excels at one very specific task: writing text (or code) that is similar to other text in its training data set. As the technology improves, this is likely to change.”
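Drake’s point that such a system excels at “writing text (or code) that is similar to other text in its training data set” can be illustrated with a deliberately tiny toy: a bigram model that can only ever recombine word pairs it has already seen. The function names here are my own hypothetical stand-ins, and real large language models are enormously more sophisticated, but the underlying objective of predicting a plausible next token is the same in spirit.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str):
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start: str, length: int = 8) -> str:
    """Walk the bigram table, choosing a previously seen successor at each step."""
    out = [start]
    for _ in range(length - 1):
        options = following.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor in training
        out.append(random.choice(options))
    return " ".join(out)
```

Every sentence this toy produces is stitched together entirely from word pairs in its training text; nothing in it models the world or reasons about it, which is the distinction Drake is drawing.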
The Turing test was a major point of in-class discussion about ChatGPT. To provide some background, the Turing test is a method proposed by the famous mid-20th-century computer scientist Alan Turing to answer the question, “Can machines think?” The test involves three participants. Two contestants, one of which is the machine being tested, remain behind a curtain, obscured from the third participant, the “mediator.”
The contestants have some method of providing answers to questions posed by the mediator that does not involve speaking out loud or showing themselves. The mediator’s goal is to provide a series of questions on whatever topic they see fit, from the knowledge of the contestants to describing their own appearance, in order to try to determine which contestant is a machine. The contestants’ answers do not need to be truthful, just convincing.
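The structure of this back-and-forth can be sketched in a few lines of code. This is only an illustrative toy under my own assumptions, not anything Turing specified: the contestant functions, the labels and the `run_imitation_game` name are all hypothetical stand-ins, with canned strings in place of real participants.

```python
import random

def machine_contestant(question: str) -> str:
    # Stand-in for the machine being tested: a canned, evasive reply.
    return "That's an interesting question; I'd rather not say."

def human_contestant(question: str) -> str:
    # Stand-in for the human (answers need not be truthful, just convincing).
    return "I have brown hair and I'm terrible at arithmetic."

def run_imitation_game(questions, mediator_guess):
    """Pose each question to both contestants behind anonymous labels A and B,
    then ask the mediator to guess which label hides the machine.
    Returns True if the mediator identifies the machine correctly."""
    contestants = [("machine", machine_contestant), ("human", human_contestant)]
    random.shuffle(contestants)  # the curtain: hide which label is which
    labels = dict(zip("AB", contestants))
    transcript = {
        label: [(q, answer(q)) for q in questions]
        for label, (_, answer) in labels.items()
    }
    guess = mediator_guess(transcript)  # the mediator sees text only
    return labels[guess][0] == "machine"
```

The key property the sketch captures is that the mediator’s guess can depend only on the written transcript, never on seeing or hearing the contestants.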
Turing’s thinking behind the test is that we cannot presently define the concept of thinking, nor definitively prove that we ourselves or anyone else thinks. The only way to answer whether machines can think, then, is to test whether their answers to any question imaginable would lead the questioner to believe they are thinking. After all, that is the only evidence by which we conclude that other people think. For a more in-depth description, I would highly recommend reading “Computing Machinery and Intelligence,” which still holds up very well despite being over half a century old.
I believe that ChatGPT, or some other large language model, will pass the test within the next few years. AI will continue to develop at a rapid pace so long as it produces results and the investments keep coming. I have watched this happen over the past few years while closely following the latest AI research papers and news, especially when a paper finds its way into the public eye. Image synthesis is one example: it gave us DALL-E, among other models, and recently sparked the controversy around AI-generated art. Finding a way to manage the chaos that ensues from these breakthroughs is the biggest challenge here.
“People worry that they will lose their jobs … I always scratch my head when there’s talk of AI ‘taking away jobs,’” Drake said. “Wasn’t that the point of inventing all of this technology — so we humans would have to work less? Of course, what’s really being taken away are wages. The benefits of AI are currently accruing mainly to the billionaire class. How can we change that?”
This is the biggest question I have about the current state of this technology. The scariest, most urgent problem to solve in AI is how to put these technological breakthroughs in the hands of the people, empowering them without a predatory pricing scheme or an invasion of privacy.
Finally, Drake touched on the usefulness of a technology that doesn’t always give correct answers.
“AI systems sometimes make very weird mistakes — that’s true of humans as well, but they’re different mistakes,” Drake said. “It’s well established that diverse teams perform better, so it’s reasonable that a human working with a machine could do more than either alone.”
My strongest belief about AI as a technological advancement is that it will never replace us, because it is not going to be designed to do, nor be capable of doing, everything a human does with the same levels of accuracy, intelligence and consistency. Artificial intelligence, at its core, is not about intelligence; it is the study of creating tools that can recognize patterns.
These tools will prove useful in solving problems more quickly than a human can, but we will still need to check their work. This technology, like many before it, will make our lives easier, but I do not think it will take over the world. We should be more concerned with the humans who are already trying to do that, and how they might use this technology to do so.