AI robot using digital controls
Image by Lukas from Pixabay

Will AI Destroy Humanity? These Scientists Think It Could

Artificial intelligence can be a scary subject. Out-of-control AI has been the premise of sci-fi dystopia for decades, and now the same technology is emerging everywhere. While many experts are excited about an AI future, others are terrified. So, will AI destroy humanity? Some scientists think it could, and their arguments are quite compelling.


Now, there’s no reason to believe that artificial intelligence will end humanity anytime soon, but the technology is already showing as many potentially negative effects on our world as positive ones. As machine learning progresses, AI researchers believe it’s destined to become vastly smarter than man – a kind of “God-like intelligence” model that’ll dwarf us in every way. This superintelligence could go two ways: It could free humanity or it could ruin everything.

Products promoting AI and machine learning models boast of all the positive impacts this new tech has on the world. Of course, it’s in their best interest to leave out the negative or world-dooming possibilities. Don’t worry, though: there are plenty of scientists and researchers speaking out in an attempt to protect the public – whether or not other experts agree with them.


It’s important to remember that there are two sides to this argument. We really don’t know how AI is going to turn out in the future, but when some of the smartest minds believe machine learning could destroy the human world, we might want to listen.

ai brain collecting energy to destroy humanity
Image by Alexandra_Koch from Pixabay

Up to 5% of AI researchers think AI could end in human extinction

The number of AI researchers who believe that future machine learning programs could end in the extinction of the human race is staggering – at least according to a survey conducted by AI Impacts on a sample of 738 professionals currently working in the field.


The survey included all sorts of questions, such as whether AI will replace jobs or entire industries. The questions we’re most interested in, however, asked researchers to predict when high-level machine intelligence will emerge and how it could negatively impact the world.


Participants estimated, with roughly 50% confidence, that high-level machine intelligence won’t arrive for another 37 years. But the scariest prediction came from the roughly 5% of researchers who believe such an intelligence would have an “extremely bad” outcome. Even worse, the example specifically used in the question was “human extinction.”


That low a percentage might seem like something to brush off, but if 5% of scientists believed a medication would kill people rather than treat a disease, it likely wouldn’t move forward to human trials. To make matters more dire, 48% of responding researchers gave at least a 10% chance of the same outcome, and only a quarter predicted no major negative outcome at all. To put it in metaphorical terms: that’s one medication we certainly wouldn’t trust.

Geoffrey Hinton, the godfather of AI
28 June 2023; Geoffrey Hinton, Godfather of AI, University of Toronto, on Centre Stage during day two of Collision 2023 at Enercare Centre in Toronto, Canada. Photo by Ramsey Cardy/Collision via Sportsfile

The Godfather of AI thinks it needs to be controlled since the chance AI will destroy humanity is very real

In the world of machine learning and artificial intelligence, one man stands above the rest, and that man is Geoffrey Hinton, godfather of AI and 2018 Turing Award winner. He’s been working on AI projects since the ’70s and was one of the primary researchers at Google before stepping away in 2023 to warn the public about the dangers of AI.

Hinton believes the machine learning models we have now already surpass the human brain in many ways. He’s convinced that even ChatGPT is capable of thinking at this very moment, and that it does so with vastly fewer neural connections than we humans have, or so he told Amanpour and Company during an interview. He also predicts that if we’re not careful, creating smarter learning models could be disastrous for all of us.

"This stuff will get smarter than us and take over. And if you want to know what that feels like, ask a chicken."

It’s hard to believe that one of the leading minds behind our most recent, life-changing technology would regret having helped usher in this new era, but that’s how Hinton feels. His reasoning ranges from doomsday scenarios, like the quote above, to floods of misinformation and fake content. He’s also well aware that widespread job losses and rising global poverty are on the horizon, and he fears election manipulation and fraud as well.

Sure, many of the dangers Hinton recognizes have human elements, but those human actors will leverage AI technology toward negative ends. The doomsday scenarios, though, are inherent possibilities of creating superintelligence. Either way, Hinton’s warnings should carry some weight, since few people in the industry know as much about the technology as he does.

Intel Free Press, CC BY-SA 2.0, via Wikimedia Commons

Stephen Hawking outright believed AI would destroy humanity

Let’s move on to one of the greatest minds the human race has ever known: Stephen Hawking. Hawking wasn’t directly involved in the world of artificial intelligence beyond the assistive technology that helped him communicate, but that didn’t stop him from being arguably the smartest person to ever walk the Earth. And with all those brains, he had a few things to say about why he believed AI will destroy humanity.


“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. He reasoned that humans are constrained by biological evolution, a rather slow process compared to the lightspeed technological growth we’ve experienced in the last hundred years. And it’s because of this gap between our ability to adapt and AI’s that we’d become yesterday’s news. Now, Hawking’s prediction doesn’t necessarily mean we’d go extinct, but the status quo would. At best, we’d be subordinates to the new machine gods.


It’s important to note that Hawking died in 2018, before ChatGPT became a hit, so he never saw what our current machine-learning technologies are capable of. We’re not sure that would have swayed his opinion, though. There’s a good chance he knew about the precursor technologies in development before his death; the goal of reaching high-level machine intelligence was in place long before then, and it’s likely this is what drove his predictions. After all, how does one control a being exponentially smarter than oneself?

AI face looking menacing
Image by Alexandra_Koch from Pixabay

There’s not much being done to ensure ongoing safety with AI

At the time of writing, there isn’t much in the way of regulation governing how machine learning technology is allowed to advance. The industry seems to grow too quickly for legislation to keep up, a problem made worse by the lack of technological experience in the upper levels of our government. Luckily, government officials are starting to catch on, and more committees, bill proposals, and hearings over AI misconduct are cropping up every day. Unfortunately, that might not be enough – at least if the technology keeps outpacing our legislative capabilities.


Outpacing is the name of the game. As shown in a discussion between CNN and AI researcher Connor Leahy, significantly more resources are funneled toward growing AI and machine intelligence programs than toward safeguarding against unforeseen consequences. Hinton makes the same point, saying that only one percent of resources go toward future safeguards – though, of course, that may be an exaggeration.


It seems like all new technology comes with unforeseen consequences. Smartphones leading to the texting-and-driving issue is a good example. Although smartphones weren’t meant to distract drivers, it’s still a deadly consequence of the technology.


Now, imagine if the accidents caused by cell phones were instead unforeseen issues caused by superintelligences controlling our military tech or bank accounts, or being used to commit voter fraud. The outcomes would be substantially worse than the current death rates. This comparison could be seen as hyperbole, we know, but there’s no way to tell until the technology comes to pass. And since AI will likely be controlling major infrastructure in the future, we’d argue that the potential for large-scale loss of life is actually a reasonable prediction.

Rodeo clown distracting in a cheerleader outfit
Image by Clarence Alford from Pixabay

Are the doomsday fears a distraction?

There are many out there – from researchers to common folk and CEOs – who believe talking about whether or not AI will doom humanity is a distraction from the very real current dangers of the technology. For example, Margaret Mitchell, former head of the ethics team for Google’s artificial intelligence program, explains that AI is already adding to major ethical issues. These include “the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech.”

That poses the question: should we discuss the possibility of doomsday scenarios while actual harm is already being done? Well, yes, we should. Though it’s important not to distract from current issues, a worldwide incident would require early preparation. In fact, all of the possible negative outcomes of artificial intelligence require early measures to counteract or prevent – even the issues Mitchell brings to public attention. If companies aren’t able to mitigate the harms AI is causing now, how are they going to counter a superintelligence that decides to reshape the world to its liking? Maybe it’s time for AI developers to get on the ball before it’s too late.
