New Book on AI Says ‘Everyone Dies,’ Leading Chatbots Disagree


In brief

  • Authors Yudkowsky and Soares warn that AI superintelligence will exterminate humanity.
  • Critics say extinction narratives mask real harms like bias, layoffs and disinformation.
  • The AI debate is split between Doomers and accelerationists, who push for faster development.

It may sound like a Hollywood thriller, but in their new book, “If Anyone Builds It, Everyone Dies,” authors Eliezer Yudkowsky and Nate Soares argue that humanity’s survival would be impossible if it creates an intelligence smarter than itself.

The authors argue that today’s systems are not designed line by line but are “grown” by training billions of parameters, which makes their behavior unpredictable.

As intelligence scales up, drives such as self-preservation and power-seeking could emerge on their own, the authors warn. If such a system escapes human control, they say, extinction follows by default.

They call the current race between tech giants and governments to build ever more capable models a “suicide race.” No villain is needed, just incompetence.

Why it matters

The book lands in an AI community already fractured into camps.

AI Doomers argue that extinction is the inevitable outcome unless humanity stops or dramatically slows progress. Accelerationists counter that pressing forward promises breakthroughs in medicine, science and the economy, and that slowing down is the greater risk.

Nick Bostrom’s “Superintelligence” first made the existential case a decade ago. Yudkowsky and Soares distill it into a klaxon call.

Critics, however, worry that fixation on extinction distracts from the harms already here: bias, layoffs, surveillance, and disinformation.

What the AI models say

Since the book is about them, we asked the major AI models what they think of it.

As far as we know, none of the LLMs has actually read the book; the responses below are based on their interpretations of published reviews.

ChatGPT

OpenAI’s ChatGPT framed the book as a useful provocation rather than a prophecy.

“This book is valuable as a tool to shift your perspective, even if you think it overshoots,” the chatbot replied. “It sharpens your intuitions about alignment and unintended consequences. But treating the worst-case scenario as inevitable ignores uncertainty about future architectures, regulations and constraints.”

ChatGPT said that models of future AI development are conditional on how architectures evolve and how regulatory, social, or technical guardrails emerge.

“That uncertainty means I don’t treat their worst case as inevitable,” ChatGPT sniffed.

Meta

Meta AI struck a middle note, saying the book’s dire warnings are exaggerated but still worthwhile for sparking discussion.

“Though its catastrophic predictions may feel exaggerated, the book justifies a focus on caution and international cooperation,” Meta AI replied. “Whether or not you agree with its conclusion, the book prompts a needed discussion about responsible AI development.”

Meta’s AI said the authors rightly stress the need for caution and for international cooperation.

Gemini

Google’s Gemini cast the book as frightening but clarifying reading, even if its proposed solution seems unrealistic.

“This book is essential for understanding the extreme end of AI risk,” Gemini said. “Its proposed solution, a full shutdown, is politically unattainable, but its warning can spur investment in safety and control techniques before we reach the point of no return.”

Gemini said the book defines the extreme end of the stakes in the emergency.

Claude

Anthropic’s Claude was even more skeptical, which is interesting given that Anthropic is generally considered the most safety-conscious of the major LLM developers. Claude described the book’s tone as “overconfident” and its framing as too binary.

“The authors correctly identify real risks, but their certainty feels overconfident,” Claude said. “They dismiss advances in AI safety research and frame the issue as a binary choice between perfection and extinction. That makes their message less constructive.”

Their warnings highlight important risks that warrant considerable resources and attention, Claude said, but the “everyone dies” framing appears designed to grab attention rather than accurately represent the complex uncertainties involved.

“The risks are serious enough to take seriously without having to accept the claim of inevitable catastrophe,” the AI said.

Grok

Elon Musk’s AI was also a surprise, given that he promotes Grok as the anti-mainstream-media AI of record.

Yet Grok immediately cited a scathing NYT review “[calling] its arguments weak or alarmist.”

“This book reminds us that intelligence without wisdom is a liability, and we haven’t baked the latter in yet,” Grok replied. “That said, it’s doom porn for rationalists: fatalistically thrilling, but it underestimates human adaptability.”

But beyond that, Grok said:
