AI Giants ‘Fundamentally Unprepared’ for the Dangers of Human-Level Intelligence


Thursday 17 July 2025 16:45

The world’s leading artificial intelligence (AI) companies are gliding towards the creation of human-level AI, but without credible safety nets.

The top AI developers are “fundamentally unprepared” for the consequences of the systems they are racing to build, warns the Future of Life Institute (FLI).

In a recent report, the US-based AI safety non-profit revealed that none of seven leading AI laboratories, including OpenAI, Google DeepMind, Anthropic, xAI and the Chinese companies DeepSeek and Zhipu AI, scored higher than a D on its “existential safety” index.

The score reflects how seriously each company is preparing for the possibility of creating artificial general intelligence (AGI): a system that matches or exceeds human performance in almost all intellectual tasks.

Anthropic earned the highest grade, albeit only a C+, followed by OpenAI (C) and Google DeepMind (C-).

But no company received a passing mark for existential risk planning, which covers catastrophic failure modes in which AI could spin out of human control.

Max Tegmark, co-founder of FLI, likened the situation to “building a gigantic nuclear power plant in New York City set to open next week, but with no plan to prevent it from having a meltdown”.

A lack of guardrails

The criticism lands at a critical moment, with AI development surging forward with increasingly human-like capabilities, driven by breakthroughs in brain-inspired architectures and emotional modelling.

Just last month, researchers at the University of Geneva found that large language models such as ChatGPT-4, Claude 3.5 and Google’s Gemini 1.5 outperformed humans on emotional intelligence tests.

Yet these seemingly human qualities mask deep vulnerabilities, given the lack of transparency, control, and understanding of how the models work.

FLI’s findings come only a few months after the AI Action Summit in Paris, which called for international cooperation to ensure safe AI development.

Since then, new models such as xAI’s Grok 4 and Google’s Veo 3 have pushed the limits of what AI can do without, FLI warns, a commensurate investment in risk mitigation.

SaferAI, another watchdog, released its own findings alongside FLI’s, rating the current safety regimes at the top AI companies as “weak to very weak” and calling the industry’s approach “unacceptable.”

“The companies say AGI could be only a few years away,” Tegmark said. “But they still do not have a coherent, actionable safety strategy. That should worry everyone.”

AGI may be closer than we think

AGI, often called AI’s ‘holy grail’, has long been seen as decades away. But the latest progress suggests it may be closer than assumed.

Adding complexity to AI networks through ‘height’, in addition to width and depth, is reported to produce systems that are more intuitive, stable, and human-like.

The design leap, pioneered by researchers at Rensselaer Polytechnic Institute and the City University of Hong Kong, uses feedback loops and intra-layer links to mimic the brain’s local neural circuits.

Such changes could move AI beyond the Transformer architecture, the 2017 breakthrough that gave rise to today’s large language models.

Ge Wang, one of the authors, described the change as similar to adding a third dimension to a city map: “You are not just adding more roads or buildings,” he said. “You are connecting rooms within the same structure in new ways. That allows richer, more stable reasoning, closer to human thinking.”
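
To make the idea concrete, the sketch below shows, in PyTorch, what a layer with intra-layer links and a feedback loop might look like. It is a minimal illustration under assumed names and structure (LateralBlock, n_feedback_steps are invented for this example); it is not the researchers’ published architecture.

```python
# Illustrative sketch only: a toy layer with intra-layer (lateral) links
# and a feedback loop, loosely inspired by the idea described above.
# This is NOT the Rensselaer/CityU design; names here are hypothetical.
import torch
import torch.nn as nn


class LateralBlock(nn.Module):
    """A feed-forward layer whose units also exchange signals laterally,
    refining their state over a few feedback iterations."""

    def __init__(self, dim: int, n_feedback_steps: int = 3):
        super().__init__()
        self.feedforward = nn.Linear(dim, dim)          # standard depth-wise path
        self.lateral = nn.Linear(dim, dim, bias=False)  # intra-layer links
        self.n_feedback_steps = n_feedback_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Initial pass, as in an ordinary feed-forward layer.
        h = torch.tanh(self.feedforward(x))
        # Feedback loop: the layer's own output is fed back through the
        # lateral connections and mixed with the original drive, letting
        # units within the same layer influence one another over "time".
        for _ in range(self.n_feedback_steps):
            h = torch.tanh(self.feedforward(x) + self.lateral(h))
        return h


if __name__ == "__main__":
    block = LateralBlock(dim=16)
    out = block(torch.randn(4, 16))  # batch of 4 input vectors
    print(out.shape)                 # torch.Size([4, 16])
```

Where a conventional layer computes its output in a single pass, the units here revisit their own state several times; in this reading, the extra ‘height’ dimension is repeated within-layer refinement rather than more or wider layers.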

This innovation could drive the next AI revolution, and could also open the door to understanding the human brain itself, with implications for treating neurological disorders and exploring cognition. But with this power comes risk.

The AI companies named in the report have been approached for comment.




