Amid the rapid development of AI technology, a wave of reflection is gathering momentum. Recently, more than 800 public figures, including Apple co-founder Steve Wozniak, often called the "enfant terrible" of technology, and Prince Harry of the United Kingdom, have expressed deep concern about the future of AI. Their joint statement calls for a ban on AI research that could lead to "superintelligence" until science can prove its development is "safe and controllable."
Cross-sectoral consensus on concerns
This statement, initiated by the Future of Life Institute, has received support from well-known figures across different fields and the political spectrum. Signatories include AI research pioneer and Nobel Prize winner Geoffrey Hinton, former Trump adviser Steve Bannon, former Chairman of the Joint Chiefs of Staff Mike Mullen, musician will.i.am, as well as Prince Harry of the United Kingdom and Apple co-founder Steve Wozniak.
This broad coalition of signatories further highlights that concerns about AI development have transcended traditional party and professional boundaries.
AI companies are racing to build superintelligent AI, despite its many risks.
Let's take our future back.
📝 Sign the Superintelligence Statement and join the growing call to ban the reckless development of superintelligence, until it can be made safely. #KeepTheFutureHuman
— Future of Life Institute (@FLI_org) October 22, 2025
The widening gap between development speed and regulation
“To some extent, this path has been chosen for us by AI companies, their founders, and the economic systems that drive them, but few have asked others, ‘Is this what we want?’” Anthony Aguirre, executive director of the Future of Life Institute, told NBC News.
The institute warned that AI is developing faster than the public can understand it, and that regulatory measures are lagging far behind the pace of technological advancement.
From AGI to superintelligence: the debate
Artificial general intelligence (AGI) refers to machines that can reason and perform tasks like humans, while superintelligence means AI that surpasses even human experts. X CEO Elon Musk said it "is happening in real time" right before our eyes, yet existing AI can still only handle limited tasks and has repeatedly failed in complex applications such as autonomous driving.
Absence of industry leaders and dissenting voices
Notably, the leaders of the tech giants most actively driving AI development, along with key figures at their companies, did not sign the statement. OpenAI CEO Sam Altman has predicted that superintelligence will be achieved by 2030 at the latest, though he has also warned of its potential risks.
This isn't the first warning about the speed of AI development. Last month, more than 200 researchers and public officials, including 10 Nobel laureates, issued an urgent call to set "red lines" for AI risks, though they focused on crises that are already emerging, such as mass unemployment, climate change, and human rights violations.
As AI investment continues to heat up, with companies like OpenAI pouring billions of dollars into building data centers and Meta actively investing in the development of "superintelligence," the debate over how to balance technological development with ethics will continue to shape humanity's future.



