
Existential Threats and Risks to All

EXTRA: Subscribe to our e-newsletter

The Newsletter complements our InfoHub by providing a brief monthly bulletin on the latest news and insights about threats to humanity’s survival, helping you stay at the cutting edge. The content includes original material, such as interviews and articles, as well as news on upcoming events organized by EXTRA, WAAS, and our partners. It also features reviews of recently released existential risk reports and articles, as well as important news items from the global press and NGOs.

Newsletter Archive

2025


UPCOMING EVENTS

Webinar
Coping with Polycrisis and Systemic Risks: New approaches to assessment and governance
Existential Threats and Risks to All (EXTRA) Working Group
7 November 2025 | Online 1:30 PM CEST
Register HERE

Conference
The Future We Agreed: One Year of the Pact
AI Ethics and Integrity International Association (AIEI)
November 11, 2025 | Lisbon 6:00-11:00 PM CEST

Summit
Second World Summit for Social Development
United Nations
November 4-6, 2025 | Doha, Qatar

PAST EVENTS

Meet & Greet
EXTRA – X-Risk Cafe
Existential Threats and Risks to All (EXTRA) Working Group
22 October 2025 | Online 8:00 PM CEST
Register HERE

Webinar
Empowering Educators: Building AI Pedagogy and Literacy for Future Learning
Higher Education Sustainability Initiative (HESI)
October 21, 2025 | Hybrid Event 7:00-8:30 AM EST

Conference
Geneva Peace Week 2025

Geneva Peacebuilding Platform
13-17 October 2025 | Geneva, Switzerland

Conference
International Decade of Sciences for Sustainable Development

World Conference on Science and Art for Sustainability
22-24 September 2025 | Belgrade, Serbia

Webinar
Hidden Correlations and Systemic Risk in Global Food System Vulnerability Seminar
Centre for the Study of Existential Risk
03 September 2025 | 17:30 – 18:30 (UK Time/GMT+1)

INTERVIEWS, ARTICLES, WEBINARS AND REVIEWS
Webinars

EXTRA Interviews: Jerome Glenn & Artificial Intelligence – Urgent AGI Governance Challenges
EXTRA and Jerome Glenn
October 13, 2025
Interview Recording
The EXTRA InfoHub of the World Academy of Art and Science presents a critical conversation on the urgent governance challenges posed by Artificial General Intelligence and the three-year window humanity has to establish proper regulatory frameworks. Find it on YouTube, and check his latest book here.

LONDON FUTURISTS and EXTRA InfoHub: Invitation to Collaborate
EXTRA and London Futurists
September 20, 2025
Webinar Recording
The EXTRA Working Group members shared their vision for the EXTRA InfoHub, discussed engagement opportunities for organizations and individuals, and presented further insights from the 20 Notable Reports on Existential Threats and Risks. Find it on YouTube.

What is the Significance of the UN Global Risk Report?
EXTRA, UNESCO BRIDGES, Pardee Institute, and ASRA 
September 08, 2025
Webinar Recording
With some of the report’s authors, we reflect on the methodology, scope, and foresight scenarios of the flagship document marking the entry of the UN headquarters into the arena of risk overview reporting. Find it on YouTube.

Articles

Recent Reports and Articles on the AI Race, Impacts, and Needed Guardrails
Michael Marien, EXTRA Director of Research
October 17, 2025
Special Collection
For better and worse, Artificial Intelligence (AI) is already widespread, and still evolving, perhaps to AGI and superintelligence in the next few years. This overview seeks to identify many of the headlines and bottom lines of recent reports and articles, as well as three books—all published in 2025, with two exceptions. It is divided into three major parts: I) The AI Race between the US and China, and a handful of massively spending US technology organizations; II) Impacts of AI, both current and expected; and III) Creating Guardrails for this emerging and influential technology.
Read more

Balancing Benefits and Risks: The Role of AI in Education
Polonca Serrano, Assist. Prof., Alma Mater Europaea University 
October 17, 2025
Article
AI is transforming education through intelligent tutoring systems, chatbots, and analytics tools that adapt to individual learning styles, provide real-time feedback, and improve outcomes. However, it introduces risks such as superficial learning, reduced emotional resilience, deepening inequalities, threats to academic integrity, and platform dependence. AI cannot replace human judgment, critical thinking, and interpersonal guidance. The article explores AI’s benefits and risks in education, emphasizing ethical, inclusive, and strategic implementation.
Read more

Beyond Efficiency: AI for Rhythm-Aware, Compassionate Healthcare
Kiriti Prasad Choudhury, Manager, Beximco Pharmaceuticals
October 17, 2025
Article
AI is changing healthcare, but technology alone can’t solve the challenges of aging populations, chronic diseases, and system strain. Modern medicine excels in precision yet lacks empathy, and healthcare remains fragmented. Integrating Medicine, Nature, Mind, and Rhythms—principles from Ayurveda and Chinese Medicine, now research-validated—is essential. The article outlines a rhythm-aware framework using humane AI.
Read More

Europe’s Moral Compass for AI: From Regulation to Realisation
Samraj Matharu, Founder, The AI Lyceum
October 17, 2025
Article
AI is rapidly transforming society. The EU has responded with a comprehensive framework, including the AI Act and the new Apply AI Strategy, to regulate AI risks and promote responsible deployment across sectors. The vision of an evolving “agentic web” highlights AI’s growing autonomy while maintaining responsible oversight.
Read More

Technology and The Crisis of Containment
WAAS/EXTRA Working Group, Prof. Thomas Reuter
August 13, 2025
Article
In a recent discussion in the WAAS Working Group on Existential Threats to Human Security, David Harries, former chair of Pugwash Canada and Associate Executive Director of Foresight Canada, raised the concern that the conventional approach to threat containment, based to a large extent on early warning, is becoming obsolete. 

In the wake of technological innovations such as AI, but also as a consequence of the increasing proliferation and speed of creation of new threats such as biological weapons, “state actors, state agents, public and private organizations, and individuals are now more than equipped to escape ‘containment’ and defeat ‘early warning’,” he noted. This article argues that containment may be regained and maintained only by applying principles of human security.

The current crisis of technology containment affects all aspects of contemporary life. It is testimony to a process of technological innovation that now appears to be increasingly out of control. The lack of containment in technology is not the result of an oversight or an accident, nor can it be reduced to innovation in pursuit of profit, though that obviously plays a role. Instead, I contend, it is driven by a relentless race for ever more extreme tech capabilities in the service of military supremacy. 

This race can be traced back to the dawn of history, but today it has reached unprecedented extremes, driven primarily by escalating geopolitical struggles for dominance among competing major powers. Once a slow and meandering trickle, the race for technical and general supremacy is now a raging torrent advancing at exponential speed, and recently has been accelerated to yet another level of recklessness by the use of super-human machine intelligence for technological development and deployment.

Tech innovation is a war machine. Technology, more generally, is about power and control over nature or other people. It intrinsically lends itself to hubris—perhaps not by necessity, but so it has proven.

The political realists tell us: Stop military tech development, and you will be destroyed! So, of course, nobody is going to stop, even if they know a particular piece of tech innovation can kill all of humanity or all life on earth. Au contraire: All the more reason to push ahead with it relentlessly! Everyone wants the most murderous form of AI, the most deadly biotech, or other weaponizable technology under their control – first, before their opponents beat them to it. 

There is no scope for regulations in such a perpetual war, nor is there time to apply a precautionary principle. And given the ubiquity of so-called ‘dual use’, civilian technology is often directly and always indirectly implicated in this race. After all, contenders for global power require an economic surplus (or debt) to be able to pay the steep price for owning the fastest technological war machine. Money is thus also weaponised.

This situation has escalated during the industrial revolution and, more recently, the digital revolution. We have progressed from the industrial warfare of the world wars to the nuclear stand-off of the Cold War to the hybrid and drone warfare of today’s major conflicts. The associated escalation of risk to human survival is now so acute that it calls for a fundamental rethink of the way security is to be achieved. It asks for a shift away from a military to a human security paradigm. But what does that mean?

Only voluntary restraint or ‘inner containment’ will save us from ourselves. Human security comes from within us individually, and from within our social systems of mutuality. External, physical containment, based on out-innovating or pre-empting your opponents, is what is driving the game. It is not going to end. More technology is the problem, not the solution. The solution either lies within us, or there is none.

Inner containment is an ethic that does not assume or require intrinsic benevolence. It assumes merely the insight and genuine conviction that life requires containment, that it is wise to exercise moderation in dealing with other people and their interests. It is a commitment to law, and to the maintenance of effective mechanisms for correcting the few who prove incorrigible – law enforcement.

A functional and durable system of international law and law enforcement cannot be imposed. Law is based on agreement to exercise restraint in the pursuit of self-interest and self-preservation. For a law to be loved and jointly upheld, not just feared and obeyed under duress, it must be built on voluntary commitment. That can happen only if laws are rational and just and hence acceptable to all, and desirable as well, with all actors being well aware that security and even survival are not achievable long term in a world bristling with killer technology and devoid of commitment to lawful behaviour. This is the stance I refer to as inner containment. Such inner containment is the only way to de-escalate. It is the human quality that enables human security, not from others but with them.

This is not some far-fetched proposition, but already the majority position. As it is, most nations would be quite content to live safely under a just international law, just as most individuals are happy to live under a just national law, or would be if they had the opportunity. There are some national actors, however, who think themselves exceptional or entitled to dominate in the name of their security, and others who feel a need to avenge past wrongs, or wish to indulge their lust for more power, all in the name of their nation.

These national actors cannot be policed at present. Their operations typically have the character of organised crime: ruthless, secretive, profitable, and hence very well funded, which is what gives them impunity. I say ‘actors’ rather than ‘people’ because the majority of people, even in these nations, do not want war, or at least not unless they face desperately difficult living conditions or are whipped into a frenzy by incendiary propaganda authored by well-organised criminal actors. Active warmongering and war profiteering are crimes against humanity, at home and abroad.

We have never had a comprehensive global security concord based on the insight that humanity can no longer afford to live without inner containment. But we do have a rudimentary international legal structure that is imperfect and sometimes unjust, yet continues to evolve. Recent events in Ukraine and Gaza are instructive as to the limited effectiveness of taking matters to the International Court of Justice or a war crimes tribunal – after the event. This is not a precautionary approach.

On that front, however, there are some interesting precedents. Nuclear weapons treaties, despite their failures and rapidly increasing fragility, are an instructive case because, until now at least, 80 years after the obliteration of Hiroshima and Nagasaki, they have prevented a repeat of such actions and a potential global nuclear Armageddon. We now need to respond much more broadly to the fact that physical containment is a self-defeating process and is quickly becoming untenable not just in the context of nuclear weapons, but in general. 

We do not yet seem to be approaching such a concord, except perhaps by the painful and dubitable route of calamity. History shows that innovative frameworks for regulating human relations tend to emerge and find broad support in the period after a major conflagration. But there is no genuine precedent for the current and emerging state of war technology. A global conflagration today could take multiple forms and be initiated by numerous state and non-state actors. 

It may be very hard or impossible to come back from such a disaster. Waiting out the cycles of history may seem ok if one chooses to adopt a detached long-term perspective, but we cannot count on such cyclical developments anymore. History may end with the next downward turn, and not at all in the manner that Samuel Huntington predicted.

The duty of scientists and all rational thinkers and champions of peace is to lay out pathways toward de-escalation, and that means deceleration of the frantic technological supremacy race. We are not doing this, or not systematically and publicly enough. The vast majority of people and nations would welcome genuine rule of law, and we should be emboldening them to step up to the challenge by making the rational case for this more human approach to security.

EXTRA, the World Academy of Art and Science’s Working Group on Existential Threats and Risks to All, is therefore planning to organise a foresight session on how the war technology machine may be halted, and how the usurpation of lawful government by (suit-wearing) organised crime can be prevented. Various UN reform options need to be discussed in such a forum, drawing on lessons learnt from experiments such as the recent push for a universal nuclear weapons ban by non-nuclear-armed nations.

The world may just be ready to embrace inner containment now, seeing that it has become a matter of human survival. It certainly needs to be tried.

First version published as: T.A. Reuter 2024. ‘The Crisis of Containment – Time for a new approach?’ Cadmus: Journal of the World Academy of Art and Science 5(3, Part 2):44-46.