In a new report from Elon University’s Imagining the Digital Future Center, experts call for radical change across institutions and social structures.
Experts Call for Radical Change Across Institutions and Social Structures, Warning That AI Will Be Significantly More Influential in the Next 10 Years or Less
The vast majority of expert respondents in a new 2026 report by Elon University’s Imagining the Digital Future Center called for leaders to work together now to build a coordinated resilience infrastructure for the age of artificial intelligence (AI) to counterbalance the human and systemic challenges posed by widespread AI adoption. Some 82% said AI will play a significantly larger role in shaping people’s lives and key societal functions in the next 10 years or less. They urged an “institutions-first” resilience agenda because the most significant problems arise from a life-encircling AI infrastructure.
In more than 160 impassioned essays, the global experts noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit” and after-the-fact personal adaptation – are not enough to help humans flourish as they adjust to an AI-infused future.

“The central risk described by these experts is not a single catastrophic AI event,” said report co-author Janna Anderson, professor of communications and senior researcher for the ITDF Center. “They said accelerated AI use will lead to a cumulative reallocation of human agency until people and institutions find it harder to question, contest or even notice what has changed. That drift can look like ‘progress’ in the short term, but it has a price – the gradual weakening of human judgment, accountability, shared truth and the social fabric that makes self-government possible.”
Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: “AI will diffuse responsibility by design. … Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”
The experts responding to this canvassing are an international and notably cross-disciplinary mix of people with academic, professional, technical and industry experience.
> Building Human Resilience for the Age of AI website
> Full details about the participating global experts and methodology are available here.
> The executive summary is available here.
The full report PDF is 376 pages. It includes experts’ full responses to the open-ended essay question. This is the 52nd “Future of Digital Life” report issued by ITDF since 2005.

“One of the major surprises to me in these responses is that we wrote our questions about resilience wondering about individual resilience and its various parts. Yet these experts were insistent that humanity’s best response for building a brighter future as we evolve with our AI systems must start at a higher level,” said Lee Rainie, director of the ITDF Center. “They note how AI has already become part of our environment, embedded in often-invisible ways in our lives, and it will take a systems-level response to shore up our inborn capacities.”
Alison Poltock, co-founder of AI Commons UK, wrote, “We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.”
Mel Sellick, founder of the Future Human Lab, said, “AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are the last generation that knows what human capacity felt like before it became inseparable from AI.”
Srinivasan Ramani, Internet Hall of Fame member, former research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote, “AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?”
The experts underscored the urgency of taking action. Salman Khatani, manager of the IMAGINE Institute of Futures Studies in Pakistan, wrote, “The window for proactive intervention is now – we have perhaps five to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.”
Taken together, they suggested a sweeping agenda for developing human resilience in the AI Age, grounded in the conviction that actions by individuals alone are not sufficient. Many of the concerns and proposed solutions are crosscutting, and they said collaboration among societal actors is crucial; many items listed under only one of the settings below could be undertaken in others. A selection of goals to target:
For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable “red lines” and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.
For AI developers: Do better than designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage people to reflect on choices; train AIs to cite and honor humanity’s intellectual and psychological foundations; build systems that buttress humans’ capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.
For business leaders: See the call to action in the items above; play a role in initiating and carrying out that change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs’ challenges to people’s self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.
For educators: Create literacy regimes in all AI-related domains, particularly teaching “existential literacy” – the cultivation of individuals’ understanding of how technologies shape goals, values and identities. They urged the teaching of skills and development of norms that encourage people to consciously navigate life’s fundamental challenges; to strive to retain and apply the capabilities of metacognition, discernment and epistemic vigilance – to be responsible for making their own decisions, retaining agency; to strengthen their ability to adapt to change and manage friction, paradoxes, ambiguity and anxiety; and to focus on critical human traits such as curiosity and social and emotional intelligence.
For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as “analog communities,” “dumbphones” and “dumb homes” that allow people to avoid algorithmic mediation and surveillance technology.
For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don’t be passive, don’t be hasty and don’t be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.
Many experts expressed optimism, saying if we are resilient and all goes well, humans will flourish in the AI age. Internet pioneer Doc Searls wrote that humans will come to rely on AIs to help with the myriad details of modern life. “Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976,” he wrote. “But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.”
While most comments were focused on developing human resilience for the AI Age, a number of futures-scenario predictions were included in the report. A small selection of the many predictions:
Digital advances drive sex and childbirth declines: “Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.” – Greg Sherwin, Singularity University global faculty member based in Portugal, previously senior principal engineer at Farfetch
“Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems.” – Tracey Follows, founder and CEO of Futuremade, a UK-based futures consultancy
Solitude will be lost: “Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.” – Paul Saffo, prominent Silicon Valley-based forecaster
The retirement age will be manipulated to maintain ‘full employment’: Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the “governor” (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron, past president of the Center for Policy on Emerging Technologies
Battles will occur over defining what is ‘human’: “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans. … ‘Physical AI’ will fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs used today.” – Ray Wang, chair and principal analyst at Constellation Research
AIs will gain rights: “We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights.” – John Smart, president of the Acceleration Studies Foundation and author of “Introduction to Foresight”
“AI psychosis and other forms of mental illness will arise. The further erosion of a solid foundational reality will create a great vulnerability. Coping with these issues will require new approaches to the diagnosis and treatment of mental illness. It will also demand new approaches to evaluating and appreciating the impact of human relationships with AIs and deeper assessment and understanding of consciousness itself.” – Stephan Adelson, president of Adelson Consulting Services
Superstupidity (not superintelligence) is the real threat: “The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film ‘Idiocracy’ is prophetic.” – Roger Spitz, founder of the Disruptive Futures Institute in San Francisco
Agent failures will start with social (not technical) problems: “Agentic systems will fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation.” – Daniel Erasmus, founder at Serious Insights, based in Amsterdam
As agents take over, the internet will become a network of databases, not websites: “As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.” – Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University
This study is based on a canvassing with a non-random sample conducted between Dec. 26, 2025, and Feb. 12, 2026. In all, 386 experts responded to at least one aspect of the canvassing; 251 provided written answers to an open-ended question – more than 160 provided detailed essay-length responses. Imagining the Digital Future is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead. The Center was established in 2000 as Imagining the Internet and renamed with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university located in Elon, North Carolina.