Elon University

Improving Digital Public Forums’ Role in Democracy

The Future of Digital Public Spaces: Are the troubled social media platforms used for democratic discourse and informing the public likely to be improved by 2035?

Critics say activities on social media platforms are damaging democracy and the fabric of society. Can these digital spaces be significantly improved to better serve the public good by 2035? How? If not, why not? Researchers at Elon University and the Pew Internet Project asked experts to examine the forces at play and suggest solutions. 

Results released November 22, 2021. Pew Research Center and Elon University’s Imagining the Internet Center asked experts in a summer 2021 canvassing if it might be possible for social media platforms and other online public-discussion spaces to be improved by 2035 in ways that significantly serve the public good. More than 860 technology innovators, developers, business and policy leaders, researchers and activists responded to these specific questions.

The Questions – This canvassing of experts was prompted by debates about the evolution of digital spaces and whether online life is moving in a positive or negative direction when it comes to the overall good of society. Will technology developers, civil society, and government and business leaders find ways to create better, safer, more-equitable digital public spaces? Looking ahead to 2035, can digital spaces and people’s use of them be changed in ways that significantly serve the public good – yes or no?

If you answered “yes”: What reforms or initiatives may have the biggest impact? What role do you see tech leaders and/or politicians and/or public audiences playing in this evolution? What could be improved about digital life for the average user in 2035? What current problems do you see being diminished? Which will persist and continue to raise major concerns?

If you answered “no”: Why do you think digital spaces and digital life will not be substantially better by 2035? What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces may not much change?

862 respondents answered the yes-no question

  • 61% said they either hope or expect that by 2035 digital spaces and people’s uses of them WILL change in ways that significantly serve the public good.
  • 39% said they expect that by 2035 digital spaces and people’s uses of them WILL NOT change in ways that significantly serve the public good.
  • It is important to note that a large share of those who chose “yes” said it was their “hope” only and/or wrote that the changes between now and then could go either way, often citing one or more difficult hurdles that must be overcome first. The simple quantitative results are not fully indicative of the complexity of the challenge; the important findings are in the respondents’ rich, deep qualitative replies. The full 160-page report includes full details.

Among the key themes emerging in hopeful respondents’ answers were:

* Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization and – in order to accelerate user engagement – their algorithms favor extreme and hateful speech. They said social media platforms have come to dominate the public’s attention to the point of replacing journalism and other traditional sources in providing information to citizens. These experts argued that surveillance capitalism is not the only way to organize digital spaces. They predict that better spaces in the future will be built around algorithms designed with the public good and ethical imperatives at their core. They hope upgraded digital “town squares” will encourage consensus rather than division, downgrade misinformation and deepfakes, surface diverse voices, kick out “bozos and bots,” enable affinity networks and engender pro-social emotions such as empathy and joy. (A toy sketch of this algorithmic shift follows this list of themes.)

* Government regulation plus less-direct “soft” pressure by government will help shape corporations’ adoption of more ethical behavior: A large share of these experts predicted that legislation and regulation of digital spaces will expand; they said the new rules are likely to focus on upgrading online communities, solving issues of privacy/surveillance and giving people more control over their personal data. Some argued that too much government regulation could lead to negative outcomes, possibly stifling innovation and free speech. There are worries that overt regulation of technology will empower authoritarian governments by letting them punish dissidents under the guise of “fighting misinformation.” Some foresee a combination of carefully directed regulation and “soft” public and political pressure on big tech, leading corporations to be more responsive and attuned to the ethical design of online spaces.

* The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will bring improvements: A share of these experts predicted that the public will apply more pressure for the reform of digital spaces by 2035. Many said tech literacy will increase, especially if new and improved programs arise to inform and educate the public. They expect that people who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces. Some experts noted that this is how every previous advance in human communication has played out.

* New internet governance structures will appear that draw on collaborations among citizens, businesses and governments: A portion of these experts predict the most promising initiatives will be those in which institutions collaborate along with civil society to work for positive change that will institutionalize new forms of governance of online spaces with public input. They expect these multistakeholder efforts will redesign the digital sphere for the better, upgrading a tech-building ecosystem that is now too reliant on venture capital, fast-growth startup firms and the commodification of people’s online activities.
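The algorithmic shift these hopeful experts describe can be made concrete. The toy Python sketch below is hypothetical – it is no platform’s actual ranking code, and every score and weight is invented for illustration – but it contrasts a feed ranker that optimizes engagement alone with one that also weighs toxicity, misinformation risk and source diversity:

```python
# A minimal, hypothetical sketch (not any platform's actual ranking code) of
# the design shift described above: moving from pure engagement-maximizing
# feed ranking to an objective that also weighs public-good signals.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # e.g., click/reply probability from a model
    toxicity: float              # 0..1 score from a (hypothetical) classifier
    misinfo_risk: float          # 0..1 score from a (hypothetical) fact-check model
    source_diversity: float      # 0..1: how much this source broadens the feed

def engagement_score(p: Post) -> float:
    # The status quo these experts criticize: rank purely by predicted
    # engagement, which tends to reward extreme and divisive content.
    return p.predicted_engagement

def public_good_score(p: Post, w_tox=0.5, w_misinfo=0.7, w_div=0.3) -> float:
    # One way to encode "ethical imperatives at the core": penalize toxicity
    # and misinformation risk, reward source diversity. Weights are invented.
    return (p.predicted_engagement
            - w_tox * p.toxicity
            - w_misinfo * p.misinfo_risk
            + w_div * p.source_diversity)

posts = [Post(0.9, 0.8, 0.7, 0.1), Post(0.6, 0.1, 0.05, 0.8)]
print(sorted(posts, key=engagement_score, reverse=True)[0])   # outrage post wins
print(sorted(posts, key=public_good_score, reverse=True)[0])  # civil post wins
```

The point of the sketch is that a ranking objective is a design choice: which signals to include, and at what weights, is precisely the ethical question these experts want settled in favor of the public good.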

Among the key themes emerging in worried respondents’ answers were:

* Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears, these experts argued. In a digitally networked world in which people are constantly surveilled and their passions are discoverable, messages that weaponize human frailties and foster mis/disinformation will continue to be spread by those who wish to exert influence to meet political or commercial goals or cultivate divisiveness and hatred.

* The trends toward more datafication and surveillance of human activity are unstoppable: A share of experts said advances in digital technology will worsen the prospects for improving online spaces. They said more human activity will be quantified; more “smart” devices will drive people’s lives; more environments will be monitored. Those who control tech will possess more knowledge about individuals than those people know about themselves – predicting their behavior, getting inside their minds, pushing subtle messages to them and steering them toward certain outcomes. Such “psychographic manipulation” is already being used to tear cultures asunder, threaten democracy and stealthily stifle people’s free will.

* Haters, polarizers and jerks will gain more power: These experts noted that people’s instincts toward self-interest and fear of “the other” have led them to commit damaging acts in every social space throughout history, but the online world is different because it enables instantaneous widespread provocations at low cost, and it affords bad actors anonymity to spread any message. They argued that the current platforms, with their millions to billions of users, and any new spaces that might be invented and introduced can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.

* Humans can’t keep up with the speed and complexity of digital change: Internet-enabled systems are too large, too fast, too complex and constantly morphing, making it impossible for either regulation or social norms to keep up, according to some of these experts. They explained that accelerating change will not be reined in, meaning that new threats will continue to emerge as new tech advances arise. Because the global network is too widespread and distributed to possibly be “policed,” these experts argue that humans and human organizations as they are structured today cannot respond efficiently and effectively to challenges confronting the digital public sphere.

Full Report with Full Details and Complete Findings

The Future of Digital Spaces and Their Role in Democracy

Many experts say public online spaces could significantly improve by 2035 if reformers, big technology firms, governments and activists tackle the problems created by misinformation, disinformation and toxic discourse. Others expect continuing troubles as digital tools and forums are used to exploit people’s frailties, stoke their rage and drive them apart.

Those who worry about the future of democracy focus much of their anxiety on the ways that activity in online public spaces is harming deliberation and the fabric of society. To be sure, billions of users appreciate what the internet does for them. But the climate in some segments of social media and other online spaces has been called a “dumpster fire” of venom, misinformation, conspiracy theories and goads to violence.

Social media platforms are drawing fire for their role in all of this. After the Jan. 6, 2021, attack on the U.S. Capitol, a congressional panel requested that Facebook, Google, Twitter, Parler, 4chan, Twitch and TikTok release all records related to misinformation around the 2020 election, including efforts to influence or overturn the presidential election results. In September 2021, a five-part series in The Wall Street Journal exposed details that seem to show that Facebook has allowed the diffusion of misinformation, disinformation and toxicity that has resulted in ethnic violence and harm to teenage girls and has undermined COVID-19 vaccination efforts. And The Journal’s source, Facebook whistleblower Frances Haugen, followed up by telling the U.S. Senate that she had gone public with her explosive material “because I believe that Facebook’s products harm children, stoke division and weaken our democracy.”

Worries over the rise in the acrid tone and harmful, manipulative interactions in some online spaces, and concerns over the role of technology firms in all of this, have spawned efforts by tech activists to redesign online spaces in ways that facilitate debate, enhance civility and provide personal security. A selection of these initiatives was described in a spring 2021 article in The Atlantic by Anne Applebaum and Peter Pomerantsev. Among the suggested solutions documented in the piece:

  • The creation of an internet version of public media along the lines of PBS and NPR
  • “Middleware” that could allow people to set an algorithm to give them the kind of internet experience they want, perhaps without the dystopian side effects
  • Online upvoting systems that favor content that could push partisans toward consensus rather than polarizing them (a minimal sketch of this idea follows this list)
  • An internet “bill of rights” allowing “self-sovereign identity” that lets people stay anonymous online, but weeds out bots
  • “Constructive communication” systems set up to dial down anger and bridge divides.
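The upvoting idea noted in the list above is roughly what “bridging-based” ranking tools such as Pol.is attempt. Below is a minimal sketch – invented votes and group labels, not Pol.is’s actual algorithm – of scoring comments by cross-group agreement rather than raw vote totals:

```python
# A minimal sketch (illustrative data, not Pol.is's actual algorithm) of a
# consensus-favoring upvote system: a comment scores highly only when voters
# from *both* partisan groups approve it, not when one side piles on.
votes = {
    # comment_id: approvals (1) / rejections (0) from each group of voters
    "partisan_zinger": {"A": [1, 1, 1, 1, 1, 1, 1], "B": [0, 0, 0]},
    "common_ground":   {"A": [1, 1, 1, 0],          "B": [1, 1, 0, 1]},
}

def group_consensus_score(comment_votes: dict[str, list[int]]) -> float:
    # Score = product of per-group approval rates. A comment loved by one
    # group and rejected by the other scores near zero; broad agreement wins.
    score = 1.0
    for group_votes in comment_votes.values():
        score *= sum(group_votes) / len(group_votes)
    return score

for cid, v in sorted(votes.items(), key=lambda kv: -group_consensus_score(kv[1])):
    print(f"{cid}: {group_consensus_score(v):.2f}")
# common_ground (0.56) outranks partisan_zinger (0.00), even though the
# zinger collected more raw upvotes (7 vs. 6).
```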

In light of the current conversations about the need to rethink and redesign online public spaces, Pew Research Center and Elon University’s Imagining the Internet Center asked experts how they expect the digital public sphere to evolve by 2035. Some 862 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question:

Looking ahead to 2035, will digital spaces and people’s use of them be changed in ways that significantly serve the public good?

Some 61% chose the option declaring that, “yes,” by 2035, digital spaces and people’s uses of them will change in ways that significantly serve the public good; 39% chose the “no” option, positing that by 2035, digital spaces and people’s uses of them will not change in ways that significantly serve the public good.

It is important to note that a large share of those who chose “yes” – that online public spaces will improve significantly by 2035 – said it was their “hope” only and/or also wrote in their answers that the changes between now and then could go either way. They often listed one or more difficult hurdles to overcome before that outcome can be achieved. The numeric findings reported here are not fully indicative of the complexities of the challenges now and in future.

In fact, in answer to a separate question in which they were asked how they see digital spaces generally evolving now, a majority (70%) said current technological evolution has both positives and negatives; 18% said digital spaces are evolving in a mostly negative way that is likely to lead to a worse future for society; 10% said the online world is evolving in a mostly positive way that is likely to lead to a better society; and about 3% said digital spaces are not evolving in one direction or another.

It is also worth noting that the responses were gathered in mid-summer of 2021. People’s responses came in the cultural context of the ongoing COVID-19 pandemic, and at a time when rising concerns over climate change, racial justice and social inequality were particularly prominent – and half a year after the Jan. 6, 2021, attack at the U.S. Capitol in the aftermath of one of the most highly contentious U.S. presidential elections in recent history.

This is a nonscientific canvassing, based on a nonrandom sample. The results represent only the opinions of the individuals who responded to the queries and are not projectable to any other population.

The bulk of this report covers these experts’ written answers explaining their responses to our questions. They sounded many broad themes in sharing their insights about the evolution of the digital “town squares” most people frequent.

As they considered these questions, some of these experts predicted that changes of a different order of magnitude are also in store by 2035. Some of the most compelling ideas include:

  • Brad Templeton advanced a “new moral theory [that] it is wrong to exploit known flaws in the human psyche.” He argues that the embrace of “psyche-exploitation avoidance” would lead to a new design of online spaces.
  • Raashi Saxena urged, “We do not have a global, agreed-upon list of digital harms that can be inflicted upon us … We first need to define the rights to be protected.”
  • Mike Liebhold outlined a future with applied machine intelligence everywhere, continuous pervasive cybersecurity vulnerabilities, ubiquitous conversational bot agents, holographic media and telepresence and cobotics (collaborative robotics), among other things.
  • Carolina Rossini said a regulatory agency to monitor technology’s impact on health – a Food and Drug Administration (FDA) for algorithms – should arise as increasing numbers of digital technology tools are placed in people’s bodies.
  • Robin Raskin predicted, “The metaverse – digital twins of real worlds or entirely fabricated worlds – will be a large presence by 2035, unfortunately with some of the same bad practices on the internet today such as personal-identity infringements.”
  • James Hendler believes there will be tech advances that allow people to control their online identities and privacy preferences in ways that thwart omnipresent surveillance schemes.
  • Cory Doctorow said the “tyranny of network effects” will be broken if interoperability is imposed on tech companies so that, for instance, people could move their social media networks from one platform to another and easily abandon online spaces they do not like.
  • Beth Simone Noveck expects new “governance models” for public online spaces that allow citizens and groups to participate directly in policymaking and provision of services.
  • Barry Chudakov predicts “the self will go digital” and exist in the flesh and in its digital avatar. “Identity is thereby multiple and fluid: Roles, sexual orientation and self-presentation evolve from solely in-person to in-space.”
  • Jerome Glenn said a new civilization will emerge as the “Information Age” gives way to the “Conscious-Technology Age” through the force of two megatrends: “First, humans will become cyborgs, as our biology becomes integrated with technology. Second, our built environment will incorporate more artificial intelligence.”

In the next section, we highlight the remarks of several dozen experts who gave some of the most wide-ranging answers or incisive responses to our question about the future of the digital public sphere.

That featured-quotes section is followed by the equivalent of more than 100 print pages of expert comments organized under the set of themes we set out at the top of this report.

The remarks made by the respondents to this canvassing reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their backgrounds and the locus of their expertise. Some responses are lightly edited for style and readability.

1. A sampling of key, overarching views

Many of the tech-reform advocates contributed to the canvassing and elaborated on their ideas.

For instance, Ethan Zuckerman, director of the Initiative on Digital Public Infrastructure at the University of Massachusetts-Amherst, said, “We can, absolutely, change digital spaces to better serve the public good. But we’ve not made the broad commitment to do so. Right now, we are overfocused on fixing existing broken spaces, for instance, making Facebook and Twitter less toxic. We need to do more work imagining and creating new spaces with explicit civic purposes and goals if we are to achieve better online communities by 2035. We begin solving the problem of digital public spaces by imagining spaces designed to encourage pro-social conversations. Instead of naively assuming that connecting people will lead toward increased social harmony, we need to recognize that functional public spaces require careful engineering, moderation and attention paid toward marginalized and traditionally silenced communities. This innovation is more likely to come from real-world communities who take control of their own digital public spaces than it is to come from tech entrepreneurs seeking the next billion-person network. Regulation has a secondary role to play here – its job is not to force Facebook and others into pro-social behavior, but to create a more level playing field for these new social networks.”

The following selection of responses covers some of the more panoramic and incisive big ideas shared by 51 of the 862 global thought leaders participating in this canvassing.

This is the fork in the road where people can choose a better future – or a downward path

Mark Davis, associate professor of media and communications at the University of Melbourne, wrote, “Against all expectations otherwise, we are still in the ‘Wild West’ phase of the internet, where ethical and regulatory frameworks have failed to keep up with rapid advances in technology. The internet, in this phase, and against early utopic hopes for its democratic utility, has had severely negative impacts on democracy that are not offset by its more-hopeful developments such as Black Twitter and #metoo, among the many innovative, emancipatory uses of online media. One reason for this is that the surveillance business model on which digital platforms operate – which has seen traditional liberal democratic intermediaries displaced to some extent by algorithmic intermediaries – privileges quantities of engagement over the qualities of content.

“Emancipatory movements exist in the digital folds of an internet designed to maximise corporate profits. It has seen a new class of mega-rich individuals and corporations emerge that, in effect, now own the infrastructure of the ‘public sphere’ and have enormous lobbying power over government. The affordances of these systems have at the same time fostered the creation of alternative media spheres where extremism and hate discourse continue to proliferate.

“We are fast approaching a crisis point where the failures of the present hyper-corporate, relatively unregulated model of the internet are having severe, detrimental impacts on public communication. We are at a proverbial fork in the road. One route leads to an ever-deeper downward spiral into digital dystopia: hyper-surveillance, predictive technology working hand in hand with authoritarianism, disinformation overload and proliferating online divisiveness and hatred. The alternative route is a more-regulated internet where accountability matters, guided by a commonly assented ethics of public culture.

“Is this alternative possible in an era of winner-takes-all partisanship and corporate greed so vast that it is literally interplanetary in its ambitions? I fear not, but if we are to be civic optimists then it is the only possible hope, and we have no alternative but to say ‘yes’ to a better digital future and to become digital activists who collectively work to make it happen.”

We should consider embracing a new moral theory: accepting that it is wrong to exploit known flaws in the human psyche

Brad Templeton, internet pioneer, futurist, activist and former president of the Electronic Frontier Foundation, said, “I hold some hope for the advancement of a new moral theory I am exploring. Its thesis is that it is wrong to exploit known flaws in the human psyche. A well-known example is gambling addiction. We know it is wrong to exploit that and we even make it illegal to exploit it and other addictive behaviours. On the other hand, we have no problem with all sorts of marketing and computer interaction tricks that unconsciously lead us to do things that, when examined later, we agree are against our interests and which exploit flaws well established in the scientific literature. Under this theory, A-B testing to see what is more addictive would be deprecated rather than considered a good idea.

“This psyche-exploitation avoidance approach is new but might lead to a way to design our systems that has stronger focus on our true interests. While it would be nice if we could make social media that are not driven by advertising, and thus work more toward serving the interests of users/customers than advertisers/customers, this is not enough. After all, Netflix also works hard to addict users and make them binge, even though it does not take advertising.

“I don’t think anybody knows what form the changes for the better in the digital public sphere will take, but it’s clear that the players and their customers find the current situation untenable. They will find solutions because they must. Tristan Harris has convinced Facebook to at least give lip-service to his ‘time well spent’ positioning; to make people feel, upon reflection, that their time on social media was worthwhile where today many feel it’s not.

“I have proposed there be a way for friends to anonymously ‘shame’ friends who post false and divisive material – a way that you can learn that some of your friends found your post false or lacking, without knowing who they were (so they don’t feel they will risk the relationship to tell you, for instance, that you fell for a false meme.)

“This will not be enough, but it’s a start. I also hope we’ll be trained to not trust video evidence any more than we do text because of deepfakes. It will get worse in some ways, too. This is an adversarial battle, with some forces trying deliberately to disrupt their enemies. But they will certainly try. Propaganda, driven by AI, will continue to be weaponized.”
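Templeton’s anonymous-feedback proposal does not specify an implementation, but its core mechanic is simple to sketch: aggregate flags from friends and reveal only a count once a small threshold is crossed, never the flaggers’ identities. The Python below is a toy illustration under those assumptions; a real system would also need server-side trust and protection against timing-based deanonymization.

```python
# A hypothetical sketch of the anonymous "shaming" idea described above:
# friends flag a post as false or divisive; the poster learns only an
# aggregate count once a small threshold is crossed, never who flagged it.
from collections import defaultdict

REVEAL_THRESHOLD = 3  # reveal nothing until several friends agree, so a
                      # single early flagger cannot be identified by timing

_flags: dict[str, set[str]] = defaultdict(set)  # post_id -> flagging friends

def flag_post(post_id: str, friend_id: str) -> None:
    _flags[post_id].add(friend_id)  # a set prevents duplicate flags

def feedback_for_poster(post_id: str) -> str:
    n = len(_flags[post_id])
    if n < REVEAL_THRESHOLD:
        return "No feedback yet."  # below threshold, say nothing at all
    return f"{n} of your friends found this post false or misleading."

for friend in ["alice", "bob", "carol"]:
    flag_post("post-42", friend)
print(feedback_for_poster("post-42"))  # 3 flags -> threshold reached
```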

People will use new tools to turn rage into public awareness, acceptance and rapport

Maja Vujovic, owner/director of Compass Communications in Belgrade, Serbia, predicted, “By engineering more tools to tap our commonalities rather than our differences, we will keep transcending our restrictive bubbles between now and 2035. Automatic translation and transcription already tackle our language differences. Our public fora, like Wikipedia and Quora, teach us about foreign cultures, customs or religions. We will also find new ways to manage our conflicting gender or political identities, by ‘translating,’ role-playing or modeling them (maybe through augmented reality and virtual reality). The gaming industry, for one, could creatively crush its misogyny and help reform hostile workplaces and audiences everywhere faster.

“Over these early digital decades, our online public spheres have brought major issues of contention to the surface – truly globally – for the first time ever. Social media algorithms exploited our many frustrations, thus the rage was all the rage. In the future, we’ll turn that public rage into public awareness, then into acceptance, then – in a distant future – into rapport. One step down, three to go; we will struggle through a four-step algorithm regarding each of our principal polar opposites. We will learn to hold ourselves accountable over time. When our public online spheres normalize our real identities (eliminating bozos and bots) we will prove civil on the whole. In the years to come, a new global consensus and protocols will inevitably emerge from and for dealing with worldwide emergencies such as pandemics or climate change.

“Improvements will largely be owed to the global public debates we passionately exercise online. If we, the taxpayers of all countries, crowdsource the most viable identity-vouching solutions, we could, de facto, become fully represented. The distributed technologies will boldly attempt to keep a tally of everyone in all of our demographic, economic, cultural and other tribes. …

“It would be ludicrous to not want to walk our talk directly once we become equipped to do so. We could then automate, gamify or distribute the governance (or choose ‘all of the above’). As a bonus, our global digital public spheres would vastly improve as well. In effect, we would be saving the civilization baby and purifying its bath water, too.”

A robust regulatory approach can improve more of the digital sphere

Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, said, “The Fourth Industrial Revolution has started in most countries, and we are witnessing manufacturing in the digital space in a way that is unprecedented. Our society will be smarter and have richer experiences – it will be bettered as it engages in more-immersive education and virtual-reality entertainment. Our currency may be totally digital. The Internet of Things (IoT) will facilitate a brighter society.

“However, there are many concerns. More financial heists and scams may be perpetrated through digital platforms. Cryptocurrency, due to its decentralised nature, is used to facilitate crime; ransomware perpetrators demand cryptocurrency as a method of untraceable payment, and illegal international deals are made possible by payment through untrackable cryptocurrency. Terrorism may be advanced using new robotics tools and digital identities to wreak more havoc. It is possible that, with a proper framework and a meticulous, robust regulatory approach, the positive advantages will outweigh the ills.

“Most aspects of our lives will be impacted positively by the emerging technologies. The IoT can usher in smart cities, smart agriculture, smart health, smart drugs, smart sports, smart businesses, smart digital currencies. Robotics will be used to combat pandemics by promoting less physical contact, helping to flatten the curves, and it will be used in advanced industrial applications.

“The opportunities are limitless. However, all hands should be on deck so that the negative impact will not erode the gains of digital evolution. Global collaboration through global bodies is necessary for positive digital evolution. International governance and national governance of each country will have to be active. Sensitisation of the citizenry against the ills of digital transformation is key to sustaining the gains. Inventors and private businesses have roles to play. A future technological singularity is also a threat.”

Tech alone can’t solve inequality or hate; humans must collaborate to bring true change

danah boyd, founder and president of the Data & Society Research Institute and principal researcher at Microsoft, commented, “Technology mirrors and magnifies the good, bad and ugly of society. There are serious (and daunting) challenges to public life in front of us that are likely to result in significant civil unrest and chaos – and technology will be leveraged by those who are scared, angry or disenfranchised even as technology will also be used by those seeking to address the challenges in front of us.

“But technology can’t solve inequality. Technology can’t solve hate. These require humans working together. Moreover, technology is completely entangled with late-stage capitalism right now, and addressing inequality/hate and many other problems (e.g., climate change) will require a radical undoing/redoing of capitalism. My expectation is that technology will be leveraged to reify capitalism rather than to help undo its most harmful components.”

These are challenging issues, but people and tools will evolve a better public sphere online

Vinton G. Cerf, vice president and chief internet evangelist at Google and Internet Hall of Fame member, observed, “Digital spaces have evolved dramatically over the past 50 years. During that time, programmable devices have become central to an unlimited number of products upon which we increasingly depend. Information space is instantly accessible thanks to the World Wide Web and search engines such as Google. Collaboration is facilitated with email, texting, shared documents, access to immeasurable amounts of data and increasingly powerful computer-based tools for its use.

“Over the next 15 years, instrumentation in every dimension will color our lives to include remote medical care, robotics and self-driving cars. Cities will have models of themselves they can use to assess whether they are functioning properly or not; these models will be invaluable to aid in response to emergencies and to smooth the course of daily life.

“During this same period, we will have to continue to cope with the amplifying effects of social media, including the side effects of misinformation, disinformation, malware, stalking, bullying, fraud and a raft of other abuses. We will have made progress in international agreements on norms of civil behavior and law enforcement in online environments.

“The internet or its successor will have become safer and more secure, and preservation of these properties will be easier with the help of new devices and practices. There will be more collaboration between government and the private sector in the interest of citizen safety and privacy. These are hard problems, and abuses will continue, but tools will evolve to provide better protection in 2035.”

Requiring platforms to become interoperable would allow people to choose where they want to be

Cory Doctorow, activist, journalist and author of “How to Destroy Surveillance Capitalism” and many other books, recommended, “The move to lower switching costs – by imposing interoperability on online spaces – will correct the major source of online toxicity: the tyranny of network effects. Services like Facebook are so valuable due to network effects that users are loath to leave, even when they have negative experiences there.

“If you could leave Facebook but still connect to your Facebook friends, customers and communities, then the equilibrium would shift – Facebook would have to be more responsive to users because otherwise the users would depart and it would lose money. And if Facebook wasn’t responsive to user needs, the users could take advantage of interoperability to leave, because interoperability means they don’t have to give up the benefits of Facebook when they go.”
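Doctorow’s argument presumes the social graph can travel between services. The sketch below uses a made-up JSON export format – real-world analogues include federation protocols such as ActivityPub and proposed data-portability mandates – to show the basic move: export a follow list from one platform, import it into another, and the user keeps their network when they leave.

```python
# A minimal sketch of social-graph portability (hypothetical format, not a
# real standard): leaving a platform no longer means losing your network.
import json

def export_social_graph(user: str, follows: list[str]) -> str:
    # Platform A must emit the user's graph in an open, documented format...
    return json.dumps({"user": user, "follows": follows, "version": 1})

def import_social_graph(blob: str) -> dict:
    # ...which platform B can ingest to re-establish the same connections.
    data = json.loads(blob)
    assert data["version"] == 1, "unknown export format"
    return data

blob = export_social_graph("alice@platform-a", ["bob@platform-a", "carol@platform-c"])
graph = import_social_graph(blob)
print(f"{graph['user']} now follows {len(graph['follows'])} people from platform B")
```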

‘We need to start training our babies as carefully as we are talking about training our AIs’

Esther Dyson, internet pioneer, entrepreneur and executive founder of Wellville.net, responded, “I see things getting both better and worse for people depending on who you are and under what jurisdiction you live. (It is ever thus.) There is no particular endpoint that will resolve the tension between more power for both good and bad actors. We will have AI [artificial intelligence] that can closely monitor speech and, to some extent, reactions to speech – but we will have both good and bad actors in charge of the AIs.

“As more of life goes online, people will have more freedom to choose their virtual jurisdictions, and the luckier ones will be able to get an education online and perhaps to move out to a better physical jurisdiction.

“By 2065, I would hope that there would be some worldwide movement that would simply rescue the bottom-of-the-pyramid citizens of the most toxic governments, but I believe that the (sometimes misguided) respect for sovereignty is strong enough to persist through 2035. At what point will we be able to escape to new floating jurisdictions (especially as many places get flooded by climate change) or even – though this will remain an expensive proposition – into space?

“Somehow, we have evolved to prefer superiority over absolute progress, and we are unlikely to move into a world of evenly distributed power. To get more specific, I do see business playing a bigger role, but businesses are seduced by and addicted to increasing profits just as political actors are seduced by and addicted to power.

“Somehow, we need to start training our babies as carefully as we are talking about training our AIs. Train them to think long-term, to favor their own species, to love justice and fairness.”

We must collectively agree on a Universal Declaration of Digital Rights

Raashi Saxena, project officer at The IO Foundation and scientific committee member at We, the Internet, wrote, “We need to move toward defining technical standards that will protect citizens’ data in digital spaces from harm. One such initiative from The IO Foundation is the Universal Declaration of Digital Rights, which would act as a technical reference for technologists, whom we identify as the next generation of rights defenders, so that technology is designed and implemented in a way that proactively protects citizens. Governments are not closing the loop when it comes to tech policies because they do not offer infrastructures that implement them.

“Examples of how this is possible can be found in corporate tech: Apple can enforce its policy (its licensing business model) in its digital assets such as music because it has implemented its own infrastructure for that. The same degree of protection should be provided to citizens. Their sharing of data does not follow a different model from a technical perspective. In essence, they are licensing their personal data.

“The underlying problem is that we do not have a global, agreed-upon list of digital harms, that is, harms that can be inflicted upon us by the data that models all of us. In order to implement public infrastructures that foster meaningful connectivity, philanthropies should pursue the core principle of ‘Rights by Design.’

“We first need to catalog and collectively agree on a common definition of digital harms so that we can proceed to define the rights to be protected. The areas of work for them should be around digital governance, sustainability and capital to promote the rise of other stakeholder groups that can sustain, scale and grow. Supporting projects to implement research-informed best practices for conflict zones and sparsely populated terrains should be the highest priority, since access to information and communication can constitute a critical step in the defense of the territories of these communities.”

Stop playing with ‘technocratic incrementalism’ and take big steps toward positive change

Caitlin Howarth, humanitarian data and security analyst, asked, “Are there ways that things can change for the better? Yes. Will that change be simple and incremental? No; it must be complex and dramatic. We need to stop playing at this with technocratic incrementalism. Here are some needed internet governance measures:

  1. Firmly establish that information is a human right, interdependent upon other established rights (particularly the right to protection). The right to information – accessing, creating, sharing, updating, storing and deleting it – is particularly critical during crises and must be protected as a vital condition for securing all other human rights. This right to information must also be protected and comprehensively advanced – along with its interdependent rights – through the activities and obligations of human rights and humanitarian organizations that operate according to shared standards. As Hugo Slim and others have called for, this is the moment for a fifth Geneva Convention given the fact that ICT systems are routinely targeted first as ‘dual use’ infrastructure and are therefore considered valid targets under outdated laws of armed conflict.
  2. Using a rights-based approach, substantially advance these rights using a comprehensive framework of accessibility, security and protection (e.g., digital security and surveillance awareness), civilian redress and rectification measures (e.g., regulatory guidance and claims structure, akin to the original design of the Consumer Financial Protection Bureau) and the elimination of liability-shielding practices for major technology companies.
  3. Every cybersecurity professional is aware that governments, including the U.S., are on the cusp of achieving quantum computing breakthroughs that will render current digital security protocols meaningless. Invest explicitly and rapidly in quantum-era civilian-protection mechanisms that could meaningfully advance civilians’ human rights when such government capacity comes online; if we do not, we risk a rapid descent into wholesale authoritarianism.
  4. Establish hard national and international regulations on the propagation of cyber currencies and the use of blockchain technologies that bear disproportionately harmful environmental burdens without demonstrable, comparable benefits to society as a whole. Similarly, regulate the use of digital-identification systems, especially those connected to biometric data and irreversible data storage, to ensure the fundamental bodily integrity of human beings’ ‘digital bodies’ as well as their physical persons. When systems cannot pass the stress tests to meet minimum rights-based requirements, they should not be permitted to proliferate and cause harm. We need regulatory systems similar in focus and function to the FDA for platforms of such significance – and they must be free of regulatory capture.”

People should be recognized for what they do; they should not pollute their rivers

Srinivasan Ramani, Internet Hall of Fame member and pioneer of the internet in India, said, “I am reminded of life in Kerala, one of the states of India. There are many rivers and backwaters there, and it is common for people to live along them, at the edge of a riverbank or the backwaters. The rivers give them food (mostly fish) and transportation by boat. The rivers, of course, also give them drinking water. The people are very hygiene-conscious, because if they pollute their river, they will be ruining their own lives.

“We now live by the internet, and we should be equally careful not to pollute it with misinformation, unreliable information, etc. Of course, people have freedom of expression. Going back to the river analogy, do they have freedom to pollute the river? I think, and I hope, that rubbish on the internet will diminish in the coming years. People should have freedom of expression, but they should not be able to hide behind anonymity.

“I would hope that every original post and every forwarding would be signed in a manner that would let us identify the person responsible. Then there is the question of ignorant postings. One may express one’s opinion and own the responsibility for it. That does not guarantee that it is a contribution for the good of society. You may claim in all sincerity that a certain herbal remedy protects you against COVID-19, but it may be a statement with no reliable evidence behind it whatsoever. It can land the reader in trouble by misleading him or her. We can probably invent an effective safeguard against it, but it may not be very easy.”
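Ramani’s signed-posts idea maps onto ordinary public-key signatures. The sketch below, using the third-party Python cryptography package, is only an illustration of the mechanic he gestures at – his comments do not prescribe any particular scheme, and a real deployment would also need an identity registry and key management:

```python
# A minimal sketch of signed posts and forwards (illustrative; requires the
# third-party `cryptography` package): every message carries a signature
# that ties it to an accountable, registered identity.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()  # held privately by the author
author_pub = author_key.public_key()       # published in an identity registry

post = b"A certain herbal remedy protects you against COVID-19."
signature = author_key.sign(post)          # the claim travels with a signature

try:
    author_pub.verify(signature, post)     # anyone can check who is responsible
    print("Post verifiably belongs to this registered author.")
except InvalidSignature:
    print("Signature invalid: origin unknown or content altered in transit.")
```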

Changes in governance and law, amplified by tech, can help shape a better public sphere

Beth Simone Noveck, director of the Governance Lab and author of “Solving Public Problems: How to Fix Our Government and Change Our World,” observed, “Many people are working today on building better alternatives to the current social media dumpster fire, and many institutions are turning to platforms designed to facilitate more-civil and engaged discourse. … Brazil has adopted platforms like Mudamos, which enables citizens to propose legislation and which is being used systematically and on an ongoing basis for ‘crowdlaw,’ namely to enable ordinary citizens to participate in the legislative process. Taiwan has engaged the public in co-creating 26 pieces of national legislation, but perhaps even more exciting is its creation of a ‘Participation Officers Network’ to train officials to work with the public in a more-conversational form of democratic engagement enabled by technology, day in and day out.

“The most exciting initiatives are those where institutions are collaborating with civil society, not as a pilot or experiment, but as an institutionalized and new form of governance and problem solving. In the UK, GoodSAM uses new technology to crowdsource a network of thousands of amateur first responders to offer bystander aid in the event of an emergency, thereby dramatically improving survival rates. PetaBencana enables residents in parts of Indonesia and India to report on fair-weather flooding to facilitate better governmental disaster response.

“Civic tech developers are creating exciting new alternatives designed to foster a more participatory future. Whether it is platforms for citizen engagement like Pol.is or Your Priorities, or projects like Applied – hiring software from the UK Behavioural Insights Team designed to foster diversity rather than inadvertently entrenching new biases – there has always been a community of tech designers committed to using tech for good.

“But the technology is not enough. The reforms that have the biggest impact are those changes in law and governance that lead to uses of technology promoting systematically more responsive, engaged and conversational forms of governance on a quotidian basis, by prohibiting malevolent uses of tech while encouraging good uses. For example, New Jersey is exploring opportunities to regulate uses of hiring technology that enable discrimination. But, at the same time, New Jersey is running a Future of Work Accelerator to invest in and promote technologies that protect workers, amplify workers’ voices and strengthen worker rights.

“In the United States, many positive uses of technology are happening in cities and at the local, rather than the national, level. The Biden Administration’s July 2021 OMB request for comments to explore more equitable forms of citizen engagement may portend greater investment in technology for sustained citizen engagement. Also, the introduction of machine learning is enabling the creation of new kinds of tools to facilitate more efficient forms of democratic engagement at scale.

“Given the proliferation of new platforms and initiatives designed to solve public problems using new technology and the collective intelligence of communities, I am hopeful that we will see increasing institutionalization of technologies that promote strong democracy and civil rights. However, in the absence of sufficient investments in civic infrastructure (i.e., government and philanthropy paying for these platforms) and investments in training citizens to learn how to be what Living Cities calls ‘resident engaged,’ the opportunity to use technology to enable the kind of democracy and society we want will go unrealized.”

Leaders will see they must cooperate to convert swords into sustainable solutions

Jonathan Grudin, principal human-computer design researcher at Microsoft and affiliate professor at the University of Washington, wrote, “In 2005, digital spaces served the public good. Recovering from the internet bubble, we were connecting with long-lost classmates and friends and conducting business more efficiently online. By 2020, digital spaces had become problematic. Mental health problems afflicted young and old, there was rising income inequality, trust in governments and institutions had eroded, there were elected politicians of staggering ineptitude, and tens of millions were drawn to online spaces rife with malicious conspiracy fantasies and big lies.

“Trillions of dollars are spent annually to combat bad actors who may have the upper hand. Debt-ridden consumers are succumbing to marketers armed with powerful digital technologies. In 2035, another 15 years will have elapsed. … Life may be worse for the average person in 2035 than today, but I’m betting the digital spaces will be better places.”

All stakeholders have to keep each other in check in the further development of digital life

Olivier Crépin-Leblond, internet policy expert and founding member of the European Dialogue on Internet Governance, wrote, “I am optimistic about the transformation of digital spaces for the following reasons:

  1. Natural Law will ensure that the extreme scenarios will ultimately not be successful.
  2. The Public, at large, is made up of people who want to live a positive, good life.
  3. Unless it is completely censored and controlled, the internet will provide a backstop to any democracy that is in trouble.
  4. The excesses of the early years’ GAFAs [an acronym for Google, Apple, Facebook, Amazon that is generally meant to represent all of the tech behemoths] will soon be kept more in check, whilst innovation will prevail.
  5. The next generations of political leaders will embrace and understand technology better than their predecessors.
  6. Past practice will help in addressing issues like cybersecurity, human rights, freedom of speech – issues that were very novel in the context of the internet only a few years ago.
  7. On the other hand, this could be only achievable if all stakeholders of the multistakeholder model keep each other in check in the development of the future internet. If this model is not pursued, the internet’s characteristics and very fabric will change dramatically to one serving the vested interests of the few at the expense of the whole population.”

Machines, bots will be more widespread and more spaces will be autonomously controlled

Marc Rotenberg, president and founder of the Center for AI and Digital Policy and editor of the AI Policy Sourcebook, said, “Digital spaces will evolve as users become more sophisticated, more practical and more willing to turn aside from online environments that are harmful, abusive and toxic. But the techniques to lure people into digital spaces will also become more subtle and more effective, as interactive bots become more widespread and as more spaces are curated by autonomous programs. By 2035, we will begin to experience online a society of humans and machines that will also be making its way into the physical world.”

Industry should come together with the public sector to broaden access to digital skills

Melissa Sassi, global head of the IBM Hyper Protect Accelerator, which is focused on empowering early-stage startups, suggested, “Initiatives for improvement that could be undertaken that might have the largest impact on digital life include:

  1. Access to affordable internet for the 50% that are not currently connected and/or those that are unable to connect due to costs.
  2. Digital skill-building for those with access but currently unable to make meaningful use of the internet.
  3. Empowering underserved and underrepresented communities via digital inclusion (women/girls, youth, people with disabilities, indigenous populations, elderly populations, etc.).
  4. Investment in locally generated tech entrepreneurship endeavors in hyper-local communities. Tech leaders play an important role by incorporating design thinking into everything and anything built. It is important to hire and involve a more-representative group of builders, design makers and experts in designing and creating solutions that are more empathetic to audience needs, making the customer and/or user central to what gets shipped and/or evolved.
  5. Tech leaders from social media platforms should be playing a greater role in data stewardship, protection, privacy and security, as well as incorporating more-informed consent protocols for those individuals who might lack the necessary skills to understand what data is going where and how data is being used when it comes to ad serving and other actions taken by social media networks.
  6. Tech leaders play a fundamental role in training our current and next generation of users on the introductory building blocks of learning to code, as well as what it means to be digitally skilled, ready, intelligent, literate and prepared for the future of work. This is something that could be incorporated into a multistakeholder approach where industry comes together with the public sector to broaden access to digital skills.
  7. Improvement areas relating to digital life include individuals becoming more productive at work and in their personal lives, utilizing technology to drive outcomes (health care, education, economic, agricultural, etc.) and incorporating technology to address the 17 UN Sustainable Development Goals.
  8. Technology could play an incredibly important role in evolving the global monetary system to one that is decentralized. One that is for the people, with the people, by the people; where those at the bottom of the pyramid do not suffer from faulty monetary policies.”

Tech will mostly be applied to controlling populations and resources and to entertainment

Douglas Rushkoff, digital theorist and host of the NPR One podcast “Team Human,” predicted, “There will be many terrific, wonderful innovations for civics in digital spaces moving forward. There will also be almost unimaginably cruel forms of oppression implemented through digital technology by 2035. It’s hard to talk too specifically about digital technology in 2035, since we will likely be primarily dealing with death and destruction from climate change. So, digital technology will be useful for organizing humanity’s broad retreat from coastal areas, organizing refugee camps for over a billion people, administering medical and other forms of triage, and so on.

“That’s part of the problem when casting out this far. We don’t really know how much of the world will be on fire, whether America will be a democracy, whether China will be dominating global affairs, how disease and famine will have changed the geopolitical landscape, and so on. So, if I have to predict, I’d say digital technology will be mostly applied to: 1) control populations, 2) administrate mass migrations and resource allocation and 3) provide entertainment.”

Digital transformation arrives as climate change is at the top of the global agenda

Grace Wambura, an associate at DotConnectAfrica based in Nairobi, said, “Digital transformation will pursue unlimited growth, and our limitless consumption threatens to crowd out everything else on Earth. Climate change is already happening; we are overspending our financial resources; we require more fresh water than we have; income inequality is increasing; other species are diminishing; and all of these are triggering shockwaves.

“At this important time, technology initiatives that are aimed at working forward to end climate change, achieve financial inclusion, overcome gender inequalities and enable the provision of safe drinking water will have a great impact on communities by 2035.

“Tech leaders are increasing their power and digital surveillance. They can also apply technology to come up with new options to cope with the problems arriving with the digital technology evolution. Thanks to technology, everyone can access the world’s best services, resources and knowledge. One thing that will remain a puzzle and continue to cause concern is the vital need for both privacy and security.”

Bad actors are still going to act bad; no one is in charge of the internet

Alan Mutter, consultant and former Silicon Valley CEO, observed, “The internet is designed to be open. Accordingly, no one is in charge. While good actors will do many positive things with the freedom afforded by digital publishing, bad actors will continue to act badly with no one to stop them. Did I mention that no one is in charge?”

We need an FDA for tech, an agency to help monitor and regulate its effects on humans

Carolina Rossini, an international technology law and policy expert and consultant who is active in a number of global digital initiatives, predicted, “For years to come – based on the current world polarization and the polarization within various powerful and relevant countries – I feel speech and security risks will increase. Personal harm, including a greater impact on mental health, might also increase within digital realms like the metaverse. We might need some new form of a regulatory agency that has some input on how technology impacts people’s health. We have the FDA for medicines and more; why not something like that for the tech that is getting closer and closer to being put inside our bodies?

“If countries do not come together to deal with those issues, the future might be grim. From building trust and cooperation to good regulation against large monopolistic platforms to better review of the impact of technologies to good data governance frameworks that tackle society’s most pressing problems (e.g., climate change, food security, etc.) to digital literacy to building empathy early on, there is a lot to be done.”

New breeds of social platforms and other human institutions have to emerge

Robin Raskin, a writer, conference organizer and head of the Virtual Events Group, exclaimed, “There should be a UBI – Universal Basic Internet – for the good of all! Human nature never changes, so the internet will have to keep evolving to try to stay ahead of human greed; the same holds true for all of our other human institutions – evolution and change have to be the norm. Digital currency has to be regulated on a worldwide basis if the internet is NOT going to be a place for ransomware and money laundering. In 2035 there will be more social players. Facebook is already falling in popularity, paving the way for a new breed of social media platforms that seem more in tune with keeping their citizens safer.

“The metaverse – digital twins of real worlds or entirely fabricated worlds – will be a large presence by 2035, unfortunately with some of the same bad practices on the internet today such as personal-identity infringements. Regulators will crack down on privacy violations. Posts clearly marked as to their origins (possibly on the blockchain) will authenticate the source of information. Warnings about information being suspect will be worked out. The Internet of Things will be in full swing, creating safer, more-efficient cities – provided adequate privacy practices are created. Advertisers hungry for information and the traditional ad model make the internet less important than it could be. Subscription models, possibly based on usage, are one possible answer.”

Can we meet the challenge of automating trust, truth and ethics?

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, (sharing statements she had earlier made in a long interview with the Harvard Gazette) wrote, “Tech is critical infrastructure. It saved lives during the pandemic. It also enabled election manipulation, the rapid spread of misinformation and the growth of radicalism. The same internet supports both Pornhub and CDC.gov, Goodreads and Parler.com. The digital world we experience is a fusion of tech innovation and social controls. For cyberspace to be a force for good, it will require a societal shift in how we develop, use and oversee tech, a reprioritization of the public interest over private profit.

“Fundamentally, it is the public sector’s responsibility to create the social controls that promote the use of tech for good rather than for exploitation, manipulation, misinformation and worse. Doing so is enormously complex and requires a change in the broader culture of tech opportunism to a culture of tech in the public interest. There is no magic bullet that will create this culture change – no single law, federal agency, institutional policy or set of practices will do it, although all are needed. It’s a long, hard slog.

“Changing from a culture of tech opportunism to a culture of tech in the public interest will require many and sustained efforts on a number of fronts, just like we are experiencing now as we work hard to change from a culture of discrimination to a culture of inclusion. That being said, we need to create the building blocks for culture change now – proactive short-term solutions, foundational long-term solutions and serious efforts to develop strategies for challenges that we don’t yet know how to address. …

“At the root of our problems with misinformation and fake news online is the tremendous challenge of automating trust, truth and ethics. Social media largely removes context from information, and with it, many of the cues that enable us to vet what we hear. Online, we probably don’t know whom we’re talking with or where they got their information. There is a lot of piling on. In real life we have ways to vet information, assess credentials from context and utilize conversational dynamics to evaluate what we’re hearing. Few of those things are present in social media.

“Harnessing the tremendous power of tech is hard for everyone. Social media companies are struggling with their role as platform providers (where they are not responsible for content) versus their role as content moderators (where they commit to taking down hate speech, information that incites violence, etc.). They’ve yet to develop good solutions to the content-moderation problem. Crowdsourcing (allowing the crowd to determine what is valuable), third-party vetting (employing a fact-checking service), advisory groups and citizen-based editorial boards all have truth, trust and scale challenges. (Twitter alone hosts 500 million tweets per day.)”
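
Berman’s scale point is easy to quantify with back-of-the-envelope arithmetic. In this sketch, the 500-million-tweets-per-day figure comes from the quote above; the per-item review time and shift length are illustrative assumptions:

    # Rough estimate of the labor purely manual review would require.
    tweets_per_day = 500_000_000     # volume cited above
    seconds_per_review = 20          # assumed human vetting time per post
    shift_seconds = 8 * 60 * 60      # one moderator's eight-hour shift

    moderators_needed = tweets_per_day * seconds_per_review / shift_seconds
    print(f"{moderators_needed:,.0f} moderators on shift every day")  # ~347,222

Roughly 350,000 people reviewing full time, every day, for a single platform – before appeals, breaks or non-English content – which is why platforms lean on automation and inherit the truth, trust and scale problems Berman describes.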

Investing in change can have a multiplied impact, overcoming inequities

Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network, observed, “As we build policies, programs and funding mechanisms to support bringing more and more people online, it will be necessary to build policies and appropriate investments to ensure folks are coming online to digital spaces that are safe and accessible. Misinformation is a massive concern that will continue without deliberate and direct work, and the pandemic has made clear how necessary the internet is in everyday life, from virtual school to virtual work, telehealth to social connection. The inequities we have enabled around digital divides have exacerbated many other clear inequities around health, social supports, employment, schooling and much more. This means that while challenges are compounded, investing in change can have a multiplied impact.”

Tech leaders and politicians can and hopefully will play a crucial, beneficial role

Mei Lin Fung, chair of People-Centered Internet and former socio-technical lead for the U.S. Department of Defense’s Federal Health Futures initiative, predicted, “The trajectory of digital transformation in our lives and organizations will have parallels to the transformation that societies underwent with the introduction of electricity. Thus, the creation of digital public goods and digital utilities will allow for widespread participation and access in digital transformation. This is already underway at IEEE.org, the International Telecommunication Union and action-oriented forums like the World Summit on the Information Society and the Internet Governance Forum. There are tech leaders and/or politicians who are playing and can play a beneficial role: 1) Antonio Guterres, the first electrical engineer to be UN Secretary-General, has established the Digital Cooperation Roadmap, bringing together stakeholders from across many sectors of society; 2) Satya Nadella, CEO of Microsoft; 3) Ajay Banga, executive chair of MasterCard; 4) Marc Benioff, chairman of Salesforce; 5) an original innovator of the internet, Vint Cerf, now a Google vice president, and other internet pioneers who built the internet as a free and open resource.

“All of these and more will be working to build bridges to a better approach to digital transformation. The most noticeable improvement in the network in 2035 will be that digital will become more invisible, and it will be much more natural and easier to navigate the digital world.

“This transformation will be similar to the evolution of the impact of writing. At the beginning it was difficult to learn to write, but writing advanced broadly and quickly. Once digital skills become a normal part of people’s education, we will see a shift to a digital world with many more digitally literate people. It will be like the film ‘Back to the Future’ – the best parts of human life will flourish, augmented by digital. Current problems that will be diminished include cyberattacks, misinformation, fake news and the stirring up of tribal conflicts. The uses of digital tools and networks by criminals – for human and sex trafficking, for online abuse of the vulnerable, especially children, for fraud, for violence and drug trafficking – along with increasing cyberattacks by both state and nonstate actors and increasing attempts to shape and manipulate political discourse by cyber means, will persist as major concerns.”

Hoping for the decommodification of digital platforms and the rise of AI-generated ad hoc networks

Bart Knijnenburg, associate professor of human-centered computing at Clemson University, said, “One big transformation that I am really hoping for is the decommodification of the spaces that facilitate online discourse. Right now, most of our online interactions are aggregated on a few giant social networks (Twitter, Facebook, Instagram). We tend to use these networks for multiple purposes, which leads to context collapse: If you mostly talk on Facebook about cars and politics, your car junkie friends will be exposed to your political views and your political kindred spirits will learn about your mechanical skills. On the consumer side this context collapse may induce some serendipity, but on the author’s side it could have a stifling effect: If your words are shared with an increasingly broad audience, you will likely be less outspoken than you’d be in smaller circles. This problem is exacerbated by the lack of transparency in how social networks show content to your audience and by the tendency of social networks to make this audience as broad as possible (e.g., by encouraging users to add more ‘friends,’ or by automatically translating posts into other languages).

“I envision the decommodification of these spaces to result in interest-oriented peer networks (e.g., surrounding a common interest in a certain podcast, author, sports club, etc.), hosted on platforms like Slack, Clubhouse or Discord, which do not specifically aim to grow the network or to algorithmically control/manipulate the presentation of the shared information. By joining *multiple* networks like this, people can mentally separate the expression of a variety of their interests, thereby overcoming the current issue of context collapse. If AI technologies do end up playing a role in this scenario, then I hope it to be at the level of network creation rather than content distribution. The idea would be for an AI system to automatically create ad hoc networks of people with preferences that are similar enough to create an engaging discourse, but not so similar that they result in ‘echo chambers.’”
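
Knijnenburg’s closing idea – grouping people whose preferences are similar enough for engaging discourse but not so similar that they form echo chambers – can be read as clustering with a banded similarity constraint. The sketch below is one illustrative interpretation; the preference vectors, thresholds and greedy grouping are assumptions, not anything Knijnenburg specifies:

    from math import sqrt

    def cosine(a, b):
        """Cosine similarity between two preference vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    def ad_hoc_networks(profiles, lo=0.3, hi=0.8):
        """Greedily group users whose similarity to every current member
        falls in [lo, hi]: cohesive enough for engaging discourse, but
        not so uniform that the group becomes an echo chamber."""
        unassigned = list(profiles)
        networks = []
        while unassigned:
            group = [unassigned.pop(0)]
            leftover = []
            for user in unassigned:
                sims = [cosine(profiles[user], profiles[m]) for m in group]
                if all(lo <= s <= hi for s in sims):
                    group.append(user)
                else:
                    leftover.append(user)
            networks.append(group)
            unassigned = leftover
        return networks

    # Toy preference vectors over five interest dimensions.
    profiles = {
        "ana":  (0.9, 0.1, 0.3, 0.0, 0.2),
        "ben":  (0.7, 0.3, 0.4, 0.1, 0.1),
        "cleo": (0.0, 0.9, 0.1, 0.8, 0.0),
        "dev":  (0.1, 0.8, 0.2, 0.9, 0.1),
    }
    print(ad_hoc_networks(profiles))  # [['ana'], ['ben', 'cleo'], ['dev']]
    # ana and ben are too alike to share a group; ben and cleo differ
    # enough to spark discourse without collapsing into an echo chamber.

The notable design choice is the upper bound: a conventional recommender maximizes similarity, while this sketch deliberately rejects candidates who resemble the group too closely.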

A new kind of information civilization is being built; here’s hoping its builders will find the will to do it right

Calton Pu, professor, software chair and co-director of the Center for Experimental Research in Computer Systems at Georgia Tech, wrote, “We are building an information civilization unlike anything else in the history of humankind. The information civilization is built on digital technologies and platforms that can be called digital spaces. The impact of information has been profound on the economy (both macro and micro), on society (as an organization affecting its population, and as people transforming the social organization) and on humans (an aspect that can be called digital life). …

“Throughout human history, all civilizations have risen and fallen. It appears that as the builders construct an increasingly sophisticated civilization, the intricacy of organization also makes it more susceptible to manipulation and disruption by the schemers. It is clear that the schemers are not acting alone: They reflect deep, dark desires in human nature. The battle between the builders and schemers will persist in the information civilization, as it has through all the civilizations in history. …

“Technical leaders and politicians who help build the information civilization will make beneficial contributions, and those who misuse the digital spaces for their own benefit will lead us toward the downfall of the information civilization. For the information civilization to thrive, the builders must find technological and political means to distinguish factual information (the constructive building blocks) from misinformation and disinformation (the destructive, eroding bacteria/fungi). As the information civilization grows stronger, there is hope that its building blocks of factual information will become better organized and easier to adopt. This improvement will help more humans grow wiser and help build the human civilization, including its informational and physical dimensions.”

As long as there is profit to be made in scaring people, societies will continue to fracture

Larry Lannom, director of information services and vice president at the Corporation for National Research Initiatives (CNRI), commented, “Solutions will be hard to come by. The essential conundrum is how to preserve free speech in an environment in which the worst speech has a many-fold advantage. This general phenomenon is not new. Jonathan Swift wrote in ‘The Art of Political Lying’ in 1710, ‘If a lie be believed only for an hour, it has done its work, and there is no farther occasion for it. Falsehood flies, and Truth comes limping after it.’ Today the problem is enormously exacerbated by the ease of information spread across the internet, and it is unclear whether the virus-like behavior of misinformation that strikes the right chords in some subset of the population can be stopped.

“The negative sense I have is primarily about social media and the algorithms that drive users into more and more extreme positions. As long as there is profit in scaring people, in pushing conspiracy theories and in emphasizing wedge issues instead of the common good, societies will continue to fracture and good governance will be harder to achieve.

“There is still a lot of good in collaboration technologies. You can focus the world’s expertise on a given problem without having to get all of those experts together in a single room. It makes information more readily available. Consider the transformative protein-folding announcement from DeepMind. Researchers say the resource – which is set to grow to 130 million structures by the end of 2021 – has the potential to revolutionize the life sciences. These sorts of advances, widely shared, will increase over time, with great potential benefits.”

Citizens become targets in an evolving ecology in which their emotions are being datafied

A professor who studies civil society and intelligence elites observed, “The disinformation media ecology that generates and targets messages that are deceptive and/or designed to bypass thoughtful deliberation in favour of profiled, emotionalised engagement severely challenges the democratic ideal of treating people as citizens rather than as ‘targets’ or ‘consumers.’ This is an ecology in which the psychological and emotional behaviour of individuals and groups is increasingly being quantified and datafied, as evidenced by the rise of emotion AI or affective AI. Also important is the nature of psychology, in that influential behavioural sciences downplay rationality in favour of a neo-behaviourist outlook. In an applied context, neo-behaviourism and seeing people in psycho-physiological terms disregard (or deny) agency and civic autonomy. This near-horizon future is bleak, particularly since such techniques for emotional profiling are rapidly becoming commonplace in the political and civic world, starting with social media but spilling out into once offline domains (e.g., cities that have become ‘smart’, and dwellings that have become ‘Internet of Things-connected’).”

Cross-sector collaboration is needed to work toward the creation of aligned incentives

Perry Hewitt, chief marketing officer at data.org, a platform for partnerships to build the field of data science for social impact, urged, “Achieving a transformation of digital spaces and improved digital life will require collaboration: private-sector tech, government and social-impact organizations coming together in a combination of regulation and norms. Aligned incentives that enable for-profit and social-impact work to come together are critical. Healthy, informed and engaged publics are better consumers and citizens. Public audiences will play a role to the extent that we build digital spaces that are engaging and convenient to use; it’s hard to see people flocking toward digital broccoli in a candy store of addictive apps. Nate Matias’ research into the civic labor of volunteer moderators online, showing the actions of individuals in improving a platform’s algorithm, is hugely encouraging. I am very bullish on the ability to better manage spam, misinformation and hate speech, the scourge of digital spaces today. But it will be an ongoing battle as deepfakes and similar technologies (fake VR in one’s living room?) become more persuasive. Perhaps the biggest challenge will be the trade-offs between personal privacy and safe spaces. There are many legitimate reasons people require anonymity in public spaces (personal threats, whistleblowing, academic freedom), but it’s really tricky to moderate information and abuse in communities with high anonymity.”

Reasonable regulation can promote accountability and free expression

Nazar Nicholas Kirama, president and CEO of the Internet Society chapter in Tanzania and founder of the Digital Africa Forum, said, “The internet is a reflection of our own societies’ good and bad sides; the good far outweighs the harm. As digital spaces evolve, stakeholders need to find ways to curb online harms, not through ‘sanitation’ of digital spaces but by creating reasonable regulations that promote freedom of online expression and the personal accountability that promotes internet trust. The internet has evolved to a stage where it is now a necessary ‘commodity.’ Over the past year we have learned how key it is for communication and business continuity in times of global emergencies like the COVID-19 pandemic. During the first wave, many of the more than 1.5 billion learners put out of classrooms by global lockdowns could not continue their education because they had no connection. Had their homes been connected, the disruption would have been minimal. Being online is vital and good for societies.”

Politicians can and will be motivated to ensure resilient economic societies

Amali De Silva-Mitchell, futurist and founder/coordinator of the Internet Governance Forum’s Dynamic Coalition on Data-Driven Health Technologies, predicted, “The average technology user’s knowledge of the space and of its benefits and risks could grow exponentially, as digital becomes the norm in health, education, agriculture, transport, governance, climate change mitigation including waste management, and so forth. By 2035 most global citizens will be more conversant with the uses of technology, easing the delivery of technology goods and services.

“The biggest advances will be in the universal quality of connectivity and increased device accessibility. Citizens who are unable to participate digitally must be served by alternative means. This is a public duty. A 100% technology-user world is not possible, and this limitation must be recognized across all services and products in this space.

“Perfection of technology output will continue to be marred by misinformation, fake news, poor design, bias, privacy versus copyright, jurisdiction mismatches, interoperability issues, content struggles, security problems, data ocean issues (data silos, fickle data, data froth, receding-stability data and more) and yet-to-be-identified issues. All of these must be managed in order to create a more-positive digital public sphere with better opportunities.

“Politicians will be motivated to ensure resilient economic societies and will pursue the ideal of universal accessibility through all means such as satellite, quantum and other emerging technologies. The public will be focused on affordable, quality, unbiased (Artificial Intelligence/Machine Learning, quantum) internet access. In the nano, quantum and yet-unidentified operational spaces the private sector will be focused on issues of interoperability for the Internet of Things and other emerging applications (for market growth versus democratization).

“In the future, quantum entanglement will create new opportunities and unexpected results while challenging old principles and norms due to potential breakthroughs, for instance, telepathy for human information exchange competing with traditional wireless technology.”

All miraculous technologies eventually ‘settle down’ and steadily improve humanity

Frank Kaufmann, president of the Twelve Gates Foundation, responded, “I see digital life and digital spaces and the ‘evolution’ of these as following classic patterns of all prior major turning points of radical change that are connected to technological progress and development. Wheels, fire, the printing press, electricity, the railroads, flight and so forth.

“To me the pattern goes:

  1. A genius visionary or visionary group opens a historical portal of magic and wonder. These first people tend to be visionaries with pure, wholesome dreams and the desire to help people.
  2. The new technology explodes in a ‘Wild West’ environment during which time underdeveloped, avaricious, power-hungry, vile people amass obscene amounts of wealth and power by exploiting the technology and exploiting people. Eventually these criminals vanish into their private, petty hells and try to coat the horror they perpetrated by establishing self-serving veneers of work for ‘charitable’ causes and ‘grant-giving foundations.’ Their time of power lust has come and gone. In the meantime…
  3. a widespread reaction by normal, good people to the harm and evil caused by the avaricious exploiters, gradually…
  4. implements ‘checks and balances’ to bring the technology more fully into genuine healthy and wholesome service to people, plus a natural ‘decentralization’ occurs, yielding an explosion of creativity and positive growth and development.

“Both the implementation of guardrails and ‘checks and balances’ after the ‘Wild West’ time, and the smaller, more local, more manageable and humane little subunits of the boundless benefits afforded by all these miraculous technologies, will settle down, and they will help us improve steadily.”

Leaders’ primary role is to assure that decentralized and open systems can thrive

James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, said, “In the fast-changing world of today we see new technologies emerging rapidly, and then they become institutionalized. Thus, most people only see the larger, more tech-giant-dominated applications. But we are seeing moves toward several things that encourage me to believe innovators will help to create useful solutions.

“In particular, much work is now going into ways to give individuals more control of their online identities in order to control the flow of information (privacy enhancing technologies). By 2035, it is likely some of these will have broken through and they may become heavily used. Additionally, the further internationalization of communication technologies reaching more of the world can help break down barriers.

“The primary role of tech leaders and politicians is to help keep the innovation space alive and to make sure that decentralized and open systems can thrive (a counter to tendencies toward authoritarianism, etc.). Today’s children and teens are learning to be less trusting of everything they see online (much as in the past they had to learn not to believe everything they saw in a TV commercial or read in a newspaper), and that will also help in navigating a world where dis- and misinformation will continue to exist.”

The best spaces come with heterogeneity, collaboration and consequences

Gary A. Bolles, chair for the future of work at Singularity University, commented, “The greatest opportunity comes from community-anchored digital spaces that come with heterogeneity, collaboration and consequences.

  • Community-anchored, because the more humans can interact both online and in person, the more potential there is for deeper connection.
  • Heterogeneity, because homogeneous groups build effective echo chambers and heterogeneous groups expose members to a range of ideas and beliefs.
  • Collaboration, because communities that solve one problem together can solve the next, and the next.
  • Consequences, because effective public discourse requires people to be aware of and responsible for the potential negative results of their words and actions.

“What is critical is that the business models of the digital communications platforms must change. Tech leaders must turn the same level of innovation they have brought to their products toward business-model innovations that encourage them to design for more heterogeneity, collaboration and consequences.”

Have faith in individuals’ improvisation, bricolage, resistance and reuse/reinterpretation

Jay Owens, a research and innovation consultant with New River Insight, responded, “You ask, ‘What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces do you think are so entrenched. …’ The entrenched issue here isn’t ‘human nature’ or technology or regulation – it’s capitalism. Unless we overthrow it prior to 2035, digital spaces will continue to be owned and controlled by profit-seeking companies who will claim they’re legally bound to spend as little as possible on ‘serving the public good’ – because it detracts from shareholder returns. The growth of Chinese social media companies in Western markets will mean there are firms driven by more than purely for-profit impulses, yes – but the vision of ‘good’ that they are required to serve is that of the Chinese state. Theirs is not a model of ‘public good’ that either speaks to Western publics or indeed Western ideas of ‘good.’ I retain faith in individual users’ capacity for improvisation, bricolage, resistance, creative reuse and reinterpretation. I do not think this will grow substantially from today – but it will remain a continuing contrapuntal thread.”

Don’t ignore the good on the net; media’s narrative about it is incomplete and dystopian

Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at City University of New York, said, “We have time. The internet is yet young. I have confidence that society will understand how to benefit from the net just as it did with print. After Gutenberg, it took 150 years before innovations with print flourished: the creation of the first regularly published newspaper, the birth of the modern novel with Cervantes and of the essay with Montaigne. In the meantime, yes, there was a Reformation and the Thirty Years War. Here’s hoping we manage to avoid those detours.

“Media is engaged in a full-blown moral panic about the net. It is a panic of their own engineering, and it is in their self-interest, as media choose to portray their new competitor as the folk devil that is causing every problem in sight. In the process, media ignore the good on the net. It is with the net and social media that #BlackLivesMatter rose to become a worldwide movement. Imagine enduring the pandemic without the net, preserving jobs, the economy, connections with friends and families. Media’s narrative about the net is dystopian. It is an incomplete and inaccurate picture of the net’s present and future.”

It is, as always, a war with Doomsday scenarios ready to write, yet the future is bright

David Porush, writer, longtime professor at Rensselaer Polytechnic Institute and author of “The Soft Machine: Cybernetic Fiction,” wrote, “Digital spaces are like all technologies: They change our minds, and even our brains, but not our souls. Or if the word ‘soul’ is too loaded for you, try ‘the eternal, enduring human instincts and impulses that drive our interactions with each other and considerations of our selves.’ (You can see why I prefer the shorthand).

“Digital spaces have unleashed new facilities for getting what’s in our souls into each other’s, for better or worse. We can do so wider, faster and with more fidelity and sensation (multimedia) and intimacy. New media grant us ways to express ourselves that were inconceivable without them. We can share subjectivities (e.g., Facebook) and objectivities (academic and scientific sites). The world is mostly made a better place by digital spaces, though new terrors and violence come with it, too. It has always been thus, ever since we scrawled on cave walls and invented the phonetic alphabet and the printing press.

“It’s been a millennia-long ride on the asymptote, up toward technologically mediated telepathy. Neuralink is just the latest, most explicit manifestation of what’s always been implicit in the evolution of communication technologies.

“So, to answer the question at hand: I believe leaders, politicians and governments can do more to civilize the digital commons and regulate our behaviors in them, make the Wild West into a national park or theme park, but I both a) despair of them having the wisdom to do so, and b) sort of hope they don’t. I say a) because I don’t trust their wisdom beyond self-interest and ideology. I say b) because I believe the attempt is likely to do more damage to liberties in the short run up to 2035.

“In the long run, the digital commons, the virtual world – like the meatworld [in-person world] – will get better. It will be a healthier, safer, better, saner space. Sneakers, air conditioning, food, vaccines, and knowledge and education available for everyone, though unevenly. It is always already, and will continue to be, a war with plenty of Doomsday scenarios ready to write. But the future is bright. And with the help of the digital commons, we’ll get there.”

A rising communications tide lifts hospital ships and pirate ships, altruists and fascists

Howard Rheingold, a pioneering sociologist who was one of the first to explore the early diffusion and impact of the internet, responded, “When I wrote ‘The Virtual Community’ (published in 1993), I felt that the most important question to ask about what was not yet known as ‘social media’ was whether the widespread use of computer-mediated communication would strengthen or weaken democracy, increase or decrease the health of the public sphere. Although many good and vital functions continue to be served by internet communications, I am far from sanguine about the health of the public sphere now and in the future.

“My two most important concerns are the amplification of the discourse of bad actors and the emergence and continuing evolution of computational propaganda (using tools like Facebook’s ability to segment the population according to their beliefs to deliver microtargeted misinformation to very large numbers of people). The rising tide of internet communications lifts all boats by enabling like-minded people to meet, communicate and organize; it lifts both the hospital ships and the pirate ships, the altruists and the fascists.

“Misinformation and disinformation about the COVID-19 epidemic have already contributed to mass deaths. Flat-earthers, QAnon cultists, racists, anti-Semites, vandals and hackers are growing in numbers and capabilities, and I see no effort of equivalent scale from governments and private parties.

“Facebook is the worst, and unless it dies, it will never get better, because Facebook’s business model of selling to advertisers microtargeted access to large, finely segmented populations is exactly the tool used by bad actors to disseminate misinformation and disinformation. I have called for the increased creation and use of smaller communities, either general-purpose or specialized (e.g., patient and caregiver support groups to name just one example of many beneficial uses of social media).”

As communication becomes more bifurcated, things could become more deconstructed

Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Communications, said, “The public discussion of this issue has focused only on the big social media services, but there are many other ‘digital spaces’ – in games, online forums, messaging platforms and the long tail of smaller niche groups both on the public internet and in dark nets. In 2035, there will be just as much – maybe more – fragmentation of the social commons through this proliferation of new types of ‘digital spaces.’ It is difficult to recover a shared, collective sense of what the world is – ideologically, culturally, politically, ethically, religiously, etc. – when people are scattered across innumerable disembodied and nonlocal digital networks. It’s very easy for fringes to connect and coordinate across the globe. Will this fact change by 2035? Or will it continue to deconstruct the social, political and economic mechanisms that are meant to contain such problems?”

Increasing complexity will dominate our future; here’s a rundown of what will change

Mike Liebhold, distinguished fellow, retired, at The Institute for the Future, commented, “Here is an outline of a few of the technical foundations of the shifts in digital spaces and digital life expected by 2035:

  • Cross-Cutting Forces – (across the technology stack):
    • Applied machine intelligence everywhere.
    • Continuous pervasive cybersecurity vulnerabilities, and vastly amplified security and privacy engineering.
    • Energy efficiency and circular accountability will become critical factors in personal and organization decision processes.
  • Systemic Digital Technology Shifts – (layers of the technology stack):
    • User-experience technologies (conversational agents everywhere), and a shift from glass screens to augmented reality for common interaction, including holographic telepresence and media.
    • Continued evolution and adoption of embedded intelligent and automated technologies in physical spaces and in robotics and cobotics [collaborative robotics].
    • Connection and network technologies – continuous adoption of fiber and broadband wireless connections including low-Earth-orbit satellites providing broadband internet connections in remote geographies.
    • Advances in computing and in cloud technologies.
    • Continued adoption of hybrid edge-cloud AI micro services.”

Every year of an unrestricted internet industry damages the public sphere more

Bruce Bimber, professor of political science and founder of the Center for Information Technology and Society at the University of California-Santa Barbara, observed, “I envision that, eventually, new ways of thinking about regulation and the responsibility of social media companies will have an influence on policy. Every major industry with an effect on the public’s safety and well-being is managed by a regulatory regime today, with principles of responsibility and accountability, with limits, with procedures for oversight, with legal principles enforced in courts.

“That is, except for the internet industries, which instead enjoy Section 230 immunity. I anticipate that this will change by 2035, as countries come to understand how to think about the relationship of the state and the market in new and more productive ways.

“That being said, it is not at all clear that this will happen in time. Every year of unrestrained market activity and lack of accountability damages the public sphere more, and we may reach a point where things are too broken for policy to matter.”

The difficulty comes in generating the appropriate collective action and trust

Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for science, technology and innovation policy, noted, “Forwarding the public good requires both collective action and trust in democratic institutions. Online spaces may become even better places for yelling and organizing in the years to come, but so far they are of zero usefulness in causing genuine policy changes to happen through the public-spirited work of elected representatives.

“Restoring trust in our real-world democratic institutions will require some exogenous stuff. And online spaces don’t do exogenous.”

We hoped for cyberutopia, feared cybergeddon, and we’re getting ‘cyburbia’ – an amped-up analog reality

Paul Saffo, a leading Silicon Valley-based forecaster exploring long-term trends and their impact on society, wrote, “This particular media revolution – a shift from mass to personal media – is approximately 25 years old, and it has unfolded in precisely the same way every single prior media revolution has evolved. This is because beneath the technological novelty is a common constant of human behavior. Specifically, when a new media technology arrives, first it is hailed as the utopian solution to everything from the common cold to world peace. Then time passes, we realize there is a downside, and the new medium is demonized as the agent of the end of civilization. And finally, the medium, now no longer new, disappears into the cultural fabric of daily life. In short, we hoped cyberspace would deliver a new cyberutopia, then we feared cybergeddon. But what we are getting in the end is ‘cyburbia,’ an amplified version of our analog reality.”

‘Between Fear and Hope’ is a fitting title for today, but there’s hope for a brighter tomorrow

Ben Shneiderman, distinguished professor of computer science and founder of the Human Computer Interaction Lab at the University of Maryland, said, “My view toward 2035 has been darkened by the harsh tone of politics over the past few years that is continuing to play out. … Journalists can’t resist reporting on outrageous behaviours, and false claims and lies still make the news. Social media have also been a problem, with algorithms that amplify misinformation rather than stopping bot farms or giving more control to users … My fears are that political maneuvers that encourage divisiveness will remain strong, misinformation will continue, and racism and other forms of violence will endure.

“I am troubled by the Google/Facebook surveillance capitalism (strong bravos to Shoshana Zuboff for her amazing book on the topic, ‘The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power’), social media abuses and the general tone of violence, anger and hate speech in the U.S. My journalist father wrote a book, ‘Between Fear and Hope,’ in 1947 about post-war Europe. That title fits for now, but I am hoping for a brighter tomorrow.”

People have to adapt to overcome; there is no other workable solution

Anna Andreenkova, professor of sociology at CESSI, an institute for comparative social research based in Europe, predicted, “Attempts to censor or ‘clean up’ digital space by any actors – private or public – will not be possible or beneficial. People will have to learn and adapt to living in an open information space – how to sort the fake from the real, the trustworthy from the untrustworthy, the evidence-based from the interest-driven. Digital education is a more-fruitful approach than any limitations or attempts at guiding in a paternalistic way.

“Any innovation or social change always evokes concerns about its consequences. These concerns are often expressed in a very radical manner. Over the centuries, eschatological or catastrophic consequences have been predicted for most emerging processes or innovations, and most of these worries are eventually forgotten. Digitalization of life domains is certainly not straightforward or easy. But in the end it is inevitable and unavoidable. What is really important to discuss is how to minimize the negative sides.”

New-gen platforms will live in our networked wearables, transportation and built environment

John Lazzaro, retired professor of electrical engineering and computer science, wrote, “The only way to make progress is to return to people being ‘the customer’ as opposed to ‘the product.’ By 2035, a new generation of platforms will replace smartphones (and the apps that run on them). The new platforms will be built from the ground up to address the intractable issues we face today. Unlike the smartphone – a single platform that tries to do it all – the new platforms will be customized to place and purpose.

  • A platform for the body: Wearables that function as stand-alone devices, incorporating augmented reality, with a direct connection to the cloud.
  • A platform for built environments: Displays, sensors, computing and communication built into the home and office environments not as add-ons, but as integral parts of the structure.
  • A platform for transportation: The passenger compartment of fully self-driving automobiles will be reimagined as a ‘third place’ with its own way to interface humans to the cloud.

“What the platforms will share is a way to structure interactions between an individual and the community that mirrors the way relationships work in the physical world. Inherent in this redesign will be a reworking of monetization.”

We are divided by very real differences that did not originate with the internet

Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, commented, “Underlying the success of social media, and also their ills, is the widespread recognition that these media can be used to get potentially wide attention, and that it’s exceedingly easy to give that a try. And underlying that is the fact that a very large percentage of people worldwide want and desire attention, and possibly a lot of it.

“Algorithms used, for instance, by Facebook, may further distort what gets attention, but that’s not the only problem. The best way to get attention is to say or do something different from just the daily ‘boring’ sort of colloquy. You can do that with cute cat videos, by inventing and showing off a new dance, by juggling 13 balls at once, or by saying something that recognized authorities or widespread consensus is not saying. Thus, an outright lie is one attractive method. A whole series of lies and wild assertions gets you something like the attention that goes to QAnon. If what you say can be shared by an at-first-small, self-reinforcing community, that helps, too.

“When those lies underline and amplify a widely shared but not widely articulated attitude, such as the feeling of being oppressed by technocrats, experts or just the self-appointed ‘elite’ with supposedly more credentialized ‘merit’ than most people have (as pointed out for example in Michael Sandel’s ‘The Tyranny of Merit’) such views can easily gain wide followings. Algorithms may help further amplify support of such messages, but that ignores their underlying sources of strength. We, especially in the U.S. – though by no means only here – are divided by very real differences that did not at all originate with the internet.

“These are differences primarily in who gets heard and how, as well as in monetary income levels that partly follow along with the former. In one sense, social media offer a new path to greater equality. These are not refereed journals by any means. Anyone can try to seize an audience. Movements I would regard as positive, such as: the effort for stronger response to climate change; Black Lives Matter; #MeToo; LGBTQ rights – these all have been strengthened in my judgment by social media. …

“Clearly, over the next few years, until well beyond 2035, we are in for a wild ride, dealing with the ongoing pandemic, horrendous effects of climate change and social issues, including various kinds of inequality that are only exacerbated and, in some cases, brought to light through social media. Another crisis is that the political motion we might hope for is stalled by the inadequacies and susceptibilities to crass manipulation that our now-elderly political institutions and constitutions reveal.

“It will be more and more difficult to remain either aloof from or unaware of these interlocking struggles. It may well turn out to be a good thing in the long run that we are all drawn in. It will be good, if somehow, we move toward greater acknowledgment of all of the inequalities and problems and somehow forge a degree of consensus about the solution. We may not, but we could.”

These systems can be built to support full agency for everyone by design

Doc Searls, internet pioneer, co-author of “The Cluetrain Manifesto” and “The Intention Economy” and co-founder and board member at Customer Commons, predicted, “There is hope for 2035 if we think, work, invest and gather outside the web and the closed worlds of apps available only from the siloed spheres provided by giant companies and company stores. That closed world – or collection of private worlds – is based on a mainframe-era model of computing on networks called ‘client-server’ and might better have been called ‘slave-master.’

“This model is now so normative that, without irony, Europe’s GDPR [General Data Protection Regulation] refers to the public as ‘data subjects,’ California’s CCPA calls us ‘consumers’ and the whole computer industry calls us ‘users’ – a label used elsewhere only by the drug industry. None call us ‘persons’ or ‘individuals’ because they see us always as mere clients. But the web and the tech giants’ app ecosystems are just early examples of what can be built on the Internet. By its open and supportive end-to-end design, however, the Internet can support full agency for everyone and not just the servers of the world and the companies that operate them.

“I don’t see full agency being provided by today’s tech leaders or politicians, all of whom are too subscribed to ‘business as usual.’ I do see lots of help coming from technologists working with communities, especially locally, on solutions to problems that can best be solved first by tools and systems serving individuals and small groups.

“I expect mostly good outcomes because it will soon be clear to all that we have no choice about working toward them. As Samuel Johnson said, ‘When a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’ Our species today is entering a metaphorical fortnight, knowing that it faces a global environmental catastrophe of its own making.

“To slow or stop this catastrophe we need to work together, learn from each other, and draw wisdom from our histories and sciences as widely and rapidly as possible. For this it helps enormously that we are digital now and that we live in the first and only technical environment where it is possible to mobilize globally to save what can still be saved. And we have already begun training at home during a strangely well-timed global pandemic. The internet and digital technology are the only ways we have to concentrate our collective minds in the metaphorical fortnight or less that is still left to us.”

The definition of ‘social good’ is evolving in digital environments

Jamais Cascio, distinguished fellow at the Institute for the Future, responded, “The further spread of internet use around the globe will mean that by 2035 a significant part – perhaps the majority – of active digital citizens will come from societies that are comfortable with online behavioral restrictions. Their 2035 definition of the ‘social good’ online will likely differ considerably from the definition we most frequently discuss in 2021. This isn’t to say that attempts to improve the social impacts of digital life won’t be ongoing, but they will be happening in an environment that is culturally fractured, politically restive and likely filled with bots and automated management relying on increasingly obscure machine-learning algorithms.

“Our definition of ‘social good’ in the context of digital environments is evolving. Outcomes that may seem attractive in 2021 could well be considered anathema by 2035, and vice versa. Censorship of extreme viewpoints offers a ready example. In 2021, we’re finding that silencing or deplatforming extreme political and social voices on digital media seems to have an overall calming effect on broader political/social discourse. At the same time, there remains broad opposition (at least in the West) to explicit ‘censorship’ of opinions.

“By 2035, we may find ourselves in a digital environment in which sharp controls on speech are widely accepted, where we generally favor stability over freedom. Conversely, we may find by 2035 that deplatforming and silencing opinions too quickly becomes a partisan weapon, and there’s widespread pushback against it, even if it means that radical and extreme voices again garner outsized attention.

“In both of these futures, the people of the time would see the development as generally supporting the social good – even though both of these futures are fairly unattractive to the people of today.”

The metaverse may be well on its way by 2035: The seeming gap between digital and real-world spaces will soon be gapless. What then?

Barry Chudakov, founder and principal at Sertain Research, said, “I imagine an awakening to the nature and logic of digital spaces, as people realize the profound human, psychological and material revolutions these spaces – the metaverse (virtual representation combined with simulation) – will provoke. I suspect we will go through a transition period of unlearning: We will look at emerging digital spaces and have to unlearn our inherited alphabetic logic to actually see their inherent dynamics.

“A central question: By 2035 what will constitute digital spaces? Today these are sites, streaming services, apps, recognition technologies, and a host of (touch)screen-enabled entertainments. But as we move into mirror worlds, as Things That Think begin to think harder and more seamlessly, as AI and federated learning begin to populate our worlds and thinking and behaviors – digital spaces will transform. It is happening already.

“Consider inventory tracking – making sure that a warehouse knows exactly what’s inside of it and where: Corvus Robotics uses autonomous drones that can fly unattended for weeks on end, collecting inventory data without any human intervention at all. Corvus Robotics’ drones are able to inventory an entire warehouse on a rolling basis in just a couple days, while it would take a human team weeks to do the same task. Effectively Corvus’ drones turn a warehouse into a working digital space. Another emerging digital space: health care. In the last couple of years, the sale of professional service robots has increased by 32% ($11.2 billion) worldwide; the sale of assistance robots for the elderly increased by 17% ($91 million) between 2018 and 2019 alone. Grace, a new medical robot from Singularity Net and Hanson Robotics, is part of a growing cohort of robot caregivers working in hospitals and eldercare facilities around the world. They do everything from bedside care and monitoring to stocking medical supplies, welcoming guests and even cohosting karaoke nights for isolated residents. As these robots warm, enlighten and aid us, they will also monitor, track and digitize our data.

“The gap between digital spaces and real-world space (i.e., us) is narrowing. Soon that seeming gap will be gapless. By 2035, a profound transition will be well on the way. The transition and distinction between digital worlds and spaces and the so-called real world will be less distinctive, and in many instances will disappear altogether. In this sense, digital spaces will become ubiquitous invisible spaces. Digital spaces will be breathing, will be blinking, will be moving. Digital spaces will surround us and enter us as we enter them. William Gibson said, ‘We have no future because our present is too volatile. … We have only risk management. The spinning of the given moment’s scenarios. Pattern recognition.’ The new immersion is submersion. We will swim through digital spaces as we now swim through water. Our oxygen tanks will be smart glasses, embedded chips, algorithms and AI. The larger question remains: What will this mean? What will this do to us and what will we do with this?

“Like Delmore Schwartz’s ‘heavy bear who goes with me,’ we carry our present dynamics into our conception of future digital spaces. Via cellphones, computers or consoles we click, swipe or talk to engage with digital spaces. That conception will be altered by advances in the following technologies, which will fuse, evolve, transform and blend to effect completely different dynamics:

  • Mirror worlds
  • Quantum computing
  • Robotics, machine intelligence, deep learning
  • Artificial intelligence
  • Federated learning
  • Recognition technologies
  • Surveillance capitalism and totalitarian oversight. As of 2019, an estimated 770 million monitoring CCTV cameras of the Skynet system were in use in mainland China; the number was expected to exceed 1 billion by the end of 2021.
  • Contact tracing
  • Data collecting, management and analysis

“We presently approach technology like kids opening presents at Christmas. We can’t wait to get our hands on the tech and jump in and play with it. No curriculum or pedagogy exists to make us stop and consider what happens when we open the present. With all puns intended, once we open it, the present itself changes. As does the past. As do we. Digital spaces change us, and we change in digital spaces. So, we will transform digital spaces in crisis mode, instead of the better way: using game theory and simulation to map out options. …

“As reality is digitized, the digital artifact replaces the physical reality. We have no structural or institutional knowledge that aids us in understanding, preparing for or adjudicating this altered reality. What are the mores and ethics of a world where real and made-up identities mingle? Consider for a moment how digital dating sites have affected how people get to know and meet significant others. Or how COVID-19 changed the ways people worked in offices and from home. Ask yourself: How many kids play outside versus play video games?

“Digital spaces have already been replacing reality. The immediate effect of ubiquitous digital spaces that are not distinct spaces but extensions of the so-called real world will be reality replacement.”

A tale of 2035: It need not be this grim, but serious work must be done to develop fair and palatable ways of paying for content

Judith Donath, a faculty fellow at Harvard’s Berkman Klein Center whose work focuses on the co-evolution of technology and society, shared this predictive scenario set in 2035:

“Back in 2021, almost 5 billion people were connected to the internet (along with billions of objects – cameras, smart cars, shipping containers, bathroom scales and bear collars, to name a few). They thought of the internet as a useful if sometimes problematic technology, a communication utility that brought them news and movies, connections to other humans and convenient at-home shopping.

“In 2035, nearly all humans and innumerable animate and inanimate others are online. And while most people still think of the internet as a network for their use, that is an increasingly obvious illusion, a sedating fiction distracting them from the fact that it now makes more sense to consider the internet to be a vast information-digesting organism, one that has already subsumed them into its data and communication metabolism.

“As nectar is to bees, data is to The Internet (as we’ll refer to its emergent, sovereign incarnation). Rather than producing honey, though, it digests that data into persuasive algorithms, continually perfecting its ability to nudge people in one direction or another. It has learned to rile them up with dissatisfactions they must assuage with purchases of new shoes, a new drink, a trip to Disney or to the moon. It has mastered stoking fear of others, of immigrants, Black people, White people, smart people, dumb people – any ‘Other’ – to muster political frenzy. Its sensors are everywhere and it never tires of learning.

“In retrospect, it is easy to see the roots of humankind’s subsumption into The Internet. There was the early blithe belief that ads were somehow ‘free,’ that content which we were told would be prohibitively expensive if we paid its real cost was being provided to us gratis, in return for just a bit of exposure to some marketing material. Then came the astronomical fortunes made by tycoons of data harvesting, the bot-driven conspiracies.

“By the end of the 2020s, everything from hard news to soft porn was artificially generated. Never static, it was continuously refined – based on detailed biometric sensing of the audience’s response (the crude click-counting of the earlier web long gone) – to be evermore-addictively compelling.

“Arguably the most significant breakthrough in The Internet’s power over us came through our pursuit of health and wellness. Bodily monitoring, popularized by Fitbitters and quantified selfers, became widespread – even mandated – during the relentless waves of pandemics. But the radically transformative change came when The Internet went from just measuring your response to chemically inducing it with the advent of networked infusion devices, initially for delivering medicine to quarantined patients but quickly adapted to provide everyone with personalized, context-aware micro-doses of mood-shifting meds: a custom drip of caffeine and cannabis, a touch of Xanax, a little cortisol to boost that righteous anger.

“It is important to remember that The Internet, though unimaginably huge and complex, is not, as science fiction might lead you to believe, an emergent autonomous consciousness. It was and is still shaped and guided by humans. But which humans and toward what goal?

“The ultimate effect of The Internet (and its earlier incarnations) has been to make power and wealth accrue at the very top. As the attention and beliefs of the vast majority of people came increasingly under technological control, the right to rule, whether won by raising armies of voters or of soldiers, was gained by those who wield that control.”

Donath continued: “From the standpoint of 2021, this prediction seems grim. Is it inevitable? Is it inevitably grim? We are moving rapidly in the direction described in this scenario, but it is still not inevitable. The underlying business model of the internet should not be primarily based upon personal data extraction. Strong privacy-protection laws would be a start. Serious work must be done to develop fair and palatable ways of paying for content.

“The full societal, political and environmental costs of advertising must be recognized: We are paying for the internet not only with the loss of privacy and, ultimately, of volition, but also with the artificial inflation of consumption in an overcrowded, climate-challenged and environmentally degraded planet.

“Even if we allow present trends to continue, one can argue, the future is not inevitably grim: we simply place our faith in the mercy of a few hugely powerful corporations and the individuals who run them, hoping that instead of milking the world’s remaining resources in their bottomless status competition, they will use their power to promote peace, love, sustainability and the advancement of the creative and spiritual potential of the humans under their control.”

The next sections of this report organize hundreds of additional expert quotes under the headings that follow the common themes listed in the tables at the beginning. For more on how this canvassing was conducted, see the last section, “About This Canvassing.”

2. Public digital spaces will be improved: Tech can be fixed, governments and corporations can reorient incentives and people can band together to work for reform

A notable share of the most hopeful respondents to this canvassing declared that in order to serve the public interest and improve digital spaces, the tech industry, government and civil society need to focus on achieving ethical tech design that values people over profit. They said that this – combined with vastly improved digital literacy globally, much greater investment in accurate, fair journalism and the closing of the digital divide – is crucial to bringing about the change needed for a better future with new, more effective digital-age social norms. Some added that global access to fact-based public information sources is essential if citizens are to participate responsibly in democratic self-governance.

While some respondents said the primary responsibility for improvements in the digital public sphere falls solely upon the technology industry or solely upon government or upon civil society, many said that real change requires human leadership across all sectors of society to bring it all together.

David J. Krieger, director of the Institute for Communication and Leadership, based in Lucerne, Switzerland, said, “What is needed in the face of global problems such as climate change, migration, a precarious and uncontrolled international finance system, the ever-present danger of pandemics, not to speak of a Hobbesian ‘state of nature’ or a geopolitical ‘war of all against all’ on the international level, is a viable and inspiring vision of a global future.

“The global network society is a data-driven society. The most important reforms or initiatives we should expect are those that make available more data of better quality to more people and institutions. Here the primary values and guiding norms are connectivity, flow of information, encouragement of participation in production and use of information, and transparency. Those in business, politics, civil society organizations and the public should focus on practical ways in which to implement these values.

“To the extent that they are implemented, it will become possible to mitigate the social harms caused by the economy of attention in media (clickbait, filter bubbles, fake news, etc.), political opportunism and the lack of social responsibility by business. Decisions on all levels and in all areas – business, education, health care, science and even politics – should be made on the basis of evidence and not on the basis of status, privilege, gut feelings, bias, personal experience, etc. Data-driven decision-making can, in many situations, be automated. This requires data on everything and everyone that is as complete and reliable as possible.”

Mark Surman, executive director of the Mozilla Foundation, a leading advocate for trustworthy artificial intelligence (AI), digital privacy and the open internet, wrote, “It is my optimistic side that says ‘yes,’ we can improve. This is far from a certainty, but right now we have governments and a public who actively want to point internet spaces in a better direction. And you have a generation of young developers and startup founders who want to do something different – something more socially beneficial – than what they see from big tech. If these people can rally around a practical and cohesive vision of what ‘a better internet’ looks like, they have the resources, power and smarts to make it a reality.”

David Weinberger, senior researcher at Harvard’s Berkman Klein Center for Internet & Society, commented, “These technologies are complex dynamic systems embedded in the complex dynamic system that we call life on Earth. I expect to see more concern about how the current systems are tearing us apart, along with a continuation of the underplaying of how they are binding us together. …

“We are not powerless in the face of our technology. We can choose the tech we find acceptable and we can mandate changes to make it serve us better rather than worse. Of course, complex dynamic systems are often – usually – unpredictable, nonlinear and chaotic, but because we humans can exert control if we choose to, I have to believe our social tech will get better at serving our human needs and goals.

“I do want to note that it is entirely possible that our ideas about what constitutes useful and helpful discourse are being changed by our years on social media. Because this is a change in values, what looks like negative behavior now may start to look positive. By this I mean that social media’s ways of collaboratively making sense of our world may start to look essential – like the first time we humans have had a scalable way of building meaning.

“If we are able to get past the existential threat posed by the ways our online social engagements can reinforce deeply dangerous beliefs, then I have hope that – with the aid of 2035’s tech – we’ll be able to make even more sense of our world together in diverse ways that have well-traveled links to other viewpoints.”

Rob Reich, a professor focused on public policy, ethics and technology who also serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford University, predicted, “In the absence of a significant change in the ethos of Silicon Valley, the regulatory indifference/logjam of Washington, D.C., and the success of the current venture capital funding model, we should not expect any significant change to digital spaces, and in 2035 our world will be worse.

“Our collective and urgent task is to address problems and make interventions in all three of these core elements: the ethos of Silicon Valley, the regulatory savvy and willpower of D.C. and the European Union, and the funding model of venture capitalists.”

Social media algorithms are the first thing to fix

A large share of respondents singled out the algorithmic intermediaries used by big tech firms as the most significant problem to overcome. They note that these algorithms privilege user engagement and profit over the quality of the content social media users see. They argue that those engagement incentives have displaced journalism and other traditional democratic intermediaries in shaping knowledge-sharing and discourse in digital spaces.

They point out that the surveillance-based business model of digital capitalism enabled a new class of mega-rich individuals and corporations to control the primary infrastructures of the public sphere and wield enormous lobbying power over government. These experts urge that big tech should focus on solving emerging problems by implementing more ethical applications of artificial intelligence (AI) to improve online spaces that are important to democracy and the public good.

Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University, wrote, “Technology could be designed to promote the common good and human well-being. This is a decision each organization must make in regard to what it produces. Whether or not to promote the common good and human well-being is also a decision each citizen must make each time they use any technology. Human designers and engineers make a series of choices about how technology will work, what behaviors will be allowed, what behaviors will not be allowed and hundreds of other basic decisions which are baked into technology and are often opaque to users. Then human users take that technology and use it in myriad ways, some helpful, some harmful, some neutral.

“Governments and regulatory groups can require certain features in technology, but ultimately have great difficulty in controlling technology. That’s why we spend time thinking about ethical decisions and teaching folks how to incorporate ethics into decision making, so individuals and companies and governments can consider more carefully the effect of technology on humans.”

Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Communications, predicted, “I do believe the largest social media services will continue spending to make their services more appealing to the masses and to avoid regulatory responses that could curb their growth and profitability. They will look for ways to support public initiatives toward confronting global warming, advocating for diversity and equality and optimizing our civic infrastructure while supporting innovators of many stripes.

“To serve the public good, social media services will likely need to reevaluate their business models, innovate on identity and some degree of digital embodiment, and scale up automated content moderation in ways that may challenge those business models.

“Regulators will likely need to be involved to require more guardrails against misinformation/disinformation, memetic ideologies, and exploitation of the ad model for microtargeted persuasion.

“However, this discussion often overlooks the reality that people have flocked to social media and continue to use it. Surveys continue to show that most users don’t change their behaviors, and when things become problematic they often want regulators to hold the companies accountable rather than taking responsibility themselves. So, part of this may simply be about maturing digital literacy.”

Amy Zalman, futures strategist and founder of Prescient Foresight, wrote, “Positive change could come from:

  1. Engineering options and choice into the design of digital spaces, so that those that work according to recommender systems or predictive algorithms open new spaces up for people rather than closing them into their preferences and biases.
  2. Voluntary accountability by technology platform CEOs and others who profit from the internet and digital spaces. This accountability will come about, if it does, from consistent nudging by government leaders, other business leaders and the public. I do not believe that the public sector can impose these options through law or regulation very effectively right now, except at blunt levels.
  3. Literacy training.”

Eric Goldman, co-director of the High-Tech Law Institute at Santa Clara University School of Law, observed, “In 15 years, I expect many user-generated content services will have figured out ways to mediate conversations to encourage more pro-social behavior than we experience in the offline world.”

Jenny L. Davis, a senior lecturer in sociology at the Australian National University, said, “Although any good/bad question obscures the complex dynamics of evolving sociotechnical systems, it is true that the speed of technological development fundamentally outpaces policies and regulations. By 2035, I expect platforms themselves to be better regulated internally. This will be motivated, indeed necessary, to sustain public support, commercial sponsorships and a degree of regulatory autonomy. I also expect tighter policies and regulations to be imposed upon tech companies and the platforms they host.”

An expert on media and information policy commented, “Several forces and initiatives will start to mitigate the problem thanks to an increasing awareness of the heterogeneous positive and negative impacts of digital spaces and digital life on individuals, communities and society. For one, technology designers will increasingly reconsider the behavioral and social effects of their choices and stronger ethical considerations will start to change the technological architectures of digital spaces.

“An increasing number of individuals will argue for the need for a technology ethics that can govern digital spaces and digital life. New initiatives and businesses will emerge that use ethics-informed design, creating alternative digital spaces in which individuals and groups can interact.

“We will increasingly realize that the effects of digital technology are heterogeneous and context-specific. Hence questions such as ‘does the internet increase or reduce depression?’ will be recognized as overly simple, as average statistics do not reveal much about a heterogeneous population. Once this is recognized, it is possible to advance technology designs and user conventions in ways that mitigate undesirable effects.”

A professor and researcher who studies the media’s role in shaping people’s political attitudes and behaviors said, “By 2035 tech leaders will be more aware of the problematic aspects of the digital sphere and design systems to work against them. There will be greater government regulation and more public awareness of the problematic aspects of digital life. There will be more choice in digital spaces. There will be less incivility and mis- and disinformation. There will still be problems with bringing diverse people together to cooperate.”

A leading expert in human-computer interfaces at a major global tech company urged, “Ethicists at large tech companies need to have actual power, not symbolic power. They can advise, but rarely (ever?) actually stop anything or cause real practices to change.”

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, wrote, “We need to assume that in the coming 10-15 years, we will learn to harness digital spaces in better, less polarizing manners. In part, this will be due to the ability to use better AI-driven filtering and thus develop more-robust digital governance. … There will of course always be those who would weaponize digital spaces, and the need to be vigilant isn’t going to go away for a long while. Better filtering tools will be met by more-advanced forms of cyberbullying and digital malfeasance, and better media literacy will be met by more elaborate fabrications – so all we can do is hope that we can keep accentuating the positive.”

Kate Klonick, a law professor at St. John’s University whose research has focused on private internet platforms’ policies and social responsibilities, responded, “Norms will coalesce around speech and harms on platforms. I think political leaders will have little role in this happening. I see tech leaders and academics playing a role in shaping and identifying where the norms come out and where effective policy can land. I think that users in 2035 will have more control over what they see, hear and read online – and also, in some ways, less control as major technologies consolidate.”

A scientist and expert in data management who works at Microsoft said, “Facebook, Twitter and other social media companies are investing heavily in flagging hate speech and disinformation. Hopefully, they’ll have the legal option to act on what they flag.”

A professor emeritus of engineering predicted, “Responsible internet companies will rise. Irresponsible internet companies will become the home to a small number of dangerous organizations.”

Selections of respondents’ comments on the broad topic of the people-driven change that is needed are organized in the next section under these themed subheadings:

  1. Some tech design will focus on pro-social and pro-civic digital goals
  2. Government regulation plus less-direct “soft” pressure by government will help encourage corporations’ adoption of more ethical behavior
  3. The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will force change
  4. People will evolve and improve their use of digital spaces and make them better
  5. New internet governance structures will appear that draw on collaborations among citizens, businesses and governments
  6. Better civic life online will arise as communities find ways to underwrite accurate, trustworthy public information – including journalism

Some tech design will focus on pro-social and pro-civic goals

A number of respondents made specific suggestions about what the tech sector can do to improve the digital public sphere. They urged that tech business methods and technology design be changed and oriented toward public good – seeing people as more than mere “users.” Some noted that well-applied artificial intelligence (AI) and the implementation of decentralized and distributed technologies may help achieve better content moderation or help create decommodified social spaces. Some encouraged the creation of digital spaces where rage-inducing or manipulative engagement is deemphasized and civil discourse is encouraged. For instance, they see advances leading to ad hoc birds-of-a-feather networks where like-minded people join together to discuss issues and solve problems.

Henning Schulzrinne, an Internet Hall of Fame member and former CTO for the Federal Communications Commission, wrote, “Some subset of people will choose fact-based, civil and constructive spaces; others will be attracted to or guided to conspiratorial, hostile and destructive spaces. For quite a few people, Facebook is a perfectly nice way to discuss culture, hobbies, family events or ask questions about travel – and even to, politely, disagree on matters political. Other people are drawn to darker spaces defined by misinformation, hate and fear.

“All major platforms could make the ‘nicer’ version the easier choice. For example, I should be able to choose to see only stories or social media posts that rely on fact-checked, responsible publications. I should be able to avoid posts by people who have earned a reputation for offering low-quality contributions, i.e., trolls, without having to block each person individually. This might also function as the equivalent of self-exclusion in gambling establishments. (I suspect grown children or spouses of people falling into the vortex of conspiracy theories would want such an option, but that appears likely to be difficult to implement short of having power of attorney.)

“Social media platforms have options other than a binary block-or-distribute, such as limiting distribution or forwarding. This might, in particular, be applied to accounts that are unverified. There are now numerous address-verification systems that could be used to ensure that a person is indeed, say, living in the United States rather than at a troll farm in Ukraine.”
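The filtering options Schulzrinne describes – reputation thresholds instead of one-by-one blocking, and limited distribution rather than a binary block for unverified accounts – can be pictured in a few lines of code. The sketch below is a minimal illustration under assumed inputs, not any platform’s actual system; the `Post` fields, the `reputation` scores and the `min_reputation` and `fact_checked_only` knobs are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str         # hypothetical author handle
    text: str
    fact_checked: bool  # True if the post relies on a fact-checked, responsible publication
    verified: bool      # True if the account passed an address/identity check

# Hypothetical per-author quality scores, e.g., aggregated from community
# feedback: near 0.0 for habitual trolls, near 1.0 for constructive posters.
reputation = {"alice": 0.9, "troll42": 0.1}

def visible(post: Post, min_reputation: float = 0.5,
            fact_checked_only: bool = False) -> bool:
    """Apply the reader's own filters: a fact-check gate plus a reputation threshold."""
    if fact_checked_only and not post.fact_checked:
        return False
    # One threshold replaces blocking each low-quality contributor individually.
    return reputation.get(post.author, 0.5) >= min_reputation

def forwarding_cap(post: Post) -> int:
    """Unverified accounts get limited distribution, not a binary block."""
    return 10_000 if post.verified else 100  # hypothetical forwarding caps
```

The point of the sketch is that these are reader-side (or platform-default) dials rather than censorship switches: the same feed can be tuned strict or loose per user, which is what would make the ‘nicer’ version the easier choice.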

Stephen Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “The biggest change by 2035 will be the introduction of measures that allow for the creation of contexts. In an environment where every message can potentially be seen by everyone (this is known as ‘context collapse’) we’ve seen a trend toward negative and hostile messaging, as it is a reliable way to gain attention and followers. This has created a need, now being filled, for communication spaces that allow for the creation of local communities.

“Measuring online impact by high follower counts, which leads to the proliferation of negative impacts, will become a thing of the past. It should be noted that this change is being driven not by content-moderation algorithms, which have been the characteristic response of social media platforms (Facebook, Twitter, TikTok, etc.), but by changes in network topology. These changes can be hard-coded into the design of the system, as they are, for example, in platforms like Slack and Microsoft Teams. They can be a natural outcome of resource limitations and gateways, for example in platforms like Zoom.

“I think we may see algorithmically generated network topologies in the near future, perhaps similar to Google’s Federated Learning of Cohorts (FLoC) but with a more benign intention than the targeting of advertising. Making such a system work will require more than simply placing login or subscription barriers at the entrance to online communities; today’s social networks emerged as a response to the practice in the early 2000s, and trying it again is unlikely to be successful.

“A more promising approach may be found in a decentralized approach to online social networks, as found in (say) Mastodon or Diaspora. Protocols such as ActivityPub and Webmention have been designed around a system of federated social networks. However, the barrier to entry remains high, and these systems are still too technical to reach widespread adoption.

“There needs to be a concerted effort to, first, embrace the idea of decentralized social networking, and second, ease the transition from toxic social media platforms to more-personable community networks. This will require that social and technology leaders embrace a certain level of standardization and interoperability that is not owned by any particular company (I recognize that this will be a challenge for the tech community).

“In particular, a mechanism for decentralized and (in some way) self-sovereign identity will be required to, on the one hand, enable portability across platforms and, on the other hand, ensure account security. Government can, and may be required to, play a role in such a mechanism. We are seeing signs that we’re moving toward such an approach.

“We can draw perhaps a parallel between what we might call ‘cognitive networking’ and what we already see in financial networking. A person can have a single authenticated identity, guaranteed by government, that moves across financial platforms. Their assets are mostly fluid within the system; they can move them from one platform to another and exchange them for goods and services. In cognitive networking, we see a similar design; here, however, a person’s cognitive assets consist of activity data, content created by the person, lists and graphs, nonfungible tokens and other digital assets. The value of such assets is not measured financially but rather directly by the interactions generated in decentralized communities.

“In essence, the positive outcome from such a development is a transition from an economy based on mass to an economy based on connection and interactivity. This, if well executed, has the potential to address wealth inequality directly by limiting the utility of the accumulation of wealth, just as decentralized communities limit the utility of the accumulation of large numbers of followers, by making it too expensive to be able to extract value from low-return practices such as mass advertising and propaganda.

“Needless to say, there’s a lot that could go wrong. Probably the major risk is the concentration of platform ownership. Even if we achieve decentralized communities, if they depend on a given technology provider (for example, Slack or Microsoft) then there is a danger that this centralization will be monetized, creating again inequality and a concentration of wealth, and undermining the utility of cognitive networking. There needs to be a public infrastructure layer underpinning such a system, and the danger of public infrastructure being privatized is ongoing and significant.

“We might also get identity wrong. For example, how do we distinguish between individual actions and actions taken by a proxy, such as an AI agent? Failure to draw that distinction creates an advantage for individuals with access to masses of AI proxies, as they would be able to be simultaneously in every community.

“The impact would be very similar to the impact of targeted advertising in social network platforms such as Facebook, where it’s not possible to know what messages a given entity is targeting to different individuals and different communities, because each message is unique, and each message may be delivered by proxies whose origins cannot be detected or challenged by the recipient. These risks are significant because unless individuals are able to attain an equitable standing in a cognitive network, they are unable to participate in community decision-making, with the result that social decision-making will be conducted to the advantage of those with greater standing, just as occurs in financial networks today.”
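Downes’s requirement of decentralized, self-sovereign identity can be read as ordinary public-key cryptography: the person, not any platform, holds the private key, and any platform can verify the same identity from the public key alone, which is what makes it portable. Below is a minimal sketch using the Python `cryptography` package; everything around it (how keys are registered or guaranteed by government, how proxies are disclosed) is deliberately left open, since those are exactly the unresolved design questions he raises.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The user generates and keeps the private key; no platform stores a password.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # shared freely; this is the portable identity

post = b"Same author, any platform."
signature = private_key.sign(post)

# Any platform that knows only the public key can confirm authorship,
# so the identity moves across platforms the way financial assets do.
try:
    public_key.verify(signature, post)
    print("verified: signed by the holder of this identity")
except InvalidSignature:
    print("rejected: forged or altered")
```

Note that a signature proves control of a key, not personhood: a key handed to an AI agent signs just as convincingly, which is Downes’s individual-versus-proxy problem in miniature.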

John Battelle, co-founder and CEO of Recount Media, said, “Within 15 years, I believe the changes wrought by significantly misunderstood technologies – 5G and blockchain among them – will wrest control of the public dialogue away from our current platforms, which run mainly on advertising-based business models.”

Heather D. Benoit, a senior managing director of strategic foresight, responded, “Digital life will (hopefully) be improved by a number of initiatives aimed at reducing the proliferation of misinformation and conspiracy theories online. Blockchain systems can help trace digital content to its source. Detection algorithms can identify and catalog deepfakes. Sentiment and bias analysis tools allow readers to better understand online content. A number of digital literacy programs are aiming to help educate the general public in online safety and critical thinking.

“One of the more interesting solutions I’ve seen is AIs built to break down echo chambers by exposing users to alternative viewpoints. There are a number of challenges to overcome – misinformation may just be one. But the fact that questions are being asked and solutions devised is a sign that digital life is maturing and that it should improve given enough time.”
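The provenance idea Benoit mentions – blockchain systems helping trace digital content to its source – reduces to registering a cryptographic fingerprint of content at publication time so any later copy can be checked against it. The toy sketch below uses an in-memory, hash-chained list as a stand-in for a real distributed ledger; the entry fields and the `example-newsroom` source are illustrative only.

```python
import hashlib
import json
import time

ledger = []  # toy stand-in for a tamper-evident, append-only ledger

def register(content: bytes, source: str) -> str:
    """Record a fingerprint of the content and who published it."""
    digest = hashlib.sha256(content).hexdigest()
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {"content_hash": digest, "source": source,
             "timestamp": time.time(), "prev": prev}
    # Chaining each entry to the previous one makes silent edits detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return digest

def trace(content: bytes):
    """Return the registered origin of this exact content, if any."""
    digest = hashlib.sha256(content).hexdigest()
    return next((e for e in ledger if e["content_hash"] == digest), None)

register(b"original article text", source="example-newsroom")
print(trace(b"original article text")["source"])  # -> example-newsroom
print(trace(b"doctored article text"))            # -> None: no provenance record
```

The limitation is equally visible in the sketch: a hash matches only byte-identical content, so any edit – benign reformatting or a deepfake swap – breaks the match, and showing that a near-copy derives from a registered original requires the detection and similarity tools Benoit also lists.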

Gus Hosein, executive director of Privacy International, commented, “Digital spaces are messy. They were supposed to be diverse, but to exist, the platforms work to gamify behaviour, promote consumption and ensure that people continue to participate. While much could be said of ‘old media,’ they weren’t capable of making people behave differently in order to consume. And so, we have small numbers of fora where this is taking place, and they dominate and shape behaviour. To minimise this … we would have to promote diversity of experience.

“Yes, we could promote alternative platforms but that hardly ever works. We could open infrastructure, but someone would still have to build and take responsibility for and secure it. The fact that alternative fora have all failed is possibly because singular fora weren’t ever supposed to be a thing in a diverse digital world that was to reflect the diversity in the world.

“The platforms need users to justify their financial existence, so that’s why they shape behaviour, promote engagement, ensure consumption. If they didn’t, then they wouldn’t exist. So, maybe the objective should be a promotion of diversity of experience that isn’t mediated by companies that need to benefit from human interaction. If so, that means we will have to be OK that there are fora where people are nearly solely up to ‘bad things’ because the alternative is fewer fora that replicate the uniformity of the current platforms.”

A tech CEO, founder and digital strategist said, “A positive transformation could occur if the large tech platforms can find ways to mitigate effects of propaganda and disinformation campaigns. How well can they manage the problem of disinformation while honoring the principle of free speech? Legislation could help, but much depends on the will and capabilities of the platform operators. Possible solutions might be to restrict the uses of data and enforce interoperability. Tech monopolies have evolved partly due to network effects, and these are widely held to be a substantial part of the problem. Addressing monopoly is partly a legal issue, partly a business issue and partly (in this case) an issue of technology.”

A strategy and research director wrote, “For a positive scenario to play out, wealth must be more evenly distributed. Because so much of today’s wealth is tied up in digital spaces and assets, how they evolve must include a redistribution. Initiatives could shift the value equation to cooperative/community-based rewards systems for information at the personal level. This is more likely to happen outside the current financial/reward system. So, cryptocurrency would likely play a role, and digital assets and exchanges would aggregate P2P [peer-to-peer].

“A large trigger would be open-source developments in biochemistry (CRISPR technologies) that enable gene editing to sharply address the increasing tyranny of health care costs. By working to eliminate disease, cancers, etc., people will come to understand the value of sharing their genetic code despite the risks – pooling information for the common good means we learn faster than the government and the providers.

“When this trigger brings people back into learning, science may again have a role to play. Making positive change also requires a rethinking of educational access and some return to meritocracy for accelerated access so a broader swath of the population can again prosper.”

Susan Price, human-centered design innovator at Firecat Studio, observed, “People are taking more and more notice of the ways social media (in particular) has systematically disempowered them, and they are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds. An example is Clubhouse – a live-audio platform with features such as micropayments to content and value creators and a lively co-creation community that is pushing for accessibility features and etiquette mores of respect and inclusion. Another signal for change is the popularity of the documentary ‘The Social Dilemma,’ and the way its core ideas have been adopted in popular vernacular.

“The average internet user in 2035 will be more aware of the value of their attention and their content contributions due to platforms like Clubhouse and Twitter Spaces that monetarily reward users for participation. Emerging platforms, apps and communities will use fairer value propositions to differentiate and attract a user base.

“Current problems such as the commercial exploitation of users’ reluctance to read and understand terms of service will be solved by the arrival of competing products and services that strike a fairer bargain with users for their attention, data and time. Privacy, malware and trolls will remain an ongoing battleground; human ingenuity and the lack of coordination between nations suggest that these larger issues will be with us for a long time.”

Brent Shambaugh, developer, researcher and consultant, predicted, “Decentralized and distributed technologies will challenge the monopolies. Many current tech leaders and politicians will become less relevant as they drown in their own hubris. The next 14 years will be turbulent in both the physical and digital worlds, but the average user will come out on top. Tech leaders and politicians who follow this trend will survive. I could believe the opposite, but I choose to be an optimist.”

Counterpoint: Many experts say the tech sector alone is not likely to lead the way to significant change

A share of respondents said they do not expect that people in the technology sector will play a leading role in helping to better the digital public sphere. Following is a selection of representative comments. (Many more statements along these lines are included in a later section in this report that includes experts’ critical comments about surveillance capitalism, datafication and manipulation.)

Greg Sherwin, a leader in digital experimentation with Singularity University, said, “As long as humans are treated as round pegs forced to fit into the square holes in the mental models of the greatest technological influencers of digital spaces, negative side effects will accumulate with scale, and users who are forced into binary states will react in binary conflicts by design. As it is now, most of the leadership behind the evolution of digital spaces is weighted heavily toward those with a reductionist, linear view of humans and society. Technology cannot remove the human from the human. And while the higher bandwidth capabilities of some digital spaces stand to improve empathy and connection, these can be just as easily employed for negative social outcomes.”

Christopher Richter, a professor at Hollins University whose research focuses on communications processes in democracies, predicted, “I am confident that the interacting systems of design processes, market processes and user behaviors are so complex and so motivated by wealth concentration that they cannot and will not improve significantly in the next 14 years. Diagnosis, reform and regulation are all reactive processes. They are slow, and they don’t generate profit, while new-tech development in irrational market environments can be compared to a juggernaut, leading to rapid accumulation of huge amounts of wealth, the beneficiaries of which in turn rapidly become entrenched and devote considerable resources to actively resisting diagnosis, reform and regulation that could impact their wealth accumulation.

“Social media and other digital technologies theoretically and potentially could support a more-healthy public sphere by channeling information, providing neutral platforms for reasoned debate, etc. But they have to be designed and programmed to do so, and people have to value those functions. Instead, they are designed to generate profit by garnering hits, likes, whatever, and people prefer or are more vulnerable to having their emotions tweaked than to actually cultivating critical thinking and recognizing prejudice. Thus, emotional provocation is foregrounded.

“Even if there is a weak will to design more-equitable applications, recent research demonstrates that even AI/machine learning can reflect deep-seated biases of humans, and the new apps will be employed in ways that reflect the biases of the users – facial-recognition software illustrates both trends. And even as the problems with something like facial recognition may get recognized and eventually repaired, there are many, many more new apps being rapidly developed, the negative effects of which won’t be recognized for some time.”

Alexa Raad, chief purpose and policy officer at Human Security, wrote, “Business models drive innovation. The quest for advertising revenue has driven innovations in the design of digital spaces as well as innovations in machine learning. Advertising – a primary profit center for tech behemoths like Facebook, Google, Twitter and TikTok – relies upon algorithms that engage and elicit an emotional response and an action (be it to buy a product or buy into a system of beliefs). It is hard to see new business models emerging that have the same economic return.”

Ian Peter, Australian internet pioneer, futurist and consultant, commented, “Monetisation of the digital space seems to be a permanent feature and there seems to be no mechanism via which concerned entities can address this.”

A leading internet infrastructure architect who has worked at major technology companies for more than 20 years, responded, “From the perspective of the designers and operators of these digital spaces, individual users are ‘shapeable’ toward an idealistic set of ends (users are the means toward the end of an ideal world) rather than being ends in themselves who should be treated with dignity and respect. This means the designers and operators of these digital spaces truly believe they are ‘doing good’ by creating systems that can be used to modify human behavior at large scale.

“Although this power has largely been used to increase revenue in the past, as the companies move more strongly into the political realm and as governments realize the power of these systems to shape behavior, there will be ever-greater collusion between the operators of these digital spaces and governments to shape societies toward ends that the progressive elements of governments believe will move societies toward their version of an ‘ideal future.’

“There is little any individual can do to combat this movement, as each individual voice is being drowned in an overwhelming sea of information, and individual voices that do not agree with the vision of the progressive idealists are being depromoted, flatly filtered and – in many cases – completely deplatformed. The problem is one of human nature and our beliefs about human nature.”

Russell Newman, associate professor of digital media and culture at Emerson College, observed, “Assuming we remain in a moment of unabated present forward movement, what prevails is a set of business models that continue to rely heavily on intensified tracking with an assist from artificial intelligence and machine learning, all of which we now know bake in societal inequities rather than alleviating them and point systems far away from any democratic outcome. Many of the debates about misinformation occurring now are in fact epiphenomena of several trends as parties harness them toward various ends. Several trends worry me in particular:

  1. While the largest tech companies receive the largest share of attention, the conduit providers themselves – AT&T, Comcast, Spectrum, Verizon – have been constructing their own abilities to track their users for the purpose of selling data about them to advertisers or other comers, and/or to strengthen their ability to provide intermediary access to such information across supply chains and more. Verizon’s recent handover of its Verizon Media unit to Apollo only means that one of the largest tracking entities in existence has been transferred to a sector that cares even less about the quality of democratic communications, seeking instead deeper returns. Clampdowns by tech giants on third-party tracking are similarly likely only to consolidate the source of tracking information on users with fewer, larger players. This is to leave aside that we are nowhere close to serious privacy legislation at the federal level.
  2. Adding to this, the elimination of network neutrality rules by the FCC is devastating for future democratic access to communications. In fact, the Trump [administration’s] FCC did not just remove network neutrality rules but took the agency itself out of even overseeing broadband communications overall. The resultant shift from common carriage communications, which required providers to take all paying comers, to private carriage portends all sorts of new inequities and roadblocks to democratic discourse while also potentially intensifying tracking (blocking the ability to use VPNs, perhaps). Maddeningly, the Biden administration shows little serious interest in fixing this; the fact it has yet to even hint at appointing a tie-breaking Democratic FCC commissioner with dwindling time remaining in this Congress is a disaster. [The FCC commissioner was finally appointed at the end of 2021.]
  3. Our tech giants are not just advertising behemoths but are also effectively and increasingly military contractors in their own right, with massive contracts with the intelligence and defense arms of the government. This instills troubling incentives that do not point toward increased democratic accountability. Facial-recognition initiatives in collaboration with police departments similarly portend intensifications of existing inequities and power imbalances.
  4. Traditional media upon which democratic discourse depends is continuing to consolidate; to add insult to injury, it is becoming financialized. Newspapers in particular are doing so under the thumb of hedge funds with no commitment to democratic values, instead seeing these important enterprises as revenue centers to wring dry and discard. ‘Citizen journalism’ is not a foundation for a democracy; a well-resourced sector prepared and incentivized to do deep investigative reporting about crucial issues of our time is. Emergent entities like Vox, Buzzfeed and Axios themselves received early support from the usual giants in tech and traditional media; and their own logics don’t necessarily lean toward optimally democratic ends, with Axios as recently as late 2020 telling the Wall Street Journal it saw itself as a software-as-a-service provider for other corporations.”

A Chinese social media researcher said he doubts any sort of beneficial redesign will emerge, writing, “We need to redesign the internet, but many incumbents won’t yield, or – for the same reasons – they won’t let it happen. Whether or not we can tame big tech politically, there are so many other challenges to the architecture of the internet that everything is leaning toward being controlled and centralized, eventually becoming fragile enough to be further abused or fall into worse perils.”

Erhardt Graeff, assistant professor of social and computer science at Olin College of Engineering, commented, “The only way we will push our digital spaces in the right direction will be through deliberation, collective action and some form of shared governance. I am encouraged by the growing number of intellectuals, technologists and public servants now advocating for better digital spaces, realizing that these represent critical public infrastructure that ought to be designed for the public good.

“Most important are initiatives that bring technologists together to realize the public purpose of their work, such as the Design Justice Network, public-interest technology and the tech worker movement. We need to continue strengthening our public conversation about what values we want in our technology, honoring the expertise and voices of non-technologists and non-elites; use regulation to address problems such as monopoly and surveillance capitalism; and, when we can, refuse to design or be subject to antidemocratic and oppressive digital spaces.”

Marcus Foth, professor of informatics at Queensland University of Technology, exclaimed, “Issues of privacy, autonomy, net neutrality, surveillance, sovereignty, etc., will continue to mark the lines on the battlefield between community advocates and academics on the one hand, and corporations wanting to make money on the other hand.

“Things could change for the better if we imagine new economic models that replace the old and tired neoliberal market logic that the internet is firmly embedded in. There are glimpses of hope with some progressive new economic models (steady state, degrowth, doughnut, and lots of blockchain fantasies, etc.) being proposed and explored. However, I am doubtful that the vested interests holding humankind in a firm grip will allow for any substantial reform work to proceed.

“These digital spaces are largely hosted by digital platform corporations operating globally. In the early days of the internet, the governance of digital spaces on the top ‘applications’ layer of the OSI (Open Systems Interconnection) model comprised simple and often organically grown community websites and Usenet groups. Today, this application layer is far more complex, as the commercial frameworks, business plans and associated governance arrangements – including policies and regulations – have all become far more sophisticated.

“While the pace of this progression seems to accelerate, its direction has not changed much since the World Wide Web went live in 1993. The underlying big platform corporations that have emerged are strongly embedded in a capitalist market logic, set to be profitable following outdated neoliberal growth key performance indicators (KPIs). What they understand to be ‘better’ is based on commercial concerns and not necessarily on social or community concerns.”

Dan Pelegero, a consultant based in California, responded, “If the approach toward making our digital spaces better is either profit-driven or compliance-driven, without any other motivators, then the economics of our digital spaces will only make life better for the owners of platforms and not the users. The issues around the governance of our digital spaces do not have to do with technology, they have to do with policy and how we, as people, interact. Our bureaucracies have moved too slowly to keep up with the pace of communication changes. Regulation of these spaces is predominantly a volunteer-led effort or still remains a low-compensation, high-labor activity.”

Liza Potts, professor of writing, rhetoric and American cultures at Michigan State University, wrote, “The lack of action on the part of platform leaders has created an environment where our democracy, safety and security are all at risk. At this point, the only solution seems to be to break apart the major platforms, standardize governance, implement effective and active moderation and hold people accountable for their actions. Without these moves, I do not see anything changing.”

Jeremy West, senior digital policy analyst at the Organization for Economic Cooperation and Development (OECD), said there are ways to make a difference outside of a full tech and government commitment to flipping the script entirely, writing, “Neither tech leaders nor politicians (with some scattered exceptions) have been especially helpful, and I don’t have much hope for improvement there. However, by 2035 I expect to see users having substantially greater control over the data they wish to share, and more options for accessing formerly ‘free’ services by choosing to pay a pecuniary fee rather than sharing their data.

“Greater transparency from online service providers about harmful content, including mis/disinformation, is on the way. That will improve the evidence base and facilitate better policymaking in ways that are not currently possible. I expect to see terrorist and violent extremist content, child sexual abuse material and the like pushed into ever-smaller and more-remote corners of the internet. That is not to say that it will be eradicated, though.”

A retired U.S. military strategist commented, “The financial power of the major social media platforms, enabling technology providers and competing macro political interests, will act in ways that enable maximum benefit for them and their financial interests. We need look no further than capitalist experience in other economic sectors: the industries of digital spaces have thus far demonstrated no distinctive separateness from the kind of economic power exercise and consolidation quite familiar to us in U.S. industry.”

A leader of a center for society, science, technology and medicine responded, “Without a major restructuring of capitalist incentives or other substantial regulatory action – neither of which I think are likely unless climate change makes it all a moot point – digital spaces and digital life will continue to be ‘business as usual,’ emphasis on the business. While my teaching in technology ethics writ broadly betrays at least some optimism that things *could* change, I think it is unlikely that they will.”

A 30-year veteran of internet and web development said, “Maybe – if we are lucky – over the next decade or two various digital spaces and people’s use of them will change in ways that serve (or seem to serve) the public good (within an evolving definition of that term) to an extent greater than they do today. It is likely that the digital oligarchy, as well as Wall Street, are going to fight tooth-and-nail to maintain the status quo.

“In the meantime, we are barreling headlong toward a country that is isomorphic with Huxley’s ‘Brave New World,’ Collins’ ‘The Hunger Games,’ Atwood’s ‘The Handmaid’s Tale,’ etc. (Cf. Chris Hedges’ ‘American Requiem’: ‘An American tyranny, dressed up with the ideological veneer of a Christianized fascism, will, it appears, define the empire’s epochal descent into irrelevance.’)”

An internet pioneer wrote, “The major changes in society point to greater stratification in its wealth. So, for-fee subscription services will do a better and better job of serving public good while only serving the wealthy. Free services that compete will continue to profit from manipulation by advertisers and other exploitive actors. Thus, community spaces will get better and worse depending on their revenue models, and social problems will not be addressed. (Black swan events like a change in our economic system might change things. Don’t bet on it.)”

A vice president for learning technologies predicted, “Tech leaders will help achieve improvements through their personal guidance (public and private) of their concerns to recognize the larger missions/aims that exist beyond corporate growth and personal power. Improvements in the digital lives of the average users will come through increasing the transparency of sources of information. Tech reforms I foresee include filtering mechanisms that recognize a filter’s origins – such as gatekeepers recognized for point of view, methods, etc. Persistent concerns will remain, especially the emerging approaches we see today in which players are gaming the system to harmful ends, including various forms of warfare.”

The leader of a well-known global consulting firm commented, “The emergence of new business and economic models, and a new and updated view of what public commons are in the digital age, might possibly help. Digital spaces suffer from the business models that underlie them, those that encourage and amplify the most negative behaviors and activities.”

A policy entrepreneur said, “Some corporations will successfully market their differentiation as leaders in trust-building and proactive ethical behaviors. However, there will be some holdouts continuing to exploit surveillance capitalism and providing platforms for misinformation that serves social division. All wealthy Western countries are going to surpass the U.S. in responsible digital technology regulations before 2030. Between compliance with the non-U.S. standards and the example provided by Engine No. 1 to shake up boards of directors, multinational corporations will choose the lowest-cost compliance strategies and will be swayed not to be on dual tracks.”

An accomplished programmer and innovator based in Berkeley, California, wrote, “Simply put, digital spaces are driven by monetary profit, and I don’t see that changing, even by 2035. The profit motive means that providers will continue to do the least amount of work necessary to maximize profit. For example, software today is insecure and unreliable, but the cost of making it secure and reliable is higher than providers want to pay; it would cut into their profits. In a slightly different but still related vein, the ‘always on’ aspects of digital spaces discourage people from human things like inner contemplation or even just reading a book. The providers don’t make money if you are just meditating on inner peace, so they make their platforms as addictive as possible. There is no incentive for them to do otherwise.”

An editorial manager for a high-tech market research firm said, “Elites are now firmly in control of emerging digital technology. The ‘democratization’ of internet resources has run its course. I don’t see these trends changing over the next few decades.”

A professor of sociology at an American Ivy League university responded, “Unless we re-educate engineers and tech-sector workers away from their insane notions of technology that can change society in ways in line with their ideologies and toward a more nuanced and grounded understanding of the intersection of technology and social life, we’ll continue to have sociopathic technologies foisted upon us. Unless we can dismantle the damaging personal data economy and disincentivize private data capture and the exchange of database information for profit, we will continue to see the kinds of damage through personalization algorithms, leaks, and the very real possibilities that such information is used to nefarious ends by governments. Until Jack Dorsey pulls the plug on Twitter and Mark Zuckerberg admits that Facebook has been a terrible mistake, and Google steps away from personal data tracking, we are not headed anywhere better by 2035.”

A professor emeritus of social sciences commented, “The tremendous clout of advertisers makes it extremely difficult to restrict corporate surveillance, which often is done insecurely, leaving everyone vulnerable to hackers and malware. The struggle to secure online communications and transactions against attempts to mandate backdoors everywhere makes it difficult for device and system developers to make a secure computer or phone. Another challenge is finding ways to reduce hate and dangerous misinformation while preserving civil liberties and free speech. But I do believe that the continuation of the information commons in the form of open courseware, Wikipedia, the Internet Archive, fair-use provisions in intellectual property laws, open university scientific papers, all of the current and future online collaborations to address environmental problems, and open access to government will provide support to all of our efforts to make 2035 a better world than the one we seem to be heading toward at the moment.”

Government regulation plus less-direct ‘soft’ pressure by government will help encourage corporations’ adoption of more-ethical behavior

A share of these experts, whether or not they are hopeful for significant improvement of the digital public sphere, argued that regulation is necessary. They expect that legislation and regulation of digital spaces will expand, nudging the profit-focused firms of the digital economy to attend to issues of privacy, surveillance and data rights and to finally rein in misinformation and disinformation to some degree. While some see legislation as a remedy, others disagree, noting that regulation could lead to unwanted negative outcomes – among them the stifling of innovation and free speech and the further empowerment of authoritarian governments. Thus, a share of these experts suggest that a combination of carefully directed regulation and “soft” public and political pressure on big tech will lead its leaders to be more responsive and attuned to ethical design aimed at better serving the public interest.

Andrew Wyckoff, director of the OECD’s Directorate for Science, Technology and Innovation, predicted, “The twin forces of innovation and heightened recognition that digital infrastructure is essential to the economy and society will have the biggest impact. As for innovation, we will witness a profound change as ubiquitous computing enabled by fibre, 5G and embedded sensors and linked equipment and devices (the Internet of Things), augmented by AI, becomes a reality. This new platform will unleash another innovation cycle. The pandemic has made it clear to all policymakers that digital infrastructure – from the cables to widely used applications and platforms – is an essential public service, and the light-touch regulation of the internet’s first few decades will fade.

“Governments are slowly developing the capacity and know-how to govern the digital economy and society. This new cadre of policymakers will assert ‘sovereignty’ over what was ungoverned and will seek to promote digital spaces as useful, safe places, just as they did for automobiles and roads in the 20th century. What will be noticeably improved about digital life for the average user in 2035? Key initiatives will be digital identities, control over personal data, protection of vulnerable populations (e.g., children) and measures to improve security.

“What current problems will persist and continue to raise major concerns? The end-to-end property of the internet, which is its ‘democratising’ feature, has led to an inevitable decentralisation and recentralisation, altering power dynamics. This shift is destabilising and naturally resisted by incumbents, causing strife and calls to reassert control.”

Peng Hwa Ang, professor of media law and policy at Nanyang Technological University, Singapore, commented, “What we are seeing is friction arising from the early days of use of disruptive technologies. We need law to lubricate these social frictions. Yes, I know Americans tend to see laws as stopping action. But consider a town where all the traffic lights are green. If laws, judiciously formulated, passed and enforced, are social lubricants, these frictions will be minimised. I expect therefore that people will appreciate the need for such social lubrication.

“John Perry Barlow’s Declaration of the Independence of Cyberspace is not an ideal. It was obvious to me when it was published that it was not realistic. It has taken many people some 20 years to realise that. The laws need to catch up with the technology. Facebook, for example, is now aware that it needs some regulation (internal rules short of hard government laws) in order to actually help its own business. Without some restraint, it is blamed for, and thus associated with, bad and criminal action. In short, I am optimistic because I think:

  • We are realising the futility of Barlow’s declaration.
  • The problems we face and will face highlight the need for social lubrication at different levels.
  • These regulations will come to pass.”

Stephan G. Humer, internet sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, said, “Initiatives aimed at empowerment and digital culture will probably have the greatest impact because this is where we continue to see the greatest deficits and therefore the greatest need for change. People need to be able to understand, master and individually shape digitization, and this requires appropriate initiatives. A diverse mix of initiatives – some governmental, some nongovernmental – will play a crucial role here! The result will be that digitization can be better appreciated and individually shaped. Increasingly, the effects that were expected from digitization at the beginning will come to pass: a better life for everyone, thanks to digital technology.

“The more digital technology becomes a self-evident part of our lives that remains under people’s control, the better. People will not let this aspect of control be taken away from them. The dystopias presented so far, some of which have prophesied a life ‘in the matrix,’ will not become an issue. So far, sanity has almost always triumphed, and that will not change. The more people in the world can use the internet sensibly, the more difficult it will be for despots and dictators.”

Willie Currie, who became known globally as an active leader with the Independent Communications Authority of South Africa, predicted, “The combination of antitrust interventions in the U.S. and algorithm regulation in the European Union will rein in the tech companies by 2035. Organised civil society in both territories as well as increasing digital literacy will drive the demand for antitrust and regulatory action. This is the way it always is with technological development.

“If one regards the internet as a hyper-object similar to global warming, regulating its problematic aspects will require considerable global coordination. Whether this will be possible in the current global political space is unclear. Legislation to fix problems arises after the implementation of new technologies. During the 2000s there was an opportunity to introduce global internet regulation through a treaty process, but the process broke down into two global blocs with different views on the matter. So global regulation is unlikely before 2035.

“What is most likely to happen is that the European Union will be the main regulatory reference point for many countries, with the U.S. following an antitrust approach and the authoritarian countries seeking greater control over their citizens’ use of the internet.

“As the lack of accountability and the political and psychological abuse perpetrated by tech leaders in social media continue to multiply, the backlash against them will grow. The damage to democracy and the social fabric caused by unregulated tech companies in the West will continue to become more visible, and we will reach a point where the regulation of algorithms will become a key demand.”

Evan Leibovitch, director of community development at Linux Professional Institute, commented, “The extent to which governments can create – and preferably collaborate – on these issues will determine everything. This can go either way. The internet can be used as a tool for elite control of the masses – China is already providing a blueprint – or for massive social progress. Whether the transformation of digital spaces becomes a net positive or negative is dependent upon political and economic factors that are too volatile to predict. Much depends upon the level of regulation that governments will choose to impose, both on the concentration of monopoly power and on making computer users responsible for what they say. This will impact laws and regulations on monopoly concentration, libel/slander and intellectual property.”

A futurist and consultant based in Europe urged, “We need radical regulation, transparency and policy changes enacted at scale. This has to happen. Movement toward regulation feels like pushing one very small boulder up a very big hill. We need more regulation, more dissent within platforms, more whistleblowers, more deplatforming of hate speech and harmful content, more-aggressive moderation of misinformation and disinformation on platforms, etc. We just need more.”

The founder and director of a digital consultancy observed, “The last 20 years of the internet have very effectively answered the question, ‘how do we profit materially from the online world?’ The next 20 years need to answer the question, ‘how do we profit humanely and humanly from the online world?’ Government initiatives that target algorithms are an excellent start in protecting citizens.

“If we are to survive the coming decade, change is essential. The platforms we all use to communicate online must be reoriented toward the good of their users rather than only toward financial success. Classifying some entities as ‘information utilities’ might be a good first step. Legislation around technology has to be pitched at a truly effective level.

“Tackling the rules governing algorithms is a good meta-level for this. Applications or platforms like Facebook may change or even vanish, but the rule set will remain. This kind of thinking is not common in the world of government, so technologists and designers need to be engaged to help guide the legislation conversation and ensure it’s happening at an effective level.

“Arguably, a lot of the social progress (e.g., LGBTQ+ rights) that’s been made in recent decades can be credited to the access the internet has given us to other ways of thinking and to other ways of life and the tolerance and understanding this access has bred. If we can reorient technology to serve its users rather than its oligarchs, perhaps that path of progress can be resumed.”

Hume Winzar, a professor and director based at Australia’s Macquarie University with expertise in econometrics and marketing theory, commented, “A series of crises like those occurring now regarding election rigging and related conspiracy theories will force changes to publishing laws, so that posters must be identified personally and take personal responsibility for their actions. AI-based fact-checkers and evidence collection will be automatic, delivering a validity score on each post and on each poster. Of course, that also means we will see more-sophisticated attempts at ‘gaming’ such systems.”
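To make the mechanics of Winzar’s scenario concrete, here is a minimal sketch of how per-post and per-poster validity scores might be combined. It is purely illustrative – the claim scores, the neutral 0.5 default and the simple averaging are assumptions for the example, not any respondent’s actual design – and, as Winzar warns, a scheme this simple would be easy to game.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Poster:
    """Running validity reputation for one account."""
    post_scores: list = field(default_factory=list)

    @property
    def validity_score(self) -> float:
        # Neutral 0.5 until the account has a track record.
        return mean(self.post_scores) if self.post_scores else 0.5

def score_post(claim_scores: list) -> float:
    """Collapse per-claim fact-check scores (0 = false, 1 = true)
    into one post-level validity score; no checkable claims = neutral."""
    return mean(claim_scores) if claim_scores else 0.5

# Usage: a post containing one well-supported and one dubious claim.
poster = Poster()
score = score_post([0.9, 0.3])
poster.post_scores.append(score)
print(f"post: {score:.2f}, poster: {poster.validity_score:.2f}")  # post: 0.60, poster: 0.60
```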

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, said, “There are a lot of horror stories – false arrests based on bad facial recognition, data-brokered lists of rape victims, intruders screaming at babies from connected baby monitors – but there is surprisingly little consensus about what digital protections – specific expectations for privacy, security, safety and the like – U.S. citizens should have. We need to fix that. Europe’s General Data Protection Regulation (GDPR) is based on a well-articulated set of digital rights of European Union citizens. In the U.S. we have some specific digital rights – privacy of health and financial data, privacy of children’s online data – but these rights are largely piecemeal.

“What are the digital privacy rights of consumers? What are the expectations for the security and safety of digital systems and devices used as critical infrastructure? Specificity is important here because to be effective, social protections must be embedded in technical architectures.

“If a federal law were passed tomorrow that said that consumers must ‘opt in’ to personal data collection by digital consumer services, Google and Netflix would have to change their systems (and their business models) to allow users this kind of discretion. There would be trade-offs for consumers who did not opt in: Google’s search would become more generic, and Netflix’s recommendations wouldn’t be well-tailored to their interests. But there would also be upsides – opt-in rules put consumers in the driver’s seat and give them greater control over the privacy of their information.

“Once a base set of digital rights for citizens is specified, a federal agency should be created with regulatory and enforcement power to protect those rights. The FDA was created to promote the safety of our food and drugs. OSHA [Occupational Safety and Health Administration] was created to promote the safety of our workplaces. Today, there is more public scrutiny about the safety of the lettuce you buy at the grocery store than there is about the security of the software you download from the internet. Current bills in Congress that call for a Data Protection Agency, similar to the Data Protection Authorities required by the GDPR, could create needed oversight and enforcement of digital protections in cyberspace.

“Additional legislation that penalizes companies, rather than consumers, for failure to protect consumer digital rights could also do more to incentivize the private sector to promote the public interest. If your credit card is stolen, the company, not the cardholder, largely pays the price. Penalizing companies with meaningful fines and holding company personnel legally accountable – particularly those in the C suite – provides strong incentives for them to strengthen consumer protections. Refocusing company priorities would positively contribute to shifting us from a culture of tech opportunism to a culture of tech in the public interest.”
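Berman’s opt-in scenario amounts to a simple gate in a service’s code path. The sketch below is hypothetical – `affinity` and `global_popularity` are invented stand-ins for whatever signals a real service uses – but it shows the trade-off she describes: users who do not opt in get the same generic ranking as everyone else.

```python
def recommend(user, catalog, opted_in: bool) -> list:
    """Personalize only for users who opted in to data collection;
    everyone else gets the same generic, popularity-based ranking."""
    if opted_in:
        # Personal data may be used: rank by this user's inferred tastes.
        return sorted(catalog, key=user.affinity, reverse=True)
    # No personal data collected: identical results for all users.
    return sorted(catalog, key=lambda item: item.global_popularity, reverse=True)
```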

Theresa Pardo, senior fellow at the Center for Technology in Government at University at Albany-SUNY, commented, “There is an increasing appreciation for the potential of technology to create value and, more importantly, there is an increasing recognition of the risk to society from a lack of deep understanding of the potential unintended consequences of the use of new and emerging technologies. It is this recognition among both leadership and the public that will drive tech leaders and politicians to fulfill their unique roles and responsibilities by addressing the need for, and creating, the governance required to ensure that the necessary understanding is built among those leaders and the public. The lack of understanding of the need for governance of new and emerging technologies – governance that requires trustworthy AI, for example – is a problem that is just beginning to be diminished.”

A professor of information science based in California said, “Regulation is always a balance between competing values. It seems that it is time for the pendulum to swing toward more restrictions for both social media and other forms of media (broadcast, cable, etc.) in terms of consolidation of ownership and the way content is distributed.

“It is important to remember that the technical systems we have are often a series of accidental or almost arbitrary choices that then become inevitable. But we can rethink these choices. For instance, video sites do not have to allow anyone to upload anything for instant viewing. Live streaming does not have to be available for for-profit reasons. Shares and likes and followers do not have to be part of an online system. These choices allow one or a few companies to make use of network externalities and become the largest, but not necessarily the best for individuals or society.”

A portion of respondents were confident that regulation will emerge soon.

Ed Terpening, industry analyst with the Altimeter Group, predicted, “Increased regulatory oversight will result in uniform rules that ensure digital privacy, equity and security. Tech markets – such as those involved in development of the Internet of Things (IoT) – have shown that they aren’t capable of self-regulation and the harms they have caused seldom have consequences that change their behavior. Legislative action will succeed through a combination of consumer groundswell as well as political input from business leaders whose operations have been impacted by digital crimes such as ransomware attacks and intellectual property theft.

“Still, while the scope and value of digitally connected devices will help consumers save time and money in their daily lives, in the future the threat of bad international state actors who target those systems will increase the risk of disruption and economic harm for consumers.”

Tim Bray, founder and principal at Textuality Services, previously a vice president in the cloud computing division at Amazon, wrote, “There’s a surge of antitrust energy building. Breaking up a few of the big techs is very likely to improve the tenor of public digital conversation. There is an increasing awareness that social media that is programmed for engagement in a way that’s oblivious to truth and falsehood is damaging and really unacceptable.”

Jan Schaffer, director of J-Lab, said, “I believe digital spaces will transform for the public good by 2035. I expect it to happen due to government, and perhaps economic, intervention. I expect there will be legislation requiring internet platforms to take more responsibility for postings on their sites, particularly those that involve falsehoods, misinformation or fraudulent fundraising. And I suspect that the social media companies themselves will bow to public pressure and implement their own reforms.”

An information science professional based in Europe responded, “Before 2035 we shall see improved mechanisms for recognizing, identifying and then following up on each and every discriminatory or otherwise improper action by the public, politicians or any group that does harm. Digital spaces and digital life will be transformed due to more and better regulation and the education of public audiences along with the setting of explicit rules of acceptable use and clear consequences for abuse. Serious research and analysis are needed in order to increase our understanding of the situation before establishing new rules and regulation.”

The director of a cognitive neuroscience group predicted, “There will be regulatory reform with two goals: increased competition and public accountability. This has to be developed and led by political leaders at all levels and it will require active engagement by technology companies.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, where he is researching artificial intelligence and the social implications of technology, commented, “In the near-term future (next 10 to 15 years), I expect that top-down regulation will have the biggest impacts on digital environments, particularly through safeguarding privacy and combating some of the worst cases of misinformation, hate speech, and incitements to violence.

“Regulations shaping data governance and protecting privacy rights like GDPR and CCPA [California Consumer Privacy Act] are well suited to tackle a subset of current problems with digital spaces and can do so in a relatively straightforward fashion. Privacy by design, opt-in consent, purpose limitation for data collection and other advances are likely to accelerate through diffusion of regulatory policy, buttressed by the Brussels and California Effects, and the pressure applied to technology companies by governments and the public.

“For example, there may be enough policy pressure along the lines of the EU’s Digital Services Act and Digital Markets Act to limit the use of micro-targeted advertising, perhaps for vulnerable populations and sensitive issues (e.g., politics) especially. A rare consensus in U.S. politics also suggests that federal action is likely there as well. These would no doubt constitute improvements in digital life.”

A number of respondents noted that the largest amount of democratic regulation of digital technology has been emerging in Europe first and said they expect this trend to continue.

Christopher Yoo, founding director of the Center for Technology, Innovation and Competition at the University of Pennsylvania, said, “Digital spaces have become increasingly self-aware of the impact that they have on society and the responsibility that goes along with it. The government interventions that have gained the most traction have been in the area of economic power, highlighted by the EU and U.S. cases against Google and Facebook and proposed legislation, such as the EU’s Digital Markets Act and the bloc of bills recently reported by the House Judiciary Committee.

“Interestingly, the practices that are the focus of these interventions are the most ambiguous. Digital platforms now generate trillions of U.S. dollars in economic value each year, with many of the practices playing essential roles, and many of the supposed harms are backed more by theory than by empirical evidence.

“Any economic interventions that are justified must be carefully targeted to curb abuses proven by evidence rather than conjecture in ways that do not curtail the benefits on which consumers now depend. Interestingly, the impact of digital platforms on political discourse is more important. In the U.S., the First Amendment limits the government’s ability to intervene. Any reforms must come from the digital platforms themselves. Fortunately, they are showing signs of greater conscientiousness on that front.”

Rick Lane, founder and CEO of Iggy Ventures, wrote, “I believe that policy makers around the world, the general public and tech companies are coming to the realization that the status quo around tech public policy that was created during the 1990s is no longer acceptable or justified. The almost unanimous passage of FOSTA/SESTA, the EU’s NIS2, the UK’s recent child-safety legislation, Australia’s encryption law, and the continued discussions around modifying Section 230 of the U.S. 1996 Communications Decency Act and privacy laws here in the U.S. highlight how views have drastically changed since the SOPA/PIPA fights.”

A futurist and consultant based in Europe predicted, “Regulation will significantly impact the evolution of digital spaces, tackling some of the more egregious harms they are currently causing. The draft UK ‘online safety’ legislation – in particular the proposed duty of care for platforms – is an example of a development that may help here, together with measures to remove some of the anonymity that users currently exploit. A move away from the current, largely U.S.-centric model of internet governance will enable the current decline to be reversed. The current ‘digital sovereignty’ focus of the European Commission will be helpful in this regard, given that progress only seems to be made when tech companies are faced with the threat or actual imposition of controls backed by significant financial penalties, potentially with loss of access to key markets.”

A foresight strategist based in Washington, D.C., wrote, “I believe interventions such as enforceable data-privacy regulations, antitrust enforcement against ‘big tech,’ better integration of humanities and computer science education and continued investment in internet-freedom initiatives around the globe may help create conditions that improve digital life for people everywhere. This is necessary. By 2035, exogenous factors such as climate change and authoritarianism will play even more significant roles in shaping global society at large and social adoption of digital spaces in particular. The net results will be both the increased use of pervasive digital surveillance/algorithmic governance by large state and commercial actors and increased grassroots techno-social liberatory activity.”

Thomas Streeter, a professor of media, law, technology and culture at Western University, Ontario, Canada, commented, “The character of digital life will largely be determined by nondigital issues like global warming, the state of democracy, globalization, etc. That said, if an international coalition of liberal social democracies is able to dramatically reorganize digital technologies – perhaps by first breaking up the big companies with antitrust law and then regulating the pieces according to a mixture of common carrier and public media principles, while replacing advertising with subscriptions and public subsidies – that will help. There is no way to know if such efforts would succeed, but stranger things have happened in the past, and if we don’t try, we will guarantee failure.”

Several respondents specified particular approaches they expect might be most effective.

The co-founder of a global association for digital analytics responded, “What reforms or initiatives may have the biggest impact by 2035? I expect:

  • Effective regulation of social media companies and major service providers, such as Amazon and Google. These monopolies will be broken up.
  • The rise of better citizen awareness and better digital skills.
  • The rise of indie resistance – anti-surveillance apps, small-scale defensive AI, personal servers, cookie blocking, etc.
  • The for-profit tech leaders will not be a source of positive contribution toward change. Some politicians will continue to seek regulation of abusive monopolies, but others may have an equally negative effect. I think the most influence will come via demands for social/cultural change arising from the general public.
  • Monopoly domination by current leaders may be removed or reduced; however, emergent technology will drive new monopoly domination by large corporations in aspects of tech and society that are currently unpredictable.
  • Common, cheap and widespread AI applications will dominate concerns and create the most challenges in 2035.”

Jonathan Taplin, director emeritus at the University of Southern California’s Annenberg Innovation Lab and a member of the advisory board of the Democracy Collaborative at the University of Maryland, commented, “In the face of a federal judge’s recent dismissal of the FTC’s monopoly complaint against Facebook, it is clear that breaking up big tech may be a long, drawn-out battle. Better to focus now on two fairly simple remedies.

“First, remove all ‘safe harbor’ liability shields from Facebook, YouTube, Twitter and Google. There are currently nine announced bills in Congress to address this issue. The sooner these services acknowledge that they are the largest publishers in the world, the sooner they will have to take on the responsibilities that all publishers have taken since the invention of the printing press.

“Second, Facebook, Google, YouTube, Instagram and Twitter have to start paying for the content that allows them to earn billions in ad revenues. The Australian government has passed a new law requiring Google and Facebook to negotiate with news outlets to pay for their content or face arbitration. As the passage of the law approached, both Facebook and Google threatened to withdraw their services from Australia. But Australia called their bluff and they withdrew their threats, proving that they can still operate profitably while paying content creators.

“The Journalism Competition and Preservation Act of 2021, currently before the Judiciary Committee in both the House and Senate, would bring a similar policy to the United States. There is no reason Congress couldn’t fix these two problems before the end of 2021.”

Robin Brewer, professor of information, electrical engineering and computer science at the University of Michigan, said, “As AI is woven into every aspect of digital life, we must be careful to protect digital spaces while mitigating harms that affect marginalized communities (e.g., age, disability, race, gender).

“Reforms with the biggest impact will be those that enforce regulation of AI-based technologies with routine audits for potential bias or errors. The most noticeable improvements about digital life by 2035 will likely be better ways for digital residents/users to report AI-related harms, more accountability for such harms, and as such, more trust in using digital spaces for every aspect of our lives (e.g., communication, driving, health) across age groups.”
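At its core, a routine audit of the kind Brewer calls for can be quite simple. The following sketch is a generic disparity check, not any mandated procedure: it compares error rates on labeled audit data across demographic groups and flags any group that fares measurably worse than the best-served one (the 0.05 threshold and the toy data are arbitrary illustrations).

```python
from collections import defaultdict

def audit_error_rates(results, threshold=0.05):
    """Flag any group whose error rate exceeds the best-served group's
    by more than `threshold`. `results` holds (group, predicted, actual)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        errors[group] += predicted != actual
    rates = {g: errors[g] / totals[g] for g in totals}
    best = min(rates.values())
    return {g: rate for g, rate in rates.items() if rate - best > threshold}

# Usage: a toy matcher that errs far more often for group B.
flagged = audit_error_rates([
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
])
print(flagged)  # {'B': 0.5} -- group B needs investigation
```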

A machine learning expert predicted, “Regulations associated with privacy, reporting, auditing and access to data will have the largest impact. Uprooting the deep web and dark web to remove malicious, illicit and illegal activity will eventually be done for the public good. There will also be more research and understanding associated with challenges to individuals’ digital/physical balance as more-immersive technology becomes mainstream (e.g., virtual reality). There will be limits imposed and technology enablers will work to ensure that individuals still also get together IRL [in real life].”

Richard H. Miller, CEO and managing director at Telematica and executive chairman at Provenant Data, wrote, “What reforms or initiatives may have the biggest impact?

  1. Those that revolve around data sovereignty, the capture of personal data, rights to use, and the ability of individuals and corporate entities to delegate or license rights to use by third parties. Accompanying the reforms will be technical solutions regarding fairly radical approaches to transparency through the use of zero-knowledge data storage and retrieval. By these means, clarity in the use (or definitive indications of misuse) of personal data is accomplished with reasonably strong means of protecting privacy. And technologies that retain tamper-proof/tamper-evident data along with the provenance and lineage of data will result in provable chains of data responsibility.
  2. Telecommunication/Data Services reform that establishes principles of fairness in access, responsibility and liability for transgressions, establishment of common carriage principles to be applied by law and the willingness of governments (federal, state, regional and so on) to clearly identify, call out and appropriately penalize cartel or monopolistic business practice.

“What beneficial role can tech leaders or politicians or public audiences play in this evolution? In both cases one and two above, technology leaders are capable of clearly describing the risks of not addressing the issues and can clearly present them in an understandable fashion to legislative bodies and to the populace so there is an informed public.

“Politicians, insofar as they are responsible for the establishment and enforcement of law, are potentially the most important contributors. But should they continue, as they have in the past 20 years, to abrogate responsibility for the modernization of regulation and its enforcement, they also represent the most impactful threat.

“What will be noticeably improved about digital life for the average user in 2035? Trust in the knowledge that there is greater transparency and control over the use of personal data. Trust in identification of the source of information. Legal recourse and enforcement regarding data usage, information used for manipulation, and active pursuit of cartel and monopolistic behavior by technology, telecom and media hyperscalers.”
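The tamper-evident data with provable provenance that Miller describes in his first point is commonly built on hash chains. The sketch below illustrates only that one ingredient – the zero-knowledge storage and rights-licensing machinery he mentions is well beyond a few lines – and the record fields are invented for the example.

```python
import hashlib
import json

def _digest(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    # Each entry links to the previous one, so a later edit
    # to any record breaks every subsequent link.
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev_hash,
                  "hash": _digest(record, prev_hash)})

def verify(chain: list) -> bool:
    """Recompute every link; a single altered record becomes evident."""
    prev_hash = "genesis"
    for entry in chain:
        if entry["prev"] != prev_hash or entry["hash"] != _digest(entry["record"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

# Usage: log who did what with a dataset, then detect tampering.
chain = []
append(chain, {"actor": "vendor-a", "action": "read", "dataset": "profile-123"})
append(chain, {"actor": "vendor-b", "action": "share", "dataset": "profile-123"})
assert verify(chain)
chain[0]["record"]["action"] = "delete"  # retroactive falsification...
assert not verify(chain)                 # ...is immediately detected
```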

Christina J. Colclough, founder of the Why Not Lab, commented, “Where I expect governments to act is on the requirement for all fake news, fake artefacts, fake videos/texts, etc., to be labelled as such. I expect also we will see advancements in the labelling of ‘bots’ so we know what we are interacting with. I also believe we will see advancements in data rights – both for workers and citizens, including much stronger collective rights to the data extracted and generated (including inferences) and stricter regulations on what Shoshana Zuboff calls ‘Markets in Human Futures.’”

Tom Wolzien, inventor, analyst and media executive, suggested the following:

  1. “Civil accountability for all platforms as publishers for what appears on/in them by any contributor, similar to the established regulation for legacy media publishers (broadcast and print).
  2. Appropriate legislation by politicians and acceptance by tech leaders.
  3. Platforms must not allow anonymity of contributors or of persons retransmitting the messages of others. Persons retransmitting should be held accountable for the material they retransmit, both by the platform and in litigation. This will force individual contributors to accept personal accountability as enforced by the platforms, which should fear civil liability. This will diminish, but not eliminate, a lot of the current issues.”

Rich Salz, a senior director of security services at Akamai Technologies, responded, “I hope that large social media companies will be broken up and forced to ‘federate’ with other instances, so that global interaction is still possible but it’s not all under the control of a few players. This can be done, although some tricky (not hard) problems have to be solved. In spite of recent failed court actions tied to suits against Facebook, I maintain that the European Union and perhaps the U.S. Congress will do something.”

Valerie Bock, principal at VCB Consulting, wrote, “It has taken a very long time for the digital cheerleaders to understand how seriously destructive the use of online spaces could become. The Jan. 6, 2021, insurrection at the U.S. Capitol served as a wake-up call not only to the digerati, but to our lawmakers.

“I expect that the future will see creation of legislation that will hold platforms liable for providing space for the promulgation of lies and the planning of illegal activities. There will be actual, meaningful enforcement of such legislation. Of course, if such efforts are successful, they will drive a great deal of activity ‘underground,’ but the upside of that is that casual users will no longer be exposed to casual conspirators.

“Once the price of malfeasance goes up, it will concentrate activity among the hardcore few who are willing to pay up to finance the fines, legal fees, etc., that their conduct incurs.”

Eileen Rudden, co-founder of LearnLaunch, commented, “Pressure from the public, governments and tech players will push for change, which is why I believe the future will be more positive than today. Internet spaces will evolve in a positive direction with the help of new legislation (or the threat of new legislation) that will cause tech spaces to modify what is considered acceptable behavior.

“External forces such as governments are being forced to act because the business model of the internet spaces is based on targeted advertising and the attention economy, and the tech industry will not respond without governments getting involved.

“Tech players’ rules for what is acceptable content will become subject to norms that have developed over time, such as those already in place offline for libel. Whether a rating system to identify reliable information can be developed is open to question. Laws were created to address shared views of what is acceptable human behavior.”

Meredith P. Goins, a group manager connecting researchers to research and opportunities, said, “The internet is being used to track people’s every waking moment so that they can either be found or be advertised to. Tech leaders will continue to make billions from reselling content the general public produces while the middle class goes extinct. This will continue until broadband and internet service becomes regulated like telephone, TV, etc. If not, Facebook, Twitter and all social media will continue to devolve into a screaming match with advertising.”

Sean Mead, strategic lead at Ansuz Strategy, responded, “Twitter exists on and is programmed to reward hate, intolerance, dehumanization, libel and performative outrage. It is the cesspool that most clearly demonstrates the monetization of corruption. Many people sought out addiction to strawman mischaracterizations of other people who hold any beliefs that are in any way different from their own. Why have a ‘two-minute hate’ when you can have a full day of hating and self-righteousness every day, whether its justifications have a basis in reality or not?

“Algorithms are encouraging indulgence of these hate trips because doing so creates more time for the participants to be exposed to advertising. The social media oligarchy has been behaving not like platforms but – in violation of the intent of Section 230 – like publishers, promoting some views and disappearing others.

“If they were treated as publishers – since they are behaving as publishers – this would force quite an improvement in community behavior, particularly in regard to libel. Many businesses may choose to move to a more-controlled network where participants are tied to a verified ID and anonymity is removed. That would not remove all issues, but it would dampen much problematic behavior.”

The founder and leader of a global futures research organization wrote, “Information warfare manipulates information channels trusted by a target without the target’s awareness, so that the target will make decisions against their interest but in the interest of the entity conducting the attack. This will get worse unless we anticipate and counter, rather than just identify and delete.

“We could reduce this problem if we use infowarfare-related data to develop an AI model to predict future actions, to identify characteristics needed to counter/prevent them and match social media uses with those characteristics and invite their actions. Since nation-states are waking up to these possibilities, I think they will clearly do this or come up with even better prevention strategies.”

Counterpoint 1: Some doubt that governance by nation-states will lead the way to significant, effective change

Some experts said they do not expect that people in the government sector will play a key role in helping to better the digital public sphere. A share of the respondents who do not expect significant improvement of the digital public sphere put the blame mainly on tech companies’ highly effective lobbying of and deep-pockets influence over government actors. Following is a selection of representative comments from those who were less optimistic about the near future of government influence.

Alexa Raad, chief purpose and policy officer at Human Security, said, “Unfortunately, without significant and fundamental reforms in our system of government, the incentive for politicians is less about public service and transparency and more about holding onto power and reelection.

“So long as the incentives for government representatives are misaligned with the public interest, we can expect little in the way of meaningful reform. So long as internet services and their delivery continue to get consolidated (think more and more content being pushed into content delivery networks and managed by large infrastructure plays like Amazon Web Services), tech leaders will have greater power to push their own agenda and/or influence public opinion.

“The incentives for our elected officials are not aligned with public good. There will likely be some regulatory reform, but it will likely not address the root cause.”

Miguel Moreno, director of the department of philosophy at the University of Granada, commented, “Major changes will be needed in regulatory frameworks, in antitrust laws, in privacy cultures and in the standardization of guarantees for users and consumers in different countries. But [the tech companies’] experience in disseminating services on a global scale does not seem, for now or in the near future, replaceable by any other scheme of activity managed by state institutions.”

Peter Rothman, lecturer in computational futurology at the University of California-Santa Cruz, wrote, “A change of direction would require a significant change of law and it can’t happen in the current political environment. As long as digital spaces and social media are controlled by for-profit corporations, they will be dominated by things that make profits and those things are outrage, anger, bad news and polarized politics. I see nothing happening on any service to change this trajectory.”

An expert on media and information policy responded, “I do, in principle, trust in government and believe in the importance of good government solutions. However, I am concerned that government’s limited ability to solve important problems will keep it from finding meaningful solutions appropriate to the challenges we face. Digital spaces and digital life will continue to be shaped by existing social and economic inequalities, which are at the heart of many of the current challenges and will, for a long time, continue to burden the ability to engage in productive dialogue in digital spaces.”

Ian Peter, Australian internet pioneer, futurist and consultant, noted, “The reality is that most nation-states are far less powerful than the digital giants, and their efforts to control them have to be watered down to a point where they are often ineffective. There is no easy answer to this problem with the existing world order.”

An AI scientist at a major global technology company said, “I would love to believe in the utopian possibility laid out in the article ‘How to Put Out Democracy’s Dumpster Fire,’ where the equivalent of online town halls and civic societies bring people closer together to resolve our toughest challenges, but I cannot. It’s not just the slow pace of bureaucracy that is to blame; graft and self-interest are largely at play.

“Historically, the most egregious violators of societal good in their own pursuit of wealth and power have only been curbed once significant regulation has been enacted and government agents then enforced those regulations. Unfortunately, Congress and local governments are run by people who must raise hundreds of thousands to millions of dollars to run for office, be elected, and then stay in office. Lobbyists are allowed to protect the interests of the most-powerful companies, organizations, unions and private individuals because the Supreme Court voted in favor of Citizens United.

“Money and power protect those with the most to gain. The global wealth gap is the largest in history, and it has only increased during the pandemic, rather than bringing citizens closer to each other’s realities. The U.S. is battered by historic heat waves and storms, and states with low vaccination rates are seeing new waves of COVID-19 outbreaks, yet a significant portion of Americans still deny science.

“The richest men in the world are using their wealth to send themselves into space for their own amusement while blindly ignoring nations unable to afford vaccines, food and water. Instead of vilifying these men for dodging taxes and shirking any societal responsibility to the people they made their fortunes off of, the media covers their exploits with awe, and the government is either incapable or unwilling to get any of the money back that should be going into public infrastructure.

“How can digital spaces improve when there is so much benefit for those who cause the greatest societal harm while neither government nor society seem capable or willing to stop them?

“Whistleblowers inside powerful companies are not protected. Sexual predators get golden parachutes and move on to cause harm at the next big tech company, startup or university. The evidence that [uses of social media] were at the heart of the two greatest threats to our democracy – the 2016 election … and the Jan. 6 Capitol riot – is overwhelming, but there have been no consequences. Congress puts on a bit of a show and yells at Mark Zuckerberg on TV, but he doesn’t have to worry because no real action will ever be taken.

“As long as Google and Facebook pay enough, they will continue to recruit the best and brightest minds to ensure that a tiny fraction of white men keep their wealth and power.”

A consultant whose research is focused on youth, families and media wrote, “Without strong governmental regulation, which will not occur, there is no stopping political actors from using any and all possible tools they can to gain advantage and sow division. The drive for maximum private profit on the part of tech industries will prevent them from taking significant action. Foreign entities seek to sow division, create chaos and profit from online disruptions. Diplomacy will not be able to address this sufficiently, and U.S. technological innovation will lag behind.”

An expert in organizational communication commented, “Corporations have taken over the internet. Governments serve corporations and will allow them to do as they wish to profit. Nothing really can be done. Money speaks and the people don’t have the money. The marketplace is biased in favor of profit-making companies.”

A researcher based in Ireland predicted, “Increasing corporate concentration, courts that favor private-sector rights and data use and politicians in the pockets of platforms will make things worse. People who are most made vulnerable in digital spaces will have decreasing power.”

An anonymous activist wrote, “There are too many very powerful public and private interests who control outcomes who have no incentive to make significant changes.”

Counterpoint 2: Some experts doubt that reformers have come up with effective solutions and cite a variety of reasons for this point of view

Brooke Foucault Welles, an associate professor of communication studies at Northeastern University whose research has focused on ways in which online communication networks enable and constrain behavior, argued that change via government action is unlikely. She wrote:

“I think it is possible for online spaces to change in ways that significantly improve the public good. However, current trends conspire to make that unlikely to happen, including:

First, there is an emphasis in law and policymaking on individual autonomy and privacy rather than on systemic issues: Many policymakers have good intentions when they propose individual-level protections and responses to particular issues. However, as a network scientist I know these protections may stem the harm for individuals, but they will never root out the problems. For example, privacy concerns are (as a matter of policy or practice) often dealt with by allowing individuals to opt out of tracking or sharing identifying information. However, data brokers do not need to know the details of a large number of individuals – only a few are needed to accurately infer information about everyone in a network. So, it is my sense that these policies may make people feel as if they are protected when they are likely to not be protected well at all. There should be a shift toward laws and policies that disincentivize harms to individual autonomy and privacy. For example, laws that prevent micro-targeting, instead only allowing targeted advertising to segments no smaller than some anonymity-preserving size (maybe 10,000 people). [A minimal sketch of such a rule appears after this list.]

Second, there are persistent inequalities in the training, recruitment and retention of diverse developers and tech leaders: This has been a problem for at least 30 years, with virtually no improvement. While there has been some public rumbling of late, I see few trends that indicate that tech companies or universities are seriously committed to change. It does not help that many tech companies are, as a matter of policy, not contributing to a tax base that might be used to improve public education, community outreach, and/or research investments that might move the needle on this issue.

Third, there is the increasing privatization of research funding and public-interest data: That makes it virtually impossible to monitor and/or intervene in platform-based issues of public harm or public good. We frankly have no idea how to avoid algorithmic bias, introduce community-building features, handle the deleterious effects of disinformation, etc., because there is no viable way for objective parties to study and test interventions.
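Foucault Welles’ proposed floor on targeting-segment size (her first point above) is simple to express in an ad-serving path. The sketch below is a toy rendering of that rule; the 10,000-person floor comes from her example, and everything else is invented for illustration.

```python
MIN_SEGMENT_SIZE = 10_000  # the anonymity-preserving floor from Welles' example

def can_target(segment_size: int) -> bool:
    """Permit an ad buy only when the audience is large enough that
    the targeting criteria cannot single out near-individuals."""
    return segment_size >= MIN_SEGMENT_SIZE

def place_ad(ad_id: str, segment_size: int) -> None:
    if not can_target(segment_size):
        raise ValueError(
            f"Segment of {segment_size} is below the "
            f"{MIN_SEGMENT_SIZE}-person floor; micro-targeting refused."
        )
    # ...hand ad_id to the delivery system here (omitted)...
```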

Gary Marchionini, dean and professor at the School of Information and Library Science at the University of North Carolina-Chapel Hill, wrote, “I expect that there will be a variety of national and local regulations aimed at limiting some of the more serious abuses of digital spaces by machines, corporations, interest groups, government agencies and individuals. These mitigation efforts will be insufficient for several reasons: The incentives for abuse will continue to be strong. The effects of abuse will continue to be strong. And each of these sets of actors will be able to masquerade and modify their identity (although corporations and perhaps government agencies will be more limited than machines, individuals and especially interest groups). On the positive side, individuals will become more adept at managing their online social behaviors and cyberidentities.”

Eugene H. Spafford, leading computer security expert and professor of computer science at Purdue University, predicted, “Balkanization due to politics and ideology will still create islands of belief and information in 2035. Some will embrace knowledge and sharing, but too many will represent slanted and restricted views that add to polarization. Material in many of these spaces will be viewed (correctly or not) as false by people whose beliefs are not aligned with it.

“Governments will be further challenged by these polarized communities in regulating issues of health, finance and crime. The digital divides will likely grow between the haves and have-nots in regard to access to information and resources. Trans-border propaganda and crime will be a major problem.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, said, “I have seen calls and suggestions for what amounts to an internet/social media technology environment that is developed as yet one more form of public good/service by national governments. Treating internet-facilitated communication, including social media, as public goods in these ways might further include both education and legal arrangements that would teach and enforce the distinctions between protected speech that contributes to informed and reasonable civil debate clearly contributing to democratic deliberation, norms, processes, etc. – and nonprotected expression that fosters, e.g., hatred, racism and the stifling of open democratic deliberation.

“Such a system and infrastructure would thereby avoid at least some of the commercial/competitive drivers that shape so much of current internet and social media use. Ideally, it would develop genuine and far more positive environments as alternatives to the commercially driven version we are currently stuck with.

“But all of this will depend on foundational assumptions of selfhood, identity and meaning, along with the proper governmental roles vis-à-vis public goods vis-à-vis capitalism, etc., that are largely alien to the U.S. context. It is hard to be optimistic that these underlying conceptions will manage to diffuse and make themselves felt in the U.S. context anytime soon.”

A researcher at the Center for Strategic and International Studies wrote, “Absent external threats or strong regulatory action at the global or European level, the prospects of substantial positive improvement within the U.S. seem dim. There are a number of forces at work that will frustrate efforts to improve digital spaces globally. These include geopolitics, partisan politics, varied definitions and defenses of free speech, business models and human nature.

“In the West, U.S. technology companies largely dominate the digital world. Their business models are fueled by extracting personal data and targeting advertising and other direct or indirect revenue generating data streams at users. Because human nature instinctively reacts to negative stimulus more strongly than positive stimulus, feeding consumers/users with data that keeps them on-screen means that they will be fed stimulating, often-divisive data streams.

“Efforts to change this will be met with resistance by the tech companies (whose business models will be threatened) and by advocates of free speech who will perceive such efforts as limiting freedoms or as censorship. This contest will be fuel for increasingly partisan politics, further frustrating change.

“These conditions will invite foreign interests to ‘stir the pot’ to keep the U.S. in particular, but Western democracies overall, at war internally and thus less effective globally. The rise of a Chinese-dominated internet environment outside of the West, however, could provide an impetus for more-productive dialogue in the West and more beneficial changes to digital spaces.”

An eminent expert in technology and global political policy observed, “There is insufficient attention paid to risk when assessing digital futures. To date this has enabled substantially positive impacts to take place, but with an underlying undercurrent of constraints on rights, inattention to impacts on (in)equality, environment, the relationships between states/businesses/citizens and many complex areas of public policy.

“Rapid technological changes, facilitated by market consolidation and a libertarian attitude to innovation (‘permissionless’), can have irreversible impacts before accountability mechanisms can be brought to bear. The pace and potency of these changes are increasing, and there is insufficient will in governments or authority in international governance to address them.

“There will be substantial gains in some areas of life, though these will be unequally distributed with substantial loss in others. The trajectory of interaction between technology and governance/geopolitics will be crucial in determining that balance, and that future does not currently look good.”

A leading internet infrastructure architect at major technology companies for more than 20 years responded, “Government regulation isn’t going to solve this problem. Governments will step in to ‘solve’ the problem, but their solutions will always move toward increasing government power toward using these systems for government ends. I don’t see a simple solution to this problem.”

A network consultant active in the Internet Engineering Task Force (IETF) commented, “A glimmer of hope may be found in distributed peer-to-peer applications that are not dependent on central servers. But governments, network service providers and existing social media services can all be expected to be hostile to these. That’s not to say that there will be no change – the internet is constantly changing – but what I don’t currently see is any factor that would encourage people to see their fellow humans in greater depth and to look past superficial attributes.

“Advertising-supported digital services have an inherent need to encourage engagement, and the easiest way to do that is to promote or favor content that is divisive, promotes prejudice or otherwise stirs up enmity. These are exactly the opposite of what is needed to make the world better. In addition, the internet – which was originally based on open standards not only for its lower-layer protocols but for applications also – is increasingly becoming siloed at the application layer, which results in further division and unhealthy competition.

“Right now, I don’t know what incentives would encourage a change away from these trends. I have little faith in laws or regulations to have a positive effect, beyond protecting freedom of speech, and there are increasing, naive public demands for both government and tech industries to engage in censorship.”

Natalie Pang, a senior lecturer in new media and digital civics at the National University of Singapore, said, “Although there is now greater awareness of the pitfalls of digital technologies – e.g., disinformation campaigns, amplification of hate speech, polarisation and identity politics – such awareness is not enough to reverse the market dynamics and surveillance capitalism that have become quite entrenched in the design of algorithms as well as the governance of the internet. Broader governance, transparency and accountability – especially in the governance of the internet – is instrumental in changing things for the better.”

A Pacific Islands-based activist wrote, “While the problem of centralisation of the internet to the major platforms is clear to most, solutions are not. Antitrust/monopoly legislation has been discussed for decades but has not been applied. In fact:

  • Corporate concentration has been encouraged by nation-states in order ‘to produce local enterprises that can compete on the world market.’
  • In addition, nation-states have profited from the concentration of communication in platforms in order to have a minimal number of ‘points of control’ and to gain access to the data that they can provide.
  • In addition, some of the proposals aimed at controlling the behaviour of anti-competitive companies seem worse than the problems they are meant to solve – for instance, requiring such companies to censor or not censor on pain of immense fines, in essence privatising government powers and leaving little to no ability to appeal decisions. This is already in place for copyright in many countries, and the tendency is to expand the system to whatever legislators wish for. Governments can then proclaim that it is the companies that are doing the censorship, and companies can state they have no choice because the government required it, leaving citizens who are unfairly censored with little recourse.

“Another related area is the increasing push to limit encryption that is under the control of individual citizens. If states, or companies to which they have delegated powers, cannot read what is being written, filmed, etc., and then communicated, then the restrictions on content proposed will have limited impact. But taking away encryption capabilities from individual citizens leaves them at the mercy of criminals, snoopers, governments, corporations, etc.

“The initial promise of the internet – to enable ordinary citizens to communicate with each other as freely as the wealthy and/or powerful have been able to in the past – seemed in large part to have been realised. BUT this seems to have shaken the latter group enough to reverse this progress and again limit citizens’ communication. Time will tell.”

Jessica Fjeld, assistant director of the Cyberlaw Clinic at Harvard’s Berkman Klein Center for Internet & Society, commented, “I have hope for the future of digital spaces because we are rightly beginning to understand the issues as systemic, rather than the result of the choices of individual people and companies. Dealing with threats to democracy, free expression, privacy and personal autonomy becomes possible when we view these issues through a structural lens and governments begin to take ownership of the issue.”

A writer and editor who reports on management issues affecting global business said, “I am not confident the disparate coalition of state, country and international governing bodies needed to correctly influence and monitor commercialized digital public spaces will be able to come to agreement and have enough clout to push back against the very largest and growing larger tech players, who have no loyalty to customer, country or societal norms.”

A professor of political communication based in Hong Kong observed, “Digital technologies will intensify their negative impact on civil society through more-sophisticated micro-targeting, improved deepfake technologies and improved surveillance technologies. Minimizing negative impacts will require government regulation, which is too difficult to accomplish in democracies due to strong lobbying and political polarization. Authoritarian countries, on the other hand, will use these technologies not only to suppress civil society, but also to gain a technological advantage over democracies.”

A veteran investigative reporter for a global news organization said, “The transformation of digital spaces into more-communitarian, responsible fora will happen mostly at the local and regional level in the United States and may not achieve national or global dominance. This presupposes a dim view of the immediate future of the United States, which is in grave danger of breaking up. I believe the same antidemocratic forces that threaten the integrity of the United States as a country also threaten the integrity of digital spaces, the reliability of the information they carry and their political use. I see a global balkanization of the internet in the near term with the potential for eventual international conventions and accords that could partially break down those barriers. But the path may be rocky and even war-studded.”

The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will force change

A portion of respondents to this canvassing said internet users themselves are a big part of the problem. People’s political, social and economic behaviors in digital spaces are threatening others’ identities, agency and rights, according to these experts. Some argue it is the public’s responsibility to learn about the opportunities and threats in digital spaces and apply that knowledge to reduce the dystopic influences of tech applications. These experts push for increased “digital literacy” to help drive a shift in norms so that people are continuously attuned to and ready to adapt to technological change.

Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association, responded, “The haves-and-have-nots dichotomy will not be about access to technology or information, but rather about the cognitive ability to understand, manage and take advantage of the ever-growing abstractions of digital space. The configuration of digital spaces is greatly influenced by the fundamental forces that shape society.

“The greater bifurcation of society that developed in the last few decades will continue to 2035. Knowledge workers, often college graduates, will do relatively well; they have the education and improving skills from their profession that will enable them to navigate the voluminous and complex digital spaces to serve their purposes. Other workers will not do so well, with no replacement for the blue-collar, unionized, factory jobs (and other similar employment) that placed them in the middle class in the 20th century.

“As the possibilities of digital spaces become increasingly numerous and complex with nuanced interconnections, these workers will have more difficulty in navigating them and shaping them to accommodate their needs. Indeed, there will be increasing opportunities to manipulate these workers through ever more sophisticated technology.”

Amy Zalman, futures strategist and founder of Prescient Foresight, wrote, “I would like to see schools, governments, civil society and businesses participate in better education in general so future generations can apply critical thinking skills to how they live their lives in digital spaces. People should understand how to better evaluate what they see and hear. We need to shape a positive culture on and in digital spaces, starting with simply recognizing they are an extension of our daily lives. There are also many unspoken rules of behavior that help us generally get along with those around us.”

Jesse Drew, a professor of media at the University of California-Davis, urged, “The public must take a lead. I see people shedding their naivete about technology and realizing that they must take a more involved role in deciding how tech will be used in our society. This assumes democracy is able to survive both the perils of right-wing totalitarianism as well as neoliberal surrender to corporations.”

Barry Chudakov, founder and principal at Sertain Research, said, “Today we are struggling to grapple with managing the size and scope of certain tech enterprises. That is presently what proposed reforms or initiatives look like. But going forward we are going to have to dig deeper. We are going to have to think more broadly, more comprehensively.

“Our educational systems are based on memorization and matriculation norms that are outmoded in the age of Google and a robotic and remote workforce. Churches are built around myths and stories that contain injunctions and perspectives that do not address key forces and realities in emerging digital spaces. Governments are based on laws which are written matrices. While these matrices will not disappear, they represent an older order. Digital spaces, by comparison, are anarchic. They do not represent a new destination; they are a new disorder, a new way of seeing and being in the world.

“So, to have the biggest impact, reforms and initiatives must start from a new basis. This is as big a change as moving from base 10 arithmetic to base two. We cannot reform our way into new realities. We have to acknowledge and understand them.

“Like pandemics that morph from one variation to another, digital spaces and our behavior in them change over time, often dramatically and quickly. Proof on a smaller scale: In one generation, virtually every teenager in the Western world – and many the world over – has come to consider a cellphone a bodily appendage as important as their left arm and as vital to existence as the air going through their lungs. In a decade, that phone will get smaller, will no longer be a phone but instead will be a voice prompt in a headset, a streaming video in eyeglasses, a gateway in an embedded chip under the skin.

“Our understanding of digital spaces will have to evolve as designers use algorithms and bots to create ever more sticky and seamless digital spaces. Nothing here is fixed or will remain fixed. We are in flux, and we must get used to the dynamics of flux.

“The No. 1 initiative or reform regarding digital spaces would be to institute a grammar, dynamics and logic training for digital spaces, effectively a new digital spaces education, starting in kindergarten going through graduate school. This education/retraining – fed and adjusted by ongoing digital spaces research – is needed now. It is as fundamental to society and the public square as literacy or STEM. Spearheading this initiative should be the insistence among technologists and leaders of all stripes that profit and growth are among a series of goods – not the only goods – to consider when evaluating and parachuting a new technology into digital spaces.

“New digital spaces will be like vast cities with bright entertainments and dark areas; we will say we are ourselves in them, but we will also be digital avatars. Cellphones caused us to become more alone together (see the work of Sherry Turkle). Emerging digital spaces, which will be much more lifelike and powerful than today’s screens, may challenge identity, may become forces of disinformation, may polarize and galvanize the public around false narratives – to cite just a few of the reasons why a new digital spaces curriculum is essential.

“The nature of identity in digital spaces is intimately involved with privacy issues; with dating and relationship issues; with truth and the fight against disinformation. We think of reforms and initiatives in terms of a slight alteration of what we’re already doing. Better oversight of online privacy practices, for example. But to create the biggest impact in digital spaces, we need to understand and deeply consider how they operate, who we are once we engage with digital spaces and how we change as we engage. Example: Porn is one digital space phenomenon that has fundamentally changed how humans on the planet think about and engage in sex and romance. We hardly know all the ramifications. While it appears the negative effects of porn have been exaggerated, the body dysmorphia issues associated with ubiquitous body images in digital spaces have created new problems and issues. These cannot be resolved by passing laws that abolish them.

“Can we fix hacking or fraud in digital spaces by abolishing them? While that would be a noble intent, consider that it took centuries for the effects of slavery, for example – once abolished – to be recognized, addressed and reconciled (still in process). Impersonation and altering identity are fundamental dynamics of digital spaces. These features of digital spaces enable hacking. We are disembodied in digital spaces, which is a leading cause of fraud. This is not an idle example.”

Evan Selinger, a professor of philosophy at Rochester Institute of Technology, wrote, “Increased platform literacy might be the primary driver for improving digital spaces. Simply put, the idea that widely used platforms aren’t neutral spaces for information to flow freely but are intermediaries that exercise massive amounts of power when deciding how to design user interfaces and govern online behavior has gone from being a vanguard topic for academic researchers and tech reporters to a mainstream sensibility.

“Indeed, while there are diverse and often conflicting ideas about how to reform corporate-controlled digital spaces to promote public-interest outcomes better, there is widespread agreement that the future of democracy depends on critically addressing, right here and now, central civic issues such as privacy and free speech.”

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, said, “The real progress will stem from improvements in media literacy – the capacity of individuals to critically assess claims made in digital spaces – and in social behavior in digital spaces. We are already seeing some positive moves in this direction, particularly among younger groups who are more aware of how digital spaces can be co-opted and perverted, and less gullible when it comes to ‘digital-first falsehoods.’”

An anonymous respondent wrote, “The past seven years and recent events have shown us the limits of the early days of a technology and how naive the ‘build it and they will come’ approach to the digital sector was. Hindsight shows that the human species still has a lot to learn about how to use the power of digitally enhanced networking and communications.

“The unconsidered and unaddressed issues baked into the current form of our digital spaces have been exposed to us more clearly now, especially by the activities of Vladimir Putin’s Internet Research Agency, which many see as a key causal factor in the political outcomes of Brexit, Trump 2016 and Brazil’s populist swing. These are examples of geopolitical abuse of digital spaces fostering perception manipulation tantamount to mind control.

“Inequalities in education and access to development pathways for critical thinking skills have set the stage for these kinds of influence campaigns to succeed.”

An expert in marketing and commercialization of machine learning tools commented, “I believe regulators, academics, tech leaders and journalists will develop systems and processes that society will need to partake in and work with to learn how to better communicate and collaborate in digital spaces. At first this will be painful, but it will become normalized and more efficient over time, using greater levels of digital signatures and processes.

“Means will evolve for communicating the rising complexity associated with digital identity and traces, and with how information might be used in malicious and inappropriate ways. It is incredibly challenging to simplify and communicate these matters, and to get a vast audience to cognitively process its role in keeping information secure and maintaining a level of accuracy while sharing information.”

A share of these experts said the public’s role should go beyond simply understanding how tech-designed digital spaces come together for good and bad; they said the public has to be digitally savvy so it can more actively lobby for its rights. They also argued that tech companies and governments should invite the public to be more directly involved in shaping and creating better public spaces, advising and motivating government and tech leaders to develop, adopt and continuously evolve the types of digital political, social and economic levers that might help promote a more-positive future for the digital public sphere.

An internet architecture expert based in Europe said, “Some problems may be diminished if citizens are full participants in the governance of digital spaces; if not, the problems can worsen. Citizens must reconquer digital spaces, but this is a long path, like the one toward democracy and freedom. Digital life will improve if the whole population has access to these spaces and digital literacies are learned. It might be useful to create especially targeted digital spaces, governed by appropriate algorithms, for all of the people who want to express and vent their rage.”

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, wrote, “Today it is largely impossible to thrive in a digital world without knowledge and experience with technology and its impacts on society. This knowledge has become a general education requirement for effective citizenship and leadership in the 21st century. And it should be a general education requirement in educational institutions that serve as a last stop before many professional careers, especially in higher education.

“Currently, forward-looking universities are creating courses, concentrations, minors and majors in public-interest technology – an emerging area focused on the social impacts of technology. Education in public interest technology is more than just extra computer science courses. It involves interdisciplinary courses that focus on the broader impacts of technology – on personal freedom, on communities, on economics, etc. – with the purpose of developing the critical thinking needed to make informed choices about technology. And students are hungry for these courses and the skills they offer.

“Students who have taken courses and clinics in public-interest technology are better positioned to be knowledgeable next-generation policymakers, public servants and business professionals who may design and determine how tech services are developed and products are used. With an understanding of how technology works and how it impacts the common good, they can better promote a culture of tech in the public interest, rather than tech opportunism.”

A professor of sociology and anthropology commented, “Ultimately citizens will demand government regulation that limits the worst downsides of digital spaces. These changes will be supported by increased public awareness and knowledge of digital spaces brought about by both demographic change and better education about such spaces. The key problem is an advertising model which – coupled with socio-psychometric profiling algorithms – incentivizes destructive digital spaces.”

The CEO of a technology futures consultancy said, “As we advance into the Fourth Industrial Revolution – the digital age – there is a heightened focus on digital privacy, digital inclusion, digital cooperation and digital justice across governments, society and academia. This is causing tech companies to face the consequences: hearing and responding to those who loudly advocate for digital safety, complying with regulation and guidance, and joining sustainable collaborative efforts to ensure tech is trustworthy. The average user in 2035 will not have experienced the world before tech and will have grown up as a tech consumer and data producer.

“I foresee users developing social contracts with tech companies and governments in exchange for their data. This could look like public oversight of, and engagement with, efforts and initiatives that require or request public data. I foresee more tech-savvy and data-privacy-oriented elected officials who have a strong background in data advocacy. I believe society will continue to demand trust in the use, collection, harvesting and aggregation of their data. This will diminish misuse. However, law enforcement’s use of data-driven tools to augment its work will continue to present a challenge for everyday citizens.”

Aaron Chia Yuan Hung, associate professor of education technology at Adelphi University, responded, “As much power as technology companies have, they do tend to bend toward the demands of their users. In that sense, I have more hope in the public than in companies. Of course, the public is not a monolithic group and some will want to push digital life in a negative direction (e.g., entities that conduct troll farming, manufactured news, mis/disinformation, etc.). I believe most people don’t want that and will push back, through education, through public campaigning, through political pressure. 2035 will bring about its own problems, of course, and every era can seem dire. It’s hard to imagine what those new concerns would be, just as it was hard to imagine what our current concerns were back in 2005.”

Pia Andrews, an open- and data-driven government leader for Employment and Social Development Canada (ESDC), observed, “What I am seeing is a trend of the internet bringing out both the best and worst of people. With new technologies creating greater challenges for trust and authenticity, people are starting to get activated and proactive in saying they want to create the sorts of spaces that improve quality of life, rather than naturally allowing spaces to devolve without purpose. This engagement by normal people who want to shape their lives rather than wait to have their lives shaped for them points to a trend of more civic engagement, civil disobedience and activism, and a greater likelihood that digital and other spaces will be designed by humans for good human outcomes, rather than being shaped by purely economic forces that value the dollar over people.”

A director of a research project focused on digital civil society wrote, “Civil society has been and will be playing a key role in raising public awareness, and we are likely to see groups from a wide spectrum of civil society (not just those promulgating digital rights) coming together to confront issues. I imagine there will be growing awareness among the public of the dangers and harms of digital spaces; the main business model of our current digital spaces is advertising and data extraction. Unless something is done, that – coupled with the rise of political authoritarianism – will continue to shape digital spaces in ways that are harmful and effectively erode trust in democracy and public institutions.”

People will evolve and improve their use of digital spaces and make them better

History shows that people do not stand still when problems in information spaces arise. They learn and they act to change those spaces. A share of these experts predict the same will be true of the digital era. They argue that users will become more adept at using digital spaces, learn how to work around problem areas and move toward collective action when problems become unbearable. Schools will play a role, too, in teaching digital literacy, according to these experts.

Robert Bell, co-founder of Intelligent Community Forum, said, “As long as providers can make big profits from the ‘dumpster fire,’ I don’t expect them to change. But people will evolve, and that takes much more time than just a few years. We will eventually adapt to use digital spaces in more-positive ways. I don’t expect the solution to be technological but in human behavior, as more people have negative experiences with false information, misleading advice and the general-panic level of concern that digital spaces seek to generate.”

Jeremy West, senior digital policy analyst at the OECD, wrote, “I am optimistic that improvements will be made. The fixes won’t all be technical, though. Some of the most effective solutions will be found in education, transparency and awareness. Take awareness, for example – experience with social media grows all the time, and I think we are already seeing embryonic inklings in the general public that perhaps their social media spheres aren’t actually representative of viewpoints in the wider population (or of reality, for that matter). Those inklings may grow, and/or be followed by awareness that sometimes the distortions are intentionally aimed at them. This should, in principle, lead to greater resilience against mis/disinformation.”

A computer science professor said business, governmental and social norms will develop as society’s capacity to understand new digital spaces expands. They predicted, “Digital space will evolve in ways that improve society simply because the 2035 space does not exist now and will develop. Just as with email, I believe a new and better equilibrium can eventually be reached.

“At present, the governance of digital spaces is limited by our capacity to understand how to deploy these tools and create or manage these spaces. By 2035, that capacity problem will be mitigated at least to some degree. In terms of the management of existing spaces, I anticipate investment will stabilize many of the problems that currently cause worry.

“Consider email and, to a lesser extent, websites used for things like fraud and malware distribution. Early on, many of the same concerns were prevalent around these spaces, yet today we have new social norms, new governance structures and investment in tools and teams to police these spaces in effective ways.

“A worrying development is the trans-jurisdictional nature of digital spaces, which might require new agreements to manage enforcement that requires cooperation among many parties. These will emerge as driven by need, as has happened in the management of malware, fraud and spam. In some cases, this will create barriers to accountability or governance. … Another worry I have related to the development of online spaces in the next 10 years is the emerging misinformation-as-a-service business model and other new methods of monetizing activity considered malign.”

The founder and chief scientist of a network consultancy commented, “Generational change will make a difference. The vast majority will have had the experience of ‘digitalhood’ by that time; importantly, their parents will have had that experience as well. Issues of veracity will remain, but it is to be hoped that people’s consumption of such content will be better tempered.

“The real remaining issue will be one that has existed in the physical world for centuries: closed (and self-isolating) communities. The notion of ‘purity of interaction’ will still exist, as it has in various religious-/cultural-based groups. The ‘Plymouth Brethren’ of the internet has arrived, and managing that tribalism and its antagonistic actions will remain. It is clear that it will not be a smooth ride and that both society and individuals will suffer in mental and physical ways. However, it is my hope that people will adapt and learn to filter and engage constructively. That said, I have seen low-level mental illness in very intelligent individuals explode into full-fledged ‘QAnon-ness,’ so I can only say that this is a hope, not something I can evidence.”

Zak Rogoff, a research analyst at the Ranking Digital Rights project, wrote, “In 2035 … most people will have more control and understanding of algorithmic decision-making that affects them in what we currently think of as online spaces. I also feel that physical space will be more negatively impacted, in ways that online space is today, for example through the reduction of privacy due to ubiquitous AI-powered sensor equipment.”

John L. King, a professor at the University of Michigan School of Information, said, “It’s a matter of learning. As people gain experience with these technologies, they learn what’s helpful and what’s not. Most people are not inclined toward malicious mischief – otherwise there would be a lot more of it. A few are inclined toward it, and of course, they cause a lot of trouble. But social regulation will evolve to take care of that.”

A Southeast Asia-based expert on the opportunities and challenges of digital life responded, “Technologies do not determine culture. Instead, they allow people to more easily see divides that already exist. The new generation of digital media users came of age at a time when the internet promised them an alternative to ‘mainstream’ culture – new digital economies, certainly, and special prices and products only available online – and the application of this sales pitch to information has been initially unhealthy. … In coming years, the disruptive effects of these new conversations will be minimized. Users will accustom themselves to having conversations with others, and content providers will be better able to navigate the needs of their audiences.”

An associate professor whose research focuses on information policy wrote, “I believe in the good in human nature. I also believe that humans, in general, are problem solvers. The use of digital spaces currently is a problem, particularly for civil communication and, hence, democracy, but it is a problem we can address. Raising younger generations to think critically and write kindly would be a good start to changing norms in digital spaces.”

Charles Anaman, founder of waaliwireless.co, based in Ghana, said, “While the media tends to rally to the negatives (because the public tends to react to that kind of information), the reality is that better conversations are now taking place in real-life interactions in digital spaces. When better conversation can be had – discussing ideas without shaming the ‘ignorant’ – society will benefit greatly in the long term, rebuilding trust. It will be a slow process.

“It is taking us a while to realise that we have been manipulated by wealthy entities playing off all sides to achieve their own goals. Transparency has been a farce for some time. Reality is fueling a new wave of breaking down digital silos to develop better social awareness and a review of facts to understand the context and biases of the sources being used.

“Cybersecurity, as it is being taught now, is going to have to be applied with the understanding that all attack tools can be misused (NSO tools/Stuxnet/et al.) to cause real-world damage in unexpected ways. Open-source solutions to proactive security from trustless authentication can and should be applied to all online resources to develop better collaboration tools.”

Counterpoint: Some of these experts do not think that the general public will become more savvy or that ‘literacy’ will be enough

A share of these experts believe people’s critical-thinking skills are in decline in the digital age; some said they doubt that effective digital literacy education about the ins and outs of the light and dark areas of rapidly changing digital spaces will improve digital discourse.

Kent Landfield, a chief standards and technology policy strategist with 30 years of experience, noted, “Critical thinking is what made Western societies able to innovate and adapt. The iPhone phenomenon has transformed our society to one of lookup instead of learning. Because that fundamental way of looking at the world is no longer being mastered, the generations that follow may become driven by simple herd mentality. The impact of social media on our society is dangerous, as it propels large groups of our populations to think in ways that do not require original thinking.

“Social media platforms are ‘like or dislike’ spaces that foster conflict, causing these populations to be more susceptible to disinformation, either societal or nation-state. ‘Us versus them’ is not beneficial to society at all. The days of compromise, constructive criticism and critical thinking are passing us by. Younger generations’ minds are being corrupted by half-truths and promises of that which can never be achieved.”

An angel and venture investor who previously led innovation and investment for a major U.S. government organization commented, “The educational system is not creating people with critical-thinking skills. These skills are essential for separating what is real from what is fake in any space. Further, the word fake has become, itself, fake. So, we’re creating a next generation of digital consumers/participants who are not prepared to separate reality from fantasy. Lastly, state actors and nonstate actors are rewarded by and wish to continue to take advantage of this disconnect. The disconnect will continue to affect politics, social norms, education, health care and many other facets of society.”

A professor of political science who is an expert in e-government and technology policy noted, “Because these digital spaces are forms of mass communication in which extremists share space with groups promoting the public interest, the views of extremists are easily spread and digested by the public and often appear to be quite legitimate. I see these digital spaces becoming even more commonplace for political extremists, especially white power and antidemocratic groups. Government is always behind the curve in dealing with these types of groups, and internet governance tends to take a hands-off or ad hoc approach. I don’t think things will change for the better. I can’t say I have the answers on how to counter this.”

A researcher, educator and international statesman in the field of medicine responded, “Our current uses of technology have not contributed to a better society. We are ‘always on,’ ‘present but absent,’ ‘alone in the company of others’ and inattentive. Many of the problems in the digital sphere are simply due to the ways humans’ weaknesses are magnified by technology. People have always faced challenges developing meaningful relationships, and conspiracy theories are not new. Digital technology is a catalyst. There has been a change in our communication parameters and there are cyber effects. The biggest burden is on educators to help each generation continue to develop psychologically and socially.

“When trying to use this technology to communicate, too many fail to consider others and appreciate differences. Many messages are performances and not part of building anything together. Too many people are compulsive users of this technology. Many have moved from overuse to compulsive use and from compulsive use to addiction. We have invented terms to describe our attempts to control our behavior – technology deprivation, technology detox or internet vacations are expressions suggesting people are becoming more mindful of their use.

“Many have not used the technology to be responsive to others. To ask meaningful questions, provide nonverbal signals that encourage others to continue talking, or even use a paraphrase to signal or check on understanding and to confirm others has always been difficult because it requires focusing outside oneself and on others. Now, too many post a comment and leave the field, and too many cannot seem to provide that third text (A’s message, B’s response, A’s response) in the stream that indicates closure on even the most-simple task coordination. Many create dramatic messages that are variations of ‘pay attention to me’ while failing to pay attention to others! …

“I am afraid we are losing our sense of appropriateness, disclosure and intimacy in an era of disposable relationships. We are using our limited time and mental capacity to ‘keep in touch’ or ‘lurk.’ There are more than 22,000 YouTube channels with over a million subscribers each. A lot of people are online to be entertained and to relieve ‘boredom’ instead of developing a network of meaningful relationships. …

“Civic engagement has had a resurgence, and people have used technology to develop activist networks. However, these will be temporary manifestations unless people form sustainable groups aimed at accomplishing renewable goals. Otherwise, these efforts will fade. Instead, people seem to have found like-minded people to confirm their biases, creating consequent social identities that dominate individuals’ personal identities.

“Most online conflict about public issues becomes ego-defensive or dramatic declarations instead of simple conflict recognizing differences and solving problems. All of this has brought many people to confuse their sense of reality. We live in a hybrid world in which our technologies have become indispensable. We seem to have lost our ability to discriminate among events, news, editorials and entertainment. Indeed, some have lost their ability to distinguish simulated and virtual experiences from the rest of their lives. Advances in artificial intelligence encourage this trend. …

“There is very little that business leaders or politicians can do beyond modeling behaviors and limiting abuses associated with general use. ‘Alternate facts’ and repeated efforts to explain away what the rest of us can see and hear do not help. Using the internet to attack scientists, educators, journalists and government researchers creates the impression that all reports and sources of reports are equally true or false. Some people’s facts are more validated and reliable than others.

“Confirmation bias and motivated reasoning are the problems here. When the population begins to reject the garbage, there will be less of it, but this will take a while since so many have staked their sense of themselves on different positions.”

New internet governance structures will appear that draw on collaborations among citizens, businesses and governments

The most-promising initiatives will be those in which the business, governmental, academic and civil society sectors work together with the public to solve problems, according to a number of these expert respondents. Some suggest that this work could be enabled by funding from a coalition of industry, government and philanthropies. Some are hopeful this can happen but say it will require change in the ethics and ethos of tech, in the venture capital funding model underlying tech and in the hierarchical structure of governance, which typically tips toward serving the needs of the power elite.

Paul Jones, emeritus professor of information science at University of North Carolina-Chapel Hill, urged, “Technologists have to learn to think politically and socially. Politicians have to learn to think about technology in a broader way. Both will have grown up with these problems by 2035 and will have seen and participated in the construction of the social, legal and technical environments. From that vantage point, the likelihood of being able to strike a balance between control and social and individual freedoms is increased. Not perfected but increased. The hard work of regulation and of societal norms is to allow for benefits from new technologies to grow and spread while restricting the detriments and potential harms.”

William Lehr, an associate research scholar at MIT’s Computer Science & Artificial Intelligence Laboratory with more than 25 years of internet and telecommunications experience, wrote, “We need to adapt both our society and our technology because digital spaces are changing the nature of public life and of being human. The rise of fake news is one obvious bad outcome, and if post-truth discourse continues, things will get worse before they can get better. The fixes will require joint effort across the spectrum from technologists to policymakers. There is the potential for digital spaces to produce public goods, but also potential for the opposite. Neither outcome is a foregone conclusion. Digital spaces will be a critical part of our future in any case, and that future will be either mostly good or mostly bad, but a future without digital spaces is unrealistic.”

Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab, said, “The current situation in which a handful of commercial enterprises dominate what is thought of as ‘digital spaces’ will crash and burn, not of its own accord but because the combined weight of climate catastrophe and democratic demise will force other changes that ultimately lead to a re-creation of the digital sphere.

“The path to this will be painful, but humans don’t make big changes until the cost of doing so becomes less than the cost of staying the same. The collapse of both planetary health and democratic governance is going to require collective action on a scale never before seen. Along the way, the current centralized and centralizing power of ‘tech companies’ will expand, along with autocracy. Both will fail to address the needs of billions of people and, in time, be undone.

“Whether this will all happen by 2035, who knows? Just as climate change is compressing geologic time, digital consolidation is compressing political time. It’s possible we’ll push through both the very worst of our current direction and break through to a more pluralistic, less centralized, participatory set of governing systems – including digital ones – in 14 years. If not, and we only go further down the current path, then the answer to this question becomes a NO.”

Ginger Paque, an expert in and teacher of internet governance with the Diplo Foundation, observed, “Today’s largest problems are not all about digital issues. They are all human issues, and we need to – and we will – start tackling important human issues along with their corresponding online facets. Addressing health (COVID-19 for the moment), climate change, human rights and other critical human issues is vital.

“The internet must become a tool for solving species-threatening challenges. 2035 will be a time of doing or dying. To continue a negative trend is unthinkable, and how we imagine and use the internet is what we will make our future into. The internet is no longer a separate portion of our lives. Online and offline have truly merged, as shown by the G7 proposal for a minimum corporate tax of 15% for the world’s 100 largest and most profitable companies with minimum profit margins of 10%; it involves tech giants like Google, Amazon and Facebook, and this was undertaken in consideration of digital issues.”

Wendell Wallach, senior fellow with the Carnegie Council for Ethics in International Affairs, commented, “The outstanding question is whether we will actually take significant actions to nudge the current trajectory of digital life toward a more-beneficial trajectory. Reforms that would help:

  1. Holding social media companies liable for harms caused by activities they refuse, or are unable, to regulate effectively.
  2. Shifting governance away from a ‘cult of innovation’ where digital corporations and those who get rich investing in them have little or no responsibility for societal costs and undesirable impacts of their activities. The proposed minimum 15% tax endorsed by the G7/G20 is a step in the right direction, but only if some of that revenue is directed explicitly toward governing the internet, and ameliorating harms caused by digital life, including the exacerbation of inequality fostered by the structure of the digital economy.
  3. Development of a multistakeholder network to oversee governance of the internet. This would need to be international and include bottom-up representation from various stakeholder groups including consumers and those with disabilities. This body, for example, might make decisions as to the utilization of a portion of the taxes the G7/G20 said should be collected from the digital oligopoly.”

A program officer for an international organization focused on supporting democracy said, “We should not underestimate the ability of the public and civil society to innovate positive changes that will incentivize constructive behavior and continue to provide crucial space for free expression. The COVID-19 pandemic has demonstrated that digital connectivity is more important to societies around the world than ever.

“Western tech platforms, for all their faults, are making an effort to be more receptive and responsive to civil society voices in more-diverse settings. In particular, there is growing recognition that voices from the global south need to be heard and involved in discussions about how platforms can better respond to disinformation and address privacy concerns.

“Civil society and democratic governments need to be more involved in global internet governance conversations and in the standards-setting bodies that are making decisions about emerging technologies such as artificial intelligence and facial recognition.

“If civil society sectors unite around core issues related to protecting human rights and free expression in the digital sphere, I am cautiously optimistic that they can effect a certain degree of positive change. One major area of concern relates to the role of authoritarian powers such as China, Russia and others that are redesigning technology and the norms surrounding it in ways that enable greater government control over digital technologies and spaces. We should be concerned about how these forces will affect and shape global discussions that affect platforms, technologies and citizen behavior everywhere.”

A senior economic analyst who works for the U.S. government wrote, “Over time, society in its broadest sense will develop government policies, rules and laws to better govern digital space and digital life.”

A professor whose research is focused on civil society and elites responded, “It may not be too late to take corrective steps, but it will require a highly coordinated set of actions by stakeholders (e.g., government, intelligence agencies, digital intermediaries and platforms, mainstream media, the influence industry – PR, advertising, etc. – educators and citizens). We will likely need supra-national regulation to steer things in the right direction and fight the current default settings and business models of dominant social media platforms.

“Throughout, we need to be alert and guard against the negatives that can arise from each type of stakeholder intervention (especially damage to human rights). There are numerous social and democratic harms arising from what we could term the ‘disinformation media ecology’ and its targeted, affective deception. It negatively impacts citizenship and citizens in fundamental ways. These include attacks on:

  • Our shared knowledge base – Can we agree on even the most basic facts anymore?
  • Our rationality – Faulty argumentation is common online, as evidenced by conspiracy theorists.
  • Our togetherness – Social media encourage tribalism, hate speech and echo chambers.
  • Our trust in government and democratic institutions and processes – Disinformation erodes this trust.
  • Our vulnerabilities – We are targeted and manipulated with honed messages.
  • And our agency – We are being nudged, e.g., by ‘dark design’ and influenced unduly.”

A director with an African nation’s regulatory authority for communications said, “It is very important that all members of society play an equal role in devising and operating the evolving framework for the governance of digital spaces. Most services – both economic and social – will be delivered through digital platforms in 2035. … The current environment, in which digital social media platforms are unregulated, will be strongly challenged. The dominance of developed countries in the digital space will also face a strong challenge from developing countries.”

Terri Horton, work futurist at FuturePath, observed, “The challenges lie in bridging the global digital divide, reducing equity gaps, governing privacy, evolving ethical use and security protocols and rapidly increasing global digital and AI literacy. Mitigating these challenges will require substantial collaborative interventions that merge private and public industries, governments and global technology organizations.

“The desire to create a future that is equitable, inclusive, sustainable and serves the public good is human. I believe that desire will persist in 2035. The growth and expansion of novel digital spaces and platforms will enable people across the globe to use them in positive ways that drive the energy and combustion for improving the lives of many and creating a future that serves society.

“In the future, people will have more choices and opportunities to leverage AI, ML, VR and other technologies in digital spaces to improve how they work, live and play; amplify passions and interests; and drive positive societal change for people and the planet.”

A computer science professor based in Japan said, “Although the internet as a technology is already about 50 years old, its use in society at large is much more recent, and in terms of society adapting to these new uses, including the establishment of laws and general expectations, this is a very short time span.

“Tech leaders will have to invest in better technology to detect, dampen and cull aggressive/negative tendencies on their platforms. Such investment may only be possible with the ‘help’ of laws and public pressure that penalize the tolerance of overly negative/aggressive tendencies.

“Figuring out how to apply such pressure without leading to overly strict limitations will require extreme care and inventiveness. Education will also have to play quite a role in making sure that people value true communications more than negative clickbait.”

An executive with an African nation’s directorate in finance for development wrote, “It would be naive of us to underestimate the impact of the lack of ethics in the assembly of certain technologies. They can cause disasters of all kinds, including exacerbated cyberterrorism. People must collaborate to put in place laws and policies that have a positive impact on the evolution of the digital ecosystem.

“Existing technologies will be regularly adapted and reworked to expand the range of what is possible. Teleworking and medical assistance at home will become widespread. By 2035, the digital transformation of space will be obvious in all countries of the world, including poor countries.

“The mixing of scientific knowledge and the opening up of open-access data worldwide will be an opportunity for progress for all peoples. The transparency imposed by the intangible tools of artificial intelligence can make public service more available than it has ever been in the past.”

Sam Lehman-Wilzig, professor and former chair of communications at Bar-Ilan University, Israel, commented, “As with most new technologies that have significant social impact, the beginning is full of promise, then the reality sets in as it is misused by malevolent forces (or simply for self-aggrandizement), and ultimately there is societal pushback or technological fixes.

“Regarding social media, we seem to be now in the latter stage as policymakers are considering how to ‘reform’ its ecology and as public pressure grows for additional self-supervision by the social media companies themselves. I also expect the educational establishment will enter the fray with ‘media literacy’ education at the grade school and high school level. As a result of all these, I envision some sort of ‘balance’ being reached in the near future between free speech and social responsibility.”

Peter Padbury, a Canadian futurist who has led hundreds of foresight projects for federal government departments, NGOs and other organizations, wrote:

  • “Artificial intelligence will play a large role in identifying and challenging mis- and disinformation.
  • There could be a code of conduct that platforms use and enforce in the public interest.
  • There could be a national or, ideally, international accreditation body that monitors compliance with the code.
  • Reputable service providers could then block the non-code-compliant platforms.
  • The education system has an important role to play in creating informed citizens capable of critical thinking, empathy and a deep understanding of our long-term, global, collective interest.
  • Politicians have a very important role to play in informing, acting and supporting the long-term, global, public interest.”

Alejandro Pisanty, professor of internet and information society at the National Autonomous University of Mexico (UNAM), said, “By 2035 it is likely that there will be ‘positive’ digital spaces. In them, ideally, there will be enough trust in general to allow significant political discussion and the diffusion of trustworthy news and vital information such as health-related content. These are spaces in which digital citizenship will be exerted in order to enrich society. This is so necessary that societies will build it, whatever the cost.

“However, this does not mean that all digital spaces will be healthy, nor that the healthy ones will be the ones we have today. The healthy spaces will probably have a cost and be separated from the others. There will continue to be veritable cesspools of lies, disinformation, discrimination and outright crime. Human drivers for cheating, harassment, disconnection from the truth, ignorance, bad faith and crime won’t be gone in 15 years.

“The hope we can have is that enough people and organizations (including for-profit) will push the common good so that the positive spaces can still be useful. These spaces may become gated, to everyone’s loss. Education and political pressure on platforms will be key to motivating the possible improvements.”

An internet pioneer working at the intersection of technology, business/economics and policy predicted, “Digital spaces will be even more ubiquitous in 2035 than today, so I hope we won’t even have to think about ‘am I online or not?’ by then. That’s only not creepy if it’s a positive experience. I don’t think we’re going to get there through policing or enforcement by technology, technology companies or governments. I do think we need support from all of those as well as public support for improved discourse, but there is no magic bullet, and there is nothing to enforce.

“What will help is having some level of accountability and a visible history of all interactions in digital spaces for identifiable individuals and for organizations.”

Better civic life online will arise as communities find ways to underwrite accurate, trustworthy public information – including journalism

Some respondents argued that all sectors of society must work quickly now to create an effective strategy to defeat the digital “infodemic” and rein in the spread of mis- and disinformation. They said support for accurate journalism and global access to fact-based public information sources is essential to help citizens responsibly participate in democratic self-governance.

Alexander B. Howard, director of the Digital Democracy Project, wrote, “Just as poor diets and sedentary lifestyles affect our physical health, today’s infodemic has been fueled by bad information diets. We face intertwined public health, environmental and civic crises. Thousands of local newspapers have folded in the last two decades, driving a massive decline in newsroom employment. There is still no national strategy to preserve and sustain the accountability journalism that self-governance in a union of, by and for the People requires – despite the clear and present danger data voids, civic illiteracy and disinformation merchants pose to democracy everywhere.

“Research shows that the loss of local newspapers in the U.S. is driving political polarization. As outlets close, government borrowing costs increase. The collapse of local news and nationalization of politics is costing us money, trust in governance and societal cohesion. Information deprivation should not be any more acceptable in the politics of the world’s remaining hyperpower than poisoning children with lead through a city water supply. A lack of shared public facts has undermined collective action in response to threats, from medical misinformation to disinformation about voter fraud or vaccination to the growing impact of climate change.

  1. Investors, philanthropists, foundations and billionaires who care about the future of democracy should invest in experiments that rebuild trust in journalism. They will need to develop, seed and scale more-sustainable business models that produce investigative journalism that doesn’t just depend upon grants from foundations and public broadcasting corporations – though those funds will continue to be part of the revenue mix.

  2. Legislatures and foundations should invest much more in digital public infrastructure now, from civic media to public media to university newspapers. News outlets and social media platforms should isolate viral disinformation in ‘epistemic quarantines’ and inject trustworthy information into diseased media ecosystems, online and off. Community leaders should inspire active citizenship at the state and local level with civics education and community organizing. Congress should fund a year of national service for every high school graduate tied to college scholarships.

  3. Congress should create a ‘PBS for the Internet’ that takes the existing Corporation for Public Broadcasting model and reinvents it for the 21st century. Publishers should build on existing public media and nonprofit models, investing in service journalism connected to civic information needs. Journalists should ask the ‘people formerly known as the audience’ to help them investigate. State governments should subsidize more public access to publications and the internet through libraries, schools and wireless networks, aiming to deploy gigabit speeds to every home through whatever combination of technologies gets the job done. Public libraries should be renovated and expanded to provide digital and media literacy programs and nonpartisan information feeds to fill data voids left by the collapse of local news outlets.

  4. The U.S. government, states and cities should invest in restorative information justice. How can a national government that spends hundreds of billions on weapon systems somehow have failed to provide a laptop for each child and broadband internet access to every home? It is unconscionable that our governments have allowed existing social inequities to widen in 2020. Children were left behind by remote learning, excluded from access to the information, telehealth, unemployment benefits and family leave that will help them and their guardians make it through this pandemic.

“By 2035, we should expect digital life to be both better and worse, depending on where humans live. There will be faster, near-universal connectivity – for those who can afford it. People who can pay to subscribe will be able to browse faster, without ads or location and activity tracking. The poor will trade data for access that’s used by corporations and insurance companies unless nations overcome massive lobbying operations to enact data protection laws and enforce regulations.

  • “Smartphones will evolve into personalized virtual assistants we access through augmented-reality glasses, health bands, gestural or spoken interfaces, and information kiosks.
  • “Information pollution, authoritarianism and ethnonationalism supercharged by massive surveillance states will pose immense risks to human rights.
  • “Climate change will drive extreme weather events and migration of refugees both within countries and across borders.
  • “Unless there are significant reforms, societal inequality will destabilize governments and drive civil wars, revolutions and collapsed states.
  • “Toxic populism, tribalism and nativism antagonistic to democracy, science and good governance will persist and grow in these darkened spaces.”

Courtney C. Radsch, author and free-expression advocate, said, “The decline in the concept of truth and a shared reality is only going to be worsened by the increasing prevalence of so-called deepfake videos, audio, images and text. The lack of a shared definition of reality is going to make democratic politics, public health, journalism and myriad aspects of life more challenging.”

Stowe Boyd, founder of Work Futures, predicted, “Decreasing the amplification of disinformation is the most critical aspect of what needs to be done. Until that is accomplished, we are at risk of growing discord and division. Policy makers – elected officials, legislatures, government agencies and the courts – must take action to counter the entrenched power of today’s social platforms.

“The coming antitrust war with major platform companies – Facebook and its competitors – will lead to more and smaller social media companies with more-focused communities and potentially lessened commercial goals. That will diminish the amplification potential of social media and will likely lead to better ways to root out disinformation.”

Scott Santens, senior advisor at Humanity Forward, commented, “We really have no choice but to improve digital spaces, so ‘no’ isn’t an option. We are coming to realize that the internet isn’t going to fix itself and that certain decisions we made along the way need to be rectified. One of those decisions was to lean on an ad-driven model to make online spaces free. This was one of the biggest mistakes.

“In order to function better, we need to shift toward a subscription model and a data ownership model, and in order for that to happen, we’re going to need to make sure that digital space users are able to afford many different subscriptions and are paid for their data. That means potentially providing digital subscription vouchers to people in a public-funded way, and it also means recognizing and formalizing people’s legal rights to the data they are generating.

“Additionally, I believe universal basic income will have been adopted by 2035 anyway, which itself will help pay for subscriptions, help free people to do the unpaid work of improving digital spaces, and perhaps most importantly of all, reduce the stress in people’s lives, which will do a lot to reduce the toxicity of social media behavior.

“The problem of disinformation and misinformation will also require investments in the evolution of education, to better prepare people with the tools necessary to navigate digital spaces so as to better determine what is false or should not be shared for other reasons, versus what is true or should be shared for other reasons. We can’t keep teaching kids as we were once taught. A digital world is a different place and requires an education focused on critical thinking and information processing versus memorization and information filing.”

Melissa Sassi, the Global Head of IBM Hyper Protect Accelerator, said, “Media misinformation and disinformation are two of the largest challenges of our time. Current trends in social media raise significant concern about the role that information shared by users on a platform plays in causing strife around the world – strife that can drive genocide, authoritarianism, bullying and crimes against humanity. It is equally concerning when governments shut down internet connectivity or access to specific sites to curtail dissent or adjust the narrative to benefit their own political party and/or agenda.”

Craig Newmark, the founder of Craigslist, now leading Craig Newmark Philanthropies, observed, “Social media becomes a force mainly for good actors when the platforms (and mass media) no longer amplify disinformation. I hope for this by 2035.”

Brooke Foucault Welles, an associate professor of communication studies at Northeastern University whose research has focused on ways in which online communication networks enable and constrain behavior, commented, “The current consolidation of media industries – including new media industries – leaves little room for alternatives. This is an unstable media ecosystem and unlikely to allow for, much less incentivize, major shifts toward the public good. There is, by fiduciary duty, little room for massive, consolidated media companies to serve the public good over the interests of their investors.”

Andy Opel, professor of communications at Florida State University, wrote, “As with all systems of social control and surveillance, capillary, bottom-up resistance builds and eventually challenges the consolidation of power. We are seeing that resistance from both ends of the political spectrum, with the right calling for regulation of social media to prevent the silencing of individual politicians while the left attempts to respond to the viral spread of misinformation. Both groups recognize the dangers posed by the current media-ownership landscape and, while their solutions differ, the social and political attention on the need for media reform suggests that a digital bill of rights is likely to become a major issue in near-term election cycles.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, responded, “There is reason for moderate hopefulness about the fight against misinformation. While I don’t expect public news media literacy or incentives to change dramatically, social media platforms may have enough in the form of technical and platform control tools to mitigate certain issues like bot accounts and viral spreading of untrustworthy sources.

“Significant research and pressure, along with compelling examples of actions that can be taken, suggest improvements are available. However, this positive transformation for some is complicated by the willingness of unscrupulous actors, authoritarian governments and criminal groups to promote misinformation, particularly for the many countries and languages that are less well monitored and protected.

“Further, it is not clear whether a loss of participants from mainstream social media platforms to more fringe/radical platforms would increase or decrease the spread of misinformation and polarization overall.

“Deepfakes and plain old fake news are likely to (continue to) have significant purchase with large portions of the global population, but it is possible that platforms will be able to minimize the most harmful misinformation (such as misinformation promoting violence or genocide) especially around key periods of interest (such as elections).

“For a portion of the world then, I would expect the misinformation problem to improve, though only in the more well-regulated and high-income corners. However, deepfakes could throw a wrench in this. It is unclear whether perpetrators or regulators will stay ahead in the informational battle.”

Some respondents to this canvassing, including Aaron Falk, senior technical product manager at Akamai Technologies, suggested that any improvement in the tone of the digital public sphere is likely to require that those who share information have an accountable identity – no more anonymity. He commented, “Pervasive anonymity is leading to the degradation of online communications because it limits the accountability of the speaker. By 2035, I expect online fora will require an accountable identity, ideally one that still permits users to have multiple personas.”

Counterpoint: Many doubt that leaders and the public can come together on these issues

There is pushback against this enthusiasm. Many respondents said that while they hope for, wish for and even expect to see people come together to work on all of these important issues, they do not anticipate significant positive changes in the digital public sphere by 2035. A small selection of their responses is shared here; many more are included in the next section of this report, which illuminates four more themes.

Joseph Turow, professor of media systems and industries at the University of Pennsylvania, said, “Correcting this profound problem will require a reorientation of 21st century corporate, national and interpersonal relationships that is akin to what is needed to meet the challenge of reducing global warming. There are many wonderful features of the internet when it comes to search, worldwide communication, document sharing, community-oriented interactions and human-technology interconnections for security, safety and health. Many of these will continue apace.

“The problem is that corporate, ideological, criminal and government malefactors – sometimes working together – have been corrupting major domains of these wonderful features in ways that are eroding democracy, knowledge, worldwide communication, community, health and safety in the name of saving them. This too will continue apace – unfortunately often faster and with more creativity than the socially helpful parts of our internet world.”

Kate Carruthers, chief data and insights officer at the University of New South Wales-Sydney, observed, “Digital spaces will not magically become wholesome places without significant thought and action on the part of leaders, and U.S. leadership is either not capable or not willing to make the necessary decisions. Given the political situation in the U.S., any kind of positive change is extremely unlikely. All social media platforms should be regulated as public utilities and then we might stand a chance for the growth of civil society in digital spaces. Internet governance is becoming fragmented, and countries like China and Russia are driving this.”

Ivan R. Mendez, a writer and editor based in Venezuela, responded, “The largest danger is no longer the digital divide (which still exists and is wider in 2021, after the pandemic); the largest danger is the further conversion of the public into large, easily marketable digital herds. The evolution of digital spaces into commercialized platforms poses new challenges.

“The arrival of agile big tech players with proposals that connect quickly with the masses (who are then converted into customers) gives them a large amount of influence in governments’ internet governance discussions. … Other important internet stakeholders – entities such as the Internet Governance Forum (IGF) that have been entrusted with representing the internet ecosystem and working for the betterment of networks through organized cross-sector discussions – have not gained enough authority in the governance discussions of governments; their input is not sought, and they have not been allowed to participate in or influence global or nation-state digital diplomacy.”

Richard Barke, an associate professor in the School of Public Policy at Georgia Tech, wrote, “Communications media – book publishers and authors, newspaper editors, broadcast stations – have always been shaped by financial forces. But for most of our history there have been delays between the gathering of news or the production of opinions and the dissemination of that information. Those delays have allowed (at least sometimes) for careful reflection: Is this true? Is this helpful? Can I defend it? Digital life provides almost no delay. There is little time for reflection or self-criticism, and great amounts of money can be made by promulgating ideas that are untrue, cruel or harmful to people and societies.

“I see little prospect that businesses, individuals, or governments have the will and the capacity to change this. … The meme about crying fire in a crowded theatre might become a historical relic; there is a market for selling untruths and panics, even if they cross or skirt the line between protected speech and provocation. Laws and regulation can’t keep up, and many possible legal remedies are likely to confront conflicting interpretations of constitutional rights.”

An expert in urban studies based in Venezuela observed, “The future looks negative because it is not sufficiently recognized that the current business model of the digital world – the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) plus AI – creates and promotes inequalities that are an impediment to social development. Ethical values that should safeguard the rights of citizens and the various social groups require further review and support based on broad consultations with the multiple stakeholders involved.”

A North America-based entrepreneur said, “It seems clear that digital spaces will continue to trend toward isolationist views and practices that continue to alienate groups from one another. I foresee a further splintering and divide among class, race, age, politics and most any other measures of subdivision. Self-centered views and extreme beliefs will continue to divide society and erode trust in government, and educational and traditional news sources will continue to diminish. We will continue to see an erosion of communication between disparate groups.”

3. Large improvement of digital spaces is unlikely by 2035: Human frailties will remain the same; corporations, governments and the public will not be able to make reforms

Experts who doubt significant improvement will be made in the digital democratic sphere anytime soon say the key factor underlying the currently concerning challenges of online discourse is the ways in which people, with their varied and complicated motivations and behaviors, use and abuse digital spaces. Those who think the situation is unlikely to change say it is because “humans will be human.” They say digital networks and tools will continue to amplify human frailties and magnify malign human intent.

Two quotes providing an overarching frame for this theme:

Alejandro Pisanty, professor of internet and information society at the National Autonomous University of Mexico (UNAM), wrote, “Most major concerns for humanity’s future stem from deeply rooted human conduct, be it individual, corporate, criminal or governmental.”

A professor of psychology at a major U.S. technological university whose specialty is human-computer interaction said, “One can imagine a future in which digital life is more welcoming of diverse views, supportive of those in need, and wise. Then we can look at the nature of human beings, who have evolved to protect their own interests at the expense of the common good, who divide the world into ‘us’ and ‘them’ and justify their actions by self-deception and proselytizing. Nothing about the digital world provides a force toward the first vision. In fact, as now constituted – with no brakes on individual posts and virtually no effort by platforms to weed out evil-doers – all of the impetus is in the direction of unmitigated expression of the worst of human nature. So, I am direly pessimistic that the digital future is a benevolent one.”

Many of these experts pointed out that current technology design exploits the very human characteristics that trigger humans’ most troublesome online behaviors. Some expect this to worsen due to advances in the hyper-surveillance of populations; in datafication that turns people’s online activities into individualized insights about their behaviors; and in predictive technology that can anticipate what they may do next. Some noted that these characteristics of digital tech aid authoritarians, magnify mis/disinformation and enable psychological and emotional manipulation.

A number of respondents’ views about why it will be difficult to improve the digital public sphere by 2035 were included in earlier theme sections of this report. In this section we showcase scores of additional expert comments, organized under five themes:

  1. Humans are self-centered and short-sighted, making them easy to manipulate
  2. The trends toward more datafication and surveillance of human activity are unstoppable
  3. Haters, polarizers and jerks will gain more power
  4. Humans can’t keep up with the speed and complexity of digital change
  5. Reform cannot arise because nation-states are weaponizing digital tools

Humans are self-centered and short-sighted, making them easy to manipulate

Many respondents to this canvassing wrote about humans’ hard-wired “survival instinct” to protect themselves and meet personal goals. They noted that these motivations in the hair-trigger, global public sphere have fostered divisiveness even to the point in some cases of genocide and violence against governments. When human dispositions and frailties can be manipulated in a digitally networked world, danger is intensified. And, these experts note, this explosive environment can worsen when those in digital spaces can be surveilled.

Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago, observed, “We enter these spaces with our baggage – there is no check-in counter online where we enter and get to leave that baggage behind. This baggage includes toxicity. Toxicity is a human attribute, not an element inherent to digital life. Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”

Alexa Raad, chief purpose and policy officer at Human Security, wrote, “Fundamentally, the same aspects of human nature that have ruled our behavior for millennia will continue to dictate our behavior, albeit with new technology. For example, our need for affiliation and identity – coupled with our cognitive biases – has led to and will continue to breed tribalism and exacerbate divisions. Our limbic brains will continue to overrule rational thought and prudent action when confronted with emotional stimuli that generate fear.”

Paul Jones, emeritus professor of information science at University of North Carolina-Chapel Hill, said, “Authors Charles F. Briggs and Augustus Maverick wrote in their 1858 book ‘The Story of the Telegraph,’ ‘It is impossible that old prejudices and hostilities should longer exist while such an instrument has been created for the exchange of thought between all nations of the earth.’ The telegraph was supposed to be an instrument of peace, but the first broad use was to suppress anti-colonial rebellion in India. I’m not sure why we talk about digital spaces as if they were separate from, say, telephone spaces or shopping mall spaces or public park spaces. In many ways, the social performance of self in digital spaces is no different. Or is it? Certainly, anonymous behaviors when acted out in public spaces of any kind are more likely to be less constrained and less accountable. Digital spaces can and do act to accelerate and maintain cohesion and cooperation of real-world activities. We see how affinity groups support communitarian efforts – cancer and rare-disease support groups, Friends of the Library. We also are aware that not all affinity groups are formed to serve the same interests in service of democracy and society – see Oath Keepers for example.”

Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, responded, “It’s unfortunate that the digital space has been so thoroughly polluted, but it’s also unlikely to change for one reason – people don’t change. We can ruin anything. Most new technologies started out with great promise to change society for the better. Remember what was being said when cable was introduced? There is a lot that’s good and useful in the digital space, but the bad drives out the good and causes more harm. Do we have to talk these days about Russian interference, the Big Lie of the election or the fact that people aren’t getting vaccinated against Covid? It’s not all the online space – cable contributed also. Technology will never keep up with all the garbage going in.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, predicted, “While there may be significant changes in what will amount to niche sectors for the better, my strong sense is that the conditions and causes that underlie the multiple negative affordances and phenomena now so obvious and prevalent will not change substantially. This is … about human selfhood and identity as culturally and socially shaped, coupled with the ongoing, all but colonizing dominance of the U.S.-based tech giants and their affiliates. Much of this rests on the largely unbridled capitalism favored and fostered by the United States.

“The U.S., for all of its best impulses and accomplishments, is increasingly shaped by social Darwinism, the belief that humans are greedy, self-interested atomistic individuals thereby caught up in the Hobbesian war of each against all, ruthless competition as ‘natural’ – and that all of this is somehow a good thing as it allegedly generates greater economic surplus, however unequally distributed it is (as a ‘natural result’ of competition).

“All of this got encoded into law, starting with early-1970s regulation that treated networking and computer-mediated communication industries as ‘carriers’ rather than ‘content providers’ (i.e., newspapers, radio and TV), the latter regulated vis-à-vis rights to freedom of expression that are importantly limited with a view toward what contributes to fruitful democratic debate, procedures and norms.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, where he is researching artificial intelligence and the social implications of technology, commented, “I would not expect the quality of public discourse to improve dramatically on average. While companies may have some incentives to remediate the worst offenses (violent speech), my concern is that human nature and emergent behavior will continue to lead to activities like bullying, uncharitable treatment of others and the formation of out-groups. I find it unlikely that more positive, pluralistic and civil platforms will be able to outcompete traditional digital spaces financially and in terms of audience desire. Given that regulation is unlikely to impose such dramatic changes and that users are unlikely to go elsewhere, I suspect there are not sufficient incentives for the leading firms to transform themselves beyond, for example, protections to privacy and efforts to combat misinformation.

“Overall, while some of the worst growing pains of digital spaces may be remediated in part, we can still expect outcomes like hostility, polarization and poor mental health. Progress then, may be modest, and limited to areas like privacy rights and combating misinformation and hate speech – still tremendously important advances. Further, my skepticism about broader progress is not meant to rule out the tremendous benefits of digital spaces for connection, education, work and so on. But it stretches my credulity, in light of human nature and individual and corporate incentives, to believe that the kind of transformations that could deeply change the tenor of digital life are likely to prevail in the near future.”

Marc Brenman, managing partner of IDARE LLC, observed, “Human nature is unlikely to change, while little in technology will stay the same. The interaction of the two will continue to become more problematic. Technology enables errors to be made very quickly, and the errors, once made, are largely irretrievable. Instead, they perpetuate, extend and reproduce themselves. Autonomy becomes the possession of machines and not people. Responsibility belongs to no one. Random errors creep in.

“We, as humans, must adjust ourselves to machines. Recently I bought a new car with state-of-the-art features. These include lane-keeping, and I have been tempted to take my hands off the steering wheel for long periods. This, combined with cruise control and distance regulation, comes close to self-driving. I am tempted to surrender my will to the machine and its sensors and millions of lines of code. The safety features of the car may save my life, but is it worth saving? Similarly, the technology of gene-splicing enables the creation of mRNA vaccines, but some people refuse to take them. We legally respect this ‘Thanatos,’ as we legally respect another technology: guns.”

Neil Richards, professor of law at Washington University in St. Louis and one of the country’s foremost academic experts on privacy law, wrote, “Right now, I’m pretty pessimistic about the ability of venture capital-driven tech companies to better humanity when our politics have two Americas at each other’s throats and there is massive wealth inequality complicated by centuries of racism. I’m confident over the long term, but the medium term promises to be messy. In particular, our undemocratic political system (political gerrymandering, voting restrictions and the absurdity of the Senate, where California has the same power as Wyoming and a dozen other states with a fraction of its population), tone-deaf tech company leaders and viral misinformation mean we’re likely to make lots of bad decisions before things get better. We’re human beings. The history of technological advancements makes pretty clear that transformative technological changes create winners and losers, and that even when the net change is for the better, there are no guarantees, and, in the short term, things can get pretty bad. In addition, you have to look at contexts much broader than just technology.”

Randall Gellens, director at Core Technology Consulting, said, “We have ample evidence that significant numbers of humans are inherently susceptible to demagogues and sociopaths. Better education, especially honest teaching of history and effective critical-thinking skills, could mitigate this to some degree, but those who benefit from this will fight such education efforts, as they have, and I don’t see how modern, pluralistic societies can summon the political courage to overcome this. I see digital communications turbocharging those aspects of social interaction and human nature that are exploited by those who seek power and financial gain, such as groupthink, longing for simplicity and certainty, and wanting to be part of something big and important.

“Digital media enhances the environment of immersion and belonging that, for example, cults use to entrap followers. Digital communications, even such primitive tools as Usenet groups and mailing lists, lower social inhibitions to bad behavior. The concept of trolling, for example, in which people, as individuals or as part of a group, indulge in purely negative behavior, arose with early digital communications. It may be the lack of face-to-face, in-person stimuli or other factors, but the effect is very real. During the pandemic shutdown of in-person activities, digital replacements were often targeted for attack and harassment. For example, some school classes, city council meetings, addiction and mental health support groups were flooded with hate speech and pornography. Access controls can help in some cases (e.g., school classes) but are inimical in many others (e.g., city council meetings, support groups).

“Throughout history and in current years, dictators have shown how to use democracy against itself. Exploiting inherent human traits, they get elected and then consolidate their power and neutralize institutions and opposition, leaving the facade of a functioning democracy. Digital communications enhance the effectiveness of the mechanisms and tools long used for this. It’s hard to see how profit-driven companies can be incentivized to counter these forces.”

An expert in how psychology, society and biology influence human decision-making commented, “People are people; tech might change the modality of communication, but people drive the content/usage, not the reverse.”

An expert at helping developing countries to strategically implement ICT solutions wrote, “Technologies continue to amplify human intention and behaviour. As long as people are not aware of this, the digital space will not be a safe place to be. People with power will continue to misuse it. The digital divides between north and south, women and men, rich and poor, will not be closed because digitalisation exacerbates polarisation.”

Eileen Rudden, co-founder of LearnLaunch, responded, “In the mid-1990s, during the birth of the internet, we rejoiced in the internet’s possibility to enable new voices to be heard. That possibility has been realized, but the bad of human nature as well as the good has been given a broader platform. Witness how varied the media brands are, from Breitbart to The New York Times. The root cause is that we social human beings are structured to be interested in difference and changes. Tech social spaces amplify the good and the bad of human nature. An issue I expect to see remain unsolved by 2035 is bad actors exploiting the slowness of the public’s responses to emerging challenges online.”

An educator based in North America predicted, “Seems like there will be less discourse and more censorship, mass hysteria, group-think, bullying and oppression in 2035.”

An anonymous respondent said, “The lack of a single shared physical space in which real people must work toward coming to a mutual understanding and the reduced need for more than a few humans to be in agreement to coordinate the activity of millions have reduced the countervailing forces that previously led cults to remain isolated or to fade over time.

“The regular historical difficulties that have often resulted from such communication trends in the past and in the present (but to date only in isolated regions, not globally) include the suppression and destruction of science, of histories and of news, and the creation and enshrining of artificial histories as the only allowed narrative. These trends also lead to a glorification of the destruction of people, art, architecture and many of the real events of human civilization.

“Today’s public platforms have almost all been designed in a way that allows for the fast, creative generation of fake accounts. The use of these platforms’ automated tools for discussion and interaction is the dominant way to be seen and heard, and the dominant way to be perceived as popular and seek approval or agreement from others. As a result, forged social proof has become the most common form of social proof. Second-order effects convert this into ‘real’ social proof, erasing the record of the first. This is allowing cult-forming techniques that were once only well understood in isolation to become mainstream.”

A North American strategy consultant wrote, “There will always be spin-offs of the Big Lie. Negativity wins over truth, especially when the volume is loud. Plus, there’s far too much money involved here for the internet companies to play ball.”

An educator who has been active in the Second Life online virtual community responded, “Human egos, nature and cognitive dissonance will continue to prevail. Political, marketing and evangelistic agendas will continue to prevail.”

An American author, journalist and professor said, “Attention-seeking behavior won’t change, nor will Skinnerian attention rewards for extreme views. It’s possible that algorithms will become better at not sending people to train wreck/extreme content. It is also possible that legislation will change the relationship between the social media sites and the content they serve up.”

A professor of informatics based in Athens, Greece, predicted, “There will not be significant improvement by 2035 due to greed, lack of regulation, money in politics and corruption.”

A futurist and cybercrime expert responded, “The worst aspects of human nature, its faults, flaws and biases are amplified beyond belief by today’s tech and the anticipated technologies still to come. There is always a subset of people ‘hoping’ for humans’ kindness and decency to prevail. That’s a nice idea but not usually the smart way to bet.”

A business professor researching smart cities and artificial intelligence wrote, “I am very fearful about the impact of AI [artificial intelligence] on digital spaces. While AI has been around for a while, it is only in the last decade that, through its deployment in social media, we have started to see its impact on, inter alia, human nature (for those who have access to smart technology, it has become an addiction), discourse (echo chambers have never been more entrenched), and consent/agency (do I really hold a certain belief, or have I been nudged toward it?). Yes, I do think that there are ways to move our societal trajectory toward a more optimistic future. These include meaningful and impactful regulation; more pervasive ethical training for anybody involved in creating, commercializing or using ‘smart’ technologies; greater educational efforts toward equipping students of all ages with critical-thinking tools; and less capture by bitter and divisive political interests.”

An analytics director for a social media strategies consultancy commented, “I don’t think digital spaces and digital life have the capacity to experience a substantial net improvement until we change how we operate as a society. The technology might change, but – time after time – humans seem to prove that we don’t change. The ‘net’ state of digital spaces and digital life will not be substantially better. Certainly, there will be some positive change, as there is with most technological developments. I can’t say what those changes will be, but there will be improvements for some. However, there’s always the other side of the coin, and there will certainly be people, organizations, institutions, etc., that have a negative impact on digital spaces/life.”

People are not capable of coming together to solve problems like these

A share of experts were fairly confident that people will simply not be able to find a way to come together to accomplish the goal of designing effective approaches to public digital spaces. Coming to consensus isn’t easy, they say, since everyone has different motivations; people will not get their act together effectively enough to truly make a difference.

Leah Lievrouw, professor of information studies at the University of California-Los Angeles, argued, “Despite growing public concern and dismay about the climate of and risks of online communication and information sources, no coherent agenda for addressing the problems seems to have yet emerged, given the tension between an appropriate reluctance to let governments (with wildly different values and many with a penchant for authoritarianism) set the rules for online expression and exchange, and the laddish, extractive ‘don’t blame us, we saw our chances and took ’em’ attitude that still prevails among most tech industry leadership.

“It’s not clear to me where the new, responsible, really compelling model for ‘digital spaces’ is going to come from. If the pervasive privatization, ‘walled garden’ business models and network externalities that allowed the major tech firms to dominate their respective sectors – search, commerce, content/entertainment, interpersonal relations and networks – continue to prevail, things will not improve, as big players continue to oppose meaningful governance and choke off any possible competition that might challenge their incumbency.”

Clifford Lynch, director of the Coalition for Networked Information, commented, “The digital public sphere has become the target of all kinds of entities that want to shape opinion and disseminate propaganda, misinformation and disinformation. It has become an attack vector in which to stage assaults on our society and to promote extremism and polarization. Digital spaces in the public sphere where large numbers of sometimes anonymous or pseudonymous entities can interact with the general public have become full of all of the worst sort of human behavior: bullying, shaming, picking fights, insults, trolling – all made worse by the fact that it’s happening in public as part of a performance to attract attention, influence and build audience. I don’t expect that the human behavior aspects of this are likely to change soon; at best we’ll see continued adjustments in the platforms to try to reduce the worst excesses.

“Right now, there’s a lot of focus on these issues within the digital public sphere and discussions on how to protect it from those bad actors. It is unclear how successful these efforts might be. I am extremely skeptical they’ve been genuinely effective to this point. One thing that is really clear is that we have no idea of how to do content moderation at the necessary scale, or whether it’s even possible. Perhaps in the next 5 to 10 years we’ll figure this out, which would lead to some significant improvements, but keep in mind that a lot of content moderation is about setting norms, which implies some kind of consensus. There is, as well, the very difficult question of deciding what content conforms to those norms.”

Ian O’Byrne, an assistant professor of Literacy Education at the College of Charleston, said, “We do not fully understand the forces that impact our digital lives or the data that is collected and aggregated about us. As a result, individuals use these texts, tools and spaces without fully understanding or questioning the decisions made or being made therein. The end result is a populace that does not possess or chooses not to employ the basic skills and responsibilities needed to engage in digital spaces. In the end, most users will continue to choose to surrender to these digital, social spaces and all of their positive and negative affordances. There will be a small subset that chooses to educate themselves and use digital tools in a way that they believe will safely allow them to connect while obfuscating their identity and related metadata. Tech leaders and politicians view the data collection and opportunities to influence or mislead citizens as a valuable commodity.

“Digital spaces provide a way to connect and unite communities from a variety of ideological strains. Online social spaces also provide an opportunity to fine-tune propaganda to sway the population in specific contexts. As we study human development and awareness, this intersects with ontology and epistemology. When technologies advance, humans are forced to reconcile their existing understandings of the world with the moral and practical implications said technologies can (or should) have in their lives. Post-Patriot Act era – and in light of Edward Snowden’s National Security Agency whistleblowing – this also begets a need to understand the role of web literacies as a means of empowering or restricting the livelihood of others. Clashes over privacy, security and identity can have a chilling impact on individual willingness to share, create and connect using open, digital tools, and we need to consider how our recommendations for the future are inevitably shaped by worries and celebrations of the moment.”

A professor of political science based in the U.S. observed, “The only way things might change for the better is if there is a wholesale restructuring of the digital space – not likely. The majority of digital spaces are serving private economic and propaganda needs, not the public good. There is no discernible will on the part of regulators, governmental entities or private enterprise to turn these spaces to the public good. News organizations are losing their impact, there is no place for shared information/facts to reach a wide audience. Hackers and criminal interests are threatening economic and national security and the protection of citizens.”

An associate dean for research in computer science and engineering commented, “I am very worried there will not be much improvement in digital spaces due to the combination of social division, encouragement of that social division by any and all nondemocratic nations, the profit focus of business interests, individuals protecting their own interests and the lack of a clearly invested advocate for the common good. Highly interested and highly motivated forces tend to always win over the common good because the concept of what constitutes the common good is so diffuse among people. There may be ways things could improve. I see promise in local digital spaces in connecting neighbors. But I have yet to see much success in connecting them across the political spectrum. I see potential for better-identifying falsehoods and inflammatory content. But I don’t see a national (or global) consensus or a structure for actually enforcing social good over profits and selfish/destructive interests.”

An internet pioneer predicted, “Our societal descent into truth decay – which threatens the world like no other ill – will not be solved by digital savants, some different form of internet governance, nor new laws/regulations/antitrust actions. Truth decay is first a symptom; its seeds were planted long ago in jarring market transitions across the economy, in employment, in political action and rhetoric. The internet – an intellectual buffet that begins and ends with dessert – has accelerated and amplified the descent, but cannot be reshaped to stop it, let alone reverse it.”

A principal architect for a major global technology company responded, “I wish I could have more techno-optimism, because we in tech keep thinking up creative improvements of the users’ options in digital spaces, allowing for better control over one’s data. SOLID, the work on decentralizing social applications by Tim Berners-Lee, is an interesting project in this realm. And we are working toward better algorithm equity and safety (there are multiple efforts in this area). But at the broad level, the digital space being bad for people isn’t sufficiently addressed by such improvements. Our complex tech ideas might not even be necessary if the companies operating the digital spaces committed to and invested in civic governance. For the companies to do so, and for it to be a consensual approach with the users, requires them to change their values for real. They would have to commit to improving the product quality of experience because it’s worth investing in for the long term even if it lowers the growth rate of the company.

“Companies have spent decades not investing in defenses from security attacks, and even now investments in that are often driven by regulations rather than sincere valuation of security as a deliverable. That’s one reason the security space continues to be hellish and damaging. That’s an analogy, in my opinion, to explain why there are likely to only be ineffective and incremental technical and governance measures for digital spaces. There may be a combination of good effort by regulatory push and some big tech pull, but it would be nothing like enough to significantly change the digital-space world.”

A UK-based expert on well-being in the digital age observed, “Social media speaks to our darkest needs: for games, for validation and for the hit of dopamine. This isn’t discerning. In 2035 there will still be people who abuse online spaces, finding ways to do so beyond the controls. Too often we focus on helping the child, helping the bully and not on kicking those who exercise certain behaviours off social media altogether.”

A director of strategic relationships and standards for a global technology company noted, “Digital spaces and digital life have dramatically reduced civility and kindness in the world. I honestly don’t know how to fix this. My hope is that we will continue to talk about this and promote a desire to want to fix it. I worry that a majority won’t want to fix it because it is not in their interest. There are two driving reasons for the incivility. 1) In the U.S., First Amendment rights are in conflict with promoting civility and mitigating attempts to control cruelty and facts. 2) A natural consequence of digital spaces is a lack of physical contact which, by definition, facilitates cruelty without penalty.”

A computer science and engineering professor at a major U.S. technological university said, “Things are not changing significantly and will not; discourse is just moving from one platform (newspaper, radio) to another (internet). Past human history indicates that politics creates hot emotions, wild claims and nasty attacks, whatever the platform. Attempts to curtail expression by legislation can sometimes have a useful dampening effect, but most are not widely supported because they infringe on free speech.”

A professor of computer science and data studies wrote, “The damage done by digital spaces seems irreparable. Society is fractured in regard to basic truths, so leaders cannot even make changes for the better because factions can’t agree on what ‘better’ means.”

A professor emerita of informatics and computing responded, “Most people have seen the impact of individualistic efficacy on the internet and are likely to be resistant to government attempts to regulate content in ways that control individuals. We have seen so much affective polarization in recent years in this country and around the world that it will be difficult to roll that back through policies. As for technological changes that might effect change, I don’t have a crystal ball to tell me how those might interact with governments and citizens. We have also witnessed the rise of online hate groups that have wielded power and will also resist being controlled.”

A professor of internet studies observed, “The internet’s architecture will always allow end-runs around whatever safeguards are put in place. There is not enough regulation in place to deal with the misinformation and echo chambers, but I doubt there will ever be enough regulation.”

The trends toward more datafication and surveillance of human activity are unstoppable

A number of respondents focused on the growth of increasingly pervasive and effective surveillance technologies – the bread-and-butter business model of online platforms and most digital capitalism – and said they expect that upgrades in them will worsen things. They said monitoring of users and “datafying” people’s activities for profit are nearly inescapable and extremely susceptible to abuse. This underlies widening societal divisions in democracies in addition to furthering the goals of authoritarian governments, even, at times, to the point of facilitating genocide, according to these experts. They said digital spaces are often used for the types of psychographic manipulation that can cleave cultures, threaten democracy and stealthily stifle people’s agency, their free will.

Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, commented, “Currently, our entire social media environment is deliberately engineered throughout to promote fear, hatred, division, personal attacks, etc., and to discourage thought, nuance, compromise, forgiveness, etc. And here I don’t mean the current moral panic over ‘algorithms,’ which, contrary to hype, I would say are a relatively minor aspect of the structural issues. Rather, the problem is ‘business models.’

“Fundamentally, the simplest path of status-seeking in one’s tribe is treating opponents with sneering trashing, inflammatory mischaracterization or even outright lying. That’s quick and easy, while people who merely even take a little time to investigate and think about an issue will tend to find themselves drowned out by the outrage-mongering, or too late to even try to affect the mob reaction (or perhaps risking attack themselves as disloyal).

“These aren’t original, or even particularly novel observations. But they do imply that the problems have no simple technical fix in terms of promoting good information over bad or banning individual malefactors. Instead, there has to be an entire system of rewarding the creation of good information and not bad. And I’m well aware that’s easier said than done. This is a massive philosophical problem. But if one believes there is a distinction between the ‘public interest’ (truth) versus ‘what interests the public’ (popularity), having more of the former rather than the latter is not ever going to be accomplished by getting together the loudest screamers and putting advertising in the pauses of the screaming.

“I want to stress how much the ‘algorithms’ critique here is mostly a diversion in my view. ‘If it bleeds, it leads’ is a venerable media algorithm, not just recently invented. There has been a decades-long political project aimed at tearing down civic institutions that produce public goods and replacing them with privatized versions that optimize for profits for the owners. We can’t remedy the intrinsic failures by trying to suppress the worst and most obvious individual examples which arise out of systemic pathology. I should note even in the most dictatorial of countries, one can still find little islets of beauty – artists who have managed to find a niche, scientists doing amazing work, intellectuals who manage to speak out yet survive and so on. There’s a whole genre of these stories, praising the resilience of the human spirit in the face of adversity. But I’ve never found these tales as inspiring as others do, as they’re isolated cherry-picking in an overall hellscape.”

Ellery Biddle, projects director at Ranking Digital Rights, wrote, “I am encouraged by the degree to which policymakers and influential voices in academia and civil society have woken up to the inequities and harms that exist in digital space. But the overwhelming feeling as I look ahead is one of dread. There are three major things that worry me:

1. Digital space has been colonized (see Ulises Mejias and Nick Couldry’s definition of data colonialism) by a handful of mega companies (Google, Facebook, Amazon) and a much broader industry of players that trade on people’s behavioral data. Despite some positive steps toward establishing data-protection regimes (mainly in the EU), this genie is out of the bottle now and the profits that this industry reaps may be too enormous for it to change course any time soon. This could happen someday, but not as soon as 2035.

2. While the public is much more cognizant of the harms that major social media platforms can enable through algorithmic content moderation that can supercharge the spread of things like disinformation and hate speech online, the solutions to this problem are far from clear. Right now, three major regimes in the global south (Brazil, India and Nigeria) are considering legislation that would limit the degree to which companies can moderate their own content. Companies that want to stay competitive and continue collecting and profiting from user data will comply, and this may drive us to a place where platforms are even more riddled with harmful material than in the past and where government leaders dominate the discourse. The scale of social platforms like Facebook and Twitter is far too large – we need to work toward a more diverse global ecosystem of social platforms, but this may necessitate the fall of the giants. I don’t see this happening before 2035.

3. Although the pandemic has laid bare the inequities and inequalities derived from access to digital technologies, it is difficult to imagine our current global internet (to say nothing of the U.S. context) infrastructure morphing into something more equitable any time soon.”

David Barnhizer, a professor of law emeritus, human rights expert and founder/director of an environmental law clinic, said, “In the decades since the internet was commercialized in the mid-1990s it has turned into a dark instrumentality far beyond the ‘vast wasteland’ of the kind the FCC’s [Federal Communications Commission’s] Newton Minow accused the television industry of having become in the early 1960s. A large percentage of the output flooding social platforms is raw sewage, vitriol and lies.

“In 2018, in a public essay in which he outlined ‘Three Challenges for the Web,’ Tim Berners-Lee, designer of the World Wide Web, voiced his dismay at what his creation had become compared to what he and his colleagues sought to create. He warned that widespread collection of people’s personal data and the spread of misinformation and political manipulation online are a dangerous threat to the integrity of democratic societies. …

“He noted that the internet has become a key instrument of propaganda, and mis- and disinformation have proliferated to the point that we don’t know how to unpack the truth of what we see online, even as we increasingly rely on internet sites for information and evidence as traditional print media withers on the vine. Berners-Lee said it is too easy for misinformation to spread on the web, particularly because there has been a huge consolidation in the way people find news and information online through gatekeepers like Facebook and Google, which select content to show us based on algorithms that seek to increase engagement and learn from the harvesting of personal data.

“He wrote: ‘The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking or designed to appeal to our biases can spread like wildfire.’ This allows people with bad intentions and armies of bots to game the system to spread misinformation for financial or political gain.

“The current internet business model, with its expanding power and sophistication of AI systems, has created somewhat of a cesspool. It has become weaponized as an instrumentality of political manipulation, innuendo, accusation, fraud and lies, as well as a vehicle for shaming and sanctioning anyone seen to be somehow offending a group’s sensitivities.

“When people are subjected to a diet of such content they may become angry, hostile and pick ‘sides.’ This leads to a fragmentation of society and encourages the development of aggressive and ultra-sensitive identity groups and collectives. These tend to be filled with people convinced they have been wronged and people who are in pursuit of power to advance their agendas by projecting the image of victimhood. The consequence is that society is fractured by deep and quite possibly unbridgeable divisions. This allows the enraged, perverted, violent, ignorant and fanatical elements of society to communicate, organize, coordinate and feel that they are not as reprehensible as they appear.

“There are hundreds of millions of people who, as Tim Berners-Lee suggests, lack any filters that allow an accurate evaluation of what they are receiving and sending online. Illegitimate online speech legitimizes, for some, hate, stupidity and malice, while normalizing the absurdity and viciousness nurtured by the narrowness of these groups’ agendas and perceptions.”

A professor of sociology based in Italy predicted, “Unless we break down the workings of platform and surveillance capitalism, no positive outlook can be imagined.”

A futures strategist and lecturer noted, “There is no incentive structure that would lead to improvement in digital spaces except ones that regard the lubrication of commerce.”

An online security expert based in New York City observed, “The problem is that the financial incentives of the internet as it has evolved do not promote healthy online life, and by now there are many large entrenched corporate interests that have no incentive to support changes for the better.

“Major platforms deny their role in promoting hate speech and other incendiary content, while continuing to measure success based on ham-fisted measures of ‘engagement’ that promote a race to the bottom with content that appeals to users’ visceral emotions.

“Advertising networks are also harnessed for disinformation and incendiary speech as well as clickjacking. (One bright spot is the great work the Global Disinformation Index is doing to call out companies benefitting from this promotion of dangerous garbage.)

“The expanding popularity of cryptocurrencies, built on a tremendous amount of handwaving and popular unfamiliarity with the technologies involved, poses threats to environment and economy alike.

“We have also failed to slow the roll of technologies that profile all of us based on data gathering; China’s large-scale building of surveillance tools for its nation-state offers few escapes for its citizens, and with the United States struggling to get its act together in many ways, it seems likely more and more countries around the world will decide that China’s model works for them.

“And then there’s the escalation of cyberwarfare, and the ongoing lack of Geneva Convention-like protections for everyday citizens. I do hold out hope that governments will at least sort out the latter in the next 5-10 years.”

Sonia Livingstone, a professor of social psychology and former head of the media and communications department at the London School of Economics and Political Science, wrote, “Governments struggle to regulate and manage the power of platforms and the data ecology in ways that serve the public interest while commerce continues to outwit governments and regulators in ways that undermine human rights and leave the public playing catch-up. Unless society can ensure that tech is ethical and subject to oversight, compliance and remedy, things will get worse. I retain my faith in the human spirit, so some things will improve, but they can’t win against the power of platforms.”

Rob Frieden, retired professor of telecommunications and law at Penn State University, responded, “While not fitting into the technology determinist, optimist or pessimist camps, I worry that the internet ecosystem on balance will generate more harms than benefits. There is too much fame, fortune, power, etc., to gain in overreach in lieu of prudence. The need to generate ever-growing revenues, enhance shareholder value and pad bonuses/stock options creates incentives for more data mining and pushing the envelope negatively on matters of privacy, data security, corporate responsibility. While I am quite leery of government regulation, the almost libertarian deference facilitates the overreach.”

Courtney C. Radsch, journalist, author and free-expression advocate, wrote, “Digital spaces and digital lives are shaped by and shape the social, economic and political forces in which they are embedded. Unfettered surveillance capitalism, coupled with the proliferation of public and private surveillance – whether through pervasive facial- and sentiment-recognition systems or so-called ‘smart’ cities – is creating a new logic that governs every aspect of our lives.

“Surveillance capitalism is a powerful forcing logic that compels other systems to adapt to it and become shaped by its logic. Furthermore, the datafication of every aspect of human experience and existence, coupled with the potential for behavioral modification and manipulation, makes it difficult to see how the world will come together to rein in these forces, since doing so would require significant political will and regulatory effort to unwind the trajectory we are on. There is not political will to do so. It’s hard to imagine what an alternative future logic would look like and how it would be implemented, given that American lawmakers and tech firms are largely uninterested in meaningful regulation or serious privacy protections or oversight.

“Furthermore, surveillance – and the proliferation of facial- and sentiment-recognition systems, sophisticated spyware and tracking capabilities – is being deployed by authoritarian and democratic countries alike. So, it’s hard to see how the future does not end up being one in which pervasive surveillance is the norm and everyone is watched and trackable at all times, whether you’re talking about China and its model in Xinjiang and its export of its approach to countries around the world through the Belt and Road Initiative, or American and Five Eyes mass surveillance, or approaches like Clearview AI and so-called ‘smart cities.’ These pervasive surveillance-based approaches to improving life or safety and security are likely to expand and deepen rather than become less concerning over this time period.

“Politics is now infused by the logic of surveillance capitalism and by microtargeting, individual targeting and behavioral manipulation, and this is only going to become more prevalent as an entire industry is already evolving to serve campaigns around the world. We’re going to see insurance completely redefined from collective risk to individualized, personalized risk, which could have all sorts of implications for cost and viability.

“Digital spaces are also going to expand to include the inside of our bodies. The wearable trend is going to become more sophisticated, and implantables that offer the option to better monitor health data are unlikely to have sufficient oversight or safety, given how far ahead the market is of the legal and regulatory frameworks that will be needed to govern these developments. Constant monitoring, tracking and surveillance will be ubiquitous, inescapable and susceptible to abuse. I don’t see how the world is going to move away from surveillance when every indication is that more and more parts of our lives will be surveilled, whether it’s to bring us coupons and savings, to keep us safe or to deliver us better services.”

Nicholas Proferes, assistant professor of information science at Arizona State University, said, “There is an inherent conflict between the way for-profit social media platforms set up users to think of the platforms as ‘community’ and the way those platforms must commodify information flows and user content at a scale vastly exceeding what would normally exist in a ‘community.’ Targeted ads, deep analysis of user-generated content (such as identification of brands or goods in photos/videos uploaded by users) and facial recognition all pose threats to individuals. As more and more social media platforms become publicly traded companies (or plan to), the pressure to commodify will only intensify. Given the relatively weak regulation of social media companies in the past decade in the U.S., I am pessimistic.”

A teacher based in Oceania wrote, “It has come to the point where people are almost forced to own and maintain a smartphone in order to conduct their daily lives. I cannot conceive of any scenario in which this trajectory will improve our lives in the area of social cohesion – more likely, digital spaces will continue to be marshaled in order to divide and rule. Many people are unaware of how they are being manipulated or exploited or both. Some of them are not interested in key issues of the internet, its governance and so on. They are online as a matter of course, and their lives are dependent on connectivity. They are not interested in how data is collected or whether everything they do with IT is either already being tracked or could be given to some entity that might want to use such data for their own ends.

“The most difficult issue to be surmounted is the increasing division between ‘camps’ of users. Social media has already been seen to enhance some users’ feelings of entitlement, while others report feeling unable to speak out in digital public spaces because of the chilling effects of online policing. I believe this sort of fragmentation of society is not going to improve but will only deepen in the future – most obviously at the hands of those with digital ‘power’ (large companies such as Google, Facebook, Amazon, TikTok, etc.). It also seems as if nation-states are getting on board with widespread surveillance and lawmaking to prevent anyone from sticking their heads above the parapet and whistleblowing – we have already seen many imprisoned or harassed for reporting online. Social fragmentation is also exemplified in areas such as online dating and the fact that many people no longer even know how to simply meet others in real life due to utter dependence on their mobile technology.”

An academic based in France commented, “Human nature in each of us seeks power, money and domination, which are such strong attractors that they are very difficult to give up. Buddhists describe the futility of, and the need to renounce, the desire for possessions that is responsible for the suffering of all people – and of all the other species in an ecosystem that endures the hegemony of man on Earth. Powerful people find new ways to dominate the weakest on the internet.”

Data surveillance used against individuals’ best interests will remain an ongoing, unstoppable threat

A futurist and transformational business leader commented, “As long as digital spaces are controlled by for-profit companies, they will continue to focus on clicks and visibility. What is popular is not necessarily good for our society. And increased use of algorithms will drive increased micro-segmentation that further isolates content that is not read by ‘people like me,’ however that is defined. The only way to combat this is to:

  1. Provide consumers with full control over how their data is used both at the macro and micro levels.
  2. Provide full transparency of the algorithms that are used to pre-select content, rate consumers for eligibility for services, etc., otherwise bias will creep in and discriminate against profiles that don’t drive high-value consumption patterns.
  3. Provide reasonably priced, paid social platforms that do not collect data.
  4. Provide clear visibility to users of all data collection, uses (including to whom the personal data is being routed), and the insights derived from such data.”
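Points three and four of this list imply something quite concrete: a machine-readable disclosure that users could actually inspect. The sketch below is purely illustrative – the schema and every field name are assumptions introduced here, not an existing standard or any platform’s actual design:

```python
# A minimal, hypothetical sketch of the "full visibility" idea in points 3-4
# above: a per-category disclosure record a platform could publish and a user
# could audit. All field names are illustrative assumptions, not a standard.

from dataclasses import dataclass


@dataclass
class DataDisclosure:
    category: str                # e.g., "location history"
    purpose: str                 # why the platform collects this data
    shared_with: list[str]       # every third party the data is routed to
    derived_insights: list[str]  # inferences the platform draws from the data
    user_can_opt_out: bool       # whether collection is optional


disclosure = DataDisclosure(
    category="location history",
    purpose="ad targeting",
    shared_with=["ad-network-a", "data-broker-b"],
    derived_insights=["home neighborhood", "daily commute", "stores visited"],
    user_can_opt_out=True,
)
print(disclosure)
```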

Andy Opel, professor of communications at Florida State University, responded, “Markets only work when citizens have a range of products to choose from, and currently the major media products most people interact with online – social media, dominant news and entertainment sites, search engines – track and market their every move, selling granular, detailed profiles of the public that the people being profiled are not even allowed to access.

“Right now, there is a very active and dynamic struggle over transparency, access and personal data rights. The outcome of this struggle is what will shape the future of our digital lives. As the ubiquitous commercialization of our digital spaces continues, audiences have grown increasingly frustrated and resistant. This frustration is fueling a growing call for a political and regulatory response that defends individual rights and restores balance to a system that currently does not offer non-commercial, anonymous, transparent alternatives.”

David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic, wrote, “Despots, dictators and tyrants understand that AI and the internet grant ordinary people the ability to communicate anonymously and surreptitiously with those who share their critical views – an ability that threatens these controllers’ power and, in their eyes, must be suppressed. Simultaneously, they understand that, coupled with AI, the internet provides a powerful tool for monitoring, intimidating, brainwashing and controlling their people.

“China has proudly taken the lead in employing such strategies: the power to engage in automated surveillance, snooping, monitoring and propaganda can lead to intimidating, jailing, shaming or otherwise harming those who do not conform. This is transforming societies in heavy-handed and authoritarian ways. This includes the United States.

“China is leading the way in showing the world how to use AI technology to intimidate and control its population. China’s President Xi Jinping is applauding the rise of censorship and social control by other countries. Xi recently declared that he considers it essential for a political community’s coherence and survival that the government have complete control of the internet.

“A large critical consideration is the rising threat to democratic systems of government due to the abuse of the powers of AI by governments, corporations and identity group activists who are increasingly using AI to monitor, snoop, influence, invade fundamental privacies, intimidate and punish anyone seen as a threat or who simply violates their subjective ‘sensitivities.’ This is occurring to the point that the very ideal of democratic governance is threatened.

“Authoritarian and dictatorial systems such as China, Russia, Saudi Arabia, Turkey and others are being handed powers that consolidate and perpetuate their oppression. Recently leaked information indicates that as many as 40 governments of all kinds have gained access to the Pegasus spyware system, which allows deep, comprehensive and detailed monitoring of the electronic records of anyone, and that numerous journalists have been targeted by individual nations.

“Reports indicate that the Biden administration has forged a close relationship with Big Tech companies related to the obtaining of citizens’ electronic data and online censorship. An unfortunate truth is that those in power – such as intelligence agencies like the NSA, politicized bureaucrats, and those who can gain financially or otherwise – simply cannot resist using AI tools to serve their interests.

“The authoritarian masters of such political systems have eagerly seized on the surveillance and propaganda powers granted them by AI and the internet. Overly broad and highly subjective interpretations of what constitutes ‘hate’ and ‘offense’ are destructive grants of power to identity groups and tools of oppression in the hands of governments. They create a culture of suspicion, accusation, mistrust, resentment, intimidation, abuse of power and hostility.

“The proliferation of ‘hate speech’ laws and sanctions in the West – formal and informal, including the rise of ‘cancel culture’ – has created a poisonous psychological climate that is contributing to our growing social divisiveness and destroying any sense of overall community.”

A distinguished engineer at one of the world’s leading technology companies noted, “There are always bad players and, sadly, most digital spaces treat security as an afterthought. Attackers are getting more and more sophisticated, and AI/ML [machine learning] is being overhyped and over-marketed as a solution to these problems. Security failures and hacks are happening all over the place. But of bigger concern to me is when AI/ML does things that single out individuals incorrectly. These systems make not just mistakes but serious blunders that are often completely overlooked by the designers of the applications that use them. This is likely to have increasingly negative consequences for society in general and can be very damaging for innocent individuals who are incorrectly targeted. I foresee this turning into a legal mess moving forward.”
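The engineer’s worry about systems that single out individuals incorrectly has a well-known statistical root: when the behavior being screened for is rare, even a very accurate classifier flags mostly innocent people. The toy calculation below uses assumed numbers purely to illustrate that arithmetic; it is not drawn from any specific system.

```python
# Hypothetical illustration of the base-rate problem behind wrongful
# targeting: even a highly "accurate" classifier screening for rare bad
# actors mostly flags innocent people. All numbers below are assumptions.

prevalence = 1e-4    # assumed: 1 in 10,000 users is actually a bad actor
sensitivity = 0.99   # assumed: the model catches 99% of real bad actors
specificity = 0.99   # assumed: the model clears 99% of innocent users

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)

# Probability that a flagged user really is a bad actor (precision).
precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged users who are innocent: {1 - precision:.1%}")
# -> about 99.0%: nearly everyone the system singles out is innocent.
```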

An enterprise software expert with one of the world’s leading technology companies said, “There are two disturbing trends occurring that have the potential to dramatically reduce the benefits of the internet. The first is a trend toward centralized services controlled by large corporations and/or governments. Functions and features that are attractive to many users are being controlled more and more by fewer and fewer distinct entities. Diversity is falling by the wayside. This centralization:

  • Limits choices for everyday users.
  • Concentrates large amounts of personal information under the control of these near monopolies.
  • Creates a homogeneous environment, which tends to be more susceptible to compromise.

“The second trend is balkanization within the internet ecosystem. Countries like China and Russia are making or have made concerted efforts to build capabilities that will allow them to segment their national networks from the global internet. This trend is starting to be propagated to other countries as well. Such balkanization:

  • Reduces access to global information.
  • Creates a vector for controlling the information consumed by a country’s citizens.
  • Facilitates tracking of individuals within the country.”

An advocate for free expression and open access to the internet wrote, “While it is true that the internet and digital spaces are empowering people, governments around the world are equally threatened by the liberation the internet provides and tend to impose or adopt policies in order to control information. Increasingly, governments are weaponizing internet shutdowns, censorship, surveillance and the exploitation of data, among other tactics, to maintain control. In the next few years these practices will negatively impact democracies and provide avenues for governments to violate people’s fundamental human rights with impunity.

“Other stakeholders, including internet service providers and technology companies, are also complicit when it comes to the deterioration we are seeing in digital spaces. The recent revelation of how NSO Group’s spyware tool Pegasus was used in mass human rights violations around the world through surveillance, as well as the involvement of Sandvine in facilitating the Belarus internet shutdowns last year, bears out some of these concerns.”

A professor of digital economy and culture commented, “We are creating huge commercial organizations with large repositories of data that are not politically accountable. These organizations possess quasi-extralegal powers through data that we need to regulate now.”

A professor based in Oceania said, “I see the increasing encroachment of states through amplification of narrow political messaging, control through regulation and adoption of technical tools that are less transparent/visible. The justification of increased surveillance as keeping people safe – safe from others who might threaten local livelihoods, safe from viruses – will open up broader opportunities for state control of populations and their activities (much as 9/11 changed public comfort levels with some degree of surveillance, this will be amplified even further by the current pandemic).

“Global uncertainty and migration as a result of climate change will also accentuate inequity and opportunities to harness dissatisfaction. Increasing conservatism in response to uncertainties such as COVID-19, climate change and digital disruption, and changes in higher education toward an increased focus on job skilling rather than also developing critical thought and social empathy/citizenship (understood in the broadest sense), do not inspire much confidence in a brighter future.”

A professor of architecture and urban planning at a major U.S. university wrote, “Attention is the coin of the realm. Alas, the kinds of attention that support trustful, undivided participation in civic and institutional contexts fall by the wayside. Perhaps the most important concern is the loss of ability to debate nuances of issues, to hold conflicting and incomplete positions equally in mind, or to see deeper than the callow claims of technological solutionism. Embodied cognition and the extended mind emphasize other, more fluent, more socially situated kinds of attention that one does not have to ‘pay.’ Per Aristotle – and still acted out in the daily news cycle – embodiment in the built spaces of the city remains the main basis for thoughtful political life. Disembodiment seems unwise enough, but when coupled with distraction engineering, it becomes quite terrifying. China shows how. In America, a competent tyrant would find most of the means in place. Factor in some shocks from climate, and America’s future has never seemed so dire. (On the other hand, to do the world some good right now, today, just give an East African a phone).”

A professor of information science who is based in France observed, “Technological tools and the digital space are primarily at the service of those who master the technologies, the specifications of these tools and even the ethical charters, through the lobbying that these companies organize. … The road to hell is paved with good intentions. Digital ethical charters strongly influenced by digital companies do not make digital spaces ethical. At the beginning of the internet years (1980-1990), this digital technology was at the service of science and researchers and made for knowledge-sharing and education. Today, the internet is 95% at the service of marketing and customer profiling, and the dominant players recursively feed on profits and on the recurring influence of influencers followed on the net (most of the time because they benefit from a superficial positive image). The internet has become a place of control and surveillance over all people. It has become a threat to democracy and to government institutions, which are themselves controlled and influenced by digital companies. … A genuine internet dedicated only to art, science and education, free of advertising, should be developed.”

Toby Shulruff, senior technology safety specialist at the National Network to End Domestic Violence, wrote, “Digital spaces are the product of the interplay between social and technical forces. From the social side, the harms we’re seeing in terms of harassment, hate and misinformation are driven by social dynamics and actors that predate digital spaces. However, those dynamics are accelerated and amplified by technology. While a doctrine of hate (whether racialized, gendered or along another line) might have had a smaller audience on the fringe in previous decades, social media in particular among digital spaces has been pouring fuel on the flames, attracting a wider audience and disseminating a much higher volume of content.

“On the technological side, the business models and design strategies for digital spaces have given preference to content that generates a reaction (whether positive or negative) at a rapid pace. This discourages thoughtful reflection, fact-checking and respectful discourse. Legal and regulatory frameworks have not kept pace with the rapid emergence of digital spaces and the platforms that host them, leaving policymakers without adequate assessment or useful options for governance. Digital spaces are accelerating existing, complex, deeply entrenched inequalities of access and power rather than shaping more pro-social, respectful, cooperative forms of social interaction.

“In sum, these trends lead me to a pessimistic outlook on the quality of digital spaces in 2035. I do think that a combination of shifts in social attitudes, wider acceptance of concepts of equality and human rights, dissemination of more cooperative and respectful ways of relating with each other in person and a deliberate redesign of digital spaces to promote pro-social behavior and add friction and dissuasion of hateful and violent behavior holds a possibility for improving not only digital spaces, but human interaction IRL (in real life).”
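The “friction” Shulruff mentions need not be exotic; some platforms have already experimented with prompts that ask users to open an article before resharing it. A minimal sketch of that kind of check – the function name, parameters and dwell-time threshold are all assumptions for illustration, not any platform’s actual design – might look like this:

```python
# Hypothetical sketch of a pro-social "friction" check: before a reshare,
# ask whether the user has actually spent time with the linked content.
# The threshold and all names here are illustrative assumptions.

import time
from typing import Optional

READ_THRESHOLD_SECONDS = 10  # assumed minimum dwell time to count as "read"


def should_prompt_before_reshare(opened_at: Optional[float],
                                 share_requested_at: float) -> bool:
    """Return True when a reshare should trigger a 'read it first?' prompt."""
    if opened_at is None:
        return True  # the user never opened the article at all
    return (share_requested_at - opened_at) < READ_THRESHOLD_SECONDS


# Example: a user hits "share" two seconds after opening a link.
now = time.time()
print(should_prompt_before_reshare(opened_at=now - 2, share_requested_at=now))
# -> True: the platform interposes a gentle confirmation step.
```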

A number of respondents did point out the positives of data applications. One was Brock Hinzmann, co-chair of the Millennium Project’s Silicon Valley group and a 40-year veteran of SRI International. He wrote, “Public access to online services and e-government analysis of citizen input will continue to evolve in positive ways to democratize social function and to increase a sense of well-being. The Internet of Things will obviously vastly increase the amount of highly detailed data available to all. Analytics (call it AI) will improve the person-system interface to help individuals understand the veracity of the information they see and to help the system AI understand what the people are experiencing. The formation of small businesses and other socially beneficial organizations will become easier and more sustainable than it is today. Nefarious users, criminals and social miscreants will continue to be a problem; this will require continuous upgrades in security software.”

Theresa Pardo, senior fellow at the Center for Technology in Government at University at Albany-SUNY, said, “There is an increasing appreciation of the need for sophisticated data management practices across all sectors. Leaders at all levels appear to have moved beyond the theoretical notion that data-informed decision making can create public value; they are now actually seeking more and more opportunities to draw on analytics in decision making. They are, as a consequence, becoming more aware of the pervasive issues with data and the need for sophisticated data governance and management capabilities in their organizations. As they also seek to fully integrate programs and services across the boundaries of organizations at all levels and sectors – building, among other assets, data collaboratives – they are recognizing the need for leadership in the management of data as a government asset.”

Haters, polarizers and jerks will gain more power

The human instinct toward self-interest and fear of “the other” or the unfamiliar has led people to commit damaging acts in every social space throughout human history. One difference now, though, is that digital networks enable instantaneous global reach at low cost while affording anonymity to spread any message. Many expert respondents noted that digital networks are being wielded as weapons of personal, political and commercial manipulation, innuendo, accusation, fraud and lies, and that they can easily be leveraged by authoritarian interests and the general public to spread toxic divisiveness.

Chris Labash, associate teaching professor of information systems management at Carnegie Mellon, responded, “My fear is that negative evolution of the digital sphere may be more rapid, more widespread and more insidious than its potential positive evolution.

“We have seen, 2016 to present especially, how digital spaces act as cover and as a breeding ground for some of the most negative elements of society, not just in the U.S., but worldwide. Whether the bad actors are from terror organizations or ‘simply’ from hate groups, these spaces have become digital roach holes that research suggests will only get larger, more numerous and more polarized and polarizing. That we will lose some of the worst and most extreme elements of society to these places is a given.

“Far more concerning is the number of less-thoughtful people who will become mesmerized and radicalized by these spaces and their denizens: people who, in a less digital world, might have had more willingness to consider alternate points of view. Balancing this won’t be easy; it’s not simply a matter of creating ‘good’ digital spaces where participants discuss edgy concepts, read poetry and share cat videos. It will take expansive, persuasive strategies, incentives and dialogue to attract those people and subtly educate them in approaches that separate real, accurate information from that which fuels mistrust, stupidity and hate.”

Adam Clayton Powell III, executive director of the Election Cybersecurity Initiative at the University of Southern California, commented, “While I wish this were not the case, it is becoming clear that digital spaces, even more than physical spaces, are becoming more negative. Consider as just one example the vulnerability of female journalists, many of whom are leaving the profession because of digital harassment and attacks. In Africa, where I have worked for years, this is a fact of life for anyone opposing authoritarian regimes.”

Danny Gillane, an information science professional, wrote, “People can now disagree instantaneously with anybody, with the bravery of being out of range and, in many cases, of anonymity. Digital life is permanent, so personal growth can be erased or ignored by an opponent’s digging up some past statement to counter any positive change. Existing laws that could be applied to large tech companies, such as antitrust laws, are not being applied to these companies or their CEOs. Penalties imposed in the hundreds of millions of dollars or euros are a drop in the bucket to the Googles of the world.

“Relying on Mark Zuckerberg to do the right thing is not a plan. Relying on any billionaire or wannabe billionaire to do the right thing to benefit the planet as opposed to gaining power or wealth is not a plan. It is a fantasy.

“I think things could change for the better but they won’t. Elected officials, especially in the United States, could place doing what’s best for their constituencies and the world over power and reelection. Laws could be enforced to prevent the consolidation of power in the few largest companies. Laws could be passed to regulate these large companies. People could become nicer.”

William L. Schrader, board member and advisor to CEOs, previously co-founder of PSINet Inc., said, “Democracy is under attack, now and for the next decade, with the help and strong support of all digital spaces. The basic problem is ignorance (lack of education), racism (anti-fill-in-the-blank) and the predilection of some segments of society to listen to conspiracy theories A through Z and believe them (stupid? or just a choice they make due to bias or racism?).

“To quote the movie ‘The Hunt for Red October,’ ‘We will be lucky to live through this’ means more now than it did before the 2016 U.S. election. I think things could change for the better, but not likely before 2035. The delay is due to the sheer momentum of the social injustice we have seen since humankind populated the earth. That, plus the economy- and life-extinguishing climate change that has pitted science against big money, rich against poor and, eventually, the low-land countries against the highlanders.

“Hell is coming, and it’s coming fast with climate change. Politics will have no effect on the climate but will on the money. The rich get richer, and the poor get meaner. Riots are coming, and not just at the U.S. Capitol. Meanwhile, the digital space will remain alive – secure in parts but mostly insecure. It will assist, but not fully replace, the traditional media.”

An Ivy League professor of science and technology studies responded, “Overall, voices of criticism and disenchantment are rising, and one can hope for a reckoning. The questions remain: ‘How soon, and what else will have become entrenched by then?’ Things don’t look good. There is near-monopolistic control by a few firms, there are complex and opaque privacy protections and then there is the addictive power of social media and an increasing reliance on digital work solutions by institutions that are eager to cut back on the cost and complications of having human employees.

“Things might get somewhat better. Even a single case can resonate, like Google vs. Spain, which had ripple effects that can be seen in the GDPR [General Data Protection Regulation in Europe] and California’s privacy law. But people’s understanding of what is changing – including impacts upon their own subjectivity and expectations of agency – is not highly developed. The buzz and hype surrounding Silicon Valley has tamped down dissent and critical inquiry to such an extent that it will take a big upheaval – bigger than the Jan. 6, 2021, insurrection – to fundamentally alter how people see the threats of digital space.”

A professor of information technology and public policy based at a major U.S. technological university predicted, “Similar to the likely outcome for humanity of the doleful predictions we are seeing regarding climate change, the deleterious influences on society that we have put in place through novel digital technologies could keep gaining momentum until they reach a point of irreversibility – a world with no privacy, of endemic misinformation, and of precise, targeted, intentional manipulation of individual behavior that exploits and leverages our own worst instincts. My hope (it’s not an expectation) is that recognition of the negative effects of human behavior in digital spaces will lead to a collective impetus for change and, specifically, for regulatory interventions that would promote said change (in areas including privacy, misinformation, exploitation of vulnerable communities and so forth). It is entirely possible in fact that the opposite will happen.”

A futurist based in North America commented, “I anticipate plenty of change in digital life, however, not so much in human beings. Almost all new and improved technologies can, and will, be used for bad as well as good ends. Criminality and the struggle for advantage are always with us. If we can recognize this and be willing to explore, understand and regulate digital life and its many manifestations, we should be okay.”

The director and co-founder of a nonprofit organization that seeks social solutions to grand challenges responded, “We seem woefully unconcerned about the fact that we are eating the seed corn of our civilization. I see no sign that this will change at the moment, though we’ve had civic revivals before and one may be brewing. Our democracy, civic culture and general ability to solve problems together is steadily and not so slowly being degraded in many ways, including through toxic and polarizing ‘digital spaces.’ This will make it difficult to address this issue, itself, not to mention any challenge.”

Humans can’t keep up with the speed and complexity of digital change

A share of these respondents make the case that one of the largest threats to a better future arises from the fact that digital systems are too large, too fast, too complex and constantly morphing. They say this accelerating change cannot be reined in, and new threats will continue to emerge as tech advances. They say the global network is too widespread and distributed to possibly be “policed” or even “regulated” effectively, that humans and human organizations as they are structured today cannot address this appropriately.

Alexa Raad, chief purpose and policy officer at Human Security and host of the TechSequences podcast, commented, “Transformation and innovation in digital spaces and digital life have often outpaced the understanding and analysis of their intended or unintended impact and hence have far surpassed efforts to rein in their less-savory consequences.”

A professor of digital economics explained, “There is often a time lag between the appropriation of technologies and their ramifications for social life, public/private life, ethics and morality. Because of this lag between the point at which extensive usage is reached and the recognition of moral/social consequences, and because there is a human dimension – along with its interplay with a capitalist agenda – in the appropriation of technologies, we will often remedy social and ethical ills only after a period of time has lapsed. Our evaluations of technologies at a social and ethical level are not in sync with the arrival and uses of technologies as a platform for economic enterprise and the glorification of these by nation-states and neoliberal economies. The ascendancy of data empires attests to this.”

The founding director of an institute for analytics predicted, “The changes that need to be made – which reasonable people would probably debate – won’t matter, because they won’t be made soon enough to stop the current trajectory. Technology is moving too fast, and it is uncontrollable in ways that will be increasingly destructive to society. Still, it is time for the internet idealists to leave the room and for a serious conversation to begin about regulating digital spaces and fast, otherwise we may not make it to 2035. Digital spaces have to be moved from an advertising model into either a subscriber model or a utility model with metered distribution. Stricter privacy laws might kill the advertising model instantly.”

Rick Doner, emeritus professor, wrote, “My concern is that, as so often happens with innovation/technology, changes in the ‘marketplace’ – whether financial, commercial or informational – outpace the institutions that theoretically operate to direct and/or constrain the impact of such innovations. I view digital developments almost as a sort of resource curse. There are, to be sure, lots of differences, but we know that plentiful, lucrative natural resource endowments tend to be highly destructive of social stability and equity when they emerge in the absence of ‘governance’ institutions – and here I’m talking not just about formal rules of government but also institutions of representation and accountability. And we now have a vicious cycle in which digital innovations are undermining both the existing institutions (including informal trust) and the potential for stronger institutions down the road.”

Richard Barke, associate professor in the School of Public Policy at Georgia Tech, commented, “Laws and regulations might be tried, but these change much more slowly than digital technologies and business practices. Policies have always lagged technologies, but the speed of change is much greater now.”

Ian O’Byrne, an assistant professor of literacy education at the College of Charleston, responded, “One of the biggest challenges is that the systems and algorithms that control these digital spaces have largely become unintelligible. For the most part, the decisions that are made in our apps and platforms are only fully understood by a handful of individuals. As machine learning continues to advance and corporations rely on AI to make decisions, these processes will become even less understood by the developers in control, let alone the average user interacting in these spaces.”

Oscar Gandy, an emeritus scholar of the political economy of information at the University of Pennsylvania, said, “Much of my pessimism about the future of digital spaces is derived from my observations of recent developments and from the projections of critical observers who foresee further declines in these directions. While there are signs of growing concern over the power of dominant firms within the communications industry – and suggestions about the development of specialized regulatory agencies with the knowledge, resources and authority to limit the development and use of data and analytically derived inferences about individuals and members of population segments or groups – I do not have much faith in the long-term success of such efforts, especially in the wake of more widespread use of ever-more-sophisticated algorithmic technologies to bypass regulatory efforts.

“There is also a tendency for this communicative environment to become more and more specialized, or focused upon smaller and smaller topics and perspectives, a process that is being extended through algorithmically enabled segmentation and targeting of information based upon assessments of the interests and viewpoints of each of us.

“In addition, I have been struck by the nature of the developments within the sphere of manipulative communication efforts, such as those associated with the so-called dark psychology, or presentational strategies based upon experimental assessments of different ways of presenting information to increase its persuasive impact.”

A North American entrepreneur wrote, “Technology is advancing at a rapid pace and will continue to outpace policy solutions. I am concerned that a combination of bad actors and diminishing trust in government and other institutions will lead to the continued proliferation of disinformation and other harms in digital spaces. I also am concerned that governments will ramp up efforts to weaponize digital spaces. The one change for the better is that the next generation of users and leaders may be better equipped to counter the negative trends and drive improvements from a user, technical and governance perspective.”

Barry Chudakov, founder and principal at Sertain Research, proposed that a “Bill of Integrities” might be helpful in adjusting everything to the speed of digital. He observed, “There is one supremely important beneficial role for tech leaders and/or politicians and/or public audiences concerning the evolution of digital spaces. Namely, understanding the drastically different logic digital spaces represent compared to the traditional logic (alphabet and text-centric logic) that built our inherited traditional physical spaces. Our central institutions of school, church, government and corporation emerged from rule-based, sequential alphabetic logic over hundreds of years; digital spaces follow different rules and dynamics.

“A central issue fuels, possibly even dwarfs, that consideration: We are in the age of accelerations. Events and technologies have surpassed – and will soon far surpass – political figures’ ability to understand them and make meaningful recommendations for improvement or regulation.

“In the past, governments had a general sense of a company’s products and services. Car manufacturers made cars with understandable parts and components. But today, leading technology firms are advancing by inventing and applying new, esoteric, little-understood (except by creators and a handful of tech commentators) technologies whose far-reaching consequences are either unknown, unanticipated or both. The COVID-19 pandemic has revealed colossal ignorance among some politicians regarding the basics of public health.

“What wisdom could these same people bring to cyber hacking? To algorithm-mediated surveillance? To supporting, enhancing and regulating the metaverse? At its most basic, governance requires a reasonable understanding of how a thing works. Who in government today truly understands quantum computing? Machine intelligence and learning? Distributed networks? Artificial intelligence?

“We now need a technology and future-focused aristos: a completely neutral, apolitical body akin to the Federal Reserve focused solely on the evolution of digital spaces. In lieu of an aristos, education will need to refocus to comprehend and teach new technologies and the mounting ramifications of these technologies – in addition to teaching young minds how perceptions and experiences change in evolving digital spaces.

“Digital spaces expand our notions of right and wrong; of acceptable and unworthy. Rights that we have fought for and cherished will not disappear; they will continue to be fundamental to freedom and democracy. But digital spaces and what Mary Aiken called the cyber effect create different, at times alternate, realities. Public audiences have a significant role to play by expanding our notion of human rights to include integrities.

“Integrity – the state of being whole and undivided – is a fundamental new imperative in emerging digital spaces which can easily conflate real and fake, fact and artifact. Identity and experience in these digital spaces will, I believe, require a Bill of Integrities which would include:

  • Integrity of Speech | An artifact has the right to free expression as long as what it says is factually true and is not a distortion of the truth.
  • Integrity of Identity | An artifact must be, without equivocation, who or what it says it is. If an artifact is a new entity it can identify accordingly, but pretense to an existing identity other than itself is a violation of identity sanctity.
  • Integrity of Transparency | An artifact must clearly present who it is and with whom, if anyone, it is associated.
  • Integrity of Privacy | Any artifact associated with a human must protect the privacy of the human with whom the artifact is associated and must gain the consent of the human if the artifact is shared.
  • Integrity of Life | An artifact which purports to extend the life of a deceased (human) individual after the death of that individual must faithfully and accurately use the words and thoughts of the deceased to maintain a digital presence for the deceased – without inventing or distorting the spirit or intent of the deceased.
  • Integrity of Exceptions | Exceptions to the above Integrities may be granted to those using satire or art as free expression, providing that art or satire is not degraded for political or deceptive use.”
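As a thought experiment, these integrities could be imagined as verifiable metadata that travels with each digital artifact. The sketch below is hypothetical in every detail – the class, fields and function are assumptions introduced only to make the list above concrete, not a proposed standard:

```python
# A purely hypothetical sketch of the Bill of Integrities as machine-readable
# metadata attached to a digital artifact. Every name here is illustrative.

from dataclasses import dataclass


@dataclass
class IntegrityManifest:
    claimed_identity: str       # Integrity of Identity: who the artifact says it is
    affiliations: list[str]     # Integrity of Transparency: associated parties
    speech_fact_checked: bool   # Integrity of Speech: claims verified as factual
    subject_consented: bool     # Integrity of Privacy: the associated human consented
    satire_or_art: bool         # Integrity of Exceptions: declared expressive work


def violates_identity_integrity(manifest: IntegrityManifest,
                                verified_identity: str) -> bool:
    """Flag an artifact that claims to be an existing entity other than
    itself - a violation of identity sanctity - unless it is declared
    satire or art (the Integrity of Exceptions)."""
    if manifest.satire_or_art:
        return False
    return manifest.claimed_identity != verified_identity


# Example: a synthetic persona impersonating a real news anchor is flagged.
bot = IntegrityManifest("RealNewsAnchor", ["network-x"], False, True, False)
print(violates_identity_integrity(bot, verified_identity="bot-7f3a"))  # True
```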

Reform cannot arise because nation-states are weaponizing digital tools

A share of the respondents who worried about the unmanageable speed of change said they are concerned about the weaponization of digital tools by nation-states. There is no incentive to improve digital spaces, according to these experts, when nations use them as part of their global and domestic policies. A digital arms race among nations will encourage the use of digital tools to mount physical and social attacks, they claim. Some respondents predicted that technological advances will always have humans playing a game of catch-up.

David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic, wrote, “We are in a new kind of arms race we naively thought was over with the collapse of the Soviet Union. We are experiencing quantum leaps in AI/robotics capabilities. Sounds great, right? The problem is that these leaps include vastly heightened surveillance systems, amazing military and weapons technologies, autonomous self-driving vehicles, massive job elimination, data management and deeply penetrating privacy invasions by governments, corporations, private groups and individuals.

“The Pentagon is investing $2 billion in the Defense Advanced Research Projects Agency (DARPA) ‘AI Next Campaign,’ focusing on increased AI research and development. The U.S. military is committed to creating autonomous weapons and is in the early stages of developing weapons systems intended to be controlled directly by soldiers’ minds.

“Significant AI/robotics weaponry and cyber warfare capabilities are being developed and implemented by China and Russia, including autonomous tanks, planes, ships and submarines, tools that can also mount dangerous attacks on nation-states’ grids and systems.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, said, “The tech giants can point to the more-ruthless competitors out there – Russia and China as a start – to stoke further fear of any sort of government intrusion as hobbling a global competition with such high stakes (i.e., superpower dominance).”

Zak Rogoff, a research analyst at the Ranking Digital Rights project, said, “New problems will keep appearing at the margins with newer tech. Social media and driverless cars, for example, have been good for most people most of the time as they have emerged, but they eventually cause unforeseen systemic problems. I suspect we’ll see a continuing cycle where, as more elements of life become at least partially controlled by machines, new problems arise and are later at least partially addressed. By 2035 there will probably be newly popular forms of always-on wearables that interface with our sensorium, or even brain-computer interfaces, and these will be the source of some of the most interesting problems.”

Carl Frey, director of the Future of Work project at Oxford University, responded, “While I am optimistic about the long run, I think it will take some time to reverse the political polarization that we are currently seeing. In addition, I worry about the surveillance state that China is building and exporting.”

Sam Punnett, retired owner of FAD Research, commented, “It’s difficult to read a book such as Nicole Perlroth’s ‘This is How They Tell Me the World Ends’ [a book about the cyberweapons market] and then think we are not doomed. It’s like trying to negotiate a mutually-assured-destruction model with several dozen nation-states holding weapons of mass destruction. I’d guess many Western legislators aren’t even aware of the scope of the problem. Any concerns about social media and consumer information are trivial compared to the threats that exist for intellectual property and intelligence theft and damage to infrastructure.”

Dweep Chand Singh, professor and director/head of clinical psychology at Aibhas Amity University in India, predicted, “Communication via digital mode will advance, evolving to an addition of non-physical means, i.e., brain-to-brain transmission/exchange of information. Biological chips will be prepared and inserted in people’s brains to facilitate non-physical communication. Artificial neurotransmitters will be developed in neuroscience labs for an alternative mode of brain-to-brain communication.”

4. Work is needed now to prepare for a mind-bending future

Several of these experts wrote about the urgent need to act now to establish systems and processes that can help society cope with far more significant modes of change – currently in their early days – that are likely to alter almost everything in the near-to-far future.

They say it is important to address these likely possibilities today in order to prepare for and avoid the worst possible outcomes.

Imagining humans’ positive transition to a mind-blowing new paradigm for humanity

Barry Chudakov, founder and principal at Sertain Research, said, “Taking and evolving simulation and virtual representation from the gaming world, digital spaces will morph from apps and social media platforms into mirror worlds – the metaverse and ‘the third platform, which will digitize the rest of the world … all things and places will be machine-readable, subject to the power of algorithms,’ as Kevin Kelly wrote in Wired.

“Features of that logic include:

  • Digital twins (operating in digital spaces) create a doubling effect of everything and everyone.
  • Digital spaces’ mirror worlds start by complementing, then competing with – or replacing – reality (‘Truman Show’ syndrome).
  • Digital spaces evolve from solely a screen experience to more immersive, in-body, in-place experiences.
  • Augmented reality adds dimension to any experience within digital spaces.
  • Immersion in digital spaces challenges (devours) human attention.
  • Time compresses to Now, aka eternal nowness.
  • Identity is identity in the mirror (compounded exponentially by the implementation of digital spaces as mirror worlds).
  • Self goes digital: Digital spaces become the emerging venue for the presentation of self; I am who I am in digital spaces.
    • Identity is thereby multiple and fluid: Roles, sexual orientation and self-presentation evolve from solely in-person to in-space.
  • Privacy in digital spaces becomes a paid service with multiple layers and options like cable TV or streaming services (as tracking and data identification are built into all objects and all things start to think).
  • Everything (action, reaction, statement, response, movement) generates data, which exponentially increases the information barrage; the outmoded notions of memorization and retention are replaced with ambient findability.
  • Wholes become miscellaneous as everything is turned into miscellaneous data.
  • Navigation replaces rules.
  • Original and copy conflate, objects and experiences become duplicative, as digital spaces become mirror worlds and mirror worlds become the metaverse.
  • Cut and paste, copy and paste, are no longer merely computer commands, they are behaviors – the prevailing psychology of digital spaces.
  • Robots engage with the mirror world as augmented eyes and ears: “reality fused with a virtual shadow” (Kevin Kelly).
  • The need for interoperability and portability among digital spaces generates mandates for standards of governance.

“Market dynamics will force these digital spaces to become more ‘sticky.’ Commerce – making money – will drive this dynamic. To make more money, to get more people to spend more, any surviving digital space will decide it must become stickier. If you doubt that just watch or talk to teenagers playing video games. Video games are highly involving, addictive, engendering the ‘I don’t want to leave’ dynamic. That realization will not be lost on the designers of future digital spaces.

“Digital spaces will become the addictive video game/cellphone of the future. They promise information about any and everything, so we will be always plugged in and the spaces will always be updating, morphing, evolving. Soon – as users now do with cellphones – we will ignore conventional reality and/or people in that reality for life in the digital space. This is the first critical step in digital spaces competing with, and often replacing, conventional reality.

“To manage the assault of multiple simultaneous changes – new realities from emerging digital spaces – we will be forced to find a new language of ethics, a new set of guidelines for acting and operating in digital spaces. Even now, the National Institute of Standards and Technology (NIST) – part of the U.S. Department of Commerce – is asking the public for input on an AI risk-management framework. The organization is in the process of developing this framework as a way to help ‘manage the risks posed by artificial intelligence.’ This is an initial step in what will be a continuing process of understanding and trying to create reasonable protections and regulations.

“In 2035, many will see the merger of physical and digital worlds as an encroachment on their worldview. At the same time, ease of use and the integration of physical and digital realms will improve many experiences and transactions. For example, the automobile will become a significant digital space. One notable improvement will be a reduction in the 38,000 deaths that occur annually in U.S. traffic accidents. As driverless cars become mobile digital spaces, with end-to-end digital information streaming in and out of each car, our mobile digital experience will reduce accidents, deaths and congestion.

“The most noticeably different aspect of digital life for the average user in 2035 will be a more seamless integration of tools and so-called reality. Importing the dynamics of simulation and virtual representation from the gaming world, we will swallow the internet; digital spaces will move inside us.

“Time and distance will effectively vanish, whether you are implementing augmented reality, virtual reality or a mirror world in your interaction. Here is where I am, where I can find you or any other – so there is only here. There is only now. The proscenium arch and backstage of ‘The Truman Show’ will have disappeared.

“What is now known as ‘stickiness’ – the ways in which the design of a digital space encourages more engagement – will become full immersion. The outside of any digital space will be harder to fathom, because physical spaces will include adjunct digital spaces (just as every business and person has a URL now) – and just as people today pore over their phones and ignore cars, pedestrians and loved ones, tomorrow’s users will ignore conventional reality in favor of the digital spaces embedded in it.

“By 2035, digital spaces will become so immersive that we will have a problem. It will be extremely difficult to get people to disengage with those digital spaces. We will all become video gamers, hooked on the mirror world of the world.”

What about the potential impact of superintelligence, and why might it be important now?

Some respondents to this canvassing voiced concerns about the potential issues that may arise if superintelligence is developed in the years following humanity’s shift into more-immersive virtual and augmented reality spaces. Of course, the estimated timeline for this possible arrival varies, and some experts still doubt it will transpire at all. But the ranks of respected scientists and innovative entrepreneurs who have expressed both hopes and worries for humanity due to the potential rise of superintelligence have grown over the past decade. They have included Stephen Hawking, Stuart Russell, Bill Gates, Ray Kurzweil, Elon Musk and Masayoshi Son.

These leaders have said they expect that the recursive self-improvement of artificial intelligence will completely transform the world, possibly mostly for the better, possibly for the worse. They say it is obvious that these concerns are important to address today due to the ways in which recent rapid technological advances have already altered the world in significant ways in an extremely brief period of human history.

The following thoughts come from two experts in this canvassing who wrote deeply about that potential future in their responses, noting that this is why they believe that the time for smarter, forward-thinking technology design, governance decisions and societal evolution is today.

People must work much harder now to prepare for a much-different future

Jerome Glenn, co-founder and CEO of The Millennium Project, predicted, “The race is on to complete the global nervous system of civilization and make supercomputing power available to everyone. Another race is to develop artificial general intelligence (AGI), which some say might never get developed while others think it could be possible within 10 to 15 years; if so, its impact will be far beyond artificial narrow intelligence (ANI). Investments in AGI are forecast to reach $50 billion by 2023.

“The human brain projects of the U.S., EU, China and other countries – plus corporate ANI and AGI research – should lead to augmented individual human and collective intelligence. We are moving from the Information Age into the Conscious-Technology Age, which will force us to confront fundamental questions about life as a new kind of civilization emerges from the convergence of two megatrends. First, humans will become cyborgs, as our biology becomes integrated with technology. Second, our built environment will incorporate more artificial intelligence.

“Conscious technology raises profound dangers, including artificial intelligence rapidly outstripping human intelligence when it becomes able to rewrite its own code and individuals become able to make and deploy weapons of mass destruction. Minimizing the dangers and maximizing opportunities – such as improving governance with the use of collective intelligence systems, making it easier to prevent and detect crime and match needs and resources more efficiently – will require that we actively shape the evolution of conscious-technology.

“Like every other revolution in human history, from agriculture to industry to the internet, the arrival of conscious technology will have both good and bad effects. Can we think deeply and wisely about the future we want while we still have time to shape the effects of conscious technology?

“The age of conscious technology is coming as two mega technology trends converge: Our built environments will become so intelligent that they seem conscious, and humans will become so integrated with technology that we become cyborgs. Yes, humans will become cyborgs as our biology becomes integrated with technology. We are already microminiaturizing technology and putting it in and on our bodies. In the coming decades, we will augment our physiological and cognitive capacities as we now install new hardware and software on computers. This will offer access to genius-level capabilities and will connect our brains directly to information and artificial intelligence networks.

“Our built environment will incorporate more artificial intelligence. With the Internet of Things, we are integrating chips and sensors into objects, giving them the impression of consciousness – as when we use voice commands to control heating, lighting or music in our homes. As our increasingly intelligent environments connect with our cyborg future, we will experience a continuum of our consciousness and our technology.

“As humans and machines become linked more closely, the distinction between the two entities will blur. Conscious technology will force us to confront fundamental questions about life. All ages and cultures have had mystics who have been interested in consciousness and the meaning of life, as well as technocrats who have been interested in developing technology to improve the future. All cultures have a mix of the two, but the representatives of each viewpoint tend to be isolated from and prejudiced toward each other.

“To improve the quality of the Conscious-Technology Age, the attitudes of mystics and approaches of technocrats should merge. For example, we can think of a city as a machine to provide electricity, water, shelter, transportation and income; or we can think of it as a set of human minds spiritually evolving and exciting our consciousness. Both are necessary. Without the technocratic management, the city’s physical infrastructure would not work; without the spiritual element, the city would be a boring place to live. Like the musician who reports feeling his consciousness merge with the music and his instrument to produce a great performance, one can imagine the future ‘performance’ of a city, or of civilization as a whole, as a holistic synthesis experience of the continuum between technology and consciousness.

“History teaches us that civilizations need a kind of ‘perceptual glue’ to hold them together, whether in the form of religious myths or stories about national origins or destinies. The idea of a feedback loop between consciousness and technology moving toward a more enlightened civilization offers a perceptual glue to help harmonize the many cultures of the world into a new global civilization.

“There are profound dangers along the path toward a conscious-technology civilization. At some point, it is likely that development will start to happen very quickly. When artificial intelligence is able to rewrite its own code, based on feedback from global sensor networks, it will be able to grow more intelligent from moment to moment. It could evolve beyond our control in either a positive or a destructive fashion. The question is: By exploring scenarios about the possible future evolution of artificial intelligence, can we make wise decisions now about what kinds of new software and capabilities to create?

“As cognition-enhancing technology develops, we will have a world full of augmented geniuses. With the new perceptual, technological and artificial biological powers at their disposal, a single individual could be able to make and deploy weapons of mass destruction – a prospect known as SIMAD, or ‘Single Individual Massively Destructive.’ We already have structures, albeit imperfect, to monitor and prevent the mass-destructive capacity of nation-states and groups – what structures could prevent the threat of SIMADs?

“Connecting human brains directly to information and artificial intelligence networks raises the question of whether minds could be hacked and manipulated. How can we minimize the potential for information or perceptual warfare and its potential consequence of widespread paranoia?

“Accelerated automation will render much of today’s work unnecessary. Driverless vehicles could remove the need for taxi, bus and truck drivers. Personal care robots could take over many functions of nurses and care workers. Artificial intelligence could make humans redundant in professions such as law and research. Will conscious technology create more jobs than it replaces? Or is massive structural unemployment inevitable, requiring the development of new concepts of economics and work?

“If we think ahead and plan well, the conscious-technology civilization could become better than we can currently imagine. Governance could be vastly improved by collective intelligence systems; it could become easier to prevent and detect crime; needs and resources could be matched more efficiently; opportunities for self-actualization could abound; and so on.

“We must think through the possibilities of the Conscious-Technology Age today in order to shape its evolution to create the future civilization we desire.”

The grandest challenge humans face may be the emergence of a dangerous alternative species

David Barnhizer, a professor of law emeritus, human rights expert and founder/director of an environmental law clinic, wrote, “The ‘bad’ in celebrating the undeniable ‘good’ that will flow from further developments in AI and robotics is that we can move too fast and be blind to the ‘bad.’ We face extremely serious challenges in our immediate and near-term future. Those challenges include social disintegration, large-scale job loss, rising inequality and poverty, increasingly authoritarian political systems, surveillance, loss of privacy, violence and vicious competition for resources.

“With the possibility of social turmoil in mind, former Facebook product manager Antonio Garcia Martinez quit his job and moved to an isolated location because of what he saw as the relentless development of AI/robotic systems that will take over as much as 50% of human work in the next 30 years in an accelerating and disruptive process. Martinez concluded that, as the predicted destruction of jobs increasingly comes to pass, it will create serious consequences for society, including the probability of high levels of violence and armed conflict as people fight over the distribution of limited resources.

“Tesla’s Elon Musk describes artificial intelligence development as the most serious threat our civilization faces. He is on record saying that the human race stands only a 5% to 10% chance of avoiding being destroyed by killer robots. Max Tegmark, physics professor at MIT, has also warned that AI/robotics systems could ‘break out’ of human efforts to control them and endanger humanity. Tommi Jaakkola, an MIT AI researcher, described the dilemma, explaining: ‘If you had a very small neural network [deep learning algorithm], you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.’ He added, ‘We can build these models, but we don’t know how they work.’ This is the case at a point that is still quite early in the development of AI.

“If Masayoshi Son, CEO of SoftBank, is right, the AI future is a great danger. Like anyone else trying to gain a sense of our future, we simply don’t know what the future holds, but we are playing with fire and beset by unbounded hubris and tunnel vision. Like opioid and heroin addicts, it seems that we simply ‘can’t help ourselves’ and will innovate, create and invent right up to the point when we aren’t in control. Just because you can do something does not mean that you should. … Stephen Hawking warned: ‘I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. … Computers can, in theory, emulate human intelligence – and exceed it. … And in the future, AI could develop a will of its own – a will that is in conflict with ours. In short, the rise of powerful AI will be either the best, or the worst, thing ever to happen to humanity.’

“Hawking is not alone. Oxford University philosopher Nick Bostrom focuses on the development of artificial intelligence systems. Although he says he hopes that future will be quite positive, he has raised the possibility that fully developed AI/robotic systems may be the final invention of the human race, indicating we are ‘like small children playing with a bomb.’ The developments in AI/robotics are so rapid and uncontrolled that Hawking posited that a ‘rogue’ AI system could be difficult to defend against, given humans’ greedy and stupid tendencies.

“Already today we are inundated with deceptive AI propaganda ‘bots’ and subjected to continuous invasions into our most private and personal information. Big data mining is being used by businesses and governments to create virtual simulacra of us so that they can more efficiently anticipate our actions, preferences and needs. This is aimed at manipulating and persuading us to act in ways that advance agendas and deliver advantages. If people such as Hawking, Tegmark, Bostrom and Musk are even partially correct in their concerns, we are witnessing the emergence of an alternative species that could ultimately represent a fundamental threat to the human race.”

5. Closing thoughts

The following respondents wrote contributions that bring together a holistic look at the issues at hand, trying to place them in human and historical context.

Peter B. Reiner, co-founder of the National Core for Neuroethics at the University of British Columbia, wrote, “It is challenging to make plausible predictions about the impact that digital spaces will have upon society in 2035. For perspective, consider how things looked 14 years ago when the iPhone was first introduced to the world. A wondrous gadget it was, but nobody would have predicted that 14 years later nearly half the population of the planet would own a smartphone, much less how reliant upon them people would become.

“With that disclaimer in mind, I expect that digital life will have both negative and positive effects in the year 2035. Among the positives, I would include automation of routine day-to-day tasks, improved algorithmic medical diagnoses and the availability of high-quality AI assistants that take over everything from making reservations to keeping track of personal spending. The worry is that such cognitive offloading will lead to the sort of corpulent torpor envisioned in the animated film ‘Wall-E,’ with humans increasingly unable to care for themselves in a world where the digital takes care of essentially all worldly needs.

“Yet such a dystopian outcome may be unlikely. Viktor Frankl vividly describes the human need for finding meaning in one’s life, even when the abyss seems near at hand. Faced with the manifold offerings of the digital world, many will look for meaning in creative tasks, in social discourse and perhaps even in improving the intolerable state of political affairs today. While some may blame digital spaces for providing a breeding ground for divisive political views, what we are witnessing seems more an amplification of persistent prejudice by people who are, for the first time in generations, feeling less powerful than their forebears.

“The real problem is that our digital spaces cater to assuaging the ego rather than considering what makes for a life well-lived. In the current instance, social media, driven by the dictates of surveillance capitalism, is largely predicated on individuals feeling better (for a few seconds) when someone notices them with a like or a mention. Harder to find are digital spaces that foster the sort of deep interpersonal interaction that Aristotle famously extolled as friendships of virtue. The optimistic view is that the public will tire of the artifice of saccharine digital interactions and gravitate toward more meaningful opportunities to engage with both human and artificial intelligence. The pessimistic view is that, well, I prefer not to go there.”

Michael Kleeman, senior fellow at the University of California-San Diego, commented, “The digital space has radically altered the costs of information distribution, including the costs of misinformation. This economic reality has created and will likely continue to create a cacophony with no filters, and it will likely cause people to continue to move toward a few sources that echo their beliefs and simplify what are inherently complex issues. Threats to civil society, democracy and physical and mental health are very real and growing. The only hope I feel is a move toward more local information, where people can ‘test’ the digital data against what they see in the real world. But even that is complex and difficult, as partial truths can masquerade as more complete information and garner support for a distorted position. I am, sadly, not hopeful.”

Kenneth A. Grady, a lawyer and consultant working to transform the legal industry, said, “Could digital spaces and digital life be substantially better by 2035? Of course. But present circumstances and the foreseeable future suggest otherwise. For them to become substantially better, we need consensus on what ‘substantially better’ means. We need changes in laws, customs and practices aimed at realizing that consensus position. And we need time.

“At present, we have a gridlocked society with very different ideas of where digital space and digital life should be. These ideas reflect, in part, the different ideas we see in other areas of society on cultural issues. If we look back roughly 15 years at where things were, we can see that reaching a consensus (or something close to it) over the next 15 years seems unlikely. Without a consensus, changes to laws, customs and practices will fall over a spectrum rather than be concentrated in one direction. This reflects how we as a society work out our collective thoughts and direction. We go a bit in one direction, course-correct, move a bit in another direction, and continue the process over time.

“Will 15 years be enough time to reach a substantially better position for digital spaces and digital life? I doubt it. Inertia, vested capital interests and the lack of consensus mean that the give-and-take process will take longer. We may make progress toward ‘better,’ but to get to ‘substantially better’ will take longer and require a less-divisive society.”

Hans Klein, associate professor of public policy at Georgia Tech, responded, “The U.S. has a problem: ‘state autonomy.’ Its military and foreign policy establishments (‘the state’) are only imperfectly under civilian/democratic control. The American public is not committed to forever wars in the Middle East, Russia and China, nor to deindustrialization through global trade, but no matter who the citizens elect, the policies hardly change. Elections – the will of the people – have remarkably little effect on policy. Policies arguably do not represent the will of the people. The state is autonomous of the citizens.

“Large media corporations play an important role in enabling such state autonomy. The media corporations repeat and amplify policymakers’ narratives, with little criticism. They report on select issues while ignoring others and frame issues in ways that reinforce the status quo. So, in 2003, we heard endlessly about weapons of mass destruction but nothing about antiwar protests. In 2020, we heard endlessly about protests but nothing about people of color suffering from violent crime.

“What we call the ‘public sphere’ might better be called the narrative sphere. Citizens are enclosed in a state-corporate narrative sphere that tells them what to think and what to feel. Media corporations’ control of this narrative sphere is essential to state autonomy, because the narratives shape facts in ways that support the autonomy of policy makers.

“Around 2010, a revolution occurred: social media punctured the corporate narrative sphere. Alongside the narrative sphere there appeared a public sphere, in which the voices of people could be heard. This new social-media-enabled public sphere led to political movements on the left and the right. On the left, Bernie Sanders criticized state and especially corporate power. He focused citizens’ attention upward to the power structure. On the right, Donald Trump did something literally unthinkable prior to social media: He ran on an anti-war platform. Bernie Sanders was contained by his party, but Trump broke his party, won the nomination and won the election.

“This new, social-media-enabled public sphere is often crude, and the voices it empowers may be both constructive and destructive. Donald Trump manifested that. Those who could see beyond his personal style saw an elected official who finally raised important questions of war and peace, work and justice. The autonomy of the state was named and criticized (colorfully, as a ‘swamp’). Social media made it possible for such issues – perhaps the most important issues facing American society – to be publicly raised. Social media empowered the public. Therefore, social media had to be brought back under control.

“Following the election of such a critic of state autonomy, both the state and the corporate media have sharply attacked the social media that made his election possible. The corporate-created narrative sphere doubled down to inform the American public that the bad voices are all there is to social media.

“The power structure is working hard to demonize social media and the public sphere. Voices … are given outlet in state-quoting corporate media like The Atlantic. The public is being silenced.

“Looking ahead to 2035, it seems possible that the social-media-enabled public sphere will merely be a memory. Digital spaces and people’s use of them will be safely bounded by the understandings disseminated by the state. The wars will be good wars, and there will be no stories about people losing their livelihood to workers in Bangladesh. Perhaps the greatest challenge of our time is to prevent such a suppression of the social-media-enabled public sphere.

“Citizens on both the left and the right have a powerful interest in making sure that social media survives to 2035.”

Adam Nagy, project coordinator at Harvard Law School’s Cyberlaw Clinic, commented, “In general, the digitization of sectors that have lagged behind others – such as government social services, health care, education and agriculture – will unlock significant productivity gains and innovation. These areas are critical to accelerating economic growth and reducing poverty. At the same time, sectors that have led the pack in digitization, such as finance, insurance, media and advertising, are now facing regulatory headwinds and public scrutiny.

“Globally, politicians, regulators, civil society and even some industry players are increasingly trying to understand and mitigate harms to individual privacy rights, market competitiveness, consumer welfare, the spread of illegal or harmful content and various other issues. These are complex issues, and not every solution is waiting just around the corner, easy to achieve or free of difficult trade-offs.”

Ayden Férdeline, a public-interest technologist based in Berlin, Germany, said, “We have recentralized what was a decentralized network of networks by primarily relying on three or four content-distribution networks to store and cache our data. We are making the internet’s previously resilient architecture weaker, easier to censor and more reliant on the goodwill of commercial entities to make decisions in our interests.

“If we don’t course-correct soon, I worry that the internet of 2035 will be even more commercial, government-controlled and far less community-led. I am concerned that we are moving toward more closed ecosystems, proprietary protocols and standards, and national Splinternets that all abandon the very properties that made the internet such an impactful and positive tool for social change over the past 25 years.

“Of course, by not addressing many of the very real issues the internet poses for society, we have found ourselves in a situation where some kind of intervention is required. I just worry that the wrong actors have identified this opportunity to intervene.

“If we think back to how the internet was developed, it grew somewhat under the radar of commercial and political interests, which gave it the time and space to develop the norms and governance structures that we now take for granted: values like interoperability, permissionless innovation and reusable building blocks. These are excellent properties, but they are not technical values; they were political choices, possible only because the internet was a publicly funded project intended for use in democracies for academic and military networks.

“As the internet has grown in importance and commercial interests have recognized opportunities to monetize it, the internet’s foundational values have been abandoned. Social media and messaging services have no interoperability.”

A retired consultant based in Canada wrote, “Marshall McLuhan noted: ‘The most human thing about us is our technology.’ Language and culture are technology. Life is the emergence of complexity that engenders more complexity. Uncertainty is integral to evolutionary constraints shaping survival choices. We are at the threshold of a phase transition that demands we guide our choices during this struggle between empires ruled by elites and the next flourishing and ‘leveling-up’ toward a participatory democracy. All technologies can be weaponized. All weapons can find a positive use. There will never be a shortage of work and activity to do and to value when we are engaged in the enterprise of a flourishing life, community and ecology. In the 21st century, where everything that can be automated will be, there are three paradigms enabling response-able action:

  1. The power of a nation with its own currency – modern monetary theory.
  2. The enabling of the people to flourish as citizens – accomplished through universal basic assets (UBA) and guaranteed jobs (rather than unemployment insurance).
  3. Enabling communities to be response-able in a changing world through Asset-Based Community Development.”

Steven Livingston, founding director of the Institute for Data, Democracy and Politics at George Washington University, wrote, “Narratives about technology tend to run hot or cold: ‘It is all terrific and a new democratic dawn is breaking!’ Or … ‘Technology is ushering in a dystopian nightmare!’ Both outcomes are possible. With the former, Western scholars tend to ignore or be unaware of digital network effects in the developing world that have a positive effect. This would include M-Pesa in Kenya and the entire array of information and communication technologies for development applications. I wrote an article several years ago about the positive effects of crowdsourced elections monitoring in Nigeria. I came up with a whopper example of academic jargon to describe this: Digitally enabled collective action in areas of limited statehood. Positive human intentions have been made actionable by the lower transaction costs in digital space.

“Another example of positive outcomes is found in the work of online information sites such as Bellingcat, Forensic Architecture, and The New York Times Visual Investigations Unit headed by Malachy Browne. We know things about war crimes and other horrific events because of the digital breadcrumbs left behind that are gathered and analyzed by people and organizations such as these.

“On the other hand, where human intentions are less laudable these same affordances are used to erode confidence in institutions, spread disinformation and make the lives of others miserable. The kicker here is that digital phenomena such as QAnon are seen and understood by participants – at least many of them – as doing good. After all, QAnon is in a fight against evil, just as Forensic Architecture is out to expose war criminals. We end up judging the goodness and harmfulness of these two moments according to our own value structures. Is there some external position that allows us to determine which is misguided and which is God’s work? I believe there is. QAnon is no Forensic Architecture.”

Steve Jones, co-founder of the Association of Internet Researchers and distinguished professor of communication at the University of Illinois-Chicago, observed, “Digital spaces reflect analog spaces, that is, they are not separate from the pressure and tensions of social, political, economic, etc., human life. It is not so much that digital spaces are ‘entrenched’ as that they will evolve in ways that are unpredictable while also predictably tracking social and political evolution.”

Russell Newman, associate professor of digital media and culture at Emerson College, wrote, “Perhaps most challengingly, our communications networks and the metadata of our use of them have themselves become intrinsically embedded within global capital flows, with aspects of our interactions with traditional media being as folded into this amalgam as the tracking of container-ship cargo. Making democratic media policy in its own right is challenging when it is interwoven with flows of global capital in this way. This is also another reason why antitrust has, itself, become fraught in many ways.

“New interest in resuscitating a moribund antitrust policy does not address the core logics in play here, as the developing manifestations of power are, barring much rethinking, unaddressable by it. There are numerous technical initiatives that seek to instill different rationales and logics for new forms of participation. Such initiatives, while useful to explore, neglect the most banal and yet most crucial insight of all: that the problems we face are social ones, not technological ones, and that developing new web platforms with varying logics is ancillary to addressing the conditions these trends not only exacerbate but actively support.

“The notion that policy just ‘lags’ behind emergent tech is a red herring. The business models being pursued today were agendas before they became realities in policy debates, even if still gestating. I study this stuff intensively, and I was barely familiar with some of the initiatives introduced in the piece [the Applebaum and Pomerantsev article in The Atlantic Monthly that inspired the primary question asked in this canvassing of experts]. Participation in these new arenas is a privilege of both knowledge and, frankly, time that many working people do not possess (time that, for that matter, even I do not possess, and I occupy a position of relative privilege in that regard). …

“All of the ills identified are endemic to a time in which wages have effectively stagnated and the power of collective bargaining has been brought low (leading, by necessity, to greater efforts to pinpoint perfect audiences so as to clear markets); where policy toward corporate interests has intensified a divergence between the capital-owning sector and Main Street; where basic needs like health care are lacking for so many; and where a personal-debt crisis (born not just of student debt but of historically stagnant wages) threatens the financial health of multiple generations and, by extension, the economy writ large.

“This is to leave aside the barriers being thrown up to voting itself and the right-wing echo chambers our new platforms have afforded, which have been armed and deployed to keep these trends from changing. Elites across the globe share more commonalities in their interests and station than differences, even if national prerogatives differ. The climate crisis intensifies every single one of the trends above, and these same economic elites look to evade it themselves rather than solve it.

“All of this does not portend stronger democratic features across our landscape. It portends continued division sown by artificial intelligence-driven suggestion engines, an economic climate that only finds bullet wounds covered over with Band-Aids that threaten new and larger future implosions, and a climate crisis that will only heighten these tensions.”

Deirdre Williams, an independent internet governance consultant, commented, “I was lucky enough to attend an early demonstration of ‘Mosaic’ [the first graphical web browser] at the University of Illinois, Urbana-Champaign in 1993. I can still remember how I felt then – ‘Charm’d magic casements, opening,’ to borrow from Keats. I thought how wonderful this would be for the students in the rather remote small island state I had come from. Nearly 30 years later, it feels as though the miracle I was expecting didn’t happen, and plenty of unwelcome things happened instead – things to do with identity and with the community/individual balance in society.

“Those unwelcome things are not all to be attributed to ‘digital life,’ but digital life seems to have failed to provide much of its positive potential. I may appear to be pessimistic; however, underneath there is optimism.

“The human perspective fails in its refusal to accept other ways of looking, of seeing, other priorities. Time is often ignored because it is an element beyond human control. And human agency is not the only agency. ‘There’s a divinity that shapes our ends, Rough-hew them how we will,’ says Hamlet to Horatio in Shakespeare’s ‘Hamlet,’ Act 5, Scene 2. Call it divinity, or Gaia, or simply serendipity, but the system is such that it always strives for balance. What is missing currently in ‘digital life’ is a sense of balance; the weightings are all uneven. They need time to reach equilibrium.

“The questions posed here are all about human agency, but the system itself is superhuman. Fourteen years may (or may not) be sufficient for the system to effect its levelling, but I would expect the pendulum to swing toward improvement because that is what is in its nature.

“At the human level, ‘digital life’ has the potential to create globally shared experience and improve understanding, thus bringing greater balance among the human variable. Climate change, the movement of asteroids, solar flares and the evolution of the Earth’s geology will re-teach human animals their true place in the system and force them to learn humility again. Fourteen years and the opportunities provided by digital life will hopefully be enough to at least begin the reordering and balancing of a system in which humans acknowledge their place as members, not leaders; parts of a greater whole.”

Bill Woodcock, executive director at the Packet Clearing House, wrote, “For the internet’s first 40 years, digital spaces and the conversations they engendered were largely defined by individual interaction and real conversation between real people.

“In the past 10 or 15 years, though, we’ve moved away from humans talking with humans to machine-intermediated and machine-dominated ‘conversation’ that exists principally to exploit human psychological weaknesses and to direct human behavior. This is the ‘attention economy,’ in which bots interact with people or decide what people will see in order to guide them toward predetermined or profitable outcomes.

“This is destroying civic discourse, destroying the fundamental underpinnings of democracy and undermining the human intellectual processes that we think of as ‘free will.’ It’s not clear to me that any of the countervailing efforts will prevail, though 2035 is a long time from now, and I am irrationally optimistic.”

Thomas Lenzo, a consultant, community trainer and volunteer in Pasadena, California, said he expects that human behavior will not adapt to change, writing, “I expect a continuing transformation of digital spaces and life, and I expect it will be a mix of good and bad based on the driving actor:

  1. Tech leaders, in general, will push the technology they create; some as visionaries, and some to make money.
  2. Politicians will push technology in an effort to ensure they and their political party remain in office.
  3. Public audiences for the most part will want those digital spaces that will improve the quality of their lives.
  4. Criminals will seek digital spaces that enable them to commit crimes and get away without risk.”

A well-known UK-based professor of media history said, “I am gloomy, but with hopeful glints. Based on my exchanges with policymakers and others, I don’t believe that they are up to speed on this. There is a vanishingly small opportunity, but presumably a real one, to get the right or better policies and regulations in place so that the digital space is tipped in a positive way. There never has been and never will be a ‘medium’ that is inherently anything.

“How things go depends on how they are used and regulated. Some ‘public-interest’ algorithms are being developed, and some governments have at last woken up to the real challenges that dis/mis/malinformation are causing. But it’s late. Plus, what might be seen as a ‘good’ regulation in a democratic society is a ‘bad’ one in an authoritarian one – so policy is quite complex.

“Looking at the changes in private and public lives over the last five years, it is remarkable how swiftly public discourse has become uncivil. It is the degradation of manners that is so dangerous. Manners require a taking into account of the experiences of others. In addition, the capacity of the foreign/domestic/rich to attack and protect their own interests online has grown exponentially.

“So, there might be a policy shift; there might be a way to bring the big social media companies that profit from divisive behaviors to take a more public-interest view of their power. Right now, we are looking at the tabloidisation of life. There are some ways forward. Vaccine hesitancy in the UK has been tackled really interestingly (locally and familiarly). On the other hand, the sense of collective values today is weakening.”

A distinguished professor of computer science at one of the largest universities in the U.S. commented, “As Richard Feynman put it, ‘To every man is given the key to the gates of heaven; the same key opens the gates of hell.’ This statement can be applied to nuclear technology and to digital technology. Whether a technology opens the door to heaven or hell is up to how well people regulate the use of it.

“The internet’s unprecedented growth took people by surprise, and thus society was unprepared. Few foresaw where it might be headed, and warning voices were not well heard (see Shoshana Zuboff’s book ‘The Age of Surveillance Capitalism’ as one example). It takes a deep understanding to develop effective solutions that make digital technology better serve society’s needs and raise the bar against abuse. We are not there yet, but we will get there.”

An award-winning author wrote, “While human survival depends upon a sophisticated ability to categorize, the current notion that our intellectual lives should rest upon aligning with one side or another, or seeing the world in either/or terms, can be traced to the chokehold that digital spaces have on our minds and lives today.

“We need digital spaces – from email to TikTok – that leave more room for not-knowing and for attending to issues and questions that are messy, murky and shifting. This kind of digital space might allow for multiple tempos of communication-and-response, as well as for operating systems that are more in sync with freeform, associational, ‘inefficient’ types of human thinking, such as reverie, forgetting, confusion, doubt and, above all, uncertainty.

“It is not a coincidence, in my view, that such mysterious yet astonishing realms of human thinking are devalued in society today.”

An author and journalist based in the Northeastern U.S. urged, “It’s important to rethink and re-envision the meta-architecture of digital spaces so that they can allow for more open-minded, dialectical, creative thinking and social connections. This meta-question seems to me to be almost entirely ignored in society today. What’s missing from national conversations about the nature of digital spaces is a realization that the architecture and aesthetics – i.e., the look and feel and bones – of our virtual realities exacerbate human inclinations to see the world in clear, binary and easily digestible terms.

“In a nutshell, I believe that the way digital spaces are set up deeply shapes our behavior in these spaces, just as strongly as physical landscapes and human-built buildings implicitly and explicitly influence our actions and moods. In effect, the meta-quality of digital spaces disturbs me more than even the current alarming content of these realms.

“The digital realm is a space of boxes, templates, lists, bullet points and crisp brevity. In searching, most people are offered a linear, pre-prioritized list of ‘answers’ – often even before they finish asking a question.

“The value and worth of people and objects are aligned with explicit data; ratings have become a standard of measurement that squeezes out all room for in-betweenness or dynamic change. In these and many other ways, digital spaces narrow our vision of what it means to know, paving the way for the types of cursory, extremist and simplistic content online we see today.”

6. About this canvassing of experts

This report covers results from the 13th “Future of the Internet” canvassing by Pew Research Center and Elon University’s Imagining the Internet Center.

Participants were asked to respond to several questions about the tone and impact of the online environment and the trajectory of activities in the digital public sphere that have recently been raising deepening societal concerns. Invitations to participate were emailed to more than 10,000 experts and members of the interested public. They were invited to weigh in via a web-based instrument that was open to them from June 29-Aug. 2, 2021. Overall, 862 people responded to at least one question. Results reflect comments fielded from a nonscientific, nonrandom, opt-in sample and are not projectable to any population other than the individuals expressing their points of view in this sample.

Respondent answers were solicited through the following prompts:

The evolution of digital spaces by 2035: This canvassing of experts is prompted by debates about the evolution of digital spaces and whether online life is moving in a positive or negative direction when it comes to the overall good of society. Some analysts and activists are fearful about the trajectory of digital activities; others are less concerned about the things that are happening. So, we start with a question about the way you see things evolving.

The question: Considering the things you see occurring online, which statement comes closer to your view about the evolution of digital spaces:

  • Digital spaces are evolving in ways that are both positive and negative.
  • Digital spaces are evolving in a MOSTLY POSITIVE way that is likely to lead to a BETTER future for society.
  • Digital spaces are evolving in a MOSTLY NEGATIVE way that is likely to lead to a WORSE future for society.
  • Digital spaces are not evolving in one direction or another.

Results for this question regarding the current evolution of digital spaces:

  • 70% said digital spaces are evolving in ways that are both positive and negative.
  • 18% said digital spaces are evolving in a mostly negative way that is likely to lead to a worse future for society.
  • 10% said digital spaces are evolving in a mostly positive way that is likely to lead to a better future for society.
  • 3% said digital spaces are not evolving in one direction or another.

The follow-up quantitative prompt and research questions of this study were:

Bettering the digital public sphere: An Atlantic Monthly piece by Anne Applebaum and Peter Pomerantsev, “How to Put Out Democracy’s Dumpster Fire,” provides an overview of the questions that are being raised about the tone and impact of digital life: How much harm does the current online environment cause? What kinds of changes in digital spaces might have an impact for the better? Will technology developers, civil society, and government and business leaders find ways to create better, safer, more-equitable digital public spaces?

The question: Looking ahead to 2035, can digital spaces and people’s use of them be changed in ways that significantly serve the public good?

-YES, by 2035, digital spaces and people’s use of them will change in ways that significantly serve the public good.

-NO, by 2035, digital spaces and people’s use of them will NOT change in ways that significantly serve the public good.

Results for the Yes-No quantitative question regarding the current evolution of digital spaces:

  • 61% said by 2035, digital spaces and people’s uses of them WILL change in ways that significantly serve the public good.
  • 39% said by 2035, digital spaces and people’s uses of them WILL NOT change in ways that significantly serve the public good.

It should be noted that, in the qualitative follow-up to the binary-response question above, many of the 61% who were hopeful of change for the public good said it is only a hope, and many also listed a number of difficult hurdles to be overcome before the positive change might take place; of course, any quantitative result does not fully represent the complexities of this challenge. Below are the follow-up prompts used to elicit the respondents’ open-ended written answers; the answers they gave constitute the truly important content of this report:

If you answered YES to the last question, please tell us how you imagine this transformation of digital spaces and digital life will take place: What reforms or initiatives may have the biggest impact? What beneficial role do you see tech leaders and/or politicians and/or public audiences playing in this evolution? What will be noticeably improved about digital life for the average user in 2035? What current problems do you see being diminished? Which will persist and continue to raise major concerns?

If you answered NO to the last question, why do you think digital spaces and digital life will not be substantially better by 2035? What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces do you think are so entrenched that things will not much change? Are there any ways in which you think things could change for the better – even if the change isn’t dramatic?

The web-based instrument was first sent directly to an international set of experts (primarily U.S.-based) identified and accumulated by Pew Research Center and Elon University during previous studies, as well as those identified in a 2003 study of people who made predictions about the likely future of the internet between 1990 and 1995. Additional experts with proven interest in the health of the digital public sphere and related aspects of these particular research topics were also added to the list. We invited a large number of professionals and policy people from government bodies and technology businesses, think tanks and interest networks (for instance, those that include professionals and academics in law, ethics, political science, economics, social and civic innovation, sociology, psychology and communications); globally located people working with communications technologies in government positions; technologists and innovators; top universities’ engineering/computer science, political science, sociology/anthropology and business/entrepreneurship faculty, graduate students and postgraduate researchers; plus some who are active in civil society organizations that focus on digital life and those affiliated with newly emerging nonprofits and other research units examining the impacts of digital life.

Among those invited were researchers, developers and business leaders from leading global organizations, including Oxford, Cambridge, MIT, Stanford and Carnegie Mellon universities; Google, Microsoft, Amazon, Facebook, Apple and Twitter; leaders active in the advancement of and innovation in global communications networks and technology policy, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR) and the Organization for Economic Cooperation and Development (OECD). Invitees were encouraged to share the survey link with others they believed would have an interest in participating; thus, there may have been somewhat of a “snowball” effect as some invitees invited others to weigh in.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their backgrounds and the locus of their expertise. Some responses are lightly edited for style and readability.

A large number of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted, when available, in this report.

In this canvassing, 64% of respondents answered at least one of the demographic questions. Some 67% of these 550 people identified as male, 31% as female and 1% identified themselves in some other way. Some 77% identified themselves as being based in North America, while 23% are located in other parts of the world. When asked about their “primary area of interest,” 39% identified themselves as professor/teacher; 14% as futurists or consultants; 12% as research scientists; 8% as advocates or activist users; 8% as technology developers or administrators; 7% as entrepreneurs or business leaders; 4% as pioneers or originators; and 8% specified their primary area of interest as “other.”

Following is a list noting a selection of key respondents who took credit for their responses on at least one of the overall topics in this canvassing. Workplaces are included to show expertise; they reflect the respondents’ job titles and locations at the time of this canvassing.

Charles Anaman, founder of waaliwireless.co, based in Ghana; Anna Andreenkova, professor of sociology at CESSI; Peng Hwa Ang, professor of media law and policy at Nanyang Technological University, Singapore; Chris Arkenberg, research manager at Deloitte’s Center for Technology Media and Communications; David Barnhizer, professor of law emeritus, author and human rights expert; John Battelle, co-founder and CEO of Recount Media; Robert Bell, co-founder of Intelligent Community Forum; Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab; Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute; Bruce Bimber, professor of political science and founder of the Center for Information Technology and Society at the University of California-Santa Barbara; Valerie Bock, principal at VCB Consulting; Gary A. Bolles, chair for the future of work at Singularity University; danah boyd, founder of the Data & Society Research Institute and principal researcher at Microsoft; Stowe Boyd, managing director and founder of Work Futures; Tim Bray, founder and principal at Textuality Services (previously at Amazon); Jamais Cascio, distinguished fellow at the Institute for the Future; Vinton G. Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google; Barry Chudakov, founder and principal at Sertain Research; Christina J. Colclough, founder of the Why Not Lab; Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House; Olivier Crépin-Leblond, founding member of the European Dialogue on Internet Governance; Willie Currie, retired global internet governance leader with the Independent Communications Authority of South Africa; Mark Davis, associate professor of communications at the University of Melbourne; Amali De Silva-Mitchell, founder/coordinator of the IGF Dynamic Coalition on Data-Driven Health Technologies; Cory Doctorow, activist journalist and author of “How to Destroy Surveillance Capitalism”; Judith Donath, faculty fellow at Harvard’s Berkman Klein Center; Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada; Esther Dyson, internet pioneer and executive founder of Wellville.net; Ayden Férdeline, public-interest technologist based in Berlin, Germany; Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner; Marcus Foth, professor of informatics at Queensland University of Technology; Carl Frey, director of the Future of Work project at Oxford University; Mei Lin Fung, chair of People-Centered Internet; Oscar Gandy, emeritus scholar of the political economy of information at the University of Pennsylvania; Randall Gellens, director at Core Technology Consulting; Jerome Glenn, co-founder and CEO of The Millennium Project; Michael H. 
Goldhaber, author, consultant and theoretical physicist who wrote early explorations on the digital attention economy; Jonathan Grudin, principal human-computer design researcher at Microsoft; Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University; James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute; Perry Hewitt, chief marketing officer at data.org; Brock Hinzmann, co-chair of the Millennium Project’s Silicon Valley group; Terri Horton, work futurist at FuturePath; Gus Hosein, executive director of Privacy International; Alexander B. Howard, director of the Digital Democracy Project; Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin; Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association; Jeff Jarvis, director of the Tow-Knight Center for entrepreneurial journalism at City University of New York; Frank Kaufmann, president of the Twelve Gates Foundation; Nazar Nicholas Kirama, president of the Internet Society chapter in Tanzania and founder of the Digital Africa Forum; Michael Kleeman, senior fellow at the University of California-San Diego; Hans Klein, associate professor of public policy at Georgia Tech; Bart Knijnenburg, associate professor of human-centered computing at Clemson University; David J. Krieger, director of the Institute for Communication and Leadership; Kent Landfield, chief standards and technology policy strategist; Larry Lannom, vice president at the Corporation for National Research Initiatives (CNRI); Evan Leibovitch, director of community development at Linux Professional Institute; Mike Liebhold, distinguished fellow, retired, at The Institute for the Future; Leah Lievrouw, professor of information studies at UCLA; Clifford Lynch, director of the Coalition for Networked Information; Sean Mead, strategic lead at Ansuz Strategy; Richard H. Miller, CEO and managing director at Telematica and executive chairman at Provenant Data; Alan Mutter, consultant and former Silicon Valley CEO; Russell Newman, associate professor of digital media and culture at Emerson College; Craig Newmark, founder of Craigslist; Beth Simone Noveck, director of the Governance Lab; Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society; Jay Owens, research and innovation consultant with New River Insight; Ian Peter, Australian internet pioneer, futurist and consultant; Peter Padbury, Canadian futurist and consultant; Alejandro Pisanty, professor of internet and information society at National Autonomous University of Mexico (UNAM); David Porush, writer, longtime professor at Rensselaer Polytechnic Institute; Adam Clayton Powell III, executive director of the Election Cybersecurity Initiative at the University of Southern California; Calton Pu, professor, chair and co-director of the center for experimental research in computer systems at Georgia Tech; Alexa Raad, chief purpose and policy officer at Human Security; Courtney C. 
Radsch, journalist, author and free-expression advocate; Srinivasan Ramani, Internet Hall of Fame member and pioneer of the internet in India; Rob Reich, associate director of the Human-Centered Artificial Intelligence initiative at Stanford University; Howard Rheingold, pioneering internet sociologist, author of “The Virtual Community”; Zak Rogoff, research analyst at the Ranking Digital Rights project; Carolina Rossini, international technology law and policy consultant; Marc Rotenberg, president and founder of the Center for AI and Digital Policy; Eileen Rudden, co-founder of LearnLaunch; Douglas Rushkoff, digital theorist and host of the NPR One podcast “Team Human”; Paul Saffo, a leading Silicon Valley-based forecaster; Rich Salz, a senior director of security services at Akamai Technologies; Scott Santens, senior advisor at Humanity Forward; Melissa Sassi, Global Head of IBM Hyper Protect Accelerator; Raashi Saxena, project officer at The IO Foundation; Doc Searls, internet pioneer and co-founder and board member at Customer Commons; William L. Schrader, advisor to CEOs, previously co-founder of PSINet; Henning Schulzrinne, Internet Hall of Fame member and former CTO for the Federal Communications Commission; Evan Selinger, professor of philosophy at Rochester Institute of Technology; Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland; Toby Shulruff, senior technology safety specialist at the National Network to End Domestic Violence; Mark Surman, executive director of the Mozilla Foundation; Brad Templeton, internet pioneer, futurist and activist, chair emeritus of the Electronic Frontier Foundation; Ed Terpening, industry analyst with the Altimeter Group; Joseph Turow, professor of media systems and industries, University of Pennsylvania; Maja Vujovic, director of Compass Communications; Wendell Wallach, senior fellow with the Carnegie Council for Ethics in International Affairs; Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network; David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society; Brooke Foucault Welles, associate professor of communication studies at Northeastern University; Jeremy West, senior digital policy analyst at the Organization for Economic Cooperation and Development (OECD); Tom Wolzien, inventor, analyst and media executive; Andrew Wyckoff, director of the OECD’s Directorate for Science, Technology and Innovation; Christopher Yoo, founding director of the Center for Technology, Innovation and Competition at the University of Pennsylvania; Amy Zalman, futures strategist and founder of Prescient Foresight; Ethan Zuckerman, director of the Initiative on Digital Public Infrastructure at the University of Massachusetts-Amherst.

A selection of institutions at which some of the respondents work or have affiliations:

AAI Foresight; Access Now; Akamai Technologies; Altimeter Group; Amazon; Aoyama Gakuin University; American Enterprise Institute; American Institute for Behavioral Research and Technology; American Library Association; Ansuz Strategy; APNIC; Arizona State University; Asian Development Bank; The Associated Press; Atlantic Council; Australian National University; Bar-Ilan University; Benton Institute; Botswana Communications Regulatory Authority; Brookings Institution; Berkman Klein Center for Internet & Society; Carnegie Endowment for International Peace; Carnegie Mellon University; Center for a New American Security; Center for Data Innovation; Center for Global Enterprise; Center for Strategic and International Studies; Centre for International Governance Innovation; CESSI; Cisco Systems; City University of New York; Coalition for Networked Information; Columbia University; Compass Communications; Conmergence; Constellation Research; Convocation Design + Research; Core Technology Consulting; Cornell University; Council of Europe; Data & Society Research Institute; Data Science Institute at Columbia; Davis Wright Tremaine; Dell EMC; Deloitte; The Digital Democracy Project; Digital Grassroots; Digital Value Institute; Diplo Foundation; DotConnectAfrica; DX Open Network; Electronic Frontier Foundation; Emerson College; European Broadcasting Union; Facebook; Foresight Alliance; Front Line Defenders; FuturePath; Georgia Institute of Technology; Global Internet Policy Digital Watch; Global Village Ltd; Global Voices; Google; Gridmerge; The Hague Center for Strategic Studies; Harvard University; Hochschule Fresenius University of Applied Sciences; Hokkaido University; IBM; Iggy Ventures; Internet Corporation for Assigned Names and Numbers (ICANN); IDG; Ignite Social Media; Information Technology and Innovation Foundation; Institute for the Future; Instituto Superior Técnico, Portugal; Institute for Ethics and Emerging Technologies; Institute for Prediction Technology; International Computer Science Institute, Berkeley, California; International Telecommunication Union; Internet Engineering Task Force (IETF); Internet Society; Institute of Electrical and Electronics Engineers (IEEE); IO Foundation; Juniper Networks; Kororoit Institute; Le Havre University; Leading Futurists; Lifeboat Foundation; Log Cabin LLC; Limitless Lab; London School of Economics and Political Science; MacArthur Research Network on Open Governance; Macquarie University, Sydney, Australia; Massachusetts Institute of Technology; Menlo College; Mercator XXI; Michigan State University; Microsoft Research; Millennium Project; The Morgan Group; Mozilla; Nanyang Technological University, Singapore; New York University; Namibia University of Science and Technology; National Autonomous University of Mexico; National Distance University of Spain; National Research Council of Canada; Nigerian Communications Commission; Nonprofit Technology Network; Northeastern University; North Carolina State University; OECD; Olin College of Engineering; The People-Centered Internet; Plugged Research; Policy Horizons Canada; Predictable Network Solutions; The Providence Group; RAND; Ranking Digital Rights; Recount Media; Rensselaer Polytechnic Institute; Rice University; Rose-Hulman Institute of Technology; RTI International; San Jose State University; Santa Clara University; Shambhala; Shareable; Singularity University; Singapore Management University; Smart Cities Council; Södertörn University, Sweden; Social Brain Foundation; Social Science 
Research Council; Sorbonne University; South China University of Technology; Stanford University; Stevens Institute of Technology; Superhuman Ltd; Syracuse University; Tallinn University of Technology; Team Human; The TechCast Project; Tech Policy Tank; Telecommunities Canada; Telematica; Textuality; Tignis; Tufts University; The Representation Project; Twelve Gates Foundation; Twitter; United Nations; University of California, Berkeley; University of California, Los Angeles; University of California, San Diego; University College London; University of Hawaii, Manoa; University of Texas, Austin; the Universities of Alabama, Arizona, Dallas, Delaware, Florida, Maryland, Massachusetts, Miami, Michigan, Minnesota, Oklahoma, Pennsylvania, Rochester, San Francisco and Southern California; the Universities of Amsterdam, British Columbia, Cambridge, Cyprus, Edinburgh, Groningen, Liverpool, Naples, Oslo, Otago, Queensland, Toronto, West Indies; UNESCO; U.S. Army; U.S. Geological Survey; U.S. National Science Foundation; Venture Philanthropy Partners; Verizon; Virginia Tech; Vision2Lead; Vision & Logic; Waaliwireless.co; Waseda University; Wellville; Wikimedia Foundation; Witness; Work Futures; World Bank Group – Nepal; World Economic Forum; World Wide Web Foundation; World Wide Web Consortium; Xponential; and Yale University Center for Bioethics.

To download the print version of the full report or read it online, click here: 
http://www.elon.edu/u/imagining/wp-content/uploads/sites/964/2021/11/Future-Social-Media-Democracy-In-2035-Elon-University-Pew-Research-2021.pdf

To read for-credit survey participants’ responses with no analysis, click here:
https://www.elon.edu/u/imagining/surveys/xiii-2021/improving-toxic-online-forums-2035/credit

To read anonymous survey participants’ responses with no analysis, click here:
https://www.elon.edu/u/imagining/surveys/xiii-2021/improving-toxic-online-forums-2035/anon