New Elon/Pew report: Will online forums become more or less toxic?

Will digital public spaces be more or less toxic by 2035? Many experts expect improvement, while some predict things will worsen.

A majority of technology experts predict that online public spaces can be improved significantly by 2035 if reformers, big technology firms, governments and activists work more diligently to tackle the problems created by misinformation, disinformation and toxic discourse, according to a new report from the Pew Research Center and Elon University’s Imagining the Internet Center.

Still, many others expect continuing troubles as digital tools and forums are used to exploit people’s frailties, stoke their rage and drive them apart.

In a non-scientific, opt-in canvassing, more than 860 technology innovators, developers, business and policy leaders, researchers and activists were asked, “Looking ahead to 2035, will digital spaces and people’s use of them be changed in ways that significantly serve the public good?” In response, 61% chose “yes,” agreeing that by 2035 digital spaces and people’s uses of them will change in ways that significantly serve the public good; 39% chose “no,” positing that they will not.

This report, “The Future of Digital Spaces and Their Role in Democracy,” is part of a long-running Elon-Pew research series in which experts explain how today’s trends may be influencing the future. The experts’ responses – gathered in the summer of 2021 – include a wide range of hopeful and worrisome prescriptions and predictions. They discussed innovations such as:

  • The creation of an internet version of public media along the lines of PBS and NPR
  • “Middleware” that could allow people to set an algorithm to give them the kind of internet experience they want, perhaps without the dystopian side effects (a rough illustrative sketch of this idea follows this list)
  • Online upvoting systems that favor content encouraging partisans to understand differing views and seek consensus, rather than content that polarizes them
  • An internet “bill of rights” allowing “self-sovereign identity” that lets people stay anonymous online while blocking bots
  • “Constructive communication” systems set up to dial down anger and bridge divides
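
To make the “middleware” idea above a little more concrete, here is a minimal, hypothetical sketch in Python of how a ranking policy chosen by the user, rather than by the platform, might reorder a feed before it is shown. Every name, signal and scoring rule here is an invented assumption for illustration; the report itself does not specify any implementation.

```python
# Illustrative sketch only: a user-selected "middleware" policy re-ranks a feed.
# All field names, signals and policies below are hypothetical assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    text: str
    engagement: float        # assumed engagement signal (clicks, likes)
    outrage: float           # assumed 0-1 estimate of inflammatory tone
    source_diversity: float  # assumed 0-1 estimate of viewpoint diversity

# A "policy" is just a scoring function the user picks and can swap out.
RankingPolicy = Callable[[Post], float]

def engagement_first(post: Post) -> float:
    """Rough stand-in for today's attention-maximizing ranking."""
    return post.engagement

def calm_and_diverse(post: Post) -> float:
    """Reward diverse sourcing, penalize outrage bait."""
    return post.source_diversity - post.outrage

def rank_feed(posts: List[Post], policy: RankingPolicy) -> List[Post]:
    """Apply whichever policy the user has chosen before the feed is shown."""
    return sorted(posts, key=policy, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Shocking claim!!!", engagement=0.9, outrage=0.8, source_diversity=0.1),
        Post("Balanced explainer", engagement=0.4, outrage=0.1, source_diversity=0.9),
    ]
    # The same feed, re-ordered under a user-chosen policy.
    for post in rank_feed(feed, calm_and_diverse):
        print(post.text)
```

The only point of the sketch is that the ranking logic could live in a layer the person controls and can swap out; which signals a real middleware provider would actually have access to is an open question.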

Some of the most compelling ideas advanced by specific experts included:

  • Brad Templeton, chair emeritus of the Electronic Frontier Foundation and director of the Foresight Institute, advanced a “new moral theory [that] it is wrong to exploit known flaws in the human psyche.” He argues that the embrace of “psyche-exploitation avoidance” would lead to a new design of online spaces.
  • Raashi Saxena, project officer at The IO Foundation, urged, “We do not have a global, agreed-upon list of digital harms that can be inflicted upon us … We first need to define the rights to be protected.”
  • Mike Liebhold, distinguished fellow, retired, at The Institute for the Future, outlined a future with applied machine intelligence everywhere, ubiquitous conversational bot agents, holographic media and telepresence, cobotics (collaborative robotics) and continuous pervasive cybersecurity vulnerabilities.
  • Carolina Rossini, an international technology law and policy expert, said a regulatory agency to monitor technology’s impact on health – a Food and Drug Administration (FDA) for algorithms – should arise as digital technology devices begin to be placed inside people’s bodies.
  • Robin Raskin, writer and founder of the Virtual Events Group, predicted, “The metaverse – digital twins of real worlds or entirely fabricated worlds – will be a large presence by 2035, unfortunately with some of the same bad practices on the internet today such as personal-identity infringements.”
  • James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, said there will be tech advances that allow people to control their online identities and privacy preferences in ways that thwart omnipresent surveillance schemes.
  • Cory Doctorow, activist journalist and author of “How to Destroy Surveillance Capitalism,” said the “tyranny of network effects” will be broken if interoperability is imposed on tech companies so that, for instance, people could move their social media networks from one platform to another and easily abandon online spaces they do not like.
  • Beth Simone Noveck, director of the Governance Lab at New York University, expects new “governance models” for public online spaces that allow citizens and groups to participate directly in policymaking and provision of services.
  • Barry Chudakov, founder and principal of Sertain Research, predicts that in the future metaverse “the self will go digital” and people will simultaneously exist in the flesh and in their digital avatars. “Identity is thereby multiple and fluid: Roles, sexual orientation and self-presentation evolve from solely in-person to in-space.”
  • Jerome Glenn, co-founder and CEO of The Millennium Project, said a new civilization will emerge as the “Information Age” gives way to the “Conscious-Technology Age” through the force of two megatrends: “First, humans will become cyborgs, as our biology becomes integrated with technology. Second, our built environment will incorporate more artificial intelligence.”

The broad themes covered in hundreds of written responses to this canvassing include the following:

Public digital spaces will improve significantly by 2035: Tech can be fixed, governments and corporations can reorient incentives, people can band together for reform

A majority of these experts said their hopes are tied to the tech industry, government and activist groups working to inspire the redesign of social media algorithms to improve individuals’ interactions and enhance democratic debate and its outcomes. Additionally, many said they hope or expect that there will be better efforts toward enhanced and widespread digital literacy and the closing of digital divides; the formation of helpful new digital social norms; and much greater government/public/corporate/nonprofit investment in accurate, fair journalism that is not tied to bottom-line outcomes.

  • Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization and – in order to accelerate user engagement – their algorithms favor extreme and hateful speech. They said social media platforms have come to dominate the public’s attention to the point of replacing journalism and other traditional sources in providing information to citizens. These experts argued that surveillance capitalism is not the only way to organize digital spaces. They predict that better spaces in the future will be built on algorithms designed with the public good and ethical imperatives at their core. They hope upgraded digital “town squares” will encourage consensus rather than division, downgrade misinformation and deepfakes, surface diverse voices, kick out “bozos and bots,” enable affinity networks and engender pro-social emotions such as empathy and joy (a rough sketch of a consensus-favoring scoring rule appears below).
  • Government regulation plus less-direct “soft” pressure by government will help shape corporations’ adoption of more ethical behavior: A large share of these experts predicted that legislation and regulation of digital spaces will expand; they said the new rules are likely to focus on upgrading online communities, solving issues of privacy/surveillance and giving people more control over their personal data. Some argued that too much government regulation could lead to negative outcomes, possibly stifling innovation and free speech. There are worries that overt regulation of technology will empower authoritarian governments to punish dissidents under the guise of “fighting misinformation.” Some foresee a combination of carefully directed regulation and “soft” public and political pressure on big tech, leading corporations to be more responsive and attuned to the ethical design of online spaces.
  • The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will bring improvements: A share of these experts predicted that the public will apply more pressure for the reform of digital spaces by 2035. Many said tech literacy will increase, especially if new and improved programs arise to inform and educate the public. They expect that people who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces. Some experts noted that this is how every previous advance in human communication has played out.
  • New internet governance structures will appear that draw on collaborations among citizens, businesses and governments: A portion of these experts predict the most promising initiatives will be those in which institutions collaborate with civil society to work for positive change that will institutionalize new forms of governance of online spaces with public input. They expect these multistakeholder efforts will redesign the digital sphere for the better, upgrading a tech-building ecosystem that is now too reliant on venture capital, fast-growth startup firms and the commodification of people’s online activities.
Source: Nonscientific canvassing of select experts conducted June 29-Aug. 2, 2021. “The Future of Digital Spaces and Their Role in Democracy,” Pew Research Center and Elon University’s Imagining the Internet Center, 2021.
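
As a rough illustration of the “encourage consensus rather than division” idea above, the sketch below scores an item higher when its upvotes come from across different groups rather than from one camp alone. The group labels, the 0.5 discount factor and the scoring rule are invented for illustration; the report does not describe any particular algorithm.

```python
# Hypothetical sketch: favor items whose upvotes span groups rather than
# coming from a single "camp". Groups and the discount factor are invented.
from collections import Counter
from typing import Dict, List

def bridging_score(upvoter_groups: List[str]) -> float:
    """Total upvotes, discounted when most of them come from one group."""
    if not upvoter_groups:
        return 0.0
    counts = Counter(upvoter_groups)
    total = len(upvoter_groups)
    dominant_share = max(counts.values()) / total  # 1.0 means one camp only
    return total * (1.0 - 0.5 * dominant_share)    # discount one-sided support

items: Dict[str, List[str]] = {
    "one-sided post": ["A"] * 100,             # 100 upvotes, all from group A
    "bridging post": ["A"] * 40 + ["B"] * 40,  # 80 upvotes, split across groups
}

for name, voters in items.items():
    # The bridging post outranks the one-sided post despite fewer total votes.
    print(name, round(bridging_score(voters), 1))
```

A real system would have to infer groupings from behavior rather than from explicit labels; the sketch only shows the ranking intuition.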

Public digital spaces will not improve significantly by 2035: Human frailties will remain the same; corporations, governments and the public will not be able to make reforms

Experts who doubt significant improvement will be made in the digital democratic sphere anytime soon argued that digital networks and tools will continue to amplify human frailties and magnify malign human intent. Some predicted that society could even spiral into a worsening situation due to advances in artificial intelligence (AI), hyper-surveillance, the “datafication” of every aspect of life, predictive technology-fueled authoritarianism and magnified mis/disinformation. Many argued that humans’ intrinsic flaws will thwart attempts to upgrade public online spaces because these platforms are built and driven by capitalism and geopolitical competition. Some said human organizations, laws and norms simply can’t evolve quickly enough to keep up with the speed and complexity of a massive, ever-changing digital communications system used by billions that will soon be connecting increasing numbers of non-human, automated entities.

  • Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears, these experts argued. In a digitally networked world in which people are constantly surveilled and their passions are discoverable, messages that weaponize human frailties and foster mis/disinformation will continue to be spread by those who wish to exert influence to meet political or commercial goals or cultivate divisiveness and hatred.
  • The trends toward more datafication and surveillance of human activity are unstoppable: A share of experts said advances in digital technology will worsen the prospects for improving online spaces. They said more human activity will be quantified; more “smart” devices will drive people’s lives; more environments will be monitored. Those who control tech will possess more knowledge about individuals than those people have about themselves, predicting their behavior, getting inside their minds, pushing subtle messages to them and steering them toward certain outcomes; this type of “psychographic manipulation” is already being used to tear cultures asunder, threaten democracy and stealthily stifle people’s free will.
  • Haters, polarizers and jerks will gain more power: These experts noted that people’s instincts toward self-interest and fear of “the other” have led them to commit damaging acts in every social space throughout history, but the online world is different because it enables instantaneous widespread provocations at low cost, and it affords bad actors anonymity to spread any message. They argued that the current platforms, with their millions to billions of users, or any new spaces that might be innovated and introduced, can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.
  • Humans can’t keep up with the speed and complexity of digital change: Internet-enabled systems are too large, too fast, too complex and constantly morphing, making it impossible for either regulation or social norms to keep up, according to some of these experts. They explained that accelerating change will not be reined in, meaning that new threats will continue to emerge as new tech advances arise. Because the global network is too widespread and distributed to possibly be “policed,” these experts argue that humans and human organizations as they are structured today cannot respond efficiently and effectively to challenges confronting the digital public sphere.

The full report features a 148-page selection of the most comprehensive overarching responses shared by the hundreds of thought leaders invited to participate in the nonrandom sample, including:

Vint Cerf, Internet Hall of Fame member and vice president at Google

danah boyd, founder and president of Data & Society, principal researcher at Microsoft

Henning Schulzrinne, Internet Hall of Fame member and former CTO for the FCC

Esther Dyson, internet pioneer, entrepreneur, executive founder of Wellville

Jonathan Grudin, principal human-computer design researcher at Microsoft

Mei Lin Fung, chair, People-Centered Internet; former lead at the U.S. Federal Health Futures initiative

Gary A. Bolles, chair for the future of work at Singularity University

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute

Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society

Susan Crawford, professor at Harvard Law and former special assistant for science and tech, Obama White House

Jamais Cascio, distinguished fellow at the Institute for the Future

Nazar Nicholas Kirama, president of the Internet Society chapter in Tanzania; founder of Digital Africa Forum

Alexa Raad, chief purpose and policy officer at Human Security, previously with Farsight Security and PIR

Andrew Wycoff, director, OECD Directorate for Science, Technology and Innovation

Larry Lannom, vice president, Corporation for National Research Initiatives

Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network

Frank Kaufmann, president of Twelve Gates Foundation and Values in Knowledge Foundation

Melissa Sassi, Global Head of IBM Hyper Protect Accelerator

Doc Searls, internet pioneer, author of “The Intention Economy” and co-founder of Customer Commons

Maja Vujovic, owner/director of Compass Communications in Belgrade, Serbia

Ben Shneiderman, founder, Human-Computer Interaction Lab at the University of Maryland

Grace Wambura Mbuthia, associate at DotConnectAfrica

Calton Pu, co-director of the Center for Experimental Research in Computer Systems, Georgia Tech

Judith Donath, faculty fellow at Harvard’s Berkman Klein Center

Amali De Silva-Mitchell, founder/coordinator of the IGF Dynamic Coalition on Data-Driven Health Technologies

David Krieger, director of the Institute for Communication and Leadership, Switzerland

Ethan Zuckerman, director, Initiative on Digital Public Infrastructure, UMass-Amherst

Joseph Turow, professor of media systems and industries at the University of Pennsylvania

Read the full report here: https://www.elon.edu/u/imagining/surveys/xiii-2021/

Elon University’s Imagining the Internet Center explores and provides insights into emerging human-digital innovations, dynamics, diffusion and governance. Its research holds a mirror to humanity’s use of communications technologies, informs policy development, exposes potential futures and provides a historic record. Pew Research Center is a nonpartisan fact tank that informs the public about issues, attitudes and trends shaping America and the world. Pew Research is a subsidiary of The Pew Charitable Trusts, its primary funder. Neither center takes policy positions.