Elon University

The 2017 Survey: The Future of Truth and Misinformation Online, Part 6 of 6

What will happen to trust in information online in the next decade?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Internet, Science and Technology Project in summer 2017 to share their answers to the following query; respondents were evenly split, 51-49, on the question:

What is the future of trusted, verified information online? In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas? 

This page holds a full analysis of the answers to the fifth of five follow-up questions:

What do you think will happen to trust in information online by 2027?

Among the key themes emerging from 1,116 respondents’ answers were:

– People have differing notions of ‘trust,’ ‘facts’ and ‘truth.’
– The rise of misinformation will continue and things will likely worsen.
– The next few years are crucial to the future of the information environment.
– Some people will be smarter in the future about finding and relying on trusted sources.
– There will be a divide between the savvy and not-so-savvy, and noisy, manipulative attention-grabbers may drown out the voices of veracity.
– New and old approaches to improving the information environment will be successful.
– Methods adopted to potentially improve things will cut free speech and elevate surveillance, changing the nature of the internet; the actors responsible for enabling change profit from control.
– Despite some work to improve things, there won’t be much change.


Summary of Key Findings Follow-Up Question 5

The rise of misinformation could continue and things might worsen. Regulation could cut free speech and raise surveillance. The next few years are crucial to the information environment.

The answers to this question were widely spread.

A professor based in North America wrote, “The level of trust will be very low.”

A policy analyst for the U.S. Department of Defense wrote, “Trust will improve.”

A senior research scholar at a top-ranked U.S. law school wrote, “Trust will remain high within filter bubbles of information; trust in adversaries’ bubbles may climb some.”

A professor and research scientist based in Europe commented, “Nothing will change. The balance between trust and distrust in information will be approximately the same.”

Susan Etlinger, industry analyst, Altimeter Research, said, “Technology has created an information arms race that is very similar to what we see with cybercrime and hackers. My guess would be that information ecosystems will behave similarly: periods of relative apathy punctuated by panic and outrage.”

Virginia Paque, lecturer and researcher of internet governance, DiploFoundation, wrote, “We are entering into a period of open and serious skepticism of any information, online or offline. I hope that this will quickly be followed by implementation of tools to address this both on and offline, and we will have recovered before 2027. The internet will require a quick response time [to this issue] to maintain its usefulness.”

Amy Webb, author and founder of the Future Today Institute, wrote, “We’ve become conditioned to share before we read all the way through a story, or before our common sense kicks in. We’re also slaves to our amygdalas, and this moment in human history is rife with economic uncertainty, geopolitical anxiety and wild stories about the future of transformative technologies like artificial intelligence and genomic editing. Given what we know to be true today, it’s clear that we’re on a dangerous path towards the future. Without significant changes, the public trust of quality news will continue to erode, which inevitably contributes to the financial demise of our once-lauded news organizations. Without trained investigative reporters, copy desks, producers and editors, we’ll find ourselves drowning in information but without any sense of which paddle or tree branch to grasp onto for help. Around 2027, people and the artificially intelligent systems that work alongside and augment them, could have to make decisions based on a cesspool of misinformation, misleading statistics, rumour, innuendo and whatever’s left of our trusted news organizations. It’s a bleak outlook, but here’s something important to keep in mind: that future hasn’t happened yet. The future has always been our shared responsibility in the present. When you stop to think of the critical role that you, personally, play in what’s over the horizon, it can be very empowering. And, by the way, that’s a good way to keep your amygdala in check.”

Scrutinizing the very notion of trust…
And, what is ‘fact’? What is ‘truth’?

The reasoning behind opinions on this question was wide-ranging. Some people discussed the very notions of trust and truth.

A professor of education policy commented, “This process of thinking about how you come to ‘trust’ is very different than ‘trusting’ information from Channel X. The notion of trust is so multifaceted. Trust that some information is ‘true,’ for example, is very different than trust that a particular source ‘speaks’ your language. The danger that psychology and economics researchers have shown us is that people like to hear what confirms preexisting biases. So, a form of trust can easily grow from confirmatory sources, but this is not a form of trust we would want to nurture as a country. What is often missing from this discussion so far is not technological fixes, but educational/behavioral fixes. We need to unplug, we need to listen to each other, and we need to be cautious about trusting because we are able to weigh evidence and seek out multiple sources that we can weigh against each other.”

A research scientist based in North America said the concept of trust is morphing, commenting, “With all the forms of manipulation possible with digital information, the boundary of fake and true will blur, and trust will transform with it.”

John Wilbanks, chief commons officer, Sage Bionetworks, replied, “Trust is a word that gets redefined by new generations with new access to information. So this isn’t about ‘trust’ but about ‘what we thought trust was before it got subsumed in an information flood.’”

Ian O’Byrne, assistant professor at the College of Charleston, replied, “Some bad actors will be prosecuted to make us believe something has happened to address these issues. [But] business, governments, and organizations will continue to spread these digital texts and tools and play it ‘fast and loose’ with our rights and liberties. Our online tribes and affinity spaces will continue to fracture and solidify as we find more in common with the collection of friends we have online than we do with the people on our street, state or country. Trust and truth will be different commodities for different individuals in and across these spaces. Everyone will have trust and truth. It will just mean different things for different people.”

An anonymous respondent wrote, “Trust isn’t often related to actual presence of factual content, at least not much. Trust in information will continue to decline and – paradoxically – this decline will occur despite the fact we have gotten better at stopping it.”

danah boyd, principal researcher, Microsoft Research and founder, Data & Society, wrote, “We are going to see more and more adversarial attempts to gaslight the public. This is not specifically about the internet. This is about who trusts what.”

Ian Peter, an internet pioneer, historian, activist user and futurist, replied, “Trust should deteriorate, and we all should become far more critical of what we read and are told, but I am not sure whether that will eventuate. We are more likely to carry on trusting information as if nothing had happened and just believe whatever we are told we should believe.”

David Manz, a cybersecurity scientist, replied, “Trust is an attribute of a relationship between two human beings. You don’t trust a chair to hold you. You trust the maker, the installer, the last user, etc. Similarly in the use of computers we might anthropomorphize them but at the end of the day it is trust between the human content creator, the distributor, the echo chamber, your peers and finally you the consumer.”

Garth Graham, an advocate for community-owned broadband with Telecommunities Canada, explained, “We will begin to realize that truth/lie is a false dichotomy, and that ‘information’ is a verb, not a noun. Also that narrative is an illusion. We are discovering that mind/consciousness depends on context. The internet increases our awareness that reality is a construct. It accelerates our capacity to apply that awareness. If we are lucky, by 2027, we will be able to practice that capacity as a learned artifice. We will be more conscious of the nature of consciousness. As we do this, our trust in ‘society’ as an organizing principle dependent on external authority will disappear. To be replaced by a reliance on self-organizing community as the primary principle of structural relationship and organization.”

Sean Justice, assistant professor at Texas State University-San Marcos, commented, “This is an ecosystem question if ‘trust’ is held as an open, relational term. In that sense, ‘trust’ will continue to be commodified in capitalist systems. But another question needs to be asked simultaneously: how is capitalism changing? Changes to the materiality of the ecology have yet to be theorized in a coherent way. Questions that rely on anachronistic (black-boxed) terminology might actually work against a sustainable dialog that might prove useful, however. In the end it might not matter too much that we understand what we’re doing; practice often (perhaps always) leads theory.”

The rise of misinformation will continue, and things will likely worsen

Many respondents argued that action must be taken or there will be dire results, and others expressed worry, disappointment, sadness or resignation.

Jerry Michalski, futurist and founder of REX, replied, “I’m afraid we won’t make much progress in a decade. It’s too early. The possibilities for havoc haven’t yet been played out, believe it or not.”

Andrew Nachison, author, futurist and founder of WeMedia, said, “We’re losing trust in everything, including the institutions we need to sustain a civic, civil, peaceful liberal society. We need to rebuild trust in everything. Short of that, by 2027 we will be stuck in a century of endless wars, terror, corruption and injustice. Who will lead us to a better future? That’s the real question we and our children will face.”

Tom Rosenstiel, author, director of the American Press Institute and senior nonresident fellow at the Brookings Institution, commented, “The last 30 years suggest that the forces of declining trust will likely continue. Three trends are merging here. As technology expands, the audience fragments further into its own channels by subject and point of view. And as that happens, political leaders – particularly those who feel the traditional media are against them – will continue to exploit that to inflame audiences for their own purposes.”

Dean Willis, consultant for Softarmor Systems, commented, “By 2027, online information will be as trusted as (the Russian news service) Pravda.”

An adjunct senior lecturer in computing said, “In 2027 people won’t even trust information from many of their family and friends, let alone online information that disagrees with their own view of the world.”

A professor of media and communication based in Europe said, “If no substantial measures are taken to avoid and prevent the pollution of the internet, online information by 2027 will be regarded as sewage.”

Tiziano Bonini, lecturer in media studies at the department of social, political and cognitive sciences, University of Siena, said, “Information online will be extremely polarized. Most of the information will be under the real-time review of millions of skilled users, while bad actors will continue to proliferate in subcultural contexts or specific clusters of people (those less skilled in media literacy). Authoritarian governments will centralize and control information online, producing fake news themselves. Trust will more and more rely on single persons (journalists and gatekeepers with a high reputation, maybe measured through new ranking systems) instead of single institutions.”

A number of people mentioned that emerging advances in technology – including the manipulation of audio, video, VR and AR information – make it clear that by 2027 it is highly likely to be even more difficult to tell whether information is fake or real.

David Conrad, a chief technology officer, replied, “It will continue to decline, particularly as technology evolves for modifying and/or generating fake video, audio and text that is essentially indistinguishable from real information.”

Judith Donath, fellow at Harvard’s Berkman Klein Center, and founder of the Sociable Media Group at the MIT Media Lab, commented, “There will be an arms race of fakeness, especially in audio and video, as the tools to make convincing artificial videos of people and events become commonplace and believable.”

A postdoctoral associate at MIT said, “In the next decade we will see the rise of false (but completely realistic-looking) audio and video segments on the internet. Textual misinformation (such as fake news) will be the least of our worries by then.”

A software engineer based in Europe said, “Given recent experimentation with spoofed speech and videos, people are going to have to pay much more attention.”

A professor at Harvard Business School said, “2027 will be much the same as today, but with even-more-sophisticated videos and other fakes.”

The managing partner of a technology consultancy wrote, “The notion of empirical facts based on physical law, observation, research and/or corroboration may be jeopardized as we merge the physical world with virtual world (fueled by pervasive AR/VR/mixed reality) and ‘facts’ can be created with the attributes of authenticity, corroboration and evidence yet false or harmful.”

A business leader based in Europe wrote, “We’ll see more and more deception – not only fake news as it exists now, but also more elaborate schemes, like proactively placing fake information in files that are in danger of being leaked.”

A planning information manager for an East Coast city said, “Trust will deteriorate significantly, with people breaking into even-more-moated constituencies for obtaining information.”

A professor of rhetoric and communication said, “People will be even more isolated in their silos, reticent to accept any information that does not cohere with their existing beliefs.”

A graduate researcher based in North America commented, “People will only trust the small range of sources that align with their viewpoints, and general trust will decrease dramatically unless we can curtail the spread of false news.”

A futurist/consultant said, “If we don’t engineer greater trust we may see a more balkanized/tribalized society, which could become ungovernable on the scales we are familiar with.”

Alan D. Mutter, media consultant and faculty at graduate school of journalism, University of California-Berkeley, replied, “I am terrified to contemplate the subject… Even as I cling to the hope that the arc of history will right itself, I doubt anyone will ever again be regarded, Walter Cronkite-like, as ‘the most trusted’ person in America.”

Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, wrote, “Trust will disintegrate.”

Adam Holland, a lawyer and project manager at Harvard’s Berkman Klein Center for Internet & Society, said, “It will in general decrease, as the sheer amount of what is available proliferates. Alongside this trend, information consumers will also increase trust in information from certain people or outlets. This trust will sometimes be warranted, but it will also be the result of avoiding cognitive dissonance or of virtue-signaling tribal allegiance.”

Fredric Litto, professor emeritus, University of São Paulo, Brazil, wrote, “It will be increasingly diminished, much as we currently witness, dividing every community into ‘tribes’ organized by ideological, economic, religious and other cultural criteria, thereby augmenting extreme stress in everyday life, with unpredictable, long-range consequences.”

An internet pioneer and principal architect in computing science replied, “Overall trust in all online activity will decline markedly over time, due to mass compromise of information systems. This will lead to legislation holding firms liable for negligent security practices. After that, trust will improve.”

A media networking consultant said, “Online information will be more resilient to hacking, but there will continue to be a growing number of sources of information, and not all will be reliable.”

A principal technology architect and author replied, “It will decrease – but there will be so few sources that it will not matter. The entire ecosystem will fall under the control of a few players, and ‘we will believe what we are told to believe,’ unless we have some alternate form of information, which will only be local. Hence we will end up in a situation where every piece of local knowledge goes against the larger picture we are being told to believe, but everyone will believe the larger picture because ‘it must be different everywhere else.’”

A technical writer said, “Trust in information will be nonexistent.”

The dean of one of the top 10 journalism and communications schools in the U.S. replied, “There is financial and political gain from false information, so it will not cease. The focus should be on how to counter it in new forms.”

An anonymous respondent replied, “All information will require repeated authentication by individuals and organisations and even then we can take it with a pinch of salt, or we can simply live in the hope that false information will stop :).”

An author, editor and journalist based in North America replied, “Not much good will happen in the next decade. We’re pretty doomed.”

A professor at a major U.S. university replied, “Trust will decline, creating incentives for a balkanized internet, with different parts of it offering differing degrees of encryption and consumer protection.”

Daniel Berleant, author of the book “The Human Race to the Future,” commented, “Trust will decline, as society becomes more polarized and more segmented into parochial special interests. If and when society turns a corner and prevailing values begin to favor the common good, trust may begin to increase, but there is no particular reason to believe this will occur soon.”

Bernie Hogan, senior research fellow, University of Oxford, said, “I’m sure we want to believe it will get better, but I assume that instead it will get more effectively manipulated. Those on the right are increasingly suspicious of institutions and those on the left are suspicious of many actors that do not pander to their specific cause. Personalised, demographically appropriate celebrities will be increasingly available to appeal to specific groups. A cataclysmic event such as a pandemic or world war might disrupt this trend, wherein we reevaluate the overall state of information distribution. Barring that, I imagine it will be business as usual, with people trusting what they believe in, in the most convenient, smallest doses possible. I mean we would much rather buy an intelligent agent that tells us what we want to hear than one that tells us what we should hear to engage in politics beyond the local level.”

Kelly Garrett, associate professor in the School of Communication at Ohio State University, said, “In 10 years’ time individuals and societies will have developed new strategies to keep online deception in check, and Americans will have found renewed faith in some type of authority… Unfortunately, some bad behaviors will have been normalized, and new threats to our ability to know what to believe will emerge. Ideological divisions will remain sharp, and beliefs will continue to fall along party lines. Foreign powers’ attempts at political manipulation via disinformation will be more commonplace. And technologies for fabricating audio and video recordings of events that never happened will be widely known, and regularly abused.”

Glenn Edens, CTO for Technology Reserve at Xerox PARC, commented, “Truth now seems ‘optional.’ The root of these issues is in publishing and consumption as well as education. We may get to a point where ‘media’ is largely ignored, especially in an environment where the boundaries between business and editorial barely exist anymore. With any luck, society will self-regulate and it will be cool again to verify sources and fact-check.”

Bill Woodcock, executive director of the Packet Clearing House, wrote, “I’m afraid the trolls will continue to ascend over the next decade, with national sponsors and a growing sense that it can be hip to be reactionary if you can play the left for rubes. Which leaves the schoolchildren of tomorrow unable to trust either textbooks or the internet.”

The next few years are crucial to the future of the information environment

Some respondents expressed uncertainty about what will happen to trust in the next decade or said the likely future will be determined by actions and events in the next few years.

David Sarokin of Sarokin Consulting, author of “Missed Information,” said, “Continued deterioration will set back science, journalism and liberty, but hopefully, we’re smarter than that.”

Evan Selinger, professor of philosophy, Rochester Institute of Technology, wrote, “How much trust is given to online information in 2027 will be determined, to a large extent, by whether society comes to its senses and recognizes: 1) That democracy requires quality investigative journalism; and, 2) That this, in turn, requires financially supporting the organizations and companies that can provide it. Algorithmic policing of content and generation of content shouldn’t be fetishized as forms of solutionism.”

Pamela Rutledge, director of the Media Psychology Research Center, said, “Trust in information depends on individuals taking action and responsibility on their own behalf. If we try to offload responsibility, we will give away freedom.”

A leading researcher studying the spread of misinformation wrote, “What happens in 2027 will completely depend on what happens in the next five years. Legislators on both sides of the U.S. political spectrum will need to get tough on regulating funding sources and outside influence, and work to increase transparency in marketing and advertising. There will also need to be a joint effort between technology companies, international organizations and public-advocacy organizations to find working solutions for some of the problems that have disrupted civil discourse and more moderate/centrist social and political viewpoints. If this doesn’t happen soon, it’s unlikely to get better by 2027.”

Giovanni Luca Ciampaglia, a research scientist at the Network Science Institute, Indiana University, wrote, “Different sectors of society will have to work together; this includes the press and the social media companies whose platforms connect society with information. And we will need to improve our understanding of these digital information networks to make this happen.”

A publisher said, “By 2027, either we will have devolved into a splintered isolated society or we will have collectively moved beyond the problem out of necessity.”

An international internet policy expert said, “Trust in 2027 will ultimately depend on the strength of the democratic governance models that exist. If these models remain then there will be trust.”

Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “Two futures exist for the internet. One option is that internet service providers decide that they will no longer offer the ‘free unfiltered service’ and provide only clean data. This level of service will clean out many bots, attacks and pornography. In the second, the internet service providers will continue to have two services: trusted and ‘anything goes’ internet. Businesses and individuals will desire information that is trusted – so a portion of the internet will have the ‘high-trust’ information.”

Paul Gardner-Stephen, senior lecturer, College of Science & Engineering, Flinders University, said, “Fake news is simply too easy to create, the general population too easy to influence, and the potential benefits of its application too great for power-hungry entities to ignore. It is only if we find ways to defuse these factors that we will see a long-term improvement in the situation… This is an arms race, just as with spam, malware and other digital blights. Battles will be won and lost, and although the war currently shows no sign of ending, the increasing awareness of manipulation will likely mitigate the overall impact of fake news over time.”

A senior principal and author wrote, “It depends on what interventions are made to encourage the public to separate fact from fiction. Right now, people trust whatever reinforces their worldview.”

O’Brien Uzoechi, a business development professional based in Africa, replied, “If misinformation continues to go on unchecked, trust will become a trash word in 2027. But, with appropriate laws and the right application of development through technological commitments there could be a turnaround in our trust in information dissemination by 2027.”

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, wrote, “The landscape of trust in information online by 2027 will continue to be mixed. There are reasons to project into the future that alkaline diets, science denial, conspiracy theories, hate and ignorance will not be abated in 10 years. On the other hand, a better understanding of biases and a decade more of the internet’s life may begin to create information resources whose trustworthiness is better established and more easily identified, as it has taken the press more than 500 years to, somewhat, achieve.”

An associate professor at a major Canadian university wrote, “We may be at a turning point now, in which the pushback against misinformation will result in a reduction, in which case the trust level will likely stay the same. However, if this issue is not tackled effectively, we will see a reduction in trust of online information and a partial return to more traditional notions of authority, based on known publishers and authors.”

Michael R. Nelson, public policy executive with Cloudflare, replied, “New business models and new techniques that harness AI, digital watermarking and more powerful forms of crowdsourcing will mean more information is verified and reliable. But there will also continue to be deep oceans of misinformation, doctored images and even computer-generated video that portray things that never happened in very convincing and realistic ways.”

A professor of law at a major California university said, “I would like to think we will have become very good by 2027 at discerning what is false or misleading and what is not. However, humans have always been taken in by frauds, scams and misinformation. Fundamentally, it seems unlikely that we will get much better at discernment on an individual level. Online services may become better at sifting some information out. We may have to come up with a way to better scale our legal system’s protections against false information.”

Greg Wood, director of communications planning and operations for the Internet Society, replied, “I am hopeful that systems and practices will be developed and deployed that improve the ability of internet users to better verify online information and its sources. However, it is not clear if the economic and other drivers to do this exist. And, practical incentives to spread false information will remain.”

A professor and researcher of American public affairs at a major university replied, “There will always be information sources at the extremes; the question is whether they continue to have influence.”

Mike Roberts, pioneer leader of ICANN and Internet Hall of Fame member, replied, “Trust in information will generally be higher, but perhaps not viewed as high enough.”

Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., responded in detail: “Trust in information online will erode if media outlets do not position themselves and their media vehicles to build trust-measures into their content. Just by generating information we will not, magically, generate tools to better regulate that content – any more than driving your car down a road would magically generate road signs and traffic signals along that road. Keeping in mind the need for open access, transparency and protection of privacy, online information sources will have to cooperatively generate new ‘rules of the road’ for online information. User-generated content will continue to explode in the next decade. Virginia Tech’s ‘Evaluating Internet Information’ uses five criteria to determine the trustworthiness of online information:

1. Authority (Who is this person? How is he or she qualified?)
2. Coverage (How relevant is this information? Does it fully address the significant issues associated with the topic?)
3. Objectivity (Does the information show minimum bias? Are there links or ads that show the author’s agenda?)
4. Accuracy (Is the information reliable and error-free? Is there some kind of fact-checking confirmation of the information?)
5. Currency (How recent is the information? When was the page last updated?)

“Readers need guidance, filters, standards. The information flood is here, and with it come truly positive outcomes and opportunities. But it also brings consequences, foremost of which is the need to manage that information – give the reader perspective and tools to coordinate the information with other information and ultimately evaluate its worthiness. For example, users can upload social media posts, links, images or other content to Check, an open, web-based verification tool developed by Meedan, as part of their verification process: ‘Once an item is uploaded, it can be color-coded and tagged by subject matter. Users can regularly update the status of their reporting, add notes, and include other details that might be useful.’ (Nieman Lab)

“By 2027, hackers and mischief-makers will use technology advances to create more confusion and work to obfuscate or distort the truth. Now is the time to build vigilance and standards into our information. Magical thinking or wishing this to get better is foolish. We must get to work now or by 2027 the nonsense one hears today – you can’t trust any information anymore – may, like Orwell’s ‘doublespeak,’ distort reality enough that people will assume it is true.

“As William D. Lutz has written: ‘All who use language should be concerned whether statements and facts agree, whether language is, in Orwell’s words, ‘largely the defense of the indefensible’ and whether language ‘is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.’ By 2027 online information can puncture illusions, but only with vigorous attention to building confirmation tools that underline facts and foster truth-telling.”

Some people will be smarter about finding and relying upon trusted sources

A share of respondents expressed the hope that people will evolve new ways of dealing with the increasing volume of information of all types in ways that serve the best interests of the common good.

Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “I really hope that the public will become much more discerning, sceptical and mindful of information quality, source and intent. Will it? Yes, for the mindful – the educated, the politicised, the angry, probably not for everyone else. I can’t foretell which will be in the majority – that depends on some other things like the economy, geopolitics, etc.”

Esther Dyson, a former journalist and founding chair at ICANN, now a technology entrepreneur, nonprofit founder and philanthropist, said, “I’m optimistic because I’m an optimist. However, there is not a lot of evidence [to support that] right now.”

Bob Frankston, internet pioneer and software innovator, said, “Ideally there will be a more-aware public less apt to accept ‘the internet says’? Or will there be more acceptance of one’s tribe as authority?”

A member of the Internet Architecture Board said, “More people will be more sophisticated in how they consume information; they will be less likely to trust it blindly (and that’s a good thing). Some people will remain relatively unsophisticated, and thus open to manipulation. The proportion between sophisticated and not will matter, a lot.”

Matt Mathis, a research scientist who works at Google, said, “We will get smarter at separating facts from alternate facts.”

Jonathan Grudin, principal design researcher, Microsoft, said, “People will develop a more sophisticated awareness of where to find trustworthy information by 2027. This may have taken a century for print media; we can get there faster.”

Jamais Cascio, distinguished fellow at the Institute for the Future, said, “There are multiple scenarios. We could be so mistrustful of online information that we look for alternative media of communication for trustworthy material, each potentially worse than the last; we could successfully develop tools and norms to push back against falsehoods (e.g., reliance on general public camera swarms as verification of video). We could be so polarized that people will trust information that comes from ideologically aligned sources and everything else is garbage. I suspect trust in information will be greater by 2027, largely because it will be easier to block out information and information sources that we don’t like.”

Tim Bray, senior principal technologist for Amazon.com, wrote, “I believe that the people pushing the lying stories also have an explicit political agenda, and once that agenda is discredited, the effect on lying-as-a-strategy will be salutary.”

An internet pioneer who has worked with the FCC, ITU, GE and other major technology companies commented, “People will rely on trusted sources. The rest will be suspect.”

Charlie Firestone, executive director, Aspen Institute Communications and Society Program, commented, “People will be skeptical of information online, but most (or at least many) will have the skills to determine the truthful sources if they care to.”

Iain MacLaren, director of the Centre for Excellence in Learning & Teaching, National University of Ireland-Galway, commented, “The default position, which is taking shape even now, will be that of not taking seriously information that is not backed up by evidence or which is part of an obvious ‘high shock’ deluge. Just as we have developed the ability to screen out many of the ads that plaster websites, so too will we see much of this type of ‘information’ as electronic noise.”

A researcher based in North America replied, “We are still living in the wild west of online information. No doubt, entities will be increasingly sophisticated in their ability to create ‘realistic’ fake information. But at the same time, people will be more aware of the phenomenon and will seek reliable markers for credibility. Technology tools will support this. But also traditional methods, such as information/digital literacy instruction.”

Geoff Scott, CEO of Hackerati, commented, “I hope parents and educators will begin teaching their children the critical thinking and investigative skills needed to render fake information harmless, but it will take several more decades before enough people think independently enough to have an impact.”

A consultant based in North America said, “Trust will decline overall. But there will be sources that enjoy high degrees of trust among particular audiences. Trust in media, like its production, will likely continue to decentralize.”

A professor of information systems at a major technological university in Germany commented, “We will gradually get up again by 2027, after falling heavily between 2017 and 2022.”

Amber Case, research fellow at Harvard Berkman Klein Center for Internet & Society, replied, “Right now we’re in a serious emotional time with information. It can trigger intense feelings and reactions that make it difficult to make sober choices or take a step back. We’ll probably learn a bit more about this and become accustomed to it over time. In the same way that a fake news story from 1860 might look ridiculous to us now, we’ll probably feel the same way about news stories posted in 2017 when we look back on them in 10 years. The trick is to be able to have that perspective during the moment we read the news story.”

Axel Bruns, professor at the Digital Media Research Centre, Queensland University of Technology, commented, “I would expect people to have formed a considerably more sophisticated, differentiated understanding of the relative trustworthiness of different (online as well as offline) information sources.”

A principal network architect for a major edge cloud platform company replied, “There will be more garbage in all likelihood, but its social and cultural currency will decline.”

A research scientist said, “It will improve, and people will be more aware, and more critical.”

A senior solutions architect for a global provider of software engineering and IT consulting services wrote, “Hopefully, the public will become more skeptical of online sources, and will gravitate toward those sources that provide more reliable and helpful information. Hopefully, people will learn to check sources online in one or two ways rather than relying on the top search result or most ‘liked’ item.”

A librarian based in North America said, “People will get smarter about the internet by then. Most of the older folks who don’t know the difference between clickbait blogs and real newspaper sites will die or be out of power, and people who have grown up in the environment and have digital literacy will be in charge. It’s the responsibility of teachers, librarians, etc., to teach these skills to students NOW so that when they grow up they are information-literate.”

A professor at a major U.S. state university wrote, “Not sure about the trustworthiness of information, but at least, people will be better trained by 2027.”

The managing editor of an online fact-checking site replied, “There will be a way to parse out real from false by 2027. People always do adapt. It’s just important to do it quickly.”

Rich Ling, professor of media technology, School of Communication and Information, Nanyang Technological University, said, “Society faced somewhat similar issues with the development of the printing press. In that case, there was the development of mechanisms that worked to enhance the positive sides of the development while hindering the negative effects. That interaction took many decades (and perhaps centuries) to work out. Hopefully we will be able to address this issue in a reasonable way on a shorter time-scale.”

There will be a divide between the savvy and the not-so-savvy, and noisy, manipulative attention-grabbers may drown out the voices of veracity

Amidst the responses there was much discussion about the likely widening of the divide in the information environment between the most educated and sophisticated members of the public, who take the care to seek the most reliably sourced information, and those less likely to do so. This could create a “trust divide,” possibly lowering the efficacy of public discourse in maintaining a strong, well-informed public able to competently participate in creating the best future possible for all in what appears to be a struggling and contentious political system.

Alan Inouye, director of public policy for the American Library Association, commented, “I am concerned about differential impacts. More-affluent people with graduate education will continue to access systems that are mostly trustworthy. Other socio-economic groups could be subjected to less-robust systems, and importantly, the gap between the haves and have-nots grows – it is a new kind of digital divide – the trust divide.”

Giacomo Mazzone, head of institutional relations for the World Broadcasting Union, replied, “The world will be divided in do-knows and don’t-knows. Only the first ones will be able to find trusted sources.”

Henning Schulzrinne, professor and chief technology officer for Columbia University, said, “There will be two worlds – one world of people and institutions that value factual accuracy, with correction and reputation mechanism, and the other where anything goes. The hard part is not distinguishing truth from malicious fiction but choosing to ignore the latter.”

A retired local politician and national consumer representative replied, “Educated people will become more circumspect and select information sources they trust. The majority will believe anything they chance upon.”

James Schlaffer, an assistant professor of economics, commented, “People will adjust to the amount of available information better. Also, the people who only want news from their worldview will double down on their own narratives.”

Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, wrote, “My guess is that the more popular, click-bait-y, online sources and streams will continue to have audiences (as tabloid or sensationalist, celebrity culture outlets always have). But the great online ‘pool’ of information will increasingly be distrusted by opinion leaders, decision-makers, institutions, and experts, who may need to create a separate ‘ecosystem’ of high-status – elite, if you will – and reliable sources for creating, sharing and debating information away from the populist ‘roar.’ Perhaps it will look a bit more like book publishing and libraries (with the ‘curation’ that implies), perhaps enclosed by paywalls (like academic publications?). But without an arena for trusted information to be created, circulated and debated in a fair way, there is little chance that a pluralist society can succeed into the future.”

A senior researcher and distinguished fellow for a futures consultancy wrote, “We’ll have a great array of trusted services for high-quality information. But many populations will still lack critical reading and thinking skills to discriminate between truth and fabrication.”

Erhardt Graeff, a sociologist doing research on technology and civic engagement at the MIT Media Lab, said, “Most likely, between 2017 and 2027, we will see increased inequality when it comes to trust in information online and the ability of certain people to leverage the information ecosystem to serve their needs and to make change in the world. There will be elite classes who are structurally positioned online and offline to comprehend and to access the most reliable nodes in the overall information ecosystem, benefiting from existing social and cultural capital and resources like money, education, and advanced tools. And there will be underclasses whose information ecosystems lack connections to diverse, trustworthy people and news sources, and/or who have simply been left behind in their understanding of improvements to their information ecosystems – their lack of trust will mean they cannot exploit this new landscape as fully empowered citizens.”

Serge Marelli, an IT professional who works on and with the Net, wrote, “Stupid people will believe in what they want to believe: alternate facts, lies, ‘alternate media,’ populist propaganda.”

An ICT for development consultant and retired professor commented, “As it is, an educated person knows how much to trust online; the better education levels, the greater a discerning individual – so one must concentrate on internet awareness and internet education, else trust will go down.”

An internet pioneer replied, “Knowledgeable people will have semantic Web tools to check the plausibility of information from sources of unknown quality. (If Netscape hadn’t invented JavaScript, we would already have had such tools.) For the rest there will be reliable and unreliable sources online, just as there are offline.”

A professor of law at a state university replied, “There will be balkanization. Scientific and professional information will likely continue to be of high reliability thanks to professional communities policing it, while information in the public sphere will degenerate in its veracity.”

A user-experience and interaction designer said, “A certain less-sophisticated type of user will always mistrust what they see/hear, preferring their own echo chambers. Perhaps if this was a required topic of education (critical thinking 101) in all schools, that might improve.”

Some predicted most people will be too lazy or too gullible to avoid being fooled by misinformation. A retired senior IT engineer based in Europe wrote, “Finding information you trust will probably be very time-consuming.”

A professor of management based at a university in the U.S. West replied, “George Orwell described it perfectly in his novel ‘1984,’ which turned out to be somewhat late but is now technologically within reach.”

Several participants in this canvassing made references to the dystopian-future movie “Idiocracy.” An IT director wrote, “See the film ‘Idiocracy.’ It is prophetic.” The story line shows the quality of American life descending drastically in the future as the majority of people gradually evolve into an uneducated, crass population of consumers led by corporate-sponsored idiots. Ryan Sweeney, director of analytics, Ignite Social Media, wrote, “Trust in information 10 years from now relies on our actions today. If we can curb these negative trends and rebuild the marketplace of ideas, our trust in information – and each other – will vastly improve. However, if we continue our current trajectory, the film ‘Idiocracy’ will be reclassified as non-fiction.”

A research scientist wrote, “Trust is being eroded by lies, and the blurring of the boundary between advertising and content. I expect online information to be reduced to entertainment for much of society.”

An institute director and university professor said, “By 2027, trust in information will be moot. The internet will be the equivalent of the ‘Jerry Springer Show’ broadcast from the top of a nuclear waste dump – thoroughly toxic. People won’t think in terms of trust. They’ll just seek entertainment.”

A professor of media and communications based in Europe wrote, “The mass of the population will continue to believe what they like (in God, the Market, homeopathy or fairies at the bottom of the garden).”

An analyst for one of the world’s leading technology networking companies predicted that in the future it will be even more difficult for voices of veracity to get their messages heard amidst the clamor raised by the attention-snatching purveyors of controversial misinformation, writing, “It will be harder to get to the diverse opinions held among my cohort as we are less active online than other groups.”

A professor of economics based in North America said, “As the amount of information increases people will be overwhelmed. Trust will fall.”

Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “When people are bombarded with contradictory and confusing information, they often fall back on a strategy of just going with their gut feelings. While that’s an entirely reasonable and understandable reaction, it’s also good for manipulators. When there’s much noise, only what’s loud and simple gets heard. That’s not necessarily what’s right. Thus in the absence of dramatic changes reining in laissez-faire capitalism, I expect trust in information overall will continue to worsen.”

A partner in a services and development company based in Switzerland commented, “We are in a race to provide users with tools for informed trust against a trend pushing them toward a negligent attitude. I expect a move towards informed trust and informed mistrust. There is of course a great worry that negligent trust and negligent mistrust dwarf the informed and diligent attitudes on the scale of society at large. Great upheavals are possible… possibly leading to the proliferation of violent regimes.”

A retired educator wrote, “Collective definitions of reality must continue in one way or another for society to exist. Social dissolution is possible. Or enclaves of information hubs.”

A journalist who writes about science and technology said, “People will be extremely wary of media, and there will be extreme balkanization of information sources. Expect the rise of FOX Nation vs. Washington Post eggheads, etc.”

Jack Schofield, longtime technology editor at The Guardian, now a columnist for The Guardian and ZDNet, commented, “News sources that distribute false information have a vested interest in discrediting more-honest news sources – for example, Fox News benefits by discrediting CNN and the New York Times. I envisage more and more sources appearing over the next decade, each putting its own distinctive spin on the news, while trying to discredit rivals in similar niches. The result could be more sources catering to fewer people, with less agreement between sources about even basic facts. Once you’ve discredited the old ‘gatekeepers’ like the New York Times, the Washington Post and the Wall Street Journal, anything goes.”

Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “Brands will continue to determine the public’s perception of trust. But perception and reality often diverge.”

Diana Ascher, information scholar at the University of California-Los Angeles, wrote, “By 2027, we’ll either place little stock in the information we encounter, or we’ll succumb to the Borg.”

Many expect a continued fragmentation or balkanization of online information communities. A principal research scientist based in North America commented, “Trust within internet communities will stay stable. Trust between such communities will continue to erode.” A vice president for stakeholder engagement said, “People will have stopped trusting the internet for general information and remain within their own walled gardens or trust communities.” An activist/user wrote, “There will no longer be a single ‘online,’ there will be many official and unofficial ‘onlines,’ with proportionally lesser or more trust attributed to them.”

An associate professor of sociology at a liberal arts university replied, “Information and users will be even more siloed in 2027 than they are today. The internet is not an open landscape, but a platform that increasingly consists of walled gardens of like-minded individuals. It is difficult to imagine how the structure of incentives might be changed for both users and providers in a way that would change this fact in the next decade.”

An attorney for an online civil rights organization said, “Trust in 2027 will depend more upon our communal lives, education, economic justice and opportunity – the kind of society we have or at least publicly aspire to – than any technological innovation that might reduce freedom of online speech.”

A copyright and free speech artist-advocate wrote, “Our commonality will continue to decrease as we will live more and more in our own little bubbles.”

Stephen Bounds, information and knowledge management consultant at KnowQuestion, predicted in great detail the following potential future scenario: “By 2027, trust in science and journalism without a known personal endorsement will have continued to erode. Governments and commercial organisations will all either own or lease access to significant aggregations of on-demand media. Traditional media advertising will be all but obsolete. Instead, the ‘influencers’ that star in these channels will be paid to pass on information to their followers. However, since this is common knowledge, their views will be treated with suspicion (thus repeating the cycle of increasing media-savviness seen in the previous iteration of advertising through mass media).

“A small but increasingly influential band of information providers known as ‘patronus’ will rely exclusively on no-strings-attached support from patrons. They will pride themselves on their fierce independence and champion issues of political and social importance that receive intense focus from their followers. Their success rate will be higher than the most highly-paid political lobbyists. Patronus will often be subject to information warfare attacks and lawsuits from disgruntled parties, and will be forced to invest in countermeasures as part of the cost of doing business.

“The most successful will have a staff to vet requests for coverage by governments, scientists and commercial organisations. Only a small percentage of these requests will be covered on the ‘main channels,’ but additional ‘side channels’ for niche topics of interest will be curated and published by their staff. Five years in, a patronus will suffer a damaging hit to their reputation when a second-in-command is bribed into publishing side channel content beneficial to The Walt Disney Company. In countries that outlaw or fail to develop a patronus culture, the shift towards authoritarianism will be marked. In the absence of reputable sources of information, citizens will tend to find a single outlet for information and consume it unquestioningly, reasoning that ‘they are all as bad as each other anyway.’ This will make government and corporate manipulation of sentiment easy to achieve.”

New approaches to improving the information environment will be successful

Some respondents were hopeful that one or more solutions are likely to enhance the public’s trust in online information by 2027. Among the likely remedies they suggested were more public support for good journalism, enhancements to information management or filtering systems, a clear labeling or ranking of trusted sources, and new technologies, policies, regulation and education.

Larry Keeley, founder of innovation consultancy Doblin, wrote, “Parts of it will get worse. But most of it will get much better.”

Jane Elizabeth, senior manager American Press Institute, said, “The current downhill trajectory will reach rock-bottom soon and prompt more serious efforts to reverse the trend. In 10 years, we can and should be able to restore some of the trust that’s been eroded.”

J. Nathan Matias, a postdoctoral researcher at Princeton University, previously a visiting scholar at MIT Center for Civic Media, wrote, “In our time, people already take billions of actions every month to manage and filter trusted information. By 2027, citizen behavioral scientists will routinely test the effects of these actions at scale, developing adaptive knowledge on effective ways to support public understanding in the face of rapidly-evolving misinformation.”

Nigel Cameron, technology and futures editor at UnHerd.com and president of the Center for Policy on Emerging Technologies, said, “There will have been much clarification of branded/trusted sources vs. unreliable, so there should be an increasingly healthy situation.”

Michael J. Oghia, an author, editor and journalist based in Europe, said, “If Wikipedia can be used as a benchmark, I’ve witnessed how it went from being laughable to practically a first-stop for legitimate and respectable information gathering in less than a decade. The fact is, while there is more content available to muddy the water between fact and fiction, new technologies, policies, education and human resources are being allocated to address this issue, so I’m optimistic it will improve.”

William Anderson, adjunct professor, School of Information, University of Texas-Austin, replied, “Trust will be both better defined in practice and under constant review.”

Barry Wellman, internet sociology and virtual communities expert and co-director of the NetLab Network, said, “We will have better means for verifying information.”

Some people put their faith in better-supported journalism and education. A consultant said, “People who want information they can trust will fund journalism they can trust. Trust networks and leaderboards rating trust factors will be commonplace, but there will still be those looking to hack the newer systems. As always.”

The CEO of a major American internet media company based in New York City replied, “Trust in information will be much higher in a decade. The lack of trust is mostly the fault of old media gatekeepers who think they should determine what people see. They created the opening for Trump; the distrust in media pre-dates his rise and enabled it. The millennial generation and digital news outlets will create a new kind of trust in the next decade, based on being humble guides to help people navigate the world, who ‘show their work,’ and are more transparent. The old gatekeepers are in the midst of peak moralizing right now and don’t realize they are part of the problem.”

A graduate researcher at a U.S. university wrote, “There are two ways this could go. We could try to regulate or program our way out of this, which probably won’t work, and you’ll see a massive dip in trust in information. Alternately, we could reinvest in information literacy and teach people how to navigate this new environment on their own, give them their confidence in seeking information back along with the tools to do so well, and let people rebuild trust themselves.”

Some people have faith in technological innovation or human-tech combinations. An anonymous respondent based in North America said, “Technologies not on the radar now will be applied.”

An author and journalist said, “We will cede much of the work of trusted information to AIs.”

A professor based at a university in the Western U.S. wrote, “If we think about emerging technologies such as VR and experiential spaces (i.e., spaces where audio and visual accompany taste, smell and even feeling), and if we consider these might be available in realtime very soon, then it’s quite easy to imagine a public that experiences information as it comes into existence. That could, then, give rise to a new level of trust wherein information and the experience of its creation can be simultaneously felt and shared.”

A Ph.D. candidate in informatics commented, “Systems such as internet browsers will have information verification built into them.”

An assistant director for a digital media and learning group at a California university said, “We will develop mechanisms that will help us assess whether information is trustworthy. We will also become more sophisticated technologically to be able to tag, share or to verify information.”

Andrew Dwyer, an expert in cybersecurity and malware at the University of Oxford, commented, “We will have developed frameworks of trust recognition, with some sort of verification body that attests that this has been ‘fact-checked’, in similar ways to emerging organisations have now. These will be plural due to the multiple perspectives required in democracies, yet others may verify another and so ecologies of trust will emerge that individuals and societies can ascribe to.”

Howard Greenstein, adjunct professor of management studies at Columbia University, said, “Systems will develop where facts and origins will be sourced, so readers know where the information originated. This will exceed hyperlinks and become more like a line-by-line ‘pedigree’ for articles. Hopefully these will create incentives to work with the most accurate sources.”

Alexander Halavais, associate professor of social technologies, Arizona State University, said, “We will see the development of metrics for determining the validity of news and information sources. This is a problem that we have already approached in search, with the need to filter ‘real’ responsive search results from attempts at spam or other misleading information. There is value in finding trusted information, and I suspect that people will seek ways of extracting that value, by certifying or rating the validity of claims. Unfortunately, as we have seen with Politifact and Snopes, not everyone will agree about who those certifying authorities should be.”

A director of civic technology said, “It will be commonplace for major social media platforms to employ teams to take on propagandists, just as they employ teams to fight spam. It will be slightly more burdensome to speak online, as automated systems proliferate.”

Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, DC, replied, “One could expect to see reputation brokers, be they private enterprise (e.g., in the U.S.) or the state (e.g., in China).”

An eLearning specialist said, “It seems likely that – by repeatedly proving to be untrue – the misinformation and the sources that promote it will be proven unworthy of public trust.”

Maja Vujovic, senior copywriter for the Comtrade Group, said, “Trust will gradually diminish in the short and medium terms, necessitating that new filtering mechanisms be devised, tested and applied. The solutions will not come from governments, but from technology and mass human effort, akin to Wikipedia. Many people – those who can afford to – will opt to pay for access to reliable information. But the sheer number of those who cannot, coupled with ethical considerations, will spawn technological solutions and new standards in information quality control. The whole society will need to step up and this will result in a new norm of what it means to be literate.”

William L. Schrader, a former CEO with PSINet Inc., wrote, “Much like HTTPS helped provide perceived improved security for financial and other information, I suspect other technologies and organizations will be created which validate that the ‘publisher’ is of very high or very low repute. That report can also be hacked, but it will be noticed, and published. In short, there is so little trust in online information now that trust may actually go up.”

Joshua Hatch, president of the Online News Association, said, “Trust will be improved, as there will be more-sophisticated consumers and more social awareness, but the problem won’t be completely solved.”

Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “Trust will improve. It already has – e.g., the recent election results in England and France. Most people adjust fairly quickly to discounting false and misleading information once they recognize it as such. The trustworthiness of information will be judged in the future, as it has always been – by the reputability (in the eye of the beholder) and competency of the source.”

Wendy Seltzer, strategy lead and counsel for the World Wide Web Consortium, replied, “The important thing will be the end-to-end nature of trust: Can we add enough source-to-reader indicia that enable readers to determine whether to trust the source and its reliability?”

Micah Altman, director of research for the Program on Information Science at MIT, commented, “Reaching an equilibrium by 2027 is unlikely, and advances in technology will yield cycles in information trustworthiness as technologies for manipulating and verifying (respectively) information advance, and society reacts to them. In the mid-term, distributed ledger technologies (e.g., blockchain) will provide a powerful tool for establishing verifiable information in some scenarios. In addition, as a result of trends in information privacy in Europe, trust in the management of personal information online may be improved.”

The dean of a major university’s school of information science commented, “Things will improve if there is a systematic effort to promote information literacy.”

A futurist and CEO said, “International standards and protocols will help, and broad ethical frameworks like the Earth Charter and the UN Principles of Responsible Investing will be recognized and enforced.”

Jennifer Urban, professor of law and director of the Samuelson Law, Technology & Public Policy Clinic at the University of California Berkeley, wrote, “It seems unlikely that we will get much better at discernment on an individual level. But online services may become better at sifting some information out, and we may have come up with a way to better scale our legal system’s protections against false information.”

A lead experience strategist predicted, “There will be multiple offerings for protection of identity services, and, ideally, open-source-based options that major vendors (Google, Amazon, Apple) support, based in blockchain or beyond.”

Andreas Birkbak, assistant professor, Aalborg University, Copenhagen, said, “There will be more online brokers of information around who rely on a reputation of trustworthiness to attract an audience.”

Rob Atkinson, president, Information Technology and Innovation Foundation, wrote, “Trust will increase by 2027, as technology improves and as more people are better able to differentiate real from fake information.”

John King, professor, University of Michigan School of Information Science, said, “Caveat User: We’ll learn a lot about trust, which we think we understand now, but we don’t.”

Stowe Boyd, futurist, publisher and editor in chief of Work Futures, said, “I predict a rapid increase in ‘information trust’ online that will directly track the rise in capabilities in AI. Of course, we have to trust the AIs too. Quis custodiet ipsos custodes?”

Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, wrote, “I will only predict, given the speed at which things are moving technologically, that by 2027 cyber technical means and consequent social and political challenges will have emerged that we haven’t even imagined today.”

Dan Gillmor, professor at the Cronkite School of Journalism and Communication, Arizona State University, commented, “If we do this right, people will be better able to sort things out for themselves, using critical-thinking skills and new tools that will be developed to help.”

Many people who hope for solutions also question whether those solutions are likely to succeed.

An author and journalist based in North America wrote, “Can third-party information be certified, trust-filtered, authenticated? Could there be systems? Would there be competing certifications?”

A professor and researcher based in North America said, “I don’t think we will see confirmation bias or conspiracy theories go away. Trust in information will depend on trust in institutions.”

A retired university professor said, “There’s way too much hacking going on (thanks to the NSA’s irrational belief that only they are smart enough to use their backdoors) for any sensible person to trust online information to be really secure or accurate.”

A founder and research scientist commented, “Trust in information will, perhaps, be improved, but improving the overall quality of information doesn’t do anything to address our natural human shortcomings (heuristics, biases, and the effects of information overload).”

Nick Ashton-Hart, a public policy professional based in Europe, commented, “Trust will increase, but the processes that increase it will also reduce the ability of new forms of information dissemination to become publicly accessible as the costs of compliance reduce the ability of the private sector, especially SMEs, to innovate.”

Edward Kozel, an entrepreneur and investor, replied, “There will be fragmentation of ‘trusted ecosystems’ as national interests (countries) all struggle with the issues in different ways.”

An anonymous respondent wrote, “Trust in information will go (has gone) the way of trust in advertising. People will more and more rely on input from each other. And it’s not just online. Scientific research results appear one week, get discredited the next. There’s incentive to be first, regardless of accuracy. That’s all connected to capitalism, competition and to many social values in the United States.”

Methods adopted to potentially improve things will cut free speech and elevate surveillance, changing the nature of the internet; the actors responsible for enabling change profit from control

Requiring a higher level of accountability for the sources of online information is expected to kill the ability for anonymity. A research scientist from Latin America replied, “Every piece of work will be untrustable unless a chain of signatures and validations can be traced to the origin.” And a professor said, “Information can only be trusted if its full provenance is proven to be trustworthy.”

This is seen as likely to stifle some needed anonymous free speech or drive it to underground spaces.

A journalist wrote, “A likely scenario will be that we will have better tools and systems in place to combat fake news and false information. However… the free and open net that we know today could be history due to the end of net neutrality, massive and invasive surveillance of everything happening online, one or two dominant online players such as Facebook ‘eating the web,’ people abandoning the web for native apps due to one of these things (massive surveillance, end of net neutrality), et cetera.”

A leader of internet policy based in South America argued, “It will be the death of privacy online.”

An internet security expert based in Europe predicted people who wish to avoid this will have to travel the Dark Web: “The all-pervasive nature of surveillance will lead to an underground shadow IT with nobody as a recognised administrator.”

Tom Worthington, a lecturer in the Research School of Computer Science at Australian National University, commented, “We may see subscriber-based services for information verification replace ‘news’ services. There is a risk that governments will try to regulate and force their neighbors to also do so, as Saudi Arabia is currently doing to Qatar.”

Frank Odasz, president, Lone Eagle Consulting, wrote, “By 2027, we’ll have learned the public internet has been soiled, and walled gardens are necessary to separate those who desire to build trust and a better world from those who seek to destroy what others have built, and/or seek to profit at the expense of others. A reputation economy is evolving where it matters what you put online (and then can’t delete). But history teaches us that civilization has cycles, and we’re seeing a seeming loss in America of decency, ethics and honesty and the world sees mercenary interests are in control that threaten civil society at all levels.”

Some said platform companies will not work in favor of the alleviation of misinformation, instead supporting the “comfort” of online echo chambers.

A postdoctoral scholar at a major university’s center for science, technology and society predicted, “There will be a further consolidation of power in the online landscape, where relatively few companies control much of the content online. We are already well on the way to this future, with just a few companies accounting for most online traffic. ‘Trust’ is not currently a priority for these companies, and given the market for untrusted but comforting information, the trust environment online will continue to deteriorate.”

Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University, said, “There will be less trust overall by 2027 for two reasons. First, fake news is like Gresham’s law: the bad drives out the good. As there is more of it, it becomes harder and more time consuming to differentiate between the good and the bad. The second is the growing criticisms (some justified, much of it not) of mainstream media. In both cases, it is important to remember that there are specific groups that benefit from both of the above (fake news and criticizing mainstream media), and thus have strong incentives to keep doing more. Unless we can find ways of undercutting those incentives, fundamentally changing the cost-benefit, we’ll just keep seeing more and more fake news and misinformation.”

Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “There will always be grey areas, but current systems that reward people who make up stories from whole cloth for political effect (via site hits and advertising, for example) must become illegal acts – stating an opinion is protected speech and should continue to be – spreading lies as truth has always been regarded as an unethical act, and current systems that reward, rather than punish, such acts are clearly eroding trust.”

A researcher of online harassment who works for a major internet information platform replied, “It’s extreme in either direction – in one direction, we’re totally fine and in another, we’re totally ******. If I err on the side of optimism, we can create spaces that facilitate media education, that move us away from solely SEO-driven initiatives that serve up content. I think we can make this better, but it would require really putting pressure on social networks to work with us, outside of governmental legislation.”

Eugene H. Spafford, internet pioneer in cybersecurity and professor at Purdue University, commented, “Trust will become more bimodal – some sources will be more trusted as correct by the majority but a significant percentage of people will continue to view dark conspiracies and fringe theories, thus disbelieving the better sources. This will be unevenly written globally, with some countries more prone to such fringe beliefs.”

Alexios Mantzarlis, director of the International Fact-Checking Network based at Poynter Institute for Media Studies, commented, “It is impossible to know. To give but one number: 10 years ago Facebook had 58 million monthly users; it now has 2 billion. Shouldn’t we expect an equally dramatic evolution in our online information landscape in the next 10 years?”

Despite some attempts to improve things, there won’t be much change by 2027

A share of respondents said the level of trust in 2027 will be about the same as it is in 2017.

Michel Grossetti, research director, CNRS (French National Center for Scientific Research), commented, “There will be a competition between the true and the false, as always.”

Filippo Menczer, professor of informatics and computing, Indiana University, said, “There will be a continuous arms race between increasingly sophisticated abuses and countermeasures. Trust will not be completely restored nor completely lost.”

Ari Ezra Waldman, associate professor of law at New York Law School, wrote, “Like today, people will trust information that confirms their biases. They will not trust information that challenges those biases.”

Kenneth R. Fleischmann, associate professor at the University of Texas-Austin School of Information, wrote, “ICTs will continue to evolve and multiply. Fora for sharing and receiving information will continue to multiply. Fragmentation of discourse and development of filter bubbles will likely continue to increase. It’s never safe or a wise idea to predict the future, but I see no reason (apart from some kind of nationwide or global catastrophe) that our political and information environments would become less fractured and polarized over the coming decade.”

Brian Cute, longtime internet executive and ICANN participant, said, “Users will have more tools that offer trust in information online. At the same time new techniques to deceive or promote fake news in new forms will be developed. It will continue to be a ‘mixed bag’ of trust and deception with individual responsibility being the most important element to protect the user.”

George Siemens, professor at LINK Research Lab at the University of Texas-Arlington, commented, “Within a decade, the amount of misinformation will increase due to bots and propaganda, but so will mechanisms to intentionally identify and isolate false information.”

A research professor of robotics at Carnegie Mellon University wrote, “Most people will have a few sources that they trust, inherently, but they will continue to use other, unverified sources to support their inherent biases.”

A senior research fellow based in Europe said, “Trust online will always reflect broader trends in society, which is to say, increasing disintegration and inequality. There will always be critical, information-savvy people, but the policy arena will revolve around the majority of people who actually lack media literacy.”

A research psychologist commented, “There will be a wide variety of trustable and not trustable sources.”

David Weinberger, writer and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, said, “At best, we will have learned that while the Net looks like a publishing medium, it is not. It is a conversational medium in which ideas are promulgated without always having been vetted. We will become more ‘meta’ in our approach and recognize that we have a responsibility to question the truth and validity of what we see. That’s always been our obligation but we have spent centuries outsourcing it to authorities. By 2027, perhaps we will recognize that it’s up to us. It is the most basic and urgent of collaborative tasks the Net requires from us. Taking this meta step would be a significant achievement in the history of civilization. Maybe we’ll get there.”

A CEO and advisor on many technology projects wrote, “Trust will be facilitated by technologies, yet those who would subvert it will also increase efforts to defraud. It’s a persistent Sisyphean battle.”

Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years’ experience at the BBC, Ofcom and as a digital consultant, wrote, “Trust in information online will be largely what it is today – that is, most people have trust in most of what they consume, but they trust some sources more than others, and can occasionally be fooled. The big question is whether trust in information from public institutions will have improved or declined – if the latter I fear our polities will be in an even direr state than they are today.”

A futurist based in North America said, “It is unlikely that any rules would be adopted and enforced globally – and these are the only rules that could eventually help.”

A media director and longtime journalist said, “People will move on from current trust issues/opportunities/liabilities to new ones. BUT data validation will be much easier to perform.”

An anonymous respondent from the Berkman Klein Center at Harvard University said, “As in any society where the channels of information become suspect, a portion of the population will look elsewhere for its information. Another portion will simply refuse to process information it receives through public channels, considering all of it to be contaminated by definition. And a remaining portion will continue to believe only in the information it finds which aligns with the opinions they’ve already formed.”

A researcher based in Europe said, “It will be a chain of trust, and people will trust whomever they want.”

A content manager and curator for a scientific research organization commented, “It will be about the same.”

An internet pioneer/originator said, “In 2027 there will be an expanded version of what we see today: Competing, conflicting worldviews that are at war with each other in the most fundamental ways.”

Taina Bucher, associate professor in the Centre for Communication and Computing at the University of Copenhagen, commented, “The next decade will see an increase in public awareness and debate over issues of trust and information online… We all have a job to do, the public, the politicians, the technologists and the journalists alike. There has not been a better time for the humanist, social scientist and the software developer to meet.”

Dave Burstein, editor of FastNet.news, said, “The best but unlikely outcome would be for people to learn to be less trusting.”

A selection of additional comments by anonymous respondents:

• “Hopefully by 2027 we will begin to create institutional mechanisms for managing and rebuilding trust.”
• “People will lose trust in each other because each subgroup will lose the capacity to believe and understand the other.”
• “I hope people will be appropriately skeptical of everything.”
• “There will be more mechanisms for redress.”
• “There will be certified sources.”
• “It is impossible to peer review everything.”
• “Standards of verification will be even more politicized than they are now.”
• “If there is a proliferation of services to vet the ‘truth’ – that will just further add to the information noise we already have.”
• “Being ‘online’ will shift to the point that this question is irrelevant.”
• “We will see the emergence of subscription e-systems.”
• “Today’s problems are yesterday’s problems with speed and greater impact. We need to ask more questions.”
• “Individuals will figure this out for themselves. I know I will.”
• “All trust – in information online, offline, in person, et cetera – all will be eroded.”
• “People trust what they want to hear.”
• “People will continue to have different views of what is happening. This is the human condition.”
• “Trust will be in total disarray.”
• “Trust will increase for both true and false online information.”
• “There will be less trust, which is a good thing.”
• “There must be a concerted effort to find common ground to rebuild trust. Identity politics, intersectionality and the like are extremely divisive.”

To return to the survey homepage, please click here.

To read anonymous responses to this survey question with no analysis, please click here.

To read credited responses to the report with no analysis, please click here.

About this Canvassing of Experts

The expert predictions reported here about the impact of the internet over the next 10 years came in response to a question asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 2 and August 7, 2017. This is the eighth “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 8,000 experts and members of the interested public to share their opinions on the likely future of the Internet and received 1,116 responses; 777 participants also wrote an elaborated response to at least one of the six follow-up questions to the primary question, which was:

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

Respondents were then asked to choose one of the following answers and follow up by answering a series of six questions allowing them to elaborate on their thinking:

The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online

The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online

The six follow-up questions to the WILL/WILL NOT query were:

  • Briefly explain why the information environment will improve/not improve.
  • Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?
  • What are the consequences for society as a whole if it is not possible to prevent the coopting of public information by bad actors?
  • If changes can be made to reduce fake and misleading information, can this be done in a way that preserves civil liberties? What rights might be curtailed?
  • What do you think the penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in taking steps to prevent the distribution of false information?
  • What do you think will happen to trust in information online by 2027?

The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous seven “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in the global internet policy community and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunications Union (ITU), Association of Internet Researchers (AoIR) and Organization for Economic Cooperation and Development (OECD).

We also invited a large number of professionals, innovators and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; the media and media-watchdog organizations; and think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications), as well as globally located people working with communications technologies in government positions; top universities’ engineering/computer science departments, business/entrepreneurship faculty, and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the canvassing questionnaire link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.

About 74% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 39% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An additional 22% specified their primary area of interest as “other.”

More than half the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their Internet expertise or background, and this was noted where relevant in this report.

Here are some of the key respondents in this report (note that position titles and organization names were provided by respondents at the time of the canvassing and may not be current):

Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the Program on Information Science at MIT; Robert Atkinson, president of the Information Technology and Innovation Foundation; Patricia Aufderheide, professor of communications, American University; Mark Bench, former executive director of World Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs; danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures; Tim Bray, senior principal technologist for Amazon.com; Marcel Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communication; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Johanna Drucker, professor of information studies, University of California-Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at ICANN; Glenn Edens, CTO for Technology Reserve at Xerox/PARC; Paul N. 
Edwards, fellow in international security, Stanford University; Mohamed Elbashir, senior manager for internet regulatory policy, Packet Clearing House; Susan Etlinger, industry analyst, Altimeter Research; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Mark Glaser, publisher and founder, MediaShift.org; Marina Gorbis, executive director at the Institute for the Future; Jonathan Grudin, principal design researcher, Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of information systems; Helen Holder, distinguished technologist for HP; Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University; Christian H. 
Huitema, past president of the Internet Architecture Board; Alan Inouye, director of public policy for the American Library Association; Larry Irving, CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism; Christopher Jencks, a professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making and recommender systems, Clemson University; James LaRue, director of the Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, Web consultant, developer and activist; Mark Lemley, professor of law, Stanford University; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia Livingstone, professor of social psychology, London School of Economics; Alexios Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at Northeastern University; Giacomo Mazzone, head of institutional relations for the World Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor, Oxford Internet Institute; Alex ‘Sandy’ Pentland, member US National Academies and World Economic Forum Councils; Ian Peter, internet pioneer, historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab; Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo, longtime Silicon Valley-based technology forecaster; David 
Sarokin, author of “Missed Information”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Jack Schofield, longtime technology editor now a columnist at The Guardian; Clay Shirky, vice provost for educational technology at New York University; Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig Siegele, technology editor, The Economist; Evan Selinger, professor of philosophy, Rochester Institute of Technology; Scott Spangler, principal data scientist, IBM Watson Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard D. Titus, CEO for Andronik; Joseph Turow, professor of communication, University of Pennsylvania; Stuart A. Umpleby, professor emeritus, George Washington University; Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia; Tom Valovic, Technoskeptic magazine; Hal Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Kevin Werbach, professor of legal studies and business ethics, the Wharton School, University of Pennsylvania; John Wilbanks, chief commons officer, Sage Bionetworks; and Irene Wu, adjunct professor of communications, culture and technology at George Washington University.

Here is a selection of institutions at which respondents work or have affiliations:

Adroit Technolgic, Altimeter Group, Amazon, American Press Institute, APNIC, AT&T, BrainPOP, Brown University, BuzzFeed, Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic Design, Center for Democracy/Development/Rule of Law, Center for Media Literacy, Cesidian Root, Cisco, City University of New York Graduate School of Journalism, Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data & Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy Information Center, Farpoint Group, Federal Communications Commission, Fundacion REDES, Future Today Institute, George Washington University, Google, Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard Business School, Hewlett Packard, Hyperloop, IBM Research, IBM Watson Health, ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network, Internet Engineering Task Force, Internet Society, International Telecommunication Union, Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global, LearnLaunch, LMU Munich, Massachusetts Institute of Technology, Mathematica Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang Technological University, National Academies of Sciences/Engineering/Medicine, National Research Council of Canada, National Science Foundation, Netapp, NetLab Network, Network Science Group of Indiana University, Neural Archives Foundation, New York Law School, New York University, OpenMedia, Oxford University, Packet Clearing House, Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network, Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock Network, Telecommunities Canada, Terebium 
Labs, Tetherless Access, UNESCO, U.S. Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles campuses), University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester Polytechnic Institute, World Broadcasting Union, W3C, Xerox PARC, Yale Law.

To return to the survey homepage, please click here.

To read anonymous responses to this survey question with no analysis, please click here.

To read credited responses to the report with no analysis, please click here.