Elon University

The 2017 Survey: The Future of Truth and Misinformation Online (Q1 Anonymous Responses)

Anonymous responses to the primary research question:
Will the information environment improve/not improve by 2027? Why?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center's Internet, Science and Technology Project in summer 2017 to share their answer to the following query:

What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.

Why do you think the information environment will improve/not improve?

Some key themes emerging from all 1,116 respondents’ answers:
– Things will not improve because the Internet’s growth and accelerating innovation are allowing more people and AI to create and instantly spread manipulative narratives.
– Humans are, by nature, selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar.
– In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil.
– The dwindling of common knowledge makes healthy debate difficult, destabilizes trust and divides the public; info-glut and the fading of news media are part of the problem.
– A small segment of society will find, use and perhaps pay a premium for information from reliable sources, but outside of this group ‘chaos will reign,’ and a worsening digital divide will develop.
– Technology will create new challenges that can’t or won’t be countered effectively and at scale.
– Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars.
– The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and are likely to remove the ability for people to be anonymous online and limit free speech.
– Technology will win out, as it helps us label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content.
– Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of ‘trust ratings.’
– Regulatory remedies could include software liability law, required identities and the unbundling of social networks.
– People will adjust and make things better; misinformation has always been with us, and people have found ways to lessen its impact.
– Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda.
– Technology alone can’t win the battle, though; the public must fund and support the production of objective, accurate information.
– Funding must be directed to the restoration of a well-fortified, ethical, trusted public press.
– Elevate information literacy; it must become a primary goal at all levels of education.

Written elaborations to Q1 by anonymous respondents

Following are full responses to Question #1 of the six survey questions, made by study participants who preferred to remain anonymous when making remarks. Some people chose not to provide a written elaboration. About half of respondents chose to remain anonymous when providing their elaborations to one or more of the survey questions. Respondents were given the opportunity to answer any questions of their choice and to take credit or remain anonymous on a question-by-question basis. Some of these are the longer versions of responses that are contained in shorter form in the survey report. These responses were collected in an opt-in invitation to about 8,000 people.

Their predictions:

The executive director of a major global privacy advocacy organization said, “Social systems like communications systems are more complex than the systems to regulate them. ‘Problematic’ actors will be able to game the devised systems while others will be over-regulated. What’s essentially happening today is basic human behaviour and powerful systems at play. It is only out-of-touch advocates and politicians who believe we can somehow constrain these results.”

A senior leader for an online civil rights organization predicted, “The problem is just getting started.”

An executive consultant based in North America wrote, “It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more creating fake information (both monetary and in notoriety) than they can keeping it from occurring.”

An anonymous respondent noted, “Research on AI focused on false news checking will improve the quality of the news.”

A professor of communication and information at a major US university replied, “A range of automatic and manual systems and procedures are already being developed to review content for online platforms with large subscriber bases (Facebook).”

An institute director and university professor said, “The internet is the 21st century’s threat of a ‘nuclear winter,’ and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.”

A leader of internet policy based in South America responded, “I expect digital violence.”

A professor at a major US state university wrote, “At least, people have become more aware of the possibility of getting fake news. They will make more efforts to verify what they get.”

A senior fellow at a center focusing on democracy and the rule of law wrote, “1) People who want to access reliable information will be able to. 2) These efforts will constantly be hampered by the desire of state (and non-state) actors to promote their agenda through fake news, disinformation and the like. 3) Many people do not care about the veracity of the news they consume and circulate to others, and these people will continue spreading false information; those who do so from within established democracies can be punished/penalized, but many will remain in non-democracies where access to reliable information will deteriorate. My prediction is thus that in parts of the world things will improve, in others they will deteriorate. On average things will not improve.”

An educator commented, “Creating ‘a reliable, trusted, unhackable verification system’ would produce a system for filtering and hence structuring content. This will end up being a censored information reality.”

An analyst at Stanford University commented, “The general public will (re)learn that not all news sources are authoritative; one must learn and discern the reputation of the organizations, as well as learn how to evaluate evidence. An awareness of ‘fake news’ will cause people to pay more attention to sources.”

An author/editor/journalist wrote, “Confirmation bias, plus corporate manipulation, will not allow an improvement in the information environment.”

The chair emeritus of a major online civil rights organization predicted, “Forces will push false information. Better tools to uncover and flag this will exist, but only for those who choose to use them.”

An anonymous respondent said, “Forces of evil will get through the filters and continue to do damage while the majority will lose civil rights and many will be filtered or banned for no good reason.”

A vice president for public policy for one of the world’s foremost entertainment and media companies commented, “I fear that the small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”

An anonymous respondent observed, “As long as there are benefits to actors for spreading lies, there will be lies. And there are always benefits. While there may be some efforts to reduce lies, just as with locks, there are always thieves who will figure out how to break them.”

A consultant replied, “The credibility history of each individual will be used by individuals to filter incoming information.”

A leading internet pioneer who has worked with the FCC, ITU, GE and other major technology concerns commented, “The ‘internet-as-weapon’ paradigm has emerged.”

A member of the Internet Architecture Board said, “The online advertising ecosystem is very resistant to change, and it powers the fake news ‘industry.’ Parties that could do something about it (e.g., makers of browsers) don’t have a strong incentive to do so.”

An associate professor of business at a major university in Australia observed, “Artificial intelligence technologies are advancing quickly enough to create an ‘Integrity Index’ for news sources even down to the level of individual commentators. Of course, other AI engines will attempt to game such a system. I can envisage an artificial blogger that achieves high levels of integrity before dropping the big lie just in time for an election. Big lies take a day or more to be disproven so it may just work, but the penalty for a big lie, or any lie, can be severe so everyone who gained from the big lie will be tainted.”

An anonymous survey participant said, “The work being done on things like verifiable identity and information-sharing through loose federation will improve things somewhat (but not completely). That is to say, things will become better but not necessarily good.”

A project manager based in Europe predicted, “There will be better filtering.”

An anonymous respondent replied, “Humans are all different and the risk and degree of misinformation can be reduced but never eliminated. We should still endeavour to reduce it to a bare minimum.”

A professor of information science at a large US state university wrote, “The information environment is not an object that independently exists along with human society. False information is put forward for a purpose, and as long as human beings exist, there will always be false information spread. Technology may improve in detecting false information, but humans will certainly be able to overcome it.”

An internet pioneer and principal architect in computing science replied, “Unless advertisers rebel, clicks will remain paramount, and whether those clicks are on pages containing disinformation or not will be irrelevant.”

An author and journalist wrote, “Technology for mass verification should improve, as will the identification of posters. Fakers will still exist, but hopefully the half-life of their information will shrink.”

An author and journalist based in North America wrote, “Fragmenting social groups and powerful economic interests have the motive and means to create their own narratives. Who is the status quo that can defeat this in a modern society that likes to define itself as disruptive, countercultural, rebel, radical – choose the term that fits your tribe.”

A policymaker based in North America said, “Intermediaries are under pressure to introduce human and technical systems and they will develop them to address the problem.”

An anonymous survey participant wrote, “The veracity of information will be linked to how much the source is perceived as trustworthy – we may, for instance, develop a trust index and trust will become more easily verified using artificial-intelligence-driven technologies.”

An anonymous respondent wrote, “News has always been subject to censorship, propaganda and disinformation, and methods have emerged to counter this problem. AI, blockchain, crowdsourcing and other technologies will further enhance our ability to filter and qualify the veracity of information.”

An international internet policy expert said, “Demand for trusted actors will rise. People will become more information literate.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “We as a society are not investing enough in education worldwide. The environment will only improve if both sides of the communication channel are responsible. The reader and the producer of content, both have responsibilities.”

A professor of law at a major US state university commented, “Things won’t get better until we realize that accurate news and information are a public good that require not-for-profit leadership and public subsidy.”

A North American research scientist wrote, “The technology for creating false information is moving very fast, while our ability to discern reliable from unreliable information will lag considerably.”

A North American research scientist replied, “I’m an optimist, and believe we are going through a period of growing pains with the spread of knowledge. In the next decade, we’ll create better ways to suss out truth.”

An anonymous respondent noted, “Google and other outlets like Facebook are taking measures to become socially responsible content promoters. Combined with research trends in AI and other computing sectors, this may help improve the ‘fake news’ trends by providing better attribution channels.”

A longtime director for Google commented, “Companies like Google and Facebook are investing heavily in coming up with usable solutions. Like email spam, this problem can never entirely be eliminated, but it can be managed.”

A futurist/consultant replied, “We’re seeing the same kinds of misinformation that used to be in supermarket tabloids move online – it’s the format that has changed, not the human desire for salacious and dubious news.”

A postdoctoral scholar at a major university’s center for science, technology and society predicted, “Some advances will be made in automatically detecting and filtering ‘fake news’ and other misinformation online. However, audience attention and therefore the financial incentives are not aligned to make these benefits widespread. Even if some online services implement robust filtering and detection, others will happily fill the void they leave, pandering to a growing audience willing to go to ‘alternative’ sites to hear what they want to hear.”

A futurist/consultant based in North America said, “Many of us, including those with the most control over the information environment, badly want things to improve, but it’s unclear to me that purely technical methods can solve these problems. The toxicity of the modern information landscape is as much attributable to vulnerabilities in human neurobiology as it is to anything embedded in software systems.”

An assistant professor of political science wrote, “Improving information environments does little to address demand for misinformation by users.”

A senior staff attorney for a major online civil rights organization said, “It’s far too early to believe that there is any technical or procedural ‘solution,’ given that evolution itself remains controversial.”

A researcher based in Europe replied, “The problem with fake news is not a technological one, but one related to human nature, fear, ignorance and power… In addition, as new tools are developed to fight fake news, those interested in spreading them will also become more savvy and sophisticated.”

A CEO and research director noted, “There are multiple incentives, economic and political, to solve the problem. The ‘fight’ against spam provides an imperfect, but applicable analogy.”

An associate professor at a US university wrote, “In general, we will get better as societies at sharing information. People wishing to influence that information will always be present, but I do not see us giving up on seeking truth.”

An internet pioneer in cybersecurity commented, “Better methods will be adopted in many places in the world. Those with better-educated citizens will take advantage of this. The United States is not likely to be one of them.”

A leading researcher studying the spread of misinformation observed, “The payoffs for actors who are able to set the agenda in the emerging information environment are rising quickly. Our collective understanding of and ability to monitor these threats and establish ground rules across disparate communities, geographies and end devices will be challenged.”

An anonymous respondent said, “The public will insist that online platforms take more responsibility for their actions and provide more tools to ensure information veracity.”

An anonymous respondent wrote, “There will be increasing sources of ‘facts’ as the cost of publishing goes down.”

A project manager for the US government responded, “It is going to get much worse before it gets better. There is no sign that people are willing to work at what we agree on, most would prefer to be divisive and focus on differences.”

A research scientist based in North America said, “I have a problem with the way the question is couched: is it about the information environment only, or also about how the political spectrum has leaned to the right with little interest for how minorities are represented? Moreover, with digital companies’ desire to ‘scale,’ it is unlikely that they will all of a sudden want to fix some of the problems that require editorial/human/community attention (let’s call these unprofitable scale for the companies).”

An anonymous respondent wrote, “I expect systems to emerge which support the development and maintenance of more reliable ‘reputation’ information online.”

A sociologist doing research on technology and civic engagement at MIT said, “Though likely to get worse before it gets better, the 2016-2017 information ecosystem problems represent a watershed moment and call to action for citizens, policymakers, journalists, designers and philanthropists who must work together to address the issues at the heart of misinformation.”

A distinguished engineer for one of the world’s largest networking technologies companies commented, “Certificate technologies already exist to validate a website’s sources and are in use for financial transactions. These will be used to verify sources for information in the future. Of course, there will always be people who look for information (true or false) that validates their biases.”

A longtime US government researcher and administrator in communications and technology sciences said, “The intelligence, defense and related US agencies are very actively working on this problem and results are promising.”

An assistant professor at a university in the US Midwest wrote, “I’m hopeful for an intellectual evolution rather than an idiocracy. It seems that many thought leaders are trying to improve the information environment.”

A media networking consultant noted, “Fact finding is a well-developed art and there is no technical barrier to applying this to our new media outlets.”

A retired politician and national consumer representative replied, “Unlike advertising, where a standards body can have the advertisement withdrawn and minimise its impact, fake news is a single hit that leaves a memory that can be activated by a dog whistle. There is money to be made and influence to gain.”

An associate professor at a major Canadian university wrote, “As someone who has followed the information retrieval community develop over the past 15 years, dealing with spam, link farms, etc., given a strong enough incentive, technologies will advance to address the challenge of misinformation. This may, however, be unevenly distributed, and may be more effective in domains such as e-business where there is a financial incentive to combat misinformation.”

A professor of law at a major California university noted, “There has always been misinformation; this is not going to change. What has changed is the speed and reach of misinformation today. I don’t see this changing fundamentally in the next decade. Entities with resources – including state actors – have too much interest in promoting misinformation. That said, I am hopeful that we will find a way to better flag and sort out misinformation than currently exists. 10 years is just too short a horizon. It will take longer.”

A professor and author based in the United States wrote, “Things will not improve. Too many people have realized that lying helps their cause.”

A professor of media and communication based in Europe said, “The online information environment will not improve if its architectural design, operation and control is left to five big companies alone. If they do not open up their algorithms, data governance and business models to allow for democratic and civic participation (in other words, if there is only an economic driver to rule the information environment) the platform ecosystem will not improve its conditions to facilitate an open and democratic online world.”

A professor at MIT observed, “I see this as problem with a socioeconomic cure: Greater equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less a technological problem than a human problem, a problem of belief, of ideology. Profound levels of ungrounded beliefs about things both sacred and profane existed before the branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge relationships, explain the unexplainable.”

A principal technology architect and author replied, “Because there is no interest in it ‘improving’ – too many large organizations have too strong of an interest in ‘managing the people’ through information, such as Facebook, Google and various governments.”

A professor of media and communications based in Europe observed, “There has never been a wholly truthful human environment, and there are too many vested interests in fantasy, fiction and untruths.”

A research scientist replied, “Ideology drives people to create and repeat false statements, and no robust mechanisms exist to prevent this.”

A CEO and consultant based in North America said, “The current version of the internet is just too far gone to protect in any meaningful ways.”

An anonymous respondent replied, “Because the same problems tend to come in and out of saliency all the time; information quality has always been a problem. It will remain one. What the problem is exactly will change. We cannot really say whether things will be simply better or worse.”

A research scientist based in North America wrote, “We will develop technologies to help identify false and distorted information, but they won’t be good enough.”

A software engineer based in Europe said, “Feels to me like we’ve been caught by surprise somewhat by the impact of ‘fake news.’ I am optimistic that remedy attempts will have some effect, though it’s a hard problem to solve fairly.”

An anonymous respondent observed, “It’s really hard to say which way it will go. It may get worse for a while, but then as we adapt to changes in media, our information will gain more trustworthiness.”

An internet pioneer and longtime leader in ICANN said, “There is little prospect of a forcing factor that will emerge that will improve the ‘truthfulness’ of information in the internet.”

A professor and researcher of American public affairs at a major university replied, “As post-2016 international experience has shown so far, ‘fake news’ is most potent when political and media actors are not prepared. Relatively simple steps and institutional arrangements can minimize the malign influence of misinformation.”

A professor at the University of Illinois-Chicago wrote, “It will always be a race, with verification methods improving as the false information and rumor methods evolve. Just like viruses.”

A professor of law based in North America replied, “Too many people make money or otherwise benefit from the distribution of false and misleading stories; I do not think it can be stopped without doing a lot of damage to freedom of speech. There is too much money to be made, as well as political power.”

An anonymous ICT for development consultant and retired professor commented, “Because the information environment reflects society at its best or worst; changes in human behavior, not technology, will impact on the information environment.”

An anonymous research scientist based in North America wrote, “The profit motive will be put in front of value. The reliance of corporations on algorithms that allow them to do better targeting leads to greater fragmentation and greater possibility for misinformation.”

A media director and longtime journalist said, “People become more accustomed to technology. They always have. There’s no reason to believe they won’t adapt to this as well.”

An internet pioneer replied, “Blocking (a.k.a. censoring) information is just too dangerous.”

An anonymous respondent from the Berkman Klein Center at Harvard University noted, “False information – intentionally or inadvertently so – is neither new nor the result of new technologies. It may now be easier to spread to more people more quickly, but the responsibility for sifting facts from fiction has always sat with the person receiving that information and always will.”

A user-experience and interaction designer said, “As existing channels become more regulated, new unregulated channels will continue to emerge.”

A North American program officer wrote, “While technology may stop bots from spreading fake news, I don’t think it will be that easy to stop people who want to believe the fake news and/or make up the fake news.”

A professor of law at a state university replied, “Powerful incentives will continue for irresponsible politicians and others in the political industry (paid or not) to spread false information, and for publications to allow it to circulate: attention, clicks, ad revenue, political power. Meanwhile the First Amendment will protect [sharing of all information] powerfully inside the United States as the overall moral and ethical character of the country continues to be debased. Trump’s election with the support of Christian conservatives shows how many people do not hold public officials to the same standards they claim to hold themselves and children.”

A researcher based in Europe said, “Technologies will appear that solve the trust issues and reward logic.”

A knowledge management consultant replied, “Everything we know about how human ingenuity and persistence has shaped the commercial and military (and philanthropic) drivers of the internet and the web suggests to me that we will continue to ensure this incredible resource remains useful and beneficial to our development.”

A head of systems and researcher working in web science said, “We are already seeing efforts to automatically disseminate false statements online, and industry understanding of the importance of veracity.”

An anonymous respondent replied, “The ongoing fragmentation of communities and the lack of common voice will lead to the lower levels of trust.”

A lecturer at the University of Tripoli in Libya noted, “The information environment will improve because of the technical affordances of online platforms. For example, live broadcast on Facebook allows users to share information in real time, and no one can manipulate such information.”

A principal network architect for a major edge cloud platform company replied, “Retooling of social networking platforms will likely, over time, reduce the value of stupid/wrong news.”

A technologist specializing in cloud computing observed, “Ten years is a ridiculously long timeframe to speculate about any trend. Sources prove reliable over time by providing actionable information. Continued misinformation will help people to learn first-hand how bad information functions in any system.”

A research scientist who predicted no improvement said, “But I have hope in human goodness.”

A senior solutions architect for a global provider of software engineering and IT consulting services wrote, “The problem of fake news is largely a problem of untrusted sources. Online media platforms delegated the role of human judgment to algorithms and bots. I expect that these social media platforms will begin to exercise more discretion in what is posted when.”

An anonymous respondent replied, “There will always be disinformation and noise in communication channels, but the internet will devise ways to rank credibility of sources. Over the next 10 years, users will also become much more savvy and less credulous on average.”

A professor and researcher based in North America noted, “I am optimistic that technology will help provide a solution.”

An anonymous respondent said, “Information platforms optimized for the internet are in their infancy. Like early e-commerce models, which merely sought to replicate existing, known systems, there will be massive shifts in understanding and therefore optimizing new delivery platforms in the future.”

A professor at the University of California-Irvine replied, “Information-seeking is developing into both an art and science. It just used to be science.”

The top administrator of a major US university’s school of information sciences commented, “We weren’t allowed nuance in the question. Things will probably improve, but only probably, and probably not as much as we’d like. Why? Too much of the problem has foundations in relatively durable features of people and society. The technology amplifies, a lot, but that’s all. History backs this up.”

A professor based at a North American university said, “Proliferation of platforms and feeds make it difficult, if not impossible, to counter every one.”

A director of standards and technology who works with the Internet of Things wrote, “Things will improve due to [better tracking of the] provenance of data and security and privacy laws.”

An anonymous respondent wrote, “Artificial intelligence, machine learning, exascale computing from everywhere, quantum computing, the Internet of Things, sensors, big data science and global collaborative NREN (National Research and Education Network) alliances.”

An anonymous head of privacy commented, “I have confidence that as we learn more about the sources of fake news, we will adapt and find ways to preserve real news. Accurate facts are essential, particularly within a democracy, so this will be a high, shared value worthy of investment and government support, as well as private-sector initiatives.”

A technology analyst for one of the world’s leading technology networking companies replied, “Information will improve as more good information will become available. There will be more signal as well as more noise. The signal-to-noise ratio may be reduced, but there will be more signal than through the narrow lens of the Financial Times, New York Times, Washington Post or Wall Street Journal.”

A director with a digital learning research unit at a major university on the US West Coast said, “As the technology evolves, we will find ways (technologically) and also culturally to become savvier about the way in which we manage and define ‘trustworthiness.’”

An anonymous respondent commented, “The information environment will evolve rapidly. Like any change in technology, this will allow for amazing and problematic things to occur, including new opportunities for influencing and informing people.”

An anonymous CEO wrote, “Improvements will come because most people want factual and reliable information.”

A researcher based in North America said, “News aggregators such as Facebook will get better at removing low-information content from their news feeds but the amount of mis/disinformation will continue to increase.”

A retired professor and research scientist said, “It is too hard to ‘rationalize’ the current mess in the information ecosystem with tools vs. free speech. It is also hard to control the human factor.”

A retired senior IT engineer based in Europe observed, “To improve, you need a check-in/verifying apparatus. That requires laws and the willingness to do such and yet protect the freedom of information.”

An anonymous respondent commented, “On balance, the information environment may not improve because the potential to manipulate information in increasingly sophisticated ways may outpace the tools and methods available to ensure the integrity of information, as well as user literacy for the importance of using verified sources.”

A North American researcher replied, “In the United States – not necessarily in European democracies and places where people have more education about evaluating media sources – I see people getting more news from social media and therefore from an echo chamber that reinforces preconceived notions by peer pressure. This is stronger among adults, particularly the elderly (the peer group uniformity and pressure, not the use of social media).”

A political science and policy scholar and professor said, “I see this as a demand side as well as supply side issue – people very clearly want fake stories that confirm their priors. I don’t think this problem can be fully addressed through better systems.”

A professor based in North America observed, “Distrust of academics and scientists is so high it’s hard to imagine how to construct a fact-checking body that would be trusted by the broader population.”

A policy analyst for the US Department of Defense predicted, “There will be a breakthrough in security technology.”

An independent journalist and longtime Washington correspondent for leading news outlets noted, “The internet is open to all, but ultimately self-regulating.”

A professor at a major US university replied, “Surveillance technologies and financial incentives will generate greater surveillance.”

An associate professor and journalist commented, “History shows how people have evolved ways to manage information over time and address emerging challenges.”

A professor of media studies at a European university wrote, “The history of technology shows repeatedly that as a new technology is introduced – whatever the intentions of the designers and manufacturers – bad actors will find ways to exploit the technology in darker, more dangerous ways. In the short run, they can succeed, sometimes spectacularly; in the long run, however, we usually find ways to limit and control the damage.”

A professor at a Washington DC-area university said, “It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.”

The managing editor of an online fact-checking site observed, “New communications technology always brings new ways of spreading misinformation and disinformation. Technology – and the public – adapt.”

A senior research scholar at a top-ranked US law school said, “The 2016 presidential election and its aftermath put a spotlight on fake news and the hidden means of spreading it as never before, which gave an incentive to social media companies to address the problem and a challenge to the hacker culture to help them, but improvements in the information environment won’t make the problem go away; they will only mitigate its effects.”

A post-doctoral fellow at a center for governance and innovation replied, “It will not improve. The stakes are just too high in political and monetary terms.”

A professor and research scientist based in Europe commented, “Advances in ICT development will improve the quality of the information that circulates online.”

A senior vice president of communications said, “There is no way to centrally control the information flow. It will be a constant game of whack-a-mole, and polarization has now come to facts. It’s almost like facts are a philosophy class exercise now – what is truth?”

An editor at large noted, “Technological advances will improve things.”

A research scientist from Latin America replied, “It will not improve because it will be driven by selfish interests and there will be no incentive to collaborate.”

An IT director observed, “The corporatists and elites have won. They know that civilization and mankind cannot survive for the long term. They are basically doing what they need to do to enslave all of us and enrich themselves. They will insulate themselves from the continued degradation of our environment until the day comes when money is no longer sufficient to shield oneself from the inevitable cataclysm.”

A librarian based in North America noted, “There will always be people who take advantage of technology to spread lies and harassment for their own ends. The environment will not improve, but people may.”

A senior lecturer in computer science and electrical engineering and cybersecurity expert at a major US university said, “Artificial intelligence/machine learning advances plus citizens will reach a tipping point and realize this is NOT helpful.”

An anonymous lawyer replied, “The internet is designed to be decentralized, not with the purpose of promoting accuracy or social order.”

A director of research for data science based in Spain observed, “People will develop better practices for dealing with information online.”

A researcher based in Europe replied, “I am worried about the misinformation that is spread by people running their own agendas. It is not going to change or go away.”

A professor at Harvard Business School wrote, “The vast majority of new users and a majority of existing users are not sophisticated readers of news – their facts, slants, or contents, nor should we expect them to be. Meanwhile, the methods for manipulation are getting better.”

An anonymous consultant noted, “To really solve this issue we need to look deeper at what truth means and who cares about it. It will take more than a decade to sort that out and implement solutions.”

An anonymous respondent replied, “It will take longer than 10 years for the information quality to improve. ‘Fake News’ is not a new problem but it does proliferate more quickly than in the past due to online social networks.”

A professor of rhetoric and communication noted, “People can easily stay in their own media universe and never have to encounter ideas that conflict with their own. Also, the meshing of video and images with text creates powerful effects that appeal to the more rudimentary parts of the brain. It will take a long time for people to adapt to the new media environment.”

An anonymous futurist/consultant said, “Technology and platform design is only one part of the problem. Building trust and spreading information quality skills takes time and coordination.”

An anonymous MIT student noted, “We can improve the misinformation landscape with a strategic marriage of engineering and journalism.”

A research professor of robotics at Carnegie Mellon University observed, “Defensive innovation is always behind offensive innovation. Those wanting to spread misinformation will always be able to find ways to circumvent whatever controls are put in place.”

A CEO for a consulting firm said, “Machines are going to get increasingly better at validating accuracy of information and will report on it.”

A research psychologist commented, “The rise of individual contributions to news will inevitably lead to inaccuracies.”

A researcher/statistician consultant at a university observed, “There are already research efforts to stem the flow and creation of fake news/reports, etc. This research will pay off in the long run.”

A vice president for learning technologies emerita said, “Ways to attribute information and to verify sources can be protected, enhanced, expanded. Whether people will accept the information or tend to have their biases confirmed by verified information is another matter.”

A partner in a services and development company based in Switzerland commented, “Challenges will increase, but so will users’ determination to deal with them.”

A web producer/developer for a US-funded scientific agency noted, “The reliance on identity services for real-world, in-person interactions, which start with trust in web-based identification, will force reliability of information environments to improve.”

A journalist who writes about science and technology said, “Some platforms like, say, Google, might improve the reliability of the information it makes available, but other platforms will arise that will make money by spreading what amounts to fake news, so on balance, things will not improve.”

A retired university professor noted, “Social evils cannot be controlled by techno-economic means.”

A managing partner and fellow in economics observed, “In order to avoid censorship, the internet will remain relatively open, but technology will develop to more effectively warn and screen for fact-inaccurate information. Think of it as an automated ‘PolitiFact’ that will point out bullshit passively to the reader.”

The president of an information technology foundation wrote, “Large information technology platforms have strong incentives to get this right, and they have the technical skills to craft effective solutions.”

A professor at a major US university’s school of information science predicted, “Things will eventually improve, but it will take longer than 10 years for people to develop/learn the required social conventions.”

An anonymous mental health clinician wrote, “Things will not improve because it is too easy to manipulate internet information and, in general, humans are too lazy to do what they need to prevent the spread of false information.”

A professor of information technology at a large public research university in the US said, “As societies become more familiar with discourse using social media, norms will develop that will discourage the spread of false information.”

A researcher at Karlsruhe Institute of Technology replied, “We are only at the beginning of drastic technological and societal changes. We will learn and develop strategies to deal with problems like fake news.”

An anonymous consultant with the Association of Internet Users wrote, “Information creation has, to date, held the upper hand over information analysis. The rate of information creation always exceeds our capacity to analyze it.”

An eLearning specialist noted, “Any system that deems itself capable of ‘judging’ information as valid or invalid is inherently biased.”

A vice president for stakeholder engagement said, “Trust networks are best established with physical and unstructured interaction, discussion and observation. Technology is reducing opportunities for such interactions and disrupting human discourse, while giving the ‘feeling’ that we are communicating more than ever. With a deluge of data, people look for shortcuts to determine what they believe, making them susceptible to filter bubbles and manipulation.”

A professor at a major US university noted, “Misinformation has always existed. More powerful sharing and openness will eventually bring problems to light. It was such a problem in the past election because we weren’t attending to it as much as we should have. That is now changing.”

A principal research scientist based in North America commented, “The trustworthiness of information is a subjective measure as seen by the consumer of that information.”

A principal engineer said, “There is no obvious way for it to improve, and there are lots of people who have no interest in making it improve.”

An associate professor of sociology at a liberal arts university replied, “The problem of fake news and misinformation is not fundamentally a problem of technology, but a problem of culture and the structure of our media and political arenas. Technology is therefore unlikely to solve the problem; cultural and political change is necessary.”

An anonymous journalist observed, “The increased awareness of the issue will lead to/force new solutions and regulation that will improve the situation in the long-term even if there are bound to be missteps such as flawed regulation and solutions along the way.”

A software architect for a major content delivery and cloud services provider whose work is focused on standards development said, “It is irreconcilably part of human nature to spread information, true or false, for a variety of reasons that might or might not even depend on the veracity of the information. From propaganda to humour, the natural drive to share information will overcome any obstacles that hinder it. While individual forums might succeed in having higher standards for information, on the whole the unconstrained freedom of expression, for good or ill, will not be suppressed.”

A researcher based in Europe commented, “I am not a pessimist when it comes to fake news. False news is easily reported as fake news, in a matter of minutes, by whoever might be using Facebook while having access to that content. Fake news is more likely to be subject to scrutiny than, for example, the contents of a printed edition of a newspaper run by an economic group with its own interests and agendas. There is always good and bad information – trusted news or not so much – whether it is a printed newspaper or content written by prosumers.”

A CEO and consultant to developers of many technology projects wrote, “Moves by Facebook, news organizations and a general awareness lead me to believe we will get past this as smarter, more-sophisticated consumers of media.”

A futurist based in North America said, “The information environment is only as good as the people who create and use it. Similar to the overall media, there are different interest groups and consumers who put forward ideas they want to spread/consume.”

A senior manager for regulatory policy for a major internet organization noted, “Artificial intelligence technologies will advance a lot, making fake news harder to discover and identify.”

A research scientist based in Europe observed, “There is a long record of innovation taking place to solve problems. Yes, sometimes innovation leads to abuses, but further innovation tends to solve those problems.”

A software engineer commented, “Automation, control and monopolization of information sources and distribution channels will expand with a goal to monetize or obfuscate.”

An anonymous respondent said, “I am optimistic that with the right investment in research and recognition from companies like Google and Facebook, we will innovate in ways to help improve the trust of online information.”

A research scientist based in North America commented, “The quality of scientific information has been degrading for decades due to commercial pressures. No practical means to push back against these pressures has been developed. If we can’t do it for science, why would we think we can do it for general information?”

A president observed, “I have faith in human intelligence to figure this problem out. It’s a bad problem and a lot of smart people want to solve it. There is a lot of money behind trying to propagate false information but there always has been, and consumers have risen up in the past to block the bullshit, fake ads, fake investment scams, etc., and they will again with regard to fake news.”

An assistant professor of political science at a large US state university wrote, “The information environment will not improve due to the decline in digital literacy among the general population.”

An anonymous lecturer said, “Because some of us are currently skeptical of the quality of information available, some action will be taken, successfully or unsuccessfully, which will move things forward.”

An anonymous editor and publisher commented, “Technology platforms will be built. Sadly, many Americans will not pay attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know, and continue to battle.”

A former director of a global press institute wrote, “Given a multitude of news sources, people will likely figure out what’s truth.”

A self-employed marketing researcher replied, “There is no political will to address the problem; in fact the administration is turning a blind eye to the issue.”

A research scientist based in Europe noted, “There are AI techniques being developed that could be used to create fake videos and images.”

A professor of communication replied, “I am undecided really, as I am hopeful about trends like Facebook trying to introduce ways to reduce fake news, but others may find workarounds, but this election showed the importance of curbing fake news so hopefully people continue to care about it and other methods will be put in place. That said, we cannot stop the proliferation of sites on the internet completely and the American public does not use strong critical thinking skills in evaluating sources.”

A fellow at a UK-based university said, “The spread of ‘bad’ or false information is not only a matter of supply but also one of demand. I don’t think technological or top-down solution can ‘fix’ the information environment without addressing a range of root issues relating to democratic disenfranchisement, deteriorating education and anti-intellectualism.”

An anonymous researcher observed, “I do not believe *automated* systems will develop which increase the relative amount of reliable information, but the need by a variety of fairly powerful corporate and governmental players to have a venue in which to disseminate information (for sale in some cases) and to have reliable sources will spark a variety of measures to combat disinformation campaigns.”

A vice president of professional learning commented, “On days when I feel optimistic, I think the news landscape will improve. On days I feel pessimistic, I envision the post-truth era devolving further.”

An anonymous activist replied, “Anything good guys can do, bad guys can get around.”

A researcher commented, “The information environment will stay the same. ‘Fake news’ is just the latest incarnation of propaganda in late capitalism.”

A senior principal technologist for one of the top five global technology companies said, “The people pushing the low-truth content are also attached to a political/economic agenda which I believe is about to become very unpopular.”

An associate professor at Brown University wrote, “Alarms were raised by this issue in the 2016 election which motivates people to address the problem. Also, I’ve observed groups I’m involved with seeking monitors for Facebook pages and establishing protocols for maintaining the information and narratives on their website.”

An anonymous respondent based in Asia/Southeast Asia replied, “We are being ‘gamed,’ simply put.”

An associate professor at a major university in Italy wrote, “Looking at previous changes in the history of media, society will adapt to the new environment.”

An internet pioneer/originator said, “Disinformation and misinformation are both forms of information, but geared towards supporting a particular worldview (in a phenomenological sense). It’s people’s worldviews that are the problem and they mostly derive from family situations and limited educational opportunities constrained by religion.”

An anonymous respondent who works with nonprofits and mission-based organizations said, “I foresee that the US government will fail to use its leadership position to uphold verifiable facts, online or elsewhere.”

An historian and former legislative staff person based in North America observed, “A major issue here is that what one side believes is true is not the same as what the other side believes. Example: What Yankees and Confederates believed about the Civil War has never been the same, and there are differing social and cultural norms in different ages, times, regions and religions that have different ‘takes’ on what is right and proper behavior. We are facing an almost existential question here of ‘What is truth?’”

A librarian replied, “Although some level of fake news will always be around and has always been around we are still in the early stages of understanding the online information environment and how to effectively share and develop new ideas. Changes that are both technological and cognitive are already happening.”

An anonymous respondent wrote, “I hope regulators will recognise that social media companies are publishers, not technology companies, and therefore must take responsibility for what they carry. Perhaps then social media companies will limit the publication of false advertising and misinformation.”

An anonymous respondent wrote, “There is always a fight between ‘truth’ and free speech. Because the internet cannot be regulated, free speech will continue to dominate, meaning the information environment will not improve.”

A computer information systems researcher predicted no improvement, writing, “People don’t bother to validate sources.”

A futurist/consultant based in Europe said, “News has always been biased, but the apparent value of news on the internet has been magnified and so the value of exploiting it has also increased. Where there is such perceived value, the efforts to generate misleading news, false news and fake news will increase.”

A professor of philosophy at one of the world’s most-respected universities observed, “I doubt that there is a structural way to fix the problem.”

A professor of humanities who predicts no improvement noted, “The Obama administration’s attempts to regulate, balanced with retention of internet privacy rights, will be overturned by the Trump administration.”

A North American research scientist replied, “There will be new ways to guard against misinformation, but, at the same time, ways to circumvent the new barriers will also be developed.”

A senior research fellow working for the positive evolution of the information environment said, “Social media and bogus sites will outweigh legit information outlets.”

A small-press publisher based in North America commented, “The majority of Americans want to know what is going on and that will drive technology and social environments to press for reliable information sources. At the same time, there must be penalties (FCC) for licensed news organizations that knowingly broadcast false reports.”

A professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

A senior lecturer based in Asia/Southeast Asia commented, “The social and commercial forces in favor of a better information environment are stronger, though not much stronger, than those of a weaker one.”

A retired public official and internet pioneer replied, “1) Education for veracity will become an indispensable element of secondary school. 2) Information providers will become legally responsible for their content. 3) A few trusted sources will continue to dominate the internet.”

A principal with a major global consultancy observed, “There is no way, short of overt censorship, to keep any given individual from expressing any given thought.”

An engineer based in North America replied, “The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.”

An anonymous respondent said, “Actors can benefit socially, economically, politically, by manipulating the information environment. As long as these incentives exist, actors will find a way to exploit them. These benefits are not amenable to technological resolution as they are social, political and cultural in nature. Solving this problem will require larger changes in society.”

A CEO based in Canada replied, “Improvements will come via public education and major sites introducing filters and tools.”

A senior research fellow based in Europe said, “I would liken fake news to amateur porn in terms of its distribution, echo chambers, and lack of centralized control. In order to put that genie back into its bottle, we’d need centralized, major platforms that actually allow for control, but what we have instead is thousands of tiny echo chambers all across the spectrum.”

An associate professor at a major Australian university said, “While it is a shifting environment, I imagine the level of information/misinformation will stay roughly the same as countermeasures prompt new initiatives and actors become increasingly skilled in online strategies.”

An anonymous respondent wrote, “People seem to need certainty in understanding how the world around them works and will argue against assertions which do not match their own experiences/perceptions. I see a world in which these arguments help to clarify how in a society we make judgments on information.”

A university professor based in Europe who expects things will not improve said, “The sheer volume of information and communication is too much.”

A research scientist at a network science group wrote, “Efforts to improve web literacy are underway, and as we understand more about digital misinformation we will design better tools, policies and opportunities for collective action.”

An anonymous respondent based in Europe warned, “Technical tools and shields to filter and recognize manipulations will be more effective than attempts at education in critical thinking for end users.”

A principal research scientist at a major US university replied, “People believe in fake news because of their biases. These people will search out the information that supports the beliefs they already have (confirmation bias).”

An anonymous respondent observed, “There is an inability to prevent new ways of disrupting our information systems. New pathways will emerge as old ones are closed. Whack-a-mole seems to be our future.”

A professor at the University of Maryland wrote, “Threats to Google Search were reduced by concerted effort, and if there is public pressure on social media providers, I believe they will reduce fake/false news.”

A researcher of online harassment working for a major internet information platform replied, “If there are nonprofits keeping technology in line, such as an ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to deal with this kind of news-spam, then yes, the information environment will improve. We also need to move away from clickbait-style articles, and not rely algorithmically on popularity but on information.”

An anonymous respondent who works at a major US university said, “Internet literacy is already improving. The 2016 election has already provided the impetus for critical reading of online information – the upsurge in the numbers of students studying history as a major is an indicator of this process. I see a demand for better vetting of online information already occurring, and the process of developing such methods and strategies already begun.”

A postdoctoral scholar based in North America wrote, “Ten years is a long time for the internet. Facebook barely opened to the masses just over 10 years ago, for perspective. I have no doubt the news landscape will change in another 10.”

A North American research scientist observed, “To keep people invested in online content it needs to be more trustworthy.”

A journalist and experience strategist who works for one of the top five global technology companies said, “I’m not sure the environment will actually improve, but I think digital literacy education will improve. This will educate individuals on how to identify and deal with falsified information or news.”

A professor of political economy at a US university wrote, “I don’t think there is a clear, categorical distinction between ‘false’ news and the other kind. Some falsehoods have been deliberately fostered by elites for purposes of political management – the scope has widened dramatically in recent years.”

A faculty member at a research university noted, “The people who actually understand how all of this works are generally ignored by those who produce and create policy on our information networks. Media theory has been around for a long time, yet it is not taught to computer science folks, business folks, etc., so they aren’t media savvy, yet they are designing and producing the media we live in, and therefore, the ways knowledge is produced and understood.”

An author and journalist based in North America said of misinformation’s likely advancement, “As long as people want to believe a lie, the lie will spread.”

An anonymous research scientist commented, “The issue is viewed as urgent by individuals and organizations across society – from traditional to social media, from politicians to citizens. That urgency should spur innovative approaches to the problem.”

A North American politician/lawyer wrote, “Many users seem to be indifferent or confused about objectively accurate information, which is difficult to confirm in an environment of information overload. It is not easy to address these issues with technology.”

An anonymous respondent said, “Trusted news sources will emerge, but unfortunately they will be competing and bifurcated along political and cultural lines. There will be a sort of ‘gold standard’ set of sources and there will be the fringe.”

An anonymous research scientist observed, “False narratives are not new to the internet, but authority figures are now also beginning to create them.”

An anonymous futurist/consultant commented, “There are too many pressures from the need to generate ‘clicks’ and increase advertising revenue.”

An anonymous respondent who predicts improvement replied, “Powerful social trends have a life cycle, and the pendulum typically swings back over time.”

An anonymous respondent said, “It is the nature of the technical development that politics and regulatory forces are only able to react ex post, but they will.”

A former journalism professor and author of a book on the future of news commented, “The very speed and ease of the ‘information superhighway’ have made people sloppier thinkers, not more discerning.”

A North American research scientist observed, “It will get better because people will develop technologies to identify or rate posts/stories, and people will get better at identifying fake news.”

A self-employed consultant said, “Bad information has always been produced and promulgated. The challenge remains for individuals to stay skeptical, consider numerous sources and consider their biases.”

An anonymous respondent noted, “I am betting that smart people will figure out how to tilt towards demarcating the trusted from the misinformation.”

A chief operating officer of a global nonprofit focused on children’s issues wrote, “There will be access to more information and more reliable sources.”

A senior researcher at a US-based nonprofit research center replied, “The next generation of news and information users will be more attuned to the environment of online news and will hopefully be more discerning as to its veracity. While there are questions as to whether the digital native generation can accurately separate real news from fake, they at least will have the technical and experiential knowledge that the older generations mostly do not.”

A senior lecturer in communications at a UK university said, “In all likelihood, the environment will improve in some ways and worsen in others.”

A research scientist based in Europe observed, “The question lacks context: the situation will probably NOT improve in the United States, China and Russia but WILL improve in Europe. The Third World requires detailed examination.”

A vice president for an online information company noted, “It is really hard to automatically determine that some assertion is fake news or false. Using social media and ‘voting’ is overcome by botnets for example. Critical thinking is needed but people are lazy about that.”

An anonymous editor/journalist based in North America said, “The information environment is controlled by powerful stakeholders such as Google and Facebook. They see misinformation as harmful to their business models and/or reputations and will take steps to curb it.”

An anonymous respondent from North America wrote, “There are too many sources of information for the information environment to improve.”

A longtime leader with the Internet Engineering Task Force commented, “This has a lot of the properties of a classic arms race in which neither side can actually win and the things you are asking about seem to call for an official/approved/endorsed version of reality. You need to look at the propaganda literature.”

A researcher affiliated with a major US university noted, “There are already trusted sources of information on the internet. They more or less correspond to trusted institutions (e.g., news services of record). I am less confident about our collective ability to improve the trustworthiness of social media.”

A professor based in Australia replied, “The issue of false or unreliable information has received a huge amount of media attention. Media researchers and the general public are now confronted with thinking through these issues and their broader social implications. Now that it is on the agenda, smart researchers and technologists will develop solutions.”

An anonymous North American research scientist said, “We cannot undo the technology and economics of the current information environment, nor can we force those who are profiting from misinformation to forego their monetary gains.”

An anonymous business leader noted, “The negative impact of untruthful media will become a large enough problem that a large-scale collaboration will form to address it.”

An anonymous business leader based in North America wrote, “The reduction of misinformation will come about as the result of the collaboration between policy makers, technologists and media creators and distributors. The solutions will be multi-pronged – policy, technology and media literacy education will all play a part. Given the intense amount of interest and energy in combatting the spread of misinformation I believe progress will be made.”

An associate professor of political science at a major US university commented, “The internet is a collective-action problem. Collective-action problems require a collective-action response and I don’t think we’ll manage that in the international environment.”

An anonymous respondent observed, “Until we reassert an education, health and social system that values social justice, there won’t be enough political will and effort to turn things around.”

A professor of legal issues and ethics at one of the pre-eminent graduate schools of business in the United States said, “The basic incentive structure that promotes untrustworthy information flow won’t change, and the bad guys will improve their approaches faster than the good guys.”

A global business leader replied, “There will be MORE news – some inaccurate and some accurate, but the problem is how do readers decipher what is fake and what is not fake. More information should be shared about the background of the journalists, so we have a sense of where they come from and how that info may impact their perspective.”

A North American research scientist observed, “People don’t just share information because they think it’s true. They share to mark identity. Truth-seeking algorithms, etc., don’t address this crucial component.”

A public-interest lawyer based in North America commented, “I don’t see how public education can reverse these attitudes, especially since those fomenting mistrust will undermine such efforts.”

A self-employed marketing professional observed, “The 24/7 news cycle demands ‘fresh’ news, regardless of the source.”

An editor based in North America noted, “Hyper-partisanship and the malicious spreading of biased or even untruthful information are likely to accelerate given the divisive political climate.”

A senior policy researcher with an American nonprofit global policy think tank said, “The truth will win out because people demand it and because false narratives lead to error. The time horizons for some of these outcomes are long, so it may take a while for the truth to win.”

The dean of one of the top 10 journalism and communications schools in the United States replied, “Society always adjusts to new media and responds to weaknesses and flaws. Individuals will adjust, as will the technology.”

A research scientist at Oxford University who expects no improvement commented, “Misinformation and disinformation and motivated reasoning are integral to platform capitalism’s business model.”

A respondent affiliated with Harvard’s Berkman Klein Center for Internet & Society wrote, “The democratization of publication and consumption that the networked sphere represents is too expansive for there to be any meaningful improvement possible in terms of controlling or labeling information. People will continue to cosset their own cognitive biases.”

An anonymous respondent replied, “The rise of more public platforms for media content (online opinion/editorials, and platforms such as Medium) gives me confidence that as information is shared, knowledge will increase so that trust and reliability will grow. Collaboration is key here.”

An anonymous business leader said some solutions will include “improvements in engineering sophistication, user tools and user education and sophistication.”

A longtime technology writer, personality and conference and events creator commented, “The next gen is surprisingly savvy about sniffing out true and false. They grew up surrounded by falsehoods and will adapt, plus tech algorithms will provide indicators including truth meters, fact-checking and additional sources.”

An anonymous respondent wrote, “There have always been media of different quality. Think about the variable quality of newspapers throughout US history. Over time, people increasingly learn to sort out real news from fake news.”

An associate professor of urban studies wrote, “The proliferation of fake news is a relatively new phenomenon, and we as a society haven’t had time to respond. I trust we will eventually.”

A research scientist based in Europe predicted, “The different actors will take appropriate measures, including efficient interfaces for reporting and automatic detection, as well as implement efficient decision mechanisms for the censorship of such content.”

A retired consultant and strategist for US government organizations replied, “Regardless of technological improvements, the change agents here are going to have to be, broadly speaking, US Supreme Court judges’ rulings on Constitutional interpretations of free speech, communication access and any number of other Constitutional issues brought to the fore by many actors at both the state and national level, and these numerous judicial change agents’ decisions are, in turn, affected by citizen opinion and behavior.”

A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Problems will get worse faster than solutions can address, but that only means solutions are more needed than ever.”

A professor of journalism at New York University observed, “The fragmentation of the sources of media – and increasing audience participation – meant that it was no longer just canonical sources that could get their voices amplified.”

An independent systems integrator wrote, “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to correct for a feedback loop.”

An assistant professor based in North America replied, “My hope for improvement is just sheer optimism. I just have to believe that both people have begun to realize how damaging biased news can be and that tools will emerge to help navigate biased news.”

A development associate for an internet action group in the South Pacific observed, “The information environment will not improve, or may improve only minimally, because there are too many variables working against it, and security measures will have to work many times harder to keep the internet stable and secure as an environment for effective information and communication systems.”

An internet researcher and author said, “I don’t have faith in the ability of real news to prevail over fake news. It may not get worse, but I don’t see it getting better.”

A senior lecturer in data science observed, “I am confident that future algorithms will be able to estimate the reliability of available information.”

An anonymous respondent noted, “Although I believe some mechanisms can be developed to limit the spread of false information via bots, the kind of control it would take to prevent the spread of fake news would amount to censorship which I think will still be seen as incompatible with democratic ideals in 2027.”

A senior political scientist wrote, “There are too many actors who have incentives to make up their own facts and spread their own views and the current media environment rewards that behavior.”

A senior researcher and distinguished fellow for a major futures consultancy observed, “Reliable fact checking is possible. Google in particular has both the computational resources and talent to successfully launch a good service. Facebook may also make progress, perhaps in a public consortium including Google. Twitter is problematic, and would need major re-structuring including a strict, true names policy for accounts – which is controversial among some privacy sectors.”

A North American author and journalist said, “In the race to exploit or contain, the exploiters always have new techniques to try out.”

An anonymous respondent noted, “People on the whole want to discern the truth and, because of this, people will seek and create ways to clarify fact from fiction.”

A chief marketing officer wrote, “The impact of fake news has created a public backlash. Misinformation will always be an issue, but people want ‘trusted’ sources. Companies and individuals will push for more accountability, and internet familiarity and education of the public will increase skepticism.”

An anonymous professor of economics based in North America noted, “I’m feeling optimistic. There are technical solutions to ‘fake news’ and there are political solutions.”

An anonymous respondent replied, “The increasingly segregated and underfunded structure of the US education system will likely make the spread of disinformation easier. However, the more rigorous standards set by journalists, policy makers and academics will set a new precedent for what defines trustworthy sources and news.”

A research scientist with IBM Research noted, “We should be able to build socio-technological systems that reconcile ‘fake news’ against facts from vetted sources.”

An anonymous respondent wrote, “More attention to the issue will lead to more efforts to control problems.”

An associate professor of communication studies at a Washington, D.C.-based university said, “The fake news problem is not one that can be fixed with engineering or technological intervention, short of a total reimagination of communication network architecture.”

A town council member based in the southeastern United States commented, “Once trust is so seriously eroded, I don’t see how it can be regained.”

An anonymous respondent said, “It may get worse and then better, as society or technology figures out how to counter the spread of misinformation.”

A research scientist based in Moscow said, “I am sure that the meaning of the information environment will change by 2027, and good/bad will be not comparable to what we suppose about it now.”

A senior global policy analyst for a major online citizen advocacy group said, “Platforms are improving their tools for reporting and appropriately presenting fake news.”

A CTO for a major national research and education network commented, “We will eventually figure out ways to authenticate information.”

An anonymous research scientist based in North America wrote, “Technological mechanisms for improving the information environment might possibly emerge, but they will not be sufficient. The technology will not be deployed until there is a prevailing political will to do so, and I don’t think that will exists in the United States today.”

An anonymous internet pioneer/originator commented, “In the same way that there is more general trust in sources like the New York Times and the Wall Street Journal than sources like the National Enquirer, people will learn to trust reliable online sources to the detriment of unreliable ones or ones not known to be reliable.”

The technology editor for one of the world’s most-trusted news organizations commented, “Society will adapt (with the help of technology).”

An anonymous respondent wrote, “Google and Facebook are focusing money and attention on the problem of false information, which gives the movement legitimacy and momentum. We have not yet reached a societal tipping point where facts are valued, however – that will have to be triggered by a watershed event that suddenly makes it plain for all how valuable facts are. Like, perhaps, the investigation of #trumprussia.”

A consultant in the financial services industry replied, “New technologies will continue to emerge, so unless we focus on trusted sites, false information will continue to spread.”

A North American research scientist said, “In order to have any hope about the future, I need to believe that the information environment and what counts as a credible source will improve.”

An anonymous respondent wrote, “I do not think technology can keep up with people’s creativity or appetite for information they find congenial to their pre-existing beliefs.”

An anonymous North American research scientist wrote, “Education is lagging and there is a lack of rigor in how people learn about the world; pictures and video trump the reading of raw print information that requires more introspection.”

An anonymous research scientist based in North America observed, “We’re already on a trajectory to add certificates to most websites for https (secure web support). It is not a huge stretch to add certificates to news, showing where it came from.”
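
The certificate idea above can be pictured with a toy integrity check. The sketch below is an assumption-laden illustration, not an existing system: it uses a shared-secret HMAC as a stand-in for the asymmetric, certificate-based signatures a real news-provenance scheme would use, and the key, function names and article text are invented.

```python
import hashlib
import hmac

# Hypothetical publisher signing key; a real scheme would use an asymmetric
# key pair bound to the publisher by a certificate, not a shared secret.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_article(body: str) -> str:
    """Return a hex signature binding the article text to the publisher."""
    return hmac.new(PUBLISHER_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_article(body: str, signature: str) -> bool:
    """Check that the article text has not been altered since signing."""
    return hmac.compare_digest(sign_article(body), signature)

article = "City council approves new transit budget."
sig = sign_article(article)

assert verify_article(article, sig)                    # intact copy verifies
assert not verify_article(article + " (edited)", sig)  # tampered copy fails
```

The point of the sketch is only that, as with HTTPS certificates, a reader’s software could mechanically check where a story came from and whether it was altered in transit.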

An anonymous survey participant replied, “The internet is too critical a tool economically for the veracity of information to deteriorate. The battle will remain but commerce will drive toward ensuring the environment overall is secure, reliable and trustworthy. Otherwise its value is vastly diminished.”

A consultant based in North America noted, “Qualitative improvement in the information environment requires structural changes to the media system (i.e., business models for major increases in accountability journalism) that will take longer than a decade to realize. Moreover, on the ‘demand side’ we cannot expect to reduce the appetite for misinformation without a long-term effort to promote civic engagement, media literacy and the social norms related to information and democracy.”

An associate professor based in North America replied, “The environment will not improve in terms of the veracity of information shared because there is just so much information and the environment has become so fragmented. Consumers will have to change their behaviors, and hopefully a growth in fact-checking services will contribute to consumers’ ability to be smarter consumers in this environment.”

An early internet developer and security consultant commented, “Fake news is not a product of a flaw in the communications channel and cannot be fixed by a fix to the channel. It is due to a flaw in the human consumers of information and can be repaired only by education of those consumers. After 2016 and the 2017 European elections, we see an increased awareness of the need for that education.”

An adjunct senior lecturer in computing noted, “International organisations, legitimate or criminal (the difference only depends upon current laws), will continue to control popular media.”

A distinguished professor emeritus of political science at a US university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.”

An anonymous respondent replied, “Trust in technology and artificial-intelligence solutions to help with filtering and checking against human fact checkers/researchers. Do not trust the US political system though.”

A professor of information systems at a major technological university in Germany commented, “Semantic technologies will be able to cross-verify statements, much like meta analysis.”
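
One way to picture such cross-verification is an agreement score computed across vetted sources, in the spirit of meta-analysis. The source names and figures below are invented for illustration.

```python
# Hypothetical vetted sources, each recording a value for the same claim.
VETTED_SOURCES = {
    "wire_service": {"unemployment_rate": 4.1},
    "stats_agency": {"unemployment_rate": 4.1},
    "fringe_blog":  {"unemployment_rate": 9.9},
}

def cross_verify(claim_key: str, claimed_value: float) -> float:
    """Return the fraction of vetted sources that agree with a claimed value."""
    values = [facts.get(claim_key) for facts in VETTED_SOURCES.values()]
    agreements = sum(1 for v in values if v == claimed_value)
    return agreements / len(values)

# A claim matching two of three sources scores 2/3; the outlier scores 1/3.
assert cross_verify("unemployment_rate", 4.1) == 2 / 3
assert cross_verify("unemployment_rate", 9.9) == 1 / 3
```

Real semantic technologies would extract and align statements from text before comparing them; this sketch only shows the final reconciliation step.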

An instructor of political science based in North America wrote, “As technological advancements stabilize we will begin to tame the wild-west nature of the internet, similar to the way more independent newspapers replaced the patrician press in the past.”

A principal consultant said, “People will gain in sophistication, especially after witnessing the problems caused by the spread of misinformation in this decade. Vetting will be more sophisticated, and readers/viewers will be more alert to the signs that a source is not reliable.”

A professor based in North America observed, “I’m an optimist. As such, I believe that we will find ways to improve information and protect speech.”

A North American research scientist commented, “1) Technology will make it harder for true fake news to be disseminated; 2) and a widespread awareness of fake news may increase the public’s trust and reliance on trusted news authorities.”

A technical writer said, “Too many bad actors will be using fake news for their own benefit.”

A professor and researcher based in North America noted, “I’m an optimist. The tide of false information has to be stemmed. The alternative will be dystopia.”

A research scientist said, “As a political system we have dealt with similar problems before, just not in the internet domain (e.g., flyers in the printing press era). Computer scientists and new media will alleviate, not eliminate, this problem in the next 10 years.”

A professor of sociology based in North America said, “There are many platforms for communication but no quality control for most of them. Consequently, messages can spread quickly regardless of veracity.”

An anonymous respondent said, “The current administration gives me a very pessimistic view about the future of our country and the information environment.”

The director of networking for development at a foundation based in the Dominican Republic urged, “The way to solve the issue is NOT so much in designing systems for detecting and eliminating fake news but rather in educating people to manage information appropriately. Media and information literacy is the answer.”

A data scientist based in Europe who is also affiliated with a program at Harvard University wrote, “The information environment is built on the top of telecommunication infrastructures and services developed following the free-market ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market products.”

A senior vice president for government relations predicted, “Governments should and will impose additional obligations on platforms to increase their responsibility for content on their services.”

A doctoral candidate and fellow with a major international privacy rights organization said, “There can be no technical solutions to the problem of fake news as it is less of an issue with technology and more an issue with psychology and bias.”

A professor based in Europe commented, “Unless serious changes are made we are seeing the internet increasingly degrade into a tragedy of the commons.”

A professor of sociology based in Europe observed, “There is a possibility that new policy and tools will be developed on the back of research recommendations.”

An anonymous professor of cybersecurity at a major US university commented, “‘Fake news’ can refer to false information or biased presentation of information. I don’t see how the spread of either will be stopped through technical means.”

An anonymous educator predicted, “There will be increased information literacy through education.”

A professor and expert in technology law at a West Coast-based US university said, “Intermediaries such as Facebook and Google will develop more-robust systems to reward legitimate producers and punish purveyors of fake news.”

An anonymous futurist/consultant noted, “Younger people will improve the methods and structures of getting news and information so that it is unbiased and helpful.”

An anonymous researcher based in North America replied, “The information environment will improve because of 1) raised public awareness of media manipulation through technology; 2) new technical tools (primarily through data analytics) for detecting ‘bots’ in social media, fake sources, patterns in the dissemination of fake stories, etc. 3) policy development.”

A Ph.D. candidate in informatics commented, “I am optimistic about researchers’ and designers’ abilities to create technology to facilitate positive social change.”

A legal researcher based in Asia/Southeast Asia said, “Information is produced by people. The current environment shows that people are more attracted to and curious about fake ‘hot’ topics. I think the phenomenon of fake information spreading faster and wider is connected to human nature; thus the information environment will not change.”

An anonymous respondent wrote, “I don’t think it will improve. I don’t know how you can really fight misinformation while still allowing for a free exchange of information online.”

A senior lecturer at a university of technology replied, “There is no way for the information environment to stay ahead of attempts to put forward ‘fake news’, misinformation etc.”

An anonymous survey participant said, “My hope is that the demand for high-quality information and the ability to provide it broadly will improve. I went back and forth on which answer to provide, but ultimately decided that I am hopeful rather than cynical. I am concerned, but hope that the forces of empirical evidence will rise.”

An anonymous researcher based in North America observed, “There is no financial incentive for the few organizations that control most of the media to fix the situation.”

A university professor based in Asia/Southeast Asia said, “As people have to use platforms for internet communication, the information environment is managed by the owners of these platforms who may not be so interested in ethical issues.”

A postdoctoral associate at MIT noted, “I believe this will be one of the most important questions of next decade. It’s important for all stakeholders to at least acknowledge the problem.”

An anonymous respondent replied, “It is an arms race – both sides will improve, but ultimately, I believe that people will learn that being discerning about the trustworthiness of information is really in their own self-interest.”

A professor based in New York observed, “I am optimistic. The fact that we are discussing this in the open makes it easier to deal with, and more attention will be brought to it over time.”

An author and journalist based in North America said, “Social media, technology and legacy media companies have an ethical and economic incentive to place a premium on trusted, verified news and information. This will lead to the creation of new digital tools to weed out hoaxes and untrusted sources. There also will be new visual cues developed to help news consumers distinguish between trusted news sources and others. Beyond that, I believe this era could spawn a new one – a flight to quality in which time-starved citizens place high value on verified news sources.”

A past chairman of a major US scientific think tank and former CEO replied, “It should improve because there are many techniques that can be brought to bear, both human-mediated – such as collective intelligence via user voting and rating – and technological responses that are either very early in their evolution or not deployed at all. See spam as an analog.”

A research scientist based in North America predicted the remedies will include “human and artificial-intelligence monitoring.”

A professor of education policy commented, “I am answering this question only for the United States, where the average person has actually become quite poststructural in their thinking. Since there is no center around which to organize truth claims (fragmented political parties, social groups, identity groups, institutional affiliations, fragmentation of work environments, increasing economic precarity, etc.), and since the stakes for the very wealthy are increasing exponentially as environmental degradation expands, there is likely in my estimation to be more, not fewer, resources directed at destabilizing truth claims in the next 10 years.”

An anonymous survey participant noted, “The situation regarding the accuracy of information on the internet will continue to reflect human behavior. There will always be liars and people who distort the truth.”

A distinguished engineer for a major provider of IT solutions and hardware commented, “It’s not possible to censor the untrustworthy news without filtering some trustworthy news. That struggle means the situation is unlikely to improve.”

A North American research scientist said, “Fake news was always a problem, even before the internet. It just took a different form and, as the Internet evolves, its form will continue to evolve, but it will still be there.”

A researcher based in North America observed, “Artificial intelligence is improving, and this will make it possible for social media to improve.”

An anonymous internet activist/user based in Europe commented, “The information environment cannot be improved without violating the right to free speech; who can determine what is or is not fake news?”

A business leader based in North America noted, “I’m a 25-year veteran of the Silicon Valley and an optimist – 10 years is a long time to solve something like this.”

An academic based in North America replied, “Who decides what is trustworthy or not?”

An emeritus professor of communication for a US Ivy League university noted, “We have lost an important social function in the press. It is being replaced by social media, where there are few if any moral or ethical guidelines or constraints on the performance of informational roles.”

A former software systems architect replied, “Bad actors will always find ways to work around technical measures. In addition, it is always going to be human actors involved in the establishment of trust relationships and those can be gamed. I do not envision media organizations being willing participants.”

A copyright and free speech artist-advocate observed, “It will improve on certain sites and in certain places. But, the internet provides something for everyone, and people will go to the site that reinforces their beliefs.”

A North American futurist/consultant commented, “Trusted aggregators will emerge, much like Rotten Tomatoes for movies.”

An anonymous survey participant wrote, “It’s a good question. I can see paths to both outcomes.”

A consultant based in North America replied, “Accuracy/veracity in the information ecosystem will improve. Society will adapt to the issue through education and technical solutions.”

A political economist and columnist commented, “This, too, will pass because there will be a backlash.”

The founder of one of the internet’s longest-running information-sharing platforms commented, “There are a lot of smart people, effective people focused on this, including those at the News Integrity Initiative.”

A professor and institute director based in the US said, “Our president is unashamedly spreading disinformation. With that awful example from our highest office, how could truth and honest reporting prevail?”

A professor based in North America noted, “The media is a capitalist system. As a result, the information that will be disseminated will be biased, based on monetary interests.”

The executive director for an environmental-issues startup commented, “It may take some kind of ‘reader review’ system like Airbnb’s to make this work, but even that can be hacked. I’m not 100% sure how lies will be labeled as such on a large scale, but think that there will be news outlets that are human-touch intensive to help with this problem.”

An anonymous respondent observed, “Whatever is devised will not be seen as impartial; some things are not black and white. In other situations, the facts brought up to come to a conclusion are different from the facts used by others; each can have real facts, but it is the facts that are gathered that matter in coming to a conclusion. Who will determine what facts will be considered, or what is even considered a fact?”

An anonymous research scientist said, “The algorithms and ‘invisible’ nature of recommender systems will make it harder for people to realize they are in an echo chamber, hearing back whatever type of news confirms their world view. Also, for the up-and-coming generation, we do not value digital literacy as a core part of public education, and teach it only as an add-on in an ad-hoc manner well below math and language arts.”

An anonymous survey participant noted, “Misinformation will play a major role in conflicts between nations and within competing parties within nation states.”

An anonymous respondent said, “Throughout history, when there is a need someone will fill that need. There is a need for accurate information. Someone will fill that.”

An anonymous business leader replied, “It is too easy to create fake facts, too labor-intensive to check and too easy to fool checking algorithms.”

A project leader for a science institute commented, “We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. The existence of clickbait sites makes it easy for conspiracy theories to be rapidly spread by people who do not bother to read entire articles, nor look for trusted sources. Given that there is freedom of speech I wonder how the situation can ever improve. Most users just read the headline, comment and share without digesting the entire article or thinking critically about its content (if they read it at all).”

A professor and researcher based in North America noted, “I am critically optimistic about our collective response to the current crisis. And I am not certain that the information environments of past decades were empirically ‘better.’ It may be the case that we are able to witness the spread of rumors and misinformation and to document the effects better than in previous years. Are we witnessing an expansion of the number of people who are engaging with political information and attempting to participate in the public sphere? If so, this may be a novel problem, a challenge of growth rather than a deterioration of some more idealized past.”

A professor of public policy at a major US state university noted, “It will improve because smart journalists and technologists will find ways to reduce the spread and influence of malign narratives.”

A marketing consultant for an innovations company wrote, “Humans are innovative and will develop better methods to root out ‘fake news,’ which they have always done.”

An anonymous respondent commented, “Greed for power and dollars is driving information. Education is the way to improve this, starting with preschool. I don’t see either parameter changing much.”

A professor of sociology at a major university in the US Midwest commented, “I don’t see any effective mechanisms being put into place that are likely to lead to a better control of fake information in the future.”

An anonymous respondent replied, “Things will get better because people value truth and innovation follows where the value is.”

An anonymous activist/user wrote, “The quality of online information will worsen because it is not in the interest of governments to solve this problem, as they are also users of misinformation.”

An anonymous research scientist predicted, “The systems for producing false information will outpace efforts to block it. There are also issues with liberty and freedom of access.”

A US-based associate professor of political science commented, “The public will eventually be able to determine the difference between fake and non-fake news.”

A lecturer in media studies in a department of social, political and cognitive sciences at a major European university predicted, “We will create new tools able to evaluate the credibility of a news item. At the same time, the overall skills of the internet users will grow. Internet users will become more expert than now, and media literacy will grow, but not homogeneously: for less-educated people it will be more difficult to improve their digital culture.”

An anonymous respondent commented, “While blocking certain news sources is technically simple, fake news and propaganda are nothing new or intrinsic to the internet.”

An anonymous survey participant wrote, “I worry that sources of information will proliferate to the point at which it will be difficult to discern relatively unbiased sources from sources that are trying to communicate a point of view independent of supporting facts.”

An educational technology broker replied, “New technology will make it easier to track the source of information. In addition, fact-checking agencies will continue to monitor the accuracy of information.”

A professor at an American research university noted, “There will be new ways for fake news providers to circumvent whatever new technologies/procedures are implemented.”

A professor emerita and adjunct lecturer at two major US universities commented, “I have faith that technological advances will help solve the problems of determining accuracy of information and that education will help citizens advance their information literacy in the absence of a perfect environment.”

A chief technology officer said, “The internet and the ecosystem of companies around it have a way of solving their most important problems, like this.”

A research assistant at MIT predicted no improvement, noting, “‘Fake’ and ‘true’ are not as binary as we would like, and – combined with an increasingly connected and complex digital society – it’s a challenge to manage the complexity of social media without prescribing a narrative as ‘truth.’”

A journalist based in North America said, “The US government engages in fake news around the world. That’s not going to stop.”

An anonymous respondent observed, “The talent pool the media system draws its personnel from will further deteriorate. Media personnel are influenced by defective information, and – even more – the quality of inferences and interpretations will decrease.”

The CEO of a major American internet media company based in New York City replied, “Things are already improving! There was much less fake news on Facebook before the recent UK and French elections than we saw before the US election.”

An owner and principal sage for a technology company based in North America commented, “[Things will not get better, due to] freedom of speech rights, net neutrality and politics.”

An engineering director for Google observed, “The internet has evolved to address its weaknesses over time, and this will continue to happen.”

A librarian based in North America noted, “It has to improve, because information is just too valuable a commodity.”

A vice president of survey operations for a major policy research organization replied, “This is not a new issue. The means and speed of dissemination have changed. It cannot be legislated without limiting free speech.”

A senior international communications advisor commented, “I don’t believe that the next 10 years will yield a business model that will replace the one left behind – particularly with respect to print journalism, which in the past offered audiences more in-depth coverage than was possible with video or radio. Today, print journalists effectively work for nothing and are exposed to liability and danger that would have been unheard of 25 years ago. Moreover, the separation between the interests of those corporations interested in disseminating news and editorial has all but closed – aside from a few noteworthy exceptions. Moreover, consumers of media appear to be having a harder time distinguishing spurious from credible sources – this could be the end result of decades of neglect regarding the public school system, a growing reliance on unsourced and un-cross-checked social media or any number of other factors. Bottom line is that very few corporations seem willing to engage in a business enterprise that has become increasingly unfeasible from a financial point of view.”

A director of new media for a federation of organizations said, “Under present circumstances the ability of sophisticated, fringe hackers to operate with few resources is very difficult to counter.”

A technical evangelist based in Southern California said, “It’s impossible to filter content without bias.”

The owner of a consultancy replied, “We’re headed to a world where most people will use sources white-listed (explicitly or not) by third parties (e.g., Facebook, Apple, etc.).”

A doctoral candidate at a major US university said, “Those seeking to spread misinformation will evolve as the information environment does. Thus, while technologies may improve to address current misinformation spreading tactics, bad actors will simply find new chinks in the armor.”

A data scientist and blockchain expert based in Europe wrote, “Transparency will improve due to the wide adoption of blockchains and the democratization of data storage.”

An anonymous respondent commented, “At best, the definition of ‘lie’ will simply change and official disinformation will be called information anyway.”

A sociology Ph.D. wrote, “I have yet to see any evidence that the most-active political media consumers want more facts and less opinion.”

A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

An anonymous research scientist said, “I do not buy the assumption that information, ‘accurate’ or not, is the basis of political or – in fact – any action. I actually think it never has been. Yes, this is the story we like to tell when justifying actions vis-à-vis everyone else. It helps us present ourselves as rational, educated and considerate human beings. But no, in practice we do and say and write and report whatever seems reasonable in the specific situation for the specific purposes at hand. And that is OK, as long as others have the opportunity to challenge and contest our claims.”

A professor based in North America replied, “Technology is an arms race – each new filter will spawn a new workaround. Hopefully public skepticism toward fake news will win the day, because this problem has no technical solution.”

An author/editor/journalist based in Europe commented, “There will be sourcing and increased discrimination on the part of readers and viewers.”
