Elon University

The 2017 Survey: The Future of Truth and Misinformation Online (Q1 Credited Responses)

Credited responses to the primary research question:
Will the information environment improve/not improve by 2027? Why?

Internet technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center's Internet, Science and Technology Project in summer 2017 to share their answers to the following query:

What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.

Why do you think the information environment will improve/not improve?

Some key themes that emerged from the responses:
– Things will not improve because the internet’s growth and accelerating innovation are allowing more people and AI to create and instantly spread manipulative narratives.
– Humans are, by nature, selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar.
– In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil.
– The dwindling of common knowledge makes healthy debate difficult, destabilizes trust and divides the public; info-glut and the fading of news media are part of the problem.
– A small segment of society will find, use and perhaps pay a premium for information from reliable sources, but outside of this group ‘chaos will reign,’ and a worsening digital divide will develop.
– Technology will create new challenges that can’t or won’t be countered effectively and at scale.
– Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars.
– The most effective tech solutions to misinformation will endanger people’s dwindling privacy options, and are likely to remove the ability for people to be anonymous online and limit free speech.
– Technology will win out, as it helps us label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content.
– Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of ‘trust ratings.’
– Regulatory remedies could include software liability law, required identities and the unbundling of social networks.
– People will adjust and make things better; misinformation has always been with us, and people have found ways to lessen its impact.
– Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda.
– Technology alone can’t win the battle, though; the public must fund and support the production of objective, accurate information.
– Funding must be directed to the restoration of a well-fortified, ethical, trusted public press.
– Elevate information literacy; it must become a primary goal at all levels of education.

Written elaborations to Q1 by for-credit respondents

Following are full responses to Question #1 of the six survey questions, made by study participants who chose to take credit when making remarks. Some people chose not to provide a written elaboration. About half of respondents chose to remain anonymous when providing their elaborations to one or more of the survey questions. Respondents were given the opportunity to answer any questions of their choice and to take credit or remain anonymous on a question-by-question basis. Some of these are the longer versions of expert responses that are contained in shorter form in the official survey report. These responses were collected in an opt-in invitation to about 8,000 people.

Their predictions:

Micah Altman, director of research for the Program on Information Science at MIT, commented, “Powerful institutions have always had incentives to control information and shape communication. Technological advances are creating forces pulling in two directions: It is increasingly easy to create real-looking fake information; and it is increasingly easy to crowdsource the collection and verification of information. In the longer term, I’m optimistic that the second force will dominate, as transaction-cost reduction appears to relatively favor crowds over concentrated institutions.”

Brian Cute, longtime internet executive and ICANN participant, said, “The survey notes two elements that inform the issue of fake news: technology and humans. There are and will be technological solutions to identify and minimize fake news. The reason I responded that the environment will not improve is driven by the human element. Technology alone cannot address the problem. I am not optimistic that humans will collectively develop the type of rigorous habits that can positively impact the fake news environment. Humans have to become more effective consumers of information for the environment to improve. That means they have to be active and effective ‘editors’ both of the information they consume and of the information they share on the internet, because poorly researched information feeds the fake news cycle.”

Charlie Firestone, executive director, Aspen Institute Communications and Society Program, commented, “It is hard to see how it could be worse at this point. In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate. Most people want accurate information on which to base their business decisions; why shouldn’t the same be true of their citizen-sovereign decisions?”

Jennifer Urban, professor of law and director of the Samuelson Law, Technology & Public Policy Clinic at the University of California Berkeley, commented, “There has always been misinformation and people have always been susceptible to it. This is not going to change. What has changed is the speed and reach of misinformation today. I don’t see this changing fundamentally in the next decade. Entities with resources – including state actors – have too much interest in promoting misinformation. That said, I am hopeful that we will find a way to better flag and sort out misinformation than currently exists. Ten years is just too short a horizon. I think it will take longer.”

Jerry Michalski, futurist and founder of REX, replied, “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly. 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem. And 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”

Esther Dyson, a former journalist and founding chair at ICANN, a technology entrepreneur, nonprofit founder and philanthropist, said, “I think the environment will improve, but not because of technology.”

Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online – with the caveat that misinformation/disinformation versus verified information is likely to be a continuing arms race. In 2012, Facebook, Google and others had no incentive to pay attention to the problem. After the 2016 election, the issue of fake information has been spotlighted.”

David Brake, a researcher and journalist, replied, “The production and distribution of inaccurate information has lower cost and higher incentives than its correction does.”

J. Nathan Matias, a postdoctoral researcher at Princeton University, previously a visiting scholar at MIT’s Center for Civic Media, wrote, “News readers invest substantial effort to verify the news they read online, and people’s collective ability will only grow over time. Through ethnography and large-scale social experiments, I have been encouraged to see volunteer communities with tens of millions of people work together to successfully manage the risks from inaccurate news.”

Helen Holder, distinguished technologist for HP, said, “People have a strong tendency to believe things that align with their existing understanding or views. Unreliable information will have a substantial advantage wherever it reinforces biases, making it difficult to discredit or correct. Also, people are more inclined to believe information received from more than one source, and the internet makes it trivial to artificially simulate multiple sources and higher levels of popular support or belief.”

David Weinberger, writer and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, noted, “It is an urgent problem, so it will be addressed urgently, and imperfectly.”

Peter Dambier, DNS guru for Cesidian Root, commented, “Nations that introduce uncontrolled and undocumented censorship break every means of establishing trustworthiness by comparing multiple sources.”

Peter Eckart, director of information technology, Illinois Public Health Institute, replied, “The problem isn’t with the sources of information, but with the hearers of it. If we don’t increase our collective ability to critically analyze the information before us, all of the expert systems in the world won’t help us.”

Peter Levine, associate dean and professor, Tisch College of Civic Life, Tufts University, observed, “I don’t think there is a big enough market for the kinds of institutions, such as high-quality newspapers, that can counter fake news, plus fake news pays.”

Mike Roberts, pioneer leader of ICANN and Internet Hall of Fame member, replied, “There are complex forces working both to improve the quality of information on the net and to corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net improvement, but in hindsight the improvement may be judged inadequate. The other side of the complexity coin is ignorance. The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than was the case fifty or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartmentalize uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So ignorance breeds frustration, which breeds acting out, which breeds antisocial and pathological behavior, such as the disinformation that is the subject of this survey, and many other undesirable second-order effects. Issues of trustable information are certainly important, especially since the technological intelligentsia command a number of tools to combat untrustable info. But the underlying pathology won’t be tamed through technology alone. We need to replace ignorance and frustration with better life opportunities that restore confidence – a tall order and a tough agenda. Is there an immediate nexus between widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a virtuous circle in which acquisition of trustable information reduces ignorance, which leads to better use of better information, etc.”

Paul Kyzivat, retired software engineer and internet standards contributor, noted, “First, it will improve because it must. And we do have the means at hand to accomplish this.”

Clark Quinn, consultant with Quinnovation, said, “While I worry about the continuing escalation of battle between those with a general desire to improve and those with specific agendas, I believe that growing awareness of the value of constructive engagement, coupled with improvements in scrutability of comments, will improve things overall.”

Joseph Konstan, distinguished professor of computer science and engineering, University of Minnesota, observed, “Those trying to manipulate the public have great resources and ingenuity. While there are technologies that can help identify reliable information, I have little confidence that we are ready for widespread adoption of these technologies (and the censorship risks that relate to them).”

Judith Donath, fellow at Harvard’s Berkman Klein Center and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted methods will emerge to block false narratives and allow accurate information to prevail, and, yes, the quality and veracity of information online will deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to assure the veracity of their work, and the questions they ask have well-defined answers – and still there can be controversy about what is true, what work was free from outside influence. The truth of news stories is far murkier and more multifaceted. A story can be distorted, disproportional, meant to mislead – and still, strictly speaking, factually accurate. ‘Fake news’ is not new. The Weekly World News had a circulation of over a million for its mostly fictional news stories, which were printed and sold in a format closely resembling a newspaper. Many readers recognized it as entertainment, but not all. More subtly, its presence on the newsstand reminded everyone that anything can be printed – a useful skepticism, up to a point. But a pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald Trump’s repeated ‘fake news’ smears of the New York Times, Washington Post, etc., are among his most destructive non-truths.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “Growing digital literacy and the use of automated systems will tip the balance towards a better information environment.”

Rick Hasen, professor of law and political science, University of California-Irvine, said, “By 2027 there will be fewer mediating institutions such as acceptable media to help readers/viewers ferret out truth. And there will be more deliberate disinformation from people in and out of the US.”

Howard Greenstein, adjunct professor of management studies at Columbia University, said, “This is an asymmetric problem. It is much easier for single actors and small groups to create things that are spread widely, and once out, are hard to ‘take back.’”

Alexander Halavais, associate professor of social technologies, Arizona State University, said, “As there is value in accurate information, the availability of such information will continue to grow, in order to meet that market. However, when consumers are not directly paying for such accuracy, it will certainly mean a greater degree of misinformation in the public sphere. That means the continuing bifurcation of haves and have-nots, when it comes to trusted news and information.”

David Schultz, professor of political science, Hamline University, said, “The social media and political economic forces that are driving the fragmentation of truth will not significantly change in the next 10 years, meaning the forces that drive misinformation will continue.”

Mark Lemley, professor of law, Stanford University, wrote, “Technology cannot easily distinguish truth from falsehood, and private technology companies don’t necessarily have the incentive to try.”

Nigel Cameron, a technology and futures editor and president of the Center for Policy on Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of course, be manipulated), and the political environment is bad.”

Bill Woodcock, executive director of the Packet Clearing House, wrote, “There’s a fundamental conflict between anonymity and control of public speech. The countries that don’t value anonymous speech domestically are still free to weaponize it internationally, whereas the countries that do value anonymous speech must make it available to all, or else fail to uphold their own principle.”

Bob Frankston, internet pioneer and software innovator, said, “I always thought that ‘Mein Kampf’ could be countered with enough information. Now I feel that people will tend to look for confirmation of their biases and that radical transparency will not shine a cleansing light.”

Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation. The legal climate for security research will continue to improve, as its connection to national security becomes increasingly obvious. These changes will drive significant corporate and public sector improvements in security during the next decade.”

Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years’ experience at the BBC and as a digital consultant, wrote, “Our information environment has been immeasurably improved by the democratisation of the means of publication since the creation of the web nearly 25 years ago. We are now seeing the downsides of that transformation, with bad actors manipulating the new freedoms for antisocial purposes, but techniques for managing and mitigating those harms will improve, creating potential for freer, but well-governed, information environments in the 2020s.”

Christian H. Huitema, past president of the Internet Architecture Board, commented, “The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.”

Amali De Silva-Mitchell, a futurist, replied, “There is political and commercial value in misinformation. Absolutely ethical societies have never existed. Disclosures are critical and it will be important to state the source of news as being human or machine, with the legal obligation remaining with the human controller of the data.”

Alan D. Mutter, media consultant and faculty at the graduate school of journalism, University of California-Berkeley, replied, “The internet is, by design, an open and dynamically evolving platform. It’s the Wild West, and no one is in charge.”

Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., observed, “Globally, we have more people with more tools with more access to more information – and yes, more conflicting intent – than ever before; but, while messy and confusing, this will ultimately improve the information environment. We will continue to widen access to all types of information – access for citizen journalists, professionals, technical experts, others – so while the information environment becomes more diverse, the broader arc of human knowledge bends towards revelation and clarity; only mass suppression will stop the paid and unpaid information armies from discovering and revealing the truth.”

Sally Wentworth, vice president of global policy development for the Internet Society, wrote, “It’s encouraging to see some of the big platforms beginning to deploy internet solutions to some of the issues around online extremism, violence and fake news. And yet, it feels like as a society, we are outsourcing this function to private entities that exist, ultimately, to make a profit and not necessarily for a social good. How much power are we turning over to them to govern our social discourse? Do we know where that might eventually lead? On the one hand, it’s good that the big players are finally stepping up and taking responsibility. But governments, users and society are being too quick to turn all of the responsibility over to internet platforms. Who holds them accountable for the decisions they make on behalf of all of us? Do we even know what those decisions are?”

Alex ‘Sandy’ Pentland, member of the US National Academies and the World Economic Forum Councils, said, “From a scientific point of view, we know how to dramatically improve the situation, based on studies of political and similar predictions. What we don’t know is how to make it a thriving business. The current [information] models are driven by clickbait, and that is not the foundation of a sustainable economic model. Things will improve, but only for the minority willing to pay subscription prices.”

Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Two developments will help improve the information environment: 1) News will move to a subscription model (like music, movies, etc.) and subscription providers will have a vested interest in culling down false narratives. 2) Algorithms that filter news will learn to discern the quality of a news item and not just tailor to ‘virality’ or political leaning.”

Adam Gismondi, a researcher at the Institute for Democracy & Higher Education, Tufts University, observed, “Ultimately, the information distributors – primarily social media platform companies, but others as well – will be forced, through their own economic self-interest and public pushback, to play a pivotal role in developing filters and signals that make the information environment easier for consumers to navigate.”

Laurel Felt, lecturer at the University of Southern California, said, “There will be mechanisms for flagging suspicious content and providers and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps people can even install filters so that, when they’re doing searches, hits that don’t meet a certain trust threshold will not appear on the list.”

Jonathan Brewer, consulting engineer for Telco2, commented, “The incentives for social media providers are at odds with stemming the spread of misinformation. Outrageous claims and hyperbole will always generate more advertising revenue than measured analysis of an issue.”

Michael R. Nelson, public policy executive with Cloudflare, replied, “Some news sites will continue to differentiate themselves as sources of verified, unbiased information, and as these sites learn how to better distinguish themselves from ‘fake news’ sites, more and more advertisers will pay a premium to run their ads on such sites.”

Andrew Odlyzko, professor of math and former head of the University of Minnesota’s Supercomputing Institute, observed, “‘What is truth’ has almost always been a contentious issue. Technological developments make it possible for more groups to construct their ‘alternate realities,’ and the temptation to do it is likely to be irresistible.”

Janet Kornblum, a writer/journalist, investigator and media trainer, replied, “As people get smarter and smarter about information, they will understand it better. Perhaps this is wishful thinking. Here’s the thing: People will put more stock into names and brands they trust. The problem will remain if name brands (such as Fox News) continue to perpetrate outright lies, but I am hoping that the wisdom of the crowds prevails.”

Glenn Edens, CTO for the Technology Reserve at Xerox PARC, commented, “Misinformation is a two-way street. Producers have an easy publishing platform to reach wide audiences and those audiences are flocking to the sources. The audiences typically are looking for information that fits their belief systems, so it is a really tough problem.”

Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. As far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

David Sarokin of Sarokin Consulting, author of “Missed Information,” said, “There will be an ‘arms race’ between reliable and unreliable information.”

Paul Gardner-Stephen, senior lecturer, College of Science & Engineering, Flinders University, noted, “Increasing technical capability and automation, combined with the demonstrated dividends that can be obtained from targeted fake news, make an arms race inevitable. Governments and political parties are the major players. This is Propaganda 2.0.”

Jonathan Grudin, principal design researcher, Microsoft, said, “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression; contradictory claims have always existed in print, but have been manageable and often healthy.”

David Wood, a UK-based futurist at Delta Wisdom, said, “Both the ‘optimistic’ and ‘pessimistic’ scenarios have a lot of momentum behind them. I put the odds at about 65:35 in favor of the optimistic one. On balance, I believe enough good people will devote enough expertise to keep this problem under control.”

Deirdre Williams, retired internet activist, replied, “Human beings are losing their capability to question and to refuse. Young people are growing into a world where those skills are not being taught.”

Serge Marelli, an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”

Paul Saffo, longtime Silicon Valley-based technology forecaster, commented, “The information crisis happened in the shadows. Now that the issue is visible as a clear and urgent danger, activists and people who see a business opportunity will begin to focus on it. Broken as it might be, the internet is still capable of routing around damage.”

Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, DC, replied, “Distinguishing between fake news, humor, strange-but-true news or unpopular news is too hard for humans to figure out, let alone a computer.”

Greg Shatan, partner, Bortstein Legal Group, based in New York, replied, “Unfortunately, the incentives for spreading false information, along with the incentives for destabilizing trust in internet-based information, will continue to incentivize the spread of ‘fake news.’ Perversely, heightened concerns about privacy and anonymity are counterproductive to efforts to increase trust and validation.”

Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “Virtually all the structural incentives to spread misinformation seem to be getting worse.”

Barry Wellman, internet sociology and virtual communities expert and co-director of the NetLab Network, noted, “Software and people are becoming more sophisticated.”

Rich Ling, professor of media technology, School of Communication and Information, Nanyang Technological University, said, “We have seen the consequences of fake news in the US presidential election and Brexit. This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.”

Julian Sefton-Green, professor of new media education at Deakin University, Australia, replied, “The information environment is not a filter bubble, it is an extension of social and political tensions. It is impossible to make the information environment a rational, disinterested space; it will always be susceptible to pressure.”

Michael Zimmer, associate professor and privacy and information ethics scholar, University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be solved via technology.”

Rajnesh Singh, Asia-Pacific director for an internet policy and standards organization, observed, “More likely than not, the issue will be how to cope with the volume of information that is generated and the proportion of it that is inaccurate or fake.”

Marina Gorbis, executive director of the Institute for the Future, said, “It’s not going to be better or worse but very different. Already we are developing technologies that make it impossible to distinguish between fake and real video, fake and real photographs, etc. We will have to evolve new tools for authentication and verification. We will probably have to evolve both new social norms and regulatory mechanisms if we want to maintain the online environment as a source of information that many people can rely on.”

Patrick Lambe, principal consultant, Straits Knowledge, noted, “All large-scale human systems are adaptive. When faced with novel predatory phenomena, counter-forces emerge to balance or defeat them. We are at the beginning of a large-scale negative impact from the undermining of a social sense of reliable fact. Counter-forces are already emerging. The presence of large-scale ‘landlords’ controlling significant sections of the ecosystem (e.g., Google, Facebook) aids in this counter-response.”

Daniel Kreiss, associate professor of communication, University of North Carolina-Chapel Hill, commented, “Misinformation/fake news/ideological/identity media is a political, not a media, problem. They are the outcome, not the cause, of political polarization and, especially, of a contemporary Republican Party network of elites (including donors) that has embraced empirically dubious claims, grown more extreme ideologically and developed its own media ecosystem.”

Steven Miller, vice provost for research, Singapore Management University, wrote, “Even now, if one wants to find reliable sources, one has no problem doing that, so we do not lack reliable sources of news today. It is that there are all these other options, and people can choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity of sources that can be compared, and focus on what they want to believe. That type of situation will continue. Five or 10 years from now, I expect there to continue to be many reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable sources will have no problems doing so. Those who want to make sure they are getting a multiplicity of sources to see the range of inputs, and to sort through various types of inputs, will be able to do so, but I also expect that those who want to be in the game of influencing perceptions of reality, and changing the perceptions of reality, will also have ample means to do so. So the responsibility is with the person who is seeking the news and trying to get information on what is going on. We need more individuals who take responsibility for getting reliable sources.”

Dan Gillmor, professor at the Cronkite School of Journalism and Communication, Arizona State University, commented, “Only recently have people recognized what is happening, and what the stakes are if we don’t fix what’s broken.”

Raymond Hogler, professor of management, Colorado State University, replied, “Powerful state actors, including the Trump Administration, will continue to disseminate false, misleading and ideologically driven narratives posing as ‘news.’”

James Schlaffer, an assistant professor of economics, commented, “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”

Justin Reich, assistant professor of comparative media studies, MIT, noted, “Strategies to label fake news will require algorithmic or crowd-sourced approaches. Purveyors of fake news are quite savvy at reverse engineering and gaming algorithms, and equally adept at mobilizing crowds to apply ‘fake’ labels to their positions and ‘trusted’ labels to their opponents.”

Wendy Seltzer, strategy lead and counsel for the World Wide Web Consortium, replied, “I suspect our reactions to information technology are cyclical; as we get more feedback with new modes of information dissemination, first some exploit them, then others use that feedback to develop more measured responses.”

Nathaniel Borenstein, chief scientist at Mimecast, commented, “Internet technologies permit anyone to publish anything. Any attempt to improve the veracity of news must be done by some authority, and people don’t trust the same authorities, so they will ultimately get the news that their preferred authority wants them to have. There is nothing to stop them choosing an insane person as their authority.”

Irene Wu, adjunct professor of communications, culture and technology, Georgetown University, said, “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react to and access information and news that’s important, not the mechanisms that distribute them.”

Ari Ezra Waldman, associate professor of law at New York Law School, wrote, “The spread of misinformation will only improve if platforms take responsibility for their role in the process. So far, although intermediaries like Facebook have nodded toward doing something about ‘fake news’ and cyberharassment and other forms of misleading or harmful speech, they simultaneously continue to maintain that they are merely neutral conduits and are, therefore, uneasy about exercising any sort of control over information flow. The ‘neutral conduit’ canard is a socio-legal strategy that is little more than a fancy way of absolving themselves of responsibility for their essential role in the spread of misinformation and the decay of discourse.”

Gina Neff, professor, Oxford Internet Institute, said, “The economic stakes are simply too high to rein in an information ecosystem that allows false information to spread. Without the political commitment of major social media platforms to address the problem, the technical challenges to solving this problem will never be met.”

William L. Schrader, a former CEO with PSINet Inc., wrote, “Mankind has always lied and always will, which is why the winners of wars get to write the history their way and others have no say. But with the internet, the losers have a say! So which is better? Both sides, or just the winner? We have both sides today.”

Matt Armstrong, an independent research fellow working with King’s College, formerly executive director of the US Advisory Commission on Public Diplomacy, replied, “The influence of bad information will not change until people change. At present, there is little indication that people will alter their consumption habits. When ‘I heard it on the internet’ is a mark of authority rather than of derision, as it once was, we are in trouble. This is coupled with the disappointing reality that we are now in a real war of words where many consumers do not check whether the words are/were/will be supported by actions or facts. The words of now are all that matter to too many audiences.”

Stowe Boyd, futurist, publisher and editor in chief of Work Futures, said, “The rapid rise of AI will lead to a Cambrian explosion of techniques to monitor the web, non-web media sources and social networks, and to rapidly identify and tag fake and misleading content.”

Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, observed, “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”

Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agree with their point of view. When the president of the US frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”

Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “As a computer technologist, I know that the technology to check information is within our technical grasp. Our US society simply needs to decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased and verified information.”

Jeff Jarvis, professor at the City University of New York Graduate School of Journalism, commented, “Reasons for hope: Much attention is being directed at manipulation and disinformation; the platforms may begin to recognize and favor quality; and we are still at the early stage of negotiating norms and mores around responsible civil conversation. Reasons for pessimism: Imploding trust in institutions; institutions that do not recognize the need to radically change to regain trust; and business models that favor volume over value.”

John Wilbanks, chief commons officer, Sage Bionetworks, replied, “I’m an optimist, so take this with a grain of salt, but I think as people born into the internet age move into positions of authority they’ll be better able to distill and discern fake news than those of us who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not that the environment will get better, it’s that those younger will be better fitted to survive it.”

David Sarokin, writer, commented, “People spread the information they want to spread, reliable or not. There’s no technology that will minimize that tendency.”

Paul Jones, director of ibiblio.org, University of North Carolina-Chapel Hill, noted, “Newspapers took quite a while to move from yellow journalism to responsible reporting. Accepted procedures, social contracts, and even laws aided by profit motives helped get us out of that sadder time. We’ll need more than software, but we have a chance of getting things better over time.”

Frank Kaufmann, founder and director of several international projects for peace activism and media and information, commented, “The quality of news will improve, because things always improve.”

Sam Lehman-Wilzig, associate professor and former chair of the School of Communication, Bar-Ilan University, Israel, wrote, “Given the critical importance for democracy and for a properly functioning economy of factual information – and given many previous situations where serious societal dysfunction resulting from technology was significantly resolved through other technological solutions – it stands to reason that here too technological fixes will strongly ameliorate the problem, albeit not perfectly or completely.”

Kenneth Sherrill, professor emeritus of political science, Hunter College, City University of New York, said, “First, readers will continue to sort in terms of which information sources they trust. Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”

Geoff Scott, CEO of Hackerati, commented, “‘Fake news’ works because it supports the point of view of the people it targets, which makes them feel good, right or vindicated in their beliefs. It takes critical thinking to overcome this, which requires effort and education. This isn’t a technical or information problem; it’s a social problem.”

Garth Graham, an advocate for community-owned broadband with Telecommunities Canada, explained, “I answered positively in spite of disliking the assumption in the question. Truth versus lie is a false dichotomy. We are informed by context.”

Philip Rhoades, retired IT consultant and biomedical researcher with Neural Archives Foundation, said, “The historical trend is for information to be less reliable and for people to care less.”

Sean Justice, assistant professor at Texas State University-San Marcos, wrote, “Generally, humans meet and overcome the challenges they face, especially when those challenges are existential. Proof: we exist.”

Edward Kozel, an entrepreneur and investor, predicted, “Although trusted sources (e.g., the New York Times) will remain or new ones will emerge, the urge for a mass audience and advertising revenue will encourage widespread use of untrusted information (again, social media, etc.).”

Neville Brownlee, associate professor of computer science at the University of Auckland, said, “The ability to target individuals is already being exploited for commercial gain. For that to change, individuals will need to become much more aware of how that targeting works, and how it really affects them.”

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”

Brian Harvey, teaching professor emeritus at the University of California-Berkeley, said, “There’s nothing new or internet-specific about fake news. Well-known examples include McCarthy’s witch-hunt and Hearst’s Spanish-American War, but also almost all US news coverage of the Vietnam War, which came straight out of Department of Defense press releases. Today’s fake news is more brazen, but, for example, there was a very visible anti-Sanders prejudice in the New York Times’ coverage of the Democratic primaries last year. So, even if people develop anti-fake-news technology, the gatekeepers will be the same rich and powerful people who edit the Times.”

Adam Nelson, a developer for Amazon, replied, “We had yellow journalism a hundred years ago and we have it now. We’re at a low point of trust, but people will begin to see the value of truth once people become more comfortable with what social platforms do and how they work.”

David Conrad, a chief technology officer, replied, “In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.”

Daniel Wendel, a research associate at MIT, said, “Trust is inherently personal. While central authorities can verify the identity of a particular website or person, people are less likely to trust a ‘trusted’ centralized fact checker [than the sources who express the same belief system as they and their friends]. For example, snopes.com has already been discounted by right-wing pundits as being too ‘liberal.’ Trust must come from networks rather than authorities, but the ideas behind that are nascent and the technologies do not yet exist.”

Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, observed, “Fake news spreads faster than genuine news. It is more attractive and ‘hot.’ We do not see corresponding efforts from genuine news peddlers to give factual information that is timely and interesting. On the contrary, reporters have become lazy, lifting articles off social media and presenting only obvious facts. Fake news peddlers have invested resources (domains and bots) to propagate their agenda. There isn’t a corresponding effort by genuine news reporters. People will get so used to being ‘duped’ that they will treat everything they read with skepticism, even real news. It will no longer be financially viable to invest in real news as the readership may go down. In such an environment, it is likely fake news will continue to thrive.”

Larry Keeley, founder of innovation consultancy Doblin, predicted technology will be improved but people will remain the same, writing, “The challenge of providing robust and trusted information systems is too important for it not to emerge technologically. Plus, capabilities adapted from both bibliometric analytics and good auditing practices will make this a solvable problem. However, non-certified, compelling-but-untrue information will also proliferate. So the new divide will be between the people who want their information to be real vs. those who simply want it to feel important. Remember that quote from Roger Ailes: ‘People don’t want to BE informed, they want to FEEL informed.’ Sigh.”

Jan Schaffer, executive director of J-Lab, said, “There are so many people seeking to disseminate fake news and produce fake videos in which officials appear to be talking that it will be impossible to shut them all down. Twitter and Facebook and other social media players could play a stronger role. Only a few national news organizations will be trusted sources – if they can manage to survive.”

Steve Axler, a user-experience researcher, replied, “Social media and the web are on too large a scale to control content.”

Nick Ashton-Hart, a public policy professional based in Europe, commented, “Ways will be found to allow contextual verification of content as well as verification of content generators for many reasons related to the incentives to do so: too many platforms and individuals will need to be protected from reputational and other damage.”

Michael Rogers, principal at the Practical Futurist, wrote, “It will inevitably improve because right now we’re doing almost nothing defensively. Even simple steps like AI-powered IP address blocking will make a difference. The question is: how much of a difference?”

Scott MacLeod, founder and president of World University and School, replied, “The information environment will improve because of the ongoing generation of information with artificial intelligence that will counter ‘fake news.’ For example, if major, somewhat credible news agencies don’t do this, universities and their students could create their own information environments – thanks to the distribution of the internet. I also think that this could occur in all 7,099 living languages, as potential counterbalances.”

Eleni Panagou, cultural informatics and communication scientist at Arwen Lannel Labs in Greece, wrote, “The future information ecosystem could be a radiant socio-technical advantage to a more resilient global society – more progressive, more eco-friendly, more peaceful, more sustainable.”

Morihiro Ogasahara, associate professor at Kansai University, said, “People basically need correct information to judge their environment.”

Ross Chandler, principal network architect for nir, said, “People should be trusted to become less credulous and more discerning about what purports to be news. Factual information sources are vital. Both self-appointed and official censors need to be opposed as freedom of speech is vital to achieving and maintaining freedom.”

Sahana Udupa, professor of media anthropology at Ludwig Maximilian University of Munich, wrote, “There will be greater awareness of digital information, but the spread of this awareness will be highly uneven across the globe.”

Ian Peter, internet pioneer, historian and activist, observed, “It is not in the interests of either the media or the internet giants who propagate information, nor of governments, to create a climate in which information cannot be manipulated for political, social or economic gain. Propaganda and the desire to distort truth for political and other ends have always been with us and will adapt to any form of new media which allows open communication and information flows.”

Uta Russmann, a professor whose research is concentrated on political communication via digital methods, noted, “Messages are spread by people and people will not change. They will keep on spreading misleading information to persuade others. However, I do think artificial intelligence will become exponentially better at reading those messages and ‘commenting’ on them and ‘marking’ them as misleading or fake.”

John Perrino, senior communications associate at George Washington University School of Media and Public Affairs, wrote, “We are already seeing the use of AI and new algorithms that sort the quality of articles by how established the website source is and who is sharing the post.”

Jon Lebkowsky, web consultant/developer, author and activist, commented, “Given the complexity of the evolving ecosystem, it will be hard to get a handle on it. The decentralization of education is another difficult aspect: universal centralized digital literacy education could potentially mitigate the problem, but we could be moving away from universal standard educational systems.”

Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, observed, “I’m not sure the overall information environment will improve, because so many players and interests see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to ensure the quality, and especially the disinterestedness, of information.”

James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “Information systems incentivize getting attention. Lying is a powerful way to do that. To stop that requires high surveillance – which means government oversight, which has its own incentives not to tell the truth.”

Stuart Elliott, visiting scholar at the US National Academies of Sciences, Engineering and Medicine, observed, “People don’t want to be misled or made to appear foolish. With the recent widespread recognition about problems with unreliable news, people will adapt their behaviors and in some cases, make use of available technologies to address the problem. Effective approaches to address the problem don’t have to be new ones. For example, relying on news from organizations with established brands for accuracy is a well-established approach. What we’re experiencing now is a new version of a very old problem.”

Michael J. Oghia, an author, editor and journalist based in Europe, said, “Propaganda and misleading information aren’t new; we’ve been learning how to fight them for as long as we’ve been able to communicate. Given the severity of the issue, political and social institutions, media literacy skill programs, platforms and a host of others are responding to this phenomenon.”

Veronika Valdova, managing partner at Arete-Zoe, noted, “Rogue regimes like Russia will continue exploiting the information environment to gain as much power and influence as possible. Jurisdictional constraints will make intervention less practicable. Also, whilst the overall information environment in English-speaking countries might improve due to the employment of artificial intelligence and easier neutralization of bots, this may not necessarily be the case for small nations in Europe where the environment is compartmented by language.”

Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing than on the ability to get people to *doubt* the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”

Peter Jones, associate professor in strategic foresight and innovation at OCAD University, Toronto, predicted, “By 2027 decentralized internet services will displace mainstream news, as corporate media continues to erode trust and fails to find a working business model. Field-level investigative journalism will be crowd-funded by smaller consortiums, as current news organizations will have moved into entertainment, such as CNN already has.”

Kenneth R. Fleischmann, associate professor at the University of Texas-Austin School of Information, wrote, “Over time, the general trend is that the proliferation of information and communications technologies (ICTs) has led to a proliferation of opportunities for different viewpoints and perspectives, which has eroded the degree to which there is a common narrative. In some ways this parallels the trend away from monarchy toward more democratic societies that welcome a diversity of perspectives. So I anticipate the range of perspectives to increase rather than decrease, and for these perspectives to include not only opinions but also facts, which are inherently reductionist and can easily be manipulated to suit the perspective of the author, following the old aphorism about statistics that Mark Twain attributed to Benjamin Disraeli [‘There are three kinds of lies: lies, damned lies and statistics.’], which originally referred to experts more generally.”

Laurie Rice, associate professor at Southern Illinois University-Edwardsville, said, “While I find efforts to stop the spread of misinformation encouraging, as long as there are people who profit financially from spreading misinformation, they will have a strong incentive to find ways to defeat efforts to curtail it. I fear this may outweigh the incentives of those trying to stop them.”

Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The information environment will continue to change, but the pressures of politics, advertising and stock-return-based capitalism reward those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”

J. Cychosz, a content manager and curator for a scientific research organization, commented, “False information has always been around and will continue to remain. Technology will emerge that will help identify falsehoods, and culture will shift, but there will always be those who find a path around.”

Liam Quin, an information specialist with the World Wide Web Consortium, said the information environment is unlikely to be improved because, “Human nature won’t change in such a short time, and people will find ways around technology.”

Ben Justice, professor and chair in Rutgers University’s department of educational theory, policy and administration, observed, “The struggle over information flow is as old as civilization and so in that sense there will always be a relationship between power and knowledge. The powerful are themselves split on this question, and have been, until recently, somewhat complacent about it. As a result, banking, media and energy have had a strong run at creating our current information environment. For the next decade, there is interest in other sectors (especially tech) in tipping the scales the other way. Environmental collapse and generational change should also contribute some on-the-ground realities.”

Mark P. Hahn, a chief technology officer, wrote, “The situation will improve, but not that much. People will still need to make an effort to discern reliable sources, but better tools will be created to assist in that search.”

Taina Bucher, associate professor in the Centre for Communication and Computing at the University of Copenhagen, commented, “Having faith in what’s possible is always the first step to changing things.”

Scott Guthrey, publisher for Docent Press, said, “For all practical purposes, governments control communication channels and will not be able to resist using that control for their own benefit.”

Andrew Dwyer, an expert in cybersecurity and malware at the University of Oxford, commented, “I believe we have experienced a revolution in news sharing, but in the future we will see a consolidation of ‘trusted’ news sources and individuals will ask for these above other things. This will combine conventional and new media forms in a hopefully innovative way.”

Michael P. Cohen, a principal statistician, replied, “The situation has become so ridiculous that I expect pushback. I hope schools will start to emphasize the value of evidence-based information.”

Dean Willis, consultant for Softarmor Systems, commented, “Governments and political groups have now discovered the power of targeted misinformation coupled to personalized understanding of the targets. Messages can now be tailored with devastating accuracy. We’re doomed to living in targeted information bubbles.”

Axel Bruns, professor at the Digital Media Research Centre, Queensland University of Technology, commented, “Moral panics over new media platforms are nothing new. The web, television, radio, newspapers and even the alphabet were seen as making it easier to spread misinformation. The answer is media literacy amongst the public, which always takes some years to catch up with the possibilities of new media technologies.”

Sharon Tettegah, professor at the University of Nevada, commented, “As we learn more about the types of information, we will be able to isolate misinformation and reliable sources.”

John Lazzaro, a retired electrical engineering and computing sciences professor from the University of California-Berkeley, wrote, “I don’t think society can reach a consensus on what constitutes misinformation, and so trying to automate the removal of misinformation won’t be possible.”

Paul Hyland, a principal consultant for product management and user experience at Higher Digital, observed, “People who care about truth have now realized the challenge they face, and will start to develop ways to improve the quality of discourse. It won’t be completely fixed, but will likely improve from the current low point.”

Scott Shamp, a dean at Florida State University, commented, “Too many groups gain power through the proliferation of inaccurate or misleading information. When there is value in misinformation, it will rule.”

Mike Meyer, chief information officer at University of Hawaii, wrote, “These issues are basic to the redefinition of human society in the 21st century. The realization of the problems has frightened people more than I have ever seen, so they will work on this actively.”

Amber Case, research fellow at Harvard Berkman Klein Center for Internet & Society, replied, “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. The incentive model for fake news is based on the sites that spread it themselves. In order to reduce the spread of fake news, we must disincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop many of these postings.”
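
Case’s delayed-revenue idea can be pictured as an escrow ledger that accrues ad funds per article and only releases them after a fact-checking verdict. The following is a minimal, illustrative Python sketch of that logic, not any platform’s actual system; all names and amounts are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AdRevenueEscrow:
    """Toy model of the delayed-revenue idea: hold each article's ad
    revenue until a fact-checking verdict arrives. All names here are
    illustrative, not any platform's real API."""
    held: dict = field(default_factory=dict)  # article_id -> accrued funds
    forfeited: float = 0.0                    # funds withheld from fake articles

    def accrue(self, article_id: str, amount: float) -> None:
        # Revenue accrues while the article circulates, but is not paid out yet.
        self.held[article_id] = self.held.get(article_id, 0.0) + amount

    def settle(self, article_id: str, verified_accurate: bool) -> float:
        # After fact-checking: release funds if accurate, withhold if fake.
        amount = self.held.pop(article_id, 0.0)
        if verified_accurate:
            return amount            # paid out to the publisher
        self.forfeited += amount     # the financial incentive disappears
        return 0.0

escrow = AdRevenueEscrow()
escrow.accrue("article-123", 40.0)
print(escrow.settle("article-123", verified_accurate=False))  # 0.0
```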

George Siemens, professor at LINK Research Lab at the University of Texas-Arlington, commented, “Social media, and the internet more broadly, require trust and confidence. The survivability of public spaces of discourse requires trust in the information being shared. A trusted information environment is needed if digital public spaces are to endure.”

Matt Mathis, a research scientist who works at Google, said, “Missing is an understanding of the concept of ‘an original source.’ For science this is an experiment; for history and news it is an eyewitness account by somebody who was verifiably present. Adding the facts of how and why we know this to non-original sources will help readers understand that facts are verifiable.”
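
Mathis’s notion of carrying “how and why we know this” alongside non-original sources resembles a provenance chain attached to each account of a claim. A minimal sketch, with entirely hypothetical field names, might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provenance:
    """Hypothetical provenance record: what the original source was,
    and how and why we know it, attached to any derived account."""
    original_source: str                          # e.g., experiment or eyewitness
    source_type: str                              # "experiment", "eyewitness", ...
    how_known: str                                # why the account is credible
    derived_from: Optional["Provenance"] = None   # chain back to the original

eyewitness = Provenance(
    original_source="Reporter verifiably present at the event",
    source_type="eyewitness",
    how_known="byline plus timestamped photographs from the scene",
)
wire_summary = Provenance(
    original_source="Wire-service summary of the report",
    source_type="secondary",
    how_known="cites the original dispatch",
    derived_from=eyewitness,  # readers can walk the chain to the original
)
print(wire_summary.derived_from.source_type)  # "eyewitness"
```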

John Anderson, director of Journalism and Media Studies at Brooklyn College, City University of New York, wrote, “There’s been such a massive diminution of trust in institutions over recent decades (particularly related to media and education) that restoring confidence in our information ecosystems must extend far beyond simple technical fixes.”

Stephen Downes, researcher with the National Research Council of Canada, wrote, “Things will not improve. There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space.”

Tom Valovic, contributor to Technoskeptic magazine and author of “Digital Mythologies,” commented, “Artificial intelligence that will supplant human judgment is being pursued aggressively by entities in Silicon Valley and elsewhere. Algorithmic solutions to replacing human judgment are subject to hidden bias and will ultimately fail to accomplish this goal, in my opinion. They will only continue the centralization of power in a small number of companies that control the flow of information.”

Philip J. Nickel, lecturer at Eindhoven University of Technology, said, “The decline of traditional news media and the persistence of closed social networks will not change in the next 10 years. These are the main causes of the deterioration of a public domain of shared facts as the basis for discourse and political debate.”

Miguel Alcaine, International Telecommunication Union Area Representative for Central America, commented, “The boundaries between online and offline will continue to blur. We understand online and offline are different modalities of real life. There is and will be a market (public and private providers) for trusted information. There is and will be space for misinformation. The most important action societies can take to protect people is education, information and training.”

Sharon Haleva-Amir, lecturer in the School of Communication, Bar Ilan University, Israel, said, “I fear that the phenomenon of fake news will not improve due to two main reasons: 1) There are too many interested actors in this field (in both business and politics) who gain from the dispersion of false news and are therefore interested in keeping things the way they are; 2) Echo chambers and filter bubbles will continue to exist, as these attitudes are typical of people’s behavior offline and online. In order to change that, people will have to be educated from early childhood about the importance of both the credibility of sources and the variability of opinions that create the market of ideas.”

Fred Davis, a futurist based in North America, wrote, “Automated efforts to reduce fake news will be gamed, just like search is. Search engine optimization and other things that corrupt the information discovery process have been in place for over 20 years, and the situation is still bad. Also, it may be difficult to implement technology because it could also be used for mass censorship. Mass censorship would have a very negative effect on free speech and society in general.”

danah boyd, principal researcher, Microsoft Research and founder, Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”

Sean Goggins, an associate professor and sociotechnical data scientist, wrote, “Our technical capacity to manipulate information will continue to grow. With investment tilted toward for-profit enterprise and the intelligence community and away from public-sector research like that sponsored by the National Science Foundation, it’s doubtful that technology for detecting misinformation will keep up with technologies designed to spread misinformation.”

Susan Etlinger, industry analyst, Altimeter Research, said, “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms, and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cyber security experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”

Scott Spangler, principal data scientist, IBM Watson Health, pointed out that technologies now exist to make fake information almost impossible to discern. He wrote, “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”

Peter Lunenfeld, a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”

Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “False and misleading information has always been part of all cultures (gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be. I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is MORE speech. The only major fear I have is of massive communications conglomerates imposing pervasive censorship.”

Allan Shearer, associate professor at the University of Texas-Austin, observed, “The problem is the combination of the proliferation of platforms to post news and an increasing sense of agency in each person that his/her view matters, and the blurring of facts and opinions.”

Evan Selinger, professor of philosophy, Rochester Institute of Technology, wrote, “Spreading disinformation and preventing its propagation is a cat-and-mouse game. However, I suspect that over time the balance will tilt in favor of better filters and better practices. The problem has become a major cause of concern for corporations and governments alike. While early efforts to combat disinformation might be clumsy, there seems to be enough drive across sectors to create genuine improvement. The one caveat I’d add is that there are different levels of disinformation. The cautious optimism I’m predicting only applies to low to medium(ish) grades.”

Garrett A. Turner, a vice president for global engineering, noted, “The information environment will not improve because misinformation has proven to be very effective, and it is extremely time-consuming to validate or police. In the transmission of information online it is difficult to decipher factual news from entertainment.”

Alexios Mantzarlis, director of the International Fact-Checking Network based at the Poynter Institute for Media Studies, commented, “This issue was not as central in the public debate before 2016. While the risk of misguided solutions is high, lots of clever people are trying to find ways to make the online information ecosystem healthier and more accurate. I am hopeful their aggregate effect will be positive.”

Christopher Jencks, a professor emeritus, said, “Reducing ‘fake news’ requires a profession whose members share a commitment to getting it right. That, in turn, requires a source of money to pay such professional journalists. Advertising used to provide newspapers with money to pay such people. That money is drying up, and it seems unlikely to be replaced within the next decade.”

Alan Inouye, director of public policy for the American Library Association, commented, “New technologies will continue to provide bountiful opportunities for mischief. We’ll be in the position of playing defense as new abuses or attacks arise. Thus success will be a future that is, on balance, not worse than today’s situation.”

Scott Amyx, managing partner of Amyx Ventures & Amyx+, wrote, “The sophistication of agents and intelligent systems will continually challenge the information ecosystem, much like the continued tit for tat in cybersecurity.”

Lokman Tsui, assistant professor, School of Journalism and Communication, The Chinese University of Hong Kong, commented, “The information environment will improve. This is not a new question; we had concerns about fake news when radio broadcasting and mass media first appeared (for example, Orson Welles’ reading of ‘War of the Worlds’). People will develop literacy. Standards, norms and conventions to separate advertising from ‘organic’ content will develop. Bad actors who profit from fake news will be identified and targeted.”

Philipp Müller, postdoctoral researcher at the University of Mainz, Germany, replied, “I do not believe that the information environment will either ‘improve’ or ‘worsen’ through technological developments. As long as the internet exists, it will remain the responsibility of the user to decide which information sources (and which information selection technologies) can be trusted.”

Andreas Birkbak, assistant professor, Aalborg University, Copenhagen, said, “The information environment will not improve because there is no way to automate fact-checking. Facts are context-dependent.”

Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “The problem with online news is structural: There are too few gatekeepers, and the internet business model does not sustain quality journalism. The reason is simply that advertising revenue has been untethered from news production.”

Kate Paine, CEO, Paine Publishing, noted, “In the short term, things will get worse, but long term, as the nature of ‘fake news stories’ becomes more widely known, we will learn either through technology or experience to differentiate what is true and what is not.”

Sebastian Benthall, junior research scientist, New York University Steinhardt, responded, “The information environment is getting more complex. This complexity provides more opportunities for production and consumption of misinformation.”

Brooks Jackson of FactCheck.org wrote, “The current state of distrust of honest media is being driven in large part by a president who is steadily losing popularity and by profit-driven media personalities whose popularity has peaked; it will decline as events discredit their theories. Also, the economy is expected to continue its slow but steady recovery from the near-Depression of 2007-2009, which should cause the inchoate rage of a large part of the public to dissipate.”

Andrew Nachison, author, futurist and founder of WeMedia, noted, “Technology will not overcome malevolence. Systems built to censor communication, even malevolent communication, will be countered by people who circumvent them.”

Darel Preble, president and executive director, Space Solar Power Institute, commented, “The tightly linked energy economics, environmental and related technical and political threads we track and contribute to are not improving in the technical trust sphere. Even in the technical media, the Jacobson-Clack (PNAS) brouhaha substituted ad hominem attacks (or volume) and repetition for technical accuracy on these complex problems. Few people are familiar with these problems or want to risk their paycheck to see them fixed, so they will continue growing for now. Fundamentally, the massively wealthy fossil fuel industry is blocking the best technical solution, space solar power, from moving forward because it works. Until the global SSP community grows large enough in horsepower, this morass will continue growing in severity.”

Ray Schroeder, associate vice chancellor for online learning, University of Illinois-Springfield, replied, “Recent polls show a swing back toward credible news sources. I anticipate the further growth of ‘fact-checking’ sites.”

Davide Beraldo, postdoctoral researcher, University of Amsterdam, noted, “Although companies, governments and activists are putting a lot of effort into denouncing and countering the phenomenon, they all have interests in controlling and manipulating information. This has been a constant throughout history.”

Francois Nel, director of the Journalism Leaders Programme, University of Central Lancashire, noted, “The first step to improving anything (including the information environment) is to recognise the problem – and that has happened with the emergence of ‘fake news’ as a meme. The challenge, of course, is to ensure the cure is commensurate to the problem. If combating misinformation, disinformation and propaganda leads to widespread censorship, repression of free speech and the robust exchange of ideas, our information environment – and societies – will be diminished.”

G. Hite, a researcher, replied, “The information environment will improve as time progresses, as there are more and more accredited entities putting valuable information online.”

Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to continue by those with power, as we are already seeing with increased national pressure on providers/companies by a range of means from law and regulation to moral and consumer pressures.”

Denise N. Rall, adjunct research fellow, Southern Cross University, Australia, said, “I completed a Ph.D. on the academic study of the internet in 2007. Things have changed a great deal since then. The internet itself poses a problem by its ‘open-plan’ navigating by TCP/IP and the use of IP addressing. Perhaps the World Wide Web version 2.0 or some future type of structure that is not open-access via IP addressing will ensure the trustworthiness of information.”

Tony Smith, boundary crosser for Meme Media, commented, “Expect marginal improvement, Steven Pinker-style, now that misinformation is a known problem, but not substantial improvement.”

Steven Polunsky, writer with the Social Strategy Network, replied, “As with most disruptive events, people will adjust to accommodate needs and the changing environment.”

Timothy Herbst, senior vice president of ICF International, noted, “We have no choice but to come up with mechanisms to improve our information environment. The implications of not doing so will further shake trust and credibility in the institutions needed for a growing and stable democracy. Artificial intelligence (AI) should help, but technological solutions won’t be enough. We also need high-touch solutions and a reinforcement of norms that value accuracy to address this challenge. I have no clue what that will look like, and fingers crossed that it won’t take a devastating crisis to spark the changes needed.”

Eileen Rudden, co-founder of LearnLaunch, wrote, “The lack of trust in established institutions is at the root of the issue. Trust will need to be re-established.”

Joel Reidenberg, chair and professor of law, Fordham University, wrote, “The complexity of the information ecosystem and the public’s preference for filter bubbles will make improvements very difficult to achieve at scale.”

Liz Ananat, an associate professor of public policy and economics at a major US university wrote, “It will likely get worse first, but over 10 years, civil society will respond with resources and innovation in an intensive effort. Historically, when civil society has banded together and given its all to fight destructive forces, it has been successful.”

Andrea Forte, associate professor, Drexel University, said, “In the second decade of the popular internet, we saw participatory forms of information production and circulation become not only possible, but powerful. A few places looked to develop an online culture of accountability (think Wikipedia), but in many cases, technologies developed with a focus on communication and social features (and delivering ads effectively). The quality of information being communicated wasn’t a priority. It is now. As mainstream social media sites take notice of information quality as an important feature of the online environment, there will be a move towards designing for what I call ‘assessability’ – interfaces that help people appropriate assessments of information quality.”

Tiffany Shlain, filmmaker and founder, The Webby Awards, wrote, “I am concerned that as artificial intelligences advance, distinguishing between what is written by a human and what is generated by a bot will become even more difficult.”

Joseph Turow, professor of communication, University of Pennsylvania, commented, “The issues of ‘fake’ and ‘weaponized’ news are too complex to be dealt with through automated, quantitative or algorithmic means. These activities have always existed under one label or another, and their rapid distribution by activist groups, companies and governments as a result of new technologies will continue. One reason is that the high ambiguity of these terms makes legislating against them difficult without infringing on speech and the press. Another reason is that the people sending out such materials will be at least as creative as those trying to stop them.”

Clay Shirky, vice provost for educational technology at New York University, replied, “‘News’ is not a stable category – it is a social bargain. There’s no technical solution for designing a system that prevents people from asserting that Obama is a Muslim but allows them to assert that Jesus loves you.”

Jack Park, CEO, TopicQuests Foundation, predicted, “There will be new forms of crowdsourcing – a radical kind of curation – participation in which will improve critical-thinking skills and will mitigate the effects of misinformation.”

Johanna Drucker, professor of information studies, University of California-Los Angeles, commented, “The constructedness of discourse removes news from the frameworks in which verification can occur. Responsible journalism will continue on the basis of ethical accountability but nothing will prevent other modes of discourse from proliferating. No controls can effectively legislate for accuracy or verity. It is a structural impossibility to suture language and the lived.”

Hazel Henderson, futurist and CEO of Ethical Markets Media Certified B. Corporation, said, “Global ethical standards and best practices are being developed in the many domains affected. New verification technologies, including blockchain and smart contracts, will help.”

Kelly Garrett, associate professor in the School of Communication at Ohio State University, said, “Although technology has altered how people communicate, it is not the primary source of distrust in authority, expertise, the media, etc. There are no simple technical solutions to the erosion of trust in those who produce and disseminate knowledge.”

Susan Price, lead experience strategist at Firecat Studio, observed, “There will always be a demand for trusted information, and human creativity will continue to be applied to create solutions to meet that demand.”

Tom Wolzien, chairman of The Video Center and Wolzien LLC, said, “The market will not clean up the bad material, but will shift focus and economic rewards toward the reliable. Information consumers, fed up with false narratives, will increasingly shift toward more trusted sources, resulting in revenue flowing toward those more trusted sources and away from the junk. This does not mean that all people will subscribe to either scientific or journalistic method (or both), but they will gravitate toward material from the sources and institutions they find trustworthy, and those institutions will, themselves, demand methods of verification beyond those they use today.”

Thomas Frey, executive director and senior futurist at the DaVinci Institute, replied, “The credibility of the journalism industry is at stake and the livelihood of many people is hanging in the balance of finding the tools, systems and techniques for validating the credibility of news.”

Daniel Berleant, author of the book “The Human Race to the Future,” predicted, “Digital and psychological technologies for the spreading of misinformation will continue to improve, and there will always be actors motivated to use it. Ways to prevent it will develop as well but will be playing catch-up rather than taking the lead.”

Stephan Adelson, an entrepreneur and business leader, said, “There are categorical types of information. The accuracy of some types of information will improve while the accuracy of other types will not. For example, information regarding science and technology will be reliable while political information will remain deceptive.”

Jacqueline Morris, a respondent who did not share additional personal details, replied, “I doubt there will be systems that will halt the proliferation of fake news. Generally, if people find value in something, they will find ways around any system designed to restrict access to what they want. The only way is to reduce the value of fake news by ensuring that people do not fall for it, basically, by educating the population.”

Mike O’Connor, a self-employed entrepreneur, wrote, “It will improve because it has to – many incentives exist to get it done. But the internet is just like real life; bad actors will find ways to fool people. Healthy skepticism will be part of the mix.”

Willie Currie, a longtime expert in global communications diffusion, wrote, “The information environment will improve because the internet is a learning environment, and the experience of fake news will lead to attempts to do something about it over the next 10 years. The apparent success of fake news on platforms like Facebook will have to be dealt with on a regulatory basis, as it is clear that technically-minded people will only look for technical fixes and may have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse that the scale of posts on social media platforms makes human intervention impossible will not be a defense. Regulatory options may include unbundling social networks like Facebook into smaller entities. Legal options include reversing the notion that providers of content services over the internet are mere conduits without responsibility for the content. These regulatory and legal options may not be politically possible to effect within the US, but they are certainly possible in Europe and elsewhere, especially if fake news is shown to have an impact on European elections.”

Katim S. Toray, an international development consultant currently writing a book on fake news, noted, “It’s safe to assume (and I hope) that efforts to curb fake news have to yield positive results. However, we should not expect that fake news is going to go away, although its relevance might be drastically reduced.”

Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of emerging technologies, The Hastings Center, wrote, “While means will be developed to filter out existing forms of misinformation, the ability to undermine core values will continue to be relatively easy while steps to remediate destructive activities will be much harder and more costly. Furthermore, a gap will expand as technological possibilities speed ahead of their ethical-legal oversight. Those willing to exploit this gap for ideological purposes and personal gain will continue to do so.”

Amy Webb, author and founder of the Future Today Institute, wrote, “In an era of social, democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in stories. With our limbic systems continuously engaged, we’re more likely to pay attention to stories that make us want to fight, take flight or fill our social media accounts with links. As a result, there are strong economic forces incentivizing the creation and spread of fake news. In the digital realm, attention is currency. It’s good for democracy to stop the spread of misinformation, but it’s bad for business. Unless significant measures are taken in the present – and unless all the companies in our digital information ecosystem use strategic foresight to map out the future – I don’t see how fake news could possibly be reduced by 2027.”

Ian O’Byrne, assistant professor at the College of Charleston, replied, “Human nature will take over, as the salacious is often sexier than facts. There are multiple information streams, public and private, that spread this information online. We also cannot trust the businesses and industries that develop and facilitate these digital texts and tools to make changes that will significantly improve the situation.”

David A. Bernstein, a marketing research professional, said, “The information environment will not improve due to the fractured nature of our society. As with many things, an individual’s perspective on issues of public interest is often slanted toward their belief and value system.”

Karen Mossberger, professor and director of the School of Public Affairs at Arizona State University, wrote, “The spread of fake news is not merely a problem of bots, but part of a larger problem of whether or not people exercise critical thinking and information-literacy skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address these aspects of online skills in the media, and to address these as fundamental educational competencies in our education system. Online information more generally has an almost limitless diversity of sources, with varied credibility. Technology is driving this issue, but the fix isn’t a technical one alone.”

Michael Wollowski, associate professor at the Rose-Hulman Institute of Technology, commented, “We are only recently becoming aware of this problem and as such have some catching up to do.”

Axel Bender, a group leader for Defence Science and Technology (DST) Australia, said, “The veracity of information is unlikely to improve as, 1) there will be an increase in the number and heterogeneity of (mis)information sources; and 2) artificially intelligent misinformation detectors will not be smart enough to recognise semantically sophisticated misinformation.”

Monica Murero, a professor and researcher based in Europe, wrote, “The information environment will not improve easily, in part because of the technical nature of digitized information and the tendency for anyone to re-elaborate and share information in a prosumeristic fashion. For example, fake news (or unreliable information) is easy to produce thanks to the technical nature of digital information (duplicable, easy to modify, free of cost, durable over time, etc.), and programs (software) and tools (pre-designed formats for elaborating images and content) are widely available to anyone within easy reach (a few words on any search engine). In the next 10 years I foresee disparities among countries in terms of improvements and deteriorations of the information environment (depending on the country and its regulation, i.e., China, Europe, North Korea, US, etc.).”

Ned Rossiter, professor of communication, Western Sydney University, replied, “Regardless of advances in verification systems, information environments are no longer enmeshed in the era of broadcast media and national publics or ‘imagined communities’ on national scales. The increasing social, cultural and political fragmentation will be a key factor in the ongoing contestation of legitimacy. Informational verification merely amplifies already existing conditions.”

Andrew Feldstein, an assistant provost, noted, “New technologies create new opportunities. It isn’t necessarily possible to know how those opportunities can be exploited. Once identified, steps can be taken to minimize abuse.”

Martin Shelton, a security researcher with a major technology company, said, “Just as it’s now straightforward to alter an image, it’s already becoming much easier to manipulate and alter documents, audio and video, and social media users help these fires spread much faster than we can put them out.”

Jaime McCauley, assistant professor of sociology at Coastal Carolina University, said, “I see this as a sort of dialectic – we can institute online protections but folks with a motive (money, power, etc.) will work their way around them.”

Sandro Hawke, technical staff, World Wide Web Consortium, predicted, “Things are going to get worse before they get better, but humans have the basic tools to solve this problem, so chances are good that we will. The biggest risk, as with many things, is that narrow self-interest stops people from effectively collaborating. But – unlike with climate change – this trust problem can be addressed unilaterally by each camp, so I’m optimistic.”

Ryan Sweeney, director of analytics, Ignite Social Media, wrote, “Our relationship with each other needs to change before our relationship with information can be repaired. The more we try to fight it, the worse it seems to get. I’m traditionally an optimist, but I’ve been finding that hard as of late.”

Jesse Drew, professor of cinema and digital media, University of California-Davis, commented, “As the problem becomes more visible, efforts will be made to ameliorate it.”

Isto Huvila, professor of information studies, Uppsala University, replied, “In the short term, it seems unlikely that the information environment will improve. New mechanisms to communicate and disseminate trustworthy information and trust will undoubtedly emerge, but in the global context, it will take time before trust, trustworthiness and how they are communicated will be established.”

Paul N. Edwards, Perry Fellow in International Security, Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

Frank Odasz, president, Lone Eagle Consulting, observed, “Having watched online scams of all kinds evolve to be increasingly insidious, I expect this trend will continue, and our best cybersecurity will forever be catching up with threats, not eradicating them. The battle between good and evil is accelerated digitally.”

O’Brien Uzoechi, a business development professional based in Africa, replied, “I strongly believe that the information environment will improve, especially in terms of guarding against damaging, false and provocative information spread on the net. This will happen because of strong and effective application development for safe internet usage, which has become inevitable and is good for business operations. There has to be trust and reliability in our virtual relationships in all aspects; otherwise the internet as a vehicle for socio-political and cultural development has failed.”

Philippa Smith, research manager and senior lecturer in new media, Auckland University of Technology, noted, “Efforts to keep pace with technology and somehow counteract the spread of misinformation or fake news may be more difficult than we imagine. I have concerns that the horse has bolted when it comes to trying to improve the information environment.”

Julia Koller, a learning solutions lead developer, replied, “Information is only as reliable as the people who are receiving it. If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”

Mark Glaser, publisher and founder, MediaShift.org, observed, “The current information ecosystem will get worse over the next few years because this is a very difficult problem to solve. However, I am confident that if news organizations and social platforms and other technologists work hard at this problem, then solutions will eventually take hold.”

Sandra Garcia-Rivadulla, a librarian based in Latin America, replied, “People are already noticing and doing something about this trend. In addition to addressing it with technology it will be more important to educate people to be able to curate the information they get more effectively.”

Hanane Boujemi, a senior expert in technology policy based in Europe, commented, “Verifying information will become more efficient and fast, but misinformation will not completely disappear. The fake news phenomenon is very similar to email spam; however, there is a fundamental difference: fake news can actually affect and shape public opinion about critical issues.”

Avery Holton, professor at the University of Utah, wrote, “The spread of misinformation is a problem that has been quickly identified and is already being worked on by a number of significant actors. Most notably, the public is no longer willing to accept misinformation or the destruction that it causes. Having a proactive stance from the public makes for a quicker, more lasting solution.”

Carol Chetkovich, professor emerita of public policy, Mills College, commented, “My negative assessment of the information environment has to do primarily with my sense that consumers of media (the public at large) are not sufficiently motivated and well-enough educated to think critically about what they read. There will always be some garbage put out by certain sources, so – even though it’s important that garbage be countered by good journalism – without an educated public, the task of countering fake news will be impossible.”

Daniel Menasce, professor of computer science, George Mason University, replied, “The future of information ecosystems and reliable facts depends not only on the internet, but also on politicians’ speeches covered by the media and on talk radio shows. All of them can easily spread unreliable facts, which, unfortunately, are believed by millions of poorly educated people.”

Matt Stempeck, a director of civic technology, noted, “The purveyors of disinformation will outpace fact-checking groups in both technology and compelling content unless social media platforms are able to stem the tide.”

Dave Burstein, editor of FastNet.news, said, “Speaking of reports on policy and technology, the most important, thoroughly misleading information usually comes from the government and especially lobbyists and their shills. All governments lie, I.F. Stone taught us, and I can confirm that’s been true of both Obama’s people and the Republicans I have reported on this century. Corporate advocates with massive budgets – Verizon and AT&T in the hundreds of billions – bamboozle reporters and governments into false claims. The totally outnumbered public-interest advocates sometimes go over the line as well.”

David Manz, a cybersecurity scientist, replied, “Technology exists and will be created to attribute statements to their source in an easy-to-understand manner. However, this will still require the public to want to know the quality and source of their information.”

Hjalmar Gislason, vice president of data for Qlik, noted, “Recent developments have prompted some of the smartest minds in business and technology to take on this issue.”

Deborah Coe, research director for a major religious organization in the US, commented, “I have been observing (as have others) a global trend of increasing religious fundamentalism that correlates with a decreasing trust in, and value of, education, research, critical thinking and social justice/welfare. This sadly correlates with a long-term decline in mainstream religions that value these things. My fear is that without technological advances to stem the flow of misinformation, we will continue to move toward another round of the Dark Ages. I believe in the next generation of problem-solvers to use technology to help us with this problem.”

C.W. Anderson, professor at the University of Leeds, wrote, “There has been so little attention paid to the spread of propaganda online that any attempts at helping fix the problem can only improve the situation. However, I don’t know if this relative improvement will really cure what ails the American political system. It’s basically a new arms race, and the current administration is not equipped to address the problem.”

Emmanuel Edet, head of legal services, National Information Technology Development Agency of Nigeria, observed, “The information environment will improve but at a cost to privacy.”

Joshua Hatch, president of the Online News Association, noted, “I’m slightly optimistic because there are more people who care about doing the right thing than there are people who are trying to ruin the system. Things will improve because people – individually and collectively – will make it so.”

Troy Swanson, a teaching and learning librarian, replied, “We are at the beginning in another evolution of information technology. We are learning how these new tools will impact and mold existing socioeconomic and political systems. If we look at the impact of writing, the printing press, journalism, etc., we can see how these tools changed society and how existing systems adapted to them. We are learning to adapt.”

Denis Clements, chief operating officer of PlanetRisk Inc., replied, “Information disseminated via the internet will follow at least two paths, trusted and untrusted. Trusted data disseminators will be defined by the integrity they bring to the information they disseminate. It will be up to the consumer to determine whom they trust.”

Andreas Vlachos, lecturer in artificial intelligence at the University of Sheffield, commented, “I believe we will educate the public to identify misinformation better.”

Zbigniew Lukasiak, a business leader based in Europe, wrote, “Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.”

Michel Grossetti, research director, CNRS (French National Center for Scientific Research), commented, “It is the old story of the bullet and the cuirass. Improvement on one side, improvement on the other.”

Maja Vujovic, senior copywriter for the Comtrade Group, noted, “The information environment will be increasingly perceived as a public good, making its reliability a universal need. Technological advancements and civil-awareness efforts will yield varied ways to continuously purge misinformation from it, to keep it reasonably reliable.”

Constance Kampf, a researcher in computer science and mathematics, said, “The answer depends on socio-technical design – these trends of misinformation versus verifiable information were already present before the internet, and they are currently being amplified. The state and trends in education and place of critical thinking in curricula across the world will be the place to look to see whether or not the information environment will improve – cyberliteracy relies on basic information literacy, social literacy and technological literacy. For the environment to improve, we need substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”

Bill Adair, Knight Professor of Journalism and Public Policy at Duke University, commented, “The digital age and the internet have evolved in waves. The first wave was like, ‘This is so cool! We’re all connected!’ and the first things that developed were generally positive. The second wave was when scoundrels and bad actors discovered they could use the internet. Responsible people and groups have seen what happened and will respond. I’m hopeful.”

Stuart A. Umpleby, professor emeritus, George Washington University, wrote, “The threats to the information environment are new. We are working on solutions. I assume we shall find some.”

Antoinette Pole, associate professor, Montclair State University, noted, “I’ve read that Facebook is already working on a solution. Just like the evolution of the mainstream media at its inception, and blogs later on, a solution will emerge.”

Don Kettl, professor of public policy at the University of Maryland, said, “Big data and data analytics will increase the quantity, quality and persuasiveness of data.”

Scott Fahlman, professor emeritus of AI and language technologies, Carnegie Mellon University, noted, “More people and organizations now see the persuasive power of deliberate online lies, so there will be more attempts [at misinformation]. But, for people who are seriously trying to figure out what to believe, there will be better online tools to see which things are widely regarded as true and which have been debunked.”

Fredric Litto, professor emeritus, University of São Paulo, Brazil, wrote, “The incredibly complex nature of contemporary information technology will inevitably make for a continuing battle to reduce (note: I dare not say eliminate) false and undesirable ‘news’ and other information permeating electronic media. Without a foolproof method of truly eliminating the possibility of anonymity – and I cannot see this really happening by 2027 – there will be no end to the malicious use of most, if not all, modes of communication.”

Collette Sosnowy, a respondent who shared no additional personal details, wrote, “While I would like to think that the internet can be policed (in a good way), the sources of information and the speed with which they are spread are so numerous I don’t see how they could effectively be curtailed.”

Filippo Menczer, professor of informatics and computing, Indiana University, noted, “Technical solutions can be developed to incorporate journalistic ethics into social media algorithms, in a way similar to email spam filters.”
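
As a concrete illustration of Menczer’s spam-filter analogy, a misinformation filter could reuse the same supervised text-classification machinery long used for email spam. Below is a minimal sketch assuming scikit-learn is available; the training headlines and labels are invented for illustration, and a production system would need large labeled corpora and far richer signals than word counts.

```python
# Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set, purely for illustration.
headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "Local council approves budget after public hearing",
    "SHOCKING cure THEY don't want you to know about",
    "You won't BELIEVE what this politician did next",
]
labels = ["reliable", "reliable", "suspect", "suspect"]

# The same bag-of-words + naive Bayes pipeline used in classic spam filtering.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["Miracle pill SHOCKS doctors"]))  # likely ['suspect']
```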

Garland McCoy, president, Technology Education Institute, predicted, “As most of us know, there is the public internet, which operates as a ‘best effort’ platform, and then there are private internets that command a premium because they offer much more reliable service. So it will be with the ‘news’ and information/content on the internet. Those who have the resources and want fact checking and vetting will pay for news services, which exist today, that charge a subscription and provide, for the most part, vetted/authenticated ‘news.’ Those who do not have the resources, or who don’t see the ‘market value,’ will take their chances exploring the world of uncensored, unfiltered and uncontrolled human mental exertion.”

Meamya Christie, user-experience designer with Style Maven Linx, replied, “There will be a division in how information is consumed. It will be like a fork in the road. People will have a choice to go through one portal or another based on their own set of core values, beliefs and truths.”

John Markoff, retired journalist, formerly technology reporter for the New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

Sam Punnett, research officer, TableRock Media, replied, “The information environment will improve, but what determines this will be a matter of individual choice. Media literacy, information literacy, is a matter of choosing to be educated.”

Anne Mayhew, retired chief academic officer and professor emerita, University of Tennessee, replied, “Journalists, academics and scientists are much more aware of the need to combat misinformation and are learning to use the tools available to do so.”

Greg Lloyd, president and co-founder of Traction Software, wrote, “Although nothing will eliminate spam, lies and propaganda, I believe technical measures and pressure on advertising-driven services will make it easier for people to find and rely more on people and sources they trust.”

Luis Martínez, president of the Internet Society’s Mexico chapter, observed, “Technology evolves faster than society; hence the information environment will evolve as new technology becomes available.”

Shawn Otto, author of “The War on Science,” said, “Journalists are beginning to become aware that their role is to hold the powerful accountable, rather than to present ‘both sides’ evenhandedly regardless of whether one is lying. The public is reawakening to the necessity of this in a democracy and is beginning to demand more of journalists, who have gone down a postmodernist intellectual sinkhole, falsely believing that there is no such thing as objectivity, when science is in the business of creating objective facts; i.e., information that holds true regardless of one’s individual biases. Journalists used to embrace this notion and are starting to again.”

Danny Rogers, founder and CEO of Terbium Labs, replied, “Things always improve. Not monotonically, and not without effort, but fundamentally, I still believe that the efforts to improve the information environment will ultimately outweigh efforts to devolve it.”

Dane Smith, president of the public policy research and equity advocacy group Growth & Justice, noted, “I’m an optimist. Truth will find a way, and prevail.”

Mike Gaudreau, a retired IT and telecommunications executive, commented, “I cannot see how the internet can be controlled. Look at the trouble Wikipedia has to keep their site accurate. The internet is just too vast and people will still fall for the fake stuff. You would need more than Snopes to do the job.”

Walter Bender, a senior research scientist at MIT, wrote, “I don’t think the problem is technological. It is social, and it is not much different from the American Aurora of 1800 in Philadelphia. People want to believe what reinforces their current positions, and there will be ‘publishers’ willing to accommodate them.”

Michele Walfred, a communications specialist at the University of Delaware, said, “Human nature is reactive. People react with ‘OMG’ and want to be the first to share.”

Louisa Heinrich, founder of Superhuman Ltd, commented, “The information environment has fundamentally changed in the past 10 years and there is no reason not to expect an equally radical set of changes in the next decade. The need to tell our stories to one another is a deeply rooted part of human nature, and we will continue to seek out better ways of doing so. This drive, combined with the ongoing upward trend of accessibility of technology, will lead more people to engage with the digital information environment, and new trust frameworks will emerge as old ones collapse.”

Bruce Edmonds, a respondent who shared no additional identifying details, noted, “Lack of trust and misinformation are social problems that will not be solved with technical or central fixes. Rather, political and new normative standards will need to be developed in society.”

Shahab Khan, CEO for Planwel, Karachi, Pakistan, replied, “Since the internet is a totally unregulated territory, it’s just not possible to improve things. The government may have a role to play here.”

Richard Jones, a self-employed business owner based in Europe, said, “Ulterior motives exist to mislead and brainwash the populace.”

Iain MacLaren, director of the Centre for Excellence in Learning & Teaching, National University of Ireland-Galway, commented, “The fact that more people are now fully aware of the existence of fake news, or propaganda, as it used to be known, means that there is increasing distrust of unverified/unrecognised providers of news and information. Ironically, the US election, Brexit and the Scottish Referendum exposed the consequences of misinformation but also highlighted that ‘fake news’ is not confined to rogue ‘bad actors’ but also major media companies. I would like to hope, therefore, that a more sophisticated, critical awareness is growing across society and I certainly hear much to that effect amongst the young people/students I work with. This also shows the importance of education.”

Romella Janene El Kharzazi, a content producer, entrepreneur and user activist, said, “It is a known problem now, so people are already looking for solutions. One obvious solution is required authentication; fake news is spread anonymously and if that is taken away, then half of the battle is fought and won.”

Greg Wood, director of communications planning and operations for the Internet Society, replied, “The information environment will remain problematic – rumors, false information and outright lies will continue to propagate. However, I have hope that endpoints (people) can become more sophisticated consumers and thus apply improved filters. The evolution of email spam and how it has been dealt with provides a rough analogy.”

Stephen Bounds, information and knowledge management consultant, KnowQuestion, noted, “Two forces will drive the ongoing balkanisation of what information communities and subcommunities accept. The first is that people will increasingly rely on trusted proxies to cut down the amount of information received to a manageable level. The second is the huge benefits in finance and influence yielded to those who choose to manipulate what information is provided or withheld.”

R. Lee Mulberry, managing partner, Northern Star Consulting, said, “The current tension between the media and the rest of us is quite healthy. Typically, when a light is shone on an issue, it gets better. The media have become so driven by revenue that they have allowed their integrity to slip. Given that many or most of them are good people, they will respond with strong efforts to regain their integrity and the faith of the public.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “Technology moves fast and humans adapt more slowly, but we have a proven capability to solve problems we create with technology.”

Ed Terpening, an industry analyst with the Altimeter Group, replied, “Disinformation will accelerate as institutions we’ve thought of as unbiased widen polarization by hiding or interpreting facts to fulfill an agenda, eroding trust.”

Basavaraj Patil, principal architect for AT&T, wrote, “The rapid pace of technological change and the impact of false information on a number of aspects of life are key drivers.”

Bradford W. Hesse, chief of the health communication and informatics research branch of the US National Cancer Institute, said, “Communication specialists have been dealing with the consequences of propaganda, misinformation and misperceived information from before and throughout the Enlightenment. What has changed is the speed with which new anomalies are detected and entered into the public discourse. The same accelerated capacity will help move the needle on social discourse about the problem, while experimenting with new solutions.”

Adam Lella, senior analyst for marketing insights, comScore Inc., replied, “There have been numerous other industry-related issues in the past (e.g., viewability, invalid traffic detection, cross-platform measurement) that were seemingly impossible to solve, and yet major progress was made in the past few years. If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made to help mitigate this issue in the long run. In other words, if there’s a will, there’s a way.”

Clifford Lynch, director of the Coalition for Networked Information, noted, “The severity of the problem has now been recognized fairly widely, and while I expect an ongoing ‘arms race’ in the coming decade, I think that we will make some progress on the problem.”

Rob Lerman, a retired information science professional, commented, “The combination of an established media which has encouraged opinion-based ‘news,’ the relative cheapness of websites, the proliferation of state-based misinformation and the seeming laziness of news consumers seems like an insurmountable obstacle to the improvement of the information environment.”

Cliff Cook, planning information manager for the City of Cambridge, Massachusetts, noted, “Fake news and related problems thrive when they have a receptive audience. The underlying problem is not one of fake news – rumors were no doubt a problem in ancient Rome and the court of King Henry VIII – but the presence of a receptive audience. Until a means is found to heal the fundamental breakdown in trust among Americans, I do not see matters improving, no matter what the technical fix.”

Katie Delahaye Paine, CEO of Paine Publishing, said, “Technology will make it easier to identify and block fake news and to increase transparency. The more people are aware of the problem, the less susceptible they will be.”

Su Sonia Herring, an editor and translator, commented, “It will not improve in the way it is described in the question (through automated systems to weed out ‘fake news’ and misinformation). Misinformation and fake news will exist as long as humans do; they have existed ever since language was invented. Relying on algorithms and automated measures will result in various unwanted consequences. Unless we equip people with media literacy and critical-thinking skills, the spread of misinformation will prevail.”

Megan Knight, associate dean, University of Hertfordshire, said, “Human nature is what it is, and technological solutions can be turned against the things they are designed to fight. Look at computer viruses – anti-virus software and security systems get better and better all the time, and malware just gets worse. You can’t eradicate it.”

Dave Kissoondoyal, CEO, KMP Global, replied, “As the internet has evolved to where it is now, many obstacles have cropped up along the way, especially those created by ill-intentioned people motivated to use technology for mischievous, harmful misinformation, fake news and propaganda. However, the internet community and its stakeholders have made constant attempts to counteract this. Fake news can be denounced by the community.”

Matt Moore, a business leader, observed, “The pressures driving the creation of ‘fake news’ – political partisanship, inter-state rivalry – will only increase, and the technologies needed to create and disseminate fake news will also grow in power and fall in cost. New verification tools will emerge, but these will not be sufficient to counter these other forces.”

Jeremiah Foster, a respondent who shared no additional background details, said, “We are only seeing the early stages of content moving from the non-curated web to the mainstream.”

Adam Powell, project manager, Internet of Things Emergency Response Initiative at the University of Southern California, said, “The democratization of the internet, and of information on the internet, means just that: everyone has and will have access to receiving and creating information, just as at a water cooler. Not only won’t the internet suddenly become ‘responsible,’ it shouldn’t, because that is how totalitarian regimes flourish (see: Firewall, Great, of China).”

David Harries, associate executive director for Foresight Canada, replied, “More and more, history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.”

Michael Marien, senior principal, The Security & Sustainability Guide and former editor of The Future Survey, wrote, “I have no idea one way or another of the answer to this question, nor does anyone else. The key is finding out how to reduce bad information and increase and make use of good information.”

Paul M.A. Baker, senior director of research for the Center for Advanced Communications Policy, observed, “First, the term ‘fake news’ seems to be used in several different ways. It can refer to information that is objectively and factually correct but is criticized as false because it goes against a political or propaganda narrative. Second, the information presented may be intentionally false, or constructed in a misleading way, and in that case calling it ‘fake news’ is actually descriptive. Last, and more complex, are the in-between cases, where the underlying information may or may not be correct but its interpretation is contextually variable. This is a case where observers may disagree based on their own frames of perspective.”

Deborah Stewart, an internet activist/user, wrote, “With the popularity of fake news and the revaluation of truth, technology will advance so as to weed out the bad actors – at least, that is my hope.”

Fernando Ortega, a respondent who shared no additional identifying details, said, “Automatic controls will be established to identify lies or dangerous information, and human analysts will make the decision to erase it, following some ethical criteria.”

Jonathan Ssembajwe, executive director for the Rights of Young Foundation, Uganda, commented, “The information environment will improve because there are different organisations and individuals working to ensure the safe use of the internet. For example, the Rights of Young Foundation, a youth-led organisation in Uganda, works to see that young people use the internet safely and put only appropriate information on the internet.”

Bernie Hogan, senior research fellow, University of Oxford, wrote, “It seems that the increased emphasis on personalisation enables people to more rapidly discredit information that doesn’t meet their pre-existing biases. It’s easier to code for pandering than for consensus.”

Joanna Bryson, associate professor and reader at University of Bath and affiliate with the Center for Information Technology Policy at Princeton University, responded, “It’s certainly not certain. The stakes are very high on both sides, but we are in the information age, and I believe good tools are likely to be found in the next few years.”

Adrian Schofield, an applied research manager based in Africa, commented, “In spite of awareness programmes, the passive majority remains blissfully unaware of the potential (and real) threats posed by malicious operators in the ICT space. As fast as the good guys develop barriers to cybercrime, the bad guys will devise ways to leapfrog the barriers. It’s cheap and it’s borderless.”

Alf Rehn, chair of management and organization studies, Åbo Akademi University, commented, “I firmly believe that the information environment will improve, the question is just by how much. Better algorithms will sort out some of the chaff, but at the same time the weaponization of fake news will develop. As strange as it seems, we may enter a time of less, but ‘better’ [more effective] fake news.”

Shirley Willett, CEO, Shirley Willett Inc., said, “It will be worse in 10 years, with a gradual battle and improvement.”

Riel Miller, an international civil servant who works as team leader in futures literacy for UNESCO, commented, “I do not think that the environment as an external condition will improve, but the capacity of people to assess the nature of the information they are consuming might change.”

Michael Pilos, chief marketing officer, FirePro, replied, “First of all, let’s make clear that one man’s hero is another man’s terrorist. We are all biased! Media have always been used for better or for worse. They will continue to be used in the same manner, but in much more sophisticated and rapidly deployable ways.”

Bill Jones, chairman of Global Village Ltd., predicted, “Some things can be improved; others can’t. More knowledge lets us see that what we believed to be facts aren’t so. Trust can be so easily abused that it’s our collective ability to discern false from true which ultimately is the key, but that is fraught with challenges. No one can do it for us.”

Andrew McStay, professor of digital life at Bangor University, Wales, wrote, “Undoubtedly, fake news and weaponised information will increase in sophistication, but so will attempts to combat it. For example, the scope to analyse at the level of metadata is a promising opportunity. While it is an arms race, I do not foresee a dystopian outcome.”

Marcel Bullinga, futurist with Futurecheck, based in the Netherlands, said, “Misinformation will grow tremendously at first. Protection measures will emerge, but they will take a long time to come into effect.”

Vian Bakir, professor in political communication and journalism, Bangor University, Wales, commented, “It won’t improve because of 1) the evolving nature of technology – emergent media always catches out those who wish to control it, at least in the initial phase of emergence; 2) online social media and search engine business models that favour the spread of misinformation; 3) well-resourced propagandists who exploit this mix.”

Jens Ambsdorf, CEO at The Lighthouse Foundation, based in Germany, replied, “The variability of information will increase. The amount of ‘noise’ and retweeted stuff will increase and without skills and tools it will become more difficult for citizens to sort out reliable from unreliable sources.”

Dan Ryan, professor of arts, technology and the business of design at the University of Southern California, said, “We will approach a post-post-enlightenment moment when the importance of reliable information to social order and general welfare becomes re-appreciated. Both the exploitive charlatans and the radically relativistic ideologues will lose ground to a not-yet-emerged data consensus.”

David J. Krieger, director of the Institute for Communication & Leadership, Lucerne, Switzerland, commented, “The information environment will improve because a data-driven society needs reliable information and it is possible to weed out the false information.”

Ella Taylor-Smith, senior research fellow, School of Computing, Edinburgh Napier University, noted, “As more people become more educated, especially as digital literacy becomes a popular and respected skill, people will favour (and even produce) better-quality information. Well I hope so, anyway. I also hope the attention economy will evolve and mature, especially as we come to understand more about the role of images and videos in social media.”

Tom Worthington, honorary lecturer in the Research School of Computer Science at Australian National University, commented, “Deciding what information to believe is an age-old problem, predating the Internet, TV, radio, newspapers and the town crier.”

John McNutt, professor, School of Public Policy and Administration, University of Delaware, wrote, “There will always be bad information. People create misinformation both intentionally and unintentionally. Technology gives us the opportunity to challenge these issues.”

Eric Keller, a respondent who shared no additional identifying details, wrote, “Internet users will demand factual news, and news providers will find it profitable to screen out obviously false reports.”

William Scarborough, Ph.D. candidate, University of Illinois-Chicago, wrote, “Either mass forms of online communication will be de-legitimized as sources of information or new mediums will emerge that are less susceptible to false news.”

Alexander Furnas, Ph.D. candidate, University of Michigan, replied, “There are strong incentives to manufacture/manipulate the information environment and counter whatever solutions are engineered/proposed.”

Tomslin Samme-Nlar, technical lead, Dimension Data Australia, commented, “I expect the information environment to improve if user-awareness programs and campaigns are incorporated in whatever solutions are designed to combat fake news.”

Ayaovi Olevie Kouami, chief technology officer for the Free and Open Source Software Foundation for Africa, said, “The actual framework of the internet ecosystem could have a positive impact on the information environment by setting up all the requisite institutions, beginning with DNSSEC, IXPs, FoE, CIRT/CERT/CSIRT, etc.”

Ed Tomchin, a retired writer and researcher, said, “After a full serving of Trump et al., this country will be extremely ready for some straightforward honest information.”

Jean Paul Nkurunziza, a consultant based in Africa, commented, “The expected mass adoption of the IPv6 protocol will allow every device to have a public IP address and then allow the tracking of the origin of any online publication.”
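
As context for this claim (an editorial sketch; the addresses are arbitrary examples and real attribution is far harder in practice): under IPv4 network address translation many devices typically share one public address, which blurs attribution, whereas a device with its own global IPv6 address is uniquely identifiable at the network layer. Python’s standard ipaddress module can distinguish the two cases:

```python
# Editorial illustration of the IPv4-NAT vs. public-IPv6 contrast.
import ipaddress

nat_origin = ipaddress.ip_address("192.168.1.10")         # private IPv4
v6_origin = ipaddress.ip_address("2001:4860:4860::8888")  # public IPv6

print(nat_origin.is_private)  # True  -> many devices may hide behind one NAT
print(v6_origin.is_global)    # True  -> address can be unique to one device
```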

Mark Patenaude, vice president for innovation, cloud and self-service technology, ePRINTit Cloud Technology, replied, “New programming tech and knowledge will create a new language that will teach us to recognize malicious, false, misleading information by gathering all news and content sources and providing us with accurate and true information.”

Patricia Aufderheide, professor of communications and founder of the Center for Media and Social Impact at American University, said, “I fear that major interests are not invested enough in reliability to create new business models and political and regulatory standards needed for the shift. I hope that’s not true, and I’ll be part of working for and with organizations providing solutions. I’m sure that there will be islands of trustworthiness and demographic segments of the population that practice information hygiene, that both customs and technologies will develop to assure greater trustworthiness, but overall there are powerful forces against making this the norm. They include corporate investment in surveillance-based business models that create many incentives for unreliability, ‘invisible handshake’ agreements with governments that militate against changing surveillance models, international espionage at a governmental and corporate level in conjunction with mediocre cryptography and poor use of white hat hackers, poor educational standards in major industrial countries such as the US, and fundamental weaknesses in the US political/electoral system that encourage exploitation of unreliability. It would be wonderful to believe otherwise, and I hope that other commentators will be able to convince me otherwise.”

Greg Swanson, media consultant with Itzontarget, noted, “The sorting of reliable versus fake news requires a trusted referee. It seems unlikely that government can play a meaningful role as this referee. We are too polarized. And we have come to see the television news teams as representing divergent points of view, and, depending on your politics, the network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair referee that would be universally trusted.”

Flynn Ross, associate professor of teacher education, University of Southern Maine, said, “The free market will continue to proliferate tabloid-type content – it’s up to society to educate the consumers of information to understand the difference between news and tabloids, as we do with paper media.”

Kevin J. Payne, founder and research scientist, Chronic Cow, commented, “Machine learning, natural language processing and network analytics provide a powerful toolset to apply to the challenge of scoring sources.”
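
One plausible reading of the network-analytics piece of that toolset (a sketch under invented assumptions: the citation graph, the seed source and the damping factor are all illustrative) is to propagate trust through a who-cites-whom graph, personalized-PageRank-style, from a few hand-verified sources:

```python
# Toy trust propagation: each cited source shares its trust equally among
# its citers; a few hand-verified sources seed the scores. A closed loop
# of mutually citing blogs earns no trust from the seed.
cites = {
    "agency_report": [],
    "wire_service": ["agency_report"],
    "local_paper": ["wire_service", "agency_report"],
    "blog_a": ["blog_b"],
    "blog_b": ["blog_a"],  # mutual-citation loop, disconnected from the seed
}
seed = {"agency_report": 1.0}  # hand-verified primary source

def trust_scores(cites, seed, damping=0.85, iters=50):
    citers = {s: 0 for s in cites}
    for s in cites:
        for t in cites[s]:
            citers[t] += 1
    scores = dict.fromkeys(cites, 0.0)
    for _ in range(iters):
        scores = {
            s: (1 - damping) * seed.get(s, 0.0)
               + damping * sum(scores[t] / citers[t] for t in cites[s])
            for s in cites
        }
    return scores

for s, v in sorted(trust_scores(cites, seed).items(), key=lambda kv: -kv[1]):
    print(f"{s:14s} {v:.3f}")  # the blogs converge to 0.0
```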

William Anderson, adjunct professor, School of Information, University of Texas-Austin, replied, “The information environment will improve because enough people want improvements and we are able to use technology to provide improvement.”

Robin James, an associate professor of philosophy at a North American university, wrote, “The original question assumes that things have recently gotten worse. Scholars know that phenomena like patriarchy and white supremacy have created ‘epistemologies of ignorance’ that have been around for hundreds of years. ‘Fake news’ is just a new variation on this.”

K.G. Schneider, dean at a public university library, replied, “In the short run, the information environment will become more toxic, due in large part to poor leadership at our country’s highest level.”

Jeff MacKie-Mason, University librarian and professor of information science, professor of economics, University of California-Berkeley, replied, “One wonder of the internet is that it created a platform on which essentially anyone can publish anything, at essentially zero cost. That will become only more true. As a result, there will be a lot of information pollution. What we must do is better educate information consumers and provide better systems for reputation to help us distinguish the wheat from the chaff.”

Jennifer Hassum, a department leader at a nonprofit organization based in North America, commented, “Each advancement in communications carries with it the fear of lies and manipulation. Untruths will still circulate 10 years from now; however, the public will be more literate and savvy, better able to distinguish between what is real and what is fake or opinion-based.”

Mike DeVito, graduate researcher, Northwestern University, wrote, “The environment is not likely to improve because these are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.”

Eduardo Villanueva-Mansilla, associate professor, department of communications, Pontificia Universidad Católica del Perú, said, “It’s an arms race. As much as the weapons on one side develop, the other side will do the same.”

Andee Baker, a retired professor, said, “People and tools will become more sophisticated in judging information to determine which may likely be false or true. This will happen through education and development of tools.”

Jeff Johnson, professor of computer science, University of San Francisco, replied, “Measures to reduce the proliferation of fake news will be countered by new ways to spread or disguise it.”

Jane Elizabeth, senior manager, American Press Institute, said, “The information environment will improve because the alternative is too costly: misinformation and disinformation will contribute to the crumbling of a democratic system of government.”

Federico Pistono, entrepreneur, angel investor and researcher with Hyperloop TT, commented, “Algorithms will be tailored to optimize more than clicks – as this will be required by advertisers and consumers alike – and deep learning approaches will improve.”
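
A minimal sketch of what optimizing for more than clicks might look like (the items, weights and scoring are invented for illustration): rank feed items on a blend of predicted engagement and a credibility signal, so that a high-credibility story can outrank higher-click-rate clickbait.

```python
# Toy multi-objective ranking: blend click prediction with credibility.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float      # predicted click-through rate
    credibility: float  # e.g., from source scoring, in [0, 1]

def rank(items, w_click=0.4, w_cred=0.6):
    # A larger credibility weight demotes clickbait from dubious sources.
    return sorted(items,
                  key=lambda i: w_click * i.p_click + w_cred * i.credibility,
                  reverse=True)

feed = [
    Item("You won't believe this!", p_click=0.30, credibility=0.10),
    Item("Budget vote passes 7-2", p_click=0.08, credibility=0.95),
]
for item in rank(feed):
    print(item.title)
# The credible story scores 0.4*0.08 + 0.6*0.95 = 0.602 and outranks the
# clickbait at 0.4*0.30 + 0.6*0.10 = 0.18.
```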

Richard Rothenberg, professor and associate dean, School of Public Health, Georgia State University, noted, “Like everything else in the universe, people’s attitude towards information (facts) has a distribution. The properties of that distribution are distorted by internet use, since outrageous things grab attention. It is my guess that the dark end of the internet is relatively small but it has an outsized presence. Most people are in the middle and don’t care a great deal, but – and here’s the wishful thinking – those at the light end far outnumber their antipodes. If nothing else, folks have demonstrated enormous resourcefulness, particularly in crowd endeavors, and I believe methods for assuring veracity will be developed. By the way, such methods may well have positive unforeseen consequences: if we can vote on the internet, young people will do so and the political climate of the country will change dramatically.”

Virginia Paque, lecturer and researcher of internet governance, DiploFoundation, wrote, “It’s a matter of survival. If we cannot devise a way to evaluate the veracity of information, the internet will lose all credibility as a communications tool, and become a vehicle for entertainment only.”

Tatiana Tosi, netnographer at Plugged Research, commented, “The information environment will improve due to new artificial-intelligence bots that will verify the information. This should balance privacy and human rights in the automated environment.”

Pamela Rutledge, director of the Media Psychology Research Center, noted, “Fake news and information manipulation are no longer ‘other people’s problems.’ This new awareness of the importance of media will shift resources, education and behaviors across society.”

Richard Lachmann, professor of sociology, State University of New York-Albany, replied, “Even though systems to flag unreliable information can and will be developed, internet users have to be willing to take advantage of those warnings. Too many Americans will live in political and social subcultures that will champion false information and encourage use of sites that present such false information.”

Diana Ascher, information scholar at the University of California-Los Angeles, observed, “Fake news, misinformation, disinformation and propaganda are not new; what’s new is the algorithmic propagation of such information. In my research, I call this the new yellow journalism.”

Meg Mott, professor of politics at Marlboro College, commented, “While there are many factors that suggest the information environment will not improve, particularly the neoliberal strategy of putting economic values over political virtues, history suggests that even in times of total collapse, there have been pockets where humanity shows its saner side.”

Dariusz Jemielniak, professor of organization studies in the department of Management In Networked and Digital Societies (MiNDS), Kozminski University, said, “There are a number of efforts aimed at eliminating fake news, and we as a society are going to make them work.”

Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia, wrote, “There are no technological solutions that correct for the dominance of Facebook and Google in our lives. These incumbents are locked into monopoly power over our information ecosystem and as they drain advertising money from all other low-cost commercial media they impoverish the public sphere.”

Alexis Rachel, user researcher and consultant, said, “The logical progression of things at this point (unless something radical occurs) is that there will be increasingly more ‘sources’ of information that are unverified and unvetted – a gift from the internet and the ubiquitous publishing platform it is. All it takes is something outrageous and plausible enough to go viral, and once out there it becomes exceedingly difficult to extinguish – fact or fiction.”

Susan Landau, a North American scientist/educator, wrote, “Russian attacks on information – the so-called ‘fake news’ and disinformation efforts – were a big bang for the buck. There are numerous players in this situation, and at least at the time the Russians mounted this attack in 2016, the other players did not have an understanding of, or clear interest in, preventing it. That situation has changed, at least with respect to some of the internet companies such as Facebook and Google, which are now undertaking attempts to limit the dissemination of fake news. Whether this dissemination will expand or not lies with many players, many in the private sector. How will the press handle ‘fake news’? How will the internet companies do so? And how will politicians, at least politicians post-Trump? The rise of ‘fake news’ is a serious threat to democracy. Post-election [US 2016], some in the press have been pursuing news with the same care and incisiveness that we saw in the Watergate era, but others are not. We have a serious threat here, but it is not clear that interests are aligned in responding to it. And it is not cheap to do so: securing sites against hacking is very difficult when the threat comes from a powerful nation-state. Is there a way to create trusted, unhackable verification systems? This depends on the use case; it is not a 0-1 answer, but an answer in shades of grey. First of all, the weakest aspect of any system is the user; if the user does not understand the importance of security, it is simply too easy to cut corners. If the user believes that the information they have access to is important to secure and is given the appropriate tools to do so, then with high probability the system can be made trusted and somewhat unhackable. We do this in stock trading. The system isn’t perfect; it works ‘well enough.’ If society cannot adequately protect itself against the coopting of public information by bad actors, then democracy itself is at serious risk. We have had this problem for quite some time. Climate change denial is one example, but so are many of the arguments used against abortion (e.g., the argument that having an abortion increases risk for depression). What has changed is the scope and scale of these efforts, partially through domestic funding, partially through foreign actors and partially through the ability of digital technologies to change the spread of ‘false news.’ What is needed to protect society against the coopting of public information is not only protecting the sources of the information, but also creating greater public capability to discern nonsense from sense. Here, the turning of the US public against science – the lack of understanding that science proceeds by testing hypotheses and rejecting those that fail – does not bode well for our future. I do not see a role for government in preventing the spread of ‘fake news’ – that comes too close to government control of speech – but I do see one for government in preventing tampering with news and research organizations, disrupting flows of information, etc.”

Sharon Roberts, a Ph.D. candidate, wrote, “I don’t think that technology changes or ‘trusted methods’ will affect the information environment; I believe that social changes will be the ones that affect our perception of the information environment. Just as 1-888 psychic call lines still advertise on television and ‘Nigerian princes’ promising money still send me email, it is a social understanding that these are scams that has curtailed their proliferation – not any actual TV or email technology ‘trusted methods.’”

Peter and Trudy Johnson-Lenz, founders of the online learning community Awakening Technology, combined on this response: “If we rely on technological solutions to verify the trust and reliability of facts, then the number of states of the control mechanisms must be greater than or equal to the number of states being controlled. With bots and trolls and all sorts of disinformation, that’s virtually impossible. There are probably some tech solutions, but they won’t solve the entire problem. And walling off some sections of the information ecosystem as ‘trusted’ or ‘verified fact-filled’ defeats the purpose of open communication. Any machine-based system, even crowd/swarm, might control some of the trash, but only at the price of squelching the mutual trust on which democracy and an open society depend. See W. Ross Ashby’s Law of Requisite Variety: ‘If a system is to be stable, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled.’ Ashby states the law as ‘variety can destroy variety.’ If you study microtargeting during the 2016 election, it’s clear that Facebook in particular was used to spread disinformation and propaganda and discourage voting in a very effective manner. This kind of activity is hard to discern and uncover in real time, it adds greatly to the polluted ecosystem, and it is virtually impossible to control as well. Ultimately, people are going to have to make critical-thinking discernments themselves. Unfortunately, there are people who have no interest in doing that, and in fact discourage anyone else from doing that. The echo chamber is noisy and chaotic and full of lies. The only hope is some combination of technological advances to trust and verify, people being willing to take the time to listen, learn and think critically, and a rebuilding of trust. In our accelerating world, that’s a very big ask! For an eye-opening perspective on acceleration, see Peter Russell’s recent essay, ‘Blind Spot: The Unforeseen End of Accelerating Change.’ Also see Howard Rheingold’s chapter on ‘Crap Detection 101,’ pages 77-109 in his book ‘Net Smart: How to Thrive Online,’ MIT Press: https://mitpress.mit.edu/books/net-smart… and Carl T. Bergstrom and Jevin West’s University of Washington course titled ‘Calling Bullshit in the Age of Big Data,’ http://callingbullshit.org/. Here’s an excerpt of the course description: ‘The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit – and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit. We’re sick of it. It’s time to do something.’”
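
Restated as an inequality (an editorial illustration of the Ashby passage quoted above, with V(·) counting a system’s distinguishable states):

```latex
% Ashby's Law of Requisite Variety, as quoted in the response above:
% a control mechanism R can stabilize a system S only if R's variety
% at least matches the variety of S.
V(R) \;\ge\; V(S)
% The respondents' point: bots, trolls and coordinated disinformation
% make V(S) effectively unbounded, so no fixed technical control keeps up.
```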
