Finding solutions: Can society conquer the growing scourge of online misinformation?

A new survey by Elon University and the Pew Research Center asks tech experts about the prospects of blocking the spread of bogus information.  

The proliferation of misinformation and false narratives online leaves experts uniformly concerned but deeply divided about the future, according to a new study released by Pew Research Center and Elon University’s Imagining the Internet Center.

More than 1,100 internet and technology experts responded in the summer of 2017 to a series of questions tied to the following theme: Will trusted methods emerge over the next 10 years to block false narratives and allow the most accurate information to prevail in the overall information ecosystem?

Those responding to the survey were almost evenly split: 51 percent said the information environment will not improve, while 49 percent said they expect things to get better. The experts were asked to elaborate on their answers, yielding a wide range of opinions about the threat of misinformation, the prospects for solutions and the most promising strategies to pursue.

“Both camps of experts share the view that the current environment allows ‘fake news’ and weaponized narratives to flourish, but there is nothing resembling consensus about whether this problem can be successfully addressed in the coming decade,” said Lee Rainie, Pew Research Center’s director of internet and technology research. “They disagree about which side comes out on top in the escalating arms race: those who exploit human vulnerabilities with internet-speed manipulation tactics or those who create accurate information and reliable delivery systems for it.”

Report co-author Janna Anderson, director of Elon University’s Imagining the Internet Center, noted: “Many of these experts said that while the digital age has created countless information sources and magnified their potential influence globally, it has simultaneously reduced the influence of traditional news organizations that deliver objective, verified information. They said the information environment can’t be improved without more well-staffed, financially stable, independent news organizations whose signals are able to rise above the noise of misinformation to create a base of ‘common knowledge’ for the public. They also urged far more literacy efforts to help people differentiate fact from falsehood.”

An analysis of nearly 500 pages of written responses by these experts revealed two pessimistic and two optimistic themes:

  • The information environment will not improve, and human nature is to blame.
    • Respondents supporting this theme say humans tend to be selfish, tribal, gullible convenience seekers. They worry that today’s powerful information actors have an incentive to preserve the status quo. And they think the future will be organized around social divisions, with a segment of the population finding high-quality information while “chaos will reign” for those who cannot afford or discern reliable information, or who show no interest in seeking it.
  • The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale.
    • These responses often described the bad actors as having a leg up on those seeking to combat misinformation. They expect that weaponized narratives and false information will be magnified by social media, online filter bubbles, bots and artificial intelligence.
  • The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content.
    • Those who think there will be improvements predict that algorithmic filters, browsers, apps and plug-ins will diminish the potency and availability of misinformation. They think movements toward fact-checking and “trust ratings” will help, too. Some say regulation will also play a part in curbing misinformation.
  • The information environment will improve because people will adjust and make things better.
    • Some of these experts argue that misinformation is nothing new and that society has always found ways to lessen its impact. They say the information environment will improve as people become more skilled at sorting fact from fiction. Some expect crowdsourcing to play a prominent role, verifying facts and blocking those who propagate lies and propaganda. Some also voiced support for distributed ledgers (blockchain).

A fifth theme cut across both camps: experts who said technology alone can’t overcome the influence of misinformation urged two strategies to combat it:

  • The public must fund and support the production of objective, accurate information.
  • Efforts must be made to elevate information literacy as a primary goal of education.

Below is a sample of responses from tech experts in this survey:

Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the Brookings Institution: “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation”: “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. When the president of the U.S. frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”

Jerry Michalski, futurist and founder of REX: “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly; 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem; and 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”

Nigel Cameron, a technology and futures editor and president of the Center for Policy on Emerging Technologies: “Human nature is not EVER going to change (though it may, of course, be manipulated). And the political environment is bad.”

Jamais Cascio, distinguished fellow at the Institute for the Future: “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing.”

Richard Lachmann, professor of sociology at the State University of New York at Albany: “Even though systems [that] flag unreliable information can and will be developed, internet users have to be willing to take advantage of those warnings. Too many Americans will live in political and social subcultures that will champion false information and encourage use of sites that present such false information.”

Scott Spangler, principal data scientist at IBM Watson Health, said technologies now exist to make fake information almost impossible to discern and flag, filter or block: “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”

danah boyd, principal researcher at Microsoft Research and founder of Data & Society: “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”

Charlie Firestone, executive director of the Aspen Institute Communications and Society Program: “In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate.”

Jonathan Grudin, principal design researcher at Microsoft: “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression; contradictory claims have always existed in print, but have been manageable and often healthy.”

John Markoff, retired journalist and former technology reporter at The New York Times: “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

Alf Rehn, chair of management and organization studies at Åbo Akademi University: “Better algorithms will sort out some of the chaff [and may improve the overall information environment], but at the same time the weaponization of fake news will develop. As strange as it seems, we may enter a time of less, but ‘better’ [more effective] fake news.”

Justin Reich, assistant professor of comparative media studies at the Massachusetts Institute of Technology: “Strategies to label fake news will require algorithmic or crowd-sourced approaches. Purveyors of fake news are quite savvy at reverse engineering and gaming algorithms, and equally adept at mobilizing crowds to apply ‘fake’ labels to their positions and ‘trusted’ labels to their opponents.”

James Schlaffer, an assistant professor of economics at Westfield State University: “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”

Paul N. Edwards, Perry Fellow in International Security at Stanford University: “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

Larry Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University: “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”

Irene Wu, adjunct professor of communications, culture and technology at Georgetown University: “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.”

Mike Roberts, pioneer leader of ICANN and Internet Hall of Fame member: “The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than they did 50 or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartment uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet.”

Susan Etlinger, industry analyst at Altimeter Research: “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms, and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cyber security experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”

Jim Hendler, professor of computing sciences at the Rensselaer Polytechnic Institute: “The information environment will continue to change but the pressures of politics, advertising and stock-return-based capitalism rewards those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”

Peter Lunenfeld, a professor of design|media arts at UCLA: “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

Bill Woodcock, executive director of the Packet Clearing House: “There’s a fundamental conflict between anonymity and control of public speech, and the countries that don’t value anonymous speech domestically are still free to weaponize it internationally, whereas the countries that do value anonymous speech must make it available to all, [or] else fail to uphold their own principle.”

Amber Case, research fellow at Harvard University’s Berkman Klein Center for Internet & Society, suggested withholding ad revenue until veracity has been established: “In order to reduce the spread of fake news, we must deincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop much of the news postings.”
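
Case’s proposal is essentially an escrow scheme: revenue accrues to a held balance and is released only after an accuracy verdict. Below is a minimal Python sketch of that delayed-payout flow; the class, its fields and the verdict labels are hypothetical illustrations of her idea, not any ad platform’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EscrowedArticle:
    """Hypothetical model of delayed ad-revenue distribution:
    earnings are held until the article's accuracy is adjudicated."""
    publisher: str
    held_revenue: float = 0.0
    verdict: Optional[str] = None  # None = pending; "accurate" or "fake"

    def accrue(self, amount: float) -> None:
        # Ad impressions earn money, but nothing is paid out yet.
        self.held_revenue += amount

    def settle(self, verdict: str) -> float:
        # Once fact-checkers rule, release the funds or forfeit them.
        self.verdict = verdict
        payout = self.held_revenue if verdict == "accurate" else 0.0
        self.held_revenue = 0.0
        return payout

article = EscrowedArticle(publisher="example-site.test")
article.accrue(120.50)         # revenue accrues while verification runs
print(article.settle("fake"))  # 0.0 -- the publisher is never paid
```

The design question the sketch leaves open is who issues the verdict and on what timetable; Case assigns that role to the sites that control or host the content.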

Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security: “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation.”

Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping America and the world. It does not take policy positions.

The Imagining the Internet Center is an initiative of the Elon University School of Communications. Imagining the Internet explores and provides insights into the impact of Internet evolution. Students, faculty, staff and alumni have surveyed thousands of experts and traveled to cover Internet Governance Forums and Internet Hall of Fame events in Egypt, Greece, Kenya, Switzerland, Germany, Brazil, Lithuania and Mexico.

Major themes on the future of the online information environment

Theme 1: The information environment will not improve: The problem is human nature

  • More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives
  • Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar
  • In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil
  • Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem
  • A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop

Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale

  • Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars
  • Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI
  • The most effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online 

Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content

  • Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings” (a minimal sketch of the trust-ratings idea appears after this list)
  • Regulatory remedies could include software liability law, required identities and the unbundling of social networks like Facebook
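
To illustrate the “trust ratings” idea in the first bullet above, here is a minimal Python sketch of a rating-based filter that labels content by source; the sources, scores and threshold are invented for the example and do not reflect any deployed system.

```python
# Minimal sketch of a "trust rating" filter: each source carries a
# score, and items from low-scoring sources get a warning label.
# All scores and the 0.6 threshold are invented for illustration.
TRUST_SCORES = {
    "established-wire.test": 0.9,  # hypothetical high-trust outlet
    "anonymous-blog.test": 0.2,    # hypothetical low-trust source
}

def label_item(source: str, threshold: float = 0.6) -> str:
    score = TRUST_SCORES.get(source, 0.5)  # unknown sources rate neutral
    return "trusted" if score >= threshold else "flagged: low trust rating"

print(label_item("established-wire.test"))  # trusted
print(label_item("anonymous-blog.test"))    # flagged: low trust rating
```

A browser plug-in or platform filter could apply the same lookup before rendering an item, which is the kind of intervention these respondents anticipate.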

Theme 4: The information environment will improve because society will adjust

  • Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material
  • Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda (a sketch of one such scheme follows this list). Some also have hopes for distributed ledgers (blockchain)
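
As a concrete reading of the crowdsourcing bullet, here is a minimal Python sketch of reputation-weighted crowd verification, in which users whose claims are repeatedly debunked lose influence; all names, weights and the majority cutoff are hypothetical assumptions, not a description of any existing platform.

```python
# Minimal sketch of crowdsourced fact verification: votes are weighted
# by voter reputation, and propagators of debunked claims lose weight.
from collections import defaultdict
from typing import DefaultDict, List, Tuple

reputation: DefaultDict[str, float] = defaultdict(lambda: 1.0)

def crowd_verdict(votes: List[Tuple[str, bool]], cutoff: float = 0.5) -> bool:
    """True if the reputation-weighted share of 'claim is true' votes
    exceeds the cutoff."""
    total = sum(reputation[user] for user, _ in votes)
    support = sum(reputation[user] for user, says_true in votes if says_true)
    return support / total > cutoff

def penalize(user: str) -> None:
    # Halve the influence of a user whose claims are debunked,
    # approximating "blocking those who propagate lies."
    reputation[user] *= 0.5

votes = [("alice", True), ("bob", True), ("mallory", False)]
print(crowd_verdict(votes))  # True: the weighted majority backs the claim
```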

Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education.

  • Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press
  • Elevate information literacy: It must become a primary goal at all levels of education