Elon University

The 2017 Survey: The Future of Truth and Misinformation Online, Part 2 of 6

Is there a way to create trusted, unhackable verification systems?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Internet, Science and Technology Project in summer 2017 to share their answers to the following query – they were evenly split, 51-49, on the question:

What is the future of trusted, verified information online? In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas? 

This page holds a full analysis of the answers to the first of five follow-up questions:

Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?

Among the key themes emerging from the 1,116 respondents’ answers were:

• It is probably not possible to create such a system.
• It would be seen as too costly and too work-intensive.
• There is likely to be less profit if such systems are implemented, which is also likely to stifle such solutions.
• Is it possible to have commonly accepted, ‘trusted’ systems? It’s complicated, because ‘what I trust and what you trust may be very different.’
• Can systems parse ‘facts’ from ‘fiction’ or identify accurately and in a widely accepted manner the veracity of information sources?
• There can be no unhackable largescale networked systems.
• It’s worth a try to create verification systems; they may work or at least be helpful.
• ‘Verification’ would reduce anonymity, hinder free speech and harm discourse.
• There is hope for possible fixes.

If you wish to read survey participants’ credited responses with no analysis, please click here.

If you wish to read anonymous survey participants’ responses with no analysis, please click here.

Summary of Key Findings: Follow-Up Question 1

Verification systems are likely to be too complex, too costly and highly unlikely to survive hacks, and who’s to decide what is fact and what is fiction? But they may be somewhat helpful

An overwhelming majority of respondents who answered this question said, “No.” Many say it is worth the effort to try to create verification systems because they may be at least partially helpful. Most say it’s impossible to have commonly accepted, “trusted” systems; some say widespread trust cannot be inspired in any system. Most say there can be no “unhackable” largescale systems. Many question the ability of systems to parse facts from fiction or identify accurately and in a widely accepted manner the veracity of information sources.

John Klensin, longtime leader with the Internet Engineering Task Force and Internet Hall of Fame member, commented, “‘Reliable’ implies a frame of reference or official version, ‘trusted’ is in the mind of the beholder, and ‘unhackable’ implies this is a technical problem, not a social one, but it has always been a social one in every regard other than, maybe, some identification and authentication issues.”

Helen Holder, distinguished technologist for HP, said, “First, nothing is ‘unhackable.’ Second, higher reliability of information can be achieved with human and electronic validation of facts, using methods that traditional investigators and journalists are trained to do. Some of those techniques may be enhanced with machine learning to identify common indicators of false information. Third, gaining trust is much harder and requires a long track record of virtually perfect execution. Any failures will be used to discredit such a system. For example, the modern widespread distrust of the reliability of information from major media outlets, despite being reliable the vast majority of the time, indicates that even low error rates will add to the perception that there are no objective, reliable sources of information. Rapid corrections when new information becomes available will be essential so that no outdated content can be referenced.”
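
Holder’s point about machine learning assisting human fact-checkers can be illustrated with a toy classifier. The sketch below (Python with scikit-learn) is a minimal example under invented assumptions: the handful of labeled headlines and the 0/1 labels are hypothetical, and a real system would need large, carefully curated corpora plus human review of anything it flags.

```python
# Illustrative only: a tiny text classifier that scores headlines for
# indicators of misinformation. The labeled examples are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "City council approves budget after public hearing",
    "SHOCKING cure THEY don't want you to know about",
    "Anonymous insider reveals secret plot, share before it's deleted!",
]
labels = [0, 0, 1, 1]  # 0 = no flag, 1 = likely misinformation (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(headlines, labels)

# Score a new item; a real system would route high scores to human reviewers,
# not publish a verdict automatically.
score = model.predict_proba(["Miracle pill melts fat overnight, doctors furious"])[0][1]
print(f"misinformation indicator score: {score:.2f}")
```

Even a sketch like this makes Holder’s caveat concrete: the model’s output is an indicator to hand to human validators, not a determination of truth.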

Glenn Edens, CTO for Technology Reserve at Xerox/PARC, wrote, “Maybe, but it is not clear what an acceptable technology might be. Consumers of information need to take an active role in determining the quality and reliability of information they receive. This can happen via verifiable and trusted sources through subscriptions, certificates and verifiable secure protocols, of course this does not solve the problem of the ‘commons’ – the free marketplace.”

Liam Quin, an information specialist with the World Wide Web Consortium (W3C), said, “We’re working on [these issues] at W3C, but the boundary between the physical and virtual worlds remains a difficulty.”

Is it possible to create such a system?
Some say probably not

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “No, only partial approximations serving specific outlooks are possible. Malicious intent will never go away and will continue to find ways against defenses, especially automated ones; and the increasing complexity of our environments will continue to be way above our ability to keep people educated.”

Frank Kaufmann, founder and director of several international projects for peace activism and media and information, commented, “No it will not be possible. This is the wrong approach to fixing the ‘news’ problem. I call this the ‘cops and robbers’ approach.”

John Wilbanks, chief commons officer, Sage Bionetworks, replied, “No. Because the weakness of all technical systems is the people involved – the designers, builders, and users. And we’re always going to be hackable. Until we get better (or die off and are replaced by people better able to deal with it) it won’t improve.”

Tim Bray, senior principal technologist for Amazon.com, observed, “I doubt it; people trust people, not systems.”

Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” commented, “Because it is an arms race with the purveyors of untrustworthy information backed by both state actors and amateurs, I don’t think it is likely that 100% reliable systems will last for long. However, a combination of education – teaching people how to critically examine online info and use credibility tools, starting with elementary school children – can augment technical efforts.”

Bob Frankston, internet pioneer and software innovator, said, “No, because the world is inherently ambiguous. If anything, the wish for such a system feeds into an authoritarian dystopia.”

Garth Graham, an advocate for community-owned broadband with Telecommunities Canada, explained, “We can only verify the source, never the information. The question assumes external authority and there is no external authority.”

A professor and researcher based in North America commented, “It is not possible to create a reliable, trusted, unhackable verification system. Trust is a social value that must be developed and maintained over time. The system will be only as trusted as the institution responsible for its maintenance. I do believe that it is possible to maintain reliable and trusted systems, but it is not a technology problem. It is a problem of ongoing support, labor and social integration.”

Andrew Odlyzko, professor of math and former head of the University of Minnesota’s Supercomputing Institute, observed, “No, because what is accepted as reliable is a social construct, and in most cases does not have an absolute, unambiguous answer.”

Wendy Seltzer, strategy lead and counsel for the World Wide Web Consortium, replied, “No. We should focus on ways to reduce the impact and reach of falsehoods and spoofs, because we won’t be able to stop them entirely. In a combined social-technical system, technical solutions aren’t enough.”

Mark P. Hahn, a chief technology officer, wrote, “No. Even with perfect tools people will make mistakes. Decentralized tools will still allow bad actors to subvert locally. Centralized tools will concentrate power and become both a target and a magnet for bad actors on the inside.”

Matt Stempeck, a director of civic technology, noted, “Most verification signals can be misappropriated by third parties, as we’ve seen in the recent spates of sophisticated phishing attacks. More problematic is that many information consumers judge the content based on the person they know that’s sharing it, not a third-party verification system.”

An anonymous respondent replied, “No, because humans will go on getting information from all sorts of sources, some of which are less reliable than they think.”

An anonymous software engineer based in Europe said, “It’s going to be painful. People will continue to discard anything that doesn’t fit in their bubble as ‘untruth,’ and dispute the verification.”

An anonymous North American research scientist said, “We can’t force the purveyors of misinformation to forgo their profits. Traditional news organizations have less authority and power, especially among those who only believe the purveyors of misinformation. The government cannot limit speech. Who will?”

Such a system would be too
costly and work-intensive

An executive consultant based in North America echoed the voice of a number of other respondents when he said, “Yes, there are ways, but it is difficult and costly. Therefore, no one is motivated to do it. Right now, there are tech tools and algorithms that can point to suspicious sources of bad information, but it will take human intervention to step in, identify items and the source, and make the decision to intervene. That will be costly.”

An associate professor said, “There may be, but doing so would require a lot of capital. The question is then where would the financial and technical resources come from, and what are the motives of those providing them?”

A participant in the projects of Harvard’s Berkman Klein Center for Internet & Society said, “They will be complex and unwieldy, similar to high-level security, and in the same way, will be largely ignored or misused by all but the most sophisticated consumers. Effective systems will require multi-factor verification, third parties and watermarking.”

There is likely to be less profit
if such systems are implemented

Several respondents said there is too much money at stake to really drive a stake through fake news.

Giacomo Mazzone, head of institutional relations for the World Broadcasting Union, replied, “I’m afraid there will be no way because the fundamental economic model will not change.”

Justin Reich, assistant professor of comparative media studies, MIT, noted, “The better question is ‘Will Facebook create a reliable verification system?’ since that platform has achieved unprecedented status as the dominant source of news for Americans. They won’t develop such a system because it’s antithetical to their incentives and technically infeasible. Fake news is the kind of high-throughput, viral content that’s terrific to sell ads against. Moreover, communities really enjoy shared fake news: Judith Donath has important research here suggesting that sharing fake news can provide powerful signals of group affiliation even when people know it’s fake. Spreading fake news is a mechanism for self-expression and for community building – both squarely within the mission of Facebook. It’s also financially lucrative to allow, and politically very difficult to deal with, since the bulk of Fake News comes from the Right and they are in political ascendancy. The corrosive effects of fake news on our society are but an unfortunate externality. Compounding the problems with incentives, algorithms can be reverse engineered and gamed, and crowd-sourcing methods will lead to mobilizing ideological crowds versus mobilizing people committed to objective truths. Fake news verification systems need to be built inside people’s heads.”

An institute director and university professor said, “Google, Facebook, Twitter and the like know there’s no money in reliable, trusted, unhackable verification systems… Telling people what they want to hear, no matter the truth, will always be more profitable.”

An internet pioneer and principal architect in computing science replied, “If advertisers sign a pledge not to allow their ad money to flow to unreliable untrusted sources, then there will be an incentive to change – and with incentive, technical measures can be implemented.”

Jerry Michalski, futurist and founder of REX, replied, “If Mark Zuckerberg wanted to play watchdog, he could turn Facebook, one of the superconductors of unreliable info, into a far better platform. But he’d risk growth and loyalty. A novel platform that is very reliable will have trouble attracting users, unless it is the natural successor to Facebook, Instagram, Snapchat, et cetera.”

Is it possible to have commonly accepted ‘trusted’ systems? It’s complicated. ‘What I trust and what you trust may be very different’

Paul N. Edwards, fellow in International Security, Stanford University, commented, “Any trusted verification system will require a significant component of attention from trained, reliable, trustworthy human beings. Such systems are labor-intensive and therefore expensive. Further, many people care more about confirming their own biases than about finding trustworthy sources.”

A research scientist based in North America commented, “Who will be the referee?”

An anonymous research scientist replied, “‘Verified’ statements would simply be those in agreement with the ideology of the verifier.”

Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “Unhackable? No. Whether it’s a technological hack or social engineering, we have to operate as if ‘unhackable’ is un-possible. Reliable? Probably. Trusted? Now this is the problem. Trust is a cultural construct (as in, you trust when the source doesn’t violate your norms, put simply). What I trust and what you trust may be very different, and finding something that we both (or all) will trust may be functionally impossible. No matter the power of the technologies, there’s still the ‘analog hole’ – the fact that the human mind has to accept something as reliable and true.”

Daniel Kreiss, associate professor of communication, University of North Carolina-Chapel Hill, commented, “I doubt that a polarized public where partisanship is akin to religious identification will care about verified information. Members of the public would care about these systems if the information they parlayed benefits their own partisan team or social identity groups.”

Edward Kozel, an entrepreneur and investor, replied, “All existing or posited techniques to grade ‘trust’ are subjective. Like reputation, trust is relative and subjective.”

danah boyd, principal researcher, Microsoft Research, and founder, Data & Society, wrote, “Nothing is unhackable. You also can’t produce trust in a system without having trust in the underlying incentives and social infrastructure. If you want to improve the current ecosystem, it starts by addressing perceptions of inequality.”

Esther Dyson, a former journalist and founding chair at ICANN, now a technology entrepreneur, nonprofit founder and philanthropist, said, “The systems can be unhackable, but they cannot be reliable and trusted any more than *people* can be reliable and trusted.”

Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, observed, “There may be some useful techniques for verification, but historically there’s always been a dynamic in digital technology development where different parties with different views about who or what that technology is for, build and reconfigure systems in a kind of adversarial or ‘argumentative’ cycle of point-counterpoint. That’s the culture of computing; it resists stabilization (at least so far). For me, though, the key thing is that verification isn’t judgment. Fact checking isn’t editing or making a case. It takes people to do these things and the idea that machines or ‘an artificial intelligence’ is going to do this for us is, I think, irresponsible.”

An anonymous respondent wrote, “No, the verification system has to have an opinion.”

Daniel Wendel, a research associate at MIT, said, “The technology exists to make things reliable and unhackable. However, this does not mean they will be reliable or trusted. At some level, value judgments will be made, and personal preference will be injected into any system that endeavors to report on ‘truth.’ Luckily, having a fully foolproof, trusted and reliable source is not required. In fact, having a public that doubts everything is good. That said, some sources are more reliable than others. People need to begin to understand that being a wary consumer does not mean taking all news as ‘equally fake.’ There is a certain willful self-deception in society now that allows untruthful sources to be perceived as reliable. But social, not technical, innovation is required to overcome that.”

Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “The technical issue of verification is irrelevant to the social issue of not valuing truth. That is, a cryptographically signed statement does almost nothing against being quoted in misleading manner, or just plain lies that people want to believe. The problem with stories ‘too good to check’ isn’t a deficiency of ability, but rather essentially nobody cares. In discussion forums, when someone posts an article link and mischaracterizes it in an inflammatory way, consider how few people will read the full article versus immediately ranting based on the mischaracterization. That is, we see a prominent failure-mode of not verifying by reading an article often one click away. Given this, it’s hard to see more than a minuscule effect for anything elaborate in terms of an unforgeable chain to a source. It’s worthwhile to compare the infrastructure of online shopping, where a huge amount of money is directly at risk if the system allows for false information by bad actors, i.e. credit card scammers. There, the businesses involved have a very strong incentive to make sure all the various platforms cooperate to maintain high standards. This isn’t an argument to treat getting news like making a purchase. But looking at the overall architecture of a payment system can shed some light on what’s involved in having reliability and trust in the face of distributed threat.”

A consultant based in North America wrote, “Adoption of verification systems will be strongest among those who seek them out, a demographic that is not at the center of the political disinformation problem. Further, the intervention of verification could well serve (in the short term) to deepen the dogmatic lines of ideological division.”

Scott Guthrey, publisher for Docent Press, said, “Ultimately the security will depend upon the humans building and using the system. There is no such thing as a ‘reliable, trusted, unhackable’ human being.”

David Weinberger, writer and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, said, “Reliability and trust are social formations: Reliable and trustworthy enough for some purpose. We will adjust our idea of what is an appropriate degree of reliability and trust. Because we have to.”

Can systems parse ‘facts’ from ‘fiction’ or identify accurately and
in a widely accepted manner the veracity of information sources?

A professor of law at a major U.S. state university commented, “I don’t think this is a technological problem. We had reliable, trusted verification systems. It was called journalism. But journalism stopped being a profession and became an industry. And accuracy was not advantageous to the bottom line. We need to rebuild not-for-profit media and help it cut through the online and cable clutter.”

Eileen Rudden, co-founder of LearnLaunch, wrote, “We will be able to verify who you are, but will not be able to verify if what you say is true.”

Erhardt Graeff, a sociologist doing research on technology and civic engagement at the MIT Media Lab, said, “Solutions to misinformation will be more social than technical and will require we redistribute power in meaningful ways. Using the frame of security, there is never going to be such a thing as an unhackable verification system. The weakest links in security are human, which cannot be addressed by technical fixes. Rather, they require that we work on education and support systems, designs that are collaboratively created and adaptive to people’s needs, and ways to respond to hacks and crises that protect and reassure individual users first rather than business interests. Furthermore, conspiracy theorists will always find a way to discredit a system’s reliability and trustworthiness. A more fundamental solution will require that we work on building relationships among diverse communities that foster mutual respect and trust. These networks of people and institutions are what information ecosystems (and democracies, more generally) work through. It’s these webs of relationships that do the lion’s share of the work of verification. We will need to rethink our connections to public information in order to foster respect and trust through consistent engagement in the same way friendships are built. News organizations, platforms, and other media elites will need to operate in more ‘localized’ and participatory ways that allow regular people to have agency in the journalistic process and in how problems like misinformation are addressed. We trust who and what we know in part because we have some control over those relationships closest to us. Ultimately, verification and the larger universe of information problems affecting democracy boil down to relationships and power, which we must take into account in order to make real progress.”

Geoff Scott, CEO of Hackerati, commented, “Reliable and trusted in whose eyes? It’s technically feasible to create immutable and consensus-based repositories for information, but it is the ‘facts’ themselves that are being doubted and fabricated. What determines if a statement is true or not? Popular consensus only indicates which statements are most believable to a segment of the population. Findings from ‘independent’ investigations are themselves questioned by those who are already inclined to disagree.”

A professor at MIT commented, “‘Slow’ news, with adequate research and sourcing, still offers established venues credibility. It will take real forensic effort to keep up with technological fakery (lip-syncing unspoken words, compositing unlived images, generating chaff by bot-driven social media). We need to include the education of media-literate citizens in our fix, and to do that as a priority. The down side of ‘fact control’ (as opposed to critical thinking) is its ease of misuse.”

Michael Zimmer, associate professor and privacy and information ethics scholar, University of Wisconsin-Milwaukee, commented, “Any attempt at a system to ‘verify’ knowledge will be subject to systemic biases. This has been the case since the first dictionary, the evolution of encyclopedias from a roomful of editors to a million contributors, debates over standardized curriculum, et cetera. Technology might make things appear to be reliable or unhackable, but that’s just a facade that obscures latent biases in how such systems might be built, supported and verified.”

Philipp Müller, postdoctoral researcher at the University of Mainz, Germany, replied, “I am skeptical that verification systems can be reliable and trustworthy if their decision is a binary one between true or false. In many instances, truth is a social construction. The ultimately trustworthy answer to many questions would therefore be that there is no ultimate answer but rather different sides to a coin. I believe this logic of uncertainness and differentiated ‘truths’ is hard to implement in technological ecosystems.”

Alfred Hermida, associate professor and journalist, commented, “The question assumes there is an objective ‘truth’ that can be achieved. Who and how information is verified is shaped by systemic power structures that tend to privilege sectors of society.”

Rick Forno, senior lecturer in computer science and electrical engineering at the University of Maryland-Baltimore County, said, “There is no unhackable system for facts or the presentation of reality; people will still believe what they want to believe. Technology can help increase the level of trust and factual information in the world, but ultimately it comes down to the individual to determine what is real, true, fake, misleading, mis-sourced, or flat-out incorrect. That determination is based on the individual’s own critical-thinking skills if not the factual and practical soundness of their educational background as well. Unfortunately, I don’t see technology helping overcome *that* particular vulnerability which is quite prevalent in the world – i.e., people making bad judgments on what to believe or trust – anytime soon. I hope your report’s commentary will touch on the importance of not only education (both formal and informal) and especially the development of critical-thinking and analysis skills needed to inculcate an informed and capable citizenry – especially ones that allow a person to acknowledge an opposing view even if they disagree with it, and not just brush it off as ‘fake news’ because they don’t like what they’re hearing or seeing. Otherwise, I daresay a misinformed or easily-misguided citizenry that remains uncritical and unquestioning will remain a politician’s best friend, and this problem will only get worse in time. ;( ”

There can be no unhackable largescale networked systems

Michael R. Nelson, public policy executive with Cloudflare, replied, “No one who works on computer systems would promise that a system can be ‘unhackable.’ But a lot can be done with a system that is ‘good enough’ and upgradable (if vulnerabilities are found). The history of encryption is a good model. Standards have evolved to overcome new attacks.”

A leading internet pioneer who has worked with the FCC, ITU, GE, Sprint and VeriSign commented, “This cannot be done with an open TCP/IP internet.”

David Conrad, a chief technology officer, replied, “No, not systems that can be deployed in a cost-effective fashion for the foreseeable future. ‘Unhackable’ implies a fundamental change in how computer systems and software are implemented and used and this is both expensive and takes time.”

A professor of law at a major California university noted, “Reasonably reliable and trusted, yes. Completely unhackable? We have not managed it yet, and it seems unlikely until we can invent a system that, for example, has no vulnerabilities to social engineering. While we should always work to improve reliability, trust and security on the front end, we must always expect systems to fail, and plan for that failure.”

Timothy Herbst, senior vice president of ICF International, said, “I don’t think there will ever be an ‘unhackable verification system,’ and it would be folly to believe in such a thing.”

Brad Templeton, chair emeritus of the Electronic Frontier Foundation, said, “Reliable and trustable, but not unhackable. However, the level of intrusion can be low enough for people to use them.”

It’s worth it to create verification systems;
they may work somewhat or at least be helpful

Jonathan Brewer, consulting engineer for Telco2, commented, “Yes, it’s very possible to create trusted, un-hackable verification systems. Much of the requisite infrastructure exists through DNSSEC. Browser vendors and social media platforms need only integrate and extend DNSSEC to provide a registry of authentic information sources.”
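
Brewer’s suggestion can be read as publishing publisher metadata in the DNS and relying on DNSSEC validation. The sketch below is a rough illustration only: it assumes the third-party dnspython package, a hypothetical ‘_verify’ TXT record and status format, and an upstream resolver that performs DNSSEC validation (signalled by the AD flag). None of these conventions come from Brewer’s comment.

```python
# Illustrative only: query a hypothetical "verified publisher" TXT record and
# check whether the validating resolver marked the answer as authenticated (AD).
import dns.flags
import dns.resolver

def publisher_status(domain):
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 1232)                 # ask for DNSSEC records
    answer = resolver.resolve(f"_verify.{domain}", "TXT")    # hypothetical record name
    authenticated = bool(answer.response.flags & dns.flags.AD)  # set only by a validating resolver
    status = answer[0].to_text().strip('"')                  # e.g. status=verified (hypothetical format)
    return status, authenticated

status, dnssec_ok = publisher_status("example-news.org")      # placeholder domain
print(status, "DNSSEC-authenticated:", dnssec_ok)
```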

Micah Altman, director of research for the Program on Information Science at MIT, commented, “People and the systems they create are always imperfect. Instead of ‘unhackable’ systems we should seek more reliable, more trustworthy and tamper-resistant (hardened) systems. These systems will be based on transparency of operation (e.g., open source, open algorithms); cryptographic protocols; and distributed operation.”
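
One concrete instance of the cryptographic protocols Altman lists is a detached signature over published text, which makes tampering detectable by anyone holding the publisher’s public key. This sketch assumes the widely used Python cryptography package and covers only one narrow layer of the transparent, distributed design he describes.

```python
# Illustrative only: detect tampering with a published text using a digital signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

article = b"City council approves budget after public hearing."

signing_key = Ed25519PrivateKey.generate()   # held by the publisher
signature = signing_key.sign(article)        # distributed alongside the article
public_key = signing_key.public_key()        # published openly (e.g., in a registry)

def is_untampered(text, sig):
    try:
        public_key.verify(sig, text)
        return True
    except InvalidSignature:
        return False

print(is_untampered(article, signature))                  # True
print(is_untampered(article + b" (edited)", signature))   # False: content was altered
```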

A senior fellow at a center focusing on democracy and the rule of law commented, “Full reliability is not attainable. But there already exist general principles that can be used to reduce the spread of false information. Examples include: penalizing the organizations (newspapers, Facebook pages, Twitter accounts) that spread malicious information (e.g., libel laws); make trolling a punishable offense (e.g., hate speech); mechanically block distributors of malicious information (e.g., censorship – note that this particular approach can also be used to block the circulation of reliable information by non-democracies/non-democrats); encourage ethical reporting (e.g., insist on at least two independent direct sources as evidence).”

A professor at a major U.S. state university said, “Yes. A lot can be accomplished with: 1) Automated analysis of the text, 2) Automated analysis of the sources, and 3) Social processes to suppress fake news.”

Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “Yes, reliable, trusted, unhackable verification systems are within the range of today’s technology. The public – writers and readers – can be protected by current cryptography algorithms if new methods for storing and retrieving public information are created. As public outcry over fake news increases, the requirements to have multiple sources documented and tested within a program can be met. Academic systems already do cross-checking of academic sources. The real problem today is that it costs to secure these systems. If the citizens of the United States or other countries want these systems, then public and private money must be invested to create them.”

Filippo Menczer, professor of informatics and computing, Indiana University, wrote, “Yes. We can develop community trust standards backed by independent news and fact-checking organizations, and implemented by Web and social media platforms. It won’t be perfect and abuse will continue to exist, but its harm will be reduced.”

Irene Wu, adjunct professor of communications, culture and technology, Georgetown University, said, “There is no perfectly unhackable system. If there were, they could be used for harm as easily as for good. However, we may develop technical ways to improve the quality of news we get. In other arenas, we rely on safety certifications for home appliances, or brand names for fashion clothing. Similarly, other markers could be developed for information. It used to be we trusted a newspaper, maybe it’s no longer just the newspaper, but a certification that reporters can get, or an industry association of online news sources that adheres to good codes of practice.”

Steve McDowell, professor of communication and information at Florida State University, replied, “A reference system (a trusted source vouches for the author or story in question) or a branded system (recognizable and trusted information providers) may reduce the persuasiveness of some false facts. However, there may not be agreement on who are the trusted sources in [the category of] news and information.”

Michael Wollowski, associate professor at the Rose-Hulman Institute of Technology, suggested there is trusted news, writing, “It’s called the New York Times [for facts]. We always had the National Enquirer [for fake news]. It is just that now we have many more information sources. If you want to read them, go ahead. If you want to trust information, do what people have been doing for a long time: peruse sources that are known to diligently check their facts. Use sources from several countries/continents.”

Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “A lot depends on (automated) social proof. Algorithms will learn to filter out ‘bad apples.’ This is crucially dependent on having the right incentives: the appeal of ‘virality’ will go away once news consumption is no longer funded by ad consumption.”

A research scientist based in Europe said, “Ask legitimate news sites to be more thorough in the way they report information, because recently the quality of the content of major media (e.g., CNN, Fox News, NY Times) has been declining.”

Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, wrote, “I won’t comment on the technical dimensions, but I do think we can get more reliable and trusted information if the digital platforms invest greater human and technical resources in vetting and verification. I definitely don’t want to see governments play this role.”

An anonymous CEO and consultant based in North America suggested a solution: “Create and use smaller autonomous networks where peering is based solely upon trust.”

A principal network architect said, “The process already exists. It is called earning the respect of one’s peers. It can’t be perfect but it works most of the time.”

An anonymous respondent replied, “There can still be a place for professional journalists who really investigate and earn public trust. Plus there are our peers around us, but with them our trust may be misplaced sometimes.”

Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “Yes, but it will require a much greater willingness in the U.S. to pursue antitrust investigations and to support the enactment of data-protection laws.”

Rob Atkinson, president, Information Technology and Innovation Foundation, said, “If nations copied Estonia’s digital signature model we could have trusted verification systems. These would bring an array of other economic benefits as well.”

Larry Keeley, founder of innovation consultancy Doblin, commented, “YES. I’ve had teams working on this in both the design school where I teach graduate and doctorate students and at the Kellogg Graduate School of Management. There are technologies – like blockchain and other forms of distributed ledgers – that make complex information unhackable. There will be other methods that don’t worry so much about being unhackable, and worry more about being resilient and swiftly corrected. There will be many more such emergent capabilities. Most, however, will NOT be instantaneous and real-time, so that there will still be considerable asymmetry with information that is compelling, vivid, easily amplified, and untrue. So the coolest systems may need to have an augmented-reality ‘layer’ that provides the confidence interval about the underlying story – and shows how that gets steadily better after a bit of time permits a series of corrective/evaluative capabilities to address the initial story(ies).”

Henning Schulzrinne, professor and chief technology officer for Columbia University, said, “We only need near-perfect, not perfect, systems. Verification systems within limited realms are feasible, both for identifying publishers and individuals.”

Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “…Reliable and trusted systems – as we already have with banking, for instance – are possible and likely. There’s probably, in the end, more money and power to be gained by building systems the majority trust than in creating widespread distrust and, ultimately, a withdrawal from the internet (or, even, some as-yet hard-to-imagine alternative system being built).”

Riel Miller, an international civil servant who works as team leader in futures literacy for UNESCO, commented, “Reliable, trustworthy and secure ‘verification systems’ are in the eye of the beholder and context. A ‘truth’ vending machine or system is not doable. What is entirely feasible and is always more or less functional are systems for assessing information in context and related to need. With the decline in the status and power of the former gatekeepers of ‘good’ knowledge, processes are unleashed to seek alternatives. Mass solutions are not the only way and are likely to be sub-optimal from many perspectives. As new sources and dynamics for countervailing power emerge, so too will fit-for-purpose assessment. It will be messy and experimental – that’s complex evolution.”

The president of a business wrote, “Nothing is ever unhackable. But social engineering, editors, educational systems, the Wikipedia-style hive mind solution, and judicious law-making, combined with the type of AI that identifies fraud on a credit card, can make great strides toward protecting information systems and news.”

Paul Gardner-Stephen, senior lecturer, College of Science & Engineering, Flinders University, disagreed with Livingstone and the other optimists, writing, “One of the problems is that it is state-level actors who are major players in this space. Tools like blockchains may allow for consensus forming and similar web-of-trust schemes, however they all stumble over the problems of relativity, subjectivity and perspective. We see this today: One man’s bullying is another’s ‘standing up for himself.’ This is a classic tragedy of the commons: The rules that enabled public communications to be productively shared are being undermined by those so desperate to hold onto power, that they are willing to degrade the forward-value of the medium. Indeed, for some, it is probably an active ploy so as to neuter public scrutiny in the future by destroying and discrediting the means by which it could occur.”

‘Verification’ would reduce anonymity,
hinder free speech, harm discourse

Most respondents said user verification systems would be likely to require the loss of anonymity and the appointment of an overall authority, which could limit online participation and free expression for many people.

John Perrino, senior communications associate at George Washington University School of Media and Public Affairs, wrote, “You would either put First Amendment rights at risk by only allowing verified content from a few sources or expand the verification system to everyone where it is sure to be exploited because it would be created by humans.”

A senior policy researcher with an American nonprofit global policy think tank said, “Verification starts with the reduction of anonymity.” A professor and researcher of American public affairs at a major university replied, “No, such a system is unlikely, at least not without creating unacceptable limits on who can express themselves online.” A media director and longtime journalist said, “There is no absolute trust without an end to anonymity. Even then, human trust is hackable using economic/social/moral/peer pressure.”

Nathaniel Borenstein, chief scientist at Mimecast, commented, “No [such system is possible]. Not without an absolute authority that everyone is required to trust.”

An anonymous research scientist said, “I am not aware of any such system in the history of mankind. In fact, any system that actually did what you describe would probably be regarded as the instrument of an oppressive regime. For me, contestation, explanation, agonism are what a healthy information ecosystem is about – and not one that outsources accountability to ‘verification systems.’”

An anonymous respondent from the Berkman Klein Center at Harvard University noted, “No. A ‘reliable’ and ‘unhackable’ verification system implies policing the information we share and how we share it. That would seem to stand in opposition to a free exchange of ideas, however stupid some of those ideas might be. But placing the responsibility for assessing the quality of information on the listener keeps the channels open. Unfortunately, it’s far from foolproof. And there’s no reliable way to train people to be better, more critical listeners or consumers of information.”

Some have hopeful proposals for possible
fixes, and there are deeper looks at the issue

Peter Jones, associate professor in strategic foresight and innovation at OCAD University, Toronto, predicted, “The future will look like a network of commentary and field-level (‘embodied’) tweets supported by well-funded Wikileaks sources that cue journalists and give direction to investigation. Wikileaks is not ‘hackable’ in the way today’s fake news attributes such ‘hacks’ to Russia. False information on a leak site is crowd-analyzed and found out quickly.”

Amy Webb, author and founder of the Future Today Institute, suggested solutions, writing in an extended response: “There is a way to create reliable, trusted verification systems for news, but it would require radical transparency, a fundamental change in business models and global cooperation. Fake news is a bigger and more complicated problem than most of us realize. In the very near future, humanity’s goal should be to build an international, nonpartisan verification body for credible information sources. Within the decade, machine learning can be applied for auditing – randomly selecting stories to fact-check and analyze expert sentiment. In the decade that follows, more advanced systems would need to authenticate videos of leaders as real, monitor augmented reality overlays for hacks, and ensure that our mixed reality environments represent facts accurately.

“The best defense against fake news is a strong, coordinated offense. But it will take cooperation by both the distributors – Facebook, Google, YouTube, Twitter – and the world’s news media organizations. Google and Facebook could take a far more aggressive approach to identifying false or intentionally misleading content and demoting websites, channels and users who create and promote fake news. Twitter’s troll problem could be tackled using variables that analyze tweet language, hashtag timing, and the origin of links. YouTube could use filters to demote videos with misleading information.

“News organizations could offer a nutritional label alongside every single story published, which would list all the ingredients: everyone in the newsroom who worked on the story, all of the data sets used, the sources used, the algorithms used, any software that was used, and the like. Each story that travels digitally would have a snippet of code and a badge visible to viewers. Political stories that are factually accurate but represent liberal or conservative viewpoints would have a verification badge indicating a political slant, while non-political stories would carry a different badge. The easiest way to do this would be to use the existing emoji character system.

“The verification badge convention is something we’re already familiar with because of Twitter and Facebook. Similarly, stories with verified badges would be weighted more heavily in content distribution algorithms, so they would be prioritized in search and social media. Badges would be awarded based on credible, factual reporting, and that wouldn’t be limited to traditional news organizations. Of course, it’s possible to hack anything and everything, so whatever system gets built won’t be impenetrable.”
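
Webb’s ‘nutritional label’ is essentially structured metadata that travels with a story. The sketch below shows one hypothetical shape such a label could take; every field name and the badge value are invented for illustration, and the SHA-256 digest simply binds the label to the exact text it describes so a mismatch is detectable.

```python
# Illustrative only: a hypothetical machine-readable "nutrition label" for a story.
import hashlib
import json

story_text = "City council approves budget after public hearing. ..."

label = {
    "badge": "verified-nonpolitical",                     # hypothetical badge taxonomy
    "newsroom_contributors": ["reporter A", "editor B"],
    "sources": ["council meeting minutes", "city budget dataset v3"],
    "datasets": ["city-budget-2017.csv"],
    "software": ["spreadsheet audit script"],
    "story_sha256": hashlib.sha256(story_text.encode("utf-8")).hexdigest(),
}

# The label travels with the story; readers (or platforms) can recompute the
# digest to confirm the label describes the exact text they are seeing.
print(json.dumps(label, indent=2))
```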

Jonathan Grudin, principal design researcher, Microsoft, said, “Verifying the accuracy of information claims is not always possible, but verifying the source of information seems likely to be tractable. The next step is to build and learn to use reliable sources of information about information sources. This can be done now most of the time, which isn’t to say unforeseen technical challenges won’t arise.”

Marcel Bullinga, futurist with Futurecheck, based in the Netherlands, said, “I envision an AI-backed, traffic light system that shows me in realtime: Is this information/this person/this party reliable, yes or no? Is their AI transparent and open, yes or no? Is their way of being financed transparent, yes or no?”

A senior researcher and distinguished fellow for a futures consultancy observed, “In the long term, very sophisticated multi-factor biometrics may mitigate risks at the human end. Meanwhile, advanced interconnected secure blockchain fabrics may extend considerable security to future microservices, Internet of Things automation and valuable data and media.”

The chief technology strategist for a nonprofit research network serving community institutions commented, “We may see a combination of Wikipedia-like curation combined with blockchain for determining provenance.”
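
The provenance half of that idea can be illustrated with a toy hash chain, in which each revision commits to the hash of the previous one, so rewriting earlier history breaks the chain. This is a single-machine sketch with invented field names, not a distributed ledger or a real blockchain deployment.

```python
# Illustrative only: a minimal hash chain recording the provenance of revisions.
import hashlib
import json

def add_revision(chain, author, text):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"author": author, "text": text, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def chain_is_intact(chain):
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_revision(chain, "reporter A", "Draft: council approves budget.")
add_revision(chain, "editor B", "Final: council approves budget 5-2 after public hearing.")
print(chain_is_intact(chain))        # True
chain[0]["text"] = "Council rejects budget."
print(chain_is_intact(chain))        # False: history was altered
```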

Diana Ascher, information scholar at the University of California-Los Angeles, said, “I suspect many will advocate for the use of artificial intelligence to build trustworthy verification systems. However, always inherent in such systems are the biases and perspectives of their creators. The solution to biased information must come in the form of a recognition on the part of the information seeker that no information is pure fact. It is all interpreted and deployed in context. Systems that present information from a variety of perspectives will be most effective in providing the public with the opportunity to understand the many facets of an issue. And then, of course, most people will accept as true the information that confirms their existing beliefs. In addition, news consumers depend on heuristic information practices to find the information on which they base their decisions. Often, this comes in the form of opt-in communications from emerging thought leaders as trusted sources, as we’re seeing in the resurgence of email digests from individuals and think tanks (e.g., Stat, Neiman, countless others), as well as following trusted entities on social media (e.g., Twitter).”

Judith Donath, fellow at Harvard’s Berkman Klein Center, and founder of the Sociable Media Group at the MIT Media Lab, wrote a lengthy and detailed reply: “There’s no single answer – there is and will continue to be a technological arms race, but many of the factors are political and social. Basically, there are two fronts to fighting fake news. The first is identifying it. This can be a technical issue (figuring out the ever more subtle indicators of doctored video, counterfeit documents), a research problem (finding the reliable documentation that backs a story), et cetera. The second, harder, one is making people care. Why have so many Americans embraced obvious lies and celebrated the liars? And what can we do to change this? Many feel a deep alienation from politics and power in general. If you don’t think your opinion and input matters, why should it matter how educated you are on issues?

“Rethinking news and publishing in the age of the cellphone should not be just about getting the small screen layout right, or convincing people to <like> a story. It needs to also be about getting people to engage at a local level and understand how that connects with a bigger picture. An authoritarian leader with contempt for the press is, obviously, a great boon for fake news; an authoritarian leader who has the power to control the press and the internet is worse. Socially, the key element is demand for truth – the ‘for-profit-only’ writers of some of last fall’s fake news had little interest in whether their stories were for the right or the left – but found that pro-Trump/anti-Hillary did well and brought them profits, and that there just wasn’t the same appetite on the left.

“We need to address the demand for fake news – to motivate people across the political spectrum to want reality. This is not simply a matter of saying ‘read critically, it is better for you’ – that is the equivalent of countering a proselytizing Christian telling you to believe in the Gospels because Jesus walked on water by explaining the laws of physics. You may be factually right, but you won’t get anywhere. We need to have leaders who appeal to authoritarian followers AND also promote a fact- and science-based view of the world, a healthy press ecology etc.

“That said, the technology – the internet, AI – has changed the dynamics of fake news. Many people now get their news as free floating stories, effectively detached from their source publication. So one issue is how to make news that is read online have more identity with the source, with the reasons why people should believe it or not. And the answers can’t be easy fixes, because any cheap signal of source identity can be easily mimicked by look-alike sites.

“The real answer will come with finding ways to work WITH the culture of online reading and find native ways to establish reliability, rather than trying to make it behave like paper. A key area is the social use of news in a platform like Facebook. We’ve seen the negative side – people happy to post anything that they agree with, using news as the equivalent of a bumper sticker, not a source of real information. News and social platforms – both the publishers and the networks – should create tools that help people discuss difficult issues.

“At the moment, it appears that what Facebook might be doing is separating people – if they disagree politically, showing them less of each other’s feeds. Instead, we need tools to help mediate engagement, tools that help people host discussions among their own friends less acrimoniously. Some discussions benefit from interfaces in which people up and downvote different responses; some interfaces present the best comments more prominently, etc. While not every social discussion on Facebook should have more structured interfaces and moderation tools, giving people the ability to add structure etc. to certain discussions would be useful.

“I would like to see newspapers do a better job of using links to back up stories, provide background information and more detailed explanations. While there is some linking in articles today, it is often haphazard – links to Wikipedia articles about a mentioned country, etc. rather than useful background information or explanations or alternative views. The New York Times is doing a great job in adding interactive material – I’d like to see more that helps people see how different changes and rules and decisions affect them personally.”

Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., also wrote a lengthy response: “The way to ensure information is trustworthy is to build trust-tools into the information itself. By transparently revealing as much metadata as possible and tracking confirmation of the sources of the information, readers and viewers can verify the information. This not only enhances the value of the information, it fosters confidence and reliability.

“Many useful organizations such as Check, an open web-based verification tool, FactCheck.org, PolitiFact, the International Fact-Checking Network at Poynter Institute, Share the Facts, Full Fact, Live – all are tackling ways to institute and evolve trusted verification systems. Fifteen years ago in ‘Making the Page Think Like a Network,’ http://bit.ly/2vyxQ3l, I proposed adding an information balcony to all published information; in this balcony, or level above the information itself, would appear meta-commentary about the utility and accuracy of the information.

“With tracking tools and metadata – today mostly in the hands of marketers, but useful for the larger public good – we can verify messaging and sources more accurately than ever before because information now carries – within its digital confines – more miscellaneous data than ever before. A Facebook post, a tweet, notes from a meeting, an audio recording, contemporaneous notes from an anonymous source – all can be combined to create trusted, verifiable content that reveals any hacking or alteration of the content.

“With meta information positioned in a balcony above or around the information, readers and viewers will become accustomed to evaluating the reliability of the information they receive. We can no longer behave as though information can operate alone on trust-me. Just as RSA, the security division of EMC, provides security, risk and compliance management solutions for banks, media outlets will need to provide an added layer of information protection that is a visible component of the information presentation itself, whether online or in broadcast and print.

“Some countries are already doing this. As the Washington Post reported recently, ‘When politicians use false talking points on talk shows that air on RAI, the public broadcast service in Italy, they get fact-checked on air. It’s a recorded segment, and the hosts and politicians debate the data. Politicians frequently revise their talking points when confronted with the facts during interviews.’ Work is already underway to enact better verification. A committee of fact-checkers, under the auspices of the International Fact-Checking Network developed a code of principles, to which The Washington Post Fact Checker was an inaugural signatory. Knowing that information is dynamic – that it can expand, deepen, change – is essential to creating reliable, trusted, unhackable verification systems.”

Bernie Hogan, senior research fellow, University of Oxford, noted, “All systems must work on some web of trust. We live in post-modern times where we have long since departed from a world of absolute truths to one of stable regularities. Our world is replete with uncertainty. To make a system that is certain also makes it rigid and impractical. We already know that one-time pads form excellent unhackable security, but they are completely unrealistic in practice. So, I genuinely challenge the question – reliable does not mean perfect or unhackable. We must reject these absolutes in order to create more practical working systems and stop discourses that lend us to false equivalences between different paradigms. Some are still much more reliable than others even if they are not perfect.”

An author/editor/journalist wrote, “If you view each piece of misinformation as a virus, trying to fight such a multitude of individual viruses is ridiculously difficult. Immunity from viral attack is the actual goal. That is only attained when the population gains high quality critical thinking skills. For those who missed out on this, the efficacy of such skills needs to be demonstrated by people with influence, frequently and explained well – in much the same way that it has become popular to look at parenting deficits through popular TV programs, or at the detailed analysis invested in cooking programs. Without the popular uptake of critical thinking, misinformation will continue to be manufactured to appeal to specific subgroups by targeting their confirmation bias.”

Alladi Venkatesh, professor at the University of California-Irvine, replied, “This is not easy, but we should not give up.”

A selection of additional comments by anonymous respondents:

• “With cloud adoption up, there are more apps like academia.edu, experiment.com for information dissemination and crowd-funded science.”
• “There is no complete solution, but nimbleness and adaptability will go a long way toward helping curb hacking and bad information.”
• “The rapid development of the many-to-many communication system caught the world unprepared… The gatekeepers of the one-to-many communication systems (TV-press) are not valid anymore.”
• “Trust should be based on reputation for sound reporting, not what garners the most clicks.”
• “Changing the transparency model around systems to one where hacks and possible hacks are clearly defined by known interactions with specific systems at specific times is one possible path to resolving the trustworthiness of a source of information.”
• “No. The problem is the lack of a clear categorical differentiation. There are two fuzzy dimensions: the extent of falsity and the motivation of the purveyor.”
• “It is a matter of scale: if you want Facebook to do that for a billion users, it cannot happen since it is very hard to attend to minority views on a platform that wants to scale. On a smaller scale, why not?”
• “Federated trust networks. At small enough scales they aren’t worth the effort to hack.”
• “Verification systems are only as good as their level of deployment.”
• “The cost of hacking verification systems will continue to rise, but so will the rewards for doing so.”
• “Wikipedia has demonstrated feasibility on open communities.”
• “Platforms should deploy larger efforts to limit fake news and misinformation by white-listing reliable sources.”
• “The biggest challenge is battling the narrative of mistrust, authoritarianism and racism.”
• “Humans are the weak link in the chain, and they aren’t getting any better.”
• “What is to be verified, the source or the opinion?”
• “There is no universal truth and certainly no universal acceptance of ‘facts.’ It has ever been thus.”
• “Analyzing other organizational systems like credit cards/banks may be a good idea to assess trust.”
• “There is too much news content being shared in too many different places to ensure accuracy.”
• “Security is about risk mitigation; by reducing attack vectors, we can reduce the risk.”
• “Anything that tries to tackle misinformation once it’s already in the world will already be fighting a losing battle.”
• “Construct something like the old ‘Good Housekeeping Seal of Approval.’”
• “A successful solution would have to address people’s desire to seek out the ‘facts’ they like rather than the truth.”
• “The sources of news are so diffuse, and social networks so ingrained, that it may not be fully possible – is Facebook going to police the opinion of your Uncle Frank?”
• “This will never and should never be centralized. The internet welcomes all comers, so different homegrown verification systems will emerge.”
• “Systems are not the answer. Norms and standards for discourse are the solution.”

To read the next section of the report – What Are the Consequences for Society? – please click here.

To return to the survey homepage, please click here.

To read anonymous responses to this survey question with no analysis, please click here.

To read credited responses to the report with no analysis, please click here.

About this Canvassing of Experts

The expert predictions reported here about the impact of the internet over the next 10 years came in response to a question asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 2 and August 7, 2017. This is the eighth “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 8,000 experts and members of the interested public to share their opinions on the likely future of the internet and received 1,116 responses; 777 participants also elaborated in writing on at least one of the six follow-up questions to the primary question, which was:

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

Respondents were then asked to choose one of the following answers and follow up by answering a series of six questions allowing them to elaborate on their thinking:

The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online

The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online

The six follow-up questions to the WILL/WILL NOT query were:

  • Briefly explain why the information environment will improve/not improve.
  • Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?
  • What are the consequences for society as a whole if it is not possible to prevent the coopting of public information by bad actors?
  • If changes can be made to reduce fake and misleading information, can this be done in a way that preserves civil liberties? What rights might be curtailed?
  • What do you think the penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in taking steps to prevent the distribution of false information?
  • What do you think will happen to trust in information online by 2027?

The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous seven “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in the global internet policy community and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR) and Organization for Economic Cooperation and Development (OECD).

We also invited a large number of professionals, innovators and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; the media and media-watchdog organizations; and think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications), as well as globally located people working with communications technologies in government positions; top universities’ engineering/computer science departments, business/entrepreneurship faculty, and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the canvassing questionnaire link with others they believed would have an interest in participating; thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.

About 74% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 39% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An additional 22% specified their primary area of interest as “other.”

More than half the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their Internet expertise or background, and this was noted where relevant in this report.

Here are some of the key respondents in this report (note that position titles and organization names were provided by respondents at the time of the canvassing and may not be current):

Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the Program on Information Science at MIT; Robert Atkinson, president of the Information Technology and Innovation Foundation; Patricia Aufderheide, professor of communications, American University; Mark Bench, former executive director of World Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs; danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures; Tim Bray, senior principal technologist for Amazon.com; Marcel Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communication; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Johanna Drucker, professor of information studies, University of California-Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at ICANN; Glenn Edens, CTO for Technology Reserve at Xerox/PARC; Paul N. Edwards, fellow in international security, Stanford University; Mohamed Elbashir, senior manager for internet regulatory policy, Packet Clearing House; Susan Etlinger, industry analyst, Altimeter Research; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Mark Glaser, publisher and founder, MediaShift.org; Marina Gorbis, executive director at the Institute for the Future; Jonathan Grudin, principal design researcher, Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of information systems; Helen Holder, distinguished technologist for HP; Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University; Christian H.
Huitema, past president of the Internet Architecture Board; Alan Inouye, director of public policy for the American Library Association; Larry Irving, CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism; Christopher Jencks, a professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making and recommender systems, Clemson University; James LaRue, director of the Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, Web consultant, developer and activist; Mark Lemley, professor of law, Stanford University; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia Livingstone, professor of social psychology, London School of Economics; Alexios Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at Northeastern University; Giacomo Mazzone, head of institutional relations for the World Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor, Oxford Internet Institute; Alex ‘Sandy’ Pentland, member US National Academies and World Economic Forum Councils; Ian Peter, internet pioneer, historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab; Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo, longtime Silicon Valley-based technology forecaster; David Sarokin, author of “Missed Information”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Jack Schofield, longtime technology editor now a columnist at The Guardian; Clay Shirky, vice provost for educational technology at New York University; Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig Siegele, technology editor, The Economist; Evan Selinger, professor of philosophy, Rochester Institute of Technology; Scott Spangler, principal data scientist, IBM Watson Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard D. Titus, CEO for Andronik; Joseph Turow, professor of communication, University of Pennsylvania; Stuart A. Umpleby, professor emeritus, George Washington University; Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia; Tom Valovic, Technoskeptic magazine; Hal Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Kevin Werbach, professor of legal studies and business ethics, the Wharton School, University of Pennsylvania; John Wilbanks, chief commons officer, Sage Bionetworks; and Irene Wu, adjunct professor of communications, culture and technology at George Washington University.

Here is a selection of institutions at which respondents work or have affiliations:

Adroit Technolgic, Altimeter Group, Amazon, American Press Institute, APNIC, AT&T, BrainPOP, Brown University, BuzzFeed, Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic Design, Center for Democracy/Development/Rule of Law, Center for Media Literacy, Cesidian Root, Cisco, City University of New York Graduate School of Journalism, Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data & Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy Information Center, Farpoint Group, Federal Communications Commission, Fundacion REDES, Future Today Institute, George Washington University, Google, Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard Business School, Hewlett Packard, Hyperloop, IBM Research, IBM Watson Health, ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network, Internet Engineering Task Force, Internet Society, International Telecommunication Union, Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global, LearnLaunch, LMU Munich, Massachusetts Institute of Technology, Mathematica Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang Technological University, National Academies of Sciences/Engineering/Medicine, National Research Council of Canada, National Science Foundation, Netapp, NetLab Network, Network Science Group of Indiana University, Neural Archives Foundation, New York Law School, New York University, OpenMedia, Oxford University, Packet Clearing House, Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network, Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock Network, Telecommunities Canada, Terebium Labs, Tetherless Access, UNESCO, U.S. Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles campuses), University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester Polytechnic Institute, World Broadcasting Union, W3C, Xerox PARC, Yale Law.
