The 2017 Survey: 
The Future of Truth and Misinformation Online, Part 4 of 6

Can changes be made in a way that protects civil liberties?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center's Internet, Science and Technology Project in summer 2017 to share their answers to the following query. Respondents were nearly evenly split, 51% to 49%, on the question:

What is the future of trusted, verified information online? In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas? 

This page holds a full analysis of the answers to the third of five follow-up questions:

If changes can be made to reduce fake and misleading information, can this be done in a way that protects civil liberties? What rights might have to be curtailed?

Among the key themes emerging from the 1,116 respondents' answers were:

• There is likely to be a curtailment of the 'rights' of those who do harm to society.
• Systems should generally be optimized in a way that protects civil liberties.
• It's not easy to define what is real and what is misleading – who gets to decide?
• Limiting rights is not likely to reduce the most dangerous fake and misleading information.
• Some solutions may help limit misinformation while preserving rights to some extent.
• Create resilience and embed critical thinking rather than 'trying to destroy all lies.'
• The information explosion is so overwhelming we need to rethink things.

If you wish to read survey participants' credited responses with no analysis, please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_credit.xhtml

If you wish to read anonymous survey participants' responses with no analysis, please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_anon.xhtml

Summary of Key Findings: Follow-Up Question 3

Damaging information might be removed, but it's not easy to define what is real and what is misleading – who gets to decide? Can we work to educate more resilient critical thinkers so we do not have to resort to 'destroying all lies'?

While a number of participants in this canvassing said the protection of civil liberties online is paramount, others argued that some sacrifice of those liberties is necessary. Some experts warned that many of the proposed remedies to perceived problems in the information environment are likely to cut into free speech rights, broaden the already wide scope of online surveillance and greatly narrow the public’s options for sharing opinions and information online anonymously.

Anonymity has been most highly valued because it can protect whistleblowers, those subject to authoritarian rule and those subjected to discrimination. Some said that those who wish to spread misinformation will easily route around any changes, and that sacrifices of civil liberties will result in a net-negative impact on society.

The overarching trends in their answers are covered by these three quotes:

• A longtime U.S. government researcher and administrator in communications and technology sciences said, “Changes should not be made, because civil liberties would be curtailed.”
• An analyst for one of the world’s leading technology networking companies replied, “I don’t know what changes would be effective.”
• Matt Mathis, a research scientist who works at Google, said, “The right of free speech should not include global broadcasting of outright lies.”

Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, argued, “The authenticity, veracity, consistency, balance, fairness, et cetera, of information have, historically, been the responsibility of trusted institutions and actors (science, law, the academy, the press and publishers) who created systematic methods for making information as reliable as possible (though never perfectly so, which is more or less impossible). Peer review, editorial judgment, logical argument and debate have been our best tools – but all are now being undermined as ‘elite’ and thus illegitimate, in favor of emotion, personal-experience storytelling.”

Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “The meaningful solutions to fake news do not pose a risk to civil liberties; they pose a risk to corporate dominance of the internet.”

Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The only ways to fully curtail fake and misleading information (assuming such could even be rigorously defined) would be to rely heavily on rules and regulations that would be unacceptable civil liberties violations. What can be done is stronger policies that fight deliberate manipulation for profit or personal gain – many such laws already exist (libel and slander, SEC rules, et cetera) but the changing pace of technology means these rules need to be updated. Further, the mechanisms in the current legal system (for example, a libel case lasting years) cannot keep up with the pace of information – either much stronger penalties for convictions, new civic means of engagement, et cetera, would be needed.”

Rajnesh Singh, Asia-Pacific director for an internet policy and standards organization, observed, “As the online population grows this will get more complicated.”

Christian H. Huitema, past president of the Internet Architecture Board, commented, “I am very worried that in the name of ‘banning fake information’ we will get some kind of ‘thought police.’ Or maybe just speech codes.”

Jamais Cascio, distinguished fellow at the Institute for the Future, wrote, “If the system changes to *prohibit* or otherwise *stop* the production and proliferation of misleading information, then this will undermine civil liberties, as the definition of ‘misleading’ will inevitably take on subjective qualities. If the response to misleading information is functionally-mandatory verification information, then this would arguably not violate civil liberties. As long as the tools/methods used to reduce fake/misleading info do not *prevent* the discussion or creation of fake/misleading info, civil liberties will likely be fine. That is to say: a system that identifies false info without preventing the false info from being said would likely be civil liberties-friendly (or at worst, a frenemy). It’s the prevention of communication that’s the problem.”

Amy Webb, author and founder of the Future Today Institute, wrote, “We humans have agency. We have a stake in what’s coming next. The future is our shared responsibility in the present. We must engage in a difficult conversation about what constitutes ‘speech’ in the near-future AI age – and whether ‘freedom’ should be interpreted as broadly as it has in the past. Could the founders possibly have imagined a future of algorithmic subterfuge? And what happens when our AI agents start making unsupervised decisions? How do civil liberties apply then? The world is vastly complicated – too complicated for many of our current laws. While Congress may not enact a law abridging the freedom of speech, should there be a new terms of service – a new operating agreement – governing how information spreads for the 21st century? I think so.”

John Anderson, director of Journalism and Media Studies at Brooklyn College, City University of New York, wrote, “The fact that we’re raising the issue of curtailing civil liberties in order to better manage information flows suggests that this problem has already caused irreparable harm to norms of a functional democracy.”

Andrew Nachison, author, futurist and founder of WeMedia, noted, “What are civil liberties in networks and communication systems controlled by private enterprises? We need a global doctrine for digital rights – and without that, civil liberties will be under constant threat from governments with authoritarian, anti-democratic instincts and policies.”

John Markoff, retired journalist, formerly technology reporter for the New York Times, said, “This will be an enduring paradox. I believe we can have a more ‘secure’ computing universe, but only at the expense of traditional civil liberties.”

David Weinberger, writer and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, noted, “We at least need to be asking about whether we can reduce the influence of false information while preserving robust, genuine disagreement.”

There is likely to be a curtailment of the
'rights' of those who do harm to society

Some respondents said certain speech that is deemed to be a danger to society in some regard should possibly be limited.

Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “Offline, no-one ever said they have the right to shout into the homes and private lives of everyone; the internet has amplified the ability of those with money to reach many ears, but I do not agree that’s a speech right. So yes, ‘speech rights’ will have to be curtailed. But that will curtail the rights of the superrich, the malevolent and the private sector. Ordinary folk with something to say will still be able to speak, just not to reach absolutely everyone in an instant without any kind of filter for validity or relevance.”

Joshua Hatch, president of the Online News Association, noted, “The ‘actual malice’ standard may need to be revisited. Even suggesting that makes me shudder, but I do think that ought to be considered.” 

Carol Chetkovich, professor emerita of public policy, Mills College, commented, “We need to have more public conversation about where we draw lines around ‘free speech.’ It’s never been an absolute right, but we are facing particularly difficult challenges now in figuring out how to shape this right. Doing so will require dialogue. I suggest as a starting point thinking about Jurgen Habermas’s ‘ideal speech situation.’”

Mark Glaser, publisher and founder, MediaShift.org, observed, “You must preserve civil liberties at all costs no matter what you’re trying to accomplish. However, there might be less ‘free speech’ for those who are using that as a shield to promote misinformation and fake news (along with hate speech).”

An anonymous respondent wrote, “It might cause a re-thinking of civil liberties; perhaps we need to know that an individual or a group, even among the young, have been responsible for creating false and misleading information.”

Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of emerging technologies, The Hastings Center, wrote, “Freedom from repression can be maintained, but the freedom to pursue whatever one wants, even when this may harm others, would need to be curtailed. Those with the power to exploit the technology will resist such a curtailment.”

Steven Miller, vice provost for research, Singapore Management University, wrote, “Maybe total unrestricted civil liberties are not the number-one priority. Maybe there are some reasonable trade-offs between absolute civil liberties (so-called freedom of speech and expression) with responsibilities for accuracy, verification, validation. In other words, in the U.S. now, there is ‘freedom of speech,’ but no requirement that people be responsible for what they say. This is an odd type of freedom, and does not necessarily serve the needs of the society as a whole. So there needs to be a stronger linkage between the freedom and the responsibility. I do not see this as a curtailment, but as an alignment that has been needed for a long time now.”

Raymond Hogler, professor of management, Colorado State University, replied, “All increases in trust come at the cost of decreases in liberty.”

Sam Punnett, research officer, TableRock Media, replied, “Unfortunately the only way to reduce misleading information is for there to be consequences for its knowing creation. It is difficult to foresee how the future of the news media will evolve (particularly in the United States) given the continuous stream of contradictory statements and lies issuing from formerly reliable institutions such as the U.S. presidency. There will likely be eventual consequences, but the fourth estate and government institutions in the U.S. are struggling to adapt. Freedom of the press and free speech in a democracy are sacrosanct. Speech is not something you can easily legislate – neither is media consumption. In a democracy you have the freedom to be uninformed and the freedom to only consume what feeds your existing biases. The consequences may well be to lose your democracy; herein lies the paradox.”

Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “There are too many people nowadays who don’t understand that freedom of speech is not the same as having a right to claim a platform. If your speech is asinine, hateful or plain wrong, you cannot be angry at others for denying your voice to be heard.”

Mike Roberts, pioneer leader of ICANN and Internet Hall of Fame member, replied, “The First Amendment provides a lot of space for argument over its rights and obligations. Other countries have stronger defamation and libel laws, and moving ours in that direction should be considered.”

Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, observed, “Controlling fake news whilst preserving civil liberties is a balancing act. This would mean deliberately denying some posters the right to post information online. It would definitely mean compromising the freedom of expression and in a way the right to access information.”

Jan Schaffer, executive director of J-Lab, said, “The right to lie and the right to misstate information might need to be curtailed, with fines or loss of access to public airwaves and/or cable, internet. I don’t think the use of public airwaves (broadcast television/radio/Sinclair-type ownership) should be granted to those shown to have used them to disseminate false information.”

Erhardt Graeff, a sociologist doing research on technology and civic engagement at the MIT Media Lab, said, “Avoiding trampling on civil rights, particularly the strong definition of freedom of speech in America, will be extremely tricky. Following President Trump’s repeated accusations of fake news from American journalists, leaders less encumbered by strong civil rights statutes in their own countries quickly adopted the ‘fake news’ frame to go after inconvenient journalists by creating new regulations such as whitelists for approved news organizations. Any attempt to regulate misinformation that appeals to the ‘common or public good’ or classifies comments as reckless or negligent can and will be manipulated by policymakers to police dissent. Privacy protection is another obvious target that could erode in the name of fighting misinformation. Journalists’ shield laws have started taking a beating, but we should expect that attempts to root out anonymous and pseudonymous sources of misinformation online could undermine the privacy of all users and create chilling effects on free speech. As the issue of online harassment has shown, there is a need to revisit how we conceive of civil liberties in these contexts, but we must proceed with extreme caution.”

Joel Reidenberg, chair and professor of law, Fordham University, wrote, “Freedom of expression and censoring fake and misleading information are mutually exclusive.”

Systems should generally be optimized
in a way that protects civil liberties

Micah Altman, director of research for the Program on Information Science at MIT, commented, “Supporting an independent media and robust information systems that are open, transparent, and traceable to evidence is compatible with civil liberties. Silencing bad speech is not – and doesn’t work.”

Nick Ashton-Hart, a public policy professional based in Europe, commented, “The idea of balance between other priorities and civil liberties is a false one. Rights are not something we use as currency; they are integral. We need to approach rights and other public policy goals from a win-win perspective, where solutions to problems reinforce rights, not reduce them.”

Brian Cute, longtime internet executive and ICANN participant, said, “I don’t support curtailing civil liberties as a solution to societal challenges.”

Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, commented, “Yes, I strongly believe that freedom of expression can and must be protected. But this does not mean that any terrorist or propagandist has the right to say literally anything, no matter how violence-inducing or patently false it may be, AND have it be given the same level of search priority and accessibility. I believe we can find a reasonable balance here, and again, that civil society and the internet companies must be in the lead, with government as a consulting but not dictating partner.”

An anonymous respondent affiliated with the Berkman Klein Center at Harvard University said, “We can institute penalties for people who knowingly spread false information to gain power, influence or wealth. But we also run the risk of quelling a free and fair exchange of ideas and information. The dangers to a population’s civil rights are too great to balance the potential gains of those laws.”

Alan D. Mutter, media consultant and faculty at graduate school of journalism, University of California-Berkeley, replied, “Curtailing free expression would be more dangerous than anything.”

Jane Elizabeth, senior manager, American Press Institute, said, “This can be done in the same way we manage the conflicts inherent in free speech. Yes, it can be messy and difficult, but the First Amendment has been in place since 1791 and, all in all, it’s served us pretty well.”

Nate Cardozo, senior staff attorney, Electronic Frontier Foundation, observed, “The inevitable outcome of any system that aims to reduce the reach of fake news will be the reduction of expression of all kinds. It’s simply incompatible with free speech.”

Diana Ascher, information scholar at the University of California-Los Angeles, observed, “Until systemic discrimination is addressed, even new means of documentation that provide ‘incontrovertible’ evidence will be of little value in holding authorities accountable for violations of civil rights. For example, video footage of police brutality in the deaths of people of color has done little to dismantle structural racism, despite widespread news coverage. In 2016, there were 1,092 police killings of African Americans in the United States. Thus far, not one officer has been convicted. The Federalist direction in which the Trump administration has moved only exacerbates this through the creation of systems to identify and track populations of color.”

David A. Bernstein, a marketing research professional, said, “I believe it is possible by assigning some sort of a truth or consistency score to statements similar to what we saw by FactCheck during the past political debates. Why can’t we create a system that would be the Consumer Reports of fact-checking? I do not see any civil rights or freedom of speech problems. You can still say what you please, but then you have to live with the scoring by an independent scoring agency.”
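
To make Bernstein’s idea concrete, here is a minimal sketch of how such a consistency score might be computed. The verdict scale, weights and sample data are hypothetical illustrations, not a description of FactCheck or any existing rating agency.

```python
from collections import defaultdict

# Hypothetical mapping of fact-check verdicts to credibility weights.
VERDICT_WEIGHTS = {"true": 1.0, "mostly_true": 0.75, "mixed": 0.5,
                   "mostly_false": 0.25, "false": 0.0}

def consistency_scores(verdicts):
    """verdicts: iterable of (source, verdict) pairs from fact-checkers."""
    per_source = defaultdict(list)
    for source, verdict in verdicts:
        per_source[source].append(VERDICT_WEIGHTS[verdict])
    # Average the weights per source; more checks yield a more stable score.
    return {s: sum(ws) / len(ws) for s, ws in per_source.items()}

sample = [("outlet_a", "true"), ("outlet_a", "mostly_true"),
          ("outlet_b", "false"), ("outlet_b", "mixed")]
print(consistency_scores(sample))  # {'outlet_a': 0.875, 'outlet_b': 0.25}
```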

Patrick Lambe, principal consultant, Straits Knowledge, noted, “Machine intelligence and knowledge-organisation techniques, together with human supervision, are now smart enough to identify characteristics of false messaging based on the message and its manner of diffusion alone, without attending to privacy of originators or diffusers. Without diffusion, the incentive to create false messaging evaporates.”
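
A toy sketch of the kind of detector Lambe describes, scoring a message on features of its content and its diffusion pattern rather than on who posted it. The feature set, training rows and labels below are invented for illustration; a production system would need far richer features and labeled data.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [all-caps ratio, exclamation marks, reshares per minute,
#            fraction of resharing accounts created in the past week]
X = [
    [0.02, 0, 0.1, 0.05],  # typical news item
    [0.05, 1, 0.2, 0.10],
    [0.40, 5, 3.0, 0.70],  # sensational text, bot-like diffusion burst
    [0.35, 4, 2.5, 0.60],
]
y = [0, 0, 1, 1]  # 0 = legitimate, 1 = false messaging (hypothetical labels)

model = LogisticRegression().fit(X, y)

# Score a new message without knowing anything about its author.
new_message = [[0.30, 3, 2.0, 0.55]]
print(model.predict_proba(new_message)[0][1])  # estimated P(false messaging)
```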

Daniel Kreiss, associate professor of communication, University of North Carolina-Chapel Hill, commented, “I do not think the issues are legal, so I do not think civil liberties or rights are really at issue. The news media can, and should, change its patterns of coverage to: 1) Represent a broader range of intra-party and intra-social group debate and diversity of thought, especially when factual issues are at stake. 2) Highlight intra-party critiques of elites, especially when the veracity of information is at stake. 3) Hire more journalists from the communities media outlets serve to deliver information and correctives in an identity-congruous way to audiences. And 4) make a solid distinction between freedom of speech and freedom of the press, with the latter being delimited to speech that the public needs to hear (which means not covering misinformation). Technology platforms should think about more transparency in terms of their policies around expression, make more data available around these issues, and think more creatively about how social identity shapes epistemology, not simply rely on professional journalism outlets.”

David Manz, a cybersecurity scientist, replied, “Why do people assume security vs. freedom is a tradeoff? Privacy and security are on the same side. You cannot have privacy without security. INsecurity is opposed to privacy. This will not require any curtailment. Rather it will require a populace that wants to know where their information comes from. And a mechanism to share that information – something as simple as a Google search response of advertisements and legitimate results. Most savvy users can discern the difference easily. Alternatively we can have complex roots of trust with certificates and authorities to validate from reporter to bureau, to editor, to consortium to outlet to reader.”

More suggestions for potential solutions are included in another section, below.

It's not easy to define what is real,
what is misleading. Who gets to decide?

Respondents pointed out that differences between the digital age and earlier times complicate already complicated issues tied to limitations to free speech: How do you determine who posted the information in question? How do you define “damaging” misinformation? Who gets to decide what should or shouldn’t be classified – under any definition – as information that is false or damaging to society?

Stephen Downes, researcher with the National Research Council of Canada, commented, “The big difficulty in any campaign to reduce fake and misleading information will be to define what is real and not misleading. This isn’t a free-speech question – we have plenty of safeguards against libel, slander, mischief, et cetera. What we don’t have agreement on is on what counts as true – or, maybe more accurately, how much fabrication is allowed. Should there be criminal prosecutions for denying human-caused climate change? Should people go to jail for advocating boycotts or divestment campaigns? Should religion be required to offer proof of its claims? Et cetera.”

Jack Schofield, longtime technology editor at The Guardian, now a columnist for The Guardian and ZDNet, commented, “I can’t see any way to stop the distribution of fake news for two reasons. First, everyone’s a publisher, in the sense that they can post on social media. Second, some people don’t think they are spreading false information: they genuinely believe crackpot conspiracy theories, and they refuse to accept provable facts. If you correct them, many just double down on their false beliefs. This is a problem that neither education nor honest media can solve.”

Johanna Drucker, professor of information studies, University of California-Los Angeles, commented, “The dilemma is that all structural controls can be subverted, and that the more malicious and pernicious players are most likely to do so. The most effective method of guaranteeing the future of responsible journalism is through professional organizations and their certification/validation of sources/outlets. We have accreditation in other fields. We may need it in the domain of news and reporting. But what is to stop a bogus organization from setting itself up as an accreditation agency? What political litmus test is applied? How to police partisanship when it aligns with the basic notion of what constitutes verity?”

A distinguished professor of computer science and engineering at a major state university noted, “Any authority that can prevent dissemination of fake or misleading information can also be used to prevent dissemination of legitimate information. Even public consensus technologies may simply be co-opted by a wrong majority (remember, in the U.S. we had a majority belief for a period that blacks were subhuman).”

An anonymous business leader wrote, “The problem is who decides what is ‘fake?’”

The CEO of a major American internet media company based in New York City replied, “The solution isn’t censorship, it is about what gets magnified and promoted. People should be able to have wrong ideas, expound conspiracy theories, et cetera, but those views shouldn’t be magnified by algorithms and media networks!”

Mark Lemley, professor of law, Stanford University, observed, “There is a great risk in having the government decide what news is real. Just look at who would be making that decision in 2017. When a government has an incentive to promote fake over real news, giving them the power to suppress or select news is a real danger. We are much better served by private, competitive rating systems.”

Dave Burstein, editor of FastNet.news, said, “Any system will make it much harder to comment anonymously and free from government. Going beyond fraud to hateful or false claims inevitably will censor a great deal of legitimate commentary, I believe. The volume posted on the Net is so much [that] algorithms – inevitably overreaching and inaccurate – [are] the only practical technique.”

Bernie Hogan, senior research fellow, University of Oxford, said, “Fake news is less an issue than the profiting off of misleading and sensational news that erodes trust in public institutions. This creates the seedbed in which absurdist conspiratorial positions are grown. The notion that we would curtail civil liberties assumes we need real names in order to discipline trolls. Really, we already know that those who are the most nefarious are doing so through legal means, shell companies, lobbyists and legislators. Through legal means they redistribute money and attention, manipulate citizens and callously look askance at unequal outcomes. The threat from Rupert Murdoch’s demonstrably biased ‘reporting’ conglomerate is much greater than some online trolling by foreign actors. Curbing civil liberties is another way of suggesting that the disempowered are the problem and it is their rude online comments that must be dispelled, when it is the powerful who are doing the exploiting.”

Mark Bunting, a senior digital strategy and public policy advisor with 16 years’ experience at the BBC, Ofcom and as a digital consultant, wrote, “The need to draw an appropriate balance between freedom and restraint is not new – it’s been a feature of information environments as long as humans have had the means to communicate to co-opt support and coordinate against enemies. The crucial thing is to see this challenge as a matter of degree, not absolutes. The internet has enhanced opportunities for freedom and infringement of rights – we have to recalibrate our instruments for this new world, but we don’t have to invent completely new science.”

Alexios Mantzarlis, director of the International Fact-Checking Network based at Poynter Institute for Media Studies, commented, “Truly ‘fake’ information should be relatively easy to address without real consequences on civil liberties. Email largely defeated spam, for instance. But the misinformation space is a lot broader than totally fabricated stuff, as is made perhaps most clear by the taxonomy developed by Claire Wardle of First Draft. Truth comes in shades of gray and every item of information can be deemed at least somewhat misleading by someone. I am wary of any solution that suggests basic legal rights need to be curtailed.”

Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, DC, replied, “Who defines ‘fake news’? Censorship that seems benign is the door to censorship that is malignant.”

Barry Wellman, internet sociologist and virtual communities expert and co-director of the NetLab Network, said, “I don’t trust self-serving governments deciding what’s false and misleading information. I’d rather freedom of speech be preserved.”

Glenn Edens, CTO for Technology Reserve at Xerox/PARC, said, “Better internet protocols could help (CCNx for example where publishers can be verified), however this does [not] address the issue of what sources individuals choose to trust. The solutions do appear to be as bad as the problem, especially related to free speech and civil liberties. Even a ‘rating’ system of trusted sources is questionable – rated by who?”

Michele Walfred, a communications specialist at a major U.S. university, said, “The concern in policing is having a police state and condemning free expression one does not agree with as ‘fake.’ Opinion and satire need to be labeled because, although it should be intuitive or obvious, it isn’t. The identity of the publishers, their investors or backers should be fully disclosed. People are allowed to have left or right views. People are allowed to be vegan or not, et cetera. Skepticism of factual, peer-reviewed, researched articles is growing. Say what you want, but there should be a Better Business Bureau-type rating system – but who does that rating? It is a troublesome situation – when is an opinion a lie and an untruth? I don’t know.”

Helen Holder, distinguished technologist for HP, said, “Restrictions on the publication of patently false information probably cannot be done without infringing on civil liberties, except in cases of fraudulent advertising or incitement to violence. Despite the obviously negative impacts of trolls and other misinformation generators, it would be nearly impossible to curtail or censor them if no fraud or violence were involved because although there is a generalized damage to society from these practices, the direct, specific harm from each source is small and would be difficult to assess or prove.”

Joseph Turow, professor of communication, University of Pennsylvania, commented, “Some social media firms are trying out algorithms aimed at identifying bad actors, not just content, by the size and formulation of their message traffic. But the statistical nature of this activity inevitably means good actors will be identified incorrectly. If governments get involved the cure might exacerbate the disease. Unscrupulous politicians in the U.S. and elsewhere would inevitably look for ways to tar opponents with the fake-news or (especially) weaponized fake information label and thereby pollute the media environment even more.”

Greg Wood, director of communications planning and operations for the Internet Society, replied, “Centralized systems for authenticating or verifying information would seem to be unworkable and, as history has demonstrated time and again, incompatible with individual civil liberties. A distributed approach to at least authenticating the sources of information might be possible.”

Brad Templeton, chair emeritus of the Electronic Frontier Foundation, said, “In the U.S. and many places, freedom of the press must be maintained. That means systems that tag misleading information will not be legally required, but this does not mean they can’t be commonly used. However, there will be those who correctly and incorrectly criticise the flags of such systems as politicized, which will drive some away from them.”

Michael Rogers, principal at the Practical Futurist, wrote, “This is a bigger problem than even the technology, especially for democracies. There would need to be complete transparency around reasons for information rejection, as well as a public appeal process. There are precedents within democracies for fake information control, such as Holocaust denial. Achieving a system that respects civil liberties would require a very careful coordination of technologists and elected officials.”

The recalibration will have to include new approaches to machine-generated speech. A leading researcher studying the spread of misinformation observed, “This will be a difficult challenge, and possibly one of the defining regulatory and policymaking issues of the next 20 years. The underlying problem involves advanced machine technologies (i.e., machine learning, natural language processing and sentiment-analysis tech) that will be able to impersonate human speech. Does this technology, designed and maintained by humans, have the right to free speech? So far in the United States, the answer is ‘yes’ based on challenges to ‘speech’ such as Google’s search results. Can more-advanced technologies designed for speech (e.g., bots) be taken to task for libel and/or harassment?

“Our fundamental freedoms as individuals are beginning to converge with ‘smart’ technologies, and we’ll have to find solutions – both in terms of short-term fixes as well as long term policies – to deal with this problem.”

Limiting rights is not likely to reduce the
most dangerous fake and misleading information

Some respondents said that there is no way to stop highly motivated actors (whether they be human or bots) from routing around attempts to establish the real identities of all who post misinformation, or to limit, flag or block content.

Tom Rosenstiel, author, director of the American Press Institute, senior non-resident fellow at the Brookings Institution, commented, “I strongly doubt changes can be made in any structural way to reduce fake and misleading information. The platform companies may make some efforts, but those will collide to some degree with other values they have about open communities and to some extent collide with their revenue models, which favor intense engagement. There is a bias there toward strong emotion, both cheering and panicky. In theory, there could be regulatory efforts to blunt this, but in reality there is nothing to suggest any political environment in which such regulations would be enacted. We have been moving away from that now for 40 years, and the signs at the moment point toward that only continuing further. By the time the political system would be ready to address this problem, the problem would have changed. And absent that, the efforts by platform and distribution companies to police their own landscapes will be unable to keep up with those who want to deceive or misuse the web and those efforts will be muted in any case.”

A researcher investigating information systems user behavior replied, “This is a global problem, and First Amendment rights are seen differently in different parts of the world. Without some kind of high-quality, difficult-to-spoof identification system it’s unclear that the amount of misleading information can be reduced.”

An institute director and university professor said, “If there was a way [to reduce fake news], the publishers of the National Enquirer would have been put out of business a long time ago. Instead, they’re dining at the White House.”

A vice president for an online information company was among a number of anonymous respondents to this survey testifying on behalf of the right to some degree of anonymity: “If no one can speak anonymously, unvarnished truths may evaporate and be replaced with falsehood; people may be afraid to say what they think; freedom of expression suffers. On the other hand, unbridled freedom to speak may not solve the problem either because it may become harder and harder to distinguish truth from fiction… Allowing only strongly-attributed speech will drive some truths underground.”

Ian O’Byrne, assistant professor at the College of Charleston, replied, “Anonymity, and the ability of individuals and bots to routinely spawn new accounts, leads to a system where anyone can say anything while harming the civil liberties of others. Perhaps pursuing ‘real identities,’ while anonymous or ‘off-the-record’ accounts/messages are somehow discredited, may help. The end result would then [however] be a discourse system in which everyone is ‘real’ and verified and a second discourse system where all accounts are fake, unverified, et cetera. This ultimately would lead us back to our current situation.”

Some pointed out that most members of the general public have very little chance of remaining anonymous anyway, because their online movements are easily tracked by powerful corporations or governments whose databases of these activities can build a very recognizable online identity.

An anonymous respondent based in Europe wrote, “In a networked world, with all the digital government and business services, it might be difficult (if not impossible) to keep the same level of individual privacy, unless states and societies would grant citizens the choice of an alternative, fully or partially ‘analogue,’ way of living, regardless of the economic costs and inconveniences. Most likely the latter is not happening. At the same time, societies need to be cautious of any calls and statements that some rights have to be curtailed.”

A leading internet pioneer who has worked with the FCC, ITU, GE, Sprint and VeriSign commented, “Privacy is not possible.”

A professor and associate dean commented, “Our most important tool for the preservation of civil liberties is anonymity, and it is also our most destructive. We need a barrier between anonymity and identification whose porosity can be turned on and off. The legal system can do this: I post something terrible anonymously. A legal action is brought against me to determine who I am and investigate my claims. The courts will need to adjudicate such actions. Imperfect, of course, and likely to jam the courts, of course, but there is a way.”

A senior attorney for an online civil rights organization said, “People always want to clamp down on online anonymity whenever online speech gets messy. But anonymity isn’t the problem – our very own president repeatedly misleads in public – it’s not an attribution problem.”

James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “Even unfailingly attaching identity to statement doesn’t work to reduce fake and misleading information. We need to incentivize truthfulness. Reward people for civil discourse. Award points for fact-checking, and for withstanding those checks. A second option is legislation, criminalizing some kinds of speech. But that, clearly, is fraught with a host of First Amendment issues.”

Others responding about anonymity wrote:

• A consultant said, “Anonymity may need to be restrained. Accountability will be required.”
• A vice president for stakeholder engagement said, “Anonymity on the internet is a concept whose time has passed.”
• A professor and researcher noted, “To reduce disinformation with current techniques we need to identify the source of every piece of information through the provenance chain to the beginning. There will be no privacy or anonymity in such a society.”
• A North American research scientist wrote, “We need systems to require more transparency. The right to anonymity will be collateral damage.”
• A public-interest lawyer based in North America commented, “I don’t see a way to do this without dangerous degradation of the First Amendment.”
• A self-employed marketing professional observed, “1984 would result.”
• Ironically, an anonymous respondent commented, “I oppose anonymous and/or unattributed responses on the internet. Everyone’s participation should be identifiable as theirs.”

Some solutions may help limit misinformation
while preserving rights to some degree

Susan Etlinger, industry analyst, Altimeter Research, said, “I have to believe that any attempt to reduce misinformation must preserve civil liberties or it is simply a weapon available for use by the highest bidder.”

A number of people said there are potential remedies that may at least slightly improve the information environment while minimizing damage to civil liberties.

Bill Woodcock, executive director of the Packet Clearing House, wrote, “The intersection of cryptographic signatures and third-party reputation tracking may provide some relief, provided the reputation tracking is neither completely politically co-opted, as is the plan in China, nor trivially manipulated by hackers or astroturfers. The combination of PGP and blockchain will probably help a lot. There have been attempts like Diaspora to build a platform on which identities and speech could flourish globally, but I think, unfortunately, the age of Usenet has passed, and commercial speech is trumping both gratis and libre speech.”
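
A minimal sketch of the signature half of Woodcock’s combination, with an Ed25519 keypair from Python’s cryptography package standing in for a full PGP-plus-blockchain arrangement: the publisher signs each article, and readers or platforms verify the signature before consulting the publisher’s reputation.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair once; the public key is published
# (e.g., in a key directory or on a ledger) and each article is signed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
article = b"Headline: ... body text ..."
signature = private_key.sign(article)

# Reader/platform side: verify origin before weighing reputation.
try:
    public_key.verify(signature, article)
    print("signature valid; next, consult the publisher's reputation record")
except InvalidSignature:
    print("signature invalid; treat as unattributed content")
```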

Jonathan Grudin, principal design researcher, Microsoft, said, “We have always had tremendous quantities of incompatible information, such as conflicting scientific claims or different religions insisting their tenets are true and others false. The solution is to organize the information and its claims and identify its provenance. It will take some time to do this and for people to learn about it. I credit people with the ability to master this.”

Alexander Halavais, associate professor of social technologies, Arizona State University, said, “Implicit here is whether changes may be made in government restrictions, and I don’t think we want public truth police. Justice Brandeis already answered this for us: ‘no danger flowing from speech can be deemed clear and present, unless the incidence of the evil apprehended is so imminent that it may befall before there is opportunity for full discussion. If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.’ Some of that may consist of meta-speech. Certain people have been better and more consistent in telling the truth over time, and I suspect we will find new ways of identifying them. While it may seem to be a problem of ‘turtles all the way down,’ and certainly efforts to assail the neutrality of, e.g., Snopes suggest that this will be a problem, there are counter-examples in social media – from Wikipedia’s NPOV (neutral point of view) efforts to Slashdot’s meta-moderation – that suggest that there are ways of creating the equivalent of the Better Business Bureau for truth claims.”

Veronika Valdova, managing partner at Arete-Zoe, noted, “Freedom of speech is not an absolute right. Unprotected speech includes obscenity, child pornography, fighting words and true threats. Particular categories of speech can also be subject to criminal or civil suit, such as disclosure of classified information, the disclosure of personally identifiable health information, defamation, libel or slander. Commercial free speech is yet another topic, fiercely disputed in, e.g., pharmaceutical advertising (see U.S. vs. Caronia). False testimony is not protected either. Impersonation, threats and hate speech and revenge reputation damage including revenge porn are currently difficult to prosecute but that may change. German authorities are taking the protection of their information environment very seriously, due to historical experience with the effect of Nazi propaganda.”

The technology editor for one of the world’s most-respected news organizations commented, “Free speech may have to be curtailed in some specific cases (in Germany Holocaust denial is illegal, for instance).”

Charles Ess, a professor of media studies at the University of Oslo, wrote, “Efforts to reduce fake and misleading information will, among other things, have to severely reduce the possibilities of anonymous communication online: this means a reduction in privacy and at least anonymous forms of free expression. The counterweights to ensure that these restrictions in turn do not become misused by those in power include far more robust educational efforts aimed at helping citizens better understand and develop the basic skills and capacities – the virtues – required for not only effective political discourse online, but also democratic citizenship more broadly. We need nothing less than a new enlightenment, one that sustains classic Enlightenment-Democratic theories and norms, but as transformed as needed for life in a world dominated by digital communication technologies.”

Susan Price, lead experience strategist at Firecat Studio, wrote, “Reliable attribution is the way forward; verification of news by a number of trusted sources can be the basis of crowdsourced verification, similar to movements of schools of fish or flocks of birds. Individuals need the ability to closely control the release of information of their own data through a human API, and this could form the basis of a workable compromise between privacy and transparency. Anonymity must be available to protect against fascist control.”

Justin Reich, assistant professor of comparative media studies, MIT, noted, “Most forms of fake news are squarely protected under the First Amendment. Social censure will stop fake news, just like social censure has had some effect in curbing common racist, sexist, homophobic speech without curtailing rights.”

Sandro Hawke, technical staff, World Wide Web Consortium, noted, “The only ‘rights’ that have to be curtailed are the ‘rights’ to be anonymous and to lie without consequence, which have never been accepted as rights. There is a value to anonymous whistleblowing, but that has to be managed very carefully for the good of society, and should not be seen as a civil right. There is also a value to being able to speak to millions of people at once, but again, that’s not a right. Our best approach will be to consider how small groups of humans handle fake and misleading information. In these groups, where everybody knows everybody else, people can hold each other to account for gossip, slander, and swindling. In our headlong dive into the internet (and even radio and television, as we relaxed regulation), we left behind many of our tools for managing information quality. It will take some work, but we can (and must) bring them back.”

Matt Armstrong, an independent research fellow working with King’s College, formerly executive director of the U.S. Advisory Commission on Public Diplomacy, replied, “No rights need to be curtailed. This question pretends we can work around the consumer. The consumer/producer/repeater all must be called out, shown as the agents/naïfs they are. Reducing the demand and success (thus ‘profitability’) will reduce the fake and misleading information. One tactic is to shame the consumers and repeaters, but it requires education and support by leaders from civil society, including education and politics.”

Michael R. Nelson, public policy executive with Cloudflare, replied, “We can definitely preserve civil liberties unless we intend to eliminate all misinformation. Doing that would require eliminating anonymity online in order to deploy effective reputation systems, which would highlight misleading or bogus information. But the end of anonymity would limit free speech, particularly in countries where repressive governments censor or arrest journalists or bloggers.”

Jim Rutt, research fellow and past chairman of the Santa Fe Institute and former CEO of Network Solutions, replied, “A real name ID policy rigorously enforced is the strongest relatively easy play. Unfortunately, this has some negative impact outside of The West, and perhaps in it as well.”

Jennifer Urban, professor of law and director of the Samuelson Law, Technology & Public Policy Clinic at the University of California-Berkeley, said, “We have existing legal models that deal with false information (fraud, defamation) and with other types of harmful information (e.g., harassment). What we do not yet have is a way to scale these rules to the internet. But if we can develop that, then yes, we can better reduce false information while leaving in place the current protections for civil liberties. We should not assume that we have to curtail civil rights to address this problem. It seems unlikely that curtailing civil rights would work – see every authoritarian regime that struggles with activists’ commentary – and there would be a greater loss.”

Jerry Michalski, futurist and founder of REX, replied, “We may need stronger identity authentication for people posting information, which will reduce the rights of people who want to remain anonymous. But I think we can find solutions that don’t do much more harm than that.”

Greg Lloyd, president and co-founder of Traction Software, wrote, “First, reduce economic incentive to spread fake news by pressuring advertisers (social, legal). Some public forums may require authentication to participate – or even read. New regulation based on ‘false advertising’ principles might work. Individuals’ personal rights can be preserved, with existing libel, hate speech, threat, blackmail, et cetera, laws.”

Irene Wu, adjunct professor of communications, culture and technology, Georgetown University, said, “We need media leaders in civil society, business and government to provide information on topics they want to hear about in a way that appeals to them in a manner that demonstrates the credibility and validity of the reporting. Maybe it’s time to make more explicit the number of sources a journalist is using to write a report – two if on the record (with their credentials), one if off the record, corroborated by statistics from which institution. Can you put a badge on an article that lists these, like a nutrition label on a cereal box? I think the good reporting needs to be highlighted more. Curtailing civil liberties does nothing to improve the quality of public discourse.”

Steve McDowell, professor of communication and information at Florida State University, replied, “The tasks and social roles of journalists as trusted reporters and commentators may become more important, in that we might also ask them to provide an assessment of the quality of information or claims and statements in stories. This information about procedures should be provided to the public. If technical means or organizational procedures are adopted by social media sites to filter or block information, these sites should be transparent about how their automated systems or organizational procedures operate. Just as with child protection web-blocking software, there will be over-blocking based on keywords, or under-blocking. Software will have ongoing learning capabilities built in, but may be behind human actors with specific agendas.”

Filippo Menczer, professor of informatics and computing, Indiana University, noted, “Trusted news standards and their technological implementation and enforcement must strike a balance between free speech and the right not to be deceived. The two can coexist.”

Laurel Felt, lecturer at the University of Southern California, wrote, “Through [many] mechanisms… users can choose to receive information about a source’s trust rating. In terms of mainstream broadcast news, perhaps a fee can be levied when an organization shares uncredible information.”

Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “Technology can double-check information. Academic systems already check attributions or plagiarism. Computer systems can highlight plagiarism, quotes that are wrong, incorrect facts and correct details. Civil rights do not have to be curtailed. Websites can choose to not publish erroneous information. Legal suits can be allowed against websites that publish erroneous information. Individuals can choose to refuse to use websites that provide erroneous information. I currently refuse to watch news channels with a high bias. I do not buy products from companies that sponsor these news channels.”
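
A rough sketch of the automated double-checking Hares mentions: flagging passages that overlap a reference source through shared word n-grams, the basic mechanism behind plagiarism and attribution checkers. The sample texts and the choice of n are illustrative.

```python
def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, reference, n=5):
    """Fraction of the candidate's n-grams found verbatim in the reference."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(reference, n)) / len(cand) if cand else 0.0

reference = "the quick brown fox jumps over the lazy dog near the river bank"
candidate = "as reported the quick brown fox jumps over the lazy dog today"
print(f"overlap: {overlap_score(candidate, reference):.0%}")
# A high overlap suggests copied or quoted text that should carry attribution.
```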

Paul Jones, director of ibiblio.org, University of North Carolina-Chapel Hill, suggested, “Certification of publishers, but not licensing, using technologies such as blockchain, could make brands responsible to the public without curtailing rights Americans hold dear. Responsibilities should be the focus rather than *only* rights.”

Larry Keeley, founder of innovation consultancy Doblin, commented, “Imagine a world where information shared digitally has an embedded bar code, and when you read it, it takes you to the audit data for the information being shared, revealing the total confidence interval and all the sources, each with their audit information and confidence intervals revealed. This happens all the time now for some fields. It is inevitable for the best information systems too.”
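
A small sketch of what Keeley’s embedded “bar code” might carry: a machine-readable provenance record keyed to a hash of the claim’s exact wording, listing sources and a stated confidence figure. All fields, URLs and numbers here are hypothetical.

```python
import hashlib
import json

claim = "Average global temperature has risen about 1.1 C since pre-industrial times."
record = {
    # The hash ties the audit record to this exact wording of the claim.
    "content_sha256": hashlib.sha256(claim.encode()).hexdigest(),
    "sources": ["https://example.org/dataset-a", "https://example.org/study-b"],
    "confidence_interval": "90%",
}
# The encoded record is what a printed bar/QR code would carry alongside the text.
print(json.dumps(record, indent=2))
```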

Tom Birkland, professor of public policy, North Carolina State University, commented, “Some changes in civil law to make it easier to prosecute malicious falsehoods – such as the Pizzagate problem – might cause media companies and producers to more carefully vet their information. And since access to platforms like Facebook and Twitter is not a fundamental right, the owners of these networks should be more responsible for the worst kinds of misinformation that are posted on these services.”

A professor and researcher of American public affairs at a major university replied, “State pressure on foreign actors can help preempt attacks by bots. And media institutions can cooperate to ensure that fake information doesn’t overly influence the tone and coverage of political campaigns.”

A research associate at MIT said, “Perhaps libel laws need to be strengthened. Some may view this as curtailing First Amendment rights. However, fraud is distinct from the right to be wrong. That is where law and society need to and will innovate. One more thought: calling an outlet ‘fake news’ should be considered slander or libel if the person using the phrase knows that the news is not actually fake.”

An author and journalist based in North America suggested, “Revise the Communications Decency Act and apply libel laws to online communication. The risk of court remedy has always been a good incentive to check your facts before hitting send. If this practice chills the lies, so be it.”

Stephen Bounds, information and knowledge management consultant, KnowQuestion, said, “Societies already mandate information sharing or withholding in certain circumstances. Sometimes criminal or private information may not be legally shared. In other circumstances, information must be disclosed, as when a person has reasonable suspicions of abusive activity or to allow shareholders to make informed investment decisions. These rules are an attempt to ensure justice and fairness for everyone, where there are incentives for people to act in selfish ways. Mostly laws already exist to prohibit the kinds of acts broadly covered by ‘fake and misleading information.’ The problem is detection and enforcement.”

Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “Changes can be made to reduce fake and misleading information by massively increasing public funding of academia, nonpartisan expert agencies, holding extensive intellectual events, and so on. For example, there needs to be far more ability to have a financially secure career as a public intellectual, without needing to be an attention-seeking social media hustler or some sort of corporate propagandist. Just as a starting point, let’s have an American implementation of a strong independent and well-funded BBC News, before any thought of curtailing rights. To put it very simply, corporations that make money selling eyeballs to advertisers don’t, as a rule, care much about what goes into getting those eyeballs in front of the advertising. If market values are the only things that matter, the results are going to be dismal.”

Internet platforms seen as a potential
help and as a big part of the problem

A partner in a services and development company commented, “No rights need to be curtailed, but some rights need to be formulated more clearly and defended more vigorously… Internet intermediaries (especially search engines, social networks and advertising exchanges) will have to limit or stop certain problematic practices and/or support measures that mitigate the loss of freedom suffered by their users.”

Evan Selinger, professor of philosophy, Rochester Institute of Technology, wrote, “To comment on but a fraction of what’s at stake in this question, the balance between reducing fake information and preserving civil liberties is a contextual issue. In the context of corporate platforms, like Facebook, there are two important things to keep in mind. First, the idea that Facebook is merely a conduit for user-generated communication has outlived its expiration date; it’s laughably implausible. Second, normative consequences follow from acknowledging that Facebook’s curatorial power is a mechanism of techno-social engineering that affects what people see, believe and think. For starters, since that power is deployed through algorithmic governance, there are good reasons to believe that greater transparency should exist and less weight should be given to the ideal that valuing corporate secrecy requires making black-boxed software sacrosanct.”

Marina Gorbis, executive director of the Institute for the Future, said, “What is driving the proliferation of misleading and sensationalist information are the business models behind the main media channels. Reliance on advertising dollars and the drive to attract ‘eyeballs’ create a media environment that is driven not by the public interest but by financial goals. Today Wikipedia, a nonprofit, commons-based platform, is the most unbiased and well-functioning media outlet we have. There are lessons from Wikipedia in how we need to evolve our media environment.”

Brian Harvey, teaching professor emeritus at the University of California-Berkeley, said, “Anonymity is a prerequisite for non-fake news; think Pentagon Papers, Deep Throat, NSA malware. The only ‘right’ whose curtailment would help the quality of public information would be the right of accumulation of capital.”

An internet pioneer and principal architect in computing science replied, “The internet’s advertising oligopoly has profited greatly from the distribution of fake news, putting the rest of us at risk. Advertisers have started the counter-revolution by organizing a boycott. We do not need to curtail civil liberties for that boycott to succeed – all we need to do is to make distribution of fake news less profitable.”

Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “The most crucial ‘right’ that must be curtailed has been, and will be, the ability of any one entity (governmental or corporate) to control too much of the information media. We have always had fake and misleading information. Its only redress is to assure that others have equivalent opportunities to respond in timely and robust ways, to more or less the same audience.”

Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University, said, “Facebook and Google have the biggest role to play here because they not only make it easier to find misinformation, but also (inadvertently) help incentivize it through ad payments and clicks. As the old saying goes, follow the money. While this won’t stop state-based actors, cutting off advertising revenues for egregious sources of misinformation would severely undercut incentives for a non-trivial portion of fake news. The challenge, of course, is how to draw clear lines as to what is and isn’t fake, and to have a fair process that doesn’t harm potentially legitimate sources of information.”

A professor of sociology based in North America said, “Powerful media companies can help filter valid content. If trust is restored to news sources, I don’t expect personal liberties to suffer as a result. Social media will continue to allow people to share ideas, whether true or false.”

Constance Kampf, a researcher in computer science and mathematics, said, “People need to be continually developing their knowledge, and fake and misleading information are a challenge that we need civil society to overcome. No technology can think for us, and no platform can replace critically engaged citizens. That said, a look into the fourth estate and the state of journalism today, as well as the dominance of Google and YouTube as search engines that deliver information and use algorithms that affect access to information, does call for rethinking. I think Google’s right to experiment with algorithms affecting access to internet information should be publicly and critically examined, given its role in directing the public’s attention. Is it appropriate for these algorithms to be privately controlled? So tech companies’ rights should be examined, but freedom of speech for individuals should remain a priority – with the same level of responsibility that the U.S. legal system currently assigns for slander and endangerment.”

Henning Schulzrinne, professor and chief technology officer for Columbia University, said, “Private platforms can strengthen the ability to determine the source of information and its trustworthiness, e.g., by scoring each source’s lifetime factual truth average. Best practices for corrections and challenges, similar to the responsible-disclosure model in security, may work. There might be a way to tie this to campaign finance reform, to the extent that a candidate solicits or pays for fake news.”
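Schulzrinne’s “lifetime factual truth average” can be pictured as a simple running tally per source. The short Python sketch below is purely illustrative – the source names, the verdict data and the smoothing prior are assumptions made here for the example, not part of his proposal:

from collections import defaultdict

# Hypothetical fact-check verdicts as (source, claim_checked_out) pairs.
verdicts = [
    ("example-news.com", True),
    ("example-news.com", True),
    ("example-news.com", False),
    ("rumor-mill.example", False),
    ("rumor-mill.example", False),
]

def truth_scores(verdicts, prior_true=1, prior_total=2):
    """Return each source's lifetime factual-truth average.

    A Laplace-style prior (1 true out of 2) keeps a single bad
    article from zeroing out a brand-new source's score.
    """
    tallies = defaultdict(lambda: [0, 0])  # source -> [true_count, total_count]
    for source, is_true in verdicts:
        tallies[source][0] += int(is_true)
        tallies[source][1] += 1
    return {src: (t + prior_true) / (n + prior_total)
            for src, (t, n) in tallies.items()}

print(truth_scores(verdicts))
# {'example-news.com': 0.6, 'rumor-mill.example': 0.25}

The smoothing prior addresses one practical wrinkle of any such scheme: without it, a new outlet’s score would swing to 0 or 1 on its very first fact-checked article.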

Ian Peter, internet pioneer, historian and activist, observed, “There is an assumption here that the spread of information via social media could somehow be curtailed or controlled. But when things go viral, many people unwittingly contribute to the spread of fake news. Add to this the use of ‘bubbles’ and algorithmic feeds that send you what algorithms suggest you might want to believe; you then have a messy situation. No laws ever stopped gossip, and no laws are likely to curtail fake news.”

A technical evangelist based in Southern California said, “The best we can do is to flag/annotate possible problems with issues/reasons, not filter.”

A professor and expert in technology law at a West Coast-based U.S. university said, “No, there is no way to avoid [infringement of civil liberties]. Intermediaries will figure this out, consistent with rights to free speech and press.”

But a fellow with an international privacy rights organization disagreed, saying, “I don’t believe the solution to the problem of fake news is through censorship, specifically automated censorship through the use of opaque algorithms [by platform companies like Facebook and Google]. That is because those solutions only get us closer to a ‘thought police’ and a curtailing of freedom of expression and privacy online. Instead we must realise that the problem will not be solved through technological means, as it is not a technological problem.”

A North American research scientist observed, “I for one don’t want Facebook deciding what’s true. I can already hear the screams of ‘censorship!’ That’s where we are heading if we want systems to think for us.”

A professor and researcher based in North America noted, “Yes. I don’t know why civil liberties would need to be curtailed. Most of the misinformation I see is circulated by exploiting the affordances of social media systems with the tacit support of private industry. For-profit corporations such as platform providers should be regulated and held accountable but the free-speech rights of individuals need not be curtailed.”

A senior international communications advisor commented, “We might want to develop something along the lines of a ‘truth in advertising’ model and then legislate that Google and all search engines be accountable for what they distribute. The historical record tells us that consent is often manufactured – including consent for repression. The most effective form of brainwashing is through mass media.”

An editor and translator commented, “All the systems and measures meant to prevent misinformation will be used by corporations and governments to suppress any type of undesired voices. Even now, the anti-hate-speech measures of major social media platforms are mainly used to remove content posted by certain minorities or dissidents. Freedom of expression and the right to information are the first rights to go.”

Jonathan Brewer, consulting engineer for Telco2, commented, “Any changes in social media platforms attempting to reduce fake and/or misleading information will only be as effective as local markets allow them to be.”

An adjunct senior lecturer in computing said, “Most rights have been curtailed but not by enforcement. The young people most likely to protest have been diverted by social media pulp. Older dissidents protest through social media but are drowned out by the ‘buzz’ of inconsequential pulp. Any popular movement that gains significant general exposure in the media is soon forgotten, replaced by something horrible occurring around the world.”

A distinguished engineer for a major provider of IT solutions and hardware commented, “This is the crux of the problem: It is not possible to censor sources or articles of news without fundamentally infringing on the right to freedom of expression. My truth may not match yours, but that doesn’t mean it is wrong... Human senses are imperfect, and what I perceive when looking at something may be very different from what you perceive. That doesn’t make either of us wrong.”

Jeff Jarvis, professor at the City University of New York Graduate School of Journalism, commented, “Free speech includes the right to edit, to choose what one shares. So I see no threat to the civil right of free speech in encouraging both publishers and platforms to account for – as Google has said – reliability, authority and quality in their ranking and distribution of information and content. I see no problem in discouraging advertisers from financially supporting misinformation and fraud. And I see no problem in encouraging the public to share responsibly.”

John Wilbanks, chief commons officer, Sage Bionetworks, replied, “It’s unwise to conflate how private companies monetize speech with ‘the public sphere.’”

Create resilience and embed 
critical thinking rather than 'trying to destroy all lies'

Many respondents urged that each individual must be encouraged and educated in such a way that they become responsible for that which they create, share and take to be “truth.”

Patricia Aufderheide, professor of communications, American University, said, “The basic problems are not at the level of the utterance itself but all the political, diplomatic, regulatory and commercial incentives to mislead.”

Serge Marelli, an IT professional who works on and with the Net, shared a few rights that might be curtailed, writing, “The right to lie to oneself, the right to be stupid. The right to (choose to) keep believing in something/anything false *despite facts*. The right to believe in ‘alternate facts.’ The right to believe in ‘creationism.’ The right to mix up fiction and reality.”

Esther Dyson, a former journalist and founding chair at ICANN, now a technology entrepreneur, nonprofit founder and philanthropist, commented, “The most important thing is effective education for all. Create resilience to lies rather than trying to destroy all lies.”

Wendy Seltzer, strategy lead and counsel for the World Wide Web Consortium, replied, “We can change how we react to information. That – over the long term – can change how misinformation spreads.”

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “A new social compact has to be arrived at; not being lied to has to become a right on a par with all others. Curtailing rights will not work. A better-educated society is the only way ‘good’ actors may force ‘bad’ actors to limit their malfeasance.”

J. Nathan Matias, a postdoctoral researcher at Princeton University, previously a visiting scholar at MIT Center for Civic Media, said, “The most powerful, enduring ways to limit misinformation expand the use of civil liberties by growing our collective capacities for understanding. In my research with large news-discussion communities, for example, encouraging people toward critical thinking and fact-checking reduced the human and algorithmic spread of articles from unreliable sources.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “There should be quality control before launching services and apps, and due-diligence reviews about possible effects, both positive and negative. Of course unexpected effects will most likely happen, but at the moment there are way too many services and applications launched without any consideration of what their impact might be. No rights will have to be curtailed. It is about education and being responsible internet users as well as responsible content producers, applying investigative journalism best practices.”

Bob Frankston, internet pioneer and software innovator, said, “We need to make critical thinking an essential part of our culture, push back on our worship of ‘winners’ and get a better understanding of the importance of external factors (luck).”

Pamela Rutledge, director of the Media Psychology Research Center, urged, “We have to arm people with media literacy and the technological skills to navigate the digital world and overcome fear of information systems. People give away freedom when afraid. Once given away, it’s hard to get back.”

Richard Jones, a self-employed business owner based in Europe, said, “Using selective, out-of-context information to manipulate is embedded in human nature. It is not new… The idea of privately sieving ideas before publicising them has not been adopted in the behaviour of many young people. And amongst all groups, instant communication is used to attempt to recruit for causes or disseminate propaganda. Awareness needs to be improved, gullibility reduced. Religious and political texts have always sought to conscript people’s minds. The Gideons placed Bibles in hotels, Jehovah’s Witnesses knocked on doors, the end of the world was announced on sandwich boards on Oxford Street. The difference is in volume, accessibility and gullibility.”

An anonymous research scientist based in North America wrote, “The solution is not to curb information; instead it is to create a stronger democracy in which our bonds to each other do not rely solely on fragmented information communities but on a stronger civic infrastructure whose relational bonds counteract the power of online misinformation.”

A graduate researcher at Northwestern University wrote, “We can preserve civil liberties if we improve education around these issues, because that will enable community-based regulation of online spaces.”

Paul Gardner-Stephen, senior lecturer, College of Science & Engineering, Flinders University, commented, “Perhaps the most effective measure would be to encourage critical thinking among the population, and to reverse the anti-science movements that have fostered the hyper-subjectivism that has allowed fake news to flourish. In practical terms: The more educated the population, the harder it is to dupe. Legislative measures, such as Germany’s recent move to penalise social media platforms for failing to remove obviously fake news, will help somewhat, but they are only short-term solutions and form part of the arms race. The main long-term advantages lie in removing cost and other barriers to improving educational attainment, and in finding ways to de-escalate the partisanship that has led democracies to these naked attempts to maintain power at all costs, rather than the entire political spectrum in these nations accepting that periods in opposition are a normal and healthy part of democracy.”

A chief executive officer for a research consultancy said, “A Google search 10 years ago turned up original documents; today it is almost impossible to peel the layers away. The flood of information has dumbed down the ability to learn or even see new things. When society becomes more curious and more active in pursuit of a better future and a more engaging social contract, what’s shared will be more powerful. Unfortunately this starts at the top of the political spectrum.”

Hjalmar Gislason, vice president of data for Qlik, noted, “Educating people on information literacy and facilitating systems that help rate or rank the accuracy of information found online are more important. In other words: People should have the right to publish nonsense, but other people should also have available to them tools to understand the veracity of information they come across.”

Geoff Scott, CEO of Hackerati, commented, “Fake and misleading information is being used to degrade civil liberties. The only way to reduce fake and misleading information and restore civil liberties is to ensure that the vast majority of people do not believe it.”

Deirdre Williams, retired internet activist, replied, “Fake and misleading information is powerful because of the current capability, provided by communications technology, for it to propagate. The solution, if any, depends on re-creating human trust and human trust networks, and the loss of the ‘lazy right’ to consider that the headline from a ‘reliable’ source is the truth, without trying to do any supporting research.”

Paul Saffo, longtime Silicon Valley-based technology forecaster, commented, “This has always been a balancing act and the future will be no different. Recall (U.S. Supreme Court Justice) Oliver Wendell Holmes’ famous dictum about shouting ‘fire’ in a crowded theater. This time we are balancing on a digital razor’s edge, where acts that are innocuous in the physical world have outsized consequences in cyberspace. We all need to remember that with rights come responsibilities, and the more potent the right, the greater the burden to behave responsibly.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “We can reduce the influence of fake news by teaching digital literacy to individuals, so that they can make better decisions.”

Charlie Firestone, executive director, Aspen Institute Communications and Society Program, commented, “Free societies have always faced fake or false information. Actions such as curtailing advertising to such statements and increased media literacy should help in bringing about the desired result without curtailing liberties.”

Adrian Schofield, an applied research manager based in Africa, commented, “The notion of civil liberties is false. No one person can have rights because they come at the expense of another person’s rights. There should be no rights, only responsibilities. If each one of us can be held accountable for our own behaviour, there will be no victims. The passive majority lives in this fashion.”

David J. Krieger, director of the Institute for Communication & Leadership, Lucerne, Switzerland, commented, “We must move away from ideas of privacy as secrecy, anonymity and disguise and create trust-based networks in order to maintain freedom, autonomy and human dignity in the digital age.”

Riel Miller, an international civil servant who works as team leader in futures literacy for UNESCO, commented, “If a sucker is born every minute, does that mean a warning label needs to be smacked on the false goods every minute? The challenge is to equip the user; they must learn to fish.”

The information explosion is so
overwhelming we need to rethink things

A researcher at a European institute of technology replied, “Controlling information doesn’t appear to be a promising solution. Instead we need to find strategies to cope with the new situation in a way that is compatible with our understanding of society (including the preservation of civil liberties). In the first place this means that society needs to be educated, especially with regard to media literacy and critical thinking. However, very negative behavior should indeed be sanctioned without touching free speech too much (e.g., calls to harm human beings should be punished).”

A futurist based in North America said, “It is unrealistic to think that fake and misleading information can be reduced. A new civil contract might help, but it is unlikely that uninformed/misled citizens will be cooperating in generating reliable information. Most of them just prefer sensational information – real or not.”

A professor of media and communication based in Europe said, “Each digital society has to rearticulate its civil liberties in the face of new technologies; such recalibration requires a systematic rearticulation of legal frameworks that currently are not prepared for algorithmic-based clashes of values and norms.”

Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., wrote a lengthy response: “We now need an Information Bill of Rights with international signatories. We need to take into account the ubiquity of personal information and tracking; we need to institutionalize information watchdogs who will review collection and revelation standards from both programmed AI and live captures. We are in a new world and we need new-world tools and standards to establish protocols and protections. Included in the Information Bill of Rights should be such protections as: freedom for the press’s sources, named and unnamed; the right to both protect and disseminate information for the public good; the right to know who is collecting data on you, or anyone, and the right to see all levels of that data – to name a few. The democratization of data brings with it the responsibility to establish widely adopted governance protocols.

“According to EMC: ‘By 2020 [the digital universe will contain] nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes.’ While these sums are mind-boggling, even more boggling is that we are letting this information ship go without steering, without a rudder. We can preserve civil liberties once we establish a set of standards for collection, transparency, transmission and other key issues… We have taken a laissez faire attitude to one of the most powerful forces ever unleashed by humans: exponentially multiplying information and information collection and manipulation systems.

“Once we have formulated – and there is broad adoption of – an Information Bill of Rights, the next step to reduce fake and misleading information is to educate as well as inform. Free citizens in the new information environment have a unique imperative: we all must be information omnivores, because we now see clearly that information does not have a neutral intent; bad actors are using misinformation to effect their agendas. Just as we educate children about personal hygiene and proper nutrition from an early age, we must teach our children – and then insist that adults in positions of authority do the same – to balance information sources and facts from a broad stream of media. Narrowcasting is the enemy of freedom. Broadcasting, and broad thinking, will preserve democratic perspectives and voices.”
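As a quick arithmetic check, the two figures in the EMC quote above are mutually consistent. If the digital universe doubles every two years and reaches 44 zettabytes (ZB) in 2020, the implied size in year t is

$$D(t) = 44\,\mathrm{ZB}\cdot 2^{(t-2020)/2} \quad\Rightarrow\quad D(2013) = \frac{44}{2^{7/2}} \approx 3.9\,\mathrm{ZB},$$

in line with the roughly 4.4 ZB the same EMC/IDC study is widely cited as estimating for 2013.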

A selection of additional comments by anonymous respondents:

• “Civil liberties, especially First Amendment rights, are the most threatened.”
• “Economic incentives for fake and misleading information have to be curtailed.”
• “If it was 35 years ago and we prohibited corporations from profiting via news as entertainment, we would be in a less sensationalist, ad-driven situation.”
• “Whatever we develop to ameliorate a problem can and will be turned against us. I have no problem with labeling, with debunking, with doing our best to maintain standards, but the emphasis should be on critical assessment and media literacy, not prohibition.”
• “Data is raw. How people tailor, shape and use it will be never-ending. Teaching people digital literacies and cognitive thinking will be the only way to sort through data inputs… There is no success to be found in panopticons.”
• “Fines for flat-out lies? Would that work?... But ideological ‘truth’ or spin or perspective or interpretation – who would determine that?”
• “Democracies endeavour to allow free speech but in practice find they have to set some limits on advocates of violence.”
• “Fake news is being used to advance an agenda of reduction of free speech.”
• “The only real ‘thing’ that can be done is to eliminate monopolies of information and stop the collection of personal data, much like we once stopped monopolies of rail.”
• “Driving propaganda out of our society is best undertaken not by attacking ‘fake news’ directly but by decreasing the susceptibility of everyday people to hyperbole, lies and innuendo.”
• “Identify a rating system for reports, writers and media outlets.”
• “A stronger focus on election reform and good governance would reduce a lot of the motivations to spread harmful information, without harming civil liberties.”
• “It has to be social pressure and not law that demands better information. People must care about being emotionally hijacked by misleading information.”
• “Ha! We have no rights if we are using a private service like Facebook or Twitter. They can do whatever they want.”
• “Users’ browsers will give them warnings when the source of information cannot be validated.”
• “The danger in undoing civil liberties is in thinking that by restricting them we are somehow going to make the world a safer and more trustworthy place.”
• “Education systems should be designed so students get authentic experience in rational decision making and seeking out a multitude of voices on any issue. The cultural norm of being a good thinker needs to be reestablished.”

To read the next section of the report - What Penalties Should There Be? - please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_the_information_environment_F4.xhtml

To return to the survey homepage, please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_the_information_environment.xhtml

To read anonymous responses to this survey question with no analysis, please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_anon.xhtml

To read credited responses to the report with no analysis, please click here:
http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_credit.xhtml

About this Canvassing of Experts

The expert predictions reported here about the impact of the internet over the next 10 years came in response to a question asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 2 and August 7, 2017. This is the eighth “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 8,000 experts and members of the interested public to share their opinions on the likely future of the Internet and received 1,116 responses; 777 participants also wrote an elaborate explanation to at least one of the six follow-up questions to the primary question, which was:

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

Respondents were then asked to choose one of the following answers and follow up by answering a series of six questions allowing them to elaborate on their thinking:

The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.

The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.

The six follow-up questions to the WILL/WILL NOT query were:

  • Briefly explain why the information environment will improve/not improve.
  • Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?
  • What are the consequences for society as a whole if it is not possible to prevent the coopting of public information by bad actors?
  • If changes can be made to reduce fake and misleading information, can this be done in a way that preserves civil liberties? What rights might be curtailed?
  • What do you think the penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in taking steps to prevent the distribution of false information?
  • What do you think will happen to trust in information online by 2027?

The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous seven “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in the global internet policy community and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunications Union (ITU), Association of Internet Researchers (AoIR) and Organization for Economic Cooperation and Development (OECD).

We also invited a large number of professionals, innovators and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; the media and media-watchdog organizations; and think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications), as well as globally located people working with communications technologies in government positions; top universities’ engineering/computer science departments, business/entrepreneurship faculty, and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the canvassing questionnaire link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.

About 74% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 39% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An additional 22% specified their primary area of interest as “other.”

More than half the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their Internet expertise or background, and this was noted where relevant in this report.

Here are some of the key respondents in this report (note that position titles and organization names were provided by respondents at the time of the canvassing and may not be current):

Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the Program on Information Science at MIT; Robert Atkinson, president of the Information Technology and Innovation Foundation; Patricia Aufderheide, professor of communications, American University; Mark Bench, former executive director of World Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs; danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures; Tim Bray, senior principal technologist for Amazon.com; Marcel Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communication; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Johanna Drucker, professor of information studies, University of California-Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at ICANN; Glenn Edens, CTO for Technology Reserve at Xerox PARC; Paul N. Edwards, fellow in international security, Stanford University; Mohamed Elbashir, senior manager for internet regulatory policy, Packet Clearing House; Susan Etlinger, industry analyst, Altimeter Research; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Mark Glaser, publisher and founder, MediaShift.org; Marina Gorbis, executive director at the Institute for the Future; Jonathan Grudin, principal design researcher, Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of information systems; Helen Holder, distinguished technologist for HP; Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University; Christian H. Huitema, past president of the Internet Architecture Board; Alan Inouye, director of public policy for the American Library Association; Larry Irving, CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism; Christopher Jencks, a professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making and recommender systems, Clemson University; James LaRue, director of the Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, Web consultant, developer and activist; Mark Lemley, professor of law, Stanford University; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia Livingstone, professor of social psychology, London School of Economics; Alexios Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at Northeastern University; Giacomo Mazzone, head of institutional relations for the World Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor, Oxford Internet Institute; Alex ‘Sandy’ Pentland, member, US National Academies and World Economic Forum Councils; Ian Peter, internet pioneer, historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab; Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo, longtime Silicon Valley-based technology forecaster; David Sarokin, author of “Missed Information”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Jack Schofield, longtime technology editor, now a columnist at The Guardian; Clay Shirky, vice provost for educational technology at New York University; Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig Siegele, technology editor, The Economist; Evan Selinger, professor of philosophy, Rochester Institute of Technology; Scott Spangler, principal data scientist, IBM Watson Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard D. Titus, CEO for Andronik; Joseph Turow, professor of communication, University of Pennsylvania; Stuart A. Umpleby, professor emeritus, George Washington University; Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia; Tom Valovic, Technoskeptic magazine; Hal Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Kevin Werbach, professor of legal studies and business ethics, the Wharton School, University of Pennsylvania; John Wilbanks, chief commons officer, Sage Bionetworks; and Irene Wu, adjunct professor of communications, culture and technology at George Washington University.

Here is a selection of institutions at which respondents work or have affiliations:

Adroit Technolgic, Altimeter Group, Amazon, American Press Institute, APNIC, AT&T, BrainPOP, Brown University, BuzzFeed, Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic Design, Center for Democracy/Development/Rule of Law, Center for Media Literacy, Cesidian Root, Cisco, City University of New York Graduate School of Journalism, Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data & Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy Information Center, Farpoint Group, Federal Communications Commission, Fundacion REDES, Future Today Institute, George Washington University, Google, Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard Business School, Hewlett Packard, Hyperloop, IBM Research, IBM Watson Health, ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network, Internet Engineering Task Force, Internet Society, International Telecommunication Union, Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global, LearnLaunch, LMU Munich, Massachusetts Institute of Technology, Mathematica Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang Technological University, National Academies of Sciences/Engineering/Medicine, National Research Council of Canada, National Science Foundation, Netapp, NetLab Network, Network Science Group of Indiana University, Neural Archives Foundation, New York Law School, New York University, OpenMedia, Oxford University, Packet Clearing House, Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network, Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock Network, Telecommunities Canada, Terebium Labs, Tetherless Access, UNESCO, U.S. Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles campuses), University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester Polytechnic Institute, World Broadcasting Union, W3C, Xerox PARC, Yale Law.
