Should penalties for false information be meted out? By whom?
Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center’s Internet, Science and Technology Project in summer 2017 to share their answers to the following query; respondents were nearly evenly split on the question, 51% to 49%:
What is the future of trusted, verified information online? In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?
This page holds a full analysis of the answers to the fourth of five follow-up questions:
What do you think penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in taking steps to prevent the distribution of false information?
Among the key themes emerging from 1,116 respondents’ answers were:
– Corporate actors profiting from information platforms should assist in improving the information environment.
– Individuals and cultures must do a better job of policing themselves; it is best to generally avoid any sort of added regulatory apparatus.
– Governments should not be allowed to take any sort of oversight role.
– Some sort of regulation should be applied, updated or adapted to help somewhat ameliorate the problem of misinformation.
– While legal remedies may work locally at times, the global nature of the internet and the variability of the application of law negate their efficacy.
– Further legal approaches are not likely to be workable, nor are they likely to be effective.
– Free speech is a pivot point: Regulatory mechanisms may stifle unpopular but important speech just as much or more than they stifle harmful speech.
– The misinformation conundrum presents too many complexities to be solvable.
Summary of Key Findings: Follow-Up Question 4
The global nature of the internet makes it difficult to apply legal remedies in many cases; regulation is needed, but it might stifle important speech; and the misinformation conundrum may simply be too complex to solve
Most of the respondents who said some sort of action should be taken against those who can clearly be identified as disseminators of disinformation with harmful impact generally pointed out that current laws and regulatory structures can be applied or adapted and applied. Some said corporate actors who have been profiting from such information should be required to step up. Some said it all comes down to acts by individuals, adding that education in information literacy, ethics and morals should be bolstered considerably.
And many said there are too many complexities, including cross-border issues and the difficulty of defining who committed an act, what is punishable and why, and who gets to decide who gets punished and how.
danah boyd, principal researcher, Microsoft Research and founder, Data & Society, wrote, “What kinds of harm? Which governments? What’s at stake is far more complex than is implied here. We’re talking about jokesters engaging in similar practices as nation-states, profiteers using the same techniques as ideologues. For example, all governments are engaged in these practices and one could argue that their information operations practices are harmful.”
Susan Etlinger, industry analyst, Altimeter Research, said, “It depends on the context. Are we talking about antibiotics? Children’s toys? Or taking down a government? There already are guardrails in effect in many countries to protect the integrity of products, services and institutions. I don’t believe we need to reinvent all of those institutions. Rather, organizations that protect public health – food and drugs, and the electoral process, among others – need to account for and guard against their specific vulnerabilities to misinformation.”
Micah Altman, director of research for the Program on Information Science at MIT, commented, “The government should be supporting an independent media, and robust information systems that are open, transparent and traceable to evidence and not focused on suppressing false information.”
Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “[There is] a system of institutional incentives that promotes profitable misinformation over unprofitable but true information. The following sentence encapsulates the problem: There needs to be a business model for truth. I’m reminded of the legend, which is completely untrue, that Fox News is supposedly banned in Canada because ‘it’s illegal in Canada to lie on airwaves.’ Are those proposing penalties for having ‘created or knowingly spread false information’ willing to apply them to a large amount of lobbying, campaigning, and, sadly, these days, many media organizations? If so, there are major problems, not the least that such a proposal would go against much of the legal protection for freedom of speech in the Western world. If it’s proposed to apply narrowly, then by definition it’s only making a few fringe players miserable. Consider Tom Paxton’s 1964 song ‘Daily News’: ‘Don’t try to make me change my mind with facts / To hell with the graduated income tax / How do I know? / I read it in the Daily News.’ It’s tempting to dismiss the problem as always with us. But it’s also distracting to focus only on scapegoat outliers who are safely removed from positions of power.”
Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “As the problems are structural, the remedies must also be structural.”
Bernie Hogan, senior research fellow, University of Oxford, noted, “The government (here presumably we refer to the U.S. government) should reinstitute the Fairness Doctrine if nothing else. Penalties for misleading information framed as facts will almost always be defended as a First Amendment right. This is one area where the Supreme Court seems to have consistent consensus. Who really needs to step up is the platforms. They ought to be less acquiescent to fringe users. However, they appear to be committed to appeasing all their users (from whom they make money). Thus we see here the logic of capitalism reinforcing a profit motive above facts, something I assume will continue to accelerate as the few become more effective at personalising what they curate for the many.”
Joanna Bryson, associate professor and reader at University of Bath and affiliate with the Center for Information Technology Policy at Princeton University, said, “This should be treated exactly the same as any other equivalent level of destruction (blowing up buildings, writing on walls). We need to get better at quantifying the damage – a project for economics.”
Adrian Schofield, an applied research manager based in Africa, commented, “Most communities have laws prohibiting libel and slander. The challenge is enforcing them. Successful prosecution should result in depriving the guilty party of access to the mechanisms of publication.”
Jonathan Grudin, principal design researcher, Microsoft, said, “Ideally the government should distribute accurate information and help establish the provenance of misinformation. It is difficult to prove ‘intent of causing harmful effects.’ If I lie to elect a candidate I believe will be good, did I intend to cause harmful effects? Where intention to harm can be proven, remedies often exist.”
Esther Dyson, a former journalist and founding chair at ICANN, now a technology entrepreneur, said, “There should be some application of legal penalties, but very carefully. The government should run the courts; the people should file lawsuits. There is also a regulatory role for the Federal Trade Commission and the like.”
Christian H. Huitema, past president of the Internet Architecture Board, commented, “I would not like to have to write such laws.”
Sandro Hawke, technical staff, World Wide Web Consortium, noted, “I don’t know exactly why the existing rules concerning fraud and libel are failing us. It might be about anonymity. It might be about jurisdictional boundaries. It might be lack of training for law enforcement. It might just be society is reeling, trying to adapt to a new set of problems. I doubt we need more-severe penalties. We probably need to look at making traditional consequences still enforceable, even with the new technologies. Most of the time, it shouldn’t need to rise to the level of law enforcement, though.”
Corporate actors profiting from information platforms should assist in improving the information environment
Jennifer Urban, professor of law and director of the Samuelson Law, Technology & Public Policy Clinic at the University of California Berkeley, wrote, “We already have laws against fraud, defamation, harassment, etc. Those are good models; we need to find a way to scale them. Government’s role should be to pressure other state actors that support or engage in spreading misinformation, to enforce the law, and to avoid spreading misinformation itself. We could also consider reviving the Fairness Doctrine, which would require that multiple viewpoints be presented, though this only applied to broadcast license holders. Beyond measures like these lies a very slippery slope towards government censorship. We should also ask about the role of corporate actors – it is Google, Facebook, Twitter, etc., that actually make many of the relevant decisions today.”
The president of a consultancy observed, “The tech companies who made millions on fake news by ignoring it should be held accountable. Government is so far behind on everything digital, their role has to first be to educate all government employees, then the citizenry, and sustain updates as diverse new false news strategies are identified.”
An assistant professor at a university in the U.S. Midwest wrote, “If a socio-technical solution is used to address this, there can simply be in-system impacts. A person can be flagged in some way depending on the severity of the issue.”
Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years’ experience at the BBC, Ofcom and as a digital consultant, wrote, “The role of government is to ensure that the intermediaries who operate our information environments do so responsibly, in a way that takes appropriate account of the competing interests they must balance, that provides opportunities for appeal and redress, that is driven by consumers’ and citizens’ rather than purely commercial interests. It is not governments’ job to try to specify in micro-detail what content should and shouldn’t be allowed.”
Jeff Johnson, professor of computer science, University of San Francisco, replied, “Penalties for that should be loss of account with whatever online service was used to spread the misinformation.”
A consultant based in North America commented, “The better approaches are economic (pressing the platform companies to bar the worst offenders from access to advertising revenue) and media literacy. The answer to debates about restricting speech in America has always been that the first and best response should be more speech.”
A post-doctoral fellow at a center for governance and innovation replied, “Jail time and civil damages should be applied where injuries are proven. Strictly regulate non-traditional media, especially social media.”
A journalist who writes about science and technology said, “The government should sue fraudsters, much the way the FTC currently sues businesses that make false claims or violate laws.”
Nigel Cameron, technology and futures editor at UnHerd.com and president of the Center for Policy on Emerging Technologies, said, “Governments should have no role, and false and misleading speech needs to remain free. But, for example, websites/social media companies have their own free speech rights and can excise/label as they choose.”
Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, observed, “Digital platforms should take the lead in denying access or demoting in visibility sources that persistently, knowingly, and harmfully distribute demonstrably false information. Government intervention should be a last resort only when there is imminent threat to public safety.”
An associate professor at a major university in Italy wrote, “Platforms should reduce the circulation of such false information.”
A fellow who works at a university in the UK said, “I am concerned with how the narrow economic interests that subtly shape the information landscape are being obfuscated by technologies which are claimed to be objective and impartial but really aren’t (AI/machine learning, predictive analytics and the like).”
An anonymous consultant urged, “The government should follow the money and hold advertisers accountable for paying to be on websites that are spreading disinformation.”
An anonymous futurist/consultant said, “Rather than government intervention, platforms like Reddit and others should work with their user base to establish rules around the spread of harmful and misleading information.”
Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “The nature of the information matters. That said, if the action violates the terms of use of the platform/social media site, this type of contract breach provides basis for shutting down the user’s account, in the discretion of the platform/site. Government should ensure that the information it provides to the public is itself fully accurate.”
Stephen Bounds, information and knowledge management consultant, KnowQuestion, noted, “I would support the establishment of on-the-spot fines for certain classes of information infractions. In a similar manner to speeding fines, grossly defamatory or insulting speech could be subject to an on-the-spot fine by a suitably constituted law enforcement body. Given the massive cultural change this would involve, I would recommend a graduated approach with either warnings or points used to encourage behavioural modification without immediate financial penalty. This could be used separately or in conjunction with ‘disclose or remove’ laws, where a person responsible for a post could be compelled to modify it to identify themselves and any financial incentives received in relation to that speech, or to remove it from publication. Both approaches encourage personal responsibility for the circulation of socially inappropriate information without outright censorship. The complications of anonymous speech are not insurmountable, since the most problematic free speech exists on highly trafficked platforms where there is a clear corporate body to engage with for assistance in enforcement of notifications and user identification.”
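As an illustration only (not drawn from the survey responses), a minimal Python sketch of the graduated warnings-points-fines ladder Bounds describes might look like the following; every threshold, amount and name here is a hypothetical placeholder:

```python
from dataclasses import dataclass

@dataclass
class InfractionRecord:
    warnings: int = 0
    points: int = 0

class GraduatedEnforcer:
    WARNING_LIMIT = 2       # warnings issued before points start to accrue
    POINTS_BEFORE_FINE = 3  # points accrued before an on-the-spot fine
    FINE_AMOUNT = 200       # flat fine amount (hypothetical)

    def __init__(self):
        self.records: dict[str, InfractionRecord] = {}

    def report_infraction(self, user_id: str) -> str:
        rec = self.records.setdefault(user_id, InfractionRecord())
        if rec.warnings < self.WARNING_LIMIT:
            rec.warnings += 1
            return f"warning {rec.warnings} of {self.WARNING_LIMIT}"
        rec.points += 1
        if rec.points >= self.POINTS_BEFORE_FINE:
            rec.points = 0  # the ladder resets once the fine is levied
            return f"on-the-spot fine issued: ${self.FINE_AMOUNT}"
        return f"point recorded ({rec.points} of {self.POINTS_BEFORE_FINE})"
```

The point of the graduated design, in Bounds’s framing, is that financial penalties arrive only after behavioral nudges have failed.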
Amber Case, research fellow at Harvard Berkman Klein Center for Internet & Society, replied, “Governmental regulations might not be able to fully curtail the spread of fake news, as it relies on the emotional impulses of consumers. However, reducing payment incentives through advertising revenue could curtail the spread of fake news. Forcing a pause before spreading or reacting to content could also help. If an individual is found to spread false information and can be identified, then perhaps their ability to post and make revenue could be taken away, but this will not prevent them from operating anonymously. Some education for consumers could help, but this is not a problem that one government can solve. There are many nations and locations at play here, and there is not a ‘one size fits all’ punishment or law that could be enacted to curtail behavior. It could be made less convenient or profitable for the original poster, or the social networks in question could send a follow-up note to all who reposted or reacted to the message, explaining that the message in question is fake news and educating the recipient and amplifier on why it was fake news. That way each piece of fake news shared could become an educational moment.”
Individuals and cultures must do a better job of policing themselves; it is best to generally avoid any sort of added regulatory apparatus
Vian Bakir, professor in political communication and journalism, Bangor University, Wales, commented, “It is difficult to establish intent to cause harm at the level of individual people. Probably better to educate people to be suspicious of false information and know where to go for trusted information.”
Alexios Mantzarlis, director of the International Fact-Checking Network based at Poynter Institute for Media Studies, commented, “I would be very very very wary of restrictive government intervention in this space. The media, tech companies, schools and the public all have a lot to do before we hand this over to governments. Governments should for the moment limit themselves to educational initiatives and encourage research/debate on this topic.”
J. Nathan Matias, a postdoctoral researcher at Princeton University, previously a visiting scholar at MIT Center for Civic Media, wrote, “The most powerful, enduring ways to limit misinformation and expand the use of civil liberties are to grow our collective capacities for understanding. In my research with large news-discussion communities, for example, encouraging people toward critical thinking and fact-checking reduced the human and algorithmic spread of articles from unreliable sources.”
An anonymous principal technology architect and author replied, “We should not have penalties based on intent – the idea that there should be penalties based on intent is a major part of the problem right now. This is one step in the destruction of freedom.”
An author/editor/journalist wrote, “To attempt to punish after the fact is pointless. Herd immunity to misinformation is far more effective.”
Jeff Jarvis, professor at the City University of New York Graduate School of Journalism, commented, “The First Amendment protects the right to lie and to be wrong. Government should play *no* role in controlling public speech. The only penalty for knowingly spreading false information should be shame – which is why we need to encourage citizens to adapt their social norms to reward civility over incivility.”
A senior vice president of communications said, “We don’t care about shame any more, and that used to be enough.”
An anonymous journalist observed, “Overall, we need to equip people to critically evaluate information better through our education systems. We need to create more awareness, and more informed citizens, and there will be a need for new legislation in areas such as algorithmic manipulation, but I don’t see how one single measure can solve this issue.”
Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia, wrote, “We used to have such penalties: Social shaming; loss of credibility and status; exclusion from the public sphere. Government should play no role in such dynamics, but government plays an important role in certifying the dependability of many scientific, economic and demographic claims. That should be defended and maintained.”
An associate professor at a U.S. university wrote, “Communities and journalistic organizations have to develop their own clear standards and educate the public on how to consume information and why they should care about the way they consume that information.”
A professor at a university in Australia replied, “One idea is for government to financially support investigative journalism and relevant research dissemination, with grants and other funding opportunities. Now that the business model supporting independent high-quality journalism is failing, it may need the support of public entities to continue its vital role as the Fourth Estate.”
Edward Kozel, an entrepreneur and investor, replied, “Only changes to social behaviour will/can address the dire situation: any such changes will require a degree of social judgment or even shame (i.e., morality). A difficult subject for government indeed, but changes to our educational system (comprehensive) that include and are embraced by society can bring about such societal changes.”
Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “The first and most important role of government in this respect is to promote education and support spaces for open, healthy, civil debate. Basics such as mandatory vaccinations and science and logic in schools have to be provided as an infrastructure of trust. Unequivocal support for science, and the prosecution of bad actors such as phony medical treatment providers, will help keep false information in check. That is, the action is on all fronts, not only on the news front.”
Justin Reich, assistant professor of comparative media studies, MIT, noted, “The primary role of local government is developing school systems where students learn the information literacy skills needed to identify or verify fake news. The role of state and national governments will be to support curriculum development and research towards these ends. Sam Wineburg and colleagues at the Stanford History Education Group are doing important work towards these ends.”
Michael Marien, senior principal, The Security & Sustainability Guide and former editor of The Future Survey, wrote, “‘Crap detecting’ should be a major concern for education at all levels. And what about the pussycat press: why aren’t they demanding evidence for questionable assertions and examples of so-called ‘fake news?’”
Michael R. Nelson, public policy executive with Cloudflare, replied, “Governments can encourage self-regulation like the codes of ethics that have guided journalists for more than 100 years. Attempts to ‘make the Internet safe and orderly,’ like the July 2017 German law on hate speech and ‘dangerous speech,’ are overly broad and would certainly be unconstitutional in the U.S.”
Andreas Birkbak, assistant professor, Aalborg University, Copenhagen, said, “Governments should use carrot more than stick and try to cultivate a culture that cares about facts without expecting facts to be universal truth.”
Ray Schroeder, associate vice chancellor for online learning, University of Illinois-Springfield, replied, “We may need to interpret the libel and slander rules to include knowingly disseminating false information with the intent to wrongfully influence political and policy decision making for personal gain or profit. Media may choose to focus reporting on statements delivered through legislative venues in which contempt proceedings can be initiated for knowingly false and misleading statements.”
Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “The penalties should be essentially a ‘scarlet letter’ – a tag or flag or some kind of transparent labeling that identifies the person as an intentional purveyor of falsehoods. Government would likely have to play a role in universalizing a system, but you’d likely have multiple alternative bodies putting out tagging guidelines. A ‘scarlet letter’ of sorts identifies the perpetrators as purveyors of dangerous and false facts to any who might interact with them, along with cultural norms that shame the perpetrators, even if they are ideologically friendly.”
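Purely as a hypothetical sketch (no such system exists in the report), Cascio’s “scarlet letter” could be imagined as a public, auditable label registry maintained by multiple independent tagging bodies; the class and field names below are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Label:
    source_id: str   # account or site being labeled
    issuer: str      # independent tagging body that applied the label
    reason: str      # public justification, open to audit and appeal
    issued_on: date

class LabelRegistry:
    def __init__(self):
        self._labels: list[Label] = []

    def tag(self, source_id: str, issuer: str, reason: str) -> None:
        self._labels.append(Label(source_id, issuer, reason, date.today()))

    def labels_for(self, source_id: str) -> list[Label]:
        # Any client can fetch and display every label, from every
        # tagging body, alongside the source's content.
        return [l for l in self._labels if l.source_id == source_id]
```

Keeping the registry transparent, with multiple alternative tagging bodies, is what distinguishes the labeling Cascio describes from a single government-run blacklist.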
Serge Marelli, an IT professional who works on and with the Net, wrote, “They should be sentenced to prison for a limited time and be forced to publicly retract and correct any lies. Also, they should be barred from running for public offices. Nowadays, they get elected to be president.”
George Siemens, professor at LINK Research Lab at the University of Texas-Arlington, commented, “Penalties should be social, not government-mandated. For example, there are libel laws, but most gossip isn’t handled through that legal model. Most is social awareness in networks that results in a softer pressure.”
John Laprise, consultant with the Association of Internet Users, wrote, “There will be reputational harm, there should be no civil/criminal penalties.”
Dean Willis, consultant for Softarmor Systems, commented, “I’m in favor of exposure and ridicule. You know, like they did to Darwin after he launched that ridiculous theory. Oh wait, he was right.”
Governments should not be allowed to take any sort of oversight role
An anonymous research scientist replied, “Penalties would require a government ‘Bureau of Truth’ to determine the ‘true’ story. Such a bureau would be inherently repressive and even more dangerous than the unrestricted spread of false information. It would resemble the situation in the Soviet Union at its worst.”
A researcher based at MIT said, “The government should provide the judicial system that decides these cases. It should not attempt to become the prosecutor of truth.”
Garth Graham, an advocate for community-owned broadband with Telecommunities Canada, said, “Since the governors (i.e., external authority) are primary users of public relations manipulation, giving them a role in regulating distribution is like giving the insane control of the asylum.”
Mark Lemley, professor of law, Stanford University, observed, “While false facts that injure people (inaccurate drug ingredient information, say) can and should be punished, the government should not be in the business of punishing fake news.”
Alexis Rachel, user researcher and consultant, said, “There needs to be a cultural shift wherein spreading of false news is looked on as a heinous and dangerous act, versus the current ambivalence. I’m not sure what the government can or should do with regard to this, except lead by example.”
An anonymous respondent said, “I do not believe we can trust one group to accurately police truth.”
A media networking consultant said, “The government’s only role is to provide reliable information to the press. Failure to do so should be prosecuted.”
A journalism professor and author of a book on news commented, “Government role? Yikes! There should not be any government role – otherwise we are China. Letting whatever current regime is in the White House control what constitutes truth and which can prevent the distribution of information that does not support its truth – that would truly end the American dream.”
Jon Lebkowsky, web consultant/developer, author and activist, commented, “This question suggests a slippery slope we might want to avoid. The one thing the government has done before and might do again is a ‘fairness doctrine.’ However, involving the government in managing information accuracy or quality invites the potentially greater problem of censorship.”
Barry Parr, owner of Media Savvy, replied, “There’s no way to do this without limiting free inquiry and dissent. Government action would be disastrous to democracy.”
Joseph Turow, professor of communication, University of Pennsylvania, commented, “If such penalties were created and enforced, many public relations executives would arguably be liable to prosecution. And the notion that government officials would lord over decisions about the facticity of news – often news about themselves or their parties – is laced with conflicts of interest and threats to democracy.”
Jack Park, CEO, TopicQuests Foundation, noted, “Penalties should fit the nature of the measured harm. Government playing roles in this context raises issues like: who gets to decide what is and is not ‘false’ information? In my view, if there is a role, it should be that the government funds, in the same way it funds biomedical research, ways in which to increase public engagement in civic activities, some of which include crowd-sourced, role-playing-game-based global sensemaking.”
Brian Harvey, teaching professor emeritus at the University of California-Berkeley, said, “Throughout history, governments have been among the most prolific creators of fake news. If I could choose between eliminating Breitbart and eliminating the CIA, I’d definitely choose the latter. Not only does the CIA have a bigger budget, but they are better at creating plausible misinformation. When people believe things like that pizza parlor story last year, the biggest problem is not the story itself, but rather the social conditions that leave people so (rightly) mistrustful of social institutions that they find the story plausible. Trump wasn’t elected by Breitbart; he was elected by the 2008 bank crash and the government’s response to it.”
Johanna Drucker, professor of information studies, University of California-Los Angeles, commented, “We have methods of meting out punishment for lying in financial, legal and medical realms (or used to, they are being quickly stripped away). Why not create similar laws and liability statutes for information? My concern about government controls comes from observation of current trends in the Trump administration to control discourse through intimidation, closed briefings, strategic release of misinformation as if it were official – or as official – statements. The checks and balances built into the relationships among the judicial, legislative, and administrative branches of American government are still essential. No single branch should have any exclusive powers over information or it will lead to abuse (of those powers and of information).”
Some type of regulation should be applied, updated or adapted to help somewhat ameliorate the problem of misinformation
The president of a business said, “We can use what we already have and perhaps a few more. Libel laws; false advertising laws; laws against breach of contract; laws against making false scientific claims for personal or corporate gain; penalties for victim-targeted hacking and doxxing; laws to protect the integrity of the vote and bar foreign interference in elections; laws preventing corporations from having the rights of people.”
Rick Hasen, professor of law and political science, University of California-Irvine, said, “Existing tort law should handle these things – for example, fraudulent conduct leading to damages compensable under the current tort system.”
A researcher affiliated with a company and with a major U.S. university noted, “Manipulation of news should be treated similarly to manipulation of financial data or personal reputation – i.e., subject to legal challenge and legal penalties. Enforcement across borders will require considerable international work to establish protocols. Interpol, et cetera, and the EU are good starting places.”
David Conrad, a chief technology officer, replied, “They should be similar to those for false advertising, perjury, and/or libel depending on context.”
A professor of sociology with expertise in social policy, political economy and public policy said, “Purveyors must be required to produce clear evidence and, when they cannot, their failures must be publicized.”
A professor at a Washington DC-area university said, “Government can look to deter some state-based distributors, by, e.g., declaring democratic elections to be critical infrastructure and threatening retaliation for attacks.”
An associate professor and journalist commented, “The UK has been progressive in tackling trolls through the legal system, so the U.S. could learn from that experience.”
Charlie Firestone, executive director, Aspen Institute Communications and Society Program, commented, “We should guard against state censorship, or even corporate censorship that becomes equivalent to state censorship. As for knowingly spreading false info with the intent to create harm, there can be civil actions from those harmed, like libel allows someone to sue for damages.”
A director of new media for a national federation of organizations said, “They should suffer penalties for treason if this is done in cooperation with a foreign government and meant to damage the U.S. political system.”
Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “In the U.S., slander is treated as a virtual attack on a person. Individuals who plan a destructive riot rather than a peaceful demonstration are criminals. Purposely spreading false information about a company that impacts business can also be considered a crime. Based on these existing legal principles, the government can press for laws that set legal penalties for creating and knowingly spreading bad information. The difficult part of these laws will be the definition of ‘intent’ to create and knowingly spread false information.”
A professor of information studies based in Europe replied, “Lying and spreading false information with an aim of harming others directly or indirectly should be punishable, as it indirectly tends to be in the current judicial systems. Government and legislators should make sure that winning such cases does not require a lot of wealth, engaging in long and uncertain cases, and hiring expensive lawyers, so that everyone in society can have an opportunity to win such cases.”
Daniel Alpert, managing partner at Westwood Capital, a fellow in economics with The Century Foundation, observed, “Government cybercrime efforts should track down and confront malefactors and seek to shut them down or block them. But there has to be a transparent judicial process to oversee such efforts.”
Helen Holder, distinguished technologist for HP, said, “Penalties should be those for incitement, fraud, harassment, libel, slander, et cetera, rather than any additional or specific penalties. The government could make it easier to pursue these cases. For example, today it is very hard for a person who has been threatened online to take action against their harasser. Often law enforcement is unable or unwilling to investigate. Policy, training, and staffing adjustments could be made to better enforce existing laws and regulations.”
A researcher/statistician consultant at a university observed, “We need ‘ombuds’ groups to investigate and apply punitive measures. Punitive measures – loss of employment, if employed. Loss of contract – if on contract. Fines – if unemployed. Also maybe some community work to be completed.”
Glenn Edens, CTO for Technology Reserve at Xerox/PARC, commented, “There should be penalties similar to those for libel or fraudulent transactions; government should play a role similar to the laws governing consumer protections.”
Stowe Boyd, futurist, publisher and editor in chief of Work Futures, said, “What’s the legal consequence of yelling ‘fire’ in a crowded theater? Or libeling or slandering others? We have laws in place that could be repurposed, reinterpreted for our modern times.”
Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The knowing creation and spread of information that is both provably wrong and done with malicious intent needs to have strong penalties via court of law. Government’s role in this would amount to a censorship that would likely be unacceptable – the key phrase, ‘intent of causing harmful effects’ (‘illegal’ might be a better word than ‘harmful’), is what needs to be enforced via civic mechanisms and courts.”
Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, observed, “There are already legal sanctions, even in societies with strong free-speech traditions, on particular classes of information that cause harm: fraud, libel, slander, incitement and so on. These should be revisited and adapted to the online social context. However, I would be very cautious about establishing other, particularistic types of ‘harms’ that are invoked to restrict speech and information more broadly: blasphemy, disrupting ‘public order,’ lèse-majesté rules against insulting states or rulers, even some instances of hate speech. The difficulty is balancing individual sensitivities and the wider interest in a diverse, pluralistic, and sometimes disputatious, society.”
Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “I’d treat it like we do incitement to racial hatred. If there is intent to harm, then the penalty should reflect the intended or actual harm. This must be done by governments, not companies, as government is (should be!) accountable to its people.”
A North American research scientist wrote, “Government is likely the only actor with authority to stem flows of false information. The penalties should be determined by the intent of the harms (and should be very severe for efforts to undermine democratic freedoms and security).”
An anonymous international internet public policy expert said, “The government should play a role in supporting prevention and in issuing penalties.”
Others said those in government and their support teams are a cause of much of the misinformation. A head of systems and researcher working in Web science said, “Government needs to be held accountable as they cooperate with Super PAC agendas that are behind a good number of disinformation campaigns.”
A futurist based in Western Europe said, “Yes, large penalties should be introduced, the same as for people who give misleading information in financial reports and in advertisements. Of course, this must be overseen by a body that can be understood as being independent, which will be hard. And it will not be a complete solution to the problem of fake information. But it will be an important contribution.”
Axel Bruns, professor at the Digital Media Research Centre, Queensland University of Technology, commented, “Possible penalties could range from temporary social media bans to imprisonment, but how these are applied is a matter for the judiciary. The fundamental principle, however, must be that legal penalties are designed to promote rehabilitation rather than exact revenge; simply locking up trolls and propagandists merely makes martyrs out of them. Government is clearly central here, as it is to all aspects of society: it must get better at sensibly regulating traditional, digital, and social media platforms and channels, rather than vainly believing in market self-regulation; it must develop a much better understanding of contemporary media platforms amongst policy-makers, law enforcement, and the judiciary; and most of all it must develop far more proactive means of promoting media literacy in society.”
Andrew Dwyer, an expert in cybersecurity and malware at the University of Oxford, commented, “From the perspective of the UK/Europe – we already have systems in place that allow for misinformation to have penalties. Yet these have not been routinely applied online thus far. Developing a body of case law could be a productive way forward. Government roles are and should always be limited.”
Tanya Berger-Wolf, professor at the University of Illinois-Chicago, wrote, “We already have most of the penalty system for intentionally harming somebody, including with misinformation (libel, false advertising, identity theft, et cetera). The intention is very hard to prove. However, the punishment, as always, should be commensurate with the resulting harm.”
Charles Ess, a professor of media studies at the University of Oslo, wrote, “The penalties should be severe. Rights to freedom of expression have always recognized that speech intended to generate harm is NOT protected speech. As has become manifest over the past two decades, the international corporations controlling most of our communication media have little incentive to regulate or control harmful speech: the more clicks, the better, etc. Democratically elected and responsible governments – i.e., ones that citizens constantly call into account – are the only institutions capable of policing and regulating harmful speech.”
Angela Carr, a professor of law based in North America, replied, “I would like to see government step up efforts to enforce the laws against unfair or deceptive marketing practices. I also think more should be done to protect speakers that others try to silence through threats and intimidation. I would like to see Citizens United overturned and effective campaign reform legislation. Beyond these efforts (which certainly seem unlikely in the present environment) I think it is difficult for government to prevent distribution of false information. Not only is it difficult to know whether information is true or false, but it is even more difficult to determine the speaker’s intent. Government can, however, encourage the dissemination of accurate information by supporting public broadcasting and other non-profit organizations that seek to genuinely inform the public.”
Michael J. Oghia, an author, editor and journalist based in Europe, said, “There should be some form of criminal procedure for this, which includes a fine or other appropriate penalty. Depending on the intended effect, prison or internet restrictions could also be options, but governments and law enforcement would have to be involved in this process. I also fear that granting these two stakeholder groups such power could allow it to be used against, say, minorities and other disadvantaged groups.”
An anonymous internet pioneer and longtime leader in ICANN said, “Proportionality of response should take into account all of the costs of the negative externalities created by knowingly spreading the false information.”
A distinguished engineer for one of the world’s largest networking technologies companies commented, “There are outright lies and then there’s stretching the truth. Some actions, for example creating widespread panic with fake news, must be prosecuted. There are already civil penalties for slander and libel that extend to the internet. Legislation and definition of this will be a protracted debate. In the immediate future, the penalty will be taking away the source’s access and will be done by the content and service providers (albeit a moving target).”
Alan Inouye, director of public policy for the American Library Association, commented, “We already have some well-established laws, such as the rubrics of libel and slander. Perhaps these rubrics need to be revised.”
Veronika Valdova, managing partner at Arete-Zoe, noted, “False statements distributed to official authorities as a witness statement qualify as perjury. Adverse information identified during background checks hinders the individual’s ability to find gainful employment, get a security clearance, or obtain a visa. If such information turns out to be false, this may be the grounds for a civil suit. Currently, pursuing such suits is difficult because the victim is rarely able to prove the nature and origin of such information and prove a causal relationship between a specific piece of information and rejection. The spread of illegally obtained surveillance material, personal health records, and other sensitive material is illegal under specific laws in most jurisdictions. The right to due process may be the answer. Resolution of such disputes generally belongs to courts. The role of governments is to ensure the resilience of their systems and rigorous assessment of evidence and the prevention of abuse. The penalties can range from shutting down a single website to no-fly lists for specific individuals.”
James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “I suppose libel/defamation laws provide some guidance: a finding of deliberate harm has financial penalties. The government role: rule of law, incorruptible courts. The adequate funding of public libraries to provide sufficient and timely resources to investigate claims.”
Sebastian Benthall, junior research scientist, New York University Steinhardt, responded, “Companies should be subject to penalties for deceptive practices as under a strong FTC regime. Defamation should be punished under the relevant laws. And so on. There is ample precedent for the role and limits of government in confronting false information.”
Tim Bray, senior principal technologist for Amazon.com, said, “Existing libel and slander laws are adequate. Canada has anti-hate legislation but its effectiveness has really yet to be established.”
Adam Holland, a lawyer and project manager at Harvard’s Berkman Klein Center for Internet & Society, noted, “I suspect that existing law about willfully causing harm, whether physical, emotional, or reputational, will provide a useful template for the actual nature of any penalties. However, intent is extremely difficult to effectively prove, and ‘false information’ is going to be equally difficult to distinguish from fiction. Penalties, regardless of what they are, will be rare in application. Government should not be taking steps to prevent, since definitions of what is subject to any prevention may well change with the government. Government should empower the citizenry and enforce existing law equally.”
Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Harmful speech should not be protected as free speech. I believe that the European anti-hate speech laws make a lot of sense: if there is an intent to harm others (or move one’s followers to harm others), it should be punishable by law.”
David Brake, a researcher and journalist, replied, “It depends on the ‘harmful effect’ sought. If the intent is to incite hatred of others it should be dealt with through hate crime legislation (present already in most countries) or anti-bullying legislation. If ‘merely’ political then simple ridicule by a free press is the best we can hope for.”
Laurel Felt, lecturer at the University of Southern California, said, “It might be difficult to prove intent. But assuming that one could prove harmful intent, then the government would need to create some sort of policy that condemns such an action. The government body responsible for investigating and prosecuting such cases might be the Federal Communications Commission? Assuming prosecution that culminates in conviction, the penalty could be a fine, I suppose.”
Nick Ashton-Hart, a public policy professional based in Europe, commented, “Penalties should be proportional to the ability to harm, and government should step in only to the extent that civil or criminal action to redress harms done are appropriate and proportional.”
Jonathan Brewer, consulting engineer for Telco2, commented, “Dangerous speech is not a phenomenon unique to the internet. Existing regulations and programs around the world may need to be updated or enhanced, but I don’t think any new penalties need be established.”
Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., shared an in-depth point of view: “Just as electricity in the U.S. is regulated at the federal, state, and local levels, information is now a force like electricity and needs independent oversight with checks and balances, of course, but with some recourse to signal the deliberate spread of false information and some power to stop it. Penalties for those found to have created or knowingly spread false information with the intent of causing harm should be at least as severe as a class B felony (punishable by up to 20 years in prison, a fine of up to $20,000, or both). News sources (i.e., CNN, The New York Times, Washington Post, The New Yorker, et cetera) should hold a news reliability summit and devise what might be termed a ‘reliability index’ … Like the American Constitution, there should be a means to amend or improve the reliability index. Once that is in place, each piece of information, or definitive statement, can be assigned a level of certainty or reliability.
“… We have standards of measurement in the food industries and commerce, we have standards of disease and wellness in healthcare, we have standards of tolerance and capacity in civil engineering and aerospace. We can establish standards for information. With a meaningful standard in place, we can establish penalties for violations of the standard(s)… A free press should be able to govern itself without government interference, so the pillars of the press community should establish and jealously guard the integrity of a reliability index. We need to establish clear sanctions and penalties to deter any authority or other entity from designing and spreading misinformation, a trendy word for lies. Information is the lifeblood of democratic institutions. Without trustworthy, reliable information … democracy will die.”
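As a purely illustrative sketch of one way the “reliability index” Chudakov proposes might assign a level of certainty to a statement, the Python below averages independent 0.0-1.0 ratings from participating news organizations; the scale, the averaging rule and the organization names are assumptions, not anything specified in his response:

```python
def reliability_index(ratings: dict[str, float]) -> float:
    """Average the 0.0-1.0 reliability ratings given by each rater."""
    if not ratings:
        raise ValueError("a statement needs at least one rating")
    for org, score in ratings.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{org} gave an out-of-range score")
    return sum(ratings.values()) / len(ratings)

# Example: three hypothetical raters score the same statement.
score = reliability_index({"OrgA": 0.9, "OrgB": 0.7, "OrgC": 0.8})
print(f"reliability index: {score:.2f}")  # prints 0.80
```

A real index would presumably weight raters and allow amendment over time, as Chudakov’s constitutional analogy suggests; the simple average is just the smallest workable form of the idea.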
Bob Frankston, internet pioneer and software innovator, said, “It is dangerous to impose too much control, but maybe there should be a concept of public libel?”
Jane Elizabeth, senior manager at the American Press Institute, said, “There already are penalties for hateful/dangerous speech and other communications. The penalties for malicious misinformation could work in a similar way.”
While legal remedies may work locally at times, the global nature of the internet and the variability of the application of law negate their efficacy
Eduardo Villanueva-Mansilla, associate professor, department of communications, Pontificia Universidad Católica del Perú, said, “First, define ‘harmful.’ Then we will have to deal with the interconnectedness of the systems allowing the spread of such information, and the fact that there are no political mechanisms to punish actions by a citizen of one nation state in another nation state, even if s/he is identifiable. Sanctions between states are limited and dangerous beyond some very specific scope.”
Bill Woodcock, executive director of the Packet Clearing House, wrote, “This is the ‘crying fire in an opera house’ abuse, generally. The problem is in the transnational nature of the commission of the crime; in the country of origin, it may be a patriotic act, rather than a criminal one. And nations will never curtail their own Westphalian sovereign ‘rights’ to define what actions are criminal within their borders.”
Jerry Michalski, futurist and founder of REX, replied, “I am not a lawyer, but current laws regarding freedom of speech and harmful speech give us a lot to work with. The problem is the anonymity and superconductivity of the Net, along with the global trust implosion. Governments need to address trust more directly.”
A retired professor and research scientist said, “The government role is hard to accomplish due to the global nature of the internet – e.g., Wikileaks and Julian Assange.”
A chief technology officer observed, “It depends on the harmful effects. If the actors are outside of U.S. jurisdiction, there is not much that can be done.”
Nathaniel Borenstein, chief scientist at Mimecast, commented, “Penalties should be severe, including substantial jail time and fines. But I expect this to be unenforceable across international boundaries.”
Thomas Frey, executive director and senior futurist at the DaVinci Institute, replied, “In a world that is transitioning from national systems to global systems, we are desperately in need of a new global watchdog, one that perhaps most nation states are members of, to oversee the creation of policies, rules and enforcement around the globe.”
Further legal approaches are not likely to be workable, nor are they likely to be effective
Dan Gillmor, professor at the Cronkite School of Journalism and Communication, Arizona State University, commented, “In a few cases this is already illegal. Expanding laws to punish speech is a step toward a police state. One key government role should be to give schools better incentives to teach critical thinking – specifically media/news literacy – as a core part of the curriculum.”
John Anderson, director of Journalism and Media Studies at Brooklyn College, City University of New York, wrote, “We have existing legal mechanisms to combat the spread of false information with the intent to do harm, but our legal system works about two generations behind where communications technology is. Things are not helped by the increased politicization of the judiciary itself.”
Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “Accuracy of information is NOT binary. It is a continuum. Additionally, proving intent makes legal or governmental penalties VERY difficult; even more so when government agents, themselves, are the perpetrators. If there is substantive harm from such disinformation, defined by law, then those same laws need to include penalties and enforcement procedures. We have ample precedent for this (difficult!) situation, in the form of slander and libel laws.”
Stephen Downes, researcher with the National Research Council of Canada, commented, “Using existing laws, we can assess penalties based on actual damages caused, in those few cases where actual prosecution is possible. But given that government and large corporations profit the most from spreading false information, it seems unlikely they can be trusted to take any steps to prevent it. There is probably no legal remedy, because the people who benefit from misinformation have been the ones to write the laws.”
A vice president for a company based in North America replied, “Left to its own devices, the market will likely begin reputation tracking (similar to the reputation tracking of eBay). Bad actors would suffer loss of reputation and influence. Let the market of ideas work out its own solution. Keep government meddling to a minimum; it’s almost universally destructive.”
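As one hedged illustration of the eBay-style reputation tracking this respondent anticipates, the Python sketch below nudges a source’s net score with each piece of reader feedback and converts it into an influence weight; the scoring rule and the squashing function are invented for the example, not taken from any existing platform:

```python
class ReputationTracker:
    def __init__(self):
        self.scores: dict[str, int] = {}

    def record_feedback(self, source: str, positive: bool) -> None:
        # eBay-style net feedback: +1 for a positive rating, -1 for a negative.
        self.scores[source] = self.scores.get(source, 0) + (1 if positive else -1)

    def influence_weight(self, source: str) -> float:
        # A simple logistic squashing keeps the weight between 0 and 1,
        # so persistent bad actors fade in influence rather than being banned.
        score = self.scores.get(source, 0)
        return 1.0 / (1.0 + 2.0 ** (-score))
```

The design choice matches the respondent’s market logic: no central authority rules on truth; bad actors simply lose reach as their net score falls.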
Free speech is a pivot point: Regulatory mechanisms may stifle unpopular-but-important speech just as much or more than they stifle harmful speech
Kenneth Sherrill, professor emeritus of political science, Hunter College, City University of New York, said, “I’m a hard-core, empirical, quantitative scholar. We think that good information drives out bad information and that systematic liars are shunned. This is wishful thinking. I don’t want the government to decide what information is false… The only answer is to be found in free speech and the marketplace of ideas. This is why I’m so pessimistic.”
Paul M.A. Baker, senior director of research for the Center for Advanced Communications Policy, said, “The consequences and penalties of knowingly spreading false information are a tricky balance. In a private setting, the operation of market mechanisms/self-regulation would seem to be a viable approach; in a public setting the balance between free speech and protection of vulnerable populations must be maintained. For willful promotion/distribution of dangerous or harmful material it would seem that the judicial process is appropriate. Use of regulatory mechanisms, while possible, runs the risk of stifling dangerous as well as unpopular speech. The latter could be a case of criticism of an administration which might be valid.”
Adam Powell, project manager, Internet of Things Emergency Response Initiative, University of Southern California Annenberg Center, said, “No, and therefore none. Remember, ‘Congress shall make no law….’”
Garland McCoy, president, Technology Education Institute, commented, “Who defines ‘false information’? Many argued at the time that Orson Welles should have been put in prison for his groundbreaking radio broadcast of ‘War of the Worlds.’ So the government or mob rule should hang a modern-day Orson Welles?”
A professor of law at a state university replied, “I have no problem with criminalizing knowing false statements. That is not in my view free expression. But the Supreme Court has often protected lying in politics. We need a constitutional amendment – but of course will never get it.”
A principal research scientist at a major U.S. university replied, “One person’s harm is another person’s virtue. The government can’t impose penalties without running afoul of the First Amendment.”
Rick Forno, senior lecturer in computer science and electrical engineering at the University of Maryland-Baltimore County, said, “This is a hard issue to enforce, since in the U.S., First Amendment protections prevent prosecution of even the most moronic ‘fake news’ items.”
Carl Ellison, an early internet developer and security consultant for Microsoft, now retired, commented, “Seventy years ago, such a source would be denied air time. We no longer have limited channels. We can apply economic sanctions against Russia but what power do/should we have against Breitbart or The National Enquirer?”
Geoff Scott, CEO of Hackerati, commented, “Knowingly spreading false information is a form of taking away people’s right of self-determination and it is an extremely heinous act that should be severely punished. On the other hand, what constitutes ‘false information’? The First Amendment exists for a reason; but some forms of speech are not protected, and these have been clearly defined. How would ‘false information’ be defined in the context of the First Amendment?”
Steve McDowell, professor of communication and information at Florida State University, replied, “It will be easier for private-sector actors to proceed as they already do, and make commercial decisions about their policies. Civil law remedies for defamation, libel, and privacy protection already are in place, and if other types of harm can be identified, there may be civil law approaches that can be followed. Since many stories may originate outside the country, this approach may have significant limitations. It will be more difficult for the government in the United States to be involved in such efforts, given the strong First Amendment traditions limiting government actions concerning speech and expression.”
Matt Armstrong, an independent research fellow working with King’s College, formerly executive director of the U.S. Advisory Commission on Public Diplomacy, replied, “Our modern view of the First Amendment is perhaps 100 years old. The malicious creation and spreading of information is an intent that can be pursued; however, this will not be a successful tactic unless society and government are unified behind the approach. At present, the creation and spread of intentionally false and harmful information plays into a divisiveness that must be addressed first. It is very close to a chicken-and-egg conundrum, but we have to start somewhere.”
The misinformation conundrum presents too many complexities to be solvable
Matt Moore, a business leader, commented, “I am not sure that there is a public appetite to enforce penalties for doing this. We need people to take public responsibility for both what they say and what they consume. This needs to come from the top. And it is manifestly not happening.”
Noah Grand, a sociology Ph.D., wrote, “I understand why there is a lot of anger toward people who knowingly spread false information. Punishing these deceivers seems very appealing. Unfortunately, punishment won’t do anything about the people who want to be deceived. America’s ‘War on Drugs’ – with its emphasis on punishing suppliers – hasn’t been very effective. There’s always a new supplier who rushes in to fill the demand. Why would we expect something different from a ‘War on False Information’ that targets suppliers?”
An anonymous CEO and consultant based in North America noted, “Trying various enforcement models on the current internet is just a waste of time. They won’t solve the overall problem.”
An internet pioneer/originator said, “Defamation of Public Trust depends on who defines ‘The Public,’ doesn’t it? And that is the fundamental problem that will always remain.”
An analyst at Stanford University commented, “I can’t imagine who would adjudicate this. What is ‘harmful effect?’”
Stephan Adelson, an entrepreneur and business leader, said, “The enforcement of this type of oversight sounds like an impossible task. Ideally, repercussions for spreading harmful untruths should exist, but the expense of monitoring and pursuing those guilty would be immense. I can’t imagine the government being put in the position of determining what truth is, determining what harm is, and then pursuing those they have determined to be spreading harmful untruths under their own definitions.”
Jack Schofield, longtime technology editor at The Guardian, now a columnist for The Guardian and ZDNet, commented, “You can’t fine or lock people up for spreading false information, because there’s too much information to fact-check, and because it’s sometimes quite hard to separate fact from opinion.”
A leading researcher studying the spread of misinformation observed, “We don’t currently have the ability to know which actors spread false or harmful information, or even who pays for this type of influence. There are few rules in digital political advertising in the United States, for instance. While campaign spends are recorded with the Federal Election Commission, there is no record or detailed log of what was promoted/placed, who was exposed to messages, or how these messages (ads, sponsored posts, ‘dark posts,’ et cetera) were executed on and delivered through data mining. This isn’t like a campaign mailer or newspaper ad for a political candidate that we can clip out. Since there is no way to trace large-scale political influence operations back to specific actors and agencies …”
Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” said, “Criminal penalties might infringe on free-speech rights, but identifying known liars and purveyors of false information and providing links to proofs that their information is false – enabling reputation to enter the equation – could be helpful. But then the smartest purveyors of false information will shift identities.”
Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University, said, “Applying penalties to people who spread false information is a good idea in theory but incredibly hard to do in practice. For domestic cases, how can we differentiate between stupidly but innocently jumping to conclusions and deliberate misinformation? How do we differentiate between legitimate grassroots campaigns and organized groups deliberately aiming to spread misinformation? How do we identify domestic groups or individuals vs. foreign ones that local governments have little leverage over? There aren’t many good options here, especially when the bulk of misinformation today substantially benefits one of the two main political parties in the United States. What the government can do is hold hearings for basic fact-finding, and offer research funding/competitions to disincentivize and/or block the most widely agreed-upon and egregious cases.”
A former software systems architect replied, “Civil and criminal penalties that exist can continue. The problem is identifying the culprit, demonstrating that the information is knowingly false (or not verifiably true) and that it is harmful, and identifying the harmed parties.”
An anonymous professor of information science at a large U.S. state university wrote, “How do you judge whether it is false information or not? It may simply be one’s opinion, or it may be created and spread for national security purposes. Some false information doesn’t really cause much of a problem for other people or for society.”
Adam Gismondi, a researcher at the Institute for Democracy & Higher Education, Tufts University, observed, “It is hard to overstate how delicate the approach must be on these questions. If this problem isn’t approached in a way that transcends partisan politics, it will forever be dragged down by polarized perspectives. Until there is a collective recognition of facts around false information and the harm that it causes, the idea of penalties and a role for government in the matter is a non-starter.”
Philip Rhoades, retired IT consultant and biomedical researcher with Neural Archives Foundation, said, “Separate, non-falsifiable networks need to be established as alternatives. It is not going to be possible to contain the powerful ‘bad actors’ who basically own the system.”
Larry Keeley, founder of innovation consultancy Doblin, observed, “We should spank them and send them to bed without any supper. ;-) The solution to bad information is always better information.”
An anonymous respondent who works with the Berkman Klein Center at Harvard University replied, “People who knowingly spread false information can suffer civil penalties… But how do you measure or quantify the damage done by spreading false information? Is it enough to decide that it causes measurable harm to individuals? How can we quantify the effect on our civil systems of governance and information sharing, on the integrity of the social contract that all members of a community act without intent to harm or deceive their fellows?”
Philipp Müller, postdoctoral researcher at the University of Mainz, Germany, replied, “It is impossible to judge whether the intended effects of spreading certain information are ‘harmful.’ Therefore, I see no way this could be punished. If false information violates existing laws (e.g., against insult or demagoguery), the source should be punished according to those laws. If false information is spread with the intention of, e.g., political campaigning without violating any existing law, we cannot begin to punish it. That would undermine freedom of expression, and the consequences of doing so would be harmful to a much greater extent than any misinformation could be.”
A selection of additional comments by anonymous respondents:
• “Manipulating society is a crime.”
• “Freedom of speech means there is no crime in this.”
• “Before taking any steps, we need to define what ‘false’ and ‘harmful’ means.”
• “The key is to reduce incentives at the elite level for spreading misinformation.”
• “Those spreading false information should be cut off from systems they use to do it.”
• “Sanctions should be available to prevent spreading of fake news inciting violence.”
• “The government should not be the arbiter of truthfulness.”
• “I don’t really think government should censor information even if it is false. I’m with Milton in that way.”
• “I do not believe most people who create fake news believe that they are working for a cause that will have harmful effects.”
• “It would be impossible in today’s environment to get consensus on what counts as ‘false information.’”
• “The burden of proof required to make this case is too high to make penalties effective.”
• “Government is very often the problem. We should let the people decide, not some political ministry of truth.”
• “Read Plato’s ‘Gorgias.’ Hold the gullible accountable for culpability.”
• “There could be ‘reputation penalties’ for such actors. Similar to how platforms like Reddit keep their bad apples in check, there could be a universal reputation bank for all internet users.”
• “Those spreading the information are unlikely to be in the targeted jurisdiction, and are likely to be operating with the tacit or explicit support of their jurisdiction.”
• “It will be hard to police the spread of false information because truth is a nebulous concept. Creating trusted, reliable sources for information is a better strategy than punishing liars.”
• “Focus on creating a robust civic infrastructure. Government plays a big role in that.”
• “Individuals should be fined or sanctioned. Government might establish expedited courts to enforce this.”
• “The intention to cause harm is in this case the legally actionable quality. We have laws against hate speech for example, which might be extended to hatred aimed at non-humans, which might include climate change denial if the intention is to harm the environment.”
• “We need better data around who has been truthful.”
To return to the survey homepage, please click here.
To read anonymous responses to this survey question with no analysis, please click here.
To read credited responses to the report with no analysis, please click here.
About this Canvassing of Experts
The expert predictions reported here about the impact of the internet over the next 10 years came in response to a question asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 2 and August 7, 2017. This is the eighth “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 8,000 experts and members of the interested public to share their opinions on the likely future of the internet and received 1,116 responses; 777 participants also wrote an elaborated response to at least one of the six follow-up questions to the primary question, which was:
The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?
Respondents were then asked to choose one of the following answers and follow up by answering a series of six questions allowing them to elaborate on their thinking:
The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online
The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online
The six follow-up questions to the WILL/WILL NOT query were:
- Briefly explain why the information environment will improve/not improve.
- Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?
- What are the consequences for society as a whole if it is not possible to prevent the coopting of public information by bad actors?
- If changes can be made to reduce fake and misleading information, can this be done in a way that preserves civil liberties? What rights might be curtailed?
- What do you think the penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in taking steps to prevent the distribution of false information?
- What do you think will happen to trust in information online by 2027?
The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous seven “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in the global internet policy community and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR) and Organization for Economic Cooperation and Development (OECD).
We also invited a large number of professionals, innovators and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and European Union; the media and media-watchdog organizations; and think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications), as well as globally located people working with communications technologies in government positions; top universities’ engineering/computer science departments, business/entrepreneurship faculty, and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the canvassing questionnaire link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.
Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample.
The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.
About 74% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 39% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An additional 22% specified their primary area of interest as “other.”
More than half the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their Internet expertise or background, and this was noted where relevant in this report.
Here are some of the key respondents in this report (note that position titles and organization names were provided by respondents at the time of the canvassing and may not be current):
Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the Program on Information Science at MIT; Robert Atkinson, president of the Information Technology and Innovation Foundation; Patricia Aufderheide, professor of communications, American University; Mark Bench, former executive director of World Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs; danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures; Tim Bray, senior principal technologist for Amazon.com; Marcel Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communication; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Johanna Drucker, professor of information studies, University of California-Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at ICANN; Glenn Edens, CTO for Technology Reserve at Xerox PARC; Paul N. Edwards, fellow in international security, Stanford University; Mohamed Elbashir, senior manager for internet regulatory policy, Packet Clearing House; Susan Etlinger, industry analyst, Altimeter Research; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Mark Glaser, publisher and founder, MediaShift.org; Marina Gorbis, executive director at the Institute for the Future; Jonathan Grudin, principal design researcher, Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of information systems; Helen Holder, distinguished technologist for HP; Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University; Christian H. Huitema, past president of the Internet Architecture Board;
Alan Inouye, director of public policy for the American Library Association; Larry Irving, CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism; Christopher Jencks, a professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making and recommender systems, Clemson University; James LaRue, director of the Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, Web consultant, developer and activist; Mark Lemley, professor of law, Stanford University; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia Livingstone, professor of social psychology, London School of Economics; Alexios Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at Northeastern University; Giacomo Mazzone, head of institutional relations for the World Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor, Oxford Internet Institute; Alex ‘Sandy’ Pentland, member US National Academies and World Economic Forum Councils; Ian Peter, internet pioneer, historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab; Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo, longtime Silicon Valley-based technology forecaster; David Sarokin, author of “Missed Information”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Jack Schofield, longtime technology editor now a columnist at The Guardian; Clay Shirky, vice provost for educational technology at New York University; Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig Siegele, technology editor, The Economist; Evan Selinger, professor of philosophy, Rochester Institute of Technology; Scott Spangler, principal data scientist, IBM Watson Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard D. Titus, CEO for Andronik; Joseph Turow, professor of communication, University of Pennsylvania; Stuart A. Umpleby, professor emeritus, George Washington University; Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship, University of Virginia; Tom Valovic, Technoskeptic magazine; Hal Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Kevin Werbach, professor of legal studies and business ethics, the Wharton School, University of Pennsylvania; John Wilbanks, chief commons officer, Sage Bionetworks; and Irene Wu, adjunct professor of communications, culture and technology at George Washington University.
Here is a selection of institutions at which respondents work or have affiliations:
Adroit Technologic, Altimeter Group, Amazon, American Press Institute, APNIC, AT&T, BrainPOP, Brown University, BuzzFeed, Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic Design, Center for Democracy/Development/Rule of Law, Center for Media Literacy, Cesidian Root, Cisco, City University of New York Graduate School of Journalism, Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data & Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy Information Center, Farpoint Group, Federal Communications Commission, Fundacion REDES, Future Today Institute, George Washington University, Google, Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard Business School, Hewlett Packard, Hyperloop, IBM Research, IBM Watson Health, ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network, Internet Engineering Task Force, Internet Society, International Telecommunication Union, Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global, LearnLaunch, LMU Munich, Massachusetts Institute of Technology, Mathematica Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang Technological University, National Academies of Sciences/Engineering/Medicine, National Research Council of Canada, National Science Foundation, Netapp, NetLab Network, Network Science Group of Indiana University, Neural Archives Foundation, New York Law School, New York University, OpenMedia, Oxford University, Packet Clearing House, Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network, Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock Network, Telecommunities Canada, Terebium Labs, Tetherless Access, UNESCO, U.S. Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles campuses), University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester Polytechnic Institute, World Broadcasting Union, W3C, Xerox PARC, Yale Law.
