Elon University

The 2017 Survey: The Future of Truth and Misinformation Online (Q5 Anonymous Responses)

Anonymous responses to the fourth follow-up question:
What penalties should there be for harmful misinformation?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Internet, Science and Technology Project in summer 2017 to share their answer to the following query:

What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.

Follow-up Question #4 was:
What do you think the penalties should be for those who are found to have created or knowingly spread false information causing harmful effects? What role, if any, should government play in taking steps to prevent this?

Some key themes emerging from 1,116 respondents’ answers:
– Corporate actors profiting from information platforms should assist in improving the information environment.
– Individuals and cultures must do a better job of policing themselves; it’s best to generally avoid any sort of added regulatory apparatus.
– Governments should not be allowed to take any sort of oversight role.
– Some sort of regulation should be applied, updated or adapted to help somewhat ameliorate the problem of misinformation.
– While legal remedies may work locally at times, the global nature of the internet and the variability of the application of law negate their efficacy.
– Further legal approaches are not likely to be workable, nor are they likely to be effective.
– Free speech is a pivot point: Regulatory mechanisms may stifle unpopular but important speech just as much or more than they stifle harmful speech.
– The misinformation conundrum presents too many complexities to be solvable.

Written elaborations by anonymous respondents

Following are full responses to Follow-Up Question #4 of the six survey questions, made by study participants who chose to remain anonymous when making remarks. Some people chose not to provide a written elaboration. About half of respondents chose to remain anonymous when providing their elaborations to one or more of the survey questions. Respondents were given the opportunity to answer any questions of their choice and to take credit or remain anonymous on a question-by-question basis. Some of these are the longer versions of responses that are contained in shorter form in the survey report. These responses were collected in an opt-in invitation to about 8,000 people.

Their predictions:

An executive consultant based in North America wrote, “We should explore criminal and civil penalties.”

An anonymous professor of information science at a large US state university wrote, “It depends on the consequences of the bad information and damages it caused. This is very hard, because how do you judge whether it is false information or not? It may simply be one’s opinion, or it may be created and spread for national security purposes. Some false information doesn’t really cause too much of a problem for other people and society.”

An internet pioneer and principal architect in computing science replied, “It seems to me that existing laws are sufficient. I do not see a need for government action, which could be harmful.”

A research scientist based in North America commented, “It should be the same as for information today.”

An anonymous respondent wrote, “I’m not sure that there should be legal consequences because I still do not believe we can trust one group to accurately police truth.”

An anonymous international internet public policy expert said, “The penalties should be inspired and/or aligned with those for defamation. The government should play a role in supporting prevention and in issuing penalties.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “Penalties and processes exist already. Start with what we have in place and build from there. But the best way will be to have educated readers that can just stop supporting dubious sources. Only education can achieve that.”

A North American research scientist wrote, “Government is likely the only actor with authority to stem flows of false information. The penalties should be determined by the intent of the harms (and should be very severe for efforts to undermine democratic freedoms and security).”

A North American research scientist replied, “They should pay fines and have to watch videos of sad kittens.”

A leading researcher studying the spread of misinformation observed, “We currently don’t have the ability to know which actors spread false or harmful information, or even who pays for this type of influence. There are few rules in digital political advertising in the United States, for instance. While campaign spends are recorded with the Federal Election Commission, there is no record or detailed log of what was promoted/placed, who was exposed to messages, or how these messages (ads, sponsored posts, ‘dark posts,’ et cetera) were executed on and delivered through data mining. This isn’t like a campaign mailer or newspaper ad for a political candidate that we can clip out. Since there is no way to trace large-scale political influence operations back to specific actors, agencies, and funding bodies, there is no way to curtail this activity by imposing penalties.”

A futurist based in Western Europe said, “Yes, large penalties should be introduced, the same as for people who give misleading information in financial reports and in advertisements. Of course, this must be overseen by a body that can be understood as being independent, which will be hard. And it will not be a complete solution to the problem of fake information. But it will be an important contribution.”

A project manager for the US government responded, “Since actors are sometimes sanctioned by the government, it’s difficult to say. I am hoping that the separation of powers acts as intended and keeps checks and balances.”

A research scientist based in North America said, “Defamation laws should help. When it comes to other more political issues, e.g., global warming/environmental decay, maybe other mechanisms need to be put in place. I don’t think individuals can be held responsible for the distribution of false information; first and foremost it is the distribution ecosystem that allows low-quality information to bubble up.”

An associate professor of computer science at a university located in the South Pacific said, “Those spreading false information should be cut off from the systems they’ve used to do that.”

A distinguished engineer for one of the world’s largest networking technologies companies commented, “There are outright lies and then there’s stretching the truth. Some actions, for example creating widespread panic with fake news, must be prosecuted. There are already civil penalties for slander and libel that extend to the internet. Legislation and definition of this will be a protracted debate. In the immediate future, the penalty will be taking away the source’s access and will be done by the content and service providers (albeit a moving target).”

A longtime researcher and administrator in communications and technology sciences for agencies of the US government said, “None.”

An assistant professor at a university in the US Midwest wrote, “If a socio-technical solution is used to address this there can simply be in-system impacts. A person can be flagged in some way depending on the severity of the issue. Legal implications should only be applied depending on consequences. Consider current legal consequences of spreading false information (libel, slander, et cetera).”

A researcher based at MIT said, “The penalty should be commensurate with the harm caused, as in other civil cases involving fraud. The government should provide the judicial system that decides these cases. It should not attempt to become the prosecutor of truth.”

A media networking consultant noted, “Legally proving what is proposed will be difficult. A strong driver of fake reports is garnering clicks and this couldn’t be prosecuted. The government’s only role is to provide reliable information to the press. Failure to do so should be prosecuted.”

An anonymous respondent commented, “I support subscription and/or fully vetted and attributed information.”

A retired local politician and national consumer representative replied, “False information can only be countered by a trusted source with accurate information. Sanctions should be available to prevent spreading of fake news that incites violence.”

An associate professor at a major Canadian university wrote, “This should be handled by legal frameworks, rather than government monitoring and censorship.”

A professor of law at a major California university noted, “We already have laws against fraud, defamation, harassment, et cetera. Those are good models; we need to find a way to scale them. Government’s role should be to pressure other state actors that support or engage in spreading misinformation to enforce the law and to avoid spreading misinformation themselves. Beyond that is a very slippery slope. We should also ask about the role of corporate actors – it is Google, Facebook, Twitter, et cetera, that actually make many of these decisions.”

A professor and author, editor, journalist based in the US wrote, “It is very difficult to determine harmful intent – I am not sure government can play much of a role.”

A professor of media and communication based in Europe said, “It will be very difficult to assign penalties to culprits when platforms deny responsibility for any wrongdoing by their ‘users.’ Accountability and liability should definitely be assumed by platform operators who spread news and information, regardless of its source and even if unwittingly. Government has very limited power to regulate ‘fake news’ or ‘misinformation’ but it can definitely help articulate which actors in society are responsible.”

A professor at MIT observed, “We have libel laws, but again, anything we ‘weaponize’ will be used against ‘us’.”

An anonymous principal technology architect and author replied, “We should not have penalties based on intent – the idea that there should be penalties based on intent is a major part of the problem right now. This is one step in the destruction of freedom.”

An anonymous professor of media and communications based in Europe observed, “The intention to cause harm is in this case the legally actionable quality. We have laws against hate speech for example, which might be extended to hatred aimed at non-humans, which might include climate change denial if the intention is to harm the environment.”

An anonymous research scientist replied, “Penalties would require a government ‘Bureau of Truth’ to determine the ‘true’ story. Such a bureau would be inherently repressive and even more dangerous than the unrestricted spread of false information. It would resemble the situation in the Soviet Union at its worst.”

An anonymous CEO and consultant based in North America noted, “Trying various enforcement models on the current internet is just a waste of time. They won’t solve the overall problem.”

An anonymous research scientist based in North America wrote, “It should be treated similar to the way we treat defamation, fraud and other acts that involve falsehoods.”

An anonymous respondent observed, “News organizations and online networking websites could flag these with the aid of users. The government should have no role making it a crime.”

An anonymous respondent from the Berkman Klein Center at Harvard University said, “The misinformation should be refuted, and if they are doing so at a large-scale to influence public opinion unjustly, we should decide as a society if charges are acceptable.”

An anonymous internet pioneer and longtime leader in ICANN said, “Proportionality of response should take into account all of the costs of the negative externalities created by knowingly spreading the false information.”

An anonymous internet security expert based in Europe noted, “In the armed forces in time of war, this is treason.”

A professor and researcher of American public affairs at a major university replied, “This is a slippery slope. The key is to reduce incentives at the elite level for spreading misinformation.”

An anonymous ICT for development consultant and retired professor commented, “Government best plays a regulating role and laws are punitive; so both regulation and laws should be stringently applied.”

An anonymous research scientist based in North America wrote, “Focus on creating a robust civic infrastructure. Government plays a big role in that.”

An anonymous author, editor and journalist based in North America replied, “Fund education. Lots of it.”

A media director and longtime journalist said, “Social opprobrium. There’s a reason we hate liars. Malicious publication is already subject to defamation laws.”

An anonymous internet pioneer replied, “It depends on the intended and actual effects. But I think those laws already exist in many countries, maybe most.”

An associate professor in strategic foresight and innovation commented, “A smarter populace will be able to better detect crap information. So we need to just improve the sources and complexity of news, not vet it. The government is very often the problem. We should let the people decide, not some political ministry of truth.”

An anonymous respondent from the Berkman Klein Center at Harvard University noted, “People who knowingly spread false information can suffer civil penalties in an amount equal to the damages caused. Government can create those laws and try to establish non-partisan bodies to apply them. But how do you measure or quantify the damage done by spreading false information? Is it enough to decide that it causes measurable harm to individuals? How can we quantify the effect on our civil systems of governance and information sharing, on the integrity of the social contract that all members of a community act without intent to harm or deceive their fellows?”

An anonymous user-experience and interaction designer said, “Fines and/or restricted access to the information channels. The government should not be involved except as it relates to judiciary outcomes (otherwise people would suspect government of trying to assert political agendas).”

An anonymous North American program officer wrote, “If it’s on a very large scale with harmful effects, perhaps jail time or a fine is appropriate. But enforcing this might be difficult. It might be hard to prove who created the false information or their intent.”

A professor of law at a state university replied, “I have no problem with criminalizing knowing false statements. That is not in my view free expression. But the Supreme Court has often protected lying in politics. We need a constitutional amendment – but of course will never get it.”

A professor and chair in a department of educational theory, policy and administration commented, “Some of this work can be done in private markets. Being banned from social media is one obvious one. In terms of criminal law, I think the important thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain venues, but obviously not in all. Federal (and perhaps even international) guidelines would be useful. Without a framework for regulation, I can’t imagine penalties.”

A head of systems and researcher working in Web science said, “Public ridicule based on truth and when applicable lawsuits as mentioned in previous answer. Government needs to be held accountable as they cooperate with Super PAC agendas that are behind a good number of disinformation campaigns.”

A leader of internet policy based in South America noted, “Transdisciplinary Political of Digital Violence: https://www.youtube.com/watch?v=orB0vmB67-8.”

A lecturer at the University of Tripoli in Libya said, “Activating cyber-crimes law could help to reduce false information. The government could also try to raise people’s awareness through television or NGOs.”

A principal network architect for a major edge cloud platform company replied, “Slander and libel are infrequently applied to those who issue false or malicious statements in print or on social media. In particular, malicious cyber bullying is largely not prosecuted even when it is in fact a crime.”

A technologist specializing in cloud computing observed, “Shouldn’t that be based on actual measurable consequences?”

A research scientist said, “Are there not penalties already available?”

A senior solutions architect for a global provider of software engineering and IT consulting services wrote, “I don’t think it’s wise to punish those who create false information, but there are ways to stop those who spread it. One of the tacit bargains of the Western press has been access to inside information and a privileged place in society in return for verifying information and validating sources. I don’t see any reason why that same bargain couldn’t hold today.”

A professor and researcher based in North America noted, “It depends on the damage the false information causes.”

An institute director and university professor said, “Criminal penalties. But it’s not realistic.”

A professor at a major US state university wrote, “All the parties involved in dissemination as well as creation should take responsibility and be charged for any harms that result.”

An anonymous respondent replied, “This requires fair judgment, and – when certain – the penalty would have to be fairly harsh, especially if it has harmful effects. Society has to decide what steps can be taken and are acceptable and government with proper guidance and advice from experts will need to act.”

The dean of a school of information science commented, “New laws? Probably not.”

A professor based at a North American university noted, “Require them to do closely networked community service directly connected to the field of perpetration.”

A director of standards and technology who works with the Internet of Things said, “Government should pass laws to avoid this, and have penalties.”

An anonymous respondent based in North America said, “False information and propaganda have been around since before the printing press. The government should not curtail broadband competition and investment.”

The assistant director of a digital media and learning group said, “It would depend on the scenario.”

A chief executive officer wrote, “Government should play a minimal role. Identifying and publicizing falsehoods should be the primary response.”

A retired professor and research scientist said, “Penalties might include fines or prison, depending on the harm. Maybe look at current efforts to prohibit non-consensual revenge porn as a model. Look for analogies in existing law based on outcomes. The government role is hard to accomplish due to the global nature of the internet – e.g., Wikileaks and Julian Assange.”

A retired senior IT engineer based in Europe observed, “Heavy penalties, prison, laws must be developed and also must be internationally applied.”

A North American researcher replied, “Kevin Mitnick was denied computer access. Denying hackers/fake news creators/people who incite violence access to media from radio to blogs would be appropriate. How? I don’t know.”

A political science and policy scholar and professor said, “A limited and cautious role. I’m open to the idea that this could be a crime, but very wary of government using such laws to target people they don’t like. As an example, it’s easy to imagine a Trump appointee trying to use such a law to prosecute reporters. We have to realize these laws would not be implemented by an idealized or hypothetical government, but by our real-life, deeply dysfunctional political system.”

A professor based in North America observed, “Government should not play a role in this – doing so seems like a direct impingement on media freedom and a slide towards dictatorship.”

A policy analyst for the US Department of Defense wrote, “Confinement, detention, custody, captivity, restraint.”

An independent journalist and longtime Washington correspondent for leading news outlets noted, “Zero and none. Sounds like totalitarianism.”

A professor at a major US university replied, “The question presupposes that communicators have a duty of care, honesty, et cetera. Who imposes and enforces such a duty? It’s buyer beware, particularly when so much dissemination of news, et cetera, can take place anonymously.”

An associate professor and journalist commented, “The UK has been progressive in tackling trolls through the legal system, so the US could learn from that experience.”

A professor at a Washington DC-area university said, “Harms vary widely. Government can look to deter some state-based distributors, by, e.g., declaring democratic elections to be critical infrastructure and threatening retaliation for attacks.”

The managing editor of an online fact-checking site replied, “They should at the very least lose their ability to put out disinformation. I’m torn about legal repercussions beyond that, as I’m a free speech nut, but definitely sanction their ability to spread disinformation.”

A post-doctoral fellow at a center for governance and innovation replied, “Jail time and civil damages should be applied where injuries are proven. Strictly regulate non-traditional media especially social media.”

A senior vice president of communications said, “We don’t care about shame any more, and that used to be enough.”

An editor at large noted, “Libel, slander laws, criminalization of fraud, hoaxes.”

A research scientist from Latin America replied, “The highest penalty available. Manipulating society is a crime.”

An IT director said, “The only way one could stop such people, were they identified, would be to make it prohibitively expensive for them to keep doing it or to basically make them blind, deaf and dumb and unable to write with their hands.”

A librarian based in North America said, “Intent is a huge question here. How do you determine that? If it’s truly a bad actor causing bad things to happen with false information, there should be a penalty that is scaled to the harm done. I think governments should stay out of it, though, because they are often the bad actors in this (see Russia, Turkey).”

A director of research for data science based in Spain observed, “No penalty for individuals. Penalties for organizations. Governments should educate the public in information verification practices.”

An anonymous consultant noted, “The government should follow the money and hold advertisers accountable for paying to be on websites that are spreading disinformation.”

A futurist based in North America wrote, “Some news can be viewed as true by some people and fake by others depending on their perspective. Something happens and everyone’s account slightly or radically differs. Who gets to decide ultimately? The Supreme Court?”

A professor of rhetoric and communication noted, “It would be difficult to do this because it would be impossible in today’s environment to get consensus on what counts as ‘false information.’”

A student researcher based in North America commented, “If there is some way to limit their ability to distribute information in the future, that seems the punishment that most fits the offense.”

An anonymous futurist/consultant said, “Efforts should be focused on prevention and treatment, rather than government penalties. Rather than government intervention, platforms like Reddit and others should work with their user base to establish rules around the spread of harmful and misleading information.”

An anonymous MIT student noted, “I don’t think that the government should interfere – gets into messy problems with a free press.”

A research professor of robotics at Carnegie Mellon University said, “Should be similar to libel laws – enabling citizens to sue for the spread of false information, even if it does not specifically libel any one person.”

An assistant professor of sociology at a Southeast US university said, “Public shaming via exposure of their falsehoods? I’m skeptical about government involvement as it could be read as the status quo seeking to defend/protect itself.”

A CEO for a consulting firm said, “I would analogize this to the libel laws. I also would support some level of criminalization as long as the bar was set high enough so that innocent inaccurate statements were not caught up in it.”

A professor of information studies based in Europe replied, “Lying and spreading false information with an aim of harming directly or indirectly others should be punishable (as it indirectly tends to be even in the current judicial systems). Government and legislators should make sure that winning such cases does not require a lot of wealth, engaging in long-term and unsure cases, and hiring expensive lawyers so that everyone in the society can have an opportunity to win such cases.”

A research psychologist commented, “For a private individual – none. For a person in government service, representing an official government position, there should be penalties.”

A researcher/statistician consultant at a university observed, “Governments are involved in spreading fake information. We need ‘ombuds’ groups to investigate and apply punitive measures. Punitive measures – loss of employment, if employed. Loss of contract – if on contract. Fines – if unemployed. Also maybe some community work to be completed.”

A vice president for learning technologies emerita said, “People spreading false information to cause harmful effects should be identified as such (and in what circumstances). Every new, evolving, and mature organization (public or private) should play a role in preventing false information.”

The president of a consultancy observed, “The tech companies who made millions on fake news by ignoring it should be held accountable. Government is so far behind on everything digital, their role has to first be to educate all government employees, then the citizenry, and sustain updates as diverse new false news strategies are identified.”

A partner in a services and development company based in Switzerland commented, “A bad reputation is the best penalty for a liar. It is the job of society to organize itself in a way to make sure that the bad reputation is easily visible. It should also extend to negligence and any other related behaviour allowing the spread of misinformation. Penal law alone is too blunt a tool and should not be regarded as a solution. Modern reputation tools (similar in approach to what financial audits and ratings have achieved in the 20th century) need to be built and their use must become an expected standard (just like financial audits are now a legal requirement).”

A research manager and senior lecturer in new media based in the South Pacific region noted, “Government does have a role to play to prevent the distribution of false information – but just how far that role extends is another question.”

A web producer/developer for a US-funded scientific agency noted, “Penalties should be based on the harmful effects (if proven). The government should have the highest role (enforcement) should laws be passed to police this issue. Laws could follow the pattern currently in place for transmitting via other methods (before or besides the internet).”

A principal data scientist who works for IBM suggested, “The loss of speaking/authoring privileges for a period of time on public venues.”

A journalist who writes about science and technology said, “It depends on the situation. We have existing laws around fraud that might cover some false information. If the false information causes injury or harm, we have laws to cover that as well. The government should sue fraudsters, much the way the FTC currently sues businesses that make false claims or violate laws.”

A communications professor based in Hong Kong commented, “The penalties should probably be similar to the ones we currently have for defamation and libel. I don’t think these should be criminal charges. I think government should have a limited role in the prevention of distribution of false information.”

A retired university professor noted, “Why not adapt the legislation that applies to printed information?”

A professor at a major US state university said, “Severe – if they can be convicted, which few can unless someone sets themselves up as the ‘truth arbiter.’”

A cybersecurity engineer based in North America commented, “The government should not play a role in preventing the distribution of false information. What agency would be tasked with this? The Ministry of Truth?”

A publisher said, “Deny them access to the internet permanently.”

A senior fellow at a center focusing on democracy and the rule of law observed, “Treat these cases the same way as libel or defamation – same penalties, same guarantees, same adjudication process (e.g., first appeal to newspaper/circulator of information, then go to court if information is not repealed and correction is not published/publicized by the originator/circulator of information).”

A professor of information technology at a large public research university in the United States said, “There are existing legal frameworks for pursuing civil or criminal action against purveyors of false statements. Libelous, slanderous, malicious, and negligent speech are all punishable or actionable under existing law.”

A retired educator observed, “Government is inherently behind the curve in legislating on all fronts.”

A researcher at an institute of technology replied, “Before taking any steps, we need to define what ‘false’ and ‘harmful’ mean to avoid taking too drastic a step against the fundamental and important right of free speech.”

An eLearning specialist noted, “I don’t think the government should – or even wants to be – involved in anything smaller than crying ‘fire’ in a movie theatre.”

A professor at a major US university noted, “There should be no penalty.”

A principal engineer said, “It should not be considered criminal. However, this does not prevent recovery via civil action for those damaged. However, in the United States this may raise constitutional issues.”

An associate professor of sociology at a liberal arts university replied, “Existing laws against fraud, creating public panic, making terroristic threats, and so forth are adequate for handling such cases.”

An anonymous journalist urged, “The penalties should correspond to the actions, so it’s impossible to name one penalty that will be fitting in all such cases. Some such manipulation is already covered by law, e.g., if you try to manipulate share prices or financial markets or you libel someone, whereas when the manipulation is conducted by a rogue state seeking, e.g., to destabilise a competing state or economy it’s much more tricky to handle. I certainly think we will see a lot more of the latter kind of ‘cyber warfare.’ In other cases, such as misinformation and false news/research on the ‘dangers’ of vaccination or other health-related issues, we have very little effective legislation (and in my social media feeds, false health information is much more widespread than false political information). Overall, we need to equip people to critically evaluate information better through our education systems. We need to create more awareness, and more informed citizens, and there will be need for new legislation in areas such as algorithmic manipulation, but I don’t see how one single measure can solve this issue.”

A researcher based in Europe commented, “I have contradictory ideas regarding this subject.”

A CEO and advisor to many technology projects wrote, “In my opinion we will need to increase libel laws and penalties, exponentially increased for public figures and those in office, to create a more trustful society. We need better data around who has been truthful.”

A futurist based in North America said, “Depending on how harmful the information is, there might be need for some repercussions. If it is weaponizable – e.g., instigation to terrorism or lobbying for unhealthy lifestyles – there should be a possibility to shut it down and even some form of punishment for those spreading it (if identifiable).”

A senior manager for a major internet infrastructure organization commented, “Hate crimes are punishable by law. False information that causes harmful effects should be criminalized by law.”

A research scientist based in Europe said, “Penalties: Reputational damage to the responsible party. But there are many special scenarios. For example, if the conduct meets the standards for libel, then libel laws should apply. If the conduct is carried out by a foreign government aiming to influence domestic political processes, then governments must be able to consider sanctions.”

A software engineer commented, “Determining penalties is up to the legal system and the impact of the ‘harmful effects.’ Government has to create and back the legal system that allows for such prosecution.”

A research scientist based in North America commented, “Those spreading the information are unlikely to be in the targeted jurisdiction, and are likely to be operating with the tacit or explicit support of their jurisdiction.”

The president of a business said, “We can use what we already have and perhaps a few more. Libel laws; false advertising laws; laws against breach of contract; laws against making false scientific claims for personal or corporate gain; penalties for victim-targeted hacking and doxxing; laws to protect the integrity of the vote and bar foreign interference in elections; laws preventing corporations from having the rights of people.”

A professor of sociology with expertise in social policy, political economy and public policy said, “Purveyors must be required to produce clear evidence and, when they cannot, their failures must be publicized. As far as penalties are concerned, those must be commensurate with the kind of information that is intended to mislead.”

An assistant professor of political science at a US university wrote, “Minor financial penalties with the possibility of jail time. However, I think a lot of the responsibility ought to fall with information aggregators such as Facebook, Twitter, and web hosting services.”

An anonymous lecturer said, “Fine them.”

An anonymous editor and publisher commented, “Government role: enforce against libel, slander, TM, (c) laws and similar. Adopt new legislation as necessary; but it will be very hard as the First Amendment is interpreted today.”

A former director of a major global information freedom organization wrote, “Let people know who’s creating it; name names – shame them.”

A professor of communication replied, “It depends on the severity; we must use discretion, but it is interesting that people are already being prosecuted for social media postings that led to suicide.”

A fellow who works at a university in the UK said, “There should be penalties where there is clear intent to harm or undermine. However, I am concerned with how the narrow economic interests that subtly shape the information landscape are being obfuscated by technologies which are claimed to be objective and impartial but really aren’t (AI/machine learning, predictive analytics and the like).”

An anonymous researcher observed, “We don’t really know what’s needed. There are already laws on the book regarding libel, slander and fraud. The larger problem is that perpetrators may not reside in the country where they’re spreading false information, making any criminal prosecution difficult.”

A vice president of professional learning commented, “I’d hope government would support strong educational policy with intent to teach new media literacy – reading is no longer enough to be called literate. A media-literate public that thinks critically should be the goal of a strong educational system.”

An anonymous activist replied, “We certainly need laws and enforcement about this. Penalties – electronic disconnection? Public tagging?”

A researcher at the University of Oregon commented, “There should be penalties but a government run by fascists can’t be trusted to play a role.”

An associate professor at Brown University wrote, “Essentially we are talking about the regulation of information, which is nearly impossible since information can be produced by anyone. Government can establish ethical guidelines, perhaps similar to the institutional review boards that regulate scientific research. Or it can be done outside government, like a better business bureau.”

An anonymous respondent based in Asia/Southeast Asia replied, “The death penalty.”

An associate professor at a major university in Italy wrote, “Platforms should reduce the circulation of this false information. I don’t see a role for governments.”

An internet pioneer/originator said, “Defamation of Public Trust depends on who defines ‘The Public’ doesn’t it? And that is the fundamental problem that will always remain.”

An analyst at Stanford University commented, “I can’t imagine who would adjudicate this. What is ‘harmful effect?’”

An author/editor/journalist wrote, “To attempt to punish after the fact is pointless. Herd immunity to misinformation is far more effective.”

The former chair of a major online civil liberties organization said, “Penalties should be limited to the existing ones for libel and incitement.”

A vice president for public policy for one of the world’s foremost entertainment and media companies commented, “A more fruitful line of inquiry is the main platforms – their responsibility and practices rather than individuals and whatever soapbox they can find.”

An anonymous respondent observed, “There should be financial fines and/or imprisonment. Obviously that requires government intervention. I don’t like the idea of a mob rule pressuring organizations. That said, when advertisers pull out of media, it makes a statement. So one way is to engage advertisers (firms) to be sensitive to this as well through consumer pressure and perhaps organizations (e.g., the advertising research council).”

An anonymous respondent who works with non-profits and mission-based organizations said, “I value our First Amendment rights over almost everything else in this arena, and would strongly object to any policies that hamper free speech. The best thing that government can do is set an example by being transparent, accountable and dedicated to refraining from passing off nonsense as verifiable fact.”

A consultant replied, “The appropriate penalty is to be blackballed by journalists who retain credibility. Government should stop distributing false information, particularly about itself.”

An online experience strategist said, “Penalties should be commensurate with damages inflicted and should include accurate, sticky notation on future communications. Possibly losing the right to communicate at all, though that should be reserved for repeat offenders and very high damages.”

A project manager based in Europe commented, “Banned from the internet tools they used to create the information.”

An historian and former legislative staff person based in North America observed, “Seems more like the president himself is the one creating fake news. No, government should be quiet, and citizens should NOT retweet or post on Facebook Trump’s asinine comments. He should get less publicity NOT more.”

A computer information systems researcher noted, “Public spanking.”

A futurist/consultant based in Europe said, “Libel, fraud, incitement are existing crimes. The justice system has a current role in this area.”

A futurist/consultant based in North America said, “I don’t believe that we have anything resembling the legal or ethical infrastructure to answer this question in a consistent way. At the moment, given the tools we have, the best we can hope for is just to flag the really bad stuff and try to isolate it.”

A professor of humanities noted, “Penalties are a nice idea, but who will decide which instances of ‘fake news’ require greater penalties than others? The bureaucracy to make these decisions would have to be huge.”

A North American research scientist replied, “The ‘intent of causing harmful effects,’ i.e., willful acts, should always be prosecuted to the full extent of the law (civil and criminal).”

A senior research fellow working for the positive evolution of the information environment said, “Penalties will only reinforce the feeling of persecution. We should find ways to discredit them, and to lower their visibility, in real time and automatically.”

A small-press publisher based in North America commented, “Financial penalties, then loss of broadcast license. If slander or libel can be proven, judicial penalties.”

A professor and researcher noted, “If we are able to identify the sources of false information that caused harm, they should be held accountable the same way as for physical harm.”

An anonymous respondent said, “Bad actors should be banned from access, but this means that a biography or identification of some sort would be necessary for all participants.”

A senior lecturer based in Asia/Southeast Asia commented, “It depends on the goals of the perpetrators and the consequences of their action.”

A retired public official who was an internet pioneer replied, “Normal judicial remedies; extradition; international conventions.”

A principal with a major global consultancy observed, “Ah, shouting ‘fire’ in a crowded theater (when there is no fire, of course). Lock them up. Toss them off a bridge. Everyone else will get the message. But remember the First Amendment and the essential role of individual responsibility in a free society.”

An engineer based in North America replied, “Censure.”

A CEO based in Canada replied, “Build on current libel and slander laws.”

The president of a center for media literacy commented, “The barriers to prosecute are high and they should be. One person’s facts are another’s perceptions. Intent is extremely hard to prove, and it should be. For example, fraud cases or deceptive advertising cases have high thresholds for proof. Government should tread very lightly, as it has done in the past.”

A senior research fellow based in Europe said, “Media literacy is where it’s at, and teaching that needs to start in high schools.”

An economist based at one of the top five global technology companies commented, “This is a very tricky problem due to the free-speech issue. I would hope that existing practices, such as the Brandenburg test, could deal with issues such as inciting violence.”

An anonymous respondent based in Europe wrote, “Publicity, monetary fines and definitely jail terms, depending on the scope and consequences of spreading the false information. As for the government role in prevention, it should not be different than in any other area, including sound legal regulation, strengthened capacities to identify false information and stop it at early stages using legal mechanisms, education and awareness-raising of citizens, as well as higher ethical standards (or zero tolerance) for public officials walking on the edge.”

A principal research scientist at a major US university replied, “One person’s harm is another person’s virtue. The government can’t impose penalties without running afoul of the First Amendment.”

A professor at a major US university said, “Current laws should be a good basis for legal penalties, but those who spread false information will have reduced reputations and reduced capacity to promote future posts.”

A researcher of online harassment working for a major internet information platform replied, “I don’t think the government should be involved.”

A postdoctoral scholar based in North America wrote, “None. However, if we are talking about companies such as Facebook, I do think there is room for discussion on the federal level of their responsibility as, basically, a private utility. Regulation shouldn’t be out of the question.”

A North American research scientist observed, “Civil penalties for tortious spread of false info.”

A professor of political economy at a US university wrote, “None and none.”

A research director for a major US national organization commented, “I believe in consequences, but since I see the government as the primary offender at this moment, I am not sure that we should rely on the government to prevent the distribution of false information!”

A faculty member at a research university noted, “Penalties are not the answer. Look to prison abolitionists for better methods.”

A director of research said, “Can we leverage our existing libel laws?”

An anonymous respondent wrote, “No penalties.”

An author and journalist based in North America noted, “People who publish lies should face the same penalties that journalists have always faced: You lose your publication, your savings, your reputation and your future. The courts should be the only place that government touches this process.”

An anonymous research scientist commented, “Penalties for members of the media or political establishment should include, at minimum, the revocation of press credentials (similar to a doctor or lawyer who loses their license and thus ability to practice) or formal censure.”

A North American politician/lawyer wrote, “The penalty should depend on the context. Some penalties should be non-governmental, such as terminating users from social platforms such as Twitter. Role of government should be limited to areas where there is a clear legal impact, which could include threats to public safety.”

A vice president for a company based in North America replied, “Left to its own devices, the market will likely begin reputation tracking (similar to the reputation tracking of eBay). Bad actors would suffer loss of reputation and influence. Let the market of ideas work out its own solution. Keep government meddling to a minimum; it’s almost universally destructive.”

An anonymous research scientist observed, “Since we have little common view of shared truths anymore, this would be a very dangerous thing to do. It would rapidly become weaponized.”

A senior analyst who works for a major US agency said, “It may be time to consider if certain bad behavior which could be tolerated in a pre-internet public square is no longer tolerable in the age of the internet. For example, we have more tolerance for junk snail mail than spam. Perhaps we should have less tolerance for businesses who create fake news that generates clicks that generates ad revenue. In the era of newspapers and pamphlets, this kind of fraud was difficult to implement, but in the digital era it is too easy. It is fraud, which is not protected as freedom of speech.”

A journalism professor and author of a book on news commented, “The bigger question is, how do you measure impact and harmful effects? The media calling a race before all the polling places are closed can cause harmful effects. Trolls are information bullies who just like to kick up a ruckus and get a rise out of people. Perhaps there could be some sort of disincentive for them (since people can’t seem to universally just shun them or shut them out – ignoring would be the worst penalty for them.) Government role? Yikes! There should not be any government role – otherwise we are China. Letting whatever current regime is in the White House control what constitutes truth and which can prevent the distribution of information that does not support its truth – that would truly end the American dream.”

A North American research scientist observed, “Put these efforts in the same category as libel and slander laws. Government’s role should be limited to what it can do now – the courts punishing offenders.”

A self-employed consultant said, “None. Absolutely none. Read Plato’s ‘Gorgias.’ Hold the gullible accountable for culpability.”

An anonymous respondent noted, “Expand and fully enforce laws regarding fraud, misrepresentation, and perhaps issues with creating public nuisance.”

A chief operating officer replied, “The government should stay out of it. The government made no effort to supervise the content of pre-internet news media. What makes anyone think the government should be involved in supervising the content of information disseminated via the internet?”

A chief operating officer of a global nonprofit focused on children’s issues wrote, “Fines and/or custody.”

The CEO of a research and strategy firm wrote, “The government is distributing false information. Libel laws do not solve this problem. Similarly the search algorithms used by definition skew and weight news reporting. While the penalty for a tweet is nonexistent with little context it is promulgated and debated. This creates a ‘shallow’ thinking environment. Government must use longer position papers, be more transparent and enable proper debate of issues. Increased government secrecy and lack of transparency in negotiating on behalf of the people creates an environment of distrust.”

A professor emeritus said, “I do not believe most people who create fake news believe that they are working for a cause that will have harmful effects.”

A senior staff attorney for a major online civil rights organization said, “This is a matter for standard First Amendment law, which of course has a long tradition of having to deal with seditious libel and politically oriented defamation, i.e., the tendency of government to abuse power over speech. Really need to be careful here.”

A researcher based in Europe replied, “I’m no expert in penalties. It is difficult to generalize about government: there are those that spread false news, others that benefit from them… So if we want the government to have a role we first need to make sure it actually wants to fight false information.”

The managing partner of a technology consultancy wrote, “Often the actors are intertwined with the government sponsoring or promoting false information. It’s a propaganda tool to control the masses. The difficulty lies in establishing the role of government in preventing the distribution of false information when its authorities are the ones commissioning the acts.”

A lecturer in artificial intelligence at a major UK university commented, “Freedom of speech means there is no crime in this. Educating the public to identify misinformation is needed.”

A CEO and research director noted, “Government: Depends on the actor. If a nation-state, it’s a national security issue. Individuals: Depends on how harmful effects are eventually defined (bigger question).”

An associate professor at a US university wrote, “This is too tricky for a clear brief answer. This can create a slippery slope where government can punish speech it brands as ‘creating harmful effects.’ Instead, communities and journalistic organizations have to develop their own clear standards and educate the public on how to consume information and why they should care about the way they consume that information.”

An internet pioneer in cybersecurity and professor at a major US research university commented, “Individuals should be fined or sanctioned. Government might establish expedited courts to enforce this.”

A senior researcher at a US-based nonprofit research center replied, “Government should not be involved. Propaganda has been a stalwart of the state since states first began. It is not their job to punish misinformation (outside of legal entrapment and espionage activities). This is not a them-vs.-us kind of scenario. The state will always have their own communication machinery and it can exist in a free and open society alongside public and private news organizations.”

A senior lecturer in communications at a UK university said, “Libel laws should serve as a template.”

A research scientist based in Europe observed, “Before establishing any ‘penalties’ (which will and should always be open to debate), a society must first identify those producing false information and maintain an open public database that anyone can access. All institutions involved in the chain of information distribution should be involved and held responsible, just like in criminal activity such as corruption or organized violence. If information cannot be reliably sourced, it should not be distributed, and if it is distributed, the distributor should be held responsible.”

A vice president for an online information company noted, “Willful distribution of harmful falsehoods looks like a tort, but the law is slow to respond and this is an area that is difficult to prosecute – think libel and slander and think about public figures, et cetera. If real harm could be quantified, that might form a basis for penalties.”

A program manager for the US National Science Foundation wrote, “We already have laws against libel and defamation. Not clear how to prove intent to harm. We have a lot to learn and work out here, but I believe there is a community of people who are studying ethical implications of this new world.”

A professional emergence theorist replied, “Major efforts should be financed to stop reproduction of false information.”

An anonymous respondent from North America wrote, “Demonstrably false? Then the libel/slander model is appropriate. But we must tread lightly, with a high burden of proof on plaintiff.”

A researcher affiliated with a company and with a major US university noted, “Governments should take a role. Manipulation of news should be treated similarly to manipulation of financial data or personal reputation – i.e., subject to legal challenge and legal penalties. Enforcement across borders will require considerable international work to establish protocols. Interpol, et cetera, and the EU are good starting places.”

A professor on the faculty of design at a university in Australia replied, “One idea is for government to financially support investigative journalism and relevant research dissemination, with grants and other funding opportunities. Now that the business model supporting independent high-quality journalism is failing, it may need the support of public entities to continue its vital role as the Fourth Estate. University researchers still maintain high credibility with publics in generating high-quality, non-biased information and governments and universities should capitalise on this status.”

An anonymous North American research scientist said, “Financial penalties. Advertisers have to shun the perpetrators. Government can have no role in limiting speech, only in setting the economic conditions for operating as an information purveyor.”

An anonymous business leader noted, “Government must lead and the penalties must be severe enough to curtail willful deceit.”

A distinguished professor of information systems replied, “Perhaps these could be similar to the penalties for perjury or libel?”

An associate professor of political science at a university in the Southeastern US commented, “Generally they should be subject to civil suits. All of this starts with the disallowance of anonymous posting. Everyone should have to own up to their postings on the internet.”

A technology journalist with one of the leading news organizations in the US said, “This should be punished as a felony.”

A North American research scientist observed, “I would like to see some criminal charges created for this.”

A researcher based in Europe said, “There should be no penalties but there should be accountability. The internet should be ruled by citizenship not by government.”

A public-interest lawyer based in North America commented, “I don’t think there is a role for government outside of intellectual property enforcement.”

A self-employed marketing professional observed, “Penalties should be jail and loss of internet privileges. It may be too dangerous to our freedoms if government controls the news.”

A former chief academic officer and professor replied, “The first step should be public disclosure. We also need a reexamination of laws about the spread of false and harmful information.”

An anonymous editor based in North America noted, “The government should not be involved in curtailing the flow of information, even if it is inaccurate or misleading. The penalty should be economic, based on lack of trust by a paying public.”

A senior policy researcher with an American nonprofit global policy think tank said, “It should be criminalized just as plagiarism and falsification are now in the sciences.”

A researcher based in North America wrote, “The burden of proof required to make this case is too high to make penalties effective.”

The dean of one of the top 10 journalism and communications schools in the US replied, “The same role and same penalties that apply to print and broadcast media today. The 1990s internet laws were to promote growth, which happened, but there never was a subsequent adjustment under current laws, which is where the EU may be leading.”

A research scientist at Oxford University commented, “Rein in the power of unaccountable platforms that have become media companies – tax incentives or rewards to companies that behave ethically – force platforms to be more transparent.”

An anonymous respondent replied, “There must be serious consequences. I am not sure the government can be the body to enforce this in an international situation.”

An anonymous business leader said, “We do not need new laws.”

A longtime technology writer, personality and conference and events creator, commented, “Make cyberbullying a legal issue that can be brought to the courts, and question this: If Citizens United is a ‘person,’ then why isn’t the ‘media’ treated like a person?”

An associate professor of urban studies wrote, “The First Amendment prevents government from taking on this role. It must come from other actors – universities, nonprofits, media outlets, and society as a whole.”

A research scientist based in Europe noted, “Penalties should depend on intended prejudice. Spreading ‘fake news’ with the intention of changing the outcome of an election should probably be considered a severe crime. Governments and lawmakers might need to adapt the law to define what can be considered false information, set up rules to judge the intended prejudice and decide on suitable penalties.”

A retired consultant and strategist for US government organizations replied, “Penalty decisions will be considered, constitutionally, one specific instance/action at a time. We are in for a long, long haul as this stated problem becomes a focus of citizen opinion and behavior.”

A senior research scientist who develops electronic publishing, media and technology for learning quipped, “Vote them out of office.”

A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Differential standards should apply to differential speakers. Those in authority who provably lie should be subject to real sanctions; but free speech is also important.”

A professor of journalism at New York University observed, “Inasmuch as there’s fraud, there should be appropriate punishment. But it needs to be post-hoc rather than prior restraint.”

An independent systems integrator wrote, “This should be treated like any social and political offense. Penalties should be according to the ‘crime’ and intended or actual harm done to society. Free human intercourse is fundamental to an open, evolving society. Governments will play their roles according to their particular societies’ agreed-upon values.”

A communications specialist at a major US university said, “This is a tough question, for what is a lie? If you believe it, then it is true for you. The problem is some of these sites are created for the sole purpose of alarming people, using clickbait to entice people to click. The news is bogus and the real purpose is to earn money through ad revenue. Reviews on sites might be helpful. This problem is a significant cultural shift. We visit and trust the sites that align with preconceived beliefs. Government intrusion will be seen as censorship, which I am against. I think the answer is online publishers should have entry or eligibility into professional associations – ‘This site is a member of the XYZ organization’ – and this could be ubiquitous on legit sites. Those who don’t belong will not belong for obvious reasons, and viewers/readers can take that into consideration.”

A researcher investigating information systems user behavior replied, “Similar to the legal action against those who yell ‘fire’ in a crowded theatre with the intent to cause harm. Those spreading false information with the *intent* to cause injury (broadly defined) will need to be held accountable (how one will do this trans-nationally seems difficult to imagine).”

An anonymous respondent noted, “Penalties in law should be only for the most serious acts; extending enforcement wider would have the opposite effect – there is already a strong conspiracy culture, and this would cement it. Governments should support institutions that add disinterested voices based on evidence.”

An assistant professor based in North America replied, “Punishing people for spreading fake news seems like the purview of dictators. I’m not sure that a punitive framework is best, but rather thinking about how to reshape the media environment so that false information is made visible as false.”

A development associate for an internet action group in the South Pacific observed, “Governments should impose penalties that are serious enough to compensate for the harmful effects on society, especially if it is premeditated with harmful intent – life imprisonment?”

A senior political scientist wrote, “None.”

A content producer and entrepreneur said, “It depends upon the impact. Some actions might rise to the level of criminal activity and should be prosecuted as such.”

A senior researcher and distinguished fellow for a major futures consultancy observed, “Malicious, destructive lies are clearly criminal, but we’ll need a much deeper understanding of the sciences of information toxicity, and a strong definition of ‘intent.’ Otherwise we’ll lose the voices of the great satirists like SNL, The Onion, Jon Stewart, Andy Borowitz et al.”

A chief marketing officer wrote, “When people knowingly spread false information or are negligent in verifying the information, they should be held liable to any injured party who has a legitimate claim of damages. No, the government should not be the arbiter of truthfulness.”

A director for freedom of expression of a major global citizen advocacy organization said, “I don’t believe that we should penalize individual actors for this. Perhaps if NGOs, or churches, or politicians are engaged in spreading false information, they should be penalized, but I don’t want a scenario where individual actors are subject to the same rules.”

An anonymous professor of economics based in North America noted, “Maybe establish a ranking system showing how many bits of information have been wrong; this way folks can see reliability.”

A research scientist with IBM Research noted, “I’m not sure how you could establish fraudulent intent. Someone could always claim some piece of ‘fake news’ was true when they shared it.”

An associate professor of communication studies at a Washington DC-based university said, “That’s too complex a question to answer in a form text entry box. There are some situations (e.g., commercial fraud) in which it makes sense to punish specific instances of speech, but those tend to be far more well-defined than ‘fake news.’”

A town council member based in the Southeastern US commented, “I don’t really think government should censor information even if it is false. I’m with Milton in that way.”

A principal architect who works for one of the top 10 communications technologies companies wrote, “Penalties should be the same as those meted out to criminal actions. Government and agencies have a role to play in preventing disinformation.”

A professor emeritus of history at a major US West Coast university replied, “First, we need to end anonymity on the web. We should attach speech to identifiable persons. People act more responsibly when their identities are known. That goes for corporate persons as well as individuals. Second, the imposition of legal penalties requires government participation in public discourse. That is worse than tolerating ugly speech by people who are willing to suffer the consequences to their reputations.”

A research scientist based in Moscow said, “I suppose that measures in this situation will destroy the alternative nature of the internet as a space.”

A senior global policy analyst for a major online citizen advocacy group said, “None unless some other crime was also committed. There is no role for government.”

An anonymous research scientist based in North America wrote, “I don’t think the government should get into the business of penalizing people for lying, even if the lies are potentially dangerous. Public review and repudiation of misinformation should be adequate, provided our civil structures are robust enough to withstand a small number of abuses that maybe don’t get caught.”

An anonymous internet pioneer/originator commented, “It depends on the harmful effects. Public ridicule is a good start. Free speech has never included the right to yell ‘fire’ in a crowded theater. Much of the existing legal framework can be used when a suitably high bar is set on what is ‘harmful.’ Government control of the press is a bad idea.”

A senior analyst for marketing insights replied, “The government should pass laws (similar to libel and slander laws) to ensure that there are penalties for purposely spreading false information with the intent of causing harm.”

A technology journalist for a highly respected global news organization commented, “Except in extreme cases, fines are problematic. Government should make sure that people are educated enough to recognise blatantly false news.”

A consultant in the financial services industry replied, “Legally we could consider laws, but it sounds like that would violate free speech.”

A North American research scientist said, “If there is malicious intent, then perhaps there are fines, banning from certain platforms, and, if it leads to violence and adverse outcomes, perhaps even criminal penalties.”

A publisher said, “Those who have created or knowingly spread false information should face severe financial penalties so it is no longer a lucrative business.”

An anonymous research scientist based in North America observed, “I believe we already have rules about libel. What’s the larger harm you are anticipating?”

An anonymous survey participant replied, “Crimes and penalties already exist, but international law may need to develop cross-jurisdictional rules to enable the law to reach miscreants wherever they are. The three arms of government (legislature, executive, judiciary) all have a role to play in the making and enforcing of laws in this space – but they will need to be far more agile than they are now.”

A consultant based in North America noted, “The central challenge to establishing a legal restriction or penalty for the distribution of fake news is the definition of fake news and a determination that it is unlawful. It isn’t illegal to publish nonsense on the internet (nor to speak it on the street corner). Neither is it illegal (or uncommon) to lie and manipulate facts to change political opinions. Making laws that would define these practices as illegal and penalize them would necessarily conflict with the First Amendment. Further, it would require a judgment about intent to misinform, which in the context of libel and defamation has been applied by the courts in a very narrow manner (appropriately). Therefore, I conclude that the problem cannot be solved with law or through state intervention. The better approaches are economic (pressing the platform companies to bar the worst offenders from access to advertising revenue) and media literacy. The answer to debates about restricting speech in America has always been that the first and best response should be more speech.”

An adjunct senior lecturer in computing noted, “Sure, make a spectacle of one person who spreads information that actually causes harm. A thousand will be unmoved, believing their information – scientifically proven false – is right because they believe the science is wrong. Too many people will also believe the science is wrong. The government has enough trouble controlling the spread of demonstrably harmful information from its own ranks.”

A distinguished professor emeritus of political science at a major US university wrote, “This is a touchy, touchy subject. We are back to the ‘crying fire in a crowded theater’ issue. It would be necessary to prove intent and clear harm, and the latter might be very, very subjective. Having government monitor for ‘truth’ takes us in a very dangerous direction.”

An instructor of political science based in North America wrote, “Current libel and slander laws are sufficient if properly enforced.”

An associate executive director for a Canadian think tank replied, “So much of established history is ‘false’ – victors’ stories, incomplete, inaccurate, et cetera, that the issue of ‘penalties’ is extremely challenging. If government is to have a role in preventing the distribution of false information, there will have to be a start date. Or will there be backcasting for falsehoods? What will penalties look like/cost/cover? Will plausible deniability be a valid defence? WHICH level of government?”

A principal consultant said, “Their misdeeds should be publicized widely. It’s very difficult to silence anybody any more, but we ought to be able to discredit them. I am not sure this is a government role.”

A professor based in North America wrote, “There should be laws against this, although they will be difficult to enforce. Those with money and an agenda will find ways to circumvent the laws.”

A North American research scientist commented, “We have libel and defamation laws. I am not sure if they are inadequate. In an election cycle, the speed of fake news matters, but if existing laws are used to penalize malicious actors, this will have longer lasting effect.”

A technical writer said, “Jail, lifetime ban from computers. Who else can do it other than the government?”

A professor and researcher based in North America noted, “Penalties should mirror degree of harm or intent. Government has to play a role. It also needs to not be the source.”

A research scientist said, “Penalties should include jail time and government monitoring for a definite period of time. Government should provide quality education and disengage in oppression as preventative measures.”

An author and journalist noted, “This is a slippery slope. It all depends on the harmful effects. We have laws for libel, defamation, fraud, et cetera; I think we are covered.”

A professor of sociology based in North America said, “It will be hard to police the spread of false information because truth is a nebulous concept. Creating trusted, reliable sources for information is a better strategy than punishing liars. Existing laws for libel and slander can be used for egregious cases.”

A data scientist based in Europe who is also affiliated with a program at Harvard University wrote, “Penalising ideas, either wrong or right, is called ‘censorship.’ Governments should improve equality and education in values in early schooling.”

A senior vice president for government relations noted, “There will have to be careful consideration of penalties. What is the harm intended, and is the false information sufficient to cause it?”

A doctoral candidate and fellow with a major international privacy rights organization said, “From a legal perspective we should consider the crime of harmful ‘fake news’ production in the same sense as we do the production of ‘political propaganda,’ ‘libel or slander,’ and other forms of restricted speech. Government should play a more active role in the prevention of the production and of the effects and consequences of false information – but less so the ‘prevention of the distribution.’”

A professor based in Europe commented, “Our current laws already have ways to deal with fraud, libel, slander and other forms of false information.”

An anonymous research scientist based in Asia/Southeast Asia wrote, “The same penalties as offline.”

A professor of sociology based in Europe observed, “There should be an ombudsman and a legal framework.”

An anonymous business leader wrote, “They should be liable for damages incurred.”

An anonymous educator noted, “The penalties should be commensurate with the crime. Who penalises Facebook and Google?”

A professor of new media education at a university in Australia replied, “It depends what you mean by ‘those.’ Prosecuting individuals is bound to have wider negative consequences than regulating Facebook, Amazon, Google, et cetera. These so-called conduits need to be treated as media producers.”

A professor and expert in technology law at a West Coast-based US university said, “Yes, we already have defamation law and related torts like intentional infliction of emotional distress (IIED). We don’t need more laws. We don’t need the government enforcing those laws either.”

An anonymous futurist/consultant noted, “Criminal charges especially when it has endangered lives and caused physical damage to communities and businesses.”

An anonymous researcher based in North America replied, “Don’t we already have laws against fraud? Is false information fraudulent? If so, then existing laws should be applied.”

A Ph.D. candidate in informatics commented, “There are ways other than punishment to de-incentivize spreading fake information. Many people write clickbait fake news because they get money for it. If we could financially incentivize truthful writing in the same way, then we would see positive effects.”

A legal researcher based in Asia/Southeast Asia said, “Stop them from using any internet. Government should create regulations for internet companies to prevent the distribution of false information.”

An anonymous researcher based in North America observed, “There should be no penalties. However the government should impose an election news blackout one week before the election; all violators are not eligible for coverage of official events or funding (contracts, grants, et cetera).”

A member of the Internet Architecture Board said, “It would be difficult to define an appropriate test for the courts; e.g., see section 18C debates in Australia. In the US, this would be even more controversial. It’s also not effective when the offender is in another jurisdiction.”

A North American research scientist said, “Let the market sort that out. Government must not censor, but the market will.”

An associate professor of business at a major university in Australia observed, “We have already seen that some false information has worse effects than true information that gets a person jailed for treason. It may be very difficult to assign penalties – what is satire that some silly person takes seriously? Viz.: self-deportation, PizzaGate.”

A Ph.D. candidate at a major US university wrote, “Widely circulated news sources must be held to a standard of publishing verified facts. When news companies engage in a ‘race to the bottom’ to be the first to publish breaking news, they take greater risks of getting unverified and potentially false information out there. Journalists and news companies must be held accountable through law, perhaps fines or something else.”

A university professor based in Asia/Southeast Asia said, “Government should not intervene, but online participants can take action to deter fake news creators.”

A postdoctoral associate at MIT noted, “I don’t think there should be legal penalties for creating and spreading any kind of information. However, there could be ‘reputation penalties’ for such actors. Similar to how platforms like Reddit keep their bad apples in check, there could be a universal reputation bank for all internet users.”

A leading internet pioneer commented, “This is the wrong question. In the US today, the government is the distributor of false information.”

A professor based in New York observed, “There must be a distinction between sloppy journalism (done without malice) and malicious spread of information. Both should be punished, but to a different degree. I’m not sure what role government can play, unless it is independent from political parties. That said, I think most false information will end up coming from beyond national borders, making it hard to police within a nation.”

An author and journalist based in North America said, “As a First Amendment purist, I don’t think the government should curtail free speech. That said, if someone knowingly publishes false information, that person should be subject to existing laws.”

A past chairman of a major US think tank, former CEO of a major telecommunications company, replied, “None.”

A research scientist based in North America wrote, “Prison as punishment. Government has a role. Free speech is acceptable, not freedom to lie about facts.”

A chief technology officer for a foundation based in Africa said, “Those who are found to have created or knowingly spread false information with the intent of causing harmful effects should be punished with financial penalties. The government must take steps to prevent the distribution of false information by setting up related policies and following up on their execution.”

A professor of education policy commented, “In most cases, I would make the penalties public humiliation and fees. Government should play some role, but not too large a one. There are dangers indeed to any effort to police, but we have certainly shown over the last few years that there are great dangers to not policing as well. This balance is a difficult one, and I think it’s important to have a lot of conversations – open conversations – that include a lot of public feedback and thought about media processes. Government intervention can certainly make things worse, and we must avoid such intervention as much as is possible. Non-partisan groups, if such a thing is at all possible in the US, could play a key role here.”

A distinguished engineer for a major provider of IT solutions and hardware commented, “How do you prove such intent? In general, just like with any other crime, the punishment should be commensurate with the damage it was intended to cause. But more than anything else, people should be educated to question everything that they read – whether it be online or in print.”

A North American research scientist said, “Those found spreading false information should be publicly exposed and shamed. The only role for the government is to promote critical thinking in the education system.”

A researcher based in North America observed, “It should be a civil offense with major fines.”

An anonymous internet activist/user based in Europe commented, “Any government steps to prevent the distribution of so-called ‘false information’ is censorship, and it is not acceptable. Current penalties for creating false information are more than adequate, no new penalties are needed.”

A business leader based in North America noted, “It is unclear if society will allow sufficient penalties to curtail behavior, e.g., penalties didn’t work for drug dealing.”

An adjunct professor of management studies at a major US East Coast university said, “If people are using false information for monetary or illegal gain they should face consequences.”

An anonymous consultant based in North America commented, “A mechanism like the Sedition Act of 1918 – but written more broadly to apply to the publication or circulation of ‘false facts.’”

An academic based in North America replied, “Yelling ‘fire’ in a theatre – that’s a problem and there are protections in place regardless of communication modality. Otherwise, seriously?”

An emeritus professor of communication for a US Ivy League university noted, “Of course, there are difficulties involved in determining intent, as well as in demonstrating that the harm was caused, or was likely to have been caused. Nevertheless, I can see a role for the government to facilitate the development of the capacity for critique in specialists, and for its spread throughout the population, so that critical assessment becomes a social norm.”

A former software systems architect replied, “Government must stay out of it. Civil/criminal penalties that exist can continue. The problem is identifying the culprit, demonstrating that the information is knowingly false (or not verifiably true), and that it is harmful. Another problem is identifying harmed parties.”

A copyright and free speech artist-advocate observed, “If it is just speech, I don’t know that you can have a penalty (unless it is targeted at a person, in which case the slander laws could be used). I’m not sure that the government can take action, other than requiring businesses that do censor to be transparent about their rules and algorithms.”

A North American futurist/consultant commented, “Government should provide incentives for the private creation of a news database and aggregator to measure source reputability and volume of coverage.”

A consultant based in North America replied, “Have a penalty for false information, down-scoring both the content and the contributor through the use of community and expert moderation. The legal/judicial penalties should focus on the measurable, tangible effect/injury/damage, rather than intent, which is difficult to measure and prove. Governing internet posts is tricky because the internet is a worldwide forum that spans hundreds of governments and legal systems.”

A political economist and columnist commented, “Penalties should be the same as for deceit with intention to harm by any other means.”

A consultant based in Africa commented, “Authors of extreme misinformation should be brought to court and judged in accordance with the law.”

An anonymous author and journalist wrote, “It depends on the type of information and the size of effect. The ‘fire in a crowded theatre’ phenomenon.”

The chairman of a business consultancy commented, “We need to enforce in the court of law AND public opinion to be a society governed by laws and values.”

A consultant said, “The penalties likely will need to be consonant with the degree of harm. And I think it may require government oversight. Just like you can’t yell ‘fire’ in a crowded theatre.”

A founder and research scientist commented, “People shouldn’t be punished for the expression of a message, but existing laws (libel, public endangerment, et cetera) should be appropriately and consistently prioritized.”

A policymaker based in North America said, “Imprisonment and fines.”

The founder of one of the internet’s longest-running information-sharing platforms commented, “It should be publicized, by name. Government might step in regarding false information spread as part of warfare.”

A professor and institute director said, “Jail. Legislation for knowingly spreading falsehood while in elected office should lead to prolonged prison sentences.”

A professor based in North America noted, “Nothing can be done.”

The executive director for an environmental issues startup commented, “There are penalties for perjury and libel, and these should apply.”

An anonymous respondent observed, “There are already laws that address this – they could be enforced – however, how do you do that internationally?”

An anonymous business leader replied, “Tort damages.”

A project leader for a science institute commented, “If content can be proven to be false and known to intentionally cause harmful effects there should be some kind of accountability. I am not in a position to say what the government should do.”

A professor and researcher based in North America noted, “I am less concerned with individuals than I am with the communication platforms and information systems that enable the propagation of misinformation. Many corporations that are now regulated as ‘technology’ companies should be regulated as ‘media’ companies.”

A professor of public policy at a major Southeast US university said, “This should be a civil offense with stiff penalties for those who harm others through false messages. Rules of procedure and evidence will need to prevent malfeasors from further impugning the targets of these falsehoods in court. Government’s role is to provide the courts as a forum for these actions.”

A marketing consultant for an innovations company wrote, “Something that passes the test of constitutionality.”

An anonymous respondent wrote, “None and none. There should be less of a ‘nanny’ state and more individual responsibility.”

A professor of sociology at a major university in the US Midwest commented, “The same ‘fire in a crowded theater’ standard should be applied to false news spread with an intention to harm. Only if it can be traced to actually inciting violence should it be censored.”

A department leader at a nonprofit organization based in North America commented, “There have not been penalties thus far for people who lie, manipulate, and spread false information. Why, now, because of social media, is it even part of the discussion to punish people? As if the government should be the arbiter of what is false and what is true.”

An anonymous activist/user wrote, “Loss of anonymity might be a way of ensuring some discipline in the system, yet the institutions which would be deciding such punishments today have no credibility with most of the population.”

The executive director of a major global privacy advocacy organization said, “Hold politicians to account for using these very same platforms with these objectives. But in truth, let the platforms become untrusted and full of problematic content and perhaps we will grow more wise again.”

An anonymous research scientist noted, “The penalty should be the same as yelling ‘fire’ in a crowded theatre. Government, however, should take NO role in preventing distribution; just in the prosecution of violators.”

A graduate researcher at a major university located in the US Midwest wrote, “What needs to happen is the modernization of our laws to not criminalize false information per se, but include the intentional distribution of harmfully false materials as related to the role that plays in other crimes. So, false information targeting a witness or investigator, for instance, should be part of obstruction of justice. False information designed to suppress voter turnout should be covered under election tampering statutes. This isn’t a case of ‘here’s a new thing, we need a law for it’ – this is a case of ‘there’s a new reality, we need to figure out how we navigate it wholesale.’”

An educational technology broker replied, “Government’s role should be limited. We need to find other ways to reduce the impact by perpetrators of false information.”

A professor at an American research university noted, “Cancel their Twitter, Facebook, Reddit accounts. Not sure what role government can play in our current situation, where officials would not agree on what is ‘false info.’ We’re in a tragic situation.”

A professor emerita and adjunct lecturer at two major US universities commented, “At a minimum, there should be fines and if ‘intent’ can be proved, then I would not oppose jail terms for the offenders. The consequences of deliberately distributing false information can be grave, so the punishment should be appropriately severe.”

A chief technology officer observed, “It depends on the harmful effects. If the actors are outside of US jurisdiction, there is not much that can be done.”

A research assistant at MIT noted, “This is perhaps a better question for a lawyer or an ethicist. I believe people have the right to spread lies, but the system should encourage consumers of this information to know about the relevant quantified facts and their sources.”

A journalist based in North America said, “None. Otherwise, you’d have to put huge swathes of the CIA in prison.”

The CEO of a major American internet media company based in New York City replied, “The government is the source of much of the false information; we definitely don’t want the Trump administration in charge of preventing the distribution of information they consider false.”

An owner and principal sage for a technology company based in North America commented, “If any group is allowed to define ‘truth’ (and thus falsehood), a rich opportunity will be opened up for the re-emergence of nationalized fascism.”

An engineering director for Google observed, “New laws similar to those for slander may be needed, though this makes more sense in a European legal context than a US one.”

A librarian based in North America noted, “Public shaming and prosecution by those who relied on the information. Government should be willing to make challenges in court easy and swift. To the extent that misinformation rises to a criminal level, criminal penalties should be assessed.”

A vice president of survey operations for a major policy research organization replied, “It is unreasonable to think government can prevent this when the current president is not concerned with accuracy.”

A senior international communications advisor commented, “Such individuals and corporations should face hefty financial penalties AND jail time. The only enforcement mechanism is through government – tricky given that many benefit from the dissemination of propaganda.”

A director of new media for a national federation of organizations said, “Penalties should be curtailment of access to systems, and libel and slander penalties should apply to those who spread information about people who are not public figures. They should suffer penalties for treason if this is done in cooperation with a foreign government and is meant to damage the US political system.”

A technical evangelist based in Southern California said, “It should be the same as now; it should be similar to other media.”

A doctoral candidate and Internet of Things researcher said, “Depends on the harm of the ‘harmful’ effects. Free speech laws already cover some of this; perhaps it’s a matter of re-examining what an imminent threat looks like. I would want the government to tread very carefully here, though. Solutions from the state can be much more double-edged than solutions springing from communities or culture.”

A data scientist and blockchain expert based in Europe wrote, “Elimination from the blockchain and low ratings by blacklisting them.”

An historian and writer said, “We are better off focusing on how to expose the false information rather than trying to start imposing penalties for speech.”

A retired university professor noted, “Their views should be confined to online fringe sites, but not reported or retweeted. Of course this could be the fate of the rational under an irrational and totalitarian government. Improving the education of young people would help in the long run.”

An anonymous research scientist said, “Ensuring personal accountability might be a step in the right direction. Conversations change when claims and counterclaims are attributed.”

A professor based in North America replied, “This is a slippery slope. First of all, sites can always be hosted in countries with little or no accountability, like online gambling. Second of all, there is freedom of speech. Existing definitions of libel, slander, and treason are probably sufficient – but good luck enforcing them!”

An author/editor/journalist based in North America observed, “Exposure and public censure with government playing a role in exposing misinformation, though the job primarily rests with others.”

An author/editor/journalist based in Europe commented, “Public shaming should be enough – unless it’s promoting a crime, in which case the usual penalties apply.”

A vice president for stakeholder engagement said, “The answers will come not from government but from technologists, civil society and the private sector. Government should be kept out of any enforcement role.”

An anonymous researcher based in North America said, “Civil and criminal penalties by governments through the courts.”
