Elon University

The 2017 Survey: The Future of Truth and Misinformation Online (Q2 Anonymous Responses)

Anonymous responses to the first follow-up question:
Is there a way to create trusted, unhackable verification systems?

Technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center's Internet, Science and Technology Project in summer 2017 to share their answers to the following query:

What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.

Follow-up Question #1 was:
Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?

Some key themes emerging from respondents’ answers:
– It is probably not possible to create such a system.
– It would be seen as too costly and too work-intensive.
– There is likely to be less profit if such systems are implemented, which is also likely to stifle such solutions.
– It is possible to have commonly accepted, ‘trusted’ systems – it’s complicated because ‘what I trust and what you trust may be very different.’
– Can systems parse ‘facts’ from ‘fiction’ or identify accurately and in a widely accepted manner the veracity of information sources?
– There can be no unhackable large-scale networked systems.
– It’s worth a try to create verification systems; they may work or at least be helpful.
– ‘Verification’ would reduce anonymity, hinder free speech and harm discourse.
– There is hope for possible fixes.

Written elaborations by anonymous respondents

Following are full responses to Follow-up Question #1 of the six survey questions, made by study participants who chose to remain anonymous when making remarks. Some people chose not to provide a written elaboration. About half of respondents chose to remain anonymous when providing their elaborations to one or more of the survey questions. Respondents were given the opportunity to answer any questions of their choice and to take credit or remain anonymous on a question-by-question basis. Some of these are the longer versions of responses that are contained in shorter form in the survey report. These responses were collected in an opt-in invitation to about 8,000 people.

Their predictions:

An executive consultant based in North America wrote, “Yes, there are ways, but it is difficult and costly. Therefore, no one is motivated to do it. Right now, there are tech tools and algorithms that can point to suspicious sources of bad information, but it will take human intervention to step in, identify the items and the source, and make the decision to intervene. That will be costly.”

An anonymous respondent noted, “Not yet… because the rapid development of the many-to-many communication system caught the world unprepared. Populists found social media and derivatives an easy way to exploit the feelings of the people. The gatekeepers of the one-to-many communication systems (TV-press) are not valid anymore.”

An anonymous professor of information science at a large US state university wrote, “It is possible. A confined environment is one way to go, but I would imagine people do not like being constrained when it comes to information access.”

An anonymous research scientist said, “I am not aware of any such system in the history of mankind. In fact, any system that actually /did/ what you describe would probably be regarded as the instrument of an oppressive regime. For me, contestation, explanation, agonism are what a healthy information ecosystem is about – and not one that outsources accountability to ‘verification systems.’”

An internet pioneer and principal architect in computing science replied, “If advertisers sign a pledge not to allow their ad money to flow to unreliable untrusted sources, then there will be an incentive to change – and with incentive, technical measures can be implemented.”

A research scientist based in North America commented, “Who will be the referee?”

An anonymous respondent wrote, “AI, blockchain and crowdsourcing appear to have promise.”

An anonymous international internet public policy expert said, “Yes, by fostering the public-service value of the Internet.”

A senior research fellow working for the positive evolution of the information environment said, “Platforms should deploy larger efforts to limit fake news and misinformation by white-listing reliable sources.”

A professor and researcher noted, “With current technology, we would need to identify securely every source to be reliable and accountable. This goes against many civil liberties and privacy expectations.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “I am sure there will be technical ways to do that, however I doubt that people will use them, unless those systems are part of the tools that people already use. As fact-checkers go, part of their credibility comes from their independence, so the two solutions are a bit against each other.”

A professor of law at a major US state university commented, “I don’t think this is a technological problem. We had reliable, trusted verification systems. It was called journalism. But journalism stopped being a profession and became an industry. And accuracy was not advantageous to the bottom line. We need to rebuild not-for-profit media and help it cut through the online and cable clutter.”

A North American research scientist wrote, “None that I’m aware of at the moment.”

Another North American research scientist replied, “Unhackable? That is unlikely, but we can continue to improve security.”

A leading researcher studying the spread of misinformation observed, “I know systems like Blockchain are a start, but in some ways analog systems (e.g., scanned voting ballots) can be more resilient to outside influence than digital solutions such as increased encryption. There are always potential compromises when our communication networks are based on human-coded technology and hardware; this is less the case with analog-first, digital-second systems.”

An anonymous respondent noted, “Because nothing is unhackable, the answer is logically always no.”

An IT professional wrote, “It is probably possible, but it is in fact not wanted by the larger part of mankind, who want to be able to believe what they believe no matter what the truth is. The definition of being stupid: Seeing the truth, knowing the truth and choosing to still believe the lies.”

An anonymous respondent said, “Nothing is unhackable. The major online platforms, which are playing a more hands-on, curatorial role every day (despite their assertions to the contrary), will need to take responsibility, much as broadcasters and cable companies have had to do over the years.”

An anonymous respondent wrote, “No, the verification system has to have an opinion.”

A project manager for the US government responded, “I’m not sure. The hackers are usually able to defeat any system – they seem to take it as a challenge.”

A research scientist based in North America said, “It is a matter of scale: if you want Facebook to do that for a billion users, it cannot happen since it is very hard to attend to minority views on a platform that wants to scale. On smaller scale, why not?”

A distinguished engineer for one of the world’s largest networking technologies companies commented, “Multiple levels of security exist and companies such as Cisco have unhackable verification systems. However, verification systems are only as good as their level of deployment.”

A longtime US government researcher and administrator in communications and technology sciences said, “This is not a proper question to address because of serious limitations on freedom of speech.”

An assistant professor at a university in the US Midwest wrote, “Crowd-based systems show promise in this area. Consider some Reddit forums where people are called out for providing false information… if journalists were called out/tagged/flagged by large numbers of readers rather than their bosses alone, we would be inching the pebble forward.”

A media networking consultant noted, “Yes, academics have always done this. Wikipedia has demonstrated feasibility in open communities.”

A retired local politician and national consumer representative replied, “Facts may be accurate but incomplete and very selective, creating a false impression. A verification system is unlikely to be able to provide the full and balanced facts.”

A professor of law at a major California university noted, “Reasonably reliable and trusted, yes. Completely unhackable? We have not managed it yet, and it seems unlikely until we can invent a system that, for example, has no vulnerabilities to social engineering. While we should always work to improve reliability, trust, and security on the front end, we must always expect systems to fail, and plan for that failure.”

A professor and author, editor, journalist based in the US wrote, “No. There are too many bad actors trying to defeat them.”

A professor of media and communication based in Europe said, “Right now, reliable and trusted verification systems are not yet available; they may become technically available in the future but the arms race between corporations and hackers is never ending. Blockchain technology may be an option, but every technological system needs to be built on trust, and as long as there is no globally governed trust system that is open and transparent, there will be no reliable verification systems.”

A professor at MIT observed, “‘Slow’ news, with adequate research and sourcing, still offers established venues credibility. It will take real forensic effort to keep up with technological fakery (lip-syncing unspoken words, compositing unlived images, generating chaff by bot-driven social media). We need to include the education of media-literate citizens in our fix, and to do that as a priority. The down side of ‘fact control’ (as opposed to critical thinking) is its ease of misuse.”

A longtime technology editor and columnist based in Europe commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute content. DECENT is an early example.”
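[Editor's note: the core idea behind the blockchain approach this respondent mentions is a chain of hash-linked records, where altering any published item invalidates every record after it. A minimal illustrative sketch in Python, not any particular platform's implementation:]

```python
import hashlib
import json

def chain_records(records):
    """Link content records so each entry commits to its predecessor's hash.

    Tampering with any earlier record changes its hash and therefore
    breaks verification of every later link in the chain.
    """
    chain = []
    prev_hash = "0" * 64  # placeholder hash for the first ("genesis") entry
    for content in records:
        entry = {"content": content, "prev": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = prev_hash
        chain.append(entry)
    return chain

def verify_chain(chain):
    """Recompute every link; return False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"content": entry["content"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True
```

Publishing the latest hash through an independent channel makes silent retroactive edits detectable, which is the property the respondent points to.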

An anonymous principal technology architect and author replied, “No, not really; all we can do in this space is to manage the risk in some rational way, such as never taking some forms of identification online (like our fingerprints). We are rapidly taking everything online, however.”

An anonymous professor of media and communications based in Europe observed, “Never in an open system.”

An anonymous research scientist replied, “No. ‘Verified’ statements would simply be those in agreement with the ideology of the verifier.”

An anonymous CEO and consultant based in North America noted, “Create and use smaller autonomous networks where peering is based solely upon trust.”

A principal network architect said, “The process already exists. It is called earning the respect of one’s peers. It can’t be perfect but it works most of the time.”

An anonymous respondent replied, “No, because humans will go on getting information from all sorts of sources, some of which are less reliable than they think.”

An anonymous research scientist based in North America wrote, “Probably not. People will find ways of getting around it just as they do security controls.”

An anonymous software engineer based in Europe said, “Possibly, but it’s going to be painful. People will continue to discard anything that doesn’t fit in their bubble as ‘untruth,’ and dispute the verification.”

An anonymous respondent observed, “There can still be a place for professional journalists who really investigate and earn public trust. Plus there are our peers around us, but with them our trust may be misplaced sometimes.”

An anonymous respondent from the Berkman Klein Center at Harvard University said, “They will be cryptographically verified, with   concepts.”

An anonymous internet pioneer and longtime ICANN leader said, “If it’s possible, I don’t see it now. I am pessimistic.”

An anonymous internet security expert based in Europe noted, “The work begun by Phillip Hallam-Baker and continued by the IETF/IRTF had an effect. EFF took over certbot, school kids now run mutually signed crypto networks. Winter is coming, with a lockout as great as the extra Enigma wheel used to secure Shark.”

A professor and researcher of American public affairs at a major university replied, “No, such a system is unlikely, at least not without creating unacceptable limits on who can express themselves online.”

A professor of law based in North America replied, “I am not a technologist, but I don’t think so. Any system is no better than the weakest link, and it is not possible to never make a mistake. Also, it can be very difficult to accurately identify individuals.”

An anonymous ICT for development consultant and retired professor commented, “No, because every verification system will carry both strengths and weaknesses; these will be exploited.”

An anonymous author, editor and journalist based in North America replied, “No. We can (and must) improve security, but let’s not pretend anything will ever be 100% hack-proof.”

A media director and longtime journalist said, “There is no absolute trust without an end to anonymity. Even then, human trust is hackable using economic/social/moral/peer pressure.”

An anonymous internet pioneer replied, “No. If I say I have a headache, who is able to say it is not true? Not in the next 10 years at least (Or maybe the answer is yes: you can make a reliable, trusted, unhackable verification system, just not a useful one. It will simply answer ‘I don’t know’ to every question).”

An associate professor said, “There may be but doing so would require a lot of capital. The question is then where would the financial and technical resources come from and what are the motives of those providing them.”

An anonymous respondent from the Berkman Klein Center at Harvard University noted, “No. A ‘reliable’ and ‘unhackable’ verification system implies policing the information we share and how we share it. That would seem to stand in opposition to a free exchange of ideas, however stupid some of those ideas might be. But placing the responsibility for assessing the quality of information on the listener keeps the channels open. Unfortunately, it’s far from foolproof. And there’s no reliable way to train people to be better, more critical listeners or consumers of information.”

A professor of law at a state university replied, “There is technological potential, but it will run up against equal advances in hacking technology and First Amendment protection of political speech.”

A researcher based in Europe said, “Yes, crypto can solve that.”

A professor and chair in a university’s department of educational theory, policy and administration said, “Public education is one such venue, and we should not be surprised that [under the Trump administration in the US] the Department of Education is attempting to end public education as we know it. School regulation can contribute to this. Universities, too, can begin to certify state public education systems in terms of their curricula, which would add significant political pressure on the schools to clean up their acts in history and science in particular. Beyond public education, online media, especially Facebook and Twitter, can use AI to minimize the fake news. That said, this is going to be a constant struggle and there is no simple solution. Dog whistles are very hard to counter, as is the kind of relentless narrative-building that Fox News engages in. The biggest challenge is battling the narrative of mistrust, authoritarianism and racism.”

A head of systems and researcher working in Web science said, “There are improvements that will be made that will increase all of those, but not to absolute states. For instance, I would not go so far as to say an unhackable source of information is possible, free from man-in-the-middle attacks, including from state actors on their populace.”

A leader of internet policy based in South America noted, “No, I do not.”

A lecturer at the University of Tripoli in Libya noted, “It could be difficult, but at the same time it can be created. This depends on who is behind this system and what their agenda is. Some organizations publish biased information. So, it is difficult to control, particularly if they are for-profit companies.”

A principal network architect for a major edge cloud platform company replied, “It is unlikely. There are varying levels of trust. Things will be better, but humans are the weak link in the chain, and they aren’t getting any better.”

A technologist specializing in cloud computing observed, “Nothing is unhackable. An unhackable system is one totally disconnected from a network and therefore of limited value. Changing the transparency model around systems to one where hacks and possible hacks are clearly defined by known interactions with specific systems at specific times is one possible path to resolving the trustworthiness of a source of information.”

A senior solutions architect for a global provider of software engineering and IT consulting services wrote, “No, but it is also not necessary to create an ‘unhackable’ system in order to have a sufficient level of trust. Trust should be based on reputation for sound reporting, not what garners the most clicks.”

An anonymous respondent replied, “No, the financial motivation to defeat such systems and ever-improving computing capacity mean that the hackers will always catch up.”

An institute director and university professor said, “No. I wish it wasn’t the case, but sadly, no. And Google, Facebook, Twitter and the like know there’s no money in reliable, trusted, unhackable verification systems. It’s CNN versus Fox. Telling people what they want to hear, regardless of the truth, will always be more profitable.”

A professor at a major US state university wrote, “People from different areas are trying to find a way to develop verification/fact-checking system. Not sure exactly how, but I believe that some systems will be developed for the purpose, although they might not be perfect.”

A professor based at a North American university noted, “Some systems are more reliable than others and will continue to be. But every reliable one has a counter and is ultimately hackable, and the rewards of proliferating falsehoods have tended to be greater than not.”

An anonymous respondent based in North America said, “Key thought: micropayment/subscription could be required by both the author and the researcher. It’s still the old saw of whether funding will follow the reporter vs. the news platform, the researcher vs. the company/research institution. With cloud adoption up, there are more apps like academia.edu and experiment.com for information dissemination and crowd-funded science. Internet2 has its NET+ services integrated offerings growing, but local economic-development authorities are still driven by business insurers who lobby for physical buildings being legally required to buy their products even as high-performance computing in the cloud slowly grows its adoption. As US citizens may or may not experience pro-totalitarian tendencies in our politics (and what science gets budgeted/funded), the place of ‘objective, truth-driven science/education’ (though usually/ultimately generally funded by the military budgets) will be seeking alternative funding. Will ‘patreons’ step up? With the global central banks all having been removed from any direct correlation to a standard (other than petrol dollars and the quantitative-easing-driven, inflated urban real estate REIT bubble), the cryptocurrencies proliferation has been dramatic enough that it has the Senate taking notice. Have a look at Section 13 (among others) on https://www.congress.gov/bill/115th-congress/senate-bill/1241/text.”

A technology analyst for one of the world’s leading technology networking companies replied, “What is to be verified, the source or the opinion?”

An anonymous respondent commented, “Nothing is unhackable, right? Reliability and trust are good goals for any organization, particularly those in the media space. This is different than being unhackable. Analyzing other organizational systems like credit cards/banks may be a good idea to assess trust.”

An anonymous CEO wrote, “Yes. Fact-checking and presenting facts when erroneous information is presented will be the norm in future years.”

A researcher based in North America said, “No, anything that tries to tackle misinformation once it’s already in the world will already be fighting a losing battle. And there is no way to prevent at least some misinformation from being spread without losing protections on free speech.”

A retired senior IT engineer based in Europe observed, “No. Verifying means understanding, i.e. acting on the semantic.”

A North American researcher replied, “Everything will be hackable with time, and security requires investment that typically comes after the hack.”

A political science and policy scholar and professor said, “A successful solution would have to address people’s desire to seek out the ‘facts’ they like rather than the truth.”

A policy analyst for the US Department of Defense wrote, “Two-factor authentication.”
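[Editor's note: the two-factor method this respondent alludes to is commonly implemented as time-based one-time passwords (TOTP, RFC 6238): both parties share a secret key, and a short code is derived from the key plus the current 30-second time window. A minimal sketch of the standard HMAC-SHA1 variant:]

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant).

    `secret` is the shared key; `for_time` is a Unix timestamp. The
    counter is the number of `step`-second windows since the epoch.
    """
    counter = int(for_time) // step
    # HMAC over the counter encoded as an 8-byte big-endian integer
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given
    # by the low nibble of the last byte, mask the sign bit
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The point of the second factor is that a stolen password alone is useless without the device holding the shared secret; it verifies possession, though, as other respondents note, it does not verify the truth of what the authenticated party says.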

An independent journalist and longtime Washington correspondent for leading news outlets noted, “No. This will never and should never be centralized. The Internet welcomes all comers, so different homegrown verification systems will emerge.”

A professor at a major US university replied, “I am not confident that financial incentives to encrypt and protect data will exceed countervailing incentives to hack and decrypt.”

A professor at a Washington DC-area university said, “No. Social systems are too complex and can invariably be gamed (see the history of efforts to develop incentive-compatible mechanisms in economics and zero-knowledge proof systems in computer science).”

The managing editor of an online fact-checking site replied, “There is no complete solution, but nimbleness and adaptability will go a long way toward helping curb hacking and bad information.”

A post-doctoral fellow at a center for governance and innovation replied, “No. Mischief-makers will always sabotage such efforts.”

A senior vice president of communications said, “There probably is, but this isn’t necessarily the problem. The sources of news are so diffuse, and social networks so ingrained, that it may not be fully possible – is Facebook going to police the opinions of your Uncle Frank?”

An IT director observed, “I cannot meaningfully answer this question except to say that the blockchain may point the way for this. If I knew the answer to this I’d be a millionaire myself!”

A librarian based in North America noted, “The identity of a source of information is always important. Improving on being able to verify the identity of a source is important.”

An anonymous lawyer replied, “I imagine advancements in encryption get us close. From my perspective, there’s not necessarily an absolute, forever-enduring solution, but approaches that will work in a certain period of technology.”

A director of research for data science based in Spain observed, “These systems should involve both human elements and machine (AI) elements.”

An anonymous consultant noted, “Federated trust networks. At small enough scales they aren’t worth the effort to hack, federated means that people who personally know each other equals trust. Everyone belongs to multiple federations, and thus by two to four degrees of separation can access possibly trustworthy information (Briefest way I can put it).”
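[Editor's note: the federated-trust idea this respondent compresses into one sentence can be modeled as reachability in a graph of personal-trust edges, capped at a few degrees of separation. A minimal breadth-first-search sketch, with all names hypothetical:]

```python
from collections import deque

def trust_distance(trust_edges, me, target, max_hops=4):
    """Return hops from `me` to `target` along direct-trust edges,
    or None if `target` is unreachable within `max_hops`.

    `trust_edges` maps each person to the people they personally know
    and vouch for; information from anyone within the hop limit is
    treated as possibly trustworthy, per the respondent's model.
    """
    frontier = deque([(me, 0)])
    seen = {me}
    while frontier:
        node, hops = frontier.popleft()
        if node == target:
            return hops
        if hops == max_hops:
            continue  # do not expand past the trust horizon
        for peer in trust_edges.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, hops + 1))
    return None
```

Keeping each federation small is what makes it, in the respondent's phrase, not worth the effort to hack: there is no single global root of trust to compromise.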

An anonymous respondent replied, “Probably not. Any fact can be spun and adjusted to fit a desired narrative.”

A professor of rhetoric and communication noted, “Unhackable: probably possible, but for every technological fix, someone out there will be able to one-up the system. As for ‘trusted,’ people trust what they already believe (e.g., confirmation bias). In the short term, it is going to be difficult to create systems that build a ‘trust consensus’: just look at how many people only trust Fox News.”

An anonymous futurist/consultant said, “Maybe? Possibly blockchain systems? Still, people with bad intentions will find vulnerabilities, and systems will have to be maintained and updated.”

An anonymous MIT student noted, “I don’t think so. If there are, they probably would be not user-friendly. I’m a computer science person; we try very hard to break things people think are unhackable.”

A research professor of robotics at Carnegie Mellon University observed, “I do not believe so. The cost of hacking verification systems will continue to rise, but so will the rewards for doing so.”

An assistant professor of sociology at a US university said, “It’s hard to protect information in a way to protect its ‘purity.’”

A CEO for a consulting firm said, “There will be a way to create reliable, trusted systems, but nothing will ever be 100% unhackable.”

A research psychologist commented, “It’s possible, but I don’t know if it’s likely. It would be like Snopes.com only not left-leaning.”

A vice president for learning technologies emerita said, “Keeping things unhackable will be a constant in the future since each new system seems to invite new approaches to hacking.”

A president of a consultancy wrote, “Nothing is unhackable, but Wikipedia is self-policing as one model. We’re in a reputation economy, and where are the Walter Cronkites of the modern day? We need leaders capable of sustaining a digital presence beyond Twitter. New allegiances based on trust are likely to evolve out of necessity.”

A partner in a services and development company based in Switzerland commented, “There is, but only if they are based on users’ participation, as opposed to working as black boxes pretending to relieve the user of the need to understand what they are doing. The best analogy is that of paper currency. It is reliable BECAUSE users participate in, and consider themselves as co-responsible for, the effort of distinguishing fake dollar bills from real dollar bills.”

A journalist who writes about science and technology said, “We can certainly create blockchain-like systems that are pretty reliable. Nothing is ever perfect, though, and trusted systems are often hard to use.”

An assistant professor who works at a university in Asia/Southeast Asia commented, “Freedom of speech requires breathing space. Any system that claims to be 100% unhackable seems to me untrustworthy. But for things to improve, we don’t need something that is unhackable. We do need trust by developing better literacy, norms, incentives and disincentives, et cetera. Technology plays a role in this, but humans need to also improve as trust, like security, is all about the weakest link in the chain.”

A retired university professor noted, “In general NO. In relation to specific domains, and with luck, maybe.”

An assistant professor based in Denmark said, “I do not believe so. Verification is hard to systematize because its methods are not external from the issues at hand.”

A senior fellow at a center focusing on democracy and the rule of law observed, “Full reliability is not attainable. But there already exist general principles that can be used to reduce the spread of false information. Examples include: penalizing the organizations (newspapers, Facebook pages, Twitter accounts) that spread malicious information (e.g., libel laws); making trolling a punishable offense (e.g., hate speech); mechanically blocking distributors of malicious information (e.g., censorship – note that this particular approach can also be used to block the circulation of reliable information by non-democracies/non-democrats); and encouraging ethical reporting (e.g., insisting on at least two independent direct sources as evidence).”

A professor of information technology at a large public research university in the United States said, “Systems are not the answer. Norms and standards for discourse are the solution.”

A retired educator observed, “The system is ultimately based on trust. There needs to be a way to VALIDATE the information received. Only then can reliable data become valid.”

A president and executive director commented, “There are only ways to create such systems in a stepwise, technically correct process, which can be undermined sequentially by the most wealthy, powerful and/or least scrupulous players. The signers of the Declaration of Independence, for example, were all wealthy, powerful and HONORABLE men of that day. It cost them all dearly, as they expected. The good guys will eventually win.”

A researcher at Karlsruhe Institute of Technology replied, “I don’t have the technical expertise to answer this question. However, history shows us that there is no such thing as an ‘unhackable’ system. But one shouldn’t overstate this problem, either. Many systems work well enough and are secure enough for their purpose although they are not 100% unhackable.”

An eLearning specialist noted, “No, for instance, if a library deems itself the curator of unbiased news, it itself is exposed to the biases of the librarian or other curators.”

A vice president for stakeholder engagement said, “No. This is not possible because I will always believe the human being (friend, relative, coreligionist, et cetera) over any automated system, including when the human being tells me the system is rigged or biased.”

A professor at a major US university noted, “No. Everything is hackable.”

A principal engineer said, “This is possible only for very limited domains.”

An associate professor of sociology at a liberal arts university replied, “I am not a computer security expert, so am not qualified to answer this question. My understanding, however, is that such systems are technically feasible but can be too difficult for the general public to adopt in their day-to-day lives.”

An anonymous journalist observed, “Probably not as of now, but with the speed of technology development these days, I believe we will soon be able to create reliable and trusted, if not unhackable, verification systems.”

A futurist based in North America said, “No, there isn’t. It is up to the consumer to cross-reference.”

A researcher based in Europe commented, “I am not an expert but my experience tells me that new systems are only implemented as long as the interest of the corporations and traditional actors prevail. See the example of P2P and its decay in recent years. I don’t know how these systems are going to look.”

A research scientist based in Europe observed, “Yes, of course. Blockchain is a leading example.”

A software engineer commented, “Yes, technically feasible but unpractical and unlikely to be used. Also verifying the flow of a fact does not verify the fact itself.”

The president of a business observed, “Nothing is ever unhackable. But social engineering, editors, educational systems, the Wikipedia-style hive mind solution, and judicious law-making, combined with the type of AI that identifies fraud on a credit card, can make great strides toward protecting information systems and news.”

A professor of sociology with expertise in social policy, political economy and public policy, noted, “I believe some technical solution will emerge but I don’t know how or what it will look like.”

An anonymous lecturer said, “As well as laws and regulations, persons rely on signals among their communities to find whether the info another person is sending is trustworthy.”

An anonymous editor and publisher commented, “Probably yes, although TRUSTED is a problematic word in this question, since we don’t know who trusts it or automatically (based on source) disdains it.”

A fellow at the University of Leeds said, “Maybe it’s technically feasible (I can’t say), but I am wary of framing this as a purely technological problem, as opposed to a political, cultural and educational one (a ‘sociotechnical’ perspective would be more appropriate in my opinion).”

An anonymous researcher observed, “No. You can make it very difficult to defeat a verification system. But making it impossible is incredibly unrealistic. Never underestimate human ingenuity’s ability to subvert a complex, sociotechnical system.”

A vice president of professional learning commented, “No – as soon as you create an ‘unhackable’ system, ‘hackers’ will attempt, and often succeed in, hacking it.”

An anonymous activist replied, “Not without using biometrics or suchlike, which many will regard as unacceptable.”

An internet pioneer and Web personality wrote, “What has continued to amaze me about the growth of the internet is how, whenever a problem seems insurmountable, systems are created to combat it. Take, for example, spam: there are now systems in place that remove most deceptive emails. Because of this, and my ultimate faith in good over evil, I do believe that we will develop reliable verification systems. We need to focus our best minds on it, and after what happened with the election there is fortunately a lot of attention on this now.”

An associate professor at Brown University wrote, “I have no expertise on this technology. I believe some human monitoring is necessary but this can be aided by lots of different types of technology.”

An anonymous respondent based in Asia/Southeast Asia replied, “Yes there is. Biometric systems.”

An associate professor at a major university in Italy wrote, “A blockchain-based log of verifications may lead to an unhackable and trusted verification system.”
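The blockchain-style log this respondent describes can be illustrated with a minimal Python sketch (the record fields and function names here are invented for illustration, not any production design): each verification entry commits to the hash of the previous one, so altering any past entry breaks every later link in the chain.

```python
import hashlib
import json

def add_entry(chain, record):
    """Append a verification record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def chain_is_valid(chain):
    """Recompute every hash; any tampered entry invalidates the links after it."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
add_entry(log, {"claim": "story-123", "verdict": "verified", "by": "factchecker-A"})
add_entry(log, {"claim": "story-456", "verdict": "disputed", "by": "factchecker-B"})
print(chain_is_valid(log))               # True
log[0]["record"]["verdict"] = "changed"  # tamper with history
print(chain_is_valid(log))               # False
```

A single hash chain like this makes tampering detectable, not impossible; real blockchains add distributed replication and consensus so that no single party can quietly rewrite the log.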

An internet pioneer/originator said, “Of course there are, but the question is who will accept them? Those committed to beliefs that contradict the information presented by trusted verification sources will continue to dismiss them as ‘fake news.’”

A leading technology expert replied, “No. ‘Trusted’ is not a property of the supplier of the news, it is an aggregate assertion about the consumers. A system is trusted if people trust it. Whether we (people comfortable with Enlightenment epistemology, roughly) approve of what others trust is not something that can be controlled with tools short of Chinese-style censorship.”

An author/editor/journalist wrote, “If you view each piece of misinformation as a virus, trying to fight such a multitude of individual viruses is ridiculously difficult. Immunity from viral attack is the actual goal. That is only attained when the population gains high-quality critical thinking skills. For those who missed out on this, the efficacy of such skills needs to be demonstrated frequently and explained well by people with influence, in much the same way that popular TV programs have examined parenting deficits or invested detailed analysis in cooking. Without the popular uptake of critical thinking, misinformation will continue to be manufactured to appeal to specific subgroups by targeting their confirmation bias.”

A vice president for public policy for one of the world’s foremost entertainment and media companies commented, “Surely yes, we could arrive at this, if it were valued more than monetizing data.”

An anonymous respondent observed, “It won’t be reliable until one can rein in neoliberalist tendencies. But I have no idea, technically, how such a system could be created.”

A project manager based in Europe commented, “Yes, AI filtering-based systems.”

An anonymous respondent commented, “I don’t think so, because it is unclear who should be the verifier of truth.”

A futurist/consultant based in Europe said, “No. Trust can only be placed in sources that are deemed worthy of that trust. The system itself is not something to trust.”

A professor of philosophy at one of the world’s foremost universities observed, “Peer assessment works well for places where the readers are likely to be looking for unbiased opinions (product evaluations on Amazon; some academic processes). But I don’t see a way to make them work in situations where the readers have motivated biases.”

A futurist/consultant based in North America said, “From spear phishing to the spread of false information, the greatest weaknesses are in the human mind. And I have never seen anything to demonstrate that creating any kind of unhackable system is possible, especially not in a field as complex as information verification.”

A professor of humanities noted, “Verification systems must protect Constitutional rights.”

A North American research scientist replied, “History teaches us that it is always a strategic and costly mistake to assume that a defense system, whatever it is, is impenetrable.”

An anonymous respondent said, “No. This relies on personal judgment and a priori knowledge.”

A senior lecturer based in Asia/Southeast Asia commented, “It is not possible to build such systems since there is no single verifiable and indisputable truth. What is considered true today might no longer be true (or the whole truth) tomorrow. What is considered true using one way of evaluation or measurement might be different from the findings using a different method or measure.”

A retired public official who was an internet pioneer replied, “Not technically competent to respond to the IT aspects. Recent abuses will unfortunately lead to regulation.”

A principal with a major global consultancy observed, “No. Anonymity will always be possible, and via covert false statements if by no other means.”

An engineer based in North America replied, “Yes, there is a way to track and verify the source of information. It requires attributes to be associated with any source and to publish the news.”

A CEO based in Canada replied, “Only partially since (like viruses) they mutate quickly.”

The president of a center for media literacy commented, “The technology capability [of potential verification systems] is immature and the costs are high. Blockchain technology offers great promise and hope.”

A senior research fellow based in Europe said, “There probably is a way to create such systems, but in whose interest would it be to deploy them? In a public media environment (as it exists in many European countries), that might be possible, but in highly commercialized, sensationalist private systems, nobody has an interest in that.”

An associate professor at a major Australian university noted, “Possibly, though each new initiative and success prompts alternate countermeasures.”

An anonymous respondent wrote, “No. This question seems to presuppose a technologically based system, which by its very nature issues an open challenge. Any reliable, trusted system would have to be based on the mores of a society.”

An economist who works for one of the world’s top five technology companies commented, “Just as you can train a machine learning system to recognize spam, you can train a system to recognize false news. This will never entirely eliminate the problem but it will control it.”
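The spam analogy can be made concrete with a toy naive Bayes text classifier, the same family of technique long used for spam filtering. This is a minimal sketch: the headlines and labels below are invented purely for illustration, and a real system would need far larger training data and richer features.

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and document totals."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log posterior, using add-one smoothing."""
    vocab = set(counts["real"]) | set(counts["fake"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))   # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training headlines, purely for illustration.
data = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you won't believe this one weird trick", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("county reports quarterly employment figures", "real"),
]
counts, totals = train(data)
print(classify("miracle trick doctors hate", counts, totals))   # fake
```

As the respondent notes, such a classifier controls rather than eliminates the problem: purveyors of false news can adapt their wording, so the model must be continually retrained, just as spam filters are.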

A postdoctoral scholar at a major university’s center for science, technology and society said, “I am not an expert in security research, but my friends and colleagues who are seem to agree that completely reliable and unhackable verification systems will not be feasible, and that the weakest point is the human in the loop. That said, we should be able to achieve *mostly* reliable verification systems that take great technical sophistication to overcome.”

A university professor based in Europe noted, “Collective intelligence such as Wikipedia could work but it’s time-consuming.”

An anonymous respondent based in Europe wrote, “More-robust verification systems always spur creativity on the hackers’ side. It would be naive to believe that any system created by humans is unhackable and fully reliable over the longer term. So the awareness of trusted media channels (journalists and editors) and end users to sort it out is critical. I guess a review and discussion of media/journalism ethics in the ‘new environment’ is also the way to go.”

A principal research scientist at a major US university replied, “A global PKI system could provide nearly un-hackable verification, but the amount of training needed for the average person would be very high.”

An anonymous respondent said, “I fear not. We can bolster against those problems we see and can envision, but there will be other hackers out there working on work-arounds that we cannot even envision.”

A senior expert in technology policy based in Europe, commented, “Use blockchain to verify news.”

A professor at a major US state university said, “Yes. A lot can be accomplished with: 1) Automated analysis of the text, 2) Automated analysis of the sources, and 3) Social processes to suppress fake news.”

A researcher of online harassment working for a major internet information platform replied, “I think so but it takes hiring researchers and third-party spaces to work within large social networks. Social networks HAVE to work with outsiders, and be transparent, for this to work.”

An anonymous respondent who works at a major US university said, “I’m not sure, but my sense is that verification systems will always be hackable to some degree. It’s more important to catch hacks quickly and discredit them than to try to be ‘bulletproof.’”

A postdoctoral scholar based in North America wrote, “I don’t really understand this question. ‘Verification system’ in terms of truth of a news story? The entire industry of journalism has spent a long time creating verification systems for reporting. They need to figure out how to message this, and they will in 10 years. People have always printed ‘fake news’ (The National Enquirer, The Globe)… what is really at stake is that, formerly, reader trust coincided with the large amount of capital needed to publish news to the masses. Now that that capital barrier is gone, anyone can publish AND reach mass audiences through social media networks. All journalists are trained in verification systems before they publish anything, we just need to figure out how to bake those time-tested procedures into the policy and structure – if they care!”

A North American research scientist observed, “Yes, with active intermediation.”

A journalist and experience strategist at one of the world’s top five technology companies said, “The blockchain can be used to create an unhackable verification system. However, this does not stop the dissemination of ‘fake news,’ it simply creates a way to trace information.”

A professor of political economy at a US university wrote, “No: the problem is the lack of a clear categorical differentiation. There are two fuzzy dimensions, the extent of falsity and the motivation of the purveyor.”

A faculty member at a research university noted, “In my opinion, I’m not sure this is the right question. Perhaps we need to look at who is doing the creating, and whose knowledge counts to begin with.”

A director of research said, “‘Unhackable’ is a bit of a stretch. Everything is hackable. I think reliable and trusted information depends on trusting social systems, which we don’t seem to have at the moment.”

An anonymous respondent said, “Probably not. All systems will have weaknesses.”

An author and journalist based in North America noted, “No. The barbarians have overrun the gatekeepers. Verification systems assume information resides only in the hands of responsible publishers.”

An anonymous research scientist commented, “There probably are, though I imagine the most difficult part will be whether or not such systems are ‘trusted.’”

A North American politician/lawyer wrote, “There are ways to create trusted systems if users choose to use them. These systems may emerge as an option and alternative, but they will compete with the unfiltered, messy systems that are in place.”

A vice president for a company based in North America replied, “Verification systems are not needed; common sense may prevail. Let people sort it out for themselves. No policing of information is needed or, indeed, desirable.”

An anonymous research scientist observed, “I don’t think this would help. Identifiable individuals are creating false narratives already.”

An anonymous futurist/consultant commented, “Unclear – no idea what the capabilities of hackers will be over the next 10 years.”

A former journalism professor and author of a book on the future of news commented, “No. First of all, anything can be hacked. Someone once said the only reason your website hasn’t been hacked is that it isn’t very interesting to hackers. As for ‘reliable’ or ‘trusted’ – those are in the eyes of the beholder. There is no universal truth and certainly no universal acceptance of ‘facts.’ It has ever been thus, and that fact is just now more obvious and is garnering a lot of attention.”

A North American research scientist observed, “‘Unhackable’ – no. Perpetrators of fake news will always be a step ahead. The gap will close, however.”

A self-employed consultant said, “No. Who is the arbiter of the trust?”

An anonymous respondent noted, “We can certainly improve such systems relative to both trust and reliability. It seems to always be an arms race re: hacking.”

An anonymous respondent said, “Nothing is unhackable. It is the interplay between different actors in democracies (media, the public, Facebook, Google et cetera) that will guarantee the truth of information.”

A chief executive officer said, “Can P2P, blockchain, with attribution be unhackable? We need a general societal move to more transparency.”

An anonymous respondent noted, “There are ways to create reliable and trusted systems. However, an unhackable verification system is a larger problem. As quickly as researchers put systems for verification in place, they are being exploited by hackers. More money and time need to be put into security to ensure that development proceeds at a rate that makes it more difficult for rogue states and actors to penetrate.”

A senior staff attorney for a major online civil rights organization said, “Depends on what you’re verifying. You might be able to verify ID or origin, but meaning seems way more difficult.”

A researcher based in Europe replied, “I don’t think so.”

A CEO and research director noted, “Nothing is ‘unhackable.’ But barriers raise costs to make it more difficult and transparency can reduce effects.”

An associate professor at a US university wrote, “Those are three different things. Yes, the first two can be created (reliable, trusted) but not unhackable. Nothing is impregnable. However, I think systems that promote transparency and ease of access of primary information will help.”

An internet pioneer in cybersecurity and professor at a major US research university commented, “Reliable and trusted, yes. We have had (and have) that with some news sources. It requires appropriate levels of support to build and maintain, however. Unhackable is a stretch.”

A senior researcher at a US-based nonprofit research center replied, “There are a number of ways to filter out bad information. First and foremost is the user’s own filter. Being trained to spot propaganda and incorrect information at an early age, either through education or normal parenting, will certainly help. But media and social media companies can also help by providing services, possibly by outside companies and trusted partners, to weed out this information. It could even be in the form of an app or extension. The next generation is highly educated and will need jobs, so I don’t see why professional fact checking and media gatekeeping can’t be one of those jobs of the future.”

A senior lecturer in communications at a UK university said, “Not without AI. Any fixed-rules game can be gamed.”

A research scientist based in Europe observed, “YES, there is a way: a good but critical education system open to the entire population.”

A vice president for an online information company noted, “Not really – or, if we could, it would require strong authentication (and loss of anonymity) to achieve attribution. Even when attributed, false statements can still be made. And let’s not go apeshit over blockchains as the solution to everything, please. One CAN create instances where information is strongly attributed and maybe even verifiable but a general solution for all sources of information seems elusive.”

A business leader based in Europe wrote, “Eventually yes – but it will take a lot of time. It is all about filters – we have got rid of the old mass media gatekeepers, now we’ll have to learn how to replace some of their roles without going back to the old model, we’ll have to learn how to do filtering in a fair way.”

A program manager for the US National Science Foundation wrote, “A redesigned Internet architecture could make possible more accurate provenance of news. What is needed is a better way to trace the source of new information and make that source transparent, or at least identifying it as ‘anonymous.’”

A researcher in computer science and mathematics said, “Only if the basis focuses on social technical design. Humans can be fooled, and knowledge relies on human processes of interpreting and contextualizing information. So even if the information is 100% accurate and verified, social systems can still create the ‘alternative facts’ that we deal with today.”

An anonymous respondent from North America wrote, “You have to educate the populace.”

A researcher affiliated with a company and with a major US university noted, “The simple approach is to trust established news sources (i.e., institutions such as the New York Times, BBC, NPR, Washington Post). For social media, I am not sure.”

An anonymous North American research scientist said, “Uncertain. We can’t force the purveyors of misinformation to forgo their profits. Traditional news organizations have less authority and power, especially among those who only believe the purveyors of misinformation. The government cannot limit speech. Who will?”

An anonymous business leader noted, “Not without legislation.”

A professor of management replied, “I don’t know, but too much state power militates against trustworthy systems.”

A North American research scientist observed, “Yes there is a way. There isn’t currently the processing power we need to do it.”

A researcher based in Europe said, “Yes, with more accurate systems of online participation.”

A public-interest lawyer based in North America commented, “I am not an expert, but I doubt that this is technologically feasible.”

A self-employed marketing professional observed, “No. Sophistication of the technologists will overcome/overwhelm any systems put in place.”

An anonymous editor based in North America noted, “Supporting traditional and new journalism institutions that verify before publication will help. And for platforms like Facebook, Twitter and Google, their efforts to flag unsubstantiated posts will help. But they also need to take responsibility as media companies, not just technology firms.”

A senior policy researcher with an American nonprofit global policy think tank said, “Verification starts with the reduction of anonymity.”

A researcher based in North America wrote, “Probably, but I don’t see how one could limit people’s consumption of news to such sources.”

The dean of one of the top 10 journalism and communications schools in the US replied, “Yes for reliable and trusted. Unlikely for unhackable. If people are interested in trusted sources or people, they can find them, verifying through recommendations, their own experiences, or cross-checking with other sources. Hacking is inevitable in human design.”

A research scientist at Oxford University commented, “We need to look at blockchain and semantic web technology as possible countermeasures but it’s more about education including scientific literacy and rational epistemology.”

A director for a technology company said, “Nothing will ever be unhackable; quantum computing will change that. We have to educate people to understand that all information needs to be scrutinized and balanced against other sources. Web engines could help by de-duplicating information so it becomes possible to attribute information back to a source by discarding copies of information masquerading as original content.”

A participant in the projects of Harvard’s Berkman Klein Center for Internet & Society said, “I think there are, certainly, but they will be complex and unwieldy, similar to high-level security, and in the same way, will be largely ignored or misused by all but the most sophisticated consumers. Effective systems will require multi-factor verification, third parties, and watermarking.”

An anonymous respondent replied, “Yes. Multifactor ID systems already increase this. I think our civil liberties will be violated, but verification systems will be reliable.”

A longtime technology writer, personality and conference and events creator, predicted, “One-button fact checks based on AI; never foolproof (like antivirus).”

A research scientist based in Europe noted, “Most likely not, but rumors and false information are not a new issue (e.g., tabloids have regularly been sued for spreading unsubstantiated information). The point is to minimize the impact of false information, in a similar way to spam. Another thing is to ask legitimate news sites to be more thorough in the way they report information, because recently the quality of the content of major media (e.g., CNN, Fox News, NY Times) has been degrading.”

A senior research scientist who develops electronic publishing, media and technology for learning, wrote, “Reliable yes. Unhackable, no. Computer security is an oxymoron.”

A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Yes, for some kinds of information in some contexts, but as a general proposition the question is nonsensical.”

An independent systems integrator wrote, “Blockchain and other technologies will create a trust system; other approaches will probably be developed. Nothing is ‘unhackable,’ but everything is verifiable. Reliability will result from human usage.”

A researcher investigating information systems user behavior replied, “It is unclear this is possible without some kind of universal identification system. So long as there are states/locations where one can be unidentified, there will be holes in the verification system.”

An anonymous respondent noted, “Never completely unhackable, but trust can be increased with careful design, care to address socio-technical interactions and appropriate social norms.”

An assistant professor based in North America replied, “There’s no way to create unhackable verification systems, but as a whole, the media environment can be shaped to encourage critical thinking about the media. I’m most encouraged by people working on ways to get people from different viewpoints into conversation with each other. I also think that tools for learning the provenance of news (how it moves through one’s social media sphere) could be important here.”

A development associate for an internet action group in the South Pacific observed, “I do not have technical knowledge or skills, but just as there are ways to try to resolve and create a trusted system, there will be accompanying risks and threats hovering around to try to upset it.”

A futurist/consultant, said, “So long as content is digitally distributed there is always the potential to be hacked.”

A CEO based in the Middle East replied, “I do not think so. There are too many variables to control.”

An anonymous respondent noted, “I don’t know; but this might also not be the right question if this is about trust: whether or not people trust information has more to do with them than the source (trust is given); who or what to trust has also something to do with class, et cetera (i.e., making unhackable information systems speaks to a middle-class audience but might not have an effect on lower-class people).”

A senior political scientist wrote, “There is not because people have a First Amendment right to say whatever. And they will, if it is to their advantage.”

A senior researcher and distinguished fellow for a major futures consultancy observed, “Every node in every Net, at every layer in the technology stack is currently potentially vulnerable, and humans are fallible – giving away credentials. In the long term very sophisticated multi-factor biometrics may mitigate risks at the human end. Meanwhile – advanced interconnected secure blockchain fabrics, may extend considerable security to future microservices, Internet of Things automation and valuable data and media.”

An executive for a nonprofit think tank commented, “Not sure, but I do think that preventing advertising from automatically going to those sites would be a good step in the right direction.”

A chief marketing officer wrote, “No to the way you have asked the question. However, the real answer is yes and no. Reliable and trusted are achievable but no software has ever been unhackable. Blockchain technology could make a deep impact on reliable and trusted.”

A director for freedom of expression of a major global citizen advocacy organization said, “No, I don’t believe there is. We need to focus on teaching reading/comprehension skills instead.”

An anonymous professor of economics based in North America noted, “It used to be that public funding would do this. Some private company could make money by fact checking and being reliable, like newspapers used to do.”

An anonymous respondent replied, “Yes, we’ve done it in the past. ‘Trusted news’ and academic sources are what we teach students.”

The co-founder of an internet community organization commented, “The evolution of blockchain technology may contribute to verification of online fact.”

A research scientist with IBM Research noted, “I haven’t seen any example of an unhackable system. Any systems we build in the future need to be built with the knowledge that they will be hacked, and have appropriate controls in place to minimize the damage done when they are hacked.”

An associate professor of communication studies at a Washington DC-based university said, “No, although there are systems that are more and less reliable, and more and less hackable. The problems are numerous and well-documented, ranging from zero-day weaknesses to user vulnerabilities.”

A town council member in a well-known region of the southeastern US commented, “Trusted systems might be achievable. Data is reliable, for example. But if the data doesn’t speak to your experience, you might not believe it. For example, unemployment statistics. Employment has seemed to be trending up. But if you’ve lost your job and so have all your friends and you are in depressed rural America, you might not believe it.”

A research scientist based in Moscow said, “No.”

A senior global policy analyst for a major online citizen advocacy group said, “Nothing is unhackable.”

The chief technology strategist for a nonprofit research network serving community institutions, commented, “We may see a combination of Wikipedia-like curation combined with blockchain for determining provenance.”

An anonymous research scientist based in North America wrote, “A system that enables commentary on public assertions by certified, non-anonymous reviewers – such that the reviewers themselves would be subject to Yelp-like review – might work, with the certification provided by Verisign-like organizations. Wikipedia is maybe a somewhat imperfect prototype for the kind of system I’m thinking of.”

An anonymous internet pioneer/originator commented, “The question is mis-framed, as it assumes that trust requires a single oracle that determines trust. Trust in offline sources does not work that way; why should online trust be any different?”

The technology editor for one of the world’s most-trusted news organizations commented, “Yes, the question is at what cost and who will pay.”

A North American research scientist said, “I figure that, if the incentives are correct, there must be a way to do so. People have created systems to verify that random drivers are who you think they are, so people should be able to do that for information as well.”

An anonymous respondent wrote, “There are ways to fix the current problems, but new challenges will keep coming.”

A chief executive officer said, “Maybe. The problem is that everything is probably hackable and trust is variable. It will require a combination of change in what people trust as well as a technological enforcement.”

An anonymous research scientist based in North America observed, “We can get verification of publishers. That’s some distance from ensuring unhackable systems.”

An anonymous survey participant replied, “I have no idea! But I trust people smarter than me will find a way to do it, because there will be too much at stake not to.”

A consultant based in North America noted, “Yes. But the central problem isn’t a technical one, it is a human one. You can create a very strong verification system, but you can’t scale it up easily (without deep participation from Facebook and Google that is highly unlikely). Adoption of verification systems will be strongest among those who seek them out, a demographic that is not at the center of the political disinformation problem. Further, the intervention of verification could well serve (in the short term) to deepen the dogmatic lines of ideological division.”

An adjunct senior lecturer in computing noted, “There is no way. Even the lowest level hardware has unknown analogue faults which can be triggered by particular sequences of digital instructions leading to exploitation and unintended operation.”

A distinguished professor emeritus of political science at a US university wrote, “The only mechanism I can imagine is to have a central source of information to turn to, but opponents will trash that (as in what is happening to CNN) and opponents are likely to believe the trashers.”

A professor of information systems at a major technological university in Germany commented, “Reliable, trusted, unhackable, verification: difficult terms if you want to measure yes/no.”

A principal consultant said, “People have been creating reliable and trusted verification systems since we have been able to communicate. Others have worked on hacking them for just as long. So it’s likely an ongoing battle. I think the romance with the anonymous source may be ebbing, as we come to understand how very unreliable information from sources which face no consequences for misleading us can be.”

A professor based in North America observed, “I’m not sure about the unhackable part. I do think that it will be more reliable.”

A technical writer said, “Current methods have all proven hackable. Not sure how that can change.”

A professor and researcher based in North America noted, “At 100%? No. But those of good will in tech need to relentlessly strive for it.”

An author and journalist noted, “Things will stay the same. It is a never-ending arms race. People will learn to trust or distrust the messengers more.”

A professor of sociology based in North America said, “Probably not. There is too much news content being shared in too many different places to ensure accuracy. People do not look to a single source for news. And powerful news organizations of the past (New York Times, CNN, et cetera) have been deemed untrustworthy by the current President of the United States.”

A data scientist based in Europe who is also affiliated with a program at Harvard University wrote, “No.”

A senior vice president for government relations noted, “I am not sure, but robust security is essential to building trust and everyone in the eco-system has a role to play in enhancing security.”

A doctoral candidate and fellow with a major international privacy rights organization said, “Generally speaking, I think you will find that even if you develop ranking systems or warning messages, users will just not buy into them. The problem is not technological, it is sociological and behavioural.”

A professor based in Europe commented, “Yes, but state actors will fight tooth and nail to prevent this from happening, either using law or technology. They have too much interest in spying on everything people do to allow true security.”

An anonymous research scientist based in Asia/Southeast Asia wrote, “No, there will always be new technologies to outperform the current ones.”

A professor of sociology based in Europe observed, “I am not sure, but it will not be one solution for all time; it will have to be a dynamic solution.”

An anonymous business leader wrote, “Yes.”

An anonymous professor of cybersecurity at a major US university commented, “It is difficult, if not impossible, to get a group of diverse individuals to agree on what constitutes bias. I don’t see how a technical system can do any better.”

An anonymous educator noted, “Blockchain springs to mind.”

A Ph.D. candidate in informatics commented, “It is possible to create systems that are reliable and trusted, but probably not unhackable. I imagine there could be systems that leverage the crowd to check facts in real time. Computational systems would be possible, but it would be very difficult to create algorithms we could trust.”

A legal researcher based in Asia/Southeast Asia said, “The way is that all individuals become reliable and trusted online.”

An anonymous researcher based in North America observed, “No, because there is no single ‘trusted’ source of such information and the media is too partisan and polarized.”

A member of the Internet Architecture Board said, “To verify what? Generally, we can create systems that verify that something was created by someone who holds a private key (or similar secret information/capability). Linking that key to a real-world person or artefact is the tricky part. Our best example right now is the Web PKI, and it’s not great.”
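The distinction this respondent draws — verifying that a message came from a key-holder versus binding that key to a real-world person — can be sketched in a few lines. The minimal sketch below uses Python’s standard-library HMAC as a simplified, symmetric stand-in for the asymmetric signatures of the Web PKI; the key and messages are hypothetical:

```python
import hmac
import hashlib

# Hypothetical shared secret; in the Web PKI this would be an asymmetric private key.
SECRET = b"demo-key"

def sign(message: bytes) -> str:
    """Produce a tag only the key-holder can compute."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag using a constant-time comparison."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"statement v1")
print(verify(b"statement v1", tag))  # True: created by the key-holder, unaltered
print(verify(b"statement v2", tag))  # False: content differs from what was signed
# Note: nothing in this code links SECRET to a real-world person --
# that binding is exactly the "tricky part" the respondent describes.
```

The mechanics of verification are easy; the social problem of identity binding is what certificate authorities attempt, imperfectly, to solve.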

A North American research scientist said, “Yes. I have trust that we can accomplish that.”

An associate professor of business at a major university in Australia observed, “Conclusions and implications often cannot be verified (e.g., the economic implications of a proposed policy change), but most claims of fact can be objectively verified quickly and easily.”

A Ph.D. candidate at the University of Illinois-Chicago wrote, “I honestly do not know, although I am aware that Bing Liu is developing programs to identify false news accounts, which I assume could then be used to filter out unverified sources.”

A postdoctoral associate at MIT noted, “There is not. As with most things on the internet, there will always be an ‘arms race’ between the system engineers and those who wish to exploit it. I don’t think it’s possible for one side to completely beat out the other; however, that does not mean that we should give up, just that we need to be realistic.”

A leading internet pioneer who has worked with the FCC, ITU, GE, Sprint and VeriSign commented, “This cannot be done with an open TCP/IP internet.”

A professor based in New York observed, “Nothing is unhackable. The bigger issue is the different epistemologies we bring to truth, and every system works from an algorithm, which is usually hidden from view. One thing that can be done in politics is quicker fact-checking. For example, in the latest US presidential debates, the candidates were able to tell a bald-faced lie even though there were video clips of them making statements to the contrary. Why aren’t debates using these video clips? That would fundamentally make it harder for candidates to contradict themselves.”

An author and journalist based in North America said, “I would never say anything is unhackable, but there are new verification tools being developed now that, at a minimum, will improve transparency for news consumers.”

A research scientist based in North America wrote, “No, but more reliable systems will become widely trusted.”

A professor of education policy commented, “I’m sure there are ways to get much closer to such systems, but again, this would require political will, government resources, and administrative capacity – none of which, in my estimation, exist right now or are likely to exist in the near future – in large part because we are a two-party system, but one of our political parties has shown itself unwilling to recognize the risk or to show care about it.”

An anonymous survey participant noted, “Yes and No. As ways are found to verify facts, ways will be found around them. The issue will never go away.”

A distinguished engineer for a major provider of IT solutions and hardware commented, “This is a double-edged sword – you trust a news source, and then that news source gets to publish misinformation. Who determines whether a news article or news source is trustworthy? That entity would have too much power and it would eventually corrupt.”

A North American research scientist said, “Misinformation is a social problem that no amount of technology can solve. The only way to improve it is through better education.”

A researcher based in North America observed, “It is possible to create trusted sites through AI.”

An anonymous internet activist/user based in Europe commented, “No, there is no way to determine what is or is not fake news. Any attempt to do so is censorship.”

A business leader based in North America noted, “We have them today for (extremely) limited applications – the next 10 years will open them up to broader, more practical uses. It is to be determined whether they’ll be broad enough for smaller business concerns and consumers.”

An anonymous consultant based in North America commented, “I think not. The operators of the systems need to be trusted. I do not see an incentive system that will lead to such trusted operators.”

An emeritus professor of communication for a US Ivy League university noted, “It is not at all clear that what we are concerned with is verification – most of what we are concerned about these days is not binary, true-false assessments, but information that is relevant, useful and reliable as a guide for decision making.”

A former software systems architect replied, “Yes, we have the technical means for small groups to employ reliable verification systems. They will require trust and they can be subverted, whether or not by hacking. Don’t expect more than is achievable in person-to-person relationships.”

A copyright and free speech artist-advocate observed, “I think yes, but only for those who want to use them. There will always be fake news and people will need to confirm the veracity of the story/sites. Unfortunately, that doesn’t leave me optimistic. How many people post stories that Snopes.com has already proven false?”

A North American futurist/consultant commented, “Decentralize news from current institutions and aggregate results.”

A consultant based in North America replied, “This is an unattainable goal, and the better approach is in community and expert moderation of trending content, rather than trusting a verification system.”

A political economist and columnist commented, “This requires tech knowledge I don’t have. My grandson, 6, when asked if he liked dinosaurs replied: No. I’m more tech than that. With an attitude like that, I can’t begin to divine the future of communication.”

A technology and futures editor said, “The cybersecurity story is a joke. A series of jokes. OK: the Obama guys hire dozens from Google et al., and we have the OPM hack, likely the most disastrous assault on US cybersecurity ever. We need a dramatic shift.”

A professor of communications at a US university said, “There will always be an arms race in this area, but we could be doing a lot better in every arena even with the tools we have.”

An anonymous author and journalist wrote, “We can certainly improve.”

The chairman of a company commented, “Blockchain may hold one of the answers, but this is more of a people than a technology problem.”

An anonymous author and journalist based in North America wrote, “Technically, drone witness fleets or body cam implants or other recording technology can spy on events, but the meaning of events is utterly hackable and mutable. In addition, saying a technology is unhackable means that when counterfeit information is produced, it has too much veracity!”

A founder and research scientist commented, “No system is entirely unhackable, but it can be built to be distributed and robust. Crawlers tied to back end ensemble algorithms built on continually refreshed training sets (and administered by a neutral consortium) are a good place to start. A plugin can display live ratings or filter accordingly.”
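The ensemble idea this respondent sketches — several independently trained models whose combined verdict a browser plugin could display as a live rating — reduces, at its simplest, to a majority vote. A minimal sketch follows; the crawlers, refreshed training sets, and consortium governance are not modeled, and the labels are hypothetical:

```python
from collections import Counter

def ensemble_rating(verdicts):
    """Combine independent model verdicts by majority vote.

    Returns the winning label and the fraction of models that agree,
    which a plugin could display as a live rating or use as a filter threshold.
    """
    counts = Counter(verdicts)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(verdicts)

# Hypothetical verdicts from three independently trained classifiers
label, confidence = ensemble_rating(["reliable", "reliable", "unreliable"])
print(label, round(confidence, 2))  # reliable 0.67
```

Majority voting is the simplest ensemble scheme; weighted voting or stacking would let the consortium account for each model’s track record.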

The founder of one of the internet’s longest-running information-sharing platforms commented, “I think so, using a combination of people and AI methods. What will be visible will be user options to choose only trustworthy sources, or not.”

A researcher and journalist replied, “Any system is hackable whether using purely technical means or human weaknesses.”

A professor and institute director said, “No. There are technological means of reducing fake news, such as checking provenance and consistency with other news sources, but they are not foolproof.”

A professor based in North America noted, “Sure, but they will be drowned out by the monetary interests.”

The executive director for an environmental issues startup commented, “I think so, and that it would have to involve a group of actual verifiable people at its core, a relatively large group but not too unwieldy.”

An anonymous respondent observed, “No, that is why we have editorials (opinion pieces); articles today are more like editorials – choosing the ‘facts’ and expressing an opinion; not everyone can see the same facts and come to the same conclusion. The only thing we could do is provide everyone with a video camera and show what really happens – but even then, people will come to different truths based on what they see.”

An anonymous research scientist said, “For anything I can imagine, there is a possible workaround; I cannot envision a reliable, trusted, unhackable verification system.”

An anonymous business leader replied, “It’s like spam – tools to slow it down are met by countermeasures.”

A professor and researcher based in North America noted, “Yes and no. It is not possible to create, at one shot, a reliable, trusted, unhackable verification system. Trust is a social value that must be developed and maintained over time. The system will be only as trusted as the institution responsible for its maintenance. I do believe that it is possible to maintain reliable and trusted systems, but it is not a technology problem. It is a problem of ongoing support, labor and social integration.”

A marketing consultant for an innovations company wrote, “Yes, in limited situations concerning limited subjects.”

An anonymous respondent said, “No. Smart people will always find a workaround. That’s the challenge: Authority says ‘No;’ Brain says ‘You aren’t the boss of me.’”

An anonymous respondent replied, “Blockchain technology is the most promising. But human reputation is also going to continue to be valued.”

An anonymous activist/user wrote, “This and related questions presuppose that technology can overcome human moral and other weaknesses, yet unfortunately this is not the case.”

The executive director of a major global privacy advocacy organization said, “Most likely such systems are possible, but even if they are, they will not be used.”

An anonymous research scientist noted, “Unhackable? That is laughable. Without that, trust erodes.”

A US-based associate professor of political science commented, “I think there is a way that verification systems can be created.”

A graduate researcher at Northwestern University wrote, “There is no such thing as an ‘unhackable’ system; that’s a ridiculous concept. Humans build things, other humans dismantle them, back and forth, again and again – that’s just history. The challenge isn’t ever creating a system that will stand forever, it’s creating a system that works for right now and balances privacy and transparency well.”

A strategist for an institute replied, “NO, the speed of the action is higher than the speed of the reaction.”

An educational technology broker replied, “Blockchain technology has some real potential.”

A chief technology officer observed, “Everything is eventually hackable. It is a misconception about cybersecurity to think otherwise. Security is about risk mitigation by reducing attack vectors. With good practice we can reduce the risk.”

A research assistant at MIT noted, “Yes. Blockchain technology is very promising for applications like data security.”

A journalist based in North America said, “The marketplace of ideas by definition contains people who are wrong, confused, who lie, or who have something to spin. Let the marketplace of ideas work.”

The CEO of a major American internet media company based in New York City replied, “Nothing is unhackable, and the modern approach to security isn’t about preventing breaches, it is about limiting their damage. That is what we will see with verification systems: they won’t be perfect, but they will improve.”

An engineering director for Google observed, “No. No system can be unhackable, but big improvements can be made at many layers, from DNS through operating systems and on to higher-level services.”

A librarian based in North America noted, “Construct something like the old ‘Good Housekeeping Seal of Approval.’”

A vice president of survey operations for a major policy research organization replied, “Organizations can develop their reputation for trustworthiness, and they can have significant influence, but it will not be complete and will struggle to be timely.”

A senior international communications advisor commented, “I’m sure there is, but I don’t know what they are. Even The Intercept completely messed up when they inadvertently outed one of their sources.”

A director of new media said, “There may be, but only with state-of-the-art encryption maintained within CLOSED systems. The insistence of authorities (the NSA) on maintaining access to all systems will maintain vulnerability.”

A technical evangelist based in Southern California said, “Never, though we can do better than now.”

A business owner replied, “Yes, but unfortunately, we’re looking at white-listing sources.”

A doctoral candidate and Internet of Things researcher said, “No; misinformation evolves in terms of strategies, rhetoric, and goals as verification systems spring up; we might find a way to quash today’s misinformation, but tomorrow’s will look and act differently.”

A data scientist and blockchain expert based in Europe wrote, “It will be done using global blockchains with millions of nodes worldwide.”
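Several respondents point to blockchains as a verification substrate. The core tamper-evidence property is simple to illustrate: each block’s hash covers the previous block, so any edit to history invalidates everything after it. A minimal sketch in Python’s standard library (the records are hypothetical, and real blockchains add distributed consensus across the many nodes the respondent mentions):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record whose hash covers the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def is_valid(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True)
        if (block["prev"] != prev or
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "story published 2017-07-01")
add_block(chain, "correction issued 2017-07-02")
print(is_valid(chain))        # True
chain[0]["record"] = "tampered"
print(is_valid(chain))        # False: the edit is detectable
```

Note what this does and does not provide: it makes retroactive edits detectable, but says nothing about whether the original record was true — the limitation the surrounding responses keep returning to.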

An anonymous respondent commented, “No. It all depends on who controls the notion of truth. It is only possible to prove the information has not been tampered with from the source.”
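This respondent’s closing point — that one can prove information has not been tampered with since leaving the source, but not that it is true — is exactly what a published content hash provides. A minimal sketch with Python’s standard library (the article text is hypothetical):

```python
import hashlib

def fingerprint(text: str) -> str:
    """SHA-256 digest of an article body as published by the source."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Article body exactly as released by the source."
published_hash = fingerprint(original)

# Integrity check: any silent edit changes the digest...
print(fingerprint(original) == published_hash)                         # True
print(fingerprint("Article body, quietly edited.") == published_hash)  # False
# ...but a matching digest says nothing about whether the source told the truth.
```
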

An historian and writer said, “We will create trusted sources, but I’m not sure that will solve the problem. Those sources will probably be expensive, and we’ll need to decide how they are paid for.”

A retired university professor noted, “No. But we can certainly improve the ones we have by promoting encryption for all devices and communications. The FBI can get warrants and judges can hold warrant subjects in contempt if they need to get into people’s computers.”

A professor of media studies and director of a center for civic media wrote, “There may be. But ‘trust’ and security require either a fully distributed system like blockchain (anarchism) or a centralized system (like Facebook).”

An author/editor/journalist based in Europe commented, “Improve reader and viewer discrimination.”
