In summer 2017, Elon University and the Pew Research Internet, Science and Technology Project asked technologists, scholars, practitioners, strategic thinkers and others to share their answers to the following query:
What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?
About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.
Links to six pages with written elaborations to six survey questions
Respondents were asked six questions tied to the main theme. Click on the following headlines to link to pages with the full responses from study participants who chose to remain anonymous in one or more of their survey remarks.
1 – Briefly explain why the information environment will improve/not improve.
2 – Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?
3 – What are the consequences for society as a whole if it is not possible to prevent the coopting of public information by bad actors?
4 – If changes can be made to reduce fake and misleading information, can this be done in a way that preserves civil liberties? What rights might be curtailed?
5 – What do you think the penalties should be for those who are found to have created or knowingly spread false information with the intent of causing harmful effects? What role, if any, should government play in preventing the distribution of false information?
6 – What do you think will happen to trust in information online by 2027?
These responses were collected through an “opt in” invitation sent to nearly 8,000 people; 1,116 respondents answered at least one question, and 777 submitted one or more written elaborations to the six follow-up questions.
Among the key themes emerging from all respondents’ answers to the six questions were:

– Things will not improve because the internet’s growth and accelerating innovation allow more people and AI to create and instantly spread manipulative narratives.
– Humans are, by nature, selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar.
– In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil.
– The dwindling of common knowledge makes healthy debate difficult, destabilizes trust and divides the public; info-glut and the fading of news media are part of the problem.
– A small segment of society will find, use and perhaps pay a premium for information from reliable sources, but outside of this group ‘chaos will reign,’ and a worsening digital divide will develop.
– Technology will create new challenges that can’t or won’t be countered effectively and at scale.
– Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars.
– The most effective tech solutions to misinformation will endanger people’s dwindling privacy options, and are likely to remove the ability to be anonymous online and to limit free speech.
– Technology will win out, as it helps us label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content.
– Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of ‘trust ratings.’
– Regulatory remedies could include software liability law, required identities and the unbundling of social networks.
– People will adjust and make things better; misinformation has always been with us, and people have found ways to lessen its impact.
– Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda.
– Technology alone can’t win the battle, though; the public must fund and support the production of objective, accurate information.
– Funding must be directed to the restoration of a well-fortified, ethical, trusted public press.
– Information literacy must be elevated to a primary goal at all levels of education.
If you wish to read the full survey report with analysis, click here.
To read credited survey participants’ responses with no analysis, click here.