
The 2016 Survey: The Future of Online Discourse

Free speech, trolls, anonymity, fake news, and the future:
Will uncivil and manipulative behaviors persist or even worsen?
Credited responses by those who wrote to explain their response

Internet experts and highly engaged netizens participated in answering a five-question survey fielded by the Imagining the Internet Center and the Pew Internet, Science & Technology Project from July 1 through August 12, 2016. One of the survey questions asked respondents to share their answer to the following query:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust? Please elaborate on your answer and consider addressing these issues in your response: How do you expect social media and digital commentary will evolve in the coming decade? Do you think we will see a widespread demand for technological systems or solutions that encourage more-inclusive online interactions? What do you think will happen to free speech? What might be the consequences for anonymity and privacy?

Most who responded said they fear uncivil and manipulative behaviors on the Internet will persist or get worse. Some predict this will lead to a splintering of social media into AI-patrolled and regulated "safe spaces" separated from free-for-all zones. Many worry this will hurt the open exchange of ideas and compromise privacy, and some say it will damage democracy. Only about 19% said they expect social discourse to improve by 2026.

Among the key themes emerging from 1,537 respondents' answers were:

- Things will stay bad because to troll is human.
- Anonymity abets bad behavior.
- Inequities are motivating at least some of the inflammatory dialogue.
- The growing scale and complexity of Internet discourse makes uncivil discourse difficult to overcome.
- Things will stay bad because tangible and intangible economic and political incentives support uncivil behaviors.
- Hate, anxiety, and anger drive up participation, which equals profits and power.
- Technology companies have little incentive to rein in uncivil discourse, and traditional news organizations - which used to help shape discussions for the common good - have shrunk in importance.
- Terrorists and other political actors are benefiting from the weaponization of online narratives, implementing human- and bot-based misinformation and persuasion tactics.
- Things will get better because technical and human solutions will arise to detect and filter inappropriate behaviors.
- Due to the filtering and moderation required to deal with uncivil discourse, online worlds will splinter into segmented, controlled social zones and free-for-all zones.
- There will be partitioning, exclusion, and division of online outlets, social platforms, and open spaces.
- Trolls and other uncivil actors will fight back, innovating around any barriers they face.
- Some 'solutions' to uncivil behavior could further change the nature of the Internet.
- Pervasive surveillance will become more prevalent.
- Dealing with hostile behavior and addressing violence and hate speech will become the responsibility of the state instead of the platform or service providers.
- Polarization will occur due to the compartmentalization of ideologies.
- Increased monitoring, regulation, and enforcement will shape content to such an extent that the public will not gain access to important information and possibly lose free speech.

If you wish to read the full survey report with analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet.xhtml

To read anonymous survey participants' responses with no analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet_anon.xhtml

Written elaborations by for-credit respondents

Following are the full responses from study participants who chose to take credit for their remarks in the survey, including only those who provided a written elaboration explaining how they see the near future of online discourse. Some of these are the longer versions of expert responses that appear in shorter form in the official survey report. About half of respondents chose to take credit for their elaboration on the question (anonymous responses are published on a separate page).

These responses were collected through an “opt in” invitation sent to several thousand people, identified by the researchers as those widely quoted as technology builders and analysts and those who have made insightful predictions in our previous queries about the future of the Internet.

Some 1,537 experts responded to the following question:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust? Please elaborate on your answer and consider addressing these issues: How do you expect social media and digital commentary will evolve in the coming decade? Do you think we will see a widespread demand for technological systems or solutions that encourage more-inclusive online interactions? What do you think will happen to free speech? What might be the consequences for anonymity and privacy?

About 41% of respondents said they expect no major change in the tone of online interaction; about 39% said they expect online communication will be more shaped by negative activities; and about 19% expect online communication will be less shaped by negative activities.

Vinton Cerf, Google vice president, co-inventor of the Internet Protocol, and Internet Hall of Fame member, wrote, "Internet is threatened with fragmentation. There are constant attacks by hackers and governments. The apparent anonymity of the internet encourages negative discourse. It seems clear that people feel free to make unsupported claims, assertions, and accusations in online media. The screen 'protects' them from immediate consequences (sort of like yelling at other drivers behind the protection of your windshield in the car). This freedom leads to chains of comments that quickly go off topic and often become ad hominem. As things now stand, people are attracted to forums that align with their thinking, leading to an echo effect. This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because 'everyone' is doing it. People are naive about the content they find on the Internet/Web and self-select that which supports their views. It is hard to see where this phenomenon may be heading. If we teach critical thinking early in schools, perhaps we will create a more thoughtful general public. See also The Righteous Mind by Jonathan Haidt for more explanation of polarizing behavior. Social media bring every bad event to our attention, making us feel as if they all happened in our back yards—leading to an overall sense of unease. The combination of bias-reinforcing enclaves and global access to bad actions seems like a toxic mix. It is not clear whether there is a way to counter-balance their socially harmful effects."

David Clark, senior research scientist at MIT and Internet Hall of Fame member, commented, "It is possible, with attention to the details of design that lead to good social behavior, to produce applications that better regulate negative behavior. However, it is not clear what actor has the motivation to design and introduce such tools. The application space on the internet today is shaped by large commercial actors, and their goals are profit-seeking, not the creation of a better commons. I do not see tools for public discourse being good 'money makers,' so we are coming to a fork in the road—either a new class of actor emerges with a different set of motivations, one that is prepared to build and sustain a new generation of tools, or I fear the overall character of discourse will decline."

Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, wrote, "I expect surveillance and censorship to become more systematic, even in supposedly free countries such as the US. Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy."

Henning Schulzrinne, a professor at Columbia University and Internet Hall of Fame member, wrote, "There are likely to be at least two influences on this: The amount of political polarization in a country and the impact of various kinds of bots that automatically add content to social media. There may also be a segregation into different types of public discourse. For example, it seems likely that many newspapers will have to resort to human filtering or get rid of comment sections altogether. Twitter will remain unfiltered, but become more of a niche activity. Facebook is more likely to develop mechanisms where comments can be filtered, or people will learn to ignore comments on all but personal messages. (Recent announcements by Facebook about selecting fewer news stories are an indirect indicator. Heated debates about gun control don't mix well with pictures of puppies.)"

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, commented, "Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day."

Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN, commented, "Most attempts at reasoned discourse on topics interesting to me have been disrupted by trolls in the last decade or so. Many individuals faced with this harassment simply withdraw. So I guess my answer would be that I don't see a lot of change one way or the other. There is a somewhat broader question of whether expectations of 'reasoned' discourse were ever realistic. The history of this, going back to Plato, is one of self-selection into congenial groups. The internet, among other things, has energized a variety of anti-social behaviors by people who get satisfaction from the attendant publicity. My wife's reaction is 'why are you surprised?' in regard to seeing behavior online that already exists offline."

Glenn Ricart, Internet Hall of Fame member and founder/CTO of US Ignite, replied, "The predominance of Internet tools that assume you want ‘relevant’ information, or information that your friends recommend, or that match your own communications, all these reinforce an ‘echo chamber’ internet. Instead of seeing the wide diversity of opinion present on the Internet, you are subtly guided into only seeing and hearing the slice of the internet featuring voices like your own. With such reinforcement, there's little social pressure to avoid negative activities. It is of great concern that we have yet to find a funding model that will replace the Fourth Estate functions of the press. This problem only exacerbates the issue of internet communication tools featuring voices like your own. We desperately need to create interest in serious, fact-laden, truth-seeking discourse. The internet could be, but it largely isn't, doing this."

Seth Finkelstein, writer and pioneering programmer, said, "We aren't anywhere near Peak Disgust. One of the less-examined aspects of the 2016 US presidential election is that Donald Trump is demonstrating to other politicians how to effectively exploit such an environment. He wasn't the first to do it, by far. But he's showing how very high-profile, powerful people can adapt and apply such strategies to social media. Basically, we're moving out of the 'early adopter' phase of online polarization, into making it mainstream. The phrasing of this question conflates two different issues. It uses a framework akin to 'Will our kingdom be more or less threatened by brigands, theft, monsters, and an overall atmosphere of discontent, strife, and misery?' The first part leads one to think of malicious motives, and thus to attribute the problems of the second part along the lines of outside agitators afflicting peaceful townsfolk. Of course deliberate troublemakers exist. Yet many of the worst excesses come from people who believe in their own minds that they are not bad actors at all, but are fighting a good fight for all which is right and true (indeed, in many cases, both sides of a conflict can believe this, and where you stand depends on where you sit). When reward systems favor outrage-mongering and attention-seeking almost exclusively, nothing is going to be solved by inveighing against supposed moral degenerates."

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, noted, "The early idealism surrounding online communication clearly has been shattered. Thanks to persistent activism, high-profile anti-harassment scholarship, legal reform, and widely covered instances of 'the death of civility,' the major tech companies have become less comfortable with their older platitudes about being responsible to promote free speech full stop. They seem to realize that constantly growing a user base requires ensuring people feel safe. Accordingly, they're working harder to ensure that their platforms are designed to optimize doing things like automatically detect harassment, easily allow for users to report harassment, and swiftly act upon harassment complaints by applying sanctions derived from clear Community Guidelines and Terms of Service that revolve around expectations of civility. Online journalism has tried to deal with trolling comments primarily by either removing comment sections entirely or else implementing democratic voting systems that allow readers to rank the quality of comments. Neither of these two options is ideal. But they do suggest there may be better designs waiting to be created and implemented, and I suspect they will be. I also imagine a robust software market emerging of digital ventriloquists that combines predictive analytics with algorithms that interpret the appropriateness of various remarks. For example, the software could detect you're communicating with a person or member of a group that, historically, you've had a hard time being civil with. It could then data-mine your past conversations and recommend a socially acceptable response to that person that's worded in your own personal style of writing."

Scott Amyx, CEO of Amyx+, said, "There is an imbalance between the desire for anonymity and privacy vs. actual user behavior (Facebook, Snapchat) and the advancement of technology that is pushing the boundary. Free speech will be amplified through peer-to-peer multicast, mesh network technologies. Earlier-generation platforms that enabled free speech—such as Open Garden's FireChat—will usher in even broader and more pervasive person-to-person (P2P) communication technologies, powered by the Internet of Things. Billions of IoT-connected devices and nodes will increase the density to support vibrant P2P global wireless sensor networks. IoT is transitioning our computing model from centralized to a decentralized computing paradigm. This enables self-forming, self-healing networks that enable messaging, communication and computing without the need for a central system or the traditional Internet. Everything becomes node-to-node. These technological enablements will amplify the voices of the people, especially in closed, censored nations. For clarity, new technologies will not necessarily foster greater griping, distrust, and disgust but rather they will allow private individual thoughts and conversations to surface to public discourse. Once in the public domain, they will take on an organic momentum of their own to create political, social, and economic changes. Some will produce positive results and others negative results. New technologies are not without inherent risks. IoT poses greater surveillance and sousveillance as more objects become awakened and capable of collecting physiological, psychological (cognitive load), behavioral, emotional and contextual information about the masses. A global, more accessible network will lure malicious parties to hack or intercept private data for misuse."

Dave Howell, a senior program manager in the telecommunications industry, said, "Identity will replace anonymity on the internet. Devices will measure how a human interacts with them and compare to Web cookie-like records to match persons with an advertising database. This will become public knowledge and accessible to law enforcement and courts within the decade. There will be 'Trust Providers' at the far end of transaction blockchains who keep an official record of identity (interaction patterns), and these may be subpoenable. Individuals will learn that public utterances (on the internet) won't/don't go away, and can have consequences. Whether or not organizations (e.g., ACLU) can pass 'Right to be Forgotten' and privacy/speech protection acts in the decade will probably be irrelevant, as social belief will likely be suspicious that individuals are tracked regardless. This is a little scary. The generations who don't interact with connected devices are aging, shifting the population more into exposure and circumspection."

Susan Etlinger, industry analyst at Altimeter, wrote, "In the next several years we will see an increase in the type and volume of bad behavior online, mostly because there will be a corresponding increase in digital activity, whether on the internet as we know it today, or via messaging, virtual reality, IoT sensors, drones or other emerging technologies. Cyber-attacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view. The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more hidden channels such as Tor. Of course, this is already happening, just out of sight of most of us. The worst outcome is that we end up with a kind of Potemkin internet, in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality."

Joshua Segall, a software engineer, commented, "Online activity is already heavily shaped by negative activities and there's no reason to expect the trend to reverse. The effect is due to two broad drivers. First, the social media companies have taken a false neutral stance in which they apparently believe that technology will solve social issues as opposed to amplifying them. Companies have taken very few steps to prevent online abuse, and those that have been taken are minimal and ineffective. Without strong action and new ideas to foster inclusiveness and limit abuse from social media companies, the negative activities online will continue to escalate. Abusive activity is much more of a threat to free speech than almost any policy or action that could be taken by these companies. I think there is demand for more-inclusive systems but I don't see a pure technology play that will enable it. Abuse is already widespread, so it's unclear how much more demand there can be. The second driver is the ongoing economic stagnation across the globe, which is increasing tension between groups and fueling a sharp rise in nationalism, racism, fascism, and violence. This will be reflected online by increased abuse and negative activity, especially on social networks. Technical solutions and social media have little control over this aspect, but the underlying forces will affect them nonetheless. I don't think this has anything to do with anonymity, privacy, or free speech. It's a reflection of society, and people will find a way to use any system to express themselves. Any systemic change would have to be more broad-based than a single company's online policies. However, there is a role for these companies to play in shaping public discourse by encouraging inclusiveness, civility, and true discussion."

Alf Rehn, professor and chair of management and organization at Åbo Akademi University in Turku, Finland, responded, "As the public sphere moves evermore solidly onto the internet, the fractious mood of our discussion climate will strengthen online filter bubbles, clamorous echo chambers, and walled gardens of discourse."

Esther Dyson, founder of EDventure Holdings and technology entrepreneur, writer, and influencer, wrote, "Things will become somewhat better because people will find it tougher to avoid accountability. That doesn't mean everyone will become politically correct, but reputations will follow you more than they do now. (Of course, *interpretation* of a reputation varies, e.g., people who support Trump despise those who support Hillary and vice versa.) There will also be clever services like CivilComments.com (disclosure: I'm an investor) that foster crowdsourced moderation rather than censorship of comments. That approach, whether by CivilComments or future competitors, will help. (So would sender-pays, recipient-charges email, a business I would *like* to invest in!) Nonetheless, anonymity is an important right—and freedom of speech with impunity (except for actual harm, yada yada)—is similarly important. Anonymity should be discouraged in general, but it is necessary in regimes or cultures or simply situations where the truth is avoided and truth-speakers are punished."

Ian Peter, an Internet pioneer and historian based in Australia, wrote, "The continued expansion of sale of personal data by social media platforms and browser companies is bound to expand to distasteful and perhaps criminal activities based on the availability of greater amounts of information about individuals and their relationships."

David Karger, a professor of computer science at MIT, said, "We are still at the early stages of learning how to manage online public discourse. As we've rushed to explore ways to use this new medium, our lack of experience has led to many surprises both about what does work (who would have imagined that something like Wikipedia could succeed?) and what doesn't (why aren't online discussion forums just as friendly as grandma's book club?). The first generation of movie makers thought they were recording plays, pointing a fixed camera at a stage. It took a while to realize that you could move the camera and create a more natural and immersive experience for the viewer. Future generations will look on our early attempts to manage online discourse as showing similarly limited understanding of the new medium. The research community is responding to this new medium. Studies are increasing our understanding of it, and engineering researchers are proposing new tools for taking advantage of it and coping with its challenges. My own research group is exploring several novel directions in digital commentary. I believe that in the not too distant future all this work will yield results. Trolling, doxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people's awareness of opinions differing from their own, and support conversations with and learning from people who hold those opinions. You ask about free speech. The Internet transforms free speech from a right to an inevitability. In the long term it will not be possible to prevent anyone from transmitting information; there are simply too many interesting distribution channels for them all to be blocked. However, we need to (and will) develop a better understanding that freedom to *speak* does not imply freedom to *be heard*. 
The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable (trolls will be able to say what they want, but few will be listening). I'm less sanguine about anonymity and privacy. I am convinced by David Brin's 'transparent society' vision that the ever-decreasing cost/effort of surveillance will ultimately land us in a world where very little can be hidden. In a sense, I think we're headed back to the traditional small village where everyone knew everyone's business. I expect this will force us to cope with it in a similar way: by politely pretending not to know (and gossiping about people behind their backs)."

Hume Winzar, associate professor in business at Macquarie University, Sydney, Australia, commented, "The panopticon will be real and growing in size. Online technologies will be less anonymous. What we do and say online will be held to account. There is a simple and possibly naive assumption that after a while the majority of people grow up. As a technology matures, people also mature with it, and they tend to move toward a more natural communication style."

Jim Warren, Internet pioneer and longtime technology entrepreneur and activist, responded, "It seems clear—at least in the US—that 'bad actors,' children of all ages who have never been effectively taught civility and cooperation, are becoming more and more free to 'enjoy' sharing the worst of their 'social' leanings."

Jonathan Grudin, principal researcher at Microsoft, commented, "Social media use and online commentary will evolve, but their use has matured and stabilized. I expect no significant changes."

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, said, "Thomas Piketty, etc., have correctly predicted that we are in an era of greater social instability created by greater wealth disparity which can only be solved through either the wealthy collectively opting for a redistributive solution (which feels unlikely) or everyone else compelling redistribution (which feels messy, unstable, and potentially violent). The internet is the natural battleground for whatever breaking point we reach to play out, and it's also a useful surveillance, control, and propaganda tool for monied people hoping to forestall a redistributive future. The Chinese internet playbook—the 50c army, masses of astroturfers, libel campaigns against 'enemies of the state,' paranoid war-on-terror rhetoric—has become the playbook of all states, to some extent (see, e.g., the HBGary leak that revealed the US Air Force was putting out procurement tenders for 'persona management' software that allowed their operatives to control up to 20 distinct online identities, each). That will create even more inflammatory dialogue, flamewars, polarized debates, etc."

David Durant, a business analyst at UK Government Digital Service, wrote, "I expect no significant change in this area in the next ten years. Despite the introduction of technology such as civil comments (https://www.civilcomments.com/) the overall quality of discourse will remain the same. There are two reasons for this. The first is that a large volume of direct abuse online is created by a small number of people, and this will continue as the 'kudos' in their own peer group for doing so will remain high. Methods such as identity theft, multiple self-accrediting sets of user accounts, and technology such as IP-spoofing will help to enable this. The second reason this will continue is more social. It is in the interest of the paid-for media and most political groups to continue to encourage 'echo-chamber' thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious civilised conversations about many topics will remain very hard to achieve."

Bailey Poland, author of Haters: Harassment, Abuse, and Violence Online, wrote, "We are close to a tipping point in terms of online dialogue. Things are likely to get much worse before they get any better, but the state of online discourse has been ugly for a very long time, and people are beginning to rally for real changes. The demand for better systems and user protection is increasing quickly, and companies and websites will need to get on board or lose their user base. Free speech is often a misnomer in online spaces. I am far more concerned with State suppression of internet access than trumped up concerns about Twitter banning serial abusers. One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long."

Gail Ann Williams, former director of the Internet-pioneering community The WELL, an online community consultant, wrote, "Social media and digital commentary will evolve under the pressures from participants who value respectful communications, from intentional and impulsive bad actors, from community moderators and from interaction designers. All are learning, and social learnings will shift dynamics. Culture will evolve in small, gated interaction settings as well as in larger settings with less barrier to entry, just as private face-to-face conversation relies on private small-group expression as well as published or public speaking contributions to the public. The advantages and disadvantages to anonymity are enough that there will be a range of settings with a range of choices. In many ways, the significant threat to free speech, anonymity, and privacy will be perceptions of confidentiality that do not match up to technical prowess in identifying the players later. Offering a broad variety of settings and rule sets for discourse is the best way forward."

Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, observed, "Much as spam once threatened to derail email, trolling threatens online interactions now. Technologies will evolve/adapt to allow users more control and avoidance of trolling. It will not disappear, but likely will be reduced by technological solutions."

Bryan Alexander, president of Bryan Alexander Consulting, wrote, "I expect we'll see negative communications increase, and be balanced out by rising concerns for civility. The negative comments will occur wherever they can, and the number of venues will rise, with the expansion of the Internet of Things and when consumer production tools become available for virtual and mixed reality. Moreover, the continued growth of gaming (where trash talk remains), the persistence of sports culture (more trash talk and testosterone), and the popularity of TV news among the over-50 population will provide powerful cultural and psychological backing for abusive expression. At the same time there's rising interest in warding off prejudiced expression, especially among the young, the liberal, and those in the nonprofit world. We could well see a resurgence of politeness, or at least a spate of laws, lawsuits, and policies (both governmental and commercial) making it harder to abuse people digitally. These two developments seem to be in a kind of balance, at least for the medium-term future."

Simon Gottschalk, sociology professor at the University of Nevada, Las Vegas, commented, "I anticipate public discourse online to become more shaped by bad actors, harassment, trolls, etc. The way I see it, public discourse online seems to have been hurled into a negative spiral (witness Trump's tweets as the most grotesque example of this trend). I also anticipate the issue of free speech to become altered beyond recognition and to alter our understanding of it. In the end, it matters little if what we write/say online is indeed already officially and legally surveilled or not. The reasonable hunch is that it shapes how we experience everyday life and what we're willing to write/say in that setting. According to a New York Times article published a few days ago, even Facebook CEO Mark Zuckerberg covers the camera/microphone of his computer."

John Sniadowski, a systems architect for TrueBox, said, "More and more countries are going to adopt social scoring systems similar to those currently expanding in China with WeChat. These kinds of systems will massively influence suitability choices for jobs, housing, social status, and governments' views of their citizens. This will stymie free speech because political control of systems will work negatively against individuals who wish to voice alternative views to the accepted norms in some territories."

Bob Frankston, Internet pioneer and software innovator, said, "I see negative activities having an effect, but the effect will likely be to form communities that shield themselves from the larger world. We're still working out how to form and scale communities."

Wendy M. Grossman, a science writer and author of net.wars, wrote, "The Net has always had many different kinds of spaces. The only solution to keeping online discourse 'civilized' has similarly long been known: human moderation that enforces a clearly understood set of community norms. There are always going to be areas that are deliberately set up as places where people can be rude, abusive, let off steam, and attack others; even the earliest online spaces had these (London's CIX had the misanthrope conference; the WELL still has the flame conference). It's clear that the level of abusive attacks on sites like Twitter or that leverage multiple sites and technologies operates at a vastly different scale than the more-confined spaces of the past. (An example: Usenet—but see the classic Wired piece about the war between alt.tasteless and rec.pets.cats.) But I think this is partly about numbers. When the online audience was small, 1% assholes could be drowned out; now that it's large the Law of Truly Large Numbers kicks in and 1% is a lot of people, and once you have the mob others join it. I can't imagine a future in which it will be possible to force every space to become moderated and observe the same set of social norms. (I'm not even sure I want to: I think I'd find it hard to breathe in such a fully controlled environment.) The big sites will have to do more to eliminate the kinds of attacks that put people's lives and physical wellbeing in danger. Probably, though, the final solution rests with improving social justice and reducing economic inequality so there's less rage for the negative behavior to build on."

Stephen J. Neveroski, a respondent who shared no additional identifying details, commented, "I increasingly see news as both condensed and homogenized. Headlines are deceptive, click-bait abounds. Mainstream media all report the same thing, differing little in the opinions they proffer instead of facts. A turnstile of sources of 'information' crop up, but they don't keep pace with our need for relevant information. Unfortunately I see a generalized dumbing down of the population. People on the news today couldn't even recite the first line of the Declaration of Independence. Overall we are unable to process information, let alone form a cogent argument. Our intuition, rather than being shaped by the great thinkers of civilization, has been more affected by the Kardashians, and nobody seems to care."

Thornton May, futurist at FutureScapes, commented, "Society will rediscover its 'better angels.' The pendulum will swing back toward civil discourse. The ongoing technological Renaissance will be followed by a corresponding return to reasoned discourse."

David Weinberger, senior researcher at the Harvard Berkman Klein Center for Internet & Society, said, "Conversations are always shaped by norms and what the environment enables. For example, seating 100 dinner guests at one long table will shape the conversations differently than putting them at ten tables of ten, or 25 tables of four. The acoustics of the room will shape the conversations. Assigning seats or not will shape the conversations. Even serving wine instead of beer may shape the conversations. The same considerations are even more important on the Net because its global nature means that we have fewer shared norms, and its digital nature means that we have far more room to play with ways of bringing people together. We're getting much better at nudging conversations into useful interchanges. I believe we will continue to get better at it."

Jennifer Zickerman, an entrepreneur, commented, "More-active moderation will become the norm in online discourse. I expect that this will be driven by: new anti-harassment laws; a greater sense of social responsibility among organizations that host spaces for discourse; and society's decreasing tolerance for racism, sexism, bullying, etc. We are already seeing this trend. Even Reddit is working on its moderation problem. While some technological solutions will help organizations moderate their discourse spaces, in the next ten years moderation will continue to be mostly a human task. This gives larger organizations with bigger resources an advantage. Smaller organizations may not have the resources to have their own spaces for discourse. This endangers free speech, as larger organizations tend to be commercial and committed to the status quo. One potential technological solution may be an aggregation system for curated discourse. For example, perhaps there will be a 'code of conduct' that organizations that host public discourse platforms might be able to adopt. Discussions that pass through those organizations' platforms would be 'certified,' and thus available for aggregation to a larger repository. The idea is to distribute the task of quality control over discourse without limiting the size or power of organizations who are able to participate. A side effect of greater moderation will be the proliferation of 'underground' platforms for discourse, where people must be members in order to read or participate in discussions. These platforms will be highly toxic and may 'radicalize' people around certain causes and ideas, as closed groups are powerful tools in an 'us-versus-them' mental model. Discussion around these causes and ideas will be less visible to the general internet community, so people may have a false sense that there is less interest in and discussion around unsavory causes and ideas."

Luis Miron, professor at Loyola University-New Orleans, wrote, "Although I am not a pessimist, I am deeply worried that in the next decade, and perhaps beyond, racial and economic conflict will likely intensify. And social and economic inequality will widen before narrowing. Globally. My fear is that terrorism will continue to strike fear in the hearts and minds of ordinary citizens, furthering the online negativity."

Ken Koedinger, professor of human-computer interaction and psychology at Carnegie Mellon University, wrote, "People are surely, but slowly, getting smarter, and related aspects of improvement are better collaboration and communication skills, better disposition toward empathy, and an increasing ability to see that most ideas are not known to be true or false, but are unknown. It is a slow movement, but one we must and will be calling for, providing examples of, and teaching in these directions."

Thorlaug Agustsdottir of Iceland’s Pirate Party said, "The Internet of Things will change our use of everyday technology. A majority of people will still rely on big corporations to provide platforms, willing to sacrifice their privacy for the comfort of computerized living. Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet's early days are gone; now it's run by big data, Big Brother, and big profits. Anonymity is a myth; it only exists for end-users who lack lookup resources."

Scott A. Hale, senior data scientist at the Oxford Internet Institute, University of Oxford, wrote, "As social media and other interaction technologies become more integrated into our daily lives, I expect public discourse about these technologies to increase in general. The challenges will be more salient to more people, and the cost of non-participation will increase, driving a need to discuss and address these challenges. A balance must be found between protecting free speech and protecting privacy, preventing harassment, and other issues. These are complicated by the fact that most platforms are operated by private companies and do not interconnect with one another. I very much hope that standards-based cross-platform protocols are developed and used in the future and that the enforcement of norms and laws moves from private companies to governments. While many companies might desire the latter, they are likely against the former."

Dana Klisanin, psychologist/futurist at Evolutionary Guidance Media R&D, Inc., commented, "While many of the impacts of the Internet of Things make themselves apparent almost instantly, the impact on the collective psyche and unfurling mythos takes longer to recognize. In the coming decade, the conversation will shift away from a focus on the negative and trend toward the positive. Trolls and cyberbullies will find themselves competing with people who identify as digital altruists and cyberheroes."

Louisa Heinrich, founder at Superhuman Limited, observed, "In order to understand what's happening in online communications nowadays we can look to sociological parallels like isolated villages or circles of like-minded 'popular-crowd' friends and acquaintances. When people feel they are in the right and backed by a majority of their peers, they can be quite rude and dismissive to those who think differently. At the same time, highly regarded media outlets set the tone of public discourse to a great degree—when the media we see is brash, brazen, and inflammatory, we adopt that language. I hope we will see a conscious shift in social networks to promote diversity of ideas and of thinking, and also a return to journalistic standards (i.e., factual truth as well as opinion), but I fear that will only come when we are able to come up with business models that don't depend on hyper-targeting content for advertising dollars."

Mahsa Alimardani, research assistant in New Media and Digital Activism at the University of Amsterdam, noted, "We are moving toward more-closed systems of communication as opposed to public ones, perhaps eliminating harassing and negative forms of activity. There is also more acceptance and respect for anonymity and privacy, making things perhaps safer and more positive, and easier for free speech."

David Wuertele, a software engineer at Tesla Motors, noted, "As access to the Internet increases and modes of communication supported by the Internet proliferate, I expect the total amount of communication to increase. I expect the proportion of that growing amount which is negative to outstrip the total growth, because forces behind negative activities will develop and enhance their methods. As a result, forces resisting negative activities will develop and enhance their methods of resistance. Unfortunately, most people are easily manipulated by fear. Donald Trump's success is a testament to this fact. Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet."

Jon Lebkowsky, CEO of Polycot Associates, said, "We're at a point where, with more voices in the discussion, facilitated by the Internet, negative elements have become more visible/audible in civil discourse. This could be seen as the body politic releasing toxins—and as they are released, we can deal with them and hopefully mitigate their effect."

Susan Price, digital architect at Continuum Analytics, wrote, "Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. Facebook is arguably already functioning as such a forum, though the majority of interactions are with people we know personally or at one or two steps removed. While discourse is often heated, that ability to know the identity of participants keeps harassment and trolling at lower levels. But behavior that chills authentic connection is still widely present even when both parties are transparently identified. I hope the advent of robust identity/privacy-control systems (likely powered by blockchain or a similar technology), will give individuals increased options for productively, authentically engaging with people they know, or at least can identify clearly. We’ll need to educate children, particularly, to carefully curate the information coming into their awareness with much more sophistication than they do today. New online structures something like affinity guilds will evolve that allow individuals to associate with and benefit from the protection of and curation of a trusted group. People need extremely well-designed interfaces to control the barrage of content coming to their awareness. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil. If we build in audit trails, audits, and transparency to our forums, the bad effects can be recognized and mitigated. Citizens tend to conflate a host individual or organization's enforcement of rules of civil exchange (such as removing an offensive post from one's own Facebook page) with free speech abridgement. 
There will continue to be many, many venues where individuals may exercise their right to free speech; one individual’s right to speak (or publish) doesn’t require any other individual to 'hear and attend.' Better education and tools to control and curate our online activities can help. Blockchain technologies hold much promise for giving individuals this appropriate control over their attention, awareness, and the data we all generate through our actions. They will require being uniquely identified in transactions and movements, and readable to holders of the keys. A thoughtful, robust architecture and systems can give individuals control over the parties who hold those keys."

Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism, wrote, "I am an optimist with faith in humanity but we will see whether my optimism is misplaced. I believe we are seeing the release of a pressure valve (or perhaps an explosion) of pent-up speech: the 'masses' who for so long could not be heard can now speak, revealing their own interests, needs, and frustrations—their own identities distinct from the false media concept of the mass. Yes, it's starting out ugly. But I hope that we will develop norms around civilized discourse. Oh, yes, there will always be assholes and trolls. What we need is an expectation that it is destructive to civil discourse to encourage them. Yes, it might have seemed fun to watch the show of angry fights. It might seem fun to media to watch institutions like the Republican Party implode. But it soon becomes evident that this is no fun. A desire and demand for civil, intelligent, useful discourse will return; no society or market can live on misinformation and emotion alone. Or that is my hope. How long will this take? It could be years. It could be a generation. It could be, God help us, never."

Patrick Tucker, author of The Naked Future and technology editor at Defense One, said, "Today's negative online user environment is supported and furthered by two trends that are unlikely to last into the next decade: anonymity in posting and validation from self-identified subgroups. Increasingly, marketers' need to better identify users and authentication APIs (authentication through Facebook, for example) are challenging online anonymity. The passing of anonymity will also shift the cost-benefit analysis of writing or posting something to appeal to only a self-identified bully group rather than a broad spectrum of people."

John Curran, CEO for the American Registry for Internet Numbers (ARIN), said, "The failure to provide for any effective attribution or remedy for bad actors will result in increasing amounts of poor behavior (volatile speech, harassment, etc.) as well as an increase in actual crimes (hate speech, libel, theft) over the internet. While the benefit of unfettered internet to free speech and expression is quite high, its provision without any meaningful method of recourse when used for criminal acts deprives users of their basic human right of effective remedy."

Jason Hong, an associate professor at Carnegie Mellon University, wrote, "We've already seen the effects of trolls, harassers, and astroturfers in attacking and silencing others online, and there's very little on the horizon in terms of improving discourse. It's all too easy for bad actors to organize and flood message boards and social media with low-quality posts that drive people away. Or, to paraphrase Gresham's law, bad posts drive out the good."

Emily Shaw, a US civic technologies researcher for mySociety, an organization that creates websites for citizen empowerment, observed, "Since social networks—where news and discussion represent an aspect of that relational network—are the most likely future direction for public discourse, a million (self)-walled gardens are more likely to be the outcome than an increase in hostility, because that's what's more commercially profitable. Communication platforms are easy to create, but hard to maintain. 'User-focused' speech-curation tools are more available than previously. It is more possible than ever for groups to create and populate speech environments that represent the kind of experience they want, and that's what we're seeing currently. Bad social experiences are an annoying bug rather than a feature—and one that's proven difficult to resolve principally because of the tech platforms' white-male staffing skew. As these companies grow and diversify, some will prioritize combating online harassment, and others will eventually fold those practices into their work as well. However, inclusiveness is not likely to be a feature of future public discussions. Public discourse will ever more officially become mini-public discourse. As users/participants get more control over their discursive environments, the effects of confirmation bias will let them winnow into ever-more-homogeneous groups."

Amy Webb, futurist and CEO at the Future Today Institute, said, "Right now, many technology-focused companies are working on 'conversational computing,' and the goal is to create a seamless interface between humans and machines. If you have a young child, she can be expected to talk to––rather than type on––machines for the rest of her life. In the coming decade, you will have more and more conversations with operating systems, and especially with chatbots, which are programmed to listen to, learn from, and react to us. You will encounter bots first throughout social media, and during the next decade, they will become pervasive digital assistants helping you on many of the systems you use. Currently, there is no case law governing the free speech of a chatbot. During the 2016 election cycle, there were numerous examples of bots being used for political purposes. For example, there were thousands of bots created to mimic Latino/Latina voters supporting Donald Trump. If someone tweeted a disparaging remark about Trump and Latinos, bots that looked and sounded like members of the Latino community would target that person with tweets supporting Trump. Right now, many of the chatbots we interact with on social media and various websites aren't so smart. But with improvements in artificial intelligence and machine learning, that will change. Without a dramatic change in how training databases are built and how our bots are programmed, we will realize a decade from now that we inadvertently encoded structural racism, homophobia, sexism, and xenophobia into the bots helping to power our everyday lives. When chatbots start running amok––targeting individuals with hate speech––how will we define 'speech'? At the moment, our legal system isn't planning for a future in which we must consider the free speech infringements of bots."

Doc Searls, journalist, speaker, and director of Project VRM at Harvard University's Berkman Center for Internet and Society, wrote, "Harassment, trolling... these things thrive with distance, which favors the reptile brains in us all, making bad acting more possible and common. For example, think about how people can yell at other cars more easily than they can yell at a person standing next to them in line at a store. Or how easily Group A can typify and vilify Group B when the two don’t know or talk to each other. And, let’s face it, objectifying, vilifying, fearing, and fighting The Other has always been a problem for our species. I don’t doubt that the human diaspora, tens of thousands of years ago, was largely caused by tribes not getting along with other tribes and moving elsewhere. The online world, however, is fundamentally absent of distance. We are all a click apart, by design. This fact alone will gradually undermine bad acting. But it is still early. It may take more than a decade for the reality of being so close to each other to really sink in. The internet we share today was only born on 30 April 1995, when the last backbone that forbade commercial activity stood down. Since then we have barely begun to understand, much less civilize, this new place without space. For example, it’s easy to miss the simple fact that the Net itself is a far more social environment than any of the commercial 'social media' containers that operate on it. Speaking of those, the main issue right now is centralization. The Net by design isn’t merely decentralized, but distributed, on the model Paul Baran described way back in 1964:

"Without this diagram as a guide, we might never have had the Internet we enjoy today. Yet it is still easy to build centralized systems, which is why we have the giant controlling companies which in Europe they call GAFA (an acronym for Google, Amazon, Facebook, and Apple). I believe we are at the far end of this swing toward centralization on the Net. As individuals and distributed solutions to problems (e.g., blockchain) gain more power and usage, we will see many more distributed solutions to fundamental social and business issues, such as how we treat each other."

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, wrote, "After Snowden's revelations, and in the context of accelerating cybercrimes and cyberwars, it's clear that every layer of the technology stack and every node in our networked world is potentially vulnerable. Meanwhile, both the magnitude and frequency of exploits are accelerating. As a result, users will continue to modify their behaviors and internet usage, and designers of internet services, systems, and technologies will have to expend growing time and expense on personal and collective security."

Andrew Walls, managing vice president at Gartner, noted, "The quality of online discourse ebbs and flows. In certain environments, trollish behavior is more noticeable, while in others trollish behavior is largely absent. Anonymity fuels a lack of accountability for some online discourse, producing, at times, an online Lord of the Flies (LoF) situation. LoF situations have persisted in human social groups for eons and are not created by the availability of online fora. Despite the poor behavior of some, the world of social discourse in online environments is growing in depth, diversity, and levels of participation. Free speech is readily available, but the speaker may lack the protections afforded by a close social group. Admittedly, the speaker is also, at times, accorded more freedom online compared to the physical social environment in which they reside (e.g., a parochial town in the southeast of the US, a religious community in Utah). I am confident that free speech via technological mediation will expand and gain greater participation across all demographics. However, I have two concerns. Many of our most popular online fora (e.g., The FaceBook) are private, profit-making organizations that are only partly aligned with the interests of the participants. These corporations are able to filter and shape the participants' perception of the scope of online discourse and have exercised this ability. They might do this to enhance profitability or increase retention of users, or something else. The filter bubble, when imposed, reduces the efficacy and reach of free speech and distorts the perceptions of the reader (aka framing). The other concern rests with the user experience crafted by the software used by participants in online discourse. Software contains the embedded bias of the developer(s). As such, the very interface presented to the user optimizes certain choices and behaviors while deprecating others. 
This is usually invisible to the user, covertly shaping their action and participation. The most obvious version of this embedded bias is the woeful state of support for people with visual, intellectual, emotional, and physical challenges."

Karl M. van Meter, sociological researcher and director of the Bulletin of Methodological Sociology, Ecole Normale Supérieure de Paris, wrote, "'Freedom of speech' and 'privacy' are interpreted rather differently in the States and Europe. Europe does not consider 'freedom of speech' the right to broadcast any and all views or opinions, particularly concerning race, religion, human rights, sexuality, or denial of scientific findings or historical events, including genocide. In many cases, there are laws against such actions, and there are now some in the States, including some regarding hate speech. The limits between what is permissible and what is not will never be fixed and will always be debated on the internet and elsewhere. That debate, when conducted in 'good faith,' tends toward greater liberty of expression and it will very likely continue to do so, even though the debate will continue and there will be crises. The demand for online interaction will probably continue to move toward a distribution of different types of use that correspond more and more with specific social and educational milieux. Academic and educational use of the internet is probably in the process of stabilizing although increasing all the time. Social media use will probably not stabilize since it is an important and expanding economic market. There will probably continue to be new systems invented and new fashions of use that will wash over the world's social media users. This, of course, will also bring use in 'bad faith,' including criminal and even terrorist use, but that will always be part of this expanding market and the debate about Internet use. That also brings up the question of 'policing' the internet or keeping it under surveillance, and views on that question differ greatly between Europe and the States. Europeans defend their privacy and are against surveillance of everybody in an attempt to find a few criminals or terrorists. 
That should be the job of police services with human intelligence methods instead of technical eavesdropping methods. In the States, technical eavesdropping methods of surveillance of the entire population are applied in an attempt to reduce to a minimum the more expensive and time-consuming human intelligence methods, and there is less money to be made by high-tech corporations which have great influence over local and national governments."

Brad Templeton, chair for computing at Singularity University, commented, "Now that everybody knows about this problem, I expect active technological efforts to reduce the efforts of the trolls, and we should reach 'peak troll' before long. There are concerns for free speech. My hope is that pseudonymous reputation systems might protect privacy while doing this."

Peter Levine, Lincoln Filene professor and associate dean for research at Tisch College of Civic Life, Tufts University, said, "Lots of bad actors will continue to swarm online public discourse, but the designers and owners of Web properties will introduce tools to counter them. Not knowing who will prevail, I am predicting a stalemate."

Charlie Firestone, communications and society program executive director and vice president at The Aspen Institute, commented, "It will be more contentious in the coming four to five years because the world is continuing in that direction. But by about five years from now I think this will reverse itself. People will be fed up with the negativity and solutions will start to work. I don't know which solutions will come to be adopted, but a move toward people staying in circles that are civil is one possibility."

Mark Lemley, a professor of law at Stanford Law School, said, "Public discourse increasingly takes place on private platforms. Those platforms will increasingly realize that harassment and cyberbullying make their platforms less attractive to most users, and will take steps to restrict offensive speech."

David Bernstein, a retired marketing research director, said, "The tone of discourse in general has become more contentious. There seems to be little tolerance for diverse opinions and attitudes on many social issues. This has also crept into discussions about business, religion, and, of course, politics. I believe we may have to come up with a way to rant or express yourself online without others attacking or, G*d forbid, actually finding and harassing you in the offline world. Automatic filters will likely be seen as an infringement on our First Amendment rights. But the First Amendment only works if we accept all voices without drastic consequences for voicing your position. In my opinion, the harassment is more of an infringement on free speech than is something that filters out bad language, threats, etc. My fear is that enabling anonymity could give some a free pass to get more radical in their posts, rants, etc., than they might otherwise be."

Marc Rotenberg, executive director of the Electronic Privacy Information Center (EPIC), wrote, "It will be shaped more by negative activities. The regulation of online communications is a natural response to the identification of real problems, the maturing of the industry, and the increasing expertise of government regulators. This is hardly surprising."

Itir Akdogan, research communication director at Istanbul Bilgi University/TESEV, commented, "My perspective is from the developing world: Turkey. Gradually, those who are less-educated start being active in social media/digital commentary. As much as it sounds democratic at first, we then observe an increase in hate speech, harassment, and trolls. Statistically, the less-educated are the majority of the population. In this sense, I can say that the future of digital commentary will not be more democratic."

Jamais Cascio, distinguished fellow at the Institute for the Future, replied, "I don't expect a significant shift in the tone of online discourse over the next decade. Trolling, harassment, etc., will remain commonplace, but not be the overwhelming majority of discourse. We'll see repeated efforts to clamp down on bad online behavior through both tools and norms; some of these efforts will be (or seem) successful, even as new variations of digital malfeasance arise."

Lilly Irani, assistant professor at the University of California-San Diego, wrote, "Interactions online are symptoms of systems of race, gender, and class oppression offline. Mediated speech, whether on the internet or on television, offers protection and legitimacy to sentiments built over centuries. Technological mediation simply changes the consequences and forms. I'm not sure how it could get much worse than it is today, as real-names policies are only a Band Aid."

Timothy C. Mack, managing principal at AAI Foresight, said, "I foresee a continued dialogic tussle between more-open and more-closed, but I do not expect the growth of online interaction, including virtual and augmented reality, to slacken. Unless the example of Chinese strictures on Internet content and dynamics becomes the norm, the tendency to lean toward open expression will continue. Only the continued growth of what might be called digital bullying will dampen this valuable forum."

Daniel Franklin, associate professor of political science at Georgia State University, commented, "We are at the early stages of learning about how to use the internet. Over time, informal and formal rules of interaction will be adopted."

Steven Waldman, founder and CEO of LifePosts, said, "It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they were crackpots. Or to be a bit more nuanced about it (and to avoid disparaging all strongly held views as being crackpottery), strongly articulated, sharply delivered, and, yes, extreme opinions get far more exposure than they did in the olden days when gatekeepers suppressed those views. Why do I think this will persist? Because it works. Extreme views generate more reaction. Facebook’s algorithm and Twitter’s re-tweets all respond to reaction. Social media doesn’t favor extremism but it does favor reaction, or engagement as we sometimes call it. I would add that it may not necessarily be 'bad actors'—i.e., racists, misogynists, etc.—who win the day, but I do fear it will be the more strident. I suspect there will be ventures geared toward counter-programming against this, since many people are uncomfortable with it. But venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated. Very few media companies now have staff dedicated to guiding conversations online."

Robert Bell, co-founder of the Intelligent Community Forum, commented, "The human and institutional response to technology change always lags the pace of that change. The bad stuff makes the headlines, and we seldom hear about the slow but steady efforts to adapt and turn bad into good. It is entirely possible that our public dialogue will continue to coarsen as we have seen it do for the past few years but I believe we will instead gradually evolve a better understanding of the role these online tools play in our lives, the life of our community and of our nation. In the process, we will make more positive use of them. The caveat here is that it may not be obvious this is happening. The nature of instantaneous online communications is to vastly amplify that which attracts or threatens us, and a very small number of actors can make a very loud noise, despite the fact that they are less than 1% of the conversation. I would expect the noise to be loudest on the international and national level and to become increasingly drowned out by the better angels of our natures at the state, regional, and community level."

Serge Ravet, innovation director at Open Badge Passport, observed, "Everything (at least the most important) is contained in an article by Katharine Viner: https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth. Quote: 'Increasingly, what counts as a fact is merely a view that someone feels to be true—and technology has made it very easy for these 'facts' to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago). A dubious story about Cameron and a pig appears in a tabloid one morning, and by noon, it has flown around the world on social media and turned up in trusted news sources everywhere. This may seem like a small matter, but its consequences are enormous.' Another quote: 'Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want—which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs.'"

Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, noted, "The struggle we're facing is a societal issue that we have to address at all levels, and that the structure of social media platforms can exacerbate. Social media companies will need to address this, beyond community policing and algorithmic shaping of our newsfeeds. There are many ways to do this while avoiding censorship; for instance, better-individualized blocking tools and upvote/downvote measures can add nuance to discussions. I worry that if we don't address the root causes of our current public discourse, politicians and companies will engage in an increasing amount of censorship."

Michael Kleeman, senior fellow at the University of California-San Diego, wrote, "Historically, communities of practice and conversation had other, often physical, linkages that created norms of behavior. And actors would normally be identified, not anonymous. Increased anonymity coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities."

Matt Hamblen, senior editor at Computerworld, commented, "Online discourse will certainly become more nasty in the next decade, and traditional institutions and people working within those institutions will be under greater attack than now. The imagined privacy of working alone at a computer to type or speak comments provides an illusion for critics and trolls. Some critics will continue to say aloud or in print what they are thinking in ways that others can see. In the past, critics had not been heard by others because the technology wasn't available. Social media and other forms of discourse will include all kinds of actors that had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more. Free speech will reign free but will become babble and almost incomprehensible to many listeners. Many will be able to remain private if they know how to manipulate the technology, but many others will continue to express views with little regard to whether their privacy is secure or not. Privacy itself will have little meaning or value to average people."

Marina Gorbis, executive director at the Institute for the Future, said, "I expect that we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech. I expect that we will create bots that would promote beneficial connections and potentially insert context-specific data/facts/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms."
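The kind of algorithmic filter Gorbis describes can be sketched in miniature. The toy below scores a comment by the fraction of its tokens that match a hostility lexicon and flags it for human review rather than deleting it. Everything here is an illustrative assumption: real moderation systems use trained classifiers, and the word list, threshold, and scoring rule are placeholders, not a vetted method.

```python
# Toy comment filter: flag likely-hostile posts for review.
# BLOCKLIST and FLAG_THRESHOLD are assumed, tunable placeholders.

BLOCKLIST = {"idiot", "moron", "hate"}
FLAG_THRESHOLD = 0.2

def hostility_score(comment: str) -> float:
    """Fraction of tokens that match the blocklist."""
    tokens = [t.strip(".,!?").lower() for t in comment.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in BLOCKLIST)
    return hits / len(tokens)

def needs_review(comment: str) -> bool:
    """Route a comment to a human moderator if it scores too high."""
    return hostility_score(comment) >= FLAG_THRESHOLD

assert needs_review("You are an idiot and I hate this")      # flagged
assert not needs_review("Thanks for the thoughtful reply")   # passes
```

Even this trivial sketch exhibits the problem Gorbis flags: the values embedded in the filter are exactly the contents of `BLOCKLIST` and the choice of `FLAG_THRESHOLD`, and whoever sets them decides what gets weeded out.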

Irina Shklovski, associate professor at the IT University of Copenhagen, observed, "There is no one public discourse online, but there are myriad spaces where public discourse happens. These are controlled by different actors, they develop different norms of engagement, and they may or may not fall victim to trolling and significant negative interactions. There are also many different publics that engage in different sorts of discourse, and this will only increase in number and diversity over time. Perhaps the current threat of trolling and harassment is one reason for an increasing fragmentation and focusing of public discourse into areas and spaces that are kept 'safe' for certain types of discourse, managed and protected. What the effect of this sort of fragmentation will be is hard to predict."

Baratunde Thurston, a director's Fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, replied, "To quote everyone ever, things will get worse before they get better. We've built a system in which access and connectivity are easy, the cost of publishing is near zero, and accountability and consequences for bad action are difficult to impose or toothless when they do. Plus consider that more people are getting online everyday with no norm-setting for their behavior and the systems that prevail now reward attention-grabbing and extended time online. They reward emotional investment whether positive or negative. They reward conflict. So we'll see more bad before more good because the governing culture is weak and will remain so while the financial models backing these platforms remains largely ad-based and rapid/scaled user growth-centric."

Tse-Sung Wu, a project portfolio manager at Genentech, wrote, "There is a lot to be fearful of on the internet, and a lot to be hopeful for as well. As long as there are relatively small barriers to participation and low barriers to innovation the internet will serve as a reflection of society, both good and bad. On the one hand, you have the internet echo chamber, which allows for extreme political or social positions to gain hold. Online communities are quite different from actual, face-to-face communities. In the former, there is no need for moderation or listening to different points of view; if you don't like what you're reading, you can leave; there is no loyalty. In an actual community where one lives, one is more likely to compromise, more likely to see differing viewpoints. On the other hand, as we have been shown by the Black Lives Matter movement, an ugly side of society is being held in the light. This is an amazing development that would have never occurred without the proliferation of smartphones and video sharing. I think missing from all this is the role of the local newspaper editor: one who can curate the information people see and digest. No such online analogue as sustainably emerged. When it comes to privacy, we are raising an online, connected generation who has a very different sense of privacy than their parents."

Alexander Halavais, director, MA in social technologies at Arizona State University, said, "'Bad actors' have always been a part of online discourse, a product of the anonymity that networking provides. But particularly over the last five years, we have seen the growth of technologies of reputation, identity, and collaborative moderation. Newspapers that initially rejected comment streams because of their tendency toward toxicity now embrace them. YouTube, once a backwater of horrible commentary, has been tamed. While there are still spaces for nasty commentary and activities, they are becoming destinations that are sought out by interested participants rather than the default."

Norah Abokhodair, information privacy researcher at the University of Washington, commented, "There is a very clear trend that social media is already being shaped by the bad guys. Already automation (creating social bots on social media platforms) is amplifying the voices of the bad people most of the time. Terrorist organizations are able to recruit many young people through these platforms, and there are many more examples. The good/bad part is that companies are now working together with governments and with policy makers to try to control this trend; this might impact free speech. Privacy and anonymity are double-edged swords online because they can be very useful to people who are voicing their opinions under authoritarian regimes; however, the same techniques could be used by the wrong people and help them hide their terrible actions."

Cathy Davidson, founding director of the Futures Initiative at the Graduate Center of the City University of New York, wrote, "We're in a spy vs. spy internet world where the faster that hackers and trolls attack, the faster companies (Mozilla, thank you! plus for-profits) come up with ways to protect against them and then the hackers develop new strategies against those protections, and so it goes. I don't see that ending. I do, however, foresee some catastrophic thefts. We already have those and they have not been as public as they might be. I would not be surprised at more publicity in the future, as a form of cyber-terror. That's different from trolls, more geo-politically orchestrated to force a national or multinational response. That is terrifying if we do not have sound, smart, calm leadership."

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at Hewlett Packard Enterprise, said, "Privacy as we tend to think of nowadays is going to be further eroded, if only because of the ease with which one can collect data and identify people. Free speech, if construed as the freedom to say whatever one thinks, will continue to exist and even flourish, but the flip side will be a number of derogatory and ugly comments that will become more pervasive as time goes on."

Christine Maxwell, program manager of learning technologies at the University of Texas, Dallas, said, "Recently, referring to the House Benghazi Report, Wired magazine described the beauty and the tragedy of the internet age: 'As it becomes easier for anyone to build their own audience, it becomes harder for those audience members to separate fact from fiction from the gray area in between.' To make meaningful and actionable—contextualized—decisions today, individuals need an unbiased knowledge discovery platform to assess information objectively. Without this becoming widely available, coupled with the ability to learn how to ask better questions I fear that online communication will indeed become more shaped by negative activities."

Larry Gallagher, organizational insight analyst at Stanford University, commented, "Online communication mechanisms are still in flux, and the future is very difficult to predict. I expect that those who are stimulated by conflict and rancor will find sites that scratch that itch. Others will gravitate toward moderated forums where such behavior is discouraged or censored. So the answer to this question is that both sorts of sites will refine their techniques and outreach. I expect that larger sites will develop reputations among social networks for particular styles of discourse."

Stephen Downes, researcher at National Research Council, Canada, noted, "It's important to understand that our perception of public discourse is shaped by two major sources: first, our own experience of online public discourse, and second, media reports (sometimes also online) concerning the nature of public discourse. From both sources we have evidence that there is a lot of influence from bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust, as suggested in the question. But a great deal of public online discourse consists of what we and others don't see. For example, you don't see the discussions I have on my Facebook feed or on Twitter with interesting and informed participants. Indeed, I am even sometimes inclined to think of it as private discourse, because of course it doesn't take place on some troll-magnet like YouTube, but it is nonetheless public discourse. So a couple of things are happening. First, I'm biasing my own perception by taking a particular stance on the meaning of 'public' (as equivalent to 'mass'), and second, I'm receiving a confirmation bias because the main thing mass media says is that it is dominated by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust. I expect people, because of these biases, to project that there is more and more of this sort of behaviour, even though the rate remains steady. It's a lot like people's perception of crime rates when they are informed by mass media. And because media says (incorrectly) that this sort of behaviour is the norm, I expect a certain level of it to continue. Hence I project no real change."

Dariusz Jemielniak, professor of management at Kozminski University and Wikimedia Foundation trustee, observed, "There is a growing number of initiatives online combating harassment, and anonymity of online activities is decreasing."

Ed Lyell, professor of business and economics at Adams State University, said, "As of now people can post anonymously. This permits bad actors to act out like young children without accountability. Our 'dark' sides emerge in this situation and people often post from both extreme left and right positions just for pleasure or to inflict pain upon others. This could change if people were identified and thus became accountable for their statements, but I do not see this on the horizon."

Jordan LaBouff, assistant professor of psychology at the University of Maine, said, "As online identities become increasingly personally important and less anonymous (whether those identities are the same as one's real-world identity or entirely contained in an online environment) communication will become more socially constrained."

Erhardt Graeff, a researcher at the MIT Center for Civic Media, wrote, "Technology companies and regulatory bodies are becoming increasingly sensitive to the negative activities produced by bad actors online. Social media platforms are already being redesigned to be safer and to ensure that trolls don't hurt the bottom line. However, governments are also using these excuses to wage propaganda wars online and force companies and citizens to comply with draconian surveillance, censorship, and defamation laws. I expect this trend to continue over the next decade with parliaments feeling more comfortable regulating this space and companies trying to stay ahead of any negative press or legal responsibilities that might hamper their growth. I also expect activists to continue to use online communication for civic and political purposes, forcing companies to navigate the tension between good and bad actors who employ similar tactics on their platforms toward different ends."

Rory Lettvin, a clinical informaticist, noted, "The most significant issues will revolve around interoperability and security."

danah boyd, founder of Data & Society, commented, "There has always been negative behavior online, but as the internet has become more central in everyday life, the negativity has become more visible to more people. Furthermore, as the tools of communication have become powerful, so has the potential impact of cruelty. Mix this with downward mobility, growing inequality, and deep-seated societal frustration and you have a recipe for trouble."

Nigel Cameron, president and CEO of the Center for Policy on Emerging Technologies, observed, "Emerging awareness of the problems will lead platforms (and in some cases governments/regulatory authorities) to increase their supervision/control/editorial roles. So there will be less freedom, and while the goal will be sunnier social media my suspicion is that the bad actors of various kinds will find fresh ways to use these platforms to do what they do."

Margath Walker, an associate professor at the University of Louisville, predicted, "We will see an expansion of attempts at democratization and a proliferation of different uses of technology."

Adrian Hope-Bailie, standards officer at Ripple, wrote, "Automated curation will continue to improve such that online discourse can be more carefully controlled; however, the result may not all be positive, as online discourse becomes censored in a way that is more subtle and less obvious to casual observers or participants. Important voices may be shut down if their views contradict the rules defined by the moderators (which may not be limited to controlling abuse or hate speech) because managing a censored forum that appears to be open will become easier thanks to AI-assisted moderation."

Stuart Shulman, CEO of Texifter, said, "We keep expecting the medium to change human nature. With the possible exception of the printing press, I'm not sure we can make the case. Human activity continues to reflect positive and negative traits despite the changing media over time. Are people more likely to engage in unsocial or uncivil behavior online than, say, via letter to the editor of a newspaper? I doubt it."

Michael Rogers, author and futurist at Practical Futurist, observed, "It's hard for me to imagine that so powerful a tool as the Web can be hijacked permanently by a relatively small percentage of bad actors. It's still early days, and I expect that there will be a move toward firm identities—even legal identities issued by nations—for most users of the Web. There will as a result be public discussion forums in which it is impossible to be anonymous. There would still be anonymity available, just as there is in the real world today. But there would be online activities in which anonymity was not permitted. Clearly this could have negative free speech impacts in totalitarian countries but again, there would still be alternatives for anonymity."

Miles Fidelman, systems architect and policy analyst at the Protocol Technologies Group and president at the Center for Civic Networking, said, "I expect we will continue to see more of the same—both positive and negative. There's no reason to expect that people will become less opinionated or aggressive in communications styles. At the same time, we've always had threads of collaborative and productive dialog. This is all irrespective of medium. Mobs and committees have been with us forever. These days we have social media forums and we have focused email lists (work groups, support groups, etc.) Regarding new technologies: As a person in a career that goes back to the early ARPANET and running a non-profit dedicated to early 'civic tech,' it's become pretty clear that, while folks keep asking for new technologies, pretty much all work gets done with basic email lists. People reject structure. (This experience seems to be mirrored by Steven Clift's work on e-democracy—he is probably the most accomplished person working in the field.)"

Kjartan Ólafsson, head of the department of social sciences at the University of Akureyri, Iceland, said, "As human beings become more trained in using various ways of online communication I expect these to become less shaped by negative activities as norms and values are developed around this type of communication."

Julian Hopkins, lecturer in communication at Monash University, Malaysia, wrote, "In most countries there will be the development of online accounts that are formally linked to a personal identity—i.e., through personal identification documents and/or relevant biometrics. This will increase security for online transactions, tax returns, etc. These will enable the creation of online spaces where only publicly identifiable persons can participate, and will make them more accountable. These spaces may then see less negative activities. However, unless an opportunity for anonymous online interaction is also maintained as an alternative to the above, the above development will also mean less free speech online."

Marcus Foth, professor of interactive and visual design at Queensland University of Technology, commented, "Public discourse online will become less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust, because the majority of interactions will take place inside walled gardens. Social media has taken over pornography as the number-one online activity. Social media platforms hosted by corporations such as Facebook and Twitter use algorithms to filter, select, and curate content. With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet, what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarisation inside echo chambers. The media (the Fourth Estate) are failing us, and now the internet (the Fifth Estate), too. So what's left? Will there be a Sixth Estate? Considering urban/digital hybrid activism (Arab Spring, Gezi Park, Vinegar Movement, Stuttgart 21, Occupy, Umbrella, etc.), perhaps networked (not 'smart') cities will become the Sixth Estate, making up for the flaws of the other five estates. I have written about this here: https://theconversation.com/why-we-should-design-smart-cities-for-getting-lost-56492."

Reza Salim, director at Amader Gram ICTs for Development, Bangladesh, said, "I expect no major change will occur in Internet use because its potential productive use is increasing."

Isto Huvila, a professor at Uppsala University in Sweden, noted, "The emergence and current problems of negative online activities are not a digital problem but rather an issue related to the disintegration and reconfiguration of societies. Social media and digital commentary are likely to evolve simultaneously on a technological trajectory and a social/societal trajectory. The central question is whether they converge or not. Similarly, I don't believe that technological systems per se would be able to encourage more-inclusive online interactions, but there could be (and should be) a social demand for tools that help to support social configurations and inclusivity. We can let technology (or, essentially, technologists) take the lead, but that might not be the best alternative. There seems to be a major shift in attitudes toward free speech. It is essentially a Western value, and not everyone sees it as being as significant as we do. On the other hand, the traditional mechanisms of communication that have been used to support, convey, and limit what is included in the idea of free speech are not valid anymore, which makes it difficult to apply the idea of free speech in practice as it has been done so far. The problem is that we have to be explicit about what we want to see as the value and outcomes of speaking freely, what price we are willing to pay for it, and how freely we really want ourselves and others to speak, and to whom. Anonymity and privacy will undoubtedly be redefined as well. The current ideas of anonymity and privacy are rather recent concepts; there is nothing new in the fact that both will be renegotiated. As with free speech, anonymity and privacy need to be defined in relation to digital tools, and, again as with free speech, the question is: even if many things are possible with digital technologies, it is our choice what will be done in practice.
Currently the common Western ideology is very much focused on individuals and the right to do whatever technologies allow us to do; the problem is that this might not be a very good approach from the perspective of humankind as a whole. More-focused ideas of what we would like human society to be as a whole are much needed. The technology comes only after that."

Christopher Wilkinson, a retired senior European Union official, commented, "Online interaction is already vast, and the experience is quite mixed. Numbers will grow, but quality will not improve. There is no indication of a will to improve; I suspect that the advertising industry likes it that way. More generally, I find the financial models and valuations of internet companies to be completely incomprehensible, and potentially destabilising. Who on earth responds to the advertisements on a scale that would justify those valuations?"

James Hinton, truck driver and writer, observed, "I don't expect any significant change in the tone of Internet discourse for better or worse. The anonymity it provides people has been, and remains, both bane and boon. It allows people to present ideas and thoughts with relative impunity, which can enable previously powerless people with tools to force their voices to the fore (example, #BlackLivesMatter) while those very same tools enable abusers to operate with impunity. Unless something significant changes in terms of the architecture of the Internet and access to it, that anonymity will continue to be blessing and curse throughout the next decade, and beyond. For my part, I consider this to be to our benefit overall."

Dave Burstein, editor at fastnet.news, noted, "Barack Obama's comment that Omar Mateen was 'inspired by various extremist information that was disseminated over the internet' (quoted from the New York Times) echoes calls by Angela Merkel and David Cameron for more censorship, which is almost inevitable. Most dangerous is the emerging monopoly-like power of Facebook and Google to impose their own censorship norms, hundreds of thousands of times over. Ask any news vendor about the de facto power of Facebook. This is just one reason to reduce their market dominance by making sure others can take market share: interoperability, and users' ability to take their data (social graph) to new services."

David Cohn, an author, editor, and journalist, said, "Right now there is some disagreement on who the 'bad actors' are. I expect that will remain the case."

Anil Dash, technologist, wrote, "I expect the negative influences on social media to get worse, and the positive factors to get better. Networks will try to respond to prevent the worst abuses, but new sites and apps will pop up that repeat the same mistakes."

Fred Baker, fellow at Cisco Systems, commented, "Communication in any medium (the internet being but one example) reflects the people communicating. If those people use profane language, are misogynistic, judge people on irrelevant factors such as race, gender, creed, or other such factors in other parts of their lives, they will do so in any medium of communication, including the internet. If that is increasing in prevalence in one medium, I expect that it is or will be in any and every medium over time. The issue isn't the internet; it is the process of breakdown in the social fabric. Similarly, if a person doesn't find a place for such speech in written or verbal communications, they are unlikely to in typed communications, whether 140 characters in length or otherwise limited—or not limited. If we worry about the youth of our age 'going to the dogs,' are we so different from our ancestors? In Book III of the Odes, circa 20 BC, Horace wrote: 'Our sires' age was worse than our grandsires'. We, their sons, are more worthless than they; so in our turn we shall give the world a progeny yet more corrupt.' I think the human race is not doomed, not today any more than in Horace's day. But we have the opportunity to choose to lead people to more noble pursuits, and more noble discussion of them."

Mary Griffiths, associate professor in media at the University of Adelaide, South Australia, replied, "Freedom of speech is never absolute. It is always moderated by ethical considerations. Developing regulation and self-protection practices will likely combat the worst effects of negative activities, bad actors, and trolling in online communication. Those regularly using anonymity to damage or abuse are either ignored by the digitally literate sharing the same space or tracked and called out by other users. An example is the response of prominent women to the abuse they receive from trolls. They have republished comments and identified abusers, and this confident and courageous approach has gained traction. It would help if moderation expectations were more clearly spelt out in online news user content policies. Those finding an overall griping tone unpalatable tend to leave polluted spaces to find better air elsewhere—in the end that's an argument against the baser reasons (e.g., click-bait) why abusive comments are allowed to stand. Regulation and self-regulation follow problematic practices offline; why not online?"

Sam Punnett, research officer at TableRock Media, noted, "Online discourse is likely to be more shaped by bad actors, etc. Comment threads associated with publications that wish to remain vital will gravitate toward being increasingly moderated. This is predicated on the availability of moderator resources and may be complemented by using registration tactics such as 'real name' and verified-email policies as requirements for participation. Audiences for social media platforms may become more sophisticated as their novelty erodes and they are replaced by whatever the next ones are. All services are different, serving the varied intentions of the services themselves and their users. Some intentions for sharing will likely endure but may become compromised due to the evolving realization that they are monitored by employers, businesses, and the state. All services will transform themselves as their business models mature with the intentions of their owners and their relationships to the commercial applications of big data. Likely there will be less unrestrained speech, given over to self-censorship."

Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, Switzerland, said, "The time of the internet as a community of people wanting to help each other feels over. Trolls are already an important part of online discussions and avoiding them isn't easy. The reason it will probably get worse is that companies and governments are starting to realise that they can influence people's opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future."

Nicholas V. Macek, digital director at an Ohio-based political firm, wrote, "As internet access becomes more expansive due to the increasing affordability of smartphones, the socioeconomic gap between the world's poorest and richest members of society will unfortunately become evident in their interactions on the Web. Especially in the context of political and social movements, and civil rights, the lack of understanding between people of different backgrounds will become more pronounced."

Judith Donath of Harvard University's Berkman Center, author of The Social Machine: Designs for Living Online, wrote, "With the current practices and interfaces, yes, trolls and bots will dominate online public discourse. But that need not be the case: there are designs and technological advances that would help tremendously. We need systems that support pseudonymity: locally persistent identities. Persistence provides accountability: people are responsible for their words. Locality protects privacy: people can participate in discussions without concern that their government, employer, insurance company, marketers, etc., are listening in (so if they are, they cannot connect the pseudonymous discourse to the actual person). We should have digital portraits that succinctly depict a (possibly pseudonymous) person's history of interactions and reputation within a community. We need to be able to quickly see who is new, who is well-regarded, what role a person has played in past discussions. A few places do so now (e.g., StackExchange) but their basic charts are far from the goal: intuitive and expressive portrayals. 'Bad actors' and trolls (and spammers, harassers, etc.) have no place in most discussions—the tools we need for them are filters; we need to develop better algorithms for detecting destructive actions as defined by the local community. Beyond that, the more socially complex question is how to facilitate constructive discussions among people who disagree. Here, we need to rethink the structure of online discourse. The role of discussion host/moderator is poorly supported by current tech—and many discussions would proceed much better in a model other than the current linear free-for-all. Our face-to-face interactions have amazing subtlety—we can encourage or dissuade with slight changes in gaze, facial expression, etc.
We need to create tools for conversation hosts (think of your role when you post something on your own Facebook page that sparks controversy) that help them to gracefully steer conversations."

Tom Sommerville, agile coach, commented, "Negative activities online will be mitigated over the next decade by several factors, in particular: 1) Technological change, especially in the area of AI, will drive improvements in filtering and software agency that will reduce the ability of negative elements to become prominent in the infosphere. 2) Online aspects of people's lives will become more important, such that negative behaviour will have a growing impact on their lives in a broader context. This will create pressure on many to behave better. 3) As people trade elements of their privacy for benefits in the online world, online personas will be more transparently associated with the people behind them. That transparency will drive more civil discourse." 

Beth Corzo-Duchardt, assistant professor at Muhlenberg College, commented, "My historical research in American media history reveals that there is an ebb and flow to the tone of public discourse in any media form. Certainly 'the overall tone of griping, distrust, and disgust' characterized the muckraking journalism of the early 20th century. While trolls are uniquely empowered by the affordances of 21st century social media, their empowerment is also tied to the freedom and diffuseness of contemporary media platforms. As these platforms become increasingly monetized and controlled by corporate interests (which is an inevitable condition of any medium operating within capitalist economies), opportunities for trolling may lessen. In the next decade, we will see a rise and then fall of these negative activities. This is not about the media platforms per se, but rather the general tone of political discourse that we've seen ramping up since the presidential primaries. It will get worse with the upcoming election, then people will get sick of it and there'll be some lessening of these negative activities, but on balance, I don't see any major, long-lasting change."

Richard J. Perry, a respondent who shared no additional identifying details, noted, "I fully expect continued growth of social media/digital commentary with demand for more-inclusive online interactions. Coincident with this is the increased likelihood of loss of privacy due to nefarious as well as programmed access to personal data."

John Anderson, director of journalism and media studies at Brooklyn College, observed, "The continuing diminution of what Cass Sunstein once called 'general interest intermediaries' such as newspapers, network television, etc., means we have reached a point in our society where wildly different versions of 'reality' can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus—just more opportunities for tribalism and cliché as communication."

David Adams, vice president of product at a new startup, replied, "The Net was a friendlier place when it was still a social club for academics, but as soon as it became popular it became rancorous. Anonymity has always emboldened bad actors, but even in non-anonymous Facebook I'm shocked at the level of discourse. New forum-moderation technologies might help control bullying and trolling a little, but I see the central cultural battlefields that spill over into online life becoming bloodier before they become more peaceful."

Maria Pranzo, director of development at The Alpha Workshops, commented, "Human interaction is human interaction. There's a balance at play—one that we've seen through the advents of radio, television, and now the Internet. It feels more prevalent, possibly, because on the Internet the ugly is harder to avoid. But I think we'll always have trolls, and we'll always have the enlightened."

Jerry Michalski, founder at REX, observed, "I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven't played out at all yet. After all, it took us almost 70 years to mandate seatbelts. And we're not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade."

Diane Carr, a services coordinator at a US national health organization, replied, "People will write shorter, have less interest in in-depth information. World by bullet point."

Dean Landsman, digital strategist and executive director of PDEC (Personal Data Ecosystem Consortium), wrote, "With each new communications medium comes fear, loathing, abuse, misuse, and then a calming down. Gutenberg printed a bible, and shortly thereafter the printed word represented a danger, a system used for wrongdoing. Reactionaries were up in arms. And yet decades, even centuries past 1455, there are still those who mistrust or fear the use of the printed word. Book burnings, as we all recall, were the way children or the citizenry were, depending on one's view, either protected from the horrors contained in those books, or prevented from freedom of thought or of choice. The same was felt about newspapers, movies, radio, TV, cassettes, CDs, DVDs, and now social media. In each case, after the hubbub, what followed was primarily sanity and calm as these media became widespread and expanded in usage as part of everyday life. In each case there would be fringe or fundamentalist groups seizing the apparent opportunity in all of those media to spread their hatred or militant intent(s). Social media will evolve by virtue of what Facebook and Twitter have begun. Blogs were a forerunner to both (especially Twitter, some of whose founders began their career ascent by selling a blogging platform to Google) in that they (blogs) were means of personal expression. Vlogs (video blogs) followed, and then came YouTube. More systems will arise. Inclusive online interaction methods and platforms emerge by the day. Free speech is made possible and more freely distributed by technology. Capture (read: production) and distribution are burgeoning. The individual has more than a soapbox; he or she or they have video and streaming or 'store now and play later' with repositories in the cloud becoming cheaper by the moment. Anonymity is compromised by the metadata inserted by and accompanying these methodologies. Consider the metadata footprint of digital photography.
Now think of that as source or meta factors in all online actions. To anonymize these one must take careful and intricate steps. But intricate now is simplicity tomorrow as generations of technology turn over more rapidly with each rotation. Privacy is a more difficult matter, but it, too, can be dealt with by those wearing black or white hats."

Chris Showell, an independent health informatics researcher, said, "Before online social communication became widely available, there was much less interaction between 'tribes,' and the dispersion and relatively small size of tribes espousing extreme views made them irrelevant and invisible. Current and emerging social media tools allow small tribes to become more cohesive across social, cultural, and geographic boundaries, and the unsavoury opinions which some of them espouse can propagate rapidly into general awareness. Social media tools also allow a small group to create a disproportionately 'noisy' presence. This trend will continue, and develop further. Online interaction is still a new social phenomenon, and communities have yet to establish tacit norms for 'decent' and acceptable behaviour."

Alan Moore, software architect, responded, "The tone of the internet, especially social media, is driven by people being frustrated by our system of government and especially the corporatocracy that money in politics brings. Those without the money to pay for access will vent online. It is somewhat equivalent to the letters to the editor previously found in newspapers. The younger generation uses social media for much the same purpose, only decentralized and not filtered by editors and limited copy space. I don't see a change in the political situation, so much of the same will continue. As for privacy, given that scores of businesses depend on private information being bought and sold to the highest bidder (without any protections from the government), I see the vitriol and negativity directed at those corporations continuing unabated. We want to be free. We want to be free from manipulation and coercion, from incessant tracking of our every move. As technology lures us into its comforting ease and convenience, many, not all, will slowly lose whatever sense of privacy we have left. Still, we will fight on. Big clue: The system isn't working for the average American. Neither the Democrats nor Republicans have a clue, but they will be the first to be burned at the (virtual) stake when the shit hits the fan, mark my words."

Luis Lach, president of the Sociedad Mexicana de Computación en la Educación, A.C., noted, "This is not an easy question, because we don't have an accurate vision of what is going to happen in the technological evolution in the time to come. That's why I say there will be no major change in the current trend, especially regarding social matters (harassment, fraud, cyberbullying, etc.). This social phenomenon already exists and probably in the following years people will become more aware, but also new technological threats will rise. The more concerning question is regarding free speech, for unauthorized hacking by governments around the world, and it is possible that new regulations will arise. In general terms, governments don't like people expressing thoughts in the network. The temptation to invade people's privacy is a major challenge. Having said this, I also see a huge challenge for the education system due to the current big technological gaps. I expect this will change but at a very slow pace, increasing the risks for the population that is already connected."

Paula Hooper Mayhew, a professor of English and humanities at Fairleigh Dickinson University, commented, "My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. This response may work against support of free speech in the US."

Marc Brenman, managing partner at IDARE, said, "Electronic communications systems will become more ubiquitous. Privacy will continue to disappear. Coarseness will increase. Attempts at censorship will increase."

Leah Stokes, an assistant professor at the University of California-Santa Barbara, wrote, "I am hopeful that online discourse will become more regulated over time, and less anonymous. The New York Times comment section, where people have to register, can upvote, and can be flagged in a positive way by editors, leads to a mature, interesting dialogue. Without this semi-moderated atmosphere, many newspaper comments devolve into ad hominem attacks and conspiracy theories. The writers themselves do not appreciate this dynamic, and we know these attacks can be sexist and racist. So I am hopeful more editorial control will be exercised over time."

John B. Keller, director of elearning at the Metropolitan School District of Warren Township in Indiana, observed, "I'm not expecting to see major changes in the tone of online interactions over the next decade. One of the drivers of social media is the connection that people feel with each other. If being in social spaces were to become a downer or overall negative experience, it seems like people would not utilize these platforms. People who are currently energized by negativity are finding plenty of content and support in social platforms and this will also not change in the next decade. The content of online social interactions will become more searchable and represent more of a public thought record which may result in a contraction of open dialogue in social media spaces."

Mike Warot, a respondent who chose not to share additional identifying details, wrote, "Filter bubbles are a real thing engineered by the companies that host social media, I expect them to get very good at it."

Stephan G. Humer, head of the internet sociology department at Hochschule Fresenius Berlin, noted, "Social media and especially digital commentary will be used in a more strategic way. In my research I have seen that social media in general and digital commentary in a very special way reflects societal moods and thoughts, so influencing this discourse level will be much more interesting in the near future."

Klaus Æ. Mogensen, senior futurist at the Copenhagen Institute for Futures Studies, said, "When a problem becomes too great, workarounds will be developed. I expect that automated context analysis will weed out most trolls and harassers the way that spam filters weed out most spam today."

Demian Perry, mobile director at NPR (US National Public Radio), said, "Jack Dorsey said it best: 'Abuse is not civil discourse.' As more of our lives move online, people will naturally gravitate, as they do in the real world, to healthy, positive relationships. The success of online communities will hinge on the extent to which they are able to prevent the emergence of a hostile environment in the spaces under their stewardship. Algorithms will play an increasing role, but probably never fully replace the need for human curators in keeping our online communities civil."

Lee McKnight, associate professor of information studies at Syracuse University, wrote, "In the year that WWE-trained Donald Trump became presidential, it is hard to imagine bad actors, harassment, trolling, griping, distrust, and disgust—what we used to call flaming and then learned not to do online—becoming more plentiful and empowered worldwide than those so engaged are now."

Michael Whitaker, vice president of emerging solutions at ICF International, commented, "I expect online communication to be less shaped by negative activities, but not necessarily because of behavior changes by the broader community. With social community algorithms that tend to reinforce biases and ideological preferences (e.g., Facebook's algorithm that gives you News Feed stories based in part on past likes), we are likely headed toward more-insular online communities where you speak to and hear more from like-minded people. Obvious trolls will become easier to mute or ignore (either manually or by algorithm) within these communities. This future is not necessarily desirable for meaningful social discourse that crosses ideologies but it is a trend that may emerge that will make online communications less negative within the spheres in which most people interact."

Vin Crosbie, on the faculty at the S.I. Newhouse School of Public Communications at Syracuse University, said, "Due to their psychological motivations, the negative 'bad actors' are people who more intently and avidly adopt new technologies than do others. Plus, lobbyists, political action committees, and other special interests likewise have that motivation. Thus, the tone of public discourse online will become worse in the future."

Jesse Drew, a professor of cinema and digital media at the University of California-Davis, wrote, "The mass media encourages negative and hateful speech by shifting the bulk of their media coverage to hot-button click-bait. There is a small minority of negative individuals online who hide behind the anonymity of the Net."

M.E. Kabay, a professor of computer information systems at Norwich University, predicted, "As the global economy increases the number of people with modest disposable income, increasing numbers of people in developing countries around the world will use smartphones to access the internet (or the restricted portions of the Net permitted by increasingly terrified dictatorships). We will see increasing participation in social networks, including increasing numbers of comments by new users. The widespread availability of anonymity and pseudonymity will encourage social disinhibition; without real-world consequences for intemperate remarks and trolls (attempts to provoke angry responses), the amount of negativity will increase. The numbers of new users will overwhelm the resources dedicated to monitoring and purging (some) social networks of abusive language—even today, networks such as Facebook are experiencing difficulty in taking down abusers. Censorship is already a major problem around the globe, where frightened minorities continue to grasp at increasing levels of control to forestall revolution. Restriction of speech is growing even in privately-owned venues such as popular social media sites; the current approach is focused on removing abusive postings and blocking abusive members, but there is nothing to stop corporations from limiting speech in any way that fits their end-user license agreements (EULAs). EULAs are civil contracts; even where free speech is protected from government interference, there is no such limitation for private firms. Perhaps we will see the development of social media sites with stringent requirements for traceable identity. These systems may have to demand evidence of real-world identity and impose strong (e.g., multifactor) authentication of identity. Even so, malefactors will continue to elude even the best of the attempts to enforce consequences for bad behavior."

Uta Russmann, professor of communication, marketing, and sales at the FHWien University of Applied Sciences in Vienna, Austria, wrote, "In times of growing critical political and social challenges, people’s desire to talk about their problems, (negative) emotions, less-promising situations, etc., is constantly increasing. Online, people are given a chance to talk about 'who' they think is 'responsible' for everything in their lives with others and remain anonymous. I do not think this will have any implication on the right of free speech, but those who are attacked online will be more and more interested in who is behind these anonymous persons. And technical developments make it easier and easier to trace things back. Because of this, certain groups of people (such as radical/extremist groups) will be using more-inclusive online interaction solutions. But organizations (such as companies, NGOs, etc.) will use technological systems or solutions with low (or no) privacy settings for their public online communication to be able to follow up on any online public discourse."

Lauren Wagner, a respondent who shared no additional identifying details, replied, "Hypertargeted articles, like hypertargeted ads, will prove the most lucrative for online platforms. While there may be a utopian wish for technological systems that encourage more-inclusive online interactions, polarizing pieces will result in more engagement from users and be financially advantageous to online platforms. Consequently, online public discourse will be shaped by a more divisive tone and 'bad' actors. Writers are becoming more adept at authoring articles that engage their core readership online, whether it's a broad audience using general clickbait tactics or a more specific audience with, for example, an article supporting a specific political candidate. With the rise of Donald Trump we are seeing that this phenomenon is not only limited to writers. Subjects are learning how to persuade the media to ensure that they receive a certain type of online coverage, which tends to be divisive and inciting."

Axel Bruns, a professor at the Queensland University of Technology's Digital Media Research Centre, said, "Trolling has been a constant presence in computer-mediated communication since well before the Web was invented, and I don't expect this to change any time soon. There is an ongoing arms race between trolls and platform providers, and a real limit to the extent that trolling can be combatted using purely technological means without simultaneously undermining the open and free environment that makes many social media platforms so attractive to users. Just as important an approach to addressing trolling is social measures, including digital literacies education and peer pressure. Here, unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend, across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities. Unless this trend can be arrested and reversed, I don't expect the problem of trolling to be reduced, either. The problem here is not one of free speech, as such: it is instead that the particular interpretations of 'free speech' that are embedded into leading social media platforms, and exploited by the trolls, represent a specific maximalist, American understanding of free speech rather than the more balanced European model of free speech that attempts to balance the benefits and detriments to society that arise from free speech."

Karen Blackmore, a lecturer in IT at the University of Newcastle, wrote, "Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed, and community members who view this dialogue as their news source, gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews."

Naomi Baron, a professor of linguistics at American University, commented, "The Internet is a magnifier of face-to-face social practices. As the 2016 US presidential election process demonstrates, conventional rules of civility and personal respect are increasingly ignored. Online tools such as Twitter often seed such negative behavior, but in-person and televised events are proving equally caustic."

Erik Johnston, an associate professor and director of the Center for Policy Informatics at Arizona State University, observed, "Simply, it will be an arms race of design between new technologies and the way they are exploited and used. We wrote a paper called crowdsourcing civility that talks about how once different threats to a community are identified, there are a wide variety of solutions for addressing these concerns."

Aaron Chia Yuan Hung, an assistant professor of educational technology at Adelphi University, replied, "Neil Postman predicted in the 1990s that the internet would lead to more balkanization of groups, and we have been seeing this more and more. For example, people who gravitate toward online communities that favor their social and political views seem to overestimate the popularity of their views. Blogs and news aggregates that lean left or right become particularly influential in political seasons, offering skewed perspectives. Down the line, people either continue believing that their views are shared by the majority, or they eventually come to realize that their views represent a narrow group and they begin to take that into account. However, as an optimist, I think this all evens out in some way in the end. At the moment, although the internet has been around for quite a while, we are still experiencing new ways of communicating, and people have yet to develop the critical lens with which to analyze online communication. But online literacy is likely to become increasingly important and people may start questioning one another in more critical ways in the future. Having spent more time than I should in online communities like Reddit, it seems to me that that is already gradually happening, when people begin to ask for the source of information, or begin looking at the broader agendas of particular news and blogging sites. Things might get worse before they get better, however."

Peter Morville, president of Semantic Studios, said, "The nature of public discourse online is in a state of persistent disequilibrium (see Out of Control by Kevin Kelly), so I expect the pendulum to swing back and forth between better and worse."

Anders Fagerjord, an associate professor at the University of Oslo, wrote, "I see three reasons why negative activities may wane: First, people do increasingly learn how to use and behave in online media. Second, young people tend to be more private in what they share and do not go public with everything any more. Third, Facebook and many other actors are taking away the possibility of acting anonymously. Being held responsible for one's statements helps foster a somber tone."

Avery Holton, an assistant professor at the University of Utah, commented, "We have seen the struggles Twitter has faced recently with free speech. As more platforms open up to innovative forms of sharing and communicating, they will have to consider regulations that help police those who intend to hurt or damage individuals and networks. There’s a delicate balance to be reached between offering safe spaces for free speech and safe spaces that protect individuals against inciting, hateful speech. The last few years have given us communication back channels such as Slack and Mattermost as well as reinvigorated social media spaces such as Snapchat that allow users to selectively engage. The ability to make public or make private content will remain an important ingredient in online discourse where trust remains king."

David Lankes, professor and director at the University of South Carolina's School of Library and Information Science, wrote, "I see the discourse on the Net evolving into a competition between trolls, advocates of free speech, and increased automation seeking to filter speech on public sites. We are already seeing the efforts of large search firms using natural language processing and advanced machine learning to create chatbots and filtering software to identify extremism and harassment. The complexity of this software will continue to increase in sophistication and effectiveness; however, it is ultimately competing against nuances of interpretation and attempts to be heard by bad actors."

Jessica Vitak, an assistant professor at the University of Maryland, commented, "Trolling online has existed as long as bulletin board services have. The difference in 2016 is that social media's affordances, including increased visibility and persistence of content, amplify the volume of negative commentary. As more people get internet access—and especially smartphones, which allow people to connect 24/7—there will be increased opportunities for 'bad behavior'; at the same time, online harassment, cyberbullying and related activities are a major issue that social media platforms have begun to seriously address, and I believe/hope that over the next decade we'll have found more effective ways of reducing harmful speech and mitigating the negative effects of trolling and related behaviors."

Marc Smith, a sociologist at the Social Media Research Foundation, observed, "While our organization does not endorse enforced registration for all content creation we predict that anonymous content authorship and network distribution will become a crime. We predict that all content will need to be associated with a 'licensed' and credentialed legal entity. In practice, we are not very far from this today."

Peter Brantley, director of online strategy at the University of California-Davis, replied, "I expect that there will be more technologically mediated tools to condition parameters of community participation. There is a great interest in helping to create 'safe' or self-regulating communities through the development of metrics of mutual ratification. However at the same time, there will be an enlargement in the opportunities and modes for engagement, through annotation or development of new forums, and these will be characterized by the same range of human discourse as we see now. Ultimately some additional legal measures may be required to preserve free speech and regulate socially acceptable online behavior, defining minimally what constitutes endangerment, harassment, or invasion of privacy."

Trevor Hughes, CEO at the International Association of Privacy Professionals, wrote, "I expect that negative activities will always be a problem. Anonymity creates a lack of accountability for online behavior. However, controlling forces will also continue to develop. Social norms, regulations, better monitoring by service providers, all will play a role in balancing the rise of negative activities."

Robert Matney, COO at Polycot Associates, wrote, "Reputation systems will evolve to diminish the negative impact that bad actors have on online public discourse, and will become as broadly and silently available as effective spam-filtering systems have become over the last decade."
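Matney's spam-filter analogy maps naturally onto a karma-style score that is computed silently and gates visibility. The sketch below is purely illustrative; all names, weights, and thresholds are hypothetical, not drawn from any real platform's design:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    upvotes: int = 0
    flags: int = 0

    @property
    def reputation(self) -> int:
        # Community flags weigh more heavily than upvotes,
        # loosely analogous to spam-filter heuristics.
        return self.upvotes - 3 * self.flags

def visible_comments(comments, min_reputation=0):
    """Silently hide comments whose authors fall below the reputation gate."""
    return [(user, text) for user, text in comments
            if user.reputation >= min_reputation]

alice = User("alice", upvotes=10, flags=1)  # reputation 10 - 3*1 = 7
troll = User("troll", upvotes=2, flags=4)   # reputation 2 - 3*4 = -10

feed = [(alice, "Interesting point."), (troll, "You are all idiots.")]
shown = visible_comments(feed)  # only alice's comment passes the gate
```

The key property Matney predicts is the "silent" part: readers see only the filtered feed, just as email users rarely see their spam folder.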

John Markoff, senior writer at the New York Times, commented, "There is growing evidence that the Net is a polarizing force in the world. I don't claim to completely understand the dynamic, but my surmise is that it is actually building more walls than it is tearing down."

Chris Kutarna, a fellow at the Oxford Martin School and author of Age of Discovery, wrote, "Part of the context we need to understand is that unpleasant shocks are becoming more frequent and more severe in their effect. This is a consequence of rising concentrations and complexity within society, and within social and natural systems. Our global entanglement makes us more vulnerable, while also making it harder to see cause and effect and assign accountability and responsibility for the injuries we suffer. Anger and frustration are a predictable consequence, and I expect public discourse online to reflect it."

Rebecca MacKinnon, director of Ranking Digital Rights at New America, wrote, "My answer [that the negative may grow] assumes current trend lines continue without some type of innovation or event that I cannot foresee. I'm very concerned about the future of free speech given current trends. The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I'm increasingly inclined to think that the solutions, if they ever come about, will be human/social/political/cultural and not technical."

Paul Lehto, author, commented, "While the Internet powerfully facilitates communication, it doesn't facilitate all types of communication equally well. We all know that certain kinds of conversations should only take place (if we all wish successful outcomes) in person. Examples include mediation of disputes without an intermediary, sensitive conversations between friends, and finding common ground with political opponents. In examples such as these, the internet helps political allies find each other, and helps amplify disagreements, but does not facilitate the more constructive forms of discourse for many subtle but powerful reasons. Because the internet is now such a prominent form of communication, I conclude that the quality of communication will erode in the next decade."

Kate Crawford, a well-known internet researcher studying how people engage with networked technologies, said, "Distrust and trolling are happening at the highest levels of political debate, and the lowest. The Overton Window has been widened considerably by the 2016 US presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse."

Amy Zalman, principal owner at the Strategic Narrative Institute and professor at Georgetown University, replied, "In the next decade, we will see the contest over the nature of public digital space continue—thus it is not so much that the tone will not change, as that bad actors and griping will persist, and questions to address that persistence will also intensify. Can this space be legislated? Can new norms be introduced and spread? Can public service campaigns be effective? Can we quantify the business and efficiency costs of bad behavior? These may be the kinds of questions that those seeking to refine our public discourse in this new space may address. Regarding anonymity and privacy: our theoretical grasp of the relationship between public and private selves and spheres seems to be radically behind current realities, which have been foreseeably unfolding since at least the late 1990s. The lack of anonymity and privacy will persist, driving a new conception of the self. I might add that this disappearance of boundaries between public and private self may be mimicked in our understanding, within the next decade or so, of public and private market spheres, and public and private goods. We will get to a phase of online maturity where discussions are counter-balanced by online references to ensure the validity of the information. It would be an uphill task, but after eliminating the oversaturated info-garbage, we will come to a realization that a civilized online discourse is the way to move forward. Haters will be haters, and there will be zealots selling fear and hate to ensure their political stature, but all this will end when people realize the negative impact that comes with it."

Randy Albelda, a professor of economics at the University of Massachusetts, Boston, said, "Inequality will play out badly for online interactions. The 'haves' will not need it for their own communications and interactions but will have more power/resources to control the venues, messages, and even research on how data collected from the internet is used (and then thrown back to us in the form of ads, etc.). The 'have-nots', especially those on the bottom rungs without much mobility, will be angrier and angrier (let's face it, neither Trump nor Clinton will provide short-run or long-term policies toward more equality, making people even more politically disaffected). Anger gets translated into trolling and other really bad behavior generally, but especially online. In general, there is a tendency for the companies with the largest internet/social media interfaces (Facebook, Google, Twitter, etc.) to want to make more and more money. They will use the internet to sell more things. This shapes the technology and how we use it. While there is lots of 'free choice' in what we can buy, this does not contribute to the expansion of democratic practices."

Sunil Paul, entrepreneur, investor, and activist at Spring Ventures, wrote, "The general trend of media, advertising, and communication on the internet is to create views of the world that conform to one's existing views. Compare early USENET to discussion fora, to Facebook, to Snapchat. Each brings the world of influential people closer and closer to your existing social circle. There are countervailing pressures, but the world of mass media is dead and buried. We are now cooperatively building our own echo chambers with the help of machine learning."

Paul Edwards, professor of information and history at the University of Michigan, commented, "Social media will continue to generate increasingly contentious, angry, mob-like behavior. The phenomenon that underlies this behavior has been consistently observed since the early days of email, so there is no reason to think that some new technique or technology will change that. Mediated interaction tends to disinhibit people's expression of strong opinions, use of inappropriate language, and so on. It also makes it easier to misunderstand others' tone. Emoticons have at least given a means of indicating the intended tone. Fact-checking sites have also helped to control the spread of rumors, but not very much. The very rapid interaction cycle on social media causes it to be governed by ‘fast’ thinking (see Daniel Kahneman's Thinking Fast and Slow), which is intuitive, reactive, and often emotionally based. For this reason, social media discourage long-form arguments and long, complex exchanges of nuanced views. Since ‘free speech’ normally refers to the ability to speak without government sanctions or interference, the question posed above doesn't make much sense to me in this context. The government isn't going to regulate social media content, though some providers (perhaps in concert with vigilante users and user groups such as Anonymous) may shut down some of the most extreme language and illegal activities. I think you are not asking about ‘free speech,’ but instead about whether people will feel more or less free to express their opinions on social media. The ability of trolls to harass and hurt other users online will only increase in the near future, but the counter-current is that as social media users become more experienced, they learn to ignore or block trolls."

Arzak Khan, director at the Internet Policy Observatory Pakistan, observed, "The challenge of the next decade is balancing free speech and anonymity. The spread and rise of social media has, for the first time, empowered populations, especially in the Global South, to exercise their freedom of expression and opinion, while governments and regimes are still coming to terms with this revolution, the most massive since the invention of the wheel. Digital commentary is on the rise, and people express their opinions without realizing they can be tracked by mass-surveillance programs engineered by security agencies in collaboration with the powerful corporations ruling the internet sphere. As more and more people come to terms with the mass-surveillance phenomenon of the internet, they will look for tools that allow them to be anonymous online and embolden their interactions."

John Bell, software developer, data artist, and teacher at Dartmouth College, wrote, "I expect that there will be increasing demand for social networks that have more algorithmic separation of opinions. Rather than reputation- or karma-based systems that try to improve the behavior of all participants, software will respond to trolls by separating competing camps and enforcing filter bubbles. Over time, networks that take a more active hand in managing content (by banning trolls or applying community standards) will be abandoned by communities that feel repressed and replaced with networks that explicitly favor their points of view. This will mirror the self-selection we've seen among news viewers in the US who favor Fox News vs. other sources, etc."

Tiffany Shlain, filmmaker and founder of The Webby Awards, said, "I expect that social media and digital commentary will evolve in the coming decade away from harassment and trolls and toward more enlightened discussions. As we connect our identity more to what we say, there will be more accountability. Since it is easier to say harsh words anonymously, the continued direction of transparency will lead to more civil discourse."

Scott A. Hale, senior data scientist at the Oxford Internet Institute, wrote, "As social media and other interaction technologies become more integrated into our daily lives, I expect public discourse about these technologies to increase in general. The challenges will be more salient to more people, and the cost of non-participation will increase, driving a need to discuss and address these challenges. A balance must be found between protecting free speech and protecting privacy, preventing harassment, and other issues. These are complicated by the fact that most platforms are operated by private companies and do not interconnect with one another. I very much hope that standards-based cross-platform protocols are developed and used in the future and that the enforcement of norms and laws moves from private companies to governments. While many companies might desire the latter, they are likely against the former."

David Banks, co-editor of Cyborgology, commented, "Without deliberate intervention, something technology companies are very reluctant to undertake, harassment will continue because the same structural oppression that existed prior to its popularization still exists today and has survived digital mediation. Structural oppression must be actively antagonized; otherwise it will reproduce itself. The tendency to favor privacy over the safety of communities will also make it easier for people to harass over digital networks."

Stowe Boyd, chief researcher at Gigaom, observed, "I anticipate that AIs will be developed that will rapidly decrease the impact of trolls. Free speech will remain possible, although AI filtering will make a major dent in how views are expressed, and hate speech will be blocked."

Galen Hunt, a research manager at Microsoft Research NExT, replied, "As language-processing technology develops, technology will help us identify and remove bad actors, harassment, and trolls from accredited public discourse."

David Brin, author of The Transparent Society and a leader at the University of California-San Diego's Arthur C. Clarke Center for Human Imagination, said, "Some company will get rich by offering registered pseudonyms, so that individuals may wander the Web ‘anonymously’ and yet vouched-for and accountable for bad behavior. When this happens, almost all legitimate sites will ban the un-vouched anonymous."

Frank Pasquale, professor of law at the University of Maryland and author of Black Box Society, commented, "The major Internet platforms are driven by a profit motive. Very often, hate, anxiety, and anger drive participation with the platform. Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases."

Edward Friedman, emeritus professor of technology management at the Stevens Institute of Technology, observed, "During the next decade society will mature in its use of online discourse. It will be less of a novelty and will become better incorporated into daily affairs. Also, a wider age group will be engaged, thus contributing to more-thoughtful use."

David Golumbia, associate professor of digital studies at Virginia Commonwealth University, said, "I hope that the increased awareness of these problems will lead us to develop better means of dealing with them. However, it's a stretch to say that I believe this will happen—I can see many reasons to believe things will get worse, but others to think they will stay the same or get better."

Jon Hudson, a futurist and principal engineer, wrote, "If you want a social media property to be loved by its users, you must reduce the negative content. You can't remove it. It's human nature. However, you can create a system where positive actions and constructive criticism become the norm and trolls become like unicorns."

Steven Polunsky, www.spin-salad.com, observed, "I expect we will see a continuation in the trend of sites muting or blocking aggressive or abusive commenters, and then limiting comments that are counter to the site's preferred worldview to commenters who normally post pro-comments and selected outside commenters. Frequent commenters with counter-arguments will be muted. Technology will allow these actions to happen automatically and will be as easy as checking a box in the website interface. Note that this is not a limitation on free speech, as all sites except government sites have authority to regulate or eliminate comments (and government sites do as well, if rules and consequences are clearly spelled out; transgressing comments can be routed to a page away from the main discussion page, which satisfies freedom of speech without discouraging honest discourse)."

Justin Reich, executive director at the MIT Teaching Systems Lab, said, "Human beings will continue to be terrible to one another online, but they will also be really wonderful to each other online as well. The attention that goes to acts of hatefulness and cruelty online overshadows the many ways that strangers answer each other's questions, tutor one another, and respectfully disagree. I'm currently conducting a study about engagement across difference in MOOCs, where we have evidence that anonymous students in voluntary online courses can have effective civil disagreements across political and ideological differences. (See an early paper here: https://scholar.princeton.edu/sites/default/files/bstewart/files/civicmooc.pdf) I'm quite encouraged by the work that Jeffrey Lin has done at Riot Games to create sociotechnical systems that reward kindness, civility, and cooperation over disrespect and cruelty. There are smart people out there trying to engineer a more civil internet, and in various spaces, they will be very successful."

Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, commented, "The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility. As we saw in Forbidden Planet, the power of the id is difficult, if not impossible, to subdue. Technologies of online discourse will continue to battle with the id, giving us, most recently, Yik Yak (id freeing) and comment control systems like Disqus (id controlling). Like civility in all contexts, controlling the id is more social and personal in the end. Technologies will nonetheless attempt to augment civility—and to circumvent it."

Eduardo Villanueva-Mansilla, associate communications professor at Pontificia Universidad Católica del Perú, said, "While 'liberated areas' of expression on the internet will continue to thrive, other areas, mostly developed by global brands and expanding through most of the world, will push toward standards of behavior, allowing for a detente of 'polite' and 'non-polite' services that will balance themselves, as most of the users will be the same subjects."

Jean Russell of Thrivable Futures wrote, "We are learning better how, as a collective, we can modify protocols for contributing content online that reduce the ability of trolls to act without repercussions. It is still pretty ugly, but we are at the turning point. First, conversations can have better containers that filter for real people who consistently act with decency. Second, software is getting better and more nuanced in sentiment analysis, making it easier for software to augment our filtering out of trolls. Third, we are at peak identity crisis and a new wave of people want to cross the gap in dialogue to connect with others before the consequences of being tribal get worse (Brexit, Trump, etc.)."
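Russell's point about software "getting better and more nuanced in sentiment analysis" can be illustrated with a toy lexicon-based scorer. A production system would use a trained model, but the pipeline shape is the same: score each comment, then hold clearly hostile ones for human review. All word lists and thresholds below are made-up placeholders:

```python
# Hypothetical word lists; a real system would use a trained classifier.
NEGATIVE = {"idiot", "stupid", "hate", "trash"}
POSITIVE = {"thanks", "great", "helpful", "agree"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words, ignoring punctuation."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def moderate(comments, threshold=-1):
    """Post neutral/positive comments; hold hostile ones for review."""
    posted, held = [], []
    for comment in comments:
        (held if sentiment_score(comment) <= threshold else posted).append(comment)
    return posted, held

posted, held = moderate([
    "thanks, that was a helpful answer",
    "you idiot, this is stupid trash",
])
# The first comment scores +2 and is posted;
# the second scores -3 and is held for review.
```

Routing borderline comments to human review, rather than deleting them outright, matches Russell's framing of software that "augments" rather than replaces our filtering.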

Scott Fahlman, computer science and artificial intelligence research professor at Carnegie Mellon University, replied, "Human nature will not change, but I anticipate some technological development that will give people better tools to filter out what they don't like. Anonymity must survive as an option, but people will be able to filter out or flag anonymous or pseudonymous comments."

Thomas Claburn, editor-at-large at Information Week, wrote, "I expect more pressure on service providers to police their services more actively. And I suspect people will be more careful about what they say because it will be harder to hide behind pseudonyms. Also, I anticipate more attention will be paid to application and website design that discourages or mitigates negative interaction."

Christopher Mondini, an executive with a major global Internet governance organization, said, "Taking a global perspective, the billion Internet users who will be newly connected in the next four years will have the same initial surge of productive and valuable interactions experienced by more mature online markets a dozen years ago. This will counterbalance the growing pockets of self-important and isolated griping and intolerance that we see in these mature markets. I believe there is no impetus whatsoever for greater inclusiveness and subscribe to Eli Pariser's 'filter bubble' theory of online interaction."

Cornelius Puschmann, Hans-Bredow-Institute for Media Research, Hamburg, observed, "It is difficult to assess objectively how good or bad the climate is. It seems likely that more will be done to counter hateful speech, with (potentially negative) implications for freedom of expression."

Valerie Bock, of VCB Consulting, commented, "I would love to think we will mature as commentators on the Internet, but we have 35 years of experience with online discourse at this point, and the trade-offs have become pretty clear. Spaces where people must post under their real names, and where they interact with people with whom they have multiple bonds, regularly have a higher quality of discourse than places where it is possible to post anonymously and without personal consequence. This is unlikely to change. In response to this reality, we'll probably see some consolidation as it becomes easier to shape commercial interactive spaces to the desired audience. There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsor's strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes. These will tend to be more reliably civil, again, because people will be known to one another and will face consequences for behavior outside of group norms."

Jannick B Pedersen, futurist and impact investor at DareDisrupt, replied, "A self-correcting mechanism will arise—peers educating peers—just as happened with traditional media."

Marcel Bullinga, trendwatcher and keynote speaker @futurecheck, wrote, "In comparison to the 2012 Pew survey, I am much more pessimistic. It seems we have developed different attitudes for online and offline behaviour. Online we express hate and disgust we would never express offline, face-to-face. It seems that social control is lacking online. We do not confront our neighbours/children/friends with antisocial behaviour. The problem is not anonymous bullying: many bullies have faces and are shameless, and they have communities that encourage bullying. And government subsidies stimulate them—the most frightening aspect of all. We will see the rise of the social robots, technological tools that can help us act as polite, decent social beings (like the REthink app). But more than that we need to go back to teaching and experiencing morals in business and education: back to behaving socially."

John Lazzaro, a retired EECS research specialist at the University of California-Berkeley, wrote, "I like to use the real-world example of waiting in a line to order drinks at a coffee shop. On most days, there's no drama in the social experience—everyone is polite to everyone else, and the line moves swiftly. If anonymity and privacy are valued by a customer, with respect to other customers or the folks behind the counter, it is easy enough to maintain using the standard social tools (and cash payment). And for the days where life isn't so smooth, there's an escalating series of behaviors that 'polite' customers learn, and that employees are trained to use, to keep the drama from spiraling into a call to 911 (one hopes). Online today, we can find small venues where this coffee shop approach has been translated to online comments, and which, on the whole, work well. The example I am most familiar with is the Hacker News comment section, which on most days, for most topics, is a very civil place. The challenge is to scale systems of this nature up from small, shared-culture sites like Hacker News, to sites in the top 50. Fortunately, we live in an era where scaling is considered the core competency—and this is the reason why I'm optimistic that, in 10 years, the YouTube comment section will be as civil as Hacker News comments are today in 2016."

Terry Langendoen, an expert at the US National Science Foundation, said, "Human nature overall is not likely to change much, so the ratio of positive vs. negative communications and of bad actors vs. everyone else won't change much. Changes in these ratios will be driven by the changes to the communication channels and how they are managed. The question presupposes that we have agreed-upon metrics for determining these ratios, and we don't have any. Research in computational sentiment and opinion analysis is however developing more and more sophisticated analytic measures and these may in the near future be able to provide a rough measure of the first ratio, at least for social media text communications in a few of the world's major languages. Management, including detection and suppression, of the activities of bad actors is a form of defensive warfare on the part of those we may call 'good actors,' so we can comfortably predict that the conflict will take the form of an arms race—in fact it already has, and while there is no counterpart of a nuclear deterrent, the means for controlling bad behavior in social media is now and will continue to be widely distributed, so that those who may be harmed by such behavior will increasingly have access to resources for defending themselves."

D. Yvette Wohn, assistant professor at the New Jersey Institute of Technology, commented, "Bad actors and harassment will not go away, and some services may lose users for trying to aggressively eliminate these forces, but certain technologies that target underage users will be able to create 'safe' places where negativity will be constrained to constructive criticism. These safe places will arise through a joint effort between community policing and system designs that encourage supportive behavior. Mainstream social media services will not be initiating this; rather, it will arise from youth with coding and social skills who self-identify this need for a safe space."

Frank Elavsky, data and policy analyst at Acumen, LLC, replied, "Theory: Internet + confirmation bias + growing economic/political/social dissent = more negative activities shaping our world. I am not sure that policing of internet space is a thing that we will see in the next decade. I believe that the freedom of the internet may continue to work in favor of the justice systems in place, since allowing people to openly threaten others or take part in illegal activity makes them easier to catch. Security breaches like Ashley Madison and the FBI's fake child pornography site are examples of ways I believe the internet will continue to find justice, despite being a fairly unregulated frontier. Of course, perhaps the future of corporate fascism will come about through these means, and we won't quite see it coming (here's looking at you, Google/Amazon)."

Dan Ryan, a professor of sociology at Mills College, wrote, "'Bad actor' negativity is both self-sustaining and self-limiting. In some places we'll see explosions of negativity that feeds on negativity, but in the long run people will tend to flee from rather than flock to the venues where this happens. Thus, my optimism."

David Karpf, an associate professor of media and public affairs at George Washington University, commented, "Public discourse online will continue to get worse before it gets better. It is a lot easier to ruin or pollute an online conversation than it is to build a safe and productive space online. And social media companies like Facebook and Twitter face a real challenge in developing policies that are complex enough to discourage harassment while still fostering active speech online. Thus far, the mass public has demonstrated a distaste for corporate violations of privacy and online harassment, but we have not seen an active demand for products or policies that take privacy or inclusive online interactions seriously. So things will keep getting worse online. And then someday, I hope, they will get better."

Rob McKenna, a librarian and lecturer in Dublin, Ireland, noted, "Online discourse will become less influenced by trolling/negativity for a few reasons: anonymity will become less possible; unmoderated spaces will become less common for media outlets due to the experience of trolling and the descent into negativity; education will have caught up with the necessity of promoting civil space and discourse; tort law will most likely make wealthy outlets liable for the despicable behaviour of the trolls they enable by not moderating sensibly; law enforcement will stop giving a free ride to harassers—particularly misogynistic ones. None of these reasons bode particularly well for anonymity and privacy. Anonymity will be disproportionately blamed for trolling whereas it is, in fact, only a minor issue."

Shawn Otto, organizational executive, speaker, and writer with ScienceDebate.org, commented, "This is largely a problem of anonymity and newly adopted technology, combined with an older generation that is generally less polite and socially connected online. Younger Millennials have better-integrated brain communication pathways between written and real-world communication, having grown up with the technology. Plus, several publications and other outlets are moving to limit trolling or going so far as to remove their commenting sections because trolls have made them nonproductive."

Kevin Novak, CEO of 2040 Digital, responded, "Discourse whether in person or online represents a variety of views (positive and negative). We as humans first complain before sharing positive outcomes or feelings. Human nature and the need to communicate emotions will continue to expand and proliferate online."

Scott McLeod, associate professor of educational leadership at University of Colorado-Denver, observed, "The internet will continue to serve as an outlet for voices to vent in ways that are both productive and necessary. Societal and political 'griping' and 'disgust' often are necessary mechanisms for fostering change. We are going to find ways to preserve anonymity where necessary but also evolve online mechanisms of trust and identity verification—including greater use of community self-moderation tools—that foster civil discourse in online communities that desire it. Yes, there will be marginalized communities of disgust but many of these will remain on the fringes, out of the mainstream. The ideas that bubble up from them and gain greater traction will represent the larger public and probably deserve some constructive attention."

Arthur Kover, an emeritus professor at Fordham, said, "Communication will become more shaped by negative activities because the lessening of constraints of community and personal interactions will allow formerly repressed material to surface, and it will."

Adam Gismondi, Ph.D., visiting scholar at Boston College, noted, "I feel confident that balance will come in terms of online discourse. Bad actors will always exist, but the positive shift will be a result of two major changes. First, platform designers concerned with the risk of less active (or loss of) users will find ways to deemphasize forms of harassment and trolling. Second, the still-forming social norms online will continue to evolve, and eventually we will see the level of discourse naturally rise."

William J. Ward, a college professor also known on Twitter as @DR4WARD, wrote, "This question implies that public discourse is shaped by negative activities. That is a false assumption. Negative and positive activities will remain the same. Negative activities online will remain in the minority, only 1-10% are negative; 90% are positive or neutral. The media may focus on the less than 10% of negativity but actual public discourse is 90% positive or neutral."

Dave Robertson, professor of political science at University of Missouri-St. Louis, wrote, "There will be more and different platforms that may be more closely targeted to specific groups. Platforms that are more inclusive may lose some of these groups, making them less inclusive. I don't see any reason to think that privacy and anonymity will be improved."

Sam Anderson, coordinator of instructional design at the University of Massachusetts-Amherst, said, "It will be an arms race between companies and communities that begin to realize (as some online games companies like Riot have) that toxic online communities will lower their long-term viability and potential for growth. This will war with incentives for short-term gains that can arise out of bursts of angry or sectarian activity (Twitter's character limit inhibits nuance, which increases reaction and response)."

Tim Norton, chair of Digital Rights Watch, wrote, "I see the potential for greatly increased public discourse through online media. Social media allows people to take part in a public debate that they may have not previously had access to. But alongside this, an increasing culture of attack language, trolls, and abuse online has the potential to do more damage to this potential. Anonymity (or at least the illusion of it) feeds a negative culture of vitriol and abuse that stifles free speech online."

Dmitry Strakovsky, a professor of art at the University of Kentucky, wrote, "Online interactions are driven by either increased familiarity between the subjects or by complete anonymity. It's pretty polarized. I don't foresee these trends changing."

Tom Vest, internet activist, analyst, and expert, observed, "Although the scope for anonymity and privacy are likely to continue eroding, the population of gripers, distrusters, and disgust/ed/ing post authors is sure to at least keep pace with overall internet user growth rates—and on the current trend, there is likely to be more to gripe, distrust, and be disgusted about over time."

Musiliu Lawal, senior engineer at the University of Ilorin, based in Nigeria, said, "The media and digital world will definitely evolve in the next 10 years. There will be high demand for technological development in online interactions to encourage free speech, anonymity, and privacy."

Charles Perkins, a senior principal engineer for a major global telecommunications company, commented, "People will see that the potential for mutual benefit and growth is being lost because of the unconstructive actions noted above."

James Kalin of Virtually Green wrote, "Surveillance capitalism is increasingly grabbing and mining data on everything that anyone says, does, or buys online. The growing use of machine learning processing of the data will drive ever more subtle and pervasive manipulation of our purchasing, politics, cultural attributes, and general behavior. On top of this, the data is being stolen routinely by bad actors who will also be using machine learning processing to steal or destroy things we value as individuals: our identities, privacy, money, reputations, property, elections, you name it. I see a backlash brewing, with people abandoning public forums and social network sites in favor of intensely private 'black' forums and networks. As global discourse devolves into village discourse, the days of Facebook, Twitter, and the other mass social networks are numbered."

Henry Mead, self-employed legal assistant and leatherworker, observed, "Digital assistants, the proto-AIs that are starting to have a significant presence in regular people's daily lives, will come to have a much greater impact on our experience and interaction with social media, digital commentary, and the society which those reflect and impact. We will be able to train our digital assistants to aggregate our information flow and tailor it to our tastes. Part of that function can be the filtration of negative contributions from bad actors or trolls. However, those bad actors will continue to act in the same way they do now. There are currently means to avoid much of the negativity. Those means range from simply choosing to not read online comments, to careful control and self-moderation of one's social media contacts and feeds, to the decision to not participate in social media altogether. The digital assistants may make non-participation less attractive, while making it easier and less time-consuming for the average person to implement the former two means."

Stewart Dickson, digital sculpture pioneer, said, "I don't expect human nature to change. However, I am hopeful that fact-checking interventions like politifact.com, which back up statements with actual evidence, will come into more common use. I am optimistic that eventually people will wander beyond their insular social network bubbles, broaden their worldviews, and actually educate themselves. After this has occurred, maybe the general tone will begin to improve. It will probably operate on a generational timescale (30-year time steps). The best thing for free speech, anonymity, and privacy is decentralized publishing; reversing consolidation of media ownership; decentralizing internet service providers; nurturing the diaspora; and more last-mile (first-mile) local broadband fiber projects. Note: Chattanooga owns its own electric power utility. This is how broadband successfully piggybacked on the public right-of-way. Nationalizing the power grid would be a good step toward decentralized broadband."

Chris Zwemke, a Web developer, commented, "People feel empowered to say hateful things, and to complain about and shame those hateful things, if they aren't face to face. Shaming a harasser or a troll is definitely negative noise. (I don't know that it is wrong, but it is negative noise.) We haven't reached peak argument online yet. Folks will continue in the next decade to speak ill of each other, in either true hate or trolling. Either way, the people who visit 'public' places online will have worse content to consume. Best to avoid the comment sections for the foreseeable future. My hope is that online discussion can solve the echo chamber problem of online discourse so that people can see the other side with more clarity."

David Krieger, director of the Institute for Communication & Leadership at IKF in Lucerne, Switzerland, noted, "Trolls we will always have with us. Despite everything, they serve the useful purpose of challenging and improving the evolution of the social immune system. The pressure for more transparency and authenticity that comes with increasing connectivity and flow of information will tend to make life more difficult for the trolls. Free speech becomes unpredictability of information flow. Privacy will yield to 'publicy' in a knowledge economy of abundance. Anonymity for what, if everyone knows everything anyway? What we need is Network Publicy Governance instead of market individualism and bureaucratic hierarchies."

Dan York, senior content strategist at the Internet Society, wrote, "Unfortunately, we are in for a period where the negative activities may outshine the positive activities until new social norms can develop that push back against the negativity. It is far too easy right now for anyone to launch a large-scale public negative attack on someone through social media and other channels—and often to do so anonymously (or hiding behind bogus names). This then can be picked up by others and spread. The 'mob mentality' can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. This will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy."

Ida Brandão, a professional educator, observed, "I wish that the Internet would evolve into a democratic tool accessible to all, a means to learn more and to understand other cultures around the world in a path of tolerance. I know that it also carries negative aspects, but I prefer to highlight the best ones."

Garth Graham, board member at Telecommunities Canada, said, "We used to think that all politics was local. But online interactions are showing us how the internet is an artifact of a culture of individualism. To the individual, all politics are the politics of identity, and autonomy is a far more important value than either anonymity or privacy. People will continue to tell their own stories in their own way."

Ray Schroeder, associate vice chancellor for online learning at the University of Illinois, Springfield, commented, "Over the next decade I expect the Internet to continue to become more and more important as the forum for expression both nationally and internationally. MOOCs will continue to evolve into a leading mode of higher education and learning at all levels. Degrees and certificates will abound online—with credentials assembled and authenticated via blockchain networking architecture."

Janet Salmons, independent scholar, writer and educator at Vision2Lead, observed, "I hope social media and the internet more broadly will become more inclusive and celebrate exchange and constructive commentary. But I fear it will increasingly become a platform for hate speech and spam. Also, I fear the commercial interests that own social networking sites will try too hard to sell every dot of data, squeezing out the sense of community that many enjoyed. I find myself more drawn to scholarly/professional blogs and websites and away from social media platforms for these reasons."

Amanda Licastro, assistant professor of digital rhetoric at Stevenson University, said, "With anonymity comes immoral acts. I prefer to believe in the moderation efforts of the many to silence the few rather than policing by the state. So I hope we can maintain this level of freedom online, despite the bad actors."

Paul Dourish, chancellor's professor of informatics at the University of California-Irvine, said, "In the short term, it seems unlikely that we'll see much of a change here. My sense of the current deplorable state of online discourse is that it reflects a series of broader social and cultural conditions that are unlikely to be resolved in the short term. Further, while technical mitigating factors might help with the situation, it takes an inordinate amount of time to see those rolled out or adopted widely. Our current platforms are unlikely to disappear in the short term without some significant pain."

Vance S. Martin, instructional designer at Parkland College, wrote, "With the advent of Facebook, Reddit, comments on sites such as YouTube, Twitter, Wikipedia, and online learning, it becomes much easier to make comments that reach a wider audience. With the distance it becomes easier to simply say things that are mean or harassing. It also becomes easier to follow through on these comments by looking up information about a target. This was evidenced last year with Gamergate, or even in this US political cycle in Nevada, with Sanders supporters targeting those selected to go to the conventions. My guess is that in the next decade there will be an increase in this behavior. I would like to think that with some of the recent social movements, such as Black Lives Matter, there could be a change in tone online and off, but that could require something drastic to change the tone."

Jeff Stonecash, a professor emeritus of Syracuse University, replied, "Social media and the internet are valuable in showing us the real opinions people have. Many people who would not comment do so with social media. They also allow people to vent in an unedited, unbridled fashion without much concern for the impact on others. It is very self-gratifying. I doubt there will be restraints imposed and I worry that we are headed into an era of more venting and less dialogue."

Lisa Heinz, a doctoral student at Ohio University, commented, "Trolls are gonna troll. They existed before the Web and they will continue to exist long after I'm gone. Humanity's reaction to those negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact—as proxies for the humans that train them—with humans online in real-time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. The lessons learned from the Tay-bot fiasco will create a new level of social media or Web bot that will become more conversation-friendly but also, again, less inclusive. When this happens, we will see bots become part of the filter bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations in which individuals want no part. The unfortunate aspect of this iteration of the filter bubble is that, while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them."

Ebenezer Baldwin Bowles, editor at The Indie Tribune, wrote, "Haters will hate, the pop singer tells us. Online haters need someone to listen to their discontent, someone to read their lament. What if the forums available to us provide the means to silence the quarrelers by denying them entrance into the places of our discourse and our chat? Does the free speech of the hater also demand that I freely listen and engage? Does the act of blocking the hater's voice from my stream constitute denial of free speech?"

William Ian O'Byrne, an assistant professor at the College of Charleston, said, "We need to consider who we mean by the 'bad actors' and the nuances of trust in online spaces. We will continue to see hacks, data breaches, and trolling behavior in online spaces. I hope that, as Web-literate citizens, we increasingly speak out against these behaviors, but also read, write, and participate more thoughtfully in online spaces. My concern is the chilling effect that we see in this post-Snowden era in which we have to be concerned about privacy and security and how these are controlled by businesses and governments. Most of this happens in private and I believe can be more detrimental to online public discourse than trolls and data hacks."

Eleni Panagou, cultural informatics professional and information systems researcher analyst at CulturePolis, wrote, "I do not expect major changes, because all the main and 'big' players are keeping pace with major legal and privacy issues; on the other hand, corporate thinking is already where it has to be in order to sell their products in ICT, internetworking, etc., more or less affordably."

Daniel Pimienta, head of the Networks & Development Foundation (FUNREDES), noted, "The key factor for the answer is the speed of the deployment of media and information literacy (MIL), a much-required strategic education through various channels. A study (Y. Eshet-Alkalai and E. Chajut, Change Over Time in Digital Literacy, published in Cyberpsychology & Behavior) offers very worrying trend data. The study measured, at a five-year interval and using the same methodology, the respective levels of media and information literacy of students and compared them with those of their parents. The first study found a low level of digital literacy in the parents and a low level of information literacy in the children. In the second, the level of digital literacy of parents improved and approached the children's, while the level of information literacy of children worsened, revealing the dangerous myth behind the fashionable concept of 'digital natives' and the urgent need to organize the information literacy of young people. The low level of information literacy is the culture broth for conspiracy theories, disinformation, hate discourses, and so on. The answer to that question is extremely dependent on the extent of MIL policies."

Erik Anderson, a respondent who did not share additional identifying details, wrote, "Privacy is an illusion and people are trying to protect what they don't understand. We live in a hyperconnected information age. If you want privacy then don't go online. I expect data security and privacy services to become more mainstream for the users who care. Ever wonder why you started getting targeted ads when you bought something online? Don't use Gmail. Privacy is less of a concern in Western countries, but in some countries standing out means persecution/death."

Laura Stockwell, digital strategy consultant and owner of Strat School, said, "We are still transitioning into a fully digital way of connecting and communicating, with an older generation having grown up without digital communications channels, and Generations Y and X growing up partially using digital communication channels. We will see new types of interactions when Gen Z reaches maturity, in the next 20-30 years. This generation is incredibly collaborative, they are being raised with MOOCs, and they are creating and consuming media from each other at a very early age. They will not only better understand how to communicate more clearly in the digital space, but, if you subscribe to theories of media ecology, they will think in a more tribal way. Yes, digital can be divisive, but it can also be connective. In addition, systems theory suggests that all systems are self-correcting. Look at where the harassment is coming from. For the most part it is from groups with longstanding power retaliating to less-empowered groups, who now have a voice! What might happen when those with less power gain equality? We may live in a more engaged, more equitable society."

Theo Armour, a coder, replied, "We will learn that content needs to be curated. The techniques for curation will improve."

Eric Marshall, a systems architect, noted, "We have had twenty-plus years of online communications already. The ability of folks to counteract solutions that might be generated leads me to believe we will stay where we are."

David Williams, a respondent who did not share additional identifying details, wrote, "I expect the quantity of negativity to increase, and perhaps the stridency as well; however, I'm optimistic we will have enough control over our access tools to filter much of that out. I'm hopeful folks will use the tools at their disposal to ignore, gloss over or not even see the majority of the purely negative, sensationalist attempts at communication."

Stephen Schultz, a respondent who did not share additional identifying details, said, "In my own experience, commenting platforms like Discourse draw in far more thoughtful people than trolls, and part of this is structural. Discourse allows only upvotes (likes) and implements a flagging process that is the logical conclusion of any system allowing for downvotes in the first place. This encourages thoughtful conversation. Commenting platforms like Discourse are being adopted by more and more sites, so I expect this productive dynamic to become the norm well within this decade. As to online identities and privacy: embedded Facebook commenting is, for now, the most transparent platform in terms of real-world identity. I think what will ultimately matter more, however, is identity persistence, regardless of its real-world verifiability. Commenters are invested in their online identities. Whether they 'go visible' is secondary to their reputation associated with that identity."

Dudley Irish, a software engineer, wrote, "It seems likely that it will become increasingly possible to tie an actual person to an otherwise anonymous account. This loss of anonymity will lead to a reduction in trolling behavior, not so much because trolls' behavior will change but because of the ability to effectively target them and block them (shun them). This loss of anonymity will have a chilling effect on free speech. This could be addressed legally, but only a minority of government actors are interested in extending and increasing free speech. Also, the most common targets (at least according to what I have read) of these sorts of attacks are also the prime targets for a great deal of advertising. The major corporations will act to protect the advertising channel, and they have no interest in protecting free speech. These two factors mean that behavior will be 'nicer,' but at a tremendous cost in freedom of expression and free political speech."

Matt Bates, programmer and concept artist at Jambeeno Ltd., said, "Participants in public discourse will become more choosy about when and where they communicate due to the increasing importance of online commentary. Positive, negative, or benign, such commentary is increasingly impacting people's daily lives via, e.g., fostering profound personal relationships, affecting employability, enabling education and personal expression, etc. This discretion will be second nature to people who have spent their entire lives with ready access to the internet. I suspect top-down moderation tools and techniques will continue to broaden and in some circles will become increasingly draconian. I don't know what will come of free speech in the United States specifically, but change, as ever, will continue to be gradual (it was never the case that all speech was free here, and the numerous erosions to free speech since the 20th century have all been fairly assiduously circumscribed by the courts, even though I tend to disagree with most of them). Vis-a-vis anonymity and privacy: I foresee their continual and gradual erosion as technocracy inexorably expands. Shoshana Zuboff's Three Laws are apropos: 1) Everything that can be automated will be automated. 2) Everything that can be informated will be informated. 3) Every digital application that can be used for surveillance and control will be used for surveillance and control. To paraphrase Dan Geer: when one-inch block letters can be seen from space, how does that change our calculus about what is and is not 'private'? When a kid with a small allowance can afford a drone that can peek through most people's windows? When all the streetlights installed in your town include 360-degree surveillance cameras? When anybody's phone can be trivially hacked to record the sounds of their surroundings? The very notion of what is and is not private will, necessarily, be shifting at an increased rate.
As a civil libertarian I view this as extremely regrettable, but I also see it as inevitable, especially given the rapidity with which technology undermines extant power structures and changes our mores and habits. Whether this leads to increased devolution of government to local modes or to more centralization and the dystopian intrusively-paranoid police states of science fiction is beyond my ken, but I expect the latter is more likely, at least in the short-term."

Alan Cain, a respondent who shared no additional identifying details, asked, "Who is the one who defines bad actor? Unfortunately, the internet will be more restricted, with any excuse, likely 'civility, security, and selectivity of worldview.' Our oligarchical control structure will determine what is communicated."

Lindsay Kenzig, a senior design researcher, said, "Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now. Given that so much of the world is so uneducated, I don't see that more-inclusive online interactions will be the norm for many many years."

Ryan Sweeney, director of analytics at Ignite Social Media, commented, "Online discourse is so new, relative to the history of communication. The optimist in me believes we're in the worst of it right now and within a decade the kinks will have been worked out. There are a lot of tough and divisive but crucial conversations occurring online at this moment in time. People are more open with their communication. Groups that have previously had minimal access to mass communication are able to have a voice and those who previously had the microphone aren't quite sure how to react pleasantly. Technological evolution has surpassed the evolution of civil discourse. We'll catch up eventually. I hope. We are in a defining time. We have to be cautious of censoring speech. If social media becomes an exception to this, it will defeat the purpose of its existence to begin with. I am also hopeful that, with increased dialogue and increased transparency, saying something false in order to create an alternate reality will go out of trend."

Manoj, an engineer working in Singapore, replied, "Negative interaction will increase to a limit, after which I feel there will be some self-regulation coupled with governmental and procedural requirements. Free speech will be a big loser, and I hope people come out more as themselves rather than hiding behind anonymity, which in any case is always traceable."

Jean Burgess, a professor of digital media at Queensland University of Technology, wrote, "We'll see a growth in tools and systems to prevent or regulate hate speech and filter for quality discourse, but at the same time we'll see a retreat to safe spaces and closed groups, reducing the mutual visibility and recognition of diversity."

Mary K. Pratt, freelance journalist, commented, "Online communication mirrors what we see in society in general, where the tone of discourse has long had negative elements. The online world allows for both anonymous as well as attributed thoughts, just like many real-world interactions. As a society we struggle to balance free speech against speech that is criminal in nature and therefore not protected. I don't see society and lawmakers willing to cut back on free speech nor do I foresee large societal shifts toward more civility in general, so I don't think online communication will become less negative; and I don't foresee any large societal shifts that will prompt an increase in negative activities that would more greatly shape online communication."

Davin Heckman, professor of mass communication at Winona State University, observed, "Proposed solutions to negative expression online are generally geared towards policing individual behavior (banning, deleting, censoring, shaming) and/or training (teaching people to govern themselves) and not towards an assessment of the logic of social networks that contribute to such behavior. The reality is that social networks thrive on confusion between public/private speech, decontextualization, the reductive gestures of tagging, and a distorted relationality. I think some platforms or communities will achieve a kind of peace by eliminating difference, but I don't know that this will achieve the necessary social and political activity that Arendt discusses in The Human Condition."

Eelco Herder, senior researcher at the L3S Research Center (Germany), replied, "Until about 10 years ago, the World Wide Web was mainly considered a virtual reality with only limited connections to the 'real' world. It was common practice that users participated in forums and newsgroups with a nickname, thus ensuring (semi-)anonymity. Nowadays, many people use their own names in social media, either enforced (e.g., Facebook) or (semi-)voluntarily (e.g., Twitter). It seems that the real-name policy of Facebook only played a limited role in this change; the Web and the world have become tightly connected, and our actions on the Web (purchasing, reviewing, liking, dating) often have direct 'real' implications. In addition, 'privacy by obscurity' (meaning that conversations or other online actions are private because no one will be interested or able to find them) has virtually come to an end, now that major search engines index social media content as well. It is often thought that people are more direct and rude when they are anonymous than when their identity is known. Flame wars in Facebook comment threads and hot-headed Twitter conversations indicate that this is not the case (anymore). Some years ago, we investigated how users respond to warnings that their tweeting behavior might put their jobs in danger, and only a small percentage of users actually took action: http://fireme.l3s.uni-hannover.de/fireme.php It would be interesting to compare the quantity and quality of harassment and trolling on the Web today with, say, 10 years ago (the early days of Twitter)."

Michael Wollowski, associate professor of computer science at the Rose-Hulman Institute of Technology, said, "I see our society as having moved toward a considerably less civil way of interacting with each other. This trend will continue for a while and it will be reflected in online tools."

Dave McAllister, director at Philosophy Talk, wrote, "The ability to attempt to build up status by tearing down others will result in even more bad actors choosing to win by volume. It is clear that the concept that the 'loudest' wins is present even now in all aspects of life in the United States, as represented by the 2016 presidential campaign."

Joanna Bryson, senior associate professor at the University of Bath, commented, "Generally speaking the trajectory for society is to learn to cooperate more and more. We will find ways to encourage good behaviour and police bad, though of course it will remain an arms race and progress will not be consistent. This is just based on historic precedent over the last 10,000 years. The internet isn't that different from language in general."

Karen Mulberry, a director, replied, "As the Web becomes the main, if not sole, source of information and data for the majority, the issue of trust/distrust in what can be obtained will rise. The majority still place significant trust in anything they read on the internet, and there is no clear way for them to determine if the information is factual and correct. For the most part this will be a generational issue, as the newly online appear to believe everything online, from social media to news reporting. Then you need to overlay the ability to frame facts and data with the right for anyone to post what they want under freedom of speech, which should not be restricted, as that then becomes content control on the internet."

Andrew Nachison, founder at We Media, said, "Public discourse about public affairs is already a kind of shouting match. It's a brawl, a forum for rage and outrage. It's also dominated by social media platforms on the one hand and content producers on the other, which collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals. But that's really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don't know that the prevalence of harassment and 'bad actors' will change—it's already bad—but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there's good reason to think all of that will dominate the digital debate as well. But I want to stress one counterpoint: There's much more to digital culture than public affairs and public discourse. The Net is also intensely personal and intimate. Here, I see the opposite: friends and family focus on a much more positive discourse: humor, love, health, entertainment, and even our collective head shakes are a kind of hug, a positive expression of common interest, of bonding over the mess out there. It would be wrong to say the Net is always negative."

Susan Mernit, CEO and co-founder at Hack the Hood, wrote, "Bad actors equal click-bait; humans universally respond to anger and fear. For balanced dialogue, this is a challenging combination."

Annette Markham, a respondent who shared no additional identifying details, observed, "Two factors seem relevant to mention here: Historically, new media for communication have been accompanied by large spikes in impact on forms of interaction. This tends to decline as technologies move from novel to everyday. This suggests that extreme uses tend to normalize. The second factor to add to this is that many stakeholders are responding to extreme homophily."

Joe McNamee, executive director at European Digital Rights, replied, "There are several strands here. All other things being equal, some educational efforts and more understanding that the 'virtual' is actually real (read by real people, reacted to by real people) should lead to a) more maturity over time and b) more understanding of the fact that crazies on the internet are generally just crazies on the internet. However, in the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do ‘more’ to regulate everything that happens online. We see this with the European Union's 'code of conduct' with social media companies. This privatisation of regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies."

Peter Eckart, a respondent who shared no additional identifying details, said, "By the end of the decade, the rise of authentication will make some online spaces less troll-y, but general public rhetoric will continue to be dominated by thoughtless comments by people with no accountability for their words."

Christian Dawson, a respondent who shared no additional identifying details, replied, "The important thing is the word 'shaped.' There will be just as much hate and divisiveness, but we are learning as a society how to manage it, and how to not let it rule us. Honestly, the current political discourse in America and the coarsening of dialogue on 24-hour news networks have led us down a terrible path, but only a small portion of our society is like that, and this is their moment—after this, their time will pass and we will be able to advance as a society with this ugly moment in our rear view."

Rex Troumbley, postdoctoral fellow at Rice University's Humanities Research Center, said, "I anticipate that, as online communication becomes increasingly mediated by communications companies, traditional appeals to the state for the enforcement of public rights to free expression and equal protection will be made to private corporations."

Frank Odasz, president of Lone Eagle Consulting, wrote, "In the mid-80s, Peoplelink in Santa Monica created a BBS to engage homeless persons, who then delighted in posting offensive messages because they could, without any consequences. Since then, we’ve seen the scalability of negative postings, including beheading videos. Creating a trusted mutual-support network is what most people need to learn from peers evolving best practices. AARP continues to warn senior citizens about the growing number of scams. Social media allows anyone to post anything—resulting in cyberbullying, suicides, rumor mongering—and, currently, we lack mechanisms for social recognition for positive capacity-building contributions or for disincentives for negative contributions. Solutions for encouraging more-inclusive online interactions need to start with teaching and modeling good behaviors associated with encouragement and support of peers in meaningful ways. The nature of civil online communication is likely to be less public and more within focus groups of peers in order to be more select in who participates. Digital inclusion for non-self-directed learners—those with minimal abilities and interest in reading and writing—requires a major shift in communications behaviors. Voice-to-text allows those without typing skills to post, which is new. Teaching of the positive consequences of good behavior and growing an upward spiral of mutual encouragement is necessary, as well as teaching of the negative consequences of creating discouraging downward spirals. This requires self-assessment at the individual level as well as at the local community level. Among the goals of the Alaska Native Innovations Incubator are: creating a mechanism for ongoing public visual feedback and community self-assessment; rewarding civic contributions; and flagging those whose actions tear at the very fabric of community goodwill, innovation, and capacity-building.
I predict that free speech will evolve to where anyone can join a cause and contribute in meaningful ways, as www.globalcitizen.org has begun to explore, noting the viral potential social media has already demonstrated with the Arab Spring and much more. That a cause would win 100 million supporters internationally within days or weeks is significant. Such virtual nations of purpose will be able to actionably leverage economics and loyalties based on values instead of geographic borders. LinkedIn's and Facebook's algorithms are designed to manipulate user behaviors to grow revenue generation, using push technologies to grow the user base and content volume as fast as possible. Those who choose to opt out of participating in such systems can still benefit from access to essential resources and find a voice, as necessary. I teach online 'Social Media for Educators' for the Alaska Staff Developers Network. We all need to better understand how to use online tools for positive ends, and to know how to avoid being manipulated. Presuming access alone will transform education and society is naïve. What matters most is what you learn is possible, and what you choose to do with internet access and the potential for dramatic socioeconomic impacts locally and globally."

Jan Schaffer, executive director at J-Lab, commented, "I expect digital public discourse to skew more negative for several reasons, including: the polarization of the country, which is a barrier to civil discourse; the rise of websites, Twitter accounts, and Facebook pages dedicated to portraying an opponent in a bad light; and the awful online trolling and harassment of women who are active in social media. I do not think things will get better on their own."

George McKee, a retired research scientist who began online in 1974, replied, "Most of the transformation of social life by digital media has already occurred. Three additional aspects may enhance those trends: real-time voice language translation will enable international friendships and transform international tourism. The same technology will enable an unprecedented degree of government surveillance and censorship, though this will be mitigated to a degree by end-to-end encryption. Augmented reality and telepresence will enable multiplayer games that are only dimly foreshadowed by Pokemon Go. An aging, housebound population will enjoy interactive virtual tourism via VR telepresence, assuming that nausea-inducing latency issues can be overcome."

Joel Barker, futurist and author at Infinity Limited, replied, "The damaging effects of the negative will be measured and steps will be taken to give rapid response to false, negative information by countering with positives."

Daniel Menasce, professor of computer science at George Mason University, said, "While social media and digital commentary have some very positive aspects, they also serve as tools for the dissemination of lies, propaganda, and hatred. It is possible that technological solutions may be developed to assign crowdsourced reputation values for what is posted online. This, in my opinion, will not stop people from consuming and re-posting information of low value provided it conforms with their way of thinking. I do not think that free speech will be affected and I do not see the Supreme Court rendering any rulings that curtail free speech based on the attributes of social media. I believe that the first years of social media and digital commentary have given people tools to more freely and easily share their ideas and sentiments. On the other hand, many people who use social media have not yet realized that their sharing habits have caused them to lose their privacy and have in many cases posed severe security risks."

Aj Reznor, vulnerability and network researcher at a Fortune 500 company, commented, "The popularity of social networking has been producing a growing trend towards the echo chamber: Reposting/reblogging articles or posts, the latter of which is usually a call-to-arms over a perceived injustice. These are typically negative, and also inaccurate. Few take the time to validate claims or legitimacy and rather just click, forward, and move on. While free speech theoretically provides (Americans) the right to say what is on their mind, it does not guarantee them the right to be taken seriously. Or, thanks to social networking, the right to be free from repercussions of that speech (e.g., loss of employment over expressed personal beliefs). Free speech, to remain free, will require at a minimum pseudonymous communication. Which is, of course, the stature many trolls automatically assume."

Annie Pettit, vice president of data awesomeness at Research Now, observed, "The last couple of years have seen a tremendous amount of negativity and hate on the internet. Because of this, many companies have had to implement technical strategies to deal with the negativity, from more explicit Terms of Service to reporting buttons and more. With the advent of artificial intelligence, many companies will build processes that are better and more quickly able to detect and deal with inappropriate negativity. Simply seeing less negativity means that fewer people will contribute their own negativity or share other negativity. On top of that, people are reaching their threshold as to how much negativity they will accept in their lives. They will report, block, and leave websites that aren't handling it as well as they'd like. It is becoming a better business decision to deal with negativity proactively rather than hoping no one will care."

Micah Altman, director of research at MIT Libraries, replied, "The design of current social media systems is heavily influenced by a funding model based on advertisement revenue. Consequences of this have been that these systems emphasize 'viral' communication that allows a single communicator to reach a large but interested audience, and devalue privacy, but are not designed to enable large-scale collaboration and discourse. While the advertising model remains firmly in place, there has been increasing public attention to privacy and to the potential for manipulating attitudes enabled by algorithmic curation. I am optimistic that in the next decade social media systems will give participants more authentic control over sharing their information, and will begin to facilitate deliberation at scale."

Helmut Krcmar, professor of information systems at the Technical University of Munich, wrote, "Since the use of communication technology entails the social appropriation of these technologies, there could be a stronger development of social rules on what not to do. Also, technologies could be developed in a more restrictive sense (word-matching to stop comments and so on). Although the level of rule-breaking might stay the same, using technology might lessen the described issues."

Richard Forno, a senior lecturer in computer science and electrical engineering at the University of Maryland-Baltimore County, commented, "Online interactions are already pretty horrid—just look at the tone of many news site comment sections (or how quickly they devolve into ad hominem attacks) and/or the number of sites that simply remove user feedback/forum sections altogether. My sense is that, absent legions of paid curators to filter user comments, there will be less opportunity for user/reader/community engagement as prominent websites look for a simple and economical way of solving the trolling problem."

Antero Garcia, an assistant professor at Colorado State University, observed, "The problem with demanding more inclusive technologies is that the learning principles around using these tools are not being sustained in K-12 environments in the US. As such, if we are looking at the communicative practices taught (and not taught) in schools today, as well as looking at the language in the Common Core State Standards and the Every Student Succeeds Act, there is an emphasis on proficiency with using digital tools but not with understanding social discourse in new publics. Unless we see schools shift to supporting socialization practices rather than just tool-based fluency, I can't imagine we see positive changes in what is happening online."

Ansgar Koene, senior research fellow at the Horizon Digital Economy Research Institute, replied, "A large factor in the tone of online interaction is the lack of direct physical human immediacy with the audience. Instead of communicating with actual people, we are communicating with an imagined audience. As a result, the tone of interaction is prone to be influenced by external factors, e.g. venting of frustrations from work. At the same time, there is only a small minority of trolls who are engaged in actual campaigns of negative interactions. For the most part people online want to interact and communicate constructively, same as they do offline. The perception of the level of negativity is stronger than it really is due to current over-reporting in the media, which will decline as soon as online interaction is no longer newsworthy due to its relative novelty. A decline in reporting may lead to a slight drop in trolling, but this is unlikely to be significant."

Ryan Hayes, owner of Fit to Tweet, said, "You'll have a couple of offsetting effects. On one side, you'll have more people online and it will increasingly be the place for people to interact and that combined with what I think will be an increasing trend of polarization will push those interactions in a negative direction. On the other side, there will be less room for the fringe (where so much of the fuel for these negative interactions comes from) as technology helps us better understand what is true. For example, we may have augmented reality apps that help gauge whether assertions are factually correct, or flag logical fallacies, etc. Rather than just argue back and forth I imagine we'll invite bots into the conversation to help sort out the arguments and tie things to underlying support data, etc. The net effect will be similar to what we see today."

Masha Falkov, artist and glassblower, wrote, "Online activities will be less shaped by negative activities as moderation becomes more of an aspect of social media. People today are beginning to recognize the phenomenon of trolling for what it is. There is more discourse as to the nature of the type of personality that likes to troll, and people that hear about this are less likely to feed trolls and their desire for attention through harassment, overly-simplistic negative comments, and clear attempts to throw off the topic of conversation. Certain social platforms work through up- and down-vote systems for bringing popular responses to the top of the conversation, and though it's not a perfect system, too many down-votes alert the human moderators that the content of a post or comment may be hate speech or otherwise abusive. Many systems have made it their policy to forbid hate speech and abuse. Perhaps there are negative consequences for freedom of speech. It's a complicated question. However, online, speech becomes more than just printed word on paper. It becomes a vector for the binding of a community. People who wish to speak hatefully against their targets—women, minorities, LGBT, etc.—always seem to bind together with much more force than people who speak to defend those same targets from hate. Hate speech online isn't the polar opposite of supportive conversation or respectable polite discourse. It's a weapon made to cause its targets to feel fear and inadequacy, and it has real-world effects on those people, with virtually no consequences for the speaker save for disapproval from the community. In real life we have freedom of speech for writing books, saying what we like among friends and strangers, but it is tempered with social consequences. What we say changes who we are friends with and can make us unpopular. If we print a book or publication, we spend money on it and we think about what we write beforehand.
If we say something on television, our face is attached to the speech. While I don't believe online discussion should have our real identities or money involved, I do not see limits on hate speech and abuse in online communities as genuine limits on the freedom of speech. It is simply treating those communities as closed gardens to specific types of individuals considered harmful to that community. Whether limits on hate speech and abuse online are part of a larger trend toward limits on freedom of speech should be evaluated on a case-by-case basis rather than shouting an alarm that our freedoms are being eroded."

James McCarthy, a manager, commented, "Negative activity on the internet sort of becomes background noise, like (most) advertising. People will learn to tune it out, unless they want to participate, whether through direct engagement or commentary in other venues. The flipside of this is that the people who engage in these activities will end up only really communicating with others who behave similarly, creating an echo-chamber effect. Since the 'echo chamber' is more or less a norm these days, though, increasingly egregious behavior will be required for non-participants to notice such activities. Whether this escalation might be said to 'shape' online discourse, though, is murky. People still fly in planes in spite of September 11, but they do have to cope with increasingly-intrusive security checkpoints; does this mean the TSA is 'shaping' American travel habits? Trolling and harassment, specifically, are just human nature. It would be nice to develop toward a society where, in the absence of tools that can be used to reduce such proclivities, it's possible to separate your online identity from your personal identity as a form of protection. Doing so would protect the harasser from prosecution and accountability, but the victim would be more able to shut down the victimized persona and move on to a new one without losing their core data. The ethics on this one are tough to parse, though; are we protecting victims or enabling perpetrators? In the end, it becomes a wash; negative behavior online can't be addressed online. It stems from real-world people in real-world situations. Any widely-available tech that would seek to protect one group of people from another group would only end up being adopted by both groups and used to benefit those groups' own ends."

Francisco Javier Juarez Serdio, a product specialist, said, "Unless social media leverages anonymity, there are no reassurances this will tilt to a positive or negative side in the future. As individuals, we need to take more and more responsibility, not only for ourselves but also for our families and all exposed to online discourse."

John Gallup, a respondent who shared no additional identifying details, said, "There is not, or should not be, a Gresham's Law for internet discourse. The incivility and nonsense will increase, but our methods for blocking it will improve."

John Paine, a business analyst, wrote, "I expect that the continuously increasing number of communications possible within Internet-based apps, devices and other means will sustain the currently slowed development of normative courtesies in Internet discourse."

John Howard, creative director at LOOOK, a mixed-reality design and development studio, wrote, "Online behavior today is the result of bad actors working anonymously, which rewards fundamentally base behavior. What will change is our response to it: As the generation raised with social media comes of age, their ability to navigate this landscape will result in greater self-selection and a further narrowing/echo chamber of information sources."

Michael O'Donnell, a respondent who did not share additional identifying details, commented, "So long as we have a consolidation of channels and the barrier to reactionary discourse is so low, few people will take the time to share intelligent thoughts and ideas."

Pete Cranston, a respondent who shared no additional identifying details, said, "In most commercial spaces, which are the majority of the heavily used platforms, pressure from users—or fear of losing custom—will force an overdue pro-active response against hate speech and harassment. Public outing of trolls will also gather pace."

Elisabeth Gee, a professor at Arizona State University, said, "The growing economic and social divides are creating a large number of disenfranchised people and undoubtedly they will express their frustration online, but they'll mostly be interacting with each other. Just as 'public' places like city parks have become mostly the realm of the poor, so will public online spaces. I suspect that the real trend will be toward increasingly segmented and exclusive online interactions. We know that's already happening."

Garland McCoy, president of the Technology Education Institute, predicted, "I expect fewer 'negative' activities as there will be greater levels of surveillance, more self-appointed 'PC' police, and for those engaged in public discourse on the internet that share items deemed inappropriate or not 'PC' there will be swift consequences. I think the internet will evolve into a 'safe zone,' and the more spirited discussions will move onto dark-nets specifically set up to encourage open and uncensored discussion on topics of the day."

Tom Ryan, CEO of eLearn Institute, Inc., wrote, "The disruption that new technology brings seems to cause both good and bad actors to push the limits of public acceptance. Public discourse will continue to evolve online. From our fixed point in time it may be perceived to worsen, although this may be a more romantic reflection of the past. Just look at the public discourse by US Senators and newspaper editorials after Booker T. Washington ate dinner at the White House 110 years ago!"

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, said, "We are still figuring out the netiquette of online social interaction. Networks seem to rearrange themselves over time (newsgroups -> IRC -> MySpace -> Facebook) and interaction becomes more inclusive and more structured. I believe we are at the point of highest integration but lowest regulation. Over the next decade social networks will become more fractured and in some cases more (self-)regulated. This will reduce the negative experiences, as the benevolent majority becomes relatively more vocal and crowds out the trolls. I say this with a worldview in mind; I feel that in the US the current political reality will negatively impact online discourse in the short run, but this problem may resolve itself within the decade."

Jan-Hinrik Schmidt, a communication science researcher, commented, "There is no single public discourse online, but rather a wide variety of spheres where people interact, deliberate and argue about publicly relevant matters. I believe some of these spheres will continue to be (or change to be) un-constructive or even hateful, so there will always be evidence of a deteriorating public discourse. However, a combination of constructive efforts by (some) users, thoughtful moderation, legal sanctions, and technological innovation will produce spaces for reasonable and productive discourse."

Steve Anderson, founder and senior strategist at Open Media, said, "Online discourse is in large part shaped by the economic, social, and cultural realities of our world. Over the coming years in-person norms for dialogue will begin to be integrated into online communications. Communities will begin to take a more active role in regulating the negative content posted by their peers. Beyond that the only way to fully deal with negative and hateful online communications is to address the isolation and alienation that permeates our societies."

Dan McGarry, media director at the Vanuatu Daily Post, observed, "The internet is people. It's not good; it's not bad. It's just US. In the 1920s, right at the height of Yellow Journalism, we saw a counterpoint in the Menckens of the day. The 1950s and '60s brought us Murrow and Cronkite. Why should this time be any different?"

Adrian Schofield, an applied research manager, said, "The excitement of social media has worn off. Although there was identifiable segmentation in the early years, there has been a decline in the quality of content and a loss of the divide between business/personal, between formal/social, between news/rumour. Users will have to be more discriminatory about the channels."

Grant Blank, a sociologist and survey research fellow for the Oxford Internet Institute, replied, "Social media will continue to spread. I expect that they will evolve into more niches. This means more specialized social media sites, and away from broad sites like Facebook, although Facebook will continue to be extremely popular."

Katharina Anna Zweig, a professor at Kaiserslautern University of Technology, Germany, wrote, "Overall, negative behavior will decrease in the future. As in all new communication media, we need to develop ethics, rules, and tools to enforce these rules. We are, at the moment, developing the ethics, the corresponding rules, and tools. Once they are employed, negative behavior will go down (but certainly it will not vanish altogether)."

Paul Davis, a director based in Australia, said, "I consider the tone of online discussion reflects the tone in the broader community; it is simply that online facilitates that discussion better. Before the internet, it took either access to capital or access to existing networks to gain a platform to promulgate your views. The internet democratised this process, providing anybody with access to the internet the ability to publish their views. Of course, this removal of barriers to wide-scale communication carries with it risks, such as the promulgation of views the broader society considers abhorrent. Anonymity has, in many ways, removed some of the inherent societal controls to ensure civil discourse. However, the overall benefit of anonymity, enabling the previously voiceless to be heard, is worth that risk."

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, wrote, "This is a curious combination of concerns. First, given the preview statement, I expected to see a question about tone. I also am uncertain about what (in this context) 'more-inclusive' actually means. If it means that the diversity of the populations, ideas, etc., will be expanded within social media spaces, I doubt that seriously. Given the forces within the market, with Facebook in particular pushing us toward narrower and narrower spheres of interaction, my sense is that social platforms will treat 'widespread demand' as re-affirming that push."

Joan Noguera, professor at the University of Valencia Institute for Local Development in Spain, commented, "At present, an important part of the social media experts and digital commentators have learned (and are still learning) from experience in a field that is young and evolving. I expect more professional and specifically trained experts coming in over the next few years to improve the general panorama. We will definitely see a widespread demand for technological systems and solutions encouraging more-inclusive online interaction. On the one hand, it will be because of the increased fields in which this interaction will be possible. On the other hand, it will be because the more tech-oriented new generations will be accessing relevant jobs. I think free speech will be more protected from disturbances as a consequence of new regulations or as a result of new generations of users more familiar with this type of communication. A third reason might also be that improvements in technology make it more difficult to interfere from behind digital 'masks.'"

Fredric Litto, emeritus professor of communications at University of São Paulo, Brazil, commented, "As the recent presidential election primaries in the United States have shown, there is a surprising revelation of uncivilized attitudes and communicational behavior on a disturbingly large scale. The Brexit referendum in the UK offers similar characteristics. I have always shunned blogs for their unedited partiality, and favored daily newspaper letters-to-the-editor because they are filtered for their excesses. Even the 'commentaries' sections of academic sites reveal a shocking amount of name-calling, logical fallacies and 'know-nothingness.' The internet, in future, will see an unfortunate expansion of aggressive opinionating. This is something that, luckily, had been held back for many, many decades, but, protected by freedom of speech principles, it will increase (and perhaps even intensify), much to the detriment of sane democratic practices. Anonymity and privacy, in general, deserve protection, but not when issues of life and death (singularly or in groups) are concerned. There must be limits set to protect life and well-being!"

Dave Kissoondoyal, CEO of KMP Global, located in Mauritius, said, "With the rapid change in the human environment today—be it in a social context, or professional, or even societal—people have the tendency to be stressed, frustrated, and demotivated. It is human nature to voice frustration, demotivation, and stress, and one way to do this is by using technology. People use social media to express anger, disgust, and frustration. This tendency will continue and it will expand in the next decade."

Ben Railton, professor of English and American studies at Fitchburg State University, observed, "It's pretty simple: more and more of us (public scholars, but also interested and knowledgeable and engaged folks from all walks of life) are committed to being part of social media and online conversations. More and more of us are willing to read the comments, to engage in discussion and debate, to both add our voices and hear and respond to others. And the vast majority of us are doing so in respectful and collegial and communal ways. We're influencing the conversation, collectively, and will continue to do so."

Polina Kolozaridi, a researcher at the Higher School of Economics, Moscow, said, "Online interaction will become less in written form, even less than now. Voice messages, videos and photos, personal broadcasting, sharing of personal measurements (such as the number of steps you take and other quantities): this is the future of the interaction, even in work communication. Concerning commentary itself, it will tend to become simultaneously more personal (more people will communicate only with those whom they know) and at the same time it will become more massive. Many people globally who have never had experiences in a community will be coming online, therefore it will be more difficult to set norms and administrate big online resources. Free speech will become less regulated. That has its pros and cons. All people will be able to express their opinion, but they will be less aware of consequences. Therefore the communication will be at the same time more structured in one cluster of the internet-space and less structured in another. We see the example of such trends in the Brexit vote."

Alexis Rachel, a respondent who shared no additional identifying details, commented, "The only way that tone will change is if there is somehow a reduction in anonymity online. If it changes in any direction, it will be toward the negative—I base this assumption on general cultural trends away from courtesy and politeness with the progression of time, and the fact that internet social interaction tends to run to the negative of those norms."

Trevor Owens, senior program officer at the Institute of Museum and Library Services, wrote, "I really hope social media and digital media evolve in the coming decade, but I remain very skeptical. As more and more of the public square of discourse is created, managed, and maintained on platforms completely controlled by individual companies, those platforms will continue to lack the kind of governance that makes communities viable and functional. Given that the handful of technology companies that increasingly control discourse are primarily run by very privileged individuals it seems very likely that those individuals will continue to create systems and platforms that are not responsive to the issues that those who are vulnerable and less privileged face on the Web."

T. Rob Wyatt, an independent network security consultant, commented, "Humans are hard-wired to react without thinking to perceived danger. Marketing and political analysts fine-tuned to near perfection methods of manipulation of human cognitive biases long before the internet came along. Now that we practice that manipulation at internet scale and in instantaneous real time it doesn't even require human bad actors to drive it. Inherent systemic incentives power feedback loops that generate and perpetuate memes, with human biases making the negative ones more robust. Meanwhile, governments are cracking down on privacy. Even while ostensibly strengthening privacy rights in the European Union, the US and UK governments are on the verge of mandating back doors into all encrypted communication. https://medium.com/@tdotrob/indifference-and-algorithms-2e37b7042e9"

K.G. Schneider, an administrator in higher education, wrote, "Twitter has abdicated social responsibility for the troll incubator it has created, and the right wing has fomented a level of ugliness not seen since the wind-up to WWII."

Raymond Plzak, former CEO of a major regional internet governance organization, replied, "Negative actors are more active and tend to be more provocative; hence they are more likely to spur responses."

David Morar, a doctoral student and Google policy fellow at George Mason University, said, "The concern is that demanding more-inclusive interactions leads to a definitional quandary. Do we define inclusive to mean open to all, without fear of reprisal for opinions and thoughts that may not be part of the mainstream agenda? Or do we define it to mean inclusive as a form of eradicating violent threats and negative, hurtful comments? I believe that there will be an internal struggle within progressive movements and throughout a more liberal digital audience. Those who are promoting the free and open internet as a source of unbiased and representative media that give a voice to the voiceless in direct competition with corporate-sponsored entertainment conglomerates will easily find a willing coalition with those who see free speech as above any other concern, to the detriment of those who are pushing for a minimum level of respect and decency. In an age when 'don't read the comments' is no longer just a joke some netizens use to show their disdain for those who take the time to comment on an article, blog or social media post, but an accurate and worthwhile warning, there is little hope for the coexistence of censorship-free content and commentary or for a space that protects and promotes civilized discourse. One potential solution to this concern is to build systems that shield the user experience rather than shielding the platforms themselves. Simple filters and AI algorithms can personalize each user's version of the platform, be it blogging, social media networks, or any other online content and discussion platform, while leaving intact the way in which others communicate."

Joe Mandese, editor in chief of the MediaPost, wrote, "Digital, not just online, communication will continue to expand, providing more platforms for all forms of public discourse, including 'negative' ones. Of course, negative is in the eye of the beholder, but since there is no regulator on the open marketplace of digital communications, it will create as much opportunity for negative discourse as anything else."

Alice Marwick, a fellow at Data & Society, commented, "Currently, online discourse is becoming more polarized and thus more extreme, mirroring the overall separation of people with differing viewpoints in the larger US population. Simultaneously, several of the major social media players have been unwilling or slow to take action to curb organized harassment. Finally, the marketplace of online attention encourages so-called 'clickbait' articles and sensationalized news items that often contain misinformation or disinformation, or simply lack rigorous fact-checking. Without structural changes in both how social media sites respond to conflict, and the economic incentives for spreading inaccurate or sensational information, extremism and therefore conflict will continue. More importantly, the geographical and psychological segmentation of the US population into 'red' and 'blue' neighborhoods, communities, and states is unlikely to change. It is the latter that gives rise to overall political polarization, which is reflected in the incivility of online discourse."

Pamela Rutledge, director of the Media Psychology Research Center, observed, "Communications are a reflection of local and global sentiment--online public discourse reflects how people feel offline. We are in a period of considerable economic and political chaos across the globe. All people instinctively seek certainty and stability to offset the fear of chaos and change. This increases tribalism and 'othering,' as people seek to make their worlds feel more stable and controllable. Media provide a means of identifying tribes and groups, and these tendencies have deep evolutionary roots. The problem won't be trolls and general troublemakers--these have always been a minority. The problem is the tendency of the cacophony of negative media voices to increase the social schisms contributing to the rising anger over a world undergoing massive shifts. We are watching what happens when the audience becomes accustomed to 'having a voice' and begins to assume that being heard entitles one's opinion to dominate rather than be part of a collaborative solution."

Sam Ladner, a respondent who shared no additional identifying details, commented, "To say that discourse will either get better, worse, or stay the same is a rather simplistic way of seeing the shifts possible. I believe that we will see both an increase in harassment and in support and positive interactions. It would be more accurate to say that these two extremes will deepen."

John Laprise, founder of the Association of Internet Users, observed, "Like any other relatively new communication medium with new users still joining it, online public discourse is still developing norms and this process takes time. I suspect that tools to tamp down on negative discourse will be developed and deployed."

Walter Minkel, a librarian, commented, "We will have less privacy, because not enough of the public seems to care much if their online space is invaded. We will see ads, ads, ads, and more ads, being shoved into our faces constantly."

Nancy Heltman, a director for Virginia State Parks, wrote, "The biggest challenge of the availability of information online is that a user has to decide what is accurate or what truth they want to accept. This isn't different than in the past—history books and encyclopedias are shaped by the people who write them. News today is as much commentary as fact, but it has been like that as long as there have been people. The problem is the internet is more readily available and people can instantly add comments. It is hard for me to imagine that things could become more polarized. We will need to be more discerning and critical rather than believing everything we see. I try to avoid reading comments on articles because there tend to be few responses I care about. I prefer to read the article, make a decision on the validity of the source, truth-test it with other sources and make my own decisions. I'd rather not see censorship/filtering because, again, whoever wields that tool also makes decisions as to what is acceptable or not. Everyone has an agenda."

David Sarokin, author of Missed Information: Better Information for Building a Wealthier, More Sustainable Future, said, "I hope we've hit bottom in terms of incivility on the internet, and we have nowhere to go but up."

Shreedeep Rayamajhi, an activist and blogger, said, "It's more of an issue of freedom of expression. Steps toward criminalisation of expression should not be taken. Social media should be given the benefit of the doubt. Let's not require social media and blogs to be a particular platform for formal expression. Their nature is to be informal."

David Klann, a media industry technology consultant, said, "On-line social discourse is, in some ways, no different than social discourse 'in real life.' I see signs of non-technical as well as technical solutions emerging to encourage and foster positive behavior in online discussion forums. As an example (I realize this is but one example), the chat function at the online radio station WFMU.org permits anonymous chats (traditionally a troll-enabling feature). This chat service remains mostly positive through a combination of peer pressure and moderator intervention."

Robin James, an associate professor of philosophy at the University of North Carolina-Charlotte, wrote, "The problem with online harassment isn't a technological problem, it's a sociopolitical problem: sexism, racism, etc. These systems of domination motivate harassment online, in the street, in homes. As technology changes and adapts, so do the underlying systems of domination. So online harassment may look different in the future, but it will still exist. Sexism and racism also impact how we need to talk about free speech: the issue here isn't censorship but power inequities. The language of 'free speech' misidentifies the actual problem: punching down, people in positions of power and privilege using speech to harass and assault people in minority positions."

John Perrino, a digital and creative communications associate at George Washington University, said, "While a drastic shift in online public discourse is unlikely in the next ten years, I do foresee a decrease in negative activities as a result of less anonymity on social media channels. The technology is here to make that happen, but it will be government and corporate policy decisions that determine the balance between privacy and online harassment."

Richard Lachmann, a professor of sociology at the University at Albany, wrote, "The internet will reflect greater conflict in most societies as economic decline and environmental pressures lead to conflicts that will be reflected online."

Chuck Gallagher, president of the Ethics Resource Group, commented, "Don't expect there to be much difference."

Tim Hulley, a respondent who shared no additional identifying details, said, "Online communication becoming more mainstream has led to a softening of some of the negative discussions and activities in a few ways: 1) An increase in normal, constructive activity has reduced the overall percentage of negative activity taking place on the Web. As the number of people using the Web grows, I believe that the positive aspects of information sharing, creative expression, and social connection will outweigh the darker sides of the Web. 2) Supporting this is the fact that the hazards of negative activity have become more widely acknowledged and sites are taking steps to curtail it. For example, many news sites are altering their comment policies and platforms, and there is now pressure on sites to curb and punish online bullying. Reddit is a big example of a site that has made this shift."

Allenna Leonard, a cybernetics researcher, said, "Already there is little privacy and anonymity if anyone skilled attempts to break it. The negativity is already very prevalent, so I don't see that there is much room for it to get worse, although I do expect that more protections will be available, especially for children and teenagers."

Mary Ellen Bates, a respondent who shared no additional identifying details, observed, "I don't see technology as a way to solve the issues of inclusiveness or maintaining civil discourse. There will always be online trolls, and any venue that fosters anonymity and privacy runs the risk of dealing with uncivil conversations."

Joshua Freeman, an IT director working in education, noted, "The world is facing some major challenges—climate change, huge refugee crises, economic woes, etc.—and no leadership prepared to lead and make difficult decisions that would lead to improvement. Since I only see things getting worse, I do not see how online life can remain unaffected, and it will get worse rather than better."

Emmanuel Edet, legal adviser at the National Information Technology Development Agency of Nigeria, said, "There are various issues that will affect online communication, but safety and security are the greatest issues that will shape this communication. Most involved in interactions will seek to guarantee safety and thus this will shape the internet for the future."

Christopher Sebastian Parker, a professor of political science at the University of Washington, said, "Online communication will continue to democratize the public sphere. However, the question would benefit from better wording. By that I mean to say that I'm not at all sure what 'more inclusive online interaction' entails. Does that mean stripping anonymity, or not?"

Lynn Dombrowski, an assistant professor in human-centered computing, commented, "Trolls won't stop trolling because the internet and humanity will always be filled with people who are insecure and like to do dumb stuff anonymously. However, people will become more emotionally savvy when dealing with trolls, because more online and emotional self-defense-oriented education will happen (at different levels such as high school and college, and so on) so people will know how to engage with people who are emotionally destructive. Additionally, it is likely that more real-name policies will be enacted and thus lessen troll behavior because it's going to likely impact employability. To be clear, I don't necessarily agree with these policies, but I think they are likely to happen."

Robert W. Glover, an assistant professor of political science at the University of Maine, commented, "Communication of all forms has always been characterized by positive and negative elements. I don't know that the internet has changed the tone of our communications, only made it easier to connect with more people at once across greater distance."

Sergio Zaragoza, CEO of Botón Rojo, observed, "Social media will evolve into social networks more enhanced toward privacy and more ephemeral content as a protection against rage comments and violence. Inclusion will grow with the penetration of social media and internet technologies in third-world countries. High-speed connections will be as necessary as drinking water. Free speech will be amplified but so will be accountability for libel or defamation and for the right to be forgotten by search engines. Anonymity will be a fight, with governments trying to regulate it and public opinion trying to amplify it. Privacy will be a hard-gained asset, won by closely guarding one's personal digital reputation."

Aidan Hall, head of user experience at TomTom Sports, said, "Negative activities are shaped by human nature, and efforts to protect against them are driven by business needs. Real improvements in protection and filtering will be balanced out by people's extra access and behaviour."

Don Lindell, a respondent who shared no additional identifying details, wrote, "I expect people will demand accountability for postings on 'public' spaces, by which I essentially mean most commercial spaces. Consequently, there will be more effort to tie your physical presence to your online presence. The first hint of this has already begun with the US Customs and Border Protection's proposed changes to the US immigration form asking for non-US citizens to disclose their online identities. Eventually, masquerading will be punishable and the overall demeanor of speech online will become more civilized because it will be more sanitized. People with real thoughts will have to share them on 'rogue' sites where they can do less harm."

Remy Cross, an assistant professor of sociology at Webster University, commented, "I expect online discourse to continue to worsen unless and until more online venues take issues such as harassment and toxicity seriously. Given the cavalier attitude that many online fora take toward behavior that does not directly affect their bottom line, unless this behavior begins to be bad for business it is unlikely to be curtailed. As for free speech concerns, most of the places where this occurs are not bound by a governmental duty to free speech, and many already restrict speech in one form or another. I see the ultimate result of such toxicity being the creation of a number of 'radioactive' online spaces where such toxicity is encouraged (like 4chan is now) and where discourse by underrepresented or maligned groups is nearly absent."

Rob Smith, a software developer and privacy activist, said, "Online abuse is likely to continue at a similar rate to the current one, although the public will become more aware of it and it will be more widely condemned. I suspect many individual forums will become safer spaces, with their owners and moderators making substantial efforts in keeping them that way. I'm sure other spaces will, as now, adhere to a slavishly 'free speech' doctrine. It wouldn't surprise me if these approaches became more polarised still in the coming years. People will also become more aware of the consequences of participating in online discussion and hopefully develop better practice. Consumer organisations, blog owners, social media owners etc. will also hopefully provide better advice to participants as well as clearly defined privacy and moderation policies. The existence of more and safer spaces will not necessarily reduce online abuse. They may be used to search for abuse targets in other spaces. The best defence against online abuse will therefore remain anonymity. There will also be many spaces in which people want or need to participate which are not safe, even if abusive behaviour is frowned upon there. I don't think we'll find a solution to that problem anytime soon. I worry about technical solutions because it seems likely that many would involve some measure of accountability for online actions. While I'm all for accountability, it will necessarily come at the cost of anonymity. Many people cannot afford, for a wide variety of reasons, to surrender their anonymity and so such solutions would surely inhibit free speech. On the other hand, technical solutions that help users maintain their anonymity or pseudonymity could certainly help to alleviate some of the problems of online abuse while leaving free speech intact."

Chris Womack, a respondent who shared no additional identifying details, wrote, "Both bad actors and bad behavior will become even more prominent/pervasive and methods for curbing and punishing such behavior will become more robust, both online and off—with legal ramifications for the worst online offenses becoming both better-codified and better-enforced. Old trolls will either mend their ways or be put away, but there will always be new trolls coming up behind them."

Yutan Getzler, an associate professor and department chair at Kenyon College, observed, "I do not expect it to change in a major way. People are people. There is a tendency by people who were involved early on in things like BBS or IRC to overestimate how broadly friendly these spaces were, as they tended to be small and homogenous with regard to gender/race/interests. Also, as much as there are flares of hateful speech, people also can support those who get hated on when it seems broadly unfair. That being said, I avoid social media and online commentary specifically because I have no interest in the vitriol it inevitably invites."

Andrew Eisenberg, a respondent who shared no additional identifying details, noted, "Internet communication is still young. As people and society become more used to social networks and the mores catch up to the technology, individuals will be better equipped to deal with online harassment and other cyber-bullying problems. In fact, the term 'cyber-bullying' will become meaningless. Instead we will just call it what it is: bullying, harassment, and anger (that happens to take place online)."

Matt Mathis, a respondent who shared no additional identifying details, wrote, "I choose a better answer: 'It depends on other changes.' For example, legislation to enable the good guys to forcibly disable insecure devices that the bad guys are using to attack others."

Giacomo Mazzone, head of institutional relations at the European Broadcasting Union, commented, "Social media are simply a reflection of the society in which they are encapsulated. In Europe, the US, and all the rich countries of the world, the social media debate will worsen because in the next decade the populations there will become older and poorer. It's demography, stupid!"

John Cato, a senior software engineer, said, "I don't think the overall tone of the internet has changed since its very early days. Trolling for arguments has been an internet tradition since Usenet. Some services may be able to mitigate the problem slightly by forcing people to use their real identities, but wherever you have anonymity you will have people who are there just to make other people angry. There will still be moderated, walled-garden services where all interactions are monitored, as well as free form, anything-goes forums, as well as everything in between. Debating what impact service moderation has on free speech is irrelevant. Free speech doesn't give someone the right to a platform, nor does it shield a person from criticism. Companies like Facebook or Twitter are well within their rights to restrict anything that appears on their service. The structure of the internet is completely open, anyone can buy a server and host whatever content they like if they have something to say."

Julie Gomoll, CEO at Julie Gomoll Inc., said, "I expect the tone of communication will stay roughly the same, but some of the methods will change. We'll learn how to protect ourselves from this kind of trolling, and the trolls will find another method. Problems/solutions will stairstep, just like computer viruses/inoculations. Trolls have been and will be around forever—that will never change."

Grant Barber, an Episcopal priest, said, "I suspect bad actors will get what they want if they want it badly enough."

Jeff Kaluski, a respondent who shared no additional identifying details, said, "There will always be dickheads, and no matter what mechanisms are put in place, the dickheads will always find a way. They will become less relevant, and easier to delete, but the discourse will always have bad actors."

Thornton Prime, a cloud computing architect, commented, "There is a strong possibility that some of what is currently perceived as negative activity will become viewed as positive whistle-blowing. The most notable example is the actions of Edward Snowden. While he is currently perceived by the government as a rogue inside hacker, many in the public view him as a whistle-blower who brought attention to and sparked debate about fundamental privacy rights in a digital age."

Wolfie Rankin, a respondent who shared no additional identifying details, said, "While I expect use of the internet to grow, I doubt things will get better or worse. I've been online since the 1990s, and don't feel that people themselves have changed much in what they say online."

Richard Milner, a respondent who shared no additional identifying details, commented, "We are in a phase where people's social interactions online are freed from the previous face-to-face constraints. Social mores—ideas of politeness and how they are transmitted and enforced—have not yet caught up; however, I believe they will. There already is automated filtering of swearing on some systems, and this no doubt could be extended to speech patterns indicating rudeness or racism and the like. I do not believe that it ultimately can substitute for human moderation, whether it comes from 'official' forum moderators or from the interactions between ordinary users. This of course will not apply to extremist sites where the whole point is to enter an echo chamber that confirms and validates previously held opinions. These sites, by definition of being extremist, though, are not likely to come to dominate the mainstream."

Dara McHugh, a respondent who shared no additional identifying details, said, "There will be enhanced legislative and technical approaches to controlling the tone of online discourse, driven by a combination of genuine concern from activists and 'soft' opportunism from political elites who will attempt to use it to stifle criticism and police discourse. The overall trend in society is towards greater inequality and social conflict, and this will be reflected in online discourse."

Will Ludwigsen, a respondent who shared no additional identifying details, said, "My suspicion (perhaps my hope, now that I think about it), is that the internet will naturally bifurcate into a wild, anything-goes environment and a curated one. The need for safe spaces and reliable information will eventually lead to more 'trusted' and 'moderated' places, though of course the question is whom we're trusting to do the moderating (probably corporations) and what's in it for them."

Seti Gershberg, executive producer and creative director at Arizona Studios, wrote, "Bad behavior on the internet is a function of human behavior. There are a certain number of bullies, trolls, bad actors, etc. I see no reason or signs that humans will evolve into better humans, so why would behavior improve online? By the same token, there is no reason it will get worse. It is simply a reflection of who we are."

Nick Tredennick, a technology analyst, commented, "Reading a little history of—for example—early America shows that communication was quite polar even then; I don't expect a major change."

Jaime Solís, a respondent who lives in Spain, replied, "I see things getting worse through AI before they get any better. I can only hope that by the end of the next decade we start to discern the wheat from the straw."

Joshua Freeman, a respondent who shared no additional identifying details, said, "I don't have data to undergird my opinion things will be more negative—it just seems the world is getting less civil, not more."

Charles Freeman, a respondent who shared no additional identifying details, wrote, "I expect that we will see some change, to the positive, of our online discourse. Primarily this will be due to the broadening of access to the Internet as a communication mechanism in harder-to-reach places in the world with more diverse contributors. Many of the negative aspects we hear about are blown up primarily to sell copy. That said, increased access to the internet goes to everyone, not just those with positive contributions. Systems like TOR will be more prevalent and—with luck—help tear down tyrannical aspects of our world and make it a smaller, more communicative place to live."

Amber Tuthill, a respondent who shared no additional identifying details, replied, "New movements in social media currently appear to be creating a close-minded, divisive medium for black-and-white opinions. There is a development towards self-censorship by internet users themselves, who adhere to strict rhetoric with no room for negotiation. I suspect that technology will only exploit these divisive behaviors through the advent of social media AIs that will post fake opinions as a form of propaganda to keep people separated, for the biggest threat to centralized powers is a united global network. If this can be attacked, centralized powers can enact a 'divide and conquer' strategy."

Rick Dudley, a respondent who shared no additional identifying details, wrote, "This depends very strongly on how re-decentralization and strong attribution efforts progress on the web. If users are able to set criteria to filter their own content, then negative social activities will decrease; otherwise, I suspect they will stay about the same."

Laurence Cuffe, a teacher, said, "As media becomes more balkanized and selectively tailored to the user, we will see less and less negative content, or even content which diverges from our own worldview."

Edward Tomchin, a retiree, wrote, "I expect and am, in my small way, working toward improving attitudes and commentary in the next decade. I have a sense that people are getting tired and disgusted with the current state of rhetoric and behavior as evidenced and likely led by a number of politicians who have allowed themselves to sink into the muck that at times engulfs humankind and our civilization. I rely on my perception of history that while humankind frequently encounters and engages in backsliding, overall, our movement has been forward toward higher morality and civilized behavior. We have just completed a century of wall-to-wall war, which I believe will propel us to a future of peace and brotherhood around the globe. I also believe the horrific behavior we are currently seeing is partly due to a societal form of PTSD resulting from our violent and destructive behavior in the past century. We have always risen above our brutal past and I believe we are at the beginning of a new era in human history, one that will prove our ability to rise above ourselves. We have survived so far and always managed to overcome our past mistakes and failures. We keep moving forward and I see no reason to believe that will change."

Megan Browndorf, on the staff at Towson University, commented, "The internet will evolve into a space very much like the physical public sphere. There are angry, loud people in the physical public sphere. There will continue to be angry, loud people in the digital one. I do, however, expect laws and regulations to develop which will de-center some of these conversations and actors. I expect that we will develop a way together eventually to enforce consequences for disruptive and harassing behavior on the internet. I do not expect this to happen soon, but I do expect it to begin in the next decade."

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., replied, "Public discourse online is an emerging form of self-presentation, and this presenting of self affects what we say and how we say it. The reason such discourse is often shaped by bad actors, harassment, trolls—and an overall tone of gripe and distrust: online rhetoric is no longer a display of logic modified by grammar and punctuation, nor is it solely message and response. Public discourse is Selfie Theater, often contorting into digital pretend. The younger you are, the more your sense of self is shared, even 'tried on' as you present yourself to your peers, your network. Further, to some extent, the self you display is the self you try to become. While this is a slippery notion, our communication tools—and our headlines—increasingly tend to validate that approach where what is said evokes 'likes' and hearts—and 'friends' are currencies of approval and lies. In trolling, even challenging or calling out those who agree with you, self-presentation becomes a game of catch-me-if-you-can. What shapes our discourse? Our hidden physical state. Online our identity is disembodied, only a simulation of what we do in the physical presence of others; it is missing our moving countenance, the mask that encounters—and counters—the world. As online discourse becomes more app-enabled, our ability to disembody ourselves will only grow more dexterous. Texting is our sim face. Erving Goffman, the Canadian-American sociologist who wrote The Presentation of Self in Everyday Life, said when an individual appears before others he gives a 'performance,' which shows initially on our face. Online, our face is absent—a snapshot at best, a line of code or address at worst. Politeness, sociologists tell us, is about 'facework'—presenting a face, saving face, smiling, reassuring, showing. But online we are disembodied; our actual faces are elsewhere. 
This present-yet-absent dynamic not only affects our identity, whether people can identify us behind the shield of online presentation; it also affects our speech and, ultimately, our 'performance.' Into this pool jump the hackers and mischief-makers and deadly serious manipulators who realize that they can do their work behind the shield with impunity—until they are caught or 'outed.' The purpose of trolling, as Adrian Chen has written, is to flood social media with fake content with the intent of 'destroying the possibility of using the Internet as a democratic space.' Since we cannot see the actors, we do not realize their performance is an act. When this happens, online discourse can quickly devolve to undermine the messenger’s credibility, and we ultimately question what is made up or fake and what is real. In the next decade a number of factors in public discourse online will continue to converge and vigorously affect each other: 1) Nowness is the ultimate arbiter: the value of our discourse (everything we see or hear) will be weighted by how immediate or instantly seen and communicated the information is. Real-time search, geolocation, just-in-time updates, Twitter, etc., are making of now, the present moment, an all-subsuming reality that tends to bypass anything that isn’t hyper-current. 2) Faceless selfism rocks: With photos and video, we can present ourselves dimensionally, but due to the lack of 'facework' in the online sim, our faces are absent or frozen in a framed portrait found elsewhere, and so there is no face-to-face, no dynamic interactivity, no responsive reading to our commentary, except in a follow-up comment. Still, we will get better at using public discourse as self-promotion. 3) Anonymity changes us: identity-shielding leads to a different set of 'manners' or mannerisms that stem from our sense (not accurate, of course) that online we are anonymous.
4) Context AWOL: Our present 'filter failure,' to borrow Clay Shirky’s phrase, is an almost complete lack of context, reality check, or perspective. In the next decade we will start building better contextual frameworks for information. 5) Volume formula: The volume of content, from all quarters—anyone with a keypad, a device—makes it difficult to manage responses, or even to filter for relevance, and tends to favor emotional button-pushing in order to be noticed. 6) Ersatz us: Online identities will be more made-up, more fictional, but also more malleable than typical 'facework' or other human interactions. We can pretend, for a while, to be an ersatz version of ourselves. 7) Any retort in a (tweet) storm: Again, given the lack of 'facework' or immediate facial response that defined human response for millennia, we will ramp up the emotional content of messaging to ensure some kind of response, frequently rewarding the brash and outrageous over the slow and thoughtful. This emotional button-pushing often ignores facts in favor of fear, truth in favor of innuendo, perspective in favor of partisan firebranding. Social media and digital commentary will evolve in the next decade by incorporating how you feel into what you say and think. Rana El Kaliouby, CEO of Affectiva, asks, 'What if our devices could sense, and then adapt to, our emotions?' Further, many aspects of your behavior, activities, and preferences will be incorporated into what used to be considered purely verbal or written commentary. Your location, your emotional response to a given topic or situation, your videos and selfies, perhaps even your biometrics (heart rate, blood pressure, sleep duration), will all become part of the 'you' the world sees as self-monitors and novel ways to quantify ourselves become incorporated into the apps we use to access social media.
These will create a curious tension: on the one hand, we will capture and present those aspects of ourselves that we most want the world to see—the best of who we are. And we will have the snaps and data to prove it. On the other hand, we may be just using facts and evidence to further the widespread practice of digital pretend. It is not surprising that rather than the content of an individual’s messaging or character, people today tout someone they like as being 'authentic.' Social media are vehicles of deliberate inauthenticity. Online anyone can pose, pretend, exaggerate, and their tone can quickly shift from polite to disparaging because that can be a cooler, more hip way of presenting oneself. Because there is no filter, communication tools on the ends of our fingers can also facilitate a shot straight from the amygdala, the fight or flight part of the brain that can’t distinguish between a real and perceived threat. It has taken us since around 1440 (invention of the printing press) to understand the fuller implications of seeing our world mediated through printed words. In the next decade we will become more aware of some aspects of engaging with social media, but our meta-awareness of how social media use us as we use them will still be in its infancy. At the same time, social media connectivity will continue to be built into everything we use in our lives. Social media and digital commentary will become more tool- and app-mediated in the coming decade as everything we do becomes a documentary and each of us aspires to be a documentarian. (Facebook: 'Live video can be a powerful tool in a crisis—to document events or ask for help.') We will not turn our back on the dopamine-hit of instant response as everyone in the world becomes a participant in Breaking News Now—the instant-on world buzz of events and commentary. We will seek to become the news as often as we check it. 
Social media will continue to alter our habits of dressing, eating, and relating as we crowdsource response to everything from our clothes to our bodies. This metalife will take on even greater importance for identity and reputation, and will change every social sphere, from personal to political. Regarding solutions that encourage more-inclusive online interactions, there is no editorial board for public discourse online. We haven’t found, or even thought up, the rules of online engagement. We’ve just borrowed them, mostly unconsciously, from the last place we got comfortable: our newspapers and magazines. Their trade-profession rules—hire and assemble professionals who are responsible to management and a board—no longer apply to the online world. While it is exhilarating to have no rules of engagement for online discourse, beyond what we have inherited from earlier media formats, collectivity and connectivity demand civility—or chaos and intolerance, like weeds, take over the landscape. McLuhan’s 'new state of multitudinous tribal existences' neatly describes online echo chambers where tribal affinities discourage and penalize inclusiveness. This is a threat to all democratic institutions, based on the Socratic principle of dialogue and debate, because tribalism tends to cannibalize rather than engage those not in the tribe. For this reason, the key dynamic of public discourse online will be the willingness (or the resolute refusal) to embrace complexity and diversity. Difference of perspective is the genetic diversity—the evolutionary forward momentum—of collective commentary. Inbreeding, whether in genetics or discourse, creates deformity. So first, we must embrace and champion tolerance for diverse and conflicting perspectives. Then, we must become savvier about the tools we use to build messaging.
Communication tools, all of them, create distortions; from Twitter to Snapchat to Instagram, we’re just starting to see what compression or click-baiting does to a message. We are still learning how to use the remarkable freedoms social media and digital commentary enable. The novel, for example, evolved considerably from Cervantes' Quixote in 1605 to Brontë and Dickens and Tolstoy and then Joyce, Morrison, or Murakami. Different voices teach us differences. We are social animals who learn by observing others. While online commentary is different from novels or poetry, the process of seeing what others are doing and reacting to it is similar. We will evolve, fitfully, and while there may be more widespread demand for technological solutions that encourage more-inclusive online interactions, and we may rely on bots or AI to aid us in composition, in the end discourse relies on language, reason, facts, and insights—but above all on the willing embrace of differences. What will happen to free speech, anonymity and privacy? The cost of free is rising: The more your speech challenges authoritarian, sexist, and other specious moral perspectives, the more you become a target for various kinds of attack, many of which are personal, deliberately intimidating, and thereby frightening. These attacks, especially against women, often come from people who count on the openness of a democratic society to speak their mind, while also counting on the Internet’s capacity to keep their identity anonymous. Online discourse then provides the ultimate sniper’s perch. Anonymity and privacy are constructs that were developed before online media and digital identities. In the digital age, they mean different things: we might say they are morphing into evolved constructs that make our older notions wholly inadequate. Imagine a young woman from the Victorian period dropped magically into 2016.
She, like most of her friends, wakes up 20 minutes early so she can do her makeup and check her clothes with six to eight friends via snaps and text messages; she just fell for a boy who is sexting her and she wants to sext back until their messaging gets hacked by his friend; she will document her upcoming vacation at the shore on Instagram and Snapchat; her future employer, a three-letter government agency, will later review many of these messages as well as other details of her online life and may consider whether these messages affect her hiring profile. Any Victorian woman’s life lacked the added dimension of this other entity, a metalife that is inescapable, permanent, identity-focused. The modern woman’s anonymity and privacy, by comparison, are non-existent. Regarding free speech, anonymity and privacy—three related, but very different topics—our devices both enable and confound our ability to make sense of these issues. Everyone with a device can find a platform, or has built a platform for speech. Free speech, hate speech, double speak, lies and distortions are all part of the new freedom that everyone-as-publisher brings with it. We can no longer think in binary formulations: 'more or less shaped by negative activities' doesn’t accurately capture the new digital landscape. We are at the vortex of the yin-yang: we are going to have both. The underside of freedom is the messiness, even the chaos, of coming to consensus amidst the shifting sands of agreement. Curiously, as some of us seek privacy and anonymity, these will vanish due to intrusive technologies that advance a myriad of recognition platforms. So while many of us worry that our identities are known in greater detail and specificity than ever before, as we continue to use devices to create online discourse, our locations and activities become part of multiple tracking data streams that ensure we have no privacy or anonymity.
(idiCORE, a Florida data fusion company, says it has already built a profile on every American adult.) So in the very democratic act of engaging in public discourse and expressing our views, we are possibly targeting ourselves by identifying ourselves and ensuring that we will never have privacy or be anonymous. This was brought home recently when a prominent feminist writer dropped off social media after being harassed online by anonymous stalkers who posted rape and death threats against her 5-year-old daughter. And this never-anonymous realization brings with it a kind of nihilism, a bravado, that will further inspire many to create fake identities, fake histories, fake associations based on the thinnest of connections. Finally, it is easy, with a bit of paraphernalia (all available online), to change our persona to a Comic-Con character, a Pokémon, or something more menacing. So, as we port more of our self-presentations into online forums and venues, our sense of self may waver, morph, or multiply, and for some the digital persona may be preferable to their so-called 'real self.'"

If you wish to read the full report with analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet.xhtml

To read anonymous survey participants' responses with no analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/social_future_of_the_internet_anon.xhtml

About this Canvassing of Experts

The expert predictions reported here about the impact of the Internet over the next 10 years came in response to one of five questions asked by the Pew Research Center Internet Project and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 1 and August 12, 2016. This is the seventh Internet study the two organizations have conducted together. For this project, we invited nearly 8,000 experts and highly engaged members of the interested public to share their opinions on the likely future of the Internet and 1,537 responded to this question. 

The Web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research and Elon University during the six previous Future of the Internet studies, as well as those identified across 12 years of studying the Internet realm during its formative years in the early 1990s. Among those invited were people who are active in global Internet governance and Internet research activities, including the IETF, ICANN, ISOC, the ITU, AoIR, and the OECD. We also invited a large number of professionals and policy people from technology businesses; government (NSF, FCC, European Union, and so on); think tanks and interest networks (for instance those that include professionals and academics in anthropology/sociology/psychology/law/political science/communications); globally located people working with communications technologies in government positions; other technologists, entrepreneurs and innovators working in the technology sector; top universities’ engineering/computer science and business/entrepreneurship faculty, graduate students and post-grad researchers; plus many who are active in civil society organizations such as APC, EPIC, EFF and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the survey link with others they believed would have an interest in participating, thus there was a "snowball" effect as the invitees were joined by people they invited.

Since the data are based on a non-random sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample. The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. About 80% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of Internet interest,” 25% identified themselves as research scientists; 7% said they were entrepreneurs or business leaders; 8% as authors, editors or journalists; 14% as technology developers or administrators; 10% as advocates or activist users; 9% said they were futurists or consultants; 2% as legislators, politicians or lawyers; 2% as pioneers or originators; and 25% specified their primary area of interest as “other.”

More than half of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their Internet expertise or background, and this was noted where relevant in the reports.

A small selection from the hundreds of organizations survey participants identified as places of work:
AT&T, MediaPost, Berkman Klein Center at Harvard University, IBM, Tesla Motors, Vodaphone, Internet Education Foundation, Data & Society, York University, Michigan State University, Red Hat, OpenMedia, Singularity University, KMP Global, Mozilla, National Public Radio, Microsoft, Semantic Studios, Stanford University Digital Civil Society Lab, U.S. Ignite, Internet Engineering Task Force, NASA, Altimeter, Syracuse University, Square, Adobe, Social Media Research Foundation, Craig's List, Google, MIT, New York Times, International Association of Privacy Professionals, Oxford University's Martin School, Flipboard, Raytheon BBN, New America, Karlsruhe Institute, Gigaom, Innovation Watch, Cyborgology, Human Rights Watch, We Media, NYU, U.S. Department of Defense, Philosophy Talk, Rose-Hulman Institute of Technology, UCLA, Hack the Hood, European Digital Rights, Computerworld, Neustar, Institute for the Future, Gartner, Rochester Institute of Technology, Gilder Publishing, Rice University Humanities Research Center, Digital Economy Research Center, Wired, DareDisrupt, AAI Foresight, Rensselaer Polytechnic Institute, Future of Humanity Institute, Carnegie Mellon University, Electronic Frontier Foundation, Internet Corporation for Assigned Names and Numbers, Genentech, University of Pennsylvania, New Jersey Institute of Technology, Michigan State University, Cyber Conflict Studies Association, Georgia Tech, Intelligent Community Forum, University of Copenhagen, Digital Rights Watch, Futurewei, Kenya ICT Network, Institute of the Information Society, Telecommunities Canada, dotTBA, Farpoint Group, University of California-Irvine, University of California-Berkeley, Hewlett Packard, Cisco, United Steelworkers, University of Milan, Electronic Privacy Information Center, Federal Communications Commission, University of Toronto, Center for Policy on Emerging Technologies, Tech Networks of Boston, Queensland University of Technology, Privacy International, Institute for Ethics 
and Emerging Technologies, University of Michigan, Nonprofit Technology Network, Worcester Polytechnic Institute, Jet Propulsion Laboratory, Internet Society, Booz Allen Hamilton, Lockheed Martin, UK Government Digital Service, Yale University, California Institute of Technology, Groupon, Nokia, Logic Technology, Unisys, Spacetel, University of California-Santa Barbara, Internet Initiative Japan, The Linux Foundation, National Science Foundation, InformationWeek, Free Software Foundation, The Aspen Institute, Center for Digital Education, National Institute of Standards and Technology, George Washington University, Future of Privacy Forum, Ethics Research Group.
