This page holds hundreds of complete responses from experts who were asked in a Summer 2021 canvassing if the toxic side of digital public forums such as social media platforms can be significantly improved by 2035.
Critics say activities on social media platforms are damaging democracy and the fabric of society. Can these digital spaces be improved to better serve the public good by 2035? How? If not, why not? Researchers at Elon University and the Pew Research Internet and Technology Project asked experts to examine the forces at play and suggest solutions. They were invited to share their insights via a web-based instrument that was open to them from June 29-Aug. 2, 2021.
The responses on this page are all from experts who preferred to comment anonymously. The report with full analysis is here. This long-scroll page has a brief outline of major findings of the report, followed by all of the anonymous responses with no analysis, just the comments.
The Question – Bettering the digital public sphere: An Atlantic Monthly piece by Anne Applebaum and Peter Pomerantsev, “How to Put Out Democracy’s Dumpster Fire,” provides an overview of the questions that are being raised about the tone and impact of digital life. Today people are debating big ideas: How much harm does the current online environment cause? What kinds of changes in digital spaces might have an impact for the better? Will technology developers, civil society, and government and business leaders find ways to create better, safer, more-equitable digital public spaces?
Looking ahead to 2035, can digital spaces and people’s use of them be changed in ways that significantly serve the public good? Yes, or No?
862 respondents answered
- 61% said by 2035, digital spaces and people’s use of them will change in ways that significantly serve the public good.
- 39% said by 2035, digital spaces and people’s use of them will NOT change in ways that significantly serve the public good.
- It is important to note that a large share of those who chose “yes” – that online public spaces will improve significantly by 2035 – said it was their “hope” only and/or also wrote in their answers that the changes between now and then could go either way. They often listed one or more difficult hurdles to overcome before that outcome can be achieved. The simple quantitative results are not fully indicative of the complexities of the challenges now and in the future. The important findings are found in the respondents’ rich, deep qualitative replies; the key findings are reflected in the most commonly occurring insights shared there.
Qualitative responses published on this page were initiated by this follow-up prompt: If you answered “yes,” what reforms or initiatives may have the biggest impact? What role do you see tech leaders and/or politicians and/or the public playing in this evolution? What could be improved about digital life for the average user in 2035? What current problems do you see being diminished? Which will persist and continue to raise major concerns? If you answered “no,” why do you think digital spaces and digital life will not be substantially better by 2035? What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces may not much change?
Among the key themes emerging among hopeful respondents’ qualitative replies were:
* Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization and – in order to accelerate user engagement – these algorithms favor extreme and hateful speech. They said social media platforms have come to dominate the public’s attention to the point of replacing journalism and other traditional sources in providing information to citizens. These experts argued that surveillance capitalism is not the only way to organize digital spaces. They predict that better spaces in the future will be built on algorithms designed with the public good and ethical imperatives at their core. They hope upgraded digital “town squares” will encourage consensus rather than division, downgrade misinformation and deepfakes, surface diverse voices, kick out “bozos and bots,” enable affinity networks and engender pro-social emotions such as empathy and joy.
* Government regulation plus less-direct “soft” pressure by government will help shape corporations’ adoption of more ethical behavior: A large share of these experts predicted that legislation and regulation of digital spaces will expand; they said the new rules are likely to focus on upgrading online communities, solving issues of privacy/surveillance and giving people more control over their personal data. Some argued that too much government regulation could lead to negative outcomes, possibly stifling innovation and free speech. There are worries that overt regulation of technology will empower authoritarian governments by letting them punish dissidents under the guise of “fighting misinformation.” Some foresee a combination of carefully directed regulation and “soft” public and political pressure on big tech, leading corporations to be more responsive and attuned to the ethical design of online spaces.
* The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will bring improvements: A share of these experts predicted that the public will apply more pressure for the reform of digital spaces by 2035. Many said tech literacy will increase, especially if new and improved programs arise to inform and educate the public. They expect that people who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces. Some experts noted that this is how every previous advance in human communication has played out.
* New internet governance structures will appear that draw on collaborations among citizens, businesses and governments: A portion of these experts predict the most promising initiatives will be those in which institutions collaborate along with civil society to work for positive change that will institutionalize new forms of governance of online spaces with public input. They expect these multistakeholder efforts will redesign the digital sphere for the better, upgrading a tech-building ecosystem that is now too reliant on venture capital, fast-growth startup firms and the commodification of people’s online activities.
Among the key themes emerging among worried respondents’ answers were:
* Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears, these experts argued. In a digitally networked world in which people are constantly surveilled and their passions are discoverable, messages that weaponize human frailties and foster mis/disinformation will continue to be spread by those who wish to exert influence to meet political or commercial goals or cultivate divisiveness and hatred.
* The trends toward more datafication and surveillance of human activity are unstoppable: A share of experts said advances in digital technology will worsen the prospects for improving online spaces. They said more human activity will be quantified; more “smart” devices will drive people’s lives; more environments will be monitored. Those who control tech will possess more knowledge about individuals than the people know themselves, predicting their behavior, getting inside their minds, pushing subtle messages to them and steering them toward certain outcomes; such “psychographic manipulation” is already being used to tear cultures asunder, threaten democracy and stealthily stifle people’s free will.
* Haters, polarizers and jerks will gain more power: These experts noted that people’s instincts toward self-interest and fear of “the other” have led them to commit damaging acts in every social space throughout history, but the online world is different because it enables instantaneous widespread provocations at low cost, and it affords bad actors anonymity to spread any message. They argued that the current platforms, with their millions to billions of users, or any new spaces that might be innovated and introduced can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.
* Humans can’t keep up with the speed and complexity of digital change: Internet-enabled systems are too large, too fast, too complex and constantly morphing, making it impossible for either regulation or social norms to keep up, according to some of these experts. They explained that accelerating change will not be reined in, meaning that new threats will continue to emerge as new tech advances arise. Because the global network is too widespread and distributed to possibly be “policed,” these experts argue that humans and human organizations as they are structured today cannot respond efficiently and effectively to challenges confronting the digital public sphere.
News release with nutshell version of report findings is available here
Responses from all those preferring to make their remarks anonymous. Some are longer versions of expert responses contained in shorter form in the survey report.
Some people chose not to provide a written elaboration, so there are not 800-plus recorded here. Some of the following are the longer versions of responses that are contained in shorter form in one or more places in the survey report. Credited responses are carried on a separate page. These comments were collected in an opt-in invitation to more than 10,000 people, asking them to share their responses to a web-based questionnaire fielded June 29-Aug. 2, 2021.
A professor of computer science and data studies wrote, “The damage done by digital spaces seems irreparable. Society is fractured in regard to basic truths, so leaders cannot even make changes for the better because factions can’t agree on what ‘better’ means.”
A foresight strategist based in Washington, D.C., said, “I believe interventions such as enforceable data-privacy regulations, antitrust enforcement against ‘big tech,’ better integration of humanities and computer science education and continued investment in internet-freedom initiatives around the globe may help create conditions that improve digital life for people everywhere. This is necessary because by 2035, exogenous factors such as climate change and authoritarianism will play even more significant roles in shaping global society at large and social adoption of digital spaces in particular. The net results will be both the increased use of pervasive digital surveillance/algorithmic governance by large state and commercial actors and increased grassroots techno-social liberatory activity.”
An internet pioneer working at the intersection of technology, business/economics and policy to drive effective change said, “Digital spaces will be even more ubiquitous in 2035 than today, so I hope we won’t even have to think about ‘am I online or not?’ by then. That’s only not creepy if it’s a positive experience. I don’t think we’re going to get there through policing or enforcement by technology, technology companies or governments. I do think we need support from all of those as well as public support for improved discourse, but there is no magic bullet, and there is nothing to enforce. What will help is having some level of accountability and a visible history of all interactions in digital spaces for identifiable individuals and for organizations.”
A tech CEO, founder and digital strategist said, “I answer ‘yes’ as an optimist, but evolution could go either way. I’m hopeful that cooler and smarter heads will prevail and reverse the dystopian trends we’ve been seeing. A positive transformation could occur if the large tech platforms can find ways to mitigate effects of propaganda and disinformation campaigns. Legislation could help, but much depends on the will and capabilities of the platform operators. How well can they manage the problem of disinformation while honoring the principle of free speech? Tech monopolies have evolved partly due to network effects, and these are widely held to be a substantial part of the problem. Addressing monopoly is partly a legal issue, partly a business issue and partly (in this case) an issue of technology. Possible solutions might be to restrict the uses of data and enforce interoperability.”
A French professor of information science wrote, “On the one hand, humanity has shown forms of resilience and intelligence to combat many plagues. But human nature in each of us seeks power, money and domination which are such strong attractors that they are very difficult to give up. Buddhists describe futility and the need to give up any desire for possessions responsible for the suffering of all men and all species in the eco-system who suffer the hegemony of man on Earth. Powerful people find new ways to dominate the weakest on the internet. Technological tools and the digital space are primarily at the service of those who master the technologies, the specifications of these tools and even the ethical charters through the lobbying that these companies organize … Hell is paved with good intentions. Digital ethical charters strongly influenced by digital companies do not make the digital spaces ethical. At the beginning of the internet years (1980-1990), this digital technology was at the service of science and researchers and made for knowledge-sharing and education. Today, the internet is 95% at the service of marketing and customer profiling, and the dominant players recursively feed on profits and the recurring influence of influencers followed on the net (most of the time because they benefit from a superficial positive image). The internet has become a place of control and surveillance over all people. It has become a threat to democracy and the government institutions that become themselves controlled and influenced by digital companies. Some positive points: 1) The internet remains a very useful tool for disseminating knowledge and scientific publications and teaching of values, but many mis- and disinformation is hindering this. 2) A genuine internet that is only dedicated to art, sciences and education, free of advertising, should be developed. 
3) Everywhere, the internet instantaneously informs of bad attitudes and attacks on freedoms, for instance as new means of repressing predators (sexual harassment is an example). (However, it simultaneously facilitates harassment and fraud.)”
A business professor researching smart cities and artificial intelligence said, “I am very fearful about the impact of AI on digital spaces. While AI has been around for a while, it is only in the last decade that, through its deployment in social media, we have started to see its impact on, inter alia, human nature (for those who have access to smart technology, it has become an addiction), discourse (echo chambers have never been more entrenched), and consent/agency (do I really hold a certain belief or have I been nudged towards it?). Yes, I do think that there are ways to move our societal trajectory towards a more optimistic future. These include meaningful and impactful regulation; more pervasive ethical training for anybody involved in creating, commercializing or using ‘smart’ technologies; greater educational efforts towards equipping students of all ages with critical-thinking tools; and less capture by – bitter and divisive – political interest.”
An internet pioneer commented, “Our societal descent into truth decay – which threatens the world like no other ill – will not be solved by digital savants, some different form of internet governance, nor new laws-regulations-antitrust actions. Truth decay is first a symptom; its seeds were planted long ago in jarring market transitions across the economy, in employment, in political action and rhetoric. The internet – an intellectual buffet that begins and ends with dessert – has accelerated and amplified the descent, but cannot be re-shaped to stop it, let alone reverse it.”
A writer, speaker and teacher commented, “Email, social media and other forms of digital media are tools. They can be very useful or harmful, depending on how they are deployed. The main problem is capitalism – digital tools are used to sell advertising. So, the public good is not at the center, profits are. Much of the harm caused by social media is due to the fact that the people running it are primarily interested in making a profit. If that were not the case, it could be an amazing tool. We need more opportunities to use digital media that are not aimed at selling advertising.”
An award-winning author and journalist based in the Northeast U.S. said, “I don’t know if the internet will be better or worse in 2035, but I think it can’t get better unless we address this:
- Today it is important to pressure tech companies to be far more accountable in battling misinformation and online racism.
- It is crucial for policymakers to do much more to police anti-trust infringements of such behemoths.
- We need to do more to teach children from the earliest ages online etiquette, information literacy and other critical-thinking skills tailored to the digital realm.
- But it’s equally or more important to re-think and re-envision the meta-architecture of digital spaces so that they can allow for more open-minded, dialectical, creative thinking and social connections.
“This meta-question seems to me to be almost entirely ignored in society today. What’s missing from national conversations about the nature of digital spaces is a realization that the architecture and aesthetics – i.e., the look and feel and bones – of our virtual realities exacerbate human inclinations to see the world in clear, binary and easily-digestible terms.
“In a nutshell, the way digital spaces are set up deeply shapes our behavior in these spaces, just as strongly as physical landscapes and human-built buildings implicitly and explicitly influence our actions and moods. In effect, the meta-quality of digital spaces disturbs me more than even the current alarming content of these realms.
“The digital realm is a space of boxes, templates, lists, bullet points and crisp brevity. In searching most people are offered a linear, pre-prioritized list of ‘answers’ – often even before they finish asking a question. The value and worth of people and objects are aligned with explicit data; ratings have become a standard of measurement that squeezes out all room for in-betweenness or dynamic change.
“In these and many other ways, digital spaces narrow our vision of what it means to know, paving the way for the types of cursory, extremist and simplistic content online we see today. And, in fact, the binary nature of a question like this one – i.e., will things get better or worse – hinders our collective ability to have constructive conversations about such intricate issues.
“While human survival depends upon a sophisticated ability to categorize, the current notion that our intellectual lives should rest upon aligning with one side or another or seeing the world in either/or terms – can be traced in my view to the chokehold that digital spaces have on our minds and lives today.
“Today we need digital spaces – from email to TikTok – that leave more room for not-knowing and for attending to issues and questions that are messy, murky and shifting. This kind of digital space might allow for multiple tempos of communication-and-response as well as for operating systems that are more in sync with freeform, associational, ‘inefficient’ types of human thinking, such as reverie, forgetting, confusion, doubt and above all, uncertainty.
“It is not a coincidence, in my view, that such mysterious yet astonishing realms of human thinking are devalued in society today.”
A professor of communication and culture who is based in Australia commented, “The biggest issue of concern is the underlying internet business model that is predicated on widespread capture and analysis of user data, and on selling of user data to third parties such as advertisers. This creates large-scale privacy concerns but also the development of various forms of targeting related to advertising, content recommendation systems and political communication.
“Placing limits on the extent of data capture and the uses which can be made of user data will be fundamental to improving digital spaces. In particular, such moves might constrain some of the monopoly power of digital platforms such as those operated by Alphabet and Facebook. Another critical reform is to evolve better governance structures for digital platforms, making governance more transparent and responsive to community concerns but also keeping it at arm’s length from direct government control.”
A principal research scientist at a major digital media laboratory commented, “As defined in economics, a public good refers to a commodity or service that is made available to all members of society and whose use does not deplete its availability for future use. In most cases public goods are administered by governments and are paid for collectively through taxation. This is not the case, and should not be the case, for digital spaces.
“Digital spaces evolve through innovation. The government has demonstrated that it is not a good innovator. Over the past 30 years, access to digital spaces has continued to grow. This is a good outcome. However, throughout this period, digital spaces have evolved significantly. So long as the government does not control innovation, digital spaces will continue to evolve and support a broad range of communication options.
“Projecting to 2035, digital life will be very present. As the barrier to entry declines, increased one-to-one, one-to-many and many-to-many activities will become available. Almost anyone who wants access to digital technology will be able to get it and will have the know-how to use it. The barrier to entry will continue to decline. However, that will only happen if private interests continue to innovate.”
A professor and researcher said, “Things that could improve digital spaces include the emergence of regulation that weakens the advertising-driven business model, which has led to providers of digital spaces intentionally and/or unintentionally encouraging confrontation and mistrust. A system of ‘nudges’ might emerge that encourages/forces users to read and consider content before widely forwarding it. Society may also educate itself about how to read and interpret the current style of communication in digital spaces.
“Things that could degrade digital space further include reliance on messages as brief as bumper stickers and headlines as the basis for making decisions (particularly about forwarding or commenting in digital spaces), a further decline in society’s ability to understand objective analysis and demand it, and a decline in broad basic critical thinking skills.
“I am reminded that in some ways the use of digital spaces resembles news in the late 19th and early 20th century. Newspapers were biased in their reporting, but people often read multiple newspapers and presumably had a sense of how to interpret and contextualize what they were reading. The big difference today is readers are no longer passive consumers of news, but rather active participants in transmitting it.”
A principal architect for one of the world’s leading technology companies responded, “2035 could be revolutionary. The pandemic compressed a decade’s evolution of communication technology down into a few months in 2020. Before it, streaming media and real-time communications were evolving independently, offering a choice of either large scale OR low latency. That changed as the technology distinctions blurred and applications emerged enabling both large scale and low latency. This manifested itself in massive online courses and interactive meetings with over 100,000 participants, as well as online events and streaming/real-time hybrids. The impact of this technology revolution made itself quickly felt.
“During the 2020 U.S. election season we saw this technology harnessed by groups such as the Lincoln Project that were able to bring more people together in a single meeting (over 100,000) than the total margin of victory in the battleground states. By 2035 it seems quite likely that we will see interactive meetings with 1 million+ viewers, turning ‘interactive podcasting’ into a viable alternative to today’s news media. In addition to the positives of this there are, of course, potential downsides – mass rallies will become much less expensive to put on and allow demagogues to get their message out more easily.”
A free software activist and ICANN and IEEE leader based in India responded, “I expect the following changes to happen by 2035: 1) Enhanced quality of life on account of technology in general, and digital technologies in particular. 2) Innovative models in participatory democracy may emerge where governments are non-authoritarian. 3) Better work-life balance and gender equity through labour-saving technologies, four-day workweeks, work-from-home. 4) More-flexible ways of learning, working, recreation and networking.”
A researcher, educator and international statesman in the field of medicine commented, “Our current uses of technology have not contributed to a better society. We are ‘always on,’ ‘present but absent,’ ‘alone in the company of others’ and inattentive. Many of the problems in the digital sphere are simply due to the ways humans’ weaknesses are magnified by technology. People have always faced challenges developing meaningful relationships, and conspiracy theories are not new. Digital technology is a catalyst. There has been a change in our communication parameters and there are cyber effects. The biggest burden is on educators to help each generation continue to develop psychologically and socially.
“When trying to use this technology to communicate, too many fail to consider others and appreciate differences. Many messages are performances and not part of building anything together. Too many people are compulsive users of this technology. Many have moved from overuse to compulsive use and from compulsive use to addiction. We have invented terms to describe our attempts to control our behavior – technology deprivation, technology detox or internet vacations are expressions suggesting people are becoming more mindful of their use.
“Many people have not used the technology to be responsive to others. Being responsive appears to have been difficult even without this technology. To ask meaningful questions, provide encouraging nonverbal communication that encourages others to continue talking, or even use a paraphrase to signal or check on understanding and to confirm others has always been difficult because it requires focusing outside oneself and on others. Now, too many post a comment and leave the field, and too many cannot seem to provide that third text (A’s message, B’s response, A’s response) in the stream that indicates closure on even the most simple task coordination. Many create dramatic messages that are variations of ‘pay attention to me’ while failing to pay attention to others! Relationships lack depth. People are using the technology to make contacts, but contacts, connections and social media followers are not friends, even if they are called that.
“I am afraid we are losing our sense of appropriateness, disclosure and intimacy in an era of disposable relationships. We are using our limited time and mental capacity to ‘keep in touch’ or ‘lurk.’ There are more than 22,000 YouTube sites with over a million followers each. There are a lot of people online to be entertained and relieve ‘boredom’ instead of developing a network of meaningful relationships. Many are not using the opportunities we have to develop conversations. Indeed, because people seem to be losing their ability to sustain conversations scholars have developed a set of questions for them to ask each other that might help them feel closer to each other. This is wonderful and also sad. I teach classes about human communication technology, and one of my early assignments in an undergraduate class is to have students engage a stranger in a 15-minute face-to-face conversation. As many as 25 to 30% of them have a difficult time doing this. One of my students said, ‘Fifteen minutes is a long time.’
“Civic engagement has had a resurgence, and people have used technology to develop activist networks. However, these will be temporary manifestations unless people form sustainable groups aimed at accomplishing renewable goals. Otherwise, these efforts will fade. Instead, people seem to have found like-minded people to confirm their biases, creating consequent social identities that dominate individuals’ personal identities.
“Most online conflict about public issues becomes ego-defensive or dramatic declarations instead of simple conflict recognizing differences and solving problems. All of this has brought many people to confuse their sense of reality. We live in a hybrid world in which our technologies have become indispensable. We seem to have lost our ability to discriminate events, news, editorials or entertainment. Indeed, some have lost their ability to discriminate simulated and virtual experiences from the rest of their lives. Advances in artificial intelligence encourage this trend.
“The trends I noted above are worse with younger people. I have seen young people become increasingly more afraid of each other. They have greater communication apprehension, regardless of technology. We can all help by being more mindful of our use of technology, helping others to be more mindful and modeling our technology use to develop greater relational and communication depth.
“There is very little that business leaders or politicians can do beyond modeling behaviors and limiting abuses associated with general use. ‘Alternate facts’ and repeated efforts to explain away what the rest of us can see and hear do not help. Using the internet to attack scientists, educators, journalists and government researchers creates the impression that all reports and sources of reports are equally true or false. Some people’s facts are more validated and reliable than others. Confirmation bias and motivated reasoning are the problems here. When the population begins to reject the garbage, there will be less of it. It will take a while since so many have staked their sense of themselves on different positions.”
An expert on media and information policy commented, “Several forces and initiatives will start to mitigate the problem thanks to an increasing awareness of the heterogeneous positive and negative impacts of digital spaces and digital life on individuals, communities and society. For one, technology designers will increasingly reconsider the behavioral and social effects of their choices and stronger ethical considerations will start to change the technological architectures of digital spaces.
“Government regulation may help align digital spaces with the public good. Although I do, in principle, trust in government and believe in the importance of good government solutions, I am concerned that the low ability of government to solve important problems will also limit its ability to find meaningful solutions that are appropriate for the challenges we face. Despite all this, digital spaces and digital life will continue to be shaped by existing social and economic inequalities, which are at the heart of many of the current challenges and will, for a long time, continue to burden the ability to engage in productive dialogue in digital spaces. Schools and initiatives will foster digital inclusion, and this will gradually increase digital literacy and digital civility.”
A futurist and consultant based in Europe said, “Regulation will impact significantly on the evolution of digital spaces, tackling some of the more egregious harms they are currently causing. The draft UK ‘online safety’ legislation – in particular the proposed duty of care for platforms – is an example of a development that may help here, together with measures to remove some of the anonymity that users currently exploit. A move away from the current, largely U.S.-centric model of internet governance will enable the current decline to be reversed. The current ‘digital sovereignty’ focus of the European Commission will be helpful in this regard, given that progress only seems to be made when tech companies are faced with the threat or actual imposition of controls backed by significant financial penalties, potentially with loss of access to key markets.”
An expert on human-computer interfaces who is based in Pakistan wrote, “Being an HCI person myself, I have seen many initiatives taken toward improving technology design for social good, e.g., work on AI for social good that aims to remove biases from technologies, build better context into them, highlight problems related to religion and consider the inclusion of cultures other than Western ones in order to better design currently Western-centric technologies. If we keep working in this direction, we will get closer to alleviating the harmful use of technology.”
A professor who studies civil society and intelligence elites wrote, “Surveys repeatedly show that people across the world do not like deceptive and microtargeted messages and campaigns that strive to influence them on key civic and political functions, such as voting, filling in a census form, attending a rally, etc. It may not be too late to take corrective steps, but it will require a highly coordinated set of actions by stakeholders (e.g., government, intelligence agencies, digital intermediaries and platforms, mainstream media, the influence industry – PR, advertising, etc. – educators and citizens).
“We will likely need supra-national governmental regulation to steer things in the right direction and fight the default settings and business models of dominant social media platforms. Throughout, we need to be alert, and guard against, the negatives that can arise from each type of stakeholder intervention (including damage to human rights). There are numerous social and democratic harms arising from what we could term the ‘disinformation media ecology’ and its targeted, affective, deception. It impacts negatively on citizenship in fundamental ways. These include attacks on:
- Our shared knowledge base – Can we agree on even the most basic facts anymore?
- Our rationality – Faulty argumentation is common online, as evidenced by conspiracy theorists.
- Our togetherness – Social media encourage tribalism, hate speech and echo chambers.
- Our trust in government and democratic institutions and processes – Disinformation erodes this trust.
- Our vulnerabilities – We are targeted and manipulated with honed messages.
- And our agency – We are being nudged, e.g., by ‘dark design’ and influenced unduly.
“In short, the disinformation media ecology that generates and targets messages that are deceptive, and/or designed to bypass thoughtful deliberation in favour of profiled, emotionalised engagement, severely challenges the democratic ideal of treating people as citizens rather than as ‘targets’ or ‘consumers.’ This is an ecology where the psychological and emotional behaviour of individuals and groups is increasingly quantified and datafied (as evidenced by the rise of emotion AI or affective AI).
“Also important is the nature of psychology, in that influential behavioural sciences downplay rationality in favour of a neo-behaviourist outlook. In an applied context, neo-behaviourism and seeing people in psycho-physiological terms disregards (or denies) agency and civic autonomy.
“This near-horizon future is bleak, particularly since such techniques for emotional profiling are rapidly becoming commonplace in the political and civic world, starting with social media but spilling out into once offline domains (e.g., cities that have become ‘smart’, and dwellings that have become ‘Internet of Things-connected’).”
The leader of a well-known global consulting firm commented, “The emergence of new business and economic models, and a new and updated view of what public commons are in the digital age might possibly help. Digital spaces suffer from the business models that underlie them, those that encourage and amplify the most-negative behaviors and activities.”
A director of a research project focused on digital civil society wrote, “The main business model of our current digital spaces is advertisement and data extraction. That, coupled with the rise of political authoritarianism, will continue to shape digital spaces in ways that are harmful and effectively erode trust in democracy and public institutions. I imagine there will be growing awareness among the public of the dangers and harms of digital spaces. Civil society has been and will be playing a key role in raising this public awareness, and we are likely to see groups from a wide spectrum of civil society (not just those promulgating digital rights) coming together to confront issues of data privacy and regulation.”
An anonymous respondent said, “By 2035 there will be better automation of common tasks; easier access to education, resources and updates; and lower frictional costs to implementing social and societal policies.
“Today’s public platforms have almost all been designed in a way that allows for the fast, creative generation of fake accounts. The use of these platforms’ automated tools for discussion and interaction is the dominant way to be seen and heard, and the dominant way to be perceived as popular and seek approval or agreement from others. As a result, forged social proof has become the most common form of social proof. Second-order effects convert this into ‘real’ social proof, erasing the record of the first. This is allowing cult-forming techniques that were once only well understood in isolation to become mainstream.
“The lack of a single shared physical space in which real people must work toward coming to a mutual understanding and the reduced need for more than a few humans to be in agreement to coordinate the activity of millions has reduced the countervailing forces that previously led cults to remain isolated or to fade over time.
“The regular historical difficulties that have often resulted from such communication trends in the past and present (to date only in isolated regions, not globally) include the suppression and destruction of science, histories and news, and the creation and enshrining of artificial histories as the only allowed narrative. It also leads to a glorification of the destruction of people, art, architecture and many of the real events of human civilization.”
An expert in marketing and commercialization of machine learning tools commented, “The biggest hurdle we will overcome is misinformation online. I believe regulators, academics, tech leaders and journalists will develop systems and processes that society will need to partake in and work with to learn how to better communicate and collaborate in digital spaces. At first this will be painful, but it will become normalized and more efficient over time, using greater levels of digital signatures and processes. Means will evolve for communicating the rising complexity associated with digital identity, digital traces and how information might be used in malicious and inappropriate ways.
“This is incredibly challenging to simplify and communicate, and it is difficult to get a vast audience to cognitively process its role in keeping information secure and maintaining a level of accuracy while sharing information. Regulations associated with privacy, reporting, auditing and access to data will have the largest impact.
“Uprooting the deep web and dark web to remove malicious, illicit and illegal activity will eventually be done for the public good. There will also be more research and understanding associated with challenges to individuals’ digital/physical balance as more-immersive technology becomes mainstream (e.g., virtual reality). There will be limits imposed, and technology enablers will work to ensure that individuals still also get together IRL [in real life].”
A vice president for learning technologies commented, “Reforms I foresee include filtering mechanisms that recognize a filter’s origins – such as gatekeepers recognized for point of view, methods, etc. I believe tech leaders will help achieve improvements through their personal guidance (public and private) of their concerns to recognize the larger missions/aims that exist beyond corporate growth and personal power. Improvements in the digital lives of the average users will come through increasing the transparency of sources of information. Persistent concerns will remain, especially the emerging approaches we see today in which players are gaming the system to harmful ends, including various forms of warfare.”
An AI scientist at a major global technology company said, “I would love to believe in the utopian possibility laid out in the article ‘How to Put Out Democracy’s Dumpster Fire,’ where the equivalent of online town halls and civic societies bring people closer together to resolve our toughest challenges, but I cannot.
“Government has proven increasingly unable to keep pace with technological advancement. Lawsuits and regulations have lagged painfully behind, addressing problems years out of date and unable to adjust accordingly. It’s not just the slow pace of bureaucracy that is to blame; graft and self-interest are largely at play.
“Historically, the most egregious violators of societal good in their own pursuit of wealth and power have only been curbed once significant regulation has been enacted and government agents then enforced those regulations. Unfortunately, Congress and local governments are run by people who must raise hundreds of thousands to millions of dollars to run for office, be elected, and then stay in office. Lobbyists are allowed to protect the interests of the most-powerful companies, organizations, unions and private individuals because the Supreme Court voted in favor of Citizens United.
“Money and power protect those with the most to gain. The global wealth gap is the largest in history, and it has only increased during the pandemic, rather than bringing citizens closer to each other’s realities. The U.S. is battered by historic heat waves and storms, and states with low vaccination rates are seeing new waves of Covid-19 outbreaks, yet a significant portion of Americans still deny science.
“The richest men in the world are using their wealth to send themselves into space for their own amusement while blindly ignoring nations unable to afford vaccines, food and water. Instead of vilifying these men for dodging taxes and shirking any societal responsibility to the people they made their fortunes off of, the media covers their exploits with awe, and the government is either incapable of or unwilling to get any of the money back that should be going into public infrastructure.”
“How can digital spaces improve when there is so much benefit for those who cause the greatest societal harm while neither government nor society seem capable or willing to stop them? Whistleblowers inside powerful companies are not protected. Sexual predators get golden parachutes and move on to cause harm at the next big tech company, start-up or university.
“The evidence that [uses of social media] were at the heart of the two greatest threats to our democracy – the 2016 election and the Jan. 6 Capitol riot – is overwhelming, but there have been no consequences. Congress puts on a bit of a show and yells at The Zuck on TV, but he doesn’t have to worry because no real action will ever be taken.
“As long as Google and Facebook pay enough, they will continue to recruit the best and brightest minds to ensure that a tiny fraction of white men keep their wealth and power.”
A professor of political science who is an expert in e-government and technology policy commented, “I see these digital spaces as becoming an even more common place for political extremists, especially white-power and anti-democratic groups. Government is always behind the curve in dealing with these types of groups, and internet governance tends to take a hands-off or ad-hoc approach. Because these digital spaces are also forms of mass communication in which extremists share space with groups promoting the public interest, the views of extremists are easily spread and digested by the public and often appear to be quite legitimate. I don’t think things will change for the better. I can’t say I have the answers on how to counter this.”
A leader in global business development for a major internet organization wrote, “I believe the issue of making digital spaces safer/more valuable is something that tech companies, research organizations, governments and academia are thinking about and working on. The general users of the Web and Internet are leery of doing more digitally now and they are looking to these organizations to show that it will be better moving forward.”
An internationally-known clinical psychologist said, “We have to move beyond the self-interests and corporate monopolies if we are going to use this world-altering tool in the service of humanity. Improvements can only happen if the internet has a pro-democracy regulatory management/governance system/laws. Otherwise, I fear a rapid disintegration of civility and I fear for our long-term survival. I believe it is possible, but we can see how dangerously slow the global community is to act on behalf of the environmental crisis, so things can continue in this spiral.”
A professor whose work is focused on technology and society observed, “It is going to be a bumpy ride, but I’m optimistic about the long game. The image of digital spaces is overwhelmingly negative, and the U.S. is in the midst of a crisis between classes and over the basics of capitalism and democracy. Something is going to give, whether it’s a descent into chaotic market-only frameworks or a swing back towards democratic principles. I’m gambling that it’s the latter, more out of a process of destruction and renewal and reconsideration rather than proactive learning and growth.
“Most of this comes down to the role of civil society and government – what do we want and are we willing to give anything up to get it? At the moment, the levers of power sit in the hands of those who aren’t willing to give much up. I suspect more climate crises, more democratic undermining and more general bad news, combined with a more progressive and longer-term outlook from younger generations is going to swing us back into a more progressive, more civic direction over the next 15 years.”
The co-founder of a global association for digital analytics commented, “What reforms or initiatives may have the biggest impact by 2035? I expect:
- Effective regulation of social media companies and major service providers, such as Amazon and Google. These monopolies will be broken up.
- The rise of better citizen awareness and better digital skills.
- The rise of Indie resistance – anti-surveillance apps, small-scale defensive AI, personal servers, cookie blocking, etc.
- The for-profit tech leaders will not be a source of positive contribution toward change. Some politicians will continue to seek regulation of abusive monopolies, but others may have an equally negative effect. I think the most influence will come via demands for social/cultural change arising from the general public.
- Monopoly domination by current leaders may be removed or reduced; however, emergent technology will drive new monopoly domination by large corporations in aspects of tech and society that are currently unpredictable.
- Common, cheap and widespread AI applications will dominate concerns and create the most challenges in 2035.”
A professor of information technology and public policy based at a major U.S. technological university said, “Similar to the likely outcome for humanity of the doleful predictions we are seeing regarding climate change, the deleterious influences on society that we have put in place through novel digital technologies could keep gaining momentum until they reach a point of irreversibility – a world with no privacy, of endemic misinformation, and of precise, targeted, intentional manipulation of individual behavior that exploits and leverages our own worst instincts.
“My hope (it’s not an expectation) is that recognition of the negative effects of human behavior in digital spaces will lead to a collective impetus for change and, specifically, for regulatory interventions that would promote said change (in areas including privacy, misinformation, exploitation of vulnerable communities and so forth). It is entirely possible in fact that the opposite will happen.”
An executive with an African nation’s directorate for finance for development wrote, “It would be utopian of us to underestimate the impact of the lack of ethics in the assembly of certain technologies. They can cause disasters of all kinds, including exacerbated cyber terrorism. People must collaborate to put in place laws and policies that have a positive impact on the evolution of the digital ecosystem. Existing technologies will be regularly adapted and reworked to expand the range of what is possible. Teleworking and medical assistance at home will be generalized.
“By 2035, the digital transformation of space will be obvious in all countries of the world, including poor countries. The mixing of scientific knowledge and the opening up of open-access data in the world will be an opportunity for progress for each of the peoples. The transparency imposed by the intangible tools of artificial intelligence will make public service more available than it has ever been in the past.”
A professor of computer science and entrepreneur wrote, “The benefits of the internet and digital spaces are well-known and enjoyed by billions across the ‘health,’ ‘wealth’ and ‘wisdom’ spheres of everyday life. Arguably, they have contributed to deflationary effects and increased standards of living across the globe. Such efficiencies are bound to continue, aided by advances in AI and quantum computing. Today’s Pied Pipers – entrepreneurs and sci-tech, political and religious leaders and social media influencers – need to be cognizant of the unintended consequences of digital tools. They need to be deliberate in their actions while crafting tools or nudging policies.
“Social media allows many to shout ‘Fire!’ easily in crowded, digital theaters with the potential to create chaos in the lives of millions of people in numerous physical spaces across the globe simultaneously. Uniform solutions for this are hard to come by, as previewed by the 2020-21 pandemic pandemonium. While pathogens and bits are not constrained by national boundaries, rules and regulations usually are. This will continue to be a hurdle as nationalism is on the rise.”
A media technologist and author observed, “I expect accountability reforms for big tech – whether that means breaking large companies up or updating legal structures to actually deal with digital evolution. Market-based digital spaces run amok are part of the current problem; we need to treat internet tech like the public good that it is.
“I got online in 1994. As an internet veteran, I saw the transformation from its use as a simple public-information space to being a capitalist’s wet dream. Advocates for digital-justice issues (e.g., online harassment) have been sounding the alarm for years, saying that private companies with unlimited and unaccountable power to shift public conversations do not have our collective best interests at heart.
“Until we can unpack what this means, how to handle it and how to make them accountable – all while maintaining capitalism’s entrepreneurial/innovation spirit – things will get worse. But I feel like we might be at a turning point. The fact that y’all are even doing this survey says something big.”
The founder and director of a digital consultancy said, “If we are to survive the coming decade, change is essential. The platforms we all use to communicate online must be reoriented toward the good of their users rather than only toward financial success. Classifying some entities as ‘information utilities’ might be a good first step.
“The last 20 years of the internet have very effectively answered the question, ‘how do we profit materially from the online world?’ The next 20 years needs to answer the question, ‘how do we profit humanely and humanly from the online world?’
“Government initiatives that target algorithms are an excellent start in protecting citizens. Legislation around technology has to be pitched at a truly effective level. Tackling the rules governing algorithms is a good meta-level for this. Applications or platforms, like Facebook, may change or even vanish, but the rule set will remain. This kind of thinking is not common in the world of government, so technologists and designers need to be engaged to help guide the legislative conversation and ensure it’s happening at an effective level.
“Arguably, a lot of the social progress (e.g., LGBTQ+ rights) that’s been made in recent decades can be credited to the access the internet has given us to other ways of thinking and to other ways of life and the tolerance and understanding this access has bred. If we can reorient technology to serve its users rather than its oligarchs, perhaps that path of progress can be resumed.”
An anonymous respondent wrote, “The past seven years and recent events have shown us the limits of the early days of a technology and how naïve the ‘build it and they will come’ approach to the digital sector was. Hindsight shows that the human species still has a lot to learn about how to use the power of digitally enhanced networking and communications.
“The unconsidered and unaddressed issues baked into the current form of our digital spaces have been exposed to us more clearly now, especially by the activities of Vladimir Putin’s Internet Research Agency, which many see to be a key causal factor in the political outcomes of Brexit, Trump 2016, and Brazil’s populist swing. These are examples of geopolitical abuse of digital spaces fostering perception manipulation tantamount to mind control.
“Inequalities in education and access to development pathways for critical thinking skills have set the stage for these kinds of influence campaigns to succeed.”
A professor of sociology and anthropology commented, “The key problem is an advertising model which – coupled with socio-psychometric profiling algorithms – incentivizes destructive digital spaces. This has to change. Democracy is not sustainable without significant transformations to digital spaces. Things will get worse before they get better but, ultimately, I believe that citizens will demand government regulation that limits the worst downsides of digital spaces while allowing more upsides to blossom again. These changes will be supported by increased public awareness and knowledge of digital spaces brought about by both demographic change and better education about such spaces.”
A researcher working in the field of global humanitarianism commented, “Current major trends in governments around the world, across multiple types of government and culture, appear to consistently track with kleptocratic oligarchy that favors a more Hobbesian than Lockean outlook for 2035. What causes me to trend pessimistic on this question is not a lack of faith in what ordinary people can do when empowered with what are considered ‘basic’ digital ICT capacities. That faith stands firm.
“What I have next to zero faith in, however, is the ability of policymakers to act decisively on catastrophic climate change impact and its knock-on impacts. Even a trigger event that would cause limited disaster/blackout on a major grid during *what used to be considered normal conditions* could easily spiral into a multi-month disaster, the impact of which a modern knowledge-based economy has no resilience for whatsoever. There is no Plan B for such a massive grid blackout. The scale of such a catastrophic event would result in mass migration and cascading impacts on host communities for climate refugees.
“Are there ways that things can change for the better? Yes. Is that change not complex and dramatic? NO. We need to stop playing at this with technocratic incrementalism. Incrementalism makes sense if – and frankly, only if – we continue to prostrate ourselves to the idea that nothing means anything and ‘all things are a matter of opinion.’ One doesn’t need to be an absolutist to be a scientist. Nor does being able to comprehend and sit with competing interpretations or hypotheses make science irrelevant. That’s just science.
“Here are some needed internet governance measures:
“1) Firmly establish that information is a human right, interdependent upon other established rights (particularly the right to protection). The right to information – accessing, creating, sharing, updating, storing and deleting it – is particularly critical during crises *and must be protected as a vital condition for securing all other human rights.* This right to information must also be protected and comprehensively advanced – along with its interdependent rights – through the activities and obligations of human-rights and humanitarian organizations that operate according to shared standards. (For more information on this approach, see Nathaniel Raymond at Yale, Stuart Campo at UN OCHA Centre for Humanitarian Data, Adrienne Brooks at Mercy Corps, Meghann Rhynard-Geil at Internews, and the many others they can recommend.) As Hugo Slim (fmr ICRC) and others have called for, this is the moment for a fifth Geneva Convention given the fact that ICT systems are routinely targeted first as ‘dual use’ infrastructure (and therefore considered valid targets under outdated laws of armed conflict, despite the overwhelming intelligence weaknesses of using civilian-grade ICT if conducting covert or armed operations).
“2) Using a rights-based approach, substantially advance these rights using a comprehensive framework of accessibility, security and protection (e.g., digital security and surveillance awareness), civilian redress and rectification measures (e.g., regulatory guidance and claims structure, perhaps akin to the original design of the Consumer Financial Protection Bureau) and *eliminating and/or ending liability-shielding practices* for major technology companies.
“3) Every cybersecurity professional is aware that governments, including the U.S., are on the cusp of achieving quantum computing breakthroughs that will render current digital security protocols meaningless. Invest explicitly and rapidly in quantum-era civilian protection mechanisms that could meaningfully advance civilians’ human rights when such government capacity comes online; if not, we risk a rapid descent into wholesale authoritarianism.
“4) Establish hard national and international regulations on the propagation of cyber currencies and the use of blockchain technologies that bear disproportionately harmful environmental burdens without demonstrable, comparable benefits to society as a whole. Similarly, regulate the use of digital-identification systems, especially those connected to biometric data and irreversible data storage, to ensure the fundamental bodily integrity of human beings’ ‘digital bodies’ as well as their physical persons. (See Kristin Bergtora Sandvik, Oslo PRIO.) When systems cannot pass the stress tests to meet minimum rights-based requirements, they should not be permitted to proliferate and harm. We need regulatory systems similar in focus and function (and, inshallah, without the comprehensive corporate regulatory capture) to the FDA for platforms of such significance.”
The founder and chief scientist of a network consultancy commented, “Generational change will make a difference. The vast majority will have had the experience of ‘digitalhood’ by that time; importantly, their parents will have had that experience as well. Issues of veracity will remain, but it is to be hoped that consumption will be better tempered. The real remaining issue will be one that has existed in the physical world for centuries: closed (and self-isolating) communities. The notion of ‘purity of interaction’ will still exist, as it has in various religious-/cultural-based groups. The ‘Plymouth Brethren’ of the internet have arrived, and managing that tribalism and its antagonistic actions will remain a challenge.
“It is clear that it will not be a smooth ride and that both society and individuals will suffer in mental and physical ways; however, it is my hope that people will adapt and learn to filter and engage constructively. That said, I have seen low-level mental illness in very intelligent individuals explode into full-fledged ‘QAnon-ness,’ so I can only say that this is a hope, not something I can evidence.”
A director with an African nation’s regulatory authority for communications said, “It is very important that all members of society play an equal role in devising and operating the evolving framework for the governance of digital spaces. Most services – both economic and social – will be delivered through digital platforms in 2035 … The current environment, in which digital social media platforms are unregulated, will be strongly challenged. The dominance of developed countries in the digital space will also face a strong challenge from developing countries.”
An anonymous respondent said, “We are at the very early stages of understanding this new technology and how it reaches people and how they use it. Perhaps other technologies (e.g., VR, virtual assistants) will upstage what we find problematic today. But there will also be solutions through tech for those who need more-accessible devices and ways to stay connected.”
A professor emeritus of engineering wrote, “Responsible internet companies will rise by monitoring and deleting dangerous posts. Irresponsible internet companies will become the home to a small number of dangerous organizations.”
A professor of information science based in California said, “Just as climate change amplifies extremes (floods, drought), social media amplifies the positive and negative forces in our society. We have more control over technology-mediated social media than over nature. Many of its negative excesses are driven by the profit motive of companies. Broadcast media used to be more regulated, both in terms of content and in terms of ownership. Although there were some drawbacks in terms of conformity of society and suppression of expression, there were benefits in terms of social cohesion.
“Regulation is always a balance between competing values, but it seems that it is time for the pendulum to swing towards more restrictions for both social media and other forms of media (broadcast, cable, etc.) in terms of consolidation of ownership and the way content is distributed. It is important to remember that the technical systems we have are often a series of accidental or almost arbitrary choices that then become inevitable. But we can rethink these choices. For instance, video sites do not have to allow anyone to upload anything for instant viewing. Live streaming does not have to be available for for-profit reasons. Shares and likes and followers do not have to be part of an online system. These choices allow one or a few companies to make use of network externalities and become the largest, but not necessarily the best for individuals or society.”
A veteran investigative reporter for a global news organization said, “The transformation of digital spaces into more-communitarian, responsible fora will happen mostly at the local and regional level in the United States and may not achieve national or global dominance. This presupposes a dim view of the immediate future of the United States, which is in grave danger of breaking up.
“I believe the same anti-democratic forces that threaten the integrity of the United States as a country also threaten the integrity of digital spaces, the reliability of the information they carry and their political use. I see a global balkanization of the internet in the near term with the potential for eventual international conventions and accords that could partially break down those barriers. But the path may be rocky and even war-studded.”
A director of strategic relationships and standards for a global technology company observed, “Digital spaces and digital life have dramatically reduced civility and kindness in the world. I honestly don’t know how to fix this. My hope is that we will continue to talk about this and promote a desire to want to fix it. I worry that a majority won’t want to fix it because it is not in their interest. There are two driving reasons for the incivility. 1) In the U.S., First Amendment rights are in conflict with promoting civility and mitigating attempts to control cruelty and facts. 2) A natural consequence of digital spaces is a lack of physical contact which, by definition, facilitates cruelty without penalty.”
A computer science professor noted, “At present, the governance of digital spaces is limited by our capacity to understand how to deploy these tools and create or manage these spaces. By 2035, that capacity problem will be mitigated at least to some degree. So digital space will evolve in ways that improve society simply because that space does not exist now and will develop.
“In terms of the management of existing spaces, I anticipate investment will stabilize many of the problems that currently cause worry. Consider e-mail, and, to a lesser extent, websites used for things like fraud and malware distribution. Early on, many of the same concerns were prevalent around these spaces, yet today we have new social norms, new governance structures and investment in tools and teams to police these spaces in effective ways.
“A more worrying development is the transjurisdictional nature of digital spaces, which might require new agreements to manage enforcement that requires cooperation among many parties. These will emerge as driven by need, as has happened in the management of malware, fraud and spam.
“In some cases, this will create barriers to accountability or governance. Consider the parable of what happened to 8chan when they lost their hosting and were picked up by a company that wasn’t terribly responsible, VanwaTech. That company is in the U.S. and subject to U.S. law, which has in certain cases brought them to heel despite not wanting to comply with social or legal norms. But a similar company abroad may be harder to rein in, as is seen in anti-malware operations or anti-fraud operations undertaken by large technology companies.
“A final worry I have related to the development of online spaces in the next 10 years is the emerging misinformation-as-a-service business model and new methods of monetizing activity considered malign. But just as with e-mail, I believe a new and better equilibrium can eventually be reached.”
A computer science professor based in Japan said, “It takes quite some time for society to figure out how to use new technology in a productive and positive way. Although the internet as a technology is already about 50 years old, its use in society at large is much more recent, and in terms of society adapting to these new uses, including the establishment of laws and general expectations, this is a very short time span.
“Tech leaders will have to invest in better technology to detect and dampen/cull aggressive/negative tendencies on their platforms. This will require an understanding that short-term negative clickbait is not in the long-term interest of a company. Such understanding may only be possible with the ‘help’ of some laws and public pressure that penalize the tolerance of overly negative/aggressive tendencies.
“Figuring out how to apply such pressure without leading to overly strict limitations will require extreme care and inventiveness. Education will also have to play quite a role in making sure that people value true communications more than negative clickbait.”
A leading technology foresight expert wrote, “A big change in public attitudes is needed to survive today’s massive threats, and I think it is likely to happen in a few years. Social media platforms will play a large role in making this shift in consciousness possible.”
The founder and leader of a global futures research organization commented, “Information warfare manipulates information channels trusted by a target without the target’s awareness, so that the target will make decisions against their interest but in the interest of the entity conducting the attack. This will get worse unless we anticipate and counter, rather than just identify and delete.
“We could reduce this problem if we use infowarfare-related data to develop an AI model to predict future actions, to identify characteristics needed to counter/prevent them and match social media uses with those characteristics and invite their actions. Since nation-states are waking up to these possibilities, I think they will clearly do this or come up with even better prevention strategies.”
An internet architecture expert based in Europe said, “Citizens must reconquer digital spaces, but this is a long path like the one towards democracy and freedom. Digital life will improve if the whole population has access to these spaces and digital literacies are learned. Some problems may be diminished if citizens are full participants in the governance of digital spaces; if not, the problems can worsen. It might be useful to create especially targeted digital spaces, governed by appropriate algorithms, for all of the people who want to express and vent their rage.”
A senior economic analyst who works for the U.S. government commented, “Over time, society will develop social norms backed by government policies, rules and laws to better govern digital space and digital life.”
The CEO of a technology futures consultancy said, “As we advance into the Fourth Industrial Revolution – the digital age – there is a heightened focus on digital privacy, digital inclusion, digital cooperation and digital justice across governments, society and academia. This is causing tech companies to face the consequences, hearing and responding to those who loudly advocate for digital safety and having to comply with regulation and guidance and join in sustainable collaborative efforts to ensure tech is trustworthy.
“The average user in 2035 will not have experienced the world before tech and will have grown up as a tech consumer and data producer. I foresee users developing social contracts with tech companies and governments in exchange for their data. This could take the form of public oversight and engagement with efforts and initiatives that require or request public data.
“I foresee more tech-savvy and data-privacy-oriented elected officials who have a strong background in data advocacy. I believe society will continue to demand trust in the use, collection, harvesting and aggregation of their data. This will diminish misuse. However, law enforcement’s use of data-driven tools used to augment their work will continue to present a challenge for everyday citizens.”
The director of a cognitive neuroscience group wrote, “There will be regulatory reform with two goals: increased competition and public accountability. This has to be developed and led by political leaders at all levels and it will also require active engagement by technology companies.”
A program officer for an international organization focused on supporting democracy said, “There will be trends and transformations that are not substantially better by 2035. These are likely to be accelerated by illiberal voices and authoritarian actors. But we should not underestimate the ability of civil society to innovate positive changes that will incentivize constructive behavior and continue to provide crucial space for free expression. The COVID-19 pandemic has demonstrated that digital connectivity is more important to societies around the world than ever.
“Western tech platforms, for all their faults, are making an effort to be more receptive and responsive to civil society voices in more-diverse settings. In particular, there is growing recognition that voices from the Global South need to be heard and involved in discussions about how platforms can better respond to disinformation and address privacy concerns.
“Civil society and democratic governments need to be more involved in global internet governance conversations and in the standards-settings bodies that are making decisions about emerging technologies such as artificial intelligence and facial recognition. If civil society sectors unite around core issues related to protecting human rights and free expression in the digital sphere, I am cautiously optimistic that they can effect a certain degree of positive change.
“One major area of concern relates to the role of authoritarian powers such as China, Russia and others that are redesigning technology and the norms surrounding it in ways that enable greater government control over digital technologies and spaces. We should be concerned about how these forces will affect and shape global discussions that affect platforms, technologies and citizen behavior everywhere.”
A policy entrepreneur said, “All wealthy Western countries are going to surpass the U.S. in responsible digital technology regulations before 2030. Between compliance with the non-U.S. standards and the example provided by Engine No. 1 to shake up boards of directors, multinational corporations will choose the lowest-cost compliance strategies and will be swayed not to be on dual tracks. Some corporations will successfully market their differentiation as leaders in trust-building and proactive ethical behaviors. However, there will be some holdouts continuing to exploit surveillance capitalism and providing platforms for misinformation that serves social division.”
An information science professional based in Europe responded, “Before 2035 we shall see improved mechanisms for recognizing, identifying and then following up on each and every discriminatory or otherwise improper action by the public, politicians or any group that does harm. Digital spaces and digital life will be transformed due to more and better regulation and the education of public audiences along with the setting of explicit rules of acceptable use and clear consequences for abuse. Serious research and analysis are needed in order to increase our understanding of the situation before establishing new rules and regulation.”
A lecturer at Columbia University commented, “The question centers on whether we remain optimistic or pessimistic about the potential of digital technology to enhance civil society. Since I think these Pew surveys actually have a role in affecting attitudes and behavior, I can only respond in the affirmative lest I see the negative impacts of this research project contribute to a sense of inertia and inevitability.
“I hope citizens will demand accountability from governing leaders that encourages them to create laws and regulations that will pressure the corporate entities that are actively working to limit their own responsibility for the democratic ends and social good that might be possible. My optimism is limited by some aspects of human nature that play to the baser emotions and activities that many technology companies are exploiting. They are stimulating market-driven social engagement, individualistic branding for self-interest and personal financial gain; conflating digital social protest with monetization of social justice causes; and reinforcing personal responsibility rather than institutional responsibility for addressing social issues.
“There is a potential for people to continue to use digital tools to advocate for equity issues and use digital life to enhance the necessary critique and education that will create greater opportunities and justice, but I see tech companies putting limits on that regularly.”
A professor of computer science based in the northwestern U.S. said, “The digital spaces will be better-managed and fact-checked by human monitors as well as algorithms. The ranking algorithms will pay more attention to the factual content, bias, hate speech, polemic, sarcasm, rhetoric, etc. People who would like to see high-quality content will be able to do so. However, people will also be able to block the monitoring and censoring if they want to. While insular information bubbles will persist, people who would like to be exposed to different points of view will be able to.
“Companies should take the lead in improving their platforms to serve the educational and informational needs of the public instead of monetizing people’s attention. Forcing digital platforms like Facebook to fact-check their content through regulation and competition is an important first step. This would need active regulation and monitoring by the government. There should be more grassroots community organizing around digital spaces. Federal and state governments should incentivize and subsidize efforts like Wikipedia, and also fund the organizations that support education, science, social welfare and democracy.”
A professor emeritus of social sciences commented, “Initiatives like California’s broadband expansion and other state and local infrastructure developments will reduce costs and increase public access to digital spaces. Our experience with Covid-19 will have a positive effect on the development of digital work and educational space designs and acceptance by businesses and educational institutions. I am optimistic that the FCC’s attempts to block net neutrality will be overcome.
“The biggest problems are security and privacy. The tremendous clout of advertisers makes it extremely difficult to restrict corporate surveillance, which often is done insecurely, leaving everyone vulnerable to hackers and malware. The struggle for security in online communications and transactions from attempts to mandate backdoors everywhere makes it difficult for device and system developers to make a secure computer or phone. Another challenge is finding ways to reduce hate and dangerous misinformation while preserving civil liberties and free speech. But I do believe that the continuation of the information commons in the form of open courseware, Wikipedia, the Internet Archive, fair-use provisions in intellectual property laws, open university scientific papers, all of the current and future online collaborations to address environmental problems, and open access to government will provide support to all of our efforts to make 2035 a better world than it seems to be heading towards at the moment.”
A scientist and expert at data management who works at Microsoft said, “Facebook, Twitter and other social media companies are investing heavily in flagging hate speech and disinformation. Hopefully, they’ll have the legal option to act on them. Meanwhile, I’m cautiously optimistic that the federal government will enact legislation to regulate social networking companies.”
A professor and researcher who studies the media’s role in shaping people’s political attitudes and behaviors said, “By 2035 tech leaders will be more aware of the problematic aspects of the digital sphere and design systems to work against them. There will be greater government regulation and more public awareness of the problematic aspects of digital life. There will be more choice in digital spaces. There will be less incivility and mis- and disinformation. There will still be problems with bringing diverse people together to cooperate.”
An associate professor whose research focuses on information policy wrote, “I believe in the good in human nature. In addition, humans in general are problem solvers. The use of digital spaces currently is a problem, particularly for civil communication (and, hence, democracy), but it is a problem we can address. Raising younger generations to think critically and write kindly would be a good start to changing norms in digital spaces.”
An anonymous respondent said, “My intuition is telling me that the generation that has lived with digital spaces their entire lives will be the force that applies new laws and social mores to improve those spaces. They inherently understand them to be an integral part of their everyday lives and so digital space and digital life will be in mind in any endeavor. I suspect that control and collection of one’s digital life may be a focus of reform and have some of the most significant impacts. This will be driven by public audiences and politicians and not tech leaders.
“I believe tech leaders will be the most resistant force to most if not all of these changes as they rise to the surface. The public will take ownership of its digital life and data while tech companies still struggle to capitalize on user presence, use and data. It’ll be a tug of war over this ownership and control. I do think over the next 15 years tech leaders and tech companies will continue their entrenched ways of thinking, their sense of entitlement to what they have already collected through their creations. I don’t see them giving this up, or significantly keeping pace with social change. I see them as the major impediment over the next 15 years and the reason why positive change will be slow.”
A professor and graduate director of sociology wrote, “Digital spaces will connect people who may never come into physical contact with one another. These bonds have the potential to promote positive change.”
A Southeast Asia-based expert on the opportunities and challenges of digital life wrote, “Although digital media seem to have caused polarization, this is probably not really true. Technologies do not determine culture. Instead, they allow people to more easily see divides that already exist. The new generation of digital media users came of age at a time when the internet promised to them an alternative to ‘mainstream’ culture – new digital economies, certainly, and special prices and products only available online – and the application of this sales pitch to information has been initially unhealthy. Some Americans strive for hidden truths, actively seeking evidence of conspiracies and looking for what they think of as authentic voices. In the coming years, the disruptive effects of these new conversations will be minimized. Users will accustom themselves to having conversations with others, and content providers will be better able to navigate the needs of their audiences.”
A director with a European resource center supporting safe spaces online responded, “Trust and safety will form an intrinsic aspect of digital spaces in the same way it has done with the automotive sector.”
An active participant in global internet governance processes said, “We can expect transformation of digital spaces and life at least for people who are at the bottom of the pyramid and are yet to get the benefit of technology. We hope there will be better regulations and protections for users, their privacy and their rights, with improvement of trust online so that technology empowers people. Better regulations and their implementation will hopefully take care of present issues of the power of big tech, censorship, tech biases, the digital divide, and so forth.”
The director of an institute for media based in India commented, “People are becoming conscious of the negatives of the digital media. By 2035 people will be more cautious and overcome the negatives.”
A director of technology standards based in California wrote, “I believe the digital space needs to include serious security and privacy solutions. Tech leaders need to control the use of personal information. I hope politicians stay out of the media spaces and digital life in the future, else they ruin it.”
The CEO of a professional services company helping clients engage in the global economy wrote, “Digital technologies and spaces enable people to interact, collaborate and share knowledge/interests. These activities, while able to be exploited for bad motives, historically bring out the positive qualities people innately have.”
A specialist, retired, with the Asian Development Bank said, “Technology is taking root in the developing world. Education has become one of the key development areas. Awareness about transparency of political actions and development programs is spreading through technology. Poverty and health services are major issues.”
An expert in regional and urban economics, public finance and economic development policy predicted that by 2035, “Ease of voting, getting information, discussing public issues, sharing public information are plusses. That will be counter-balanced by ease of misinformation. I envision online voting, maybe by ranked choice.”
A futurist, writer, researcher, and consultant wrote, “A large number of people using digital spaces today came to digital spaces later in life – late teens through adulthood (even older Millennials) – and they often view digital spaces as different or apart from ‘real’ spaces or ‘real’ life. As such, they don’t see their behavior in the digital sphere having any real or lasting impact and consequences, and thus act out in the worst ways because ‘it’s not real.’
“Younger generations will have grown up all their lives interacting and negotiating with digital spaces, moving back and forth and between them with little effort or notice. And we are seeing that their behavior in these spaces is a lot more personally moderated, because they know that digital *IS* real. The rules for digital interaction that they are creating now will follow them throughout their lives as they live, learn, work in these digital spaces.
“COVID has also sparked positive changes in digital spaces. With the move of a lot of work and schooling to digital spaces, it’s allowing those who are differently abled to participate in ways that were previously denied them due to the necessity of being physically present. This allows those with physical disabilities to work in jobs that in the past mandated they come to a workspace that would be hard to reach.
“This push to move more work and schooling online to digital spaces is also driving a push to get the disadvantaged and rural connected to digital spaces, bringing them online, and allowing all members of their households, not just the students or workers, the advantages of connecting to digital spaces.”
A chief information officer expert in cybersecurity wrote, “I’m taking an optimistic view. I suspect the evolution of more polarization is equally possible. It depends on the right combination of self-policing and government regulation.”
A policy expert based in Oceania said, “E-commerce, digital trust and daily interaction will have an indispensable online component by 2035. Political, social, commercial, medical, religious and cultural experiences will be incomplete without an online component. Efficiencies and redundancies created by online commerce and interaction will begin to be rationalised into society’s very fabric, creating tensions between those with ready access to internet and those without.
“In 2035, the ubiquitous ‘always-on’ internet will make the no-tech experience a rare – and in some cases a coveted – experience. The obvious benefits of efficiency and increased wealth, coupled with collective adaptation, will have rationalised if not resolved many of the tensions we currently experience.”
An Internet Hall of Fame member expert in data networks responded, “COVID pushed us to improve remote work and digital spaces for collaboration very swiftly. The improvements were imperfect, and I expect to see continued refinement. At the same time, virtual reality is growing in capabilities. 2035 is around the right time frame for virtual reality and the evolution of remote-work products to come together. Perhaps even earlier.
“VR will enable even greater collaborative digital spaces, which will be a plus. What I don’t know is how the improvements in digital spaces will impact our personal lives. Again, during COVID we saw people use digital spaces for dating and safe sexual content – which I gather mostly was OK, though I’ve yet to see a full understanding of the emotional impacts of remote ‘intimacy’ as a substitute for in-person interactions. I really don’t know how VR will shape this space.”
An ICT CEO based in Africa responded, “I strongly believe by 2035 there will be a transformation of the digital space to raise its quality far above what we have today. Usage will be more widespread, it will be protected by rules and regulation, and many more users will be more aware of the dangers and will react accordingly. As Covid struck a blow all over the world, digital services evolved to better align with people’s needs. Today, conferences are accomplished via Zoom, courses/studies are accomplished online, purchases of goods and services are done online, and much more. Looking at what has been achieved in 2020-2021, it is quite clear that as the world keeps inventing new technologies it evolves to make them safer and better to use. Digital space will improve.”
A writer and linguist who is expert in local initiatives commented, “I believe public audiences are becoming more savvy about the influence – both positive and negative – that the internet is having on democratic life. As that audience grows into becoming more creative and adept at using the internet as a tool for democratic initiatives and outlets, I think we will see more public spaces where people apply their passions and skills for the public good.”
A professor emeritus of computer science wrote, “Over the last few years computer scientists have become vividly aware of the negative consequences of various forms of social media. Research is being focused on methods to mitigate these negative consequences and enhance the positive. I foresee that tech solutions together with changes in the law will eventually vastly improve the benefits of digital life: mainly because attention is now acutely focused on these problems, which has not been the case for very long.”
An expert in computational law responded, “I assume that by the 2035 time horizon there will no longer be a significant distinction between physical spaces and digital spaces (probably sooner than that), and I observe that today digital spaces are lagging behind as a public sphere and therefore are most likely to catch up when they are finally recognized as being tantamount to the public and private spheres.”
A professor of computer science based in Canada wrote, “I’m an optimist. My hope is that social media will be forced to either adopt First Amendment-type practices or Section 230 must be removed. This will result in them being more neutral, which they are not at present.”
An expert in digital learning environments said, “So much has been observed and learned during the past 15 years that will inform the next 15. We learned that anonymity can lead to extreme and dangerous speech, that algorithms that recommend content can influence thought and behavior, that unmoderated open social spaces can lead to abuse, and that chatbots can be manipulated to be anti-social. These lessons have been applied in a range of digital spaces to thoughtfully achieve the goals of the participants. Moderators and developers have also become clearer about the limitations of free expression and how to balance individual expression and the broader good. Historically marginalized and unrepresented groups have also created safe and visible spaces that bring voice to a wider group of creators.”
A data science and marketing expert commented, “The digital public sphere can be improved by strongly regulating and holding social media companies accountable, particularly for disinformation spread on them; specifically, companies like Facebook. Regulation is a last resort but appears inevitable because these companies have made no substantial effort to solve the problem on their own.”
A professor and director of an institute for data science and intelligent systems wrote, “Many of the current difficulties in digital spaces have to do with the anonymity afforded by online services. This is exacerbated by textual interfaces. However, there is a trend toward support of video across much of the online world. If extrapolated a dozen years into the future, I expect that we will have many more digital spaces in which video is the dominant mode of conversation. Video (including showing faces) will reduce anonymity and remind the audience that we are all human beings with physical/emotional frailties, increasing compassion.
“There are, of course, many issues that will continue to be problematic: One is providing suitable anonymity for those who deserve it, but this should be public anonymity (i.e., identity would still be known to the system). A second is the creation of false video (i.e., another head is speaking for you).”
A global entrepreneur based in Europe responded, “I see wonderful technology and services being developed as well as a host of good people working to create a better future.”
A professor of information science wrote, “Slowly, we’ll start to appreciate what digital spaces afford that’s an improvement on or supplementary to physical AND we’ll come to appreciate that the goal should not be to try to reproduce the physical. Additionally, I think we’ll better learn how to use digital to maximize creative listening and to zero in on optimal mixes of parallel and serial collaboration.”
A Germany-based CEO commented, “Digital spaces will evolve and continue to support and enhance daily life. The users of digital services will adapt to negative use of the media and will – hopefully – start to ignore bad influences.”
A professor of business wrote, “Better regulation of social media companies as content managers is necessary. Individuals are exhausted by the toxic political discourse and seriously interested in being able to showcase the ability to discern fact from fiction. Investment in education and higher education toward the responsible development, curation and sharing of digital content is crucial.
“There should also be an investment in creating algorithms that better police trolls, bots and dis/misinformation campaigns. There must be more public awareness of misinformation, and legal action should be taken against those in positions of responsibility, such as any news agencies that perpetrate misinformation in order to raise the level of viewer engagement and make larger profits.”
A user-experience designer based in Boston wrote, “I believe that, as it goes for all publicly shared resources, it will take us a while to evolve rules and mores for the internet’s digital public sphere that create a better experience for those using it. When the first public roads were built, we did not have anywhere near today’s level of laws and norms about how to drive on them. Over time, a combination of laws and behaviors evolved that helped make them more useful for all. I am optimistic about the basic good of people, and I believe we as a people will learn how to regulate and develop useful norms around the use of these cyber resources to make them more beneficial to all.”
A professor of information science based in Norway said, “There will be more-flexible working situations (remote work), more access to medical professionals (example, online psychological counselling and simple examinations) and remote telecontrol of machines such as vehicles or simple construction/maintenance robots.”
The managing partner of a consultancy helping clients reach digital objectives responded, “All spaces will be digital. As an optimist, I see this as being beneficial to society.”
An entrepreneur based in the American South said, “By 2035, I believe there will be a growing demand for the digital space to evolve with respect for users’ privacy and the types of experiences they wish to have while online. There will be more options for platforms that are transparent in the way an individual’s data is handled and used across the platform. The user will ultimately be able to choose one experience over another based on their preferences. These platforms will be attractive for their privacy initiatives, content, and transparency. Over time, this option will transform the digital space and there will be an opening for an emerging marketplace of similar platforms that are driven by tech leaders who meet the demands of this specific online consumer. This emerging marketplace will compete with the alternative options by providing more visibility for consumers to have a snapshot view into the nature of businesses they interact with, knowingly and unknowingly.”
A distinguished professor of computer science at one of the largest universities in the U.S. commented, “As Richard Feynman put it, ‘To every man is given the key to the gates of heaven; the same key opens the gates of hell.’ This statement can be applied to nuclear technology and to digital technology. Whether a technology opens the door to heaven or hell is up to how well people regulate the use of it. The internet’s unprecedented growth took people by surprise, thus society was unprepared. Few foresaw where it might be headed, and warning voices were not well heard (see the book ‘Surveillance Capitalism’ as one example). It takes a deep understanding to develop effective solutions to make digital technology better serve society’s needs and to raise the bar against the abuse. We are not there yet but we will get there.”
The chair of political science at a top U.S. university wrote, “The problem to be solved between now and 2035 is to undercut a conglomeration of digital providers so as to allow competition and encourage entrants with new technology and new ideas. The communication space needs to be such that ‘bad’ ideas and ‘misinformation’ are matched with alternative views so that insular information communities do not exist. What will be better by 2035? Education will be equally online and in the classroom. Think tanks, seminars, labs will no longer be geographically constrained but rather, academics, intellectuals and the attentive public will have access to information now only available to elites. The portal for that information will be easier to use and widely available. In terms of jobs, the digital world will expand opportunities globally and create flexible work environments, allowing families more contacts with each other and their community. Food, consumer goods and entertainment will be available online, providing every person with access.”
A North American futurist/consultant responded, “The biggest impact is going to come from how the U.S. approaches data protection and individual privacy; this would include the regulations/policies around the end user giving permission for all data (including medical and health) to be used by third parties. For example, today with COVID vaccinations, individuals who have not been vaccinated are being contacted and those who have been vaccinated are not. How do the county, state and federal governments know who has and who has not been vaccinated? Data is being shared without the end user knowing or understanding.
“I believe that, depending upon privacy requirements, the next area that will impact future digital life will be how government watchdogs look at competition, how companies will be taxed for online services and who determines the fine line between a free, innovative marketplace and government restrictions. For the end user, moving everything to a digital online environment unleashes freedom and opportunities for global collaboration and communication and makes geographic location irrelevant. Again, the biggest concern is who has access to your data and whether the end user will know and be able to give permission for that access. It boils down to who owns and controls the data that is being generated.”
The founder of the machine learning department at a major U.S. technological university commented, “I expect online communities might remain a polarizing force but expect more specialized communities to develop to share information and provide peer-to-peer education around specialized interests (e.g., gardening, job-seeking, various sports, appliance repair).”
An African researcher who works in Australia said, “In 2035 there will be better digital space consumption, competition and security laws and better affordability of access to digital spaces.”
An impact entrepreneur commented, “Facebook needs to stop allowing falsehoods. Tracking across all sites has to be more limited.”
A broker using technology to empower communities noted, “As we move into greater use of digital spaces we have opportunities to improve how individuals can learn at their own pace and at their own selected times. Yes, there will be issues, especially if we put more emphasis on the digital tools than on individual choice.”
An author and social media and content marketing expert wrote, “The question used the word ‘can’ and the choice of answers used the word ‘will.’ There is a difference. I believe the digital public sphere can be improved, but it is only going to happen with regulation and with tech companies taking responsibility for what is shared on their platforms. ‘Will that happen?’ is an entirely different question. Knowingly spreading false information or allowing it to be spread bears consequences in every other aspect of our society. Without guardrails in place, social discourse will continue to decline.”
A digital researcher based in India responded, “The transformation of the public sphere will take place in the following ways: 1) Smart Cities development. 2) Security and surveillance evolution. 3) No more human death from accidents. 4) The prediction and monitoring of the health of all. 5) Public faith in democracy analysis through social media. 6) Job creation by humans in the loop. 7) Digital life will be superior: more empathy-based and secure. 8) 6G reforms will play a major role in free-flowing internet infrastructure. 9) Major concerns are in the realms of security, digital wallets, digital banking, facial recognition, privacy, ethics and giving users control over information about their personal lives.”
An expert on the future of software engineering said, “Optimistically, we will regulate social media to adhere to Jonathan Rauch’s notion of the Constitution of Knowledge. If we fail to do this, social media will continue to serve as a destructive, divisive instrument, fostering growing pockets of deluded thinking.”
An activist based in Australia said, “Improved connectivity will bring safer roads through automated driving, for example, and there will be improvements in healthcare. But there will be a diminishment of individual privacy. Politicians need to legislate to protect citizens, as tech leaders on the whole can’t be trusted.”
An anonymous respondent exclaimed, “Things can’t get much worse, so between now and 2035 I expect a backlash against the over-sensationalised, over-algorithmed, over-commercialised internet spaces of today.”
A vice president for research and economic development responded, “AI, ML and security concerns may have the biggest impact on digital life. Beneficial outcomes will be in the access to information and unique spaces, creation of new tools, etc. However, there may be significant associated risks as a result of ‘bad actors’ or limited developments of appropriate safeguards to protect personal data, etc. Information and data management will be improved and so will human-machine interfaces. The digital divide will narrow as broadband will become almost ubiquitous and highly available. Personal security and individual data safety will continue to raise concerns. The use of digital capabilities to address human concerns such as food insecurity could be impactful.”
The leader of a faith-based organization said, “More people will see their responsibility to shape digital spaces as well as grasp the tools that are or will be available to shape them.”
A professor of business analytics at a major U.S. university wrote, “The world is shrinking. In 2035 there will be easier communication and integration.”
An anonymous respondent commented, “Government will have a larger role in ensuring improved outcomes, working in partnership with tech leaders. And the current market power of social media companies will be reduced by government actions, e.g., antitrust suits.”
A research director for a social and economic institute based in the Middle East wrote, “Online education, access to data and digital literacy will improve for the good of society. I believe, by 2035, educational institutions will learn from the online education experience during Covid-19 and design suitable courses. Students as well will improve their online learning and digital skills. Additionally, I believe access to data will improve for decision-makers for their data-driven policy making (we, in our think tank, develop digital policy tools for that end). Finally, I believe digital literacy will have to be improved among common people by 2035 as there will be more web natives in the population. The digital divide in its many aspects will still be an issue, however.”
A professor of social policy and practice at a major U.S. university wrote, “There is always a pull, back and forth, and ultimately most issues resolve. People are angry at the likes of Facebook, which will lead to a combination of changes made by the platforms in response and of policies and regulations to address the worst aspects. Algorithms can be used to improve the tone and quality of information, it’s a matter of setting the right things to maximize for the algorithms.”
A senior fellow and expert on international security at a major think tank observed, “The value and gain vastly outweigh the negative.”
A technology policy writer/editor based in Europe wrote, “Things will have to get better, otherwise the current systems will be unworkable.”
A researcher who works for a federal government commented, “The younger people, who will be the main users of the digital space, are better-informed about the positive potential of as well as the pitfalls of the digital space/life.”
A professor of technology and society based in Portugal wrote, “Platforms will erode public space, changing other spheres of society.”
A military leader specializing in understanding the impact of social information systems observed, “While I can *hope* for better and I can hope for the political will of enough nations that the public sphere will get better, I can rest assured that 2035 will be a different landscape. On the downward trajectory, we have to face the fact that the infrastructure upon which ‘digital public spaces’ exists is largely in the hands of large corporations and/or nation-states that have their own interests.
“Yes, some sort of tribalism is happening – though I don’t like to use a word once applied (albeit in a somewhat racist fashion) within a somewhat colonial project (anthropology), but, let’s be honest, it’s the colonialist technology that has set the awful ethno-centrist mind virus in motion and we can’t seem to put that genie back into its bottle. (What a terrible mixing of metaphors).
“While it is no surprise that autocratic states seek to use the public sphere as a surveillance apparatus, it is rather appalling to see just how good a pairing the digital public sphere and autocracy are: the ready feeding of ethnocentric ideas to key groups keeping a larger population whipped into a threshold level of frenzy is impressive in ways that I would rather not be impressed.
“There is some hope for us in that some corporations have benevolent impulses, despite capitalist impulses to maximize profits, which again seem to lean into ethnocentrism and autocracy.”
A retired consultant based in Canada said, “Marshall McLuhan noted, ‘The most human thing about us is our technology.’ Language and culture are technology. Life is the emergence of complexity that engenders more complexity. Uncertainty is integral to evolutionary constraints shaping survival choices. We are at the threshold of a phase transition that demands we guide our choices during this struggle between empires ruled by elites and the next flourishing and ‘leveling-up’ toward a participatory democracy.
“All technologies can be weaponized. All weapons can find a positive use. There will never be a shortage of work and activity to do and to value when we are engaged in the enterprise of a flourishing life, community and ecology. In the 21st century, where everything that can be automated will be, there are three paradigms enabling response-able action.
- The power of a nation with its own currency – modern monetary theory.
- The enabling of the people to flourish as citizens – accomplished through universal basic assets (UBA) and guaranteed jobs (rather than unemployment insurance).
- Enabling communities to be response-able in a changing world through Asset-Based Community Development.”
An internet pioneer now working as an internet privacy consultant commented, “The most perverse uses of digital spaces are enabled by big advertisement-funded platforms. They strive to ‘grab attention’ in order to get more page views and thus more advertising revenues. At the same time, the ad-funded model drives them to get ‘better smarter ads,’ more tailored towards each specific reader. This tailoring also drives the selection of content, whether by algorithms or by publication policy. The net result is reinforcing the worst instincts of readers and driving them into extremist bubbles.
“Of course, it is hard to be optimistic under such conditions, because the business model of the platforms pretty much drives the proliferation of toxic content. But there is still a hope, because of two trends: a realization by the public at large of the toxic nature of the ‘better advertisement’ model, also known as surveillance capitalism; and the realization by advertisers that this supposed ‘better ads’ model does not in fact help them make more profits by selling more products, because the added value created by these supposed better ads is almost entirely captured by the platforms.
“By 2035, there is a small hope that the atrocious surveillance model behind the better ads will be outlawed in the name of privacy, and that honest publications will evolve without relying on intrusive ads. Of course, it is only a faint hope.”
A principal consultant for a major analytics platform wrote, “Users will seek to make something good of a tool they need to communicate with. The written word in books includes good and bad. So do digital spaces.”
A North American research scientist commented, “There will be a push for equitable digital spaces for low-income or impoverished communities. The internet will be available to all. Individuals will be working much more from home with virtual reality and augmented reality workspaces through VR. This will allow the workforce to work from wherever they want. Large businesses will not waste money on power for lights, workstations, heating and cooling. These spaces will be transformed into shelters and housing for low-income or homeless people. When people do not need to commute to work daily this will help save energy and the planet. Workers who telework will be able to engage with their family members more, bringing back the proven positive interactions of families, thereby decreasing the number of high school dropouts and, in turn, reducing homelessness and petty and violent crime.”
A professor of humanities and award-winning expert in cultural commentary commented, “Regulations specific to digital spaces and the social behaviors they enable will need to be put in place if the ‘public good’ is to be served. Defining the ‘public good’ will be difficult in itself since it involves competing priorities and values. When digital infrastructure becomes a public utility, and is modified accordingly, fairness and social equity can be at least supported. We are a long way from that.”
An anonymous respondent commented, “I see two sides to a positive outcome. First, I do not see the younger generation being so ‘duped’ by social media and media. While use preference will continue to change from generation to generation, I believe the next generations will be less political when using social media and keep messages and posts more lighthearted. Second, in order to attract the largest possible audience for their platforms, tech companies will have to work even harder to remove bad actors, those people spreading misinformation or creating anger within a community. Tech companies will need to better refine their policies in order to continue making profits.”
A research director who is expert in community health policy wrote, “There will be a great expansion of broadband, particularly in areas of education and health care. Affordability will still be a big issue in the U.S., where internet and cellular service remains expensive compared to other developed countries. The spread of misinformation will continue to be a major problem.”
A professor of digital economy and culture commented, “There is often a time lag between the appropriation of technologies and the ramifications of these on social life, public/private life, ethics and morality. Due to this lag between the point at which extensive usage is reached and the recognition of moral/social consequences and because there is a human dimension along with its interplay with a capitalist agenda in the appropriation of technologies, we will often only remedy social and ethical ills after a period of time has lapsed.
“Our evaluations of technologies at a social and ethical level are not in sync with the arrival and uses of technologies as a platform for economic enterprise, and the glorifications of these by nation-states and neoliberal economies. The ascendency of data empires attests to this. We are creating huge commercial organizations with large repositories of data that are not politically accountable. These organizations possess quasi-extralegal powers through data that we need to regulate now.”
A leading internet infrastructure architect at major technology companies for more than 20 years responded, “Government regulation isn’t going to solve this problem. Governments will step in to ‘solve’ the problem, but their solutions will always move toward increasing government power and using these systems for government ends. I don’t see a simple solution to this problem.
“The problem is one of human nature and our beliefs about human nature. From the perspective of designers and operators of these digital spaces, individual users are ‘shapeable’ towards an idealistic set of ends (users are the means toward the end of an ideal world) rather than being ends in themselves that should be treated with dignity and respect. This means the designers and operators of these digital spaces truly believe they are ‘doing good’ by creating systems that can be used to modify human behavior at large scale.
“Although this power has largely been used to increase revenue in the past, as the companies move more strongly into the political realm and as governments realize the power of these systems to shape behavior, there will be ever-greater collusion between the operators of these digital spaces and governments to shape societies toward ends that the progressive elements of governments believe will move societies toward their version of an ‘ideal future.’
“There is little any individual can do to combat this movement, as each individual voice is being drowned in an overwhelming sea of information, and individual voices that do not agree with the vision of the progressive idealists are being ‘depromoted,’ flatly filtered, and – in many cases – completely deplatformed.”
An expert in how psychology, society, and biology influence human decision-making commented, “People are people; tech might change the modality of communication, but people drive the content/usage, not the reverse.”
The founder and CEO of a privacy and security firm based in Washington, DC, said, “Human nature can’t be changed.”
An American author, journalist and professor said, “Attention-seeking behavior won’t change, nor will Skinnerian attention rewards for extreme views. It’s possible that algorithms will become better at not sending people to train wreck/extreme content. It is also possible that legislation will change the relationship between the social media sites and the content they serve up.”
A leader of a center for society, science, technology and medicine responded, “Without a major restructuring of capitalist incentives or other substantial regulatory action – neither of which I think are likely unless climate change makes it all a moot point – digital spaces and digital life will continue to be ‘business as usual,’ emphasis on the business. While my teaching in technology ethics writ broadly betrays at least some optimism that things *could* change, I think it is unlikely that they will.”
A network operations practitioner who works on converging development management, IT networking and internet governance said, “Most emerging internet technologies are in private spaces and are monetized. Public digital space is shrinking and has become mostly dependent on the big private platforms. Governments have less incentive to innovate since they could have companies to do it for them, especially if these governments aren’t investing heavily in R&D or are late adopters of the recent internet platforms.”
A prolific programmer and innovator based in Berkeley, CA, wrote, “Simply put, digital spaces are driven by monetary profit, and I don’t see that changing, even by 2035. The profit motive means that providers will continue to do the least amount of work necessary to maximize profit. For example, software today is insecure and unreliable, but the cost of making it secure and reliable is higher than providers want to pay; it would cut into their profits.
“In a slightly different but still related vein, the ‘always on’ aspects of digital spaces discourage people from human things like inner contemplation or even just reading a book. Again, the providers don’t make money if you are just meditating on inner peace, so they make their platforms as addictive as possible. There is no incentive for them to do otherwise.”
A professor of informatics based in Athens, Greece, responded, “There will not be significant improvement by 2035 due to greed, lack of regulation, money in politics and corruption.”
A scholar, practitioner and teacher of legislation and national security law responded, “I think the trajectory is likely to include both improvements and increasing problems, but the negatives will probably outweigh the positives for a while. It’s a tough call. I am especially concerned about the potential of authoritarians (in democracies and authoritarian states) to manipulate public opinion in the direction of repression.”
A consultant whose research is focused on youth, families and media said, “Political actors will use any and all possible tools they can to gain advantage and sow division. There is no stopping that without strong governmental regulation, which will not occur. The drive for maximum private profit on the part of tech industries will prevent them from taking significant action. Foreign entities seek to sow division, create chaos, and profit from online disruptions. Diplomacy will not be able to address this sufficiently, and U.S. technological innovation will lag behind.”
A leading expert in human-computer interfaces for one of the world’s largest technology companies commented, “Ethicists at large tech companies need to have actual power, not symbolic power. They can advise, but rarely (ever?) actually stop anything or cause real practices to change.”
A professor of sociology based in Italy said, “Unless we break down the workings of platform and surveillance capitalism, no positive outlook can be imagined.”
A futures strategist and lecturer in sociology said, “There is no incentive structure that would lead to improvement in digital spaces except ones that regard the lubrication of commerce.”
An anonymous activist wrote, “There are too many very powerful public and private interests who control outcomes who have no incentive to make significant changes.”
An eminent expert in technology and global political policy observed, “There is insufficient attention paid to risk when assessing digital futures. To date this has enabled substantially positive impacts to take place, but with an underlying undercurrent of constraints on rights, inattention to impacts on (in)equality, environment, the relationships between states/businesses/citizens and many complex areas of public policy. Rapid technological changes facilitated by market consolidation and a libertarian attitude to innovation (‘permissionless’) can have irreversible impacts before accountability mechanisms can be brought to bear.
“The pace and potency of these changes are increasing, and there is insufficient will in governments or authority in international governance to address them. There will be substantial gains in some areas of life, though these will be unequally distributed; with substantial loss in others. The trajectory of interaction between technology and governance/geopolitics will be crucial in determining that balance, and that future does not currently look good.”
The director of a highly respected research center focused on the future of work said, “While I am optimistic about the long run, I think it will take a long time to reverse the political polarization that we are currently seeing. In addition, I worry about the surveillance state that China is developing and looking to export. I think we will become better at regulating digital platforms and handling misinformation.”
A futurist and consultant based in Europe said, “We need radical regulation, transparency and policy changes enacted at scale. This has to happen. Movement toward regulation feels like pushing one very small boulder up a very big hill. We need more regulation, more dissent within platforms, more whistleblowers, more de-platforming of hate speech/harmful content, more-aggressive moderation of platforms of misinformation/disinformation, etc. We just need more.”
A UK-based expert on well-being in the digital age observed, “Social media speaks to our darkest needs: for games, for validation and for the hit of dopamine. This isn’t discerning. In 2035 there will still be people who abuse online spaces, finding ways to do so beyond the controls. Too often we focus on helping the child, helping the bully, and not on kicking those who exercise certain behaviours off social media altogether.”
A computer science and engineering professor at a major U.S. technological university said, “Things are not changing significantly and will not; activity is just moving from one platform (newspaper, radio) to another (internet). History indicates that politics creates hot emotions and wild claims and nasty attacks, whatever the platform. Attempts to curtail expression by legislation can sometimes have a useful dampening effect, but most are not widely supported because of infringing on free speech.”
A principal architect for a major global technology company commented, “I wish I could have more techno-optimism, because we in tech keep thinking up creative improvements of the users’ options in digital spaces, allowing for better control over one’s data (SOLID, the work on decentralizing social applications by Tim Berners-Lee is an interesting project in this realm) and better algorithm equity and safety (there are multiple efforts in this area). But at the broad level, the digital space being bad for people isn’t sufficiently addressed by such improvements.
“Our complex tech ideas might not even be necessary if the companies operating the digital spaces committed to and invested in civic governance. For the companies to do so, and for it to be a consensual approach with the users, requires them to change their values for real. They would have to commit to improving the product quality of experience because it’s worth investing in for the long term even if it lowers the growth rate of the company.
“Companies have spent decades not investing in defenses from security attacks, and even now investments in that are often driven by regulations rather than sincere valuation of security as a deliverable. That’s one reason the security space continues to be hellish and damaging. That’s an analogy, in my opinion, to explain why there are likely to only be ineffective and incremental technical and governance measures for digital spaces. There may be a combination of good effort by regulatory push and some big tech pull, but it would be nothing like enough to significantly change the digital-space world.”
An enterprise software expert with one of the world’s leading technology companies observed, “There are two disturbing trends occurring that have the potential to dramatically reduce the benefits of the internet. The first is a trend towards centralized services controlled by large corporations and/or governments. Functions and features that are attractive to many users are being controlled more and more by fewer and fewer distinct entities. Diversity is falling by the wayside. This centralization:
- Limits choices for everyday users.
- Concentrates large amounts of personal information under the control of these near monopolies.
- Creates a homogeneous environment, which tends to be more susceptible to compromise.
“The second trend is balkanization within the internet ecosystem. Countries like China and Russia are making or have made concerted efforts to build capabilities that will allow them to segment their national networks from the global internet. This trend is starting to be propagated to other countries as well. Such balkanization:
- Reduces access to global information.
- Creates a vector for controlling the information consumed by a country’s citizens.
- Facilitates tracking of individuals within the country.”
An expert at helping developing countries to strategically implement ICT solutions said, “Technologies continue to amplify human intention and behaviour. As long as people are not aware of this the digital space will not be a safe place to be. People with power will continue to misuse it. The digital divides between north and south, women and men, rich and poor, will not be closed because digitalisation exacerbates polarisation.”
An educator based in North America wrote, “Seems like there will be less discourse and more censorship, mass hysteria, group-think, bullying and oppression in 2035.”
A distinguished engineer at one of the world’s leading technology companies noted, “There are always bad players and sadly most digital spaces design security as an afterthought. Attackers are getting more and more sophisticated, and AI/ML is being overhyped and over-marketed as a solution to these problems. Security failures and hacks are happening all over the place. But of bigger concern to me is when AI/ML do things that single out individuals incorrectly. It often makes not just mistakes but serious blunders that are often completely overlooked by the designers of applications that use it. This is likely to have increasingly negative consequences for society in general and can be very damaging for innocent individuals who are incorrectly targeted. I foresee this turning into a legal mess moving forward.”
A futurist and transformational business leader commented, “As long as digital spaces are controlled by for-profit companies they will continue to focus on clicks and visibility. What is popular is not necessarily good for our society. And increased use of algorithms will drive increased micro-segmentation that further isolates content that is not read by ‘people like me,’ however that is defined. The only way to combat this is to:
- Provide consumers with full control over how their data is used both at the macro and micro levels.
- Provide full transparency of the algorithms that are used to pre-select content, rate consumers for eligibility for services, etc., otherwise bias will creep in and discriminate against profiles that don’t drive high-value consumption patterns.
- Provide reasonably-priced paid social platforms that do not collect data.
- Provide clear visibility to users of all data collection, uses (including to whom the personal data is being routed), and the insights derived from such data.”
A professor emerita of informatics and computing wrote, “Most people have seen the impact of individualistic efficacy on the internet and are likely to be resistant to government attempts to regulate content in ways that control individuals. We have seen so much affective polarization in recent years in this country and around the world that it will be difficult to roll that back through policies. As for technological changes that might effect change, I don’t have a crystal ball to tell me how those might interact with governments and citizens. We have also witnessed the rise of online hate groups that have wielded power and will also resist being controlled.”
The founding director of an institute for analytics commented, “The changes that need to be made – which reasonable people would probably debate – won’t matter, because they won’t be made soon enough to stop the current trajectory. Technology is moving too fast and it is uncontrollable in ways that will be increasingly destructive to society. Still, it’s time for the internet idealists to leave the room, and for a serious conversation to begin about regulating digital spaces – and fast, otherwise we may not make it to 2035. Digital spaces have to be moved from an advertising model into either a subscriber model or a utility model with metered distribution. Stricter privacy laws might kill the advertising model instantly.”
A professor of architecture and urban planning at a major U.S. university wrote, “2035 is too far away to predict in any meaningful way. Besides any externalities of climate, capital or politics, digital technology itself seems ever less under the control of humans. AI has already delivered some major successes with Covid vaccines and stock market stability. AI increasingly infuses itself into everyday tasks, for example messaging in Slack. While even in the best of cases this creates an addiction to usability, more often that addiction is a goal in itself amid an attention economy. Attention is the coin of the realm. Alas, the kinds of attention that support trustful, undivided participation in civic and institutional contexts fall by the wayside.
“Perhaps the most important concern is the loss of ability to debate nuances of issues, to hold conflicting and incomplete positions equally in mind, or to see deeper than the callow claims of technological solutionism. But since this survey question seems specifically directed to the role of ‘space,’ consider embodiment. Embodied cognition and the extended mind emphasize other, more fluent, more socially situated kinds of attention that one does not have to ‘pay.’
“Per Aristotle and still acted out in the daily news cycle, embodiment in the built spaces of the city remains the main basis for thoughtful political life. Disembodiment seems unwise enough, but when coupled with distraction engineering it becomes quite terrifying. China shows how. In America, a competent tyrant would find most of the means in place. Factor in some shocks from climate and America’s future has never seemed so dire (on the other hand, to do the world some good right now, today, just give an East African a phone).”
A researcher wearily complained, “This isn’t about the digital spaces, this is about human behavior. Too many people are aggressively ignorant, and I am tired of dealing with this.”
An expert in urban studies based in Venezuela observed, “The future looks negative because it is not sufficiently recognized that the current business model of the digital world – the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) plus AI – creates and promotes inequalities that are an impediment to social development. The ethical values that should safeguard the rights of citizens and the various social groups require further review and support, which must be based on broad consultations with the multiple stakeholders involved.”
A professor of sociology at an American Ivy League university responded, “Unless we re-educate engineers and tech-sector workers away from their insane notions of technology that can change society in ways in line with their ideologies and toward a more nuanced and grounded understanding of the intersection of technology and social life, we’ll continue to have sociopathic technologies foisted upon us.
“Unless we can dismantle the damaging personal data economy and disincentivize private data capture and the exchange of database information for profit, we will continue to see the kinds of damage through personalization algorithms, leaks, and the very real possibilities that such information is used to nefarious ends by governments.
“Until Jack Dorsey pulls the plug on Twitter and Mark Zuckerberg admits that Facebook has been a terrible mistake, and Google steps away from personal data tracking, we are not headed anywhere better by 2035.”
A professor of psychology at a major U.S. technological university whose specialty is human-computer interaction wrote, “One can imagine a future in which digital life is more welcoming of diverse views, supportive to those in need and wise. Then we can look at the nature of human beings, who have evolved to protect their own interests at the expense of the common good, divide the world into us and them, and justify their actions by self-deception and proselytizing. Nothing about the digital world provides a force toward the first vision. In fact, as now constituted – with no brakes on individual posts and virtually no effort by platforms to weed out evil doers – all of the impetus is in the direction of unmitigated expression of the worst of human nature. So, I am direly pessimistic that the digital future is a benevolent one.”
An angel and venture investor who previously led innovation and investment for a major U.S. government organization commented, “The educational system is not creating people with critical-thinking skills. These skills are essential for separating what is real from what is fake in any space. Further, the word fake has become, itself, fake. So, we’re creating a next generation of digital consumers/participants who are not prepared to separate reality from fantasy. Lastly, state actors and non-state actors are rewarded by this disconnect and wish to continue to take advantage of it. The disconnect will continue to affect politics, social norms, education, healthcare and many other facets of society.”
An associate dean for research in computer science and engineering commented, “I am very worried there will not be much improvement in digital spaces due to the combination of social division, encouragement of that social division by any and all non-democratic nations, the profit focus of business interests, individuals protecting their own interests and the lack of a clearly invested advocate for the common good.
“Highly interested and highly motivated forces tend to always win over the common good because the concept of what constitutes the common good is so diffuse among people. There may be ways things could improve. I see promise in local digital spaces in connecting neighbors. But I have yet to see much success in connecting them across the political spectrum. I see potential for better-identifying falsehoods and inflammatory content. But I don’t see a national (or global) consensus or a structure for actually enforcing social good over profits and selfish/destructive interests.”
A sociologist based in North America wrote, “There will continue to be both positive and negative aspects of digital spaces and digital life, but my training as a sociologist tells me that inequalities rarely decrease over time. Most likely, the drawbacks will steadily decrease for most internet/mobile users while the benefits will increase in ways that mostly align with the interests of privileged groups (e.g., the highly educated, those in Western nations, upper classes, etc.).”
A professor of internet studies wrote, “There is not enough regulation in place to deal with the misinformation and echo chambers, and I doubt there will ever be enough regulation. The internet’s architecture will always allow end-runs around whatever safeguards are put in place. As EO Wilson said, ‘The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.’”
A North American strategy consultant wrote, “There will always be spin-offs of the Big Lie. Negativity wins over truth, especially when the volume is loud. Plus, there’s far too much money involved here for the internet companies to play ball.”
An educator who has been active in the Second Life online community wrote, “Human egos, nature and cognitive dissonance will continue to prevail. Political, marketing and evangelistic agendas will continue to prevail.”
A well-known UK-based professor of media history said, “I am gloomy with hopeful glints, but I frankly don’t, in my exchanges with policy makers, etc., believe that they are up to speed on this. There is a vanishingly small opportunity but presumably a real one to get the right or better policies and regulations in place so that the digital space is tipped in a positive way. There never has been and never will be a ‘medium’ that is inherently anything. It depends how they are used and regulated.
“Some ‘public-interest’ algorithms are being developed and some governments have at last woken up to the real challenges that dis/mis/mal information are causing. But it’s late. Plus, what is a good regulation in a democratic society is a bad one in an authoritarian one – so policy is quite complex.
“Looking at the changes in private and public lives over the last five years it is remarkable how uncivil public discourse has become so swiftly. It is the degradation of manners that is so dangerous. Manners require a taking into account of the experiences of others. In addition, the capacity of the foreign/domestic/rich to attack and protect their own interests online has grown exponentially.
“So, there might be a policy shift, there might be an ability to bring the big social media companies who profit from division to a more public-interest view of their power. But, right now, we are looking at the tabloidisation of life. There are some ways forward – vaccine hesitancy in the UK has been tackled really interestingly (locally and familiarly). On the other hand, the sense of collective values is weaker.”
A Chinese social media researcher wrote, “Whether or not we can tame big tech politically, there are so many other challenges to the architecture of the internet that everything is leaning toward being controlled and centralized, eventually becoming fragile enough to be further abused or to fall into worse perils. We need to redesign the internet, but many incumbents won’t yield, or – for the same reasons – they won’t let it happen.”
A North America-based entrepreneur said, “It seems clear that digital spaces will continue to trend toward isolationist views and practices that continue to alienate groups from one another. I foresee a further splintering and divide among class, race, age, politics and most any other measures of subdivision. The self-centered views and extreme beliefs will continue to divide society and erode trust in government, and educational and traditional news sources will continue to diminish. As individuals and groups continue to align with perspectives that match their own self-interests, we will continue to see an erosion of communication between disparate groups.”
A professor based in Oceania said, “While the online space offers incredible opportunities for collaboration, information-sharing and problem-solving, I see the increasing encroachment of states through amplification of narrow political messaging, control through regulation and adoption of technical tools that are less transparent/visible.
“The justification for increased surveillance to keep people safe – safe from threats from others who might threaten local livelihood, threat from viruses – will open up broader opportunities for state control of populations and their activities (much like 9/11 changed the public comfort levels with some degree of surveillance, this will be amplified even further by the current pandemic). Global uncertainty and migration as a result of climate change and threat will also accentuate inequity and opportunities to harness dissatisfaction.
“Increasing conservatism as a result of uncertainties such as Covid, climate change, digital disruption and changes in higher education towards an increased focus on job skilling rather than also developing critical thought and social empathy/citizenship understood in the broadest sense do not inspire much confidence in a brighter future. However, the growth in open science and global collaboration opportunities to address complex problems does offer real opportunity for progress on these broader global challenges.”
A North American entrepreneur wrote, “Technology is advancing at a rapid pace and will continue to outpace policy solutions. I am concerned that a combination of bad actors and diminishing trust in government and other institutions will lead to the continued proliferation of disinformation and other harms in digital spaces. I also am concerned that governments will ramp up efforts to weaponize digital spaces. The one change for the better is that the next generation of users and leaders may be better equipped to counter the negative trends and drive improvements from a user, technical and governance perspective.”
A writer and editor who reports on management issues affecting global business said, “I am not confident the disparate coalition of state, country and international governing bodies needed to correctly influence and monitor commercialized digital public spaces will be able to come to agreement and have enough clout to push back against the very largest and growing larger tech players, who have no loyalty to customer, country or societal norms.”
An analytics director for a social media strategies consultancy said, “I don’t think digital spaces and digital life have the capacity to improve substantially until we change how we operate as a society. The technology might change, but – time after time – humans seem to prove that we don’t change. The ‘net’ amount of change in digital spaces and digital life will not be substantially better. Certainly, there will be some positive change, as there is with most technological developments. I can’t say what those changes will be, but there will be improvements for some. However, there’s always the other side of the coin, and there will certainly be people, organizations, institutions, etc., that have a negative impact on digital spaces/life.”
An Ivy League professor of science and technology studies said, “Overall, voices of criticism and disenchantment are rising, and one can hope for a reckoning. The questions remain: ‘How soon? And what else will have become entrenched by then?’ Things don’t look good. There is near-monopolistic control by a few firms, there are complex and opaque privacy protections, and then there is the addictive power of social media and an increasing reliance on digital work solutions by institutions that are eager to cut back on the cost and complications of having human employees.
“Things might get somewhat better. Even a single case can resonate, like Google v. Spain, which had ripple effects that can be seen in the GDPR and California’s privacy law. But people’s understanding of what is changing – including impacts upon their own subjectivity and expectations of agency – is not highly developed. The buzz and hype surrounding Silicon Valley has tamped down dissent and critical inquiry to such an extent that it will take a big upheaval – bigger than the January 6, 2021, insurrection – to fundamentally alter how people see the threats of digital space.”
A professor of political science based in the U.S. said, “The only way things might change for the better is if there is a wholesale restructuring of the digital space – not likely. The majority of digital spaces are serving private economic and propaganda needs, not the public good. There is no discernible will on the part of regulators, governmental entities or private enterprise to turn these spaces to the public good. News organizations are losing their impact, there is no place for shared information/facts to reach a wide audience. Hackers and criminal interests are threatening economic and national security and the protection of citizens.”
A retired U.S. military strategist commented, “The financial power of the major social media platforms, enabling technology providers and competing macro political interests, will act in ways that enable maximum benefit for them and their financial interests. We need look no further than capitalist experience in other economic sectors, in which the industries of digital spaces have thus far not demonstrated a singleness or distinctive separateness from the type of economic power exercise and consolidation quite familiar to us in U.S. industry.”
A strategy and research director observed, “Digital spaces today polarize and feed people an increasingly small range of information. This information is also increasingly dispersed in short-video formats. There is no middle ground. Digital spaces, rather than representing the broader community or providing balanced views, simply produce data and information used for profit. Fifteen-plus years ago I could search for information that was freely and willingly shared. As a blogger I could share and link to information independently. Now it isn’t worth writing anything because you are highly unlikely to gain an audience even when you express interesting views that are well backed up by facts, science and balanced behavioral observations.
“Digital spaces have also been poisoned by the arrival of bots and AI. The hosts of the platforms escape legal responsibility for the gross untruths that many in society promote for their own purposes. There is no signal that society is becoming smarter or asking better questions as a collective or group.
“Tech leaders fail to look to ‘better questions’ and help their platforms accelerate learning rather than polarizing beliefs. The real difference between the blogoverse and digital spaces is that in the blogoverse the readers and writers helped correct each other and point the way forward. Today, the internet’s fact-checking mechanisms are much more complex, and search engines are just paid content. I suspect an increasing number of people who can pay a few dollars here and there will join communities where they can have civil discussions and learn without experiencing the ‘shaming’ or trolls that thwart conversation in more-public spaces today. The thinkers are already or increasingly going underground. The urgency appears to be increasing, given government actions and the stagnation of evolution in digital space.
“Because this question asked us to look 14 years out, I also looked 20 years back. What’s different? The current digital businesses are so locked into profiting on data that many are owned by the financial systems and, increasingly, by government. Look to recent events in India or to Twitter to see how governments are increasingly using these systems to increase their political power and isolate opposition.
“The optimist in me hopes we do see significant positive change, although I don’t think it can happen. Initiatives could shift the value equation to cooperative/community-based rewards systems for information at the personal level. This is more likely to happen outside the current financial/reward system. So, cryptocurrency would likely play a role, and digital assets and exchanges would aggregate P2P.
“A large trigger would be the open-source developments of biochemistry (CRISPR technologies) that enable gene-editing to sharply address the increasing tyranny of healthcare costs. By working to eliminate disease, cancers, etc., people will come to understand the value of sharing their genetic code despite the risks – pooling information for the common good means we learn faster than the government and the providers. When this trigger brings people back into learning, science may again have a role to play.
“But I do not see government/politicians or current tech leaders playing a role in positive change in the next 10 to 15 years. For a positive scenario to play out, wealth must be more evenly distributed. Because so much of today’s wealth is tied up in digital spaces and assets, how they evolve must include a redistribution. Making positive change also requires a rethinking of educational access and some return to meritocracy for accelerated access so a broader swath of the population can again prosper.”
An editorial manager for a high-tech market research firm said, “Elites are now firmly in control of emerging digital technology. The ‘democratization’ of internet resources has run its course. I don’t see these trends changing over the next few decades.”
A researcher at the Center for Strategic and International Studies wrote, “Absent external threats or strong regulatory action at the global or European level, the prospects of substantial positive improvement within the U.S. seem dim. There are a number of forces at work that will frustrate efforts to improve digital spaces globally. These include geopolitics, partisan politics, varied definitions and defenses of free speech, business models and human nature.
“In the West, U.S. technology companies largely dominate the digital world. Their business models are fueled by extracting personal data and targeting advertising and other direct or indirect revenue generating data streams at users. Because human nature instinctively reacts to negative stimulus more strongly than positive stimulus, feeding consumers/users with data that keeps them on-screen means that they will be fed stimulating, often-divisive data streams.
“Efforts to change this will be met with resistance by the tech companies (whose business models will be threatened) and by advocates of free speech who will perceive such efforts as limiting freedoms or as censorship. This contest will be fuel for increasingly partisan politics, further frustrating change. These conditions will invite foreign interests to ‘stir the pot’ to keep the U.S. in particular, but Western democracies overall, at war internally and thus less effective globally.
“The rise of a Chinese-dominated internet environment outside of the West, however, could provide an impetus for more-productive dialogue in the West and more beneficial changes to digital spaces.”
A Pacific Islands-based activist wrote, “While the problem of centralisation of the internet to the major platforms is clear to most, solutions are not. Antitrust/monopoly legislation has been discussed for decades but has not been applied. In fact, corporate concentration has been encouraged by nation-states in order ‘to produce local enterprises that can compete on the world market.’
“In addition, nation-states have profited from the concentration of communication in platforms in order to have a minimal number of ‘points of control’ and to gain access to the data that they can provide.
“In addition, some of the proposals aimed at controlling the behaviour of anti-competitive companies seem worse than the problems they are meant to solve, for instance, requiring such companies to censor or not censor, on pain of immense fines – in essence privatising government powers and leaving little to no ability to appeal decisions. This is already in place for copyright in many countries where the tendency is to expand the system to whatever legislators wish for. Governments can then proclaim that it is the companies that are doing the censorship, and companies can state they have no choice because the government required it, leaving citizens who are unfairly censored with little recourse.
“Another related area is the increasing push to limit encryption that is under the control of individual citizens. If states, or companies to which they have delegated powers cannot read what is being written, filmed, etc., and then communicated, then the restrictions on content proposed will have limited impact. But taking away encryption capabilities from individual citizens leaves them at the mercy of criminals, snoopers, governments, corporations, etc.
“The initial promise of the internet – to enable ordinary citizens to communicate with each other freely, as the wealthy and/or powerful have been able to in the past – seemed in large part to have been realised. BUT this seems to have shaken the latter group enough to reverse this progress and again limit citizens’ communication. Time will tell.”
The director and co-founder of a nonprofit organization that seeks social solutions to grand challenges responded, “We seem woefully unconcerned about the fact that we are eating the seed corn of our civilization. I see no sign that this will change at the moment, though we’ve had civic revivals before and one may be brewing. Our democracy, civic culture and general ability to solve problems together is steadily and not so slowly being degraded in many ways, including through toxic and polarizing ‘digital spaces.’ This will make it difficult to address this issue, not to mention any challenge.”
A network consultant active in IETF commented, “Advertising-supported digital services have an inherent need to encourage engagement, and the easiest way to do that is to promote or favor content that is divisive, promotes prejudice or otherwise stirs up enmity. These are exactly the opposite of what is needed to make the world better. In addition, the internet – which was originally based on open standards not only for its lower-layer protocols but for applications also – is increasingly becoming siloed at the application layer, which results in further division and unhealthy competition.
“Right now, I don’t know what incentives would encourage a change away from these trends. I have little faith that laws or regulations will have a positive effect, beyond protecting freedom of speech, and there are increasing, naive public demands for both government and tech industries to engage in censorship.
“A glimmer of hope may be in distributed peer-to-peer applications that are not dependent on central servers. But governments, network service providers and existing social media services can all be expected to be hostile to these. That’s not to say that there can be no change – the internet is constantly changing – but what I don’t currently see is any factor that would encourage people to see their fellow humans in greater depth and to look past superficial attributes.”
A researcher based in Ireland said, “Increasing corporate concentration, courts that favor private-sector rights and data use and politicians in the pockets of platforms will make things worse. People who are most made vulnerable in digital spaces will have decreasing power.”
An advocate for free expression and open access to the internet wrote, “Undoubtedly, the advent of the internet and other technologies has contributed to growth and development around the globe. People rely on the internet daily for almost all activities – from communication with loved ones to pursuing education, accessing and sharing life-saving information, working from home, and research and academic purposes, among others.
“Despite all these benefits the internet provides, I am skeptical about the next 14 years ahead given the increase in threats and challenges that has come with digital evolution. While it is true that the internet and digital spaces are empowering people, governments around the world are equally threatened by the liberation the internet provides and tend to impose or adopt policies in order to control information.
“Increasingly, governments are weaponizing internet shutdowns, censorship, surveillance and the exploitation of data, among other tactics, in order to maintain control. These practices in the next few years will negatively impact democracies and provide avenues for governments to violate the fundamental human rights of the people with impunity.
“Other stakeholders, including internet service providers and technology companies, are also complicit when it comes to the deterioration we are seeing in digital spaces. The recent revelation of how NSO Group’s spyware tool Pegasus was implemented in mass human rights violations around the world through surveillance, as well as the involvement of Sandvine in facilitating the Belarus internet shutdowns last year, illustrates some of these concerns.”
A futurist based in North America commented, “I anticipate plenty of change in digital life, however not so much in human beings. Almost all new and improved technologies can and will be used for bad as well as good ends. Criminality and the struggle for advantage are always with us. If we can recognize this and be willing to explore, understand and regulate digital life and its many manifestations, we should be okay.”
A digital security expert based in New York City wrote, “The problem is that the financial incentives of the internet as it has evolved do not promote healthy online life, and by now there are many large entrenched corporate interests that have no incentive to support changes for the better.
“Major platforms deny their role in promoting hate speech and other incendiary content, while continuing to measure success based on ham-fisted measures of ‘engagement’ that promote a race to the bottom with content that appeals to users’ visceral emotions.
“Advertising networks are also harnessed for disinformation and incendiary speech as well as clickjacking. (One bright spot is the great work the Global Disinformation Index is doing to call out companies benefitting from this promotion of dangerous garbage.) The expanding popularity of cryptocurrencies, built on a tremendous amount of handwaving and popular unfamiliarity with the technologies involved, poses threats to the environment and economy alike.
“We have also failed to slow the roll of technologies that profile all of us based on data gathering; China’s large-scale building of surveillance tools for their nation-state offers few escapes for its citizens, and with the United States struggling to get its act together in many ways, it seems likely more and more countries around the world will decide that China’s model works for them.
“And then there’s the escalation of cyberwarfare, and the ongoing lack of Geneva Convention-like protections for everyday citizens. I do hold out hope that governments will at least sort out the latter in the next 5-10 years.”
A teacher based in Oceania observed, “It has reached the point where people are all but forced to own and maintain a smartphone in order to conduct their daily lives. I cannot conceive of any scenario in which this trajectory will improve our lives in the area of social cohesion – more likely, digital spaces will continue to be marshaled in order to divide and rule.
“Many people are unaware of how they are being either manipulated or exploited or both. Some of them are not interested in key issues of the internet, its governance and so on. They are online as a matter of course and their lives are dependent on connectivity. They are not interested in how data is collected or whether everything they do with IT is either already being tracked or could be given to some entity that might want to use such data for their own ends.
“The most difficult issue to be surmounted is the increasing division between ‘camps’ of users. Social media has already been seen to enhance some users’ feelings of entitlement, while others report feeling unable to speak out in digital public spaces due to the chilling effects of other users’ policing.
“I believe this sort of fragmentation of society is not going to be improved, but only enhanced in the future – most obviously by those with digital ‘power’ (large companies such as Google, Facebook, Amazon, TikTok, etc.). It also seems as if nation-states are getting on board with widespread surveillance and law-making to prevent anyone from sticking their heads above the parapet and whistle-blowing – we have already seen many imprisoned or harassed for reporting online.
“Social fragmentation is also exemplified in areas such as online dating and the fact that many people don’t even know any more how to simply meet others in real life due to utter dependence on their mobile technology.”
A futurist and cybercrime expert responded, “The worst aspects of human nature, its faults, flaws and bias are amplified beyond belief by today’s tech and the anticipated technologies still to come. There is always a subset of people ‘hoping’ for humans’ kindness and decency to prevail. That’s a nice idea but not usually the smart way to bet.”
An internet pioneer wrote, “The major changes in society point to greater stratification in its wealth. So, for-fee subscription services will do a better and better job of serving public good while only serving the wealthy. Free services that compete will continue to profit from manipulation by advertisers and other exploitive actors. Thus, community spaces will get better and worse depending on their revenue models, and social problems will not be addressed. (Black swan events like a change in our economic system might change things. Don’t bet on it.)”
A director of strategic initiatives wrote, “Addressing the digital divide including access and literacy will require substantial commitment. I am not optimistic the necessary changes can occur by 2035.”
A distinguished professor at a major U.S. university focused on data practices and information policy observed, “The downward spiral of disinformation, deepfakes and ‘personal liberty’ outweighing concerns for democracy will continue.”
An attorney expert in international law said, “Anonymous speech is frequently used for harmful purposes, and the internet enables such speech to be heard by millions. It also provides an echo chamber, so viewpoints are reinforced and manipulated. Falsity can seem equivalent to fact on the internet thus the public does not have a common set of facts to inspire needed democratic action.”
A machine learning research scientist based in the U.S. noted, “There is no single internet. Thoughtful, beautiful and pro-social digital spaces will continue to develop and thrive on the internet while digital spaces that exploit and magnify the worst of humanity continue to expand. Ensuring companies build technologies that benefit society at large requires a complete rethinking of our approach to communication platforms, and in particular algorithmic ranking of content and communications. I am not optimistic that government bodies are up to the task of writing regulation preventing harms derived from these technologies. My only hope is that we may establish a body of law detailing how companies are liable for the emergent social behaviors encouraged by their algorithms.”
An anonymous respondent wrote, “Commercialization, polarization and increasing silos will not be overcome.”
An internet pioneer who helped lead its diffusion in Southeast Asia wrote, “I am afraid that big economic interests will continue to be dominant, not the public good. Also, self-interested state power seems to be assuming a stronger role and this does not always include the popular/civil society voices who are concerned with the public good.”
The founder and director of a U.S. state’s resilience network observed, “My primary concern is that individuals select digital spaces that speak to them such that diverse views are lacking. As well, the use of AI will eliminate much of the ‘accidental’ stumbling into random information or opportunities. It already does, and I fear this will worsen and be a very narrow slice of perspective and information. I worry people just gravitate to others who think like them, hear advertising that is so perfectly targeted, and have no reason to think critically about what is coming their way.
“Something also needs to change with monopolies on information sharing: conglomerates like Facebook/WhatsApp/Instagram limit the viability of alternative platforms that lack the same financial power, thus limiting those alternatives’ exposure and opportunity to be used by more people. I have all kinds of hope for how things could change for the better – innovative technology that somehow represents multiple voices and perspectives, not just those who pay to play.
“I know I can’t envision the world my children will be working in (they are 3 and 5), and the digital spaces they will have opportunities to inhabit.”
A staff attorney for a global internet rights organization wrote, “I take a U.S.-centric point of view as I am an American. Wealth in the U.S. is extremely unequally distributed and I don’t see any ‘organic’ trends that suggest that this inequity will reduce, period. The standard of living at the poorer end could go up, and I am doubtful that the rich will not get richer. In my mind, the issue is whether the gap between rich and poor increases or decreases – are the poor catching up or not? And I think we would not think this way if the statistics on the superrich weren’t so crazy. I am not saying this is always the right policy approach, in every country, at all times in history, but this is a time of especially great inequality and I do not think it is good for our children.
“Much of the new wealth is certainly tied up with tech. Bezos, Gates, Musk. I see no reason to expect the vast majority of the wealth in the U.S. to depart from its trajectory of increasing itself. I do not associate wealth with altruism. Our politics does not help. It may not just be the situation in the U.S., it may be global, that we can see authoritarianism all over the world.
“In particular, facts, truth, science, reason, are not valued by many in public political discourse. We have significant minorities in the U.S. who do not believe that Joe Biden won the election. And then there’s the Covid-19/pandemic. Every day, I am amazed by stories about how people just won’t get vaccinated, partly because they don’t think it’s a big deal. Some of this is bullshit politics, some of this is cultural (Black folks who have very different reasons to not trust the U.S. government), some of this is the weirdness of religion, but I will tell you that I did not expect how large the numbers would be.
“Finally, white people in the U.S. are so racist it’s kind of amazing. I do not see this changing except via generational change and birth rates. I’m not white; I married a white woman and have two mixed-race kids. They are the change. Unfortunately, voting rates in these younger cohorts are low. And so much of what we’re seeing right now – destabilizing elections, voter suppression – is about trying to prevent generational change. It’s a modern ‘lost cause’ again in service of white status.
“In conclusion, it is July 25, 2021. We managed to elect Biden, and definitely things are better, but I don’t see any relief for the poor coming. If you think digital spaces will get better by 2035, please show me a plausible path to that outcome.”
A professor of humanities based in Australia said, “States will retain further control over the ways in which online spaces are accessed. Covid-related surveillance measures and data capture widen the potential for data and activity cross connections to be made that may hinder free speech. Positively, climate change may drive counter measures that enable broader action towards global good. Negatively, increased migration pushback may increase the rise of right-wing protectionism, racism and parochial attitudes to be disseminated and mobilized.”
An educator and director based in Texas commented, “Given the realities of political gridlock we will not see the sort of government regulation or economic reform necessary to make substantial improvements in digital spaces like Facebook or YouTube, where misinformation runs rampant.”
An experienced sociological and demographic researcher commented, “Due to the way national governments are trending toward populist and autocratic forms I expect more uses of the internet to create connections among negative forces. The reasons for nations trending toward the autocratic – as I see it – include the increasing international migration which is altering populations and threatening native-born groups. This can be labeled as ‘white rage,’ but it happens in all countries.
“Another reason for my pessimism is the financialization of markets, which is a form of capitalism most notably devoid of local connections and relationships, and indeed, eschews them. I don’t know what the solution is, but I suspect some help may come from finding a way to hold the big internet behemoths liable for their created environments and the violent, vitriolic behavior they enable. In the U.S. that might be done through lawsuits. It would be nice if corporate bigwigs did jail time for their misdeeds, but since we don’t see it for oil rig environmental destruction and deaths, why would we see it anywhere else?”
A communications professor based in Canada said, “Things will always change, but they aren’t going to change universally one way or another, even if laws change because fundamentally the technologies exclude peoples, etc.”
An expert on cybersecurity and cyberspace noted, “Digital spaces are not regulated and, unfortunately, when left to their own devices, people tend to disseminate misinformation. We have already seen this even with the most basic well-known facts. The web is an amplification tool for misinformation, and with social media, search, and other algorithms, people are fed content that reinforces misinformation.”
A sociologist who studies the social and cultural impact of the internet noted, “Key problems are online security, safety and harassment. These are all deteriorating, and they will continue to deteriorate. Related problems are the easy spread of fake news, false information and lies online. None of these problems have technical solutions and no other solutions (or mitigations) are on the horizon.”
A professor based at a national technological institute in Europe said, “2035 is very soon for such a revolutionary change, so I do not believe it will be accomplished by then.”
A marketing and business consultant based in Ohio wrote, “We are at a point where there is too much digital insertion in people’s lives. They are not processing it well and it shows in society. Humans are beginning to interact with other humans less. People are becoming addicted to digital life, very similarly to addiction to alcohol or other substances. There are increased cases of depression and suicide. We should stop for a while.”
A CEO and editor-in-chief based in South Africa wrote, “It is becoming obvious that technology’s developments are widening the digital divide between advanced nations (mostly in Europe, Middle East, Asia and North America) and stagnant ones. The backward countries are using tech to bolster quasi-democratic regimes like those in sub-Saharan Africa. This negative trend will worsen in the future.”
A professor of information production and systems who is based in Japan said, “I can no longer imagine a world without the internet. But for it to continue to exist is to make the world worse and to make people unhappy.”
A North American technology professional wrote, “In 2035 there will be a mixture of good and bad impacts of the use of digital spaces, very similar to what we see today. This will fuel more isolationism of radical ideas and more divisiveness. However, I also expect that online communities will continue to develop and thrive in positive ways.”
A professor based in Australia responded, “There are certain internet actors who intentionally capture, groom and indoctrinate large groups of people for monetary gain. Unless processes are developed to manage this kind of activity on the internet, the value of the internet resource will be degraded along with the stability of societies.”
The president and co-founder of a U.S. software company observed, “I don’t believe that internet governance, laws, or regulations will effectively challenge the divisive discourse and misinformation enabled by profit-driven profiling and amplification of the most engaging content. Although it is possible to envision a world in which Facebook-like micro-targeting and amplification is outlawed, I don’t believe that there is sufficient public will to overcome the entrenched interests.”
A professor of public affairs whose research is focused on the governance of AI wrote, “I am concerned about the centralization of power in digital spaces that has been made worse by the pandemic. I suspect that everyday citizens will have less power compared with major tech companies or governments.”
A professor and researcher commented, “It is not clear why we would necessarily expect digital spaces and digital life to be substantially ‘better’ by 2035. Technologies change in complex ways. For instance, is television better or worse today than it was 14 years ago? The answer, clearly, is both. And there is no particular reason for optimism in this case. Our institutions and the technology itself will clearly evolve but the net effect of those changes against the various normative and empirical criteria we might consider is not clear.”
A professor of economics and senior research fellow at a center focused on the future of social policy responded, “Most digital spaces are privately owned and operate for profit. They are also very lightly regulated. Together this is a recipe for the negative side of digital spaces to rule.”
A futurist/consultant based in Europe said, “Cancel culture and misinformation will be on the rise, politicians will manipulate the ‘truth’ for their own gain.”
A communications professor wrote, “The profit system – capitalism – will continue the negative trend.”
A managing director working in the space of sustainable technologies for cities responded, “Government, especially local government, cannot keep up with private sector developments. Local government does not have the resources or capacity to preempt, regulate or respond. Bad actors can penetrate local government defenses. Actors with aggressive business models such as Uber or Facebook or Nextdoor find avenues to citizens that local government cannot regulate or monitor. There are two distinct issues here: 1) How does government protect its citizens from bad actors and those with aggressive business models? 2) How does government protect its own data? The first is almost impossible, the second will take a concerted effort using best practices for data governance.”
A global public policy consultant wrote, “Digital spaces will become less unproductive and ugly only when the platforms themselves, those spewing lies and empty rhetoric, and people take a stand for good. (Good doesn’t mean we agree; good means we are honest and forthright.) Unfortunately, there doesn’t seem to be wholesale interest in improving conditions online. Mean girls IRL [in real life] are mean girls online. Children are not necessarily being taught the value of civics and civic life. There is little agreement about the American experiment. To the last, who really thinks about the American experiment?”
A professor of public administration based in the U.S. South said, “Government doesn’t appear to really have a set of expectations around social media and digital spaces, so any policy that would shape these spaces is likely to work at the margins. It’s not going to handle trolls, misinformation and lying, conspiracy theorists, misinformation, and the like very well. People will continue to sort into information silos and bubbles, and at least one cable TV network will continue to trade in lies as a means by which to attract viewers. These tendencies are baked into the media system today. The only way things change for the better is if something fundamentally different comes along to break the stranglehold that Facebook and Twitter have on social media and public discourse. These two platforms have proven to be toxic and ungovernable.”
The president and founder of an internet architecture company wrote, “There are movements on the left to suppress speech, speakers and conservative ideas, aided and abetted by left-leaning leadership at major tech and media companies. Coordination among the Democratic Party and Leftist movements on those same platforms to spread disinformation and conduct mob attacks to delegitimize valid opposing speakers and content.”
A senior economist and risk strategist said, “Cybercrime is on the upswing, governments are increasingly involved in hacking each other and state surveillance of citizens is on the increase. While there are some significant positive advances, there is a real down and dirty downside reflected by ransomware, malware and hacking. The internet and related technology are great things, but we are still in the Wild West in terms of controlling the negative aspects.”
A North American professor responded, “The division between rich and poor, haves and have-nots, north and south will all increase unless we take significant action. Digital spaces are no different than physical spaces.”
An independent technology writer observed, “Digital spaces can be a positive force for those who understand how to use them effectively – nurturing the real information sources, and suppressing the trolls (negative sources of information). But most people using digital spaces don’t understand how to do that, and in such cases digital spaces tend to form a negative feedback loop. They see the worst and only the worst because that’s what is most engaging, thus spending even more time in digital spaces.
“I just don’t see anything with a reasonable possibility of reining in this tendency toward the negative feedback loop. Facebook was literally built on negative feedback loops, Twitter is just a shouting contest, and I haven’t got the energy to attempt to engage with newer forms of social media; they just don’t seem worth my time.”
An activist and voice of the people wrote, “In the digital realm it is too easy to lie and be nasty with no personal repercussions.”
A professor of political communication based in Hong Kong observed, “Human nature will not change before 2035 and digital technologies will intensify their negative impact on civil society through more-sophisticated micro-targeting, improved deepfake technologies and improved surveillance technologies. Minimizing negative impacts will require government regulation, which is too difficult to accomplish in democracies due to strong lobbying and political polarization. Authoritarian countries, on the other hand, will use these technologies not only to suppress civil society, but also to gain a technological advantage over democracies.”
A digital activist wrote, “Digital spaces will be normalized, like newspapers, television. Digital life will continue to grow in many ways. I do worry about humans’ ability to read and write, since more and more information will be oral, voice commands and ‘voice’ replies.”
A North American research scientist said, “Sadly, looking back at the idealism in the 1990s such as the thinking of individuals like John Perry Barlow and the Electronic Frontier Foundation, I think we have to CANCEL the idea that digital technology is a force for democratizing, liberating, progressive political action. Recent phenomena conclusively reject these utopian notions.”
A data scientist and fellow of the American Statistical Association commented, “You asked, ‘Can digital spaces and people’s use of them be changed in ways that significantly serve the public good?’ Looking back to 2005 and comparing it to today, I don’t see technology unilaterally improving public life.
- It has become much easier to communicate in mobile ways – except my teenage children never pick up their phones.
- It has become much easier to catch up with friends very far away in ways unimaginable 30 years ago – except that social media platforms produced echo chambers full of outright lies and conspiracy theories.
- It has become significantly easier to shop online – and get your identity stolen, too.
- Ethics in AI, from credit scores to automated filtering of resumes to predictive policing, is an afterthought at best and not a design requirement – it is only safe to travel by airplane when it is designed to be safe, unlike Boeing 737 MAX.
“Given how polarized American society is, and how uneven the penetration of technology is on the urban-rural spectrum, I don’t foresee people in the U.S. agreeing on regulating the tech to the extent that it will mostly bring public good. Maybe other countries that are better in coming to social cohesion would be more successful with that. Less densely populated areas do not have the critical mass to create demand for a great variety of apps that are inherently local. A small town with population of 10,000 cannot support Uber. Everybody has a car anyway, and at any given time, there will be 0.5 drivers available with a wait time of 30 minutes, so it’s just not worth it. This part of the urban life will simply not gain traction.”
The director of a European nation’s center promoting internet safety for children wrote, “I am an optimist. I see young people using technology positively and thoughtfully, and I have seen great bravery and activism online. The threat of regulation of online services has provided a useful boost to issues around child protection, but there is much further to go. Regulation will have an impact.
“Tech companies’ responsiveness to user reporting should improve – the use of machine-learning will help improve this environment. Initiatives like the Age-Appropriate Design Code in the UK can help to bake children’s best interests into new tech services as well as existing ones.
“While politicians are devising some regulation, they are also a part of the problem, particularly due to their clumsy language and behaviour, which maps over spikes in online hate. Better political discourse will have a positive effect online, too. End-to-end encryption expansion may weaken current industry child-protection measures where they are being introduced on services that had not implemented it before.”
A senior systems engineer based in Canada said, “There will be legislation and regulation that comes with maturity and experience with digital tools. There will also be new entrants who focus more on the common good and not on massive financial gain. There will also be peer-to-peer options without any ownership at all. However, manipulators and con artists will always be with us.”
An international economics and e-transformation expert wrote, “I am fundamentally an optimist and believe that our online and political representatives will correct the distortions that have developed allowing the distribution of false information with political objectives. Major players such as social media companies need to take action or new regulations will be established.”
A Canadian multimedia journalist and consultant wrote, “The digital space is now recognized as more powerful than the in-real-life space, mainly because of its ease of access and increased isolation, recently bolstered by COVID-19. More professionals across all fields recognize that the negative impact of false information online can destroy the public sphere, perceptions of reality and social structures. If the digital space does not become a more civil and structured place, worldwide criminality will subsume current equality and economic justice. People are already demanding more of a role in designing their digital spaces. Digital redirection is already underway.”
A policy scholar said, “When new means of communication appear they reflect people’s thinking rather than affect it. The dark thoughts we blame on the internet now have long been present and circulating before this technology appeared and matured: see everything from the Klan and the Birchers to conspiracy theories about cloud seeding, the JFK assassination and moon landing, to 9/11 Truthers, to the anti-Catholic pamphlets of Jack Chick. The internet only made that thinking more transparent. Such transparency ultimately opens these thoughts to investigation and disputation long-term. The process is slow, but that’s because people want to believe these ideas, not because they’re circulated with a new technology versus an old one.”
A futures strategist and consultant commented, “By 2035, AI will become an integral part of computer technology. As long as the AI platform is directed at supporting humanity there will be growth of society. Society will be of a higher degree technology-oriented in all areas. Labor will have to be retrained as technology advances. Rather than turning labor out on the street, employers will need to use technology to make a better workforce; high turnover rates in labor hinder market growth. The one pitfall is if AI becomes self-aware and sees humanity as a threat or is too destructive by its own nature. This could create a new caste system, relegating families to stay in the same social structure and stifling creativity and the development of new products.”
A researcher of digital trends and social behavior wrote, “By 2035 there will be more transparency of public data policies and enhanced data citizenship. Data governance should be applied in all levels of public space and data ethics should be implemented in all organizations.”
A communications expert and associate at the Center for Strategic and International Studies said, “Even as global governments try to control digital spaces, innovation and the desire for freedom will find new outlets and means of expression.”
A research professor of international affairs observed, “Governments are learning they must regulate, but they now view data as an issue of sovereignty.”
A technology developer responded, “Reforms in digital spaces need to provide safety for children and a ‘civil’ society. Safety for children involves reducing pornography and other content harmful to children by enabling content to be classified by type on the public network within a country. The U.S. needs to follow certain countries which absolutely ban these features. The harmful nature of digital-only life should be explained to children.
“On the positive side, education and support for internet-enabled schooling, technologies and medical care will increase. I would have liked to answer both yes and no. The current highly politicized nature of many of those in the U.S. government – based on ideological differences with no regard for the overall good of civil society – makes the future dim.
“It is not technology that causes the problem, it is those who believe that they know ‘the correct way’ and wish to force everyone else into it. A civil society allows for debate without denigrating the opponent. Today we are seeing the denigration of:
- The police forces trying to protect cities, by those cities’ elected officials, without regard to their influence.
- The ‘other’ political party as ‘evil’ and ‘monsters’ without regard to their responsibility in the efforts.
- Some people’s lifestyle choices by those who don’t agree with them.
- Children and vulnerable adults.”
A professor who is an expert on the politics of inequality commented, “We have the potential to use digital media and digital platforms for the social good, but whether we succeed is by no means assured. People’s actions using digital media are simply extensions of existing social relations. These can be hierarchical, unequal power relations or horizontal. They can be ‘bridging’ or ‘bonding’ social capital. We often fall into the error of supposing that digital media are transformative in and of themselves. In fact, they accelerate and extend the trends in our society. So, if we have the capacity to use the new communications technologies to expand our sense of common good, common purpose, we can take advantage of their ability to connect people who are widely separated in space.”
An expert in communications and technology law and politics commented, “It could go either way, but it is more likely that digital life will not get substantially better. Creating a more-positive digital environment will require effective government regulation at both the national and international levels. Misinformation and disinformation make it harder for governments to address other critical problems such as global warming and climate change, increasing economic and racial inequality, and the spread of coronavirus and other diseases. But, in the U.S., the populace is deeply divided and the political system largely dysfunctional, making effective government regulation unlikely. Cooperation at an international level is even more difficult. The tech companies have way too much power and money and lack incentives to work for the good of all.”
A 30-year veteran of Internet and Web development said, “Maybe – if we are lucky – over the next decade or two various digital spaces and people’s use of them will change in ways that serve (or seem to serve) the public good (within an evolving definition of that term) to an extent greater than they do today. It is likely that the ‘digital oligarchy,’ as well as Wall Street, is going to fight tooth-and-nail to maintain the status quo. In the meantime, we are barreling headlong toward a country that is isomorphic with Huxley’s ‘Brave New World,’ Collins’ ‘The Hunger Games,’ Atwood’s ‘The Handmaid’s Tale,’ etc.; cf. Chris Hedges’ ‘American Requiem’: ‘An American tyranny, dressed up with the ideological veneer of a Christianized fascism, will, it appears, define the empire’s epochal descent into irrelevance.’”
An associate professor of philosophy said, “There isn’t a yes or no answer. The fundamental and underlying problems are 1) capitalism and 2) white supremacy, and as long as those are not meaningfully addressed the fundamental problems with digital spaces and digital lives will evolve but not get substantively better. Those are the two main factors driving what’s toxic and harmful about digital spaces and digital life, and framing the issues as stemming from tech obscures the fact that the issues actually stem from capitalism and white supremacy as political forces.”
An Australian researcher of cyberculture said, “The environment on Earth is being eradicated by the number of people on it. Projected numbers show a worst-case scenario of over 10 billion people trying to get enough resources out of this planet to stay alive. The worst-case-scenario will continue to be the massive numbers of disenfranchised people who have: 1) No resources to live. 2) Blame that lack on environmentalists and scientists trying to save the planet. The blame game will continue to escalate online, those shouting the loudest (the pro-Trump domestic terrorists) will continue to outshout the forces of reason.”
A senior research scientist in artificial intelligence at a major research institute predicted, “In 2035 there will be more-equitable access to information and services and a reduction in people’s carbon footprint (there will be less use of paper, and travel will become more purposeful, not ‘just’ to get to the office). Digital transactions will be safer, and researchers and common folks alike will be able to search for multi-modal information. However, tackling misinformation will become an even bigger issue, though technology will be able to help there, too. Government oversight will be harder.”
A technology professional based in North America responded, “In 2035 the continual rollout of electronic payment and ordering will foster innovations in the delivery of goods and services. It will lower costs and increase competition while raising flexibility in employment and other markets. It will be hindered by poor security, theft of financial assets and slow-moving government initiatives.”
A professor of information science based in Illinois wrote, “In 2035 I expect to see further development of unique digital cultures and subcultures, improved technology for meeting-at-a-distance and advances in the evolution of the workplace. Problems I expect to remain include the intensification of the segregation of groups online and the continued expansion of ‘different realities.’”
An expert in organizational communication commented, “Corporations have taken over the internet. Governments serve corporations and will allow them to do as they wish to profit. Nothing really can be done. Money speaks and the people don’t have the money. The marketplace is biased in favor of profit-making companies.”