This page holds hundreds of responses from experts who were asked in a Summer 2021 canvassing if the toxic side of digital public forums such as social media platforms can be significantly improved by 2035.
Critics say activities on social media platforms are damaging democracy and the fabric of society. Can these digital spaces be improved to better serve the public good by 2035? How? If not, why not? Researchers at Elon University and the Pew Research Center’s Internet and Technology Project asked technology innovators, entrepreneurs, analysts, academics and digital professionals to examine the forces at play and suggest solutions. They were invited to share their insights via a web-based instrument that was open to them from June 29-Aug. 2, 2021.
The responses on this page are all from experts who were willing to take credit for their comments. The report with full analysis is here. This long-scroll page has a brief outline of major findings of the report, followed by all of the for-credit responses with no analysis, just the comments.
The Question – Bettering the digital public sphere: A piece in The Atlantic by Anne Applebaum and Peter Pomerantsev, “How to Put Out Democracy’s Dumpster Fire,” provides an overview of the questions being raised about the tone and impact of digital life. Today people are debating big ideas: How much harm does the current online environment cause? What kinds of changes in digital spaces might have an impact for the better? Will technology developers, civil society, and government and business leaders find ways to create better, safer, more-equitable digital public spaces?
Looking ahead to 2035, can digital spaces and people’s use of them be changed in ways that significantly serve the public good? Yes, or No?
862 respondents answered
- 61% said by 2035, digital spaces and people’s use of them will change in ways that significantly serve the public good.
- 39% said by 2035, digital spaces and people’s use of them will NOT change in ways that significantly serve the public good.
- It is important to note that a large share of those who chose “yes” – that online public spaces will improve significantly by 2035 – said this was their “hope” only and/or wrote in their answers that the changes between now and then could go either way. They often listed one or more difficult hurdles that must be overcome before that outcome can be achieved. The simple quantitative results are not fully indicative of the complexities of the challenges now and in the future. The important findings are found in the respondents’ rich, deep qualitative replies; the key findings are reflected in the most commonly occurring insights shared there.
Qualitative responses on this page were initiated by this follow-up prompt: If you answered “yes,” what reforms or initiatives may have the biggest impact? What role do you see tech leaders and/or politicians and/or the public playing in this evolution? What could be improved about digital life for the average user in 2035? What current problems do you see being diminished? Which will persist and continue to raise major concerns? If you answered “no,” why do you think digital spaces and digital life will not be substantially better by 2035? What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces may not much change?
Among the key themes emerging in hopeful respondents’ qualitative replies were:
* Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization and – in order to accelerate user engagement – their algorithms favor extreme and hateful speech. They said social media platforms have come to dominate the public’s attention to the point of replacing journalism and other traditional sources in providing information to citizens. These experts argued that surveillance capitalism is not the only way to organize digital spaces. They predict that better spaces in the future will be built on algorithms designed with the public good and ethical imperatives at their core. They hope upgraded digital “town squares” will encourage consensus rather than division, downgrade misinformation and deepfakes, surface diverse voices, kick out “bozos and bots,” enable affinity networks and engender pro-social emotions such as empathy and joy.
* Government regulation plus less-direct “soft” pressure by government will help shape corporations’ adoption of more ethical behavior: A large share of these experts predicted that legislation and regulation of digital spaces will expand; they said the new rules are likely to focus on upgrading online communities, solving issues of privacy/surveillance and giving people more control over their personal data. Some argued that too much government regulation could lead to negative outcomes, possibly stifling innovation and free speech. There are worries that overt regulation of technology will empower authoritarian governments by letting them punish dissidents under the guise of “fighting misinformation.” Some foresee a combination of carefully directed regulation and “soft” public and political pressure on big tech, leading corporations to be more responsive and attuned to the ethical design of online spaces.
* The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will bring improvements: A share of these experts predicted that the public will apply more pressure for the reform of digital spaces by 2035. Many said tech literacy will increase, especially if new and improved programs arise to inform and educate the public. They expect that people who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces. Some experts noted that this is how every previous advance in human communication has played out.
* New internet governance structures will appear that draw on collaborations among citizens, businesses and governments: A portion of these experts predict the most promising initiatives will be those in which institutions collaborate along with civil society to work for positive change that will institutionalize new forms of governance of online spaces with public input. They expect these multistakeholder efforts will redesign the digital sphere for the better, upgrading a tech-building ecosystem that is now too reliant on venture capital, fast-growth startup firms and the commodification of people’s online activities.
Among the key themes emerging in worried respondents’ answers were:
* Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears, these experts argued. In a digitally networked world in which people are constantly surveilled and their passions are discoverable, messages that weaponize human frailties and foster mis/disinformation will continue to be spread by those who wish to exert influence to meet political or commercial goals or cultivate divisiveness and hatred.
* The trends toward more datafication and surveillance of human activity are unstoppable: A share of experts said advances in digital technology will worsen the prospects for improving online spaces. They said more human activity will be quantified; more “smart” devices will drive people’s lives; more environments will be monitored. Those who control tech will possess more knowledge about individuals than the people know themselves, predicting their behavior, getting inside their minds, pushing subtle messages to them and steering them toward certain outcomes; such “psychographic manipulation” is already being used to tear cultures asunder, threaten democracy and stealthily stifle people’s free will.
* Haters, polarizers and jerks will gain more power: These experts noted that people’s instincts toward self-interest and fear of “the other” have led them to commit damaging acts in every social space throughout history, but the online world is different because it enables instantaneous widespread provocations at low cost, and it affords bad actors anonymity to spread any message. They argued that the current platforms, with their millions to billions of users, or any new spaces that might be innovated and introduced can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.
* Humans can’t keep up with the speed and complexity of digital change: Internet-enabled systems are too large, too fast, too complex and constantly morphing, making it impossible for either regulation or social norms to keep up, according to some of these experts. They explained that accelerating change will not be reined in, meaning that new threats will continue to emerge as new tech advances arise. Because the global network is too widespread and distributed to possibly be “policed,” these experts argue that humans and human organizations as they are structured today cannot respond efficiently and effectively to challenges confronting the digital public sphere.
Responses from those taking credit for their remarks
Some respondents chose not to provide a written elaboration, so fewer than the full 862 are recorded here. Some of the following are longer versions of responses that appear in shorter form in one or more places in the survey report. Anonymous responses are carried on a separate page. These comments were collected through an opt-in invitation to more than 10,000 people, asking them to share their responses to a web-based questionnaire in Summer 2021.
Yvette Wohn, associate professor and director of the Social Interaction Lab at New Jersey Institute of Technology, said, “Increasing development of remote technologies will lead to more-immersive online experiences. What used to be the realm of digital games will expand into other segments of life, including education, entertainment and business.
“In this process, there will be a struggle – especially in certain geographical locations – over the deployment/development of broadband infrastructure and over whether the internet should be considered a public good. How local or federal governments decide on this will greatly influence equity and economic disparity. Rural areas where the government takes more initiative in subsidizing and developing high-speed internet will rise out of poverty, as tired city dwellers now have the opportunity to work remotely.
“Education, particularly higher education, will be slowest to change curriculum to support a hybrid online/offline experience. Now that most knowledge can be obtained online and college degrees are commonplace, universities will struggle to remain relevant. Many universities will close. In K-12, the increased demand on teachers to engage both in-person and digitally will cause a massive shortage of personnel. Unless the federal government raises teachers’ wages, there will be a proliferation of many smaller private schools that will create education opportunity disparity.”
Cory Doctorow, activist, journalist and author of “How to Destroy Surveillance Capitalism” and many other books, said, “The move to lower switching costs – by imposing [platform] interoperability on online spaces – will correct the major source of online toxicity – the tyranny of network effects. Services like Facebook are so valuable (due to network effects) that users are loath to leave, even when they have negative experiences there. If you could leave Facebook but still connect to your Facebook friends, customers and communities, then the equilibrium would shift – Facebook would have to be more responsive to users because otherwise the users would depart and it would lose money. And if Facebook wasn’t responsive to user needs, the users could take advantage of interoperability to leave, because interoperability means they don’t have to give up the benefits of Facebook when they go.”
Craig Newmark, the founder of Craigslist, now leading Craig Newmark Philanthropies, commented, “Social media becomes a force mainly for good actors when the platforms (and mass media) no longer amplify disinformation; I hope for this by 2035.”
Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Communications, wrote, “First, a couple of caveats. Yes/No answers obviously simplify such a very large question. I would choose Both, if possible. And we have not clearly defined ‘digital spaces.’ If we assume that for most purposes ‘digital spaces’ refers to the largest social media services, then I do believe these businesses will continue spending to make their services more appealing to the masses, and to avoid regulatory responses that could curb their growth and profitability. They will look for ways to support public initiatives towards confronting global warming, advocating for diversity and equality, and optimizing our civic infrastructure while supporting innovators of many stripes.
“To serve the public good, social media services will likely need to re-evaluate their business models, innovate on identity and some degree of digital embodiment, and scale up automated content moderation in ways that may challenge their business models.
“Regulators will likely need to be involved to require more guardrails against misinformation/disinformation, memetic ideologies, and exploitation of the ad model for micro-targeted persuasion. However, this discussion often overlooks the reality that people have flocked to social media and continue to use it. Surveys continue to show that most users don’t change their behaviors, and when things become problematic they often want regulators to hold the companies accountable rather than taking responsibility themselves. So, part of this may simply be about maturing digital literacy. We are, after all, living through the largest disruption since the printing press and we’re less than 30 years into it.
“Social media has aggregated billions of users in a decade. We will grapple with the consequences for decades more. This discussion has so far focused only on the big social media services, but there are many other ‘digital spaces’ – in games, online forums, messaging platforms, and the long tail of smaller niche groups both on the public internet and in dark nets. And this is probably the true nature of 2035: there will be just as much – maybe more – fragmentation of the social commons through this proliferation of ‘digital spaces.’
“So, it will be difficult to recover a shared collective sense of what the world is – ideologically, culturally, politically, ethically, religiously, etc., when people are scattered across innumerable disembodied and non-local digital networks. It’s very easy for fringes to connect and coordinate across the globe. Will this fact change by 2035? Or will it continue to deconstruct the social, political and economic mechanisms that are meant to contain such problems?”
Anna Andreenkova, professor of sociology at CESSI, an institute for comparative social research based in Europe, wrote, “Any innovation or social change always evokes concerns about its consequences. These concerns are often expressed in a very radical manner. Over the centuries, eschatological or catastrophic consequences have been predicted for most emerging processes or innovations, and most of these worries are eventually forgotten.
“Digitalization of life domains is certainly not straightforward or easy. But in the end it is inevitable and unavoidable. What is really important to discuss is how to minimize the negative sides. Attempts to censor or ‘clean up’ digital space by any actors – private or public – will not be possible or beneficial. People will have to learn and adapt to living in an open information space – how to sort the fake from the real, the trustworthy from the untrustworthy, the evidence-based from the interest-driven. Digital education is a more fruitful approach than any limitations or attempts at guiding in a paternalistic way.”
Alexa Raad, chief purpose and policy officer at Human Security and host of the TechSequences podcast, said, “Fundamentally, the same aspects of human nature that have ruled our behavior for millennia will continue to dictate our behavior, albeit with new technology. For example, our need for affiliation and identity, coupled with our cognitive biases, has bred and will continue to breed tribalism and exacerbate divisions. Our limbic brains will continue to overrule rational thought and prudent action when confronted with an emotional stimulus based in fear. The incentives for our elected officials are not aligned with the public good. There will likely be some regulatory reform, but it will likely not address the root cause.
“Transformation and innovation in digital spaces and digital life have often outpaced the understanding and analysis of their intended or unintended impact and hence have far surpassed efforts to rein in their less-savory consequences. Business models drive innovation. Advertising has driven innovations in the design of digital spaces as well as innovations in machine learning. For example, advertising – which is the primary business model for tech behemoths like Facebook, Google and Twitter and even relative newcomers like TikTok – requires algorithms that engage and elicit an emotional response to reliably predict an action (be it to buy a product or buy into a system of beliefs). It is hard to see new business models emerging that have the same pull on the human psyche and behavior and the same economic immediacy and return.
“Role of politicians and tech leaders: Unfortunately, without significant and fundamental reforms in our system of government, the incentive for politicians is less about public service and transparency and more about holding on to power and re-election. So long as the incentives for our government representatives are misaligned with the public interest, we can expect little in the way of meaningful reform. So long as internet services and their delivery continue to be consolidated (think more and more content being pushed into content delivery networks and managed by large infrastructure plays like Amazon AWS), tech leaders will have greater power to push their own agendas and/or influence public opinion.
“Improvements in digital life for the average user: The pandemic has already proven that many of the services we assumed required traditional physical delivery can now be reliably performed online. AI will increase access to a basic level of medical diagnostic care, mental health counseling, training and education for the average user. Advances in augmented reality/virtual reality will make access to anything from entertainment to ‘hands-on’ medical training on innovative procedures possible without restrictions imposed by our physical environment (i.e., geography). Advances in the Internet of Things and robotics will enable the average user to control many aspects of their physical lives online by directing robots (routine living tasks like cleaning, shopping, cooking, etc.). Advances in biometrics will help us manage and secure our digital identities.”
Shel Israel, author of seven books on disruptive technologies and communications strategist to tech CEOs, responded, “Positives for 2035: 1) Medical technology will prolong and improve human life. 2) Immersive technology will allow us to communicate with each other with holograms that we can touch and feel, beyond simple Zoom chatting or phoning. 3) Most transportation will be emissions-free. 4) Robots will do most of our unpleasant work, including the fighting of wars. 5) Tech will improve the experience of learning. On the dark side of 2035: 1) Personal privacy will be eradicated. 2) The cost of cybercrime will be many times worse than it is today. 3) Global warming will be worse. 4) The computing experience will bombard us with an increasing barrage of unwanted messages.”
Tony Smith, a Melbourne-based researcher of complex systems who has written and presented extensively on ICT trends and policy, commented, “My hope is that in 2035 digital spaces serve as a fleet of lifeboats for those trying to navigate the terminal collapse of final-stage capitalism and nation-states and their enforcement operations, reestablishing indigenous and hyperlocal sovereignty transparently to provide confidence that others’ localities are not preparing hostile actions, for an understanding of ‘local’ as much or more concerned with other commonalities than with physical location and thus placing individuals concurrently in several and, as likely as not, some individuals in most intersections.
“The biggest impacts will come from the natural realm. With the ecological and climate crises delivering ever-more-rapid shocks, it will be a time in which the abandonment of monetary accumulation, of land ownership, of intellectual property regimes and of adversarial systems will be readily accepted, with holdouts disconnected. Generalisation of open-source principles into coordination centre stage should keep enough digital infrastructure operational that few questions will be asked.
“Engagement with the interested public will be the primary design filter. Some of the now temporarily rich or supposedly powerful will enjoy release from the demands such roles had placed on them and recover their creative abilities. Others will struggle with disconnection, not always quietly. The pressure to appear to be doing something will have largely disappeared as slowing down proves to be nowhere near as catastrophic as the ever-more-frequent challenges of keeping Life viable.
“Spambots and their human imitators will have largely disappeared through a combination of reward failure and aggressive purging tools. Barriers to entry will have long fallen. Archival and editing tools will be accepted as a way of making knowledge more widely available, with less need for explicit cross posting. People will continue to inject irrelevancies and/or be misunderstood.”
Paul Epping, chairman and co-founder of XponentialEQ, said, “Digital spaces will live in us. Direct connectivity with the digital world and thus with each other will drive us to new dimensions of discovery of ourselves, our species and life in general (thus not only digital life). And it will be needed to survive as a species. Since the technologies being used for that purpose are cheaper, faster, smaller and safer, everyone can benefit from them. A lot of the problems along the way will be solved and will have been solved, although new unknowns will brace us for unexpected challenges – e.g., how will we filter information, and what defines the ownership of data/information in that new digital space? Such things must be solved with the future capabilities of thinking in the framework of that time; we can’t solve them with our current way of thinking.”
Thornton May, futurist and co-founder of the Digital Value Institute, observed, “With everyone and just about everything online, the ‘cost of knowledge’ has never been lower. Thomas Jefferson almost bankrupted himself buying books. In 2035, access to knowledge will be free or negative cost (i.e., philanthropic institutions will pay people to remove ignorance in critical areas). With so many folks on the ‘past middle age’ stage of the demographic curve, there is a huge body of knowledge that can be tapped. I envision significant improvements in the digital ability of someone to raise their hand and ask for help. Think a knowledge version of Go Fund Me pages. The dark side of digital is misinformation and the ability of personal-agenda-obsessed ‘fringers’ to slow legitimate knowledge accumulation.”
Maja Vujovic, owner/director of Compass Communications in Belgrade, Serbia, said, “By engineering more tools to tap our commonalities (rather than our differences) we will keep transcending our restrictive bubbles. Automatic translation and transcription already tackle our language differences; our public fora, like Wikipedia and Quora, teach us about foreign cultures, customs or religions. We will also find new ways to manage our conflicting gender or political identities, by ‘translating,’ role-playing or modeling them (maybe through AR and VR). The gaming industry, for one, could creatively crush its misogyny and help reform hostile workplaces and audiences everywhere faster.
“Over these early digital decades our online public spheres have brought major issues of contention to the surface, truly globally, for the first time ever. The social media algorithms exploited our many frustrations, thus the rage was all the rage. In the future, we’ll turn that public rage into public awareness, then into acceptance, then, in a distant future, into rapport. One step down, three to go – we will struggle through a four-step algorithm regarding each of our principal polar opposites. We will learn to hold ourselves accountable over time. When our public online spheres normalize our real identities (eliminating bozos and bots) we will prove civil on the whole.
“In the years to come, a new global consensus and protocols will inevitably emerge from and for dealing with worldwide emergencies such as pandemics or climate change. Improvements will largely be owed to the global public debates we passionately exercise online. If we, the taxpayers of all countries, crowdsource the most viable identity-vouching solutions, we could de facto become fully represented. The distributed technologies will boldly attempt to keep a tally of everyone in all of our demographic, economic, cultural and other tribes. We should also fund more efforts to connect even the most remote communities, so truly every citizen could have an equal voice.
“The subscription economy and the creator economy alike are already aggregating large inventories of real individuals. They will, in time, corner both the demand and the supply side of all original digital assets we produce or consume (and many tangible ones). Our online public spheres will pragmatically follow suit. In time, these ledgers will encompass a vast majority of us. This will dilute the traditional de jure hold that our authorities have had over our most vital affairs – birth, education, driving, earning, death. Our elections, school certifications and even our legislative processes cannot forever stay immune to and detached from a fully digitized and outspoken citizenry that operates its own tallying tools.
“A paradigm shift, the real digital revolution, will naturally follow. It would be ludicrous to not want to walk our talk directly, once we become equipped to do so. We could then automate, gamify or distribute the governance (or choose ‘all of the above’). As a bonus, our global digital public spheres would vastly improve as well. In effect, we would be saving the civilization baby and purifying its bath water, too.”
Bart Knijnenburg, associate professor of human-centered computing at Clemson University, said, “One big transformation that I am really hoping for is the de-commodification of the spaces that facilitate online discourse. Right now, most of our online interactions are aggregated on a few giant social networks (Twitter, Facebook, Instagram). We tend to use these networks for multiple purposes, which leads to context collapse: if you mostly talk on Facebook about cars and politics, your car junkie friends will be exposed to your political views and your political kindred spirits will learn about your mechanical skills.
“On the consumer side this context collapse may induce some serendipity, but on the author’s side it could have a stifling effect: If your words are shared with an increasingly broad audience, you will likely be less outspoken than you’d be in smaller circles. This problem is exacerbated by the lack of transparency in how social networks show content to your audience, and by the tendency of social networks to make this audience as broad as possible (e.g., by encouraging users to add more ‘friends,’ or by automatically translating posts into other languages).
“I envision the de-commodification of these spaces to result in interest-oriented peer networks (e.g., surrounding a common interest in a certain podcast, author, sports club, etc.), hosted on platforms like Slack, Clubhouse, or Discord, which do not specifically aim to grow the network or to algorithmically control/manipulate the presentation of the shared information. By joining *multiple* networks like this, people can mentally separate the expression of a variety of their interests, thereby overcoming the current issue of context collapse.
“If AI technologies do end up playing a role in this scenario, then I hope it to be at the level of network creation rather than content distribution. The idea would be for an AI system to automatically create ad-hoc networks of people with preferences that are similar enough to create an engaging discourse, but not so similar that they result in ‘echo chambers.’”
Alexander B. Howard, director of the Digital Democracy Project, said, “Just as poor diets and sedentary lifestyles affect our physical health, today’s infodemic has been fueled by bad information diets. We face intertwined public health, environmental, and civic crises. Thousands of local newspapers have folded in the last two decades, driving a massive decline in newsroom employment. There is still no national strategy to preserve and sustain the accountability journalism that self-governance in a union of, by, and for the People requires – despite the clear and present danger data voids, civic illiteracy, and disinformation merchants pose to democracy everywhere.
Research shows that the loss of local newspapers in the U.S. is driving political polarization. As outlets close, government borrowing costs increase. The collapse of local news and nationalization of politics is costing us money, trust in governance, and societal cohesion. Information deprivation should not be any more acceptable in the politics of the world’s remaining hyperpower than poisoning children with lead through a city water supply. A lack of shared public facts has undermined collective action in response to threats, from medical misinformation to disinformation about voter fraud or vaccination to the growing impact of climate change.
1) Investors, philanthropists, foundations and billionaires who care about the future of democracy should invest in experiments that rebuild trust in journalism. They will need to develop, seed, and scale more-sustainable business models that produce investigative journalism that doesn’t just depend upon grants from foundations and public broadcasting corporations – though those funds will continue to be part of the revenue mix.
2) Legislatures and foundations should invest much more in digital public infrastructure now, from civic media to public media to university newspapers. News outlets and social media platforms should isolate viral disinformation in “epistemic quarantines” and inject trustworthy information into diseased media ecosystems, online and off. Community leaders should inspire active citizenship at the state and local level with civics education, community organizing. Congress should fund a year of national service for every high school graduate tied to college scholarships.
3) Congress should create a “PBS for the Internet” that takes the existing Corporation for Public Broadcasting model and reinvents it for the 21st century. Publishers should build on existing public media and nonprofit models, investing in service journalism connected to civic information needs. Journalists should ask the ‘people formerly known as the audience’ to help them investigate. State governments should subsidize more public access to publications and the Internet through libraries, schools, and wireless networks, aiming to deploy gigabit speeds to every home through whatever combination of technologies gets the job done. Renovate and expand public libraries to provide digital and media literacy programs, and nonpartisan information feeds to fill data voids left by the collapse of local news outlets.
4) The U.S. government, states, and cities should invest in restorative information justice. How can a national government that spends hundreds of billions on weapon systems somehow have failed to provide a laptop for each child and broadband internet access to every home? It is unconscionable that our governments have allowed existing social inequities to widen in 2020. Children were left behind by remote learning, excluded from access to information, telehealth, unemployment benefits, and family leave that will help them and their guardians make it through this pandemic.
“By 2035, we should expect digital life to be both better and worse, depending on where humans live. There will be faster, near-universal connectivity – for those who can afford it. People who can pay to subscribe will be able to browse faster, without ads or location and activity tracking. The poor will trade their data for access – data that will be used by corporations and insurance companies – unless nations overcome massive lobbying operations to enact data protection laws and enforce regulations. Smartphones will evolve into personalized virtual assistants we access through augmented reality glasses, health bands, gestural or spoken interfaces, and information kiosks.
“Information pollution, authoritarianism, and ethnonationalism supercharged by massive surveillance states will pose immense risks to human rights. Climate change will drive extreme weather events and migration of refugees both within countries and across borders. Unless there are significant reforms, societal inequality will destabilize governments and drive civil wars, revolutions, and collapsed states. Toxic populism, tribalism, and nativism antagonistic to democracy, science, and good governance will persist and grow in these darkened spaces.”
Calton Pu, professor, software chair and co-director of the Center for Experimental Research in Computer Systems at Georgia Tech, wrote, “We are building an information civilization unlike anything else in the history of mankind. The information civilization is built on digital technologies and platforms that can be called digital spaces. The impact of information has been profound on the economy (both macro and micro), on society (as an organization affecting its population, and the people transforming the social organization), and on humans (an aspect that can be called digital life).
“Information starts as bits, but appropriately arranged bits can transform and create technologies, digital and physical. As we learn more information and create more technologies, they have transformed our economy and society in fundamental ways. The impact of information-based technologies has been greatest in environments with fewer constraints (within reason) on the flow and application of information and technologies. In terms of beneficial roles, let us consider the example of politicians. President Trump was banned from Twitter (and other social media) due to his misinformation campaigns, an abuse of digital space. In contrast, President Biden advocates evidence-based decision-making, an approach enabled by the abundance of factual information.
“Digital spaces provide an environment in which all kinds of information, including factual information, misinformation, and disinformation, grow side-by-side. Technical leaders and politicians who help build the information civilization will make beneficial contributions, and those who misuse the digital spaces for their own benefits will lead us towards the downfall of information civilization. For the information civilization to thrive, the builders must find technological and political means to distinguish factual information (the constructive building blocks) from misinformation and disinformation (the destructive, eroding bacteria/fungi). As the information civilization grows stronger, there is hope that its building blocks of factual information will become better organized and easier to adopt.
“This improvement will help more humans to grow wiser and help build the human civilization, including the information and physical dimensions. Throughout human history, all civilizations have risen and fallen. It appears that as the builders construct an increasingly sophisticated civilization, the intricacy of organization also makes it more susceptible to manipulation and disruption by the schemers. It is clear that the schemers are not acting alone: they reflect deep, dark desires in human nature. The battle between the builders and schemers will persist in the information civilization, as it has through all the civilizations in history.”
Alejandro Pisanty, professor of internet and information society at UNAM, National Autonomous University of Mexico, said, “By 2035 it is likely that there will be ‘positive’ digital spaces. In them, ideally, there will be enough trust in general to allow significant political discussion and the diffusion of trustworthy news and vital information such as health-related content. These are spaces in which digital citizenship can be exerted in order to enrich society. This is so necessary that societies will build it, whatever the cost.
“However, this does not mean that all digital spaces will be healthy, nor that the healthy ones will be the ones we have today. The healthy spaces will probably have a cost and be separated from the others. There will continue to be veritable cesspools of lies, disinformation, discrimination and outright crime. Human drivers for cheating, harassment, disconnection from the truth, ignorance, bad faith and crime won’t be gone in 15 years.
“The hope we can have is that enough people and organizations (including for-profit) will push the common good so that the ‘positive’ spaces can still be useful. These spaces may become gated, to everyone’s loss. Education and political pressure on platforms will be key to motivating the possible improvements … Most major concerns for humanity’s future stem from deeply rooted human conduct, be it individual, corporate, criminal or governmental.”
Mark Davis, associate professor of media and communications at the University of Melbourne, wrote, “Against all expectations otherwise, we are still in the ‘Wild West’ phase of the internet, where ethical and regulatory frameworks have failed to keep up with rapid advances in technology. The internet, in this phase, and against early utopian hopes for its democratic utility, has had severely negative impacts on democracy that are not offset by more-hopeful developments such as Black Twitter and #metoo, among the many innovative, emancipatory uses of online media. One reason for this is that the surveillance business model on which digital platforms operate, which has seen traditional liberal democratic intermediaries displaced to some extent by algorithmic intermediaries, privileges quantities of engagement over the qualities of content.
“Emancipatory movements exist in the digital folds of an internet designed to maximise corporate profits, one that has seen a new class of mega-rich individuals and corporations emerge that, in effect, now own the infrastructure of the ‘public sphere’ and have enormous lobbying power over government. The affordances of these systems have, at the same time, fostered the creation of alternative media spheres where extremism and hate discourse continue to proliferate.
“We are fast approaching a crisis point where the failures of the present hyper-corporate, relatively unregulated model of the internet are having severe, detrimental impacts on public communication. We are at a proverbial fork in the road. One route leads to an ever-deeper downward spiral into digital dystopia: hyper-surveillance, predictive technology working hand in hand with authoritarianism, disinformation overload, and proliferating online divisiveness and hatred.
“The alternative route is a more-regulated internet where accountability matters, guided by a commonly assented ethics of public culture. Is this alternative possible in an era of winner-takes-all partisanship and corporate greed so vast that it is literally interplanetary in its ambitions? I fear not, but if we are to be civic optimists then it is the only possible hope, and we have no alternative but to say ‘yes’ to a better digital future and to become digital activists who collectively work to make it happen.”
David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, commented, “We are not powerless in the face of our technology. We can choose the tech we find acceptable, and we can mandate changes to make it serve us better rather than worse. Of course, tech does have a life of its own, which is a clearer way of saying that complex dynamic systems are often — usually — unpredictable, non-linear, and chaotic. But because we humans can exert control if we choose to, I have to believe our social tech will get better at serving our human needs and goals.
“Still, predicting what that social tech will look like in 14 years is literally impossible because these technologies are complex dynamic systems embedded in the complex dynamic system that we call life on Earth. So, I won’t pretend to be able to look further than short-term extrapolations of where we are now: I expect to see more concern about how the current systems are tearing us apart, along with a continued underplaying of how they are binding us together. Perhaps there will be more human moderation. There will very likely be more (and better, we hope) algorithmic moderation.
“I expect an increasing acceptance of the demand for hearing an increasing diversity of voices. Beyond that, who knows? Not me. But I do want to note that it is entirely possible that our ideas about what constitutes useful and helpful discourse are being changed by our years on social media. Because this is a change in values, what looks like negative behavior now may start to look positive. By this I don’t mean that racism and homophobia will start to look like positive values. Good lord, I hope not. I mean that the social-media ways of collaboratively making sense of our world may start to look essential and look like the first time we humans have had a scalable way of building meaning.
“If we are able to get past the existential threat posed by the ways our online social engagements can reinforce deeply dangerous beliefs, then I have hope that with the aid of 2035’s tech we’ll be able to make even more sense of our world together in diverse ways that have well-traveled links to other viewpoints.”
Brad Templeton, internet pioneer, futurist and activist, a former president of the Electronic Frontier Foundation, wrote, “I hold some hope for the advancement of a new moral theory I am exploring. Its thesis is that it is wrong to exploit known flaws in the human psyche. A well-known example is gambling addiction. We know it is wrong to exploit that and we even make it illegal to exploit it and other addictive behaviours.
“On the other hand, we have no problem with all sorts of marketing and computer interaction tricks which unconsciously lead us to do things that, when examined later, we agree are against our interests and which exploit flaws well established in the scientific literature. Under this theory, A/B testing to see what’s more addictive would be deprecated rather than treated as a good idea. This approach is new, but it might lead to a way to design our systems with a stronger focus on our true interests. While I also think it would be nice if we could make social media that are not driven by advertising, and thus work more towards serving the interests of users/customers than advertisers/customers, this is not enough. After all, Netflix also works hard to addict users and make them binge, even though it does not take advertising.
“I don’t think anybody knows what form the changes for the better in the digital public sphere will take, but it’s clear that the players and their customers find the current situation untenable. They will find solutions because they must. Tristan Harris has convinced Facebook to at least give lip-service to his ‘time well spent’ positioning; to make people feel, upon reflection, that their time on social media was worthwhile where today many feel it’s not.
“I have proposed there be a way for friends to anonymously ‘shame’ friends who post false and divisive material – a way that you can learn that some of your friends found your post false or lacking, without knowing who they were (so they don’t feel they will risk the relationship by telling you that you fell for a false meme). This will not be enough, but it’s a start. I also hope we’ll be trained to not trust video evidence any more than we do text because of deepfakes. It will get worse in some ways too. This is an adversarial battle, with some forces deliberately trying to disrupt their enemies, and they will certainly keep trying. Propaganda, driven by AI, will continue to be weaponized.”
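Templeton's anonymous-feedback idea could be sketched roughly as follows. This is a hypothetical illustration by the editors, not part of his proposal: the class name, threshold value, and notification wording are all invented. The key design point is that the author sees only an aggregate count, and only after enough friends have flagged, so no individual flagger can be identified.

```python
# Hypothetical sketch of anonymous friend feedback on posts.
# A friend can privately flag a post as false or lacking; the
# author learns only an aggregate count, and only once the count
# crosses a threshold, so no single flagger is identifiable.

from collections import defaultdict

class AnonymousFeedback:
    def __init__(self, notify_threshold=3):
        self.notify_threshold = notify_threshold
        # post_id -> set of flagger ids; never exposed to the author
        self._flags = defaultdict(set)

    def flag(self, post_id, flagger_id):
        """Record a flag; each friend counts at most once per post."""
        self._flags[post_id].add(flagger_id)

    def author_view(self, post_id):
        """What the post's author may see: an aggregate count, and only
        once the threshold is reached; below it, nothing at all."""
        count = len(self._flags[post_id])
        if count >= self.notify_threshold:
            return f"{count} of your friends found this post questionable."
        return None

fb = AnonymousFeedback(notify_threshold=3)
fb.flag("post-1", "alice")
fb.flag("post-1", "bob")
print(fb.author_view("post-1"))  # None: below threshold, flaggers stay hidden
fb.flag("post-1", "carol")
print(fb.author_view("post-1"))  # aggregate message, no identities revealed
```

The threshold matters: revealing a count of one would effectively unmask a lone flagger, which is exactly the social risk the proposal tries to remove.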
David J. Krieger, director of the Institute for Communication and Leadership, based in Lucerne, Switzerland, responded, “When talking about ‘digital life’ or ‘digital spaces’ we are actually talking about the global network society and not merely about tech, that is, social media, platforms, AI, automation, internet monopolies, etc. The global network society is – at least in principle – a data-driven society. This means that decisions on all levels and in all areas, business, education, healthcare, science, and even politics, should be made on the basis of evidence and not on the basis of status, privilege, gut feelings, bias, personal experience, etc.
“Data-driven decision-making can in many situations be automated. This requires data on everything and everyone that is as complete and as reliable as possible. The most important reforms or initiatives we should expect are therefore those that make more data of better quality available to more people and institutions. Here the primary values and guiding norms are connectivity, flow of information, encouragement of participation in the production and use of information, and transparency.
“Business, politics, civil society organizations and the public should focus on practical ways in which to implement the values of connectivity, flow of information, encouragement of participation in the production and use of information, and transparency. To the extent that these values are implemented, it will become possible to mitigate the social harms caused by the economy of attention in media (clickbait, filter bubbles, fake news, etc.), political opportunism and the lack of social responsibility by business.
“What is needed in the face of global problems such as climate change, migration, a precarious and uncontrolled international finance system, the ever-present danger of pandemics, not to speak of a Hobbesian ‘state of nature’ or a geo-political ‘war of all against all’ on the international level, is a viable and inspiring vision of a global future.”
Amali De Silva-Mitchell, futurist and founder/coordinator of the IGF Dynamic Coalition on Data-Driven Health Technologies, said, “The increasing knowledge of the space and of its benefits and risks by the average user of technology could be exponential, as digital becomes the norm in health, education, agriculture, transport, governance, climate change mitigation including waste management, and so forth. By 2035 most global citizens will be more conversant with the uses of technology, easing the delivery of technology goods and services.
“The biggest advances will be in the universal quality of connectivity and increased device accessibility. Citizens who are unable to participate digitally must be served by alternative means. This is a public duty. A 100% technology-user world is not possible, and this limitation must be recognized across all services and products in this space. Perfection of technology output will continue to be marred by misinformation, fake news, poor design, bias, privacy versus copyright, jurisdiction mis-matches, interoperability issues, content struggles, security problems, data ocean issues (data silos, fickle data, data froth, receding-stability data, and more) and yet-to-be-identified issues.
“All of these must be managed in order to create a more-positive digital public sphere with better opportunities. Politicians will be motivated to ensure resilient economic societies and will pursue the ideal of universal accessibility through all means such as satellite, quantum and other emerging technologies. The public will be focused on affordable, quality, unbiased (AI/ML/quantum) internet access.
“In the nano, quantum and yet-unidentified operational spaces the private sector will be focused on issues of interoperability for the Internet of Things and other emerging applications (for market growth versus democratization). In future, quantum entanglement will create new opportunities and unexpected results while challenging old principles and norms due to potential breakthroughs, for instance, telepathy for human information exchange competing with traditional wireless technology.”
Beth Simone Noveck, director of the Governance Lab and author of “Solving Public Problems: How to Fix Our Government and Change Our World,” wrote, “Many people are working today on building better alternatives to the current social media dumpster fire, and many institutions are turning to the use of platforms designed to facilitate more-civil and engaged discourse …
“Brazil has adopted platforms like Mudamos, which enables citizens to propose legislation and which is being used systematically and on an ongoing basis for ‘crowdlaw,’ namely to enable ordinary citizens to participate in the legislative process. Taiwan has engaged the public in co-creating 26 pieces of national legislation, but perhaps even more exciting is its creation of a ‘Participation Officers Network’ to train officials to work with the public in a more-conversational form of democratic engagement enabled by technology, day in and day out.
“The most exciting initiatives are those where institutions are collaborating with civil society, not as a pilot or experiment, but as an institutionalized and new form of governance and problem solving. In the UK, GoodSAM uses new technology to crowdsource a network of thousands of amateur first responders to offer bystander aid in the event of an emergency, thereby dramatically improving survival rates. Petabancana enables residents in parts of Indonesia and India to report on fair weather flooding to facilitate better governmental disaster response.
“Civic tech developers are creating exciting new alternatives designed to foster a more participatory future. Whether it is platforms for citizen engagement like Pol.is or Your Priorities or projects like Applied – hiring software designed by the UK Behavioral Insights Team to foster diversity rather than inadvertently entrenching new biases – there has always been a community of tech designers committed to using tech for good.
“But the technology is not enough. The reforms that have the biggest impact are those changes in law and governance that lead to uses of technology promoting systematically more responsive, engaged and conversational forms of governance on a quotidian basis by prohibiting malevolent uses of tech while encouraging good uses. For example, New Jersey is exploring opportunities to regulate uses of hiring technology that enable discrimination. But, at the same time, New Jersey is running a Future of Work Accelerator to invest in and promote technologies that protect workers, amplify workers’ voices and strengthen worker rights.
“In the United States, many positive uses of technology are happening in cities and at the local, rather than the national, level. The Biden Administration’s July 2021 OMB request for comments to explore more-equitable forms of citizen engagement may portend greater investment in technology for sustained citizen engagement. Also, the introduction of machine learning is enabling the creation of new kinds of tools to facilitate more efficient forms of democratic engagement at scale.
“Given the proliferation of new platforms and initiatives designed to solve public problems using new technology and the collective intelligence of communities, I am hopeful that we will see increasing institutionalization of technologies that promote strong democracy and civil rights. However, in the absence of sufficient investments in civic infrastructure (i.e., government and philanthropy paying for these platforms) and investments in training citizens to learn how to be what Living Cities calls ‘resident engaged,’ the opportunity to use technology to enable the kind of democracy and society we want will go unrealized.”
Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, said, “By 2035 there will be a great paradigm shift in terms of digital transformation via the Internet of Things (IoT), augmented reality, robotics, artificial intelligence, etc. Many businesses are already moving more broadly into these digital spaces. Thanks to technological advances, financial inclusion has moved into the underserved and unserved areas of the world. The Fourth Industrial Revolution has started in most countries, and we are witnessing manufacturing in the digital space in a way that is unprecedented.
“Our society will be smarter and have richer experiences – it will be bettered as it engages in more-immersive education and virtual-reality entertainment. Our currency may be totally digital. The IoT will facilitate a brighter society. However, there are concerns.
“More financial heists and scams may be perpetrated through digital platforms. Cryptocurrency, due to its decentralised nature, is used to facilitate crime; ransomware perpetrators demand cryptocurrency as a method of untraceable payment, and illegal international deals are made possible by payment through untrackable cryptocurrency. Terrorism may be advanced using new robotics tools and digital identities to wreak more havoc. It is possible that with a proper framework and meticulous, robust regulatory approach that the positive advantages will outweigh the ills.
“Most aspects of our lives will be impacted positively by the emerging technologies. The IoT can usher in smart cities, smart agriculture, smart health, smart drugs, smart sports, smart businesses, smart digital currencies. Robotics will be used to combat pandemics by promoting less physical contact where it will help to flatten the curves and it will be used in advanced industrial applications. The opportunities are limitless.
“However, all hands should be on deck so that the negative impact will not erode the gains of digital evolution. Global collaboration through global bodies is necessary for positive digital evolution. International governance and the national governance of each country will have to be active. Sensitisation of the citizenry against the ills of digital transformation is key to sustaining the gains. Inventors and private businesses have roles to play. A future technological singularity is also a threat.”
Mei Lin Fung, chair of People-Centered Internet and former socio-technical lead for the U.S. Department of Defense’s Federal Health Futures initiative, wrote, “The trajectory of digital transformation in our lives and organizations will have parallels to the transformation that societies underwent with the introduction of electricity. Thus, the creation of digital public goods and digital utilities will allow for widespread participation and access in digital transformation. This is already underway at IEEE.org, the International Telecommunication Union and action-oriented forums like the World Summit on the Information Society and the Internet Governance Forum.
“There are tech leaders and/or politicians who are playing and can play a beneficial role. 1) Antonio Guterres, the first electrical engineer to be UN Secretary-General, has established The Digital Cooperation Roadmap, bringing together stakeholders from across many sectors of society. 2) Satya Nadella, CEO of Microsoft. 3) Ajay Banga, executive chair of MasterCard. 4) Marc Benioff, chairman of Salesforce. 5) An original innovator of the internet, Vint Cerf, now a Google vice president, and other internet pioneers who built the internet as a free and open resource.
“All of these and more will be working to build bridges to a better approach to digital transformation. The most noticeable improvement in the network in 2035 will be that digital will become more invisible, and it will be much more natural and easier to navigate the digital world. This transformation will be similar to the evolution of the impact of writing. At the beginning, it was difficult to learn to write, but it advanced broadly and quickly. After it becomes a normal part of people’s education we will see a shift to a digital world with much more digitally literate people. It will be like ‘Back to the Future’ – the best parts of human life will flourish, augmented by digital. Current problems that will be diminished include cyberattacks, misinformation, fake news and the stirring up of tribal conflicts.
“What will persist as major concerns? The uses of digital tools and networks by criminals for human and sex trafficking, for online abuse of the vulnerable, especially children, for fraud, for violence and drug trafficking; increasing attacks via cyber by both state actors and non-state actors; and increasing attempts to shape and manipulate political discourse by cyber means.”
Olivier Crépin-Leblond, internet policy expert and founding member of the European Dialogue on Internet Governance, wrote, “I am optimistic about the transformation of digital spaces for the following reasons: 1) Natural Law will ensure that the extreme scenarios will ultimately not be successful. 2) The public at large is made up of people who want to live a positive, good life. 3) Unless it is completely censored and controlled, the internet will provide a backstop to any democracy that is in trouble. 4) The excesses of the early years’ GAFAs [an acronym for Google, Apple, Facebook, Amazon that is generally meant to represent all of the tech behemoths] will soon be kept more in check, whilst innovation will prevail. 5) The next generations of political leaders will embrace and understand technology better than their predecessors. 6) Past practice will help in addressing issues like cybersecurity, human rights, freedom of speech – issues that were very novel in the context of the internet only a few years ago. 7) On the other hand, this could be only achievable if all stakeholders of the multistakeholder model keep each other in check in the development of the future internet. If this model is not pursued, the internet’s characteristics and very fabric will change dramatically to one serving the vested interests of the few at the expense of the whole population.”
Melissa Sassi, the Global Head of IBM Hyper Protect Accelerator, focused on empowering early-stage startups, commented, “Initiatives with the largest impact on digital life, in my opinion, include: 1) Access to affordable internet for the 50% that are not currently connected and/or those that are unable to connect due to costs. 2) Digital skill-building for those with access but currently unable to make meaningful use of the internet. 3) Empowering underserved and underrepresented communities via digital inclusion (woman/girls, youth, people with disabilities, indigenous populations, elderly populations, etc.). 4) Investment in locally generated tech entrepreneurship endeavors in hyper-local communities.
“Tech leaders play an important role by incorporating design thinking into everything and anything built. It is important to hire and involve a more-representative group of builders, design makers and experts into designing and creating solutions that are more empathetic with audience needs, making the customer and/or user central to what gets shipped and/or evolved.
“Tech leaders from social media platforms should be playing a greater role in data stewardship, protection, privacy and security, as well as incorporating more-informed consent protocols for those individuals who might lack the necessary skills to understand what data is going where and how data is being used when it comes to ad serving and other actions taken by social media networks.
“Tech leaders play a fundamental role in training our current and next generation of users on the introductory building blocks of learning to code, as well as what it means to be digitally skilled, ready, intelligent, literate and prepared for the future of work. This is something that could be incorporated into a multistakeholder approach where industry comes together with the public sector to broaden access to digital skills.
“Improvement areas relating to digital life include individuals becoming more productive at work and in their personal lives, utilizing technology to drive outcomes (healthcare, education, economic, agricultural, etc.) and incorporating technology to address the 17 UN Sustainable Development Goals.
“Lastly, technology could play an incredibly important role in evolving the global monetary system to one that is decentralized. One that is for the people, with the people, by the people; where those at the bottom of the pyramid do not suffer from faulty monetary policies that play an incredibly important and sometimes corrupt role that can lead to inflation and hyper-inflation. Many problems could be diminished, including alleviating the need for tactical, operational and repeatable tasks that could be handled by artificial intelligence. E-government services could replace time-consuming, manual and repeatable tasks that lack accountability, transparency and ease. Blockchain and digital assets could alleviate challenges associated with inflation relating to faulty monetary policies and/or federal banking institutions.
“The world could be connected to the internet and have the access, skills and utilization of tech necessary to drive the aforementioned outcomes. Concerns include not connecting the last mile or not enabling the world through digital skills, as this could create a fourth world of have-nots. We must include more women and girls in tech and more local creators and makers to ensure that products and solutions are truly reflective of the communities they serve.
“Media misinformation and disinformation are two of the largest challenges of our time, and people need to be informed about how to recognize real from fake news and build better skills around what to believe, share and not share. The current trend in social media networks raises significant concern about the role that access to information shared by users on a platform plays in causing strife around the world that could drive genocide, authoritarianism, bullying and crimes against humanity. Equally, it is concerning when governments shut down internet connectivity or access to specific sites to curtail dissent or adjust the narrative to benefit their own political party and/or agenda.”
John Lazzaro, retired professor of electrical engineering and computer science, wrote, “The only way to make progress is to return to people being ‘the customer’ as opposed to ‘the product.’ By 2035, a new generation of platforms will replace smartphones (and the apps that run on them). The new platforms will be built from the ground up to address the intractable issues we face today. Unlike the smartphone – a single platform that tries to do it all – the new platforms will be customized to place and purpose.
- A platform for the body: Wearables that function as stand-alone devices, incorporating augmented reality, with a direct connection to the cloud.
- A platform for built environments: Displays, sensors, computing and communication built into the home and office environments not as add-ons, but as integral parts of the structure.
- A platform for transportation: The passenger compartment of fully self-driving automobiles will be reimagined as a ‘third place’ with its own way of interfacing humans with the cloud.
“What the platforms will share is a way to structure interactions between an individual and the community that mirrors the way relationships work in the physical world. Inherent in this redesign will be a reworking of monetization.”
Mark Surman, executive director of the Mozilla Foundation, a leading advocate for trustworthy AI, digital privacy and the open internet, said, “It was my optimistic side that said ‘yes,’ although it’s far from a certainty. Right now, we have governments and a public who actively want to point the internet in a better direction. And you have a generation of young developers and startup founders who want to do something different – something more socially beneficial – than what they see from big tech. If these people can rally around a practical and cohesive vision of what ‘a better internet’ looks like, they have the resources, power and smarts to make it a reality.”
Marc Rotenberg, president and founder of the Center for AI and Digital Policy and editor of the AI Policy Sourcebook, said, “Digital spaces will evolve as users become more sophisticated, more practical and more willing to turn aside from online environments that are harmful, abusive and toxic. But the techniques to lure people into digital spaces will also become more subtle and more effective, as interactive bots become more widespread and as more spaces are curated by autonomous programs. By 2035, we will begin to experience online a society of humans and machines that will also be making its way into the physical world.”
Frank Kaufmann, president of the Twelve Gates Foundation, said, “I see digital life and digital spaces and the ‘evolution’ of these as following classic patterns of all prior major turning points of radical change that are connected to technological progress and development. Wheels, fire, the printing press, electricity, the railroads, flight and so forth. To me the pattern goes:
- A genius visionary or visionary group opens an historical portal of magic and wonder. These first people tend to be visionaries with pure, wholesome dreams and the desire to help people.
- The new technology explodes in a ‘Wild West’ environment during which time underdeveloped, avaricious, power-hungry, vile people amass obscene amounts of wealth and power by exploiting the technology and exploiting people. Eventually these criminals vanish into their private, petty hells and try to cloak the horror they perpetrated by establishing self-serving veneers of work for ‘charitable’ causes and ‘grant-giving foundations.’ Their time of power lust has come and gone. In the meantime…
- a widespread reaction by normal, good people to the harm and evil caused by the avaricious exploiters, gradually…
- implements ‘checks and balances’ to bring the technology more fully into genuine healthy and wholesome service to people, plus a natural ‘decentralization’ occurs, yielding an explosion of creativity and positive growth and development.
Both the implementation of guardrails and ‘checks and balances’ after the ‘Wild West’ period, and the smaller, more local, more manageable, humane sub-units of the boundless benefits afforded by all these miraculous technologies, settle down and help us improve steadily.”
James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, wrote, “In the fast-changing world of today we see new technologies emerging rapidly, and then they become institutionalized. Thus, most people only see the larger, more tech-giant-dominated applications. But we are seeing moves towards several things that encourage me to believe innovators will help to create useful solutions. In particular, much work is now going into ways to give individuals more control of their online identities in order to control the flow of information (privacy-enhancing technologies). By 2035 it is likely some of these will have broken through and they may become heavily used. Additionally, the further internationalization of communication technologies reaching more of the world can help break down barriers. The primary role of tech leaders and politicians is to help keep the innovation space alive and to make sure that decentralized and open systems can thrive (a counter to tendencies towards authoritarianism, etc.). Today’s children and teens are learning to be less trusting of everything they see online (much as in the past they had to learn not to believe everything one saw in a TV commercial or read in a newspaper) and that will also help in navigating a world where dis- and misinformation will continue to exist.”
John Battelle, co-founder and CEO of Recount Media, wrote, “Within 15 years, I believe the changes wrought by significantly misunderstood technologies – 5G and blockchain among them – will wrest control of the public dialogue away from our current platforms, which are mainly advertising-based business models.”
Judith Donath, Berkman-Klein Center of Harvard University, shared this predictive scenario set in 2035, “Back in 2021, almost 5 billion people were connected to the internet (along with billions of objects – cameras, smart cars, shipping containers, bathroom scales and bear collars, to name a few). They thought of the internet as a useful if sometimes problematic technology, a communication utility that brought them news and movies, connections to other humans and convenient at-home shopping.
“In 2035, nearly all humans and innumerable animate and inanimate others are online. And while most people still think of the internet as a network for their use, that is an increasingly obvious illusion, a sedating fiction distracting them from the fact that it now makes more sense to consider the internet to be a vast information-digesting organism, one that has already subsumed them into its vast data and communication metabolism.
“As nectar is to bees, data is to The Internet (as we’ll refer to its emergent, sovereign incarnation). Rather than producing honey, though, it digests that data into persuasive algorithms, continually perfecting its ability to nudge people in one direction or another. It has learned to rile them up with dissatisfactions they must assuage with purchases of new shoes, a new drink, a trip to Disney or to the moon. It has mastered stoking fear of others, of immigrants, black people, white people, smart people, dumb people – any Other – to muster political frenzy. Its sensors are everywhere and it never tires of learning.
“In retrospect it is easy to see the roots of humankind’s subsumption into The Internet. There was the early blithe belief that ads were somehow ‘free,’ that content which we were told would be prohibitively expensive if we paid its real cost was being provided to us gratis, in return for just a bit of exposure to some marketing material. Then came the astronomical fortunes made by tycoons of data harvesting, the bot-driven conspiracies.
“By the end of the 2020s, everything from hard news to soft porn was artificially generated. Never static, it was continuously refined – based on detailed biometric sensing of the audience’s response (the crude click-counting of the earlier web long gone) – to be ever-more-addictively compelling.
“Arguably the most significant breakthrough in The Internet’s power over us came through our pursuit of health and wellness. Bodily monitoring, popularized by Fitbitters and quantified selfers, became widespread – even mandated – during the relentless waves of pandemics. But the radically transformative change came when The Internet went from just measuring your response to chemically inducing it with the advent of networked infusion devices, initially for delivering medicine to quarantined patients but quickly adapted to provide everyone with personalized, context-aware micro-doses of mood-shifting meds: a custom drip of caffeine and cannabis, a touch of Xanax, a little cortisol to boost that righteous anger.
“It is important to remember that The Internet, though unimaginably huge and complex, is not, as science fiction might lead you to believe, an emergent autonomous consciousness. It was and is still shaped and guided by humans. But which humans and towards what goal?
“The ultimate effect of The Internet (and its earlier incarnations) has been to make power and wealth accrue at the very top. As the attention and beliefs of the vast majority of people came increasingly under technological control, the right to rule, whether won by raising armies of voters or of soldiers, was gained by those who wield that control.
“From the standpoint of 2021, this prediction seems grim. Is it inevitable? Is it inevitably grim?
“We are moving rapidly in the direction described in this scenario, but it is still not inevitable. The underlying business model of the internet needs to not be primarily based upon personal data extraction. Strong privacy protection laws would be a start. Serious work must go into developing fair and palatable ways of paying for content. The full societal, political and environmental costs of advertising must be recognized: We are paying for the internet not only with the loss of privacy and ultimately of volition, but also with the artificial inflation of consumption in an overcrowded, climate-challenged and environmentally degraded planet.
“If we allow present trends to continue, one can argue the future is not inevitably grim. We simply place our faith in the mercy of a few hugely powerful corporations and the individuals who run them, hoping that instead of milking the world’s remaining resources in their bottomless status competition, they use their power to promote peace, love, sustainability and the advancement of the creative and spiritual potential of the humans under their control.”
Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Wellville.net, responded, “I see things getting both better and worse for people depending on who you are and under what jurisdiction you live. (It is ever thus.) There is no particular endpoint that will resolve the tension between more power for both good and bad actors. We will have AI that can monitor speech and to some extent reactions to speech closely – but we will have both good and bad actors in charge of the AIs. As more of life goes online, people will have more freedom to choose their virtual jurisdictions, and the luckier ones will be able to get an education online and perhaps move out to a better physical jurisdiction. By 2065, I would hope that there would be some worldwide movement that would simply rescue the bottom-of-the-pyramid citizens of the most toxic governments, but I believe that the (sometimes misguided) respect for sovereignty is strong enough to persist through 2035. At what point will we be able to escape to new floating jurisdictions (especially as many places get flooded by climate change) or even into space (though that will remain an expensive proposition)? Somehow, we have evolved to prefer superiority over absolute progress, and we are unlikely to move into a world of evenly distributed power. To get more specific, I do see business playing a bigger role, but businesses are seduced by and addicted to increasing profits just as political actors are seduced by and addicted to (the trappings of) power. Somehow, we need to start training our babies as carefully as we are talking about training our AIs. Train them to think long-term, to favor their own species, to love justice and fairness.”
Barbara Simons, past president of the Association for Computing Machinery and Board Chair of Verified Voting, said, “I expect there will be a mixture of good and bad. For example, if the internet didn’t exist, things would have been far worse during the pandemic. On the other hand, the internet facilitates the spread of disinformation, including by foreign enemies of our democracy.”
Wendell Wallach, senior fellow with the Carnegie Council for Ethics in International Affairs, commented, “The outstanding question is whether we will actually take significant actions to nudge the current trajectory of digital life towards a more beneficial trajectory. Reforms that would help:
- Holding social media companies liable for harms caused by activities they refuse, or are unable, to regulate effectively.
- Shifting governance away from a ‘cult of innovation’ where digital corporations and those who get rich investing in them have little or no responsibility for societal costs and undesirable impacts of their activities. The proposed minimum 15% tax endorsed by the G7/G20 is a step in the right direction, but only if some of that revenue is directed explicitly towards governing the internet, and ameliorating harms caused by digital life, including the exacerbation of inequality fostered by the structure of the digital economy.
- Development of a multistakeholder network to oversee governance of the internet. This would need to be international and include bottom-up representation from various stakeholder groups including consumers and those with disabilities. This body, for example, might make decisions as to the utilization of a portion of the taxes (see #2) collected from the digital oligopoly.”
Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, wrote, “My view toward 2035 has been darkened by the harsh tone of politics over the past few years that is continuing to play out. If President Biden’s cooperative efforts succeed, I will be more optimistic. Journalists can’t resist reporting on outrageous behaviours, and false claims and lies still make the news. Social media have also been a problem, with algorithms that amplify misinformation rather than stopping bot farms and giving more control to users. Twitter has gotten better at controlling the abuses, but Facebook is stuck on its business model, so I go there only minimally.
“I’m careful on YouTube and limit my use of Instagram and other platforms. I don’t see the misinformation, as I protect my bubble and stay in safe places, like the New York Times, The Guardian, Washington Post, Atlantic, PBS, BBC, CBC – even though they have their biases as well.
“My fears are that political maneuvers that encourage divisiveness will remain strong, misinformation will continue, and racism and other forms of violence will endure. I am troubled by the Google/Facebook surveillance capitalism (strong bravos to Shoshana Zuboff for her amazing book on the topic, ‘The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power’), social media abuses and the general tone of violence, anger and hate speech in the U.S. My journalist father wrote a book, ‘Between Fear and Hope,’ in 1947 about post-war Europe. That title fits for now, but I am hoping for a brighter tomorrow.”
Peter Suber, director of the office for scholarly communications at Harvard University, said, “I see signs (emerging tools, innovations, policies, practices, attitudes) that digital spaces are getting better and signs that they are getting worse. I expect that both trends will continue to 2035 and beyond. I just can’t predict how those two trends will net out.”
Rob Reich, a professor focused on public policy, ethics and technology who also serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford University, wrote, “In the absence of a significant change in the ethos of Silicon Valley, the regulatory indifference/logjam of Washington, DC, and the success of the current venture capital funding model, we should not expect any significant change to digital spaces, and in 2035 our world will be worse. So, our collective and urgent task is to change all three of these core elements: the ethos of Silicon Valley, the regulatory savvy and willpower of DC and the European Union and an intervention in the funding model of VCs.”
Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, said, “A recent Q&A with me headlined ‘Fixing the Internet Will Require a Cultural Shift’ was published in the Harvard Gazette about this topic. I am quoting it here as it addresses these questions.
“What the internet and information technologies have brought us is tremendous power. Tech has become critical infrastructure for modern life. It saved our lives during the pandemic, providing the only way for many to go to school, work, or see family and friends. It also enabled election manipulation, the rapid spread of misinformation and the growth of radicalism.
“Are digital technologies good or evil? The same internet supports both Pornhub and CDC.gov, Goodreads and Parler.com. The digital world we experience is a fusion of tech innovation and social controls. For cyberspace to be a force for good, it will require a societal shift in how we develop, use and oversee tech, a reprioritization of the public interest over private profit. Fundamentally, it is the public sector’s responsibility to create the social controls that promote the use of tech for good rather than for exploitation, manipulation, misinformation and worse. Doing so is enormously complex and requires a change in the broader culture of tech opportunism to a culture of tech in the public interest.
“How do we change the culture of tech opportunism? There is no magic bullet that will create this culture change — no single law, federal agency, institutional policy or set of practices will do it, although all are needed. It’s a long, hard slog. Changing from a culture of tech opportunism to a culture of tech in the public interest will require many and sustained efforts on a number of fronts, just like we are experiencing now as we work hard to change from a culture of discrimination to a culture of inclusion. That being said, we need to create the building blocks for culture change now — pro-active short-term solutions, foundational long-term solutions and serious efforts to develop strategies for challenges that we don’t yet know how to address. In the short term, government must take the lead.
“There are a lot of horror stories — false arrests based on bad facial recognition, data-brokered lists of rape victims, intruders screaming at babies from connected baby monitors — but there is surprisingly little consensus about what digital protections — specific expectations for privacy, security, safety and the like — U.S. citizens should have. We need to fix that. Europe’s General Data Protection Regulation (GDPR) is based on a well-articulated set of digital rights of European Union citizens. In the U.S. we have some specific digital rights — privacy of health and financial data, privacy of children’s online data — but these rights are largely piecemeal.
“What are the digital privacy rights of consumers? What are the expectations for the security and safety of digital systems and devices used as critical infrastructure? Specificity is important here because to be effective, social protections must be embedded in technical architectures. If a federal law were passed tomorrow that said that consumers must ‘opt in’ to personal data collection by digital consumer services, Google and Netflix would have to change their systems (and their business models) to allow users this kind of discretion. There would be trade-offs for consumers who did not opt in: Google’s search would become more generic, and Netflix’s recommendations wouldn’t be well-tailored to your interests. But there would also be upsides — opt-in rules put consumers in the driver’s seat and give them greater control over the privacy of their information.
“Once a base set of digital rights for citizens is specified, a federal agency should be created with regulatory and enforcement power to protect those rights. The FDA was created to promote the safety of our food and drugs. OSHA was created to promote the safety of our workplaces. Today, there is more public scrutiny about the safety of the lettuce you buy at the grocery store than there is about the security of the software you download from the internet. Current bills in Congress that call for a Data Protection Agency, similar to the Data Protection Authorities required by the GDPR, could create needed oversight and enforcement of digital protections in cyberspace.
“Additional legislation that penalizes companies, rather than consumers, for failure to protect consumer digital rights could also do more to incentivize the private sector to promote the public interest. If your credit card is stolen, the company, not the cardholder, largely pays the price. Penalizing companies with meaningful fines and holding company personnel legally accountable — particularly those in the C suite — provide strong incentives for companies to strengthen consumer protections. Refocusing company priorities would positively contribute to shifting us from a culture of tech opportunism to a culture of tech in the public interest.
“It’s hard to solve problems online that you haven’t solved in the real world. Moreover, legislation isn’t useful if the solution isn’t clear. At the root of our problems with misinformation and fake news online is the tremendous challenge of automating trust, truth and ethics. Social media largely removes context from information, and with it, many of the cues that enable us to vet what we hear. Online, we probably don’t know whom we’re talking with or where they got their information. There is a lot of piling on. In real life, we have ways to vet information, assess credentials from context and utilize conversational dynamics to evaluate what we’re hearing. Few of those things are present in social media.
“Harnessing the tremendous power of tech is hard for everyone. Social media companies are struggling with their role as platform providers (where they are not responsible for content) versus their role as content moderators (where they commit to taking down hate speech, information that incites violence, etc.). They’ve yet to develop good solutions to the content-moderation problem. Crowdsourcing (allowing the crowd to determine what is valuable), third-party vetting (employing a fact-checking service), advisory groups and citizen-based editorial boards all have truth, trust and scale challenges. (Twitter alone hosts 500 million tweets per day.)
“The tremendous challenges of promoting the benefits and avoiding the risks of digital technologies aren’t just Silicon Valley’s problem. The solutions will need to come from sustained public-private discussions with the goal of developing protective strategies for the public. This approach was successful in setting the original digital rights agenda for Europe, ultimately leading to multiple digital rights initiatives and the GDPR. While GDPR has been far from perfect in both conceptualization and enforcement, it was a critical step toward a culture of technology in the public interest.
“Today it is largely impossible to thrive in a digital world without knowledge and experience with technology and its impacts on society. In effect, this knowledge has become a general education requirement for effective citizenship and leadership in the 21st century. And it should be a general education requirement in educational institutions, especially in higher ed, which serve as a last stop before many professional careers.
“Currently, forward-looking universities, including Harvard, are creating courses, concentrations, minors and majors in public interest technology — an emerging area focused on the social impacts of technology. Education in public-interest technology is more than just extra computer science courses. It involves interdisciplinary courses that focus on the broader impacts of technology — on personal freedom, on communities, on economics, etc. — with the purpose of developing the critical thinking needed to make informed choices about technology. And students are hungry for these courses and the skills they offer. Students who have taken courses and clinics in public-interest technology are better positioned to be knowledgeable next-generation policymakers, public servants and business professionals who may design and determine how tech services are developed and products are used. With an understanding of how technology works and how it impacts the common good, they can better promote a culture of tech in the public interest, rather than tech opportunism.”
Jerome Glenn, co-founder and CEO of The Millennium Project, predicted, “The race is on to complete the global nervous system of civilization and make supercomputing power available to everyone. Another race is to develop artificial general intelligence (AGI), which some say might never get developed while others think it could be possible within 10 to 15 years; if so, its impact will be far beyond artificial narrow intelligence (ANI). Investments in AGI are forecast to reach $50 billion by 2023. The human brain projects of U.S., EU, China and other countries, plus corporate ANI and AGI research, should lead to augmented individual human and collective intelligence.
“We are moving from the Information Age into the Conscious-Technology Age, which will force us to confront fundamental questions about life as a new kind of civilization emerges from the convergence of two megatrends. First, humans will become cyborgs, as our biology becomes integrated with technology. Second, our built environment will incorporate more artificial intelligence.
“Conscious-technology raises profound dangers, including artificial intelligence rapidly outstripping human intelligence when it becomes able to rewrite its own code and individuals becoming able to make and deploy weapons of mass destruction. Minimizing these dangers and maximizing opportunities – such as improving governance with the use of collective intelligence systems, making it easier to prevent and detect crime and matching needs and resources more efficiently – will require that we actively shape the evolution of conscious-technology.
“The age of conscious-technology is coming as two mega technology trends converge: our built environments become so intelligent that they seem conscious, and humans become so integrated with technology that we become cyborgs. Like every other revolution in human history, from agriculture to industry to the internet, the arrival of conscious-technology will have both good and bad effects.
“Can we think deeply and wisely about the future we want while we still have time to shape the effects of conscious-technology? Humans will become cyborgs as our biology becomes integrated with technology. We are already microminiaturizing technology and putting it in and on our bodies. In the coming decades, we will augment our physiological and cognitive capacities as we now install new hardware and software on computers. This will offer access to genius-level capabilities and will connect our brains directly to information and artificial intelligence networks.
“Our built environment will incorporate more artificial intelligence. With the Internet of Things, we are integrating chips and sensors into objects, giving them the impression of consciousness – as when we use voice commands to control heating, lighting or music in our homes. As our increasingly intelligent environments connect with our cyborg future, we will experience a continuum of our consciousness and our technology.
“As humans and machines become linked more closely, the distinction between the two entities will blur. Conscious-technology will force us to confront fundamental questions about life. All ages and cultures have had mystics who have been interested in consciousness and the meaning of life, as well as technocrats who have been interested in developing technology to improve the future. All cultures have a mix of the two, but the representatives of each viewpoint tend to be isolated from and prejudiced towards each other.
“To improve the quality of the Conscious-Technology Age, the attitudes of mystics and approaches of technocrats should merge. For example, we can think of a city as a machine to provide electricity, water, shelter, transportation and income; or we can think of it as a set of human minds spiritually evolving and exciting our consciousness. Both are necessary. Without the technocratic management, the city’s physical infrastructure would not work; without the spiritual element, the city would be a boring place to live. Like the musician who reports feeling his consciousness merge with the music and his instrument to produce a great performance, one can imagine the future ‘performance’ of a city, or of civilization as a whole, as a holistic synthesis experience of the continuum between technology and consciousness.
“History teaches us that civilizations need a kind of ‘perceptual glue’ to hold them together, whether in the form of religious myths or stories about national origins or destinies. The idea of a feedback loop between consciousness and technology moving towards a more enlightened civilization offers a perceptual glue to help harmonize the many cultures of the world into a new global civilization.
“There are profound dangers along the path towards a conscious-technology civilization. At some point, it is likely that development will start to happen very quickly: when artificial intelligence is able to rewrite its own code, based on feedback from global sensor networks, it will be able to get more intelligent from moment to moment. It could evolve beyond our control in either a positive or a destructive fashion. By exploring scenarios about the possible future evolution of artificial intelligence, can we make wise decisions now about what kinds of new software and capabilities to create?
“As cognition-enhancing technology develops, we will have a world full of augmented geniuses. With the new perceptual, technological and artificial biological powers at his/her disposal, a single individual could be able to make and deploy weapons of mass destruction – a prospect known as SIMAD, or ‘Single Individual MAssively Destructive.’ We already have structures, albeit imperfect, to monitor and prevent the mass-destructive capacity of nation-states and groups – what structures could prevent the threat of SIMADs?
“Connecting human brains directly to information and artificial intelligence networks raises the question of whether minds could be hacked and manipulated. How can we minimize the potential for information or perceptual warfare and its potential consequence of widespread paranoia?
“Accelerated automation will render much of today’s work unnecessary. Driverless vehicles could remove the need for taxi, bus and truck drivers. Personal care robots could take over many functions of nurses and care workers. Artificial intelligence could make humans redundant in professions such as law and research. Will conscious-technology create more jobs than it replaces? Or is massive structural unemployment inevitable, requiring the development of new concepts of economics and work?
“If we think ahead and plan well, the conscious-technology civilization could become better than we can currently imagine. Governance could be vastly improved by collective intelligence systems; it could become easier to prevent and detect crime; needs and resources could be matched more efficiently; opportunities for self-actualization could abound; and so on. However, it would be wise to think through the possibilities of the Conscious-Technology Age today and shape its evolution to create the future civilization we desire.”
Vinton G. Cerf, vice president and chief internet evangelist at Google and Internet Hall of Fame member, observed, “Digital spaces have evolved dramatically over the past 50 years. During that time, programmable devices have become central to an unlimited number of products upon which we increasingly depend. Information space is instantly accessible thanks to the World Wide Web and search engines such as Google. Collaboration is facilitated with email, texting, shared documents, access to immeasurable amounts of data and increasingly powerful computer-based tools for its use. Over the next 15 years, instrumentation in every dimension will color our lives to include remote medical care, robotics and self-driving cars. Cities will have models of themselves they can use to assess whether they are functioning properly or not; these models will be invaluable to aid in response to emergencies and to smooth the course of daily life. During this same period, we will be coping with the amplifying effects of social media, including the side-effects of misinformation, disinformation, malware, stalking, bullying, fraud and a raft of other abuses. We will have made progress in international agreements on norms of civil behavior and law enforcement in online environments. The internet or its successor will have become safer and more secure, and preservation of these properties will be easier with the help of new devices and practices. There will be more collaboration between government and the private sector in the interest of citizen safety and privacy. These are hard problems and abuses will continue, but tools will evolve to provide better protection in 2035.”
Adam Nelson, software development manager at Amazon, commented, “Initiatives around privacy, data portability and – most importantly – putting the rights of individuals, governments and marketplaces above those of companies will lead to a more equitable digital space and digital life. This will be an uneven transition though, with many people still suffering from abuse.”
Srinivasan Ramani, Internet Hall of Fame member and pioneer of the internet in India, wrote, “I am reminded of life in Kerala, one of the states of India. There are many rivers and backwaters there, and it is common for people to live on the water; most people live on the banks of the rivers or the backwaters. The river gives them food (mostly fish) and transportation by boat. The rivers, of course, also give them drinking water. The people are very hygiene-conscious, because if they pollute the river, they will be ruining their own lives. We now live by the internet, and we should be equally careful not to pollute it with misinformation, unreliable information, etc. Of course, people have freedom of expression. Going back to the river analogy, do they have the freedom to pollute the river? I think, and I hope, that rubbish on the internet will diminish in the coming years. People should have freedom of expression, but they should not be able to hide behind anonymity. I would hope that every original post and every forwarding would be signed in a manner that would let us identify the person responsible. Then there is the question of ignorant postings. One may express one’s opinion and own the responsibility for it. That does not guarantee that it is a contribution for the good of society. You may claim in all sincerity that a certain herbal remedy protects you against COVID-19, but it may be a statement with no reliable evidence behind it whatsoever. It can land the reader in trouble by misleading him or her. We can probably invent an effective safeguard against this, but it may not be very easy.”
Ethan Zuckerman, director of the Initiative on Digital Public Infrastructure at the University of Massachusetts-Amherst, said, “Digital spaces are evolving in a mostly negative direction. But there’s an important caveat. It is the large, commercial platforms that are evolving the most negatively. There are many amazing, supportive and successful communities online. They tend to be small, self-governing and not focused on monetization, and, therefore, we tend to ignore them. Accepting that digital spaces are largely evolving in a negative direction should be an indictment of financial models, not of the technology nor of people’s aspirations for healthy online communities. We can, absolutely, change digital spaces to better serve the public good. But we’ve not made the broad commitment to do so. Right now, we are overfocused on fixing existing broken spaces, for instance, making Facebook and Twitter less toxic. We need to do more work imagining and creating new spaces with explicit civic purposes and goals if we are to achieve better online communities by 2035. We begin solving the problem of digital public spaces by imagining spaces designed to encourage pro-social conversations. Instead of naively assuming that connecting people will lead towards increased social harmony, we need to recognize that functional public spaces require careful engineering, moderation and attention paid towards marginalized and traditionally silenced communities. This innovation is more likely to come from real-world communities who take control of their own digital public spaces than it is to come from tech entrepreneurs seeking the next billion-person network. Regulation has a secondary role to play here – its job is not to force Facebook and others into pro-social behavior, but to create a more level playing field for these new social networks. 
Regulating interconnectivity so that new social networks can interact with existing ones, and mandating transparency about the algorithms that power existing networks are critical steps towards building this new vision of the space. Finally, new and older social networks need to be open to examination, so we can study them and answer questions on an ongoing basis about how social media is impacting society as it evolves.”
Nazar Nicholas Kirama, president and CEO of the Internet Society chapter in Tanzania and founder of the Digital Africa Forum, said, “The internet is the conduit of digital space that makes digital life possible. It is key in driving innovation for development towards 2035. The internet is a reflection of our own societies’ good and bad sides: the good far outweighs the harms. As digital spaces evolve, stakeholders need to find ways to curb online harms, not through ‘sanitation’ of digital spaces but by creating reasonable regulations that promote freedom of online expression and personal accountability, thereby building internet trust. The internet has evolved to a stage where it is now a necessary ‘commodity.’ Over the past year we have learned how key it is for communication and business continuity in times of global emergencies like the COVID-19 pandemic. During the first wave, more than 1.5 billion learners who were put out of classrooms due to global lockdowns could not continue their education because they had no connection. Had their homes been connected, the disruption would have been minimal. Being online is vital and good for societies. There are many bad actors online. For example, a busted online child pornography ring can send a shock wave throughout the world because the internet makes news travel in the blink of an eye! But that does not mean there are porn rings everywhere.”
Stephan G. Humer, internet sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, said, “Initiatives aimed at empowerment and digital culture will probably have the greatest impact, because this is where we continue to see the greatest deficits and therefore the greatest need for change. People need to be able to understand, master and individually shape digitization, and this requires appropriate initiatives. A diverse mix of initiatives – some governmental, some non-governmental – will play a crucial role here! The result will be that digitization can be better appreciated and individually shaped. Increasingly, the effects that were expected from digitization at the beginning will come to pass: a better life for everyone, thanks to digital technology. The more self-evident and controlled digital technology becomes part of our lives, the better. People will not let this aspect of control be taken away from them. The dystopias presented so far, which have prophesied a life ‘in the matrix,’ will not become an issue. So far, sanity has almost always triumphed, and that will not change. The more people in the world can use the internet sensibly, the more difficult it will be for despots and dictators.”
Douglas Rushkoff, digital theorist and host of the NPR-One podcast “Team Human,” noted, “There will be many terrific, wonderful innovations for civics in digital spaces moving forward. There will also be almost unimaginably cruel forms of oppression implemented through digital technology by 2035. It’s hard to talk too specifically about digital technology in 2035, since we will likely be primarily dealing with death and destruction from climate change. So, digital technology will be useful for organizing humanity’s broad retreat from coastal areas, organizing refugee camps for over a billion people, administering medical and other forms of triage, and so on. That’s part of the problem when casting out this far. We don’t really know how much of the world will be on fire, whether America will be a democracy, whether China will be dominating global affairs, how disease and famine will have changed the geopolitical landscape, and so on. So, if I have to predict, I’d say digital technology will be mostly applied to: 1) Control populations. 2) Administer mass migrations and resource allocation. 3) Provide entertainment.”
Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network, observed, “Misinformation is a massive concern that will continue without deliberate and direct work. As we build policies, programs and funding mechanisms to support bringing more and more people online, it will be necessary to also build policies and appropriate investments to ensure folks are coming online to digital spaces that are safe and accessible. The pandemic has made clear the level of necessity we have for internet in everyday lives from virtual school to virtual work, telehealth to social connection. That means that the inequities that we’ve enabled around digital divides were exacerbating many other clear inequities around health, social supports, employment, schooling and much more. This means that while challenges are compounded, investing in change can have a multiplied impact.”
Gary A. Bolles, chair for the future of work at Singularity University, said, “The greatest opportunity comes from community-anchored digital spaces that come with heterogeneity, collaboration and consequences. Community-anchored, because the more humans can interact both online and in person, the more potential there is for deeper connection. Heterogeneity, because homogeneous groups build effective echo chambers and heterogeneous groups expose members to a range of ideas and beliefs. Collaboration, because communities that solve one problem together can solve the next, and the next. Consequences, because effective public discourse requires people to be aware of and responsible for the potential negative results of their words and actions. What is critical is that the business models of the digital communications platforms must change. Tech leaders must turn the same level of innovation they have brought to their products toward business-model innovations that encourage them to design for more heterogeneity, collaboration and consequences.”
Erhardt Graeff, assistant professor of social and computer science at Olin College of Engineering, commented, “The only way we will push our digital spaces in the right direction will be through deliberation, collective action and some form of shared governance. I am encouraged by the growing number of intellectuals, technologists and public servants now advocating for better digital spaces, realizing that these represent critical public infrastructure that ought to be designed for the public good. Most important, I think, are initiatives that bring technologists together to realize the public purpose of their work, such as the Design Justice Network, public interest technology and the tech worker movement. We need to continue strengthening our public conversation about what values we want in our technology, honoring the expertise and voices of non-technologists and non-elites; use regulation to address problems such as monopoly and surveillance capitalism; and, when we can, refuse to design or be subject to anti-democratic and oppressive digital spaces.”
Jonathan Grudin, principal human-computer design researcher at Microsoft and affiliate professor at the University of Washington, wrote, “In 2005, digital spaces served the public good. Recovering from the internet bubble, we were connecting with long-lost classmates and friends and conducting business more efficiently online. By 2020, digital spaces had become problematic. Mental health problems afflicted young and old, there was rising income inequality, trust in governments and institutions had eroded, there were elected politicians of staggering ineptitude, and tens of millions were drawn to online spaces rife with malicious conspiracy fantasies and big lies. Trillions of dollars are spent annually to combat bad actors who may have the upper hand. Debt-ridden consumers are succumbing to marketers armed with powerful digital technologies. In 2035, another 15 years will have elapsed.
“The trajectory is not promising, but I’ll bet on a positive transformation, because the current path isn’t sustainable. In 10 years, polar bears and emperor penguins will be gone. Drought and fire will circle the globe. We’ll have too much salt water and not nearly enough fresh water. It could end very badly, but facing the cliff, we can respond. Leaders in China, Russia, Europe, the United States and elsewhere will see they must cooperate in converting swords into sustainable energy and clean water. The resulting science and technology will include digital spaces for addressing the challenges.
“It won’t be easy. Many of the younger generation may have fewer resources than their parents, and will have to make difficult choices their parents didn’t. Life may be worse for the average person in 2035 than today, but I’m betting the digital spaces will be better places.”
Jeff Jarvis, director of the Tow-Knight Center for entrepreneurial journalism at City University of New York, said, “We have time. The internet is yet young. I have confidence that society will understand how to benefit from the net just as it did with print. After Gutenberg, it took 150 years before innovations *with* print flourished: the creation of the first regularly published newspaper, the birth of the modern novel with Cervantes and of the essay with Montaigne. In the meantime, yes, there was a Reformation and the Thirty Years War. Here’s hoping we manage to avoid those detours. Media is engaged in a full-blown moral panic about the net. It is one of their own engineering and it is in their self-interest, as media choose to portray their new competitor as the folk devil that is causing every problem in sight. In the process, media ignore the good on the net. It is with the net and social media that #BlackLivesMatter rose to become a worldwide movement. Imagine enduring the pandemic without the net, preserving jobs, the economy, connections with friends and families. Media’s narrative about the net is dystopian. It is an incomplete and inaccurate picture of the net’s present and future.”
Bruce Bimber, a professor of political science and founder of the Center for Information Technology and Society at the University of California-Santa Barbara, observed, “I envision that eventually new ways of thinking about regulation and the responsibility of social media companies will have an influence on policy. Every major industry with an effect on the public’s safety and well-being is managed by a regulatory regime today, with principles of responsibility and accountability, with limits, with procedures for oversight, with legal principles enforced in courts. That is, except for internet industries, which instead enjoy Section 230. I anticipate that this will change by 2035, as countries come to understand how to think about the relationship of the state and the market in new and more productive ways. That being said, it is not at all clear that this will happen in time. Every year of unrestrained market activity and lack of accountability damages the public sphere more, and we may reach a point where things are too broken for policy to matter.”
Mike Liebhold, distinguished fellow, retired, at The Institute for the Future, commented, “Here are a few of the technical foundations of the shifts in digital spaces and digital life expected by 2035:
- Cross-Cutting Forces (across the technology stack):
  - Applied machine intelligence everywhere
  - Continuous, pervasive cybersecurity vulnerabilities, and vastly amplified security and privacy engineering
  - Energy efficiency and circular accountability becoming critical factors in personal and organizational decision processes
- Systemic Digital Technology Shifts (layers of the technology stack):
  - User experience technologies: conversational agents everywhere; a shift from glass screens to AR for common interaction, including holographic telepresence and media
  - Continued evolution and adoption of embedded intelligent and automated technologies in physical spaces and in robotics and cobotics
  - Connection and network technologies: continuous adoption of fiber and broadband wireless connections, including low-Earth-orbit satellite broadband internet connections in remote geographies
  - Computing and cloud technologies: continued adoption of hybrid edge-cloud AI microservices”
Paul Saffo, a leading Silicon Valley-based forecaster exploring long-term technology trends and their impact on society, wrote, “This particular media revolution (a shift from mass to personal media) is approximately 25 years old, and it has unfolded in precisely the same way every single prior media revolution has evolved. This is because beneath the technological novelty is a common constant of human behavior. Specifically, when a new media technology arrives, first it is hailed as the utopian solution to everything from the common cold to world peace. Then time passes, we realize there is a downside, and the new medium is demonized as the agent of the end of civilization. And finally, the medium, now no longer new, disappears into the cultural fabric of daily life. In short, we hoped cyberspace would deliver a new cyber-utopia, then we feared cyber-geddon. But what we are getting in the end is Cyburbia, an amplified version of our analog reality.”
Barry Chudakov, founder and principal at Sertain Research, wrote, “How I imagine this transformation of digital spaces and digital life will take place: I imagine an awakening to the nature and logic of digital spaces, as people realize the profound human, psychological, and material revolutions these spaces – the metaverse (virtual representation combined with simulation) – will provoke. Yet, I suspect we will go through a transition period of unlearning: We will look at emerging digital spaces and have to unlearn our inherited alphabetic logic to actually see their inherent dynamics. Features of that logic include:
- Digital twins (operating in digital spaces) create a doubling effect of everything and everyone
- Digital spaces’ mirror worlds start by complementing, then competing with – or replacing – reality (Truman Show syndrome)
- Digital spaces evolve from solely a screen experience to more immersive, in-body, in-place experiences
- Augmented reality adds dimension to any experience within digital spaces
- Immersion in digital spaces challenges (devours) human attention
- Time compresses to Now, aka eternal nowness
- Here and there swap: everything, everyone is here in the now, in the digital space
- Identity is identity in the mirror (compounded exponentially by the implementation of digital spaces as mirror worlds)
- Self goes digital: digital spaces become the emerging venue for the presentation of self; I am who I am in digital spaces
- Identity is thereby multiple, fluid: roles, sexual orientation, and self-presentation evolve from solely in-person to in-space
- Privacy in digital spaces becomes a paid service with multiple layers and options like cable TV or streaming services (as tracking and data identification are built into all objects and all things start to think)
- The sequential legacy of the Alphabetic Order becomes digital simultaneity
- Everything (action, reaction, statement, response, movement) generates data, which exponentially increases the information barrage: the outmoded notions of memorization and retention are replaced with ambient findability
- Wholes become miscellaneous as everything is turned into miscellaneous data
- Navigation replaces rules
- Original and copy conflate, objects and experiences become duplicative, as digital spaces become mirror worlds and mirror worlds become the metaverse
- Cut and paste, copy and paste, are no longer merely computer commands, they are behaviors – the prevailing psychology of digital spaces.
- Robots engage with the mirror world as augmented eyes and ears: “reality fused with a virtual shadow” (Kevin Kelly)
- The need for interoperability and portability among digital spaces generates mandates for standards of governance
“Taking (and evolving) simulation and virtual representation from the gaming world, digital spaces will morph from apps and social media platforms into mirror worlds (“the third platform, which will digitize the rest of the world … all things and places will be machine-readable, subject to the power of algorithms,” Kevin Kelly, Wired, March 2019). Market dynamics will force these digital spaces to become more “sticky.” Commerce – making money – will drive this dynamic. To make more money, to get more people to spend more, any surviving digital space will decide it must become stickier. If you doubt that, just watch or talk to teenagers playing video games. Video games are highly involving and addictive, engendering the “I don’t want to leave” dynamic.
“That realization will not be lost on the designers of future digital spaces. Digital spaces will become the video game/cell phone of the future. As they promise information about any and everything, we will be always plugged in, as the spaces will be always (constantly) updating, morphing, evolving. Soon, as users now do with cell phones, we will ignore conventional reality and/or people in that reality for life in the digital space. This is the first critical step in digital spaces competing with, and often replacing, conventional reality.
“To manage the assault of multiple simultaneous changes – new realities from emerging digital spaces – we will be forced to find a new language of ethics, a new set of guidelines for acting and operating in digital spaces. Even now, the National Institute of Standards and Technology (NIST) – part of the U.S. Department of Commerce – is asking the public for input on an AI risk management framework, which the organization is in the process of developing as a way to “manage the risks posed by artificial intelligence.” This is an initial step in what will be a continuing process of understanding and trying to create reasonable protections and regulations.
“A central question: By 2035 what will constitute digital spaces? Today these are sites, streaming services, apps, recognition technologies, and a host of (touch)screen-enabled entertainments. But as we move into mirror worlds, as Things That Think begin to think harder and more seamlessly, as AI and federated learning begin to populate our worlds and thinking and behaviors – digital spaces will transform. It is happening already. Consider inventory tracking – making sure that a warehouse knows exactly what’s inside of it and where: Corvus Robotics uses autonomous drones that can fly unattended for weeks on end, collecting inventory data without any human intervention at all. Corvus Robotics’ drones are able to inventory an entire warehouse on a rolling basis in just a couple days, while it would take a human team weeks to do the same task. Effectively Corvus’ drones turn a warehouse into a working digital space.
“Another emerging digital space: healthcare. In the last couple of years, the sale of professional service robots has increased by 32 percent ($11.2 billion) worldwide; the sale of assistance robots for the elderly increased by 17 percent ($91 million) between 2018 and 2019 alone. Grace, a new medical robot from Singularity Net and Hanson Robotics, is part of a growing cohort of robot caregivers working in hospitals and eldercare facilities around the world. They do everything from bedside care and monitoring to stocking medical supplies, welcoming guests and even cohosting karaoke nights for isolated residents. As these robots warm, enlighten and aid us, they will also monitor, track and digitize our data.
“The gap between digital spaces and real-world space (i.e., us) is narrowing. Soon that seeming gap will be gone. By 2035 a profound transition will be well underway. The distinction between digital spaces and the so-called real world will blur, and in many instances will disappear altogether. In this sense, digital spaces will become ubiquitous, invisible spaces. Digital spaces will be breathing; will be blinking; will be moving.
“Digital spaces will surround us and enter us as we enter them. William Gibson said, ‘We have no future because our present is too volatile. We have only risk management. The spinning of the given moment’s scenarios. Pattern recognition.’ The new immersion is submersion. We will swim through digital spaces as we now swim through water. Our oxygen tanks will be smart glasses, embedded chips, algorithms and AI. The larger question remains: What will this mean? What will this do to us and what will we do with this?
“Like Delmore Schwartz’s ‘heavy bear who goes with me,’ we carry our present dynamics into our conception of future digital spaces. Via cell phones, computers, or consoles we click, swipe or talk to engage with digital spaces. That conception will be altered by the following technologies, which will fuse, evolve, transform and blend to effect completely different dynamics:
- Mirror worlds
- Quantum computing
- Robotics, machine intelligence, deep learning
- Artificial intelligence
- Federated learning
- Recognition technologies
- Surveillance capitalism and totalitarian oversight (As of 2019, it is estimated that 200 million monitoring CCTV cameras of the “Skynet” system have been put to use in mainland China, 4x the number of surveillance cameras in the US)
- Contact tracing
- Data collecting, management, and analysis
“We presently approach technology like kids opening presents at Christmas. We can’t wait to get our hands on the tech and jump in and play with it. No curriculum or pedagogy exists to make us stop and consider what happens when we open the present. With all puns intended, once we open it, the present itself changes. As does the past. As do we. Digital spaces change us, and we change in digital spaces. So, we will transform digital spaces in crisis mode, instead of the better way: using game theory and simulation to map out options. Example: An insurance company, Lemonade Insurance, faced significant backlash in May 2021 for announcing that their AI system would judge whether a person was lying about a car accident based on a video submitted after the incident. Incidents like this fuel an already longstanding movement to end the use of AI in facial-recognition software adopted by public and private institutions.
“As reality is digitized the digital artifact replaces the physical reality. We have no structural or institutional knowledge that aids us in understanding, preparing for or adjudicating this altered reality. What are the mores and ethics of a world where real and made-up identities mingle? Consider for a moment how digital dating sites have affected how people get to know and meet significant others. Or how COVID-19 changed the ways people worked in offices and from home. Ask yourself: How many kids play outside versus play video games? The common thread in these examples is that digital spaces have already been replacing reality. The immediate effect of ubiquitous digital spaces that are not distinct spaces but extensions of the so-called real world will be reality replacement.”
“What reforms or initiatives may have the biggest impact? Like pandemics that morph from one variation to another, digital spaces and our behavior in them change over time, often dramatically and quickly. Proof on a smaller scale: In one generation virtually every teenager in the Western world, and many the world over, has come to consider a cell phone a bodily appendage as important as their left arm and as vital to existence as the air going through their lungs. In a decade, that phone will get smaller; it will no longer be a phone but instead a voice prompt in a headset, a streaming video in eyeglasses, a gateway in a chip embedded under the skin. Our understanding of digital spaces will have to evolve as designers use algorithms and bots to create ever more sticky and seamless digital spaces. Nothing here is fixed or will remain fixed. We are in flux, and we must get used to the dynamics of flux.
“The number-one initiative or reform regarding digital spaces would be to institute training in the grammar, dynamics and logic of digital spaces – effectively a new digital-spaces education – starting in kindergarten and continuing through graduate school. This education/retraining – fed and adjusted by ongoing digital-spaces research – is needed now. It is as fundamental to society and the public square as literacy or STEM. Spearheading this initiative should be the insistence among technologists and leaders of all stripes that profit and growth are among a series of goods – not the only goods – to consider when evaluating and parachuting a new technology into digital spaces.
“New digital spaces will be like vast cities with bright entertainments and dark areas; we will say we are ourselves in them, but we will also be digital avatars. Cell phones caused us to become more alone together (see the work of Sherry Turkle). Emerging digital spaces, which will be much more lifelike and powerful than today’s screens, may challenge identity, may become forces of disinformation, and may polarize and galvanize the public around false narratives – to cite just a few of the reasons why a new digital-spaces curriculum is essential.
“We think of reforms and initiatives in terms of a slight alteration of what we’re already doing – better oversight of online privacy practices, for example. But to create the biggest impact in digital spaces, we need to understand and deeply consider how they operate, who we are once we engage with digital spaces and how we change as we engage. Example: Porn is one digital-space phenomenon that has fundamentally changed how humans on the planet think about and engage in sex and romance. We hardly know all the ramifications. While it appears the negative effects of porn have been exaggerated, the body dysmorphia issues associated with ubiquitous body images in digital spaces have created new problems and issues. These cannot be resolved by passing laws that abolish them. Can we fix hacking or fraud in digital spaces by abolishing them? While that would be a noble intent, consider that it took centuries for the effects of slavery, for example – once abolished – to be recognized, addressed and reconciled (a process still underway). Impersonation and altering identity are fundamental dynamics of digital spaces. These features of digital spaces enable hacking. We are disembodied in digital spaces, which is a leading cause of fraud. This is not an idle example.
“The nature of identity in digital spaces is intimately involved with privacy issues; with dating and relationship issues; with truth and the fight against disinformation. Today we are struggling to grapple with managing the size and scope of certain tech enterprises. That is presently what reforms or initiatives look like. But going forward we are going to have to dig deeper. We are going to have to think more broadly, more comprehensively.
“Our educational systems are based on memorization and matriculation norms that are outmoded in the age of Google and a robotic and remote workforce. Churches are built around myths and stories that contain injunctions and perspectives that do not address key forces and realities in emerging digital spaces. Governments are based on laws which are written matrices. While these matrices will not disappear, they represent an older order. Digital spaces, by comparison, are anarchic. They do not represent a new destination; they are a new disorder, a new way of seeing and being in the world. So, to have the biggest impact, reforms and initiatives must start from a new basis. This is as big a change as moving from base ten arithmetic to base two. We cannot reform our way into new realities. We have to acknowledge and understand them.”
“What beneficial role do you see tech leaders and/or politicians and/or public audiences playing in this evolution? There is one supremely important beneficial role for tech leaders and/or politicians and/or public audiences concerning the evolution of digital spaces. Namely, understanding the drastically different logic digital spaces represent compared to the traditional logic (alphabet and text-centric logic) that built our inherited traditional physical spaces. Our central institutions of school, church, government and corporation emerged from rule-based, sequential alphabetic logic over hundreds of years; digital spaces follow different rules and dynamics.
“A central issue fuels, possibly even dwarfs, that consideration: We are in the age of accelerations. Events and technologies have surpassed – and will soon far surpass – political figures’ ability to understand and make meaningful recommendations for improvement or regulation. In the past, governments had a general sense of a company’s products and services. Car manufacturers made cars with understandable parts and components. But today, leading technologies are advancing by inventing and applying new, esoteric, little-understood (except by creators and a handful of tech commentators) technologies whose far-reaching consequences are either unknown, unanticipated, or both.
“The Covid pandemic has revealed colossal ignorance among some politicians regarding the basics of public health. What wisdom could these same people bring to cyber hacking? To algorithm-mediated surveillance? To supporting, enhancing and regulating the metaverse? At its most basic, governance requires a reasonable understanding of how a thing works. Who in government today truly understands quantum computing? Machine intelligence and learning? Distributed networks? Artificial intelligence?
“We now need a technology and future-focused aristos: a completely neutral, apolitical body akin to the Federal Reserve focused solely on the evolution of digital spaces. In lieu of an aristos, education will need to refocus to comprehend and teach new technologies and the mounting ramifications of these technologies – in addition to teaching young minds how perceptions and experiences change in evolving digital spaces.
“Digital spaces expand our notions of right and wrong; of acceptable and unworthy. Rights that we have fought for and cherished will not disappear; they will continue to be fundamental to freedom and democracy. But digital spaces and what Mary Aiken called the cyber effect create different, at times alternate, realities. Public audiences have a significant role to play by expanding our notion of human rights to include integrities. Integrity – the state of being whole and undivided – is a fundamental new imperative in emerging digital spaces which can easily conflate real and fake, fact and artifact. Identity and experience in these digital spaces will, I believe, require a Bill of Integrities which would include:
Integrity of Speech | An artifact has the right to free expression as long as what it says is factually true and is not a distortion of the truth.
Integrity of Identity | An artifact must be, without equivocation, who or what it says it is. If an artifact is a new entity it can identify accordingly, but pretense to an existing identity other than itself is a violation of identity sanctity.
Integrity of Transparency | An artifact must clearly present who it is and with whom, if anyone, it is associated.
Integrity of Privacy | Any artifact associated with a human must protect the privacy of the human with whom the artifact is associated and must gain the consent of the human if the artifact is shared.
Integrity of Life | An artifact which purports to extend the life of a deceased (human) individual after the death of that individual must faithfully and accurately use the words and thoughts of the deceased to maintain a digital presence for the deceased — without inventing or distorting the spirit or intent of the deceased.
Integrity of Exceptions | Exceptions to the above Integrities may be granted to those using satire or art as free expression, providing that art or satire is not degraded for political or deceptive use.
“What will be noticeably improved about digital life for the average user in 2035? In 2035, many will see the merger of physical and digital worlds as an encroachment on their worldview. At the same time, facility of use and integration of physical and digital realms will improve many experiences and transactions. For example, the automobile will become a significant digital space. One notable improvement will be the reduction in the 38,000 deaths annually from traffic accidents. As driverless cars become mobile digital spaces with end-to-end digital information streaming in and out of each car, our mobile digital experience will reduce accidents, deaths and congestion.
“The most noticeably different aspect of digital life for the average user in 2035 will be a more seamless integration of tools and so-called reality. Importing the dynamics of simulation and virtual representation from the gaming world, we will swallow the internet; digital spaces will move inside us. Time and distance will effectively vanish, whether you are implementing augmented reality, virtual reality or a mirror world in your interaction. Here is where I am, where I can find you or any other – so there is only here. There is only now. The proscenium arch and backstage of ‘The Truman Show’ will have disappeared.
“What is now known as ‘stickiness’ – the ways in which the design of a digital space encourages more engagement – will become full immersion. The outside of any digital space will be harder to fathom because physical spaces will include adjunct digital spaces (just as every business and person has a URL now) and – just as people today pore over their phones and ignore cars, pedestrians, and loved ones – by 2035 digital spaces will become so immersive we will have a problem. It will be extremely difficult to get people to disengage from those digital spaces. We will all become video gamers, hooked on the mirror world of the world.”
Perry Hewitt, chief marketing officer at data.org, a platform for partnerships to build the field of data science for social impact, commented, “Achieving a transformation of digital spaces and improved digital life will require collaboration: private sector tech, government and social-impact organizations coming together in a combination of regulation and norms. Aligning incentives so that for-profit and social-impact efforts can come together is critical. Healthy, informed and engaged publics are better consumers and citizens. Public audiences will play a role to the extent that we build digital spaces that are engaging and convenient to use; it’s hard to see people flocking toward digital broccoli in a candy store of addictive apps. Nate Matias’s Reddit work that shows the actions of individuals improving the platform’s algorithm is hugely encouraging. I am very bullish on the ability to better manage spam, misinformation and hate speech, the scourge of digital spaces today. But it will be an ongoing battle as deepfakes and similar technologies (fake VR in one’s living room?) become more persuasive. Perhaps the biggest challenge will be the tradeoffs between personal privacy and safe spaces. There are many legitimate reasons people require anonymity in public spaces (personal threats, whistleblowing, academic freedom), but it’s really tricky to moderate information and abuse in communities with high anonymity.”
Adam Nagy, project coordinator at Harvard Law School’s Cyberlaw Clinic, commented, “In general, the digitization of sectors that have lagged behind others – such as government social services, healthcare, education and agriculture – will unlock significant potential productivity and innovation. These areas are critical to accelerating economic growth and reducing poverty. At the same time, sectors that have led the pack in digitization, such as finance, insurance, media and advertising, are now facing regulatory headwinds and public scrutiny. Globally, politicians, regulators, civil society and even some industry players are increasingly trying to understand and mitigate harms to individual privacy rights, market competitiveness, consumer welfare, the spread of illegal or harmful content and various other issues.
“These are complex issues, and not every solution is waiting just around the corner, easy to achieve or free of difficult tradeoffs. But digital life is not divorced from the realities of analog life. There is no ‘healthy digital realm’ if the world is in the grips of existential crisis. We are currently enduring overlapping global disasters. COVID-19 has killed millions of people worldwide and sparked an economic disaster the human costs of which will be felt for many years. Meanwhile, the habitability of our planet hangs in the balance due to climate change. Failure by global leaders to act decisively against these crises will have a much greater impact on digital life than any tech-focused reform or initiative.”
Peter Padbury, a Canadian futurist who has led hundreds of foresight projects for federal government departments, NGOs and other organizations, wrote, “1) Artificial intelligence will play a large role in identifying and challenging mis- and disinformation. 2) There could be a code of conduct that platforms use and enforce in the public interest. 3) There could be a national or, ideally, international accreditation body that monitors compliance with the code. 4) Reputable service providers would block the non-code-compliant platforms. 5) The education system has an important role to play in creating informed citizens capable of critical thinking, empathy and a deep understanding of our long-term, global, collective interest. 6) Politicians have a very important role to play in informing, acting and supporting the long-term, global, public interest.”
Susan Price, human-centered design innovator at Firecat Studio, responded, “People are taking more and more notice of the ways social media (in particular) has systematically disempowered them, and are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds. An example is Clubhouse – a live-audio platform with features such as micropayments to content and value creators and a lively co-creation community that is pushing for accessibility features and etiquette mores of respect and inclusion. Another signal is the popularity of the documentary ‘The Social Dilemma,’ and the way its core ideas have been adopted in popular vernacular.
“The average internet user in 2035 will be more aware of the value of their attention and their content contributions due to platforms like Clubhouse and Twitter Spaces that monetarily reward users for participation. Emerging platforms, apps and communities will use fairer value propositions to differentiate and attract a user base. Current problems like the commercial exploitation of users’ reluctance to read and understand terms of service will be solved with competing products and services that strike a fairer bargain with users for their attention, data and time. Privacy, malware and trolls will remain an ongoing battleground; human ingenuity and the lack of coordination between nations suggest that these larger issues will be with us for a long time.”
Zak Rogoff, a research analyst at the Ranking Digital Rights project, wrote, “In 2035, a larger portion of overall time spent using digital spaces will be in contexts where privacy is better protected by GDPR-like laws. I also believe that most people will have more control and understanding of algorithmic decision-making that affects them in what we currently think of as online spaces. However, I also feel that physical space will be more negatively impacted, in ways that online space is today, for example through the reduction of privacy due to ubiquitous AI-powered sensor equipment.
“I suspect we’ll see a cycle where, as more elements of life become at least partially controlled by machines, new problems arise and then they are at least partially addressed. The new problems will continue to keep appearing at the margins with the newer tech. Social media and driverless cars, for example, as they have emerged have been good for most people most of the time, but eventually they caused unforeseen systemic problems. By 2035 there will probably be newly popular forms of always-on wearables that interface with our sensorium, or even brain-computer interfaces, and these will be the source of some of the most interesting problems.”
Scott Santens, senior advisor at Humanity Forward, commented, “We really have no choice but to improve digital spaces, so ‘no’ isn’t an option. I think we’re already coming to realize that the internet isn’t going to fix itself, and that certain decisions we made along the way need to be rectified. One of those decisions was to lean on an ad-driven model to make online spaces free. This was one of the biggest mistakes.
“In order to function better, we need to shift towards a subscription model and a data ownership model, and in order for that to happen, we’re going to need to make sure that digital space users are able to afford many different subscriptions and are paid for their data. That means potentially providing digital subscription vouchers to people in a public-funded way, and it also means recognizing and formalizing people’s legal rights to the data they are generating. Additionally, I believe universal basic income will have been adopted by 2035 anyway, which itself will help pay for subscriptions, help free people to do the unpaid work of improving digital spaces, and perhaps most importantly of all, reduce the stress in people’s lives, which will do a lot to reduce the toxicity of social media behavior.
“The problem of disinformation and misinformation will also require investments in the evolution of education, to better prepare people with the tools necessary to navigate digital spaces so as to better determine what is false or should not be shared for other reasons, vs. what is true or should be shared for other reasons. I do think this shift in education will happen because, again, it has to. We can’t keep teaching kids as we were once taught. A digital world is a different place and requires an education focused on critical thinking and information processing versus memorization and information filing.”
Clifford Lynch, director of the Coalition for Networked Information, commented, “The digital public sphere has become the target of all kinds of entities that want to shape opinion and disseminate propaganda, misinformation and disinformation; in effect, it has become an attack vector in which to stage assaults on our society and to promote extremism and polarization. Also, digital spaces in the public sphere where large numbers of sometimes anonymous or pseudonymous entities can interact with the general public have become full of all of the worst sort of human behavior: bullying, shaming, picking fights, insults, trolling – all made worse by the fact that it’s happening in public as part of a performance to attract attention, influence and build audience.
“I don’t think the human-behavior aspects of this are likely to change soon; at best we’ll see continued adjustments in the platforms to try to reduce the worst excesses. Right now, there’s a lot of focus on these issues within the digital public sphere and discussions on how to protect it from those bad actors. It is unclear how successful these efforts might be. I am extremely skeptical they’ve been genuinely effective to this point. One thing that is really clear is that we have no idea of how to do content moderation at the necessary scale, or whether it’s even possible. Perhaps in the next 5 to 10 years we’ll figure this out, which would lead to some significant improvements, but keep in mind that a lot of content moderation is about setting norms, which implies some kind of consensus. There is, as well, the very difficult question of deciding what content conforms to those norms – so this isn’t a purely, or perhaps even primarily, technology-based challenge.”
Chris Labash, associate teaching professor of information systems management at Carnegie Mellon, said, “This is a difficult question to give a ‘yes’ or ‘no’ to. Digital spaces will, to be sure, have both a positive and negative evolution but my fear is that the negative evolution may be more rapid, more widespread and more insidious than the potential positive evolution.
“We have seen, 2016-present especially, how digital spaces act as cover and as a breeding ground for some of the most negative elements of society, not just in the U.S. but worldwide. Whether the bad actors are from terror organizations or ‘simply’ from hate groups, these spaces have become digital roach holes that research suggests will only get larger, more numerous and more polarized and polarizing.
“That we will lose some of the worst and most extreme elements of society to these places is a given. Far more concerning is the number of less-thoughtful people who will become mesmerized and radicalized by these spaces and their denizens: people who, in a less digital world, might have had more willingness to consider alternate points of view. Balancing this won’t be easy; it’s not simply a matter of creating ‘good’ digital spaces where participants discuss edgy concepts, read poetry and share cat videos. It will take strategies, incentives and dialogue that is expansive and persuasive to attract those people and subtly educate them in approaches to separate real and accurate information from that which fuels mistrust, stupidity and hate.”
David Barnhizer, a professor of law emeritus, human rights expert and founder/director of an environmental law clinic, said, “In the decades since the internet was commercialized in the mid-1990s it has turned into a dark instrumentality far beyond the ‘vast wasteland’ of the kind the FCC’s Newton Minow accused the television industry of having become in the early 1960s. A large percentage of the output flooding social platforms is raw sewage, vitriol and lies.
“In 2018, in a public essay in which he outlined ‘Three Challenges for the Web,’ Tim Berners-Lee, designer of the World Wide Web, voiced his dismay at what his creation had become compared to what he and his colleagues sought to create. He warned that widespread collection of people’s personal data and the spread of misinformation and political manipulation online are a dangerous threat to the integrity of democratic societies … He noted that the internet has become a key instrument in propaganda and mis- and disinformation has proliferated to the point that we don’t know how to unpack the truth of what we see online, even as we increasingly rely on internet sites for information and evidence as traditional print media withers on the vine.
“Berners-Lee said it is too easy for misinformation to spread on the web, particularly because there has been a huge consolidation in the way people find news and information online through gatekeepers like Facebook and Google, which select content to show us based on algorithms that seek to increase engagement and learn from the harvesting of personal data. He wrote: ‘The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking or designed to appeal to our biases can spread like wildfire.’ This allows people with bad intentions and armies of bots to game the system to spread misinformation for financial or political gain.
“The current internet business model, with its expanding power and sophistication of AI systems, has created somewhat of a cesspool. It has become weaponized as an instrumentality of political manipulation, innuendo, accusation, fraud and lies, as well as a vehicle for shaming and sanctioning anyone seen to be somehow offending a group’s sensitivities. When people are subjected to a diet of such content they may become angry, hostile and pick ‘sides.’ This leads to a fragmentation of society and encourages the development of aggressive and ultra-sensitive identity groups and collectives.
“These tend to be filled with people convinced they have been wronged and people who are in pursuit of power to advance their agendas by projecting the image of victimhood. The consequence is that society is fractured by deep and quite possibly unbridgeable divisions. This allows the enraged, perverted, violent, ignorant and fanatical elements of society to communicate, organize, coordinate and feel that they are not as reprehensible as they appear. This legitimizes, for some, hate, stupidity and malice, while masking the absurdity and viciousness nurtured by the narrowness of these groups’ agendas and perceptions.
“(There is still much good … I absolutely love, for example, being able to almost instantaneously unearth an incredible range of data and information due to the information sources available on the internet. That does not mean, however, that it is something that benefits our entire society because it does not involve simply a scholar’s information acquisition but the opening of a diverse range of data systems – excellent, good, mediocre, poor, false or inaccurate.)
“There are hundreds of millions of people who, as Tim Berners-Lee suggests, lack any filters that allow an accurate evaluation of what they are receiving and sending online.
“Despots, dictators and tyrants understand that AI and the internet grant to ordinary people the ability to communicate with those who share their critical views, and that doing so anonymously and surreptitiously threatens the controllers’ power and must be suppressed. Simultaneously, they understand that, coupled with AI, the internet provides a powerful tool for monitoring, intimidating, brainwashing and controlling their people. China has proudly taken the lead in employing such strategies. The power to engage in automated surveillance, snooping, monitoring and propaganda can lead to intimidating, jailing, shaming or otherwise harming those who do not conform. This is transforming societies in heavy-handed and authoritarian ways. This includes the United States.
“China is leading the way in showing the world how to use AI technology to intimidate and control its population. China’s President Xi Jinping is applauding the rise of censorship and social control by other countries. Xi recently declared that he considers it essential for a political community’s coherence and survival that the government have complete control of the internet.
“A large critical consideration is the rising threat to democratic systems of government due to the abuse of the powers of AI by governments, corporations and identity group activists who are increasingly using AI to monitor, snoop, influence, invade fundamental privacies, intimidate and punish anyone seen as a threat or who simply violates their subjective ‘sensitivities.’ This is occurring to the point that the very ideal of democratic governance is threatened.
“Authoritarian and dictatorial systems such as China, Russia, Saudi Arabia, Turkey and others are being handed powers that consolidate and perpetuate their oppression. Recently leaked information indicates that as many as 40 governments of all kinds have gained access to the Pegasus spyware system that allows deep, comprehensive and detailed monitoring of the electronic records of anyone, and that numerous journalists have been targeted by individual nations.
“Reports indicate that the Biden Administration has forged a close relationship with Big Tech companies related to the obtaining of citizens’ electronic data and online censorship. An unfortunate truth is that those in power, such as intelligence agencies like the NSA and politicized bureaucrats, and those who can gain financially or otherwise, simply cannot resist using the AI tools that serve their interests.
“The authoritarian masters of such political systems have eagerly seized on the surveillance and propaganda powers granted them by the AI and the internet. Overly broad and highly subjective interpretations about what constitutes ‘hate’ and ‘offense’ are destructive grants of power to identity groups and tools of oppression in the hands of governments. They create a culture of suspicion, accusation, mistrust, resentment, intimidation, abuse of power and hostility. The proliferation of ‘hate speech’ laws and sanctions in the West – formal and informal, including the rise of ‘cancel culture’ – has created a poisonous psychological climate that is contributing to our growing social divisiveness and destroying any sense of overall community.
“We are in a new kind of arms race we naively thought was over with the collapse of the Soviet Union. We are experiencing quantum leaps in AI/robotics capabilities. Sounds great, right? The problem is that these leaps lead to vastly heightened surveillance systems, amazing military and weapons technologies, autonomous self-driving vehicles, massive job elimination, data management and deeply penetrating privacy invasions by governments, corporations, private groups and individuals.
“The Pentagon is investing $2 billion in the Defense Advanced Research Projects Agency (DARPA) ‘AI Next Campaign,’ focusing on increased AI research and development. The U.S. military is committed to creating autonomous weapons and is in the early stages of developing weapons systems intended to be controlled directly by soldiers’ minds. Significant AI/robotics weaponry and cyber warfare capabilities are being developed and implemented by China and Russia, including autonomous tanks, planes, ships and submarines, tools that can also mount dangerous attacks on nation-states’ grids and systems.
“The ‘bad’ in celebrating the undeniable ‘good’ that will flow from further developments in AI and robotics is that we can move too fast and be blind to the ‘bad.’
“We face extremely serious challenges in our immediate and near-term future. Those challenges include social disintegration, large-scale job loss, rising inequality and poverty, increasingly authoritarian political systems, surveillance, loss of privacy, violence and vicious competition for resources. With the possibility of social turmoil in mind, former Facebook project manager Antonio Garcia Martinez quit his job and moved to an isolated location due to what he saw as the relentless development of AI/robotic systems that will take over as much as 50 percent of human work in the next 30 years in an accelerating and disruptive process. Martinez concluded that, as the predicted destruction of jobs increasingly comes to pass, it will create serious consequences for society, including the probability of high levels of violence and armed conflict as people fight over the distribution of limited resources.
“Tesla’s Elon Musk describes artificial intelligence development as the most serious threat our civilization faces. He is on record saying that the human race stands only a 5 to 10 percent chance of avoiding being destroyed by killer robots. Max Tegmark, physics professor at MIT, has also warned that AI/robotics systems could ‘break out’ of human efforts to control them and endanger humanity. Tommi Jaakkola, an MIT AI researcher, described the dilemma, explaining: ‘If you had a very small neural network [deep learning algorithm], you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.’ He added, ‘We can build these models, but we don’t know how they work.’ This fact exists at a point that is quite early in the development of AI.
“If Masayoshi Son, CEO of SoftBank, is right, the AI future is a great danger. Like anyone else trying to gain a sense of our future, we simply don’t know what the future holds, but we are playing with fire and beset by unbounded hubris and tunnel vision. Like opioid and heroin addicts, it seems that we simply ‘can’t help ourselves’ and will innovate, create and invent right up to the point when we aren’t in control. Just because you can do something does not dictate that you should. … Stephen Hawking warned: ‘I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer… Computers can, in theory, emulate human intelligence – and exceed it. … And in the future, AI could develop a will of its own – a will that is in conflict with ours. In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.’
“Hawking is not alone. Oxford University philosopher Nick Bostrom focuses on the development of artificial intelligence systems and has raised the possibility that fully developed AI/robotic systems may be the final invention of the human race, indicating we are ‘like small children playing with a bomb.’ The developments in AI/robotics are so rapid and uncontrolled that Hawking posited a ‘rogue’ AI system could be difficult to defend against, given our own greedy and stupid tendencies. Already today we are inundated with deceptive AI propaganda ‘bots’ and subjected to continuous invasions into our most private and personal information.
“Big data mining is being used by businesses and governments to create virtual simulacra of ‘us’ so that they can more efficiently anticipate our actions, preferences and needs. This is aimed at manipulating and persuading us to act to advance their agendas and deliver advantages. If people such as Hawking, Tegmark, Bostrom and Elon Musk are even partially correct in their concerns, we are witnessing the emergence of an alternative species that could ultimately represent a fundamental threat to the human race.”
danah boyd, founder and president of the Data & Society Research Institute and principal researcher at Microsoft, commented, “My expectation is that technology will be leveraged to reify capitalism rather than to help undo its most harmful components. Technology mirrors and magnifies the good, bad and ugly of society. There are serious (and daunting) challenges to public life in front of us that are likely to result in significant civil unrest and chaos – and technology will be leveraged by those who are scared, angry, disenfranchised even as technology will also be used by those seeking to address the challenges in front of us. But technology can’t solve inequality. Technology can’t solve hate. These require humans working together. Moreover, technology is completely entangled with late-stage capitalism right now and addressing inequality/hate and many other problems (e.g., climate change) will require a radical undoing/redoing of capitalism.”
Ayden Férdeline, a public-interest technologist based in Berlin, Germany, commented, “We have recentralized what was a decentralized network of networks by relying on three or four content-distribution networks to store and cache our data. We are making the internet’s previously resilient architecture weaker, easier to censor and more reliant on the goodwill of commercial entities to make decisions in our interests.
“If we don’t course-correct soon, I worry that the internet of 2035 will be even more commercial, government-controlled and far less community-led. I am concerned that we are moving towards more closed ecosystems, proprietary protocols and standards, and national Splinternets that all abandon the very properties that made the internet such an impactful and positive tool for social change over the past 25 years.
“Of course, in not addressing many of the very real issues that the internet does have on society, we have found ourselves in a situation where some kind of intervention is required. I just worry that the wrong actors have identified this opportunity to intervene. If we think back to how the internet was developed, it grew somewhat surreptitiously, as far as commercial and political interests are concerned, which gave the internet the time and space to develop the norms and governance structures that we now take for granted. Values like interoperability, permissionless innovation, and reusable building blocks. These are excellent properties. But, ultimately, they are not technical values, they were political choices only possible because the internet was a publicly funded project intended for use in democracies for academic and then military networks.
“As the internet has grown in importance and commercial interests have recognized opportunities to monetize it, the internet’s foundational values have been abandoned. Social media and messaging services have no interoperability.”
Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, wrote, “It’s unfortunate that the digital space has been so thoroughly polluted, but it’s also unlikely to change for one reason – people don’t change. We can ruin anything. Most new technologies started out with great promise to change society for the better. Remember what was being said when cable was introduced? There is a lot that’s good and useful in the digital space, but the bad drives out the good and causes more harm. Do we have to talk these days about Russian interference, the Big Lie of the election, or the fact that people aren’t getting vaccinated against Covid? It’s not all the online space – cable contributed also. Technology will never keep up with all the garbage going in.”
Adam Clayton Powell III, executive director of the Election Cybersecurity Initiative at the University of Southern California, commented, “While I wish this were not the case, it is becoming clear that digital spaces, even more than physical spaces, are becoming more negative. Consider as just one example the vulnerability of female journalists, many of whom are leaving the profession because of digital harassment and attacks. In Africa, where I have worked for years, this is a fact of life for anyone opposing authoritarian regimes.”
Russell Newman, associate professor of digital media and culture at Emerson College, wrote, “My ‘no’ is based on the continuance of our present trajectory. Much could change it. In fact, I almost answered ‘yes’ to the question on the basis that I have a lot of hope that activism might well prevail in these ways. However, assuming we remain in a moment of unabated present forward movement, what prevails is a set of business models that continue to rely heavily on intensified tracking with an assist from artificial intelligence and machine learning, all of which we now know bake in societal inequities rather than alleviating them and point systems far away from any democratic outcome.
“Many of the debates about misinformation occurring now are in fact epiphenomena of several trends, as parties harness them toward various ends. Several worry me in particular:
1) While the largest tech companies receive the largest share of attention, the conduit providers themselves – AT&T, Comcast, Spectrum, Verizon – have been constructing their own abilities to track their users for the purpose of selling data about them to advertisers or other comers, and/or to strengthen their ability to provide intermediary access to such information across supply chains and more. Verizon’s recent handover of its Verizon Media unit to Apollo only means that one of the largest tracking entities in existence has been transferred to a sector that cares even less about the quality of democratic communications, seeking instead deeper returns. Clampdowns by tech giants on third-party tracking are similarly likely only to consolidate the source of tracking information on users with fewer, larger players. This is to leave aside that we are nowhere close to serious privacy legislation at the federal level.
2) Adding to this, the elimination of network neutrality rules by the Trump FCC is devastating for future democratic access to communications. In fact, the Trump FCC did not just remove network neutrality rules, but took the agency itself out of even overseeing broadband communications overall. The resultant shift from common carriage communications, which required providers to take all paying comers, to private carriage portends all sorts of new inequities and roadblocks to democratic discourse while also potentially intensifying tracking (blocking the ability to use VPNs, perhaps). Maddeningly, the Biden administration shows little serious interest in fixing this; the fact it has yet to even hint at appointing a tiebreaking Democratic FCC commissioner with dwindling time remaining in this Congress is a disaster.
3) Our tech giants are not just advertising behemoths but are also effectively and increasingly military contractors in their own right with massive contracts with the intelligence and defense arms of the government. This instills troubling incentives that do not point toward increased democratic accountability. Facial-recognition initiatives in collaboration with police departments similarly portend intensifications of existing inequities and power imbalances.
4) Traditional media upon which democratic discourse depends is continuing to consolidate; to add insult to injury, it is becoming financialized. Newspapers in particular are doing so under the thumb of hedge funds with no commitment to democratic values, instead seeing these important enterprises as revenue centers to wring dry and discard. ‘Citizen journalism’ is not a foundation for a democracy; a well-resourced sector prepared and incentivized to do deep investigative reporting about crucial issues of our time is. Emergent entities like Vox, BuzzFeed and Axios themselves received early support from the usual giants in tech and traditional media, and their own logics don’t necessarily lean toward optimally democratic ends, with Axios as recently as late 2020 telling the Wall Street Journal it saw itself as a software-as-a-service provider for other corporations.
5) Finally, and perhaps most challengingly, our communications networks and the metadata of our use of them have themselves become intrinsically embedded within global capital flows, with aspects of our interactions with traditional media being as folded into this amalgam as the tracking of container ship cargo. Making democratic media policy in its own right is challenging when it is interwoven with flows of global capital in this way. This is also another reason why antitrust has, itself, become fraught in many ways. New interest in resuscitating a moribund antitrust policy does not address the core logics in play here, as developing manifestations of power are unaddressable by it barring much rethinking.
“There are numerous technical initiatives that seek to instill different rationales and logics for new forms of participation. Such initiatives, while useful to explore, neglect the most banal yet most crucial insight of all: that all these problems we face are social ones, not technological ones, and developing new web platforms of varying logics is ancillary to addressing the conditions that the trends above do not just exacerbate but actually support.
“The notion that policy just ‘lags’ behind emergent tech is a red herring. The business models being pursued today were agendas before they became realities in policy debates, even if still gestating. I study this stuff intensively and I was barely familiar with some of these initiatives introduced in the piece. Participation in these new arenas is a privilege of both knowledge and, frankly, time that many working people do not possess (for that matter, that even I do not possess, and I occupy a position of relative privilege in that regard). Broadband access itself remains spotty and inequitably distributed, mostly because of price, intensifying the issue. A New Deal-esque effort to render broadband affordable as well as available is necessary to address this.
“All of the ills identified are endemic to a time in which wages have effectively stagnated and the power of collective bargaining has been brought low (leading to greater efforts by necessity to pinpoint perfect audiences so as to clear markets); where policy toward corporate interests has intensified a divergence between the capital-owning sector and Main Street; where basic needs like health care are lacking for so many; where a personal-debt crisis (born not just of student debt but of historically stagnating wages) threatens the financial health of multiple generations and, by extension, the economy writ large. This is to leave aside the barriers being thrown up to voting itself and the constitution of right-wing echo chambers our new platforms have afforded, which have been armed and deployed to forestall these trends from changing.
“Elites across the globe share more commonalities in their interests and station than differences even if national prerogatives differ. The climate crisis intensifies every single one of the trends above, one that these same economic elites look to evade themselves, rather than solve.
“All of this does not portend stronger democratic features across our landscape. It portends continued division sown by artificial intelligence-driven suggestion engines, an economic climate that only finds bullet wounds covered over with Band-Aids that threaten new and larger future implosions, and a climate crisis that will only heighten these tensions.”
Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for science, technology and innovation policy, said, “Forwarding the public good requires both collective action and trust in democratic institutions. Online spaces may become even better places for yelling and organizing in the years to come, but so far they are of zero usefulness in causing genuine policy changes to happen through the public-spirited work of elected representatives. Restoring trust in our real-world democratic institutions will require some exogenous stuff. And online spaces don’t do exogenous.”
Nicholas Proferes, assistant professor of information science at Arizona State University, responded, “There is an inherent conflict between the way for-profit social media platforms set up users to think about the platforms as ‘community’ while those platforms must also commodify information flows from user content to a degree vastly exceeding anything that would normally exist in a ‘community.’ Targeted ads, deep analysis of user-generated content (such as identification of brands or goods in photos/videos uploaded by users) and facial recognition all pose threats to individuals. As more and more social media platforms become publicly traded companies (or plan to), the pressure to commodify will only intensify. Given the relatively weak regulation of social media companies in the past decade in the U.S., I am pessimistic.”
Peter Rothman, lecturer in computational futurology at the University of California-Santa Cruz, wrote, “As long as digital spaces and social media are controlled by for-profit corporations, they will be dominated by things that make profits, and that’s outrage, anger, bad news and polarized politics. I see nothing happening on any service to change this trajectory. A change of direction would require a significant change of law and it can’t happen in the current political environment.”
Christopher Richter, a professor at Hollins University whose research focuses on communications processes in democracies, said, “Diagnosis, reform and regulation are all reactive processes. They are slow, and they don’t generate profit, while new-tech development in irrational market environments can be compared to a juggernaut, leading to rapid accumulation of huge amounts of wealth, the beneficiaries of which in turn rapidly become entrenched and devote considerable resources to actively resisting diagnosis, reform and regulation that could impact their wealth accumulation.
“I am confident that the interacting systems of design processes, market processes and user behaviors are so complex and so motivated by wealth concentration that they cannot and will not improve significantly in the next 14 years. Social media and other digital technologies theoretically and potentially could support a more-healthy public sphere by channeling information, providing neutral platforms for reasoned debate, etc. But they have to be designed and programmed to do so, and people have to value those functions. Instead, they are designed to generate profit by garnering hits, likes, whatever, and people prefer or are more vulnerable to having their emotions tweaked than to actually cultivating critical thinking and recognizing prejudice. Thus, emotional provocation is foregrounded.
“Even if there is a weak will to design more-equitable applications, recent research demonstrates that even AI/machine learning can reflect deep-seated biases of humans, and the new apps will be employed in ways that reflect the biases of the users – facial-recognition software illustrates both trends. And even as the problems with something like facial recognition may get recognized and eventually repaired, there are many, many more new apps being rapidly developed the negative effects of which won’t be recognized for some time.”
Carolina Rossini, an international technology law and policy expert and consultant who is active in a number of global digital initiatives, said, “For years to come – based on the current world polarization and the polarization within various powerful and relevant countries – I feel speech and security risks will increase. Personal harm, including a greater impact on mental health, might also increase within digital realms like the Metaverse. I feel we might need some new form of regulatory agency that has some input on how technology impacts people’s health. We have the FDA for medicines and more; why not something like that for the tech that is getting closer and closer to being put inside our bodies?
“If countries do not come together to deal with those issues, the future might be grim. From building trust and cooperation to good regulation against large monopolistic platforms to better review of the impact of technologies to good data governance frameworks that tackle society’s most pressing problems (e.g., climate change, food security, etc.) to digital literacy to building empathy early on, there is a lot to be done.”
Danny Gillane, an information science professional, commented, “People can now disagree instantaneously with anybody and with the bravery of being out of range and of anonymity in many cases. Digital life is permanent, so personal growth can be erased or ignored by an opponent’s digging up some past statement to counter any positive change. Existing laws that could be applied to large tech companies, such as antitrust laws, are not applied to these companies nor to their CEOs. Penalties imposed in the hundreds of millions of dollars or euros are a drop in the bucket to the Googles of the world.
“Relying on Mark Zuckerberg to do the right thing is not a plan. Relying on any billionaire or wannabe billionaire to do the right thing to benefit the planet as opposed to gaining power or wealth is not a plan. It is a fantasy. One would be better served buying a lottery ticket. I think things could change for the better. But they won’t. Elected officials, especially in the United States, could place doing what’s best for their constituencies and the world over power and reelection. Laws could be enforced to prevent the consolidation of power in the few largest companies. Laws could be passed to regulate these large companies. People could become nicer.”
Alan Mutter, consultant and former Silicon Valley CEO, observed, “The internet is designed to be open. Accordingly, no one is in charge. While good actors will do many positive things with the freedom afforded by digital publishing, bad actors will continue to act badly with no one to stop them. Did I mention that no one is in charge?”
Kate Carruthers, chief data and insights officer at the University of New South Wales-Sydney, said, “Digital spaces will not magically become wholesome places without significant thought and action on the part of leaders. U.S. leadership is either not capable or not willing to make the necessary decisions. Given the political situation in the U.S., any kind of positive change is extremely unlikely. All social media platforms should be regulated as public utilities, and then we might stand a chance for the growth of civil society in digital spaces. Internet governance is becoming fragmented, and countries like China and Russia are driving this.”
Leah Lievrouw, professor of information studies at the University of California – Los Angeles, commented, “If the pervasive privatization, ‘walled garden’ business models and network externalities that allowed the major tech firms to dominate their respective sectors – search, commerce, content/entertainment, interpersonal relations and networks – continue to prevail, things will not improve, as big players continue to oppose meaningful governance and choke off any possible competition that might challenge their incumbency. Despite growing public concern and dismay about the climate and risks of online communication and information sources, no coherent agenda for addressing the problems has yet emerged, given the tension between an appropriate reluctance to let governments (with wildly different values, and many with a penchant for authoritarianism) set the rules for online expression and exchange, and the laddish, extractive ‘don’t blame us, we saw our chances and took ’em’ attitude that still prevails among most tech industry leadership. It’s not clear to me where the new, responsible, really compelling model for ‘digital spaces’ is going to come from.”
Neil Richards, professor of law at Washington University in St. Louis and one of the country’s foremost academic experts on privacy law, observed, “Right now, I’m pretty pessimistic about the ability of venture capital-driven tech companies to better humanity when our politics have two Americas at each other’s throats and there is massive wealth inequality complicated by centuries of racism. I’m confident over the long term, but the medium term promises to be messy. In particular, our undemocratic political system (political gerrymandering, voting restrictions and the absurdity of the Senate, where California has the same power as Wyoming and a dozen other states with a fraction of its population), tone-deaf tech company leaders and viral misinformation mean we’re likely to make lots of bad decisions before things get better.
“We’re human beings. The history of technological advancements makes pretty clear that transformative technological changes create winners and losers, and that even when the net change is for the better, there are no guarantees, and, in the short term, things can get pretty bad. In addition, you have to look at contexts much broader than just technology.”
Joseph Turow, professor of media systems and industries at the University of Pennsylvania, said, “Correcting this profound problem will require a reorientation of 21st-century corporate, national and interpersonal relationships that is akin to what is needed to meet the challenge of reducing global warming. There are many wonderful features of the internet when it comes to search, worldwide communication, document sharing, community-oriented interactions and human-technology interconnections for security, safety and health. Many of these will continue apace.
“The problem is that corporate, ideological, criminal and government malefactors – sometimes working together – have been corrupting major domains of these wonderful features in ways that are eroding democracy, knowledge, worldwide communication, community, health and safety in the name of saving them. This too will continue apace – unfortunately often faster and with more creativity than the socially helpful parts of our internet world.”
Liza Potts, professor of writing, rhetoric and American cultures at Michigan State University, responded, “The lack of action on the part of platform leaders has created an environment where our democracy, safety and security are all at risk. At this point, the only solution seems to be to break apart the major platforms, standardize governance, implement effective and active moderation and hold people accountable for their actions. Without these moves, I do not see anything changing.”
Natalie Pang, a senior lecturer in new media and digital civics at the National University of Singapore, said, “Although there is now greater awareness of the pitfalls of digital technologies – e.g., disinformation campaigns, amplification of hate speech, polarisation and identity politics – such awareness is not enough to reverse the market dynamics and surveillance capitalism that have become quite entrenched in the design of algorithms as well as the governance of the internet. Broader governance, transparency and accountability – especially in the governance of the internet – are instrumental in changing things for the better.”
Larry Lannom, director of information services and vice president at the Corporation for National Research Initiatives (CNRI), commented, “Solutions will be hard to come by. The essential conundrum is how to preserve free speech in an environment in which the worst speech has a many-fold advantage. This general phenomenon is not new. Jonathan Swift wrote in ‘The Art of Political Lying’ in 1710, ‘If a lie be believed only for an hour, it has done its work, and there is no farther occasion for it. Falsehood flies, and Truth comes limping after it.’ But the problem is enormously exacerbated by the ease of information spread across the internet, and it is unclear whether the virus-like behavior of misinformation that strikes the right chords in some subset of the population can be stopped.
“The negative sense I have is primarily about social media and the algorithms that drive users into more and more extreme positions. As long as there is profit in scaring people, in pushing conspiracy theories and in emphasizing wedge issues instead of the common good, societies will continue to fracture, and good governance will be harder to achieve.
“There is still a lot of good in collaboration technologies. You can focus the world’s expertise on a given problem without having to get all of those experts together in a single room. It makes information more readily available. Consider the transformative protein-folding announcement from DeepMind. Researchers say the resource – which is set to grow to 130 million structures by the end of 2021 – has the potential to revolutionize the life sciences. These sorts of advances, widely shared, will increase over time, with great potential benefits.”
Kent Landfield, a chief standards and technology policy strategist with 30 years of experience, wrote, “Critical thinking is what made Western societies able to innovate and adapt. The iPhone phenomenon has transformed our society into one of lookup instead of learning. With that fundamental way of looking at the world no longer being mastered, the generations that follow may become driven by simple herd mentality. The impact of social media on our society is dangerous, as it propels large groups of our populations to think in ways that do not require original thinking. Social media platforms are ‘like or dislike’ spaces that foster conflict, causing these populations to be more susceptible to disinformation, whether societal or nation-state. ‘Us versus them’ is not beneficial to society at all. The days of compromise, constructive criticism and critical thinking are passing us by. Younger generations’ minds are being corrupted by half-truths and promises of that which can never be achieved.”
Michael Kleeman, senior fellow at the University of California-San Diego, commented, “The digital space has radically altered the costs of information distribution, including the costs of misinformation. This economic reality has created and will likely continue to create a cacophony with no filters, and it will likely cause people to continue to move towards a few sources that echo their beliefs and simplify what are inherently complex issues. The threats of this to civil society, democracy and physical and mental health are very real and growing. The only hope I feel is a move toward more local information, where people can ‘test’ the digital data against what they see in the real world. But even that is complex and difficult, as partial truths can mask more complete information and garner support for a distorted position. I am, sadly, not hopeful.”
Oscar Gandy, an emeritus scholar of the political economy of information at the University of Pennsylvania, said, “Much of my pessimism about the future of digital spaces is derived from my observations regarding developments that I have seen lately, and on the projections of critical observers who project further declines in these directions.
“While there are signs of growing concern over the growth in the power of dominant firms within the communications industry – and suggestions about the development of specialized regulatory agencies with the knowledge, resources and authority to limit the development and use of data and analytically derived inferences about individuals and members of population segments or groups – I do not have much faith in the long-term success of such efforts, especially in the wake of more widespread use of ever-more-sophisticated algorithmic technologies to bypass regulation.
“There is also a tendency for this communicative environment to become more and more specialized, or focused upon smaller and smaller topics and perspectives, a process that is being extended through algorithmically enabled segmentation and targeting of information based upon assessments of the interests and viewpoints of each of us. In addition, I have been struck by the nature of the developments within the sphere of manipulative communication efforts, such as those associated with the so-called ‘dark matter,’ or presentational strategies based upon experimental assessments of different ways of presenting information to increase its persuasive impact.”
Marcus Foth, professor of informatics at Queensland University of Technology, exclaimed, “Issues of privacy, autonomy, net neutrality, surveillance, sovereignty, etc., will continue to mark the lines on the battlefield between community advocates and academics on the one hand, and corporations wanting to make money on the other hand. Things could change for the better if we imagine new economic models that replace the old and tired neoliberal market logic that the internet is firmly embedded in.
“There are glimpses of hope with some progressive new economic models (steady state, degrowth, doughnut, and lots of blockchain fantasies, etc.) being proposed and explored. However, I am doubtful that the vested interests holding humankind in a firm grip will allow for any substantial reform work to proceed. The digital spaces you refer to are largely hosted by digital platform corporations operating globally. In the early days of the internet, the governance of digital spaces on the top ‘applications’ layer of the OSI (Open Systems Interconnection) model comprised simple and often organically grown community websites and Usenet groups.
“Today, this application layer is far more complex, as the commercial frameworks, business plans and associated governance arrangements, including policies and regulations, have all become far more complex and sophisticated. While the pace with which this progression advances seems to accelerate, the direction since the World Wide Web went live in 1993 has not changed much. The underlying big platform corporations that have emerged are strongly embedded in a capitalist market logic, set to be profitable following outdated neoliberal growth key performance indicators (KPIs). What they understand to be ‘better’ is based on commercial concerns and not necessarily on social or community concerns.”
Jamais Cascio, distinguished fellow at the Institute for the Future, responded, “The further spread of internet use around the globe will mean that by 2035 a significant part – perhaps the majority – of active digital citizens will come from societies that are comfortable with online behavioral restrictions. Their 2035 definition of the ‘social good’ online will likely differ considerably from the definition we most frequently discuss in 2021. This isn’t to say that attempts to improve the social impacts of digital life won’t be ongoing, but they will be happening in an environment that is culturally fractured, politically restive and likely filled with bots and automated management relying on increasingly obscure machine-learning algorithms.
“Our definition of ‘social good’ in the context of digital environments is evolving. Outcomes that may seem attractive in 2021 could well be considered anathema by 2035, and vice-versa. Censorship of extreme viewpoints offers a ready example. In 2021, we’re finding that silencing or deplatforming extreme political and social voices on digital media seems to have an overall calming effect on broader political/social discourse. At the same time, there remains broad opposition (at least in the West) to explicit ‘censorship’ of opinions.
“By 2035, we may find ourselves in a digital environment in which sharp controls on speech are widely accepted, where we generally favor stability over freedom. Conversely, we may find by 2035 that deplatforming and silencing opinions too quickly becomes a partisan weapon, and there’s widespread pushback against it, even if it means that radical and extreme voices again garner outsized attention. In both of these futures, the people of the time would see the development as generally supporting the social good – even though both of these futures are fairly unattractive to the people of today.”
Meredith P. Goins, a group manager connecting researchers to research and opportunities, wrote, “The internet is being used to track people, guesstimate what they are going to purchase and track their every waking moment so that they can either be found or be advertised to. Tech leaders will continue to make billions from reselling content the general public produces while the middle class goes extinct.
“The inequalities will continue until broadband and internet service becomes regulated like telephone, TV, etc. If not, Facebook, Twitter and all social media will continue to devolve into a screaming match with advertising. My children (16 and 19) use the internet for their classes (had to due to Covid) but both hate social media and the ability for folks to get in touch with them at all hours of the day. They miss face-to-face interaction and have thanked me for limiting their screen time over the years as they are skilled at communicating in real life, which many of their friends (yes, college freshmen!) have a hard time with.
“Individuals have lost their ability to communicate both in person and through fully written sentences. I have 14 staff members who are all Millennials and I have had to teach most of them how to write in complete sentences. It is getting worse, not better. Let’s also reinstate spelling, cursive writing and speech classes so that our students and workers can be successful in their future work.”
Marc Brenman, managing partner of IDARE LLC, observed, “Human nature is unlikely to change. There is little that is entrenched in technology that will not change much. The interaction of the two will continue to become more problematic.
“Technology enables errors to be made very quickly, and the errors, once made, are largely irretrievable. Instead they perpetuate, extend and reproduce themselves. Autonomy becomes the possession of machines and not people. Responsibility belongs to no one. Random errors creep in. We, as humans, must adjust ourselves to machines.
“Recently I bought a new car with state-of-the-art features. These include lane-keeping, and I have been tempted to take my hands off the steering wheel for long periods. This, combined with cruise control and distance regulation, comes close to self-driving. I am tempted to surrender my will to the machine and its sensors and millions of lines of code. The safety features of the car may save my life, but is it worth saving? Similarly, the technology of gene-splicing enables the creation of mRNA vaccines, but some people refuse to take them. We legally respect this thanatos, as we legally respect another technology: guns.”
Gary Marchionini, dean and professor at the School of Information and Library Science at the University of North Carolina-Chapel Hill, wrote, “I expect that there will be a variety of national and local regulations aimed at limiting some of the more serious abuses of digital spaces by machines, corporations, interest groups, government agencies and individuals. These mitigation efforts will be insufficient for several reasons: The incentives for abuse will continue to be strong. The effects of abuse will continue to be strong. And each of these sets of actors will be able to masquerade and modify their identity (although corporations and perhaps government agencies will be more limited than machines, individuals and especially interest groups). On the positive side, individuals will become more adept at managing their online social behaviors and cyberidentities.”
David Porush, writer, longtime professor at Rensselaer Polytechnic Institute and author of “The Soft Machine: Cybernetic Fiction,” said, “Digital spaces are like all technologies: They change our minds, and even our brains, but not our souls. Or if the word ‘soul’ is too loaded for you, try ‘the eternal, enduring human instincts and impulses that drive our interactions with each other and considerations of our selves.’ (You can see why I prefer the shorthand). Digital spaces have unleashed new facilities for getting what’s in our souls into each other’s, for better or worse. We can do so wider, faster and with more fidelity and sensation (multimedia) and intimacy. New media grant us ways to express ourselves that were inconceivable without them. We can share subjectivities (i.e., Facebook) and objectivities (academic and scientific sites).
“The world is mostly made a better place by digital spaces, though new terrors and violence come with them, too. This has been so ever since we scrawled on cave walls and invented the phonetic alphabet and the printing press. It’s been a millennia-long ride on the asymptote, up towards technologically mediated telepathy. Neuralink is just the latest, most-explicit manifestation of what’s always been implicit in the evolution of communication technologies.
“So, to answer the question at hand: I believe leaders, politicians and governments can do more to civilize the digital commons and regulate our behaviors in them, make the Wild West into a national park or theme park, but I both a) despair of them having the wisdom to do so, and b) sort of hope they don’t. I say a) because I don’t trust their wisdom beyond self-interest and ideology. I say b) because I believe the attempt is likely to do more damage to liberties in the short run up to 2035.
“In the long run, the digital commons, the virtual world – like the meatworld – will get better. It will be a healthier, safer, better, saner space. Sneakers, air conditioning, food, vaccines, and knowledge and education available for everyone, though unevenly. It is always already, and will continue to be, a war with plenty of Doomsday scenarios ready to write. But the future is bright. And with the help of the digital commons, we’ll get there.”
Howard Rheingold, a pioneering sociologist who was one of the first to explore the early diffusion and impact of the internet, responded, “When I wrote ‘The Virtual Community’ (published in 1993), I felt that the most important question to ask about what was not yet known as ‘social media’ was whether the widespread use of computer-mediated communication would strengthen or weaken democracy, increase or decrease the health of the public sphere. Although many good and vital functions continue to be served by internet communications, I am far from sanguine about the health of the public sphere now and in the future.
“My two most important concerns are the amplification of the discourse of bad actors and the emergence and continuing evolution of computational propaganda (using tools like Facebook’s ability to segment the population according to their beliefs to deliver microtargeted misinformation to very large numbers of people). If the rising tide of internet communications lifts all boats (by enabling like-minded people to meet, communicate and organize), it lifts both the hospital ships and the pirate ships, the altruists and the fascists.
“Misinformation and disinformation about the Covid-19 epidemic have already contributed to mass deaths. Flat-earthers, QAnon cultists, racists, antisemites, vandals and hackers are growing in numbers and capabilities, and I see no effort of equivalent scale from governments and private parties to counter them.
“Facebook is the worst, and unless it dies, it will never get better, because Facebook’s business model of selling to advertisers microtargeted access to large, finely segmented populations is exactly the tool used by bad actors to disseminate misinformation and disinformation. I have called for the increased creation and use of smaller communities, either general-purpose or specialized (e.g., patient and caregiver support groups, to name just one example of many beneficial uses of social media).”
Jay Owens, a research and innovation consultant with New River Insight, wrote, “You ask, ‘What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces do you think are so entrenched…’ The entrenched issue here isn’t ‘human nature’ or technology or regulation – it’s capitalism. Unless we overthrow it prior to 2035, digital spaces will continue to be owned and controlled by profit-seeking companies who will claim they’re legally bound to spend as little as possible on ‘serving the public good’ – because it detracts from shareholder returns.
“The growth of Chinese social media companies in Western markets will mean there are firms driven by more than purely for-profit impulses, yes – but the vision of ‘good’ that they are required to serve is that of the Chinese state. Theirs is not a model of ‘public good’ that either speaks to Western publics or indeed Western ideas of ‘good.’ I retain faith in individual users’ capacity for improvisation, bricolage, resistance, creative re-use and re-interpretation. I do not think this will grow substantially from today – but it will remain a continuing contrapuntal thread.”
Ellery Biddle, projects director at Ranking Digital Rights, commented, “I am encouraged by the degree to which policymakers and influential voices in academia and civil society have woken up to the inequities and harms that exist in digital space. But the overwhelming feeling as I look ahead is one of dread. There are three major things that worry me.
“1) Digital space has been colonized (see Ulises Mejias and Nick Couldry’s definition of data colonialism) by a handful of mega companies (Google, Facebook, Amazon) and a much broader industry of players that trade on people’s behavioral data. Despite some positive steps toward establishing data-protection regimes (mainly in the EU), this genie is out of the bottle now and the profits that this industry reaps may be too enormous for it to change course any time soon. This could happen someday, but not as soon as 2035.
“2) While the public is much more cognizant of the harms that major social media platforms can enable through algorithmic content moderation that can supercharge the spread of things like disinformation and hate speech online, the solutions to this problem are far from clear. Right now, three major regimes in the Global South (Brazil, India and Nigeria) are considering legislation that would limit the degree to which companies can moderate their own content. Companies that want to stay competitive and continue collecting and profiting from user data will comply, and this may drive us to a place where platforms are even more riddled with harmful material than in the past and where government leaders dominate the discourse. The scale of social platforms like Facebook and Twitter is far too large – we need to work towards a more diverse global ecosystem of social platforms, but this may necessitate the fall of the giants. I don’t see this happening before 2035.
“3) Although the pandemic has laid bare the inequities and inequalities derived from access to digital technologies, it is difficult to imagine our current global internet infrastructure (to say nothing of the U.S. context) morphing into something more equitable any time soon.”
Hans Klein, associate professor of public policy at Georgia Tech, responded, “The U.S. has a problem: ‘state autonomy.’ Its military and foreign policy establishments (‘the state’) are only imperfectly under civilian/democratic control. The American public is not committed to forever wars in the Middle East, confrontation with Russia and China, or deindustrialization through global trade, but no matter whom the citizens elect, the policies hardly change.
“Elections – the will of the people – have remarkably little effect on policy. Policies arguably do not represent the will of the people. The state is autonomous of the citizens. Large media corporations play an important role in enabling such state autonomy. The media corporations repeat and amplify policy-makers’ narratives, with little criticism. They report on select issues while ignoring others and frame issues in ways that reinforce the status quo. So, in 2003 we heard endlessly about weapons of mass destruction but nothing about antiwar protests. In 2020 we heard endlessly about protests but nothing about people of color suffering from violent crime.
“What we call the ‘public sphere’ might better be called the narrative sphere. Citizens are enclosed in a state-corporate narrative sphere that tells them what to think and what to feel. Media corporations’ control of this narrative sphere is essential to state autonomy, because the narratives shape facts in ways that support the autonomy of policy makers.
“Around 2010 a revolution occurred: social media punctured the corporate narrative sphere. Alongside the narrative sphere there appeared a public sphere, in which the voices of people could be heard. This new social-media-enabled public sphere led to political movements on the left and the right. On the left, Bernie Sanders criticized state and especially corporate power. He focused citizens’ attention upwards to the power structure. On the right, Donald Trump did something literally unthinkable prior to social media: he ran on an anti-war platform. Sanders was contained by his party, but Trump broke his party, won the nomination and won the election.
“This new, social-media-enabled public sphere is often crude, and the voices it empowers may be both constructive and destructive. Donald Trump manifested that. Those who could see beyond his personal style saw an elected official who finally raised important questions of war and peace, work and justice. The autonomy of the state was named and criticized (colorfully, as a ‘swamp’). Social media made it possible for such issues – perhaps the most important issues facing American society – to be publicly raised.
“Social media empowered the public. Therefore, social media had to be brought back under control. Following the election of such a critic of state autonomy, both the state and the corporate media have sharply attacked the social media that made his election possible. The corporate-created narrative sphere doubled down to inform the American public that the bad voices in social media are all there is.
“The power structure is working hard to demonize social media and the public sphere. Voices like those of Peter Pomerantsev, who has made a career of promoting cold war with Russia, are given outlet in state-quoting corporate media like The Atlantic. The public is being silenced. Looking ahead to 2035, it seems possible that the social-media-enabled public sphere will merely be a memory.
“Digital spaces and people’s use of them will be safely bounded by the understandings disseminated by the state. The wars will be good wars, and there will be no stories about people losing their livelihood to workers in Bangladesh. Perhaps the greatest challenge of our time is to prevent such a suppression of the social-media-enabled public sphere. Citizens on both the left and the right have a powerful interest in making sure that social media survives to 2035.”
Kenneth A. Grady, a lawyer and consultant working to transform the legal industry, said, “Could digital spaces and digital life be substantially better by 2035? Of course. But present circumstances and the foreseeable future suggest otherwise. For them to become substantially better, we need consensus on what ‘substantially better’ means. We need changes in laws, customs, and practices aimed at realizing that consensus position. And we need time.
“At present, we have a gridlocked society with very different ideas of where digital space and digital life should be. These ideas reflect, in part, the different ideas we see in other areas of society on cultural issues. If we look back roughly 15 years at where things were, we can see that reaching a consensus (or something close to it) over the next 15 years seems unlikely. Without a consensus, changes to laws, customs, and practices will fall over a spectrum rather than be concentrated in one direction. As a society, this reflects how we work out our collective thoughts and direction. We go a bit in one direction, course correct, move a bit in another direction, and continue the process over time.
“Will 15 years be enough time to reach a substantially better position for digital spaces and digital life? I doubt it. Inertia, vested capital interests and the lack of consensus I mentioned mean that the give-and-take process will take longer. We may make progress toward ‘better,’ but to get to ‘substantially better’ will take longer and require a less-divisive society.”
Greg Sherwin, a leader in digital experimentation with Singularity University who earlier engineered many startups, including CNET and LearnStreet, said, “Technology cannot remove the human from the human. And while the higher bandwidth capabilities of some digital spaces stand to improve empathy and connection, these can be just as easily employed for negative social outcomes. As it is now, most of the leadership behind the evolution of digital spaces is weighted heavily towards those with a reductionist, linear view of humans and society. As long as humans are treated as round pegs forced to fit into the square holes in the mental models of the greatest technological influencers of digital spaces, negative side-effects will accumulate with scale and users forced into binary states will react in binary conflicts by design.”
Gus Hosein, executive director of Privacy International, commented, “The world is messy and diverse. And, yes, digital spaces are in turn also messy. They were supposed to be diverse. Except we have the sense of order established by platforms – and that’s the problem. To exist, the platforms work to gamify behaviour, promote consumption and ensure that people continue to participate.
“While much could be said of old media, they weren’t capable of making people behave differently in order to consume. And so, we have a small number of fora where this is taking place, and they dominate and shape behaviour. To minimise this, we would have to promote diversity of experience. (That is, if minimising it is what we want: to be fair, the benefit of the current arrangement is a limited set of private-public spheres, and we may deem that a good thing rather than a fully diverse society where there are few common platforms on which to meet.)
“Yes, we could promote alternative platforms but that hardly ever works. We could open infrastructure, but someone would still have to build and take responsibility for and secure it. The fact that alternative fora have all failed is possibly because singular fora weren’t ever supposed to be a thing in a diverse digital world that was to reflect the diversity in the world.
“The platforms need users to justify their financial existence. So that’s why they need to do all those negative things: shape behaviour, promote engagement, ensure consumption. If they didn’t, then they wouldn’t exist. So, potentially, a promotion of diversity of experience that isn’t mediated by companies that need to benefit from human interaction – maybe that’s the objective. And that means we will have to be OK that there are fora where people are nearly solely up to ‘bad things’ because the alternative is fewer fora that replicate the uniformity of the current platforms.”
Henning Schulzrinne, an Internet Hall of Fame member and former CTO for the Federal Communications Commission, wrote, “One likely path seems to be bifurcation: Some subset of people will choose fact-based, civil and constructive spaces; others will be attracted to or guided to conspiratorial, hostile and destructive spaces. Indeed, they may well exist on the same platform. For quite a few people, Facebook is a perfectly nice way to discuss culture, hobbies, family events or ask questions about travel – and even to disagree, politely, on matters political. But others will be drawn to darker spaces defined by misinformation, hate and fear.
“All major platforms could make the ‘nicer’ version the easier choice. For example, I should be able to choose to see only publications or social-media posts that rely on fact-checked, responsible publications. I should be able to avoid posts by people who have earned a reputation of offering low-quality contributions, i.e., trolls, without having to block each person individually. This might also function as the equivalent of self-exclusion in gambling establishments. (I suspect grown children or spouses of people falling into the vortex of conspiracy theories would want such an option, but that appears likely to be difficult to implement short of having power of attorney.)
“All social media platforms have options other than a binary block-or-distribute, such as limiting distribution or forwarding. This might, in particular, be applied to accounts that are unverified. There are now numerous address-verification systems that could be used to ensure that a person is indeed, say, living in the United States rather than at a troll farm in Ukraine.”
Ian Peter, Australian internet pioneer, futurist and consultant, commented, “Monetisation of the digital space seems to be a permanent feature and there seems to be no mechanism via which concerned entities can address this. The reality is that most nation-states are far less powerful than the digital giants, and their efforts to control them have to be watered down to a point where they are often ineffective. There is no easy answer to this problem with the existing world order.”
Dan Pelegero, a consultant based in California, responded, “The issues around the governance of our digital spaces do not have to do with technology; they have to do with policy and how we, as people, interact. Our bureaucracies have moved too slowly to keep up with the pace of communication changes. Regulation of these spaces is predominantly a volunteer-led effort or remains a low-compensation, high-labor activity. If the approach towards making our digital spaces better is either profit-driven or compliance-driven, without any other motivators, then the economics of our digital spaces will only make life better for the owners of platforms and not the users.”
Brooke Foucault Welles, an associate professor of communication studies at Northeastern University whose research has focused on ways in which online communication networks enable and constrain behavior, commented, “I think it is possible for online spaces to change in ways that significantly improve the public good. However, current trends conspire to make that unlikely to happen, including:
- An emphasis in law and policymaking that focuses on individual autonomy and privacy, rather than systemic issues: I think many policymakers are well intended when they propose individual-level protections and responses to particular issues. As a network scientist, I know these protections may stem the harm for individual people, but they will never root out the problems. For example, privacy concerns are (as a matter of policy or practice) often dealt with by allowing individuals to opt out of tracking or sharing identifying information. However, we do not need to know about very many individuals to accurately infer information about everyone in the network. So, it is my sense that these policies make people *feel* as if they are protected when they are probably not protected well at all. There should be a shift towards laws and policies that de-incentivize harms to individual autonomy and privacy. For example, I could imagine laws that prevent micro-targeting, instead allowing targeted advertising to segments no larger than some anonymity-preserving size (maybe 10,000 people).
- Persistent inequalities in the training, recruitment and retention of diverse developers and tech leaders: This has been a problem for at least 30 years, with virtually no improvement. While there has been some public rumbling of late, I see few trends that make me believe tech companies or universities are seriously committed to change. It does not help that many tech companies are, as a matter of policy, not contributing to a tax base that might be used to improve public education, community outreach, and/or research investments that might move the needle on this issue.
- The increasing privatization of research funding and public-interest data make it virtually impossible to monitor and/or intervene in platform-based issues of public harm or public good. We frankly have no idea how to avoid algorithmic bias, introduce community-building features, handle the deleterious effects of disinformation, etc., because there is no viable way for objective parties to study and test interventions.
- Last, the current consolidation of media industries (including new media industries) leaves little room for alternatives. This is an unstable media ecosystem and unlikely to allow for (much less incentivize) major shifts towards the public good. There is, by fiduciary duty, little room for massive, consolidated media companies to serve the public good over the interests of their investors.”
Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University, wrote, “Internet technology, like all technology, is imbued with neither inherent goodness nor evilness. Human designers and engineers make a series of choices about how technology will work, what behaviors will be allowed, what behaviors will not be allowed and hundreds of other basic decisions which are baked into technology and are often opaque to users. Then human users take that technology and use it in myriad ways, some helpful, some harmful, some neutral.
“Governments and regulatory groups can require certain features in technology, but ultimately have great difficulty in controlling technology. That’s why we spend time thinking about ethical decisions and teaching folks how to incorporate ethics into decision making, so individuals and companies and governments can consider more carefully the effect of technology on humans.
“Technology could be designed to promote the common good and human well-being, but this is a decision each organization must make in regard to what it produces. Whether or not to promote the common good and human well-being is also a decision each citizen must make each time they use any technology.”
Rick Doner, emeritus professor, wrote, “My concern is that, as so often happens with innovation/technology, changes in the ‘marketplace’ – whether financial, commercial or informational – outpace the institutions that theoretically operate to direct and/or constrain the impact of such innovations. I view digital developments almost as a sort of resource curse. There are, to be sure, lots of differences, but we know that plentiful, lucrative natural resource endowments tend to be highly destructive of social stability and equity when they emerge in the absence of ‘governance’ institutions, and here I’m talking not just about formal rules of government but also institutions of representation and accountability. And we now have a vicious cycle in which the digital innovations are undermining both the existing institutions (including informal trust) and the potential for stronger institutions down the road.”
Raashi Saxena, project officer at The IO Foundation and scientific committee member at We, the Internet, wrote, “When we look at digital spaces and services offered in Asia, they are primarily run by the private and government sectors. The intimate connection between users and their data is traditionally disregarded. Data requires full contextualization to be of any use and by virtue of it becomes intrinsically connected to the user; so much so that severing such connection results in data losing all meaning and value. To a major or minor degree, this is something data-protection laws have attempted to protect. The lines between physical and digital are blurred. If digital infrastructure fails, the consequences will be the same as a failure of the physical infrastructure even in 2035.
“Governments are not closing the loop when it comes to tech policies by not offering infrastructures that implement them. Examples of how this is possible can be found in corporate tech: Apple can enforce its policy (their licensing business model) in their digital assets such as music because they have implemented their own infrastructure that takes care of it. The same degree of protection should be provided to citizens as their sharing of data does not follow a different model from a technical perspective. In essence, they are licensing their personal data.
“The underlying problem, however, is that we do not have a global, agreed-upon list of digital harms, that is, harms that can be inflicted upon the data that models all of us. In order to implement public infrastructures that foster meaningful connectivity, philanthropies should pursue the core principle of ‘Rights by Design.’
“We first need to catalog and collectively agree on a common definition of digital harms so that we can proceed to define the Rights to be protected. The areas of work for them should be around digital governance, sustainability and capital to promote the rise of other stakeholder groups that can sustain, scale and grow. Supporting projects to implement research-informed best practices for conflict zones and sparsely populated terrains should be the highest priority, since access to information and communication can constitute a critical step in the defense of the territories of these communities.
“With this in mind, we need to move towards defining technical standards that will protect citizens’ data in digital spaces from harm. One such initiative from The IO Foundation is the Universal Declaration of Digital Rights, which would act as a technical reference for technologists, whom we identify as the next generation of rights defenders, so that technology is designed and implemented in a way that proactively protects citizens.”
Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, commented, “Currently, our entire social media environment is deliberately engineered throughout to promote fear, hatred, division, personal attacks, etc., and to discourage thought, nuance, compromise, forgiveness, etc. And here I don’t mean the current moral panic over ‘algorithms,’ which, contrary to hype, I would say are a relatively minor aspect of the structural issues. Rather, the problem is ‘business models.’
“Fundamentally, the simplest path of status-seeking in one’s tribe is treating opponents with sneering trashing, inflammatory mischaracterization or even outright lying. That’s quick and easy, while people who merely even take a little time to investigate and think about an issue will tend to find themselves drowned out by the outrage-mongering, or too late to even try to affect the mob reaction (or perhaps risking attack themselves as disloyal). These aren’t original, or even particularly novel, observations. But they do imply that the problems have no simple technical fix in terms of promoting good information over bad or banning individual malefactors. Instead, there has to be an entire system of rewarding the creation of good information and not bad. And I’m well aware that’s easier said than done.
“This is a massive philosophical problem. But if one believes there is a distinction between the ‘public interest’ (truth) versus ‘what interests the public’ (popularity), having more of the former rather than the latter is not ever going to be accomplished by getting together the loudest screamers and putting advertising in the pauses of the screaming.
“I want to stress how much the ‘algorithms’ critique here is mostly a diversion in my view. ‘If it bleeds, it leads’ is a venerable media algorithm, not just recently invented. There has been a decades-long political project aimed at tearing down civic institutions that produce public goods and replacing them with privatized versions that optimize for profits for the owners. We can’t remedy the intrinsic failures by trying to suppress the worst and most obvious individual examples which arise out of systemic pathology.
“I should note even in the most dictatorial of countries, one can still find little islets of beauty – artists who have managed to find a niche, scientists doing amazing work, intellectuals who manage to speak out yet survive and so on. There’s a whole genre of these stories, praising the resilience of the human spirit in the face of adversity. But I’ve never found these tales as inspiring as others do, as they’re isolated cherry-picking in an overall hellscape.”
Sam Punnett, retired owner of FAD Research, said, “It’s difficult to read a book such as Nicole Perlroth’s ‘This is How They Tell Me the World Ends’ and then think we are not doomed. It’s like trying to negotiate a mutually-assured-destruction model with several dozen nation-states holding weapons of mass destruction. I’d guess many Western legislators aren’t even aware of the scope of the problem. Any concerns about social media and consumer information are trivial compared to the threats that exist for intellectual property and intelligence theft and damage to infrastructure.”
Rob Frieden, retired professor of telecommunications and law at Penn State University, responded, “While not fitting into the technology determinist, optimist or pessimist camps, on balance I worry that the internet ecosystem will generate more harms than benefits. There is too much fame, fortune, power, etc., to gain in overreach in lieu of prudence. The need to generate ever-growing revenues, enhance shareholder value and pad bonuses/stock options creates incentives for more data mining and for pushing the envelope negatively on matters of privacy, data security and corporate responsibility. While I am quite leery of government regulation, the almost libertarian deference shown to these companies facilitates the overreach.”
Sonia Livingstone, a professor of social psychology and former head of the media and communications department at the London School of Economics and Political Science, wrote, “Governments struggle to regulate and manage the power of platforms and the data ecology in ways that serve the public interest, while commerce continues to outwit governments and regulators in ways that undermine human rights and leave the public playing catch-up. Unless society can ensure that tech is ethical and subject to oversight, compliance and remedy, things will get worse. I retain my faith in the human spirit, so some things will improve, but they can’t win against the power of platforms.”
Randall Gellens, director at Core Technology Consulting, observed, “We have ample evidence that significant numbers of humans are inherently susceptible to demagogues and sociopaths. Better education, especially honest teaching of history and effective critical-thinking skills, could mitigate this to some degree, but those who benefit from this will fight such education efforts, as they have, and I don’t see how modern, pluralistic societies can summon the political courage to overcome this.
“I see digital communications turbocharging those aspects of social interaction and human nature that are exploited by those who seek power and financial gain, such as groupthink, longing for simplicity and certainty, and wanting to be part of something big and important. Digital media enhances the environment of immersion and belonging that, for example, cults use to entrap followers. Digital communications, even such primitive tools as Usenet groups and mailing lists, lower social inhibitions to bad behavior. The concept of trolling, for example, in which people, as individuals or as part of a group, indulge in purely negative behavior, arose with early digital communications.
“It may be the lack of face-to-face, in-person stimuli or other factors, but the effect is very real. During the pandemic shutdown of in-person activities, digital replacements were often targeted for attack and harassment. For example, some school classes, city council meetings, addiction and mental health support groups were flooded with hate speech and pornography.
“Access controls can help in some cases (e.g., school classes) but are inimical in many others (e.g., city council meetings, support groups). Throughout history and in recent years, dictators have shown how to use democracy against itself. Exploiting inherent human traits, they get elected and then consolidate their power and neutralize institutions and opposition, leaving the facade of a functioning democracy. Digital communications enhance the effectiveness of the mechanisms and tools long used for this. It’s hard to see how profit-driven companies can be incentivized to counter these forces.”
William L. Schrader, board member and advisor to CEOs, previously co-founder of PSINet Inc., said, “Democracy is under attack, now and for the next decade, with the help and strong support of all digital spaces, not just Fox News. The basic problem is ignorance (lack of education), racism (anti-fill-in-the-blank) and the predilection of some segments of society to listen to conspiracy theories A through Z and believe them (stupid? or just a choice they make due to bias or racism?). To quote the movie ‘The Hunt for Red October,’ ‘We will be lucky to live through this’ means more now than before the 2016 U.S. election.
“I think things could change for the better but not likely before 2035. The delay is due to the sheer momentum of the social injustice we have seen since humankind populated the earth. That plus the economic- and life-extinguishing climate change that has pitted science against big money, rich against poor, and, eventually, the low-land countries against the highlanders. Hell is coming and it’s coming fast with climate change. Politics will have no effect on the climate but will on the money. The rich get richer, and the poor get meaner. Riots are coming and not just at the U.S. Capitol. Meanwhile, the digital space will remain alive and secure in parts and insecure mostly. It will assist and not fully replace the traditional media.”
Ivan R. Mendez, a writer and editor based in Venezuela, said, “The largest danger is no longer the digital divide (which still exists and is wider in 2021, after the pandemic), but the further conversion of the public into large digital herds easily packaged and sold as a marketing tool. The evolution of digital spaces into commercialized platforms poses new challenges. The arrival of agile big-tech players with proposals that connect quickly with the masses (who are then converted into customers) gives them a large amount of influence in governments’ internet governance discussions …
“Other important internet stakeholders – entities that have been entrusted with representing the internet ecosystem and working for the betterment of networks through organized cross-sector discussions, such as the Internet Governance Forum (IGF) – have not gained enough authority in the governance discussions of governments; they are not given any input and have not been allowed to participate in or influence global or nation-state digital diplomacy. This fragmentation is what makes it possible for me to visualize that in 13 years digital spaces will be neither better nor dramatically worse.”
Sean Mead, strategic lead at Ansuz Strategy, commented, “Twitter exists on and is programmed to reward hate, intolerance, dehumanization, libel and performative outrage. It is the cesspool that most clearly demonstrates the monetization of corruption. Many people sought out addiction to strawman mischaracterizations of other people who hold any beliefs that are in any way different from their own. Why have a ‘two-minute hate,’ when you can have a full day of hating and self-righteousness every day, whether its justifications have a basis in reality or not?
“Algorithms are encouraging indulgence of these hate trips since doing so creates more time for the participants to be exposed to advertising. The social media oligarchy has been behaving not like platforms but – in violation of the intent of Section 230 – like publishers, promoting some views and disappearing others. Treating them as publishers, since they are behaving as publishers, would force quite an improvement in community behavior, particularly in regard to libel. Many businesses may choose to move to a more-controlled network where participants are tied to a verified ID and anonymity is removed. That would not remove all issues, but it would dampen much problematic behavior.”
Ian O’Byrne, an assistant professor of Literacy Education at the College of Charleston, said, “As technology advances, there will continue to be positives and negatives that impact the ways in which the internet and communication technologies impact the lives of average users. One of the biggest challenges is that the systems and algorithms that control these digital spaces have largely become unintelligible. For the most part, the decisions that are made in our apps and platforms are only fully understood by a handful of individuals.
“As machine learning continues to advance, and corporations rely on AI to make decisions, these processes will become even less understood by the developers in control, let alone the average user interacting in these spaces. This negatively impacts users, as we do not fully understand the forces that impact our digital lives or the data that is collected and aggregated about us. As a result, individuals use these texts, tools and spaces without fully understanding or questioning the decisions made or being made therein. The end result is a populace that does not possess, or chooses not to employ, the basic skills and responsibilities needed to engage in digital spaces. I fear that tech leaders and politicians will view data collection and opportunities to influence or mislead citizens as a valuable commodity.
“Digital spaces provide a way to connect and unite communities from a variety of ideological strains. Online social spaces also provide an opportunity to fine-tune propaganda to sway the population in specific contexts. As we study human development and awareness, this intersects with ontology and epistemology. When technologies advance, humans are forced to reconcile their existing understandings of the world with the moral and practical implications said technologies can (or should) have in their lives. In the post-Patriot Act era – and in light of Edward Snowden’s National Security Agency whistleblowing – this also begets a need to understand the role of web literacies as a means of empowering or restricting the livelihood of others.
“Clashes over privacy, security and identity can have a chilling impact on individual willingness to share, create and connect using open, digital tools, and we need to consider how our recommendations for the future are inevitably shaped by worries and celebrations of the moment. In the end, I think most users will surrender to these digital, social spaces and all of their positive and negative affordances. There will be a small subset that chooses to educate themselves and use digital tools in a way that they believe will safely allow them to connect while obfuscating their identity and related metadata.
“Our world is increasingly dictated by algorithms. These complex algorithms drive, direct and govern children’s experiences but have not been constructed with their needs and interests in mind. Children represent an especially marginalized and vulnerable population who are exposed to high levels of poverty and inequality while being dependent on adults to structure their experiences and opportunities. Big tech and policymakers have a responsibility to consider the rights and needs of children. Instead, the burden is most often placed on families, educators and community leaders to understand, support, guide and regulate children’s access to media, information and social connection.
“Children live in and shape a connected world where they have the ability to consume and create literally at their fingertips. We need to prepare them to be lifelong learners with the skills they need to access, analyze, evaluate, create and participate through digital technologies. Youth must also navigate the realities of a digital world in which every time they log into an app on a device they are using at school, they leave a data trail. They engage in the affordances of digital technologies often at the price of their privacy. At the same time, we know that developing digital literacy includes the understanding that algorithms drive users to particular content. Children’s worldviews can be limited by geofencing and other algorithmic tools that are driven by for-profit purposes.
“Even with all of these challenges, I am hopeful for the future of digital life for the average user in 2035 because of what I’m seeing as youth interact online. While adults seemingly do not understand how to effectively and critically use these texts and tools, in many ways youth are shown to be thoughtful, perhaps skeptical, users of tools and spaces. As youth leverage digital texts to restore their narratives, or engage in activist practices, they are documenting strategies to engage with algorithms and drive offline policies and behaviors. The hope is that we can protect them long enough for them to more deeply develop as critical and aware consumers and creators in digital spaces.”
Steven Livingston, founding director of the Institute for Data, Democracy and Politics at George Washington University, commented, “Narratives about technology tend to run hot or cold: ‘It is all terrific and a new democratic dawn is breaking!’ Or… ‘Technology is ushering in a dystopian nightmare!’ Both outcomes are possible. With the former, Western scholars tend to ignore or be unaware of digital network effects in the developing world that have a positive effect. This would include M-Pesa in Kenya and the entire array of ICT4D (information and communication technologies for development) applications.
“I wrote an article several years ago about the positive effects of crowdsourced elections monitoring in Nigeria. In another publication I came up with a whopper example of academic jargon to describe this: Digitally-enabled collective action in areas of limited statehood. Positive human intentions have been made actionable by the lower transaction costs in digital space. Another example of positive outcomes is found in the work of online information sites such as Bellingcat, Forensic Architecture, and The New York Times Visual Investigations Unit headed by Malachy Browne. We know things about war crimes and other horrific events because of the digital breadcrumbs left behind that are gathered and analyzed by people and organizations such as these.
“On the other hand, where human intentions are less laudable these same affordances are used to erode confidence in institutions, spread disinformation and make the lives of others miserable. The kicker here is that digital phenomena such as QAnon are seen and understood by participants – at least many of them – as doing good. After all, they are in a fight against evil, just as Forensic Architecture is out to expose war criminals. We end up judging the goodness and harmfulness of these two movements according to our own value structures. Is there some external position that allows us to determine which is misguided and which is God’s work? I believe there is. QAnon is no Forensic Architecture.”
Bill Woodcock, executive director at the Packet Clearing House, wrote, “For the internet’s first 40 years, digital spaces and the conversations they engender were largely defined by individual interaction, real conversation between real people. In the past 10 or 15 years, though, we’ve moved away from humans talking with humans to machine-intermediated and machine-dominated ‘conversation’ which exists principally to exploit human psychological weaknesses and direct human behavior. This is the ‘attention economy,’ in which bots interact with people or decide what people will see in order to guide them toward predetermined or profitable outcomes. This is destroying civic discourse, destroying the fundamental underpinnings of democracy and undermining the human intellectual processes that we think of as ‘free will.’ It’s not clear to me that any of the countervailing efforts will prevail, though 2035 is a long time from now, and I am irrationally optimistic.”
Steve Jones, co-founder of the Association of Internet Researchers and distinguished professor of communication at the University of Illinois-Chicago, said, “Digital spaces reflect analog spaces; that is, they are not separate from the pressures and tensions of social, political, economic, etc., human life. It is not so much that digital spaces (which beg defining, by the way) are ‘entrenched’ as that they will evolve in ways that are unpredictable but will predictably track social and political evolution regionally.”
Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago, observed, “We enter space with our baggage – there is no check-in counter online, where we enter and get to leave that baggage behind. This baggage includes toxicity. Toxicity is a human attribute, not an element inherent to digital life. Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”
Toby Shulruff, senior technology safety specialist at the National Network to End Domestic Violence, wrote, “Digital spaces are the product of the interplay between social and technical forces. From the social side, the harms we’re seeing in terms of harassment, hate and misinformation are driven by social dynamics and actors that predate digital spaces. However, those dynamics are accelerated and amplified by technology.
“While a doctrine of hate (whether racialized, gendered or along another line) might have had a smaller audience on the fringe in previous decades, social media in particular among digital spaces has been pouring fuel on the flames, attracting a wider audience and disseminating a much higher volume of content.
“On the technological side, the business models and design strategies for digital spaces have privileged content that generates a reaction (whether positive or negative) at a rapid pace. This discourages thoughtful reflection, fact-checking and respectful discourse. Legal and regulatory frameworks have not kept pace with the rapid emergence of digital spaces and the platforms that host them, leaving policymakers without adequate assessment tools or useful options for governance.
“Digital spaces are accelerating existing, complex, deeply entrenched inequalities of access and power rather than shaping more pro-social, respectful, cooperative forms of social interaction. In sum, these trends lead me to a pessimistic outlook on the quality of digital spaces in 2035. I do think that a combination of shifts in social attitudes, wider acceptance of concepts of equality and human rights, dissemination of more cooperative and respectful ways of relating with each other in person, and a deliberate redesign of digital spaces to promote pro-social behavior and add friction and dissuasion of hateful and violent behavior holds a possibility for improving not only digital spaces, but human interaction IRL (in real life).”
Richard Barke, an associate professor in the School of Public Policy at Georgia Tech, wrote, “Communications media – book publishers and authors, newspaper editors, broadcast stations – have always been shaped by financial forces. But for most of our history there have been delays between the gathering of news or the production of opinions and the dissemination of that information. Those delays have allowed (at least sometimes) for careful reflection: Is this true? Is this helpful? Can I defend it?
“Digital life provides almost no delay. There is little time for reflection or self-criticism, and great amounts of money can be made by promulgating ideas that are untrue, cruel or harmful to people and societies. I see little prospect that businesses, individuals, or government have the will and the capacity to change this … The meme about shouting fire in a crowded theater might become a historical relic; there is a market for selling untruths and panics, even if they cross or skirt the line between protected speech and provocation.
“To alter this, laws and regulations might be tried but these change much more slowly than digital technologies and business practices. (Policies have always lagged technologies, but the speed of change is much greater now.) And many possible legal remedies are likely to confront conflicting interpretations of constitutional rights.”
Robert Bell, co-founder of Intelligent Community Forum, said, “We will eventually adapt to use digital spaces in more positive ways. I don’t expect the solution to be technological but in human behavior, as more people have negative experiences with false information, misleading advice and the general panic level of concern that digital spaces seek to generate. As long as providers can make big profits from the Dumpster Fire, I don’t expect them to change. But people will evolve, and that takes much more time than just a few years.”
Eugene H. Spafford, leading computer security expert and professor of computer science at Purdue University, responded, “Balkanization due to politics and ideology will still create islands of belief and information in 2035. Some will embrace knowledge and sharing, but too many will represent slanted and restricted views that add to polarization. Material in many of these spaces will be viewed (correctly or not) as false by people whose beliefs are not aligned with it. Governments will be further challenged by these polarized communities in regulating issues of health, finance and crime. The digital divides will likely grow between the haves and have-nots in regard to access to information and resources. Trans-border propaganda and crime will be a major problem.”
Deana Rohlinger, professor of sociology at Florida State University focused on media, digital participation and politics, commented, “Like many other issues, the problems with Big Tech are ones that politicians seem unlikely to really address. When those in charge spend more time engaged in partisan bickering than governing, it is nearly impossible to believe that we will see the kind of regulatory changes necessary to incentivize tech companies beyond the current profit motive. My assumption is that companies cannot (or will not) regulate themselves well or indefinitely. The scale is tipped to profit-making, and money inevitably influences decision-making.
“There has also been another troubling trend that makes it difficult to believe things will change for the better: the increasingly blurred lines between party politics, social movements, think tanks and partisan media. This development has kept pace with technological innovation and is the furthest along on the right. Conservative groups have invested resources in an array of off- and online platforms that reinforce a narrative about a radical left poised to destroy America … Conservative think tanks were not only able to publicize their ideas online but also to amplify the messages of other conservative groups, including specialized and sometimes more local conservative groups/individuals that were trying to disseminate their ideas and calls to action to a much larger audience.
“While participation in institutional politics across the political spectrum is desirable, these developments portend trouble for liberal democracy. The line between the conservative movement and the Republican Party is no longer clear. While, arguably, movements and political parties always have had a marriage of convenience, it typically has been clear how citizens can work in- and outside of the electoral system to pressure politicians and parties to affect political processes. Now that the lines between the two are blurred, it will be harder for the state to maintain its legitimacy and harder for conservatives concerned about this development to effectively challenge the trajectory of the modern movement.
“Equally concerning, liberal groups and politicians are starting to mimic strategies associated with the conservative movement, which will further polarize politics in the U.S. There is no reason to believe that tech companies or politicians have a serious interest in reversing these trends.”
Czesław Mesjasz, associate professor at Cracow University of Economics, Poland, wrote, “The survey question is too broad. How can we weigh a social phenomenon of such broad scope with a simple positive vs. negative? It is a typical dualistic (dialectical, dyadic, paradoxical) situation. Yes, there are some natural phenomena, e.g., climate change, decreasing biodiversity, economic and social inequality, that can be directly assessed as negative. Human-created phenomena, especially in the intangible world, cannot be assessed at such a high level of generality. They are ambiguous due to fundamental ontological and epistemological causes.”
Mark Andrejevic, head of the Culture, Media and Economy Program at Australia’s Monash University and member of the NSF-funded Council for Big Data, Ethics and Society, responded, “Unless we re-invent a version of public-service media for the digital era, we are facing the prospect of the increasing privatization of our communication and information infrastructure. Public and private institutions alike have readily adopted commercial platforms as their go-to modes of internal and external communication. The result is likely to be the infiltration of commercial imperatives into all these modes of communication.
“Commercialization privileges a version of sociality that emphasizes personal and individual preferences while backgrounding our irreducible interdependence, which serves as the necessary basis for conceptions of shared and public interests. It exists in a profound tension with the forms of citizenship we need for a well-functioning democracy. The prospect of digital spaces substantially improving in the coming decades hangs on the very slim possibility of the development of a public-service alternative to the hyper-commercialization that drives the online economy.
“From the advent of the World Wide Web there has been, in both academic and journalistic realms, an attempt to treat the technology as somehow transcending the social relations in which it is embedded. The expectations that flowed from predictions about the promise of digital interactivity in the abstract have been sorely disappointed – as the recent backlash suggests.”
Serge Marelli, an IT security analyst based in Luxembourg, wrote: “1) Looking at the human-user privacy-protection side, I see few positive changes. Some laws have been passed and enacted in various places (the EU, elsewhere), but mostly big companies find ways to obey the letter of the law while respecting neither users nor their privacy. Where ‘user consent’ is required, many still do not give internet users a real possibility to object to and refuse privacy invasion. It’s now ‘accept and use our “service,” or get out.’ Looking at some news media sites, for instance, one finds up to 300+ different tracers, trackers, cookies and references from ‘third-party sites’ that offer absolutely no added value to internet users – but significant monetary value to the websites and so on. Humans and their lives are being monetized more and more; the concept of privacy is changing at best, disappearing at worst.
“2) Looking at social networks, the damage to society and democracy has been done, proven and still, there is absolutely no change in perception or approach either from their human victims or from lawmakers.
“3) Looking at technology, it still evolves, moves and progresses at a staggering pace. That is a fantastic and very positive thing in one way, as it offers tremendous opportunities for better, positive changes and unimaginable new ‘things’ – I cannot imagine what will exist in 15 years, just as few would have imagined the evolution and growth of smartphones 20 years ago. But we still have no way to influence these changes in a positive direction. New devices, inventions and services will come. Like any tools, they may be applied for good or for other uses. We may one day consider them as we consider guns in the U.S. societal debate about guns: ‘Guns are just a tool; it depends on who uses them.’ Technology is also ‘just a tool.’ In a way, we (humankind) are not mature enough to decide how to use the tools we create.”
Robert D. Atkinson, president of the Information Technology and Innovation Foundation, commented, “Digital spaces work pretty well now. I believe that new spaces will develop that people value and existing ones will get better. But I don’t believe massive change is needed, nor likely to happen. Modest reform and incremental improvement are likely to be the path.”
Riaz Tayob, a researcher for the Southern and East African Trade Institute in South Africa, said, “Control of the narrative to influence democracy on the internet will be the primary objective of the dominant U.S. It takes a great deal to control narratives in a dispersed media space.”
Andrea Romaoli, an international lawyer and expert in AI and a leader of the United Nations Global Compact, responded, “It’s not long before we reach 2035. It’s a short time for getting digital spaces and digital life to turn around significantly. Globally, the low adherence to human rights can be blamed on two primary reasons: 1) The economic interests of some entities surpass the value of human life. 2) Corruption works hard to spread distortions and undermine ethical and humanitarian principles; this weakens legal institutions and governance. Discrimination, non-inclusiveness, segregation, xenophobia and misogyny are rampant despite the efforts of leaders, politicians and public audiences.
“We have seen technological solutions delivering public services that disregard the poor, those who don’t have access to connectivity and those for whom prices and taxation are too expensive, and the services are of poor quality. Consequence: Consumers are being harmed and don’t have the strength or support to fight for their rights, which are weakened by political favoritism afforded to large and powerful groups.
“In addition, it is discriminatory and non-inclusive to deliver technological solutions that require modern devices such as mobile phones, notebooks and computers. It is increasingly expensive to update to modern systems and devices. During the pandemic, children around the world haven’t had access to education because they don’t have modern devices and cannot pay for the connection. Inclusive and non-discriminatory policies for vulnerable people are lacking or insufficient.
“Another challenge is that society is being manipulated by organized crime to be blind to online threats. Criminals persecute politicians, promote human trafficking, inspire corrupt governance, fuel terrorism, revive Nazi culture yet they cannot be stopped in the online world. Change may come through Transformational Governance, a resilient and sustainable process that emphasizes the one sovereignty that matters: the sovereignty of human life that values human rights in all spheres.”
Monica Murero, associate professor and director of the E-Life International Institute at the University of Naples, wrote, “I see the current situation both with positive and negative connotations. I see a dual future too, although I am pretty skeptical about it. I expect 1) disruptive changes in society as a consequence of AI/techno deployment and 2) unprecedented access to the public sphere and new online services. Millennials, Gen X and Gen Z will accelerate the introduction and adoption of new products and services. This will be a greatly surveilled society in which people may fear to express themselves. I do not see regulation making a great difference. There are many ways things can be changed for the better at the micro-level. I also expect that getting people organized around an idea or a project in the digital sphere will be more easily disrupted by 2035.”
Fernando Barrio, a senior lecturer in law and business at Queen Mary University of London, wrote, “If we take into account current trends, it will not only not change, but it will substantially worsen society’s situation. In the first half of the last century we witnessed the impact of mass manipulation on highly educated societies and the global impact that the manipulation can have. Now technology has taken that manipulation to a new level. We keep seeing standards being bent to conform to the views of the many while the views of the many can be influenced by others who present complex realities as binary situations. Therefore – unless there is a concerted effort to use the technology to create truly, not nominally, more open, diverse and inclusive societies – technology will be one of the reasons that society will be much worse off in 2035.
“CAN technology have a positive impact on the digital sphere? The answer is definitively yes if we were to act in such a way that the positive impact is materialised and augmented. Technology can be used to produce equitable results in different actions, not only to make it easier to add a label to any action and then replicate the label until enough people repeat it as true. The initiatives with the biggest impacts will be those that provide real data about issues that affect society and magnify the actions of those working to tackle them.
“Unfortunately, the aim of most technology companies is absurd profits – enough to fuel extravagant lifestyles and the accumulation of vast wealth for the sake of it, not simply enough to enjoy a luxurious lifestyle and invest in further developments. The positive impact that they could have becomes collateral to the absurd rent-seeking activities, and therefore it cannot be expected or taken for granted. The ‘greed is good’ of the 1980s became the ‘greed is everything’ of the 2010s and 2020s, with the addition of ‘and I have the means to convince you that it is true.’
“Also, with a political system in which the electorate faces choices between candidates who have gone through the process of becoming part of a structure where corporate and special interests have far more opportunities to express their views and further their interests – not true elections – it is difficult to see real change in the regulatory framework. That is exacerbated as more and more academics are funded by those corporations and special interests, so even the creation of the knowledge, data and theories that should inform the political process is co-opted by the same groups.
“All that is further enhanced by the overemphasis in education and training settings on technology (rooted in the belief that technological development is always good with stress on ‘always’), with the consequential neglect of an accompanying emphasis on the social sciences and humanities – the important sectors that deal with actual human and social development.
“As a metaphor: We keep investing in and pushing people to make better cars without giving attention to educating better drivers, and then we are surprised when there are more and more car crashes – to which the answer is to try to improve the car design while neglecting drivers’ training even more, resulting in an increasing number of accidents, and the tale goes on. Dramatic changes can happen, as individuals, groups and organisations can make a difference, but it might be random at best – just because that person, group or organisation decides to act when it has the opportunity – not because the system, as it stands, is geared toward much-needed positive change.”
Amy Gonzales, an associate professor of communication and information technologies at the University of California-Santa Barbara, wrote, “I believe that we can assume that technology generally has a ‘rich-get-richer’ effect, so those with the most resources benefit the most from technological innovation. In short, I suspect that technology may improve living standards across the board for a society even as it also contributes to greater socio-economic disparities.”
William Lai, a self-employed entrepreneur/business leader, responded, “Facebook and other social media/digital spaces are great at creating engagement and attention, and that will not change by 2035. Human beings are slaves to these algorithms, whether it is network TV’s programming in earlier times or Facebook’s algorithm today. When Facebook goes away, as it will, something else will take its place. The only hope is that there will be diversity in the digital spaces and no one space will dominate the national consciousness. But does that really help the various portions of the country connect with each other? We will end up in enclaves of our own preference. I think an even larger question is whether nations will remain the dominant organizing principle. What separates a myth-believing Trumpian in South Carolina from a neo-Nazi sympathizer in Berlin if they all participate in the same space? Maybe in the future we will self-organize not by geography and political citizenship but by something more akin to religion, in that the belief system is more important than the country we (currently) reside in. Pop culture, as evidenced by Japanese anime, Marvel superheroes and K-pop, is consumed on a worldwide basis and not limited by where you call home.”
Robert Cragie, senior principal research engineer with Gridmerge Ltd., said, “It is unlikely to be substantially better because of the levelling effect of social media platforms. This allows participants at the extremities, who would never have had an opportunity previously, to promote their views and propagate them very easily. This is a deliberate act on their part as they seek to recruit followers to their cause. Their followers then amplify the message, in turn recruiting more followers, and so on. Beyond that, the echo-chamber effect entrenches views even further as exposure to alternative views decreases. One may say that this is democracy in action and free-thinkers are allowed to believe what they want. However, it is well-known that certain behaviours and memes are more seductive than others, e.g., populism. The likelihood of rational debate decreases and polarisation occurs. The media in general are driven more and more to provocative and sensationalist articles, which stokes the fire further. Any attempt to rationalise this behaviour is seen as censorship (‘cancel culture’). It is difficult to see how things will change for the better.”
Czesław Mesjasz, associate professor at Cracow University of Economics, Poland, wrote, “The survey question is too broad. How can we sum up a social phenomenon of such broad scope with a simple positive vs. negative? It is a typical dualistic (dialectical, dyadic, paradoxical) situation. Yes, there are some natural phenomena – e.g., climate change, decreasing biodiversity, economic and social inequality – that can be directly assessed as negative. Human-created phenomena, especially in the intangible world, cannot be assessed at such a high level of generality. They are ambiguous due to fundamental ontological and epistemological causes.”
Timothy L. Haithcoat, deputy director of the Center for Geospatial Intelligence, said, “The problems will get worse since there is, as of yet, no unbiased method for conducting reviews of postings. Unless things change, bias, misinformation and viewpoints will be taken as fact by segments of society who want to believe them as fact. As well, the anonymity that these spaces allow leads to no accountability and no ramifications. Finally, the reluctance to provide universal tags for certain kinds of websites (think pornography, hate groups, etc.) makes it much more difficult for parents or society to screen/filter these elements out of their ‘view,’ and thus their influence grows.”
Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, said, “While there may be significant changes in what will amount to niche sectors for the better, my strong sense is that the conditions and causes that underlie the multiple negative affordances and phenomena now so obvious and prevalent will not change substantially. This is not primarily a matter of human nature, as a start – but about human selfhood and identity as culturally and socially shaped, coupled with the ongoing, all but colonizing dominance of the U.S.-based tech giants and their affiliates.
“Much of this rests on the largely unbridled capitalism favored and fostered by the United States. The U.S., for all of its best impulses and accomplishments, is increasingly shaped by social Darwinism – the belief that humans are greedy, self-interested, atomistic individuals, thereby caught up in the Hobbesian war of each against all, with ruthless competition as ‘natural’ – and that all of this is somehow a good thing, as it allegedly generates greater economic surplus, however unequally distributed (as a ‘natural result’ of competition).
“All of this got encoded into law, starting with early-1970s regulation that treated networking and computer-mediated communication industries as ‘carriers’ rather than ‘content-providers’ – i.e., rather than like newspapers, radio and TV, whose rights to freedom of expression are importantly limited with a view towards what contributes to fruitful democratic debate, procedures and norms.
“To use but one example: Facebook’s famous censorship problems – i.e., its unwillingness to censor political content while being perfectly willing to prevent the entire world from seeing naked female breasts – rest on its insistence, as Zuckerberg once put it, that he and Facebook are not editors (but ‘cultural’/‘ethical’ censors, imposing U.S. prudery on the rest of the world). The ongoing efforts – and usually spectacular failures – to impose any sort of regulation, much less fair global taxation, on the tech giants further reflect the U.S. embrace and enforcement of a neoliberal approach of minimal government and maximum business competition, in hopes that the rising tide will raise all boats. (As 40-plus years of such policies have made clear – all yachts, yes … all boats, not so much.)
“The tech giants are also able to tap into a U.S. sense of exceptionalism and correlative disdain for the rest of the world, including the European Union – looking down their collective nose at EU efforts to regulate them as an exercise in needless bureaucracy that will only gum up the works of ruthless, i.e., unregulated competition. The tech giants can also point to the more-ruthless competitors out there – Russia and China as a start – to stoke further fear of any sort of government intrusion as hobbling a global competition with such high stakes (i.e., superpower dominance).
“An additional, utterly central conceptual brick all of this puts into place is the notion that ‘freedom’ is what Isaiah Berlin famously identified as ‘negative freedom,’ i.e., simply freedom from constraint. All of this entirely misses the point of ‘positive freedom’ – as well as the responsibilities and duties we owe one another as members of democratic societies and larger eco-systems (the ‘more-than-human’ webs of relationships that include the climate we are rapidly destroying, also in the name of unbridled capitalism – but I digress.)
“One of the very sad and tragic ironies here is that social media, as the primary engines of economic growth and power, are driven by a very different understanding of human identity and selfhood – namely, as George Herbert Mead, Erving Goffman, Carol Gilligan and innumerable other social scientists and philosophers have established, we are deeply relational beings whose sense of self-worth and identity is deeply dependent upon what others think/feel/say about us.
“This relational sense of self operates more obviously in other parts of the world, starting with the EU and Scandinavia, and certainly in non-Western societies. In many of its expressions, such as the Chinese Social Credit System, relationality rather ruthlessly suppresses any significant sorts of individual expression, much less the central democratic rights to protest, disobey and contest – rights that have led to some of our best democratic and humane achievements in terms of women’s suffrage, civil rights, greater income and social equality, and so on.
“Middle grounds, such as are articulated in feminist notions of ‘relational autonomy’ conjoin relational forms of autonomy in ways that foster, e.g., possibilities of conscientious objection alongside strong senses of interconnection with and thereby duties and obligations towards others. These are perhaps best exemplified in the Scandinavian societies, but also understood at important levels in EU countries (if in varying degrees). This sense of relational autonomy is an essential foundation for these non-U.S. approaches to living and working as social beings – starting with a sense of Berlin’s positive freedom, i.e., that I am more free to do what is important to me as well as to us as a society through my relationships with others, not in conflict or competition with them.
“In these societies, ‘the rules’ have to be established through democratic procedures, including open, rational, fact-based debate. They remain open to criticism – including rejection in the next election cycle. But all of this further depends, so far as I can see, on a democratic socialism that ensures high levels of education more or less universally, along with other central goods important to good lives of flourishing, starting with health care.
“Last but not least, all of this is to be shared _very_ equally, starting with income and gender equalities, access to education, health care, the political processes that define our lives, etc. These foundational equalities, as the Scandinavian examples demonstrate, thereby entail the highest trust levels in the world: some 71% of us believe that we can trust most of those around us to do the right thing – while the Hobbesian war of each against all in the U.S. correlates with 38% on the same scale. But as I am forcefully reminded every time I land in a U.S. airport and try to make my way to whatever my destination may be – ‘you’re on your f****ing own’ is the message broadcast by the raft of commercial service providers and the nearly complete absence of effective and easily accessible public services. The latter, of course, makes perfect sense for ruthlessly competitive atomistic individuals – it is appalling from these different perspectives and assumptions.
“With all of this as background: I have seen calls and suggestions for what amounts to an internet/social media technology environment that is developed as yet one more form of public good/service by national governments. This makes excellent sense to me and follows in the train of the Scandinavian governments’ subsidizing internet access to their citizens very early on, contributing to their enjoying the highest levels of ‘internet penetration,’ as a start. Treating internet-facilitated communication, including social media, as public goods in these ways might further include both education and legal arrangements that would teach and enforce the distinctions between protected speech that contributes to informed and reasonable civil debate clearly contributing to democratic deliberation, norms, processes, etc. – and non-protected expression that fosters, e.g., hatred, racism and the stifling of open democratic deliberation. (This is not entirely fantasy. Norway, for example, has explicit constitutional codes providing for extensive freedom of expression – among the best in the world – as well as for criminalizing hate speech. These are not unproblematic or uncontested: they are constantly tested and debated – but in some very important ways, they work.)
“Such a system and infrastructure would thereby avoid at least some of the commercial/competitive drivers that shape so much of current internet and social media use; ideally, it would develop genuine and far more positive environments as alternatives to the commercially driven version we are currently stuck with. But again, all of this will depend on foundational assumptions of selfhood, identity and meaning, along with the proper governmental roles vis-à-vis public goods vis-à-vis capitalism, etc., that are largely alien to the U.S. context. It is hard to be optimistic that these underlying conceptions will manage to diffuse and make themselves felt in the U.S. context anytime soon.”
Courtney C. Radsch, journalist, author and free-expression advocate, wrote, “Digital spaces and digital lives are shaped by and shape the social, economic and political forces in which they are embedded. Unfettered surveillance capitalism, coupled with the proliferation of public and private surveillance – whether through pervasive facial- and sentiment-recognition systems or so-called ‘smart’ cities – is creating a new logic that governs every aspect of our lives.
“Surveillance capitalism is a powerful forcing logic that compels other systems to adapt to it and become shaped by its logic. Furthermore, the datafication of every aspect of human experience and existence, coupled with the potential for behavioral modification and manipulation, makes it difficult to see how the world will come together to rein in these forces, since doing so would require significant political will and regulatory effort to unwind the trajectory we are on. That political will does not exist.
“It’s hard to imagine what a different future alternative logic would look like and how it would be implemented, given that American lawmakers and tech firms are largely uninterested in meaningful regulation or serious privacy protections or oversight. Furthermore, surveillance – the proliferation of facial- and sentiment-recognition systems, sophisticated spyware and tracking capabilities – is being deployed by authoritarian and democratic countries alike. So it’s hard to see how the future does not end up being one in which pervasive surveillance is the norm and everyone is watched and trackable at all times – whether you’re talking about China and its model in Xinjiang (and its export of that approach to countries around the world through the Belt and Road Initiative), or American and Five Eyes mass surveillance, approaches like Clearview AI and so-called smart cities.
“These pervasive surveillance-based approaches to improving life or safety and security are likely to expand and deepen rather than become less concerning over this time period. Politics is now infused with the logic of surveillance capitalism through microtargeting of individuals and behavioral manipulation, and this is only going to become more prevalent as an entire industry evolves to serve campaigns around the world.
“We’re going to see insurance completely redefined from collective risk to individualized, personalized risk, which could have all sorts of implications for cost and viability. Digital spaces are also going to expand to include the inside of our bodies. The wearable trend is going to become more sophisticated, and implantables that offer the option to better monitor health data are unlikely to have sufficient oversight or safety, given how far ahead the market is of the legal and regulatory frameworks that will be needed to govern these developments.
“Constant monitoring, tracking and surveillance will be ubiquitous, inescapable and susceptible to abuse. I don’t see how the world is going to move away from surveillance when every indication is that more and more parts of our lives will be surveilled – whether it’s to bring us coupons and savings, to keep us safe or to deliver us better services.
“The decline in the concept of truth and a shared reality is only going to be worsened by the increasing prevalence of so-called deep fake videos, audio, images and text. The lack of a shared definition of reality is going to make democratic politics, public health, journalism and myriad aspects of life more challenging.”
Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association, responded, “The configuration of digital spaces is greatly influenced by the fundamental forces that shape society. The greater bifurcation of society that developed in the last few decades will continue to 2035. Knowledge workers, often college graduates, will do relatively well; they have the education and improving skills from their professions that will enable them to navigate the voluminous and complex digital spaces to serve their purposes. Other workers will not do so well, with no replacement for the blue-collar, unionized, factory jobs (and other similar employment) that placed them in the middle class in the 20th century. As the possibilities of digital spaces become increasingly numerous and complex with nuanced interconnections, these workers will have more difficulty in navigating them and shaping them to accommodate their needs. Indeed, there will be increasing opportunities to manipulate these workers through ever-more sophisticated technology. The haves and have-nots dichotomy will not be about access to technology or information, but rather about the cognitive ability to understand, manage and take advantage of the ever-growing abstractions of digital space.”
Randall Mayes, instructor at Duke OLLI, futurist and author, said, “For policy analysts, there is a tendency to focus on market and government failures and successes. To objectively evaluate the transformational impacts of industrial evolutions such as the digital one, it is also important to consider the trade-offs. To continue the trends of longer lives, reduced workloads and a higher standard of living, we need to focus on minimizing the negative impacts through forecasting and foresight.”
Theresa Pardo, senior fellow at the Center for Technology in Government at University at Albany-SUNY, commented, “I believe an increasingly positive transformation of digital spaces and digital life will take place between now and 2035. My belief is based on evolution in two areas. The first is an increasing appreciation for the potential of technology to create value and, more importantly, recognition of the risk to society from a lack of deep understanding of the potential unintended consequences of the use of new and emerging technologies. It is this recognition, among both leadership and the public, that will drive tech leaders and politicians to fulfill their unique roles and responsibilities by creating the governance required to ensure that the necessary understanding is built among those leaders and the public. Lack of understanding of the need for governance of new and emerging technologies – governance that requires trustworthy AI, for example – is a problem that is just beginning to be diminished.
“The second is based on an increasing appreciation of the need for sophisticated data management practices across all sectors. Leaders at all levels appear to have moved beyond the theoretical notion that data-informed decision making can create public value to actually seeking more and more opportunities to draw on analytics in their decision making. They are, as a consequence, becoming more aware of the pervasive issues with data and the need for sophisticated data governance and management capabilities in their organizations. As they seek also to fully integrate programs and services across the boundaries of organizations at all levels and sectors building among other assets, data collaboratives, they are also recognizing the need for leadership in the management of data as a government asset.”
Valerie Bock, principal at VCB Consulting, wrote, “It has taken a very long time for the digital cheerleaders to understand how serious destructive use of online spaces could become, but I believe the January 6, 2021, insurrection at the U.S. Capitol served as a wake-up call not only to the digerati but to our lawmakers. I expect that the future will see: more-extensive surveillance of such places by law enforcement; creation of legislation that will hold hosts liable for providing space for the promulgation of lies and the planning of illegal activities; and actual, meaningful enforcement of such legislation. Of course, if such efforts are successful, they will drive a great deal of activity ‘underground,’ but the upside is that casual users will no longer be exposed to casual conspirators. Once the price of malfeasance goes up, it will concentrate activity among the hardcore who are willing to pay the fines, legal fees and other costs.”
Pia Andrews, an open- and data-driven government leader for Employment and Social Development Canada (ESDC), wrote, “I don’t see this as something that will just happen one way or another. I am not that fatalistic. :) What I am seeing is a trend of the internet bringing out both the best and worst of people and, with new technologies creating greater challenges for trust and authenticity, people are starting to get activated and proactive in saying they want to create the sorts of spaces that improve quality of life rather than allowing spaces to devolve without purpose. This engagement by ordinary people in wanting to shape their lives, rather than waiting to have their lives shaped for them, points to a trend of more civic engagement, civil disobedience and activism, and a greater likelihood that digital and other spaces will be designed by humans for good human outcomes rather than being shaped by purely economic forces that value the dollar over people.”
Sam Lehman-Wilzig, professor and former chair of communications at Bar-Ilan University, Israel, said, “As with most new technologies that have significant social impact, the beginning is full of promise, then the reality sets in as it is misused by malevolent forces (or simply for self-aggrandizement), and ultimately there is societal pushback (or technological fixes). Regarding social media, we seem to be now in the latter stage as policymakers are considering how to ‘reform’ the social media ecology and as public pressure grows for additional self-supervision by the social media companies themselves. I also expect that the educational establishment will enter the fray with ‘media literacy’ education at the grade school and high school level. As a result of all these, I envision some sort of ‘balance’ being reached in the near future between free speech and social responsibility.”
Tim Bray, founder and principal at Textuality Services, previously a vice president in the cloud computing division at Amazon, wrote, “There’s a surge of antitrust energy building. Breaking up a few of the big techs is very likely to improve the tenor of public digital conversation. There is an increasing awareness that social media that is programmed for engagement in a way that’s oblivious to truth and falsehood is damaging and really unacceptable.”
Stephen Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “Effective community decision-making is necessary, even vital, to address what may be existential challenges in our near future, ranging from the possibility of international conflict to the certainty of climate change and environmental collapse. Numerous other systems, such as disease control, energy supply and management, information systems, food production and safety, etc., also require effective community decision-making that is not unduly influenced by those with greater power.
“The biggest change by 2035 will be the introduction of measures that allow for the creation of contexts. In an environment where every message can potentially be seen by everyone (this is known as ‘context collapse’), we’ve seen a trend toward negative and hostile messaging, as it is a reliable way to gain attention and followers. This has created a need, now being filled, for communication spaces that allow for the creation of local communities. Measuring online impact by high follower counts, which leads to the proliferation of negative impacts, will become a thing of the past. It should be noted that this impact is being created not by content-moderation algorithms, which have been the characteristic response by social media (Facebook, Twitter, TikTok, etc.), but by changes in network topology.
“These changes can be hard-coded into the design of the system, as they are for example in platforms like Slack and MS Teams. They can be a natural outcome of resource limitations and gateways, for example in platforms like Zoom. I think we may see algorithmically-generated network topologies in the near future, perhaps similar to Google’s Federated Learning of Cohorts (FLoC) but with more benign intention than the targeting of advertising. Making such a system work will require more than simply placing login or subscription barriers at the entrance to online communities; today’s social networks emerged as a response to the practice in the early 2000s and trying it again is unlikely to be successful.
“A more-promising approach may be found in a decentralized approach to online social networks, as found in (say) Mastodon or Diaspora. Protocols, such as ActivityPub and Webmention, have been designed around a system of federated social networks. However, the adoption barrier remains high and they’re too technical to reach widespread adoption.
“There needs to be a concerted effort to, first, embrace the idea of decentralized social networking and, second, ease the transition from toxic social media platforms to more personable community networks. This will require that social and technology leaders embrace a certain level of standardization and interoperability that is not owned by any particular company (I recognize that this will be a challenge for the tech community). In particular, a mechanism for decentralized and (in some way) self-sovereign identity will be required – on the one hand to enable portability across platforms, and on the other to ensure account security.
“Government can, and may be required to, play a role in such a mechanism. As I’ve said, we’re seeing signs that we’re moving toward such an approach. We can perhaps draw a parallel between what we might call ‘cognitive networking’ and what we already see in financial networking. A person can have a single authenticated identity, guaranteed by government, that moves across financial platforms. Their assets are mostly fluid within the system; they can move them from one platform to another and exchange them for goods and services. In cognitive networking we see a similar design; however, a person’s cognitive assets consist of activity data, content created by the person, lists and graphs, non-fungible tokens and other digital assets. The value of such assets is not measured financially but rather directly by the interactions generated in decentralized communities.
“In essence, the positive outcome from such a development is a transition from an economy based on mass to an economy based on connection and interactivity. This, if well executed, has the potential to address wealth inequality directly by limiting the utility of the accumulation of wealth, just as decentralized communities limit the utility of the accumulation of large numbers of followers, by making it too expensive to be able to extract value from low-return practices such as mass advertising and propaganda. Needless to say, there’s a lot that could go wrong.
“Probably the major risk is the concentration of platform ownership. Even if we achieve decentralized communities, if they depend on a given technology provider (for example, Slack or Microsoft) then there is a danger that this centralization will be monetized, creating again inequality and a concentration of wealth, and undermining the utility of cognitive networking.
“There needs to be a public infrastructure layer underpinning such a system, and the danger of public infrastructure being privatized is ongoing and significant. We might also get identity wrong. For example, how do we distinguish between individual actions and actions taken by a proxy, such as an AI agent? Failure to draw that distinction creates an advantage for individuals with access to masses of AI proxies, as they would be able to be simultaneously in every community. The impact would be very similar to the impact of targeted advertising in social network platforms such as Facebook, where it’s not possible to know what messages a given entity is targeting to different individuals and different communities, because each message is unique, and each message may be delivered by proxies whose origins cannot be detected or challenged by the recipient.
“These risks are significant because unless individuals are able to attain an equitable standing in a cognitive network, they are unable to participate in community decision-making, with the result that social decision-making will be conducted to the advantage of those with greater standing, just as occurs in financial networks today.”
Richard H. Miller, CEO and managing director at Telematica and executive chairman at Provenant Data, commented, “What reforms or initiatives may have the biggest impact?
“1) Those that revolve around data sovereignty, the capture of personal data, rights to use, and the ability of individuals and corporate entities to delegate or license rights of use to third parties. Accompanying the reforms will be technical solutions involving fairly radical approaches to transparency through the use of zero-knowledge data storage and retrieval. By these means, clarity in the use (or definitive indications of misuse) of personal data is accomplished with reasonably strong means of protecting privacy. And technologies that retain tamper-proof/tamper-evident data, along with the provenance and lineage of data, will result in provable chains of data responsibility.
“2) Telecommunication/Data Services reform that establishes principles of fairness in access, responsibility and liability for transgressions, establishment of common carriage principles to be applied by law and the willingness of governments (federal, state, regional and so on) to clearly identify, call out and appropriately penalize cartel or monopolistic business practice.
“What beneficial role can tech leaders, politicians or public audiences play in this evolution? In both cases 1 and 2 above, technology leaders are capable of clearly describing the risks of not addressing the issues and can present them in an understandable fashion to legislative bodies and to the populace so there is an informed public.
“Politicians, insofar as they are responsible for the establishment and enforcement of law, are potentially the most important contributors. But should they continue, as they have over the past 20 years, to abdicate responsibility for the modernization of regulation and its enforcement, they also represent the most impactful threat.
“What will be noticeably improved about digital life for the average user in 2035? Trust in the knowledge that there is greater transparency and control over the use of personal data. Trust in identification of the source of information. Legal recourse and enforcement regarding data usage, information used for manipulation, and active pursuit of cartel and monopolistic behavior by technology, telecom and media hyperscalers.”
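The tamper-proof/tamper-evident data chains Miller points to can be illustrated with a minimal hash-chained provenance log, in which each entry commits to the hash of the entry before it. This is only a sketch of the general technique; the class and field names are hypothetical and it does not describe any specific product:

```python
import hashlib
import json

class ProvenanceLog:
    """Append-only log in which each entry commits to the previous
    entry's hash, so any later alteration breaks the chain
    (tamper-evident, though not tamper-proof on its own)."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        # Canonical serialization (sorted keys) so hashes are reproducible.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash from the genesis value; any edited record
        # or broken link makes verification fail.
        prev_hash = "0" * 64
        for e in self.entries:
            body = json.dumps({"record": e["record"], "prev": prev_hash}, sort_keys=True)
            if e["prev"] != prev_hash or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True
```

Because each hash covers the previous one, changing any stored record invalidates every subsequent hash; real systems would additionally anchor the chain head in an independent trusted store so the whole chain cannot simply be rewritten.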
Peng Hwa Ang, professor of media law and policy at Nanyang Technological University, Singapore, commented, “What we are seeing is friction arising from the early days of use of disruptive technologies. We need law to lubricate these social frictions. Yes, I know we Americans tend to see laws as stopping action. But consider a town where all the traffic lights are green – nothing could move safely. If laws are judiciously formulated, passed and enforced, they act as social lubricants and these frictions will be minimised. I expect, therefore, that people will appreciate the need for such social lubrication.
“The Atlantic piece mentions John Perry Barlow’s Declaration of the Independence of Cyberspace. That is not an ideal. It was obvious to me then that it was not realistic. It has taken many people some 20 years to realise that. We need time to realise that the laws need to catch up with the technology. Facebook for example is now aware that it needs some regulation (internal rules short of hard government laws) in order to actually help its own business. Without some restraint, it is blamed for, and thus associated with, bad and criminal action.
“In short, I am optimistic because I think that 1) we are realising the futility of JPB’s Declaration; 2) the problems we face and will face highlight the need for social lubrication at different levels; and 3) these regulations will come to pass.”
Paul Jones, emeritus professor of information science at University of North Carolina-Chapel Hill, wrote, “Authors Charles F. Briggs and Augustus Maverick wrote in their 1858 book ‘The Story of the Telegraph,’ ‘It is impossible that old prejudices and hostilities should longer exist while such an instrument has been created for the exchange of thought between all nations of the earth.’ I’m reminded that the telegraph was supposed to be an instrument of peace, but that the first broad use was to suppress anti-colonial rebellion in India.
“I’m not sure why we talk about digital spaces as if they were separate from, say, telephone spaces or shopping mall spaces or public park spaces. In many ways, the social performance of self in digital spaces is no different. Or is it? Certainly, anonymous behaviors when acted out in public spaces of any kind are more likely to be less constrained and less accountable. Prank online posts, like their elders, prank telephone calls, abound. ‘Is your refrigerator running?’ has been replaced by Photoshopped sharks in odd places.
“We do see how affinity groups support communitarian efforts – cancer and rare disease support groups, or Friends of the Library during the pandemic, say. We also are aware that not all affinity groups are formed to serve the interests of others or in service of democracy and society – see Oath Keepers for example.
“Digital spaces can and do act to accelerate and maintain cohesion and cooperation of real-world activities. The hard work of regulation and of societal norms is to allow for benefits from new technologies to grow and spread while restricting the detriments and potential harms. Some countries will be better at this than others.
“Technologists have to learn to think politically and socially. Politicians have to learn to think about technology in a broader way. Both will have grown up with these problems by 2035 and will have seen and participated in the construction of the social, legal and technical environments. From that vantage point, the likelihood of being able to strike a balance between control and social and individual freedoms is increased. Not perfected but increased.”
Rich Salz, a senior director of security services at Akamai Technologies, responded, “I hope that large social media companies will be broken up and forced to ‘federate’ without instances, so that global interaction is still possible but it’s not all under the control of a few players. This can be done, although some tricky (not hard) problems have to be solved. In spite of recent failed court actions tied to suits against Facebook, I maintain that the European Union and perhaps the U.S. Congress, will do something.”
Tom Wolzien, inventor, analyst and media executive, suggested the following: “1) Civil accountability for ALL platforms as publishers for what appears on/in them by any contributor, similar to legacy media publishers (broadcast and print). 2) Appropriate legislation by politicians and acceptance by tech leaders. 3) Platforms do not allow anonymity of contributors or persons retransmitting messages of others. Persons retransmitting may be held accountable for material re-transmitted by platform and in litigation. This will force individual contributors to take personal accountability as enforced by the platforms, which should fear civil liability. This will diminish, but not eliminate a lot of the current issues.”
William Lehr, an associate research scholar at MIT’s Computer Science & Artificial Intelligence Laboratory with more than 25 years of internet and telecommunications experience, wrote, “Digital spaces are changing the nature of public life and human identity, and we need to adapt both our society and our technology. The rise of fake news is an obviously bad outcome, and if post-truth discourse continues, things will get worse before they can get better. The fixes will require joint effort across the spectrum from technologists to policymakers. There is the potential for digital spaces to produce public goods, but also potential for the opposite. Neither outcome is a foregone conclusion. Digital spaces will be a critical part of our future in any case, and that future will be either mostly good or mostly bad, but a future without digital spaces is unrealistic.”
Dweep Chand Singh, professor and director/head of clinical psychology at Aibhas Amity University in India, said, “Communication via digital mode will advance, evolving to an addition of non-physical means, i.e., brain-to-brain transmission/exchange of information. Biological chips will be prepared and inserted in people’s brains to facilitate non-physical communication. Artificial neurotransmitters will be developed in neuroscience labs for an alternative mode of brain-to-brain communication.”
Carl Frey, director of the Future of Work project at Oxford University, said, “While I am optimistic about the long-run, I think it will take some time to reverse the political polarization that we are currently seeing. In addition, I worry about the surveillance state that China is building and exporting. But I do think we will gradually become better at regulating digital platforms and handling misinformation.”
Christopher Yoo, founding director of the Center for Technology, Innovation and Competition at the University of Pennsylvania, responded, “Digital spaces have become increasingly self-aware of the impact that they have on society and the responsibility that goes along with it. The government interventions that have gained the most traction have been in the area of economic power, highlighted by the EU and U.S. cases against Google and Facebook and proposed legislation, such as the EU’s Digital Markets Act and the bloc of bills recently reported by the House Judiciary Committee.
“Interestingly, the practices that are the focus of these interventions are the most ambiguous. Digital platforms now generate trillions of U.S. dollars in economic value each year, with many of the practices playing essential roles, and many of the supposed harms are backed more by theory than by empirical evidence. Any economic interventions that are justified must be carefully targeted to curb abuses proven by evidence rather than conjecture, in ways that do not curtail the benefits on which consumers now depend. More important, however, is the impact of digital platforms on political discourse. In the U.S., the First Amendment limits the government’s ability to intervene. Any reforms must come from the digital platforms themselves. Fortunately, they are showing signs of greater conscientiousness on that front.”
Jeremy West, senior digital policy analyst at the Organisation for Economic Cooperation and Development (OECD), wrote, “I am optimistic that improvements will be made. The fixes won’t all be technical, though. Some of the most effective ones may be education, transparency and awareness. Take awareness, for example – experience with social media grows all the time, and I think we are already seeing embryonic inklings in the general public that perhaps their social media spheres aren’t actually representative of viewpoints in the wider population (or of reality, for that matter). Those inklings may grow, and/or be followed by awareness that sometimes the distortions are intentionally aimed at them. This should, in principle, lead to greater resilience against mis/disinformation.
“A factor that could have an especially powerful effect is that greater transparency from online service providers about harmful content, including mis/disinformation, is on the way. That will improve the evidence base and facilitate better policymaking in ways that are not currently possible. Neither tech leaders nor politicians (with some scattered exceptions) have been especially helpful, and I don’t have much hope for improvement there.
“By 2035, I expect to see users having substantially greater control over the data they wish to share, and more options for accessing formerly ‘free’ services by choosing to pay a pecuniary fee rather than sharing their data. I expect to see terrorist and violent extremist content, child sexual abuse material and the like pushed into ever smaller and more remote corners of the internet. That is not to say that it will be eradicated, though.”
Robin Brewer, professor of information, electrical engineering and computer science at the University of Michigan, said, “As an expert in aging, accessibility, and human-computer interaction research, I have seen how digital spaces have already been transformed to support meaningful connections and social well-being, provide access to remote education and healthcare, and organize people around important social issues.
“Recently, artificial intelligence has been incorporated in many of our digital spaces. For example, voice assistants are learning how to recognize atypical speech patterns. Facial-recognition systems help to provide alternative forms of authentication for people with limited motor skills. Virtual chatbots may be able to detect suicidal risk or depressive symptoms. However, as AI is woven into every aspect of digital life, we must be careful to protect such digital spaces, while mitigating harms that affect marginalized communities (e.g., age, disability, race, gender).
“Reforms with the biggest impact will be those that enforce regulating AI-based technologies with routine audits for potential bias or errors. The most noticeable improvements about digital life by 2035 will likely be better ways for digital residents/users to report AI-related harms, more accountability for such harms, and as such, more trust in using digital spaces for every aspect of our lives (e.g., communication, driving, health) across age groups.”
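The routine audits for bias that Brewer calls for can begin with very simple metrics. The sketch below computes a demographic-parity gap – the spread in positive-outcome rates across groups – which an auditor might compare against a tolerance threshold. The function name and the idea of a fixed threshold are illustrative assumptions, not a regulatory standard:

```python
def demographic_parity_gap(outcomes, groups):
    """Return the difference between the highest and lowest
    positive-outcome rate across groups. 0.0 means every group
    receives positive outcomes at the same rate (demographic parity).

    outcomes: iterable of 0/1 decisions (e.g., loan approved)
    groups:   iterable of group labels, aligned with outcomes
    """
    counts = {}  # group -> (positives, total)
    for outcome, group in zip(outcomes, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + outcome, total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())
```

An audit loop might flag any system whose gap exceeds, say, 0.1 for human review; richer audits would add metrics such as equalized odds, but the reporting-and-accountability pattern is the same.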
Annalie Killian, vice president for strategic partnerships at sparks & honey, based in France, commented, “Given the climate-change crisis, by 2035 the acceleration of digital interaction should, in theory, allow for much less friction in moving ideas, people and goods in a global economy. It should democratize access to education, health and economic participation for millions of people in underdeveloped countries, while also bringing down the costs of things like healthcare in developed countries (in theory, if the incumbents in the system were to cooperate rather than trying to maintain their fiefdoms).
“I wonder if ‘Digital Life/Lives’ is the right term. We will remain human, with physical needs connected to our emotional and spiritual needs, so our lives will not be entirely digitized, but much of our interactions with the world will be. I think this will give rise to the premiumization of the physical. It will be more expensive and less accessible, and it will become the new luxury.”
Andrew Wycoff, director of the OECD’s Directorate for Science, Technology and Innovation, said, “The twin forces of innovation and heightened recognition that the digital infrastructure is essential to the economy and society will have the biggest impact. As for innovation we will witness a profound change as ubiquitous computing enabled by fibre, 5G and embedded sensors and linked equipment and devices (the Internet of Things) augmented by AI becomes a reality. This new platform will unleash another innovation cycle.
“The pandemic has made it clear to all policy makers that the digital infrastructure – from the cables to widely used applications and platforms – is an essential public service, and the light-touch regulation of the internet’s first few decades will fade. Governments are slowly developing the capacity and know-how to govern the digital economy and society. This new cadre of policy makers will assert ‘sovereignty’ over what was ungoverned and will seek to promote digital spaces as useful, safe places, just as they did for automobiles and roads in the 20th Century.
“What will be noticeably improved about digital life for the average user in 2035? Key initiatives will be digital identities, control over personal data, protection of vulnerable populations (e.g., children) and measures to improve security. What current problems will persist and continue to raise major concerns? The end-to-end property of the internet, which is its ‘democratising’ feature, has led to an inevitable decentralisation and recentralisation, altering power dynamics. This shift is destabilising and naturally resisted by incumbents, causing strife and calls to reassert control.”
Michael M.J. Fischer, professor of humanities and of anthropology and science and technology studies, said, “I suspect the Internet of 2035 will no longer be a single global or universal platform, but a series of divided spaces with different constraints and affordances for each. Navigating tools, permissions and accountability tracking will all evolve. The challenge will be to keep as many of these spheres as possible ‘open’ or ‘democratic’ in the sense that they will not be controlled by either commercial or governmental agendas.”
Mike O’Connor, retired, a former member of the ICANN policy development community, said, “Online life is just like real life. Good people will do good things, bad people will do bad things. I hope that the good people outnumber the bad.”
Evan Leibovitch, director of community development at Linux Professional Institute, commented, “The extent to which governments can create – and preferably collaborate – on these issues will determine everything. This can go either way. The Internet can be used as a tool for elite control of the masses – China is already providing a blueprint – or for massive social progress. Whether the transformation of digital spaces becomes a net positive or negative is dependent upon political and economic factors that are too volatile to predict. Much depends upon the level of regulation that governments will choose to impose, both in concentration of monopoly power and the necessity to make computer users responsible for what they say. This will impact laws and regulations on monopoly concentration, libel/slander and intellectual property.”
Fred Baker, RSSAC co-chair at ICANN and longtime IETF leader, wrote, “I see positive developments and negative ones. At best, I think things will be different in 2035, and we’ll decide along the way whether and how we like that.”
Thomas Streeter, a professor of media, law, technology and culture at Western University, Ontario, Canada, commented, “The future is unknowable to the degree specified and it is irresponsible to suggest otherwise; and the character of digital life will largely be determined by non-digital issues, like global warming, the state of democracy and globalization, etc. That said, if an international coalition of liberal social democracies are able to dramatically reorganize digital technologies, perhaps through first breaking up the big companies with antitrust law and then regulating the pieces according to a mixture of common carrier and public media principles, while replacing advertising with subscriptions and public subsidies, that will help. There is no way to know if such efforts would succeed, but stranger things have happened in the past, and if we don’t try, we will guarantee failure.”
Grace Wambura, an associate at DotConnectAfrica based in Nairobi, said, “Digital transformation will pursue unlimited growth, and our limitless consumption threatens to crowd out everything else on Earth. Climate change is already happening, we are overspending our financial resources, we require more fresh water than we have, income inequality is increasing, other species are diminishing, and all of these are triggering shockwaves. At this important time, technology initiatives aimed at ending climate change, achieving financial inclusion, overcoming gender inequalities and providing safe drinking water will have a great impact on communities by 2035. Tech leaders are increasing their power and digital surveillance. They can also apply technology to come up with new options to cope with the problems arriving with the digital technology evolution. Thanks to technology, everyone is able to access the world’s best services, resources and knowledge. With this, information sharing is made easy. One thing that will remain a puzzle and continue to cause concern is the vital need for both privacy and security.”
Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, where he is researching artificial intelligence and the social implications of technology, commented, “In the near-term future (next 10 to 15 years), I expect that top-down regulation will have the biggest impacts on digital environments, particularly through safeguarding privacy and combating some of the worst cases of misinformation, hate speech, and incitements to violence.
“First, regulations shaping data governance and protecting privacy rights like GDPR and CCPA are well suited to tackle a subset of current problems with digital spaces and can do so in a relatively straightforward fashion. Privacy by design, opt-in consent, purpose limitation for data collection and other advances are likely to accelerate through diffusion of regulatory policy, buttressed by the Brussels and California Effects, and the pressure applied to technology companies by governments and the public. For example, there may be enough policy pressure along the lines of the EU’s Digital Services Act and Digital Markets Act to limit the use of micro-targeted advertising, perhaps for vulnerable populations and sensitive issues (e.g., politics) especially. A rare consensus in U.S. politics also suggests that federal action is likely there as well. These would no doubt constitute improvements in digital life.
“Second, I think there is also reason for moderate hopefulness about the fight against misinformation. While I don’t expect public news media literacy or incentives to change dramatically, social media platforms may have enough in the form of technical and platform control tools to mitigate certain issues like bot accounts and viral spreading of untrustworthy sources. Significant research and pressure, along with compelling examples of actions that can be taken, suggest these improvements are available.
“However, this positive transformation for some is complicated by the willingness of unscrupulous actors, authoritarian governments, and criminal groups to promote misinformation, particularly for the many countries and languages that are less well monitored and protected. Further, it is not clear whether a loss of participants from mainstream social media platforms to more fringe/radical platforms would increase or decrease the spread of misinformation and polarization overall. Deepfakes and plain old fake news are likely to (continue to) have significant purchase with large portions of the global population, but it is possible that platforms will be able to minimize the most harmful misinformation (such as misinformation promoting violence or genocide) especially around key periods of interest (such as elections).
“For a portion of the world then, I would expect the misinformation problem to improve, though only in the more well-regulated and high-income corners. However, deepfakes could throw a wrench in this, and it is unclear whether perpetrators or regulators will stay ahead in the informational battle.
“Finally, I would not expect the quality of public discourse to improve dramatically on average. While companies may have some incentives to remediate the worst offenses (violent speech), my concern is that human nature and emergent behavior will continue to lead to activities like bullying, uncharitable treatment of others, and forming out-groups. I find it unlikely that more positive, pluralistic, and civil platforms will be able to outcompete traditional digital spaces financially and in terms of audience desire.
“Given that regulation is unlikely to impose such dramatic changes, and that users are unlikely to go elsewhere, I suspect there are not sufficient incentives for the leading firms to transform themselves beyond, for example, protections to privacy and efforts to combat misinformation. Overall, while some of the worst growing pains of digital spaces may be remediated in part, we can still expect outcomes like hostility, polarization, and poor mental health. Progress then, may be modest, and limited to areas like privacy rights and combating misinformation and hate speech – still tremendously important advances.
“Further, my skepticism about broader progress is not meant to rule out the tremendous benefits of digital spaces for connection, education, work, and so on. But it stretches my credulity, in light of human nature and individual and corporate incentives, to believe that the kind of transformations that could deeply change the tenor of digital life are likely to prevail in the near future.”
Stowe Boyd, managing director and founder of Work Futures, wrote, “Decreasing the amplification of disinformation is the most critical aspect of what needs to be done. Until that is accomplished, we are at risk of growing discord and division. It will be necessary for policy makers – elected officials, legislatures, government agencies and the courts – to take action to counter the entrenched power of today’s social platforms.
“The coming antitrust war with major platform companies – Facebook and its competitors – will lead to more and smaller social media companies with more-focused communities and potentially lessened commercial goals. That will diminish the amplification potential of social media, and will likely lead to better ways to root out disinformation. Improvements for the average person’s digital life are already evident in the adoption of synchronous and (more importantly) asynchronous ways to interact with coworkers, family, friends and the greater community. I also believe that augmented reality will be transformative for the average person to a degree equivalent to the impact of personal computing and smartphones.”
Rick Lane, founder and CEO of Iggy Ventures, wrote, “I believe that policy makers around the world, the general public and tech companies are coming to the realization that the status quo around tech public policy that was created during the 1990s is no longer acceptable or justified. The almost unanimous passage of FOSTA/SESTA, the EU’s NIS2, the UK’s recent child safety legislation, Australia’s encryption law, and the continued discussions around modifying Section 230 of the U.S. 1996 Communications Decency Act and privacy laws here in the States highlight how views have drastically changed since the SOPA/PIPA fights.”
Willie Currie, who became known globally as an active leader with the Independent Communications Authority of South Africa, commented, “Regulation tends to lag behind technological developments, as occurred with radio in the U.S. in the 1920s. And changes to legislation work to fix the problems that arise in the implementation of new technologies. If one regards the internet as a hyperobject, similar to global warming, regulating its problematic aspects will require considerable global coordination. Whether this will be possible in the current global political space is unclear.
“During the 2000s there was an opportunity to introduce global internet regulation through a treaty process, but the process broke down into two global blocs with different views on the matter. So global regulation is unlikely before 2035. What is more likely to happen is that the European Union will be the main regulatory reference point for many countries, with the U.S. following an anti-trust approach and the authoritarian countries seeking greater control over their citizens’ use of the internet.
“As the lack of accountability and the political and psychological abuse perpetrated by tech leaders in social media continue to multiply, the backlash against them will grow. The damage to democracy and the social fabric caused by unregulated tech companies in the West will continue to become more visible and will reach a point where the regulation of algorithms will become a key demand.
“The combination of antitrust interventions in the U.S. and algorithm regulation in the European Union will rein in the tech companies by 2035. Organised civil society in both territories as well as increasing digital literacy will drive the demand for antitrust and regulatory action. This is the way with technological development.”
Kate Klonick, a law professor at St. John’s University whose research has focused on private internet platforms’ policies and social responsibilities, responded, “Norms will coalesce around speech and harms on platforms, though I think political leaders will have little role in this happening. I see tech leaders and academics playing a role in shaping and identifying where the norms come out and where effective policy can land. I think that users in 2035 will have more control over what they see, hear and read online, and also that, in some ways, there will be less control owing to the consolidation of major technologies.”
Brent Shambaugh, developer, researcher and consultant, predicted, “Although there are people with resources and power who wish for a dystopian future where they come out on top, I believe that decentralized and distributed technologies will challenge these monopolies. Many current tech leaders and politicians will become less relevant as they drown in their own hubris. The next 14 years will be turbulent in both the physical and digital worlds, but the average user will come out on top. Tech leaders and politicians who follow this trend will survive. I could believe the opposite, but I choose to be an optimist.”
Charles Anaman, founder of waaliwireless.co, based in Ghana, said, “While the media tends to rally to the negatives (because the public tends to react to that kind of information), the reality is that better conversations are taking place in real-life interactions in digital spaces. When better conversation can be had, discussing ideas without shaming the ‘ignorant,’ society will benefit greatly in the long term, rebuilding trust. It will be a slow process. It is taking us a while to realise that we have been manipulated by wealthy entities playing off of all sides to achieve their own goals.
“Transparency has been a farce for some time. Reality is fueling a new wave of breaking down digital silos to develop better social awareness and review of facts to understand the context and biases of the sources being used. Cybersecurity, as it is being taught now, is going to have to be applied with the understanding that ALL attack tools can be misused (NSO tools, Stuxnet, et al.) to cause real-world damage in unexpected ways. Open-source approaches to proactive security, from trustless authentication onward, can and should be applied to all online resources to develop better collaboration tools that can merge digital and flesh-based platforms to meet new goals in an environment of mutual distrust.”
Christina J. Colclough, founder of the Why Not Lab, commented, “Whilst much change is needed in relation to online bullying, harassment and abuse, I fear this will be the last issue that will receive political/policy attention. Where I expect governments to act is on the requirement for all fake news, fake artefacts, fake videos/texts, etc., to be labelled as such. I expect also we will see advancements in the labelling of ‘bots’ so we know what we are interacting with. I also believe we will see advancements in data rights – both for workers and citizens, including much stronger collective rights to the data extracted and generated (including inferences) and stricter regulations on what Shoshanna Zuboff calls ‘Markets in Human Futures.’”
Aaron Chia Yuan Hung, associate professor of education technology at Adelphi University, responded, “Although digital life can seem to take a step back in the short term, when we look at it at longer timescales, I do think it will evolve towards a better society. As much power as technology companies have, they do tend to bend towards the demands of their users. In that sense, I have more hope in users than in companies. Of course, users themselves are not a monolithic group and some will want to push digital life in a negative direction (e.g., entities that conduct troll farming, manufactured news, mis/disinformation, etc.). But I believe most people don’t want that and will push back, through education, through public campaigning, through political pressure. 2035 will bring about its own problems, of course, and every era will seem dire. It’s hard to imagine what those new concerns would be, just as it was hard to imagine what our current concerns are back in 2005.”
Carolyn J. Heinrich, professor of public policy, education and economics at Vanderbilt University, wrote, “I answered ‘yes,’ that digital spaces and life will transform in ways that increase the public good in a hopeful perspective, but I do believe it could go the other way as well. There are so many dark and corrupt spaces/forces that could ultimately mean we are worse off for the availability of the internet. Digital spaces can be incredibly useful for sharing information, increasing access to services and supports and giving people more ways of acquiring information and learning. They can also open the world to those in isolated locations. But digital spaces can also be isolating if they are used in ways that limit interactions and dissemination of knowledge, such as groups or sites that cater to narrow groups (politically and socially) and share biased information or skewed perspectives. We need to figure out how to educate youth to use digital spaces judiciously and for the public good.”
Andy Opel, professor of communications at Florida State University, responded, “As with all systems of social control and surveillance, capillary, bottom-up resistance builds and eventually challenges the consolidation of power. We are seeing that resistance from both ends of the political spectrum, with the right calling for regulation of social media to prevent the silencing of individual politicians while the left attempts to respond to the viral spread of misinformation about public health and the 2020 election. Both groups recognize the dangers posed by the current media-ownership landscape and, while their solutions differ, the social and political attention on the need for media reform suggests a likely turn where a digital bill of rights becomes a major issue in near-term political election cycles.
“Right now, there is a very active and dynamic struggle over transparency, access and personal data rights. The outcome of this struggle is what will shape the future of our digital lives. As the ubiquitous commercialization of our digital spaces continues, audiences have grown increasingly frustrated and resistant to instantaneous marketing of products that receive so much as a mention within earshot of Alexa or Siri (or any internet-connected device that is actively listening to our every utterance). This frustration is fueling a growing call for a political and regulatory response that defends individual rights and restores balance to a system that currently does not offer non-commercial, anonymous, transparent alternatives.
“Markets only work when citizens have a range of products to choose from, and currently every major media product we interact with online – social media, dominant news and entertainment sites, search engines – tracks and markets our every move, selling granular, detailed profiles of the public that we are not even allowed to access.”
Aaron Falk, senior technical product manager at Akamai Technologies, said, “Pervasive anonymity is leading to the degradation of online communications because it limits the accountability of the speaker. By 2035, I expect online fora will require an accountable identity, ideally one that still permits users to have multiple personas.”
Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, said, “We need to assume that in the coming 10-15 years, we will learn to harness digital spaces in better, less polarizing manners. In part, this will be due to the ability to use better AI-driven tools for filtering and thus to develop more-robust digital governance.
“The real progress will stem from improvements in media literacy – the capacity for individuals to critically assess claims made in digital spaces – and in social behavior in digital spaces. We are already seeing some positive moves in this direction, particularly among younger groups who are more aware regarding how digital spaces can be co-opted and perverted, and less gullible when it comes to ‘digital-first falsehoods.’
“There will of course always be those who would weaponize digital spaces, and the need to be vigilant isn’t going to go away for a long while. Better filtering tools will be met by more-advanced forms of cyberbullying and digital malfeasance, and better media literacy will be met by more elaborate fabrications – so all we can do is hope that we can keep accentuating the positive.”
Brock Hinzmann, co-chair of the Millennium Project’s Silicon Valley group and a 40-year veteran of SRI International, said, “Public access to online services and e-government analysis of citizen input will continue to evolve in positive ways to democratize social function and to increase a sense of well-being. The Internet of Things will obviously vastly increase the amount of highly detailed data available to all. Analytics (call it AI) will improve the person-system interface to help individuals understand the veracity of the information they see and to help the system AI understand what the people are experiencing. The formation of small businesses and other socially beneficial organizations will become easier and more sustainable than it is today. Nefarious users, criminals and social miscreants will continue to be a problem; this will require continuous upgrades in security software.”
Amy Zalman, futures strategist and founder of Prescient Foresight, wrote, “My answer, yes, is underwritten by my bias toward optimism and the history of the development of behavioral norms around new technologies. Positive change could come from:
- Engineering/programming options and choice into designing digital spaces differently so that those that work according to recommender systems or predictive algorithms open new spaces up for people rather than closing them into their preferences and biases.
- Voluntary accountability by technology platform CEOs and others who profit from the internet/digital spaces. This accountability will come about, if it does, from consistent nudging by government leaders, other business leaders and the public. I do not believe that the public sector can impose these options through law or regulation very effectively right now, except at blunt levels.
- Literacy training – I would like to see schools, governments, civil society and businesses participate in better education in general so future generations can apply critical thinking skills to how they live their lives in digital spaces. People should understand how to better evaluate what they see and hear. We need to shape a positive culture on and in digital spaces, starting with simply recognizing they are an extension of our daily lives. There are also many unspoken rules of behavior that help us generally get along with those around us.”
Miguel Moreno, director of the department of philosophy at the University of Granada, commented, “Technological platforms with greater technical capacity and a clear business model will gradually integrate a growing number of digital services in response to the common needs of millions of users. They can enhance the actions of other institutions serving the general interest (education, publicly owned media, public health, etc.) and make tasks that are still complex, laborious and inefficient much easier and more cost-effective. Major changes will be needed in regulatory frameworks, in antitrust laws, in privacy cultures and in the standardization of guarantees for users and consumers in different countries. But their experience in disseminating services on a global scale does not seem for now, nor in the near future, replaceable by any other scheme of activity managed by state institutions.”
Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab, said, “The current situation in which a handful of commercial enterprises dominate what is thought of as ‘digital spaces’ will crash and burn, not of its own accord but because the combined weight of climate catastrophe and democratic demise will force other changes that ultimately lead to a re-creation of the digital sphere.
“The path to this will be painful, but humans don’t make big changes until the cost of doing so becomes less than the cost of staying the same. The collapse of both planetary health and democratic governance are going to require collective action on a scale never before seen. Along the way, the current centralized and centralizing power of ‘tech companies’ will expand, along with autocracy. Both will fail to address the needs of billions of people, and, in time, be undone.
“Whether this will all happen by 2035, who knows? Just as climate change is compressing geologic time, digital consolidation is compressing political time. It’s possible we’ll push through both the very worst of our current direction and break through to a more pluralistic, less centralized, participatory set of governing systems – including digital ones – in 14 years. If not, and we only go further down the current path, then the answer to this question becomes a NO.”
Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, wrote, “It’s fatuous to predict any inevitability, but we now are widely aware of some of the ills – of social media especially. Underlying the success of social media and also their ills is the widespread recognition that these media can be used to get potentially wide attention, and that it’s exceedingly easy to give that a try. And underlying that is the fact that a very large percentage of people worldwide want and desire attention, and possibly a lot of it.
“Algorithms used, for instance, by Facebook, may further distort what gets attention, but that’s not the only problem. The best way to get attention is to say or do something different from just the daily ‘boring’ sort of colloquy. You can do that with cute cat videos, by inventing and showing off a new dance, by juggling 13 balls at once, or by saying something that recognized authorities or widespread consensus is not saying. Thus, an outright lie is one attractive method. A whole series of lies and wild assertions gets you something like the attention that goes to QAnon.
“If what you say can be ‘shared’ by an at-first-little, self-reinforcing community, that helps, too. When those lies underline and amplify a widely shared but not widely articulated attitude, such as the feeling of being oppressed by technocrats, experts or just the self-appointed ‘elite’ with supposedly more credentialized ‘merit’ than most people have (as pointed out for example in Michael Sandel’s ‘The Tyranny of Merit’) such views can easily gain wide followings.
“Again, algorithms may help further amplify support of such messages, but that ignores their underlying sources of strength. We, especially in the U.S. – though by no means only here – are divided by very real differences that did not at all originate with the internet. These are differences primarily in who gets heard and how, as well as in monetary income levels that partly follow along with the former.
“In one sense, social media offer a new path to greater equality. These are not refereed journals by any means. Anyone can try to seize an audience. Movements I would regard as positive, such as: the effort for stronger response to climate change; Black Lives Matter; #MeToo; GLBTQ rights – these all have been strengthened in my judgment by social media. (By the way, the overall focus of this survey is supposedly not social media per se but ‘digital spaces,’ which would also include, for instance, spaces of scientific or artistic collaboration that can be far more advantageous for the public good – as long as they are not dominated by corporate interests.)
“Clearly, over the next few years, until well beyond this survey’s target date of 2035, we are in for a wild ride, dealing with the ongoing pandemic, horrendous effects of climate change and the social issues including various kinds of inequality that are only exacerbated and in some cases brought to light through social media.
“Another crisis is that the political motion we might hope for is stalled by the inadequacies and susceptibilities to crass manipulation that our now-elderly political institutions and constitutions reveal. It will be more and more difficult to remain either aloof from or unaware of these interlocking struggles.
“It may well turn out to be a good thing in the long run that we are all drawn in. It will be good, if somehow, we move toward greater acknowledgment of all the inequalities and problems and somehow forge a degree of consensus about the solution. We may not, but we could.”
Doc Searls, internet pioneer, co-author of “The Cluetrain Manifesto” and “The Intention Economy” and co-founder and board member at Customer Commons, wrote, “Yes, there is hope for 2035 if we think, work, invest and gather outside the web and the closed worlds of apps available only from the siloed spheres provided by giant companies and company stores. Those companies and their walled gardens are private spheres that host gatherings and expressions that are public in ways they alone allow.
“That closed world, or collection of private worlds, is based on a mainframe-era model of computing on networks called ‘client-server’ and might better have been called ‘slave-master.’ This model is now so normative that, without irony, Europe’s GDPR refers to the public as ‘data subjects,’ California’s CCPA calls us ‘consumers’ and the whole computer industry calls us ‘users’ – a label used elsewhere only by the drug industry. None call us ‘persons’ or ‘individuals’ because they see us always as mere clients. But the web and the tech giants’ app ecosystems are just early examples of what can be built on the internet. By its open and supportive end-to-end design, however, the internet can support full agency for everyone and not just the servers of the world and the companies that operate them.
“I don’t see full agency being provided by today’s tech leaders or politicians, all of whom are too subscribed to ‘business as usual.’ I do see lots of help coming from technologists working with communities, especially locally, on solutions to problems that can best be solved first by tools and systems serving individuals and small groups.
“The term ‘digital spaces’ is ironic because the digital world has no spaces. It also has no distance or gravity. Yet we now live in the digital world as well as the physical one. This is new to human experience. And yet we are likely to live digital as well as physical lives for many decades, centuries, or millennia to come. But we are still, and will always be, embodied creatures in the physical world. And, as embodied creatures, we cannot stop understanding everything with metaphorical frames grounded in our bodily experience. That’s why we leverage physical-world framings to make sense of what happens in the digital world (itself a metaphor). ‘Spaces’ is one of those.
“In the physical world ‘spaces’ are both literal and figurative. In the digital world, they are only figurative. While this should liberate our thinking and our work in the digital world, we remain constrained by physical-world notions about places, spaces, ecosystems and so on. Among those constraints is the tacit belief that the world of the web, where servers alone determine what clients can do, is and will remain the primary container for our experience of the digital world, including the world of apps on mobile devices. That belief is holding us back.
“In ‘Understanding Media’ Marshall McLuhan writes, ‘Any technology gradually creates a totally new human environment,’ adding, ‘Environments are not passive wrappings but active processes… The railway did not introduce movement or transportation or wheel or road into society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure.’ Later he called the causal process behind this ‘formal,’ meaning that technical environments form us. Put more quotably, ‘we shape our tools and then our tools shape us.’ Those tools, and those environments, have both good and bad outcomes. As Marshall and his son Eric McLuhan later put it in ‘Media and Formal Cause,’ ‘radio caused Hitler and Gandhi alike.’
“I expect mostly good outcomes because it will soon be clear to all that we have no choice about working toward them. As Samuel Johnson said, ‘When a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’ Our species today is entering a metaphorical fortnight, knowing that it faces a global environmental catastrophe of its own making.
“To slow or stop this catastrophe we need to work together, learn from each other, and draw wisdom from our histories and sciences as widely and rapidly as possible. For this it helps enormously that we are digital now and that we live in the first and only technical environment where it is possible to mobilize globally to save what can still be saved. And we have already begun training at home during a strangely well-timed global pandemic. The internet and technology are the only ways we have to concentrate our collective minds on the metaphorical fortnight or less that is still left to us.”
Ebenezer Baldwin Bowles, an advocate and activist, shared this potential scenario: “2035: Digital life devolves into fragmented subcultures. From the top, reaching outward and downward in all directions, public arms enclose and squeeze the populace. The embrace is cold, indifferent. The Everyman is nudged into a perpetual maze of circular requirement. And confusion. And punishment. Citizens become more and more entangled in a digital web of misdirection. Malicious overreach by the Collective A-nom-o-nym pitches citizenry into a benumbed state of resigned passivity. A fear. A calculated mess. Tech leaders, unable to contain rampant digital consciousness – their Web is now beyond any one or any group’s understanding – shuffle Bitcoins back ’n forth in a cynical dance of accumulation. Besotted by grandiose delusion, they retreat into cocoons of material wealth and fragile isolation. On the outskirts, gathered in Alt Collectives, creative AI-Luddites withdraw from the norm to craft insulated digital cells, weaving together hope and rebellion into a far-flung but determined community. Without a recognizable center, strung out on the margins, they contrive to discover ways to overcome fragmentation. ‘Be mindful,’ they chant.”
Eileen Rudden, co-founder of LearnLaunch, responded, “Pressure from the public, governments and tech players will push for change, which is why I believe the future will be more positive than today. Internet spaces will evolve in a positive direction with the help of new legislation (or the threat of new legislation) that will cause tech spaces to modify what is considered acceptable behavior.
“External forces such as governments are being forced to act because the business model of the internet spaces is based on targeted advertising and the attention economy, and the tech players will not be able to respond without governments getting involved. Tech players’ rules for what is acceptable content will become subject to norms that have developed over time, such as those already in place offline for libel.
“Whether a rating system to identify reliable information can be developed is open to question. Witness how varied the media brands are, from Breitbart to the New York Times. The root cause is that we social human beings are structured to be interested in difference and change.
“Tech social spaces amplify the good and the bad of human nature. Laws were created to address shared views of what is acceptable human behavior. In the mid-1990s, during the birth of the internet, we rejoiced in the internet’s possibility to enable new voices to be heard. That possibility has been realized, but the bad of human nature as well as the good has been given a broader platform.
“A current problem I see beginning to be diminished is the problem of what is ‘acceptable use’ online and who determines that. An issue I expect to see remain unsolved by 2035 is bad actors exploiting the slowness of the public’s responses to emerging challenges online.”
Scott G.K. MacLeod, an associate professor of educational leadership at the University of Colorado-Denver, said, “The following initiatives may have a big impact as they develop further: 1) A realistic virtual Earth, a mirror world for everything – and especially actual-virtual, physical-digital developments and conversation. 2) A single, open cryptocurrency backed by most of 200 countries’ central banks, such as the Pi public-access digital currency network, could help end poverty in concert with other positive economic developments such as universal basic income experiments, with each person receiving a Wikidata PIN number for distribution questions as well as a device for accessing this. 3) The World University and School – like MIT’s advanced OpenCourseWare in its four languages and Wikipedia in its 300 languages – but in each of 200 countries and in the 7,139 known living languages; a free online university offering high school, college-level and graduate school degrees.”
Daniel Castro, director of the Center for Data Innovation, wrote, “Social media platforms will give users more choice. This will give people more control over their data, and also how they interact with others. For example, more choice over what is shown in their timelines or what appears in their recommendations. More choice over whether to see only vetted news sources or whether to see labels about the veracity of content. Better connectivity will also transform physical space, leading to more augmented and virtual reality. The same debates we have about social media today will map onto these virtual spaces. And email will be dead.”
Victor Dupuis, managing partner at the UFinancial Group, said, “Digital spaces will continue to improve the methods and efficiencies of how we transact life. Financial decision-making, information interpretation, major personal and home purchases – all will be handled more efficiently, resulting in reduced unit costs for consumers and the need for companies to plan on higher sales volume to thrive. On the negative side, we are eroding relationally because of an increased dependence on digital space for building relationships and fostering long-term connections. This will continue to erode the relationship aspect of human nature, resulting in more divorces and fractured relationships, and fewer deep and abiding relationships among us.”
Leiska Evanson, a futurist, strategist, business analyst, digital marketer and project manager based in Barbados, said, “Worldwide, governments are attempting to break encryption to follow and punish criminals, but this is also opening people up to hounding for their public commentary. Authoritarian governments are leaning towards the Russia/China style of localised, firewalled ‘internet,’ which is not actually connected to the world and is more like a local wide-area network. Platforms are fighting, winning and ignoring the reasons they are having to deal with antitrust cases while limiting the entry of meaningful competitors by monopolising computer resources. The cost of access will skyrocket worldwide as the economic dislocation of the pandemic ravages poorer countries and large wealthy countries drain human resources. When Facebook launches its own currency, numerous small countries will collapse economically. All of the above, plus cancel culture, will lead to further offline radicalisation.”
Terri Horton, work futurist at FuturePath, LLC, said, “In my view, the desire to create a future that is equitable, inclusive, sustainable and serves the public good is human. I believe that desire will persist in 2035. The growth and expansion of novel digital spaces and platforms will enable people across the globe to use them in positive ways that drive the energy and combustion for improving the lives of many and creating a future that serves society. In the future, people will have more choices and opportunities to leverage AI, ML, VR and other technologies in digital spaces to improve how they work, live and play, amplify passions and interests and drive positive societal change for people and the planet. The challenges, however, lie in bridging the global digital divide, reducing equity gaps, governing privacy, evolving ethical use and security protocols and rapidly increasing global digital and AI literacy. Mitigating these challenges will require substantial collaborative interventions that merge private and public industries, governments, and global technology organizations.”
Hume Winzar, a professor and director based at Australia’s Macquarie University with expertise in econometrics and marketing theory, commented, “A series of crises, like those occurring now regarding election rigging and related conspiracy theories, will force changes to publishing laws, so that posters must be identified personally and take personal responsibility for their actions. AI-based fact-checkers and evidence collection will be automatic, delivering a validity score on each post and on each poster. Of course, that also means we will see more-sophisticated attempts at ‘gaming’ such systems.”
Jessica Fjeld, assistant director of the Cyberlaw Clinic at Harvard’s Berkman Klein Center for Internet & Society, commented, “I have hope for the future of digital spaces because we are rightly beginning to understand the issues as systemic, rather than the result of the choices of individual people and companies. Dealing with threats to democracy, free expression, privacy and personal autonomy become possible when we view these issues through a structural lens and governments begin to take ownership of the issue.”
Jan Schaffer, director of J-Lab, said, “Yes, I believe that digital spaces will transform for the public good by 2035. But I don’t expect it to happen without government, and perhaps economic, intervention. I expect there will be legislation requiring internet platforms to take more responsibility for postings on their sites, particularly those that involve falsehoods, misinformation or fraudulent fundraising. And I suspect that the social media companies themselves will bow to public pressure and implement their own reforms.”
Jesse Drew, a professor of media at the University of California-Davis, said, “I see people shedding their naivete about technology and realizing that they must take a more involved role in deciding how tech will be used in our society. This assumes democracy is able to survive both the perils of right-wing totalitarianism as well as neo-liberal surrender to corporations. The public must take a lead.”
Heather D. Benoit, a senior managing director of strategic foresight, wrote, “Digital life will (hopefully) be improved by a number of initiatives aimed at reducing the proliferation of misinformation and conspiracy theories online. Blockchain systems can help trace digital content to its source. Detection algorithms can identify and catalog deep fakes. Sentiment and bias analysis tools allow readers to better understand online content. A number of digital literacy programs are aiming to help educate the general public in online safety and critical thinking. One of the more interesting solutions I’ve seen are AIs built to break down echo chambers by exposing users to alternative viewpoints. There are a number of challenges to overcome – misinformation may just be one. But the fact that questions are being asked and solutions devised is a sign that digital life is maturing and that it should improve given enough time.”
Jenny L. Davis, a senior lecturer in sociology at the Australian National University, said, “Although the good/bad question always obscures complex dynamics of any evolving sociotechnical system, it is true that the speed of technological development fundamentally outpaces policies and regulations. By 2035, I expect tighter policies and regulations to be imposed upon tech companies and the platforms they host. I also expect platforms themselves to be better regulated internally. This will be motivated, indeed necessary, to sustain public support, commercial sponsorships and a degree of regulatory autonomy.”
Evan Selinger, a professor of philosophy at Rochester Institute of Technology, wrote, “Increased platform literacy might be the primary driver for improving digital spaces. Simply put, the idea that widely used platforms aren’t neutral spaces for information to flow freely but are intermediaries that exercise massive amounts of power when deciding how to design user interfaces and govern online behavior has gone from being a vanguard topic for academic researchers and tech reporters to a mainstream sensibility. Indeed, while there are diverse and often conflicting ideas about how to reform corporate-controlled digital spaces to promote public interest outcomes better, there is widespread agreement that the future of democracy depends on critically addressing, right here and now, central civic issues such as privacy and free speech.”
John L. King, a professor at the University of Michigan School of Information Science, said, “It’s a matter of learning. As people gain experiences with these technologies they learn what’s helpful and what’s not. Most people are not inclined toward malicious mischief – otherwise there would be a lot more of it. A few are inclined toward it, and of course, they cause a lot of trouble. But social regulation will evolve to take care of that.”
Eric Goldman, co-director of the High-Tech Law Institute at Santa Clara University School of Law, observed, “In 15 years, I expect many user-generated content services will have figured out ways to mediate conversations to encourage more pro-social behavior than we experience in the offline world.”
Ed Terpening, industry analyst with the Altimeter Group, said, “Increased regulatory oversight will result in uniform rules that ensure digital privacy, equity and security. Tech markets – such as those involved in development of the Internet of Things (IoT) – have shown that they aren’t capable of self-regulation and the harms they have caused seldom have consequences that change their behavior. Legislative action will succeed through a combination of consumer groundswell, as well as political input from business leaders whose operations have been impacted by digital crimes such as ransomware attacks and intellectual property theft. While the scope and value of digitally connected devices will help consumers save time and money in their daily lives, the threat of bad international state actors who target those systems will increase the risk of disruption and economic harm for consumers.”
Ginger Paque, an expert in and teacher of internet governance with the Diplo Foundation, observed, “Today’s largest problems are not all about digital issues. They are all human issues, and we need to – and we will – start to tackle important human issues along with their corresponding online facets. Health (COVID-19 for the moment), climate change, human rights and other critical human issues are far more important than the current internet trend. The internet must become a tool for solving species-threatening challenges. 2035 will be a time of doing or dying. To continue a negative trend is unthinkable, and how we imagine and use the internet is what we will make our future into. The internet is no longer a separate portion of our lives. Online and offline have truly merged, as shown by the G7 proposal for a minimum corporate tax of 15% on the world’s 100 largest and most profitable companies with profit margins of at least 10%. That proposal involves tech giants like Google, Amazon and Facebook, and it was undertaken in consideration of digital issues.”
Deirdre Williams, an independent internet governance consultant, said, “I was lucky enough to attend an early demonstration of ‘Mosaic’ [the first graphical Web browser] at the University of Illinois, Urbana-Champaign in 1993. I can still remember how I felt then – ‘Charm’d magic casements, opening…’ to borrow from Keats. How wonderful this would be for the students in the rather remote small island state I had come from. Nearly 30 years later it feels that the miracle I was expecting didn’t happen. And plenty of unwelcome things happened instead – things to do with identity, with the community/individual balance in the society. Those unwelcome things are not all to be attributed to ‘digital life,’ but ‘digital life’ seems to have failed to provide much of its positive potential. I tend to be pessimistic; however, underneath there is optimism.
“The human perspective fails in its refusal to accept other ways of looking, of seeing, other priorities. Time is often ignored because it is an element beyond human control. And human agency is not the only agency. ‘There’s a divinity that shapes our ends, Rough-hew them how we will,’ says Hamlet to Horatio in Shakespeare’s play ‘Hamlet’ Act 5, Scene 2. Call it divinity, or Gaia, or simply serendipity, but the system is such that it always strives for balance. What is missing currently in ‘digital life’ is a sense of balance; the weightings are all uneven. They need time to reach equilibrium.
“The questions posed by this survey are all about human agency, but the system itself is superhuman. Fourteen years may (or may not) be sufficient for the system to effect its leveling, but I would expect the pendulum to swing towards improvement because that is what is in its nature.
“At the human level ‘digital life’ has the potential to create globally shared experience and improve understanding, so bringing greater balance among the human variables. Climate change, the movement of asteroids, solar flares, the evolution of the earth’s geology will re-teach human animals their true place in the system, force them to learn humility again. Fourteen years and the opportunities provided by ‘digital life’ will hopefully be enough at least to begin the re-ordering and balancing of a system in which humans acknowledge their place as members, not leaders, parts of a greater whole.”
Peter B. Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, wrote, “It is challenging to make plausible predictions about the impact that digital spaces will have upon society in 2035. For perspective, consider how things looked fourteen years ago when the iPhone was first introduced to the world. A wondrous gadget it was, but nobody would have predicted that 14 years later nearly half the population of the planet would own a smartphone, let alone how reliant upon them people would have become.
“With that disclaimer in mind, I expect that digital life will have both negative and positive effects upon many aspects of our lives in the year 2035. Among the positives I would include automation of routine day-to-day tasks, improved algorithmic medical diagnoses, and the availability of high-quality AI assistants that take over everything from making reservations to keeping track of personal spending.
“The worry is that such cognitive offloading will lead to the sort of corpulent torpor envisioned in the animated film ‘Wall-E,’ with humans increasingly unable to care for themselves in a world where the digital takes care of essentially all worldly needs. Yet such a dystopian outcome may be unlikely. Viktor Frankl vividly describes the human need for finding meaning in one’s life, even when the abyss seems near at hand. Faced with the manifold offerings of the digital world, many will look for meaning in creative tasks, in social discourse and perhaps even in improving the intolerable state of political affairs today.
“While some blame digital spaces for providing a breeding ground for divisive political views, what we are witnessing seems more an amplification of persistent prejudice by people who are, for the first time in generations, feeling less powerful than their forebears. The real problem is that our digital spaces cater to assuaging the ego rather than considering what makes for a life well-lived.
“In the current instance, social media, driven by the dictates of surveillance capitalism, is largely predicated on individuals feeling better (for a few seconds) when someone notices them with a like or a mention. Harder to find are digital spaces that foster the sort of deep interpersonal interaction that Aristotle famously extolled as friendships of virtue.
“The optimistic view is that the public will tire of the artifice of saccharine digital interactions and gravitate towards more meaningful opportunities to engage with both human and artificial intelligence. The pessimistic view is that, well, I prefer not to go there.”
Jonathan Taplin, director emeritus at the University of Southern California’s Annenberg Innovation Lab, commented, “In the face of a federal judge’s recent dismissal of the FTC’s monopoly complaint against Facebook, it is clear that breaking up big tech may be a long, drawn-out battle. Better to focus on two fairly simple remedies. First, remove all ‘safe harbor’ liability shields from Facebook, YouTube, Twitter and Google. There are currently nine announced bills in Congress to address this issue.
“As soon as these services acknowledge that they are the largest publishers in the world, the sooner they will have to take the responsibilities that all publishers have taken since the invention of the printing press. Second, Facebook, Google, YouTube, Instagram and Twitter have to start paying for the content that allows them to earn billions in ad revenues.
“The Australian government has passed a new law requiring Google and Facebook to negotiate with news outlets to pay for their content or face arbitration. As the passage of the law approached, both Facebook and Google threatened to withdraw their services from Australia. But Australia called their bluff and they withdrew their threats, proving that they can still operate profitably while paying content creators.
“The Journalism Competition and Preservation Act of 2021 that is currently before the Judiciary Committee in both House and Senate would bring a similar policy to the United States. There is no reason Congress couldn’t fix these two problems before the end of 2021.”
Bryan Alexander, a futurist, consultant, researcher and writer, wrote, “So much depends on how one defines good, and for which population. Strong regulations against blasphemy or defaming a nation might please some people politically, while outraging the rest. Deleting some online content pleases some people, while breaking the historical record. Overall, though, we’re headed in a positive direction, at least as I see it. Librarians and instructors continue to teach information, media and digital literacy. Online populations are becoming more diverse, as are those who create digital tools, platforms and content. Popular and governmental pushback efforts against corporate surveillance are building up. Meanwhile, I think people will increasingly use digital tools to do practical climate-crisis work, and that will gradually overpower climate deniers.”
Stephan Adelson, president of Adelson Consulting, commented, “Not every digital space is experienced in the same way. Facebook became popular, causing the downfall of MySpace. I do believe that Facebook will have its place as it evolves, but as it continues to age the spaces that are hard for it to emulate – like TikTok – will continue to grow and dominate the online social space in the future. With Gen Z adopting TikTok, those of all generations are following. With Facebook, comments are difficult to avoid. Comments create conflict and reduce the enjoyment of the experience for most. As TikTok grows, the content diversifies and the algorithm improves, making the experience more personal and thereby positively reinforcing subjective realities (reinforcing personal norms, interests and beliefs).
“Advertising and entertainment are often almost indistinguishable on TikTok, and entertainers are often recruited as advertisers, increasing the value of word of mouth in dramatic ways, making the platform fertile for monetization. Follow the money and you will often find the future trend. There is also a uniting of like-minded people on TikTok that gives one a feeling that is more communal than the groupings on Facebook. The virtual relationships are deeper for several reasons, but primarily because of the face-to-face aspect of the platform. It is the feelings of community, more-effective content distribution, reduction of unavoidable conflict and the youth of users that make the platform most viable as the Facebook replacement.
“Above all else, the video and emotional foundation of TikTok will influence all digital spaces in the future and improve digital lives for the better. Providing facial expressions offers emotional impact for content and can help convey meaning while invoking emotions like compassion. For example, a Facebook comment a person is passionate about can easily be misunderstood by the reader. Without the video visuals of facial and verbal cues that convey emotion, written words are often understood individually, based on personal interpretation, rather than as intended. With video content there is ‘eye contact’ and the witnessing of the provider’s emotion; the impact of the same words as a written Facebook post is vastly different. The level of emotional impact to the recipient of a video has the capacity to provide more depth to the statements made while removing a good amount of personal interpretation.
“It is my hope that as platforms like TikTok – platforms that use more personal and emotionally engaging content – grow, so will the understanding, compassion and inclusiveness of the users. As we increase face-to-face interaction online, it is my hope that the positive impact of social media will increase, but there will likely be a time of transition between now and 2035 when the positive impact may not be so apparent.
“There will be other platforms that follow and improve the video-based format, especially as the technological infrastructure of the world improves. I imagine improvements to video-based platforms will include improvements in response time, making digital lives not only more personally and emotionally engaging but providing the experience in something closer to real-time. I think there will be a continuation of the ongoing merger of our real and virtual lives.”
George Sadowsky, Internet Hall of Fame member and member of the Internet Society Board of Trustees, said, “The question is not good in the sense that it offers only two alternatives, positive or negative. On balance, I see improvements in our digital life, but I fear that the dark and ugly side of digital life will remain a force to be contended with. I think that we in the U.S. will have to make some accommodation to adjust to an emerging world view.
“On the positive side, I see the potential for education being clearly enhanced, and the opportunities offered by the democratization of health care being quite important.
“On the negative side, I fear that some accommodation with European – and increasingly other continents’ – views regarding data privacy will be required. This will severely limit the markets for individual data collection and sale in our country, and I hope that does occur. Personal data brokers have been responsible for the growth of some of our most commonly used applications, and that practice is no longer needed, if it ever was.”
Jim Spohrer, board member of the International Society of Service Innovation Professionals and IBM open AI director, said, “What television did for widespread entertainment in the 1960s U.S., digital spaces will do for lifelong learning and upskilling by 2035 globally.”
David Eaves, a public policy entrepreneur and expert in information technology and government at Harvard University’s Kennedy School of Government, wrote, “My sense is that – if they don’t change – many existing forms of public discourse risk collapsing or fragmenting to a point of not being useful. As a result, either digital public spaces evolve to serve our needs better or the systems (such as democracy) that rely on these (now digital) public spaces will, themselves, collapse. So, mine isn’t really an optimistic view. But then many of our legacy systems have been optimized for a broadcast or even written era, so we should not be surprised that they are collapsing in a digital era.
“My sense is that digital life will be more divided between those comfortable with sharing and broadcasting and those who maintain a very narrow or entirely digitally private life. My sense, too, is that the sheer amount of surveillance – from the state, from companies, from family, from other individuals – will come to dominate and reshape norms and how people participate in civic life. (I mean it already has).”
Jeff Johnson, longtime Silicon Valley tech veteran now working as a professor of computer science at the University of San Francisco, commented, “Actually, I didn’t want to answer Yes or No. I wanted to answer: Both. Some changes will benefit the public good, and other changes will be detrimental to the public good. I predicted a dual future with positive and negative scenarios way back in 1996 when, as a member of the board of directors of Computer Professionals for Social Responsibility (CPSR), I wrote two articles for the CPSR journal: one presenting positive scenarios of people using what we at the time called the National Information Infrastructure (NII), and one presenting a worst-case view. The pair was also printed in Communications of the ACM.
“Of course, the future will consist of both positive and negative developments. It is up to all of us to work to make sure that positive developments outweigh negative ones. Sadly, CPSR no longer exists, but other organizations have sprouted to fill the hole it left. CPSR’s target audience was multi-faceted: we aimed to influence ‘the four Ps:’ policy-makers, press, public and the computing profession. Those four target audiences are just as important today as they were in 1996.”
Thomas Lenzo, a consultant, community trainer and volunteer in Pasadena, CA, said, “I expect a continuing transformation of digital spaces and life and I expect it will be a mix of good and bad based on the driving actor. Tech leaders, in general, will push technology they create, some as visionaries and some to make money. Politicians will push technology in an effort to ensure they and their political party remain in office. Public audiences for the most part will want those digital spaces that will improve the quality of their lives. Criminals will seek digital spaces that enable them to commit crimes and get away without risk.”
Andrew Tutt, an expert in law and author of “An FDA for Algorithms,” wrote, “Digital spaces are still an evolving medium. We did not know when these were first created how people, both individually and collectively, would respond to and use these spaces and, as a consequence, the people who created these spaces did not anticipate all the ways in which they could be potentially misused or abused. The companies that control digital spaces are discovering myriad ways to make them more positive for society by making them more inclusive and more procedurally fair and by working to make the content people post and read in these spaces more appropriate for constructive civil discourse.
“I am quite confident that the ‘mainstream’ digital spaces of 2035 will reflect a maturation of the medium where most people most of the time find that these digital spaces form a substantial and positive part of their lives. Issues of filter bubbles, incendiary content and misinformation will likely move out of mainstream digital spaces and into alternative channels – much as they have in physical spaces.”
Brian Southwell, director of the Science in the Public Sphere Program at RTI International, wrote, “Embracing the possibilities for technology to allow people to connect while overcoming challenges in their personal circumstances and reducing environmental pollution offers an important path forward.
“We are not guaranteed to reach a better place simply through the use of technology, however. If people are allowed or encouraged to only gather virtually with others like them in terms of demographics or ideology, for example, we will likely have a polarized and weakened society. Moreover, allowing people to work or participate in school remotely does not guarantee that people will be optimizing their mental health and well-being.
“We need workplace policies that encourage time away from working and that encourage people to tend to their personal well-being, regardless of where they physically are working. The spread of misinformation also will continue in online spaces without improved connections and relationships between people and credible scientific institutions.”
Collin Baker, an expert in data structures, software design and linguistics said, “I believe that national initiatives in the U.S. will reduce the inequity in the availability of broadband, allowing citizens everywhere to participate more fully in digital life and national politics. I also believe that international initiatives with regard to global warming, health care and other issues requiring broad international cooperation will be increasingly enabled by near-universal access to the internet. Unfortunately, I don’t see any easy solution to the fragmentation encouraged by the current model of a web supported by advertising revenue; unless we can prevent people from profiting by leading users down increasingly weird rabbit holes, the kind of large-scale cooperative movements I just mentioned will not succeed.”
Marjory S. Blumenthal, senior fellow and director of the Technology and International Affairs Program at the Carnegie Endowment for International Peace, said, “First, saying there will be improvement is not saying there will only be improvement – some things will get better, while more negative phenomena will continue. The reason for the latter is that as more aspects of social, economic and civic life become digital, the bad guys (of different kinds) will go where the activity is. At the turn of the century, ‘walled gardens’ were criticized – and it is easy to wonder whether those seeing a dystopian future might reconsider that judgment. That said, government entities at all levels, nonprofits/NGOs, and many companies are dedicated to the creation and use of positive digital spaces and experiences. Tech leaders and politicians can be helpful, but they are likely to have a variety of impacts given their respective interests and motivations. No one involved is a saint.”
Mary Chayko, distinguished teaching professor of communication and information at Rutgers University, commented, “Digital spaces will eventually transform for the better via the entrepreneurship of those who are motivated to imagine and build a digital infrastructure that serves the public good. These ‘new’ tech leaders will prioritize (and monetize) inclusivity rather than division as they design algorithms and platforms in which consensus can more readily emerge and collective understandings can more readily be reached. Our current problems – disinformation, abuse, fear of the ‘other’ – will not disappear, but will become less prevalent as these better and more-sophisticated digital spaces are created, become financially successful and spread.”
Melissa R. Michelson, professor of political science and dean at Menlo College, commented, “Digital life will improve by 2035, but it will first get worse. There is increasing awareness of the negative impacts of digital life today – especially the impact of social media platforms – including the degree to which these platforms sow negative partisanship, spread misinformation and generate real-world harms. However, there is not yet enough consensus about those negative effects to motivate policy makers or private social media owners to make real change.
“Until more people become aware of the real-world harms being generated by digital spaces, action will not be taken. I think within the next decade or so (hopefully sooner) there will be enough evidence of those harms to motivate internal policy changes by social media companies or regulatory changes by elected officials, either to break up the social media monopolies or to regulate their content, perhaps by making them liable for damage and violence that is advocated by or coordinated on their platforms.”
Karl M. van Meter, author of “Computational Social Science in the Era of Big Data,” said, “My ‘yes’ answer is meant to say, ‘I believe it had better improve or it will no longer exist in a recognizable form.’ This is a more-realistic answer than simply choosing ‘no.’ The availability of information (scientific info, not opinion info) and its convenient transmission is the major strength of the internet and will continue to be so, and will probably be reinforced by all institutions involved.
“The distribution of false or misleading information is the major weakness of the internet and could be its undoing unless a solution can be found. But can the institutions involved find a consensus to ‘contain’ this negative factor or even eventually find a solution, which – at present – does not seem to be forthcoming? They will have to deal clearly with the question of what ‘freedom of expression’ is, and with the established sociological finding that speech is not ‘free’ but engenders concrete results in the real world.”
Karen Yesinkus, a digital services professional, said, “Along the 14-year run up to the survey’s marker year of 2035, there will be tipping points, some of which may arrive in a cascade. These tipping points (or events) will drive major but mostly incremental change within this short time frame to the 2030s. Some of these will be positive, for instance, high-quality internet access for more citizens to information and services related to local, state and federal governments and better, broader access to healthcare and financial services. It is imperative that tech leaders and innovators working with elected and/or appointed officials are doing so on behalf of the public and in a neutral framework with solid, unified goals for outcomes.”
Mark Deuze, professor of media studies at the University of Amsterdam, The Netherlands, said, “Most people are wary of extreme and extremist positions. Social polarisation tends to occur only rarely in face-to-face situations. People know that the digital environment in general, and social media in particular, thrive on outrage, and this explains to some extent why so many are turning away from Facebook and Twitter toward more controllable, closely knit environments online – these are not echo chambers. People ultimately seek community, which is why I remain optimistic about the future. Let us not forget that the current political polarisation in the U.S. did not come from either the internet or social media – it comes from its two-party system.”
Larry Masinter, an internet and artificial intelligence pioneer who developed standards with IETF and W3C and worked at Xerox PARC, AT&T and Adobe, commented, “The rapid technological evolution and weaponization will outrun security and regulation. Cyber is a WMD (Weapon of Mass Delusion). But at the same time users will delight in technology enabled by emerging standards for and the broad arrival of Web RTC [newly arriving open-source tech that allows for a fuller realization of real-time communication] and virtual reality and augmented reality.”
John McNutt, professor of public policy and administration at the University of Delaware, wrote, “Many of the institutional spaces that remain from the industrial era 1) have ceased to function altogether, 2) are functioning at a critically low level or 3) have been taken over by oppressive factions in society. Digital spaces offer a viable alternative. This is a more-nuanced version of the second superpower argument. A more positive civil society.”
Micah Altman, an MIT-based researcher in social science and informatics, said, “Digital spaces have developed an unprecedented reach and have had unique impacts, many of which are positive. During the COVID-19 pandemic, ‘digital spaces’ have made it possible for many to work and learn safely and more effectively than most of us would have thought possible. Over the last decade, ‘digital spaces’ have made it possible for ordinary individuals to have their thoughts influence millions; for instance, writers, musicians and artists from far-flung places can engage with audiences of thousands that have a unique connection to their work.
“Digital spaces can support connection, voice and communication in the best tradition of physical public forums and social spaces. However, digital spaces are not currently real public spaces. The current generation of digital spaces relies on platforms developed by private corporations in order to generate profits by commodifying attention, influence and surveillance. At best, this business model makes the public good and meaningful individual control second-class considerations. At worst, these values are simply treated as constraints to be circumvented whenever possible.
“Digital spaces do not have to be this way. And many are now coming to realize this, especially following the revelation of the extent to which social platforms played pivotal roles in 2020 as springboards for political disinformation, amplifiers of medical misinformation, and abettors of censorship. Substantial portions of the public now realize how they have been commodified and influenced.
“Technologists, especially those coming recently from universities, are increasingly attempting to understand how the platforms, technologies and algorithms shape behavior that promotes non-commercial values. Policymakers in the European Union (and now, perhaps, in the U.S.) are paying sustained attention to the impacts of these platforms and the goals driving them. A degree of hope is warranted because the problem at the core of current digital platforms has been widely recognized.
“Much work will be required to reform our platforms so that they can provide meaningful agency to individuals, and accountability to society. The technical challenge of designing platforms that are accountable is a serious one requiring substantial research. However, the largest obstacles to progress are political, due both to the amount of influence that platforms are able to wield through lobbying and campaign finance and to partisan temptations to leverage platforms to undermine political enemies.”
Jim Kennedy, senior vice president for strategy and enterprise development at The Associated Press, responded, “Despite the clear problems digital platforms have created for accurate and authoritative information to circulate throughout society and around the world, hopeful signs are emerging that awareness of these issues is on the rise and that there will be enough pushback to begin turning the tide well before 2035 in the form of countervailing technology development, political action, government regulation and scalable forms of moderation and curation of the digital information flow.”
Jared Holt, resident fellow at the Atlantic Council’s Digital Forensic Research Lab, wrote, “I am cautiously optimistic that digital spaces will improve because I think it will be necessary to their survival. The major companies are motivated by profit, and they will lose money if they can’t keep people around. There will be a breaking point, eventually. That said, I believe change will come from outside pressures. Recommendation systems will be refined in response to criticism, privacy concerns could be addressed via government policies, and initiatives could decentralize the internet. These changes must occur, because the alternative is profoundly grim.”
Jan English-Lueck, professor of anthropology at San Jose State University and distinguished fellow at the Institute for the Future, responded, “By 2035 new public space will open up and create more misinformation but also more venues for correcting and considering the impact of that misinformation. Continued health, climate and employment crises will generate different lines of discourse. Whether or not those spaces add positively to the public and civic discourse will vary. To the degree that the current digitally immersed generation can create a new skill set for vetting, identifying and managing misinformation, the debates can be useful. More overall access and frustration with misinformation might move the conversations in the direction of transparency and accountability. The time to develop those skills and strategies is now.”
Hempal Shrestha, co-founder of the Nepal Entrepreneurs’ Hub, said, “The digital spaces will evolve to a level which is yet unimaginable at the current stage. It will for sure touch most of the people and societies of the planet earth.”
Giacomo Mazzone, secretary-general of the Eurovisioni think tank, commented, “How do I imagine this transformation of digital spaces and digital life will take place; what reforms or initiatives may have the biggest impact? I have big expectations in regard to some current activities: 1) Taxation of big tech corporations in order to abolish the competitive advantage they have benefitted from for decades which has some negative impact on other economic activities. 2) Measures to ban micro-profiling and micro-targeting for political propaganda purposes in order to prevent voter manipulation; look at how Silvio Berlusconi became three-time prime minister of Italy thanks to his media empire – Mark Zuckerberg could be president of the U.S. thanks to Facebook’s database and Whatsapp network. 3) Sanctions by European courts and privacy watchdogs against misuse of individuals’ personal data to encourage data-mining companies to change their business model. 4) Movement toward the global adoption of a common set of minimum rules on the design and application of A.I.
“How do I see tech leaders, politicians and/or public audiences playing in this evolution? Within the digital world we are finally seeing some differentiation. For instance, Mozilla realizes that the future of the internet is endangered by the predatory habits of some digital platforms and is working to lead the fight against them.
“New business models based on trust and transparency could finally emerge once today’s overwhelming competitive advantages for the largest players are abolished at the global level. The first companies that are willing to head in a better direction for humanity could intercept this will of change and could become the ‘unicorns’ of tomorrow.
“Citizens, through a new consciousness of their digital rights, could be freed from the current lose-lose deal of having to completely give up their rights to their personal data in exchange for their use of popular digital spaces. If the above-mentioned problems are not solved by 2035, we shall have Mark Zuckerberg and Jeff Bezos as U.S. presidents and we shall have a ‘splinternet’ where each dominant economic space will have its own sphere of influence over which it will exercise its data sovereignty.”
Stephen Abram, principal at Lighthouse Consulting, commented, “I believe that there is a public space that is teetering on the edge of being a bad place without the information and media literacy skills to manage your presence and feeds. That said, we know more about caves in public opinion than we ever have, and wise organizations must keep an eye on those. On the other hand, there are tools inherent in these places that can be exploited as walled gardens for defined friends lists, work teams, organizations, communities, associations, professions, academia and more. Seth Godin’s book ‘Tribes’ discusses the value of these lightly and well. It is here that I find cause for optimism and the opening up of under-heard voices and points of view.”
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, commented, “Yes! We are moving forward in transforming digital spaces and digital life. The pandemic accelerated our move. There are strong moves to bring about ubiquitous broadband in America. The need is clear; the support is bipartisan. We will see greater numbers of virtual workers and virtual learners. The freedom of the digital life will appeal to more and more Americans. Seniors will find this particularly beneficial – avoiding commutes and working flexible hours.”
Peter Levine, professor of citizenship and public affairs at Tisch College, Tufts University, said, “I really don’t know how things will look in 14 years, but I can imagine that – as a result of deep dissatisfaction with the main current platforms – some kind of better alternative will develop that competes successfully for attention, or else policies may be passed in major markets like the U.S. and European Union that improve the current platforms.”
Rich Ling, a professor of media technology at Nanyang Technological University in Singapore, responded, “It is difficult to tell what the world will be like one and a half decades into the future. This can be confirmed by looking back to about 2005 when Facebook/Twitter/AI, etc. were all still in a very nascent stage. I chose to be optimistic. I have the sense that in 15 years’ time, we will have AI applications that help us to deal with issues such as global warming. I also choose to believe that digitally mediated communication will help us to bridge gaps and increase collective solidarity for social good. In writing these words, I clearly see that there is a Pollyanna dimension to them. AI can be used for corporate profits just as easily as it can be used for public good. Social media can be used to build walls between groups and not build understanding. That said, I am betting on the positive side while recognizing that getting there will require careful navigation.”
Paul Manuel Aviles Baker, senior director for research and innovation at Georgia Tech’s Center for Advanced Communications Policy, wrote, “While many observers see problems with digital spaces, there is no clear agreement on how to fix those problems. One approach is to ask whether we are better or worse off for having them. There may be no commonly agreed-upon solution set, which constrains the public response.
“Digital spaces are a microcosm of communities in the larger physical world. As they are voluntary and participation is contractual, there may be philosophical if not practical limits to public sector intervention. This is not to say that nothing can be done, but part of the consequence of free or low-cost use is that users agree to play by the rules of the platform. Still, if enough organized efforts are generated then the threat of exit can act to influence platform change.
“Alternatively, designing and developing public digital spaces, analogous to the public radio platform, public libraries and town squares could be realized that offer alternative, regulated digital spaces. If the present path continues, I expect that digital spaces will become increasingly homogenous, with participants (based on platform algorithms) amplifying and reifying their beliefs. This serves to strengthen in-group belief systems, but gradually complicates inter-group, cross-boundary communications.”
Alison Gopnik, a professor of psychology and philosophy at the University of California-Berkeley, commented, “Previous examples of rapid media change have almost always led to negative disruptive effects initially (e.g., see Darnton’s ‘The Literary Underground of the Old Regime’). Over time, and especially over generations, however, both explicit regulation and implicit norms seem to kick in and at least ameliorate those effects. Interestingly, the narrative has always been that the effects will be worst for children and young people – see the debates over comics and TV – although empirical evidence always suggests the opposite: media effects are marginal. In fact, you could think of each generation as a technique for adapting to the technological change introduced in the previous one. Of course, this time could be different – but I don’t see any obvious reason why it should be.”
Jeremy Foote, a computational social scientist studying cooperation and collaboration in online communities at Purdue University, responded, “Every change to communication technology is accompanied by disruption and upheaval. The implications take time to reveal themselves, and organizations, institutions, and norms respond and evolve as the benefits and dangers become more apparent. Blunt legislation like the ‘right to be forgotten’ has already emerged, and further legal and algorithmic approaches will continue to be taken.
“I think (hope!) that our current concerns with disinformation and propaganda will be mostly solved in the next 15 years. I think that there is reason for optimism because these are the kinds of problems that can be ameliorated by algorithmic changes. I hope the many anti-disinformation campaigns prove successful, because the internet makes it easy to deceptively surround content with the trappings of credibility (e.g., a professional-looking website or many followers).
“I believe tech companies will be able to work in open, transparent ways to create algorithms that do better at identifying and demoting disinformation. This is a problem where politicians, tech companies and the public all have moderately well-aligned incentives and solutions seem fairly well-defined. The broader problems of tech company power are thornier. Many of the most difficult issues are already surfacing, including difficulties in operating across cultural and political boundaries. More broadly, tech companies have natural, international network effects and therefore natural monopolies, without strong enough international legislative bodies to regulate them. These tensions seem very likely to continue.”
Mark Lemley, professor of law and director of the Stanford University program in Law, Science and Technology, observed, “The great promise of the internet was openness – openness to new ideas as well as to new technologies. Anyone who wanted to connect anything to the internet could. On the technical side, that openness is threatened by the dominance of a few tech giants who tend to rely on walled gardens – limiting interoperability and trying to keep people tied into their ecosystems. That is a major step backwards for the freedom of digital spaces. A new commitment to interoperability is important both to restore the generative nature of Internet innovation and to open those walled gardens to effective competition. There are signs of interest in that commitment. I think the desire to move beyond walled gardens will grow as people come to miss the benefits the open internet brought.”
Miguel Alcaine, head of the International Telecommunication Union office in Central America, said, “I expect many relevant actors will influence and have a positive impact in our digital development: International organizations like the UN, ITU, the Internet Governance Forum, the internet technical community – including the Internet Society, the Internet Engineering Task Force and the World Wide Web Consortium, plus the internet registrars and others – and governments, whose public policy should put the people and human rights at the center.
“More positive impact will come from better digital literacy and skills in the population made possible through public policies promoted by the influencers listed above. The positive impact will also come from societies’ understanding of completely erased borders between the physical and digital worlds – they are becoming, have become, only one world. The challenge of keeping a single universal internet will remain with us. Additionally, the world needs to solve universal meaningful connectivity from the supply side and from access, affordability and usage points of view.”
Rajnesh Singh, chair at Asia Pacific Regional Internet Governance Forum, commented, “Where we end up in 2035 will depend on how things play out with the use – and abuse – of digital spaces, and what actions stakeholders end up taking as a result. This could range from regulatory action by policymakers to business- (and reputation-) driven action by the private sector to activist action.”
Christopher Savage, partner and cyberlaw specialist at Davis Wright Tremaine, responded, “I believe that over the next five to ten years social norms for online discourse will evolve that will permit more hopeful and robust positive use of online digital spaces. Abuse and disinformation will remain, but will become marginalized.”
Alex Halavais, associate professor of data and society and director of the master’s program in social technologies at Arizona State University, commented, “There will never be an anti-Facebook, a large platform that has freed itself from the influence of commercial and deceptive messaging. Scale favors these influences. But there always have been and will continue to be small, networked communities that form small pockets free from such influences. These will emerge both using new platforms and within some of these legacy platforms. I suspect that these will gain strength as some people back away from the larger platforms, but it is likely that both will continue to move forward in parallel.”
Steven Miller, professor emeritus of information systems at Singapore Management University, said, “There will continue to be both positive and negative usages and impacts of digital spaces. There is no way of knowing whether the balance of positive and negative usage will shift from the current situation over the next 15 years. However, if I have to predict whether there will be a shift toward proportionally more positive usage and less negative usage, I would rather err on the side of optimism and hope for an increase in the relative amount of positive usage. If we lose that hope, we lose the chance to see a better future. This goes beyond rational analysis. Let’s all do what we can to bring about that better future while being pragmatic and clear-eyed about what is actually happening at each step toward that future 15 years from now.”
Garth Graham, longtime communications policy activist with Telecommunities Canada, said, “From a Canadian perspective, I see a shift in attitudes that stands a chance of re-framing public policy about the uses of digital space. First, because of internet-access pressures occurring during COVID, there is a growing awareness that our 98% access to physical infrastructure masks a reality that a very large number of Canadians face significant barriers to the use of that infrastructure. The second is that Canadian municipalities are becoming aware of a need for a hugely localized ‘digital autonomy,’ in the ways information is collected, used, managed and protected. Although our municipalities aren’t there yet, they may come to realize that control of the evolution of their own digital infrastructure in the ‘public interest,’ must encompass a vision and definition that also takes the citizen’s need for autonomy and control in the digital realm into account.”
Liz Rykert, retired president and founder of Meta Strategies, observed, “Ultimately digital space gives people a place to participate and share their opinions and ideas, to work together and find connection and community. The downside comes when those opinions turn to hate and racist tactics. We need to find measures to protect people’s rights and freedoms online just as we must in the offline world. New means of accountability are required.”
Jillana Enteen, co-founder of the Digital Humanities Lab at Northwestern University, observed, “I have seen my daughters flounder over the past 18 months in school with too many social networking outlets. Rather than turning on their screens to learn, I see them multitasking on apps like Netflix and TikTok during school. On the other hand, my highly invested college students went from extreme situations at home to managing online learning well. Part of this is that my kids’ high school did not require screens, as an attempt at equity. Because I’m at a private university, my students had screens on, and they adapted multiple online tools to reimagine their classroom.”
Joly MacFie, a longtime technical leader with the Internet Society, observed, “Yes, there will be continued improvement. Where to start? Even in the last year we have seen leaps in audio algorithms and voice transcription that have massively enhanced joint online activity and the platforms it happens on, including VR. One group that is sure to benefit from this is the disabled, a class that will include many more of us as we age.”
David Bray, inaugural director of the Atlantic Council Geo Tech Center, wrote, “‘It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair.’ So begins Charles Dickens’ ‘A Tale of Two Cities’ (1859) – with the narrator later noting that this characterization of contrasting superlatives is in fact a common thread throughout all of human history. We humans always seem to want to claim that the present moment is the ultimate apex of something in human history: either something really good or something really bad. Nowadays it appears the same remains true.
“I’d like to suggest three challenges that distinguish our modern era from previous ones:
“1) The Internet May Redefine How We Organize as Large Groups – Humans are social creatures. Over the years, I’ve written about human nature and how we humans want to work in groups while at the same time facing the challenge of adapting to changing environments as groups. It’s only when things get really bad that a human group finally accepts that what worked in the past no longer works and that it must try something different, thus necessitating ‘positive change agents’ in the most adverse of situations. Pulling a plane out of an uncontrolled downward spiral is hard to do, and even harder when the plane has only 2,000 feet of elevation left before it hits the ground. The same is true for organizations that have lost their relevancy to the new demands of an era. In public talks throughout the years – dating back as far as 2003 and 2004, when I was with the Bioterrorism Preparedness and Response Program at the U.S. Centers for Disease Control and Prevention – I have raised questions about whether the internet might challenge the notion of organizing by geography. The printing press predated the Treaty of Westphalia by about 200 years, and one could argue that organizing by national sovereignty rather than by other means (be they religious or imperial in nature) was only possible once enough people had learned, through books, about philosophy, economic principles, logic and reason. The distribution of knowledge made possible by the printing press gave rise to humans choosing a new form of organizing – specifically nation-states – to meet the changing world order. Our world is now witnessing growing internet use and ubiquity (though not yet for all of the world; for more on this I recommend checking Vint Cerf’s People-Centered Internet initiative). The question is whether the internet, which transcends national borders and spans the planet, might challenge nation-states as the only way in which humans organize.
We already see transnational corporations with annual revenues as large as nation-states’ GDPs. Last year there were five companies (Google, Facebook, Apple, Amazon and Alibaba) that combined had more revenue than Russia. Each of these companies was also expected to grow its revenue much faster in the next few years. These same corporations are able to receive special privileges that would not necessarily have been possible decades ago. One wonders: How long will it be before a company asks a country for the equivalent of ‘diplomatic immunity’ in return for providing jobs and the revenue from the incomes that its locally based employees spend? Similarly, around the world we now see emergent movements, some internet-based and some ideology-based, that span geography and use the internet to spread their beliefs and values. These, too, challenge notions of the sovereignty of nation-states. Finally, the disappointment in the unfulfilled promise of globalization to ‘lift all boats’ has left large numbers of individuals in developed nation-states disillusioned with what was promised to improve their standard of living. This, too, creates pressure for new ways of organizing to meet our internet era – which might include an attempted return to strong nationalism or a continued fragmentation of nation-states into smaller and smaller groups.
“2) The Internet Challenges How We Know Truth and Each Other – Much has been written in 2020-2021 about the growing challenge of ‘fake news’ from all angles and perspectives. In 2009 I saw this problem firsthand in Afghanistan, where I was deployed. I was involved in the follow-up to an incident involving the planned detonation of a propane tank by combatants who then published photos on social media and claimed that innocent civilians had been killed by a ‘missile strike.’ I had another experience in 2013, when I transitioned from working as a senior national intelligence service executive to a senior executive and CIO at the Federal Communications Commission, and something odd happened. Multiple individuals started a Wikipedia article that initially posted fairly benign factual details about my past roles with the Intelligence Community and FCC, and then, after multiple edits, began adding additional details about myself and my wife. This struck me as odd because, first, while I recognize that as a public figure I’m exposed to whatever the public wants to critique about my work, I don’t think my wife is subject to the same – especially since I’m a non-political civil servant – and, second, some of the details would not have been easy to know through just web searches. Some editors on Wikipedia, however, didn’t necessarily agree. I created the @fcc_cio Twitter account in October 2013 so I personally could engage directly if the public had questions about what we were doing at the FCC, about why I had made the shift from the IC to the FCC, and about our focus on digital transformation. Unlike on Wikipedia, I could voice my views and respond, and, at the same time, any member of the public with a real name or pseudonym could ask me questions, respond and interact. The @fcc_cio account has been and continues to be monitored by just me, and I personally answer tweets directly because I think engagement is a way to overcome distances involving geography and perspectives.
About two years ago I noticed even more details being posted about myself and my wife, to the point that I was uncomfortable with the Wikipedia article – and it was shared that the multiple edits were believed to come from the same unknown individual. Given all of this, I requested that Wikipedia remove or ‘reboot’ the article, which was not an easy process. It took several weeks to demonstrate to Wikipedia that I was the actual person who was the subject of the article asking for it to be removed. Since that time, I have noticed other articles with edits that appear benign at first but build over time to skew the article one way or another. It makes me wonder if Wikipedia recognizes how it can be used and misused to skew current perspectives; while it strives for a ‘neutral’ point of view, is that possible in today’s era? Caveat: I am and remain a strong supporter of both transparency and ‘collective intelligence’ (aka crowdsourcing), even though I also know from the research that collective intelligence only works if all participants have the same shared goals from the start. I’m not sure that’s the case with some of the crowdsourced platforms on our internet today. I intentionally am not on Facebook, for example. Lastly, WIRED has a good summary of photo editing done over the years to make photos appear more dramatic or impactful – noting this is not a uniquely internet-based phenomenon. What the internet has done is make it easier for almost everyone to edit content to add more smoke, more sirens or even more missiles being launched, akin to what apparently happened in 2008.
“3) The Internet Warps Our Sense of How Social Change Happens – ‘Distraction’ is a wonderful book by Bruce Sterling. The title sums up my third thought about how the challenges of our modern era differ from other challenges that humanity has faced in the past and overcome: We are increasingly distracted at an increasing cost, including the loss of focused thought and the cost to organizations of shifting between multiple tasks. When I wrote about this as a Ph.D. student in 2007, the cost of distractions in the workplace was estimated at $588 billion a year. The cost is presumably much higher now. At the same time, the amount of video content being uploaded to the internet continues to increase dramatically – when I gave a talk in 2014, an estimated 72+ hours of YouTube video were uploaded every minute. By 2016 that number, as reported by an official with Google, was more than 500 hours of YouTube video uploaded every minute, and it was growing exponentially. To be certain, a lot of the video probably includes cute cats; yet the videos also include ‘video bloggers’ (vloggers), non-syndicated news programs and other forms of narration about our changing times. To some degree humanity has always pondered what makes one particular era different from another. As Charles Dickens begins in ‘A Tale of Two Cities,’ we always think our specific era is both the best and worst compared to all that came before. The hundreds of thousands of videos and films being posted online today show us only snippets of how real social change happens. We get the shiny highlights, and perhaps a little character development if we’re lucky, before the credits roll and we feel like the world changed in less than two hours. Real, meaningful social change is much harder. It requires working across different groups and teams, often with different agendas, beliefs and views of the world. Real, meaningful social change requires sharing and refining narratives to bring folks together.
It requires identifying what motivates different groups and incentivizes them to make lasting change happen – sometimes it is the promise of better outcomes, sometimes the promise of financial returns, sometimes political returns, sometimes thinking about the future and the next generation, and sometimes the creation of a common understanding of the present and the needs of the now. Meaningful change involving groups of humans is never simple, never easy and never done overnight. The internet, in all of its videos, social media and other distractions, may make us feel more connected and informed about what’s going on, but real, meaningful social change takes hard work, time and committed change agents across organizations.”