Elon University

Credited Responses: The Best / Worst of Digital Future 2035

This page holds thousands of predictions and opinions expressed by experts who agreed to have their comments credited in a canvassing conducted from December 27, 2022, to February 21, 2023, by Elon University’s Imagining the Internet Center and Pew Research Center. These experts were asked to respond with their thoughts about what are the BEST AND WORST CHANGES likely to occur by 2035 in digital technology and humans’ uses of digital systems. 

Results released June 21, 2023 – Internet experts and highly engaged netizens participated in answering a survey fielded by Elon University and the Pew Internet Project between December 27, 2022, and February 21, 2023. Some respondents chose to identify themselves; some chose to be anonymous. We share the for-credit respondents’ written elaborations on this page. Workplaces are attributed for the purpose of indicating a level of expertise; statements reflect personal views.

This page does NOT hold the full report, which includes analysis, research findings and methodology. Click here to read the full report. In order, this page contains only: 1) the research question in brief; 2) a brief outline of the most common themes found among both anonymous and credited experts’ remarks; 3) the submissions from respondents to this canvassing who agreed to take credit for their remarks. (Anonymous responses are found here.)

The Prompt: The best and worst of digital life in 2035: We seek your insights about the future impact of digital change. This survey contains three substantive questions about that. The first two are open-ended questions. The third asks how you feel about the future you see.

The first open-ended question: As you look ahead to 2035, what are the BEST AND MOST BENEFICIAL changes that are likely to occur by then in digital technology and humans’ use of digital systems? We are particularly interested in your thoughts about how developments in digital technology and humans’ uses of it might improve human-centered development of digital tools and systems; human connections, governance and institutions; human rights; human knowledge; and human health and well-being.

The second open-ended question: As you look ahead to the year 2035, what are the MOST HARMFUL OR MENACING changes that are likely to occur by then in digital technology and humans’ use of digital systems? We are particularly interested in your thoughts about how developments in digital technology and humans’ uses of it are likely to be detrimental to human-centered development of digital tools and systems; human connections, governance and institutions; human rights; human knowledge; and human health and well-being.

The third and final question: On balance, how would you say that the developments you foresee in digital technology and uses of it by 2035 make you feel? (Choose one option.)

  • More excited than concerned
  • More concerned than excited
  • Equally excited and concerned
  • Neither excited nor concerned
  • I don’t think there will be much real change

Results for third question – regarding the respondents’ general mood in regard to the changes they foresee by 2035:

  • 42% of these experts said they are equally excited and concerned about the changes in humans-plus-tech evolution they expect to see by 2035
  • 37% said they are more concerned than excited about the change they expect
  • 18% said they are more excited than concerned about expected change by 2035
  • 2% said they are neither excited nor concerned
  • 2% said they don’t think there will be much real change by 2035

Click here to download the print version of the “Best and Worst Digital Change” report

Click here to read the full “Best and Worst Digital Change” report online

Click here to read anonymous responses to this research question

Common themes found among the experts’ qualitative responses:

Some 37% of these experts said they are more concerned than excited about coming technological change and 42% said they are equally concerned and excited. They spoke of these fears:

* The future of human-centered development of digital tools and systems: The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. These experts worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.

* The future of human rights: These experts fear new threats to rights will arise as privacy becomes harder if not impossible to maintain; they cite surveillance advances, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, advanced facial-recognition systems and widening social and digital divides as looming threats. They foresee crimes and harassment spreading more widely, and the rise of new challenges to humans’ agency and security. A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to the loss of jobs, resulting in a rise in poverty and the diminishment of human dignity.

* The future of human knowledge: They fear that the best of knowledge will be lost or neglected in a sea of mis- and disinformation, that the institutions previously dedicated to informing the public will be further decimated, and that basic facts will be drowned out amid entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills will decline. In addition, they argued that “reality itself is under siege” as emerging digital tools convincingly create deceptive or alternate realities. They worry that a class of “doubters” will hold back progress.

* The future of human health and well-being: A share of these experts said humanity’s embrace of digital systems has already spurred high levels of anxiety and depression and predicted things could get worse as technology embeds itself further in people’s lives and social arrangements. Some of the mental and physical problems could stem from tech-abetted loneliness and social isolation; some could come from people substituting tech-based experiences for real-life encounters; some could come from job displacements and related social strife; and some could come directly from tech-based attacks.

* The future of human connections, governance and institutions: The experts who addressed these issues fear that norms, standards and regulation around technology will not evolve quickly enough to improve the social and political interactions of individuals and organizations. One overarching concern: a trend towards autonomous weapons and cyberwarfare and the prospect of runaway digital systems. They also said things could worsen as the pace of tech change accelerates. They expect that people’s distrust in each other may grow and their faith in institutions may deteriorate. This, in turn, could deepen already undesirable levels of polarization, cognitive dissonance and public withdrawal from vital discourse. They fear, too, that digital systems will be too big and important to avoid, and all users will be captives.

Some 18% of these experts said they are more excited than concerned about coming technological change and 42% said they are equally excited and concerned. They shared their hopes for beneficial change in these categories:

* The future of human-centered development of digital tools and systems: The experts who cited tech hopes covered a wide range of likely digital enhancements in medicine, health, fitness and nutrition; access to information and expert recommendations; education in both formal and informal settings; entertainment; transportation and energy; and other spaces. They believe that digital and physical systems will continue to integrate, bringing “smartness” to all manner of objects and organizations, and expect that individuals will have personal digital assistants that ease their daily lives.

* The future of human rights: These experts believe digital tools can be shaped in ways that allow people to freely speak up for their rights and join others to mobilize for the change they seek. They hope ongoing advances in digital tools and systems will give more people more access to resources, help them communicate and learn more effectively, and give them access to data in ways that will help them live better, safer lives. They urged that human rights must be supported and upheld as the internet spreads to the farthest corners of the world.

* The future of human knowledge: These respondents hope to see innovations in business models; local, national and global standards and regulation; societal norms; and digital literacy that will lead to the revival and elevation of trusted news and information sources in ways that attract attention and gain the public’s interest. Their hope is that new digital tools and human and technological systems will be designed to assure that factual information will be appropriately verified, highly findable and well-updated and archived.

* The future of human health and well-being: These experts expect that the many positives of digital evolution will bring a healthcare revolution that enhances every aspect of human health and well-being. They emphasize that full health equality in the future should direct equal attention to the needs of all people while also prioritizing their individual agency, safety, mental health and privacy and data rights.

* The future of human connections, governance and institutions: The hopeful experts said society is capable of adopting new digital standards and regulation that will promote pro-social digital activities and minimize anti-social activities. They predict that people will develop new norms for digital life and foresee them becoming more digitally literate in social and political interactions. They said in the best-case scenario these changes could influence digital life toward promoting human agency, security, privacy and data protection.

Responses from those preferring to take credit for their remarks. Some are longer versions of expert responses contained in shorter form in the survey report.

Following are the responses from survey participants who chose to take credit for their remarks in the survey. (Anonymous responses are published on a separate page.) The respondents were asked two qualitative questions: “What are the BEST AND MOST BENEFICIAL changes, and what are the MOST HARMFUL AND MENACING changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems?”

Some of the experts answered only one of the two questions. Some answered both in one response rather than responding separately to each. Some respondents chose not to provide any written elaboration, responding only to the closed-ended question; those responses are not included here, only respondents’ written remarks.

The statements are listed in random order. The written remarks are these respondents’ personal opinions; names of their workplaces are published only to indicate the locus of their expertise and do not represent their employers’ point of view.

Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, predicted, “Improvement will come from shrewd management of the Internet’s own way of making known human conduct and motivation act through technology: mass scaling/hyperconnectivity; identity management; transjurisdictional arbitrage; barrier lowering; friction reduction; and memory+oblivion.

“As long as these factors are managed for improvement, they can help identify advance warnings of ways in which digital tools may have undesirable side effects. An example: phishing grows on top of all six factors, while increasing friction is the single intervention that provides the best cost/benefit ratio.

“Improvements come through human connections that may cross many borders between and within societies. They throw a light on human rights and enhance them, while effecting timely warnings about potential violations, creating an unprecedented mass of human knowledge while getting multiple angles to verify what goes on record and correct misrepresentations (again a case for friction).

“Health outcomes are improved through the whole cycle of information: research, diffusion of health information, prevention, diagnostics and remediation/mitigation considering the gamut of social determination of health.

“Education may improve through scaling, personalization and feedback. There is a fundamental need to make sure the Right to Science becomes embedded in the growth of the Internet and cyberspace in order to align minds and competences within the age of the technology people are using. Another way of putting this: We need to close the gap – right now 21st century technology is in the hands of people and organizations with 19th century mentalities and competences, starting with the human body, microbes, electricity, thermodynamics and of course computing and its advances.”

Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, commented, “The same set of factors that can map what we know of human motivation for improvement of humankind’s condition can help us identify ways to deal with the most harmful trends emerging from the Internet.

“Speed is included in the Internet’s mass scaling and hyperconnectivity, and the social and entrepreneurial pressure for speed leaves little time to analyze and manage the negative effects of speed, such as unintended effects of technology, ways in which it can be abused and, in turn, ways to correct, mitigate or compensate against these effects.

“Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of expansion of the Internet makes it easy to identify dissidents and to attack them with increasingly extensive, disruptive and effective damage that extends into physical and social space.

“A long-term, concerted effort in societies will be necessary to harness the development of tools whose misuse is increasingly easy. The effectiveness of these tools’ incursions remains based both on the tool and on features of the victim or the intermediaries, such as naïveté, lack of knowledge, lack of Internet savvy and the need to juggle too many tasks at the same time between making a living and acquiring dominion over cyber tools.”

James Hendler, director of the Future of Computing Institute at Rensselaer Polytechnic Institute, said, “We have reached the point where major approaches to the global challenges to humankind – climate change, fresh water, health and wellness, etc. – will require a new generation of computing which will include the integration of heterogeneous systems including supercomputing, specialized AI hardware, and by 2035, quantum computing. In addition, these challenges will require scaling in many new ways – billions of sensors contributing to distributed learning systems, reduced precision devices that can scale computation without corresponding scaling of energy consumption, and many other new technologies. From a theoretical point of view, new foundations will be needed for researchers to understand the next generations of computational fabric that will allow these advances.

“I am encouraged by the growing realization in academic, industrial and increasingly government circles that research and development must go into this kind of interdisciplinary work, which will combine theory, engineering, and social sciences (to understand the policy implications that new models bring). The notion of Ph.D. research tightly tied to departments will have to give way to increasingly interdisciplinary efforts focused on the grand challenges.

“If successful, I would expect that health technology will be one of the first areas to benefit as the new computational approaches are well-suited to scaling genomic and proteomic research. While I am still pessimistic about major breakthroughs in climate change per se, I believe major work will be done in the impacts of climate change on infrastructure and the mitigations thereof.

“Finally, the new generation of AI technologies, which still are not living up to their hype, when coupled with both humans who better understand the limitations, and heterogeneous systems that will be needed to support ever larger models, hold tremendous potential to help human scientists to solve these problems with ever larger data scale underlying the analytics.”

James Hendler, director of the Future of Computing Institute at Rensselaer Polytechnic Institute, observed, “There are a number of well-known quotes from scientists who used to claim the key to controlling climate change was better modeling, but now believe the issue is primarily political, beyond the edges of computation and the like. As I watch the evolution of powerful technologies, my optimism about the future of computing is counter-balanced (if not outweighed) by my cynicism that the political world will be able to control the negative impacts.

“In the more capitalist societies, the political power of the wealthy continues to grow, and thus those least impacted by the problems have the most power that could be wielded to solve them. In more authoritarian governments, we see oligarchs and power seekers controlling the very politics that are needed to solve the problems – solutions likely to come at a cost to themselves.

“We need to find new ways to teach technologists to speak to politicians and the powerful, we need people to understand that we have only one world to live in, and we need the political will such that as scientific innovation is achieved, the will to implement it must be concurrently developed. The new foundations of computing must include educating students in policy, public administration and implementation that focuses not on personal enrichment, but on planetary good.

“The progress made in technology in the coming decade will only help solve the real problems if we can align the technical with the social and create a movement of scientists who can understand and explain the realities.

“What Rachel Carson did with ‘Silent Spring’ in raising awareness of pesticide dangers must become something valued among scientists. We can have impact, but not by living in ivory towers or working solely on wealth generation – we must train a generation of technologists who understand not just the science, but the social impacts that go with them.

“Just as bioethics grew as an increasingly important part of the biological research world, motivated to a large degree by the horrors perpetrated in World War II, we must realize that we live in a time where the ethics of algorithms and technologies cannot be ignored.”

Bart Knijnenburg, associate professor and researcher on privacy decision-making and recommender systems at Clemson University, predicted, “I am hoping that the gap between AI’s appearance and capabilities will shrink, thereby improving the usability of our interactions with AI systems and making our interactions with complex digital systems more intuitive. In our current interaction with AI systems there is a mismatch between the appearance of the systems (very humanlike) and their capabilities (still lagging far behind real humans). People tend to use the human-likeness of these systems as a shortcut to infer their capabilities, which leads to usability issues. I am hoping that advanced AI systems provide a more powerful and efficient interface to such knowledge. While we currently think of generative AI (e.g., GPT4) as the key to the future, I would like to see a shift toward a more explicit goal of summarizing and integrating existing sources of human knowledge as a means to more robustly answer complex user queries.

“In terms of human rights, I hope that AI systems can increasingly free human workers from menial (mental) tasks. Ideally, teaming with AI systems would make human work more interesting, rather than simply more demanding.

“In terms of human health and well-being, I would like to see AI systems that take a ‘digital twin’ approach to modeling the mental state of a human user, where the AI serves as an intuitive interface for the user to interpret and critically reflect upon their personal mental state.”

Bart Knijnenburg, associate professor and researcher on privacy decision-making and recommender systems at Clemson University, said, “In terms of human-centered development, I am worried that the complexity of the AI systems being developed will harm the transparency of our interaction with these systems. We can already see this with current voice assistants: they are great when they work well, but when they don’t do what we want it is extremely difficult to find out why.

“In terms of human rights and human health/happiness, I worry that a capitalist exploitation of AI technology will increase the expectations of human performance, thereby creating extra burden on human workers rather than reducing it. For example: while theoretically the support of an AI system can make the work of an administrative professional more meaningful, I worry that it will lead to a situation where one AI-assisted administrative worker will be asked to do the job of 10 traditional administrative workers.

“In terms of human knowledge, I worry that the products of generative AI will become indistinguishable from actual human-produced knowledge. This has severe consequences for data integrity (e.g., there have already been several example situations where GPT4 generates answers that look smart but are actually very wrong – will a human evaluator of AI answers be able to detect such errors?) and authenticity (e.g., how do we know for sure that this Pew survey is being answered by real humans, rather than bots?).”

Mojirayo Ogunlana, principal partner at M.O.N. Legal in Abuja, Nigeria, and founder of the Advocates for the Promotion of Digital Rights and Civic Interactions Initiative, wrote, “Human-centered development of digital tools and systems will take place – safely advancing most human progress in these systems. There will be an increase in technological advancement, including a phenomenal rise in encryption and in technologies that would evade governments’ intrusion and detection.”

Mojirayo Ogunlana, principal partner at M.O.N. Legal in Abuja, Nigeria, and founder of the Advocates for the Promotion of Digital Rights and Civic Interactions Initiative, predicted, “The internet space will become truly ungovernable. As governments continue to use harmful technologies to invade people’s privacy, there will also be an increase in the development of technologies able to evade governments’ intrusion, which will invariably leave power in the hands of people who may use this as a tool for committing crimes against citizens and their private lives. Digital and human rights will continue to be endangered as governments continue to take decisions based on their own selfish interests rather than for the good of humanity. Consider the Ukraine/Russia war in this context.”

David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, wrote, “To have an optimistic view of the future you must imagine several potential positives coming to fruition to overcome big issues:

  • The currently rapid rate of change slows, helping us to catch up.
  • The Internet becomes much more accessible and inclusive, and the numbers of the unserved or poorly served become a much smaller fraction of the population.
  • Over the next 10 years the character of critical applications such as social media matures and stabilizes, and users become more sophisticated about navigating the risks and negatives.
  • Increasing digital literacy helps all users to better avoid the worst perils of the Internet experience.
  • A new generation of social media emerges, with less focus on user profiling to sell ads, less emphasis on unrestrained virality and more of a focus on user-driven exploration and interconnection.
  • And the best thing that could happen is that application providers move away from the advertising-based revenue model and establish an expectation that users actually pay. This would remove many of the distorting incentives that plague the ‘free’ Internet experience today.

“Consumers today already pay for content (movies, sports and games, in-game purchases and the like). It is not necessary that the troublesome advertising-based financial model should dominate.”

David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, commented, “I fear that the next 10 years may see many negative trends in the Internet experience. The current abuse of social media for manipulative purposes is going to bring greater government attention to the experience, which may lead to a period of turbulent regulation with inconsistent character across the globe. The abuse of social media may lead to continued polarization of societies, which will have an uncertain but potentially dramatic effect on the nature of the Internet and its apps.

“The use of the Internet as a tool for inter-state conflict (and conflict between state and non-state actors) may have increasing real-world consequences. We may see increasing restriction of cross-border interaction at the application layer. Attacks and manipulation of online content may overwhelm the ability of defenders to maintain what they consider a factually grounded basis, and sites like Wikipedia may become less trustworthy.

“Those who view the Internet as a powerful tool for social action may come to realize that social movements have no special claim to the Internet as a tool – governments may have been slow to understand the power of the Internet but are learning how to shape the Internet experience of their citizens in powerful ways. The Internet can either become a tool for freedom or a tool for repression and manipulation, and we must not underestimate the motivation and capabilities of powerful organized actors to impose their desired character on the Internet and its users.”

S.B. Divya, an author, editor and electrical engineer, and a Hugo and Nebula Award nominee for “Machinehood,” said, “By 2035, I hope to see good advances in areas of biotechnology – especially in terms of gene therapy and better treatments for viral infections – that arise as a result of better computational modeling. I also anticipate seeing alternatives for antibiotics when treating bacterial infections. Medical diagnostics will make greater use of noninvasive techniques like smart pills and machine intelligence-based imaging.

“I expect to see a wave of new employment in areas involving AI-based tools, especially for people to harness these tools and elicit useful results from them. I think we’ll continue to see rapid improvement in the capabilities of such tools, including new systems that integrate multiple modalities such as computer vision, audio and robotic motion. By 2035 we could see robots that can interact naturally with humans in service roles with well-defined behaviors and limited range of motion, such as ticket-taking or checking people in at medical facilities.

“I hope to see the internet and social media being put to use to address climate migration and refugee challenges. Microloans, crowdfunding and other types of grassroots charity will continue to expand as the needs become greater and require more rapid and dynamic deployment. In terms of governance, we might start to see effective regulation qualifying the accuracy of digital information. This might also end up being decentralized, with crowdsourced metrics of ‘truth’ or ‘reliability’ for content across the web.”

S.B. Divya, an author, editor and electrical engineer, and a Hugo and Nebula Award nominee for “Machinehood,” commented, “By 2035, I expect that we will be struggling with the continued erosion of digital privacy and data rights as consumers trade ever-increasing information about their lives for social conveniences. We will find it more challenging to control the flow of facts, especially in terms of fabricated images, videos and text that are indistinguishable from reliable versions. This could lead to greater mistrust in government, journalists and other centralized sources of news. Trust in general is going to weaken across the social fabric.

“I also anticipate a wider digital divide – gaps in access to necessary technology, especially those that require a high amount of electricity and maintenance. This would show up more in commerce than in consumer usage. The hazards of climate change will exacerbate this burden, since countries with fewer resources will struggle to rebuild digital infrastructure after storm damage.

“Human labor will undergo a shift as AI systems get increasingly sophisticated. Countries that don’t have good adult education infrastructure will struggle with unemployment, especially for older citizens and those who do not have the skills to retool. We might see another major economic depression before society adjusts to the new types of employment that can effectively harness these technologies.”

Glenn Grossman, a consultant in banking analytics at FICO, said, “Advances in AI and data-driven decision-making can lead to improvements in the quality of many sectors of our culture and economy. In our current state, technology complements many human-driven processes. With the appropriate use of data-driven technology, improved decisions can be achieved in all sectors. Healthcare decisions can deliver improved health, especially where access to care is today an essentially human-driven operation. Consider legal services, where those with fewer resources may be able to obtain services that today are a greater challenge to access.”

Glenn Grossman, a consultant in banking analytics at FICO, commented, “Advances in AI and data-driven decision-making can, when designed with biased data, cause harm to individuals. There is also a concern that many professions will be disrupted by new technologies; this may occur, but often we see that new technologies create new jobs. The transition can be difficult if some do not retrain.”

Satish Babu, a pioneering internet activist based in India and longtime participant in ICANN and IEEE activities, predicted, “The outstanding gains will be made in:

  • Digital communications – advances in mobile devices, such as battery capacity, direct satellite connectivity and more.
  • Health and well-being – sensors and measurements, health data privacy, diagnosis and precision medicine.
  • Rights, governance and democracy – direct democracy, tracking of rights and the right to information.
  • Recreation – improvements in simulated reality, virtual reality, mixed reality and augmented reality.”

Satish Babu, a pioneering internet activist based in India and longtime participant in ICANN and IEEE activities, said, “There will be many major concerns in the years ahead. Social media and fake news will become more of a problem, enabling the hijacking of democratic institutions and processes. There will continue to be insufficient regulatory control over Big Tech, especially for emerging technologies. There will be more governmental surveillance in the name of ‘national security.’ There will be an expansion of data theft and unauthorized monetization by tech companies. More people will become attracted by and addicted to gaming, and this will lead to self-harm. Cyber harassment, bullying, stalking and the abetment of suicide will expand.”

Beneficial and Harmful
Paul Jones, professor emeritus at UNC-Chapel Hill School of Information and Library Science, commented, “There is a specter haunting the internet – the specter of artificial intelligence. All the powers of old thinking and knowledge production have entered into a holy (?) alliance to exorcise this specter: frenzied authors, journalists, artists, teachers, legislators and, most of all, lawyers. We are still waiting to hear from the Pope.

“In education, we used to teach people how to use computers. Now, we teach computers how to use people. By aggregating all that we can of human knowledge production in nearly every field, the computers can know more about humans, as a mass and as individuals, than we can know of ourselves.

“The upside is that these knowledgeable computers can provide, and will quickly provide, better access to health, education and, in many cases, art and writing for humans. The cost is a loss of personal and social agency at the individual, group, national and global levels.

“Who wouldn’t want the access? But who wouldn’t worry, rightly, about the loss of agency?

“That double desire is what makes answering these questions difficult. ‘Best and most beneficial’ and ‘most harmful and menacing’ are not so much opposites as co-joined – like conjoined twins sharing essential organs and blood systems. Unlike for some such twins, no known surgery can separate them.

“Just as cars gave us, over a short time, a democratization of travel and at the same time became major agents of death – immediately in wrecks, more slowly via pollution – AI and the infrastructure to support it will give us untold benefits and access to knowledge while causing untold harm.

“We can predict somewhat the direction of AI, but more difficult will be how to understand the human response. Humans are now, or will soon be, co-joined to AI even if they don’t use it directly. AI will be used on everyone just as one need not drive or even ride in a car to be affected by the existence of cars.

“Major AI changes will emerge when AI possesses these traits:

  • Distinctive presences (AKA voices, but also avatars personalized to suit the listener/reader in various situations). These will be created by merging distinctive human writing and speaking voices – say, Bob Dylan + Bruce Springsteen.
  • The ability to emotionally connect with humans (AKA presentation skills).
  • Curiosity. AI will do more than respond. It will be interactive and heuristic, offering paths that have not yet been offered – we have witnessed this AI behavior in the playing of Go and chess. AI will continue to present novel solutions.
  • A broad and unique worldview. Because AI can be trained on all digitizable human knowledge and can avail itself of information from sensors more in variance with those open to humans, AI will be able to apply, say, Taoism to questions about weather.
  • Empathy. Humans do not have an endless well of empathy. We tire easily. But AI can seem persistently and constantly empathetic. You may say that AI empathy isn’t real, but human empathy isn’t always either.
  • Situational Awareness. Thanks to input from a variety of sensors, AI can and will be able to understand situations even better than humans.

“No area of knowledge work will be unaffected by AI and sensor awareness.

“How will we greet our robot masters? With fear, awe, admiration, envy and desire.”

John Verdon, a retired Canada-based complexity and foresight consultant, said, “Imagine a federally funded foundation (the funding will be no issue because the population is becoming economically literate with Modern Monetary Theory). The foundation would be somewhat along the lines of DARPA, but it would only seed and shape the development of open-source tools, devices and platforms in order to strengthen and fertilize a flourishing infrastructure of digital commons. Let your imagination run free, keeping in mind that every light will cast shadows and every shadow is cast by some light.”

John Verdon, a retired Canada-based complexity and foresight consultant, commented, “The worst change will be the enclosure of the programmable lifeworld by private property rights and the inevitable failures, inequalities, rapacious extractions of value and dampening of response-abilities for the flourishing of our world.”

Beneficial and Harmful
Tom Valovic, journalist and author, wrote, “AI and ChatGPT are major initiatives of a technocratic approach to culture and governance which will have profound negative consequences over the next 10 years. If there’s one dominant theme that’s emerged in my many years of research, it’s parsing the ingrained tension between the waning humanities and the rising technology regimes that absorb us.

“It’s impossible to look at these trends and their effects on our social and political life without also including Silicon Valley’s push toward Transhumanism. We see this in the forward march of AI in combination with powerful momentum toward the metaverse. That’s another contextual element that needs to be brought in. I see the limitations of human bandwidth and processing power to be problematic. I worry about the implications of an organic, evolving, complex, adaptive, networked system that may route around slow human processors and take on an existence of its own. This is an important framework to consider when imagining the future.

“When we awake from this transhumanist fever dream of human perfection that bears little resemblance to the actual world we’ve managed to create, I think steady efforts at preserving the core values of the humanities will have proved prescient. This massive and imposed technological infusion will be seen as a chimera. Perhaps we’ll even learn how to use some of it wisely.

“I do think that AI is going to force some sort of omega point, some moment of truth past this dark age where the necessary balance between technology, culture and the natural world is restored. Sadly, it’s a question of how much ‘creative destruction’ is needed to arrive at this point. With luck (and effort), I believe there will be a developing understanding that while hyper-technology appears to be taking us to new places, in the long run it’s actually resurrecting older, less desirable paradigms – a kind of cultural sleight of hand (or enantiodromia?).

“I found this observation from Kate Crawford, founder of the AI Now Institute at NYU, to be useful along these lines: ‘Artificial intelligence is not an objective, universal or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural and economic worlds shaped by humans, institutions and imperatives that determine what they do and how they do it. They are designed to discriminate, to amplify hierarchies and to encode narrow classifications.’

“If ChatGPT thinks and communicates, it’s because programmers and programs taught it how to think and communicate. Programmers have conscious and unconscious biases and possibly, like any of us, faulty cognitive assumptions that necessarily get imported into platform development. However sophisticated that process or program becomes, it will still be capable of the unintended consequences of human error, even as those errors present to the end user as machine-based once the sequences propagate. These errors can be hidden and perpetuated in code. If at some point the system learns on its own (and I’m just not familiar enough with its genesis to know whether that’s already the case), then it will be fully capable of making and communicating its own errors. (That’s the fascinating part.)

“In the current odd cultural climate, we’re all hungry to go back to a world where the ‘truth’ was not so maddeningly malleable. The idea of truth as some sort of objective reality based on purely scientific principles is, in my opinion, a chimera and an artifact of our Western scientific materialism. And yet we still keep chasing it. As Thomas Kuhn pointed out in his books on the epistemology of science, scientific knowledge is to a large extent a social construct, and that’s a fascinating rabbit hole to go down.

“As we evolve, our science evolves. In that sense, no machine, however sophisticated, will ever be able to serve as some kind of ultimate arbiter of what we regard as truth. But we might want to rely on these systems for their opinions and ability to make interesting connections (which is, of course, the basis for creative thinking) or not leave important elements of research out (which happens all the time in academic and scientific research, of course). But the caution is not to be seduced by the illusion of these systems serving up true objectivity. The ‘truth’ will always be a multifaceted, complex, socially constructed artifact of our very own human awareness and consciousness.

“The use of sophisticated computer technology to replace white-collar and blue-collar workers has been taking place for quite a while now. It will become exponentially greater in scope and momentum going forward. The original promise of futurists back in the day (the 1960s and ’70s) was that automation would bring about the four-day work week and eventually a quasi-utopian ‘work less/play more’ social and cultural environment. What they didn’t factor in was the prospect of powerful corporations latching onto these new efficiencies to feather their nests to the exclusion of all else, and the lack of appropriate government oversight resulting from the merging of corporate and government power.”

Lawrence Lannom, vice president at the Corporation for National Research Initiatives, wrote, “The first and, from my perspective, the most obvious benefit of improved digital technology to the world of 2035 will be the improvements in both theoretical and applied science and engineering.

“We have gone from re-wiring patch panels in the 1940s, to writing assembly language, to higher-level languages, to low code and no code, and now on to generative AI writing code to improve itself. It has been 80 years since the arrival of the first real software, and the pace is accelerating. The changes are not just about increased computing power and ease of programming, but equally or even more importantly, networking capability.

“We routinely use networked capabilities in all aspects of digital technology, such that we can now regard the network as a single computational resource. Combine compute and network improvements with those in storage capacity and, to a first level of approximation, we can expect that by 2035 all data will be available and actionable with no limits on computing power.

“A great many challenges remain, mostly in the areas of technical and semantic interoperability, but these problems are being addressed.

“All of this new ability to collect and correlate vast amounts of data, run large simulations and, in general, provide exponentially more powerful digital tools to scientists and engineers will result in step changes in many areas, including materials science, biology, drug development, climatology and our basic understanding of how the world works and how we can approach problems that currently appear insoluble.

“Collaboration will continue to improve as virtual meetings move from the flat screen to a believable sense of being around the same table in the same room using the same white board. AI assistants will be able to tap the collective resources of humankind to help guide discussion and research. The potential for improvements in the human condition is almost unimaginable, even at the distance of 10-12 years. The harder question is whether we are capable of applying new capabilities for our collective betterment.”

Larry Lannom, vice president at the Corporation for National Research Initiatives, observed, “In thinking about the potential harm that exponentially improved digital technologies could wreak by 2035, I find that I have two levels of concern.

“The first is the fairly obvious worry that advanced technologies could be used by malevolent actors – at the state, small-group or individual level – to cause damage beyond what they could achieve with today’s tools. AI-based autonomous weapons, new pathogens, torrents of misinformation precision-crafted to appeal to their recipients and total state-level intrusion into the private lives of the citizenry are just some of the worrying possibilities that are all too easy to imagine evolving by 2035.

“A more insidious worry, however, is the potential erosion of trust at all levels of society and government. More and more of our lives are affected by or even lived in the digital realm, and as that environment increases in size and sophistication, it seems likely that its impact will increase. But digital reality is much more amenable to distortion and manipulation than even the worst human-level deception.

“The ability of advanced computing systems of all kinds to convincingly generate fake audio and video representations of any public figures, to generate overwhelming amounts of reasonable-sounding misinformation, and to use detailed personal information, gathered legally or illegally, to craft precision messaging for manipulation beyond what can be done today could contribute to a complete lack of trust at all levels of society. Once trust is lost, it is difficult to reclaim.”

Josh Calder, partner and founder at The Foresight Alliance, wrote, “Proliferating devices and expanding bandwidth will provide an ever-growing majority of humanity access to immense information resources. This trend’s reach will be expanded by rapid improvements in translation beyond the largest languages. Artificial intelligence will enable startling new discoveries and solutions in many fields, from science to management, as patterns invisible to humans are uncovered.”

Josh Calder, partner and founder at The Foresight Alliance, predicted, “Access to quality, truthful information will be undermined by information centralization, AI-produced fakes and propaganda of all types and the efforts of illiberal governments. Getting to high-quality information may take more effort and expense than most people are willing or able to invest. Centralized, cloud-based knowledge systems may enable distortion or rewriting of reality – at least as most people see it – in a matter of moments. Also key to the future is AI and automation’s impact on people. A scenario remains plausible in which growing swathes of human work are devalued, degraded or replaced by automation, AI and robotics, without countervailing social and economic structures to counteract the economic and social damage that result. The danger may be even more acute in the developing world than in richer countries.”

Jane Gould, founder of DearSmartphone, commented, “With the speed and rapid diffusion of information between academic researchers and scientists, the foundations of science and technology will grow rapidly. Even those who cannot contribute to this knowledge base will gain from the progress made in technological solutions. However, there is a lot of room for deception and misinformation. In the less-scientific communities, we seem to be moving to a more image-based way of processing data. I am not an expert in cognitive learning, but I know processing images takes less cognitive work than writing and reading. So, the seeds for change will rest even more than they do today on an elite, well-educated, well-versed scientific community.”

Jane Gould, founder of DearSmartphone, responded, “We have been rewriting the concept of screen time and exposure. This trend began in the 2000s, but the introduction of mobility – iPhones and mobile apps – in 2007 accelerated the change. We are rewriting childhood for youngsters ages 0 to 5, and not in healthy ways. All infants must go through discrete stages of cognitive and physical growth. There is nothing we can do to speed these up, nor should we. Yet from their earliest moments we put young babies in front of digital devices and use them to entertain, educate and babysit them. These devices use artifices like bright lights and colors to hold their attention, but they do not educate them in the way that thoughtful, watchful parents can. More than anything else, these electronics keep children from playing with the traditional hand-held toys and games that use all five senses to keep babies busy and engaged in play and two-way exchanges. Meanwhile, parents are distracted and pay less attention to their infants because they stay engaged with their own personal phones and touchscreens.”

Michael Kleeman, a senior fellow at the University of California, San Diego, who previously worked for Boston Consulting and Sprint, predicted, “Basic connectivity will expand to many more people, allowing access to a range of services that in many places are only available to richer people. This will likely increase transparency, with a dual effect: greater pressure on governments to be responsive to citizens, and greater ability for those who know how to manipulate information to sway opinions with seeming truths.”

Michael Kleeman, a senior fellow at the University of California, San Diego, who previously worked for Boston Consulting and Sprint, responded, “AI-enabled fakes of all kinds are a danger. We will face the risk of these undermining the basic trust we have in remote communications if not causing real harm in the short run. The flip side is they will create a better informed and more nuanced approach to interpreting digital media and communications, perhaps driving us more to in-person interactions.”

Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, predicted, “In 2035, there will be more and better ways to organize and understand the vast amount of digital information we consume every day. It will be easier to export data in machine-readable formats, and there will be more programs to ingest those formats and display high-level details about them. Because AI will be so prevalent in synthesizing information, it will be much easier to execute a first and second pass at researching a topic, although humans will still have to double-check the results and make their own additions. The falling costs of technology will mean that most people are on fairly even footing with one another, computationally speaking, and are therefore able to play immersive games and create high-quality digital art. Digital inequalities will also be lessened, as high-speed broadband will be available nearly everywhere, and just about everyone will know at least the basics of computing. Many will also know the basics of coding, even if they are not programmers, and will be able to execute basic scripts to organize their personal machines and even interface with service APIs. There will be more universal privacy laws, so it is less likely that people’s personal information will be leaked through hacks and breaches, and more likely that they can manage their own health data.”

Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, wrote, “Most of the major technology services will continue to be owned and operated by a small number of companies and individuals.

“The gap between open-source and commercial software will continue to grow, such that there will be an increasing number of things that the latter can do that the former cannot, and therefore almost no one will know how the software we all use every day actually works. These individuals and companies will also continue to make a tremendous amount of money on these products and services, without the users of these services having any way to make money from them.

“Countries like China and Russia will continue to censor their Internet tremendously, if not outright disconnect it from the rest of the world.

“Because it is so much easier to publish content digitally than in any other format, people will constantly be glued to their screens and social media, with all of the health and psychological downsides we know those portend.

“There will continue to be a major dissonance between the way people act in person and the way they act on social media, and there will be no clear way to encourage or foster constructive, healthy conversations online when the participants have nothing concrete to gain from it.

“The world will start to run out of raw metals that are used for technology manufacturing, prompting a mad dash to track down and recycle metal from any usable source.”

Henning Schulzrinne, Internet Hall of Fame member, Columbia University professor of computer science and co-chair of the Internet Technical Committee of the IEEE, predicted, “Amplified by machine learning and APIs, low-code and no-code systems will make it easier for small businesses and governments to develop user-facing systems to increase productivity and ease the transition to e-government. Government programs and consumer demand will make high-speed (100 Mb/s and higher) home access, mostly fiber, near-universal in the United States and large parts of Europe, including rural areas, supplemented by low-earth orbiting satellites for covering the most remote areas. And we will finally move beyond passwords as the most common means of consumer authentication, making systems easier to use and eliminating many security vulnerabilities that endanger systems today.”

Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, warned, “The concentration of ad revenue and the lack of a viable alternative source of income will further diminish the reach and capabilities of local news media in many countries, degrading the information ecosystem. This will increase polarization, facilitate government corruption and reduce citizen engagement.”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, wrote, “Digital technology has vast potential to improve people’s health and well-being in the next 20 years or so. Specifically, AI programs will help physicians to diagnose so-called ‘wicked’ health problems – situations we all face as older people where there are several things wrong with us, some serious and some less so, yet coming up with a holistic way to treat all of those problems and maximize quality of life has been elusive. AI and digital technologies can help to sort through the maze of treatments, research findings, etc., to get to solutions that are specifically tailored to each patient.”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, wrote, “On the human knowledge front we, as yet, have no solution to the simple fact that digital technology gives any random village idiot a national or international forum. We knew that science and best practices didn’t sell themselves, but we were completely unready for the alternative worlds that people would create that seem to systematically negate any of the advancements the Enlightenment produced. People who want modern technology to back anti-science and pre-Enlightenment values are justifiably referred to as fascists. The digital world has produced a global movement of such people and we will have to spend the next 20 years clawing and fighting back against it.”

Beneficial and Harmful
Harold Feld, senior vice president at Public Knowledge, predicted, “Reliable, affordable high-speed broadband will become as ubiquitous in the world (including the developing world) as telephone service was in the United States in the late 20th century. The actual technology will vary greatly depending on the country, and we will still see speed differences and other quality-of-service differences that will maintain a digital divide. But the combination of available communications technology and solar-powered systems will enable a wide range of benefits. These will include:

  • Far more efficient resource tracking and allocation and far more efficient environmental monitoring will enable dramatic increases in food and clean water distribution where needed and will help to predict potential environmental disasters with greater accuracy and certainty.
  • Greater communication potential will enable vast improvements in distance learning and telemedicine. In countries where health professionals are scarce, or where travel is difficult, a wealth of diagnostic tools and a broadband connection will allow a handful of trained first responders to treat people locally under the guidance of experienced and more highly trained medical professionals. Necessary resources such as antibiotics will be delivered by drones, and local personnel guided in how to administer and provide follow-up care. As a last resort, doctors can order medical evacuations.
  • Children will have access to education in their native language. Artificial expenses such as uniforms will be eliminated as a requirement. Girls will be able to access equal education without fear of assault.

“Yet, here’s the thing:

  • Widespread ubiquitous broadband could easily broaden ubiquitous surveillance for corporate reasons and to aid repressive governments.
  • Big data systems will be able to sort the noise from the signal and allow corporate or government interests to predict with incredible accuracy human behavior and how to shape it in ways that best serve their interests.
  • Widespread access to others will create pockets of intense culture shock as communities find their basic assumptions about how to organize society undermined.
  • Basic trust in institutions will be replaced not with healthy skepticism for engagement, but either complete and fanatical belief in a trusted source or complete disbelief in any source.

“To slightly paraphrase William Butler Yeats, ‘Mere anarchy is loosed upon the world … The ceremony of innocence is drowned. The best will lack all conviction, while the worst will be filled with passionate intensity.’ Societies may become entirely paralyzed – unable to rely on facts for basic cooperation, trapped between warring factions, or both.

“Copyright and technology to manage microtransactions will create huge gaps in knowledge between the haves and have nots, as even basic educational material becomes subject to limitations on sharing and requirements for access fees.

“Ownership of books or other educational media will become a thing of the past, as every digital source of knowledge will be licensed rather than owned. Book printing will wither away, so that modern educational materials will be inaccessible to those who cannot afford them.

“For the same reason, innovation will slow and become the province of a privileged few able to negotiate access to the needed software tools. Even basic mechanical inventions will have digital locks and software to prevent any tinkering.”

Beneficial and Harmful
Kelly Bates, president of the Interaction Institute for Social Change, observed, “We can transform human safety by using technology to survive pandemics, disease, climate shifts and terrorism through real-time communication, emergency response plans and resource sharing through apps and portals. We will harm citizens if there are no or limited controls over hate speech, political bullying, body shaming, personal attacks and the planning of insurrections on social media/online.”

Kyle Rose, principal architect at Akamai Technologies, said, “The biggest positive change will be to relieve tedium: AI with access to the internet’s knowledge base will allow machines to do 75 percent-plus of the work required in creative endeavors, freeing humans to focus on the tasks that require actual intelligence and creativity.”

Kyle Rose, principal architect at Akamai Technologies, observed, “AI is a value-neutral tool; while it can be used to improve lives and human productivity, it can also be used to mislead people. The biggest tech-enabled risk I see in the next decade (actually, just in the next year, and only getting worse beyond that point) is that AI will be leveraged by bad actors to fabricate very convincing fictions that are used to build popular support for actions premised on a lie. That is likely to take the form of deepfake audio-visual content that fools large numbers of people into believing in events that didn’t actually happen. In an era of highly partisan journalism, without a trusted apolitical media willing to value truth over ideology, this will result in further bifurcation of perceived reality between left and right.”

Beneficial and Harmful
Matt Moore, a knowledge-management entrepreneur with Innotecture, which is based in Australia, observed, “Human beings will remain wonderful and terrible and banal. That won’t change. We’ll see greater use and abuse of artificial intelligence. ChatGPT will seem just like the iPhone seems to us today – so 2007. Many mundane tasks will be undertaken by machines – unless we choose to do them for our own pleasure (artisanal drudgery). We will be more productive as societies. There will be more content, more connection, more everything. We will have ecological and climate-related technologies in abundance. We will have digital-twin ecosystems that allow us to model and manage our complex world better than ever. We’ll probably have more bionic implants and digital medicine. A subset of society will reject all that (the Neo-Amish) in different ways, as it can be overwhelming. We will use these technologies to hurt, exploit and persecute each other. We will surveil, wage war and seek to maximise profit. Parts of our ecosystem will collapse, and our technologies will both accelerate and mitigate that. Fertility will probably drop as people don’t just opt out themselves but also opt out their potential children.”

John Lazzaro, retired professor of electrical engineering and computer science at the University of California, Berkeley, wrote, “By 2035, wireless barcode technology (RAIN RFID) will replace the printed barcodes that are ubiquitous on packaged goods. Fixed infrastructure will simultaneously scan hundreds of items per second, from tens of meters away, without direct line of sight. This sounds like a mundane upgrade. But it will facilitate an awareness of where every ‘thing’ is, from the moment its manufacturing begins until its recycling at end of life. This sort of change in underlying infrastructure enables changes throughout society, just as container shipping infrastructure unleashed dozens of major changes in the second half of the 20th century.

“Wireless barcodes let a store take complete inventory several times a day, with 95% accuracy. When the pandemic hit, retailers with this technology were able to pivot to omnichannel operation, so customers could shop online instead of in person, with the purchase being fulfilled from the inventory on the rack in a physical store. Those retailers became the retail winners of the pandemic, driving the rest of retail to put RFID in the fast lane. The leaders extended the use cases of RFID beyond inventory, to self-checkout and loss prevention.

“Seeing this success, other verticals are now taking the first steps into RFID. The logistics giant UPS has stated its intention to put RFID on every package, and to add infrastructure throughout their logistics chain to take advantage of the technology.

“Healthcare systems are preparing implementations as well. When fully implemented, counterfeit drugs will be easy to detect, as RFID facilitates source authentication, and expired drugs can be identified. Adoption by grocery stores will probably happen last, but when it does, manually scanning items at the self-checkout stand will be replaced by wheeling a shopping cart past a radio gateway that scans all items in parallel, without taking them out of the cart.

“Each example above seems incremental. Looking back to the early commercialization of the Internet, each individual use case also seemed incremental. Yet the collective weight of dozens of use cases elevates incremental changes into a step-function change.”

Beneficial and Harmful
Rosanna Guadagno, associate professor of persuasive information systems at the University of Oulu (Finland), wrote, “By 2035, I expect that artificial intelligence will have made a substantial impact on the way people live and work. AI robotics will replace factory workers on a large scale and AI digital assistants will also be used to perform many tasks currently performed by white collar workers. I am less optimistic about AIs performing all of our driving tasks, but I do expect that driving will become easier and safer. These changes have the potential to increase people’s well-being as we spend less time on menial tasks. However, these changes will also displace many workers. It is my hope that governments will have the foresight to see this coming and will help the displaced workers find new occupations and/or purpose in life. If this does not occur, it will not be universally welcomed nor universally beneficial to human well-being.

“Emerging technologies taking people’s jobs could lead to civil unrest and wide-sweeping societal change. People may feel lost as they search for new meaning in their lives. People may have more leisure time which will initially be celebrated but will then become a source of boredom. AI technology may also serve to mediate our interpersonal interactions more so than it does now. This has the potential to cause misunderstandings as AI agents help people manage their lives and relationships. AIs that incorporate beliefs based on biases in algorithms may also stir up racial tensions as they display discriminatory behavior without an understanding of the impact these biases may have on humans. People’s greater reliance on AIs may also open up new opportunities for cybercrime.”

Sarita Schoenebeck, associate professor in the School of Information at the University of Michigan and director of the Living Online Lab, said, “I’m hopeful that there will be better integration between the digital technologies we use and our physical environments. It is awkward and even disruptive to use mobile phones in our everyday lives, whether at work, at home, walking on the street or at the gym. Our digital experiences tend to compete with our physical environments rather than working in concert with them. I’m hopeful devices will get better at fitting our body sizes, physical abilities and social environments. This will require advances in voice-based and gesture-based digital technologies. This is important for accessibility and for creating social experiences that blend physical and digital experiences.”

Sarita Schoenebeck, associate professor in the School of Information at the University of Michigan and director of the Living Online Lab, commented, “I am concerned about young people’s exposure to misogynistic and racist content, as well as other kinds of harmful content. My concern is that the exposure may be subtle, tacit and indistinct. It will be difficult for parents or teachers to notice it on a day-to-day basis, and perhaps difficult even for experts to track. The famous adage from the 1964 Supreme Court case, Jacobellis v. Ohio, in which Justice Potter Stewart said of pornography, ‘I know it when I see it,’ loses its durability here. We may not know it when we see it. I do not want to restrict our young people from the Internet, but I do want us to better understand the ideas they are being exposed to, before those ideas become entrenched and harmful.”

Beneficial and Harmful
John Hartley, a research professor in media and communications at the University of Sydney in Australia, predicted, “The most beneficial changes will come from processes of intersectional and international group-formation, whereby digital life is not propounded as a species of possessive individualism and antagonistic identity, but as a humanity-system, where individuality is a product of codes, meanings and relations that are generated and determined by anonymous collective systems (e.g., language, culture, ethnicity, gender, class).

“Just as we, the species, have begun to understand that we live in a planetary biosphere and geosphere, so we are beginning to feel the force of a sense-making semiosphere (Yuri Lotman’s term), within which what we know of ourselves, our groups and the world is both coded and expressed in an open, adaptive, complex system, of which the digital is itself a technological expression.

“At present, the American version of digital life is the libertarian internet as a soft-power instrument of U.S. global cultural hegemony. The direction-of-travel of that system is toward the reduction of humanity to consuming individuals; digital affordances to an internet of shopping; and human relations to corporate decisions.

“Within that setup, users have, however, discovered their own interlinked identities and interests and have begun to proliferate across platforms designed for consumerism, not as market influencers but as intersectional activists.

“A paradigm example of what is necessarily a mixed environment is Greta Thunberg. Her climate activism could not have gone global without digital life. Fridays for Future and School Strike for Climate could not have mobilized 6 million demonstrators without digital organisation. A lone teenager, Thunberg showed the world that innovation can come from anywhere in a digital system, and that collective action is possible to imagine at planetary scale to address a human-made planetary crisis.

“Looking forward, ordinary users are becoming conscious of their own creative agency and are looking for groups in which world-building can be shared as a group-forming change agency. Thus, intersectionality, collective action, and planetary or species-level coding of the category of ‘we’ are what will be of great benefit in digital life, to address the objective challenges of the Anthropocene, not as a false and singular unity of identity, but as a systemic population of difference, each active in their own sphere to link with common cause at group-level.

“At the same time, users are becoming more conscious of their individual ignorance in the context of cultural, political and economic multiplicity. Digital literacy includes recognition of what you don’t know. This is the self-consciousness of the expert, to seek understanding of context, history and others in order to improve their models of knowledge. Knowledge is already riven by power and antagonism and digital haters are probably better organized than activists for climate justice, but the developing understanding of how the system works both negatively and positively is another emergent benefit of digital literacy at humanity scale.

“The flip side: Incumbent powers, both political and commercial, are propagating stories in favour of conflict. These are now weaponized strategic forces, the continuation of warfare in the cultural realm, where audiences, viewers, players and consumers are encouraged to forget they are citizens, the public of humanity-in-common, and to cast themselves as partisans and enemies whose self-realization requires the destruction of others. The integration of digital life into knowledge, power and warfare systems is already far advanced. By 2035 it will be too late to self-correct without organized resistance.”

Beneficial and Harmful
Jon Stine, executive director of the Open Voice Network, wrote, “Three advances that we will welcome in 2035:

  • A narrowing of the digital and linguistic divide through the ubiquity of natural language understanding and translation. We’ll be able to understand each other, if we choose to listen.
  • Rapid advances in early diagnosis in healthcare, achieved through the use of biomarker data and the application of artificial intelligence.
  • Ambient, ubiquitous conversational AI. We’ll live in a world of billions of AIs, and every AI will be conversational. Welcome to the post-QWERTY world.

“However, the same digital advances create this 2035 scenario:

  • The hyper-personalized attention economy has continued to accelerate – to the financial benefit of major technology platforms – and the belief/economic/trust canyons of 2023 are now unbridgeable chasms. Concepts of truth and fact are deemed irrelevant; the tribes of the earth exist within their own perceptual spheres.
  • The technology innovation ecosystem – research academics, VCs and start-ups, dominant firms – has fully embraced software libertarianism, and no longer concerns itself with ethical or societal considerations. If they can build it, they will (see above).
  • The digital divide has hardened and split into three groups: the digerati, who create and deliver the technology out of self-interest; the consumptives, into whose maw is fed ever-more-trite and behavior-shaping messaging and entertainment; and the ignored – the old, the impoverished, those off the grid.”

George Lessard, information curator and communications and media specialist at MediaMentor.ca, responded, “The best thing that could happen would be that the U.S. law that protects internet corporations from being held liable for content posted on their platforms by users will be revoked and that they become as liable as a newspaper is for publishing letters to the editor. The second-best thing would be that internet platforms like Google and Facebook are forced to pay the journalism sources they distribute for that content, as they do in Australia and soon will in Canada. And the third-best thing that could happen is that sites/platforms like Flickr and YouTube will be required to share the revenue generated by the intellectual property users/members share on their platforms.”

George Lessard, information curator and communications and media specialist at MediaMentor.ca, said, “The most harmful thing that will happen is that the intellectual property posted by users to platforms like Facebook, Flickr and YouTube will continue to create revenue for these sites well past the life of the people who posted them, and their heirs will not be able to stop that drain of income for the creator’s families/agents.”

Harmful (Did not respond to Benefits question)
Judith Donath, fellow at Harvard’s Berkman Center, and the founder of the Sociable Media Group at the MIT Media Lab, wrote, “Persuasion is the fundamental goal of communication. But, although one might want to persuade others of something false, persuasiveness has its limits. Audiences generally do not wish to be deceived, and thus communication throughout the living world has evolved to be, while not 100% honest, reliable enough to function.

“In human society by 2035, this balance will have shifted. AI systems will have developed unprecedented persuasive skills, able to reshape people’s beliefs and redirect their behavior. We humans won’t quite be an army of mindless drones, our every move dictated by omnipotent digital deities, but our choices and ultimately our understanding of the world will be profoundly influenced by algorithmically generated media exquisitely tuned to our individual desires and vulnerabilities. We are already well on our way to this. Companies such as Google and Facebook have become multinational behemoths (and their founders, billionaires) by gathering up all our browsings and buyings and synthesizing them into behavioral profiles. They sell this data to marketers for targeting personalized ads and they feed it to algorithms designed to encourage the endless binges of YouTube videos and social posting, providing an unbounded canvas for those ads.

“New technologies will add vivid detail to those profiles. Augmented-reality systems need to know what you are looking at in order to layer virtual information onto real space: the record of your real-world attention joins the shadow dossier. And thanks to the descendants of today’s Fitbits and Ouras, the records of what we do will be vivified with information about how we feel – information about our anxieties, tastes and vulnerabilities that is highly valuable for those who seek to sway us.

“Persuasion appears in many guises: news stories, novels and postings scripted by machine and honed for maximum virality, co-workers, bosses and politicians who gain power through stirring speeches and astutely targeted campaigns. By 2035, one of the most potent forms may well be the virtual companion, a comforting voice that accompanies you everywhere, her whispers ensuring you never get lost, never are at a loss for a word, a name or the right thing to say. If you are a young person in the 2030s, she’ll have been your companion since you were small – she accompanied you on your first forays into the world without parental supervision; she knew the boundaries of where you were allowed to go and when you headed out of them she gently, yet irresistibly persuaded you to head home instead. Since then, you never really do anything without her. She’s your interface to dating apps. Your memory is her memory. She is often quiet, but it is comforting to know she is there accompanying you, ensuring you are never lost, never bored. Without her, you really wouldn’t know what to do with yourself.

“Persuasion could be used to advance good things – to promote cooperation, daily flossing, safer driving. Ideally, it would be used to save our over-crowded, over-heating planet, to induce people to buy less, forego air travel, eat lower on the food chain. Yet even if used for the most benevolent of purposes, the potential persuasiveness of digital technologies raises serious and difficult ethical questions about free will, about who should wield such power.

“These questions, alas, are not the ones we are facing. The accelerating ability to influence our beliefs and behavior is far more likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions, a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians. The question we face instead is how do we prevent this?”

Marvin Borisch, chief technology officer at Red Eagle Digital based in Berlin, wrote, “Since the invention of the ARPANET and the Internet, decentralization has quietly been the driving force behind our modern digital life and communication. Navigating and using decentralized structures, on the other hand, has not been easy, but over the last decades the emerging field of user experience has evolved interfaces and made digital products easier to use.

“After an era of centralized services, the rise of distributed ledger technology in the modern form of blockchains, and of decentralized, federated protocols such as ActivityPub, makes me believe that by 2035, more decentralized services and other digital goods will change our lives for the better, giving ownership of data back to the end-user rather than to data silos and service providers. If our species strives for a stellar future rather than a mono-planetary one, decentralized services with local and federated states, along with handshake synchronization, would create a great basis for futuristic communication, software updates and more.”

Marvin Borisch, chief technology officer at Red Eagle Digital based in Berlin, commented, “The rise of surveillance technology is dangerously alarming. European and U.S. surveillance technology is reaching a never-before-seen level, and it is being adapted and optimized by more autocratic nations all around the globe. The biggest problem is that such technology has always been around and will always be around. It penetrates people’s privacy more and more, step by step. The journalist and politician Karl-Hermann Flach once said, ‘Freedom always dies centimeter by centimeter,’ and that goes for privacy, one of the biggest guarantees of freedom.

“The rise of DLT (distributed ledger technology) in the form of blockchains can be used for great purposes, but over-regulation born of technological incompetence and fear will take a big step toward the transparent citizen and therefore the transparent human. Such deep transparency will enhance the already existing chilling effect and might cause a decline of individuality.

“Such surveillance will come in the form of transparent ‘Central Bank Digital Currencies,’ which are a cornerstone of social credit systems. It will come with the weakening of encryption through government-mandated backdoors, but also with the rise of quantum computing. The latter could, and probably will, be dangerous because of the costs of such technology.

“Quantum resistance might already be a thing, but its spread will be limited to those who have access to quantum computing. New technological gatekeepers will rise, deciding who has access to such technology on a broader scale.”

Beneficial and Harmful
Bob Frankston, internet pioneer and technology innovator, said, “The idea that meaning is not intrinsic is a difficult one to grasp. Yet this idea has defined our world for the last half-century. Electronic spreadsheets knew nothing about finance yet allowed financiers and others to leverage their knowledge. Unlike the traditional telecommunications infrastructure, the Internet does not transport meaning – only meaningless packets of bits. Each of us can apply our own meaning if we accept intrinsic ambiguity.

“This poses a challenge to those who want to build human-centered infrastructure. The idea that putting such intent into the ‘plumbing’ actually limits our ability to find our own meaning is counterintuitive. Getting past that and learning how to manage the chaos is key. Part of this is having an educational system that teaches critical thinking and how to learn.

“We need to accept a degree of chaos and uncertainty and learn to survive it, if we have the time.

“I might be expecting too much, but I can hope that some of those growing up with the new technologies will see the powerful ideas that made them possible and eschew the hubris of thinking they can define the one true future.”

Bob Frankston, internet pioneer and technology innovator, commented, “I worry about the hubris of those who think they can define the one true future and impose it on us. I see the danger in an appeal to authority and in those who do not understand how AI works and thus trust it far too much. Just as we used to use steam engine analogies to understand cognition, we now use problematic computer analogies.

“We’ve spent thousands of years developing a society implicitly defined by physical boundaries. Today we must learn how to live safely in a world without such boundaries. How do we manage the conflicts between rights in a connected world?

“How will we negotiate a world that we understand is interconnected physically (with climate as an example) and more abstractly as with the Internet?”

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy” said, “I am optimistic for the first time in 20 years that both the regulatory agencies and the Congress are serious about governing in the digital world. The FTC is seriously challenging the Big Tech monopolies and the SEC seems intent on bringing crypto exchanges under its purview. Whether these changes can be enacted in the next two years will be a test of the Biden administration’s willingness to take on the Silicon Valley donor class, which has become a huge part of Democratic campaign financing. At the Congressional level, I believe that Section 230 reform and some form of Net Neutrality applied to Google, Amazon, Meta and Apple (so they don’t favor their own services), are within the realm of bipartisan cooperation. This also makes me optimistic.”

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy” commented, “Wendell Berry once wrote, ‘It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.’ This is my greatest fear. From the point the technological Singularity was first proposed, the marriage of man and machine has proceeded at a pace that even worries the boosters of artificial general intelligence (AGI).

“I understand that Peter Thiel would like to live to 200, but that possibility fills me with dread. And the notion that AI (DALL-E, GPT-3) will create great ORIGINAL art is nonsense. These programs assume that all the possible ideas are already contained in the data sets, and that thinking merely consists of recombining them. Our culture is already crammed with sequels and knockoffs. AI would just exacerbate the problem.

“We are mired in a culture of escapism – crypto fortunes, fantasy lives lived seven hours a day in the metaverse, colonies on Mars. The dreams of Elon Musk, Marc Andreessen, Peter Thiel and Mark Zuckerberg are ridiculous and dangerous. They are ‘bread and circuses’ put forth by hype artists at a time when we should be financing the transition to a renewable energy economy, instead of spending $10 trillion on a pointless Martian space colony.”

Jonathan Kolber, author of “A Celebration Society,” said, “I believe that we will see multiple significant and positive developments in the digital realm by 2035. These include:

  • Widespread availability of immersive VR (sight, sound, touch, and even limited smell and taste) at a low cost. Just as cellular phones with high-resolution screens now serve most people on Earth, basic VR devices should be similarly available for, at minimum, sight and sound. Further, I expect a FULLY immersive Dreamscape-type theater experience to be widely available, with thousands of ‘channels’ for experiences of wonder, learning and play in 10-minute increments in many cities worldwide.
  • Wireless transmission of data will be fast enough and reliable enough that, in most cases, there will be the subjective experience of zero latency.
  • Courses will be taught this way. Families will commune at a distance. It will offer a new kind of spiritual/religious experience as well.
  • By 2035, I expect the prohibition on entheogens to have largely lifted and special kinds of therapy to be available in most countries using psilocybin, psychedelic cannabis, and (in select cases, per Dutch research) MDMA and LSD. PTSD will be routinely cured in one or two immersive VR experiences using these medicines under therapeutic guidance.”

Jonathan Kolber, author of “A Celebration Society,” commented, “Without the emergence of a ‘third way,’ such as the restored and enhanced Venetian Republic-based model, the world will continue to crystallize into democracies and Orwellian states.

“Democracies will continue to be at risk of becoming fascist, regardless of the names they claim. As predicted as far back as the ancient Greeks, strongmen will emerge in times of crisis and instability, and accelerating climate change and accelerating automation, with the attendant wholesale loss and disruption of jobs, will provide these in abundance.

“Digital tools will enable a level of surveillance and control in all types of systems far beyond Orwell’s nightmares. Flying surveillance drones the size of insects, slaved to AI systems via satellite connections, will be mass-produced. These will be deployed individually or in groups according to shifting needs and conditions, in line with the policy goals set by those whom Adam Smith called The Masters.

“In most cases, however, the drones will not be required for total surveillance and control of a populace. The ubiquitous phones and VR devices will suffice, with AIs discreetly monitoring all communication for signals deemed subversive or suspicious.

“Revolt will become increasingly difficult in such circumstances.

“We take universal surveillance as a given circa 2035. The only question becomes: surveillance by whom, and to what effect? Our celebration society proposal turns this on its head.”

Harmful (Did not respond to Benefits question)
Soraya Chemaly, an author, activist and co-founder of the Women’s Media Center Speech Project, wrote, “Human-centered development of digital tools and systems – I’d like to say I am feeling optimistic about value-sensitive design that would improve human connections, governance, institutions and well-being, but, in fact, I fear we are backsliding.”

Zizi Papacharissi, professor and head of the communication department and professor of political science at the University of Illinois-Chicago, responded, “I see technologies improving communication among friends, family and colleagues. Personally-mediated communication will be supported by technology that is more custom-made, easier to use, conversational agent-supported and social-robot enabled. I see technology advancing in making communication more immediate, more warm, more direct, more nuanced, more clear and more high fidelity. I see us moving away from social media platforms, due to growing cynicism about how they are managed, and this is a good thing. The tools we use will be more precise, glossy and crash-proof – but they will not bring about social justice, heightened connection or healthier relationships. Just because you get a better stove, does not mean you become a better cook. Getting a better car does not immediately make you a better driver.”

Zizi Papacharissi, professor and head of the communication department and professor of political science at the University of Illinois-Chicago, said, “The lead motivating factor in technology design is profit. Unless the mentality of innovation is radically reconfigured, so as to consider innovative something that promotes social justice and not merely something that makes things happen at a faster pace (and thus better serves profit goals), tech will not do much for social justice. We will be making better cars, but those cars will not have features that motivate us to become more responsible drivers; they will not be accessible in meaningful ways; they will not be friendly to the environment; they will not improve our lives in ways that push us forward (instead of giving us different ways to do what we have already been able to do in the past).”

Mauro D. Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, responded, “In 2035, advances in technology can and surely will surprise us, but they will surprise us even more IF human beings are willing to change their relationship with technology.

“Advances in technology will surprise us in the next 10 years, for example, possibly seeing the emergence of the real metaverse, something that does not yet exist. We will see a clear evolution of wearable tech, and we will also be surprised at how desktop computing undergoes a remake of the PC.

“But technological advances alone do not create the future, even as they will continue to advance unfailingly. The ways in which people use them are what matter. What should occupy us is understanding whether we and tech will be friends, lovers or a happy marriage. We have discovered, from the laws of robotics to the ethics behind artificial intelligence, that as we create and come to dominate technology, our responsibility as a species is to generate a new social contract between it and us.

“The ubiquity of technology in our lives should lead us to question how we relate to it. Even back in the 1970s and 1980s it was very clear that the border between the human and the non-human was quite likely to blur soon. Today that border is blurry in certain scenarios. This is generating doubts, suspicions and concerns.

“By the year 2035, humans should have already resolved this discussion and have adapted and developed new, healthy models of interaction with technology. Digital technology is a permanent part of our world in an indissoluble way. It is necessary that we include a formal chapter on it in our social contract.”

Mauro D. Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, wrote, “2035 awaits us with more complex challenges than we can imagine. Technology incites us, provokes us, corners us and causes us to question everything.

“One of the biggest risks today is that the technology industry is resistant to establishing common standards. Steps like those taken by the European Community in relation to connectors are important, but technology companies continue to insist on avoiding standardization for economic gain. In the past most of the battles were hardware-related; today they are software-related.

“If we want to develop things like the true Metaverse or the conquest of Mars, technology has to have common criteria in key aspects. It should be established in artificial intelligence, automation, remote or virtual work, personal medical information, educational platforms, interoperability and communications, autonomous systems and others.”

Nandi Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, wrote, “Whether physical, digital or somewhere in-between, interfaces to human experiences are all we have and have ever had. The body-mind (consciousness) construct is already fully dependent on naturally evolved interfaces to both our surroundings and our inner lives, which is why designing more intuitive and seamless ways of interacting with all aspects of our human lives is both a natural and relevant step forward – it is crossing our current horizon to experience the next horizon.

“With this in mind, extended reality, the metaverse and artificial intelligence become ever more important, as there are many evident horizons we are crossing through our current endeavours simply by pursuing any advancement.

“Whether it is the blockchain we know of today or something more useful, user- and environmentally friendly, and smoother to integrate – something that can allow simple instant contracts and permissionless activities of all sorts – this can enable our world to verify the source and quality of content, along with many other benefits.

“The best interfaces to experiences and services that can be achieved will influence what we can think and do, both as tools and services in everyday life, but also as the path to education, communication and so many other things. Improving interfaces – both physical and digital – makes the difference between having and not having superpowers as we advance.

“Connecting a wide range of technologies that bridge physical and digital possibilities grows the reach of both. This also means that thinking of the human habitat as belonging to all areas the body and mind can traverse is more useful than inventing new categories and silos to classify experiences by. Whatever the future version of multifaceted APIs are, they have to be flexible, largely open and easy to use. Connectivity between ways, directions, clarity, etc., of communication can extend the reach and multiplication of any possibilities – new or old.”

Nandi Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, commented, “First comes data – if the FAANGs of the world (non-American equivalents are equally bad) are allowed to remain even nearly as powerful as they are today, problems will become ever-greater, as their strength as manipulators of individuals grows deeper and more advanced. Manipulation will become vastly more advanced and difficult to recognize.

“Artificial intelligence is already becoming so powerful and versatile that it can soon shape any imagery, audio, text or geometry in an instant. This means anyone with the computational resources and some basic tools can trick just about anyone into new thoughts and ideas. The owners of the greatest databanks of individuals’ and companies’ history and preferences can easily shape strategies to manipulate groups, individuals and entire nations into new behaviours.

“Why invest in anything if you will have it stolen at some point? Is some sort of perfect fraud-prevention system (blockchain or better) relevant in a future in which any ownership of any sort of asset class – digital or physical – is under threat of loss or distortion?

“Extended reality and the metaverse often get a bit of a beating for how they can make people more vulnerable to harassment, and this is a real threat, but artificial intelligence is vastly more scalable – essentially it could impact every human with access to digital technology more or less simultaneously, while online harassment in an immersive context is not scalable in a similar sense.

“Striking a comfortable and reasonable balance between safe and sane human freedom and surveillance technologies to keep a legit bottom line of this human safety is going to be hard to achieve. There will be further and deeper abuses in many cultures. This may create a digital world and lifestyle that branches off quite heavily from the non-digital counterparts, as digital lives can be expected to be surveilled while the physical can at least in principle be somewhat free of eavesdropping if people are not in view or earshot of a digital device.

“This being said, a state or company may still reward behaviour that trades data of all sorts also from anything happening offline – which has been the case in dictatorships throughout history.

“The very use and manufacturing of technology may also cost the planet more than it provides the human experience, and as long as the promises of the future drive the value of stock and investments, we are not likely to understand when to stop advancing on a frontier that is on a roll.

“Healthcare will likely become both better and worse – the class divide grows greater gaps – but long-term it is probably better for most people. The underlying factors generally have more to do with human individual values rather than with the technologies themselves.

“There might be artificial general intelligence by 2035. Such AI may have great potential to be helpful. Perhaps one individual can create a value for humanity or the planet that is a million times greater than the next person’s contribution, but we do not know whether this value holds over time, or if it becomes just as bad as Nick Bostrom’s ‘paper clip’ analogy. Most people are willing to borrow from the future, and at the same time children are meant to be this future. What do we make of it? Are children therefore multi-dimensional batteries?”

Beneficial and Harmful
Frank Kaufmann, president of Twelve Gates Foundation and Values in Knowledge Foundation, wrote, “I find all technological development good if developed and managed by humans who are good. The punchline is always this: To the extent that humans are impulsively driven by compassion and concern for others and for the good of the whole, there is not a single prospective technological or digital breakthrough that bodes ill in its own right. Yet, to the extent that humans are impulsively driven toward self-gain, with others and the good of the whole expendable in the equation, even the most primitive industrial/technological development is to be feared.

“I am extreme in this view, which I hold to be simple, fundamental and universal. For example, if humans were fixed in an inescapable makeup characterized by care and compassion, the development of an exoskeletal, indestructible, AI-controlled, military robot that could anticipate my movements up to four miles away, and morph to look just like my loving grandmother could be a perfectly wonderful development for the good of humankind. On the other hand, if humans cannot be elevated above the grotesque makeup in which others and the greater good are expendable in the pursuit of selfish gain, then the invention of a fork is a dangerous, even horrifying thing.

“The Basis to Assess Tech – Human Purpose, Human Nature: I hold that the existence of humans is intentional, not random. This starting point establishes for me two bases for assessing technological progress: How does technological/digital development relate to 1. Human purpose and 2. Human nature?

“Purpose: Two things are the basis for assessing anything: the purpose and the nature of the agent. This is the same whether we assess CRISPR gene editing or whether I turn left or right at a streetlight. The question in both cases is: Does this action serve our purpose? This tells us if the matter in question is good or bad. It simply depends on what we are trying to do (our purpose). If our purpose is to get to our Mom’s house, then turning left at the light is a very bad thing to do. If the development of CRISPR gene editing is to elevate dignity for honorable people, it is good. If it is to advance the lusts of a demonic corporation, or the career of an ego-insane, medical monster, then likewise breakthroughs in CRISPR gene editing are worrisome.

“Unfortunately, it is very difficult to know what human purpose is. Only religious and spiritual systems recommend what that might be.

“Human Nature: The second basis for assessing things (including digital and technological advances) relates to human nature. This is more accessible. We can ask: Does the action comport with our nature? For simplicity I’ve created a limited list of what humans desire (human nature):

Original desires

1. To love and be loved

2. Privacy (personal sovereignty)

3. To be safe and healthy

4. Freedom and the means to create (creativity can be in several areas)

a. Ingenuity

b. Artistic expression

c. Sports and leisure, physical and athletic experience

Perverse and broken desires

1. Pursuit of and addiction to power

2. Willingness to indulge in conflict

Three Bases to Assess: In sum then, analyzing and assessing technological and digital development by the year 2035 should move along three lines of measure.

1. Does the breakthrough serve the reason why humans exist (human purpose)?

2. Which part of human nature does the breakthrough relate to?

3. Can the technology have built-in protections to prevent perfectly exciting, wonderful breakthroughs from becoming a dark and malign force over our lives and human history?

“All technology coming in the next 15 years sits on a two-edged sword according to measures for the analysis described above.

Likely Benign, Little Danger – Some coming breakthroughs are merely exciting, such as open-air gesture technology, prosthetics with a sense of touch, printed food, printed organs, space tourism, self-driving vehicles, and much more.

Medium Danger – Some coming digital and tech breakthroughs have medium levels of concern for social or ethical implications, such as hybrid reality environments, tactile holograms, domestic service and workplace robots, quantum-encrypted information, biotechnology and nanotechnology, again, and much more.

Dangerous, Great Care Needed – Finally, there is a category of coming developments that should be put in the high-concern category. These include BCI and brain-implant technology, genome editing, cloning, selective breeding, genetic engineering, artificial general intelligence (AGI), deep fakes, people hacking, clumsy efforts to fix the environment through potentially risky geoengineering, CRISPR gene editing, and again many others.

“Applying the three bases in assessing the benefits and dangers of technological advances in our time can be done rigorously, systematically and extensively on any pending digital and tech developments. They are listed here on a spectrum from less worrisome to potentially devastating.

“It is not the technology itself that marks it as hopeful or dystopic. This divergence is independent of the inherent quality of the precise technology itself. It is tied to the maturation of human divinity, ideal human nature.”

Beneficial (Did not respond to Harms question)
Marc Rotenberg, founder and president of the Center for AI and Digital Policy, said, “Innovative developments in the energy sector, coupled with the use of digital techniques, will counter the growing impact of climate change as data models will provide political leaders and the public with a greater awareness of the risks of climate catastrophe. Improved modeling will also help assess the effectiveness of policy responses. AI models will spur new forms of energy reduction and energy efficiency.”

Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, predicted, “The best and most beneficial changes reside at the operational level. We will learn to do more things more efficiently and, most likely, more effectively through digital technology than we can do through analog technology or current digital technology. Our current and near-term future digital tools perform well if asked to answer simple questions, such as ‘what is the pattern?’ or ‘what changed?’ Tasks such as developing drugs; comparing images from various modalities; and analyzing large, complex databases (weather information) leverage the current and past focus of digital tool research. The potential move to quantum computing will expand our capabilities in these and similar areas.”

Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, said, “The most harmful or menacing changes in digital life that are likely to occur by 2035 are the overuse of immature digital technology. The excitement over the apparent ‘skill’ of chat bots based on large language models (e.g., ChatGPT) tends to overwhelm the reality of such experimental software. Those who create such software acknowledge its many limitations. But still, they release them into the wild. Individuals without appreciation for the limitations start incorporating the software into systems that people will use in real-life, sometimes quite important, settings. The combination will lead to the inevitable failures, which the overeager will chalk up to the cost of innovation. Neither the software nor society is ready for this step. History has shown us that releasing technologies into the wild too soon leads to significant harm. History has not taught us to show restraint.”

Beneficial (Did not respond to Harms question)
Jeff Johnson, principal consultant at UI Wizards, Inc., former chair of Computer Professionals for Social Responsibility, predicted, “Cars, trucks and buses will be improved in several ways. They will have more and better safety features, such as collision-avoidance and accident-triggered safety cocoons. They will be mostly powered by electric motors, have longer ranges than today’s electric cars, and benefit from improved recharging infrastructure. In addition:

  • A significant proportion of AI applications will be designed in a human-centered way, improving human control and understanding.

  • Digital technology will improve humankind’s ability to understand, sequence and edit genetic material, fostering advances in medicine, including faster creation of more effective vaccines.
  • Direct brain–computer interfaces and digital body implants will, by 2035, begin to be beneficial and commercially viable.
  • Auto-completion in typing will be smarter, avoiding the sorts of annoying errors common with auto-complete today. Voice control and biometric control, now emerging, may replace keyboards, pointers and touch screens.
  • Government oversight and regulation of digital technology will be more current and more accepted.
  • Mobile digital devices will consume less power and will have longer-lasting batteries.
  • Robots – humanoid and non-humanoid, cuddly and utilitarian – will be more common, and they will communicate with people more naturally.

“Machine learning will continue to be used naively, however, and people will continue to rely on it, causing many poor decisions. Cryptocurrency will wax and wane, but will continue to waste significant power, productivity and human mental and emotional energy. Bad actors will develop autonomous weaponry. It will be distributed worldwide by rogue nations and arms dealers, contributing to a rise in terrorism and wars and in the destruction caused by them.”

Isabel Pedersen, director of the Digital Life Institute at Ontario Tech University, said, “The most beneficial changes in digital life are difficult to predict because people rarely have shared values on the concept of betterment or human well-being. Put another way, social values involving lifestyle betterment are diverse and oftentimes conflicting. However, there is one area that most people agree upon. The opportunity for dramatic change lies in medical industries and the goal to improve healthcare.

“Human-centric AI technologies that are embodied and augmentative could converge to improve human health in dramatic ways by 2035. With the advent of personal health technologies – those that are worn on or implanted in bodies and designed to properly respond to individuals through dedicated AI-based platforms – the opportunity exists to diagnose, treat, restore, monitor, and care for people in improved ways.

“In this case, digital life will evolve to include healthcare not as a set of isolated activities (e.g., going to a doctor for diagnosis on a single health issue) but as an ongoing relationship whereby individual people interact with human doctors and caregivers (and their organizations) in relation to their own personalized biometric data. These types of utopian or techno-solutionist predictions have been made before; however, deployment, adoption and adaptation to these technologies will finally start to occur.

“Design cycles that promised convergence are finally transforming to actual deployment cycles. The risk is that the rise of these technologies will benefit only those who can afford to purchase them by 2035, leading to further socio-economic problems of the digital divide.

“Another risk is algorithmic bias leading to racism, ageism, ableism or gender discrimination in healthcare. To achieve mass adoption of these technologies by societies, governments will need to regulate them to ensure equity and invest in them in order to actually benefit all members of society. Without the shared value of human well-being for everyone, the dream of improved human health will be limited.”

Isabel Pedersen, director of the Digital Life Institute at Ontario Tech University, predicted, “Digital life technologies are on course to further endanger social life and extend socio-economic divides on a global scale by 2035. One cause will be the further displacement of legitimate news sources in the information economy. People will have even more trouble trusting what they read. The deprofessionalization of journalism is well under way and technocultural trends are only making this worse.

“Along these lines, one technology that will harm people in 2035 is AI-based content-generation technology used through a range of deployments. Appropriate use of automated writing technologies seems unlikely; they will further impoverish digital life by unhinging legitimate sources of information from the public sphere.

“Text-generation technologies, large language models and more advanced Natural Language Processing (NLP) innovations are undergoing extensive hype now; they will progress to further disrupt information industries. In the worst instances, they will help leverage disinformation campaigns by actors motivated by self-serving or malicious reasons.”

James S. O’Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, predicted, “The best of what technology will have to offer will be in medicine, space flight, planetary defense against asteroids and space debris, interpersonal communication, data creation and storage and the mining of enormous data sets. Only the imagination will limit people’s use of such inventions.”

James S. O’Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, commented, “Let’s explore some of the worst that technology will have to offer in regard to human rights by 2035.

“I and others have genuine concern about social media platforms for several reasons. First, there is the sheer volume of messaging and video content. If 500 hours of video content are now posted to YouTube every minute, Google and Alphabet cannot possibly monitor the content.

“Facebook owner Meta says that AI catches about 90 percent of terms-of-service violations, many of which are the worst humanity has to offer, simply horrific. The remaining 10 percent have been contracted out to firms such as Accenture. Two problems seem apparent here. First, Accenture cannot keep employees on the content monitoring teams longer than 45 to 90 days due to the heinous nature of the content itself. Turnover on those teams is 300% to 400% per annum. Second, the contract with Facebook is valued at $500 million per annum, and the Accenture board is unwilling to let go of it. Facebook says, ‘Problem solved.’ Accenture says, ‘We’re working on it.’

“The social media platforms are owned and operated either by billionaire entrepreneurs who may pay taxes but do not disclose operating figures, or by trillion-dollar publicly held firms that appear increasingly impossible to regulate. Annual income levels make it impossible for any government to levy a fine for misbehavior that would be meaningful. Regulating such platforms as public utilities would raise howls of indignation regarding First Amendment free speech infringements. Other social media platforms, such as TikTok, are either owned or controlled by dictatorial governments that continue to gather data on literally everyone, regardless of residence, citizenship or occupation.

“Another large concern about digital technology revolves around artificial intelligence. Several programs have either passed or come very close to passing the Turing Test. ChatGPT is but one example. The day when such algorithms can think for themselves and evade the efforts of homo sapiens to control them is honestly not far off. Neither legislators nor ethicists have given this subject the thought it deserves.

“Another concern has been fully realized. Facial recognition (FR) technology is now universally employed in the People’s Republic of China to track the movements, statements and behavior of virtually all Chinese citizens (and foreign visitors). Racial profiling to track, isolate and punish the Uyghur people has proven highly successful. In the United States, James Dolan, who owns the New York Knicks and Rangers as well as Radio City Music Hall, is using facial recognition to exclude all attorneys who work for law firms that have sued him and his corporate enterprises. They cannot be admitted to the entertainment venues, despite paying the price of admission, simply because of their affiliation. Many people fear central governments, but private enterprises operated by unaccountably rich individuals have proven they can use FR and AI to control or punish those with whom they disagree.”

Christopher Le Dantec, associate professor of digital media at Georgia Tech, said, “The big gains will be in medical breakthroughs from AI- and ML-assisted research.”

Christopher Le Dantec, associate professor of digital media at Georgia Tech, predicted, “The next industrial revolution from AI and automation will further advance wealth disparity and undermine stable economic growth for all. The rich will continue to get vastly richer. No one will be safe, everyone will be watched by someone/thing. Every aspect of human interaction will be commodified and sold, with value extracted at each turn. The public interest will fall to private motivation for power, control, value extraction.

“Social media and the larger media landscape will continue to entrench and divide. This will continue to challenge political discourse, but science and medical advances will also suffer as a combination of outrage-driven revenue models and foreign actors advance mis- and disinformation to advance their interests.

“The tech sector will face a massive environmental/sustainability crisis as labor revolts spread through regions like China and India, as raw materials become more expensive, and as the mountain of e-waste becomes unmanageable.

“Ongoing experiments in digital currency will continue to boom and bust, concentrating wealth in venture and financial industries; further impoverishing late-come, retail investors; and adding to a staggering energy and climate crisis.

“Activists, journalists and private citizens will come under increased scrutiny and threat through a combination of institutional actors working against them and other private individuals who will increasingly use social media to harass, expose and harm people with whom they don’t agree.”

Beneficial (Did not respond to Harms question)
John McNutt, professor of public policy at the University of Delaware, said, “Technology offers many new and wonderful possibilities, but how people adapt those technologies to their lives and the uplift of their societies is where the real genius occurs. Our challenge has always been how we use these tools to make life better and to prevent harm.

“The legal/lawmaking system has begun to take technology much more seriously and while the first efforts have not been particularly impressive, the beginnings of new legal regimes have emerged. The nonprofit sector will rebalance away from the current bricks and mortar sector to a mix of traditional organizations, voluntary associations and virtual organizations. Many of the issues that plague the sector will be addressed by technology and the new forms of social organization it will allow. Communities will develop their own technology which will supplement government.”

Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, commented, “I hope to see the rise of the systematic organization of citizen education on digital literacy with a strong focus on information literacy. This should start in the earliest years and carry forward through life. I hope to see the prioritization of the ethics component (including bias evaluation) in the assessment of any digital system. I hope to see the emergence of innovative business models for digital systems that are NOT based on advertising revenue, and I hope that we will find a way to give credit to the real value of information.”

Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, commented, “I fear the generalization of state-governed, citizen-comprehensive surveillance systems from birth to death, from home to work and in-between. I fear the generalization of bias in uncontrollable digital systems that are designed to the objectives of surveillance capitalism.”

Juan Carlos Mora Montero, coordinator of post-graduate studies in planning at the Universidad Nacional de Costa Rica, said, “The greatest benefit that I predict for 2035 related to the digital world is that technology will allow people to have access to equal opportunities both in the world of work and in culture, allowing them to discover other places, travel, study, share and enjoy spending time in real-life experiences.”

Juan Carlos Mora Montero, coordinator of post-graduate studies in planning at the Universidad Nacional de Costa Rica, wrote, “The biggest damaging change that can occur between now and 2035 is a deepening of inequities when it comes to communications tools and the further polarization of humanity between people who have access to the infinite opportunities that technology offers and the people who do not. This situation would increase the social inequality in the economic sphere that exists today and would force it to spill over into other areas of life.”

Harmful (Did not respond to Benefits question)
John McNutt, professor of public policy at the University of Delaware, observed, “Sadly, while technology empowers positive behavior, it also empowers anti-social behavior and government repression. Hate groups, terrorists and bad actors of all stripes can use what technology offers to do their bidding. In addition, there are unintended consequences. As technology becomes more sophisticated, those externalities will become more difficult to predict and prevent.”

Harmful (Did not respond to Benefits question)
Llewellyn Kriel, retired CEO of a media services company based in Johannesburg, South Africa, wrote, “Human-centered issues will increasingly take a backseat to tyranny in Africa, parts of the Middle East and the Near East. This is due to the threat digital tech poses to inept, corrupt and self-serving governance. Digital will be exploited to keep populations under control.

“Already governments in countries in sub-Saharan Africa are exploiting tech to ensure populations in rural areas remain servile by denying connectivity, ensuring entrenched poverty and making connectedness a privilege rather than a right. This control will grow.

“Through control and manipulation of education and curricula, governments ensure political policies are camouflaged as fact and truth. This makes real truth increasingly hard to identify. Digital growth and naïveté ensure popularity and easy-to-manipulate majoritarianism become ‘the truth.’ This too will escalate.

“Health is the only sector that holds some glimmer of hope, though access to resources will remain a control screw to entrench tyranny. Already the African digital divide is being exploited and communicated as an issue of narrow political privilege rather than one of basic human rights.

“The impotence of developers to ensure equity in digital tech extends to a kind of new apartheid of which Israeli futurist Yuval Noah Harari warned. The ease with which governments can and do manipulate access and social media will escalate. For Africa the next decade is very bleak.

“The fact that organised crime remains ahead of the curve will not only seriously raise the existing barrage of threats to individuals, but exacerbate suspicion, fear and rejection of digital progress in a baby-with-the-bathwater reaction.

“The gravest threat remains government manipulation. This is already dominant in sub-Saharan Africa and will grow simply because governments can, do and will control access. These responses are being written and formulated under precisely the extensive control of the ruling African National Congress and its myriad alliance proxies.

“While the technology will grow worldwide, so will tyranny and control – especially in the geographically greater rural areas, as is currently the case in the Southern African Development Community region, which includes 16 countries in southern Africa. Rulers ensure their security by denying access. This will grow because technology development’s focus on profit over rights equates to majority domination, populist control and trendy, fashionable fads over equity, justice, fairness and balance.”

Beneficial (Did not respond to Harms question)
Robin Allen, a UK-based legal expert in AI and machine learning and co-author of “Technology Managing People: The Legal Implications,” wrote, “I expect to see really important steps forward from a mere debate about ethical principles to proper regulation of artificial intelligence in regard to overall governance and its impacts on both individuals and institutions. The European Union’s AI Act will be a complete game changer. Meanwhile, steps will be taken to ensure that definitional issues will be addressed by CEN/CENELEC and IEEE.”

Beneficial and Harmful
Warren Yoder, longtime director at Public Policy Center of Mississippi, now an executive coach, said, “As the 21st century picks up speed, we are moving beyond a focus on the protocol-mediated computation of the Internet. The new focus is on computation that acts upon itself, not yet with autonomous agency, but certainly moving in that direction. Three beneficial changes stand out for the medium-term promise they offer: Machine learning, synthetic biology and the built world.

“ChatGPT and other large language models command most of the attention at the moment because they speak our languages. Text, images and music are how we communicate with each other and, now, with computation. But machine learning offers much more. It promises to revolutionize math and science, disrupt the economy and change the way we produce and engage information. Educators are rethinking how they teach. Many of the rest of us will realize soon that we must do the same.

“COVID vaccines arrived in the nick of time, a popular introduction to the potential of synthetic biology. Drug discovery, mRNA treatments for old diseases, modifying the immune system to treat autoimmune disorders and many other advances in synthetic biology promise dramatically improved treatments in the medium term.

“Adding computation to the built environment is generally called the Internet of Things. But that formulation does not at all prepare the imagination for the computational changes we are now experiencing in our physical world. Transportation, manufacturing, even the normal tasks of everyday life will see profound gains in efficiency.

“Haunting each of these beneficial changes are the specters of gross misuse, both for the entrepreneur class’s vanity and for big-business profit. We could lose not only our privacy, but also our freedom of voice and of exit.

“Our general culture is already adapting. Artists quickly protested the appropriation of their freely shared work to create the machine learning tools that could replace them. We do not generally acknowledge the speed of culture change, which happens even faster than technology change. Culture slurps tech with its morning coffee.

“Governance, on the other hand, is a messy business. The West delegates initial governance to the businesses that own the tech. Only later do governments try to regulate the harmful effects of tech. The process works poorly, but authoritarian regimes are even worse. In the medium-term, how well we avoid the most harmful effects of machine learning, synthetic biology and the built world depends on how well we cobble together a governance regime. The pieces are there to do an adequate job in the United States and the European Union. Success is anyone’s guess.”

Richard F. Forno, principal lecturer and director of the graduate cybersecurity program at the University of Maryland-Baltimore County, responded, “AI and machine learning capabilities will continue to work their way into society, resulting in more efficient workflows in many (likely mostly white-collar) industries. By extension, more intelligent automation will likely result in significant shifts, with task-oriented jobs being eliminated for labor cost savings. Along those lines, new fields of expression, such as AI-generated art, music and entertainment, will become mainstream attractions instead of AI/ML capabilities being used only to enhance traditional entertainment products (e.g., beyond ‘de-aging,’ SFX, and creating fantasy landscapes).”

Richard F. Forno, principal lecturer and director of the graduate cybersecurity program at the University of Maryland-Baltimore County, wrote, “Anything man creates, man can misuse. Technologies used to enable freedom of speech or expression can be constrained to restrict it. Technologies used to provide ‘smart’ medical assistance (i.e., pacemakers, drug-dispensing) can be co-opted and used to cause harm.

“As a cybersecurity professor rooted in the humanities, I worry that, as with most new technologies, individuals and society will be more interested in the likely potential benefits, conveniences, cost savings and the ‘cool factor’ and fail – or be unwilling – to recognize or even consider the potential risks or ramifications. Over time, that can lead to infosocial environments in which corruption, abuse and criminality thrive at the hands of a select few political or business entities, which in turn presents larger social problems requiring remediation.”

Naveen Rao, a healthcare entrepreneur and founder and managing partner at Patchwise Labs, said, “Among the beneficial changes I see are:

  • More human-centered tech/digital development – reduction (but not elimination) of some systemic disparities in access to web/digital tools, via better rural broadband availability, more intentional product design and tech/data policy at the organization/institutional level
  • Smoother government operations in areas of taxes, DMV, voting, civic/citizen engagement (e.g., census, public services)
  • Health – better (but not universal) access to care through widespread availability of single digital front-door experiences with numerous self-serve options (check-ins, appointment scheduling, Rx refills, virtual visits, payment, etc.)
  • Knowledge and education – a shift to primarily digital textbooks in high schools and colleges, which removes the cost burden on students and enables real-time curriculum updates; a shift toward more group education
  • The ‘experience’ of digital engagement will evolve for the better, with more integrated digital tools that don’t require eyes to be glued to a screen (voice, AR/XR, IoT).”

Naveen Rao, a healthcare entrepreneur and founder and managing partner at Patchwise Labs, responded, “Everything that’s bad today is going to get worse as a direct result of the U.S. government’s failure to regulate social media platforms: cyberbullying, corporate-fueled and funded misinformation campaigns, gun violence and political extremism will all become more pronounced and ingrained, deeply shaping the minds of the next generation of adults (today’s grade schoolers).

“Adults’ ability to engage in critical thinking – their ability to discern facts and data from propaganda – will be undermined by the exponential proliferation of echo chambers, calcified identity politics, and erosion of trust in the government and social institutions. These will all become even more shrouded by the wool of digital life’s ubiquity.

“The corporate takeover of the country’s soul – profit over people – will shape product design, regulatory loopholes and the systemic extraction of time, attention and money from the population. I do think there will be a cultural counterbalance that emerges (at what point I can’t guess), towards less digital reliance overall, but this will be left to the individual or family unit to foment, rather than policymakers, educators, civic leaders or other institutions.”

Robert M. Mason, a University of Washington professor emeritus expert in the impact of social media on knowledge work, wrote, “I expect expanded accessibility to a wider range of digital technologies and applications through the use of natural language interfaces and greater use of improved graphics. This will be enabled by:

  • The ‘democratization’ of access to digital processes and services, including online information and online knowledge bases; digitization of knowledge
  • Expanded scope of online knowledge
  • Higher-resolution graphics that enable realistic representations of images and presentation of complex data relationships and analytic findings such as statistical relationships
  • Improved functionality and expanded use of natural language interfaces with digital knowledge bases and applications

“I expect greater integration of functional applications. This will stimulate innovation and the creation of new services for transportation and logistics. Past examples include the combination of GPS, large-scale integration, image processing, the World Wide Web and WiFi into the mobile phone, and the further system integration that enabled ride-sharing and delivery services.”

Robert M. Mason, a University of Washington professor emeritus expert in the impact of social media on knowledge work, said, “The erosion of trust and faith in human institutions is of concern. Expanded accessibility to a wider range of technologies and applications for storing and promoting falsehoods under the pretense of sharing information and knowledge is detrimental. Then there is also the growth in the number of ‘influencers’ who spread rumors based on false and incomplete information.

“In addition, the increased expectation of having rapid access to information and people’s accompanying impatience with delays or uncertainties associated with issues that require deeper research or analysis is extremely troublesome.

“There continues to be an erosion of trust in the institutions that value and support critical thinking and social equity.”

Beneficial and Harmful
Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, wrote, “AI has arrived. I’ve seen many cycles of AI hype. I’m going out on a limb and saying, this time, it’s real. It now passes a key indicator which signals an actual technological advance: The Porn Test – do people use this to create pornography, and are the results appealing? The outcome here isn’t ringing a bell, it’s blaring a siren, the technology has reached a point where consumer applications are being built. Further, there’s another reliable key indicator which is evident: The Lawyer Test – are expensive corporate lawyers suing over this? When professional hired guns start shooting at each other, that usually indicates they’re fighting over something significant.

“Now, this has nothing to do with the scary AI bogey beloved of writers who dress up a primal monster in a science-fiction skin. Rather, there have been major breakthroughs in the technology which have advanced the field, which will ultimately be truly world-changing. And I have to re-affirm my basic realism that we won’t be getting utopia (the Internet sure didn’t give us that). But we will be getting many benefits which will advance our standard of living.

“To just give a few examples: Even as I type this, at the very start of the development, I’m seeing practical tools which significantly improve the productivity of programmers. I don’t believe it will replace programmers as a profession. But there’s going to be a shift where some of the bottom level coding will be as obsolete as the old job of manually doing calculations.

“Entertainment is going to undergo another major improvement in production quality. I’m not going to make the silly pundit prediction of “democratization”, because that never works, for economic reasons. But I will point out the way CGI (Computer Generated Imagery) changed movies and animation, and AI will take that to another level.

“We’re already seeing experiments with new ways of searching. Search has always been an arms race between high-quality information and spammers and clickbait farms. That’ll never stop because it’s human nature. But the battlefield has just changed, and I think it’ll take a while for the attackers to figure out how to engage with the heavy artillery being rolled out.

“There’s a whole set of scientific applications which will benefit. Medical diagnostics, drug discovery, anything to do with data analysis. Essentially, we can currently generate and store a huge amount of data. And now we have a new tool to help make sense of it all. While pundits who predict the equivalent of flying cars are not justified, that shouldn’t cause us to ignore that both flying (commercial air transportation) and cars (mass produced automobiles) had profound effects over the last century.

“Nowadays, I’m deeply troubled by how much just trying to keep my digital footprint low is starting to make me feel like I’m an eccentric character in a dystopian SF novel (quirkily using ‘by Stallman’s beard!’ as an exclamation – a reference to Richard M. Stallman, who has been relentlessly arguing about freedom and technology for decades now). Every item I buy, every message I send, every physical place I go, every ebook I read, every website I browse, every video I watch … there’s a whole system set up to record it.

“When we think of the world of the book ‘1984,’ I believe one aspect which has been lost over the years is how the idea of the telescreen was, for the time, extremely hi-tech. Television wasn’t even widespread when it was written. Who would have thought that when such technology arrived, people would be eager to have telescreens installed in their homes for the consumer benefits? We consider the phrase ‘Big Brother’ to be chilling. But in that fictional world, maybe to an apolitical person it has a meaning more like ‘Alexa’ or ‘Siri.’

“There was a fascinating moment this year, when just after the U.S. Supreme Court overturned nearly 50 years of Federal protection of abortion rights, the chattering class had a brief realization that all this surveillance could be extremely useful for the enforcement of anti-abortions laws. There’s a principle that activists should try to relate global concerns to people’s local issues. But it was very strange to me seeing how this huge monitoring system could only be considered in terms of a ‘hot take’ in politics (‘Here’s this One Weird Trick’ which could be used against pregnant women seeking an abortion). And then the glimmer of insight just seemed to disappear.

“Now, it’s not as if I’m the only person to ever notice the perils. There’s quite a bit of material on the dangers of ‘surveillance capitalism.’ But doing anything about it runs into a problem of affecting present corporation profits for the benefit of safeguarding civil-liberties. And that’s just a very marginalized argument.

“I wish I knew more about how this is playing out in China or Singapore or other places that fully embrace such governmental population controls. The little I’ve read about the Chinese ‘social credit’ system seems to outline a practical collaboration of government and corporate power that is very disturbing.

“By Stallman’s beard, I worry!”

Pete Cranston, a pro bono UK knowledge consultant and former co-director of Euforic Services Ltd., said, “I expect an enhanced state of ubiquity of these technologies, enabling all global populations to participate equally in their own languages and without needing to learn any other input mechanism than speaking and output other than visual or auditory. Convergence of tech means this is likely to be through handheld mobile devices that will be as cheap as pens since there will be so many manufacturers. As we deal with the climate crisis, there will be real-time information through the above ubiquitous, convergent tech on how each individual is impacting the planet through their activities, including purchasing.

“There’s hope for some progress in limiting surveillance capitalism. The level of control recently introduced by the European Union will be extended, and companies that harvest data will only be able to do so on the basis of informed consent.”

Pete Cranston, a pro bono UK knowledge consultant and former co-director of Euforic Services Ltd., wrote, “I see here the converse of my thoughts on positive trends. One major concern is that splinternets and commercial monopolies will prevent all global populations from participating equally in their own languages and without needing to learn any other input mechanism than speaking and output other than visual or auditory. Convergence of tech means this is likely to be through handheld mobile devices which will be as randomly priced as at present, but where the highest level of security and control will be more expensive than the majority of people will (want to) afford.

“In regard to the climate crisis, greenwashed and false information will conceal the planetary impact of ubiquitous, convergent tech. Information on how each of us is impacting the planet through our activities, including purchasing and using tech, will be available only at a cost, and making use of it will require at least a first-degree education.

“In regard to surveillance capitalism, a poor outcome would be that the level of control recently introduced by the EU will not be extended and carried on, and companies that harvest data will continue to harvest and share personal data without informed consent.”

Philippa Smith, communications and digital media expert, research consultant and commentator, said, “The best and most beneficial changes will result from advances in our decision-making abilities. More than 65 years after the first computer-to-computer communication, the knowledge we have accumulated as the digital has become the norm will stand us in good stead in the ongoing pursuit of beneficial changes for all peoples.

“Drawing on our past experience and realisations about what has worked and what has not in our digital lives will enable a better mindset by 2035 to think more critically and deeply about where we want to be in the future. Designers, investors and stakeholders will be more cognizant of the need to think about social responsibility, the ways that technology can be more inclusive when it comes to the chasm of digital divides, and how potential pitfalls might be averted – especially when it comes to AI, cybersafety, cybersecurity, negative online behaviours, etc.

“Researchers will continue to work across disciplines to delve deep in applying theory and practice in their investigations and pursuing new methods – questioning and probing and gaining new knowledge to guide us along the yellow brick road towards a better digital life. Ideally, governments, tech companies and civil society will work collaboratively in designing the best possible digital life – but this will require honesty, transparency and compassion. Hopefully that is not too much to ask.”

Philippa Smith, communications and digital media expert, research consultant and commentator, wrote, “It is unlikely that by 2035 existing harmful and menacing online behaviours, particularly in terms of human health and well-being – such as cyber-bullying, abuse and harassment, scamming, identity theft, online hate, sexting, deep fakes, misinformation, dark web, fake news, online radicalisation or algorithmic manipulation – will have faded from view. In spite of legislation, regulation or countermeasures, they will have morphed in more sinister ways as our lives become more digitally immersive, bringing new challenges to confront.

“Much will depend on the management of technology development. Attempts to predict new and creative ways in which negative outcomes can be circumvented will be required. My main concern for the future, however, is at a bigger-picture level: the effects that harmful and menacing changes in digital life will have on the human psyche and our sense of reality.

“Future generations may not necessarily be better off living a deeply immersive digital life, falling prey to algorithmic manipulation or conspiracy theories, or forgetting about the real physical world and all it has to offer. We will need to be careful in what we wish for.”

Beneficial and Harmful
Howard Rheingold, pioneering internet sociologist and author of “The Virtual Community,” commented, “Large Language Models (LLMs), generative AI and machine learning are tingling my antennae a lot – the way the graphical user interface and the Web first did in their early days. But I think this evolution is going faster. Without getting into too many details I don’t understand, the large language part of it is that the models are based on very large collections of texts, images, sounds and code. So if it weren’t for all of us putting everything online over the past three decades, there wouldn’t be anything to apply machine learning to.

“If we are honestly looking back at the last decades of rapid technological change for hints about decades to come, we’re in for a world of hurt along with some really miraculous stuff. I sense that we are at an inflection point in the conduct of science as significant as the introduction of computers: the use of machine learning techniques as scientific thinking and knowledge tools. Proteins, for just one example, are topologically complex and can fold into a large number of possible shapes. Many immune-system and anti-cancer therapies rely on matching the shapes of proteins on the surface of a cell. Now, AI can propose previously unknown proteins of medical significance.

“Machine learning (oversimplified) uses iterative computations modeled on the way neurons work. It can be applied to datasets other than the omniversal ones sought by large language models (LLMs). LLMs don’t ‘know,’ but the way significant knowledge can be parsed out of them is, in my opinion, impressive, although the technology is in its infancy. Yes, they swallow all the bull along with the good info, and yes, they are unreliable and make stuff up, and no, the models are tools, not General Intelligence. They don’t understand. They do statistics. Think of them as thinking-knowledge tools. As mathematics and computers come to enable human minds to go places they were previously unable to explore, I see a lot of change coming from this symbiosis of machine learning and human production of words, images, sounds and code.

“Computational biology is a good example of this two-edged miracle. Wanna get scary about the other edge of the AI sword? Generative AI once suggested 40,000 chemical weapons in just six hours. I recall that Bill Joy wrote a Wired magazine essay (23 years ago!) titled ‘Why the Future Doesn’t Need Us.’ In that essay he mentioned affordable desktop wetlabs, capable of creating malicious organisms. A good way to think about a proposed technology is to ask: What would 4chan do with it? Connecting computational biology to wetlab synthesizers is just a matter of money and expertise. What will 4chan do with LLM tools?”

Robert Y. Shapiro, professor and former chair of the political science department at Columbia University and faculty fellow at the Institute for Social and Economic Research and Policy, responded, “The changes to watch for – and this is being optimistic: First, I have great concern for the protection of data and individuals’ privacy, and second, there have to be much more serious, concerted and thoughtful efforts to deal with issues of misinformation and disinformation. Unfortunately, these hopes could also be answers to a question about worst and least-beneficial changes.”

Robert Y. Shapiro, professor and former chair of the political science department at Columbia University and faculty fellow at the Institute for Social and Economic Research and Policy, commented, “I repeat my earlier response. I have great concern for the protection of individuals’ data and privacy, and, second, there have to be much more serious, concerted and thoughtful efforts to deal with issues of misinformation and disinformation.”

Beneficial (Did not respond to Harms question)
Bill Woodcock, executive director of the Packet Clearing House, said, “The foundation of all current digital technology is electricity, and the single largest beneficial development we’re seeing right now is the shift from the consumption of environmentally destructive fossil fuels to the efficient use of the sun’s energy. This is happening in several ways: First, unexpectedly large economies in photovoltaic panels and the consequent dramatic reduction in the cost of solar-derived electricity is making all less-efficient forms of electrical production comparatively uneconomical. Second, non-electrical processes are being developed with increasing rapidity to supplant previously-inefficient and energy-consumptive processes, for a wide range of needs, including cooling and water purification. Together, these effects are reducing the foundational costs of digital technology and equalizing opportunities to apply it. Together with the broader distribution of previous-generation chip-making technologies and the further proliferation of open-source designs for hardware as well as software, I anticipate that a far greater portion of the world’s population will be in a position to innovate, create and produce digital infrastructure in 2035 than today. They will be able to seize the means of production.”

Harmful (Did not respond to Benefits question)
Stephen Abram, principal at Lighthouse Consulting, Inc., wrote, “As I write this, ChatGPT has been available for only six weeks and is already changing strategic thinking. Our political and governance structures are not competent to comprehend the international, transformative and open challenge this technology offers, and regulation, if attempted, will fail. If we can invest in the conversations and agreements needed to manage the outcomes of generative AI (good, neutral and bad) and avoid the near-term potential consequences of offloading human endeavor, creativity, intelligence, decisions, nuance and more, we might survive the first wave of generative AI.

“As copycat generative AIs proliferate this is a Gold Rush that will change the world. Misinformation, disinformation, political influence through social media: As the tools, including ChatGPT, allow for the creation of fake videos, voices, text and more, the problem is going to get far worse and democracies are in peril. We have not made a dent in the role of bad actors and disinformation and the part they play in democracies. This is a big hairy problem that is decades away from a framework, let alone a solution.

“TikTok has become somewhat transformational. Ownership of this platform aside, the role of fake videos and its strong presence in post-millennial demographics are of concern. Are any of the alternatives in place any better? (Probably not.) Then there is the transformation of core tools: Google and search, the Microsoft suite, the Apple portfolio, etc. The massive investments of Microsoft, Alphabet/Google, Meta and Apple in generative AI tools, and their emerging integration with core workplace tools in the absence of a conversation and framework for protecting privacy, identity, etc., are a massive concern.”

“ChatGPT will start with a ‘let-a-thousand-flowers-bloom’ strategy for a few years. As always, human adoption of the tools will go through a curve that takes years and results in adoption that can be narrow or broad, sometimes with different shares of usage in different segments. It is likely that programming and coding will adopt more quickly. Narrow tools such as those for conversational customer service, art (sadly including publishing, video, visual art), writing (including all forms of writing – presentations, scripts, speeches, white papers) and more will emerge gradually but quickly.”

Beneficial and Harmful
William L. Schrader, advisor to CEOs, previously co-founder of PSINet, wrote, “I am disappointed with mankind and where it has taken the internet. I hope the dreams we old Internet folks had that kept us sleeping soundly after working for 18 hours a day, seven days a week to build the greatest communications system ever do come true. So far there have been good and bad outcomes.

1) Health and scientific advances moving twice or three times faster. This is not limited to big pharmaceuticals; it is focused on many massive improvements. One is fully remote surgeries in small towns without doctors, with only lightly trained medical assistants or one registered nurse on site. This would include all routine surgical procedures. For more complex surgeries, the patient would need to be flown or driven hundreds of miles and possibly die in the process. This would be global so that we all had access, not just the rich. THAT is what we imagined in 1985 and before. It only takes really outstanding robotic 3-D motion equipment installed in a surgical suite that is maintained by the local team, high bandwidth supporting the video for the expert surgeon in a big medical center and the robotic controls from the expert’s location to the surgical site, and a team on both sides that is willing to give it a try and not get hung up on insurance risk. This must involve participants from multiple locations. This is not simply a business opportunity for a startup to assemble (the equipment is almost there, with the software and the video). This is a lifesaver.

2) Truth beating fascism is now required. We built this commercial Internet to stop the government from limiting the information each of us could access. We imagined that only a non-government-controlled Internet would enable that outcome. Freedom for all, we thought. FALSE. Over the past decade or so political operatives in various parts of the world have proven that social media and other online tools are excellent at promulgating fear and accelerating polarization. Online manipulations of public sentiment rife with false details that spread fear and create divisiveness have become a danger to democracy. I would like the Internet, the commercial Internet, to fight back with vigor. What Internet methods, what technologies, what timing, all remains to be seen. But people (myself included) understand it is time to build strong counter measures. We want all sides to be able to talk openly.

3) Climate change and inflation receive a lot of attention in the press, for both Main Street and Wall Street. Looking at inflation, I trust our financial balancing system: the Federal Reserve Board and the thousands of brilliant analysts worldwide who watch its movements using the latest online tools. Other nations’ central banks are just as in tune as ours, even if, like ours, they are a bit focused on their own country. Inflation will resolve itself. Climate change, however, will not be solved. Not by politicians of any persuasion, not by the largest power companies, not by the latest gadgets in electric vehicles (EVs), not by carbon-capture technology and possibly not by anything. That could result in the end of the planet supporting homo sapiens. Alternatively, the commercial Internet could encourage the 2 to 4 to 6 billion people who use it to not drive for one hour and to turn off all electricity for the same hour, essentially a unified strike to tell the elected, appointed, monarchs or autocrats in charge of or part of the government of all countries that the time has come to do something so our grandchildren can survive. Only the Internet can do this. Please, someone start and support these movements.

4) Science tells us that we MUST expect more pandemics. Bill Gates has stated it clearly and funded activities that promise to help. We must stop listening to ‘it’s over’ or ‘it’s not any worse than a cold’ when our beloved grandparents have died or expect to if they mingle with their children’s children. In total over 6.7 million people have died. In the last year, 85 percent of the dead were elderly (over 65) in all countries (rich and poor). If only the commercial Internet could band together to convince those people who don’t believe in pandemics or don’t care about their grandparents to stop voting or to die from COVID, or the next one that comes along. Yes, this is a positive statement. There is a way for the Internet to persuade naysayers to stay away from the elderly or shop when they do not.

5) War in Ukraine and Russia will expand beyond Ukraine whether it ‘loses’ or ‘wins.’ The Internet can continue to support the tens of thousands of Ukraine voices – videos showing hundreds of indictable war crimes by the head of Russia who started the war a year ago. The Internet can communicate from any one person to any other one person or to millions. The truth matters. Lives are being lost hourly on all sides, all because we fail to say something or do something.”

“There are many scenarios that may play out between now and 2035, but the worst is the following: The commercial Internet has created opportunities for the evil side of man to excel with great speed, impact and lack of accountability. I am not talking about spam email, phone or text messages. I am talking about this: At its next election, the United States, the best of any democracy, might come to be led by a fascist supremacist. If this happens, it is likely that that faction may also have control of the Supreme Court and both houses of Congress. This could be accomplished using manipulative tactics on the Internet that create fear, spread lies and polarize the populace. The next step could be the U.S. sending military support to Russia instead of Ukraine, wiping out the middle class. The Internet enables this. The broad and sometimes far too silent community of intelligent, caring citizens who prefer to not live in a fascist state must implement the Internet to find a way to stop it.”

Harmful (Did not respond to Benefits question)
June P. Parris, a member of the Internet Society chapter in Barbados and former member of the UN Internet Governance Forum Multistakeholder Advisory Group, wrote, “Human rights: Some developing countries are not aware of or practice human rights. If they are not aware or misunderstand human rights, how can they put policies in place that will not harm citizens? What needs to take place is a standard across a set of policies and protocols that are followed by every government, every country and all citizens.

“Governments: They need to follow these policies, guidelines and protocols religiously, not the way they do things now; they need to be made accountable. The poor deserve the same opportunities as the rich. Institutions are not connected; systems should hold data and should monitor this data to prevent breaches and hacking.

“Human Knowledge: Hacking – for example a recent incident at a local hospital – should be explained and reported back. Experts should be brought in to fix the problems. Companies in developing countries do not always employ those qualified to do the job. Often these people are not up to date with what is going on in the developed world; there is a lack of up-to-date skills in the industry.

“Human health and well-being: All should have the same rights in this sector. All citizens should have access to IT, health treatment and education, and the cost of the internet should be affordable so that everyone has full access to health care, living essentials and education.

“Human connections: Some have access to information and some don’t. Relying on hearsay is not an effective way to communicate. If you are not a member of the party, some of your rights are denied and information is not across the board. The elderly suffer as a result. Social policy is lacking, especially for the poor, the disabled, the elderly and children with problems. Charities are not always operating with guidelines. The right of speech and access to assistance is not always practiced; complainers are victimized and disregarded.

“As I see it, humans seem resistant to technology. Despite several opportunities to use technology, not much has changed over the past 10 years, and that is unlikely to change among citizens, governments and technocrats.

“Governments have introduced online platforms in order to make things easier for citizens, however, especially in the developing world, the platforms are not easy to negotiate and seem not to be maintained efficiently. Those of us who want to use tech are frustrated. In many instances websites are off, WiFi is not working properly or is too expensive.

“Technocrats are arrogant and misunderstand what is needed for easy access to online tools and access for the public. Sometimes the people just have to give up.

“Populations, even those who should know how to use technology, seem lazy, and the use of technology is not up to standard. Schools do not seem to be teaching students the use of technology.”

Beneficial and Harmful
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, said, “The dozen years ahead will bring the maturing of the relationship between human and artificial intelligence. In many ways, this will foster equity through enhanced education, access to skill development and broader knowledge for all – no matter the gender, race, where people live or their economic status.

“Education will be delivered through AI-guided online adaptive learning for the most part in the first few years, and more radical ‘virtual knowledge’ will evolve after 2030. This will allow global reach and dissemination without limits of language or disability. The ubiquity of access will not limit the diversity of topics that are addressed.

“In many ways, the use of AI will allow truths to be verified and shared. A new information age will emerge that spans the globe.

“Perhaps the most impressive advances will come with Neuralink-type connections between human brains and the next evolution of the internet. Those without sight will be able to see. Those without hearing will be able to hear. And all will be able to connect to knowledge by just tapping the connected internet through their virtual memory synapses. Virtual learning will be instant. One will be able to virtually recall knowledge into the brain that was never learned in the ways to which we are accustomed. Simply think about a bit of information you need, and it will pop into your memory through the connected synapses. The potential for positive human impact for brain-implanted connectivity is enormous, but so too is the potential for evil and harm.

“The ethical control of knowledge and information will be of the utmost importance as we move further into uses of these digital tools and systems. Truth is at the core of ethics. Across the world today, there seems to be a lower regard for truth. We must change this trend before the power of instant and ubiquitous access to knowledge and information is released.

“My greatest concern is that politics will govern the information systems. This may lead to untruths, partial truths and propaganda being disseminated in the powerful new brain-connected networks. We must find ways to enable AI to make judgments of truth in content, or at least allow for access to the full context of information that is disseminated. This will involve international cooperation and collaboration for the well-being of all people.”

Philip J. Salem, a communications consultant and professor emeritus at Texas State University, said, “First, I think the most important changes will relate to climate change. There will be advances in storing energy, and the political system will move from mixed sources of energy to those that exclude fossil fuels. Furthermore, digital technologies will evolve to help manage personal energy consumption and to help diminish those behaviors that damage the climate. Second, there will be more mindful use of social media, especially among the newer generation of users. Social media will also be less dominating, with more fluid enrollment in a variety of sites. Third, governments will begin to restrict a variety of digital uses. This will vary from enforcement of monopoly laws to holding some organizations subject to libel and slander laws for misuse of their sites.”

Philip J. Salem, a communications consultant and professor emeritus at Texas State University, wrote, “In regard to human wellness, I see three worrying factors. First, people will continue to prefer digital engagement to actual communication with others. They will use the technology to ‘amuse themselves to death’ (see Neil Postman) or perform for others, rather than engage in dialogue. Performances seek validation, and for these isolated people validation for their public performances will act as a substitute for the confirmation they should be getting from close relationships. Second, people will increase their predisposition to communicate with others who are similar to themselves. This will bring even more homogenous social networks and political bubbles. Self-concepts will lose more depth and governance will be more difficult. Third, communication competence will diminish. That is, people will continue to lose their abilities to sustain conversation.”

Beneficial and Harmful
Valerie Bock, principal at VCB Consulting, wrote, “We are going to go through a period of making serious mistakes as we integrate artificial intelligence into human life, but we can emerge with a more-sophisticated understanding regarding where human judgment is necessary to modify any suggestions made by our artificially intelligent assistants. Just as access to search engines and live mapping has made life better informed and more efficient for those of us privileged enough to have access to them, AI, too, will help people make better decisions in their daily lives.

“It is my hope that we will also become more sophisticated in our use of social networks. People will become aware of how they can be gamed, and they will benefit from stronger regulations around what untruths can be shared. We will also learn to make better use of our access to the strongest thinkers in our personal social circles and in the wider arenas in our societies.

“By 2035, I am hopeful that our social conventions will have adapted to the technological advances which came so quickly. Perhaps we will instruct our personal digital assistants to turn off their microphones when we are dining with one another or entertaining. We will embrace the basket into which our smartphones go when we are having face-to-face interactions at work and at home. There will be a whole canon of sound advice regarding when and under what circumstances to introduce our children to the tech with which they are surrounded. I’m hopeful that that will mean practicing respectful interaction, even with the robots, while understanding all the reasons why time with real people is important and precious.”

Valerie Bock, principal at VCB Consulting, said, “I was once an avid fan of the notion that markets will, with appropriate feedback from consumers, adjust to serve human welfare. I no longer believe that to be true. Decades of weakening governmental oversight have not served us. Technology alone cannot serve humanity. We need people to look out for one another, and government is a more likely source of largescale care than private enterprise will ever be.

“I fear that the tech industry ethos that allows new technologies to be released to the public without serious consideration of potential downsides is likely to continue. Humans are terrible at imagining how our brilliant inventions can go wrong. We must commit to regulation and adequately fund regulators in a way that allows them the capacity to keep abreast of developments and encourage industry to better pre-identify the unexpected harms that might emerge when they are introduced to society. If not, we could see a nightmarish landscape of even worse profiteering in the face of real human suffering.”

Beneficial (Did not respond to Harms question)
Valdeane Brown, founder of NeurOptimal, predicted, “There will be fully autonomous vehicles embedded within comprehensive ecosystems that have a fundamental emphasis on safety, efficiency and ease of use. And there will be fully individualized and comprehensive health management systems with emphasis on empowering healthy living for each person.”

Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human-centered development, the trend will continue of increasingly sophisticated tool functionality with increasingly accessible and simplified interfaces, allowing a much larger number of humans to develop digital assets (software, content, etc.) without requiring years of specialized training and experience. There will be more on-demand crowdsourcing; the recent example of OSINT in the Russian invasion of Ukraine demonstrates how large groups of volunteers can create valuable analysis from open-source information. This trend will continue with largescale crowdsourcing activities spontaneously emerging around topics of broad interest and concern.

Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human-to-human connections, the trend of increasing fragmentation of society will continue, aided and abetted by commercial and governmental systems specifically designed to ‘divide and conquer’ large populations around the world.

“There will continue to be problems with available knowledge. Propaganda and other disinformation will continue to grow, creating a balkanized global society organized around which channels, platforms or echo chambers people subscribe to. Postmodernism ends but leaves in its wake generations of adults with no common moral rudder to guide them through the rocks of future challenges.

“In regard to human well-being, I expect that digital globalization becomes a double-edged sword. There will be borderless communities with shared values around beauty and creativity on one side and echo chambers that justify and cheer genocide and imperial aggression on the other, especially in the face of the breakdown of economic globalization.”

Raquel Gatto, general consul and head of legal for the network information center of Brazil, NIC.br, wrote, “The best and most beneficial change by 2035 would be to achieve universal and meaningful connectivity. It is important to have everyone connected to the Internet, but also that each person has access to the same opportunities online, which includes digital literacy, basic skills, local content, and proper-quality connections and equipment, for example.”

Raquel Gatto, general consul and head of legal for the network information center of Brazil, NIC.br, said, “The most harmful and menacing change by 2035 would be the overregulation that breaks the Internet. The risk of fragmentation that entails a misleading conceit of digital sovereignty is rising and needs to be addressed in order to avoid the loss of the open and global Internet that we know and value today.”

Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” said, “As the possibility of mass human destruction and close-to-complete extinction becomes more of a reality, greater thought (and perhaps the start of planning) will be given to how to archive all of human knowledge in ways that will enable it to survive all sorts of potential mass disasters. This involves software and hardware. The hardware is the type(s) of media in which such knowledge will be stored to last eons (titanium? DNA? etc.); the software involves the type of digital code – and lexical language – to be used so that future generations can comprehend what is embedded (whether textual, oral or visual). Another critical question: What sort of knowledge to save? Only information that would be found in an expanded wiki-type of encyclopedia? Or perhaps everything contained in today’s digital clouds run by Google, Amazon, Microsoft, etc.? A final element: who pays for this massive undertaking? Governments? Public corporations? Private philanthropists?”

Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” commented, “Digitally-based artificial intelligence will finally make significant inroads in the economy, i.e., causing increasing unemployment. How will society and governments deal with this? We don’t know.

“I see the need for huge changes in the tax structure (far greater corporate tax; elimination or significant reduction of individual taxation). This is something that will be very difficult to execute, given political realities, including intense corporate lobbying and ideological stasis.

“What will growing numbers of people do with their increasing free time in a future where most work is being handled autonomously? Can people survive (psychologically) being unemployed their entire lives? Our educational system should already be placing far more emphasis on leisure education and on what used to be called the liberal arts. Like governments, educational systems tend to be highly conservative regarding serious change.

“Obviously, all this will not reach fruition by 2035 (much will come later), but the trend will become obvious – leading to greater political turmoil regarding future-oriented policymaking (taxes, Social Security, corporate regulation, education, etc.).”

Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool, England, and research lead for the UK government’s Digital Culture team, wrote, “If digital tools can help with the climate crisis this could be their greatest beneficial impact.

“Separate from that, I think that there are two critical areas in which digital systems and media could have a beneficial impact: 1) Health and well-being – across everything from big data and genomics to everyday health apps, digital systems and media could have considerable benefits, BUT only if well managed and regulated.

“2) Knowledge production – this is obviously part of point 1 above. Digital systems provide unique opportunities to further human knowledge and understanding, but only if the current somewhat naive empiricism of ‘AI’ (= bad stats models) is replaced with far more thoughtful approaches. That means taking the computer scientists out of the driving seat and putting the topic specialists in charge again.”

Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool, England, and research lead for the UK government’s Digital Culture team, wrote, “Digital systems and media are human societal products. The benefits or harms they engender are the products of our choices about how we as individuals, communities, organisations, governments and societies use and deploy them. They will have a mix of benefits and hazards. On current form – given the lack of societal regulation (though I note the EU is still at the forefront of trying to regulate), the continued ‘break things’ attitude of big tech, and the benefits that digital systems provide to both powerful (big corporate) and very authoritarian (e.g., China) actors – I worry that the harms will outweigh the benefits for most citizens.

“I worry, therefore, that tech may facilitate some quite draconian and unpleasant societal changes driven by corporate or political desire (or inaction) – limiting rights and freedoms, damaging civic institutions, etc. – while at the same time helping some live longer, more comfortable lives. The question should be: ‘What societal changes do we need to make to ensure we maximise the benefits and limit the harms of digital systems and media?’”

Robert Bell, co-founder of the Intelligent Community Forum, predicted, “AI will be the technology with the greatest impact as it works its way into countless existing applications and spawns completely new ones, like the much-heralded ChatGPT. The potential positives are huge: greater productivity in fields where IT has not produced progress, from education to healthcare; far deeper and broader analysis of our social and policy challenges to yield new solutions; and greater digital inclusion as platforms better anticipate our needs and communicate by voice and gesture. Getting the positives without the negatives, of course, will take huge skill and huge luck.

“A lesser-known advance will be in the digitization of our knowledge of Earth. The new fleets of earth observation satellites in space are not fundamentally about producing the pictures we see in the news. They are about producing near-real-time data with incredible precision and detail about the changing environment, the impact of public-sector and private-sector actions, and the resources available to us. Most important, the industry is collaborating to create a standards-based ecosystem in the cloud that makes this data broadly available and that enables non-data-scientists to put it to work.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “The potential for AI to be used for evil is almost unlimited, and it is certain to be used that way to some extent. A relatively minor – if still frightening – example is the bots that pollute social media to carry out the agenda of angry minorities and autocratic regimes. Powerful AI will also give malign actors new ways to create a ‘post-truth’ society using tools such as deepfake images and videos. On the more frightening side will be weapons of unprecedented agility and destructive power, able to adapt to a battlespace at inhuman speed and, if permitted, make decisions to kill.

“Our challenge is that technology moves fast and governments move slowly. A company founder recently told me that we live in a 21st century of big, knotty problems but operate in an economy formed in the 20th century after the Second World War, managed by 19th-century government institutions. Keeping AI from delivering on its frightening potential will take an immense amount of work in policy and technology and must succeed in a world where a powerful minority of nations will refuse to go along.”

R Ray Wang, founder and principal at Constellation Research, predicted, “We will see a massive shift in how systems are designed from persuasive technologies (the ones that entrapped us into becoming the product), to consensual technologies (the ones that seek our permission), to mindful technologies (the ones that work toward the individual’s benefit, not the network’s or the system’s).

“In our digital life, we will see some big technology trends:

  • Autonomous Enterprise – the move to whole-scale automation of our most mundane tasks to allow us to free up time to focus on areas we choose.
  • Machine scale vs. human scale – we have to make a conscious decision to build things for human scale, yet operate at machine scale.
  • The right to be disconnected (without being seen as a terrorist) – this notion of privacy will lead to a movement to ensure we can operate without being connected and retain our anonymity.
  • Genome editing – digital meets physical as we find ways to augment our genome.
  • Cybernetic implants – expect more human APIs connected to implants, bio-engineering and augmentation.”

R Ray Wang, founder and principal at Constellation Research, said, “The biggest challenge will be the control that organizations such as the World Economic Forum and other powers that be exert over the ability of independent thinkers – and independent thinking – to challenge the power of private-public partnerships with a globalist agenda. Policies are being created around the world to take away freedoms humanity has enjoyed and move us more towards the police state of China. Existing lawmakers have not created the tech policies to provide us with freedoms in a digital era.”

Steve Delbianco, president and CEO of NetChoice, wrote, “There will be great progress in health diagnostics. AI will enable fast and inexpensive diagnostics of health conditions, based on images, video, biometric measurements, self-reporting, etc. Generative AI will then translate the diagnostic info into actionable prose in a wide range of scripts and languages. Access to human knowledge will be greatly enhanced by generative AI, which provides answers in digestible chunks of prose in a wide range of scripts and languages.”

Steve Delbianco, president and CEO of NetChoice, said, “Regulation designed to curb interest-based advertising will change the way that free online services are working today. Ads that are not based on viewer interest command lower ad rates, meaning less ad revenue. With less ad revenue, services will need to show more ads that are less relevant, and/or cut investment in content and services. And many sites will erect pay walls to replace lost ad revenue. The detrimental effect will be to raise barriers for lower-income users when it comes to accessing knowledge and resources online.”

Rance Cleaveland, professor of computer science at the University of Maryland-College Park and former director of the Computing and Communication Foundations division of the National Science Foundation, said, “The primary benefits will derive from the ongoing integration of digital and physical systems (so-called cyber-physical systems).

“There will be a revolution in healthcare, with digital technology enabling continuous yet privacy-respecting individual health monitoring, personalized immunotherapies for cancer treatment, full digitization of patient health records and radically streamlined administration of health-care processes. The healthcare industry is loaded with low-hanging fruit. I still cannot believe, in this day and age, that I have to carry a plastic card around with me to even obtain care!

“There will be full self-driving vehicle support on at least some highways, with attendant improvements in safety, congestion and driver experience. The trick to realizing this involves the transition from legacy vehicles to new self-driving technology. I expect this to happen piecemeal, with certain roads designated as ‘self-driving only.’

“There will be much better telepresence technology to support hybrid in-person and virtual collaboration among teams. We have seen significant improvements in virtual meeting technology (Zoom, etc.), but having hybrid collaborative work is still terribly disappointing. This could improve markedly with better augmented-reality technology.”

Rance Cleaveland, professor of computer science at the University of Maryland-College Park and former director of the Computing and Communication Foundations division of the National Science Foundation, predicted, “The biggest harms all derive from the unfettered anonymity and lack of cross-checking of information on the internet. These problems already exist and are not likely to have been fixed by 2035. Specific problems include:

  • Cyber-bullying and cyber-harassment
  • Cyber-crime, especially fraud (already a terrible scourge)
  • Disinformation and misinformation.”

Tim Bray, a technology leader who has worked for Amazon, Google and Sun Microsystems, wrote, “The change that is dominating my attention is the rise of the ‘Fediverse,’ including technologies such as Mastodon, GoToSocial, Pleroma and so on. It seems unqualifiedly better for conversations on the Internet to be hosted by a network of federated providers than to be ‘owned’ by any of the Big Techs. The Fediverse experience, in my personal opinion, is more engaging and welcoming than that provided by Twitter or Reddit or their peers. Elon Musk’s shenanigans are generating a wave of new voices giving the Fedisphere a try and (as far as I can tell) liking it. I’m also encouraged as a consequence of having constructed a financial model for a group of friends who want to build a sustainable self-funding Mastodon instance based on membership fees. My analysis shows that the cost of providing this service is absurdly low, somewhere in the range of $1/user/month at scale. This offers the hope for a social-media experience that is funded by extremely low monthly subscription or perhaps even voluntary contributions. It hardly needs saying that the impact on the digital advertising ecosystem could be devastating.”

Tim Bray, a technology leader who has worked for Amazon, Google and Sun Microsystems, predicted, “The final collapse of the cryptocurrency/Web3 sector will be painful, and quite a few people will lose a lot of money – for some of them it’s money they can’t afford to lose. But I don’t think the danger will be systemic to any mainstream sector of the economy. Autocrats will remain firmly in control of China and Russia, and fascist-adjacent politicians will hold power in Israel and various places around Eastern Europe. In Africa and Southeast Asia, autocratic governments will be more the rule than the exception. A substantial proportion of the U.S. electorate will be friendly to anti-democratic forces. Largescale war is perfectly possible at any moment should Xi Jinping think his interests are served by an invasion of Taiwan. These maleficent players are increasingly digitally sophisticated. So my concern is not the arrival of malignant new digital technologies, but the lethal application of existing technologies to attack the civic fabric and defense capabilities of the world’s developed, democratic nations.”

Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, wrote, “AI will contribute to many aspects of life, including art and literature. Continuing improvements in the price/performance of digital equipment will drive global economic gains. The EU will continue to lead the way in the push for human-centric use of technology. There will be continued gains in health technology, including electronic health data systems.”

Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, said, “Some of what follows on my list of worrisome areas may not seem digital at first blush, but everything is digital these days.

  • Armed conflict or the threat of conflict causes human and economic losses, and further impedes supply chains
  • Further decline in democratic institutions
  • Continued health crises (antibiotic resistant diseases, etc.)
  • Climate crisis leads to food crises/famine, migration challenges
  • Further growth of misinformation/disinformation
  • Massive breakdown of global supply chains for digital goods and (to a lesser degree?) services
  • The U.S.A.-China trade war increasingly drives a U.S.A.-EU trade war
  • Fragmentation of internet due to geopolitical tensions
  • Further breakdown of global institutions, including the World Health Organization and World Trade Organization”

Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, wrote, “I see an opportunity to 1) disseminate the benefits of data to a broader cross-section of the world’s people through new structures and policies, and 2) use sophisticated data analysis such as AI to solve cross-border wicked problems. Unfortunately, governance has not caught up to data-driven change.

“If public, private and non-governmental entities could protect and anonymize personal data (a big if) and share it to achieve public good purposes, the benefits of data sharing in mitigating shared wicked problems could be substantial. Policymakers could collaborate to create a new international organization, for now let’s call it the Wicked Problems Agency. It could prod societal entities – firms, individuals, civil society groups and governments – to share various types of data in the hope that such data sharing coupled with sophisticated data analysis could provide new insights into the mitigation of wicked problems.

“The Wicked Problems Agency would be a different type of international organization – it would be cloud-based and focused on mitigating problems. It would also serve as a center for international and cross-disciplinary collaboration and training in the latest forms of data analysis. It would rent useful data and compensate those entities that hold and control data. Over time, it may produce additional spillovers; it might inspire greater data sharing for other purposes and in so doing reduce the opacity over data hoarding. It could lead entities to hire people who can think globally and creatively about data use. It would also provide a practical example of how data sharing can yield both economic and public good benefits.”

Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, commented, “Today’s trends indicate data governance is not likely to be improved without positive changes. Firms are not transparent about the data they hold (something that corporate-governance rules could address). They control the use/reuse of much of the world’s data, and they will not share it. This has huge implications for access to information. In addition, no government knows how to govern data comprehensively, understanding the relationships between algorithms protected by trade secrets and the reuse of various types of data. The power relationship between governments and giant global firms could be reversed again, with potential negative spillovers for access to information. In addition, nations/states now have rules allowing the capture of biometric data collected by sensors. If firms continue to rely on surveillance capitalism, they will collect ever more of the public’s personal data (including eye blinks, sweat, heart rates, etc.). They can’t protect that data effectively, and they will be incentivized to sell it. This has serious negative implications for privacy and for human autonomy.”

Peter Levine, professor of citizenship and public affairs at Tufts University, commented, “In the online ‘public sphere’ (settings where strangers come together to share ideas and generate public opinion) things might improve if the large for-profit social networks lose users to alternative platforms that are either decentralized – like Mastodon – or democratically governed. We might also see sustainable models for producing journalism and paying reporters.”

Peter Levine, professor of citizenship and public affairs at Tufts University, said, “I am worried about substantial deterioration in our ability to concentrate, and especially to focus intently on lengthy and difficult texts. Deep reading allows us to escape our narrow experiences and biases and absorb alternative views of the world. Digital media are clearly undermining that capacity.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “There will be widespread robotic automation that boosts annual labor productivity rates by several percentage points.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “One harm that will have significant impact is people’s continuing decline in reading long-form documents (articles/books).”

Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, responded, “Developments in AI will create effective natural language tools. These tools will make a broader range of human knowledge available to every person. Questions will be answered in a more context-dependent way, with much more nuance than today’s search tools provide. People will be able to get specific answers to questions about their health, legal questions, engineering issues, tailored advice for books, movies, shows, etc. The questions and answers will be stated in natural language and will be tailored to the questioner’s specific needs and interests.”

Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, said, “Developments in AI will create effective natural language tools. These tools will make people feel they are getting accurate, individualized information but there will frequently be no way of checking. The actual information will be more homogeneous than it seems and will be stated with overconfidence. It will lead to large numbers of people obtaining biased information that will feed groundless ideology. Untruths about health, politics, history and more will pervade our culture even more than they already do.”

Beneficial and Harmful
Jim Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader, wrote, “Many potential benefits lie ahead thanks to the possibilities raised by ongoing advances in humans’ uses of digital technology.

1) There will be a shift from ‘human-centered design’ to ‘humanity-centered design’ in order to build a safer and better world. This is an increasingly necessary perspective shift as people born of the physical realm push deeper into the digital realm guided in part by ideas and ideals from the mathematical/philosophical/spiritual realms. Note that the shift from ‘human-centered’ to ‘humanity-centered’ is required per Don Norman’s new 2023 book ‘Design for a Better World: Meaningful, Sustainable, Humanity-Centered.’ Safely advancing technologies increasingly requires a transdisciplinary systems perspective as well as awareness of overall harms, not just the benefits that some stakeholders might enjoy at the expense of harms to under-served populations. The service research community, which studies interaction and change processes, has been emphasizing benefits of digital tools (especially value co-creation). It is now increasingly aware of harms to under-served populations (value co-destruction), so there’s hope for a broadening of the discussion to focus on harms and benefits as well as under-served and well-served populations of stakeholders. The work of Ray Fisk and the ServCollab team is also relevant regarding this change to service system design, engineering, management and governance.

2) There will be greater emphasis on how human connections via social media can be used to change conflict into deeper understanding, reducing polarization. It is hoped that there will be institutions and governance wise enough to eliminate poverty traps. An example of policy to reduce poverty in coming decades is ‘Buy2Invest,’ which ensures that customers who buy are investing in their retirement account.

3) Responsible actors in business, tech and politics can work to invest more systematically and wisely in protecting human rights and enforcing human responsibilities. One way is via digital-twin technologies that allow prediction of harms and benefits for under-served and well-served populations. Service providers will not be replaced by AI, but service providers who do not use AI (and have a digital twin of themselves) will be replaced by those who do use AI. Human rights and responsibilities, and harms and benefits, attach to responsible actors (e.g., people, businesses, universities, cities, nations, etc.) that give and get service (AKA service system entities). The world simulator will include digital twins of all responsible actors, allowing better use of complexity economics in understanding interaction and change processes. Note that large companies like Amazon, Google, Facebook, Twitter, etc. are building digital twins of their users/customers to better predict behavior patterns and create offers of mutual value/interest. Responsible actors will increasingly build and use AI digital twins of themselves.

4) There will be an increased emphasis on the democratization of open, replicable science – including the ability to rapidly rebuild knowledge from scratch and allow the masses to understand and replicate important experiments. The future of expertise depends on people’s ability to rebuild knowledge from scratch. The world needs better AI models. To get the benefits of service in the AI era, responsible actors need to invest in better models of the world (science), better models in people’s heads guiding interactions (logics), better models of organizations guiding change (architecture), and better models of technological capabilities and limitations shaping intelligence augmentation (IA).

5) Thanks to AI’s advancing technological capabilities it is likely that we are entering a golden age of service that will improve human well-being, including in the area of confronting harms done to under-served populations.

6) Local energy infrastructure will be advanced via decarbonized, geothermal drilling breakthrough innovations. Universities are increasingly adding AI data centers on campuses and experimenting with geothermal. The systems at top universities in each city serve as examples of decarbonized local energy infrastructure powering AI systems.

“Many challenges are emerging due to the ongoing advances in humans’ uses of digital technology.

1) There is a lack of accountability for criminals involved in cybersecurity breaches/scams that may slow the digital transformation and the adoption of digital twins for all responsible actors. For example, Google and other providers are unable to eliminate all Gmail spam and phishing emails – even though their AI does a good job of filtering and identifying spam and phishing. The lack of ‘human-like dynamic, episodic memory’ capabilities in AI systems slows the adoption of digital-twin ownership by individuals and the development of AI systems with commonsense reasoning capabilities.

2) A winner-take-all mindset, rather than the balanced collaboration that is necessary, is dominant in the business and geopolitics of the U.S., Russia, China, India and others.

3) A general resistance to welcoming immigrants by providing accelerated pathways to productive citizenship is causing increasing tensions between regions and wastes enormous amounts of human potential.

4) Models show that it is likely that publishers will be slow to adopt open-science disruptions.

5) It is expected that mental illness, anxiety and depression exacerbated by loneliness will become the number-one health challenge in all societies with elderly-dominant populations.

6) A lack of focus on geothermal solutions due to oil company interest in a hydrogen economy is expected to slow local energy independence.”

Greg Sherwin, a leader in digital experimentation with Singularity University, said, “A greater social and scientific awareness of always-on digital communication technologies will lead to more regulation, consumer controls and public sentiment towards protecting our attention. The human social immune system will catch up with the addictive novelty of digitally mediated attention-hacking through communications and alerts. Attention hijacking by these systems will become conflated with smoking and fast food in terms of their detrimental effects, leading to greater thoughtfulness and balance in their use and application. On the negative side, as with smoking and fast food, poorer and more-marginalized groups will be the last to see these benefits.”

Greg Sherwin, a leader in digital experimentation with Singularity University, wrote, “Humans on the wrong side of the digital divide will find themselves with all of the harms of digital technologies and little or no agency to control them or push back. This includes everything from insidious, pervasive dark patterns to hijack attention and motivation to finding themselves on the wrong end of algorithmic decision-making with no sense of agency nor recourse. This will result in mental health crises, loneliness and potential acts of resistance, rebellion and violence that further condemn and stigmatize marginalized communities.”

Doc Searls, a contributor at the Ostrom Workshop at Indiana University and co-founder and board member at Customer Commons, said, “Business in general will improve because markets will be opened and enlarged by customers finally becoming independent from control by tech giants. This is because customers have always been far more interesting and helpful to business as free and independent participants in the open markets than they are as dependent captives, and this will inevitably prove out in the digital world. This will also free marketing from seeking, without irony, to ‘target,’ ‘acquire,’ ‘own,’ ‘manage,’ ‘control’ and ‘lock in’ customers as if they were slaves or cattle. This convention persisted in the industrial age but cannot last in the digital one. However, I am not sure this will happen by 2035.

“Back when we published ‘The Cluetrain Manifesto: The End of Business as Usual’ (2000) and when I wrote ‘The Intention Economy: When Customers Take Charge’ (2012), many like-minded folk (often called cyberutopians) expected ‘business as usual’ to end and for independent human beings (no longer mere ‘users’) to take charge soon. While this still hasn’t happened, it will eventually, because the Internet’s base protocols (TCP/IP, HTTP, et al.) were designed to support full agency for everyone, and the Digital Age is decades old at most – and it will be with us for decades, centuries or millennia to come.”

Doc Searls, a contributor at Ostrom Workshop at Indiana University and co-founder and board member at Customer Commons, observed, “The most harmful and menacing changes in digital life will be the same ones we’ve had since forever in the physical world and for the last three decades in the digital one: bad acting by creeps who are out to make trouble for fun, profit or both.

“An iron law of technology will also apply: What can be done will be done – until we experience the harms it causes and work to correct them – even as some of those harms continue. This has been the case with every technological development from stone tools to nuclear power, electronic communication, computing and AI.

“Thus, while we will experience the negative effects of new developments in digital life, we will also be working to prevent the worst of those. Same as it ever was.”

Jason Hong, professor of computer science at Carnegie Mellon’s Human-Computer Interaction Institute, wrote, “The combination of better sensors, better AI, cheaper smart devices and smarter interventions will lead to much better outcomes for healthcare, especially for chronic conditions that require changes in diet, exercise and lifestyle. Improvements in AI will also lead to much better software, in terms of functionality, security, usability and reliability, as well as how quickly we can iterate and improve software. We’re already seeing the beginnings of a revolution in software development with GitHub Copilot, and advances will only get better from here. This will have significant consequences on many other aspects of digital life.”

Jason Hong, professor of computer science at Carnegie Mellon’s Human-Computer Interaction Institute, said, “While AI will have many beneficial uses, there will also be many continuing negative consequences. Some of these will be unintentional (e.g., AI bias). Some of these will be deliberate, for example, more and better deepfakes, adaptive attacks on software and online services, fake personas online, fake discussion from chatbots online meant to ‘flood the zone’ with propaganda or disinformation, and more. It’s much faster and easier for attackers to disrupt online activities than for defenders to defend them.”

Harmful (Did not respond to Benefits question; this contribution was shortened due to its length on one narrow topic)
Ashu M. G. Solo, principal R&D engineer at Maverick Trailblazers Inc., wrote, “Online defamation, doxing and impersonation are three of the major problems of the Internet age. These issues are a perfect example of regulation not keeping up with change. As technology advances, these become greater problems. Laws and platform policies should be updated to mitigate this. Internet defamation and doxing often harm people’s reputations; prevent them from getting gainful employment; ruin romantic relationships; cause depression, anxiety and distress and lead to deeper mental health problems.

“The civil remedies for dealing with defamation or doxing are extremely inadequate. Lawyer fees for a defamation or doxing claim in the United States are typically in the range of $30,000 or more. The vast majority of defamation or doxing victims can’t afford the legal costs. Internet platform providers could take action and unfortunately do not. Freedom of speech was never meant to protect defamation. Among the steps that could be taken is for platforms to require users to use their real names online, provide proof of their address and record and keep the IP addresses of all users, then allow law enforcement appropriate access in the appropriate situations. In addition, criminal laws for defamation should be enforced; they rarely are in the United States and Canada. And defamation or impersonation should be a criminal offense in every country.”

Beneficial (Did not respond to Harms question)
Terri Horton, work futurist at FuturePath, said, “Digital and immersive technologies and artificial intelligence will continue to exponentially transform human connections and knowledge across the domains of work, entertainment and social engagement. By 2035, the transition of talent acquisition, onboarding, learning and development, performance management and immersive remote work experiences into the metaverse – enabled by Web3 technologies – will be normalized and optimized. Work, as we know it, will be absolutely transformed. If crafted and executed ethically, responsibly and through a human-centered lens, transitioning work into the metaverse can be beneficial to workers by virtue of increased flexibility, creativity and inclusion. Additionally, by 2035, generative artificial intelligence (GAI) will be fully integrated across the employee experience to enhance and direct knowledge acquisition, decision-making, personalized learning, performance development, engagement and retention.”

Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, responded, “I see strong trends toward more human-centered technical thinking and practice. The ‘go fast and break things’ mentality will be tempered by a marketplace that must pay for, or at least make transparent, how user data and activity is leveraged and valued. People will become more aware of the value their usage brings to digital technologies. Companies will not be able to easily ignore human dignity or ecological impact. Innovative and creative people will gravitate to careers of meaning (e.g., ecological balance, social justice, well-being). Tech workers will become more attentive to and engaged with knowledge and meaning as data hype attenuates. Human dignity will become as valued as stock options and big salaries. Some of these changes will be driven by government regulation, some will be due to the growing awareness and thoughtful conversations about socially-grounded IT, and some will be due to new tools and techniques, such as artificiality detectors and digital prophylactics.”

Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, said, “I am old enough to recognize that we are in the third iteration of ‘AI will save the world,’ and this latest hype bubble will eventually yield to a more moderate but impactful Gartner Hype Cycle with real positive and negative outcomes. My main worry is that this moderating acceptance of generative algorithms and autonomous systems will have severe consequences for human life and happiness. Autonomous weapon systems more openly used in today’s conflicts, such as Ukraine-Russia, will foster the acceptance of space-based and other more global weapon systems. Likewise, the current orgasmic fascination with generative AI will set us up for the development of a much more impactful generation of food, building materials, new organisms and modified humans through synthetic biology and 3D printing.”

Beneficial and Harmful
Pamela Rutledge, director of the Media Psychology Research Center, wrote, “All change, good and bad, relies on human choices. Technology is a tool; it has no independent agenda. There are tremendous opportunities in digital technologies for humans to enhance their experiences and wellbeing. Digital technologies can increase access to healthcare and fight climate change. They can change education by automating repetitive tasks and running adaptive-learning experiences, allowing teachers to focus on teaching soft skills like creative thinking and problem-solving. In art, literature and music, generative AI and imagery tools like DALL-E can enable cost-effective exploration and prototyping, facilitating innovation.

“The ubiquity of technology highlights the need for better media literacy training. Media literacy must be integrated into the educational curriculum so that we teach each generation to ask critical questions and develop the skills necessary to understand the design of digital tools and the motivations behind them, including the agendas of content-producers. Young people need to learn smart practices in regard to privacy and data management, how to manage their time online and how to take action in the face of bullies or inappropriate content. These are skills transferable on- and offline, digital and in-person. A better-educated public will be better prepared to make the demands for Big Tech to pull back the curtain on the structural issues of technology, including issues tied to blackbox algorithms and artificial intelligence.

“Used well, these technologies offer tremendous opportunities to innovate, educate and connect in ways that make a significant positive difference in people’s lives. Digital technologies are not going away. A positive outcome depends on us leaning into the places where technology enhances the human experience and supports positive growth. As in strengths-based learning, we can apply the strengths of digital technologies to identify needs and solutions.”

“There are challenges, however. The inherent tendency of humanity is to resist change as innovation cycles become more rapid, particularly when innovation is economically disruptive. The world will have to grapple with dealing with all of this in an atmosphere in which trust in institutions has been undermined and people have become hyper-sensitized to threat, making them more reactive to fear, heightening the tendency to homophily and othering.

“The devaluation of information puts us at social and political risk. Bad actors and lack of transparency can continue to increase distrust and drive wedges in society. Technology is persuasive. Structural decisions influence how people interact, what they access and how they feel about themselves and the world.

“The inability to think of digital life as a holistic human issue, rather than in segments like the blind men and the elephant, will hamper individual well-being, social progress and economic growth. Regulating an app or behavior doesn’t solve the larger issue because it doesn’t identify the fundamentals of ‘why’ humans behave as they do. Regulations might divert the behavior, but they will not stop people from being curious, attracted by motion and sound or interested in creating and sharing content and seeing what other people are doing. Without education and training, this puts everyone individually and collectively at risk.”

Beneficial and Harmful
Deirdre Williams, an independent internet governance consultant, responded, “There will be a great saving of time as digital systems replace cumbersome paper-based systems. There will be better planning facilitated by better records. Data collection will improve. Weather forecasting will become more precise and accurate. What we have here is an opportunity to advance global equity and justice but, judging by what has happened in humanity’s past, it is unlikely that full advantage of the opportunity will be taken. In regard to human rights, digital technology will abet good outcomes for citizens. The question is, which citizens, the citizens of where?

“Humanity is becoming more selfish and individualistic. Or rather a portion of humanity is, and sadly, while it may be a minority, it has a loud and wide-ranging voice and a great deal of influence. More and more, people seem to live on ‘hype’ – an excitement which depends on neither fact nor truth, but only on the extremity of the sensation. This is shared and amplified by the technology. It isn’t just a space that allows people individual freedom of expression, it is also a space on which some people encourage or seek homogenisation. The movement toward ‘binary thinking’ rules out the middle way, although there are and should be many middle ways, many ‘maybes.’ Computers deal with 1 and 0, yes and no, but people are not computers. Binary human thinking is doing its best to turn people into computers.

“Subtleties are being eroded, so that precise communication becomes less and less possible. Reviewing history, it is apparent that humanity is on a pendulum swinging between extremes of individualism and community. Sometimes it seems that the period of the swing is shortening; it certainly seems that we are getting closer to the point of return now, but it is difficult to stand far enough back so as to be able to get a proper view of the time scale.

“When the swing reverses, I expect we’ll all be more optimistic because, as someone said during the Caribbean Telecommunications Union’s workshop on legislative policy for the digital economy last week, the PEOPLE are the heart, soul and everything in the digital world. Without the people, the technology has no meaning.”

Beneficial and Harmful
Charles Ess, emeritus professor of ethics at the University of Oslo, said, “In the best-case scenario, more ethically-informed approaches within engineering, computer science and so on promise to be part of the package of developments that might save us from the worst possibilities of these emerging technologies. To briefly paraphrase the executive summary of the first edition of the IEEE paper: These communities should now recognize that the first priorities in their work are to design and implement these technologies for the sake of human flourishing and planetary well-being, protecting basic human rights and human autonomy – over the current focus on profit and GNP.

“On the dark side, however, this sort of endeavor also opens up every temptation for ‘ethics-washing,’ so critical eyes need to watch closely. On the other hand, there would be real grounds for optimism if these sorts of developments should catch further hold in other disciplines and approaches that have historically likewise divorced themselves from more humanistic foci. Time will tell.

“If such ethical shaping and informed policy development and regulation succeed in good measure, then the manifest benefits of AI/ML will be genuinely significant and transformative. Given how computational and network technologies are now the envelope and ecology in which most of us in the so-called developed countries live, the promises and likely benefits of these technologies range across just about every aspect of human existence – including, as the initial questions suggest, in medicine and healthcare.

“All of this depends, however, on our taking to heart and implementing in praxis the clear lessons of the past 50 years or so. Human judgment must remain central in the implementation of any such system that impinges on human health, well-being and flourishing, rather than acquiescing to the pressures of profit and efficiencies in seeking to offload such judgment to AI/ML systems.

“The technical details are especially important here, as they make very clear that such systems, however impressive and often genuinely useful their results may be, are simply very fancy statistical inference machines – i.e., probabilistic guessing based on literally mindless calculation. ‘The lights are on, but nobody’s home,’ as I like to say – i.e., there is no consciousness, much less the human sorts of intelligences that implicate empathy, care and especially reflective judgment that we as human beings rely on for making our most difficult and often painful choices. As the 70 percent failure rate of current AI projects (so far) suggests, to offload this distinctively human work and responsibility to our machineries will often have devastating consequences for individuals and the larger society, as mindless statistical inference will sometimes result in a ‘decision’ that is manifestly mistaken (as well as impossible for anyone to explain – another set of problems).

“Even more problematic is how offloading human judgment and responsibility in these ways thereby de-skills us, i.e., we become rusty – worst case, we simply forget how to make such judgments on our own. Stated more generally: contra the understanding of such technologies as human augmentation – the more we engage with them, the more we become like them.

“Given these caveats, it is also manifest that these and related digital technologies will continue to have enormous impact in the domain of human knowledge – at least those domains that thrive upon quantitative/calculative approaches, primarily in mathematics and the natural sciences. This is to be lauded not only for its own sake, but specifically for the very utilitarian and utterly critical matter of addressing and hopefully mitigating climate change and at least its likely worst consequences…

“We have some 20+ years of debate over what ‘the digital’ may mean, and more recently, whether or not any distinction between the digital and the analogue even makes any sense or difference. My own take is that we have been sold – literally – on ‘the digital’ as the universal panacea for all of humankind’s ills, all too often at the cost of the analogue, the qualitative, the foundational experience of what it is and might mean to be a human being. This does not bode well for human/e futures for free moral agents capable of pursuing lives of flourishing in liberal-democratic societies, nor for the planet.

“The same holds for hopes of using these technologies in the name of greater democracy, freedom and equality – what many of us foregrounded as the ‘democratizing potentials of the Internet’ in its first 20 years or so. One can only hope that these uses will continue, expand and multiply. At the same time, however, the larger pattern is not promising. Rather, what is often called the rise of digital authoritarianism – amplified by actors such as China who make good money selling their surveillance systems to other regimes intent on keeping their populations under strict control – has been documented since at least 2012 and is a phenomenon that only gets worse from year to year.

“I have not addressed other prominent technologies – starting with virtual assistants and social robots. As primarily the offspring of AI/ML systems, much of the same sort of comments would apply here. Ditto for the current excitement over ChatGPT and other Large Language Models (LLMs). A particular wrinkle has to be noted here, however, especially in the use of social robots and virtual assistants among very young children: again, a risk of deskilling – or never learning in the first place – such basic human/e elements as empathy, care and so on. So:

1) One threat is the risk of AI/ML systems displacing human judgment, autonomy and responsibility – accompanied by the ultimate risks of de-skilling should we fail to keep humans (specifically, our skills of empathy and judgment) ‘in the loop’ of the whole range of human development (specifically, for very young children in terms of empathy and care) and decision-making that will be increasingly offloaded to these systems, with often catastrophic losses.

2) The larger pattern of displacing or eliminating humanistic studies and resources in favor of STEM – thereby eliminating a very great deal of the kinds of education and experiences needed precisely to foster more qualitative forms of judgment, empathy and so on.

3) The continued rise of ‘digital authoritarianism’ – i.e., contra the emancipatory and democratizing potentials of digital technologies, more and more countries, including the nominally democratic ones, will make use of these technologies rather to reinforce and expand authoritarian control over their populations.”

“Our fascination with the majority of the applications of these digital technologies has robbed us of the ability to concentrate and to exercise the sustained, systematic critical reflection we once took for granted. These technologies likewise appear to be reducing our central capacities of empathy, perseverance, patience, care and so on – all of which are required for basic communication, long-term friendships and the deep sorts of relationships necessary for parenting, and so on. Twenty years ago, the early warnings along these lines were dismissed as moral panics (if not worse). Pun intended: we should have paid better attention.

“The flaws of today’s automated processes as substitutes for humans doing the critical thinking are made evident in the work of data/AI legal philosopher Mireille Hildebrandt. Her research showed how AI/ML systems short-circuit the rights of the accused to contest evidence and accusations in court: when the accusation comes from an AI/ML system that statistically but mindlessly calculates that you are guilty, there is no way – not even for the system’s programmers and handlers – to explain just why this inference was made.

“The human/e loss will be enormous. These systems are built around models of behavior surveillance, modification and control rooted in Skinnerian Behaviorism – now a thousand times more sophisticated and thus effective in measuring and modifying our behaviors. The upshot is thus primitively simple: human beings are now nothing more than Skinner pigeons in Skinner cages of monitoring and control via positive, sometimes negative reinforcement. A very worst-case scenario is that ‘We are the Borg’: we ourselves have become the makers and consumers of technologies that risk eliminating – if not simply preventing us from acquiring in the first place – that which is most central to living out free human lives of meaning and flourishing. Resistance may not be entirely futile, but somehow getting along without these technologies is simply not a likely or possible choice for most people.

“Somehow reshaping and redesigning our uses and implementations of these technologies offers some hope. But whether enough professional and business organizations undertake the sorts of changes needed; whether or not our legal and political systems will nudge/force them to do so; and most of all, whether or not enough of us, the consumers and users of these technologies, will successfully resist current patterns and forces and insist on much more human/e directions of development and implementation, remains to be seen.

“Failure to do so will mean that whatever human skills and abilities affiliated with freedom, empathy, judgment, care and all else required for lives of meaning and flourishing will be increasingly offloaded – it is always easier to let the machines do the dirty work. And, very worst case, fewer and fewer of us would notice or care, as all of that will be forgotten, lost (deskilled) or simply never introduced and cultivated in the first place.

“Manifestly, I very much hope that the worst cases are never realized, and there may be some good grounds for hoping that they will not be. But slowing down and redirecting the primary current patterns of technology development and diffusion will be very difficult indeed, I fear.”

Beneficial and Harmful
Oksana Prykhodko, director of INGO European Media Platform, an international NGO based in Ukraine, said, “I live in Ukraine, under full-scale, unprovoked aggression from Russia, and even now, after nearly 12 months of cyberattacks and the bombing of our citizens, ISPs, energy infrastructure and so on, I have an Internet connection.

“Before the war we had more than 6,500 different ISPs. Now nearly every large household, every office, every point of invincibility has its own Starlink satellite connection and a generator and shares its Wi-Fi with its neighbours. I am sure that the Ukrainian experience of ‘keeping Ukraine connected’ (with the help of many stakeholders from around the world) can help to ensure human-centered, government-decentralised Internet connection. I am hoping that by 2035 we will have several competitive decentralised private satellite providers for connectivity and to improve our social and political interactions in the future with all democratic countries.

“I am not optimistic about the future of human rights, but perhaps there will be better awareness-raising in support of them in the next decade, and the establishment of litigation processes in support of rights that result in clear and practical outcomes. The Russians are doing their best to commit the genocide of the Ukrainian people. We in Ukraine are extremely worried about our personal data protection and cybersecurity, the forced deportation of children to the country-aggressor, fake referendums with fake lists of ‘voters,’ and acts of torture committed on people found on e-registries. These crimes will demand future investigation and the trial of those who must take responsibility.

“We in Ukraine fully support the multistakeholder model of Internet governance. Because we have free speech, fierce discussions often break out among our stakeholders as we excitedly discuss the big issues tied to the future of the Internet. Russians have no such rights, no multistakeholders, only the governing class. Ignoring the fact that there are no stakeholders in non-democratic countries undermines the full realization of the global multistakeholder model.

“In this war, Ukrainian schoolteachers have had to become e-teachers (very often against their own wishes and beyond their technical capabilities) because it became unsafe to stay in Ukrainian schools in areas targeted for Russian bombings. This is the worst way to further the development of e-learning.”

Dan Hess, global chief product officer at NPD Group, commented, “Artificial intelligence, coupled with other digital technologies, will continue to have an astounding impact on advances in health care. For example, researchers have already used neural networks to mine massive samples of electrocardiogram (ECG) data for patterns that previously may have eluded detection. This learning can be applied to real-time inputs from devices such as wearable ECGs to alert providers to treatable health risks far faster and more completely than ever before.

“Similarly, imaging and processing technologies are driving a reduction in the cost and timing of DNA sequencing. Where once this process took weeks or months and millions of dollars, the application of new technologies will enable it to be done for less than $100 in less time than it takes to eat lunch. AI will interpret these results more thoroughly and quickly than ever, again resulting in early detection of health risks and the creation of new medications to treat them.

“The net result will be greater quality and length of life for humans – and, for that matter, countless other living creatures.”

Dan Hess, global chief product officer at NPD Group, wrote, “For all of the incredible positive impact that AI will have, it will also give rise to a vast range of dark issues that individuals, societies and our governments will need to confront.

“There is a very real probability of technological singularity. There isn’t enough time or space here to tackle the implications of that, so here are a few challenges that we’ll face until – and after – that day comes.

“In healthcare, such developments as AI-driven disease detection will drive ever-greater life expectancy. This in turn will drive further acceleration of population growth and all of its consequences to the environment, agriculture, trade and more.

“Machines will continue to replace humans in more jobs, including knowledge work such as scientific research. The use of AI across every aspect of life will have an impact on learning and development that eclipses what calculators, PCs and smartphones did to people’s ability to write and do basic math. At the same time, a longer overall lifespan will force individuals to find ways to lead a longer and/or more intense working life to keep food on the table for many more years of post-work retirement.”

Mary Chayko, sociologist, author of “Superconnected” and professor of communication and information at Rutgers University, said, “As communication technology advances into 2035, it will allow people to learn from one another in ever more diverse, multifaceted, widely distributed social networks. We will be able to grow healthier, happier, more knowledgeable and more connected as we create and traverse these networked pathways together. The development of digital systems that are credible, secure, low-cost and user-friendly will inspire all kinds of innovations and job opportunities. If we have these types of networks and use them to their fullest advantage, we will have the means and the tools to shape the kind of society we want to live in.”

Mary Chayko, sociologist, author of “Superconnected” and professor of communication and information at Rutgers University, commented, “Unfortunately, the commodification of human thought and experience online will accelerate as we approach 2035. People have long found it commercially viable to buy and sell ideas, knowledge, likenesses and experiential accounts – as suggested in the thriving worlds of fiction and nonfiction – but by 2035, this process may be out of our everyday control.

“Technology is already used not only to harvest, appropriate and sell our data, but to manufacture and market data that simulates the human experience, as with applications of artificial intelligence. This has the potential to degrade and diminish the specialness of being human, even as it makes some humans very rich.

“The extent and verisimilitude of these practices will certainly increase as technology permits the replication of human thought and likeness in ever more realistic ways. But it is human beings who design, develop, unleash, interpret and use these technological tools and systems. We can choose to center the humanity of these systems, and to support those that do so, and we must.”

Alexander Halavais, associate professor of social data science at Arizona State University, said, “For some, new tools will allow for new ways of creating; a new and different kind of arts and crafts movement will emerge. We have already seen a corner of this, from Etsy to YouTube. But there will be a democratization of powerful software and hardware for creating, and at least some of the overhead in terms of specialized training will be handled by the systems themselves.

“We are likely to see increased monitoring of use of resources: chiefly energy and water, but also minerals and materials. Whether the environmental costs will be priced into the economy remains to be seen, but we will have far better tools to determine which products and practices make the most efficient use of resources.

“Individualized medicine will mean better health outcomes for those with access to advanced healthcare. This does not mean ‘an end to death’ but it does mean dramatically healthier older people and longer fruitful lifespans.

“Access to a core education will continue to become more universally available. While there will remain significant boundaries to gaining access to these, they will continue to be eroded, as geographically based schools and universities give way to more broadly accessible (and affordable) sources of learning.

“An outgrowth of distrust of platform capitalism will see a resurgence in networked and federated sociality – again, for some. This will carve into advertising revenues for the largest platforms, and there may be a combination of subscription and cooperative systems on a smaller scale for those who are interested.

“We will increasingly see conversations among AI agents for arranging our schedules, travel, etc., and those working in these services will find themselves interacting with non-human agents more often.

“Across a number of professional careers, the ability to team with groups of mixed human and non-human actors will become a core skill.”

Alexander Halavais, associate professor of social data science at Arizona State University, responded, “Cyberwar is already here and will increase in the coming decades. The hopeful edge of this may appear to be a reduction in traditional warfighters, but in practice this means that the front is everywhere. Along with the proliferation of strong encryption and new forms of small-scale autonomous robotics, the security realm will become increasingly unpredictable and fraught.

“The divide between those who can make use of new, smart technologies (including robotics and AI) and those who are replaced by them will grow rapidly. It seems unlikely political and economic patches will be easy to implement, especially in countries like the United States that do not have a history of working with labor. I suspect this means that in those countries, technological progress may be impeded, and it will be increasingly difficult to avoid this long-standing divide coming to a head.

“I suspect that both universities and K-12 schools in the United States will also see something of a bifurcation. Those who can afford to live in areas with strong public schools and universities, or who can afford private tuition, will keep a relatively small number of ‘winners’ active, while most will turn to open and commodity forms of education. Khan Academy, for example, has done a great deal to democratize math education, but it also displaces some kinds of existing schools. At the margin, there will be some interesting experimentation, but it will mean a difficult transition for much of the educational establishment. We will see a continued decline of small liberal arts colleges, followed by larger public and private universities and colleges. I suspect, in the end, it will follow a pattern much like that of newspapers in the U.S., with a few niche, high-reputation providers, several mega universities and very few small, local/regional institutions surviving.

“The current bout of disinformation and misinformation is not unprecedented, of course, but it will require some significant global cultural shifts to cause it to recede. I see little hope, at present, of that happening. I suspect that the result will be a combination of populist leaders seeking to capitalize on such disinformation, and others retreating from democratic structures in order to preserve technocratic and knowledge-based government. These paired tendencies are already visible, but if they become entrenched in some of the largest countries (and particularly in the United States), they will contribute to growing political and economic instability.

“We have already seen a bit of a pushback from both global institutions and the global economy. In some ways, this is natural, as the damage of global transportation of goods is somewhat hidden. But the growth of the globalized economy has also closed some gaps between the global North and South over the last few decades. There will still be opportunities, especially in services, as more people embrace working from home and distanced teams. Nonetheless, there will be new, stronger national borders that will make international trade, as well as global cosmopolitanism, recede.”

Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, said, “There are three main sectors where digital systems offer the most potential benefit: health, education and transportation.

“In health, I hope to see two primary benefits. First, using digital to bring down the cost of care, particularly through telehealth services and automation. For example, today’s nurse intake interviews could be completed with voice chatbots, and some routine care could be provided by health care workers with significantly less medical training (e.g., a 2-year nurse technician versus a 10-year primary care physician). Second, using data to design more effective treatments. This should include designing and bringing new drugs to market faster, creating personalized treatments, and better understanding population-level impacts of various medical interventions.

“In education, the big opportunity is personalized learning. Digital has the potential to give everyone educational opportunities that meet them at their level.

“And in transportation, the big opportunity is improving safety, i.e., minimizing deaths and significant injuries. Whether this comes from fully autonomous vehicles or simply vehicles with greater safety functions is not important. But the goal should be to create vehicles less likely to cause injury.”

Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, responded, “There has been a lot of work on closing the digital divide – helping to ensure everyone has access to the Internet, computers and basic digital literacy. But there is less thinking about addressing future digital inequities. In particular, there is the problem of the data divide, where not enough data is collected about some individuals or their communities, leaving them unable to benefit fully from the digital economy. Addressing the data divide will be necessary to ensure that everyone benefits from digital progress.”

Janet Salmons, an online research methodologist, wrote, “I have hope for human health and well-being due to the regulations emerging from the European Union (such as the DSA and DMA). They were written to protect people from cyberbullying and violent threats. I have hope for positive developments in human knowledge if people continue to reject book bans and content restrictions. Open-access and cross-border library access are important to stopping censorship.”

Janet Salmons, an online research methodologist, responded, “I have concerns about human rights and human health and well-being. Without regulations, the Internet becomes too dangerous to use, because privacy and safety are not protected. More walled gardens emerge as safe spaces. Digital tools and systems are based in greed, not the public good, with unrestricted collection, sale and use of data collected from Internet users.”

Danny Gillane, an information science professional, commented, “We will begin to focus on privacy and security more seriously, and companies like Facebook (Meta), Google and Amazon will be edged out by more privacy-focused, less sell-user-information-focused companies like Apple (which is more interested in selling hardware) and DuckDuckGo and new companies. Well, okay, that’s more of a hope than a prediction.”

Danny Gillane, an information science professional, wrote, “Companies are going to run ahead with AI without regard to safety. The genie cannot be put back into the bottle. Government will not act to regulate or provide safeguards and consumers will suffer.”

Corinne Cath, an anthropologist of Internet infrastructure governance, politics and cultures, wrote, “Tech is just another instantiation of the economic system. It’s not magic. The rose-tinted glasses about the ‘positive’ impact of the tech are off, tech critique is getting stronger.”

Corinne Cath, an anthropologist of Internet infrastructure governance, politics and cultures, said, “Everything depends on the cloud computing industry, from critical infrastructure to health to electricity to government as well as education and even the business sector itself – this concentrates power even further in already-centralized structures.”

Gus Hosein, executive director of Privacy International, commented, “Direct human connections will continue to grow over the next decade-plus, with more local community-building and not as many global or regional or national divisions.

“People will have more time and a more sophisticated appreciation for the benefits and limits of technology. While increased electrification will result in ubiquity of digital technology, people will use it more seamlessly rather than through online vs offline.

“Human rights: Having been through a dark period of transition, a sensibility around human rights will emerge in places where human rights are currently protected and will find itself under greater protection in many more places, but not under the umbrella term of ‘human rights.’”

Gus Hosein, executive director of Privacy International, said, “When and where human rights are disregarded, matters will grow worse over the next decade-plus. A new fundamentalism will emerge from the over-indulgences of the tech/information/free market era, with at least some traditional values emerging, but also aspects of a cultural revolution, both requiring people to exhibit behaviours to satisfy the community. This will start to bleed into more free societies and will pose a challenge to the term and symbolism of human rights.

“Loneliness will continue to rise, starting from early ages, as some never make it out of the online-vs.-offline divide. Alongside the struggle around human rights vs. traditional values, more loneliness will result in people who are different being outcast from their physical communities and not finding ways to compensate.

“Human knowledge development will slow. As we learn more about what it is to be human and how we interact with one another, the fundamentalism and quest for simplicity will mean that we care less and less about discovery and will seek solace in natural solutions. This has benefits, for sure, but just as the New Age wellness movement has links to right-wing and anti-science ideologies, this tendency will grow as we stop obsessing over technology as a driver of human progress and simply see a huge replacement of pre-2023 infrastructure with electrification.”

Michael Muller, a researcher for a top global technology company who is focused on human aspects of data science and ethics and values in applications of artificial intelligence, wrote, “We will learn new ways in which humans and AIs can collaborate. Humans will remain the center of the situation. That doesn’t mean that they will always be in control, but they will always control when and how they delegate selected activities to one or more AIs.”

Michael Muller, a researcher for a top global technology company focused on human aspects of data science and ethics and values in applications of artificial intelligence, commented, “Human activities will increasingly be displaced by AIs, and AIs will increasingly anticipate and interfere with human activities. Most humans will be surveilled and channeled by AI algorithms. Surveillance will serve both authoritarian government and increasingly dominant corporations.”

Akah Harvey, director of engineering at Seven GPS, Cameroon, said, “Humans are always on the quest to reduce human labor and improve quality of life as much as they possibly can. With the advancement in the fields of artificial intelligence and renewable energy, we are getting closer and closer to achieving those goals. My biggest hope is that practical applications of conversational AIs like ChatGPT will eliminate monotonous discussions across several industry domains, from banking and finance to building and architecture, health and education. We can finally employ such artificial agents to speed up policy designs that give us significant insight into how we can better allocate resources in different departments for better productivity. Fairness and equity in a given community can be more achievable if we could test our policies more rapidly and efficiently across a wider target population. We could gain several hundred years of research and development from the application of such AIs. New drug synthesis could be completed in less than one-tenth of the time it would conventionally take. This creates a safe way of anticipating future health or economic disasters by preparing responses well ahead, or preventing them altogether. There’s really only a limit as to which domains of human endeavor we allow autonomous agents to be applied to. The opportunities for a better life, regardless of where we are on Earth, are boundless.”

Akah Harvey, director of engineering at Seven GPS, Cameroon, wrote, “We have to think long and hard about which industry domains we allow artificial intelligence to provide work product in without some sort of rules governing it. We are soon going to have AI lawyers in our courts. What should we allow as acceptable from that AI in that setting? The danger in using these tools is the bias they may bring, which the industry may not yet have conceived of. This has the potential to sway judgment in a way that doesn’t render justice.

“Artificial intelligence that passes the Turing Test must be explainable. When people give up the security of their digital identity for a little more convenience, the risk could be far too great for the damage potential it represents. When interacting with agents, there’s need for proper identification as to whether that agent is an AI (acting autonomously) or a human. These tools are beating the test more and more these days, such that they can even impersonate actual humans to carry out acts that would otherwise jeopardize the stability of any given institution and global peace at large.

“We are likely to see more and more movies created entirely by artificial entities rather than by humans. These will tend to be hardly distinguishable from a conventional movie production. This is going to drive less and less involvement of humans in the industry and therefore create pressure in society to create new roles for people to fill. The dangers are existential, and public policy needs to keep up almost as fast as these new tools evolve.”

Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, “I’d like to touch on the future of human rights, knowledge, digital tools and systems and privacy.

“Human Rights: Today, governments such as China’s are able to control what most of their citizens see on the Internet. Yes, technically adept people can get around the censorship, but I assume random citizens do not have the ability to use VPNs and such. By 2035, it should be possible to make simple workarounds that nontechnical people can access. Especially when dictators are threatened, the first thing they do is cut off the Internet so the people cannot organize. By 2035, it should be possible for anyone to access the Internet without possibility of restriction. I note, for example, how satellite-based Internet was made available to the protesters in Iran, but Elon Musk then demanded payment for the service. I would envision, rather, a distributed system, uncontrolled from any one point (like cryptocurrency) as a means of access to the Internet, at least in times of crisis.

“Knowledge: Today we are in the ‘wild west’ in how we deal with behavior on the Internet. There are currently some fairly accurate systems for detecting social-media postings that are inappropriate or dangerous in some way (e.g., hate speech, fear speech, bullying, threats). They need to get better, and there needs to be some regulation regarding what is inappropriate under what circumstances. I hope and expect that by 2035 there will be established a reasonable standard for behavior on the Internet much as there is for behavior on the street. I also believe that enforcement of such a standard will be possible using software, rather than human intervention, in 99.9% of the instances.

“Digital Tools and Systems: Scams of all sorts appear on the Internet and elsewhere, and they are becoming more sophisticated. I hope that by 2035, we will have the technology in place to help vulnerable people avoid the traps. I envision a guide that looks over your shoulder – at your financial dealings, your on-line behavior, and such and warns you if you are about to make a mistake (e.g., sending your life savings to someone claiming to be the IRS, or downloading ransomware).

“Privacy: I believe that our current approach to privacy is wrong. The Internet has turned us into a global village, and just as villagers of 200 years ago knew everything about one another, we need to accept that our lives are open, not secret, as they were for most of human history. For example, many people look with horror at the idea that companies gather information about them and use that information to pitch ads. These same people are happy to get all sorts of free service, but very unhappy that they are sent ads that have a higher-than-random chance of being for something they might actually be interested in. I hope that by 2035, we will have adjusted to the new reality.”

Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, “While I am fairly confident that the major risks from the new technologies have technological solutions, there are a number of serious risks.

“Governance and Institutions: Social media is, I believe, responsible for the polarization of politics. It is no longer necessary to get your news from reasonable, responsible sources, and many people have been given blinders that let them see only what they already believe. If this trend persists, we will see more events like Jan. 6, 2021, or the recent events in Brazil, possibly leading to social breakdown.

“Human Connections: I recall that with the advent of online gaming, it was claimed that ‘100,000 people live their lives primarily in cyberspace.’ I believe it was referring to things like playing World of Warcraft all day. 100K isn’t a real problem, but what if virtual reality (the metaverse) becomes a reality by 2035, as it probably will, and a hundred million people are spending their lives there?

“Well-Being: I remember from the 1960s the Mad Magazine satire ‘The IBM Fight Song’: ‘…what if automation, idles half the nation, we’ll still work for IBM…’ Well, 60 years later, automation has steadily replaced human workers, and more recently AI has started to replace brain work as well as physical labor. Yet unemployment has remained about the same. That doesn’t mean there won’t be a scarcity of work in the future, with all the social unrest it would entail. In particular, the rapid obsolescence of jobs means the rate at which people must be retrained will only increase, and at some point I think we reach a limit, where people just give up trying to learn new skills.

“Other (Education): It has recently been noticed that ChatGPT is capable of writing things that look like student essays. I think the panic is unwarranted; there are already tools being developed that can distinguish ChatGPT output from the work of high-school students pretty well. But what happens when students can build their own trillion-parameter models (without much thought – just using publicly available online software tools and data) and use them to do their homework? Worse, the increasing prevalence of online education has made it possible for students to use all sorts of scams to avoid actually learning anything (e.g., hiring someone on the other side of the world to do their work for them). Are we going to raise a generation of students who get good grades but don’t actually learn anything?

“Digital Tools and Systems: I do not believe the ‘Terminator’ scenario where AI develops free will and takes over the world is likely anytime soon. The stories about chatbots becoming sentient are nonsense – they are designed to talk like the humans who created the text on which the chatbot was trained, so it looks sentient but is not. The risk is not that, for example, a driverless car will suddenly become self-aware and decide it would be fun to drive up on the sidewalk and run people over. It is much more likely that some rogue software engineer will program the car to do that. Thus, the real risk is not from unexpected behavior of an AI system, but rather from the possible evil intent of one or more of their creators.”

Lauren Wilcox, a senior scientist and group manager at Google Research who investigates AI and society, predicted, “The best and most beneficial changes in digital life likely to take place by 2035 tie into health and education.

“Improved capabilities of health systems (both at-home health solutions as well as health care infrastructure) to meet the challenges of an aging population and the need for greater chronic condition management at home.

“Advancements in and expanded availability of telemedicine, last-mile delivery of goods and services, sensors, data analytics, security, networks, robotics, and AI-aided diagnosis, treatment, and management of conditions, will strengthen our ability to improve the health and wellness of more people.

“These solutions will improve the health of our population when they augment rather than replace human interaction, and when they are coupled with innovations that enable citizens to manage the cost and complexity of care and meet everyday needs that enable prevention of disease, such as healthy work and living environments, healthy food, a culture of care for each other, and access to health care.

“Increases in the availability of digital education that enables more flexibility for learners in how they engage with knowledge resources and educational content. Increasing advancements in digital classroom design, accessible multi-modal media, and learning infrastructures will enable education for people who might otherwise face barriers to access.

“These solutions will be most beneficial when they augment rather than replace human teachers, and when they are coupled with innovations that enable citizens to manage the cost of education.”

Lauren Wilcox, a senior scientist and group manager at Google Research who investigates AI and society, observed, “The most harmful or menacing changes in digital life likely to take place by 2035 are likely to emerge from irresponsible development and use, or misuses, of certain classes of AI, such as generative AI (e.g., applications powered by large language and multimodal models) and AI that increasingly performs human tasks or behaves in ways that increasingly seem human-like.

“For example, current generative AI systems can take natural-language sentences and paragraphs as input from the user and generate personalized natural-language, image-based and multimodal responses. The models are trained on a large body of information available online, from which they learn patterns.

“Human interaction risks of irresponsible uses of these classes of AI include the ability for an AI system to impersonate people in order to compromise security, emotionally manipulate users, and gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking overtrust and reliance on them, diminishing learning and information discovery opportunities, and making it difficult for people to know when a response is incorrect or incomplete.

“In a future in which people rely on these AI systems, but cannot validate their responses easily, or don’t know what data they’ve been trained on or what other techniques were used to generate responses, a lack of transparency will make accountability for poor or wrong decisions made with these systems difficult to assess.

“This is especially problematic when acknowledging the biases that are inherent to AI systems that are not responsibly developed; for example, an AI model that is trained on text available online will inherit cultural and social biases, leading to the potential erasure of many perspectives and reinforcement of particular worldviews.

“Irresponsible use or misuse of these AI technologies can also bring material risks to people, including a lack of fairness to creators of the original content that models learn from to generate their outputs, and the potential displacement of creators and knowledge workers resulting from their replacement by AI systems, in the absence of policies to ensure their livelihood.

“Finally, we’ll need to advance the business models and user interfaces we use to keep web businesses viable: when AI applications replace or significantly outpace the use of search engines, web traffic to websites one would usually visit as they search for information might be reduced if an AI application provides a one-stop shop for answers. If sites lose the ability to remain viable, a negative feedback loop could limit diversity in the content these models learn from, concentrating information sources even further into a limited number of the most powerful channels.”

Beneficial and Harmful
Charles Fadel, founder of the Center for Curriculum Redesign and co-author of “Artificial Intelligence in Education,” explained, “The amazing thing about this moment is how quickly artificial intelligence is spreading and being applied. With that in mind, let’s walk through some of your survey prompts:

“On human-centered development of digital tools and systems: I do believe significant autonomy will be achieved by specialized robotic systems, assisting in driving (U.S.), (air and land) package delivery, or bedside patient care (Japan), etc. But we don’t know exactly what ‘significant’ entails; in other words, the degree of autonomy may vary by life-criticality of the applications – the more life-critical, the less trustworthy the application (package delivery on one end, being driven safely on the other).

“On human knowledge: Foundation models (like GPT-3) are surprising everyone and will lead to hard-to-imagine transformations. What can a quadrillion-item system achieve? (Or is there a diminishing return? We will find out in the next six months, if not by the time this is published.) We’ve already seen how very modest technology changes disrupt societies. I was witness to the discussion regarding the Global System for Mobile Communications (GSM) effort years ago, when technologists were trying to see if we could use a bit of free bandwidth that was available between voice communications channels. They came up with short messages – 140 characters that only needed 10 kilohertz of bandwidth. I wondered: Who would care about this?

“Well, people did care, and they started exchanging astonishing volumes of messages. The humble SMS [text message] has led to societal transformations that were complete ‘unknown unknowns.’ First, it led to the erosion of commitments (by people not showing up when they said they would), and not long afterward it led to the erosion of democracy (via Twitter). If something that small could have such an impact, it’s impossible to imagine what impact foundation models will have. For now, I’d recommend that everybody take a deep breath and wait to see what the emerging impact of these models is. We are talking about punctuated equilibria à la Stephen Jay Gould, for AI, but we’re not sure how far it is until the next plateauing.

“Human connections, governance and institutions: I worry about regulation. I continue to marvel at the inability of lawyers and politicians, who are typically humanities types, to understand the impact of technologies for a decade or more after they erupt. This leads to catastrophes before anyone is galvanized to react. Look at the catastrophe of Facebook and Cambridge Analytica and the 2016 election. No one in the political class was paying attention then – and there still aren’t any real regulations. There is no anticipation in the political circles about how technology changes things and the dangers that are obvious. It takes 2-3 decades for them to react, when regulations should come within three years at worst.

“For other kinds of institutions like universities, it’s still hard to guess whether tech developments will be the real silver bullet that helps higher ed or ruins it. Every new technology from radio to CD-ROMs to personal computers was supposed to fix education. It hasn’t yet happened. The better approach for educators would be to recognize that the environment is changing, that some of the changes will be helpful and that they do not destroy everything that’s valuable.

“Human rights: Should a centibillionaire have more free speech rights because they own a global platform? Look at Twitter now. This is a dangerous situation because of greed – greed for power and money. It’s all about the manipulation of people by understanding who they are and making the messages sent to them stickier and stickier. And we’ve seen how much harm it can do when misinformation hurts people – people who didn’t believe the early warnings about COVID. We’re basically emotional beings who are very easy to manipulate. That won’t change anytime soon.”

Beneficial and Harmful
Mark Davis, an associate professor of communications at the University of Melbourne in Australia, whose research focuses on online ‘anti-publics’ and extreme online discourse, responded, “There must be and surely will be a new wave of regulation. As things stand, digital media threatens the end of democracy.

“The structure, scale and speed of online life exceed deliberative and cooperative democratic processes. Digital media plays into the hands of demagogues, whether it be the libertarians whose philosophy still dominates Western tech companies and the online cultures they produce or the authoritarian figures who restrict the activities of tech companies and their audiences in the world’s largest non-democratic state, China.

“How we regulate to maximise civic processes without undermining the freedom of association and opinion the internet has given us is one of the great challenges of our times. AI, currently derided as presaging the end of everything from university assessment to originality in music, can perhaps come to the rescue.

“Hate speech, vilification, threats to rape and kill, and the amplification of division that have become generic to online discussion can all potentially be addressed through generative machine learning. The so-far-missing components of a better online world, however, have nothing to do with advances in technology: wisdom and an ethics of care. Are the proprietors and engineers of online platforms capable of exercising these all-too-human attributes?

“Humanity risks drowning in a rising tide of meaningless words. The sheer volume of online chatter generated by trolls, bots, entrepreneurs of division, and now apps like ChatGPT, risks devaluing language itself. What is the human without language? Where is the human in the exponentially wide sea of language currently being produced? Questions about writing, speech and authenticity structure western epistemology and ontology, which are being restructured by the scale, structure and speed of digital life.

“Underneath this are questions of value. What speech is to be valued? Whose speech is to be valued? The exponential production of meaningless words – that is, words without connection to the human – raises questions about what it is to be human. Perhaps this will be a saving grace of AI: that it forces a revaluation of the human, since the rising tide of words raises the question of what gives words meaning. Perhaps, however, there is no time or opportunity for this kind of reflection, given the commercial imperatives of digital media, the role platforms play in the global economy, or the way we, as thinkers, citizens, humans, use their content to fill almost every available silence.”

Beneficial and Harmful
Clifford Lynch, director of the Coalition for Networked Information, wrote, “One of the most exciting long-term developments – it is already well advanced and will be much farther along by 2035 – is the restructuring, representation or encoding of much of our knowledge, particularly in scientific and technological areas, into forms and structures that lend themselves to machine manipulation, retrieval, inference, machine learning and similar activities. While this started with the body of scholarly knowledge, it is increasingly extending into many other areas; this restructuring is a slow, very largescale, long-term project, with the technology evolving even as deployment proceeds. Developments in machine learning, natural language processing and open-science practices are all accelerating the process.

“The implications of this shift include greatly accelerated progress in scientific discovery (particularly when coupled with other technologies such as AI and robotically controlled experimental apparatus). There will be many other ramifications, many of which will be shaped by how broadly public these structured knowledge representations are, and to what extent we encode not only knowledge in areas like molecular biology or astronomy but also personal behaviors and activities. Note that for scholarly and scientific knowledge the movements towards open scholarship and open-science practices and the broad sharing of scholarly data mean that more and more scholarly and scientific knowledge will be genuinely public. This is one of the few areas of technological change in our lives where I feel the promise is almost entirely positive, and where I am profoundly optimistic.

“The emergence of the so-called ‘geospatial singularity’ – the ability to easily obtain near-continuous high-resolution multispectral imaging of almost any point on Earth, and to couple this data in near-real-time with advanced machine learning and analysis tools, plus historical imagery libraries for comparison purposes, and the shift of such capabilities from the sole control of nation-states to the commercial sector – also seems to be a force primarily for good. The imagery is not so detailed as to suggest an urgent new threat to individual privacy (such as the ability to track the movement of identifiable individuals), but it will usher in a new era of accountability and transparency around the activities of governments, migrations, sources of pollution and greenhouse gases, climate change, wars and insurgencies and many other developments.

“We will see some big wins from technology that monitors various individual health parameters like current blood sugar levels. These are already appearing. But to have a large-scale impact they’ll require changes in the health care delivery system, and to have a really large impact we’ll also have to figure out how to move beyond sophisticated users who serve as their own advocates to a broader and more equitable deployment in the general population that needs these technologies.

“Social media as an environment for propaganda and disinformation, for targeting information delivery to audiences rather than supporting conversations among people who know each other, as well as a tool for collecting personal information on social media users, seems to be a cesspool without limit. The sooner we see the development of services and business models that allow people to use social media for relatively controlled interaction with other known people, without putting themselves at risk of exposure to the rest of the environment, the better. It’s very striking to me to see how more and more toxic platforms for social media communities continue to emerge and flourish. These are doing enormous damage to our society.

“I hope we’ll see social media split into two almost distinct things. One is a mechanism for staying in touch with people you already know (or at least once knew); here we’ll see some convergence between computer mediated communication more broadly (such as video conferencing) and traditional social media systems. I see this kind of system as a substantial good for people, and in particular a way of offsetting many current trends towards the isolation of individuals for various reasons. The other would be the environment targeting information delivery to audiences rather than supporting conversations among friends who know each other. The split cannot happen soon enough.

“It’s hard to pick the worst potential technological developments between now and 2035 for human welfare and well-being; there are so many possibilities, and they tend to mutually reinforce each other in various dystopian scenarios. And I have to say that we’ve got a very rich inventory of technologies that might be deployed in the service of what I believe would be evil political objectives; the saving graces here will be political choices, if there are any.

“One cross-cutting theme is the challenge of actually achieving the ethical or responsible use of technologies. It’s great to talk about these things, but these conversations are not likely to survive the challenges of marketplace competition. And I absolutely despair at the fact that reluctance to deploy autonomous weapons systems is not likely to survive the crucible of conflict. I am also concerned that too many people are simply whining about the importance of taking cautious, slow, ethical, responsible approaches rather than thinking constructively and specifically about getting this accomplished in the likely real-world scenarios in which we will need to understand and manage these technologies.

“I’m increasingly of the opinion that so-called ‘generative AI’ systems, despite their promise, are likely to do more harm than good, at least in the next 10 years. Part of this is the impact of deliberately deceptive deepfake variants in text, images, sound and video, but it goes beyond this to the proliferation of plausible-sounding AI-generated materials in all of these genres as well (think advertising copy, news articles, legislative commentary or proposals, scholarly articles and so many more things). I’d really like to be wrong about this.

“Finally, I’d like to believe brain-machine interfaces (where I expect to see significant progress in the coming decade or so) will be a force for good – there’s no question that they can do tremendous good, and perhaps open up astounding new opportunities for people. But again, I cannot help but be doubtful that these will be put to responsible uses; think, for example, about using such an interface as a means of interrogating someone, as opposed to a way of enabling a disabled person. There are also, of course, more neutral scenarios such as controlling drones or other devices.

“I am simultaneously excited and frightened about the way that digital life may change in the coming decade. It’s going to be a critical period. I believe that as a society and a culture we will at least begin to negotiate, to come to terms with a number of critically important issues, though I’m doubtful that either the legal or legislative system is prepared to deal with the questions at hand. I’m thinking we will see some pragmatic commercial and cultural compromises as well as legislative and legal developments.

“There will be disruption in expectations of memorization and a wide variety of other specific skills in education and in qualification for employment in various positions. This will be disruptive not only to the educational system at all levels but to our expectations about the capabilities of educated or adult individuals.

“Related to these questions but actually considerably distinct will be a substantial reconsideration of what we remember as a culture, how we remember and what institutions are responsible for remembering; we’ll also revisit how and why we cease to remember certain things.

“Finally, I expect that we will be forced to revisit our thinking in regard to intellectual property and copyright, about the nature of creative works and about how all of these interact not only with the rise of structured knowledge corpora, but even more urgently with machine learning and generative AI systems broadly.”

Maja Vujovic, owner and director of Compass Communications in Belgrade, Serbia, responded, “New technologies don’t just pop up out of the blue; they grow through iterative improvements of conceivable concepts moved forward by bold new ideas. Thus, in the decade ahead, we will see advances in most of the key breakthroughs we already know and use (automation and robotics, sensors and predictive maintenance, AR and VR, gaming and metaverse, generative arts and chatbots and digital humans) as they mature into the mass mainstream.

“Much as spreadsheet tech sprouted in the 1970s and first thrived on mainframe computers but became adopted en masse when those apps migrated onto personal desktops, in the same way, we will witness in the coming years countless variations of apps for personal use of our current top-tier technologies.

“The most useful among those tech-granulation trends will be the use of complex tech in personalized healthcare. We will see very likable robots serve as companions to ailing children and as care assistants to the infirm elderly. Portable sensors will graduate from superfluous swagger to life-saving utility. We will be willing and able to remotely track our pets to begin with, but gradually our small children or parents with dementia as well.

“Drowning in data, we will have tools for managing other tools and widgets for automating our digital lives. Apps will work silently in the background, or in our sleep, tagging our personal photos, tallying our daily expenses, planning our celebrations or curating our one (combined) social media feed. Rather than supplanting us and scaling our creative processes – which by definition only works on a scale of one! – technology will be deployed where we need it the most, in support of what we do best – and that is human creation.

“To extract the full value from tools like chatbots, we will all soon need to master the arcane art of prompting AI. A prompt engineer is already a highly paid job. In the next decade, prompting AI will be an advanced skill at first, then a realm of licensed practitioners and eventually an academic discipline.”

Maja Vujovic, owner and director of Compass Communications in Belgrade, Serbia, said, “Our most advanced digital technologies are a result of unprecedented aggregation. Top apps have enlisted almost half of the global population. The only foreseeable scenario for them is to keep growing. Yet our global linguistic capital is not evenly distributed.

“By compiling the vocabularies of languages with far fewer users than English or Chinese have, a handful of private enterprises have captured and processed the linguistic equity not only of English, Hindi or Spanish, but of many small cultures as well, such as Serbian, Welsh or Sinhalese. Those cultures have far less capacity to compile and digitally process their own linguistic assets by themselves. While most benign in times of peace, this imbalance can have grave consequences during more tense periods. Effectively, it is a form of digital supremacy, which in time might prove taxing on smaller, less wealthy cultures and economies.

“Moreover, technology is always at the mercy of other factors, which get to determine whether it is used or misused. The more potent the technologies at hand, the more damage they can potentially inflict. Having known war firsthand and having gone through the related swift disintegration of social, economic and technical infrastructure around me, I am concerned to think how utterly devastating such disintegration would be in the near future, given our total dependence on an inherently frail digital infrastructure.

“With our global communication signals fully digitized in recent times, there would be absolutely no way to get vital information, talk to distant relatives or collect funds from online finance operators in case of any accidental or intentional interruption or blockade of Internet service. Virtually all amenities of contemporary living – our whole digital life – may be canceled with the flip of a switch, without recourse. As implausible as this sounds, it isn’t impossible. Indeed, we have witnessed implausible events take place in recent years. So, I don’t like the odds.”

Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, said, “Digital technology is in our lives to stay. One area that excites me about the future is the use of artificial intelligence, which of course is going to shape the way we live by 2035. We have already started to see the dividends of artificial intelligence in our society.

“Essentially, the human-centered development of digital tools and systems is safely advancing human progress in the areas of transportation, health, finance, energy harvesting and so on. As an engineer who believes in the power of digital technology, I see limitless opportunities for our transportation system. Beyond personal driverless cars and taxis, by 2035 our public transportation will be taken over by remote-controlled buses with accurate timing – a margin of error of 0.0099 – which will make personal cars feel needless. This will be cheaper and without disappointment.

“Autonomous public transport will be pocket-friendly to the general citizenry. This will come with less pollution, as energy harvesting from green sources will take a tremendous positive turn with the use of IoT and other digital technologies that harvest energy from multiple sources by estimating what amount of energy is needed and which green sources are available at a particular time, with plus-one redundancy – hence minimal inefficiency.

“Deployment of bigger drones that can come directly to your house to pick you up – after identifying you, debiting your digital wallet and confirming the payment – will be a reality. The use of paper tickets will be a thing of the past, as digital wallets to pay for all services will be ubiquitous.

“In regard to human connections, governance and institutions and the improvement of social and political interactions, by 2035, the body of knowledge will be fully connected. There will be universal acceptance of open-source applications that make it possible to have a globally robust body of knowledge in artificial intelligence and robotics. There will be less depression in society. If your friends are far away, robots will be available as friends you can talk to and even watch TV with and analyze World Cup matches as you might do with your friends. Robots will also be able to contribute to your research work even more than what ChatGPT is capable of today.

“Governance will be seamless as we come closer to government in the digital ecosystem. You will pay your taxes without being chased around because they are deducted at the source. There will be less corruption in our society. We will need fewer law enforcement agents, as there will be minimal lawbreaking – little or no opportunity for the recalcitrant to break the law – and our society will be safer, even as digital finance takes over the financial ecosystem, with AI and blockchain leaving little room for corruption.

“Contract costs can be calculated even before the contract is awarded, and any change in budget will be open to scrutiny, leaving minimal room for corruption, as AI updates the prices of materials at zero hour without anyone going into the physical market. I look forward to participating in more research on how this can be implemented.

“In regard to human knowledge and its verification, updating, safe archiving and so on, open-source AI will make research work easier. However, human ingenuity will still be needed to add value. Research will be much easier as we concentrate on creativity while secondary research is conducted by AI. Hence, there will be an increase in contributions to the body of knowledge and our society will be better off.

“Human health and well-being will benefit greatly from the use of AI, bringing about a healthy population, as sicknesses and diseases can be easily diagnosed. Infectious diseases will become less of a threat because robots will be used in highly infectious outbreaks, and pandemics can be easily curbed. With enhanced big data using AI and ML, pandemics can be easily predicted and prevented and the impact curve flattened in the shortest possible time using AI-driven pandemic management systems.”

Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, wrote, “It is pertinent to also look at the other side of the coin as we gain positive traction on digital technologies. There will be concern about the safety of humans as this technology falls into the hands of scoundrels who use it for crime, mischief and other negative ends. This technology can be used to attack innocent souls. It may be used to manipulate the public or destroy political enemies; thus it is not necessarily always the ‘bad guys’ who are endangering our society.

“Human rights may be abused. For example, a government may want to tie us to one digital wallet through a central bank digital currency and dictate how we spend our money. These are issues that need to be looked at in order not to trample on human rights.

“Technological colonization may also raise concern, as unique cultures may be eroded by global harmonization. This can create an unequal society in which some sovereign states may benefit more than others.”

Dennis Szerszen, an independent business and marketing consultant who previously worked with IBM, wrote, “Embedded information technology will make our personal transportation autonomous by 2035. It is likely to save lives. We will be less reliant on our own senses for driving, and, with broader information, we will not need to make choices regarding fuel or even how we get to our destination. Predictive information may even enable us to migrate away from fossil-based fuels by autonomously managing how our vehicles are powered.

“Our home tech will be far more autonomous as well. Predictive information will help shop for us. Our food supply chain will become far more stable than it has been in these times of unstable supply chains and population growth.

“I predict that our healthcare system will be dramatically changed. We will still have to work through the mega-hospital system for our care, but care will be managed less by human decision-making and more by information systems that can anticipate conditions, completely manage predictive care and handle nearly all scheduled interactions, including vaccinations and surgical procedures. I predict (with hope) that medical research will change dramatically from the short-sighted model used today – predominantly driven by Big Pharma seeking to make money on medications for chronic conditions – to one that migrates back to academia and focuses on predicting and curing human conditions, affecting both lifespan and quality of life.”

Dennis Szerszen, an independent business and marketing consultant who previously worked with IBM, commented, “False news will become the majority of what we see online, even through ‘trusted’ news services. We will trust even less the information that is presented to us as fact-based reporting. Ideology-driven decision-makers will rule our governments and our courts, further eroding human rights, especially for women and for members of other-gender populations. Educational systems will be adversely affected by struggles over the information available for teaching materials because of ideological shifts and the plain lack of non-subjective historical information. Social media will be flooded with idealized images; our sense of normal human appearance will be altered, and our impression of beauty will become narrow.”

Beneficial and Harmful
Andy Opel, professor of communications at Florida State University, wrote, “In drafting this response, the first thing I notice is how hard it is to imagine a better digital future and how easily dystopian narratives, fears and anxieties dominate my imaginative visions. The history of consolidated power and commercial imperatives has successfully warped my expectations to the point where potentially positive outcomes are met with skepticism and suspicion. Given this impulse, the following is an attempt to silence those voices and make room for a possible future that could emerge if we are able to wrestle our institutions back into service to the public good.

“The fall of 2022 introduced profound changes to the world with the release of OpenAI’s ChatGPT. Five days later, over a million users had registered for access, marking the fastest diffusion of a new technology ever recorded. This tool, combined with a myriad of text-to-image, text-to-sound and voice-transcription generators, is creating a dynamic environment that is going to present new opportunities across a wide range of industries and professions.

“These emerging digital systems will become integrated into daily routines, assisting in everything from the most complicated astrophysics to the banality of daily meal preparation. As access to collected human knowledge proliferates, individuals will be empowered to make more informed decisions, navigate legal and bureaucratic institutions, and resolve technical problems with unprecedented speed and accuracy.

“AI tools will reshape our digital and material landscapes, disrupting the divisive algorithms that have elevated cultural and political differences while masking the enormity of our shared values – clean air, water, food, safe neighborhoods, good schools, access to medical care and baseline economic security. As our shared values and ecological interdependence become more visible, a new politics will emerge that will overcome the stagnation and oligarchic trends that have dominated the neoliberal era.

“Out of this new digital landscape is likely to grow a realization of the need to reconfigure our economy to support what the pandemic revealed as ‘essential workers,’ the core of every community worldwide: farmers, grocery clerks, teachers, police and fire, service-industry workers, etc. Society cannot function when its foundational professions cannot afford homes in the communities they serve.

“This economic realignment will be possible because of the digital revolution taking place at this very moment. AI will both eliminate thousands of jobs and generate enough wealth to provide a basic income that will free up human time, energy and ingenuity. Through shorter work weeks and a move away from the two-parent income requirement to sustain a family, local, sustainable communities will reconnect and rebuild the civic infrastructure and social relations that have been the base of human history across the millennia.

“Richard Nixon proposed a universal basic income in 1969, but the initiative never made it out of the Senate. Over half a century later, we are on the precipice of a new economic order made possible by the power, transparency and ubiquity of AI. Whether we are able to harness the new power of emerging digital tools in service to humanity is an open question. I expect AI will play a central role in assisting the transition to a more equitable and sustainable economy and a more accessible and transparent political process.

“I end with this quote: ‘If the great mass of Americans is going to have any role whatsoever in the shaping of this future, if there is to be any chance at all that the 21st century will belong to the whole of humanity, as opposed to the monopolists of a new Gilded Age, then the defining economic issues of the age must become the defining political issues of the age.’ – Robert McChesney & John Nichols, authors of ‘People Get Ready: The Fight Against a Jobless Economy and a Citizenless Democracy.’”

Andy Opel, professor of communications at Florida State University, said, “AI and emerging digital technologies have a wide range of possible negative impacts, but I want to focus on two: the environmental impact of AI and the erosion of human skills.

“The creation of the current AI tools from ChatGPT-3 to Stable Diffusion and other text-to-image generators required significant amounts of electricity to provide the computing power to train the models. According to MIT Technology Review, over 600 metric tons of CO2 were produced to train ChatGPT-3. This is the equivalent of over 1,000 flights between London and New York, and this is just to train the AI tool, not to run the daily queries that are now expected among millions of users worldwide.

“In addition, ChatGPT-3 is just one of many AI tools that have been trained and are in use, and that number is expanding at an accelerating rate. Until renewable energy is used to run the server farms that are the backbone of every AI tool, these digital assets will have a growing impact on the climate crisis. This impact remains largely invisible to citizens who store media in ‘the cloud,’ too often forgetting the real cloud of CO2 that is produced with every click on the screen.

“The second major impact of emerging digital media tools is the ephemeral nature of the information and its vulnerability. While print media has a limited lifespan, we continue to have access to documents that were written over 2,000 years ago, and the Epic of Gilgamesh continues to animate high school classes thousands of years later. Computer software and hardware, on the other hand, change so quickly that most of us have media drives with cables that no longer connect to our machines, rendering those files inaccessible to all but the most dedicated media-cable archivists!

“As our reliance on digital tools grows – from the simplicity of spell checking to the complexity of astrophysics – our collective knowledge is increasingly stored in a digital format that is vulnerable to disruption. At the same time, the ubiquity of these tools is seductive, allowing the unskilled to produce amazing visual art or music or to simulate the appearance of expertise in a wide range of subject areas.

“The growing dependence on this simulation masks the physical skills that are being stripped out, replaced by expertise in search terms and prompt writing. This is accelerating a trend that has been in place for many years as people moved from the physical to the digital. Without the mechanical skills of hammers and wrenches, planting and compost, wiring and circuits, entire populations become dependent on a shrinking pool of people who actually *do* things. When the power goes out, the best AI in the world will not help.”

Aymar Jean Christian, associate professor of communication studies at Northwestern University and adviser to the Center for Critical Race Digital Studies, observed, “Decentralization is a promising trend in platform distribution. Web 2.0 companies grew powerful by creating centralized platforms and amassing large amounts of social data. The next phase of the web promises more user ownership and control over how our data, social interactions and cultural productions are distributed. The decentralization of intellectual property and its distribution could provide opportunities for communities that have historically lacked access to capitalizing on their ideas. Already users and grassroots organizations are experimenting with new decentralized governance models, innovating in the longstanding hierarchical corporate structure.”

Aymar Jean Christian, associate professor of communication studies at Northwestern University and adviser to the Center for Critical Race Digital Studies, observed, “The automation of story creation and distribution through artificial intelligence poses pronounced labor equality issues as corporations seek cost benefits for creative content and content moderation on platforms. These AI systems have been trained on the un- or under-compensated labor of artists, journalists and everyday people, many of them underpaid labor outsourced by U.S.-based companies. These sources may not be representative of global culture or hold the ideals of equality and justice. Their automation poses severe risks for U.S. and global culture and politics.

“As the web evolves, there remain big questions as to whether equity is possible or if venture capital and the wealthy will buy up all digital intellectual property. Conglomeration among firms often leads to market manipulation, labor inequality and cultural representations that do not reflect changing demographics and attitudes. And, there are also climate implications for many new technological developments, particularly concerning the use of energy and other material natural resources.”

Beneficial (Did not respond to harmful)
Jon Lebkowsky, writer and co-wrangler of Plutopia News Network, previously CEO, founder and digital strategist, Polycot Associates, said, “However you define AI, it will be an increasingly present technology. I believe that increasing use of AI will highlight its constraints and limitations, with an understanding that it’s most effective for its ability to support and expand human endeavors. To the extent AI can automate tasks, we will have to rethink human employment and revise our economic thinking.

“We can expect to see substantial innovation related to climate change adaptation, and possibly mitigation to the extent that’s still possible. We will see the development of increasingly efficient and clean fuel sources and technologies for leveraging those sources most effectively.

“We’ll see a computer-mediated trend toward decentralization of social media and social organization. We’ll also hopefully see effective use of technology to support more decentralized and democratic cooperative enterprises.

“We can also hope to see ongoing medical advances including development of sophisticated vaccines and therapies to manage and prevent global pandemics. Hopefully we will find more and better ways to extend sophisticated healthcare broadly, leveraging technology effectively to make care delivery increasingly efficient and accessible.”

Beneficial (Did not respond to harmful)
Cathy Cavanaugh, chief experience officer at the University of Florida Lastinger Center for Learning, said, “Inequitable access to technology and services exacerbates existing social and economic gaps rather than eliminating them. Too few governments balance capitalism and social services in ways that serve the greatest needs. These imbalances look likely to continue rather than to change because of increasing power imbalances in many countries. Equitable access to essential human services is crucial. Technology now exists in most locations that is affordable, available in most languages and for people of many physical abilities, and easy to learn. The most beneficial use of this personal technology is to connect individuals, families and communities to necessary and life-changing services using secure technology that can streamline and automate these services, making them more accessible. We have seen numerous examples, including microfinance, apps that help unhoused people find shelter, online education, telehealth and a range of government services. Too many people still experience poverty, bias and lack of access to services that meet their needs and create opportunities for them to fully participate in and contribute to their communities.”

Justin Reich, associate professor of digital media at MIT and director of the Teaching Systems Lab, commented, “Video games have continued to grow as a medium and art form, both on the AAA side and on the indie side. I’m excited to see what games people are making in 2035. I bet a number of them will be really fun, engaging and moving.”

Justin Reich, associate professor of digital media at MIT and director of the Teaching Systems Lab, commented, “The hard thing about predicting the future of tech is that so much of it is a reflection on our society. The more we embrace values of civility, democracy, equality and inclusion, the more likely it is that our technologies will reflect our social goals. If the advocates of fascism are successful in growing their political power, then the digital world will be full of menace – constant surveillance, targeted harassment of minorities and vulnerable people, widespread dissemination of crappy art and design, and so forth, all the way up to true tragedies like the genocide of the Uyghur people in China.”

Stephan Adelson, president of Adelson Consulting Services and an expert in the internet and public health, said, “The recent release of several AI tools in their various categories begins a significant shift in the creative and predictive spaces. Creative writing, predictive algorithms, image creation, computations, even the process and products of thought itself are being challenged. I predict that the greatest potential for benefit to mankind by 2035 from digital technologies will come through the challenges their existence creates. We, as a species, are creators of technologies that are learning and growing their productive capabilities and creative capacities. As these tools grow, learn and become integrated into our everyday lives, both personal and professional, they will become major competitors for resources – financial, social and entertainment. I feel it is in this competition that they will provide our greatest growth and benefits as a species. As we compete with our digital creations, we will be forced to grow or become dependent on what we have created and can no longer exceed.”

Stephan Adelson, president of Adelson Consulting Services and an expert in the internet and public health, said, “Reality itself is under siege. AI, CGI, developmental augmented reality and other tools that have the ability to create misleading, alternate or deceptive reality, especially when used politically, are the greatest threats to our future. Manipulation of the masses through media has always been a foundation of political and personal gain. As digital tools that can create more convincing alternatives to what mankind sees, hears, comprehends and perceives become mainstream daily tools, as I believe they will by 2035, the temptation to use them for personal and political gain will be ever present. There will be battles for ‘truths’ that may cause a future where paranoia, conspiracy theories and a continual fight over what is real and what is not are commonplace. I fear for the mental health of those unable to comprehend the tools and that do not have the capacity to discern truth from deception.

“Continued political and social unrest, increases in mental illness and a further widening of the economic gap are almost guaranteed unless actions are taken that restrict the use of these tools and/or reliable tools are developed that are capable of separating ‘truth’ from ‘fiction.’”

Beneficial and Harmful
Marcus Foth, professor of informatics at Queensland University of Technology, said, “The best and most beneficial changes with regards to digital technology and humans’ use of digital systems will be in the areas of governance – from the micro-scale governance of households, buildings and street blocks to the macro-scale governance of nation-states and the entire planet.

“The latest development we are seeing in the area of governance is digital twins – in essence, large agglomerations of data and data analysis. We will look back at them and perhaps smile. They are a starting point. Yet they don’t necessarily result in better political decision-making or evidence-based policy-making. Those are two areas in urgent need of attention. This attention has to come from the humanities, communications and political science fields more so than the typical STEM/computer science responses that tend to favour technological solutionism.

“The best and most beneficial changes will be those that end the neoliberal late-capitalist era of planetary ecocide and bring about a new collective system of governance that establishes consensus with a view to stop us from destroying planet Earth and ourselves. If we are still around in 2035 that is. The most harmful or menacing changes are those portrayed as sustainable but are nothing more than greenwashing. Digital technology and humans’ use of digital systems are at the core of the greenwashing problem. We are told by corporations that in order to be green and environmentally friendly, we need to opt for the paper-based straw, the array of PV solar panels on our roofs, and the electric vehicle in our garage. Yet, the planetary ecocide is not based on an energy or resources crisis but on a consumption crisis.

“Late capitalism has the perverted quality of profiteering from the planetary ecocide by telling greenwashing lies – this extends to digital technology and humans’ use of digital systems from individual consumption choices such as solar and EVs to large-scale investments such as smart cities. The reason these types of technology are harmful is because they just shift the problem elsewhere – out of sight.

“The mining of rare earth metals continues to affect the poorest of the poor across the Global South. The ever-increasing pile of e-waste continues to grow due to planned obsolescence and people being denied a right to repair.

“The idea of a circular economy is being dumbed down by large corporations in an attempt to continue BAU – business as usual. The Weltschmerz caused by humans’ use of digital systems is what’s most menacing, without our even knowing it.”

Robin Raskin, founder of the Virtual Events Group, author, publisher and conference and events creator, wrote, “The metaverse marches forward in fits and starts but ultimately it will divide into two distinct categories. There will be a metaverse for gaming, entertainment and shopping. The most critical metaverse will be a digital twin of everything – cities, schools and factories, for example. These twins coupled with IoT devices will make it possible to create simulations, inferences and prototypes for knowing how to optimize for efficiency before ever building a single thing.

“The consumerization of AI will augment, if not replace, most of the white-collar jobs in areas including traditional office work, advertising and marketing, writing and even programming. Since work won’t be ‘a thing’ anymore, we’ll need to find other means of compensation for our contribution to humanity. Compensation based on how much positive participation we contribute to the web? A universal basic income because we all taught AI to do our jobs? It remains to be seen, but the AI Revolution will be as huge as the Industrial Revolution.

“Big tech as it is today will no longer be ‘big.’ Rather, tech jobs will go to various sectors, from agriculture and sustainability to biomed. The Googles and Facebooks have almost maxed out on their capabilities to broaden their innovations. Tech talent will move to solve more pressing problems in vertical sectors.

“By 2035 we will have a new digital currency (probably not crypto as we know it today, but close). We may have a new system of voting for leaders (a button in your home instead of a representative in the House or Senate, so that we really achieve something closer to one man/one vote).

“Finally, doctors and hospitals will continue to become less relevant to our everyday lives. People will be allowed to be sick in their homes, monitored remotely through tele-med and devices. We’re already seeing CVS, Walmart and emergency clinics replace doctors as the first point of contact. Medicine will spread into the community rather than be a destination.”

Robin Raskin, founder of the Virtual Events Group, author, publisher and conference and events creator, predicted, “What we’re experiencing now is the harbinger of what’s to come. Synthetic humans and robot friends may increase our social isolation. The demise of the office or a school campus as a gathering place will leave us hungry for human companionship and may cause us to lose our most human skills – empathy and compassion.

“We become ‘man and his machine’ rather than ‘man and his society.’

“Higher education will face a crisis like never before. Exorbitant pricing and lack of parity with the real world make college seem quite antiquated. I’m wagering that 50 percent of higher education institutions in the United States will be forced to close down. We will devise other systems of degrees and badges to prove competency.”

Mark Schaefer, a business professor at Rutgers University and author of “Marketing Rebellion,” wrote, “In America, healthcare progress will come from startups and boutique clinics that offer wealthy individuals environmental screening devices and pharmaceutical solutions customized for precise genetic optimization. The smart home of the future will analyze air quality, samples from the bathroom waste stream and food consumption to suggest daily health routines and make automatic environmental and pharmaceutical adjustments.

“Overall, an AI-driven healthcare system will be radically streamlined to be highly personal, effective and efficient in many developed regions of the world – excluding the United States. While the U.S. will remain the leader in developing new healthcare technology, the country will lag most of the world in this tech adoption due to powerful lobbyists in the healthcare industry and a dysfunctional government unable to legislate reform.

“However, progress will take off rapidly in China, a country with a rapidly aging population and a government that will dictate speedy reform. Dramatic improvements will also occur in countries with socialized healthcare, since efficiency means a dramatic improvement in direct government spending. Life expectancy will increase by 10% in these nations by 2035. China’s population will have declined dramatically by 2035, a symptom of the one-child policy, rapid urbanization and social changes. China will attract immigrant workers to boost its population by offering free AI-driven healthcare.”

Mark Schaefer, a business professor at Rutgers University and author of “Marketing Rebellion,” wrote, “The rapid advances of artificial intelligence in our digital lives will mean massive worker displacement and set off a ripple of unintended consequences. Unlike previous industrial shifts, the AI-driven change will happen so suddenly – and create a skill gap so great – that re-training on a massive scale will be largely impossible.

“While this will have obvious economic consequences that will renew discussion about a minimum universal income, I’m more concerned by the significant psychological impact of the sudden, and perhaps permanent, loss of a person’s purpose in life.

“I recently completed a new book after two years of research, writing and significant personal sacrifice. After the book was published, I tested an AI tool to write a section of my book – in my ‘voice’ and with appropriate academic references. It did it, and it did it well, in five seconds. I am at least 80% replaced by a soulless bot. It was the most depressing moment of my career. Although my career is not necessarily threatened at this moment, much of my meaning is derived from the personal struggle it takes to create extraordinary books and the satisfaction of the reader’s response to my unique effort. What happens when this loss of meaning and purpose occurs on a massive, global scale?

“There is a large body of research showing that unemployment is linked to anxiety, depression and loss of life satisfaction, among other negative outcomes. Even underemployment and job instability create distress for those who aren’t counted in the unemployment numbers.

“Millions of these displaced people will require psychological support. They will probably receive it from AI-fueled bots. After all, much of psychological treatment is simply a scientific-based response to detectable patient behavior patterns, which is exactly what AI loves to do.

“Many lonely people will fill their empty days with content programming that is uniquely designed for them. Limitless, personalized media will be tuned to individual brain wave responses and optimized to elicit precise amounts of dopamine, oxytocin and serotonin to keep us blissfully and naturally high all day. Literally, we will be addicted to our media. We’ll routinely have immersive digital experiences with deceased loved ones, heroes and historical figures who will help us forget that we have nothing better to do.

“The general loss of employment and meaning will create new businesses to serve the bored and depressed population. Many social and economic issues will finally be addressed by the large number of volunteers with free time on their hands. I have seen bits and pieces of this media technology in action already and it can certainly be available on a massive scale by 2035.”

Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, wrote, “Human-centered design of digital tools will become a well-developed framework, the use of which is expected and demanded by all stakeholder groups. In addition, we will have seen much more progress on what human-centered design actually requires in practice across all types of technological innovation.”

Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, commented, “Digital authoritarianism could become a dominant model of governance across the globe, due to a combination of intentional use of technology for repression in places where human rights are not embraced, plus failure to adhere to a human rights-based approach to use and regulation of digital technology even in countries where human rights are embraced.”

Rosalie Day, a policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, wrote, “Spurred on by the pandemic and for multiple reasons, more of the population will be connected digitally, even within internet-walled countries. This makes it easier to reach unbanked populations and provide benefits, virtual healthcare and education. Disaster relief will be facilitated, and corruption will not be as prevalent for as long (assuming power generation and satellites restore connectivity).

“With progress in AI comes more ways to mitigate and adapt to climate change. Benefits to both will come in the form of supply chain optimizations, which will increase efficiency of fuel use and potentially increase food security.

“Therapies for cancers and rare diseases are going to vastly advance with the amount of data available for training the AI. Access to anonymized patient data will increase. Organizations aside from the traditional players in big pharma will be enabled to make strong gains, especially in genetic discoveries and gene therapies.”

Rosalie Day, a policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, said, “Misinformation will continue to grow through accelerated amplification – now not only by the algorithms that play toward our own worst instincts, but also by generative AI, which will further embed biases and make us more skeptical of what can be ‘seen.’ The latter will make digital literacy an even greater divide. The digitally challenged will increasingly rely on the credibility of the source of the information, which we know is detrimentally subjective.

“Generative AI will hurt the education of our workforce. It is difficult enough to teach and evaluate critical thinking now. I expect knowledge silos to increase as the use of generative AI concentrates subjects and the training data becomes the spawned data. Critical thought asks the thinker to incorporate knowledge, adapt ideas and modify accordingly. The never-before-seen becomes the constraint, and groupthink becomes the enforcer.

“Generative AI will also displace many educated and uneducated workers. Quality of life will go down because of the satisficing nature of human systems: is it sufficient? Does the technology get it right within the normal distribution? Systems will exclude hiring people with passion or those particularly good at innovating because they are statistical outliers.”

Isaac Mao, Chinese technologist, data scientist and entrepreneur, said, “Artificial intelligence is poised to greatly improve human well-being by providing assistance in processing information and enhancing daily life. From digital assistants for the elderly to productivity tools for content creation and disinformation detection, to health and hygiene innovations such as AI-powered gadgets, AI technology is set to bring about unprecedented advancements in various aspects of our lives. These advances will not only improve our daily routines but also bring about a level of convenience and efficiency not seen for centuries. With the help of AI, even the most mundane tasks, such as brushing teeth or cutting hair, can be done with little effort or concern, dramatically changing routines we have struggled with for centuries.”

Isaac Mao, Chinese technologist, data scientist and entrepreneur, observed, “It is important to recognize that digital tools, particularly those related to artificial intelligence, can be misused and abused in ways that harm individuals, even without traditional forms of punishment such as jailing or physical torture. These tools can be used to invade privacy, discriminate against certain groups and even cause loss of life. When used by centralized powers, such as a repressive government, the consequences can be devastating. For example, AI-powered surveillance programs could be used to unjustly monitor, restrict, or even target individuals without the need for physical imprisonment or traditional forms of torture. To prevent such abuse, it is crucial to be aware of the potential dangers of technology and to work towards making them more transparent through democratic processes and political empowerment.

“While some technologies, such as Virtual Reality (VR) or the Metaverse, have the potential to be used for entertainment and education, they also pose a risk of blurring the lines between reality and fiction. This can be dangerous and lead to long-term struggles in managing and utilizing these technologies for the greater good. It is important to be aware of these potential dangers and take steps to ensure that these technologies are used responsibly and ethically.”

Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of “Re-Engineering Humanity,” wrote, “By 2035, there will be significant beneficial changes to healthcare, specifically in AI-assisted medical diagnosis and treatment, as well as AI predictions related to public health. I also anticipate highly immersive and interactive digital environments for working, socializing, learning, gaming, shopping, traveling and attending healthcare-related appointments.”

Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of “Re-Engineering Humanity,” predicted, “Surveillance technology will become increasingly invasive – not just in terms of its capacity to identify people based on a variety of biometric data, but also in its ability to infer what those in power deem to be fundamental aspects of our identities (including preferences and dispositions) as well as predict, in finer-grained detail, our future behavior and proclivities. Hyper surveillance will permeate public and private sectors – spanning policing, military operations, employment (full cycle, from hiring through day-to-day activities, promotion and firing), education, shopping and dating.”

Beneficial and Harmful
David A. Banks, director of globalization studies at the University at Albany-SUNY, commented, “Between now and 2035, the tech industry will experience a declining rate of profit, and individual firms will seek to extract as much revenue as possible from existing core services; thus users could begin to critically reevaluate their reliance on large-scale social media, group chat systems (e.g., Slack, Teams) and perhaps even search as we know it. Advertising – the ‘internet’s original sin,’ as Ethan Zuckerman so aptly put it in 2014 – will combine with intractable free-speech debates, unsustainable increases in web stack complexity and increasingly unreliable core cloud services to trigger a mass exodus from Web 2.0 services. This is a good thing!

“If big tech gets the reputation it deserves, that could lead to a renaissance of libraries and human-centered knowledge searching as an alternative to the predatory, profit-driven search services. Buying clubs and human-authored product reviews could conceivably replace algorithmic recommendations, which would be correctly recognized as the advertisements that they are. Rather than wring hands about ‘echo chambers,’ media could finally return to a partisan stance where biases are acknowledged, and audiences can make fully-informed decisions about the sources of their news and entertainment. It would be more common for audiences to directly support independent journalists and media makers who utilize a new, wider range of platforms.

“On the supply side, up-and-coming tech firms and their financial backers could respond by throwing out the infinite expansion model established by Facebook and Google in favor of niche markets that are willing to spend money directly on services that they use and enjoy, rather than passively pay for ostensibly free services through ad revenue. Call it the ‘humble net’ if you like – companies that are small and aspire to stay small in a symbiotic relationship with a core, loyal userbase. The smartest people in tech will recognize that they have to design around trust and sustainability rather than trustless platforms built for infinite growth.

“I am mostly basing my worst-case scenario prognostication on how the alt right has set up a wide range of social media services meant to foster and promulgate their worldview.

“In this scenario, venture capital firms will not be satisfied with the humble net and will likely put their money into firms that sell to institutional buyers, think weapons manufacturers, billing and finance tools, work-from-home hardware and software and biotech. This move by VCs will have the aggregate effect of privatizing much-needed public goods, supercharging overt surveillance technology and stifling innovation in basic research that takes more than a few years to produce marketable products.

“As big companies’ products lose their sheen and inevitably lose loyal customers, they will likely attempt to become infrastructure, rather than customer-facing brands. This can be seen as a retrenchment of control over markets and an attempt to become a market arbiter rather than a dominant competitor. This will likely lead to monopolistic behavior – price gouging, market manipulation, collusion with other firms in adjacent industries and markets – that will not be readily recognizable by the public or regulators. There is no reason to believe regulatory environments will strengthen to prevent this in the next decade.

“Big firms, in their desperation for new sources of revenue, will turn toward more aggressive freemium subscription models and push into what is left of brick-and-mortar stores. I have called this phenomenon the ‘Subscriber City,’ where entire portions of cities will be put behind paywalls. Everything from your local coffee shop to public transportation will either offer deep discounts to subscribers of an Amazon Prime-esque service or refuse direct payments altogether. Transportation services like Uber and Waze will act more obviously and directly as managers of segregation than as convenience and information services.

“Real estate markets, which were once geographically fragmented, will become increasingly integrated at national and international scales so that landlords and banks can collude to set prices on rent, interest rates and insurance premiums. The goal here will be to dynamically price real estate and its attendant financial services for individuals and maximize returns for institutional investors.

“Western firms will be dragged into trade wars by an increasingly antagonistic U.S. State Department, leading to increased prices on goods and services and more overt forms of censorship, especially with regard to international current events. This will likely drive people to their preferred humble nets to get news of varying veracity. Right-wing media consumers will seek out conspiratorial jingoism, centrists will enjoy a heavily censored corporate mainstream media, and the left will fall victim to con artists, would-be journalism influencers and vast lacunae of valuable information.”

Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, responded, “The human being will be much more at the center of digital action than now. It’s not only about usability, interface design or intuitive usage of smartphones; it’s about real human empowerment, improvement and strength. Digitization has left the phase where technology is at the center of all things happening, and we will now move more and more to a real human-centered design of digital tools and services. Three main aspects will be visible: better knowledge management, better global communication and better societal improvements. Ultimately, we will have the most sovereign individuals of all time.”

Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, said, “The most harmful changes will appear if governments, digital companies and other institutions do not focus on the empowered citizen. The prime example here is social media: people need an excellent digital culture to successfully understand, deal with and engage with social media. The longer governments and other stakeholders wait, the more harmful it will be for societies and individuals.”

Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, formerly product manager at Meta and co-founder of Ranker.com, said, “I expect that people will start to demand human-centered AI systems that work for them and not for the companies that build them. These demands will be enforced by governments and app stores. I also expect that AI will lead to great leaps forward in personalized medicine and the availability of automated health tools for emerging markets.”

Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, formerly product manager at Meta and co-founder of Ranker.com, predicted, “A rogue state will build autonomous killing machines that will have disastrous unintended consequences. I also expect that the owners of capital will gain even more power and wealth due to advances in AI, such that the resulting inequality will further polarize and destabilize the world.”

Dean Willis, founder of Softarmor Systems, commented, “AI at Internet scale will provide for substantial advances in search, general information management and organization, public policy development and oversight, and health – analytical, monitoring and public health management. However, there is a massive dark side.”

Dean Willis, founder of Softarmor Systems, observed, “From a public policy and governance perspective, AI provides authoritarian governments with unprecedented power for detecting and suppressing non-conformant behavior. This is not limited to political and ideological behavior or position; it could quite possibly be used to enforce erroneous public health policies, environmental madness, or, quite literally, any aspect of human belief and behavior. AI could be the best ‘dictator kit’ ever imagined. Author George Orwell was an optimist, as he envisioned only spotty monitoring by human observers. Rather, we will face continuous, eternal vigilance with visibility into every aspect of our lives. This is beyond terrifying. Authoritarian AI coupled with gamification has the potential to produce the most inhumane human behavior ever imagined.”

Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI,” said, “A human-centered approach to technology development is driven by deep understanding of human needs, which leads to design-thinking strategies that bring successful products and services. Human-centered user interface design guidelines, principles and theories will enable future designers to create astonishing applications that facilitate communication, improve well-being, promote business activities and much more.

“Building tools that give users superpowers is what brought users email, the web, search engines, digital cameras and mobile devices. Future superpowers could enable reduction of disinformation, greater security/privacy and improved social connectedness that supports potent forms of collaboration.

“This could be the Golden Age of Collaboration, with remarkable global projects such as developing a COVID-19 vaccine in 42 days. The future could be made brighter if similar efforts were devoted to fighting climate change, restoring the environment, reducing inequality and supporting the 17 UN Sustainable Development Goals.

“Equitable and universal access to technology could improve the lives of many, including those users with disabilities. The challenge will be to ensure human control, while increasing the level of automation.”

Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI,” warned, “Dangers from poorly designed social technologies increase confusion, which undermines the capacity of users to accomplish their goals, receive truthful information or enjoy entertainment and sports. More serious harms come from failures and bias in transactional services such as mortgage applications, hiring, parole requests or business operations. Unacceptable harms come from life-critical applications such as in medicine, transportation and military operations.

“Other threats come from malicious actors who use technology for destructive purposes, such as cybercriminals, terrorists, oppressive political leaders and hate speech bullies. They will never be eliminated, but they can be countered to lessen their impact.

“There are dangers of unequal access to technology and designs that limit use by minorities, low-literacy users and users with disabilities. These perils could undermine economic development, leading to strains within societies, with damage to democratic institutions, which threatens human rights and individual dignity.”

Russell Blackford, Editor-in-Chief of IEET Journal of Evolution and Technology, wrote, “International communication, networking and availability of information will continue to improve.”

Russell Blackford, Editor-in-Chief of IEET Journal of Evolution and Technology, said, “The surveillance society will become even more intense, hindering personal freedoms and privacy.”

Jeremy Foote, a computational social scientist at Purdue University studying cooperation and collaboration in online communities, said, “There are a number of trends in our digital life that are promising. One is the potential for AI as an engine for creativity. While GPT and other LLMs have been met with open-mouthed awe from some and derision from others, I think it likely that AI tools like ChatGPT will become important tools for both 1) empowering creativity through novel ways of thinking, and 2) improving productivity in knowledge work by making some tedious aspects easier, such as reading and summarizing large amounts of text. By 2035 we will likely know the limits of these tools (which are likely many) but we will also have identified many more of their uses.

“A second promising change in digital technology has been increasing skepticism about the power of corporations as platforms. The early web grew based on ‘protocols instead of platforms’ and there are indicators that protocols may be making a comeback. This is mostly good news as decentralized platforms have less centralized power.

“Finally, in optimistic moods I think that there is a chance that the excesses of misinformation, chaos and polarization drive creativity in figuring out institutions (in the Northian sense) that can help us to understand and connect with each other. There are not technologies or institutions now that I see as particularly promising, but these do not seem like completely impossible problems.”

Jeremy Foote, a computational social scientist at Purdue University studying cooperation and collaboration in online communities, commented, “The pessimistic version of 2035 looks pretty bad. The promise of AI also comes with perils. There is lots of potential for the creation of much more persuasive, tireless misinformation and propaganda machines, willing to converse and persuade 24/7. This could lead to a real distrust of basically anything that we see on the Internet.

“A second worrying trend is the ability of social media to polarize and radicalize some folks. Trends like decentralization may make it easier for radical groups to recruit while avoiding censors or moderators.

“Third, state actors have been surprisingly adept at using propaganda and other digital tools to control their citizens and to frame issues globally. Democracies may be at a disadvantage when it comes to this sort of informational warfare.”

Rich Salz, principal engineer at Akamai Technologies, predicted, “We will see a proliferation of AI systems to help with medical diagnosis and research. This may cover a wide range of applications, such as: expert systems to detect breast cancer or other X-ray/imaging analysis; protein folding, etc., and discovery of new drugs; better analytics on drug and other testing; limited initial consultation for doing diagnosis at medical visits. Similar improvements will be seen in many other fields, for instance, astronomical data analysis tools. I hope the tech field gets more unionized.”

Rich Salz, principal engineer at Akamai Technologies, warned, “Mass facial-recognition systems will be among the digital tools more widely implemented in the future. There will be increased centralization of internet systems leading to more extra-governmental data collection and further loss of privacy. In addition, we can expect that cell phone cracking will invade privacy and all of this, plus more government surveillance, will be taking place, particularly in regions with tyrannical regimes. Most people will believe that AI’s large language models are ‘intelligent,’ and they will, unfortunately, come to trust them. There will be a further fracturing of the global internet along national boundaries.”

Lambert Schomaker, a professor at the Institute of Artificial Intelligence and Cognitive Engineering at the University of Groningen, Netherlands, wrote, “The total societal cost of inadequate IT-plus-human-hellhounds who create office bottlenecks must be astronomical. In current society, human administrative work tends to be concentrated in a set of key positions in companies and institutions – for financial control, human-resource management, data and IT services, etc. The human personnel in these positions abuse their power; they do not assist but instead deflect any question without an actual solution. Office workflows could be streamlined, and documentation could be written in more-user-friendly ways, tailored to the needs of the people being served. It seems as if, across society, these positions are usually held by individuals who, in their hearts, have no inclination to be service-oriented toward other humans. This is where AI comes in. Replacing pesky humans at friction points in society will lead to higher productivity and a higher level of happiness for most of us. The administrative and policy people will lose their jobs, but is that so terrible, even for themselves?”

Lambert Schomaker, a professor at the Institute of Artificial Intelligence and Cognitive Engineering at the University of Groningen, Netherlands, commented, “Current developments around ChatGPT and DALL-E 2, although in their early stages now, will have had a deep impact on the way humans look at themselves. This can also be seen from the emotional reactions of artists, writers and researchers in the humanities. Many capabilities considered purely human now appear to be statistical in nature. Writing smooth, conflict-avoiding pieces of text is, apparently, fairly mechanical. This is very threatening.

“At the moment such users try to topple the current algorithms instead of providing cooperative text prompts. This is good, because the AI community will be eager to prove that many of the current problems (e.g., in logical reasoning) are fairly easily solvable. Also, creative diversity in image and music generation by AI will have improved dramatically.

“However, the psychological effect of these developments may be dramatic. Why go to school, when the machine can do it all! As a consequence, motivation to work at all may drop. The only silver lining here may be that physical activity (‘maker world’) will gain in importance. Given the current shortage of skilled workers in building, electrical engineering and agriculture, this may even be beneficial in some areas. However, the upheaval caused by the AI revolution may have an irreparable effect on the fabric of societies in all world cultures.”

Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, predicted, “Among the developments we’ll see come along well:

Self-driving cars will reduce congestion, carbon emissions and road accidents.

Automated drug discovery will revolutionize the use of pharmaceuticals. This will be particularly beneficial where speed or diversity of development is crucial, as in cancer, rare diseases and antibiotic resistance.

We will start to see platforms for political news, debate and decision-making that are designed to bring out the best of us, through sophisticated combinations of human and automated moderation.

AI assistants will be able to write sophisticated, well-cited research briefs on any topic. Essentially, most people will have access to instant specialist literature reviews.”

Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, warned, “Key worries include human rights, human knowledge and economic inequality.

“In regard to human rights, some governments will use surveillance and content-moderation techniques for control, making it impossible to express dissenting opinions. This will mostly happen in authoritarian regimes; however, certain liberal democracies will also use this technology for narrower purposes, and speech regulations will shift depending on who wins elections.

“In regard to human knowledge, generative models for text, images and video will make it difficult to know what is true without specialist help. Essentially, we’ll need an AI layer on top of the Internet that does a new kind of ‘spam’ filtering in order to stand any chance of receiving reliable information.

“In regard to economic inequality, although AI will create massive wealth for some people and companies, this will not be accompanied by large productivity gains in most cases. Most people will still feel economically precarious and affording housing, medical care, etc., will be a challenge.”

Kay Stanney, CEO and founder of Design Interactive, commented, “Human-centered development of digital tools can profoundly impact the way we work and learn. Specifically, by coupling digital phenotypes (i.e., real-time, moment-by-moment quantification of the individual-level human phenotype, in situ, using data from personal digital devices, in particular smartphones) with digital twins (i.e., digital representations of an intended or actual real-world physical product, system or process), it will be possible to optimize both human and system performance and well-being. Through this symbiosis, interactions between humans and systems can be adapted in real time to ensure the system gets what it needs (e.g., predicted maintenance) and the human gets what they need (e.g., guided stress-reducing mechanisms), thereby realizing truly transformational gains in the enterprise.”

Kay Stanney, CEO and founder of Design Interactive, wrote, “Human-centered development of digital tools and systems could be carried out in a manner that imposes accessibility limitations, thereby allowing some groups to benefit more than others. If this limits advancement, it is not an acceptable outcome.”

Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, said, “I expect technology to develop in such a way that physical machines (AKA, robots), not just virtual systems, will be developed to replace humans advantageously in dangerous, dull and dirty work. This will increase production, make work safer and create new challenges for humankind not thought of until then.”

Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, noted, “What I fear most is not the technology itself, but the wrong use of it. If we replace humans in hard work and do not create new jobs to face new challenges and/or do not provide mechanisms such as companies paying for social security whenever they replace a human by a robot and/or ensure paying universal basic income, societies may blow up.”

Alexander Klimburg, senior fellow at the Institute of Advanced Studies, Austria, commented, “In the best possible circumstances, by 2035 we will have fully internalized that cybersecurity is a policy and political issue, not just a technical issue. That means we will have honest and productive public discussions on the various tradeoffs that need to take place – how much individual security and responsibility? How much with companies? How much for government? And, most importantly, how do we ensure that our values are maintained and keep the Internet free – meaning under smart regulation, not new-age state-mandated cyber-despotism or the slow suffocation of individual monopolies. The key to all of this is cracking the difficult question of governance of cyberspace. The decision points for this are now, and in particular in 2024 and 2025.”

Alexander Klimburg, senior fellow at the Institute of Advanced Studies, Austria, predicted, “In the worst cases by 2035, two nightmare scenarios can develop – firstly, an age of warring cyber blocs, where different internets are the battlefield for a ferocious battle between ideologically intractable foes – democracies against the authoritarian regimes. In this scenario, a new forever war, not unlike the Global War on Terror but instead state-focused, keeps us mired in tit-for-tat attacks on critical infrastructure, undermines governments and destroys economies. A second nightmare is similar, but in some ways worse: the authoritarian voices who want a state-controlled Internet win the global policy fight, leading to a world where either governments or very few duopolies control the Internet and therefore our entire news consumption and censor our output, automatically shaping our preferences and beliefs along the way. Either the lights go out in cyberwar, or they never go out in a type of Orwellian cyber dystopia that even democracies will not be fully safe from.”

Charlie Kaufman, a system security architect with Dell Technologies, predicted, “In the area of human health and well-being we will have the ability to carry on natural language discussions of medical issues with an AI that is less expensive and less intimidating than a medical professional, especially when seeking guidance as to whether to seek professional help. We should be able to give it access to medical records and take pictures of visible anomalies. I also predict AI will be capable of providing companionship for people who don’t do well interacting with real people or have situations making that difficult. AI engines will be able to predict what sorts of entertainment I’d like to enjoy, which articles I would like to read and what sort of videos I’d like to watch and save me the time of seeking these out.”

Charlie Kaufman, a system security architect with Dell Technologies, said, “Digital systems will continue to be difficult to use well, and large fractions of humanity will be cut off from the benefits of technology because of lack of training and commercial rationing enforced with intellectual property protection to maximize corporate profits.

“In regard to the future of human knowledge, I hope for the best and fear the worst. Technology of late has been used to spread misinformation. I would hope that we will figure out a way to minimize that while making all public knowledge available to anyone who wants to ask.

“In regard to human rights, I hope for the best but fear the worst for technology’s impact on personal privacy. Technology to date has lessened it, and while it has great potential to improve things, I fear that trend will continue.”

Frank Bajak, cybersecurity investigations chief at the Associated Press, wrote, “Many technologies have the potential to bring people and cultures together as never before and bridge understanding and cultural and historic knowledge. Speech- and image-recognition are tops among them. Labor-saving devices including AI and robotics have tremendous potential for creating more leisure time and greater dedication to the satisfactions of the physical world. Technologies developed for countering climate change are likely to have multiple unanticipated benefits. Advancements in medicine benefiting from our improved understanding of genetics – such as targeted gene therapies – will improve human longevity. The potential for technology to make the world safer and more harmonious is great. But this will depend on how humans wield it and whether we can make wealth distribution more equitable and wars less common and damaging. Every technology can be leveraged for good or ill.”

Frank Bajak, cybersecurity investigations chief at the Associated Press, predicted, “The powerful technologies maturing over the next decade will be badly abused in much of the world unless the trend toward illiberal, autocratic rule is reversed. Surveillance technology has few guardrails now, though the Biden administration has shown some will for limiting it. Yet far too many governments have no qualms about violating their citizens’ rights with spyware and other intrusive technologies. Digital dossiers will be amassed widely by repressive regimes. Unless the United States suppresses the fascist tendencies of opportunist demagogues, the U.S. could become a major surveillance state. Much depends also on the European Union being able to maintain democracy and prosperity and contain xenophobia. We seem destined at present to see biometrics combined with databases – anchored in facial, iris and fingerprint collection – used to control human migration, prejudicing the Black and brown of the Global South.

“I am also concerned about junk AI, bioweapons and killer robots. It will probably take at least a decade to sort out hurtful from helpful AI. Full autonomous offensive lethal weapons will be operative long before 2035, including drone swarms in the air and sea. It will be incumbent on us to forge treaties restricting the use of killer robots.

“Technology is not and never was the problem. Humans are. Technology will continue to imbue humans with God-like powers. I wish I had more faith in our better angels. AI will likely eventually make software, currently dismally flawed, much safer as security becomes central to ground-up design. This is apt to take more than a decade to shake out. I’d expect a few major computer outages in the meantime. We may also learn not to bake software into absolutely everything in our environment as we currently seem to be doing. Maybe we’ll mature out of our surveillance doorbell stage.”

Micah Altman, social and information scientist at the Center for Research in Equitable and Open Scholarship at MIT, said, “Whether digital or analog, there are five dimensions to individual well-being: longevity, health, access to resources, subjective well-being and agency over making meaningful life choices. Within the last decade the increasing digitalization of human activities has contributed substantially in each of these dimensions, providing benefits in four of the five areas.

“Digital life is greatly expanding access to online education (especially through open online courses and increasingly through online degree and certification programs); health information and health treatment (especially through telehealth in the area of behavioral wellness); the opportunity to work from remote locations (which is particularly beneficial for people with disabilities); and the ability to engage with government through online services, access to records, and modes of online participation (e.g., through online public hearings). Expansion in most of these areas is likely to continue over the next dozen years.”

Micah Altman, social and information scientist at the Center for Research in Equitable and Open Scholarship at MIT, wrote, “There is more reason to be concerned than excited – not because digital life offers more peril than promise, but because the results of progress are incremental, while the results of failure could be catastrophic. Thus it is essential to govern digital platforms, to integrate social values into their design, and to establish mechanisms for transparency and accountability.

“The most menacing potential changes to life over the next couple of decades are the increasing concentration in the distribution of wealth, a related concentration of effective political power, and the ecological and societal disruptions likely to result from our collective failure to diligently mitigate climate change (further, the latter is related to the former).

“As a consequence, the most menacing potential changes to digital life are those that facilitate this concentration of power: the susceptibility of digital information and social platforms to be used for disinformation, for monopolization (often through the monetization and appropriation of information generated by individuals and their activities) and for surveillance. Unfortunately, the incentives for the creation of digital platforms, such as the monetization of individual attention, have created platforms on which it is easy to spread disinformation to 10 million people and monitor how they react, but hard to promote a meaningful discussion among even a hundred people.”

Gary Grossman, senior vice president and global lead of the AI Center of Excellence at Edelman, observed, “There are a great number of potential benefits, ranging from improved access to education and better medical diagnosis and treatments to breaking down language barriers for enhanced global communications. However, there are technical, social and governmental barriers to these and other gains, so the path forward will at times be messy.”

Gary Grossman, senior vice president and global lead of the AI Center of Excellence at Edelman, said, “Perhaps because we can already feel tomorrow’s dangers in activities playing out today, the downside seems quite dramatic. Deepfakes and disinformation are getting a boost from generative AI technologies and could become pervasive, greatly undermining what little public trust in institutions remains. Digital addiction, already an issue for many who play video games, watch TikTok or YouTube videos, or who hang on every tweet, could become an even greater problem as these and other digital channels become even more personalized and appeal to base instincts for eyeballs.”

Deanna Zandt, writer, artist and award-winning technologist, said, “I continue to be hopeful that new platforms and tech will find ways around the totalitarian capitalist systems we live in, allowing us to connect with each other on fundamentally (ironically enough) human levels. My own first love of the internet was finding out that I wasn’t alone in how I felt or in the things I liked and finding community in those things. Even though many of those protocols and platforms have been coopted in service of profit-making, developers continue to find brilliant paths of opening up human connection in surprising ways.

“I’m also hopeful the current trend of hypercapitalistic tech driving people back to more fundamental forms of internet communication will continue. Email as a protocol has been around for how long? And it’s still, as much as we complain about its limitations or overwhelm, a main way we connect. Look at the rise of Substack – some crazy high percentage of its users don’t know that it’s a platform with a website and features. They just get email from creators they love. Brilliant.”

Deanna Zandt, writer, artist and award-winning technologist, wrote, “First, deepfakes and misinformation will continue to undermine our faith in public knowledge and our ability to make individual and collective sound decisions about how we live our lives. And second, while we continue to work on gender, racial, disability and other inclusive lenses in tech development, the continued lack of equity and representation in the tech community (especially when empowered by lots of rich, able-bodied white men) will continue to create harm for people living on the margins.”

Ayden Férdeline, Landecker Democracy Fellow at Humanity in Action, commented, “The Internet today is largely centralized, with a few companies having a stranglehold over the control and distribution of information. As a result, data is vulnerable to single points of failure, and important records are susceptible to censorship, Internet shutdowns and link rot. By 2035, control over the Internet’s core infrastructure will have become less concentrated. Decentralized technologies will have become more prevalent by 2035, making the Internet more durable and better equipped to preserve information that requires long-term storage and accessibility. It won’t just be that we can reliably retrieve data like historical records – we will be able to verify their origins and that they have not been tampered with over time. Initiatives like the Coalition for Content Provenance and Authenticity are developing the mechanisms for verifying digital media that will become increasingly important in legal proceedings and journalism.”

Ayden Férdeline, Landecker Democracy Fellow at Humanity in Action, wrote, “There are organizations today which profit from being perceived as ‘merchants of truth.’ News organizations, for example, derive their authority and influence through being trusted by their audience as having integrity. Similarly, the judicial system is based on the idea that the truth can be established through an impartial and fair hearing of evidence and arguments. Historically, we have trusted those actors and their expertise in verifying information. As we transition to building trust into digital media files through techniques like authentication-at-source and blockchain ledgers that provide an audit trail of how a file has been altered over time, there may be attempts to use regulation to limit how we can cryptographically establish the authenticity and provenance of digital media. More online regulation is inevitable given the importance of the Internet economically and socially, and the likelihood that digital media will increasingly be used as evidence in legal proceedings. But will we get the regulation right? Will we regulate digital media in a way that builds trust, or will we create convoluted, expensive authentication techniques that increase the cost of justice – if they are adopted at all?”

Alan Inouye, director of the office for information technology policy at the American Library Association, said, “I am optimistic that the U.S. will achieve nearly ubiquitous access to advanced technology by 2035. Already, we have seen the rapid diffusion of such technology in the United States and worldwide. I was recently in Laos, and it struck me how many people had portable phones, such as folks running food stands on the side of the road and tuk-tuk drivers. Accelerating diffusion is the amplified awareness coming out of the COVID-19 pandemic, and the multiple federal funding programs for broadband and digital inclusion. I see this momentum carrying through for years to come by governments at all levels, corporations and the non-profit sector.

“That said, it is always the case that there is differential access to advanced technology by the population. The well-to-do and those-in-the-know will have access to more-advanced technology than less-privileged members of society, whether we’re talking about typewriters or the latest smartphone. However, the difference by 2035 is that the level of technology capability will be so high that even those with only access to basic technology will still have a great deal of computing and communications power at their fingertips.”

Alan Inouye, director of the office for information technology policy at the American Library Association, commented, “Perhaps ironically, the most harmful aspects by 2035 will arise from our very ubiquitous access to advanced technology. As the technology-access playing field becomes somewhat more level, the distinguishing difference or competitive advantage will be knowledge and social capital.

“Thus, the edge with ubiquitous access to advanced technology goes to knowledge workers and those highly proficient with the online world, and those who are well connected in that world. A divide between these people and others will become more visible, and resentment will build among those who do not understand that their profound challenge is in the realm of lacking adequate knowledge and social capital.

“It will take considerable education of and advocacy with policy makers to address this divide. The lack of a device or internet access is an obvious deficiency and plain to see, and policy solutions are relatively clear. Inadequate digital literacy and inability to engage in economic opportunity online is a much more profound challenge, going well beyond one-time policy prescriptions such as training classes or online modules. This is the latest stage of our society’s education and workforce challenge generally, as we see an increasing bifurcation of high achievers and low achievers in the U.S. education and workforce system.”

Beneficial and Harmful
Sean McGregor, founder of the Responsible AI Collaborative, said, “By 2035, technology will have developed a window into many inequities of life, thereby empowering individuals to advocate for greater access to and authority over decision-making currently entrusted to people with inscrutable agendas and biases. The power of the individual will expand with communication, artistic and educational capacities not known throughout previous human history. However, if trends remain as they are now, people, organizations and governments interested in accumulating power and wealth over the broader public interest will apply these technologies toward increasingly repressive and extractive aims. It is vital that there be a concerted, coordinated and calm effort to globally empower humans in the governance of artificial intelligence systems. This is required to avoid the worst possibilities of complex socio-technical systems. At present, we are woefully unprepared and show no signs of beginning collaborative efforts of the scale required to sufficiently address the problem.”

Beneficial and Harmful
Cory Doctorow, activist journalist and author of “How to Destroy Surveillance Capitalism,” wrote, “I hope to see an increased understanding of the benefits of federation and decentralization; interoperability mandates, such as the Digital Markets Act, and a renewed emphasis on interoperability as a means of lowering switching costs and disciplining firms; a decoupling of decentralization from blockchain (which is nonsense); and an emphasis on subsidiarity in platform governance. Among the challenges are new compliance duties for intermediaries – new rules that increase surveillance and algorithmic filtering while creating barriers to entry for small players – and ‘link taxes’ and other pseudocopyrights that control who can take action to link to, quote and discuss the news.”

Richard Barke, associate professor of public policy at Georgia Institute of Technology, wrote, “It is dangerous to characterize any specific possibility as ‘likely’ given the pace of technological, social and political changes in the United States in the preceding years and decades. The trajectory of those changes suggests that the shift from real to digital life probably will not decelerate. The use of digital technologies for shopping, medical diagnosis and interpersonal relations will continue. The use of data analytics by businesses and governments also will continue to grow. And the number and severity of harmful consequences of these changes will also grow.”

Richard Barke, associate professor of public policy at Georgia Institute of Technology, responded, “New technologies, market tools or social changes never come without some harmful consequences. Concerns about privacy and discrimination will increase, with the result that demands for transparency about business practices, targeting of subpopulations, and government policies will grow at least as fast as digital life. Those demands are not likely to be answered in the absence of significant harmful or menacing events that catch the attention of the public, the media and eventually policymakers.

“The environmental movement needed a Rachel Carson and a Love Canal in the 1960s and 1970s as policy entrepreneurs and focusing events. The same is true for many other significant changes in business and government decision making. Unfortunately, it is likely that by 2035 some highly visible abuse or scandal with clearly identifiable victims and culprits will be needed to provide an inflection point that puts an aggrieved public in the streets and on social media, in courtrooms and in legislative hallways, resulting in a new regime of law and regulations to constrain the worst excesses of the digital world. But, even then, is it likely – or even possible – that the speed of reforms will be able to keep up with the speed of technological and business innovations?”

Christopher Richter, a retired professor of communications from Hollins University, wrote, “More tech industry leaders will develop social consciences and work toward the greater good in terms of both tech development goals and revenue distribution. More people generally will develop informed, critical perspectives on digital/social media content and emotionally manipulative media processes.

“Technology: Artificial intelligence and robotics applications will lower the costs of improved quality of routine elder care and healthcare generally. Digital technologies will be developed that make substantial contributions to reducing greenhouse emissions and ameliorating climate change. Digital technologies will continue to be developed that facilitate equitable education processes.”

Christopher Richter, a retired professor of communications from Hollins University, said, “To the detriment of humanity and life on Earth generally, more tech industry leaders will come to believe that what is good for their company’s bottom line is good for society.

“More digital technologies will be developed to enhance the status quo, often under the guise of being revolutionary (e.g., the way current developments in autonomous vehicles to date just reinforce the values of one person-one vehicle, commuter lifestyles, highway systems as vital infrastructure, etc.). Digital tech will continue to be exploited to deepen social, political, informational and financial divides, both domestically and globally. Even well-meaning developments in AI, robotics and digital tech generally will have unintended negative consequences (e.g., the way current developments like ChatGPT are useful for plagiarists, or the way early utopian dreams of the internet as an ideally functioning public sphere missed a lot of the negative realities).”

Adam Nagy, a senior research coordinator at The Berkman Klein Center for Internet & Society at Harvard University, said, “Albeit far from guaranteed, there may be some beneficial changes to digital life by 2035. As indicated by recent legislation in the European Union, there will be a global expansion of obligations imposed on firms that control or process data and more robust penalties and enforcement for rule-breaking. Hopefully, improved regulations foster more responsible corporate behavior (or at least clamp down on the worst excesses of the digital economy).

“Cooperative ownership structures of digital products and services are only just beginning to take off. The availability of alternatives to traditional corporate governance will provide consumers with more choices and control over how their data is used. And, by 2035, decentralized identity systems will already be much further along in their development and adoption. While far from foolproof, these systems will improve user privacy and security and also make signing up for new products and services more accessible.”

Adam Nagy, a senior research coordinator at The Berkman Klein Center for Internet & Society at Harvard University, said, “People are increasingly alienated from their peers, struggling to form friendships and romantic relationships, removed from civic life and polarized across ideological lines. These trends impact our experiences online in negative ways, but they are also, to some extent, an outcome of the way digital life affects our moods, viewpoints and behaviors. The continuation of this vicious cycle spells disaster for the well-being of younger generations and the overall health of society.”

Beneficial and Harmful
Alan D. Mutter, consultant and former Silicon Valley CEO, said, “The magic of technology enables me to Google lentil soup recipes, trade stocks in the park, stream Bollywood music and Zoom with friends in Germany. Without question, tech has solved the eternally vexing P2P problem – the rapid, friction-free delivery of hot-ish pizza to pepperoni-craving persons. Techno thingies like software calibration and hardware calibration networks will get faster and somewhat better (albeit more complex) but probably not cheaper. Here’s what I mean: For no additional charge, the latest Apple Watches will call 911 if they think you fell. It’s a good idea and the feature actually has saved some lives. But it also is producing an overwhelming number of false alarms. So, it is a good thing that sometimes is a bad thing.

  • AI probably will do a better job of reading routine scans than radiologists and might do a better job than human air traffic controllers who sometimes vector two planes to the same runway.
  • AI undoubtedly will answer all phones everywhere, cutting costs but also further compromising the quality of customer service at medical offices, insurance companies, tech-support lines and all the rest.
  • AI will produce all forms of media content but likely without the elan and judgment formerly contributed by humans.
  • AI probably will be more accurate than humans at doing math but less savvy at sorting fact from fiction and nuance from nuisance.

“Technology has upended forever the ways we get and give information. We now live in a Tower of Babel where yadda-yadda moves unchecked, unmoderated and unhinged at the speed of light, polluting and corrupting the public discourse. This is perilous for a democracy like the United States. I am afraid for our republic.”

Edson Prestes, professor of informatics at Federal University of Rio Grande do Sul, Brazil, responded, “I believe digital technologies and their use will help us to understand ourselves and what kind of world we want to live in. This awareness is essential for creating a better and fairer world. All problems created by digital technologies come from a lack of self-, community- and planet-awareness. The sooner we understand this point, the faster we will understand that we live in an interconnected world and, consequently, the faster we will act correctly. Thus, I tend to be optimistic that we will live in a better society than we do today. The poor and vulnerable will have the opportunity to have a good quality of life and standard of living on a healthy planet, where those with differences and a diversity of opinions, religions and identities will coexist peacefully.”

Edson Prestes, professor of informatics at Federal University of Rio Grande do Sul, Brazil, said, “Having a just and fair world is not an easy task. Digital technologies have the power to objectify human beings and human relationships, with severe consequences for society as a whole. The lack of guardrails, or the slow pace of implementation of these guardrails, can lead to a dystopian society. In this sense, the metaverse and similar universes pose a serious threat with huge potential to amplify existing problems in the real world. We barely understand the impact of current digital technologies on our lives. The most prominent is the impact on privacy. When we shift the use of digital technology from a tool to a universe we can live in, new threats will be unlocked. Although digital universes exist only in the digital domain, they have a direct effect on the real world. Maybe some people will prefer to live only in the digital universe and die in the real world.”

David Bernstein, a retired market-research and new-product-development professional, said, “One of the most beneficial developments will be the ability for physicians and mental health professionals to reach even more individuals. As we can already transmit and share EKG information from home or away, I look forward to being able to have slightly more invasive medical processes such as blood analysis more easily available from a distance. Access to more advanced learning for adults not close to traditional education centers will become easier. The rapid changes in what is required to be a productive workforce member will likely necessitate more regular periods of needing to upgrade skills. Society cannot afford to sideline large groups of workers because their skills are not the latest and greatest.”

David Bernstein, a retired market-research and new-product-development professional, said, “Perhaps the most harmful development I see is the further class-based division in our social, economic and political lives. We have already seen how having the financial means to access specialized services, such as online higher education, financial market information and local government, has already divided many communities. Indeed, what is an advantage to the middle and upper classes is a disadvantage for lower-class groups. The worry that one’s position may become redundant due to computerization and automation will continue for many. Though I believe the manual laborer’s job may be more secure than the average programmer’s.”

Carolyn Heinrich, professor of public policy and education at Vanderbilt University, wrote, “The best and most beneficial changes in digital life are those that will increase individual access to information that expands their health, education and economic opportunities. The expansion of digital access to areas where it has been limited by poor infrastructure, including rural areas in developed and developing countries, could go the farthest toward driving these beneficial changes. Access to information and opportunities to use it to improve human well-being can also fuel political and social demands for improvement in government and institutions and human rights. The digital expansion will need to be accompanied by human interactions, such as was done in France Services, to ensure that individuals who are limited in various capacities are not left out.”

Carolyn Heinrich, professor of public policy and education at Vanderbilt University, commented, “The most harmful aspects of digital tools and systems are those that are used to spread misinformation and to manipulate people in ways that are harmful to society. Digital tools are used to scam people of money, steal identities and to bully, blackmail and defame people, and so the expansion of digital tools and systems to areas where they are currently less present will also put more people at risk of these negative aspects of their use. The spread of misinformation promotes distrust in all sources of knowledge, to the detriment of the progress of human knowledge, including reputable research. Children are especially vulnerable to the misuse of digital tools and information, and there is serious concern for the negative impacts that this has had on their mental health.”

Beneficial (Did not respond to harms question)
David J. Krieger, director of the Institute for Communication and Leadership, Switzerland, commented, “In regard to human connections, governance and institutions, in a best-case scenario the widespread adaptation of AI will encourage the development of effective fact-checking for digital tools and establish standards of truth-telling and evidence in decision-making that will influence all aspects of society and human relations. Many plus-side options may emerge from that.

“In media: The development of personalized products and services will tend to eliminate spam and with it the economy of attention. In its place will appear an economy of participation. The disappearance of the economy of attention will disrupt the media system. Mass media will be integrated into decentralized and participatory information services.

“In society overall: The climate catastrophe will fully arrive, as all experts predict. The result will be 1) the collapse of the nation-states (which are responsible for the catastrophe). 2) From the ashes of the nation-states in the best-case scenario, global governance will arise. 3) In order to control the environment, geo-engineering will become widespread and mandatory. 4) In the place of functionally differentiated society based on nation-states, there will arise a global network society based on network governance frameworks, established by self-organizing global networks cutting across all functions (business, law, education, healthcare, science, etc.) and certified and audited by global governance institutions.

“New values and norms for social interaction that are appropriate for a global network society and a new understanding of human existence as relational and associational will replace the values and ideologies of modern Western industrial society.

“In the opposite future setting, the nation-states will successfully block the establishment of effective global governance institutions. The climate catastrophe will leave some nation-states or regions as winners and others as losers, increasing wars, migration, inequality, etc. There will be no effective fact-checking for information processing and uses of AI, which will lead to a loss of trust and greater polarization throughout society.”

Erhardt Graeff, a researcher at Olin College of Engineering who is expert in the design and use of technology for civic and political engagement, wrote, “I’m hopeful that digital technology will continue to increase the quality of government service provision, making it easier, faster and more transparent for citizens and residents engaging with their municipalities and states.

  • I’m hopeful that it will increase the transparency and therefore accountability of our government institutions by making government data more accessible and usable.
  • I’m hopeful that criminal legal system data, in particular, will be made available to community members and advocates to scrutinize the activity of police, prosecutors and courts.
  • I’m hopeful that the laws, policies and procurement changes necessary to ensure responsible and citizen-centered applications of digital technology and data will be put in place as citizens and officials become more comfortable acknowledging the role digital technology plays and the expectations we should have of the interfaces and data provided by government agencies.”

Erhardt Graeff, a researcher at Olin College of Engineering who is expert in the design and use of technology for civic and political engagement, said, “I worry that humanity will largely accept the hyper-individualism and social and moral distance made possible by digital technology and assume that this is how society should function. I worry that our social and political divisions will grow wider if we continue to invest ourselves personally and institutionally in the false efficiencies and false democracies of Twitter-like social media.”

Jim Fenton, a longtime leader in the Internet Engineering Task Force who has worked over the past 35 years at Altmode Networks, Neustar and Cisco Systems, commented, “By 2035, I expect that social norms and technological norms will be closer in alignment. We have undergone such a rapid evolution in areas like social networking, online identity, privacy and online commerce (particularly as applied to cryptocurrency) that our society doesn’t really know what to think about the new technology. At the same time, I don’t expect innovation to slow in the next 12 years. We will undoubtedly have different issues where society and technology fall out of alignment, but resolving these fundamental issues will, in my opinion, provide a basis for tackling new areas that arise.”

Jim Fenton, a longtime leader in the Internet Engineering Task Force who has worked over the past 35 years at Altmode Networks, Neustar and Cisco Systems, said, “I am particularly concerned about the increasing surveillance associated with digital content and tools. Unfortunately, there seems to be a counter-incentive for governments to legislate for privacy, since they are often either the ones doing the surveilling, or they consume the information collected by others. As the public realizes more and more about the ways they are watched, it is likely to affect their behavior and mental state.”

Artur Serra, deputy director of the i2cat Foundation and research director of Citilab in Catalonia, Spain, said, “In 2035 there is the possibility of designing and building the first universal innovation ecosystem based on the Internet and digital technologies. As universal access to the Internet progresses, by 2035 the majority of the African continent will already be online. Then the big question will be: Now what? What will be the purpose of having all of humankind connected to the network? We will understand that the Internet is more than an information and communication network. It is a research and innovation network that can allow, for the first time, building such universal innovation ecosystems in each country and globally – empowering everyone to innovate.”

Artur Serra, deputy director of the i2cat Foundation and research director of Citilab in Catalonia, Spain, wrote, “In 2035, the same great opportunity of designing and building such universal innovation ecosystems upon the Internet can, paradoxically, be the most menacing threat to humanity. Transforming our countries into real labs can also become the most harmful change in our societies. It can end in the appropriation of the innovation capabilities of billions of people by a small group of corporations or public bureaucracies, resulting in a real dark era for the whole of humanity.”

Carol Chetkovich, professor emeritus of public policy at Mills College, said, “Technology can contribute to all aspects of human experience through increased speed, data storage capacity, reach and processing sophistication. For example, with respect to human rights, technology can increase connectivity among citizens (and across societies) that enables them to learn about their rights and to organize to advocate more effectively. Similarly, health monitoring should become easier as technology enables us to measure more bodily activities/functions and assess changes in real time.”

Carol Chetkovich, professor emeritus of public policy at Mills College, wrote, “I am skeptical that technological development will be sufficiently human-centered, and therein lies the downside of tech change. In particular, we have vast inequalities in our society today, and it’s easy to see how existing gaps in access to technology and control over it could be aggravated as the tools become more sophisticated and more expensive to buy and use. The development of the robotic industry may be a boon to its owners, but not necessarily to those who lose their jobs as a result. The only way to ensure that technological advancement does not disadvantage some is by thinking through its implications and designing not just technologies but all social systems to be able to account for the changes. So, if a large number of people will not be employed as a result of robotics, we need to be thinking of how to support and educate those who are displaced before it happens. Parallel arguments could be made about human rights, health and well-being, and so on.”

Andrew Czernek, former vice president of technology at a major technology company, predicted, “Computing will be ubiquitous, and digital devices designed to meet human needs will proliferate. Everything from electrical outlets to aircraft will have useful, new and innovative functions. The spread of 5G networks and movement toward 6G will help accelerate this trend. In professional and personal settings, we’ll see more-intelligent software. It will be able to do simulations and use database information to improve its utility. Virtual reality is already becoming popular in gaming applications, and its extension to education applications offers incredible utility. No longer will we have to rely on wooden models for astronomy or biology teaching; instead, we will visualize planets or molecules through software. Digital technology is at the beginning of revolutionizing gene editing and its uses in disease control. New techniques will allow better gene modeling and editing, and this will accelerate in the next decade.”

Andrew Czernek, former vice president of technology at a major technology company, observed, “The prevalence of digital devices gives dozens of entry points for hackers and digital theft. The situation will become so bad in the next five years that companies will be forced to set up white hat hacking operations as a defense. Lack of political will to create a unique digital ID will result in increasing rates of theft. Will the education system be capable of producing teachers who can teach with technology? If not, wealth gaps will continue to widen. One or two companies have a great opportunity to create teaching tools, but it will take wise management to create a profitable enterprise.”

Harmful (Did not respond to Benefits question)
Gina Neff, professor and director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, predicted, “By 2035 we will see large-scale systems with little room for opting out that lack the ability for people to rectify mistakes or hold systems and power accountable. The digital systems we now enjoy have so far been based on an assumption of democratic control and governance. Challenges to democracy in democratic countries, and increasing use of AI systems for control by authoritarian governments in other countries, will mean that our digital systems will come with a high cost to freedom, privacy and rights. Technologies will appear accurate but have hidden flaws and biases, making it difficult to challenge predictions or results. Guilty until proven otherwise – and it will take a lot to prove otherwise – will be the modus operandi of digital systems in 2035.”

Amali De Silva-Mitchell, founder and coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, commented, “Development in the e-health/medical internet of things (MIoT) space is growing. This is good news for supporting and scaling up the mandate of the UN Sustainable Development Goal #3, Health and Well-Being for All, given the rapidly increasing global population and the resulting pressure that is created on traditional medical services.

“Success will be dependent on quality internet connectivity for all, as well as on availability of devices and user skills or on the support of good IT Samaritans. Funding new innovation is critical. Accessibility for disabled persons can be significantly bettered through the AI and other technologies being developed for use by those who are blind or who are hard of hearing, for example, so as to enable them to access e-health and other services and activities. Robotics, virtual and augmented reality will develop to enhance the human-computer interaction space and hopefully support the e-health space as well.

“As more individuals in the overall global population increase their IT knowledge and skills as users and developers, they will demand more from the science as they see more options for innovation. Ethics must be core to any development and user support activity. The potential to provide training and access to knowledge and ethics education for users and developers becomes increasingly attainable with online education and support, helping to create resilient, ethical, accessible, quality ICT (information and communications technology) systems.”

Amali De Silva-Mitchell, founder and coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, said, “There is growing attrition in the quality of universal value systems for the public good. Quick wins have led to a successive decline in the quality of systems through oversimplification, biased profiling, lack of care to data capture, storage, update and outputs, security, patching etc.

“A lack of quality collaboration and leadership amongst stakeholders can lead to expensive failures and loss in investments for similar innovations and a general malaise can set in, stalling growth and productivity in the ICT (information and communications technology) sector. When intellectual property is not credited appropriately, innovation can also stall.

“Misinformation must be dealt with. Ownership of failures without finger-pointing is important if betterment is to take place. Trust will be eroded if positive, quality social outcomes are not at the center of the development and delivery of ICTs.

“Internet fragmentation at a time of geo-political instability is preventing the development of the rich portfolio of ICT solutions the human world could produce together. The uncertain future now on our global horizon due to this and other changes requires people to work toward ICT togetherness. E-health, in particular, requires trust in multistakeholder support to enable global health and well-being, especially given the likely impacts of climate change on human health.”

Bryan Alexander, futurist, speaker and consultant, wrote, “The most beneficial changes are those empowering human creativity. We have already seen a generation of digitally enabled creativity, from increasingly accessible tools to new ways of sharing works. Indeed, it’s a historical tendency that no sooner do humans invent new technologies than we try to make art and tell stories with them. So, looking ahead, we should expect new tech and new creativity.

“For example, people will use AI to generate computer games of ever-increasing sophistication. 3D printing is likely to become ever more capable and easier to use, spawning new art forms and enabling more people to make more stuff. New materials will be printable, from building parts to biological tissue. To whatever extent people use the metaverse, there will be virtual art and stories in that space. Across all of these domains we should see old forms of creativity reused and new ideas take shape. Digital art philosophies and schools are likely to surface and compete.”

Bryan Alexander, futurist, speaker and consultant, responded, “I fear the most dangerous use of digital technologies will be various forms of antidemocratic restrictions on humanity. We have already seen this, from governments using digital surveillance to control and manipulate residents to groups using hacking to harm individuals and other groups. Looking ahead, we can easily imagine malign actors using AI to create profiles of targets, drones for terror and killing, 3D printing weapons and bioprinting diseases.

“The creation of augmented and virtual reality spaces means some will abuse other people therein, if history is any guide (see ‘A Rape in Cyberspace’ or, more recently, ‘Gamergate’). All of these potentials for human harm can then feed into restrictions on human behavior, either as means of intimidation or as justifications for authoritarianism (e.g., we must impose controls in order to fend off bioprinted disease vectors). AI can supercharge governmental and private power.”

Beneficial (Did not respond to Harms question)
David Bray, distinguished fellow with the non-partisan Stimson Center and the Atlantic Council, wrote, “It’s possible to see us getting ‘left of boom’ of future pandemics, natural catastrophes, human-caused catastrophes, famines, environment erosion and climate change by using new digital technologies on the ground and in space. We often see the signs of a new outbreak before we see people getting sick. We can create an immune system for the planet – a network of tools that search for signs of new infections, directly detect and analyze new pathogens when they first appear and identify, develop and deploy effective therapies.

“This immune system could rely on existing tools, such as monitoring demand and prices for medicinal therapies, analyzing satellite images of traffic patterns and stepping up our efforts to monitor for pathogens in wastewater. It could use new tools that search for novel pathogens in the air, water or soil, sequence their DNA or RNA, then use high-performance computers to analyze the molecules and search through an index of known therapies that might be able to neutralize the pathogen. Biosensors that can detect pathogens could be embedded in animals and plants living in the tropical regions rich in biodiversity, where new infectious diseases often originate. Transmissions from these sensors could link to a supercomputing network that characterizes new pathogens. Of course, such a dramatic scaling up of monitoring and therapeutics could raise concerns about privacy and personal choice, so we will need to take steps to ensure this planetary immune system doesn’t create a surveillance state.

“We can also use AI to develop indicators, warnings and plans to spot vulnerabilities in global food production and help make the agriculture system more resilient and sustainable.

“In an era in which precision medicine is possible, so too will be precision bioattacks, tailored and at a distance. This will become a national security issue if we don’t figure out how to better use technology to do the work of deliberative governance at the speed necessary to keep up with threats associated with pandemics. Exponentially reducing the time it takes to mitigate a biothreat agent will save lives, property and national economies. To do this, we need to:

  • Automate detection by embedding electronic sensors and developing algorithms that take humans out of the loop in characterizing a biothreat agent
  • Universalize treatment methods by employing automated methods to massively select bacteriophages vs. bacteria or antibody-producing E. coli vs. viruses
  • Accelerate mass remediation either via rain or the drinking water supply with chemicals to time-limit the therapy

“Challenges of misinformation and disinformation are polarizing societies, sowing distrust and outpacing truthful beliefs and facts. Dis- and misinformation will be on the rise by 2035, but they have been around ever since humans first emerged on Earth. One of the biggest challenges now is that people do not follow complicated narratives – complicated stories don’t go viral, and science is often complicated. We will need to find ways to win people over, despite the preference of algorithms and people for simple, one-sided narratives.

“We need more people-centered approaches to remedy the challenges of our day. Across communities and nations, we need to internally acknowledge the troubling events of history and of human nature, and then strive externally to be benevolent, bold and brave in finding ways wherever we can at the local level across organizations or sectors or communities to build bridges. The reason why is simple: we and future generations deserve such a world.”

Aaron Chia-Yuan Hung, associate professor of educational technology at Adelphi University, said, “AI rightfully gets a bad rap these days, but it is often used for good, especially in helping us see how we can overcome complex problems, including wicked problems such as climate change. It can be difficult for individuals to see their carbon footprint and the environmental impact of their choices. AI can help unpack those decisions and make them easier to understand.

“In the future, AI will work for you to condense information in large documents that most people don’t bother reading (like Terms of Service) into a simpler document and flag potentially problematic clauses an individual would want to pay close attention to.

“You will also be able to enter your diet and the medications you take into an app and have AI keep you aware of potential side effects and continuously update that information based on the latest scientific knowledge, sending you notifications about what things to add, reduce or subtract. This ease of access to complex analysis could really benefit human health if properly implemented, with proper respect for privacy.

“Like AI, robots often conjure up dystopian nightmares of rogue machines causing havoc, but robots are being designed for home use and will soon be implemented in that setting to do many helpful things like lifting heavy objects, household tasks, etc. Robot vacuums and dishwashers have been around for many years. More people will own useful robots.”

Aaron Chia-Yuan Hung, associate professor of educational technology at Adelphi University, said, “I am concerned about the fragmentation of society. More people than ever before are being exposed to confirmation bias because algorithms feed us what we like to see, not what we should see. Because so much of media (including news, popular culture, social media, etc.) is about getting our attention, and because we are drawn to things that fit our worldview, we are constantly fed things programmed to drive us to think in particular ways. We are not encouraged to think beyond those parameters. This is an intentional design and, while it’s easy to point to social media being the issue, it’s not the sole perpetrator of this problem.

“Because the economy is based so much on attention, it is hard to get tech companies to design products that nudge us out of our worldview, let alone encourage us to have civil discourse based on factual evidence about complex issues. Humans are more isolated today and often too insulated. They don’t learn how to have proper conversations with people they disagree with. They are often not open to new ideas. This isolation, coupled with confirmation bias, is fragmenting society. It could possibly be reduced or alleviated by the correct redesign and updating of digital technology.”

Fernando Barrio, lecturer in business and law at Queen Mary University of London, wrote, “Taking into account current trends in technology development – which can be linked to the greatest wealth concentration ever seen – it is difficult to imagine positive changes by 2035 (the disregard for the environmental impact of these technologies and the glorification of triviality notwithstanding). I will say, however, that over the next 12 years the possibility for radical change does exist.

“If we analyze the trends and focus on potential benefits, seeking the best changes we might find in an otherwise bleak scenario, digital technologies – specifically AI – will work wonders in healthcare and new drug development. AI, which is more accurately designated in today’s form as self-learning algorithmic systems or artificial narrow intelligence, is currently used to theoretically test libraries of drugs against specific diseases. Deep learning technologies have the potential to isolate the impact that different components have on specific areas of a particular disease and to then recombine them to create new drugs.

“Through state-sponsored initiatives, philanthropic activity or, more unlikely, a reconversion of corporate objectives, it is possible that by 2035 technology can be used to upgrade society in many realms. In the field of health it can find treatment for many of the most serious diseases that are harming humanity today and get that treatment out globally, beyond the tiny proportion of the world’s population that is situated in affluent countries.

“Another technological development likely to revolutionize healthcare, at least in the Global North, is the use of AI-based robots for elder care. With an aging population, the use of robots to care for and cater to the older generations seems inevitable, and there are already plenty of examples in countries like Japan that are likely to be globalized by 2035.”

Fernando Barrio, lecturer in business and law at Queen Mary University of London, commented, “To this point in their development people’s uses of the new digital technologies are primarily responsible for today’s extreme concentration of wealth, the overt glorification of the trivial and superficial, an exacerbation of extremes and political polarization and a relativization of human rights violations that may surpass most such influences of the past.

“Blind technosolutionism and a concerted push for keeping technology unregulated under the false pretense that regulation would hinder its development (and that its growth is paramount to human development and happiness) led us to the present. Anyone who believes the fallacy that unbridled technological development was the only thing that kept the planet functioning during the global pandemic fails to realize that those technologies could well have evolved even better in a different, more human-centered regulatory and ethical environment, very likely with more stability.

“There needs to be a substantial change in the way that society regulates technology, or the overall result will not be positive.

“There is a move in intellectual and academic circles to justify the de-humanization of social interactions and to brand as technophobes anyone who sees it as a negative that people spend most of their time today looking at digital devices. The claim is that those who spend hours physically isolated are actually more connected than others, and that spending hours watching trivial media is a new form of literacy. The advocates of that form of technologically driven social isolation and trivialization will have to explain why – in the age of greatest access to information in history – we see a constant decline in knowledge and in the capacity to analyze information, not to mention the current pandemic of mental health issues within the younger generations.

“By 2035, unless there is a radical change in the way people, especially the young, interact with technology, the current situation will worsen substantially.

“Uses of digital technology have led to an outbreak of political polarization and the constant creation of unbridgeable ideological divides, leading to highly damaging, socially self-harming situations like Brexit in the UK and the shocking January 6, 2021, invasion of the U.S. Capitol. Technology does not create these situations, but its use is providing fertile ground for mischief, creating isolated people and affording them the tools to replicate and spread polarized and polarizing messages. The trivialization of almost everything via social media, along with this polarization and the spread of misinformation, is leading to an unfortunate decay in human rights.”

Harmful (Did not respond to Beneficial question)
Dan Lynch, internet pioneer and inventor of CyberCash, wrote, “I’m concerned about the huge reliance on digital systems while the amount of illegal activity is growing daily. One really can’t trust everything. Sure, buying stuff from Amazon is easy and it really doesn’t matter if a few things are dropped or missing. I suggest you stay away from the money apps! Their underlying math is shaky. I know. I invented CyberCash in the mid-1990s.”

Beneficial and Harmful
Allison Wylde, senior lecturer at Glasgow Caledonian University and team leader on the UN Global Digital Compact Team with the Internet Safety, Security and Standards Coalition, said, “To help us try to look forward and understand possible futures, two prominent approaches are suggested: examining possible-future scenarios and learning from published works of non-fiction and fiction. I’d like to merge these approaches here.

“Royal Dutch Shell has arguably led on the scenario approach since the 1960s. For scenario development, as a starting point, an important consideration concerns the framing of any question. Next, the question opens out by asking ‘what if?’ to help consider possible futures that may be marginal possibilities.

“From published literature, fiction and non-fiction, a recent research project examining robots in the workplace concluded that society may experience gains and/or losses. From classical literature, as William Shakespeare suggested, perhaps cautioned, consequences are rooted in past actions, ‘What’s past is prologue.’ What can we take from this?

“If we look back to the time of the invention of the World Wide Web by Tim Berners-Lee, we see the internet started out as a space of openness and freedom. During the Arab Spring, citizens created live-streamed material that acted both as a real-time warning of threats from military forces, and as a record of events. Citizens from other countries assisted. Outside help is also being offered via online assistance today in the conflict between Russia and Ukraine. Is this one possible future: open and sharing?

“Alternative futures, for instance those predicted by H.G. Wells at the end of the 19th century, suggest that we are being watched by intelligences greater than ours, ‘with intellects vast and cool and unsympathetic,’ while we humans are ‘so vain and blinded by vanity that we couldn’t comprehend that intelligent life could have developed; so far…or indeed at all.’ Right now, we can see around us the open-source community developing AI-enhanced tools designed to help us; DALL-E, ChatGPT and Hugging Face are examples of such work. At the same time, malicious actors are turning these tools against us.

“Currently AI is viewed as a binary: good or bad. So, are we facing a binary problem, with two possible avenues? Or are our futures with AI, and indeed as with the rest of our lives, more complex, with multiple and interlinked possibilities? In addition, from literature (in particular, fiction), is the future constantly shifting – appearing and disappearing?

“At this point in time, the United Nations is shaping the language for a Global Digital Compact (GDC) that calls for a trusted, free, open and secure internet, with trust and trust-building as a central and underpinning foundation. Although the UN calls for trust and trust-building, it is silent on the mechanisms for achieving them. The futures discussed here are but possibilities. The preliminary insights of those working toward a widely accepted GDC share common threads: the importance of thinking beyond good and bad, recognising the past and the present, and being alert, and thus well-prepared and well-resourced, to participate in and anticipate multiple possible futures.

“Arguably, just what and who will be in our futures may be more complex than we can imagine. Kazuo Ishiguro, in the novel ‘Klara and the Sun,’ paints yet another picture: a humanoid robot, pining for the attention of a human and seeking comfort in the ‘hum of a fridge.’ This image may chime with the views of a Google staffer fired in 2022 for suggesting that AI chatbots may already be sentient. While such creations may be like children ‘who want to help the world,’ their creators need to take responsibility, as illustrated by the drive toward the use of explainable AI (XAI). (As a final note, Mary Shelley was not invoked.)”

Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College, wrote, “I believe one of the best benefits of future technology is that it will reveal more of the messiness of humanity. We can’t solve a problem unless we can identify it and name it as such. Through the advent of digital technologies, we have started to acknowledge issues with everything from harassment and hate to governance and privacy.

“These are issues that have always been there, but are highlighted through connections in gaming, social media and other virtual spaces. My great hope is that digital technology will help to solve complex human and social problems like climate change, racial inequities and war.

“We are already starting to see humans working alongside computers to solve scientific problems in games like Foldit or EteRNA. Complex, wicked problems are so challenging to solve. Could we share perspectives, interpret data and play with each other in ways that help illuminate and apply solutions to wicked problems?”

Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College, commented, “There are a number of large issues; these are just a few:

  1. Systemic inequities are transmogrified by digital technologies (though these problems have always existed, we may be further harming others through the advent of these systems). For instance, problems might include biased representation of racial, gender, ethnic and sexual identities in games or other media. They also might include how a game or virtual community is designed and the cultural tone that is established. Who is included or excluded, by design?
  2. Other ethical considerations, such as privacy of data or how interactions will be used, stored and sold.
  3. Governance issues, such as how people report and gain justice for harms, how we prevent problems and encourage prosocial behavior, or how we moderate a virtual system ethically. The law has not evolved to fully adjudicate these types of interactions, which may also be happening across national boundaries.
  4. Social and emotional issues, such as how people are allowed to connect or disconnect, how they are allowed to express emotions, or how they are able to express their identities through virtual/digital communities.”

Beneficial and Harmful
Karl M. van Meter, author of “Computational Social Science in the Era of Big Data,” commented, “At this period in the development of digital technology I am both excited and concerned. That attitude will probably evolve with time and future developments. My major concerns are with governance and the environment.

“Given hominine ingenuity, proven over millions of years, and the current economic pressure for new developments – including in technology – the fundamental question is ‘how will our societies and their economies manage future technological developments?’ Will the economic and profit pressure to obtain more and more personal data with new technology continue to generate major abuses and override individuals’ wishes for privacy? This is a question of governance and not of technology and new technological developments. It is up to humanity.

“In my own scientific research, the vast availability of information and contact with others has been a major advantage and has resulted in great progress, but the same technologies have served to give a voice and assistance to those creating serious obstacles to such progress, increasingly bringing ideological extremism into daily life in both developed and less-developed countries.”

Laurie L. Putnam, educator and communications consultant, wrote, “There is great potential for digital technologies to improve health and medical care. The trendlines are clear: Our population is growing older, caregivers are becoming harder to find, and medical specialists are often located some distance from their patients (even a short distance can be too far for a senior without support). Out of necessity, digital healthcare will become a norm.

“Remote house calls, which became more common during the COVID pandemic, will serve more patients more frequently. Remote diagnostics and monitoring will be especially valuable for aging and rural populations that find it difficult to travel. Connected technologies will make it easier for specialized medical personnel to work together from across the country and around the world. Medical researchers will benefit from advances in digital data, tools and connections, collaborating in ways never before possible. We have already made great strides in remote research, diagnostics and treatment. Demographic trends are clearly telling us to do more of this.”

Laurie L. Putnam, educator and communications consultant, said, “Many digital technologies are taking more than they give. And what we are giving up is difficult, if not impossible, to get back. Today’s digital spaces, populated by the personal data of people in the real world, are lightly regulated and freely exploited. Technologies like generative AI and cryptocurrency are costing us more in raw energy than they are returning in human benefit. Our digital lives are generating profit and power for people at the top of the pyramid without careful consideration of the shadows they cast below, shadows that could darken our collective future.

“If we want to see different outcomes in the coming years, we will need to rethink our ROI calculations and apply broader, longer-term definitions of return. We are beginning to see more companies heading in this direction, led by people who aren’t prepared to sacrifice entire societies for shareholders’ profits, but these are not yet the most-powerful forces. Power must shift and priorities must change.”

Jim Kennedy, senior vice president for strategy at The Associated Press, wrote, “The most significant advances in technology will be in search, the mobile experience, social networking, content creation and software development. These – among so many other components of digital life – will be rapidly advanced through artificial intelligence. Generative text and imagery are just the early manifestations of an AI-assisted world that should spark a massive new wave of creativity along with major productivity boosts. To get the most out of this rapid revolution, the humans in the loop will have to sharpen their focus on targets where we can realize the biggest gains and move quickly from experimentation to implementation. Another big sleeper is the electrification of motor vehicles, which will finally break open the next big venue for the mobile experience beyond the phone. AI, of course, will be central to that development as well. At the root of it all will be real personalization, which has been the holy grail since the beginning of digitalization.”

Jim Kennedy, senior vice president for strategy at The Associated Press, responded, “Misinformation and disinformation are by far the biggest threats to digital life and to the peace and security of the world in the future. We have already seen the effects of this, but we probably haven’t seen the worst of it yet. The technological advances that promise to vastly improve our lives are the same ones giving bad actors the power to wage war against the truth and tear at the fabric of societies around the world. At the root of this problem is the lack of regulation and restraint of the major tech platforms that enable so much of our individual and collective digital experience. Governments exist to hold societies together. When will they catch up with the digital giants and hold them to account?”

Czesław Mesjasz, an associate professor at Cracow University of Economics, Kraków, Poland, responded, “Among the advances I see coming:

  1. Improving human knowledge about social life and nature should enhance capabilities to influence them positively
  2. Improving the quality of medical services will lead to better outcomes – especially diagnoses and treatment
  3. Helping people from various cultures understand one another could lead to a more peaceful world
  4. Increasing standards of living thanks to higher productivity will bring many more people above the poverty line.”

Christopher Wilkinson, a retired European Union official, board member for EURid.eu and Internet Society leader, said, “Nearly everything that one might imagine for the future depends on proactive decisions and interventions by public authorities and by dominant corporations. At the global level, there will have to be coordination between regional and national public authorities and between corporate and financial entities. The United Nations (the available global authority) and the regional authorities (e.g., European Union and the like) are too weak to ensure protection of the public interest, and available institutions representing the corporate side (e.g., World Economic Forum) are conducive to collusive business behavior.

“Human rights are an absolute. Those who are least likely to have access to and use of digital technologies are those who are also most likely to suffer from limitations to, or abuse of, their human rights. During the next decade, a large part of the world’s 8 billion people will still not have access to and command of digital technologies. The hope is that corrective controls and actions are initiated to bring at least incremental improvements, including management of interactions in languages, democratic governance and commerce, all of which require extensive research, education and investment to achieve in regions where connectivity is still a luxury.

“In regard to human health and wellness, I have no personal experience of digital life making people happier. In light of the current experience of Ukraine, Syria or Turkey, I have doubts about the ability of digital technology to make people safer. No doubt predictive technologies and big data applications might reduce certain risks (the recent train crash in Ohio comes to mind). But human fallibility, greed and envy usually still prevail. Healthier? Vast resources will be required to address the health of the aging and of victims of famine, war and pandemics. There is no evidence that this will be done worldwide, at scale, transparently, accessibly and affordably during the next decade.”

Christopher Wilkinson, a retired European Union official, board member for EURid.eu and Internet Society leader, said, “Among the potential harms:

“Digital applications in governance and other institutional decision-making will continue to be distrusted. Voting machines, identification, what else? Usually, the best solutions already exist somewhere, if only they can be identified and reproduced.

“Among the many concerns tied to human rights between now and 2035 are the continued exclusion of minorities in digital opportunity and the problems that would be raised by the disappearance of the right to payment with cash in light of the massive recent move to digital-only transactions.

“In health and medicine, the new leading-edge applications can be brilliant. The challenge is how to extend the best ones to replace the legacy systems already in place, linking patient IDs, doctors, hospitals, pharmacies, public health insurance, etc., into one interoperable system, whilst protecting the patients’ privacy.

“More generally, since 2035 is, like, next week in the world of planning for institutions and populations, the main priority should be to ensure that the best solutions are extended to full populations. It is no longer the time for blue-sky research; a lot of that has already been done. The benefits of existing technology and knowledge need to be extended to the population as a whole if the objective is to improve implementations by 2035.”

Jeffrey Johnson, professor of complexity science and design at The Open University, said, “Among the advances I foresee is that the internet will become better regulated. It will no longer be possible to make anonymous comments about people. This will curtail the terrible misogyny, lies, threats and false news that currently poison social media and damage social and political life. Artificial intelligence will be better understood as a technology. It will not be legally possible to make false claims for AI. Autonomous robots will improve and be widely applied in agriculture and many other aspects of life, but they will remain primitive. And computer systems will become much better as the theory of programming better matches the way that computer systems are used in organisations.”

Jeffrey Johnson, professor of complexity science and design at The Open University, said, “The harms will arrive if the points I just made are not achieved. If effective internet regulation does not come to fruition, it will remain possible to make anonymous comments about people. This will exacerbate the terrible misogyny, lies, threats and false news that currently poison social media and damage social and political life. It is possible that artificial intelligence will continue to be misunderstood as a technology and that it will remain legally possible to make false claims for AI. Autonomous robots, like all technology, can be used for any purpose; they could be improved and then widely applied in warfare and in many ways that harm the public and curtail citizens’ rights and lives. And computer systems may not be improved as long as the theory of programming continues to mismatch the way that computer systems are used in organisations.”

Jens Ambsdorf, director of the Lighthouse Foundation in Germany, said, “I can imagine that the stormy development of AI applications, in the sense of meaningfully sorting and verifying big data, could do much to identify and verify trends and patterns at an unprecedented speed and range. This can potentially have a huge impact on all areas to which it is applied. It is critical that access to these technologies is not limited to small interest groups but extends to society at large. The same technologies could finally offer easy access routes for groups less fluent in technology and be an enabling driver for this development. Still, broad education is a prerequisite for meaningful application and interpretation of these technologies, and it is needed in many countries and societies that now lack a full and useful understanding of them.”

Jens Ambsdorf, director of the Lighthouse Foundation in Germany, wrote, “The same technologies that could be drivers for a more coherent and knowledge-based world can be the source of further fragmentation and the building up of parallel societies. The creation of self-referenced echo chambers and alternative narratives is a threat to the very existence of humans on this planet, as self-inflicted challenges like biodiversity loss, climate change, pollution and destructive economies can only be faced successfully together. Currently I hold this danger to be far bigger than the chance for a positive development, as the tools for change rest not in the hands of society but increasingly in the hands of competing private interests.”

Herb Lin, senior research scholar for cyber policy and security at Stanford University’s Center for International Security and Cooperation, said, “The most beneficial change in digital life likely to take place by 2035 is that things don’t get much worse than they are now with respect to pollution in and corruption of the information environment. Applications such as ChatGPT will get better without question, but the ability of humans to use such applications wisely will lag. My best hope is that human wisdom and willingness to act will not lag so much that they are unable to respond effectively to the worst of the new challenges accompanying innovation in digital life.”

Herb Lin, senior research scholar for cyber policy and security at Stanford University’s Center for International Security and Cooperation, commented, “The worst likely outcome is that humans will develop too much trust and faith in the utility of the applications of digital life and become ever more confused between what they want and what they need. The result will be that societal actors with greater power than others will use the new applications to increase these power differentials for their own advantage.”

Davi Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, said, “The best and most beneficial changes in digital life by 2035 by most accounts will be from innovations in machine learning, virtualization and interconnected things (IoT). Learning technology can reduce the cost of knowledge. Virtualization technology can reduce the cost of presence. Interconnected things can both improve the quantity of data for the previous two and deliver more accessibility.

“This all speaks mainly to infrastructure tools, however, which need a special kind of glue. Stewardship and ethics can chart a beneficent course for the tools by focusing on an improved digital life that takes those three pieces and weaves them together with open standards for data interoperability. We saw a similar transformation of the 1970s closed data processing infrastructure into the 1990s interconnected open-standards Web.

“This shift from centralized data infrastructure to federated and distributed processing is happening again already, which is expected to provide ever higher quality/integrity data. For a practical example, a web page today can better represent details of a person or an organization than most things could 20 years ago. In fact, we trust the Web to process, store and transmit everything from personalized medicine to our hobbies and work.

“The next 20 years will continue the trend toward Web 3.0 by allowing people to become more whole and real digital selves in a much safer and healthier format. The digital self could be free of self-interested moat platforms, relying instead on representative ones, with a right to be understood, founded in a right to move and maintain data about ourselves for our own purposes (including wider social benefit).

“Knowledge will improve as it can be far more easily curated and managed by its owner when it isn’t locked away, divided into complex walled gardens and forgotten in a graveyard of consents. A blood pressure sensor, for example, would send data to a personal data store for processing and learning far more privately and accurately. Metadata then could be shared narrowly, based on purpose and time, such as with a relative, coach, assistant or healthcare professional. Health and well-being thus benefit directly from coming improvements in data-integrity architecture, as we already are seeing consent-based, open-standards sharing infrastructure being delivered that will transform lives for the better.”
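The blood-pressure example above can be sketched in a few lines of code. This is a hypothetical illustration of purpose- and time-scoped consent over a personal data store, not the actual Solid protocol API; all class and field names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentGrant:
    grantee: str          # e.g., "cardiologist", "coach"
    purpose: str          # e.g., "treatment", "fitness"
    expires: datetime     # grants are time-limited by construction

@dataclass
class PersonalDataStore:
    readings: list = field(default_factory=list)
    grants: list = field(default_factory=list)

    def record(self, systolic: int, diastolic: int) -> None:
        # Data stays in the owner's store; nothing is pushed to a platform.
        self.readings.append((datetime.now(), systolic, diastolic))

    def share(self, grantee: str, purpose: str):
        # Release readings only under a matching, unexpired grant.
        now = datetime.now()
        for g in self.grants:
            if g.grantee == grantee and g.purpose == purpose and g.expires > now:
                return list(self.readings)
        return None  # no valid consent: nothing leaves the store

store = PersonalDataStore()
store.record(118, 76)
store.grants.append(ConsentGrant("cardiologist", "treatment",
                                 datetime.now() + timedelta(days=30)))
assert store.share("cardiologist", "treatment") is not None
assert store.share("advertiser", "marketing") is None
```

The design point is that denial is the default: data leaves the store only when an explicit, narrow, unexpired grant matches the request.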

Davi Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, predicted, “The most harmful or menacing changes likely to occur by 2035 in digital technology are related to the disruptive social effects of domain shifts. A domain shift pulls people out of areas they are familiar with and forces them to reattach to unfamiliar technology, such as with the end of horses and the rise of cars. In retrospect the wheel was inferior to four-legged transit in very particular ways (e.g., requirement for a well-maintained road in favorable weather, dumping highly toxic byproducts in its wake) yet we are very far away from realizing any technology-based legged transit system.

“Sophisticated or not-well-understood technology can be misrepresented using fear tactics such that groups will drive into decades of failure and harm without realizing they’ve been fooled. We’ve seen this in the renewed push for driverless vehicles, which are not very new but are lately presented as magically near realization.

“Sensor-based learning machines are marketed unfairly to unqualified consumers, preying on their fear of losing control; people want to believe, without evidence, that a simple and saccharine digital assistant will make them safer. This has manifested as a form of addiction and over-dependence causing social and mental health issues, including an alarming rise in crashes and preventable deaths caused by inattentive drivers who believe misinformation about automation.

“Even more to the point, an over-emphasis on automation instead of augmentation leaves necessary human safety controls and oversight out of the loop on extremely dangerous and centrally controlled machines. It quickly becomes more practical and probable to poison a driverless algorithm in a foreign country to unleash a mass casualty event using loitering cars as swarm kamikazes, than to fire remote missiles or establish airspace control for bombs.

“Another example, related to misinformation, is the domain shift in identity and digital self. Often referred to as deepfakes, an over-reliance on certain cues can be manipulated to target people who don’t use other forms of validation. Trust sometimes is based on the sound of a voice or the visual appearance of a face. That was a luxury, as any deaf or blind person can attest. Now, in the rapidly evolving digital-tools market, anyone can sound or look like anyone else; it is as if observers had become deaf or blind and needed some other means of establishing trust.

“This erodes old domains of trust, yet it also could radically shift trust by fundamentally altering what credible sources should be based upon. A Black woman having the opportunity to put on a white face to reach audiences, or an unknown person looking like a celebrity, challenges many groups’ notions of what they should have been trusting in a connection and its message.

“Content should be judged, rather than the cover, as the old saying goes. Like the printing press revolution, without wise content frameworks we may see increased polarization and division due to exploitation of this knowledge shift – the spread of bogus ideology through rapidly evolving inexpensive communication channels.”

Raymond Perrault, a distinguished computer scientist at SRI International and director of the AI Center there from 1988 to 2017, wrote, “First, some background. I find it useful to describe digital life as falling into three broad, and somewhat overlapping categories:

  • Content: web media, news, movies, music, games (mostly not interactive)
  • Social media (interactive, but with little dependency on automation)
  • Digital services, in two main categories: pure digital (e.g., search, financial, commerce, government) and that which is embedded in the physical world (e.g., healthcare, transportation, care for disabled and elderly)

“The big challenges are quality of information (veracity and completeness) and technical feasibility of some services, in particular those depending on interaction.

“Most digital services depend on interaction with human users and the physical world that is timely and highly context-dependent. Our main models for this kind of interaction today (search engines, chatbots, LLMs) are all deficient in that they depend on a combination of brittle hand-crafted rules, large amounts of labelled training data, or even larger amounts of unlabeled data, all to produce systems that are either limited in function or insufficiently reliable for critical applications. We have to consider security of infrastructure and transactions, privacy, fairness in algorithmic decision-making, sustainability for high-security transactions (e.g., with blockchain), and fairness to content creators, large and small.

“So, what good may happen by 2035?

“Hardware, storage, compute, communications costs will continue to decrease, both in cloud and at the edge. Computation will continue to be embedded in more and more devices, but usefulness of devices will continue to be limited by the constraints on interactive systems. Algorithms essential to supporting interaction between humans and computers (and between computers and the physical world) will improve if we can figure out how to combine tacit/implicit reasoning, as done by current deep learning-based language models, with more explicit reasoning, as done by symbolic algorithms.

“We don’t know how to do this, and a significant part of the AI community resists the connection, but I see it as a (difficult) technical problem to be solved, and I am confident that it will one day be solved. I believe that improving this connection would allow systems to generalize better, be taught general principles by humans (e.g., mathematics), reliably connect to symbolically stored information, and conform to policies and guidance imposed by humans. Doing so would significantly improve the quality of digital assistants and of physical autonomous systems. Ten years is not a bad horizon.”
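The tacit-plus-explicit combination Perrault describes can be caricatured in a few lines: a statistical component proposes answers, and a symbolic component checks them against explicit rules, abstaining when nothing verifies. This is a toy sketch, not any real system; the noisy "model" and all function names are invented.

```python
import random

def tacit_propose(question, n: int = 5):
    # Stand-in for a learned, tacit model: noisy guesses near the true sum.
    a, b = question
    return [a + b + random.choice([-1, 0, 0, 0, 1]) for _ in range(n)]

def symbolic_verify(question, answer: int) -> bool:
    # Explicit rule the system must conform to: exact arithmetic.
    a, b = question
    return answer == a + b

def hybrid_answer(question):
    # Accept the first proposal that survives the symbolic check;
    # abstain rather than emit an unverified answer.
    for candidate in tacit_propose(question):
        if symbolic_verify(question, candidate):
            return candidate
    return None

result = hybrid_answer((17, 25))  # 42 when a proposal passes the check, else None
```

The point of the sketch is the division of labor: the tacit component supplies fluent candidates, while the symbolic component supplies the reliability that critical applications demand.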

Raymond Perrault, a distinguished computer scientist at SRI International and director of the AI Center there from 1988 to 2017, predicted, “Better algorithms will not solve the disinformation problem, though they will (continue to) be able to bring cases of it to the attention of humans. Ultimately this requires improvements in policy and large investments in people, which goes against incentives of corporations and can only be imposed on them by governments, which are currently incapable of doing so. I don’t see this changing in a decade. Nor will better algorithms solve the necessary investments to prevent certain kinds of information services (e.g., local news) from disappearing, nor treating content creators fairly. Government services could be significantly improved by investment using known technologies, e.g., to support tax collection. The obstacles again are political, not technical.”

Michael G. Dyer, professor emeritus of computer science at UCLA, wrote, “AI systems like ChatGPT and DALL-E represent major advances in Artificial Intelligence. They illustrate ‘infinite generative capacity’ – an ability to both generate and recognize sentences and situations never before described. As a result of such systems, AI researchers are beginning to home in on how to create entities with consciousness. As an AI professor, I had always believed that if an AI system passed the Turing Test it would have consciousness, but systems such as ChatGPT have proven me wrong. ChatGPT behaves as though it has consciousness but does not. The question then arises: What is missing?

“A system like ChatGPT (to my knowledge) does not have a stream of thought; it remains idle when no input is given. In contrast, humans, when not asleep or engaged in some task, will experience their minds wandering – thoughts, images, past events and imaginary situations will trigger more of the same. Humans also continuously sense their internal and external environments and update representations of these, including their body orientation and location in space and the temporal position of past recalled events or of hypothetical, imagined future events.

“Humans maintain memories of past episodes. I am not aware as to whether or not ChatGPT keeps track of interviews it has engaged in or of questions it has been asked (or the answers it has given). Humans are also planners: they have goals, and they create, execute and alter/repair plans that are designed to achieve their goals. Over time they also create new goals; they abandon old goals and re-rank the relative importance of existing goals.

“It will not take long to integrate systems like ChatGPT with robotic and planning systems and to alter ChatGPT so that it has a continual stream of thought. These forms of integration could easily happen by 2035. Such integration will lead to an entire new type of technology – technologies with consciousness.”

Michael G. Dyer, professor emeritus of computer science at UCLA, warned, “Humans have never before created artificial entities with consciousness, so it is very difficult to predict what sort of products will come about, along with their unintended consequences.

“I would like to comment on two dissociations with respect to AI. The first is that an AI entity (whether software or robotic) can be highly intelligent while NOT being conscious or biologically alive. As a result, an AI will have none of the human needs that come from being alive and having evolved on our planet (e.g., the human need for food, air, emotional/social attachments, etc.). The second dissociation is between consciousness/intelligence and civil/moral rights. Many people might conclude that an AI with consciousness and intelligence must necessarily be given civil/moral rights; however, this is not the case. Civil/moral rights are only assigned to entities that can feel pleasure and pain. If an entity cannot feel pain then it cannot be harmed. If an entity cannot feel pleasure then it cannot be harmed by being denied that pleasure.

“Corporations have certain rights (e.g., they can own property) but they do not have moral/civil rights, because they cannot experience happiness, nor suffering. It is eminently possible to produce an AI entity that will have consciousness/intelligence but that will NOT experience pleasure/pain. If we humans are smart enough, we will restrict the creation of synthetic entities to those WITHOUT pleasure/pain. In that case we might survive our inventions.

“In the entertainment media, synthetic entities are always portrayed by humans, and a common trope is that of those entities being mistreated by humans, with the audience siding with them. In fact, synthetic entities will be very nonhuman. They will NOT eat food; give birth; grow as children into adulthood; get sick; fall in love; grow old or die. They will not need to breathe, and currently I am unaware of any AI system that has any sort of empathy for the suffering of humans. Most likely (and unfortunately) AI researchers will create AI systems that do experience pleasure/pain, and will even argue for doing so, so that such systems learn to have empathy. Unfortunately, such a capacity will then turn them into agents deserving of moral consideration and thus of civil rights.

“Will humans want to give civil rights and moral status to synthetic entities who are not biologically alive and who couldn’t care less if they pollute the air that humans must breathe to stay alive? Such entities will be able to maintain backups of their memories and live on forever. Another mistake would be to give them any goals for survival. If the thought of being turned off causes such entities emotional pain, then humans will be causing suffering in a very alien sort of creature, and humans will then become morally responsible for their suffering. If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.

“The field of AI is advancing very rapidly. AI systems now exist that pass the Turing Test but still lack consciousness, but conscious systems are not far off. Will humans also create (non-human, non-living) AI systems that are able to demand civil rights, due to their ability to also experience pleasure and pain?

“Note here that I have ignored the thorny issue of determining whether or not an AI entity is actually experiencing pain when, in the future, it behaves as though it is in pain. With ChatGPT we already have the problem of determining whether or not it is conscious while it behaves as though it is conscious.”

Frank Odasz, president of Lone Eagle Consulting, said, “By 2035, everyone will have a relationship with AI in multiple forms. ChatGPT is an AI tool that can draft essays on any topic. Jobs will require less training and will be continually aided by AI helpers. The congressional Office of Technology Assessment will be reinstated to counter the exponential abuses of AI, deepfake videos and all other known abuses. Creating trust in online businesses and secure identities will become commonplace. Four-day work weeks and continued growth in remote work and remote learning will mean everyone can make the living they want, living wherever they want.

“Everyone will have a global-citizenship mindset, working toward processes that empower everyone. Keeping all of humankind at the same pace of progress will become a shared goal as the volume of new innovations continues to increase, creating growing opportunities for everyone to combine multiple innovations into new, integrated ones.

“Developing human talent and agency will become a global shared goal. Purposeful use of our time will become a key component of learning. There will be those who spend hours a day using VR goggles for work and for gaming with increasingly social components. A significant portion of society will be able to opt out of most digital activities once universal basic income programs proliferate. Life, liberty and the pursuit of happiness, equality before the law and new forms of self-exploration and self-care will proliferate.

“Collective values will emerge and become important regarding life choices. Reconnecting with nature and our responsibility for stewardship of our planet’s environments, and each other, will take a very purposeful role in the lives of everyone. As more people learn the benefits of being positive, progressive, tolerant of differences and open-minded, most people will agree that people are basically good.

“The World Values Survey has previously recorded metrics such as 78% of Swedish citizens believing that people are basically good, compared with 15% in Latin America and 5% in parts of Asia. Exact figures are at the website, and ongoing surveys will reflect changes.

“Pursuit of meaningful use of our time, freed from menial labor, will create a new global culture of purpose to rally all global citizens to work together to sustain civil society and our planet.

“With all the advances in tech, what could go wrong? … ”

Frank Odasz, president of Lone Eagle Consulting, wrote, “By 2035, the vague promise of broadband for all, providing meaningful, measurable, transformational outcomes, will create a split society, extending what we already see in 2023, with the most-educated leaning toward a progressive, tolerant, open-learning society able to adapt easily to accelerating change. Those left behind, lacking the mutual support necessary to learn to love learning and benefit from accelerating technical innovation, will grow fearful of change, of learning, of those who do understand the potential for transformational outcomes of motivated, self-directed Internet learning, and particularly of collaborating with others. If we all share what we know, we’ll all have access to all our knowledge.

“Lensa AI is an app from China that turns your photo into many choices for an avatar and/or a more compelling ID photo, requiring only that you sign away all intellectual rights to your own likeness. Abuses of social media are listed at the Ledger of Harms from the Center for Humane Tech.

“It is known that foreign countries continue to implement increasingly insidious methods for proliferating misinformation and propaganda. Certainly the United States, internally, has severe problems due to severe political polarization that went nearly ballistic in 2020 and 2021.

“If a unified global value system evolves, there is hope international law can contain such moral and ethical abuses. Note: the Scout Law created in 1911 has a dozen generic values for common decency and served as the basis for the largest uniformed organizations in the world – Boy Scouts and Girl Scouts. Reverence is one trait that encompasses all religions.

“‘Leave no one behind’ needs to refer to those without a moral compass; a positive, supportive culture; self-esteem; and common sense.

“Mental health problems are rampant worldwide. Vladimir Putin controls more than 4,500 nuclear missiles. In the United States, the proliferation of mass shootings tells us one person can wreak havoc with the lives of very many others. If 99 percent of society evolves to be good people with moral values and generous spirits, the reality is human society might still end in nuclear fires due to the actions of a few individuals, or even a single individual with a finger on the red button capable of destroying billions and making huge parts of the planet uninhabitable. How can technology assure our future? Finland has built underground cities to house its entire population in the event of nuclear war.

“The battle between good and evil has changed due to the power of technology. The potential disaster only a few persons can exact upon society continues to grow disproportionately to the security the best efforts of good folks can deliver. This dichotomy, taken to extremes, might spell doom for us all unless radical measures are taken, down to the level of monitoring individuals every moment of the day.

“Acultural worldviews need to evolve to create a common bond accepting our differences as allowable commonalities. This is the key to sustainability of the human race, and it is not a given. Our human-caused climate changes are already creating dire outcomes, drought, sea levels rising and much more. The risk of greater divisiveness will increase as impacts of climate change continue to increase. Migration pressure is but one example.”

Beneficial and Harmful
David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, wrote, “Both the Internet and machine learning have removed the safe but artificial boundaries around what we can know and do, plunging us into a chaos that is certainly creative and human, but also dangerous and attractive to governments and corporations desperate to control more than ever.

“It also means that the lines between predicting and hoping or fearing are impossibly blurred. Nevertheless:

“Right now, large language models (LLMs) of the sort used by ChatGPT know more about our use of language than any entity ever has, but they know absolutely nothing about the world. (I’m using ‘know’ sloppily here.) In the relatively short term, they’ll likely be intersected with systems that have some claim to actual knowledge, so that the next generation of AI chatbots will hallucinate less and be more reliable. As this progresses, it will likely disrupt both our traditional and Net-based knowledge ecosystems.
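The "intersection" of language models with systems that have some claim to actual knowledge can be caricatured as a grounding check: before a generated claim is surfaced, it is compared against a curated fact store. A toy sketch, with an invented store and schema:

```python
# Invented fact store: (subject, relation) -> known value. In a real system
# this would be a knowledge base or retrieval layer, not a dict.
FACT_STORE = {
    ("water", "boils_at_c"): 100,
    ("earth", "moons"): 1,
}

def ground(subject: str, relation: str, claimed_value) -> str:
    # Check a model's claim against stored knowledge before surfacing it.
    known = FACT_STORE.get((subject, relation))
    if known is None:
        return "unverifiable"  # nothing to check against; may be a hallucination
    return "confirmed" if known == claimed_value else "contradicted"

assert ground("water", "boils_at_c", 100) == "confirmed"
assert ground("earth", "moons", 2) == "contradicted"
assert ground("mars", "moons", 2) == "unverifiable"
```

Even this crude filter changes the failure mode: fluent but wrong claims are flagged rather than passed through with the same confidence as verified ones.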

“With luck, the new knowledge ecosystem is going to have us asking whether knowing with brains and books hasn’t been one long dark age. I mean, we did spectacularly well with our limited tools, so good job fellow humans! But we did well according to a definition of knowledge tuned to our limitations.

“As machine learning begins to influence how we think about and experience our lives and world, our confidence in general rules and laws as the high mark of knowledge may fade, enabling us to pay more attention to the particulars in every situation. This may open up new ways of thinking about morality in the West and could be a welcome opportunity for the feminist ethics of care to become more known and heeded as a way of thinking about what we ought to do.

“Much of the online world may be represented by agents: software that presents itself as a digital ‘person’ that can be addressed in conversation and can represent a body of knowledge, an organization, a place, a movement. Agents are likely to have (i.e., be given) points of view and interests. What will happen when these agents have conversations with one another is interesting to contemplate.

“We are living through an initial burst of energy and progress in areas that until recently were too complex even to imagine we could tackle.

“These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally. This is an opportunity for us to come face to face with how small a light our mortal intelligence casts. But it is also an overwhelming temptation for self-centered corporations, governments and individuals to exploit that power and use it against us.

“I imagine that both of those things will happen.

“Second, we are heading into a second generation that has lived much of its life on the Internet. For all of its many faults – a central topic of our time – being on the Internet has also shown us the benefits and truth of living in creative chaos. We have done so much so quickly with it that we now assume connected people and groups can undertake challenges that before were too remote even to consider. The collaborative culture of the Internet – yes, always unfair and often cruel – has proven the creative power of unmanaged connective networks.

“All of these developments make predicting the future impossible – beyond, perhaps, saying that the chaos that these two technologies rely on and unleash is only going to become more unruly and unpredictable, driving relentlessly in multiple and contradictory directions.

“In short: I don’t know.”

Beneficial and Harmful
Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, observed, “Humans and technology rarely develop in perfect sync, but we will see them catching up. We’ve lived through a period in which digital tech has developed at speeds we’ve struggled to keep up with: too much content, too much noise, too much disinformation.

“Slowly but surely, we’re getting the tools to regain some semblance of control. AI used to be the monster under our beds, but now we’re seeing how we might make it our obedient dog (although some still fear it might be a cat in disguise). As new tools are released, we’re increasingly seeing people use them for fearless experimentation, finding ways to bend ever more powerful technologies to human wills. Having feared that AI and other technologies would take our jobs and make us obsolete, humans are finding ever more ways to elevate themselves with technology, making digital wrangling not just the hobby of a few forerunners but a new folk culture.

“There was a time when using electricity was something you could only do after serious education and a long apprenticeship. Today, we all know how a plug works. The same is happening in the digital space. Increasingly, digital technologies are being turned into something so easy to use and manipulate that they become the modern equivalent of electricity. As every man, woman and child knows how to use an AI to solve a problem, digital technology becomes ever less scary and more and more the equivalent of building with Lego blocks. In 2035 the limits are not technological, but creative and communicative. If you can dream it and articulate it, digital technology can build it, improve upon it and help you transcend the limitations you thought you had.

“That is, unless a corporate structure blocks you.

“Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility.’ What happens when we all gain great power? The fact that some of us will act irresponsibly is already well known, but we also need to heed the backlash this all brings. There are great institutional powers at play that may not be that pleased with the power that the new and emerging digital technologies afford the general populace. At the same time, there is a distinct risk that radicalized actors will find ever more toxic ways to utilize the exponentially developing digital tools – particularly in the field of AI. A common fear in scary future scenarios is that AIs will develop to a point where they subjugate humanity, but right now, leading up to 2035, our biggest concern is the ways in which humans are and will be weaponizing AI tools.

“Where this places most of humanity is in a double bind. As digital technology becomes more and more powerful, state institutions will aim to curtail bad actors using it in toxic ways. At the same time, and for the same reason, bad actors will find ever more creative ways to use it to cheat, fool, manipulate, defraud and otherwise mess with us. The average Joe and/or Jane (if such a thing exists anymore) will be caught up in the coming AI turf wars, and some will become collateral damage.

“What this means is that the most menacing thing about digital technologies won’t be the tech itself, nor any one person’s deployment of the same, but being caught in the pincer movement of attempted control and wanton weaponization. We think we’ve felt this now, with the occasional social media post being quarantined, but things are about to get a lot, lot worse.

“Imagine having written a simple, original post, only to see it torn apart by content-monitoring software and at the same time endlessly repurposed by agents who twist your message to its very antithesis. Imagine this being a normal, daily affair. Imagine being afraid to even write an email, lest it becomes fodder in the content wars. Imagine tearing your children’s tech away, just to keep them safe for a moment longer.”

Beneficial (Did not answer the Harms question)
Garth Graham, longtime Canadian networked communities leader, commented, “Consider the widely accepted Internet Society phrase, ‘Internet Governance Ecology.’ In that phrase, what does the word ecology actually mean? Is the Internet Society’s description of Internet governance as ecology a metaphor, an analogy or a reality? And, if it is a reality, what are the consequences of accepting it?

“Digital technology surfaces the importance of understanding two different approaches to governance. Our current understanding of governance, including democracies, is hierarchical, mechanistic and measures things on an absolute scale. The rules about making rules are assumed to apply externally, from outside the systems of governance. And this means that those with power assume their power is external to the systems they inhabit. The Internet, as a set of protocols for inter-networking, is based on a different assumption. Its protocols are grounded in a shift in epistemology away from the mechanistic and towards the relational.

“It is a common pool resource and an example of the governance of complex adaptive self-organizing systems. In those systems, the rules about making rules are internal to each and every element of the system. They are not externally applied. This complexity means that the adaptive outcomes of such systems cannot be predicted from the sum of the parts. The assumption of control by leadership inherent in the organization of hierarchical systems is not present. In fact, the external imposition of management practices on a complex adaptive system is inherently disruptive of the system’s equilibrium. So the system, like a packet-switched network, has to route around it to survive.

“Presently, our understanding of the difference between these two approaches to governance is most visible in the social changes occurring in the shift towards awareness of interconnectedness in ecologies, and in the significance that has for the mitigation of climate change. There is a chance that by 2035 awareness of the Internet’s nature as a complex adaptive system that mirrors and supports other self-organizing adaptive systems will accelerate a shift in epistemology away from governance by hierarchy and toward open systems of self-organization.

“Then the choice to connect or not with any system of relationship becomes personal, and the organizational responses to problems become distributed, adaptive and local, rather than top-down.

“I do not think we understand what society becomes when machines are social agents. Code is the only language that’s executable. It is able to put a plan or instruction or design into effect on its own. It is a human utterance (artifact) that, once substantiated in hardware, has agency. We write the code and then the code writes us. Artificial intelligence (AI) intensifies that agency. That makes necessary a shift in our assumptions about the structure of society.

“All of us now inhabit dynamic systems of human-machine interaction. That complexifies our experience. Yes, we make our networks, and our networks make us. Interdependently, we participate in the world and thus change its nature. We then adapt to an altered nature in which we have participated. But the ‘we’ in those phrases now includes encoded agents that interact autonomously in the dynamic alteration of culture. Those agents sense, experience and learn from the environment, modifying it in the process, just as we do. This represents an increase in the complexity of society and the capacity for radical change in social relation.

“Ursula Franklin’s definition of technology – ‘Technology involves organization, procedures, symbols, new words, equations, and, most of all, it involves a mindset’ – is that it is the way we do things around here. It becomes different as a consequence of a shift in the definition of ‘we.’ AI increases our capacity to modify the world, and thus alter our experience of it. But it puts ‘us’ into a new social space we neither understand nor anticipate.”

Beneficial (Did not answer the Harms question)
David Porush, writer and longtime professor at Rensselaer Polytechnic Institute, commented, “There will be positive progress in many realms. Quantum computing will become a partner to human creativity and problem solving. We’ve already seen sophisticated brute-force computing achieve this with ChatGPT. Quantum computing will surprise us and challenge us to exceed ourselves even further and in much more surprising ways. It will also challenge former expectations about nature and the supernatural, physics and metaphysics. It will rattle the cage of scientific axioms of the mechanist-vitalist duality. This is a belief, and a hope, with only hints of empirical evidence.

“We might establish a new worldwide court of criminal justice. Utopian dreams that the World Wide Web and new social technologies might change human behavior have failed – note the ongoing human criminality, predation, tribalism, hate speech, theft and deception, demagoguery, etc. Nonetheless, social networks also enable us to witness, record and testify to bad behavior almost instantly, no matter where in the world it happens.

“By 2035 I believe this will promote the creation (or beginning of the discussion of the creation) of a new worldwide court of criminal justice, including a means to prosecute and punish individual war crimes and bad nation actors. My hope is that this court would supersede our current broken UN and come to apolitical verdicts based on empirical evidence and universal laws. Citizens pretty universally have shown they will give up rights to privacy to corporations for convenience. It would also imply that the panopticon of technologies used for spying and intrusion, whether for profit or totalitarian control by governments, will be converted to serve global good.

“Social networking contributes to scientific progress, especially in the field of virology. The global reaction to the arrival of COVID-19 showed the power of data gathering, data sharing and collaboration on analysis to combat a pandemic. Worldwide virology the past two years is a fine avatar of what could be done for all sciences.

“We can make more-effective use of global computing in regard to resource distribution. Politicians and nations have not shown enough political will to really address long-term solutions to crises like global warming, water shortages and hunger. At least emerging data on these crises arm us with knowledge as the predicate to solutions. For instance, there’s not one less molecule of H2O available on Earth than a billion years ago; it’s just collected, made usable and distributed terribly.

“If we combine the appropriate level of political will with technological solutions (many of which we have in hand), we can distribute scarce resources and monitor harmful human or natural phenomena and address these problems with much more timely and effective solutions.”

Christopher W. Savage, a leading expert in legal and regulatory issues based in Washington, D.C., wrote, “Continued advances in artificial intelligence and machine learning will be extremely beneficial to society in a range of ways. These will include better medical diagnosis and healthcare, better and more customized education, and – in the background – more efficient business and commercial activities, leading to the potential of lower prices for consumers.

“I predict that remote work – which is enabled by near-ubiquitous broadband connectivity – will become permanent in many fields. Among many other benefits, this will spare remote workers the time and hassle of a daily commute, reclaiming found time amounting to 10 to 20 percent of people’s waking hours. People realized this during the pandemic, and the absurdity of spending time on unnecessary commutes will keep remote work important.

“A particular application of AI/ML is permitting purely natural language interfaces between people and their devices and apps. Twenty years ago, being able to simply speak to our devices to cause them to do what we want was still in the realm of science fiction. This capability (like saving commuting time) may not seem like a lot, but it will eliminate a great deal of friction – the cognitive load of dealing with apps and devices by typing and pushing buttons – that takes away from human enjoyment and flourishing.”

Christopher W. Savage, a leading expert in legal and regulatory issues based in Washington, D.C., responded, “The degree to which people’s activities (both literally online and in the real world) are subject to surveillance by private entities and governments will increase. This creates a number of potentially serious harms:

1) Surveillance and inherent loss of privacy: People will perceive that they are being directly or indirectly watched more or less constantly. This inhibits personal freedom and exploration.

2) Manipulation: The more those performing surveillance know about us, the more effectively they will be able to manipulate us to do what is in their interest rather than ours. While this may be as trivial as buying something that someone doesn’t really need, it can also affect civic engagement and politics/voting activity. Again, this inhibits human freedom.

3) Disinformation: The less consensus there is among all members of society as to a set of basic facts and values, the more tenuous social bonds become. Digital technology has made it possible to spread lies, half-truths, innuendoes, etc., to a degree that has never before existed in human history. Combined with the increased ability of bad actors to manipulate us, this will seriously degrade social cohesion.”

Christine Boese, a consultant and independent scholar, wrote, “I’m having a hard time seeing around the 2035 corners because deep structural shifts are occurring that could really reframe everything on a level of electricity and electric light, or the advent of radio broadcasting (which I think was more ground-breaking for human connectedness than television).

“These reframing technologies live inside rapid developments in natural language processing (NLP) and GPT-3 (and GPT-4), which will have beneficial sides, but also dark sides, things we are only beginning to see with ChatGPT.

“The biggest issue I see to making NLP gains truly beneficial is the problem that humanity doesn’t scale very well. That statement alone needs some unpacking. I mean, why should humanity scale? With a population approaching nine billion, and assumptions of mass delivery of goods and services, there are many reasons for merchants and providers to want humanity to scale, but mass scaling tends to be dehumanizing. Case in point: teaching writing at the college level. We’ve tried many ways to make learning to write not so one-on-one teaching intensive, like an apprenticeship skill, with workshops, peer review, drafting, computer-assisted pedagogies, spell-check, grammar and logic screeners. All of these things work to a degree, but to really teach someone what it takes to be a good writer, nothing beats one-on-one. Teaching writing does not scale, and armies of low-paid adjuncts and grad students are being bled dry to try to make it do so.

“Could NLP help humanity scale? Or is it another thing that the original Modernists in the 1920s objected to about the de-humanizing assembly lines of the Industrial Revolution? Can we actually get to High Tech/High Touch, or are businesses which run like airlines, with no human-answered phone lines, the model of the future?

“That is a corner I can’t see around, and I’m not ready to accept our nearly-sentient, uncanny GPT-4 Overlords without proof that humanity and the humanities are not lost in mass scalability and the embedded social biases and blind spots that come with it.

“We are hitting the limits of human-directed technology as well, and machine learning management of details is quickly outstripping human cognition. ‘Explainability’ will be the watchword, but with an even bigger caveat: one of the biggest symptoms of Long COVID could turn out to be permanent cognitive impairment in humans. This could become a species-level alteration in which it is no longer even possible for us to evolve into Morlocks; we may already, of necessity, be Eloi.

“To that end, the machines may have to step up, and this could be a critical and crucial benefit if the machines are up to it. If human intellectual capacity is dulled with COVID brain fog, an inability to concentrate, to retain details, and so on, it stands to reason humanity may turn to McLuhan-type extensions and assistance devices. Machines may make their biggest advances in knowledge retention, smart lookups, conversational parsing, low-level logic and decision-making, and assistance with daily tasks and even work tasks right at the time when humans need this support the most.

“This could be an incredible benefit. And it is also chilling.”

Christine Boese, a consultant and independent scholar, observed, “Technological dystopias are far easier to imagine than benefits. There are no neutral tools. Everything exists in social and cultural contexts.

“In the space of AI/ML in general, specialized ML will accomplish far more than unsupervised or free-ranging AI. I feel that the limits of the hype in this space are quickly being reached, to the point that it may stop being called ‘artificial intelligence’ very soon. I do not yet feel the overall benefit or threat will come directly from this space, on par with what we’ve already seen from Cambridge Analytica-style machinations (which had limited usefulness for algorithmic targeting, and more usefulness in news feed force-feeding and repetition). We are already seeing a rebellion against corporate walled gardens and invisible algorithms in the Fediverse and the ActivityPub protocol, which have risen suddenly with the rapid collapse of Twitter.

“Natural language processing is the exception, on the strength of the GPT project incarnations, including ChatGPT. Already I am seeing a split in the AI/ML space, where NLP is becoming a completely separate territory, with different processes, rules and approaches to governance. This specialized ML will quickly outstrip all other forms of AI/ML work, even image recognition.

“Where does the menace or harm come from in NLP? It will easily pass the Turing Test. It will then be able to appear invisibly within any digital communications, with or without machine-generated markers. And the matter of appearance without actual sentient or reliable substance comes into play. NLP communications will likely just seamlessly migrate into our communications streams, all of them. They won’t just be deepfakes; they will be ordinary and mundane fakes: chatbots, support technicians, call center respondents and corporate digital workforces. Soon all high-touch interactions will be non-human, no longer dependent on constructed question-and-answer keyword scripts.

“Some may ask, ‘Where’s the harm in that? These machines could provide better support than humans and they don’t sleep or require a paycheck and health benefits.’

“Perhaps this does belong in the benefits column. But here is where I see harm in ubiquity (along with Plato, the old outsourcing brain argument): Humans have flaws. Machines have flaws. A bad customer service rep will not scale up harms massively. A bad machine customer service protocol could scale up harms massively.

“Further, NLP machine learning happens in sophisticated and many-layered ensembles, many so complex that Explainable AI can only use other models to unpack model ensembles – humans can’t do it.

“How long does it take language and communication ubiquity to turn into out-sourced decisions? Or predictive outcomes to migrate into automated fixes with no carbon-based oversight at all?

“Take just one example: drone warfare. Yes, a lot of this depends on image processing, as well as remote monitoring capabilities. We’ve removed the human risk from the air (unmanned aircraft) but not on the ground (where the consequences can be catastrophic). Digitization means replication and mass scalability, brought to drone warfare, and the communication and decision support will have NLP components. NLP logic processing can also lead to higher levels of confidence in decisions than is warranted.

“Add into the mix the same kind of malignant or bad actors as we saw within the manipulations of a Cambridge Analytica, a corporate bad actor, or a governmental bad actor, and we can easily get to a destabilized planet on a mass scale faster than the threat (with high development costs) of nuclear war ever did.

“This I find a greater risk than more mundane risks (which are more harmful without direct bad actors), such as blockchains, cryptocurrency mining, and a destabilized carbon footprint driven by the simple greed of oligarchs who think they can outlive a climate apocalypse in their bunkers and emerge smiling into an empty planet as their personal playground.”

Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, wrote, “The single most beneficial change will be the spread of already existing internet-based services to billions of people across the world, as they gradually replace their basic phones with smart phones, and as connection speed increases over time and across space. IT services to assist farmers and businesses are the most promising in terms of economic growth, together with access to finance through mobile money technology. I also expect IT-based trade to expand to all parts of the world, especially spearheaded by Alibaba.

“The second most beneficial change I anticipate is the rapid expansion of IT-based health care, especially through phone-based and AI-based diagnostics and patient interviews. The largest benefits by far will be achieved in developing countries where access to medically-provided health-care is limited and costly. AI-based technology provided through phones could massively increase provision and improve health at a time where the population of many currently low- or middle-income countries (LMIC) is rapidly aging.

“The third most beneficial change I anticipate is in drone-based, IT-connected drone services to facilitate dispatch to wholesale and local retail outlets, and to distribute medical drugs to local health centers and collect from them samples for health-care testing. I do not expect a significant expansion of drone deliveries to individuals, except in some special cases (e.g., very isolated locations or extreme urgency in the delivery of medical drugs and samples).”

Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, said, “The most menacing change I expect is in terms of the political control of the population. Autocracies and democracies alike are increasingly using IT technology to collect data on individuals, civic organizations and firms. While this data collection is capable of delivering social and economic benefits to many (e.g., in terms of fighting organized crime, tax evasion and financial and fiscal fraud), the potential for misuse is enormous, as evidenced for instance by the social credit system put in place in China.

“Some countries – most prominently, the European Union – have sought to introduce safeguards against abuse. But without serious and persistent coordination with the United States, these efforts will ultimately fail given the dominance of U.S.-protected GAFAM (Google, Apple, Facebook, Amazon and Microsoft) in all countries except China, and to a lesser extent, Russia.

“The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law. Whether this can be done is doubtful, given that democracies themselves are responsible for developing a large share of these systems of data collection and control on their own population, as well as on that of others (e.g., politicians, journalists, civil right activists, researchers, R&D firms).

“The second most worrying change is the continued privatization of the internet at all levels: cloud, servers, underwater transcontinental lines, last-mile delivery and content. The internet was initially developed as free for all. But this will no longer be the case in 2035, and probably well before that. I do not see any solution that would be able to counterbalance this trend, short of a massive, coordinated effort among leading countries. But I doubt that this coordination will happen, given the enormous financial benefits gained from appropriating the internet, or at least large chunks of it.

“This appropriation of the internet will generate very large monopolistic gains that current anti-trust regulation is powerless to address, as shown repeatedly in U.S. courts and in EU efforts against GAFAM firms. In some countries, this appropriation will be combined with heavy state control, further reinforcing totalitarian tendencies.

“The third most worrying change is the further expansion of unbridled social media and the disappearance of curated sources of news (e.g., newsprint, radio and TV). In the past, the world has already experienced the damages caused by fake news and gossip-based information (e.g., through tabloid newspapers), but never to the extent made possible by social media. Efforts to date to moderate content on social media platforms have largely been ineffective as a result of multiple mutually reinforcing causes: the lack of coordination between competing social media platforms (e.g., Facebook, Twitter, WhatsApp, TikTok); the partisan interests of specific political parties and actors; and the technical difficulty of the task.

“These failures have been particularly disturbing in LMIC countries where moderation in local languages is largely deficient (e.g., hate speech across ethnic lines in Ethiopia; hate speech towards women in South Asia). The damage that social media is causing to most democracies is existential: by creating silos and echo chambers, social media is eroding the trust that different groups and populations feel towards each other, and this increases the likelihood of civil unrest and populist vote.

“Furthermore, social media has encouraged the victimization of individuals who do not conform to the views of other groups in a way that does not allow the accused to defend themselves. This is already provoking a massive regression in the rule of law and the rights of individuals to defend themselves against accusations. I do not see any signs suggesting a desire by GAFAM firms or by governments to address this existential problem for the rule of law.

“To summarize, the first wave of IT-technology did increase individual freedom in many ways (e.g., accessing cultural content previously requiring significant financial outlays; facilitating international communication, trade and travel; making new friends and identifying partners; and allowing isolated communities to find each other to converse and socialize). The next wave of IT-technology will be more focused on political control and on the exploitation of commercial and monopolistic advantage, thereby favoring totalitarian tendencies and the erosion of the rights of the defense and of the whole system of criminal and civil justice. I am not optimistic at this point, especially given the poor state of U.S. politics on both sides of the political spectrum.”

Maggie Jackson, award-winning journalist, social critic and author, commented, “The most critical beneficial change in digital life now on the horizon is the rise of uncertain AI.

“In the six decades of its existence, AI has been designed to achieve its objectives, however it can. The field’s over-arching mission has been to create systems that can learn how to play a game, spot a tumor, drive a car, etc., on their own as well as or better than humans can do so.

“This foundational definition of AI largely reflects a centuries-old ideal of intelligence as the realization of one’s goals. However, the field’s erratic yet increasingly impressive success in building objective-driven AI has created a widening and dangerous gap between AI and human needs. Almost invariably, an initial objective set by a designer will deviate from a human’s needs, preferences and well-being come ‘run-time.’

“Nick Bostrom’s once-seemingly laughable example of a superintelligent AI system tasked with making paper clips, which then takes over the world in pursuit of this goal, has become a plausible illustration of the unstoppability and risk of reward-centric AI. Already, the ‘alignment problem’ can be seen in social media platforms designed to bolster user time online by stoking extremist content. As AI grows more powerful, the risks of models that have a cataclysmic effect on humanity dramatically increase.

“Reimagining AI to be uncertain could literally save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. En route to achieving its goals, AI traditionally has been designed to dispatch unforeseen obstacles, such as something in its path. But what AI visionary Stuart Russell calls ‘human-compatible AI’ is instead designed to be uncertain about its goals, and so to be open and adaptable to multiple possible scenarios.

“An uncertain model or robot will ask a human how it should fetch coffee or show multiple possible candidate peptides for creating a new antibiotic, instead of pursuing the single best option befitting its initial marching orders.

“The movement to make AI uncertain is just gaining ground and is still largely experimental. It remains to be seen whether tech behemoths will pick up on this radical change. But I believe this shift is gaining traction, and none too soon. Uncertain AI is the most heartening trend in technology that I have seen in a quarter-century of writing about the field.”

Maggie Jackson, award-winning journalist, social critic and author, said, “One of the most menacing, if not the most menacing, changes likely to occur in digital life in the next decade is a deepening complacency about technology. If first and foremost we cannot retain a clear-eyed, thoughtful and constant skepticism about these tools, we cannot create or choose technologies that help us flourish, attain wisdom and forge mutual social understanding. Ultimately, complacent attitudes toward digital tools blind us to the actual power that we do have to shape our futures in a tech-centric era.

“My concerns are three-part: First, as technology becomes embedded in daily life, it typically is less explicitly considered and less seen, just as we hardly give a thought to electric light. The recent Pew report on concerns about the increasing use of AI in daily life shows that 46 percent of Americans have equal parts excitement and concern over this trend, and 40 percent are more concerned than excited. But only 30 percent could fully and correctly identify where AI is being used, and nearly half think they do not regularly interact with AI, a level of apartness that is implausible given the ubiquity of smart phones and of AI itself. AI, in a nutshell, is not fully seen. As well, it’s alarming that the most vulnerable members of society – people who are less well-educated, have lower incomes and/or are elderly – demonstrate the least awareness of AI’s presence in daily life and show the least concern about this trend.

“Second, mounting evidence shows that the use of technology itself easily can lead to habits of thought that breed intellectual complacency. Not only do we spend less time adding to our memory stores in a high-tech era, but ‘using the internet may disrupt the natural functioning of memory,’ according to researcher Benjamin Storm. Memory-making is less activated, data is decontextualized and devices erode time for rest and sleep, further disrupting memory processing. As well, device use nurtures the assumption that we can know at a glance. After even a brief online search, information seekers tend to think they know more than they actually do, even when they have learned nothing from a search, studies show. Despite its dramatic benefits, technology therefore can seed a cycle of enchantment, gullibility and hubris that then produces more dependence on technology.

“Finally, the market-driven nature of technology today muffles any concerns that are shown about devices. Consider the case of robot caregivers. Although a majority of Americans and people in EU countries say they would not want to use robot care for themselves or family members, such robots increasingly are sold on the market with little training, caveats or even safety features. Until recently, older people were not consulted in the design and production of robot caregivers built for seniors. Given the highly opaque, tone-deaf and isolationist nature of big-tech social media and AI companies, I am concerned that whatever skepticism that people may have for technology may be ignored by its makers.”

Mark Surman, president of the Mozilla Foundation, commented, “My biggest prediction is that people will get fed up. Fed up with the constant barrage of always on. The nudging. The selling. The treadmill. Companies that see this coming – and that can build tech products that help people turn down the volume and disconnect while staying connected – will win the day. Clever, humane use of AI will be a key part of this.”

Mark Surman, president of the Mozilla Foundation, commented, “The most harmful thing I can think of isn’t a change as much as a trend: The ability for us to disconnect will increasingly disappear. We’re building more and more reasons to be always on and instantly responsive into our jobs, our social lives, our public spaces, our everything. The combination of immersive technologies and social pressure will make this worse. Opting out isn’t an option. Or, if it is, the social and economic consequences are severe. The result: we’re more anxious, tired and (emotionally) disconnected. Our ability to touch, to rest, to choose and to be human will continue to erode.”

Beth Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, wrote, “One of the most significant and positive changes expected to occur by 2035 is the increasing integration of artificial intelligence (AI) into various aspects of our lives, including our institutions of governance and our democracy.

“With 100 million people trying ChatGPT – a type of artificial intelligence (AI) that uses data from the Internet to spit out well-crafted, human-like responses to questions – between Christmas and Mardi Gras 2023 (by contrast, it took the telephone 75 years to reach that level of adoption), we have squarely entered the AI age and are rapidly advancing along the S-curve toward widespread adoption. Much more than ChatGPT, AI comprises a remarkable basket of data-processing technologies that make it easier to generate ideas and information, summarize and translate text and speech, spot patterns and find structure in large amounts of data, simplify complex processes, and coordinate collective action and engagement. When put to good use, these features create new possibilities for how we govern and, above all, how we can participate in our democracy.

“One area in which AI has the potential to make a significant impact is in participatory democracy, that system of government in which citizens are actively involved in the decision-making process.

  • “The right AI could help to increase citizen engagement and participation. With the help of AI-powered chatbots, residents could easily access information about important issues, provide feedback, and participate in decision-making processes. We are already witnessing the use of AI to make community deliberation more efficient to manage at scale.
  • “The right AI could help to improve the quality of decision-making. AI can analyze large amounts of data and identify patterns that humans may not be able to detect. This can help policymakers and participating residents make more informed decisions based on real-time, high quality data. With the right data, AI can also help to predict the outcome of different policy choices and provide recommendations on the best course of action. AI is already being used to make expertise more searchable. Using large scale data sources, it is becoming easier to find people with useful expertise and match them to opportunities to participate in governance. These techniques, if adopted, could help to ensure more evidence-based decisions.
  • “The right AI could help to make governance more equitable and effective. New text generation tools make it faster and easier to ‘translate’ legalese into plain English but also other languages, portending new opportunities to simplify interaction between residents and their governments and increase the uptake of benefits to which people are entitled.
  • “The right AI could help to reduce bias and discrimination. AI can analyze data without being influenced by personal biases or prejudices. This can help to identify areas of inequality and discrimination, which can be addressed through policy changes. For example, AI can help to identify disparities in healthcare outcomes based on race or gender and provide recommendations for addressing these disparities.
  • “Finally, AI could help us design the novel, participatory and agile systems of participatory governance that we need to regulate AI. We all know that traditional forms of legislation and regulation are too slow and rigid to respond to fast-changing technology. Instead, we need to invent new institutions for responding to the challenges of AI, and that’s why it is paramount to invest in reimagining democracy using AI.

“But all of this depends upon mitigating significant risks and designing AI that is purpose-built to improve and reimagine our democratic institutions.”

Beth Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, commented, “One of the most concerning changes that could occur by 2035 is the increased use of artificial intelligence (AI) to bolster authoritarianism. With the rise of populist authoritarians and the susceptibility of more people to such authoritarianism as a result of widening economic inequality, fear of climate change and as a result of misinformation, there is a risk of digital technologies being abused to the detriment of democracy.

  • “AI-powered surveillance systems could be used by authoritarian governments to monitor and track the activities of citizens. This could include facial recognition technology, social media monitoring, and analysis of internet activity. Such systems could be used to identify and suppress dissenting voices, intimidate opposition figures, and quell protests.
  • “AI could be used to create and disseminate propaganda and disinformation. We’ve already seen how bots have been responsible for propagating misinformation during COVID and election cycles. Manipulation could involve the use of deepfakes, chatbots and other AI-powered tools to manipulate public opinion and suppress dissent. Deepfakes, which are manipulated videos or images such as https://this-person-does-not-exist.com, illustrate the potential for spreading disinformation and manipulating public opinion. Deepfakes have the potential to undermine trust in information and institutions and create chaos and confusion. Authoritarian regimes could use these tools to spread false information and discredit opposition figures, journalists and human rights activists.
  • “AI-powered predictive policing tools could be used by authoritarian regimes to target specific populations for arrest and detention. These tools use data analytics to predict where and when crimes are likely to occur and who is likely to commit them. In the wrong hands, these tools could be used to target ethnic or religious minorities, political dissidents, and other vulnerable groups.
  • “AI-powered social credit systems are already in use in China and could be adopted by other authoritarian regimes. These systems use data analytics to score individuals based on their behavior and can be used to reward or punish citizens based on their social credit score. Such systems could be used to enforce loyalty to the government and suppress dissent.
  • “AI-powered weapons and military systems could be used to enhance the power of authoritarian regimes. Autonomous weapons systems could be used to target opposition figures or suppress protests. AI-powered cyberattacks could be used to disrupt critical infrastructure or target dissidents.

“It is important to ensure that AI is developed and used in a responsible and ethical manner, and that its potential to be used to bolster authoritarianism is addressed proactively.”

Leiska Evanson, a Caribbean-based futurist and consultant, commented, “The most beneficial change that digital technology is likely to manifest before 2035 is the same as that offered earlier by radio and television – increased learning opportunities for people, including and especially those in more remote locations.

“In the past decade alone, we have implemented stronger satellite and wireless/mobile Internet, distributed renewable energy connections and microgrids, as well as robust cloud offerings that can bolster flagging, inexpensive equipment (e.g., old laptops and cheaper Chromebooks). With this, wonderful websites such as YouTube, edX, Coursera, Udemy and MIT OpenCourseWare have allowed even more people to have access to quality learning opportunities once they can connect to the Internet.

“With this, persons who, for various reasons, may be bound to their locations can continue to expand their minds beyond physical and monetary limitations. Indeed, the COVID-19 pandemic has shown that the Internet is vital as a repository and enabler of knowledge acquisition. With more credentialing bodies embracing various methods to ensure quality of education (anti-cheat technologies and temporary remote surveillance), people everywhere will be able to gain globally recognised education from secondary and tertiary institutions.”

Leiska Evanson, a Caribbean-based futurist and consultant, said, “Colonialist languages have beaten down and eradicated local languages and this continues unabated with the Internet. Programming languages are almost all in American English. Non-Latin languages are barely represented. Non-European African and American languages will be extinct by 2035, and even European sublanguages are suffering. Until we translate scripting and programming languages to allow something other than English, human language and thought will be constrained into fewer dimensions.”

Richard L. Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, said, “Among the best and most beneficial changes in digital life that I expect are likely to occur by 2035 are the following advances, listed by category:

“Human-centered development of digital tools and systems that safely advance human progress will include:

  • High-end technology to compensate for vision, hearing and voice loss
  • Software that empowers new levels of human creativity in the arts, music, literature, etc., while simultaneously allowing those creators to benefit financially from their own work
  • Software that empowers local experimentation with new governance regimes, institutional forms and processes, and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.

“Improvement of social and political interactions will include:

  • Software that actually delivers on the early promise of connectivity to buttress and enable wide and egalitarian participation in democratic governance, electoral accountability and voter mobilization, and that holds elected authorities and authoritarian demagogues accountable to common people
  • Software able to empower dynamic institutions that answer to people’s values and needs rather than (only) institutional self-interest

“Human rights-abetting good outcomes for citizens will include:

  • Systematic and secure ways for everyday citizens to document and publicize human rights abuses by government authorities, private militias and other non-state actors.

“Advancement of human knowledge (verifying, updating, safely archiving and elevating the best of it) will include:

  • Knowledge systems with algorithms and governance processes that empower people, simultaneously capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom’ and of subjecting such knowledge to democratic critique and discussion; i.e., a true ‘democratic public arena’ that is digitally mediated.

“Helping people be safer, healthier and happier:

  • True networked health systems in which multiple providers across a broad range of roles, as well as health consumers/patients, can ‘see’ all relevant data and records simultaneously, with expert interpretive assistance available and full protections for patient privacy built in
  • Social networks built to sustain human thriving via mutual deliberation and shared reflection regarding personal and social choices.”

Richard L. Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, wrote, “Among the most harmful or menacing changes in digital life that I expect are likely to occur by 2035 are the following, listed by category:

“Human-centered development of digital tools and systems:

  • Integration of human persons into digitized software worlds to a degree that de-centers human moral and ethical reflection, subjecting that realm of human judgment and critical thought to the imperatives of the digital universe (and its associated profit-seeking, power-seeking or fantasy-dwelling behaviors)

“Human connections, governance and institutions:

  • The replacement of actual in-person human interaction (in keeping with our status as evolved social animals) with mediated digital interaction that satisfies immediate pleasures and desires without actual human social life with all its complexity.

“Human rights:

  • Overwhelming capacity of authoritarian governments to monitor and punish advocacy for human rights; overwhelming capacity of private corporations to monitor and punish labor activism.

“Human knowledge:

  • Knowledge systems that continue to exploit human vulnerability to group think in its most anti-social and anti-institutional modes, driving subcultures toward extremes that tear societies apart and undermine democracies. Outcome: empowered authoritarians and eventual historical loss of democracy.

“Human health and well-being:

  • Social networks that continue to hyper-isolate individuals into atomistic settings, then recruit them into networks of resentment and anti-social views and action that express the nihilism of that atomized world.”

Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, wrote, “Some of the most beneficial changes in digital technology and human use of digital systems may surface through impacts on health and well-being, education and the knowledge economy, and consumer technology and recreation. I anticipate more moderate positive impacts in areas like energy and environment, transportation, manufacturing and finance, and have only modest optimism around areas like democratic governance, human rights and social and political cohesion.

“In the next decade, the prospects for advancing human well-being, inclusive of physical health, mental health and other associated aspects of life satisfaction and flourishing, seem substantial. The potential of techniques like deep learning to predict the structure of proteins, identify candidates for vaccine development and diagnose diseases based on imaging data has already been demonstrated. The upsides for humans of maturing these processes and enacting them robustly in our health infrastructure are profound. Even the use of virtual agents or chatbots to expand access to medical, pharmaceutical and mental health advice (carefully designed and controlled) could be deeply beneficial, especially for those who have historically lacked access. These and other tools in digital health, such as new medical devices, wearable technologies for health monitoring and yet-undiscovered innovations focused on digital well-being, could be amongst the most important impacts from digital technologies in the near future.

“We might also anticipate meaningful advances in our educational ecosystem and broader knowledge economy that owe much to digital technology. While the uptake of tools like intelligent tutoring systems (AI in education) has been modest so far in the 21st century, in the next decade, primary, secondary and postsecondary educational institutions may have the time to explore and realize some of the most promising innovations. Tools like MOOCs, which suffered a reputational setback in part because of the associated hype cycle, will have had ample time to mature along with the growing array of online/digital-first graduate programs, and we should also see success for emerging pedagogical tools like AR- or VR-based platforms that deliver novel learning experiences. Teachers, ed tech companies, policymakers and researchers may find that the 2030s provide the time for robust experimentation, testing and ‘survival of the fittest’ for digital innovations that can benefit students of all ages.

“Yet some of the greatest benefits may come outside of the formal educational ecosystem; it has become clear that tools like large language models are likely to substantially reform how individuals search for, access, synthesize and even produce information. Thanks to improved user interfaces and user-centered design along with AI, increased computing power, and increased internet access, we may see widespread benefits in terms of convenience, time saved and the informal spread of useful practices. A more convenient and accessible knowledge ecosystem powered by virtual assistants, large language models and mobile technology could, for example, lead to easy spreading of best practices in agriculture, personal finance, cooking, interpersonal relationships and countless other areas.

“Further, consumer technologies focused on entertainment and recreation seem likely to impact human life positively in the 2030s. We might expect to see continued proliferation of short- and long-form video content on existing and yet-unnamed platforms, heightened capabilities to produce high-quality television and movies, advanced graphics in individual and social video games, and VR and AR experiences ranging from music to travel to shopping. Moreover, this content is likely to increase in quantity, quality and diversity, reaching individuals of different ages, backgrounds and regions, especially if the costs of production are decreased (for example, by generative AI techniques) and access expanded by advanced internet and networking technologies. The prospects for individuals to produce, share and consume all manner of content for entertainment and other forms of enrichment seem likely to have a major impact on the daily experiences of humans.

“There are too many other areas where we should expect positive benefits from digital technology to list here, many in the form of basic and applied computational advances leading to commercialized and sector-specific tools. Some of the most promising include advances in transportation infrastructure, autonomous vehicles, battery technology, energy distribution, clean energy, sustainable and efficient materials, better financial and healthcare recommendations and so on. All of these could have tangible positive impacts on human life and would owe much (but certainly not all) of this to digital technology.

“Perhaps on a more cautionary note, I find it less likely that these advances will be driven through changes in human behavior, institutional practices and other norms per se. For example, the use of digital tools to enhance democratic governance is exciting, and certain countries are leading here, but these practices depend on under-resourced and brittle human institutions to enact them, and on a broader public (not always digitally literate) to adapt to them.

“Thus, I find it unlikely we will have an international ‘renaissance’ in digital citizen participation, socioeconomic equity or human rights resulting from digital advances, though new capabilities for citizen service request fulfillment, voting access or government transparency would all be welcome. For similar reasons, while some of the largest companies have already made great progress in reshaping human experience via thoughtful human-centered design practices, with meaningful impact given their scale, spreading this across other companies and regions would seem to require significant human expertise, resources and changes in education and norms.

“Reaching a new paradigm of human culture, so to speak, may take more than a decade or two. Even so, relatively modest improvements driven by humans in data and privacy culture, social media hygiene and management of misinformation and toxic content can go a long way.

“Instead then, I feel that many of these positive benefits will arrive due to ‘the technologies themselves’ (crassly speaking, since the process of innovation is deeply socio-technical) rather than because of human-first changes in how we approach digital life. For example, I feel that many of the total benefits of advances in digital life will result from the ‘mere’ scaling of access to digital tools, through cheaper energy, increased Internet access, cheaper computers and phones, and so on. Bringing hundreds of millions or billions of people into deeper engagement with the plethora of digital tools may be the single most important change in digital life in the next decades.”

Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, said, “Some of the more concerning impacts in digital life in the next decade could include techno-authoritarian abuses of human rights, continued social and political fracturing augmented by technology and mis/disinformation, missteps in social AI and social robotics, and calcification of subpar governance regimes that preclude greater paradigm shifts in human digital life. As often occurs with emerging technology, we may see innovations introduced without sufficient testing and consideration, leading to scandals and harms, as well as more intentional abuses by hostile actors.

“Perhaps the most menacing manifestation of harmful technology would be the realization of hyper-effective surveillance regimes by state actors in authoritarian countries, with associated tools also shared with other countries by state actors and unscrupulous firms. It’s already clear that immense human data production coupled with biometrics and video surveillance can create environments that severely hobble basic human freedoms. Even more worrisome is that the sophistication of digital technologies could lead techno-authoritarian regimes to be so effective that they cripple prospects for public feedback, resistance, protest and change altogether. Pillars of societal change like in-person and digital assembly, the sharing of ideas inside and outside of borders, and institutions of higher education serving as hubs of reform could disappear in the worst case. To the extent that nefarious regimes are able to track and predict dissident ideas and individuals, deeply manipulate information flow and even generate new forms of targeted persuasive disinformation and instill fear, some corners of the world could be locked into particularly horrific status quos. Even less successful efforts here are likely to harm basic human freedoms and rights, including those of political, gender, religious and ethnic minorities.

“Another fear imagined throughout successive historical waves of technology is dehumanization and the dissolution of social life through technology (e.g., radio, television, the Internet). This time, these fears do not feel unfounded, as we have watched the collapsing trust in news media, the proliferation of misinformation and disinformation via social media platforms, and the fracturing of political groups leading to new levels of affective polarization and outgroup dehumanization in recent decades. Misinformation in text or audio-visual formats deserves a special call-out here. I might expect ongoing waves of scandal over the next years as various realistic generative capabilities become democratized, imagined harms become realized (in fraud, politics, violence) and news cycles try to make sense of these changes. The next scandal or disaster owing to misinformation seems just around the corner, and many such harms are likely happening that we are not aware of.

“There are other reasons to expect digital technology to become more individualized and vivid. Algorithmic recommendations are likely to become more accurate (however accuracy is defined), and increased data, potentially including biometric, physiological, synthetic and even genomic data, may feature in these systems. Meanwhile, bigger screens, clever user experience design, and VR and AR technologies could make these informational inputs feel all the more real and pressing. Pessimistically speaking, this means that communities that amplify our worst impulses and prey upon our weaknesses, and individuals who preach misinformation and hate, may be more effective than ever in finding and persuading their audiences. Fortunately, there are efforts to combat these trends in current and emerging areas of digital life, but several decades into the Internet age, we have not yet gotten ahead of bad actors and sometimes surprising negative emergent and feedback effects. We might expect a continuation of some of the negative trends enabled by digital technology already in the 21st century, with new surprises to boot.

“The power of social technologies like virtual assistants and large language models has also started to become clear to the mass public. In the next decade, it seems likely to me that we will have reached a tipping point where social AI or embodied robots become widely used in settings like education, healthcare and elderly care. Benefits aside, these tools will still be new and their ethical implications are only starting to be understood. Empirical research, best practices and regulation will need to play catch-up. If these tools are rolled out too quickly, the potential to harm vulnerable populations is greater. Our excitement here may be greater than our foresight.

“And unfortunately, more technology and innovation seem poised to exacerbate inequality (on some important measures) under our current economic system. Even as we progress, many will remain behind. This might be especially true if AI causes acceleration effects, granting additional power to the biggest corporations due to network/data effects, and if international actors do not work tirelessly to ensure that benefits are distributed rather than monopolized. One unfortunate tendency is for rights and other beneficial protections to lag in low-income countries; an unscrupulous corporation may be banned from selling an unsafe digital product or using misleading marketing in one country and decide that another, unprotected market exists in a lower-income corner of the world. The same trends hold for misinformation and content moderation, for digital surveillance and for unethical labor practices used to prop up digital innovation. What does the periphery look like in the AI era? To prevent some of the most malicious aspects of digital change, we must have a global lens.

“Finally, I fear that the optimists of the age may not find the most creative and beneficial reforms take hold. Regulatory efforts that aim to center human rights and well-being may fall somewhat to the banalities of trade negotiations and the power of big technology companies. Companies may become better at ethical design, but also better at marketing it, and it remains unclear how much the public knows whether a digital tool and its designer are ethical or trustworthy. It seems true that there is historically high attention to issues like privacy, cybersecurity, digital misinformation, deepfakes, algorithmic bias and so on.

“Yet even for areas where experts have identified best practices for years or decades, economic and political systems are slow to change and incentives and timelines remain deeply unaligned to well-being. Elections continue to be run poorly, products continue to be dangerous and actors continue to find workarounds to minimize the impact of governance reforms on their bottom line. In the next decade, I would hope to see several major international reforms take hold, such as privacy reforms like GDPR maturing in their implementation and enforcement, and perhaps laws like the EU AI Act start to have a similar impact. Overall, however, we do not seem poised for a revolution in digital life. We may have to content ourselves with the hard work required for slow iteration and evolution instead.”

Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “By 2035 two trends will be evident, which we can characterize as the best and worst of digital life. Neither, though, is unadulterated. The best will contain elements of a toxic underside, and the worst will have its beneficial upside.

  • The best: everything we need will be available online.
  • The worst: everything about us will be known; nothing about us will be secret.

“By 2035, these will only be trends, that is, we won’t have reached the ultimate state, and there will be a great deal of discussion and debate about both sides.

“The Best: As we began to see during the pandemic, the digital economy is much more robust than people expect. Within a few months, services emerged to support office work, deliver food and groceries, take classes and sit for exams, perform medical interventions, provide advice and counselling, shop for clothing and hardware, and more, all online, all supported by a generally robust and reliable delivery infrastructure.

“Looking past the current rebound effect, we can see some of the longer-term trends emerge: work-from-home, online learning and development, digital delivery services, and more along the same lines. We’re seeing a longer-term decline in the service industry as people choose both to live and work at home, or at least, more locally. Outdoor recreation and special events still attract us, but low-quality crowded indoor work and leisure leave us cold.

“The downside is that this online world is reserved, especially at first, for those who can afford it. Though improving, access to goods and services is still difficult in rural and less-developed areas. It requires stable accommodations and robust Internet access. These in turn demand a set of skills that will be out of reach for older people and those with perceptual or learning challenges. Even when they can access digital services, some people will be isolated and vulnerable; children, especially, must be protected from mistreatment and abuse.”

Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “The Worst: We will have no secrets. Every transaction we conduct will be recorded and discoverable. Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with artificial intelligence recognizing us through our physical characteristics, habits and patterns of behaviour. The primary purpose of this surveillance will be for marketing, but it will also be used for law enforcement, political campaigns, and in some cases, repression and discrimination.

“Surveillance will be greatly assisted by automation. A police officer, for example, used to have to call in for a report on a license plate. Now a camera scans every plate that passes within view and a computer checks every single one of them. Registration and insurance documentation is no longer required; the system already knows and can alert the officer to expired plates or outstanding warrants. Facial recognition can accomplish the same for people walking through public places. Beyond the cameras, GPS tracking follows us as we move about, while every single purchase is recorded somewhere.
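
The automated plate-check loop Downes describes can be sketched in a few lines. Everything here is illustrative: the database names (`REGISTRY`, `WARRANTS`), their contents and the `check_plate` function are invented for this sketch, not a real law-enforcement API.

```python
from datetime import date

# Invented example databases for this sketch.
REGISTRY = {"ABC123": date(2022, 6, 30)}  # plate -> registration expiry date
WARRANTS = {"XYZ789"}                     # plates linked to outstanding warrants

def check_plate(plate: str, today: date) -> list[str]:
    """Return the alerts an officer would receive for one scanned plate."""
    alerts = []
    expiry = REGISTRY.get(plate)
    if expiry is not None and expiry < today:
        alerts.append("expired registration")
    if plate in WARRANTS:
        alerts.append("outstanding warrant")
    return alerts

# Every plate the camera sees is checked automatically; no call-in required.
for plate in ["ABC123", "XYZ789", "NEW0001"]:
    print(plate, check_plate(plate, date(2023, 1, 15)))
```

The point of the sketch is scale: what was once one manual query per stop becomes an unconditional check on every vehicle in view.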

“The greatest risk of total surveillance is an unwelcome, and often unjust, differentiation in the treatment of individuals. People who need something more, for example, may be charged higher prices; we already see this in insurance, where differential treatment is described as assessment of risk. Parents with children may be charged more for milk than unmarried men. The prices of hotel rooms and airline tickets are already differentiated by location and search history and could vary in the future based on income and recent purchases. People with disadvantages or facing discrimination may be denied access to services altogether, as digital redlining expands to become a normal business practice.
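
The pricing logic Downes warns about can be made concrete with a minimal sketch. The function, profile keys and markup factors below are all hypothetical, invented only to show how the same base price can diverge per customer profile; no real retailer's rules are represented.

```python
def personalized_price(base: float, profile: dict) -> float:
    """Return the price shown to one specific customer (illustrative only)."""
    price = base
    if profile.get("has_children"):      # e.g., milk priced higher for parents
        price *= 1.10
    if profile.get("recent_searches"):   # search-history-based markup
        price *= 1.15
    return round(price, 2)

# The same item, three different buyers, three different prices:
print(personalized_price(100.0, {}))                      # 100.0
print(personalized_price(100.0, {"has_children": True}))  # 110.0
print(personalized_price(100.0, {"recent_searches": True}))  # 115.0
```

What makes this pernicious, as the passage notes, is that no single buyer ever sees the other branches of the function.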

“What makes this trend pernicious is that none of it is visible to most observers. Not everybody will be under total surveillance; the rich and the powerful will be exempted, as will most large corporations and government activities. Without open data regulations or sunshine laws, nobody will be able to detect when people have been treated inequitably, unfairly or unjustly.

“And this is where we begin to see the beginnings of an upside. The same system that surveils us can help keep us safe. If child predators are tracked, for example, we can be alerted to their presence near our children. Financial transactions will be legitimate and legal or won’t exist (except in cash). We will be able to press an SOS button to get assistance wherever we are. Our cars will detect and report an accident before we know we were in one. Ships and aircraft will no longer simply disappear. But none of this happens without openness and laws to protect individuals, and it will lag well behind the development of the surveillance system itself.

“On Balance: Both the best and the worst of our digital future are two sides of the same digital coin, and this coin consists of the question: whom will digital technology serve? There are many possible answers. It may be that it serves only the Kochs, Zuckerbergs and Musks of the world, in which case the employment of digital technology will be largely indifferent to our individual needs and suffering. It may be that it serves the needs of only one political faction or state, in which basic needs may be met, provided we do not disrupt the status quo. It may be that it provides strong individual protections, leaving no recourse for those who are less able or less powerful. Or it may serve the interests of the community as a whole, finding a balance between needs and ability, providing each of us with enough agency to manage our own lives, but not to the detriment of others.

“Technology alone won’t decide this future. It defines what’s possible. But what we do is up to us.”

Wendy Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, commented, “For the moment, it seems clear that the giants that have dominated the technology sector since around 2010 are losing ground as advertisers respond to social and financial pressures, as well as regulatory activity and antitrust actions. This is a *good* thing, as it opens up possibilities for new approaches that don’t depend on constant, privacy-invasive surveillance of Internet users.

“With any luck, that change in approach should spill over into the physical world to create smart devices that serve us rather than the companies that make them. A good example at the moment is smart speakers, whose business models are failing. Amazon is finding that consumers don’t want to use Alexa to execute purchases; Google is cutting back the division that makes Google Home.

“Similarly, the ongoing relentless succession of cyberattacks on user data might lead businesses and governments to recognize that large pools of data are a liability, and to adopt structures that put us in control of our own data and allow us to decide whom to share it with. In the UK, Mydex and other providers of personal data stores have long been pursuing this approach.

“I would like to think that by 2035 we will not still be fighting over whether citizens should be allowed to use strong encryption, even if it’s inconvenient for law enforcement. This dispute is already 30 years old!

“I think the machine learning approach to artificial intelligence (which I like to call ‘aspirational intelligence’) will soon hit its limits, but by 2035 we will still be finding new ways to use what we have.

“I do not think that by 2035 we will have an ‘artificial general intelligence’ or that we will have passed the ‘singularity’ beloved by Ray Kurzweil. This is a *good* thing.

“Many of the other items in your list are more dependent on what governments get elected and what policies they pursue than they are on what technology gets developed or how and to whom it is deployed. I’m thinking particularly of human rights, human-centered development, and human health and well-being.”

Wendy Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, said, “Many of the biggest concerns about life until 2035 are not specific to the technology sector: the impact of climate change and the disruption and migration it is already beginning to bring; continued inequality and the likely increase in old-age poverty as Generation Rent reaches retirement age without the means to secure housing; the ongoing overall ill-health (cardiovascular disease, diabetes, dementia) that is and will be part of the legacy of the SARS-CoV-2 pandemic. These are sweeping problems that will affect all countries, and while technology may help ameliorate the effects it can’t stop them. Many people never recovered from the 2008 financial crisis (see the movie ‘Nomadland’); the same will be true for those worst affected by the pandemic.

“In the short term, the 2023 explosion of new COVID cases expected in China will derail parts of the technology industry; there may be long-lasting effects.

“I am particularly concerned about the increasing dependence on systems that require electrical power to work in all aspects of life. We rarely think in terms of providing alternative systems that we can turn to when the main ones go down. I’m thinking particularly of those pushing to get rid of cash in favor of electronic payments of all types, but there are other examples.

“If allowed to continue, the reckless adoption of new technology by government, law enforcement and private companies without public debate or consent will create a truly dangerous state. I’m thinking in particular of live facial recognition, which just a few weeks ago was used by MSG Entertainment to locate and remove lawyers attending concerts and shows at its venues because said lawyers happened to work for firms that are involved in litigation against MSG. (The lawyers themselves were not involved.) This way lies truly disturbing and highly personalized discrimination. Even more dangerous, the San Francisco Police Department has proposed to the city council that it should be allowed to deploy robots with the ability to maim and kill humans – only for use in the most serious situations, of course.

“Airports provide a good guide to the worst of what our world could become. In a piece I wrote in October 2022, I outlined what the airports of the future, being built today without notice or discussion, will be like: all-surveillance, all the time, with little option to ask questions or seek redress for errors. Airports – and the Disney parks – provide a close look at how ‘smart cities’ are likely to develop.

“I would like to hope that decentralized sites and technologies like Mastodon, Discord and others will change the dominant paradigm for the better – but the history of cooperatives tends to show that there will always be a few big players. Email provides a good example. While it is still true that anyone can run an email server, it is no longer true that they can do so as an equal player in the ecosystem. Instead, it is increasingly difficult for a small server to get its connections accepted by the tiny handful of big players. Accordingly, the most likely outcome for Mastodon will be a small handful of giant instances, and a long, long tail of small ones that find it increasingly difficult to function. The new giants created in these federated systems will still find it hard to charge or sell ads. They will have to build their business models on ancillary services for which the social media function provides lock-in, just as today Gmail profits Google nothing, but it underpins people’s use of its ad-supported search engine, maps, Android phones, etc. This provides Google with a social graph it can use in its advertising business.

“By 2035, today’s streaming services will likely have reconstituted themselves into something very like legacy TV, with ad-supported tiers (Netflix is already doing this), schedule grids and all the rest (see predictions by The Masked Scheduler, a former scheduler for the CBS network). The current situation is unsustainable; most people cannot afford the money to subscribe to dozens of streaming services or the time to figure out which services have the shows they actually want to watch. Legacy broadcasters will become streaming first; cable companies will shrink as people are driven away by costs and incessant advertising.”

Jamais Cascio, distinguished fellow at the Institute for the Future, wrote, “The benefits of digital technology in 2035 will come as little surprise for anyone following this survey. Better-contextualized and explained information; greater awareness about the global environment; clarity about surroundings that accounts for and reacts to not just one’s physical location but the ever-changing set of objects, actions and circumstances one encounters; the ability to craft ever more immersive virtual environments for entertainment and comfort; and so forth. The usual digital nirvana stuff.

“The explosion of machine-learning-based systems (like GPT or Stable Diffusion) doesn’t alter that broad trajectory much, other than that AI (for lack of a better and recognizable term) will be deeply embedded in the various physical systems behind the digital environment. The AI gives context and explanation, learning about what you already know. The AI learns what to pay attention to in your surroundings that may be of personal interest. The AI creates responsive virtual environments that remember you. (All of this would remain the likely case even if ML-type systems get replaced by an even more amazing category of AI technology, but let’s stick with what we know is here for now.)

“However, this sort of AI adds a new element to the digital cornucopia: autocomplete. Imagine a system that can take the unique and creative notes a person writes and, using what it has learned about the individual and their thoughts, turns those notes into a full-fledged written work. The human can add notes to the drafts, becoming an editor of the work that they co-write with their personalized system. The result remains unique to that person and true to their voice, but does not require that the person create every letter of the text. And it will greatly speed up the process of creation.

“What’s more, this collaboration can be flipped, with the (personalized, true-to-voice) digital system providing notes, observations, even edits to the fully human-written work. It’s likely that old folks (like me) would prefer this method, even if it remains stuck at a human-standard pace.

“Add to that the ability to take the written creation and transform it into a movie, or a game, or a painting, in a way that remains true to the voice and spirit of the original human mind. A similar system would be able to create variations on a work of music or art, transforming it into a new medium but retaining the underlying feeling.

“Computer games will find this technology system of enormous value, adding NPCs based on machine learning that can respond to whatever the player says or does, based on context and the in-game personality, not a basic script. It’s an autocomplete of the imagined world. This will be welcomed by gamers at first, but quickly become controversial when in-game characters can react appropriately when the player does something awful (but funny). I love the idea of an in-game NPC saying something like ‘hey man, not cool’ when the player says something sexist or racist.

Jamais Cascio, distinguished fellow at the Institute for the Future, asked, “Where to begin? To start with, the various benefits I describe in the first part can be flipped into something monstrous, using the exact same types of technology. Systems of decontextualization, providing raw data — which may or may not be true — without explanation or with incomplete or biased explanations. Context-less streams of info about how the world is falling apart without any explanation of what changes can be made. Systems of misinformation or censorship, blocking out (or falsely replacing) external information that may run counter to what the system (its designers and/or its seller) wants you to see. Immersive virtual environments that exist solely to distract you or sell you things.

“And, to quote Philip J. Fry on ‘Futurama,’ ‘My god, it’s full of ads.’”

“Machine learning-based ‘autocomplete’ technologies that help expand upon a person’s creative work could easily be used to steer a creator away from or towards particular ideas or subjects. The system doesn’t want you to write about atheism or paint a nude, so the elaborations and variations it offers up push the creator away from bad themes. This is especially likely if the machine learning AI tools come from organizations with strong opinions and a wealth of intellectual property to learn from. Disney. The Catholic Church. The government of China. The government of Iran. Any government, really. Even that mom-and-pop discount snacks and apps store on the corner has its own agenda.

“What’s especially irritating is that nearly all of this is already here in nascent form. Even the ‘autocomplete’ censorship can be seen: both GPT-3 and Midjourney (and likely nearly all of the other machine learning tools open to the public) currently put limits on what they can discuss or show. All with good reason, of course, but the snowball has started rolling. And whether or not the digital art theft/plagiarism problem will be resolved by 2035 is left as an exercise for the reader.

“The intersection of machine learning AI and privacy is especially disturbing, as there is enormous potential for the invasion of not just the information about a person, but of what the person believes or thinks, based on the mass collection of that person’s written or recorded statements. This would almost certainly be used primarily for advertising: learning not just what a person needs, but what weird little things they want. We currently worry about the (supposedly false) possibility that our phones are listening to us talk to create better ads; imagine what it’s like to have our devices seemingly listening to our thoughts for the same reason.

“It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more-extreme version of the present or an unfunny parody. Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles. Gatekeeping the visual commons is inevitably a part of any kind of persistent augmented reality world, with people having to pay extra to see certain clothing designs or architecture. Demoralizing deepfakes of public figures, not porn but showing them what they could have done right if they were better people.

“Advisors on our shoulders (in our glasses or jewelry, more likely) that whisper advice to us about what we should and should not say or do. Not Devils and Angels, but officials and industry.

“Now I’m depressed.”

Charalambos Tsekeris, vice president of the Hellenic National Commission for Bioethics and Technoethics, commented, “By 2035, digital tools and systems will be developed in a human-centered way guided by human design abilities and ingenuity. Human ability, regulatory frames and the soft pressure of civil society will help address the serious ethical, legal and social issues resulting from new forms of agency and privacy. All in all, collective intelligence, combined with digital literacy, is increasingly cultivating responsibility and shaping our environments (analog or digital) to make them safer and AI-friendly.

“Advancing futures-thinking and foresight analysis will substantially facilitate understanding and preparedness. It will also empower digital users to be more knowledgeable and reflexive upon their rights and the nature and dynamics of the new virtual worlds.

“The power of ethics by design will ultimately orientate internet-enabled technology toward updating the quality of human relations and democracy, also protecting digital cohesion, trust and truth from the dynamics of misinformation and fake news.

“In addition, digital assistants and coordination tools will support transparency and accountability, informational self-determination and participation. An inclusive digital agenda will help all users benefit from the fruits of the digital revolution. In particular, innovation in the sphere of AI, clouds and big data will create additional social value and will help to support people in need.

“In sum, the best and most beneficial change will pertain to a significant increase in digital human, social and institutional capital, toward a happy marriage between digital capitalism and democracy.”

Charalambos Tsekeris, vice president of the Hellenic National Commission for Bioethics and Technoethics, responded, “By 2035, digital tools and systems will not be able to efficiently and effectively fight social divisions and exclusions, as well as the lack of accountability, transparency and consensus in decision-making. In particular, digital technology systems will continue to function in a shortsighted and unethical way so that humanity will face unsustainable inequalities and overconcentration of technoeconomic power. Indeed, new digital inequalities will pose serious, alarming threats and existential risks to human civilization.

“These risks will be significantly increased and put humanity in danger, in combination with environmental degradation and the overcomplexification of digital connectivity and the global system. No agreed ethical and regulatory frameworks will be found to fix social media algorithms, so that the vicious circle between collective blindness, populism and polarization will be dramatically reinforced. In addition, the fragmentation of the internet world will continue (splinternet), thus resulting in more geopolitical tensions, less international cooperation and less global peace.

“Overall, the dominant surveillance-for-profit model will continue to prevail by 2035, leading to further loss of privacy, deconsolidation of global democracy and the expansion of cyberfeudalism and data oligarchy. Also, the exponential speed and overcomplexity of datafication and digitalization in general will diminish the human capacity for critical reflection, futures thinking, information accuracy and fact-checking.

“The overwhelming processes of automation and personalization of information will intensify feelings of loneliness among atomized individuals and further disrupt the domains of mental health and well-being. By 2035, the ongoing algorithmization and platformization of markets and services will exercise more pressure on working and social rights, further worsening exploitation, injustice, labor conditions and labor relations. Ghost workers and contract breaching will dramatically proliferate.”

Lee Warren McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “First, I’d like to comment on human-centered development of digital tools and systems – safely advancing human progress in these systems. By 2035, digital tools and systems will have eliminated the edge. Nowhere will digital resources be unavailable, except by non-ambient design. The grassroots could be digitalized by 2035, empowering the 37% of the world still largely off the grid in 2023. With ‘worst case scenario survival as a service’ widely available, human safety will progress.

“Most will assume I am referencing LEO or microsatellite systems, which is correct, in part. Infrastructureless wireless or cyber-physical infrastructure can span any distance already in 2023. Still, that is just a piece of a wider shared cognitive cyber-physical (IoT) technology, energy, connectivity, security, privacy, ethics, rights, governance and trust virtual services bundle. Decentralized communities will be adapting these digital, partially tokenized assets to their own needs and sustainable development goals (to speak UN), through to 2035.

“Everyone has been talking about connecting the unconnected and the next billion, and efforts are progressing with the ITU, Internet Society, many more UN and civil society organizations, and governments, addressing this huge challenge to our global community. I foresee self-help, self-organized, adaptive – Cloud to (previously known as) Edge community Internet operators solving the last 400 meters or thousand feet problem. Everywhere. They are digitally transforming themselves and are the new community services providers.

“The market effects of edge bandwidth management innovations, radically lower edge device and bandwidth costs through community traffic aggregation, and fantastically higher access to digital services will be significant enough to measurably raise GDP in nations undertaking their own initiatives to digitalize the grassroots, beyond the current reach of telecommunications infrastructure. At the community level, the effect of these initiatives is immediately transformative for the youth of participating communities.

“What I am saying is human well-being and sustainable development can be better in 2035. Supported by shared cognitive computing software and services at the edge, or perhaps a digital twin of the village, and operating to custom, decentralized design parameters decided by that community. The effects will significantly raise incomes of rural residents worldwide. It will not eliminate the Digital Divide, but it will transform it.

“How do I know? Because we are already underway with the Africa Community Internet Program, launched by the UN Economic Commission for Africa in cooperation with the African Union, in 2022. Ongoing pilot projects are educating governments and other Internet community multi-stakeholders, about what is possible.

“Of course, the key part of my prediction I only now mention, which is the ‘Africa Community Internet inter-ministry and -parliamentary Task Force, Advisory Group, and dynamic coalition alliance.’ ACITAG is coordinated by ACIP and will attract supremely talented and motivated people (like those who read Pew Surveys) and organizations worldwide to contribute and coordinate for their own national, regional, local and community needs. And they will be motivated to synchronize with continent-scale actors such as the African Union, UN agencies and businesses, for economies of scale, and with the ITU, IEEE, ICANN, and Internet Society and many more for technical scalability. Latin American and Asian communities, as well as regions of ALL nations, will benefit from elimination of the edge. By 2035. Led by Africans digitally transforming their own communities.

“Secondly, I’d like to comment on the topic of human connections, governance and institutions – improving social and political interactions.

“Trust in ‘zero trust’ environments is at a premium and relies on sophisticated mechanisms in 2035. Certified Ethical AI Developers are the new Silicon Valley elite priesthood, as they are the well-paid orchestrators of machines learning and cognitive (way beyond smart : ) communities. And they are certified to BE ethical in code and by design. Of course, liability insurance disputes delayed progress, but by 2035 the practice and profession of Certified Ethical AI Developers will have cleaned up many biased by poor design legacy systems. And they will have begun to lead others towards this approach, which combines both improved multi-dimensional security, but also privacy, ethics and rights-awareness by design into adaptive complex systems.

“The effects are especially noticeable in community RFP procurement processes, which virally adopt language requiring review by a certified ethical AI developer and their AI tools shortly after their first use, even just for submission of a bid. With this once-impossible goal now achievable only through certification, many developers and others in and around the technical community suddenly have a new interest in introductory-level philosophy courses, also raising demand for computer science – philosophy double majors through the roof. Data scientists will work for and report to them.

“Of course, just having a certification process for ethical AI developers does not automatically make firms’ business practices more ethical. It serves as a market signal that sloppy Silicon Valley practices also run risks, including loss of market share. Standing alongside all the statements of ethical AI principles, certified ethical AI developers will be 2035’s reality 5D TV stars, vanquishing bad and evil AI systems.

“By 2035 many people, knowing that if principles are not practiced, they have no effect, will insist that they will not use or buy anything if it does not come with a certified ethical AI developer’s assurance that someone at least tried to make the system safe for humans. And cities will not buy anything that has not been reviewed at the least, by an ethical AI developer and their trusted ethical AI white and red hat digital twins.”

Lee Warren McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “I have concerns over human-centered development of digital tools and systems falling short of advocates’ goals. Good, bad and evil AI will threaten societies, undermine social cohesion, spark suicides and domestic and global conflict, and undermine human well-being. Just as profit-motivated actors, nation-states and billionaire oligarchs have weaponized advocacy for guns over people, leading to skyrocketing murder rates and a shorter lifespan in the United States, similar groups, along with groups that manipulate machine learning and neural network systems, are arising under the influence of AI.

“They already have. To define terms, good AI is ethical and good by evidence-based design. Bad AI is ill-formed either by ignorance and human error or bad design. In 2035 evil AI could be a good AI or a bad AI gone bad due to a security compromise or malicious actor; or could be bad-to-the-bone evil AI created intentionally to disrupt communities, crash systems and foster murders and death.

  • The manufacturers of disinformation, both private sector and government information warfare campaign managers, will all be using a variety of ChatGPT-gone-bad-like tools to infect societal discourse, systems and communities.
  • The manipulated media and surveillance systems will be integrated to infect communities as a wholesale, on-demand service.
  • Custom evil AI services will be preferred by stalkers and rapists.
  • Mafia-like protection rackets will grow to pay off potential AI attackers as a cost of doing only modestly bad business.
  • Both retail and wholesale market growth for evil AI will have compound effects, with both cyber-physical mass-casualty events and more psychologically damaging, unfair-and-unbalanced, artificially intelligent evil digital twins perfectly attuned to personalize evil effects on the infected (that is, the artificially influenced), driving them to go bad. Evil robotic process automation will be a growth industry through to 2035, to improve scalability.”

Avi Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, said, “XR: By 2035, we have all-day wearable glasses that can do both AR and VR. The question is what do we use them for? No longer needing screens, smartphones have shrunk down to the size of keychains, if we still remember those (most doors unlock based on our digital ID). The primary use of XR is communications, bringing photorealistic holograms of other people to us, wherever we are. Those other participants also experience their own augmented spaces without us having to share our 3D environments.

“The upside of this is that we’re more connected, albeit mostly asynchronously. It would be impossible for us to be constantly connected to everyone in every situation, so we developed social protocols like we did with texting, allowing us to pop into and out of each other’s lives without interrupting. The experience is a lot like having a whole team of people at your back, ready to whisper ideas in your ears based on snippets of real life you choose to share.

“AI: The current wave of generative AI has taught us that the best AI is made of people, both providing our creative output and also filtering the results to be acceptable by people. By 2035, the business models will have shifted to rewarding those creators and value-adders such that the result looks more like a corporation today. We’ll contribute, get paid for our work, and the AI-as-corporation produces an unlimited quantity of new value from the combination for everyone else. It’s as if we cracked the ultimate code for how people can work efficiently together — extract their knowledge and ideas and let the cloud combine these in milliseconds. Still, we can’t forget the human inputs or it’s just another race to the bottom.

“The flip side of this is that what we today might call ‘recommendation AI’ merges with the above to form a kind of super intelligence that can find the most contextually appropriate content both virtually and IRL. That tech forms a kind of personal firewall that keeps our personal context private but allows us to securely gather the best inputs the world can offer, without giving away our privacy.

“Metaverse: By 2035, the word Metaverse is now as popular as Cyberspace and Information Superhighway became over time. The companies prefixing their name by ‘meta’ are all kind of boring now. However, given the XR and AI trends above, we can now think of the Metaverse equivalent as the information space we all inhabit.

“The main shift by 2035 is we don’t care about it as a space, but as a massive inter-connection among 10 billion people. The AR tech and AI fade into the background and we see other people as valued creators and consumers of each other’s work, supporters of each other’s lives and social needs.”

Avi Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, commented, “Each of the previous technologies goes to its worst outcome quickly, if the technologies are built for the benefits of companies that monetize their customers. XR becomes exploitive and not socially beneficial. AI builds empires on the backs of real people’s work and deprives them of a living wage as a result. The Metaverse becomes a vast and insipid landscape of exploitive opportunities for companies to mine us for information and wealth, while we become enslaved to psychological countermeasures, designed to keep us trapped and subservient to our digital overlords. The key difference between the most positive and negative uses of these three related technologies is whether the systems are designed to help and empower people or exploit them.”

Marjory S. Blumenthal, senior adjunct policy researcher at RAND Corporation, responded, “In a little over a decade, it is reasonable to expect two kinds of progress, in particular: First are improvements in the user experience, especially for people with various impairments (visual, auditory, tactile, cognitive). A lot is said about diversity, equity, and inclusion that focuses broadly on factors like income and education, but to benefit from digital technology requires an ability to use it that today remains elusive for many people for physiological reasons. Globally, populations are aging, a process that often confronts people with impairments they didn’t use to have (and of course many experience impairments from birth onward).

“Second, and notwithstanding concerns about concentration in many digital-tech markets, more indigenous technology is likely, at least to serve local markets and cultures. In some cases, indigenous tech will take advantage of indigenous data, which technological progress will make easier to amass and use, and more generally it will leverage a wider variety of talent, especially in the Global South, plus motivations to satisfy a wider variety of needs and preferences (including, but not limited to, support for human rights).”

Marjory S. Blumenthal, senior adjunct policy researcher at RAND Corporation, said, “There are two areas where technology seems to get ahead of people’s ability to deal with it, either as individuals or through governance. One is the information environment – for the last few years people have been coming to grips with manipulated information and its uses, and it has been easier for people to avoid the marketplace of ideas by sticking with channels that suit narrow points of view.

“Commentators lament the decline in trust of public institutions and speculate about a new normal that questions everything to a degree that is counterproductive. Although technical and policy mechanisms are being explored to contend with these circumstances, the underlying technologies and commercial imperatives seem to drive innovation that continues to outpace responses. For example, the ability to detect tends to lag the ability to generate realistic but false images and sound, although both are advancing.

“At a time when there has been a flowering of principles and ethics surrounding computing, new systems like ChatGPT with a high cool factor are introduced without any apparent thought to second- and third-order effects of using them – thoughtfulness takes time and risks loss of leadership. The resulting distraction and confusion likely will benefit the mischievous more than the rest of us – recognizing that crime and sex have long impelled uses of new technology.

“The second is safety. Decades of experience with digital technology have shown our limitations in dealing with cybersecurity, and the rise of embedded and increasingly automated technology introduces new risks to physical safety even as some of those technologies (e.g., automated vehicles) are touted as long-term improvers of safety.

“Responses are likely to evolve on a sector-by-sector basis, which might make it hard to appreciate interactions among different kinds of technology in different contexts. Although progress on the safety of individual technologies will occur over the next decade, the cumulation of interacting technologies will add complexity that will challenge understanding and response.”

Louis Rosenberg, CEO and chief scientist at Unanimous AI, predicted, “As I look ahead to the year 2035, it’s clear to me that certain digital technologies will have an oversized impact on the human condition, affecting each of us as individuals and all of us as a society. These technologies will almost certainly include artificial intelligence, immersive media (VR and AR), robotics (service and humanoid robots), and powerful advancements in human-computer interaction (HCI) technologies. At the same time, blockchain technologies will continue to advance, likely enabling us to have persistent identity and transferrable assets across our digital lives, supporting many of the coming changes in AI, VR, AR and HCI.

“So, what are the BEST and MOST BENEFICIAL changes that are likely to occur?

“As a technologist who has worked on VR, AR, AI and HCI for over 30 years, I believe these disciplines are about to undergo a revolution, driving a fundamental shift in how we interact with digital systems. For the last 60 years or so, the interface between humans and our digital lives has been through keyboards, mice and touchscreens to provide input and the display of flat media (text, images, videos) as output. By 2035, this will no longer be the dominant model. Our primary means of input will be through natural dialog enabled by conversational AI and our primary means of output will be rapidly transitioning to immersive experiences enabled through mixed reality eyewear that brings compelling virtual content into our physical surroundings.

“I look at this as a fundamental shift from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ That’s because by 2035, human interface technologies (both input and output) will finally allow us to interact with digital systems the way our brains evolved to engage our world – through natural experiences in our immediate surroundings (mixed reality) and through natural human language (conversational AI).

“As a result, by 2035 and beyond, the digital world will become a magical layer that is seamlessly merged with our physical world. And when that happens, we will look back at the days when people engaged their digital lives by poking their fingers at little screens in their hands as quaint and primitive. We will realize that digital content should be all around us and should be as easy to interact with as our physical surroundings. At the same time, many physical artifacts (like service robots, humanoid robots and self-driving cars) will come alive as digital assets that we engage through verbal dialog and manual gestures. As a consequence, by the end of the 2030s the differences will largely disappear in our minds between what is physical and what is digital.”

Louis Rosenberg, CEO and chief scientist at Unanimous AI, said, “I strongly believe that by 2035 our society will be transitioning from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ This transition will move us away from traditional forms of digital content (text, images, video) that we engage today with mice, keyboards, and touchscreens to a new age of immersive media (virtual and augmented reality) that we will engage mostly through conversational dialog and natural physical interactions.

“While this will empower us to interact with digital systems as intuitively as we interact with the physical world, there are many significant dangers this transition will bring. For example, the merger of the digital world and the physical world will mean that large platforms will be able to track all aspects of our daily lives – where we are, who we are with, what we look at, even what we pick up off store shelves. They will also track our facial expressions, vocal inflections, manual gestures, posture, gait and mannerisms (which will be used to infer our emotions throughout our daily lives). In other words, by 2035 the blurring of the boundaries between the physical and digital worlds will mean (unless restricted through regulation) that large technology platforms will know everything we do and say during our daily lives and will monitor how we feel during thousands of interactions we have each day.

“This is dangerous and it’s only half the problem. The other half of the problem is that conversational AI systems will be able to influence us through natural language. Unless strictly regulated, targeted influence campaigns will be enacted through conversational agents that have a persuasive agenda. These conversational agents could engage us through virtual avatars (virtual spokespeople) or through physical humanoid robots. Either way, when digital systems engage us through interactive dialog, they could be used as extremely persuasive tools for driving influence. For specific examples, I point you to a white paper, “From Marketing to Mind Control,” written in 2022 for the Future of Marketing Institute, and to the 2022 IEEE paper “Marketing in the Metaverse and the Need for Consumer Protections.”

Catriona Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, said, “I have great hopes for the development of digital technologies and their effect on humans by 2035. The most important changes that I believe will occur that are the best and most beneficial include the following:

1. Transhumanism: Benefit – improved human condition and health

  • The development of software and hardware that humans will embed in their bodies to overcome current-day problems
  • AI-driven, 3D-printed, fully-customised prosthetics
  • Brain extensions – brain chips that are connected to other digital interfaces and project brain, thought or dream activity in a useful way for the participant
  • Nanotechnologies that may be ingested or otherwise enter the human body, providing health and other benefits

2. Metaverse technologies: Benefit – improved accessibility to experiences – widespread and affordable access for citizens to:

  • Virtual, augmented and mixed reality platforms for entertainment. This may include access to concerts, the arts or other digitally based entertainment
  • Virtual travel experiences – this may include virtual tours of digital-twin replicas of physical-world sites
  • Virtual education providers, including schools, secondary and tertiary institutions and other learning opportunities
  • Virtual health care, including virtual consultations with doctors and allied health professionals, and remote surgery
  • Augmented reality-based apprenticeships for trades and other technical roles, where the apprentice may work remotely on a digital twin of a car or building, for example

3. New financial models: Benefit – more secure and more decentralised finances

  • The emergence of decentralised financial services – sitting on blockchain – adding ease, security and simplicity to finances
  • The use of NFTs and other digital assets as a medium of currency, value and exchange

4. Autonomous machines: Benefit – human efficiency and safety

  • The widespread adoption of autonomous vehicles
  • The widespread adoption of autonomous appliances

5. AI-driven information: Benefit – access to knowledge, efficiency, potential to move human thinking to higher level once AI does more mundane information-based tasks

  • Widespread adoption of AI-based technologies such as generative AI, leading to a rethink of how the education, content development and marketing industries are constructed
  • Widespread acceptance of AI-based art – such as digital paintings, images or music

6. Psychedelic bio-technology: Benefit – healing and expanded consciousness

  • The Psychedelic Renaissance will be reflected in the proliferation of psychedelic bio-tech companies looking to solve human mental health problems and expand consciousness

7. AI-driven climate change: Benefit – improved climate conditions

  • A core focus of AI will be to drive rapid progress in mitigating climate change.

Catriona Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, wrote, “In my estimation, the most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems are:

1) Warfare: Harm – human use of AI driven technologies to maim or kill humans

2) Crime: Harm – increased crime due to difficulties in policing within new technology platforms; removal of state, and national boundaries and jurisdictions for crime

3) Organised terrorism: Harm – new platforms for organised crime or terrorism to re-form; mass manipulation of populations or segments towards an enemy

4) Fraud: Harm – new financial models and platforms provide further opportunities for crime such as fraud

5) Identity theft: Harm – new platforms create difficulties in establishing identity and open opportunities for identity-related crimes

6) Division between the digital and non-digital populations: Harm – a split in human society between those who are digitally oriented and those who are not. This may further exacerbate the divide between the ‘haves’ and ‘have-nots’

7) Mass unemployment from automation of jobs: Harm – AI replaces the jobs of a percentage of the population and those people are on a Universal Basic Income

8) Society’s biases hard-coded into machines: Harm – the gender and minority employment makeup in tech jobs continues and existing societal biases are coded into technology platforms; datasets continue to underrepresent women and other minorities, and this results in discriminatory outcomes from advanced tech such as AI

9) Increased mental and physical health issues: Harm – the coming of advanced tech such as VR, AR and the metaverse results in increased levels of mental and physical health conditions among humans

10) Challenges in legal jurisdictions: Harm – lack of state, national and international boundaries in platforms such as the metaverse result in legal issues and challenges

11) High-tech impact on the environment: Harm – the use of advanced technology and related carbon emissions has a negative impact on the environment”

Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, said, “I see the future as a ‘sliding doors’ world. It can go awfully wrong or incredibly well. I don’t see a half-good, half-bad outcome as possible. This answer is based on the idea that we go through the right door, and in 2035 we will have embraced human-centered development of digital tools and systems and human connections, governance and institutions.

“In 2035 we shall have myriad locally and culturally based apps run by communities. People will participate and contribute actively because they know that their data will be used to build a better future. The public interest will be the morning star of all these initiatives, and local administrations will run the interface between these applications and the services needed by the community and by each citizen: health, public transportation and schooling systems.

“Locally produced energy and locally produced food will be delivered via common infrastructures that are interlinked, with energy networks tightly linked to communication networks. The global climate will come to have commonly accepted protection structures (including communications). Solidarity will be in place because insurance and social costs will otherwise become unaffordable. The changes in agricultural systems arriving with advances in AI and ICTs will be particularly important. They will finally resolve the dichotomy between metropolis and countryside. The possibility to work from anywhere will redefine metropolitan areas and increase migration to places with better services and more vibrant communities. This will attract the best minds.

“New applications of AI and technological innovation in health and medicine could bring new solutions for disabled people and relief for those who suffer from diseases. The problem will be ensuring these are fully accessible to all people, not only to those who can afford them. We need to think in parallel to find scalable solutions that could be extended to all citizens of a country and made available to people in least-developed countries. Why invest so much in developing a population of supercentenarians in privileged countries when the rest of the world still struggles to survive? Is such a contradiction tenable?

“Then there is the future of work and of wealth redistribution. Perhaps the most important question to ask between now and 2035 is, ‘What will be the future of work?’ Recent developments in AI foreshadow a world in which many current jobs could easily be replaced or at least reshaped completely, even in the intellectual sphere. What robots did to manual work in factories, GPT and Sparrow can now do to intellectual work. If this happens, if well-paid jobs disappear in large quantities, how will those who are displaced survive? How will communities survive as they also face an aging population? Between now and 2035, politicians will need to face these seemingly distant questions, which are likely to become burning issues.”

Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “In the worst scenario, if we go through the wrong sliding door, I expect the worst consequences in the area of human connections, governance and institutions. If the power of internet platforms is not regulated by law and by antitrust measures, and if global internet governance is not fixed, we will face serious risks to democracies.

“Until now we have seen the effects of algorithms on big Western democracies (U.S., UK, EU) where a balance of powers exists and – despite these counter powers – we have seen the damages that can be provoked. In coming years, we shall see the use of the same techniques in democratic countries where the balance of power is less shared. Brazil, in this sense, has been a laboratory and will provide bad ideas to the rest of the world.

“With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe (a contraction of the French words for ‘democracy’ and ‘dictatorship’). In countries that are already non-democratic, AI and a distorted use of digital technologies could enable mass control of societies much more efficiently than the old communist regimes achieved.

“As Mark Zuckerberg innocently once said, in the social media world, there is no need for spying – people spontaneously surrender private information for nothing. As Julian Assange wrote, if democratic governments fall into the temptation to use data for mass control, then everyone’s future is in danger. There is another area (apparently less relevant to the destiny of the world) where my concerns are very high, and that is the integrity of knowledge. I’m very sensitive to this issue because, as a journalist, I’ve worked all my life in search of the truth to share with my co-citizens. I am also a fanatic movie-lover and I have always been concerned about the preservation of the masterworks of the past. Unfortunately, I think that in both areas between now and 2035 some very bad moves could happen in the wrong direction thanks to technological innovation being used for bad purposes.

“In the field of news, we have a growing attitude to look not for the truth but for news that people would be interested in reading, hearing or seeing – news that better corresponds with the public’s moods, beliefs or belonging.

“I also expect that revered and even beloved entertainment created in the past is going to be lost, twisted and manipulated. Soon AI will allow each of us to change, for instance, favorite movies’ endings. Look at ‘Death in Venice’ by Luchino Visconti (based on Thomas Mann’s book), in which the old homosexual professor died without being able to realize his platonic love dream with the young Tadzio. The story is sad, but AI could soon easily change the ending, showing the professor flying away with Tadzio and crowning his love dream somewhere in Venice.

“The same could happen for Steven Spielberg’s first movie ‘Duel,’ where the killer truck driver could succeed and eliminate the young car driver played by Dennis Weaver, or to Jack Nicholson in ‘The Shining,’ who would finally be allowed to exterminate the whole family he’s stalking. We are moving slowly in that direction (for examples of altered endings look at ‘Kaleidoscope’ or ‘Bandersnatch’). Alterations of classic works create a setting in which there is no more shared history, shared culture, even shared storytelling. Are you ready to accept it? Personally, I’m not.

“In 2024 we shall know whether the UN Summit of the Future will be a success or a failure, and whether the full regulation process of the internet platforms launched by the European Union will prove to be successful. These are the most serious attempts to date to reconcile the potential of the internet with respect for human rights and democratic principles. Their success or failure will tell us whether we are moving toward the right ‘sliding door’ or the wrong one.”

Jonathan Grudin, affiliate professor of information science at the University of Washington, previously a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “Addressing unintended consequences is a primary goal. Many beneficial changes are possible, but the best that is very likely is that we will address many of the unanticipated negatives tied at least in part to digital technology that emerged and grew in impact over the past decade: malware, invasion of privacy, political manipulation, economic manipulation, declining mental health and growing wealth disparity.

“The once small, homogeneous, trusting tech community, after recovering from the internet bubble, was ill-equipped to deal with the challenges arising from anonymous bad actors and well-intentioned but imperceptive actors who operated at unimagined scale and velocity. Causes and effects are now being understood. It won’t be easy or an endeavor that will ever truly be finished, but technologists working with legislators and regulators are likely to make substantial progress.”

Jonathan Grudin, affiliate professor of information science at the University of Washington, previously a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “I foresee a loss of human control. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice army of feverishly acting brooms, with no sorcerer around to stop them. Digital technology enables us to act at a scale and speed that outpace the human ability to assess and correct course.

“We see it around us already. Political leaders unable to govern. CEOs at Facebook, Twitter and elsewhere unable to understand how technologies that were intended to unite led to nasty divisiveness and mental health issues. Google and Amazon forced to moderate content at such a scale that often only algorithms can do it, and humans can’t trace individual cases to correct possible errors. Consumers who can be reliably manipulated by powerful machine-learning targeting into buying things they don’t need and can’t afford. It is early days. Little that would prevent this from accelerating is on the horizon.

“We will also see an escalation in digital weapons, military spending and arms races. Trillions of dollars, euros, yuan, rubles and pounds are spent, and tens of thousands of engineers deployed, not to combat climate change but to build weaponry that the military may not even want. The United States is spending billions on an AI-driven jet fighter, despite the fact that jet fighter combat has been almost non-existent for decades with no revival on the horizon.

“Unfortunately, the Ukraine war has exacerbated this tragedy. I believe leaders of major countries have to drop rivalries and address much more important existential threats. That isn’t happening. The cost of a capable armed drone has fallen an order of magnitude every few years. Setting aside military uses, long before 2035 people will be able to buy a cheap drone at a toy store, clip on facial recognition software and a small explosive or poison and send it off to a specified address. No need for a gun permit. I hope someone sees how to combat this.”

Dmitri Williams, professor of technology and society at the University of Southern California, wrote, “When I think about the last 30 years of change in our lives due to technology, what stands out to me is the rise in convenience and the decline of traditional face-to-face settings. From entertainment to social gatherings, we’ve been given the opportunity to have things cheaper, faster and higher-quality in our private spaces, and we’ve largely taken it.

“For example, 30 years ago, you couldn’t have a very good movie-watching experience in your own home, looking at a small CRT tube and standard definition, and what you could watch wasn’t the latest and greatest. So, you took a hit to convenience and went to the movie theater, giving up personal space and privacy for the benefits of better technology, better content and a more community experience. Today, that’s flipped. We can be on our couches and watch amazing content, with amazing screens and sounds and never have to get in a car.

“That’s a microcosm of just about every aspect of our lives – everything is easier now, from work over high-speed connections to playing video games. We can do it all from our homes. That’s an amazing reduction in costs and friction in our business and private lives. And the social side of that is access to an amazing breadth of people and ideas. Without moving from our couch, chair or bed, we can connect with others all over the world from a wide range of backgrounds, cultures and interests.

“Ironically, though, we feel disconnected, and I think that’s because we evolved as physical creatures who thrive in the presence of others. We atrophy without that physical presence. We have an innate need to connect, and the in-person piece is deeply tied to our natures. As we move physically more and more away from each other – or focus on far-off content even when physically present – our well-being suffers. I can’t think of anything more depressing than seeing a group of young friends together but looking at their phones rather than each other’s faces. Watching well-being trends over time, even before the pandemic, suggests an epidemic of loneliness.

“As we look ahead, those trends are going to continue. The technology is getting faster, cheaper and higher quality, and the entertainment and business industries are delivering us better and better content and tools. AI and blockchain technologies will keep pushing that trend forward.

“The part that I’m optimistic about is best seen by the nascent rise of commercial-level AR and VR. I think VR is niche and will continue to be, not because of its technological limitations, but because it doesn’t socially connect us well. Humans like eye contact, and a thing on your face prevents it. No one is going to want to live in a physically closed off metaverse. It’s just not how we’re wired. The feeling of presence is extremely limited, and the technical advances in the next 10 years are likely to make the devices better and more comfortable, but not change that basic dynamic.

“In contrast, the potential for AR and other mixed reality devices is much more exciting because of its potential for social interactions. Whereas all of these technical advances have tended to push us physically away from each other, AR has the potential to help us re-engage. It offers a layer on top of the physical space that we’ve largely abandoned, and so it will also give us more of an incentive to be face-to-face again. I believe this will have some negative consequences around attention, privacy and capitalism invading our lives just that much more, but overall, it will be a net positive for our social lives in the long run. People are always the most interesting form of content, and layering technologies have the potential to empower new forms of connection around interests.

“In cities especially, people long for the equivalent of the ice-breakers we use in our classrooms. They seek each other online based on shared interests, and we see a rise in throwback formats like board games and in-person meetups. The demand for others never abated, but we’ve been highly distracted by shiny, convenient things. People are hungry for real connection, and technologies like AR have the potential to deliver that and so to mitigate or reverse some of the well-being declines we’ve seen over the past 10-20 years. I expect AR glasses to go through some hype and disillusionment, but then to take off once commercial devices are socially acceptable and cheap enough. I expect that the initial faltering steps will take place over the next three years and then mass-market devices will start to take off and accelerate after that.

“Here’s my simple take: I think AR will tilt our heads up from our phones back to each other’s faces. It won’t all be wonderful because people are messy and capitalism tends to eat into relationships and values, but that tilt alone will be a very positive thing.”

Dmitri Williams, professor of technology and society at the University of Southern California, commented, “What I worry most about with technology is capitalism. Technology will continue to create value and save time, but the benefits and costs will fall in disproportionate ways across society.

“Everyone is rightly focused on the promise and challenges of AI at the moment. This is a conversation that will play out very differently around the world. Here in the United States, we know that business will use AI to maximize its profit and that our institutions won’t privilege workers or well-being over those profits. And so we can expect to see the benefits of AI largely accrue to corporations and their shareholders. Think of the net gain that AI could provide – we can have more output with less effort. That should be a good thing, as more goods and capital will be created and so should improve everyone’s lot in life. I think it will likely be a net positive in terms of GDP and life expectancy, but in the U.S., those gains will be minimal compared to what they could and should be.

“Last year I took a sabbatical and visited 45 countries around the world. I saw wealthy and poor nations – places where technology abounds and where it is rare. What struck me the most was the difference in values and how that plays out in promoting the well-being of everyday people. The United States is comparatively one of the worst places in the world at prioritizing well-being over economic growth and the accumulation of wealth by a minority (yes, some countries are worse still). That’s not changing any time soon, and so in that context I look at AI and ask what kind of impacts it’s likely to have in the next 10 years. It’s not pretty.

“Let’s put aside our headlines about students plagiarizing papers and think about the job displacements that are coming in every industry. When the railroads first crossed the U.S., we rightly cheered, but we also didn’t talk a lot about what happened to the people who worked for the Pony Express. Whether it’s the truck driver replaced by autonomous vehicles, the personal trainer replaced by an AI agent, or the stockbroker who’s no longer as valuable as some code, AI is going to bring creative destruction to nearly every industry. There will be a lot of losers.

“I can imagine the reactions of legislatures around the world as these facts come into focus. Here in the U.S., liberals will attempt to solve everything through some kind of job retraining and conservatives will trumpet doing nothing because the free market will solve it all. Both will be wrong and thoughtless. I expect more thoughtful places like Scandinavia, New Zealand or Singapore to confront these new changes and ask how they can best empower and support their citizens. They will be more likely to ask: How can these gains be used to improve all lives?

“We could have the future of the Jetsons and their short workdays, but I think we’re more likely to edge toward Blade Runner’s darker vision of large differences between rich and poor. Technology isn’t the cause, but it will be the means.”

Calton Pu, co-director of the center for experimental research in computer systems at Georgia Institute of Technology, wrote, “Digital life has been, and will continue to be, enriched by AI and ML (machine learning) techniques and tools. A recent example is the launch of ChatGPT, a modern chatbot (developed by OpenAI and released in 2022) that is passing the Turing Test every day. Similar to the contributions of robotics in the physical world (e.g., manufacturing), future AI/ML tools will relieve the stress (and jobs) from simple and repetitive tasks in the digital world.

“The combination of physical automation and AI/ML tools would and should lead to concrete applications such as autonomous driving, which has stalled in recent years despite massive investments (on the order of many billions of dollars). One of the major roadblocks has been the (gold-standard) ML practice of training static models/classifiers that are insensitive to evolutionary changes over time. These static models suffer from knowledge obsolescence, in a way similar to human aging. There is an incipient recognition of the limitations of the current practice of constantly retraining ML models to bypass knowledge obsolescence manually (and temporarily). Hopefully, the next generation of ML tools will overcome knowledge obsolescence in a sustainable way, achieving what humans could not: staying young forever.”

Calton Pu, co-director of the center for experimental research in computer systems at Georgia Institute of Technology, commented, “Toto, we’re not in Kansas anymore. When considering the future of digital life, we can learn a lot from the impact of robotics in the physical world. For example, Boston Dynamics pledged to ‘not weaponize’ its robots (in October 2022). This is remarkable, since the company was founded with, and worked on, defense contracts for many years before its acquisition by (primarily) non-defense companies.

“That pledge is an example of a moral dilemma about what is right or wrong, to which the technologist’s answer is usually amoral. By not taking sides, technologists avoid the dilemma and let both sides (good and evil) utilize the technology as they see fit. This amorality works quite well, since good technology always has many applications across the entire spectrum from good to evil, through large gray areas in between.

“A digital example is Microsoft’s Tay, a dynamically learning chatbot released in 2016 that started to send inflammatory and racist speech, causing its shutdown the same day. Learning from this lesson, ChatGPT uses OpenAI’s moderation API to filter out racist and sexist prompts. Hypothetically, one could imagine OpenAI making a pledge to ‘not weaponize’ ChatGPT for propaganda purposes. Regardless of such pledges, any good digital technology such as ChatGPT could be used for any purpose (e.g., generating misinformation and fake news) if it is stolen or simply released into the wild.

“The power of AI/ML tools, particularly if they become sustainable and remain amoral, will be greater for both good and evil. We have seen significant harm from misinformation about the COVID-19 pandemic, dubbed an ‘infodemic’ by the WHO. More generally, there has been significant political propaganda in every election and every war. It is easy to imagine the depth, breadth and constant renewal of such propaganda and infodemics, as well as their impact, all growing with the capabilities of future AI/ML tools used by powerful companies and governments.

“Assuming that the AI/ML technologies will advance beyond the current static models, the impact of sustainable AI/ML tools in the future digital life will be significant and fundamental, perhaps in a greater role than industrial robots have in modern manufacturing. For those who are going to use those tools to generate content and increase their influence on people, that prospect will be very exciting. However, we have to be concerned for people who are going to consume such content as part of their digital life, particularly those who will consume without thinking critically.

“The great digital divide is not going to be between the haves and have-nots of digital toys and information. With more than 6 billion smartphones in the world (as estimated in 2022), an overwhelming majority of the population already has access to and participates in the digital world. The Digital Life Divide will be between those who think critically and those who may go along with misinformation and propaganda. This is a big challenge for democracy, a system in which we thought more information would be unquestionably beneficial. In a Brave New Digital World, a majority that can be swayed by (sophisticated) propaganda and misinformation might choose wrongly, influenced by the misuse of amoral technological tools.

“In the physical world, technology may have been amoral for good reasons. For example, the nuclear power unleashed by the Manhattan Project serves both peace and war. However, it is debatable whether information technology would or should be equally amoral in digital life. Recent events at Meta (M. Zuckerberg) and Twitter (E. Musk) illustrate the complexity of the issue and its impact, as well as the social responsibility of information technologists and companies.”

W. Russell Neuman, professor of media technology at New York University, commented, “We can expect to see artificial intelligence as complementing human intelligence rather than competing with it. We tend to see AI as an independent agent, a robot, a willful and self-serving machine that represents a threat because it will soon be able to outsmart us. Why do we think that? Because we see things anthropomorphically. We are projecting ourselves onto these evolving machines.

“But these machines can be programmed to complement and augment human intelligence rather than compete with it. I call this phenomenon evolutionary intelligence, a revolution in how humans will think. It is the next stage as our human capacities co-evolve with the technologies we create. The invention of the wheel made us more mobile. Machine power made us stronger. Telecommunication gave us the capacity to communicate over great distances. Evolutionary Intelligence will make us smarter.

“We tend to think of technology as ‘out there’ – in the computer, in the smartphone, in the autonomous car. But computational intelligence is moving from our laptops and dashboards to our technologically enhanced eyes and ears. For the last century, glasses have helped us see better and hearing aids have improved our hearing. Smart glasses and smart earbuds will help us think better. Imagine an invisible Siri-like character sitting on our shoulder, witnessing what we witness and from time to time advising us, drawing on her networked collective experience. She doesn’t direct, she advises. She provides optimized options based on our explicit preferences. And given human nature, we may frequently choose to ignore her good advice no matter how graciously it is suggested.

“Think of it as compensatory intelligence. Given our history of war, criminality, inhumanity, ideological polarization and simple foolishness, one might be skeptical that Siri’s next generations would be able to make a difference in our collective survival. Much of what has plagued our existence as humans has been our distorted capacity to match means with ends.

“Unfortunately, among other things, we’ve gotten good at fooling ourselves. It turns out that the psychology of human cognitive distortions is actually quite well understood. As humans, we systematically misrepresent different types of risk, reward and probability. We can computationally correct for these biases. Will we be able to design enhanced decision processes so that demonstrably helpful and well-informed advice is not simply ignored? My book argues that our survival may depend on it.”

W. Russell Neuman, professor of media technology at New York University, wrote, “My concern about the future of the capacity for privacy in the digital future is not just that that capacity will be eroded. It probably will be because of the interests of governments and private enterprise. My concern is about a lost opportunity that our digital technologies might otherwise provide for what I like to call ‘intelligent privacy.’

“Here’s an idea. You are well aware that your personal information is a valuable commodity for the social media and online marketing giants like Google, Facebook, Amazon and Twitter. Think about the rough numbers involved – Internet advertising in the U.S. for 2022 is about $200 billion. The number of active online users is about 200 million. $200 billion divided by 200 million. So your personal information is worth about $1,000. Every year. Not bad. The idea is: Why not get a piece of the action for yourself? It’s your data. But don’t be greedy. Offer to split it with the Internet biggies 50-50. $500 for you, $500 for those guys to cover their expenses.

“Thank you very much. But the Tech Giants are not going to volunteer to initiate this sort of thing. Why would they? So there has to be a third party to intervene between you and Big Tech. There are two candidates for this – first the government, and second some new private for-profit or not-for-profit. Let’s take the government option first.

“There seems to be an increasing appetite for ‘reining in Big Tech’ on Capitol Hill. It even seems to have some bipartisan support, a rarity these days. But legislation is likely to take the form of antitrust policy to prevent competition-limiting corporate behaviors. Proactively entering the marketplace to require some form of profit sharing is well beyond current-day Congressional bravado. The closest Congress has come so far is a bill called DASHBOARD (an acronym for Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data), which would require major online players to explain to consumers and financial regulators what data they are collecting from online users and how it is being monetized. The Silicon Valley lobbyists squawked loudly, and so far the bill has gone nowhere. And all that was proposed in that case was to make some data public. Dramatic federal intervention into this marketplace is simply not in the cards.

“So what about non-governmental third parties? There are literally dozens of small for-profit startups and not-for-profits in the online privacy space. Several alternative browsers and search engines such as DuckDuckGo, Neeva and Brave offer privacy-protected browsing. But as for-profits they often end up substituting their own targeted ads (presumably without sharing information) for those you would otherwise see on a Google search or a Facebook feed.

“Brave is experimenting with rewarding users for their attention with cryptocurrency tokens called BATs for Basic Attention Tokens. This is a step in the right direction. But so far, usage is tiny, distribution is limited to affiliated players, and the crypto value bubble complicates the incentives.

“So the bottom line here is that Big Tech still controls the golden goose. These startups want to grab a piece of the action for themselves and try to attract customers with ‘privacy-protection’ marketing rhetoric and with small, tokenized incentives which are more like a frequent flyer program than real money. How would a serious piece-of-the-action system for consumers work? It would have to allow a privacy-conscious user to opt out entirely. No personal information would be extracted. There’s no profit there, so no profit sharing. So in that sense, those users ‘pay’ for the privilege of using these platforms anonymously.

“YouTube’s ad-free service for a fee is a similar arrangement. For those people open to being targeted by eager advertisers, there would be an intelligent privacy interface between users and the online players. It might function like a VPN or proxy server, but one which intelligently negotiates a price. ‘My gal spent $8,500 on online goods and services last year,’ the interface notes. ‘She’s a very promising customer. What will you bid for her attention this month?’

“Programmatic online advertising already works this way. It is all real-time algorithmic negotiations of payments for ad exposures. A Supply Side Platform gathers data about users based on their online behavior and geography and electronically offers their ‘attention’ to an Ad Exchange. At the Ad Exchange, advertisers on a Demand Side Platform have 10 milliseconds to respond to an offer. The Ad Exchange algorithmically accepts the highest high-speed bid for attention. Deal done in a flash. Tens of thousands of deals every second. It’s a $100 billion marketplace.
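The real-time bidding flow described above can be sketched as a toy sealed-bid auction. The platform roles (supply-side platform, exchange, demand-side platforms) and the 10-millisecond response window come from the passage; the bid amounts, profile fields and function names here are illustrative assumptions, not any real exchange's API:

```python
# Toy sketch of programmatic ad bidding: a supply-side platform offers a
# user's "attention" to an exchange, demand-side platforms (DSPs) bid, and
# the highest bid received within the deadline wins the ad slot.
import time

BID_DEADLINE_MS = 10  # the passage's 10-millisecond response window

def run_auction(user_profile, bidders):
    """Collect bids from DSPs and accept the highest one received in time."""
    start = time.monotonic()
    bids = []
    for name, bid_fn in bidders.items():
        bid = bid_fn(user_profile)  # each DSP prices this user's attention
        elapsed_ms = (time.monotonic() - start) * 1000
        if bid is not None and elapsed_ms <= BID_DEADLINE_MS:
            bids.append((bid, name))
    if not bids:
        return None  # no deal: the impression goes unsold
    price, winner = max(bids)  # highest bid wins
    return winner, price

# Illustrative DSP bidding strategies keyed off crude profile signals.
bidders = {
    "dsp_travel": lambda u: 2.50 if "travel" in u["interests"] else 0.10,
    "dsp_retail": lambda u: 1.75 if u["recent_spend"] > 1000 else None,
}
profile = {"interests": ["travel", "cooking"], "recent_spend": 8500}
print(run_auction(profile, bidders))  # ('dsp_travel', 2.5)
```

Real exchanges run thousands of these sealed-bid auctions per second; the sketch only shows the accept-highest-bid-before-deadline logic the respondent describes.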

“Of course, ad blocking technologies may complicate the picture when users opt to use them. It is a bit of a technical cat-and-mouse game as aggressive advertisers try to embed their ads in ways that are difficult for ad blockers to detect. But so far ad blockers mostly just block when they can. It’s like a switch: blocking is on or off. That’s not very intelligent privacy. If access to your attention is worth $1,000, let’s take a minute to think this privacy business through.

“Ad blockers don’t currently offer to negotiate a price for access. Some users may value privacy very highly and demand much more than advertisers would find practical, so no deal. Others are ambivalent or actually interested in connecting with marketers. Your algorithm talks to my algorithm. Intelligent Privacy. Now there’s an idea. Too bad that – given the commercial interests of private enterprise – it’s a real longshot.”
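The ‘intelligent privacy’ negotiation the respondent imagines could be sketched as a user-side agent holding a reserve price for the user's attention. All names, thresholds and dollar figures below are illustrative assumptions:

```python
# Sketch of "intelligent privacy": a user-side agent only releases the user's
# attention (and data) when an advertiser's bid clears the user's reserve price.

def negotiate(reserve_price, advertiser_bids):
    """Return the winning (advertiser, price) pair, or None to stay private."""
    # Consider only bids that meet the user's reserve price.
    acceptable = [(bid, adv) for adv, bid in advertiser_bids.items()
                  if bid >= reserve_price]
    if not acceptable:
        return None  # no deal: the user opts out and remains anonymous
    bid, adv = max(acceptable)  # take the best acceptable offer
    return adv, bid

# A privacy-ambivalent user asking $40/month vs. a privacy-maximalist at $500.
bids = {"ad_network_a": 55.0, "ad_network_b": 30.0}
print(negotiate(40.0, bids))   # ('ad_network_a', 55.0) -> deal struck
print(negotiate(500.0, bids))  # None -> too expensive, user stays private
```

The point of the sketch is the middle ground the respondent describes: unlike an on/off ad blocker, the reserve price lets each user set their own exchange rate between privacy and payment.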

Liza Loop, educational technology pioneer, futurist, technical author and consultant, said, “Among the hopes for humanity inspired by ongoing digital advances are:

“Human-centered development of digital tools and systems. Nature’s experiments are random, not intentional or goal-directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, saving us both mental and physical labor; this trend will result in more leisure time available for non-survival pursuits.

“Human connections, governance and institutions. We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near-real-time can be stored and retrieved to enjoy later – even after death.

“Human rights. Increased communication will not advance human ‘rights’ but it might make human ‘wrongs’ more visible so that they can be diminished.

“Human knowledge. Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe or worthy of elevation is an age-old question and not significantly changed by digitization.

“Human health and well-being. There will be huge advances in medicine, and the ability to manipulate genetics will be further developed. This will be beneficial to some segments of the population. Agricultural efficiency resulting in increased plant-based food production, as well as artificial, meat-like protein, will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.

“Education. In my humble opinion, the most beneficial outcomes of our ‘store-and-forward’ technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as middleman. Learners will be able to hail teachers and learning resources just like they call a ride service today.”

Liza Loop, educational technology pioneer, futurist, technical author and consultant, said, “The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance. Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families, and driving away or killing strangers and nonconformists.

“Although our species has come a long way toward peaceful and harmonious self-actualization, the vestiges of the old fearful behavior persist. Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?

“I see these things as likely:

“Human-centered development of digital tools and systems. They will fall short of advocates’ goals. Some would argue this is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?

“Human connections, governance and institutions. Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think ‘Christendom’ in 15th-century Europe), word travels faster and crowds are larger than they were six centuries ago. So, is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh, brought on by change toward wickedness.

“Human rights. The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.

“Human knowledge. The threat to knowledge lies in humans’ increasing dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might lull us into believing that we don’t need to stay mentally and physically fit and agile.

“Human health and well-being. In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. Humanity cannot continue to improve its health and well-being indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitat or self-limiting our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.”

Matthew James Bailey, president of AI Ethics World, commented, “My response is focused on the Ages of AI and progression of human development, whilst honoring our cultural diversity at individual and group level. In essence, how does humanity thrive in the age of ethical machines?

“It is clear that the promise and potential of AI is a phenomenon that our ancestors could not have imagined. As such, if humanity embodies an ethical foundation within the digital genetics of AI, then we will have the confidence of working with a trusted digital partner to progress the diversity of humanity beyond the inefficient systems of the status quo into new systems of abundance and thriving. This includes restoration of a balance with our environment and new economic and social systems based on new values of wealth. As such, my six main predictions for AI by 2035 are:

AI will become a digital buddy, assisting the individual as a life guide to thrive (in body, mind and spirit) and attain new personal potentials. In essence, if shepherded ethically, humanity will be liberated to explore and discover new aspects of its consciousness and abilities to create. A new human beingness, if you will.

AI will be a digital citizen, just like a human citizen – It will operate in all aspects of government, society and commerce, working towards a common goal of improving how democracy, society and commerce operates, whilst honoring and protecting the sovereignty of the individual.

AI will operate across borders. For those democracies that build an ethical foundation for AI, which transparently shows its ethical qualities, then countries can find common alignment and, as such, trust ethical AI to operate systems across borders. This will increase the efficiency of systems and freedom of movement of the individual.

The Age of Ethical AI will liberate a new age of human creation and invention. This will fast-track innovation and development of technologies and systems for humankind to move into a thriving world and find its place within the universe.

The three-world split. Ethical AI will have different progeny and ethical genetics based on the diverse worldviews of a country or region. As such, there will be different societal experiences for citizens living in different countries and regions. We see this emerging today in the United States, the EU and China. Thanks to ethical AI, a new age of transparency will encourage a transformation of the human to evolve beyond its limitations, discover new values and develop a new worldview in which the best of our humanity is aligned. As such, this could lead to a common and democratic worldview of the purpose and potential of humanity.

AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet. After all, humans are a creation from nature and as such, recognizing the importance of nurturing this relationship is viewed as fundamental. This is part of a new well-being paradigm for humanity to thrive.

“This all depends on humanity steering a new course for the Age of AI. Pragmatically understanding the development of human intelligence, and how consciousness has expressed itself in experiencing and navigating our world (worldview), reveals how that process has resulted in a diversity of societies, cultures, philosophies and spiritual traditions.

“Using this blueprint from organic intelligence enables us to apply an equivalent prescription to create an ethical artificial intelligence – ethical AI. This is a cultural-centric intelligence that caters for a depth and diversity of worldviews, authentically aligning machines with humans. The power of Ethical AI is to advance our species into trusted freedoms of unlimited potential and possibilities.

“Whilst there is much dialogue and important work attempting to apply AI ethics to AI, troublingly, there is an incumbent homogeneous and mechanistic mindset of enforcing one worldview to suit all. This brittle and Boolean miscalculation can only lead to the deletion of our diversity and a false, rather than authentic, alignment of machines with humans.

“In essence, these types of AIs prevent laying a trusted foundation for human species advancement within the age of ethical machines. Following this path results in a misstep for humankind, deleting the opportunity for the richness of human, cultural, societal and organizational ethical blueprints to be genuinely applied to the artificial. They are not ethical AI and are fundamentally opaque in nature.”

Matthew James Bailey, president of AI Ethics World, said, “The most menacing, challenging problem with the age of Ethical AI being a successful phenomenon for humanity are controlling organizations and individuals trying to impose a hard-coded common one-world view onto the human race for the age of machines, based on old values and understanding of wealth.

“Ancient systems of top-down control must be replaced with systems of distribution. We have seen this within the UK, with control and power being disseminated to parliaments in Scotland, Wales and Northern Ireland. This is also being reflected in technology with the emergence of blockchain, cryptocurrencies and edge computing. As such, communities and human groups will be empowered with sovereignty and freedom to self-govern while remaining interconnected with other communities. When we head into space, Moon or Mars colonies might be a useful trial ground for these new systems of governance.

“Furthermore, failing to recognize the agency of data and to return sovereignty over creation to the individual has resulted in our digital world having a fundamentally unethical foundation. This is a menacing issue our world is facing at the moment. Moving from contracts of adhesion within the digital world to contracts of agency will not only bridge the paradox of mistrust between the people and government and Big Tech, but it will also open up new avenues of individual and commercial commerce and liberate the personal AI – the digital buddy – phenomenon.

“Humans are a creation of the universe, with that unstoppable force embodied within our makeup. As we recognize our wonderful place (and uniqueness thus far) in the universe and work with its principles then we will become aligned with and discover our place within the beauty of creation and maybe the multiverse!

“For humanity to thrive in the age of ethical machines, we must move beyond the menacing polarities of controllers and rediscover some of Aristotle’s ethical virtues that encourage the best of our humanity to flourish. This assists us to move beyond those principles that are no longer relevant, such as the false veil of power, control and wealth.

“Embracing Aristotle’s ethical virtues would be a good start to recognize the best of our humanity, as well as the Veda texts such as ‘The world is one family,’ or Confucius’s belief that all social good comes from family ethics, or Lao Tzu proposing that humanity must be in harmony with its environment.

“However, we must recognize and honor individual and group differences. Our consciousness through human development has expressed itself with a diversity of world views. These must be honored. As they are, I suspect more common ground will be found between human groups.

“Finally, there’s the concept of transhumanism. We must recognize that consciousness (a universal intelligence) is and will remain the most prominent intelligence on Earth, not AI. As such, we must ensure that people have a choice about the degree to which they are integrated with machines. We are on the point of creating a new digital life (2029 – AI becomes self-aware); as such, let’s put the best of humanity into AI to reflect the magnificence of organic life!”

Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “I abhor predictions; instead, I shall share some hopes.

“I hope that the tools of connection will enable more and more diverse voices to at last be heard outside the hegemonic control of mass media and political power, leading to richer, more inclusive public discourse.

“I hope we begin to see past the internet’s technology as technology and understand the net as a means to connect us as humans in a more open society and to share our information and knowledge on a more equitable and secure basis for the benefit of us all.

“I hope we might finally move beyond mass media’s current moral panic over the internet as competition and, indeed, supersede the worst of mass media’s failing institutions, beginning with the notion of the mass and media’s invention of the attention economy.

“I hope that – as occurred at the birth of print – we will soon turn our attention away from the futile folly of trying to combat, control and outlaw all bad speech and instead focus our attention and resources on discovering, recommending and supporting good speech.

“I hope the tools of AI – the subject of mass media’s next moral panic – will help people intimidated by the tools of writing and research to better express their ideas and learn and create.

“I hope we will have learned the lesson taught us by Elon Musk: that placing our discourse in the hands of centralized corporations is perilous and antithetical to the architecture and aims of the net; federation at the edge is a far better model.

“I hope that regulators will support opening data for researchers to study the impact and value of the net – and will support that work with necessary resources.”

Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, said, “I fear the pincer movement from right and left, media and politics, against Section 230 and protection of freedom of expression will lead to regulation that raises liability for holding public conversation and places a chill over it, granting protection to and extending the corrupt reign of mass media and the hedge-fund-controlled news industry.”

Barry Chudakov, founder and principal at Sertain Research, wrote, “Regarding digital technology and humans’ use of digital systems, we are living in a Golden Age of human connection – in the sense that never before in human history have we been able to connect with one another at so many levels via so many devices. This has to be regarded as a beneficial change because so many more people – at least in free, democratic societies – can now have a voice (perhaps small, and, yes, the room of voices is full to overcrowding) in governance and can have a say in how institutions that play a part in their lives are run or governed.

“These devices and connections are so new that ways of improving social and political interactions are evolving. The rules of the road for Twitter, Facebook, TikTok, Snap, or the metaverse are being written and rewritten every week or month; our understanding of human connection, governance and the institutions that were built before these devices and connections were so prevalent is changing.

“To fully appreciate how human connections, governance, and social structures or institutions are affected by digitization, it is useful to step back and consider how the structures of connection, governance, and institutions evolved. They came from the alphabet and its accelerator, the printing press, which organized reality categorically, hierarchically. Digital tools operate differently. Instead of naming things and putting them into categories; instead of making pronouncements and then codifying them in texts and books that become holy; instead of dividing the world topically and then aggregating people and states according to that aggregation – digital tools create endless miscellany which creates patterns for data analysis.

“How will this new dynamic affect human connections, governance, and institutions? Since we build our governance and institutions based on the tools we use to access and manipulate reality, the newer logic of digital tools is omnidirectional, non-hierarchical, instantaneous, miscellaneous, and organized by whatever manner of organization we choose rather than the structure of, say, an alphabet, which is front to back, A to Z. Digital tools constitute the new metrics. As Charlene Li, chief research officer at PA Consulting, said of ESG (Environmental, Social and Governance):

‘The reality is that investors are looking at your company’s ESG metrics. They want to know what your climate change strategy is and if you’re measuring your carbon emissions. They’re curious if you pay your employees a fair wage, if you’re active in the community, and if you consider the health and safety of your team and your customers. They want to make sure you’re operating ethically. … How do you determine the right metrics? … You have to monitor and take action on your data points constantly. You have to measure meaningful metrics tied to a strategic objective or your organization’s overall values. … Otherwise, you’ll only tackle token measurements just so you’re doing something. And if you’re not measuring what’s meaningful or taking impactful steps, you risk never making real progress.’

“So, one of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is continuous measurement – and the concomitant obligation to figure out what constitutes meaningful measurement in all the data collected. While humans have long measured for certain tasks and obligations – cutting cloth to fit a given body, surveying land to know which parcel belongs to whom – measuring is now taking on a near-constant presence in our lives. We are measuring everything: from our steps to our calories, our breaths and heart rate and blood pressure, to how far we are from a destination on Google Earth.

“The result of all this measuring is facticity. We are unwittingly (and thankfully) moving from a vague and prejudicial assessment of what is real and what is happening, to a systematic, detailed, and data-driven understanding of what is, and what is going on – whether tracking a hurricane or determining traffic violations at a busy intersection. This flies in the face of many blind-faith traditions and the social structures and institutions those faith-based structures built to bring order to peoples’ lives. Measurement is a new order; we’re just beginning to realize the implications of that new order.

“Human rights – abetting good outcomes for citizens: The most beneficial change that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is the continuing global distribution of handheld digital devices. Ubiquitous handheld devices not only are news weathervanes, scanning the news and political environment for updates on information relevant to individuals and groups; for the first time in human history, they also give each person a voice, albeit a small one – and these devices enable crowdsourcing and virtual crowd-gathering, which can compel interest and consensus. This ability is fundamental to fighting for and garnering more equitable human rights, thereby abetting good outcomes for citizens.

“Further, these devices are highly visual: they show fashion and possessions, cosmetic procedures and dwellings, cars and bling. For the unfortunates of the world, the have-nots, these images are more than incentives; they are an unspoken goal, an unuttered desire to do better, have more, become successful – and to have a say in that success, like the people seen on Instagram and TikTok. What starts as digital envy will evolve into a demand for rights and greater participation in governance. In this measure, the most beneficial changes that are likely to occur by 2035 in regard to digital technology and humans’ use of digital systems are an ongoing leavening of human potential and rights.

“Human rights evolved from the rule of kings and queens to the rule and participation of the common man and woman. Throughout that evolution, narrative fallacies regarding classes and races of certain humans sprang up, many of which are still with us and need to be uprooted like noxious weeds in a garden: fallacies such as those which underpin racism, sexism, antisemitism, anti-Muslim bias, etc. Democracies have often touted one (wo)man, one vote; with the rise of digital technologies, we now have one device, one vote. Effectively, this empowers each individual, regardless of class or status, with some kind of agency. This is revolutionary, although few intended it to be so.

“Ostensibly these devices have tactical, practical uses: determining a stock price, getting the weather, making a call, or sending and receiving a text. But the far greater value of humans having multiple devices is the potential for us to express ourselves in enlightened and uplifting ways. (On average, U.S. households now have a total of 22 connected devices. The number of Internet of Things (IoT) devices worldwide is forecast to almost triple from 9.7 billion in 2020 to more than 29 billion in 2030.)

“Finally, among the best and most beneficial changes that are likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is the capture of human behavior by devices, which can contradict a self-serving narrative and enhance justice itself. When, for example, multiple cameras capture a police beating in Memphis or any other city, unless there is tampering with the digital record, this new evidence provides compelling testimony of how things went down.

“Time will be necessary for legacy systems to lose their sway over human behavior and public opinion. Further, we will need to oversee and create protocols for the use of devices where human behavior is involved. But make no mistake: our devices now monitor and record our behaviors in ways never before possible. This impartial assessment of what happens is a new and enlightening development, if humans can get out of their own way and create equitable use and expectations for the monitoring and recording.

“Human knowledge – verifying, updating, safely archiving and elevating the best of it: Humans are undergoing an onslaught of factfulness. Human knowledge – verifying, updating, safely archiving and elevating that knowledge – is predicated on knowing what is true and actual, which may be evolving or even change drastically based on new evidence. What is clear is that the volume of data generated by human knowledge is increasing: the total amount of data created, captured, copied and consumed globally is forecast to increase rapidly, with global data creation projected to grow to more than 180 zettabytes by 2025, according to Statista.

“One significant mechanism of all this factfulness accrues to our advancing technologies of monitoring and measuring. Knowledge finds a highly beneficial ally in these emerging technologies. We now monitor global financial markets, traffic intersections, commercial and non-commercial flights, hospital operations, military maneuvers, and a host of other real-time assessments in ways that were unthinkable a century ago, and impossible two generations ago.

“This verification process, which allows real-time updating, is an often-overlooked boon to human knowledge. Effectively, we are creating data mirrors of reality; we know what is going on in real time; we don’t have to wait for a storm to hit or a plane to land to make an assessment of a situation. We can go to Mars or go 10,000 feet below the surface of the ocean to quantify and improve our understanding of an ecosystem or a distant planet. Digitization has made this possible. Rendering our world in ones and zeros (quantum computing will likely upgrade this) has given human knowledge a boost unlike anything that came before it.

“The volume of data/information created, captured, copied, and consumed worldwide increased from 41 zettabytes in 2019 to 59 zettabytes in 2020. This figure is expected to rise to 74 zettabytes in 2021, 94 zettabytes in 2022, 118 zettabytes in 2023, and 149 zettabytes in 2024. Such a knowledge explosion has never before occurred in the history of human civilization.

“This exponential trend will continue, which means an ever-increasing pace of knowledge explosion, technological acceleration, and breakthrough innovation. In short, we are currently experiencing one of the biggest revolutions humanity has ever seen: It is a knowledge tsunami.

“The effect of monitoring human knowledge – verifying, updating, safely archiving and elevating the best of it – will be that by 2035 we will have made a dent in what I would call the tsunami retreat. That is, when there is a seemingly endless amount of information available, humans may retreat into ignorance, or make up facts (disinformation) either from sheer frustration or from a Machiavellian desire to manipulate reality to personal whim. (When there is a limited amount of information, a loud voice regarding that information may prevail; when there is an unlimited amount of information, numerous loud voices can proclaim almost anything, and their commentary gets lost in the noise.)

“By 2035 we will begin to make inroads into the ways and practices of misinformation and disinformation. Deepfakes and the manipulation of recorded reality will become a hotbed issue. In the next decade we will make progress on the process of factualization, i.e., how to approach the world factually, rather than via mysticism, hearsay, or former edict.

“From a wisdom perspective, our wars and inability to marshal resources against climate change reveal that humans are still in the Dark Ages, even though our data is increasing at dizzying rates. We’re not sure what to do with it all; we have little in place to cope with this exponential acceleration. So, no doubt, there is considerable work to do to make factualization potential a living reality. Yet by 2035 we will have seen enough of disinformation to know how it works, how it warps and distorts reality, and why this is not useful or good for humanity.

“At the same time, we will be able to fake reality – for good and for ill – and what is ‘real’ will be an issue that plagues humanity. While we will have developed disinformation protocols, and we will know what to do with lies rather than cluck our tongues and shake our heads, we will also struggle to tell the real from the unreal, the actual from the fake.

“Human health and well-being – helping people be safer, healthier, happier: Regarding human health and well-being – helping people live safer, healthier, happier lives – digital technology and humans’ use of digital systems will continue the progress of the quantified self and amplify it. New monitoring digital technologies, available to individuals as well as hospitals and medical institutions, will be responsible for revolutionizing the way we engage with health care. Self-diagnosis and AI-assisted diagnosis will change human health and well-being.

“Responsibility for our health is moving into our own hands. Literally. From monitoring the steps we take each day, to checking heart rate, blood pressure or glucose monitoring, to virtual doctor visits that alleviate the hassle of getting to a doctor’s office – the progress of digital technologies will continue to advance in 2035.

“Aside from moving monitoring devices from the doctor’s office into the hands of patients, what is most significant is this: humans are learning to think in terms of quantities and probabilities versus commandments and injunctions. Digital technologies enable a more fact-based assessment of reality. This is a huge step forward for humanity, which, prior to the digital age, was accustomed to narratives – albeit some full of wisdom – that were ossified and then taken as indisputable gospel. With the rise of digital computing, ‘what is’ becomes mutable, malleable, not fixed; uncertainty becomes a new wisdom as humans focus on what is provable and evidentiary versus what is told through assertion and pronouncement. Some examples from Dr. Bertalan Meskó, The Medical Futurist:

  • Withings just launched a miniaturized device called U-Scan that sits within a toilet bowl and can analyze urine at home. ‘More than 3,000 metabolic biomarkers can be assessed via urine, which makes it one of the gold standards of health assessment. Analyzing these can help diagnose and monitor certain diseases like diabetes, chronic kidney disease, kidney stones and urinary tract infection.’
  • MIT researchers have developed an AI model that can detect future lung cancer risk: Low-dose computed tomography (LDCT) scans are currently the most common way of finding lung cancers in their earliest stages. A new deep-learning model – Sybil – takes a personalized approach, assessing each patient’s risk based on CT scans. Sybil analyzes the LDCT image data without the assistance of a radiologist to predict the risk of a patient developing lung cancer within the next six years.
  • Stanford researchers measure thousands of molecules from a single drop of blood. Stanford Medicine researchers demonstrated that they could measure thousands of protein, fat and metabolic molecules from a single drop of blood with a finger prick. Patients can collect the blood drop at home and mail it to the lab for analysis.
  • Instead of focusing on any single protein, metabolite or inflammatory marker, the growing field of ‘omics’ research takes a broader, systems-biology approach: analyzing the whole spectrum of proteins (the proteome), fats (the lipidome) or the by-products of metabolism (the metabolome).”
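The monitoring tools listed above share a common pattern: raw at-home measurements are compared against reference ranges to produce flags a patient can act on. A minimal sketch of that pattern follows; the marker names and ranges are purely illustrative, not clinical values.

```python
# Hypothetical sketch: flagging at-home readings against reference
# ranges, in the spirit of the self-monitoring tools described above.
# All marker names and ranges are illustrative, not clinical guidance.

REFERENCE_RANGES = {
    "glucose_mg_dl": (70, 140),
    "systolic_mm_hg": (90, 120),
    "resting_hr_bpm": (60, 100),
}

def screen(readings: dict) -> dict:
    """Label each marker 'in_range', 'low', or 'high'."""
    flags = {}
    for marker, value in readings.items():
        low, high = REFERENCE_RANGES[marker]
        if value < low:
            flags[marker] = "low"
        elif value > high:
            flags[marker] = "high"
        else:
            flags[marker] = "in_range"
    return flags

print(screen({"glucose_mg_dl": 155, "systolic_mm_hg": 118, "resting_hr_bpm": 58}))
```

A real system would layer trend analysis and clinician review on top of simple thresholds, but the shift the author describes – from pronouncement to measurement – is visible even in this toy form.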

“The NIH summarized how AI will generally change healthcare: ‘The applications of AI in medicine can … be grouped into two bold promises for healthcare providers: (1) the ability to present larger amounts of interpretable information to augment clinical judgments while also (2) providing a more systematic view over data that decreases our biases.’ Decreasing bias is another way of saying we are championing facticity.

“One of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is recognition of the arrival of a digital tool meta level. We will begin to act on the burgeoning awareness of tool logic and how each tool we pick up and use has a logic designed into it.

“The important thing about becoming aware of tool logic, and then understanding it: humans follow the design logic of their tools because we are not only adopters, we are adapters. That is, we adapt our thinking and behaving to the tools we use. This will come into greater focus between now and 2035 because our technology development – like many other aspects of our lives – will continue to accelerate. With this acceleration humans will use more tools in more ways more often – robots, apps, the metaverse and omniverse, digital twins – than at any other time in human history.

“If we pay attention as we adopt and adapt, we will see that we bend our perceptions to our tools: when we use a cell phone, it changes how we drive, how we sleep, how we connect or disconnect with others, how we communicate, how we date, etc. Another way of looking at this: we have adapted our behaviors to the logic of the tool as we adopted (used) it. With an eye to pattern recognition, we may finally come to see that this is what humans do, what we have always done, from the introduction of various technologies – alphabet, camera, cinema, television, computer, internet, cell phone – to our current deployment of AI, algorithms, digital twins, mirror worlds, or omniverse.

“So, what does this mean going forward? With enough instances of designing a meta mirror of what is happening – the digital readout above the process of capturing an image with a digital camera, digital twins and mirror worlds that provide an exact replica of a product, process or environment – we will begin to notice that these technologies all have an adaptive level. At this level when we engage with the technology, we give up aspects of will, intent, focus, reaction. We can then begin to outline and observe this process in order to inform ourselves, and better arm ourselves against (if that’s what we want) adoption abdication. That is, when we adopt a tool, do we abdicate our awareness, our focus, our intentions? We can study and report on how we change and how each new advancing technology both helps us, and changes us. We can then make more informed decisions about who we are when we use said tool and adjust our behaviors if necessary.

“Central to this dynamic is the understanding that we are sharing our consciousness with our tools. They have gotten – and are getting still more – so sophisticated that they can sense what we want and can adapt to how we think; they are extensions of our cognition and intention. As we go from adapters to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.

Barry K. Chudakov, founder and principal at Sertain Research, said, “Human-centered development of digital tools and systems will continue to fall short of technology advocates’ goals until humans begin to formulate a thorough digital tool critique and analysis, leading to a full understanding of how we use and respond to digital tools and systems. We eat them. We wear them. We take them into our bodies. We claim them as our own. We are all in Stockholm Syndrome with respect to digital tools: they enthrall us and we bend to their (designed) wishes, and then we champion their cause.

“We are not only adopters of various technologies; we are adapters. We adapt to – that is, we change our thinking and behaving with – each significant technology we adopt. Technology designers don’t need to create technologies which will live inside of us (many efforts towards this end are in the works); humans already ingest technology and tools as though we were cyborgs with an endless appetite.

“There are now more cell phones on the planet than humans. From healthcare to retail, from robots in manufacturing to NVIDIA’s omniverse, humans are adopting new technologies wholesale. In many respects this is wonderful. But our use of these technologies will always fall short of advocates’ goals and the positive potential of our human destiny until we understand and teach – from kindergarten through university graduate school – how humans bend their perceptions to technology and what effects that bending has on us. This is an old story that goes back to the adoption of alphabets and the institutions the alphabet created. We need to see and understand that history before we can fully appreciate how we are responding to algorithms, AI, federated learning, quantum computing, or the metaverse.

“The most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems will happen because we have not sufficiently prepared ourselves for the new world and new assumptions inherent in emerging technologies. We have blindly adopted technologies and stumbled through how our minds and bodies reacted to that adoption. Newer and emerging technologies are much more powerful (think AI or quantum computing) and the mechanics of those technologies more esoteric and hidden.

“Our populace will be profoundly affected by these technologies. We need a broad re-education so we fully understand how they work and how they work on us. Advocates’ goals, while lofty and visionary, will not be realized if users are essentially asleep to the effects and implications of newer digital tools and technologies. Just as seat belt restraints were eventually installed in cars and governments passed laws to compel their use, we need to acknowledge that all technologies have hidden effects that are revealed over time as users engage with them. Many such effects will be (or appear to be) benign; others will radically alter collective and individual human behavior.

“Just as cloud computing was once unthought-of, and there were no cloud computing technologists, and then the demand for such technologists became apparent and grew, so too technology developers will begin to create new industry roles – for example, technology consequence trackers. Each new technology displaces a previous technology, and developers must include an understanding of that displacement in their pro forma. Remember: Data and technologies beget more data and technologies. There is a compounding effect at work in the acceleration of technology development. That is another factor to monitor, track and record.

“Human connections, governance and institutions – endangering social and political interactions: Digital technologies and digital systems change the OS, the operating system, of human existence. We are moving from alphanumeric organization to algorithms and artificial intelligence; ones and zeroes and the ubiquity of miscellany will change how we organize the world. Considering human connections, governance, and institutions, in each of those areas, digitization is a bigger change than going from horse and buggy to the automobile; a more pervasive change than land travel to air and space travel. This is a change that changes everything because soon there will hardly be any interaction, whether at your pharmacy or petitioning your congresswoman, that does not rely on digital technology to accomplish its ends.

“With that in mind, we might ask ourselves: Do we have useful insight into the grammar and operations of digital technologies and digital systems – how they work, and how they work on us? At the moment, the answer is no. By 2035 we will be more used to the prevalence of digital technologies and will have a chance to gain more wisdom about them. Today the very things we are starting to use most – AI and algorithms, federated learning and quantum computing – are the things we often know least about, and we have almost no useful transparency protocols to help us monitor and understand them.

“Verifying digital information (all information is now digital) will continue to be a sine qua non for democracies. Lies, distortions of perceptions, insistence on self-serving assessments and pronouncements, fake rationales to cover treacheries – these threaten human connections, governance and institutions as few other things do. They not only endanger social and political interactions; they fray and ultimately destroy the fabric of civilized society. For this reason, by 2035 all information will come with verification protocols that render facts trustworthy or suspect, either true or false. The current ESG (Environmental, Social and Governance) initiative is a step in this direction.
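One way to picture the verification protocols imagined here: content is cryptographically signed when published, and any later alteration fails the check. Below is a minimal sketch using a shared secret; real provenance efforts (such as C2PA) use public-key signatures and trust registries instead, and the key and claim shown are purely illustrative.

```python
# Minimal sketch of a content-verification protocol. The shared secret
# is an illustrative stand-in; production systems would use public-key
# signatures tied to a verifiable publisher identity.
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # illustrative only

def sign(content: str) -> str:
    """Publisher attaches this tag when the content is released."""
    return hmac.new(SECRET, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, signature: str) -> bool:
    """Anyone holding the key can check the content is unaltered."""
    return hmac.compare_digest(sign(content), signature)

claim = "Results released June 21, 2023"
tag = sign(claim)
print(verify(claim, tag))                 # -> True: content checks out
print(verify(claim + " (edited)", tag))   # -> False: tampering is flagged
```

The protocol renders a given piece of content ‘trustworthy or suspect’ in exactly the binary sense the author describes: the signature either verifies or it does not.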

“By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will be focused directly on human connections, governance and institutions. Today we think in terms of managing these via governance, which is always a catch-up strategy or endeavor. Instead, to avoid endangered social and political interactions, we must become more proactive with our technologies. We should work to put in place governance, yes; but first, we need a basic pedagogy, a comprehensive understanding of how humans use digital technology and digital systems.

“We teach English, history, trigonometry, physics and chemistry. All of these disciplines and more are profoundly affected by digital technology and humans’ use of digital systems. Yet, generally speaking, we have less understanding about how humans use and respond to digital technology than we have about the surface of Mars. (We know more about the surface of Mars than about the bottom of the ocean; thanks to the Mars Reconnaissance Orbiter, Mars is fully mapped, the ocean is not.) As a result, our social and political interactions are often undermined by digital realities (deepfakes, flaming, Instagram face, rising rates of teen-girl sadness and suicide), and many are left dazed and confused by the speed with which so many anchors of prior human existence are being uprooted or simply discarded.

“Most people have no idea how human institutions of church and state were built on the alphabetic order, so they are also blind to the effects of digital technologies, devices in the hands of virtually every human on the planet, and how these have changed dating and mating, governance and oversight, warfare and statecraft. How many people could explain this change? Or could explain in reasonably simple terms what it means to have a digital twin in the Omniverse or the metaverse?

“We need radical transparency so these protocols and behavioral responses do not become invisible – handed over to tech developers to determine our freedoms, privacy, and destiny. That would be dangerous for all of our social and political interactions. For the sake of optimizing human connections, governance, and institutions, we need education 2.0: a broad, comprehensive understanding of the history of technology adoption and the myths that adoption fostered; and then an ongoing, regularly updated, observation deck/report that looks broadly across humans’ use of technologies to see how we are adapting to each technology, the implications of that adoption, and recommendations for optimizing human health and well-being.

“Human rights – harming the rights of citizens: By the year 2035, the most harmful or menacing changes regarding human rights – i.e., harming the rights of citizens – that are likely to occur in digital technology and humans’ use of digital systems will entail an absenting of consciousness. Humans are not likely to notice the harmful or menacing changes brought about by digital technologies and systems because the effects are not only not obvious; they are invisible. Hidden within the machine are the assumptions of the machine.

“Developers don’t have time, nor do they have the inclination, to draw attention to the workings of the software and hardware they design and build; they don’t have the time, inclination, or money to game-play the unintended consequences to humans of using a given product or gadget or device. As a result, human rights may be abridged, not only without our consent but without our notice.

“If an AI voice has been contracted to read an audiobook, the rights of an audiobook reader (voiceover artist) have not been considered or addressed. A company, say Apple, has cut costs on the production of its audiobooks by automating the process using AI. Did Apple ask readers if they would prefer this? Did Apple ask book readers if they would mind competing with an AI reader, or being supplanted by one? Did your pharmacy or insurance company ask you if you want to hear the recorded (AI) voice that talks to you when you call – and won’t let you through to a human until you wade through a series of pronouncements not related to your query? At so many different levels and layers of human experience, technology and digital solutions will emerge – buying insurance online, investing in crypto, reading an X-ray or assessing a skin lesion for possible cancer – wherein human rights will be a consideration only after the fact.

“The strange thing about inserting digital solutions into older system protocols is that the consequences of doing so must play out; the damage, if it is to occur, must actually occur for most people to notice. So human rights are effectively a football, kicked around by whatever technology happens to emerge as a useful upgrade. This will eventually be recognized as a typical outcome, and watchdogs will be installed in processes, much as we have HR offices in corporations. We need people to watch for human rights violations and infringements that may not be immediately obvious when new digital solutions or remedies are installed.

“Human knowledge – compromising or hindering progress: By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and human knowledge – compromising or hindering progress – will come from the doubters of factfulness. New technologies are effectively measuring tools at many different levels. We all live with quantified selves now. We count our calories and our steps; we monitor our blood pressure and the air quality in our cities and buildings. We are inundated by facts, and our newest technologies will serve to sort and prioritize facts for us. This is a remarkable achievement in human history, tantamount to – but far greater than – the Enlightenment of 1685-1815.

“We have never had so many tools to tell us so much about so many different aspects of human existence. (“Dare to understand,” as Steven Pinker has it.) The pace of technology development is not slowing, nor is the discovery of new facts about almost anything you can name. In short, human knowledge is exploding. But the threat to that knowledge comes not from the knowing but from those, like the Unabomber Ted Kaczynski, who are uncomfortable with the dislocations, disintermediation, and displacements of knowledge and facts.

“The history of the world is not fact-based, evidence-based; it is based on assertion and institutionalizing explanations of the world. Our new technologies upset many of those explanations and that is upsetting to many who have clung to those explanations in such diverse areas as religion or diet or health or racial characteristics or dating and mating. So, the threat to knowledge by 2035 will not come from the engines of knowing but from forces of ignorance which are threatened by the knowledge explosion.

“This is not a new story. Copernicus couldn’t publish his findings in his lifetime; Galileo was ordered to turn himself in to the Holy Office to begin trial for holding the belief that the Earth revolves around the sun, which was deemed heretical by the Catholic Church. (Standard practice demanded that the accused be imprisoned and secluded during the trial.) Picasso’s faces were thought weird and distorted until modern technologies began to alter faces or invent face amalgams, i.e., ‘This person does not exist.’

“By 2035 human knowledge will be shared with artificial intelligence. The logic of AI is the logic of mimesis: copying, mirroring. AI enhances work by mirroring what humans would do in a given role – filling out a form, looking up a legal statute, reading an X-ray. AI trains on human behavior to enhance task performance and thereby human performance – which ultimately represents a new kind of knowledge. Do we fully understand what it means to partner with our technologies to accomplish this goal? It is not enough to use AI and then rely on journalists and user reviews to critique it. Instead, we need to monitor it as it monitors us; we must train it, as it trains on us. Once again, we need an information balcony that sits above the functioning AI to report on it, to give us a completely transparent picture of how it is working, what assumptions it is working from – and especially what we think, and how we act and change, in response to using AI. This is the new human knowledge. How we respond to it will determine whether progress is compromised or hindered.

“Human health and well-being – threatening individuals’ safety, health and happiness: By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems regarding human health and well-being – threatening individuals’ safety, health and happiness – will come from our blindness as we use digital technologies and digital systems. Blindness, however, is not a sufficient explanation.

“We have left the development of digital tools and systems to commercial interests. This has given rise to surveillance capitalism, thinking and acting as a gadget, being alone together as we focus more on our phones than each other, sadness among young girls as they look into the distorting mirror of social media – among other unintended consequences. Humans entrain with digital technologies and digital systems; we adjust, conform to their logic. We have always done this with our tools. It is human nature to adopt the logic of a tool and think in that logic. We did it with alphabets and movies and computers and the Internet – and we’ll do it in 2035.

“Regarding our health and well-being, threats to individuals’ safety, health, and happiness will come from lack of awareness and understanding. As we develop more sophisticated, pervasive, human-mimicking digital tools such as robots or AI human voice assists, we need to develop a concomitant understanding of how we respond to these tools, how we change, adjust, alter our thinking and behavior as we engage with these tools.

“We need to start training ourselves – from kindergarten well into graduate school – to understand how we respond to our tools. It is not useful or good for us to be alone together (Sherry Turkle), to think of ourselves as gadgets and to think as gadgets (Jaron Lanier), or to live always in the shallows (Nicholas Carr). Currently there is little or no systematic effort to educate technology users about the logic of digital tools and how we change as we use them. Some of these changes are for the good, such as hurricane tracking to ensure community preparedness. But teen suicide, the rise of loneliness at all levels of society and an epidemic of self-obsession while climate issues are ignored are growing evidence that digital tools may threaten human health and well-being even as they enhance our lives.

“This is the paradox, the contradiction inherent in technological progress. Whether considered as the revenge of unintended consequences or the exhaust of accelerated realities, it is imperative that we address humans’ use of digital tools. By 2035 digital realities will be destinations where we will live some (much?) of our lives in enhanced digital environments; we will have an array of digital assistants and prompts, whether called Alexa or Siri, that interact with us. We need to develop moral and spiritual guidelines to help us and succeeding generations navigate these choppy waters.

“By the year 2035, Ian Bremmer, among others, believes the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will focus on AI and algorithms. He believes this because we can already see that these two technological advances together have made social media a haven for right-wing conspiracists, anarchic populists, and various disrupters to democratic norms. I would not want to minimize Bremmer’s concerns: I believe them to be real. But I would also say they are insufficient.

“Democracies and governments generally were hierarchical constructs that followed the logic of alphabets; AI and algorithms are asymmetric technologies that follow a fundamentally different logic than the alphabetic construct of democratic norms, or even the top-down dictator style of Russia or China. So, while I agree with Bremmer’s assessment that AI and algorithms may threaten existing democratic structures, they – and the social media of which they are engines – are designed differently than the alphabetic order which gave us kings and queens, presidents and prime ministers.

“The old hierarchy was dictatorial, top-down, with most people except those at the very top beholden to, and expected to bow to the wishes of, the monarch or leader at the top. Social media and AI or algorithms have no top or bottom. They are broad horizontally and shallow vertically, whereas democratic and dictatorial hierarchies are narrow horizontally and deep vertically. This structural difference is the cause of Bremmer’s alarm and must be understood and acted upon before we can salvage democracy from the ravages of populism and disinformation.

“Here is the rub: until we begin to pay attention to the logic of the tools we adopt we will use them and then be at the mercy of the logic we have adopted. A thoroughly untenable situation. We must inculcate, teach, debate and come to understand the logic of our tools and see how they build and destroy our social institutions. These social institutions reward and punish, depending on where you sit within the structure of the institution.

“Slavery was once considered a democratic right; it was championed by many American Southerners and was an economic engine of the South before and after the Civil War. America then called itself a democracy, but it was not truly democratic – especially for those enslaved. To make democracy more equitable for all, we must come to understand the logic of the tools we use and how they create the social institutions we call governments. We must insist upon transparency in the technologies we adopt so we can see and fully appreciate how these technologies can change our perceptions and values.

“Building a meta level into digital tools and technologies is akin to having an observation deck or air traffic controller office in an airport. Yes, planes could take off and land without air traffic controllers, just as vehicular traffic could move on land without traffic lights and signals. But life – and traffic flows – would be much more complicated. A technology meta level is a smoothing force and a watch platform to see what is going on with a given digital technology. This meta level amounts to feedback – continuous, among a variety of users and stakeholders, with transparency and ongoing dialogue built in. In this respect, the meta level acts like a digital twin: once the feedback comes to the meta level, the technology can alter or adjust to accommodate the feedback.”
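The meta level Chudakov describes is, in software terms, instrumentation: a layer that records how a tool is used so feedback can flow back to users and designers. A toy sketch of such an ‘observation deck’ wrapper follows; the tool and its names are hypothetical.

```python
# Illustrative sketch of a "meta level" for a digital tool: a wrapper
# that records every invocation, giving users and designers a
# transparent, continuous feedback record of how the tool is used.
from functools import wraps

usage_log = []  # the "observation deck": a shared record of tool use

def observed(tool):
    """Wrap any tool so each use is logged to the observation deck."""
    @wraps(tool)
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        usage_log.append({"tool": tool.__name__, "args": args})
        return result
    return wrapper

@observed
def search(query):  # stand-in for any digital tool a person uses
    return f"results for {query!r}"

search("half marathon training")
search("Tokyo cafes")
print(len(usage_log), usage_log[0]["tool"])
```

Like the air-traffic-controller analogy above, the wrapper does not change what the tool does; it makes the pattern of use visible so the tool (or its users) can adjust.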

The next four essays are reprinted with permission from the section “Hopes for 2023” in Andrew Ng’s December 28, 2022, edition of “The Batch” newsletter – all individual authors gave permission for us to use their pieces in this report. All take a positive perspective in looking ahead to expected goals. 

1) Yoshua Bengio, scientific director of Mila Quebec AI Institute and co-winner of the 2018 Alan Turing Award for his contributions to breakthroughs in deep learning, wrote, “In the near future we will see models that reason. Recent advances in deep learning largely have come by brute force: taking the latest architectures and scaling up compute power, data and engineering. Do we have the architectures we need, and all that remains is to develop better hardware and datasets so we can keep scaling up? Or are we still missing something?

“I believe we’re missing something, and I hope for progress toward finding it in the coming year.

“I’ve been studying, in collaboration with neuroscientists and cognitive neuroscientists, the performance gap between state-of-the-art systems and humans. The differences lead me to believe that simply scaling up is not going to fill the gap. Instead, building into our models a human-like ability to discover and reason with high-level concepts and relationships between them can make the difference.

“Consider the number of examples necessary to learn a new task, known as sample complexity. It takes a huge amount of gameplay to train a deep learning model to play a new video game, while a human can learn this very quickly. Related issues fall under the rubric of reasoning. A computer needs to consider numerous possibilities to plan an efficient route from here to there, while a human doesn’t.

“Humans can select the right pieces of knowledge and paste them together to form a relevant explanation, answer, or plan. Moreover, given a set of variables, humans are pretty good at deciding which is a cause of which. Current AI techniques don’t come close to this human ability to generate reasoning paths. Often, they’re highly confident that their decision is right, even when it’s wrong. Such issues can be amusing in a text generator, but they can be life-threatening in a self-driving car or medical diagnosis system.

“Current systems behave in these ways partly because they’ve been designed that way. For instance, text generators are trained simply to predict the next word rather than to build an internal data structure that accounts for the concepts they manipulate and how they are related to each other. But I think we can design systems that track the meanings at play and reason over them while keeping the numerous advantages of current deep learning methodologies. In doing so, we can address a variety of challenges from excessive sample complexity to overconfident incorrectness.

“I’m excited by generative flow networks, or GFlowNets, an approach to training deep nets that my group started about a year ago. This idea is inspired by the way humans reason through a sequence of steps, adding a new piece of relevant information at each step. It’s like reinforcement learning, because the model sequentially learns a policy to solve a problem. It’s also like generative modeling, because it can sample solutions in a way that corresponds to making a probabilistic inference.

“If you think of an interpretation of an image, your thought can be converted to a sentence, but it’s not the sentence itself. Rather, it contains semantic and relational information about the concepts in that sentence. Generally, we represent such semantic content as a graph, in which each node is a concept or variable. GFlowNets generate such graphs one node or edge at a time, choosing which concept should be added and connected to which others in what kind of relation.
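The graph-building step Bengio describes can be caricatured in a few lines: at each step a policy picks a concept node and an edge attaching it to the partial graph. In the sketch below the ‘policy’ is uniformly random, which is only a placeholder; an actual GFlowNet learns this policy so that completed graphs are sampled in proportion to a reward.

```python
# Toy illustration of sequential graph construction in the GFlowNet
# spirit: one node and one connecting edge are added per step. The
# random choices stand in for a learned policy; no training objective
# is implemented here.
import random

def build_concept_graph(concepts, seed=0):
    rng = random.Random(seed)
    remaining = list(concepts)
    rng.shuffle(remaining)
    nodes, edges = [remaining.pop()], []
    while remaining:
        new = remaining.pop()        # "policy": which concept to add next
        anchor = rng.choice(nodes)   # "policy": where to attach it
        nodes.append(new)
        edges.append((anchor, new))
    return nodes, edges

nodes, edges = build_concept_graph(["image", "dog", "grass", "running"])
print(edges)  # one new edge per added node, so the graph stays connected
```

Because each new node is linked to an existing one, every completed run yields a connected semantic graph, mirroring the one-node-or-edge-at-a-time construction described above.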

“I don’t think this is the only possibility, and I look forward to seeing a multiplicity of approaches. Through a diversity of exploration, we’ll increase our chance to find the ingredients we’re missing to bridge the gap between current AI and human-level AI.”

2) Alon Halevy, a director with Reality Labs Research at Meta Platforms, wrote, “Your personal data timeline lies ahead. The important question of how companies and organizations use our data has received a lot of attention in the technology and policy communities. An equally important question that deserves more focus in 2023 is how we, as individuals, can take advantage of the data we generate to improve our health, vitality and productivity.

“We create a variety of data throughout our days. Photos capture our experiences, phones record our workouts and locations, Internet services log the content we consume and our purchases. We also record our want-to lists: desired travel and dining destinations, books and movies we plan to enjoy, and social activities we want to pursue. Soon smart glasses will record our experiences in even more detail. However, this data is siloed in dozens of applications. Consequently, we often struggle to retrieve important facts from our past and build upon them to create satisfying experiences on a daily basis.

“But what if all this information were fused in a personal timeline designed to help us stay on track toward our goals, hopes, and dreams? The idea is not new. Vannevar Bush envisioned it in 1945, calling it a memex. In the 1990s, Gordon Bell and his colleagues at Microsoft Research built MyLifeBits, a prototype of this vision. The prospects and pitfalls of such a system have been depicted in film and literature.

“Privacy is obviously a key concern in terms of keeping all our data in a single repository and protecting it against intrusion or government overreach. Privacy means that your data is available only to you, but if you want to share parts of it, you should be able to do it on the fly by uttering a command such as, ‘Share my favorite cafes in Tokyo with Jane.’ No single company has all our data or the trust to store all our data. Therefore, building technology that enables personal timelines should be a community effort that includes protocols for the exchange of data, encrypted storage, and secure processing.

“Building personal timelines will also force the AI community to pay attention to two technical challenges that have broader application.

“The first challenge is answering questions over personal timelines. We’ve made significant progress on question answering over text and multimodal data. However, in many cases, question answering requires that we reason explicitly about sets of answers and aggregates computed over them. This is the bread and butter of database systems. For example, answering ‘What cafes did I visit in Tokyo?’ or ‘How many times did I run a half marathon in under two hours?’ requires that we retrieve sets as intermediate answers, which is not currently done in natural language processing. Borrowing more inspiration from databases, we also need to be able to explain the provenance of our answers and decide when they are complete and correct.
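The set-then-aggregate pattern described here can be illustrated with a tiny mock timeline. The entries and field names below are invented for the example; a real system would answer natural-language questions over far larger, messier, multimodal data.

```python
# Database-style reasoning over a personal timeline: first retrieve a
# set of matching events, then compute over it. All records are
# fabricated illustrations.
timeline = [
    {"type": "run", "event": "half marathon", "minutes": 118},
    {"type": "visit", "place": "cafe", "city": "Tokyo", "name": "Blue Door"},
    {"type": "run", "event": "half marathon", "minutes": 126},
    {"type": "visit", "place": "cafe", "city": "Tokyo", "name": "Kissa Ren"},
    {"type": "run", "event": "half marathon", "minutes": 115},
]

# 'What cafes did I visit in Tokyo?' -> the answer is itself a set.
tokyo_cafes = {e["name"] for e in timeline
               if e["type"] == "visit" and e.get("city") == "Tokyo"}

# 'How many times did I run a half marathon in under two hours?'
# -> retrieve a set of events, then aggregate (count) over it.
sub_two_hours = [e for e in timeline
                 if e["type"] == "run"
                 and e["event"] == "half marathon"
                 and e["minutes"] < 120]

print(sorted(tokyo_cafes))  # the set-valued intermediate answer
print(len(sub_two_hours))   # the aggregate computed over a set
```

The point Halevy makes is that current question-answering models produce a single span of text, whereas both questions above require the intermediate set as a first-class object.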

“The second challenge is to develop techniques that use our timelines responsibly to improve personal well-being. Taking inspiration from the field of positive psychology, we can all flourish by creating positive experiences for ourselves and adopting better habits. An AI agent that has access to our previous experiences and goals can give us timely reminders and suggestions of things to do or avoid. Ultimately, what we choose to do is up to us, but I believe that an AI with a holistic view of our day-to-day activities, better memory, and superior planning capabilities would benefit everyone.”

3) Douwe Kiela, an adjunct professor in symbolic systems at Stanford University, previously the head of research at Hugging Face and a scientist at Facebook Research, wrote, “Expect less hype and more caution. This year we really started to see AI go mainstream. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field. These are exciting times and it feels like we are on the cusp of something great: a shift in capabilities that could be as impactful as – without exaggeration – the industrial revolution.

“But amidst that excitement, we should be extra wary of hype and extra careful to ensure that we proceed responsibly.

“Consider large language models. Whether or not such systems really have meaning, lay people will anthropomorphize them anyway, given their ability to perform arguably the most quintessentially human thing: to produce language. It is essential that we educate the public on the capabilities and limitations of these and other AI systems, especially because the public largely thinks of computers as good old-fashioned symbol-processors – for example, that they are good at math and bad at art, while currently the reverse is true.

“Modern AI has important and far-reaching shortcomings. Systems are too easily misused or abused for nefarious purposes, intentionally or inadvertently. Not only do they hallucinate information, they do so with seemingly very high confidence and without the ability to attribute or credit sources. They lack a rich-enough understanding of our complex multimodal human world and do not possess enough of what philosophers call ‘folk psychology,’ the capacity to explain and predict the behavior and mental states of other people. They are arguably unsustainably resource-intensive, and we poorly understand the relationship between the training data going in and the model coming out. Lastly, despite the unreasonable effectiveness of scaling – for instance, certain capabilities appear to emerge only when models reach a certain size – there are also signs that with that scale comes even greater potential for highly problematic biases and even less-fair systems.

“My hope for 2023 is that we’ll see work on improving all of these issues. Research on multimodality, grounding, and interaction can lead to systems that understand us better because they understand our world and our behavior better. Work on alignment, attribution, and uncertainty may lead to safer systems less prone to hallucination and with more accurate reward models. Data-centric AI will hopefully show the way to steeper scaling laws, and more efficient ways to turn data into more robust and fair models.

“Finally, we should focus much more seriously on AI’s ongoing evaluation crisis. We need better and more holistic measurements – of data and models – to ensure that we can characterize our progress and limitations, and understand, in terms of ecological validity (for instance, real-world use cases), what we really want out of these systems.”

4) Reza Zadeh, founder and CEO at Matroid, a computer vision company, and adjunct professor at Stanford University, wrote, “As we enter 2023, there is a growing hope that the recent explosion of generative AI will bring significant progress in active learning. This technique, which enables machine learning systems to generate their own training examples and request them to be labeled, contrasts with most other forms of machine learning, in which an algorithm is given a fixed set of examples and usually learns from those alone.

“Active learning can enable machine learning systems to:

  • Adapt to changing conditions
  • Learn from fewer labels
  • Keep humans in the loop for the most valuable, difficult examples
  • Achieve higher performance

“The idea of active learning has been in the community for decades, but it has never really taken off. Previously, it was very hard for a learning algorithm to generate images or sentences that were simultaneously realistic enough for a human to evaluate and useful to advance a learning algorithm.

“But with recent advances in generative AI for images and text, active learning is primed for a major breakthrough. Now, when a learning algorithm is unsure of the correct label for some part of its encoding space, it can actively generate data from that section to get input from a human.

“Active learning has the potential to revolutionize the way we approach machine learning, as it allows systems to continuously improve and adapt over time. Rather than relying on a fixed set of labeled data, an active learning system can seek out new information and examples that will help it better understand the problem it is trying to solve. This can lead to more accurate and effective machine learning models, and it could reduce the need for large amounts of labeled data.

“I have a great deal of hope and excitement that active learning will build upon the recent advances in generative AI. As we enter the new year, we are likely to see more machine learning systems that implement active learning techniques, and it is possible that 2023 could be the year that active learning truly takes off.”

Beneficial and Harmful
Czesław Mesjasz, an associate professor at Cracow University of Economics, Kraków, Poland, wrote, “The main challenge associated with the development of modern technology can be described through the following opposite scenarios:

“First, the positive scenario: Thanks to increased productivity, it will be possible to fulfill the needs of larger social groups. It is often forgotten that there must be demand for the results of increased productivity. In this scenario, that demand will be created by people engaged in areas that cannot now be sufficiently financed (e.g., art, entertainment). For example, new collections of great works could be established in museums, and more creators and entrepreneurs would be paid for activities that are not paid for now (e.g., culture, art, learning about nature). The arrival of a super-efficient automated industry demands a broad social and political consensus across all of society to assure the best results for all. I call this scenario positive. Handled well, and viewed with an idealist’s eye, it could lead to a decrease in social inequality.

“The second, opposite, scenario says that owners, innovators and specialists of technology will conclude that they do not want to transfer the results of their above-average skills and resources. It will be a very politically tempting situation for those who are more affluent and better-qualified to dominate the poorer, less-skilled and less-educated social groups.

“The question concerning demand can then be asked: Who will create demand for the products of automated manufacturing? The answer is less optimistic. The following social divide could emerge spontaneously: The more affluent will operate in closed social groups (e.g., a wealthy specialist/owner will buy products only from particular people, leaving out the other social groups). The less-educated, weaker social groups will be dominated by the affluent, smarter and better-educated. This is a pessimistic picture.

“Providing a basic income to everyone could allow people to survive at relatively low standards of living. This dilemma will be the most crucial challenge in the years to come. Of course, it is a matter of ethics and ideology. The actual situation will be somewhere in the middle. The most challenging in this duality is that it is of a solid structural, systemic character, independent of the subjective, individual opinions of the involved actors. Of course, more can be written, but this dichotomy will be crucial in shaping the social order under the conditions of accelerated technological development.”

Ian O’Byrne, assistant professor of literacy education at the College of Charleston, commented, “The best and most beneficial changes will be multiple, but it depends on expectations about who benefits and how. More to the point, as these technologies impact our societies and cultures, some groups are disrupted or dislocated as these systems, products and spaces proliferate.

“In terms of human-centered development of digital tools and systems, I am hopeful that recent movements in open-source technologies, indieweb philosophies and federated systems (e.g., Mastodon) will help support and promote human identity and agency in and across these systems.

“Technological solutions and products may have some impact on improving human rights and abetting good outcomes for all citizens. Much of this involves the use of social networks and digital tools for capturing, sharing and documenting local events to a global audience.

“One of the things that has me excited about changes in the long term is that technology usually tries to advance toward progress, improvement and better outcomes. As stated earlier, this may often come at the expense of individuals and groups, but the hope is that for the larger community (or the community in power) better outcomes are attainable. I believe that science and technology usually are for the better.

“In terms of human health and well-being, I am a bit hopeful that current advances in technology, like wearable sensors, electronic health records, and digital records, will help individuals be safer, healthier, and strive for mental and physical health.”

Ian O’Byrne, assistant professor of literacy education at the College of Charleston, wrote, “In thinking about harmful or menacing aspects of advances in digital tools and networked platforms, I’m considerate of the fact that technology will advance toward what it believes is progress and a better solution or outcome. In many ways, this may run counter to what human systems, solutions and outcomes may desire.

“I believe that human-centered development of digital tools and systems is focused on keeping users interacting with tools, services and products, and not as much on the mental and/or physical health of individuals as they interact in these spaces. Furthermore, I believe these tools are falling short of privacy advocates’ goals, as terms of use and service are complex and unintelligible.

“In terms of human connections, especially in terms of governance and institutions, I am most concerned about the growing divide between education, science, technology and the communities that feel they are upended by these forces. I believe that we are seeing the full impact of the ‘future shock’ that Toffler referenced when describing what happens when people are no longer able to cope with the pace of change. We increasingly need to question the privacy and data-security issues people experience as they sign up for and use these tools.

“I have significant concerns about advances in technology and digital spaces as they impact the human rights of individuals, especially children. With the spread of the global pandemic, as we moved to emergency remote teaching, schools had a decision to make about how to support learners as they moved online. Many learning institutions used this as an opportunity to amplify surveillance tools and normalize surveillance culture for learners and educators. In addition, these technological tools are wonderful, especially as we think about access to a global, networked economy, but I have concerns about entering student data into these environments and thereby ceding students as future customers of products and systems.”

Robert Gibson, director of instructional design at WSU Tech, commented, “Artificial intelligence will certainly be the most impactful. No question about it. Whether it will be used for beneficial purposes remains to be seen. We’ve seen how the web has spawned nefarious and dangerous websites that have threatened our very democracy and civil order. Left unchecked, AI could certainly impact civilization in unexpected ways.”

Robert Gibson, director of instructional design at WSU Tech, said, “Artificial intelligence is a threat for all the same reasons that it is providing amazing opportunities. For one thing, deepfake technologies could be used to frame people for crimes, alter the course of diplomatic interactions and reshape society itself.”

Neither Beneficial Nor Harmful
Eduardo Villanueva-Mansilla, associate professor at Pontificia Universidad Católica del Perú and editor of the Journal of Community Informatics, said, “I’m not so sure about positive or beneficial changes as a whole. There are many instances of technological innovation that may have significant impacts on society, but it is quite hard to think of beneficial as a category that may describe them. For instance, it is evident that AI, at current speed, will be quite significant in many different sectors around the world, but still the biases – even unintentional ones – are a problem that no one is really thinking about. Any approach that considers this category will have a level of bias in itself that I don’t feel comfortable with. Unless there is a single set of technologies that stops the climate emergency and allows for better, fairer access to resources, progress will be uneven and, actually, quite irrelevant.”

Buroshiva Dasgupta, professor of communication at Sister Nivedita University in Kolkata, India, wrote, “I am generally excited about the changes that are happening to society through digital media. By 2035 the human species will evolve into more efficient creatures. Privacy is an unnecessary concern. If humans want to keep certain things secret, they will do it, whether in a digital environment or not. Humans will continue as gregarious animals.”

Buroshiva Dasgupta, professor of communication at Sister Nivedita University in Kolkata, India, commented, “Digital media is a tool; it depends on how we use it. The atom bomb killed thousands, but now it is reined in to benefit the human species. Similarly, we are becoming aware of the harmful effects of digital media. We will learn to guard against it, but generally digital media will make us more-efficient human beings.”

Jan Schaffer, executive director at J-Lab, wrote, “In human health, there are going to be further great advances in molecular biology and treatment. My brother, 68, a pathologist, is absolutely animated by these advances. Likewise, there will be strides in robotic and laser surgeries. In regard to human connections, there will be great advances in secure voting that cancel out any fraud claims. In terms of human rights, our knowledge of abuses will be enhanced by messaging apps and drones. If there is a will, we could know more about kleptocratic behavior.”

Jan Schaffer, executive director at J-Lab, commented, “In terms of human knowledge, I have great concerns about AI tools that all too easily allow students to generate term papers and reports without learning any of the material. I worry about the growing reports of creating sentient robots with little oversight or rules. I think that the lack of human services – in retail, banking, any kind of customer service – makes people agitated and nervous. And in banking, in particular, I worry that systems are not being developed fast enough to prevent fraudulent activity. Will the FDIC be able to insure it all, or at what point does the FDIC itself go under? And I worry that so many low-level jobs will be replaced by automated systems that workers who can’t get, or don’t want, a college degree will have limited options.”

Beatriz Botero Arcila, assistant professor of law in the digital economy at Sciences Po Law School in France and head of research at Edgelands Institute, said, “Institutions will get better at data analytics and data-driven decision-making. This is happening in the private sector already, and in some parts of government, but it will also continue to expand to civil society. This will be a function of growing expertise and the cheapening of various tools, but also of people getting used to, and expecting, data-backed interventions. To survive the information explosion, it is also likely we will have developed mechanisms to verify information, hopefully curbing some of our cacophonous information environment.”

Beatriz Botero Arcila, assistant professor of law in the digital economy at Sciences Po Law School in France and head of research at Edgelands Institute, responded, “Harmful and menacing changes will include a further tightening of infrastructures of control, of different kinds in different contexts. It’s hard to specify why this is harmful, but it may be that large interconnected systems require strict rules and strong enforcement; this will hurt people who don’t fit the mold. Relatedly, I worry about freedom of speech shrinking as the rules about what speech is permissible get stricter to limit certain forms of harm. In the long run this could hurt progress, but it is hard to tell.”

Randy Mayes, a self-employed technology analyst, commented, “The mass adoption of fog computing stands out most to me. Most data processing uses the von Neumann architecture, in which memory and the processor sit in different places, as in cloud computing. Autonomous vehicles (AVs) need processors that can rapidly analyze data and make real-time decisions regarding acceleration, object detection, braking and steering. Relying on cloud computing when cameras and sensors generate data to detect objects on the road is compromised by latency. One solution to latency is moving processing and data storage closer to where the data is generated, called edge computing. AVs will also need to use swarm intelligence, similar to bacteria and animals, to communicate with each other for navigation. Researchers are currently investigating fog computing because it would spread network servers along highways for faster and more reliable navigation and for communicating data analytics among driverless cars.”
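The latency argument behind edge and fog computing can be made concrete with back-of-the-envelope arithmetic. All figures below are illustrative assumptions (signal speed in fiber of roughly 2e8 m/s, distances chosen arbitrarily), and queuing and processing time are ignored.

```python
# Propagation-delay comparison: a distant cloud data center versus a
# roadside fog/edge node. Distances and the fiber speed are assumed
# round numbers, not measurements.
SPEED_IN_FIBER_M_PER_S = 2e8  # light in fiber is ~2/3 of c

def round_trip_ms(distance_m):
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_m / SPEED_IN_FIBER_M_PER_S * 1000

cloud_m = 1_500_000  # assumed: data center 1,500 km away
edge_m = 1_000       # assumed: fog node 1 km down the highway

print(f"cloud: {round_trip_ms(cloud_m):.2f} ms")
print(f"edge:  {round_trip_ms(edge_m):.3f} ms")
```

Even before adding server queuing, milliseconds of round-trip distance alone can matter to a vehicle covering about 3 cm per millisecond at highway speed, which is why the processing is pushed toward the road.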

Randy Mayes, a self-employed technology analyst, said, “There are numerous outstanding philosophical, scientific and existential-risk issues that could determine whether or not our species survives. Solutions to these hard problems are beyond current human knowledge and machine capabilities. A number of software and hardware companies are developing quantum computers. These are not intended to replace our home and office computers; they are more-complex tools that will be used to solve more-complex problems. The properties that make quantum particles ideal for quantum computing also pose technical hurdles for its development. Qubits interact with the environment and exist in multiple states simultaneously, which decreases the accuracy of measurements. In order to overcome these interference and uncertainty hurdles, researchers have developed innovative technological solutions to make qubits more measurable. Quantum technology can also provide the military with more secure data and communications: the uncertain nature of qubits makes it almost impossible for hackers to intercept and decrypt quantum-encoded transmissions. At the same time, because quantum computers can perform multiple calculations simultaneously, they have the potential to break the common encryption methods used in classical computing, as Peter Shor of Bell Labs demonstrated in 1994.”

Beneficial and Harmful
David Wilkins, an instructor at the University of Oregon School of Data Science and Computer Science, commented, “We will see developments in medicine and political wisdom. Medicine: Our species needs to compare how very large mammals (elephants, whales) manage to remain cancer-free despite having far more cells that can go wild. That effort will require huge amounts of computing research to compare their genomes with our own. Political wisdom: There will be more access to widely vetted facts, which may reduce passions about political ideas if there is widespread, well-tested access to facts rather than hyper-partisan panics. Harms will come in regard to privacy: A dramatic loss of privacy will be made possible by monitoring searches, reading and affiliations with others.”

Beneficial and Harmful
Marc Brenman, managing member at IDARE, said, “The most beneficial changes due to digital technology will be in the area of medical diagnosis and treatment. The most harmful changes likely to occur due to digital technology will be in the area of artificial intelligence, when AI entities surpass humans and decide they can get along without us.”

Beneficial and Harmful
John L. King, professor of information studies and former dean at the University of Michigan, said, “Improved information and communications technologies (e.g., on cellphones) will make it easier for individuals to find information and execute actions, enabling improved health and safety and making it harder to hide human rights violations.

“In regard to harms: Incentives will continue for advertising and other social-control purposes. Tools to collect and use such information will improve ahead of other tools, reinforcing efforts to learn more about individuals and improve performance in advertising and social-control endeavors. A lot of this will masquerade under the banner of privacy, but it is about control.”

Beneficial and Harmful
Kevin Doyle Jones, an entrepreneur and co-founder of the world’s largest social investment conference, SoCap, said, “I work in economic justice. I think the sharing of solutions by practitioners will accelerate. The most menacing aspects are the ability of conspiracy theorists to live in their own online worlds.”

Beneficial and Harmful
Michael Pilos, co-executive director at Raxios & Co., wrote, “Governmental infrastructure and systems will use AI to optimise operations. Autonomous cargo transportation with wide automation of road networks will advance. However, AI has the ability to mimic humans and influence political life, governance and even the military – AI needs to be governed and controlled by global laws and protocols (with the help of the UN).”

Beneficial (did not respond to Harms question)
Matthew Belge, president and principal UX designer at Vision & Logic, wrote, “Medical advances by 2035 will include more use of VR and other simulation tools. Robots will be making street deliveries. Cars will drive on autopilot and will become a shared commodity; cars available on demand will become more common. Huge entertainment screens for the home will make traditional movie theaters obsolete. Personal electric aircraft will be available to fly us to close destinations.”

Sharon Sputz, executive director of strategic programs at the Data Science Institute at Columbia University, commented, “As digital systems advance, they have great potential to take over tasks that are better done by computers than by humans. While the list of such tasks is growing, they are still centered on computation and can be put to good use: for example, using data science models to alert radiologists to suspicious images, or using natural language processing to summarize large volumes of information and improve our understanding of disease.”

Sharon Sputz, executive director of strategic programs at the Data Science Institute at Columbia University, said, “When we deploy AI systems without including the human, or without fully understanding the data used, the resulting decisions can unfairly harm innocent people.”

Beneficial and Harmful
Ernest Thiessen, founder and president of Smartsettle, developer of an eNegotiation system, responded, “Huge improvements in decision-making will be possible with intelligent collaboration systems that incorporate optimization algorithms. But I would be wary of systems like ChatGPT that give answers based on existing patterns.”

Beneficial and Harmful
Neil McLachlan, a consultant with Co Serve Consulting, commented, “There will be more human-centered development of digital tools and systems; this seems to be most likely and most needed to me. In particular, this will include operational, safety and quality-of-service benefits for transport systems. However, there will be more violations of human rights. Harming the rights of citizens is a deep-seated problem with most social media platforms, even as they are right now.”

Beneficial and Harmful
Paul Wildman, futurist and consultant, Kids and Adults Learning Ltd, wrote, “Big developments will come in human-centered design of digital tools and systems, including autonomous health solutions and vehicles, safely advancing human progress in these systems. A big concern is that the human-centered design of digital tools and systems, including of humans themselves through transhumanism, may fall short of advocates’ goals.”

Beneficial and Harmful
David Lilley, an assistant professor of criminal justice at the University of Toledo, commented, “The greatest opportunity that the future digital world brings is quick access to educational information. Individuals could become experts (to the Ph.D. level) on their own time at little cost and without taking years of time via traditional schooling. Among the potential harms are the emergence of an Artificial Intelligence Hive Mind, the merging of corporatism with government (fascism) and the loss of freedom to think and speak.

“I believe a centralized AI system that is connected via Google, Facebook, Twitter and other content and search providers could likely attempt to monitor, control and manipulate the world by monitoring billions of Internet searches, emails and online comments.

“Background: Psychologists often have institutionalized persons keep a journal of their thoughts so that clinicians can monitor their mental state. We are now entering into a similar relationship with the Internet. Monitoring the Internet gives a central entity (e.g., an AI Hive Mind) near God-like power. This is already resulting in a merger of big tech, corporatism and government. Eventually, there will be just one large corporation that rules the world via fascist totalitarianism.”

Beneficial and Harmful
Perry Monroe, a futurist and consultant who does contract work for U.S. government agencies, said, “In America there has been a pattern of change and development that recurs in a 40-year cycle. Looking at the 20th century alone: In 1905 we saw the development of cars and airplanes; in 1945, the birth of the atomic age; and in 1985, the birth of the computer age and cell phones. That being said, what change is going to happen in 2025 if this pattern holds true? Will it be sentient AI, the Kessler effect, or will we make first contact with some extraterrestrial entity? What role humanity will play in this is anyone’s guess. The most harmful thing I see as an actual possibility is rogue AI. We have at present several AI programs open to the public that are seen as curiosities but that have the potential, if left unchecked, to lead to darker things.”

To read the full survey with analysis, please click here.

To read anonymous responses to the report, please click here.

To download a printable version of the report, please click here.