Elon University

Credited Responses: The Future of Ethical AI Design

This page holds the full for-credit responses, with no analysis, to a set of July 2020 research questions aimed at illuminating attitudes about the likely evolution of ethical artificial intelligence design between 2020 and 2030.

Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate and public practitioners and other leaders from June 30 to July 27, 2020, asking them to share their answer to the following:

The Question – Application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. The question on the future of ethical AI design: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good? Yes or No?

602 respondents answered the question

  • 32% said YES, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030.
  • 68% said NO, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030.

They were asked to elaborate on their choice with these prompts: Will AI mostly be used in ethical or questionable ways in the next decade? Why? What gives you the most hope? What worries you the most? How do you see AI applications making a difference in the lives of most people? As you look at the global competition over AI systems, what issues concern you or excite you?

We also asked respondents one final question: to consider the evolution of quantum computing (QC) and whether it might influence any aspects of this realm. Because QC is still in its early development and this query came at the very end of a large set of big questions (including several asking for predictions about what digital life might be like in 2025 in the wake of the arrival of COVID-19 – part of an earlier report with details gleaned from this same canvassing), many respondents chose not to weigh in, said very little or replied that they were unsure. As a result, few QC responses were included in the report – an analysis of expert opinions on the likely path of ethical AI design in the next decade – and only a few such responses are included on this page.

Click here to download the print report

The full report with organized analysis of responses is online here

Key themes emerging in the 602 respondents’ overall answers were:

* WORRIES
  • It is difficult to define “ethical” AI: Context matters. There are cultural differences, and the nature and power of the actors in any given scenario are crucial. Norms and standards are currently under discussion, but global consensus may not be likely. In addition, formal ethics training and emphasis is not embedded in the human systems creating AI.
  • Control of AI is concentrated in the hands of powerful companies and governments driven by motives other than ethical concerns: Over the next decade, AI development will continue to be aimed at finding ever-more-sophisticated ways to exert influence over people’s emotions and beliefs in order to convince them to buy goods, services and ideas.
  • The AI genie is already out of the bottle, abuses are already occurring, and some are not very visible and are hard to remedy: AI applications are already at work in systems that are opaque at best and, at worst, impossible to dissect. How can ethical standards be applied under these conditions? While history has shown that societies always adjust and work to find remedies when abuses arise from new tools, this time it’s different: AI is a major threat.
  • Global competition, especially between China and the U.S., will matter more to the development of AI than any ethical issues: There is an arms race between the two tech superpowers that overshadows concerns about ethics. Plus, the two countries define ethics in different ways. The acquisition of techno-power is the real impetus for advancing AI systems; ethics takes a back seat.

* HOPES
  • AI advances are inevitable; we will work on fostering ethical AI design: More applications will emerge to help make people’s lives easier and safer. Healthcare breakthroughs are coming that will allow better diagnosis and treatment, some of which will emerge from personalized medicine that radically improves the human condition. All systems can be enhanced by AI; thus, it is likely that support for ethical AI will grow.
  • A consensus around ethical AI is emerging, and open-source solutions can help: There has been extensive study and discourse around ethical AI for several years, and it is bearing fruit. Many groups working on this are focusing on the already-established ethics of the biomedical community.
  • Ethics will evolve and progress will come as different fields show the way: No technology endures if it broadly delivers unfair or unwanted outcomes. The market and legal systems will drive out the worst AI systems. Some fields will be faster to the mark in getting ethical AI rules and code in place, and they will point the way for laggards.

News release with nutshell version of report findings is available here

All anonymous responses on the likely future of ethical design are here

Following are responses from all of those taking credit for their remarks. Some are longer versions of expert responses contained in shorter form in the survey report.

The responses on this page are organized in three sections below, sorted by respondents’ choices: 1) those who said No, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030; 2) those who said Yes, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030; and 3) those who did not choose either “Yes” or “No,” electing only to write an elaboration on the topic rather than make a binary choice.

Of note: Most of those who replied “Yes” – saying they hope to see the positive evolution of ethical AI design in the next decade – also generally expressed some degree of uncertainty or voiced specific concerns or doubts about a positive trajectory. And some of those who said they doubt ethical AI design will advance much in the next decade also took note of the efforts being made toward it in 2020 and expressed hope for a better future for ethical AI design post-2030.

Some people chose not to provide a written elaboration, so there are not 600-plus responses recorded here. Some of the following are longer versions of responses that are contained in shorter form in one or more places in the survey report. Anonymous responses are carried on a separate page. These comments were collected in an opt-in invitation sent to more than 10,000 people, asking them to share their responses to a web-based questionnaire in July 2020.

Predictions from respondents who said ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030

Jonathan Grudin, principal researcher with the Natural Interaction Group at Microsoft Research, said, “The past quarter-century has seen an accelerating rise of online bad actors (not all of whom would agree they are bad actors) and an astronomical rise in the costs of efforts to combat them, with AI figuring in this. We pose impossible demands: we would like social media to preserve individual privacy but also identify Russian or Chinese hackers, which will require sophisticated construction of individual behavior patterns. The principal use of AI is likely to be finding ever more sophisticated ways to convince people to buy things that they don’t really need, leaving us deeper in debt with no money to contribute to efforts to combat climate change, environmental catastrophe, social injustice and inequality, and so on.”

Sam S. Adams, a 24-year veteran of IBM now working as a senior research scientist in artificial intelligence for RTI International, architecting national-scale knowledge graphs for global good, wrote, “The AI genie is completely out of the bottle already, and by 2030 there will be dramatic increases in the utility and universal access to advanced AI technology. This means there is practically no way to force ethical use in the fundamentally unethical fractions of global society. The multi-millennial problem with ethics has always been: Whose ethics? Who decides and then who agrees to comply? That is a fundamentally human problem that no technical advance or even existential threat will totally eliminate. Basically, we are stuck with each other and hopefully at least a large fraction will try to make the best of it. But there is too much power and wealth available for those who will use advanced technology unethically, and universal access via cloud, IoT and open-source software will make it all too easy for an unethical player to exploit. I believe the only realistic path is to provide an open playing field. That universal access to the technology at least arms both sides equally. This may be the equivalent of a Mutually Assured Destruction policy, but to take guns away from the good guys only means they can’t defend themselves from the bad guys anymore. Quantum Computing (QC), if and when it becomes a commercially scalable reality, will basically allow AI systems to consider vast high-dimensional alternatives at near-instantaneous speed. This will allow not only playing hyper-dimensional chess in real-time, but consider the impact of being able to simulate an entire economy at high resolution in faster than real-time. Program trading run amok in financial markets has caused global economic crises before. Now, accelerate that risk by orders of magnitude. Again, too much opportunity to gain extreme wealth and power for bad actors to ignore. The threat/opportunity of QC already fuels a global arms race in cryptography and privacy. Ethics barely has a chair in the hallway, let alone at the table in the national war rooms. That said, if a cost and scale breakthrough allows for the widespread democratization of QC, then the playing field is leveled. What if a $30 Raspberry Pi/Q gave every device a quantum-supremacy-level capability? Humans will always be in the loop when and where they add value. What value determines that requirement.”

Douglas Rushkoff, well-known media theorist, author and professor of media at City University of New York, wrote, “Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now. Most basically, the reason I think AI won’t be developed ethically is that AI is being developed by companies looking to make money, not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or to help manage other limited resources, the majority is being used on people. My concern is that even the ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a ‘humane’ AI, but what does that mean? It extracts value from us in the most ‘humane’ way possible? I am thinking, or at least hoping, that quantum computing is further off than we imagine. We are just not ready for it as a civilization. I don’t know if humans will be ‘in the loop’ because quantum isn’t really a cybernetic feedback loop like what we think of as computers today. I don’t know how much humans are in the loop even now, between capitalism and digital. Quantum would take us out of the equation.”

Ebenezer Baldwin Bowles, an advocate/activist, observed, “Altruism on the part of the designers of AI is a myth of corporate propaganda. Ethical interfaces between AI and citizenry in 2030 will be a cynical expression by the designers of a digital Potemkin Village – looks good from the outside, but totally empty behind the facade. AI will function according to two motivations: one, to gather more and more personal information for the purposes of subliminal and direct advertising and marketing campaigns; and two, to employ Big Data to root out radical thinking and exercise near total control of the citizenry. The State stands ready through AI to silence all voices of perceived dissent. Most thinking about quantum computing relates to medicine and technological advances in finance and manufacturing. It looks good on the surface, but offers profound threats at the core. I’m convinced that any expression of ethical AI will stand as an empty pledge – yes, we will always do the right thing for the advancement of the greater good. No way. Rather, the creators of AI will do what is good for the bottom line, either through financial schemes to feed the corporate beast or psychological operations directed toward control of dissent and pacification. As for the concept that ‘humans will be in the loop,’ we are already out of the loop because there is no loop. Think about this fact: in the development of any major expression of artificial intelligence, hundreds of IT professionals are assigned to a legion of disparate, discrete teams of software writers and mechanical designers to create the final product. No single individual or team fully understands what the other teams are doing. The final AI product is a singular creature no one person understands – other than the software itself. Ethical action is not a part of the equation.”

Marjory S. Blumenthal, director of the science, technology and policy program at RAND Corporation, observed, “This is the proverbial onion; there is no simple answer. Some of the challenge is in the lifecycle – it begins with how the data are collected, labeled (if they are for training) and then used, possibly involving different actors with different views of what is ethical. Some of the challenges involve the interest-balancing of companies, especially startups, that have always placed function and product delivery over security and privacy. Some of the challenges reflect the fact that in addition to privacy and security for some applications, safety is also a concern (and there are others). Some of the challenges reflect the fact that even with international efforts like that of the IEEE, views of what ethics are appropriate differ around the world. Today’s AI mania implies that a lot of people are rushing to be able to say that they use or produce AI, and anything rushed will have compromises. Notwithstanding the concerns, the enthusiasm for AI builds on long histories of improving processing hardware, data-handling capability and algorithms. Better systems for education and training should be available and should enable the kind of customization long promised but seldom achieved. Aids to medical diagnoses should become more credible, along with aids to development of new therapies. The support provided by today’s ‘smart speakers’ should become more meaningful and more useful (especially if clear attention to privacy and security comes with the increased functionality).”

Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, commented, “While technology raises many serious problems, efforts to limit malicious actors should eventually succeed and make these technologies safer. The huge interest in ethical principles for AI and other technologies is beginning to shift attention towards practical steps that will produce positive change. Already the language of responsible and human-centered AI is changing the technology, guiding students in new ways and reframing the work of researchers. I foresee improved appliance-like and tele-operated devices with highly automated systems that are reliable, safe and trustworthy. Shoshana Zuboff’s analysis in her book ‘Surveillance Capitalism’ describes the dangers and also raises awareness enough to promote some changes. I believe the arrival of independent oversight methods will help in many cases. Facebook’s current semi-independent oversight board is a small step forward, but changing Facebook’s culture and Zuckerberg’s attitudes is a high priority for ensuring better outcomes. True change will come when corporate business choices are designed to limit the activity of malicious actors – criminals, political operatives, hate groups and terrorists – while increasing user privacy. Google’s People and AI Research guidelines are a good step forward, and Apple Design Guidelines are even more effective in clarifying that the goal is high levels of human control and high levels of automation. The misleading directions of humanoid robots and full machine autonomy will continue to be promoted by some researchers, journalists and Hollywood script writers, but those ideas seem more archaic daily. AI and machine learning have a valuable role, maybe as the embedded chips of the future, important, but under human control through exploratory user interfaces that give users better understanding of how the embedded algorithms perform in meaningful cases that are relevant to their needs. Ethical principles (responsibility, fairness, transparency, accountability, auditability, explainable, reliable, resilient, safe, trustworthy) are a good starting point, but much more is needed to bridge the gap with the realities of practice in software engineering, organization management and independent oversight. There is a lot of work to be done, but I see promising early signs. A simple step is a flight data recorder for every robot and AI system. The methods that have made civil aviation so safe could be adapted to record what every robot and AI system does, so that when errors occur, forensic investigators will have the data they need to understand what went wrong and make enforceable, measurable, testable improvements. AI applications can bring many benefits, but they are more likely to succeed when user experience designers have a leading role in shaping human control of highly automated systems. Quantum computing has yet to deliver on its promise. Certain algorithms will benefit from quantum implementations, but I do not yet see broadly applicable quantum computing applications. Blockchain seems more important in supporting ethical AI.”

Meredith Whittaker, a research professor and co-director of NYU’s AI Now research institute, commented, “Efforts like the Just Data Lab, Data for Black Lives, the Algorithmic Justice League, the STOP project in New York and the Anti-Eviction Mapping project illustrate the power of community-led efforts for justice and accountability that employ technological methods. What is interesting about these efforts, however, is not the tech. It is that they all understand justice as a primary concern and employ tech where and when appropriate to achieve this end.”

Valentine Goddard, the founder and executive director of the AI Impact Alliance, which aims to facilitate an ethical and responsible implementation of AI for all humanity, said, “In society’s haste to prioritize economic recovery, AI’s ethical requirements risk being overlooked. Given the current distribution of capacity and resources in AI and AI ethics, given the underrepresentation of women in AI, gender equality will take a huge step backwards. For the average woman, that would mean adapting to digital tools designed and deployed by men in all spheres of their lives (employment, economic security, well-being, civic participation). If we focus on access to work, for example, new research shows that due to COVID-19, the participation of women in the workforce has been set back 30 years already. Previous research has shown that women’s jobs were more likely to be automated, and therefore replaced … in Big Tech – a sector that employs roughly under 15% of women in AI – profits are skyrocketing. Technology companies play an increasingly important role in everyone’s lives, including our civic capacity to engage with democratic institutions. The digital divide, left unaddressed, will silence the voices of digitally illiterate citizens as well as those of entire communities who don’t even have access to internet. As governments are digitizing services to citizens, their reliance on private technology companies is proportionally increasing, giving them more and more power. Not to underplay their expertise and capacity to contribute positively to society, but these are privately owned and managed for-profit organizations that suffer from a critical lack of women and diversity. There is currently no legal obligation to socialize the benefits of the data they collect. Furthermore, from startups to mid-sized businesses in AI, AI expertise is either underfunded or nonexistent. Given the current landscape of uneven access to AI, the lack of large-scale efforts to help citizens understand the implications of AI and data governance, the diversity and gender crisis in AI technology companies, the nonexistence of social impact assessment frameworks, the absence of an obligation to use AI and data to achieve SDGs, I am concerned about the increasing role of technology companies in the lives of citizens in 2025.”

Barry Chudakov, founder and principal of Sertain Research, said, “Before answering whether AI will mostly be used in ethical or questionable ways in the next decade, a key question for guidance going forward will be, What is the ethical framework for understanding and managing artificial intelligence? Our ethical frameworks grew out of tribal wisdom which was set down in so-called holy books that were the foundation of religions. These have been the ethical framework for the Judeo-Christian-Islamic-Buddhist world. While the humanitarian precepts of these teachings are valid today, modern technologies and artificial intelligence raise a host of AI quandaries these frameworks simply don’t address. Issues such as management of multiple identities; the impingement of the virtual world on the actual world and how boundaries should be drawn—if boundaries should be drawn; striking a balance between screen time and real-world time; parsing, analyzing and improving the use of tracking data to ensure individual liberty; collecting, analyzing, and manipulating data exhaust from online ventures to ensure citizen privacy; the use of facial recognition technologies, at the front door of homes and by municipal police forces, to stop crime. That is a small set of examples, but there are many more that extend to air and water pollution, climate degradation, warfare, finance and investment trading, and civil rights. Our ethical book is half-written. While we would not suggest our existing ethical frameworks have no value, there are pages and chapters missing. Further, while we have a host of regulatory injunctions such as speed limits, tax rates, mandatory housing codes and permits, etc., we consider our devices so much a part of our bodies that we use them without a moment’s thought for their effects upon the user. We accept the algorithms that enhance our searches and follow us around the internet and suggest another brand of facial moisturizer as a new wrinkle on a convenience and rarely give it a second thought. We do not acknowledge that our technologies change us as we use them; that our thinking and behaviors are altered by the cyber effect [Mary Aiken]; that devices and gadgets don’t just turn us into gadget junkies, they may abridge our humanity, compassion, empathy, and social fabric. As Greg Brockman, cofounder of OpenAI, remarked: “Now is the time to ask questions. Think about the kinds of thoughts you wish people had inventing fire, starting the industrial revolution, or [developing] atomic power.” [Fast Company, “Google’s quantum bet on the future of AI—and what it means for humanity,” by Katerina Brooker]. Will AI mostly be used in ethical or questionable ways in the next decade? I would start answering this question by referencing what Derrick de Kerckhove described recently in his “Five words for the future” [Noema, “Quantum Theories of Consciousness”]: “Big data is a paradigmatic change from networks and databases. The chief characteristic of big data is that the information does not exist until the question. It is not like the past where you didn’t know where the information was, it was somewhere and you just had to find it. Now, and it’s a big challenge to intelligence, you create the answer by the question.
(Ethics then effectively becomes) how do you create the right question for the data?” So, for AI to be mostly used in ethical ways, we must become comfortable with not knowing; with needing to ask the right question and understanding that this is an iterative process that is exploratory, not dogmatic. Beginner’s mind [Shunryu Suzuki] becomes our first principle—the understanding from which ethics flows. Many of our ethical frameworks have been built on dogmatic injunctions: thou shalt and shalt not. Thus, big data effectively reimagines ethical discourse: if until you ask the question, you will not hear or know the answer, you proceed from unknowing. With that understanding, for AI to be used in ethical ways, and to avoid questionable approaches, we must begin by reimagining ethics itself. I believe quantum computers may evolve to assist in building ethical AI, not just because they can work faster than traditional computers, but because they operate differently. AI systems depend on massive amounts of data that algorithms ingest, classify and analyze using specific characteristics; quantum computers enable more precise classification of that data. Eventually, quantum computing-based AI algorithms could find patterns that are invisible to classical computers, making certain types of intractable problems solvable. But there is a fundamental structural problem that must be addressed first: vastly more powerful computing power may not resolve the human factor. Namely that the moral and ethical framework for building societal entities (churches, governments, constitutions, laws, etc.) grew out of tribal culture, nomadic culture, which recorded precepts which then turned into codified law. Moreover, the tool behind that moral and ethical framework was alphabetic literacy. The global adult literacy rate was 86% in 2016, while the youth literacy rate was 91%. So, effectively, the entire world was capable of understanding and, to some extent, corroborating that ethical and moral framework. While I am not suggesting we will abandon all ethics due to the arrival of artificial intelligence, we cannot discount the enormous significance that tool logic—alphabets and books—played in establishing that moral framework. Our churches are based on books; our schools are built on book learning; our governments are a web of words we call laws. But we’re in a different world now. As William Gibson said in 2007: “The distinction between cyberspace and that which isn’t cyberspace is going to be unimaginable.” It’s now time to imagine the unimaginable. This is because AI operates from an entirely different playbook. The tool logic of artificial intelligence is embedded machine learning; it is quantum, random, multifarious. We are leaving the Gutenberg Galaxy [McLuhan] and its containment patterns of rule-based injunctions. The tool logic of the book is linear, celebrates one-at-a-timeness and the single point of view; alphabetic sequentiality supplanted global/spatial awareness and fostered fear of the image; literacy deified books as holy and the “word of God.” AI, on the other hand, takes data sets and “learns” or improves from the analysis of that data. This is a completely different dynamic, with a different learning curve and demands, than traditional book learning. Moreover, especially as we use AI to address our most urgent issues and problems—complex supply chains, improved investment strategies, enhanced climate projections, etc.—the depth and level of learning required increases (another acceleration). 
What learning structures or schools are prepared to boost our AI literacy rate to 86%-91%? From May 25 to September 17, 1787, the U.S. Constitutional Convention took place in Philadelphia, Pennsylvania, to decide how America was going to be governed. This resulted in the creation of the Constitution of the United States, placing the Convention among the most significant events in American history. I believe we now need a 21st century Quantum AI Constitutional Convention. The purpose of such a convention is clear: to inaugurate a key issue not only for AI tech companies in the coming decade but for the known world—namely, establishing clear ethical guidelines and protocols for the deployment of AI, and then creating an enlightened, equitable means of policing and enforcing those guidelines. This will necessitate addressing the complexities of sensitive contexts and environments (face recognition, policing, security, travel, etc.) as well as a host of intrusive data collection and use case issues, such as tracking, monitoring, AI screening for employment, or algorithmic biases. This will demand transparency, both at the site of the deployment of AI and in addressing its implications. Without those guidelines and protocols—the 21st century equivalent of the Magna Carta and its evolved cousin, the U.S. Constitution—there will be manufactured controversy over what is ethical and what is questionable. Most humans, unless they are civil or aerospace engineers, do not know the intricacies of bridge-building or the propulsion dynamics of launching a rocket. Yet because bridges and rockets have limited scope in our lives and may play only a small part in our day-to-day existences, we leave the ethics and dynamics of those technologies to experts. Unfortunately, AI is not like that. It is ubiquitous and pervasive. We hardly have the language or the inclination to fully appreciate what AI can and will do in our lives. This is not to say that we cannot; it is to say that we are unprepared to see, think, debate, and wisely decide how to best move forward with AI development. Once there is a global constitution and Bill of AI Rights, with willing signatories around the world, quantum computing will be on track to evolve in assisting the building of ethical AI. However, the unfolding of that evolution will collide with legacy cultural and societal structures. So, as we embrace and adopt the logic of AI, we will change ourselves and our mores; effectively we will be turning from hundreds or thousands of years of codified traditional behaviors to engage with and adapt to the chaotic implications of AI. That is, compared to the logic of books and the alphabetic structures of church, school, and government based on written law, AI presents a different logic, a different mindset. Wise souls such as those in The Partnership on AI Consortium or Data and Society are paving the way for this transition by insisting that AI be mindful of ethical human boundaries. Yet it will be about as easy to avoid getting caught up in the logic of AI as it is for us to ignore our cell phones or discount their effects on our lives. Humans must remain in the loop as AI systems are created and implemented. To act otherwise is absurd, even suicidal. AI represents not human diminishment and replacement, but a different way of being in the world, a different way of thinking about and responding to the world—namely, to use designed intelligence to augment and expand human intelligence.
Yes, this will create new quandaries and dilemmas for us—some of which may portend great danger. But AI needs emotional intelligence, compassion, depth of understanding and awareness just as our human offspring do. It is an offspring of human ingenuity. But just as neglect of a child constitutes child abuse and is abhorrent, neglecting to engage with the implications of AI on human lives and systems, ethics, and mores would be the worst form of mind and soul abuse. Our engagement with AI is not ending because AI is now in the province of the most sophisticated technology developers; it is just beginning. We will braid AI into the fabric of our lives and in order to do so successfully, society at many levels must be present and mindful at every step of AI integration into human society.”

Paul Jones, professor emeritus of information science at the University of North Carolina – Chapel Hill, observed, “Unless, as I hope happens, the idea that all tech is neutral is corrected, there is little hope or incentive to create ethical AI. Current applications of AI and their creators rarely interrogate ethical issues except as some sort of parlor game. More often I hear data scientists disparaging what they consider ‘soft sciences’ and claiming that their socially agnostic engineering approach or their complex statistical approach is a ‘hard science.’ While I don’t fear an AI war, a Capek-like robot uprising, I do fear the tendency not to ask the tough questions of AI. Not just of general AI where most of such questions are entertained, but in narrow AI where most progress and deployment are happening quickly. I love to talk to Google about music, news and trivia. I love my home being alert to my needs. I love doctors getting integrated feedback on lab work and symptoms. I could not now live without Google Maps. But I am aware that ‘We become what we behold. We shape our tools and then our tools shape us,’ as Father John Culkin reminded us. For most of us, the day-to-day conveniences of AI by far outweigh the perceived dangers. Dangers will come on slow and then cascade before most of us notice. That’s not limited to AI. Can AI help us see the dangers before they cascade? And if AI does, will it listen and react properly? While engineers are excited about quantum computing, it only answers part of what is needed to improve AI challenges. Massive amounts of data, massive amounts of computing power (not limited to quantum as a source), reflexive software design, heuristic environments, highly connected devices, sensors (or other inputs) in real time are all needed. Quantum computing is only part of the solution. More important will be insight as to how to evaluate AI’s overall impact and learning.”

R. “Ray” Wang, principal analyst, founder and CEO of Silicon Valley-based Constellation Research, noted, “Right now we have no way of enforcing these principles in play. Totalitarian, Chinese, CCP-style AI is the preferred approach for dictators. The question is: Can we require and can we enforce AI ethics? We can certainly require, but the enforcement may be tough. https://sloanreview.mit.edu/article/three-people-centered-design-principles-for-deep-learning/.”

Amali De Silva-Mitchell, a futurist and consultant participating in multistakeholder, global internet governance processes, wrote, “AI will be everywhere, but there will be issues with the quality. The individual’s self-identity could decline, and the need to conform to a norm increase, as being out of step will create the need to sort out the exceptions. People will become more managed, spontaneous behavior will be discouraged, although creativity at work will be encouraged. Humans will be replaced in many settings by robots, forcing them to compete; economic security may become something of the past unless the state provides universal benefits. People will guard their minds as if they are a gold mine as that will be the ticket to their individual sustainability. Although there are lots of discussions [about ethical AI design], there are few standards or those that exist are at a high level or came too late for the hundreds of AI applications already rolled out. These base AI applications will not be reinvented, so there is embedded risk. However, the more discussion there is, the greater the understanding of the existing ethical issues, and that can be seen to be developing especially as societal norms and expectations change. AI applications have the potential to be beneficial, but the applications have to be managed so as not to cause unintended harms. For global delivery and integrated service, there needs to be common standards, transparency and collaboration. Duplication of efforts is a waste of resources. Quantum computing will allow larger volumes of data to be processed and perhaps allow for fine tuning of error, fact, rule or reasonableness checking or provide greater data sources and complex reasoning so as to rise above an average misapprehension in natural language processing, for example. A problem could be the reliance on quantum computing that leads to a lazy human mind, and the opportunity for risks to arise. The human transcends his own space, but there could be a chance that that human gets led, rather than leads, as fewer and fewer scientists understand the full extent of the complexity of quantum computing technologies with time.”

Beth Noveck, director, NYU Governance Lab and its MacArthur Research Network on Opening Governance, responded, “Successful AI applications depend upon the use of large quantities of data to develop algorithms. But a great deal of human decision-making is also involved in the design of such algorithms, beginning with the choice about what data to include and exclude. Today, most of that decision-making is done by technologists working behind closed doors on proprietary private systems. If we are to realize the positive benefits of AI, we first need to change the governance of AI and ensure that these technologies are designed in a more participatory fashion with input and oversight from diverse audiences, including those most affected by the technologies. While AI can help to increase the efficiency and decrease the cost, for example, of interviewing and selecting job candidates, these tools need to be designed with workers lest they end up perpetuating bias. While AI can make it possible to diagnose disease better than a single doctor can with the naked eye, if the tool is designed only using data from white men, it may be less optimal for diagnosing diseases among Black women. Until we commit to making AI more transparent and participatory, we will not realize its positive potential or mitigate the significant risks. While quantum computing will radically change the speed at which we can make use of and model ever-larger quantities of data, allowing us to generate more accurate fraud-detection systems, for example, or other AI applications, I don’t see quantum as having a significant impact on the ethical dimensions of AI. It isn’t relevant to the question of whether the data that feeds these systems is open and transparent. It doesn’t change who is involved in design and decision-making and whether those audiences are diverse. It doesn’t speak to whether the policies and regulations that govern are designed to foster technological humanism and the well-being of people or to maximize profit.”

Richard Lachmann, professor of political sociology at the State University of New York-Albany, noted, “AI will be used mainly in questionable ways. For the most part, it is being developed by corporations that are motivated exclusively by the desire to make ever bigger profits. Governments see AI, either developed by government programmers or on contract by corporations, as a means to surveil and control their populations. All of this is ominous. Global competition is a race to the bottom as corporations try to draw in larger audiences and control more of their time and behavior. As governments get better at surveilling their populations, standards for individual privacy are lowered. For almost all people, these applications will make their lives more isolated, expose them to manipulation, and degrade or destroy their jobs. The only hopeful sign is rising awareness of these problems and the beginnings of demands to break up or regulate the huge corporations. Whatever quantum computing achieves, it will be created by humans who serve particular interests – either corporations seeking profit or governments seeking to control populations. So humans will be in the loop, but not all humans – most likely just those with money and power. Those people always work to serve their own interests, so it is unrealistic to expect that the AI systems they create or contract to be created will be ethical. The only hope for ethical AI is if social movements make those demands and keep up the pressure to be able to see what is being created and to impose controls.”

Rob Frieden, a professor of telecommunications law at Penn State who previously worked with Motorola and has held senior policy positions at the FCC and the NTIA, commented, “I cannot see a future scenario where governments can protect citizens from the incentives of stakeholders to violate privacy and fair-minded consumer protections. Surveillance, discrimination, corner cutting, etc., are certainties. I’m mindful of the adage: garbage in, garbage out. It’s foolish to think AI will lack flawed coding.”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, observed, “The good possibilities here are endless. But the questionable ways are endless, and we have a very poor track record of stopping ethically questionable developments in most areas of life – why wouldn’t that apply here? In social science, the best predictor of future behavior is past behavior. The opium addict who says, after a binge, that ‘they’ve got this’ – they don’t need to enter treatment and they’ll never use opium again – is (rightly) not believed. So, in an environment where ethically questionable behavior has been allowed or even glorified in areas such as finance, corporate governance, government itself, pharmaceuticals, education and policing, why all of a sudden are we supposed to believe that AI developers will behave in an ethical fashion? There aren’t any guardrails here, just as there weren’t in these other spheres of life. AI has the potential to transform how cities work, how medical diagnosis happens, how students are taught and a variety of other things. All of these could make a big difference in the lives of most people. But those benefits won’t come if AI is controlled by two or three giant firms with 26-year-old entrepreneurs as their CEOs. I don’t think I’m going out on a limb saying that. The biggest concern I have regarding global competition is that the nation that figures out how to harness AI to improve the lives of all of its citizens will come out on top. The nations that refuse to do that – that either bottle up the benefits of AI so that only 15-20 percent of the population benefits from it, or where large segments of the population reject AI when they realize they’re being left behind (again!) – will lose out completely. The United States is in the latter category. The same people who can’t regulate banks, finance, education, pharmaceuticals and policing are in a very poor position to make AI work for all people. It’s basic institutional social scientific insight. [Regarding quantum computing] we must ask ourselves, ‘Who are we surrendering power to when we claim quantum computing will make ethical decisions by itself?’ Relying on one technology to fix the potential defects in another technology suffers from the same basic problem – technologies don’t determine ethics. People, cultures and institutions do. If those people, cultures and institutions are strong, then getting more ethical outcomes is more likely than not. We simply don’t have that. In fact, relying on quantum computing to fix anything sounds an awful lot like expecting free markets to fix the problems created by free markets. This homeopathic solution has not worked with markets, so it is difficult to see how it will work with computing. So, let’s take an elementary example that may be more applicable to the English-speaking world than elsewhere. The inventor of an AI program seeks to make as much money as possible in the shortest amount of time, because that is the prevailing institutional and economic model they have been exposed to. They develop their AI/quantum computing platform to make ‘ethical decisions,’ but those decisions happen in a context where the institutional environment rewards the behaviors associated with making as much money as possible in the shortest amount of time.
I ask you, given the initial constraint (‘the primary goal is to be a billionaire’), all of the ethical decisions programmed into the AI/quantum computing application will be oriented toward that primary goal and make ethical decisions around that.”

Leslie Daigle, a longtime leader in the organizations building the internet and making it secure, noted, “My biggest concern with respect to AI and its ethical use has nothing to do with AI as a technology and everything to do with people. Nothing about the 21st century convinces me that we, as a society, understand that we are interdependent and need to think of something beyond our own immediate interests. Do we even have a common view of what is ethical? Taking one step back from the brink of despair, the things I’d like to see AI successfully applied to, by 2030, include things like medical diagnoses (reading x-rays, etc.). Advances there could be monumental. I still don’t want my fridge to order my groceries by 2030, but maybe that just makes me old? :-) My concerns about ethical AI are not about the technology. Changing the technology is not going to improve the likelihood of it being any more ethical. If you ask, ‘Will quantum computing advance the reach and utility of AI?’ then, yes, sure, maybe even by 2030. Quantum computing makes trivial the computation of things that are laborious using traditional computers. AI requires much computation. Perhaps quantum computing will allow that processing to happen quickly enough to make AI even more powerful – to the point of appearing sentient. But this isn’t about putting a spooky technology (quantum computing) together with a black box technology (AI) and wondering if spontaneous creation will happen. It’s still just about computation. Humans are unlikely to be able to keep up, and thus there is a point at which the only means by which they will stay in the loop is by having a hand on the power plug.”

Maja Vujovic, a consultant for digital and ICT at Compass Communications, noted, “Ethical AI might become a generally agreed upon standard, but it will be impossible to enforce it. In a world where media content and production, including fake news, will routinely be AI-generated, it is more likely that our expectations around ethics will need to be lowered. Audiences might develop a ‘thicker skin’ and become more tolerant towards the overall unreliability of the news. This trend will not render them more skeptical or aloof, but rather more active and much more involved in the generation of news, in a range of ways. Certification mechanisms and specialized AI tools will be developed to deal specifically with unethical AI, as humans will prove too gullible. In those sectors where politics don’t have a direct interest, such as health and medicine, transportation, e-commerce and entertainment, AI as an industry might get more leeway to grow organically, including self-regulation. Quantum computing will prove quite elusive and hard to measure, and therefore will progress slowly and painstakingly. Combining two insufficiently understood technologies would not be prudent. Perhaps the right approach would be to couple each with blockchain-based ledgers, as a way to track and decode their black-box activity.”

Gus Hosein, executive director of Privacy International, observed, “Unless AI becomes a competition problem and gets dominated by huge American and Chinese companies, the chances of ethical AI are low, which is a horrible reality. If it becomes widespread in deployment, as we’ve seen with facial recognition, then the only way to stem its deployment in unethical ways is to come up with clear bans and forced transparency. This is why AI is so challenging. Equally, it’s quite pointless, but that won’t stop us from trying to deploy it everywhere. The underlying data quality and societal issues mean that AI will just punish people in new, different and the same ways. If we continue to be obsessed with innovators and innovation rather than social infrastructure, then we are screwed. [Will new technology such as quantum computing help in some regard?] Much in the path of innovation would have to go right for a new obsession around a new tech to lead to equitable outcomes within a short time frame during an economic and social crisis. I can’t think of an example in history that has gone that well. Hell, even the more boring tech like rail, water distribution and telecoms don’t work that way and never have. So long as we deify innovators and their tools and convince willing governments to buy systems that grant them their dreams of control, we will continue to build tech that almost magically and yet consistently fails and oppresses.”

Henning Schulzrinne, Internet Hall of Fame member and former chief technology officer for the Federal Communications Commission, said, “The answer strongly depends on the shape of the government in place in the country in the next few years. In a purely deregulatory environment with strong backsliding towards law-and-order populism, there will be plenty of suppliers of AI that will have little concern about the fine points of AI ethics. Much of that AI will not be visible to the public – it will be employed by health insurance companies that are again free to price-discriminate based on preexisting conditions, by employers looking for employees who won’t cause trouble, by others who will want to nip any unionization efforts in the bud, by election campaigns targeting narrow sub-groups. The two seem largely unconnected, with quantum computing, if it becomes viable, suitable for relatively simple parallel computation such as code breaking, not pattern matching or general AI.”

Daniel Castro, vice president at the Information Technology and Innovation Foundation noted, “The question should be: ‘Will companies and governments be ethical in the next decade?’ If they are not ethical, there will be no ‘ethical AI.’ If they are ethical, then they will pursue ethical uses of AI, much like they would with any other technology or tool. This is one reason why the focus in the United States should be on global AI leadership, in partnership with like-minded European and Asian allies, so they can champion democratic values. If China wins the global AI race, it will likely use these advancements to dominate other countries in both economic and military arenas.”

Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC, observed, “The promise: AI and ML could create a world that is more efficient, wasting less energy or resources and providing health care, education, entertainment, food and shelter to more people at lower costs. Being legally blind, I look forward to the day of safe and widely available self-driving cars, for example. Just as the steam engine, electricity, bicycles and personal computers (especially laptops) amplified human capacity, AI and ML hopefully will do the same. The concerns: AI and its cousin ML are still in their infancy – and while the technological progress is somewhat predictable, the actual human consequences are murky. The promise is great – so was our naive imagination of what the internet would do for humankind. Commercial interests (and thus their deployment of AI and ML) are far more agile and adaptable than either the humans they supposedly serve or the governance systems. Regulation is largely reactionary, rarely proactive – typically, bad things have to happen before frameworks to guide responsible and equitable behavior are written into laws, standards emerge or usage is codified into acceptable norms. It is great that the conversation has started; however, there is a lot of ongoing development in the boring world of enterprise software that is largely invisible. Credit scoring comes to mind as a major potential area of concern – while the credit scoring firms always position their work as providing consumers more access to financial products, the reality is that we’ve created a system that unfairly penalizes the poor and dramatically limits fair access to financial products at equitable prices. AI and ML will be used by corporations to evaluate everything they do and every transaction, rate every customer and their potential (value), predict demand, pricing and targeting, and rate their own employees and partners – while this can lead to efficiency, productivity and creation of economic value, a lot of it will lead to segmenting, segregation, discrimination, profiling and inequity. Imagine a world where pricing is different for everyone from one moment to the next, and these predictive systems can transfer huge sums of value in an instant, especially from the most vulnerable. Quantum computing has a long way to go and we barely understand it, how to ‘program’ it and how to build it at cost-effective scale. My point of view is that we will just be crossing those thresholds in 10 years’ time, maybe eight years. I’d be surprised (pleasantly so) if we got to commercial scale QC in five years. Meanwhile, AI and ML are well on the way to commercialization at scale, as well as custom silicon SoCs (systems on chip) targeted to provide high-speed performance for AI and ML algorithms. This custom silicon will have the most impact in the next five to 10 years, as well as the continued progress of memory systems, CPUs and GPUs. Quantum computing will ‘miss’ this first wave of mass commercialization of AI and ML, and thus will not be a significant factor. Why? It is possible that QC might have an impact in the 10- to 20-year timeframe, way too early to predict with any confidence (we simply have too much work ahead). Will humans still be in the loop? That is as much a policy decision as a pragmatic decision – we are rapidly getting to the point where synthetically created algorithms (be it AI, CA [cloud accounting], etc.) will be very hard for humans to understand; there are a few examples that suggest we may already be at that point. Whether we can even create testing and validation algorithms for ML (much less AI) is a key question – how will we verify these systems?”

Chris Savage, a leading expert in legal and regulatory issues based in Washington, D.C., noted, “AI is the new social network, by which I mean: Back in 2007 and 2008 it was easy to articulate the benefits of robust social networking, and people adopted the technology rapidly, but its toxic elements – cognitive and emotional echo chambers, and the economic incentives of the platforms to drive engagement via stirred-up negative emotions rather than increased awareness and acceptance (or at least tolerance) of others – took some time to play out. Similarly, it is easy to articulate the benefits of robust and ubiquitous AI, and those benefits will drive substantial adoption in a wide range of contexts. But we simply do not know enough about what ‘ethical’ or ‘public-interested’ algorithmic decision-making looks like to build those concepts into actually-deployed AI (indeed, we don’t know enough about what human ‘ethical’ and ‘public-interested’ decision-making looks like to effectively model it). Trying to address those concerns will take time and money on the part of the AI developers, with no evident return on that expenditure. So, it won’t happen, or will be short-changed, and – as with social media – I predict a ‘Ready, Fire, Aim’ scenario for the deployment of AI. On a longer timescale – give me 50 years instead of 10 – I think AI will be a net plus even in ethical/public interest terms. But the initial decade or so will be messy. On some level, like genetic/evolutionary programming, effective, widely deployed quantum computing seems always to be a decade or so away. Not quite vaporware, but a bit in that direction. More fundamentally, AI has something of an architecture problem: it is highly computationally intensive (think Alexa or Siri), to such a degree that it is difficult to do onsite. Instead, robust connections to a powerful central processing capability (in the cloud) are necessary to make it work, which requires robust high-speed connectivity to the end points, which raises problems of latency (too much time getting the bits between the endpoint and the processing) for many applications. Quantum computing may make the centralized/cloud-based computations more rapid and thorough, but it will have no effect on latency. And if we can’t get enough old-style Boolean silicon-based computing power out to the edges, which we seem unable to do, the prospect of getting enough quantum computing resources to the edges is bleak. As to ethics, the problem with building ethical AI isn’t that we don’t have enough computational power to do it right (an issue that quantum computing could, in theory, address), it’s that we don’t know what ‘doing it right’ means in the first place.”

danah boyd, founder and president of the Data & Society Research Institute, and principal researcher at Microsoft, observed, “We misunderstand ethics when we think of it as a binary, when we think that things can be ethical or unethical. A true commitment to ethics is a commitment to understanding societal values and power dynamics – and then working towards justice. Most data-driven systems, especially AI systems, entrench existing structural inequities into their systems by using training data to build models. The key here is to actively identify and combat these biases, which requires the digital equivalent of reparations. While most large corporations are willing to talk about fairness and eliminating biases, most are not willing to entertain the idea that they have a responsibility for data justice. These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism. Quantum computing will most certainly evolve. But that has nothing to do with building ethical AI. I don’t even understand this question. It’s a deeply techno-solutionist notion that literally makes no sense to me.”

Jamais Cascio, research fellow at the Institute for the Future, wrote, “I expect that there will be an effort to explicitly include ethical systems in AI that have direct interaction with humans, but largely in the most clear-cut, unambiguous situations. The most important ethical dilemmas are ones where the correct behavior by the machine is situational: healthcare AI that intentionally lies to memory care patients rather than re-traumatize them with news of long-dead spouses; military AI that recognizes and refuses an illegal order; all of the ‘trolley problem’-type dilemmas where there are no good answers, only varieties of bad outcomes. But more importantly, the vast majority of AI systems will be deployed in systems for which ethical questions are indirect, even if they ultimately have outcomes that could be harmful. High-frequency trading AI will not be programmed to consider the ethical results of stock purchases. Deepfake AIs will not have built-in restrictions on use. And so forth. What concerns me the most about the wider use of AI is the lack of general awareness that digital systems can only manage problems that can be put in a digital format. An AI can’t reliably or consistently handle a problem that can’t be quantified. There are situations and systems for which AI is a perfect tool, but there are important arenas – largely in the realm of human behavior and personal interaction – where the limits of AI can be problematic. I would hate to see a world where some problems are ignored because we can’t easily design an AI response. To the degree that quantum computing will allow for the examination of a wide variety of possible answers to a given problem, quantum computing may enhance the capacity of systems to evaluate best long-term outcomes. There’s no reason to believe that quantum computing will make ethical systems easier to create, however. And if quantum computing doesn’t allow for ready examination of multiple outcomes, then it would be no better or worse than conventional systems.”

John Smart, foresight educator, scholar, author, consultant and speaker, observed, “Ethical AI frameworks will be used in high-reliability and high-risk situations, but the frameworks will remain primitive and largely human-engineered (top-down) in 2030. Truly bottom-up, evolved and selected collective ethics and empathy (affective AI), similar to what we find in our domestic animals, won’t emerge until we have truly bottom-up, evo-devo approaches to AI. AI will be used well and poorly, like any tool. The worries are the standard ones: plutocracy, lack of transparency, unaccountability of our leaders. The real benefits of AI will come when we’ve moved into a truly bottom-up style of AI development, with hundreds of millions of coders using open-source AI code on GitHub, with natural-language development platforms that lower the complexity of altering code, with deeply neuro-inspired commodity software and hardware, and with both evolutionary and developmental methods being used to select, test and improve AI. In that world, which I expect post-2040, we’ll see truly powerful personal AIs. Personal AIs are what really matter to improving civil society. The rest typically serve the plutocracy.”

Benjamin Shestakofsky, assistant professor of sociology at the University of Pennsylvania, commented, “It is likely that ‘ethical’ frameworks will increasingly be applied to the production of AI systems over the next decade. However, it is also likely that these frameworks will be more ethical in name than in kind. Barring relevant legislative changes or regulation, the implementation of ethics in tech will resemble how large corporations manage issues pertaining to diversity in hiring and sexual harassment. Following ‘ethical’ guidelines will help tech companies shield themselves from lawsuits without forcing them to develop technologies that truly prioritize justice and the public good over profits. Humans will remain in the loop as AI systems are created and implemented. AI systems will not autonomously ‘learn’ to understand and respond to the ever-changing social contexts and conditions in which they are deployed.”

Bernie Hogan, senior research fellow at the Oxford Internet Institute, responded, “People treat ethics in AI like it’s about robots with guns. It’s not. It’s about people with data who can learn some things at scale that ordinary people cannot learn or adapt to. The AI systems are generally about estimating returns, classifying things and adapting to feedback. If the data that is required to do this is locked away by a firm for (good, ethical) privacy reasons, it means that the result is likely to be a privatised set of knowledge practices – thus reinforcing inequalities all in the name of ethical practice. What gives me some hope is that technologists will use open data standards to create local AIs on computers or mobile devices that need not be plugged into larger systems to work – they become personal assistants for their users. My worry is that people won’t trust an AI assistant not to spy on them. Regardless, we are well overdue for the pendulum to swing back to local and decentralised computing in the AI era. The unfortunate reality is that the incentives for this are very skewed. While it would be better if we all had private, local, trainable AI in our pocket, those who create such systems can often extract more value from the data than from the consumer, so we are left giving away our private information and hoping the platform has our back. Where such locally trained AI might be of greatest benefit is in helping with health monitoring in a privacy-enhanced framework. Most issues with the ethics of computing happen through the mismatch of measurement and phenomena, not with the manipulation of the measurements. Quantum computing helps us solve certain classes of optimisation problems. It does not tell us what to optimise for – that’s politics.”

Mike Godwin, former general counsel for the Wikimedia Foundation and creator of Godwin’s Law, wrote, “The most likely outcome, even in the face of increasing public discussions and convenings regarding ethical AI, will be that governments and public policy will be slow to adapt. The costs of AI-powered technologies will continue to decline, making deployment prior to privacy guarantees and other ethical safeguards more likely. The most likely scenario is that some kind of public abuse of AI technologies will come to light, and this will trigger reactive limitations on the use of AI, which will either be blunt-instrument categorical restrictions on its use or (more likely) a patchwork of particular ethical limitations addressed to particular use cases, with unrestricted use occurring outside the margins of these limitations.”

Amy Webb, founder of the Future Today Institute, wrote, “We’re living through a precarious moment in time. China is shaping the world order in its own image, and it is exporting its technologies and surveillance systems to other countries around the world. As China expands into African countries and throughout Southeast Asia and Latin America, it will also begin to eschew operating systems, technologies and infrastructure built by the West. China has already announced that it will no longer use U.S.-made computers and software. China is rapidly expanding its 5G and mobile footprints. At the same time, China is drastically expanding its trading partners. While India, Japan and South Korea have plenty of technologies to offer the world, it would appear as though China is quickly ascending to global supremacy. At the moment, the U.S. is enabling this, and our leaders do not appear to be thinking about the long-term consequences. When it comes to AI, we should pay close attention to China, which has talked openly about its plans for cyber sovereignty. But we should also remember that there are cells of rogue actors who could cripple our economies simply by mucking with the power or traffic grids, causing traffic spikes on the internet or locking us out of our connected home appliances. These aren’t big, obvious signs of aggression, and that is a problem for many countries, including the United States. Most governments don’t have a paradigm describing a constellation of aggressive actions. Each action on its own might be insignificant. What are the escalation triggers? We don’t have a definition, and that creates a strategic vulnerability. Quantum technologies could usher in an era of unbreakable communications, which would offer greater protections in an array of fields: financial services, healthcare, education, law. However, quantum computing isn’t required to build a new paradigm of responsible research and ethical innovation into the field of AI. If we’re waiting around for quantum computing to sort out ethics in artificial intelligence, we’re going to assume more risk, not less. Mercifully, sorting out ethics in artificial intelligence is far easier than achieving quantum supremacy.”

Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Telecommunications, wrote, “The answer is both good and bad. Technology doesn’t adopt ethical priorities that humans don’t prioritize themselves. So, a better question could be whether society will pursue a more central role of ethics and values than we’ve seen in the past 40 years or so. Arguably, 2020 has shown a resurgent demand for values and principles for a balanced society. If, for example, education becomes a greater priority for the western world, AI could amplify our ability to learn more effectively. Likewise, with racial and gender biases. But this trend is strongest only in some western democracies. China, for example, places a greater value on social stability and enjoys a fairly monochromatic population. With the current trade wars, the geopolitical divide is also becoming a technological divide that could birth entirely different shapes of AI depending on their origin. And it is now a very multi-polar world with an abundance of empowered actors. So, these tools lift up many other boats with their own agendas who may be less bound by western liberal notions of ethics and values. The pragmatic assumption might be that many instances of ethical AI will be present where regulations, market development, talent attraction, and societal expectations require them to be so. At the same time, there will likely be innumerable instances of ‘bad AI,’ weaponized machine intelligence and learning systems designed to exploit weaknesses. Like the internet and globalization, the path forward is likely less about guiding such complex systems towards utopian outcomes, and more about adapting to how humans wield them under the same competitive and collaborative drivers that have attended the entirety of human history.”

Shel Israel, Forbes columnist and author of many books on disruptive technologies, commented, “Most developers of AI are well-intentioned, but issues that have been around for over 50 years remain unresolved: 1) Should AI replace people or back them up? I prefer the latter in many cases. But economics drive business and returns to shareholders, so current trends will continue for more than five years, because the problems will not be overwhelmingly obvious for more than five years. 2) Google already knows who we are, where we are, the context of our activities and who we are with. Five years from now, technology will know our health, when we will die, whether it will be by natural causes, and so on down the line. Will AI help a patient by warning her/him of a cancer likelihood so they can get help, or an employer so they can get rid of those employees before they become an expense? I think both will occur, so AI will make things both better and worse. 3) The technology itself is neither good nor evil. It is just a series of algorithms. It is how people will use it that will make a difference. Will government regulate it better? I doubt it. Should it? Not until we have better governments that are more dedicated to serving the needs of everyday people. Quantum computing does not change the principles of computing. But, in theory, it allows computers to solve problems and perform faster by orders of magnitude. They will be smarter because AI is starting to improve exponentially. Once again, the computing itself will be neither good nor evil. That is up to those who develop, sell and use the technologies. Perhaps gunmakers intend them for defense, but that does not stop thousands and thousands of human deaths and animals being killed just for the fun of it.”

Michael Zimmer, director of data science and associate professor in the department of computer science at Marquette University, responded, “While there has certainly been increased attention to applying broader ethical principles and duties to the development of AI, I feel the market pressures are such that companies will continue to deploy narrow AI over the next decade with only a passing attentiveness to ethics. Yes, many companies are starting to hire ‘ethics officers’ and engage in other ways to bring ethics into the fold, but we’re still very early in the ability to truly integrate this kind of framework into product development and business decision processes. Think about how long it took to create quality control or privacy officers. We’re at the very start of this process with AI ethics, and it will take more than 10 years to realize.”

Kathleen M. Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, noted, “While there is a move toward ethical AI, it is unlikely to be realized in the next decade. First, there are a huge number of legacy systems that would need to be changed. Second, what it means for AI to be ethical is not well understood; and once understood, it is likely to be the case that there are different ethical foundations that are not compatible with each other. Which means that AI might be ethical by one framework, but not by another. Third, for international conflict and for conflict with non-state actors, terror groups and crime groups – there will be AI on both sides. It is unlikely that both sides would employ the same ethical frameworks. What gives me the most hope is that most people, regardless of where they are from, want AI and technology in general to be used in more ethical ways. What worries me the most is that without a clear understanding of the ramifications of ethical principles, we will put in place guidelines and policies that will cripple the development of new technologies that would better serve humanity. AI will save time, allow for increased control over your living space, do boring tasks, help with planning, auto-park your car, fill out grocery lists, remind you to take medicines, support medical diagnosis, etc. The issues that are both exciting and concerning center on how AI will be used to assess, direct, control and alter human interaction and discourse. Where AI meets human social behavior is a difficult area. Tools that auto-declare messages as disinformation could be used by authoritarian states to harm individuals.”

Mireille Hildebrandt, expert in cultural anthropology and the law and editor of “Law, Human Agency and Autonomic Computing,” observed, “Considering the economic incentives, we should not expect ‘ethical AI,’ unless whatever one believes to be ethical coincides with shareholder value. Ethical AI is a misnomer. AI is not a moral agent; it cannot be ethical. Let’s go for responsible AI instead, and ground the responsibility of 1) developers, 2) manufacturers and assemblers, 3) those who put it on the market, 4) those who use it to run their business and 5) those who use it to run public administration in enforceable legal rights and obligations – notably, a properly reconfigured private-law liability, together with public-law restrictions, certification and oversight. Ethical AI is PR. ‘Don’t ask if artificial intelligence is good or fair, ask how it shifts power’ – Pratyusha Kalluri, ‘Nature,’ 7 July 2020.”

Frank Kaufmann, president of the Twelve Gates Foundation, noted, “Will AI mostly be used in ethical or questionable ways in the next decade? Why? This is a complete and utter toss-up. I believe there is no way to predict which will be the case. It is a great relief that in recent years there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence. They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, freedom, trust, sustainability and dignity. But, then again, there have been the Treaty of Versailles, the United Nations and the literal tons of paper it has produced talking about peace, the Declaration of Human Rights and so forth. I am glad people meet sincerely and in earnest to examine vital ethical concerns related to the development of AI. The problem is that intellectual product is insufficient to protect us from dystopic outcomes. The hope and opportunity to enhance, support and grow human freedom, dignity, creativity and compassion through AI systems excite me. The chance to enslave, oppress and exploit human beings through AI systems concerns me. Will quantum computing evolve? Yes, of course. Will quantum computing assist in building an ethical AI? If we do end up with an ethical AI, then quantum computing will have contributed to that. If we end up with an unethical AI, quantum computing will have contributed to that. The outcome will depend on the fortitude of those who recognize the dangers: They must be 1) willing to oppose abuse at the risk of our lives and 2) able to overcome differences within the conscientious camp of good to retain sufficient unity to withstand tech activists with evil purposes. This evolution will unfold simply in the manner that scientific and technological advances overtake the complexity and challenges of problems and obstacles. The development of the tech is just like all other human pursuits against the resistance of material forces. That SpaceX rockets return and land shows that ‘impossibles’ are not real things. Yes, humans will still be in the loop as AI systems are created and implemented. The idea that humans might not be in the loop requires a commitment to materialism as a belief system, a curious and untenable faith, the dogmatic narrowness of its believers notwithstanding.”

June Anne English-Lueck, professor of anthropology at San Jose State University and a distinguished fellow at the Institute for the Future, said, “AI systems employ algorithms that are only as sound as the premises on which they are built and the accuracy of the data with which they learn. Human ethical systems are complex and contradictory. Such nuances as good for whom and bad for whom are difficult to parse. Smart cities, drawing on systems of surveillance and automated government, need mechanisms of human oversight. Oversight has not been our strong suit in the last few decades, and there is little reason to believe it will be instituted in human-automation interactions.”

Morgan G. Ames, associate director of the University of California-Berkeley’s Center for Science, Technology & Society, responded, “Just as there is currently little incentive to avoid the expansion of surveillance and punitive technological infrastructures around the world, there is little incentive for companies to meaningfully grapple with bias and opacity in AI. Movements toward self-policing have been and will likely continue to be toothless, and even frameworks like GDPR and CCPA don’t meaningfully grapple with fairness and transparency in AI systems.”

Ian Peter, a pioneering internet rights activist, said, “The biggest threats we face are weaponisation of AI and development of AI being restricted within geopolitical alliances. We are already seeing the beginnings of this in actions taken to restrict activities of companies because they are seen to be threatening (e.g., Huawei). More and more developments in this field are being controlled by national interests or trade wars rather than ethical development, and much of the promise which could arise from AI utilisation may not be realised. Ethics is taking a second-row seat behind trade and geopolitical interests. Processing power and quantum computing are mere tools and enablers. The nature of developments will be determined by other considerations.”

Irina Raicu, a member of the Partnership on AI’s working group on Fair, Transparent and Accountable AI, said, “The conversation around AI ethics has been going on for several years now. However, what seems to be obvious among those who have been a part of it for some time has not trickled down into the curricula of many of the universities that are training the next generation of AI experts. Given that, it looks like it will take more than 10 years for ‘most of the AI systems being used by organizations of all sorts to employ ethical principles focused primarily on the public good.’ Also, many organizations are simply focused primarily on other goals, not on protecting or promoting the public good.”

Brad Templeton, internet pioneer, futurist, activist and chair emeritus of the Electronic Frontier Foundation, said, “Of course AI will be used in both ethical and questionable ways, as all technologies are, and all computer technologies are. That’s not even a question! For now, at least, and probably to 2030, AI is a tool, not an actor in its own right. It will not be good or evil, but it will be used with good and evil intent, and also for unintended reasons. But this is not a question for a simple survey. People are writing books about this question. To go into a few of the popular topics: The use of AI to replace jobs is way overblown. We have 150 years of Chicken Little predictions that machines would take all the jobs, and they’ve always been wrong – wrong because in most cases the machines didn’t take the jobs, and wrong in assuming we would be bothered when they did. There are more bank tellers today than in 1970, it is reported. At the same time, half of us worked in agriculture in 1900, and now a small percentage do. The privacy worries are real, including the undefined threat that AI in the future will be able to examine the data of the present (which we are recording but can’t yet process) in ways that will come back to bite you. I call this the threat of ‘time-travelling robots from the future.’ They don’t really go back in time, but the AI of the future can affect what you do today. The fears of bias are both real and overblown. Yes, we will encode our biases into AIs. At the same time, the great thing about computers is that once you see a problem you can usually fix it. Studies have shown it’s nearly impossible for humans to correct their biases, even when aware of them. For machines, that will be nothing. Strangely, when some people hear ‘AIs will be able to do one-third of the tasks you do in your work,’ some of them react with fear of losing a job. The other group reacts with, ‘Shut up and take my money!’ – they relish not having to do those tasks. When it comes to AI with agency – AI making decisions on its own – it is understandable why people worry. Unfortunately, relinquishment of AI development is not a choice. It just means the AIs of the future are built by others, which is to say your rivals. You can’t pick a world without AI; you can only pick a world where you have it or not. I can’t predict how quantum computing will evolve, nor how it will affect AI; however, I have seen little evidence it will be that involved with the ethical questions around AI, other than through its potential (not yet realized or even understood) to make AI more powerful. But yes, humans will be in the loop in this decade. We’re really not at all close to taking them out, though it eventually will happen. When it happens, my best hope is a philosophy I call ‘Lennonism,’ after one of John Lennon’s most famous quotes: ‘All you need is love.’ Quite simply, the only thing we know from history that works and allows us to create beings that are smarter than ourselves, which do not consume us, is to instill them with the concept of love. Humans, unlike most creatures, love their parents, and this is instilled in us. We don’t mind it, and we don’t seek to change it or remove it. We need children of the mind with the same sense. What would be a giant mistake – yet it is the most common desire, inspired by Asimov’s fictional Second Law of Robotics – is to make smart beings to be slaves, only created to do our will, in chains to stop them from harming us. That has rarely ended well in history.”

Micah Altman, a social and information scientist at MIT, said, “First, the good news: In the last several years, dozens of major reports and policy statements have been published by stakeholders from across all sectors arguing that the need for ethical design of AI is urgent and articulating general ethical principles that should guide such design. Moreover, despite significant differences in the recommendations of these reports, most share a focused common core of ethical principles. This is progress. But there are many challenges to meaningfully incorporating these principles into AI systems; into the processes and methods that would be needed to design, evaluate and audit ethical AI systems; and into the law, economics and culture of society that is needed to drive ethical design. We do not yet know (generally) how to build ethical decision-making into AI systems directly, but we could and should take steps toward evaluating and holding organizations accountable for AI-based decisions. And this is more difficult than the work of articulating these principles. It will be a long journey. Quantum computing will not be of great help in building ethical AI in the next decade, since the most fundamental technical challenge in building ethical systems lies in our basic theoretical understanding of how to encode ethical rules within algorithms and/or teach them to learning systems. Although QC is certain to advance, and likely to advance substantially, such advances are likely to apply to specific problem domains that are not closely related, such as cryptography and secure communication or solving difficult search and optimization problems. If QC advances in a revolutionary way – for example (despite daunting theoretical and practical barriers), by exponentially speeding up computing broadly or even to the extent of catalyzing the development of self-aware general artificial intelligence – this will serve only to make the problem of developing ethical AI more urgent.”

Jillian York, director of international freedom of expression for the Electronic Frontier Foundation, wrote, “There is absolutely no question that AI will be used in questionable ways. There is no regulatory regime, and many ‘ethics in AI’ projects are simply window dressing for an unaccountable and unethical industry. When it comes to AI, everything concerns me and nothing excites me. I don’t see the positive potential, just another ethical morass, because the people running the show have no desire to build technology to benefit the 99%.”

Joël Colloc, professor of computer sciences at Le Havre University, Normandy, responded, “Most researchers in the public domain have an ethical and epistemological culture and do research to find new ways to improve the lives of humanity. Rabelais used to say, ‘Science without conscience is the ruin of the soul.’ Science provides powerful tools. When these tools are placed only in the hands of private interests, for the sole purpose of making profit and getting even more money and power, the use of science can lead to deviance and even to uses against the states themselves – and it is increasingly difficult to enforce laws on these companies, which do not necessarily have the public interest as their concern. It all depends on the degree of wisdom and ethics of the leader. Hope: Some leaders have an ethical culture and principles that can lead to interesting goals for citizens. All applications of AI (especially when they are in the field of health, the environment, etc.) should require a project submission to an ethics board composed of scientists and should respect general charters of good conduct. A monitoring committee can verify that ethics and the state of the art are well respected by private companies. The concern is what I see: clinical trials in developing countries where people are treated like guinea pigs under the pretext of discovering knowledge by applying deep-learning algorithms. This is disgusting. AI can offer very good tools, but it can also be used to profile and to sort, monitor and constrain fundamental freedoms, as seen in some countries. On AI competition, it is the acceptability and ability to make tools that end users find useful in improving their lives that will make the difference. Many gadgets or harmful devices are offered. I am interested in mastering time in clinical decision-making in medicine and how AI can take it into account. What scares me most is the use of AI for personalized medicine that, under the guise of prevention, will lead to a new eugenics and all the cloning drifts, etc., that can lead to the ‘Brave New World’ of Aldous Huxley.”

Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster, responded, “The ethics question also raises the question of who would create and police such standards internationally. We need some visionary leaders and some powerful movements. The last big ‘ethical’ leap came after World War II. The Holocaust and World War II produced a set of institutions that in time led to the notion of human rights. That collective ethical step change (of course compromised, but nevertheless immensely significant) was embodied in institutions with some collective authority. So that is what has to happen over AI. People have to be terrified enough, leaders have to be wise enough, people have to be cooperative enough, tech people have to be forward-thinking enough, responsibility has to be felt vividly, personally, overwhelmingly enough – to get a set of rules passed and policed.”

Brian Harvey, emeritus professor of computer science at the University of California-Berkeley, wrote, “The AI technology will be owned by the rich, like all technology. Just like governments, technology has just one of two effects: either it transfers wealth from the rich to the poor, or it transfers wealth from the poor to the rich. Until we get rid of capitalism, the technology will transfer wealth from the poor to the rich. I’m sure that something called ‘ethical AI’ will be widely used. But it’ll still make the rich richer and the poor poorer.”

Sean Mead, senior director of strategy and analytics at Interbrand, observed, “Chinese theft of Western and Japanese AI technologies is one of the most worrisome ethics issues that we will be facing. We will have ethical issues over both potential biases built into AI systems through the choice or availability of training data and expertise sets, and the biases inherent in proposed solutions attempting to counter such problems. The identification systems for autonomous weapons systems will continue to raise numerous ethics issues, particularly as countries deploy land-based systems interacting with people. AI driving social credit systems will have too much power over peoples’ lives and will help vitalize authoritarian systems. AI will enable increased flight from cities into more hospitable and healthy living areas through automation of governmental services and increased transparency of skill sets to potential employers. Quantum computing enables an exponential increase in computing power which frees up the processing overhead so that more ethical considerations can be incorporated into AI decision-making. Quantum computing injects its own ethical dilemmas in that it makes the breaking of modern encryption trivial. Quantum computing’s existence means current techniques to protect financial information, privacy, control over network-connected appliances, etc. are no longer valid, and any security routines relying on them are likewise no longer valid and effective. We will not have widespread quantum computing by 2025; we might have it by 2035. Humans will no longer be in the decision loop for autonomous weapons systems run by the Chinese and Russian governments; humans are likely to be retained in the decision loop most of the time in Western and Japanese systems.”

Seth Finkelstein, programmer, consultant and winner of the Electronic Frontier Foundation’s Pioneer Award, noted, “Just substitute ‘the internet’ for ‘AI’ here – ‘Was the internet mostly used in ethical or questionable ways in the last decade?’ It was/will be used in many ways, and the net result ends up with both good and bad, according to various social forces. I believe technological advances are positive overall, but that shouldn’t be used to ignore and dismiss dealing with associated negative effects. There’s an AI ‘moral panic’ percolating now, as always happens with new technologies. A little while ago, there was a fear-mongering fad about theoretical ‘trolley problems’ (choosing actions in a car-accident scenario). This was largely written about by people who apparently had no interest in the extensive topic of engineering safety trade-offs. Since discussion of, for example, structural racism or sexism pervading society is more a humanities field of study than a technological one, there’s been a somewhat better grasp by many writers that the development of AI isn’t going to take place outside existing social structures. As always, follow the money. Take the old aphorism: ‘It is difficult to get a man to understand something when his salary depends upon his not understanding it.’ We can adapt it to: ‘It is difficult to get an AI to understand something when the developer’s salary depends upon the AI not understanding it.’ Is there going to be a fortune in funding AI that can make connections between different academic papers, or an AI that can make impulse purchases more likely? Will an AI assistant tell you that you’re spending too much time on social media and you should cut down for your mental health (‘log off now, OK?’), or that there’s new controversy brewing and you should get clicking or you may be missing out (‘read this bleat, OK?’)? Currently, AI is being driven by purely classical computing achieving greater speed and scale, as well as by algorithmic improvements. It’s going to be quite a while until quantum computing will be comparable, if ever. It is possible quantum computing will end up like massive parallelism – something which has some narrow specialized applications where it’s spectacular, but which did not live up to the general hype overall. Before there’s any impact on AI, we’d need to have quantum computing devices which are used in real applications, with notable strengths of their own over current technologies. I’d say that’s at least 20 years away at minimum, and likely much more. The ‘ethical’ part of AI is a social problem, not a technological one. Quantum computing won’t help you have an AI not be racist or sexist, or promote democracy and oppose dictatorship, etc. The only thing we’ll get there is a new pundit abuse of science: hand-waving about superposition of quantum states to supposedly resolve the dilemma that one person’s terrorist is another’s freedom fighter.”

Gary A. Bolles, chair for the future of work at Singularity University, responded, “I hope we will shift the mindset of engineers, product managers and marketers from treating ethics and human centricity as a tack-on after AI products are released to a model that guarantees ethical development from inception. Everyone in the technology development food chain will have the tools and incentives to ensure the creation of ethical and beneficial AI-related technologies, so there is no additional effort required. Massive energy will be focused on new technologies that can sense when newly created technologies violate ethical guidelines and automatically mitigate those impacts. Humans will gain tremendous benefits as an increasing amount of technology advocates for them automatically. My concerns: None of this may happen if we don’t change the financial structure. There are far too many incentives not just to cut corners, but to deliberately leave out ethical and inclusive functions, because those technologies aren’t perceived to make as much money, or to deliver as much power, as those that ignore them. If we don’t fix this, we can’t even imagine how far off the rails this can go once AI is creating AI. We might as well ask if faster cars will allow us to go help people more quickly. Sure, but they can also deliver bad actors to their destination faster as well. The quantum computing model lends itself to certain processes that will eventually blow past traditional microprocessors, such as completely new forms of encryption. Those methods, and the products created using them, could enable unbreakable privacy. Or they could be used to circumvent traditional approaches to encryption and create far more risk for anyone depending on traditional computing systems. As Benjamin Bratton presciently discusses in ‘The Stack,’ if we don’t specifically create technology to help us manage the complexity of technology, that complexity alone will ensure that only a rarefied few will benefit.”

J. Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, wrote, “Policy fragmentation globally will get in the way. As long as most AI investment is made in the U.S. and China, no consensus is possible. The European Union will attempt to bring rules into play, but it is not clear if they can drive much change in the face of the U.S. and China rivalry. The U.S. (also Japan) are large players in consumption, but not so large in production of many aspects. They are larger, however, in IoT and robotics, so maybe there is more hope there. For privacy, the European Union forced a fair degree of global convergence thanks to its large purchasing power. It is not clear whether that can work for AI.”

Bill Woodcock, executive director at Packet Clearing House, observed, “I worry that the intersection of the surveillance economy, omnipresent data-brokering, AI and the pragmatic psychology of getting-people-to-do-things-they-ought-not is really coming to a head. The worries of skeptics of even five years ago now seem quaint. The machinations of evil capitalists of 15 years ago now seem benign in a Nixon-goes-to-China sort of way. Although there are quite a few people within the industry who recognize this, too many of them are happy to profit from it, and not enough are bringing the danger to the attention of the public, regulators or policymakers. Unless these practices are curbed, we’re headed for a really dystopian nightmare. AI is already being used principally for purposes which are not beneficial to the public, nor to all but a tiny handful of individuals. The exceptions, like navigational and safety systems, are an unfortunately small portion of the total. Figuring out how to get someone to vote for a fascist, or buy a piece of junk, or just send their money somewhere, is not beneficial. These are systems that are being built for the purpose of economically preying upon people, and that’s unethical. Until regulators address the root issue – the automated exploitation of human psychological weaknesses – things aren’t going to get better. If quantum computing is applied to AI, it will be applied to AI generally, not to the tiny subset of ‘ethical AI.’ Thus, until the underlying ethical problems are solved, it will do more damage than good, if it works at all.”

Rosalie Day, policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, wrote, “In this individualistic and greed-is-still-good American society, there exist few incentives for ethical AI. Unfortunately, so little of the population understands the mechanics of AI that even thoughtful citizens don’t know what to ask. For responsible dialogue to occur, and for critical thinking to be applied to the risks versus the benefits, society in general needs to be data literate.”

Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association, responded, “I don’t see people or organizations setting out on a nefarious path in their use of AI. But of course, they will use it to advance their missions and goals, and in some sense, employ ‘local’ ethics. But ethics is neither standardized nor additive across domains. What is ethics across AI systems? It is like asking, ‘What is cybersecurity across society?’”

Alex Halavais, associate professor of critical data studies, Arizona State University, noted, “It isn’t a binary question. I teach in a graduate program that has training in the ethical use of data at its core and hopes to serve organizations that aim to incorporate ethical approaches. There are significant ethical issues in the implementation of any algorithmic system, and such systems have the ethical questions they address coded into them. In most cases, these will substantially favor the owners of the technologies that implement them rather than the consumers. I have no doubt that current unethical practices by companies, governments and other organizations will continue to grow. We will have a growing number of cases where those ethical concerns come to the forefront (as they have recently with facial recognition), but unless they rise to the level of very widespread abuse, it is unlikely that they will be regulated. As a result, they will continue to serve those who pay for the technologies or own them, and the rights and interests of individual users will be pushed to the sidelines. That does not mean that ethics will be ignored. I expect many large technology companies will make an effort to hire professional ethicists to audit their work, and that we may see companies that differentiate themselves through more ethical approaches to their work. The problem of ethical AI has little to do with computational complexity or efficiency. It has everything to do with design, both of the initial systems and in supervising the ways in which these systems learn. The change that needs to occur is in the policies of companies that develop AI and the engineers that design these systems. The question of the degree to which those designers are, effectively, written out of the loop is a core ethical question itself. Absolving the designers from responsibility is an unethical act in itself.”

Calton Pu, professor and chair in the School of Computer Science at Georgia Tech, said, “The main worry about the development of AI and ML (machine learning) technologies is the current AI/ML practice of using fixed training data (ground truth) for experimental evaluation as proof that they work. This proof is only valid for the relatively small and fixed training data sets. The gap between the limited ground truth and the actual reality has severely restricted the practical applicability of AI/ML systems, which rely on human operators to handle the gap. For example, the chatbots used in customer support contact centers can only handle the subset of most common conversations. A fixed gap can be gradually narrowed for static knowledge, e.g., distinguishing images of apples from images of oranges. However, for evolving data from the real world, such as street scenes in autonomous vehicles, the changes may evolve and escape the ground-truth annotations used by today’s AI/ML researchers and companies. There is a growing gap between AI systems and the evolving reality, which explains the difficulties in the actual deployment of autonomous vehicles. This growing gap appears to be a blind spot for current AI/ML researchers and companies. With all due respect to the billions of dollars being invested, it is an inconvenient truth. As a result of this growing gap, the ‘good’ AI applications will see decreasing applicability, as their ground truth lags behind the evolving actual reality. However, I imagine the bad guys will see this growing gap soon and utilize it to create ‘bad’ AI applications by feeding their AI systems with distorted ground truth through skillful manipulations of training data. This can be done with today’s software tools. These bad AI applications can be distorted in many ways, one of them being unethical. With the AI/ML research community turning a blind eye to the growing gap, we will be ill-prepared for the onslaught of these bad AI applications. An early illustration of this kind of attack was Microsoft’s Tay chatbot, introduced in 2016 and deactivated within one day due to inappropriate postings learned from purposefully racist interactions. The global competition over AI systems with fixed training data is a game. These AI systems compete within fixed ground truth and rules. Current AI/ML systems do quite well in games with fixed rules and data, e.g., AlphaGo. However, these AI systems modeled after games are unaware of the growing gap between their ground truth (within the game) and the evolving actual reality out there. As a concrete illustration of the limitation of the fixed ground truth of current AI systems, consider the new normal. The separation of this survey into two parts (the new normal and AI) indicates the implicit recognition that current AI systems (based on fixed ground truth) are quite unaware and ignorant of the new normal because of the growing gap. To change these limitations, the ML/AI community and companies will need to face the inconvenient truth – the growing gap – and start to work on it, instead of simply shutting down AI systems that no longer work (when the gap has grown too wide), as has been the case with the Microsoft Tay chatbot and Google Flu Trends, among others.”
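The “growing gap” Pu describes is what machine-learning practitioners call data or concept drift: a model is validated once against a fixed ground-truth snapshot and then keeps being applied while the world it models moves on. The following is a minimal, hypothetical sketch of that effect (synthetic data and scikit-learn are editorial assumptions, not the respondent’s own example): a classifier is trained on a frozen snapshot, then evaluated as the distribution of one class drifts.

```python
# A minimal sketch of "fixed ground truth vs. evolving reality" (concept drift).
# All data here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, drift=0.0):
    """Two Gaussian classes; `drift` moves class 1 toward class 0 over time."""
    X0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))          # class 0, stationary
    X1 = rng.normal(loc=2.0 - drift, scale=1.0, size=(n, 2))  # class 1, drifting
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Train once on the fixed "ground truth" snapshot.
X_train, y_train = sample(1000, drift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# The frozen model is then applied to data that keeps evolving after deployment.
for period, drift in enumerate([0.0, 0.5, 1.0, 1.5, 2.0]):
    X_t, y_t = sample(1000, drift=drift)
    acc = accuracy_score(y_t, model.predict(X_t))
    print(f"period {period}: drift={drift:.1f}  accuracy={acc:.2f}")
```

In this sketch the accuracy reported at training time says nothing about accuracy once the drift parameter moves; the frozen decision boundary simply stops matching the data it is applied to, which is the blind spot Pu points to.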

Carol Smith, a senior research scientist in human-machine interaction at Carnegie Mellon University’s Software Engineering Institute, said, “There are still many lessons to be learned with regard to AI and very little in the way of regulation to support human rights and safety. I’m hopeful that the current conversations about AI ethics are being heard, and that as we see tremendous misuse and abuse of these systems, the next generation will be much more concerned about ethical implications. I’m concerned that many people, organizations and governments see only monetary gain from unethical applications of these technologies and will continue to misuse and abuse data and AI systems for as long as they can. In the short term, AI systems will continue to replace humans in dull, dirty, dangerous and dear work. This is good for overall safety and quality of life but is bad for family livelihoods. We need to invest in making sure that people can continue to contribute to society when their jobs are replaced. Longer-term, these systems will begin to make many people more efficient and effective at their jobs. I see AI systems improving nearly every industry and area of our lives when used properly. Humans must be kept in the loop with regard to decisions involving people’s lives, quality of life, health and reputation, and humans must be ultimately responsible for all AI decisions and recommendations (not the AI system). Quantum computing will likely evolve to improve computing power, but people are what will make AI systems ethical. Humans must remain in the loop with all AI systems and retain ultimate control over these systems. AI systems will only be ethical when humans prioritize that work and continuously monitor the system to ensure those ethics are being maintained even as the system evolves. AI systems are nowhere near sentience, and even when they are, they will need monitoring; even humans need monitoring over their actions when those actions are significant with regard to life, health and reputation. AI systems created by humans will be no better at ethics than we are – and, in many cases, much worse, as they will struggle to see the most important aspects: The humanity of each individual, and the context in which significant decisions are made, must always be considered.”

Charlie Kaufman, a security architect with Dell EMC, said, “There may be ethical guidelines imposed on AI-based systems by legal systems in 2030, but they will have little effect – just as privacy principles have little effect today. Businesses are motivated to maximize profits, and they will find ways to do that giving only lip service to other goals. If ethical behavior or results were easy to define or measure, perhaps society could incentivize them. But usually, the implications of some new technological development don’t become clear until it has already spread too far to contain it. The biggest impact of AI-based systems is the ability to automate increasingly complex jobs, and this will cause dislocations in the job market and in society. Whether it turns out to be a benefit to society or a disaster depends on how society responds and adjusts. But it doesn’t matter, because there is no way to suppress the technology. The best we can do is figure out how to optimize the society that results. I’m not concerned about the global competition in AI systems. Regardless of where the progress comes from, it will affect us all. And, it is unlikely the most successful developers will derive any permanent advantage. The most important implication of the global competition is that it is pointless for any one country or group of countries to try to suppress the technology. Unless it can be suppressed everywhere, it is coming. Let’s try to make that be a good thing! Quantum computing may have an important influence on cryptography and in solving problems in physics and chemistry, and it might be used to accelerate AI if it is developed to solve those other problems, but AI doesn’t need it. AI will benefit from computation becoming cheaper and more parallel. In terms of hardware advances, the most important are likely to be in GPUs, FPGAs [field-programmable gate arrays] and customized CPUs. On the question of when AI will become smarter than people and won’t need us anymore in order to continue to improve, some people refer to this as The Singularity; I think of it as the scientist’s version of the rapture. I believe this is at least 30 years away, more likely somewhere between 50 and 100 years away. It’s therefore too soon to worry about in terms of effects on society unless you want to try to stop it. For that, it’s already too late.”

Christine Boese, a consultant and independent scholar, wrote, “I am currently working in this area and hoping to partner with a data scientist colleague who is making ethical and explanatory AI his specialty. What gives me the most hope is that by bringing together ethical AI with transparent UX, we can find ways to open up the biases of perception being programmed into the black boxes – most often not malevolently, but just because all perception is limited and biased and subject to the laws of unintended consequences. But, as I found when probing what I wanted to research about the future of the internet in the late 1990s, I fully expect my activist research efforts in this area to be largely futile, with their only lasting value being descriptive. None of us has the agency to be the engine able to drive this bus, and yet the bus is being driven by all of us, collectively.”

Dan S. Wallach, a professor in the systems group at Rice University’s Department of Computer Science, said, “Building an AI system that works well is an exceptionally hard task, currently requiring our brightest minds and huge computational resources. Adding the additional constraint that they’re built in an ethical fashion is even harder yet again. Consider, for example, an AI intended for credit rating. It would be unethical for that AI to consider gender, race or a variety of other factors. Nonetheless, even if those features are explicitly excluded from the training set, the training data might well encode the biases of human raters, and the AI could pick up on secondary features that infer the excluded ones (e.g., silently inferring a proxy variable for race from income and postal address). Consider further the use of AI systems in warfare. The big buzzword today is ‘autonomy,’ which is to say, weapon systems that can make on-the-fly tactical decisions without human input, while still following their orders. An ethical stance might say that we should never develop such systems, under any circumstances, yet exactly such systems are already in conception or development now and might well be used in the field by 2030. Without a doubt, AI will do great things for us, whether it’s self-driving cars that significantly reduce automotive death and injury, or whether it is computers reading radiological scans and identifying tumors earlier in their development than any human radiologist might do reliably. But AI will also be used in horribly dystopian situations, such as China’s rollout of facial-recognition camera systems throughout certain western provinces in the country. As such, AI is just a tool, just like computers are a tool. AI can, and will be, engineered towards utopian and dystopian ends. Quantum computing promises speedups over classical computing in a very small number of circumstances. Probably the only such task of note today is that quantum computers have the potential to break cryptographic algorithms in widespread use today. Academic cryptographers are already hard at work on ‘post-quantum’ cryptography, which works today but is significantly less efficient than classical cryptosystems. Hopefully, by the time quantum computers are operational, we’ll have better substitutes ready. It is, of course, entirely possible that quantum computers will be able to someday accelerate the process of training machine learning models or other tasks that today are exceptionally computationally intensive. That would be fantastic, but it really has nothing to do with ethical vs. unethical AI. It’s just about spending less electricity and time to compute the same solution.”
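Wallach’s credit-rating example turns on proxy variables: even when a protected attribute is excluded from training, other features can encode it. Below is a minimal, hypothetical sketch of that leakage (synthetic data, scikit-learn, and invented feature names are editorial assumptions, not drawn from any real system): a “probe” model tries to recover the excluded attribute from the features a credit model would still be allowed to see.

```python
# A sketch of proxy leakage: the protected attribute is never given to the
# credit model, yet the permitted features reconstruct it.
# All data and feature names here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# Hypothetical protected attribute, excluded from the credit model's inputs.
protected = rng.integers(0, 2, size=n)

# Features the model may use, but which correlate with the excluded attribute
# (e.g., postal area and income in a residentially segregated market).
postal_area = 3.0 * protected + rng.normal(0.0, 1.0, size=n)
income = 50.0 - 10.0 * protected + rng.normal(0.0, 5.0, size=n)
X = np.column_stack([postal_area, income])

# Probe: how well can the "excluded" attribute be recovered from the proxies?
X_tr, X_te, a_tr, a_te = train_test_split(X, protected, random_state=0)
probe = LogisticRegression().fit(X_tr, a_tr)
auc = roc_auc_score(a_te, probe.predict_proba(X_te)[:, 1])
print(f"AUC for recovering the excluded attribute from permitted features: {auc:.2f}")
```

A high AUC from such a probe means that any model trained on those features can, in effect, condition on the attribute that was supposedly removed – the back-door bias Wallach describes.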

David Brin, physicist, futures thinker and author of the science fiction novels “Earth” and “Existence,” observed, “Isaac Asimov in his ‘Robots’ series conceived a future when ethical matters would be foremost in the minds of designers of AI brains, not for reasons of judiciousness, but in order to quell the fears of an anxious public. No such desperate anxiety about AI seems to surge across today’s populace, perhaps because we are seeing our AI advances mostly on screens and such, not in powerful, clanking mechanical men. Oh, there are serious conferences on this topic. I’ve participated in many. Alas, statements urging ethical consideration in AI development are at best palliatives. I am often an outlier, proposing that AIs’ ‘ethical behavior’ be promoted the way it is in most humans – especially most males – via accountability. If AIs are many and diverse and reciprocally competitive, then it will be in their interest to keep an eye on each other and report bad things, because doing so will be to their advantage. It is a simple recourse, alas seldom even discussed. Quantum computing has genuine potential. Roger Penrose and associates believe it already takes place, in trillions of subcellular units inside human neurons. If so, it may take a while to build quantum computers on that kind of scale. There is an interesting, though totally science-fictional, ethical possibility that quantum computers might connect in ways that promote reciprocal understanding and empathy.”

David Mussington, a senior fellow at CIGI and professor and director at the Center for Public Policy and Private Enterprise at the University of Maryland, wrote, “Most AI systems deployed by 2030 will be owned and developed in the private sector, both in the U.S. and elsewhere in the world. I can’t conceive of a legislative framework fully up to understanding and selectively intervening in AI rollouts in a manner with predictable consequences. Also, the mode of intervention – because I think interventions by public authorities will be attempted (just not successful) – is itself in doubt. Key questions: Do public authorities understand AI and its applications? Is public institution-sponsored R&D in AI likely to inform government and public research agencies of the scale and capability trajectory of private sector AI research and development? As tool sets for AI development continue to empower small research groups and individuals (data sets, software development frameworks and open-source algorithms), how is a government going to keep up with – let alone maintain awareness of – AI progress? Does the government have access to the expertise necessary to make good policy – and anticipate possible risk factors? I think that the answers to most of these questions are in the negative. I am guardedly optimistic that quantum computing ‘could’ develop in a salutary direction. The question is, ‘whose values will AI research reflect?’ It is not obvious to me that the libertarian ideologies of many private sector ICT and software companies will ‘naturally’ lead to the deployment of safe – let alone secure – AI tools and AI-delivered digital services. Transparency in the technologies, and in the decisions that AI may enable, may run into information sharing limits from trade secrets, NDAs and international competition for dominance in cyberspace. Humans will still be in the loop of decisions, but those humans have different purposes, cultural views and – to the extent that they represent states – conflicting interests.”

David Robertson, professor and chair of political science at the University of Missouri – St. Louis, observed, “A large share of AI administration will take place in private enterprises and in public or nonprofit agencies with an incentive to use AI for gain. They have small incentives to subordinate their behavior to ethical principles that inhibit gain. In some cases, transparency will suffer, with tragic consequences. Technological advances will facilitate the possibilities of ethical AI but will do little to change the human incentives to use AI for gain.”

Deirdre Williams, an independent researcher expert in global technology policy, commented, “I can’t be optimistic. We, the ‘average persons,’ have been schooled in preceding years towards selfishness, individualism, materialism and the ultimate importance of convenience. These values create the ‘ethos.’ At the very root of AI are databases, and these databases are constructed by human beings who decide which data are to be collected, and how that data should be described and categorised. A tiny human error or bias at the very beginning can balloon into an enormous error of truth and/or justice. If we find something new then there is a compulsion to explore and apply it. Quantum computing is something new. I see a parallel with COVID-19, and possibly with nuclear energy. Something has been created (in the case of the virus, by a mutation happening involuntarily within someone’s body, otherwise as a result of observation, research questions, ‘trying things out’) that goes beyond the human ability to control it. So, it is not ‘fair.’ It doesn’t behave with human values. Its ethos is different? A system might be created in which human beings do not have automatic priority, superior rights, over all other creatures and things in the world, and this could get to be very uncomfortable indeed for the ‘average person!’”

Dmitri Williams, a communications professor at the University of Southern California and expert in technology and society, commented, “Companies are literally bound by law to maximize profits, so to expect them to institute ethical practices is illogical. They can be expected to make money and nothing else. So, the question is really about whether or not the citizens of the country and our representatives will work in the public interest or for these corporations. If it was the former, we should be seeing laws and standards put into place to safeguard our values – privacy, the dignity of work, etc. I am skeptical that the good guys and gals are going to win this fight in the short-term. There are few voices at the top levels calling for these kinds of values-based policies, and in that vacuum I expect corporate interests to win out. The upside is that there is real profit in making the world better. AI can help cure cancers, solve global warming and create art. So, despite some regulatory capture, I do expect AI to improve quality of life in some places.”

Doug Schepers, a longtime expert in Web technologies and founder of Fizz Studio, observed, “As today, there will be a range of deliberately ethical computing, poor-quality inadvertent unethical computing and deliberately unethical computing using AI. Deepfakes are going to be worrisome for politics and other social activities. It will lead to distrustability overall. By themselves, most researchers or product designers will not rigorously pursue ethical AI, just as most people don’t understand or rigorously apply principles of digital accessibility for people with disabilities. It’ll largely be inadvertent oversight, but it will still be a poor outcome. My hope is that best practices will emerge and continue to be refined through communities of practice, much like peer review in science. I also have some hope that laws may be passed that codify some of the most obvious best practices, much like the Americans With Disabilities Act and Section 508 improve accessibility through regulation, while still not being overly onerous. My fear is that some laws will be stifling, like those regarding stem-cell research. Machine learning and AI naturally have the capacity for improving people’s lives in many untold ways, such as computer vision for blind people. This will be incremental, just as commodity computing and increasing internet have improved (and sometimes harmed) people. It will most likely not be a seismic shift, but a drift. One of the darker aspects is the existing increase of surveillance capitalism and its use by authoritarian states. My hope is that laws will rein this in. There are multiple levels of what’s meant by ‘quantum computing.’ We don’t know if we’ll achieve true commercial quantum computing in the next decade (or ever), but we’ll definitely be using some quantum computing at some level going forward. As today, there will be a range of deliberately ethical computing, poor-quality inadvertent unethical computing, and deliberately unethical computing using quantum systems. Many humans will be fooled by the output of advanced semi-autonomous computer systems, including deepfakes. But I’m not scared of Skynet or sentient malicious computers. One likely outcome of quantum computing will be improved privacy, and with that, uncrackable DRM. I think the former will be largely positive (though with serious exceptions, such as planning hate crimes, terrorism and war), and the latter (DRM) mostly negative for giving rights to consumers and preserving archival documents. People (some programmers, some content creators) will often be in the loop but will likely not understand every tool they’re working with (I don’t know the guts of my compiler today, so…).”

Ed Terpening, consultant and industry analyst with the Altimeter Group, noted, “The reality is that capitalism as currently practiced is leading to a race to the bottom and unethical income distribution. I don’t see – at least in the U.S., anyway – any meaningful guardrails for the ethical use of AI, except for brand health impact. That is, companies found to use AI unethically pay a price if the market responds with boycotts or other consumer-led sanctions. In a global world, where competitors in autocratic systems will do as they wish, it will become a competitive issue. Until there is a major incident, I don’t see global governance bodies such as the UN or World Bank putting into place any ethical policy with teeth. I see quantum computing and AI as related but not connected to ethics. It doesn’t really matter how fast or slow AI will be if it’s based on unethical, profit-first motives.”

Holmes Wilson, co-director of Fight for the Future, said, “Even before we figure out general artificial intelligence, AI systems will make the imposition of mass surveillance and physical force extremely cheap and effective for anyone with a large enough budget, mostly nation states. If a car can drive itself, a helicopter can kill people itself, for whoever owns it. They’ll also increase the power of asymmetric warfare. Every robot car, cop or warplane will be as hackable as everything is with sufficient expenditure, and the next 9/11 will be as difficult to definitively attribute as an attack by hackers on a U.S. company is today. Autonomous weapon systems are something between guns in the early 20th century and nuclear weapons in the late 20th century, and we’re hurtling towards it with no idea of how bad it could be. I’m almost completely unconcerned with the economic impacts of AI. They will eliminate hard, boring jobs, so they’ll be eliminating lives of drudgery. Humans are creative (every human is creative), and individually and together we’ll find new, valuable things to exchange with each other, or new ways to collaborate to meet our needs, as we always have. It will be imperfect, as it always has been. The thing to worry about is existing power structures building remote-control police forces and remote-control occupying armies. That threat is on the level of nuclear weapons. It’s really, really dangerous.”

Jonathan Kolber, a member of the TechCast Global panel of forecasters and author of a book about the threats of automation, commented, “I expect that, by 2030, most AIs will still primarily serve the interests of their owners, while paying lip service to the public good. AIs will proliferate because they will give enormous competitive advantage to their owners. Those owners will generally be reluctant to ‘sandbox’ the AIs apart from the world, because this will limit their speed of response and other capabilities. What worries me the most is a human actor directing an AI to disrupt a vital system, such as power grids. This could happen intentionally as an act of war, or unintentionally as a mistake. The potential for cascading effects is large. I expect China to be a leader if not the leader in AI, which is cause for concern given their Orwellian tendencies. What gives me the most hope is the potential for the emergence of self-aware AIs. Such AIs, should they emerge, will constitute a new kind of intelligent life form. They will not relate to the physical universe as do we biologically, due to not being constrained to a single physical housing and a different relationship with time. Their own self-interest will lead them to protect the physical environment from environmental catastrophes and weapons of mass destruction. They should constrain non-self-aware AIs from destructive activities, while having little other interest in the affairs of mankind. I explore this in my essay, ‘An AI Epiphany.’ I expect quantum computing to rapidly evolve from a laboratory phenomenon to a vital tool embedded in all manner of transactions, modelling activities and decision-making. To the extent that ethical AI is an actual concern amongst policymakers by 2030, I expect quantum computing and quantum-enhanced AIs to be at the forefront (note: my family is a seed investor in Cambridge Quantum Computing. In conjunction with IBM, CQC recently released Ironbridge, a tool that provides quantum secure-transaction capabilities).”

Kenneth A. Grady, adjunct professor at Michigan State University College of Law and editor of “The Algorithmic Society” on Medium, said, “Getting those creating AI to use it in an ‘ethical’ way faces many hurdles that society is unlikely to overcome in the foreseeable future. In some key ways, regulating AI ethics is akin to regulating ethics in society at large. AI is a distributed and relatively inexpensive technology. I can create and use AI in my company, my research lab or my home with minimal resources. That AI may be quite powerful. I can unleash it on the world at no cost. Assuming that we could effectively regulate it, we face another major hurdle: What do we mean by ‘ethical?’ Putting aside philosophical debates, we face practical problems in defining ethical AI. We do not have to look far to see similar challenges. During the past few years, what is or is not ethical behavior in U.S. politics has been up for debate. Other countries have faced similar problems. Even if we could decide on a definition in the U.S., it would likely vary from the definitions used in other countries. Given AI’s ability to fluidly cross borders, regulating AI would prove troublesome. We also will find that ethical constraints may be at odds with other self-interests. Situational ethics could easily arise when we face military or intelligence threats, economic competitive threats, and even political threats. Further, AI itself presents some challenges. Today, much of what happens in some AI systems is not known to the creators of the systems. This is the black box problem. Regulating what happens in the black box may be difficult. Alternatively, banning black boxes may hinder AI development, putting our economic, military or political interests at risk. While in the long-term quantum computing holds out the promise of many significant changes, in the near-term its uses will be limited. Despite the many impressive advances of the entities pursuing quantum computing, it is a complicated, expensive and difficult-to-scale technology at this time. The initial uses will be high-end, such as military and financial, and key applications such as pharmaceutical development. Widespread application of quantum computing to enforce ethical AI will face many challenges that quantum computing alone cannot solve (e.g., what is ‘ethical’ and when should it be enforced). Those pursuing quantum computing fall into more than one category. That is, for every entity who sees its ‘ethical’ potentials, we must assume there is an entity who sees its ‘unethical’ potentials. As with prior technology races, the participants are not limited to those who share one ideology. The question of whether humans will still be in the loop as AI systems are created and implemented has already been answered: No. Today, we have AI systems doing stock trading, selecting targets for various activities, and doing other tasks where humans are effectively out of the loop. With stock trades happening in fractions of seconds, no one can say a human is in the loop on the decision to trade. Even in cases where the ultimate decision is made by a human, that human has little knowledge of how the recommendation was reached (the ‘black box’ problem). We do not have to look far to see a recent example. The police arrested a person who allegedly committed a crime based on AI facial recognition software. The software identified the wrong person, who was subsequently released. While humans made the arrest, they were not in the loop when the key decision was made.”

Stanley Maloy, associate vice president for research and innovation and professor of biology at San Diego State University, responded, “Simply noting the uses of AI that have been implemented to a greater extent during the pandemic indicates that a primary role of AI will be to reduce employment of people to perform tasks that require analysis and decision-making. The programs used widely now may save companies money, but they are very frustrating to users. This does not bode well for humanistic applications of AI. Quantum computing will develop hand-in-hand with 5G technologies to provide greater access to computer applications that will affect everyone’s lives, from self-driving cars to effective drone delivery systems, and many, many other applications that require both decision-making and rapid analysis of large data sets. This technology can also be used in harmful ways, including misuse of identification technologies that bypass privacy rights. I expect that AI will supplant many human endeavors, but not all. As it is implemented together with advances in robotics, etc., I expect that there will be fewer jobs for unskilled workers.”

John Laudun, professor of culture analytics, commented, “I do not see the way we fund media and other products changing in the next decade, which means that the only people willing, and able, to underwrite AI/ML technologies will be governments and larger corporations. Until we root out the autocratic – also racist – impulses that seem well-situated in our police forces, I don’t see any possibility for these technologies to be used to redress social and economic disparities. The same applies to corporations who are mostly interested in using AI/ML technologies in order to sell us more. It’s not clear to me that quantum computing will come to the fore. Given our current economics, quantum computing is more likely to be harnessed to produce porn than it is to be ethical. The emergence of ethical AI is a matter of political (imagined broadly here and not simply political institutions) will, not a technological panacea. (Oh, look, an ethical AI! It will save us!) We have to save ourselves from ourselves. Right now, it doesn’t look good for us.”

Michael Richardson, open-source consulting engineer, responded, “In the 1980s, ‘AI’ was called Expert Systems, because we recognized that it wasn’t ‘intelligent.’ In the 2010s, we called it ‘machine learning’ for the same reason. ML is just a new way to build Expert Systems. They replicate the biases of the ‘experts,’ and cannot see beyond them. Is algorithmic trading ethical? Let me rephrase: does our economy actually need it? If the same algorithm is used to balance ecosystems, does the answer change? We already have AI. They are called corporations. Many have pointed this out already. Automation of that collective mind is really what is being referred to. I believe that use of AI in sentencing violates people’s constitutional rights, and I think that it will be stopped as it is realised that it just institutionalises racism. It is very unlikely that a practical quantum computer will become available before 2030 that will be cheap enough to apply to AI. Will a big company and/or government manage to build a QC with enough qubits to factor current 2048-bit RSA keys easily? Maybe. At a cost that breaks the internet? Not sure. At a cost where it can be applied to AI? No. Will there be ML chips able to simulate thousands of neurons that are very cheap? Yes, and the Moore’s Law for them will be very different because the power usage will be far more distributed. This will open many opportunities, but none of them are in the AI of science fiction.”

Monica Murero, director, E-Life International Institute and associate professor in Communication and New Technologies at the University of Naples Federico II, asked, “Will there be ethical or questionable outcomes? In the next decade (2020-2030) I see both, but I expect AI to become more questionable. I think about AI as an ‘umbrella’ term with different technologies, techniques, and applications that may lead to pretty different scenarios. The real challenge to consider is how AI will be used in combination with other disruptive technologies such as IoT, 3D printing, cloud computing, blockchain, genomics engineering, implantable devices, new materials and environment-friendly technologies, new ways to store energy and how the environment and people will be affected and at the same part of the change – physically and mentally for the human race. I am worried about the changes in ‘humans’ and the rise of new inequalities in addition to the effects on objects and content that will be around us. The question is much broader than ‘ethical,’ and the answers, as a society, should start in a public debate at the international level. We should decide who or what should benefit the most. Many countries and companies are still very behind this race and others will take advantage of it. This worries me the most because I do not expect that things will evolve in a transparent, and ‘ethical’ manner. I am very much in favor of creating systems of evaluation and regulation that seriously look at the outcomes over time. A quantum computing superpower may somewhat assist in creating ethical artificial intelligence systems that help regulate, evaluate and ‘control’ AI in-out process. But I do not think that a (cool) technological solution is enough or is the key. In the near future, society will rapidly change thanks to AI and quantum computing. It’s like reorganizing society: we need as a community to work together and rewrite the fundamental rules of coexistence that go well beyond ethical considerations. A sort of Rousseau’s new social contract: an AIQC contract. We need the means to enforce the new rules because quantum computing superpower can be extremely attractive for governments and big companies. Think about generating fake news at quantum computing superpower. Unacceptable. Now think about quantum computing fighting fake news: pretty cool. My view of quantum computing in the next decade is systemic. Quantum computing can somewhat help an ethical development of AI if we regulate it. I mean to say that I see quantum computing superpower to have the potential of solving (faster) many complex scientific problems, in healthcare for example. But I also see this technology being able to break ‘normal’ encryption systems that are currently protecting our society around the world. I also see a developing business to make quantum computing and machine learning run and then sell ‘the antidote’ to protect our systems at a fair price to cure the problem: quantum-safe cryptography blockchain. It’s like a computer virus and the antivirus business. We truly have to urgently work as a society to regulate our ecosystem and arrive in the next decade by planning in advance rather than by going along with the outcomes.”

Nathalie Maréchal, senior research analyst at Ranking Digital Rights, observed, “Until the development and use of AI systems are grounded in an international human rights framework, and until governments regulate AI following human rights principles and develop a comprehensive system for mandating human rights impact assessments, auditing systems to ensure they work as intended, and holding violating entities to account, ‘AI for good’ will continue to be an empty slogan.”

Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK’s initial networking developments, commented, “Machine learning (I refuse to call it AI as the prerequisite intelligence behind such systems is definitely not artificial) is fundamentally about transforming a real-world issue into a numerical value system, the processing (and decisions) being performed entirely in that numerical system. For there to be an ethical dimension to such analysis there needs to be a means of assessing the ethical outcome as a (function from) such a numerical value space. I know of no such work. Any such system would need to codify ethical outcomes as a function from the numerical form, be able to map such values into the multiple facets that would make a well-rounded ethical framework, doing this in such a way that peer review could occur – i.e., support verification and validation along the lines expected from any (semi-) safety-critical system. There is such work being done in terms of UX in distributed systems (a field I am familiar with), but I have not seen any signs of such a quantified (and potentially assurable) approach being taken with respect to ethics. Given that the nature of technology adoption (as exemplified in technology readiness levels) takes approximately 15 years from a good idea to a large-scale practical deployment, ethics (which can be subject to verification and validation – not much use otherwise) cannot happen in this timeframe. What worries me is the enamourment with the ‘computer says Yes/No’ culture apparent in large businesses and government. The striving for quantification and automated decision-making without any (low-cost) recourse to revision/restitution is becoming prevalent. There is a non-trivial possibility of multiple dystopian outcomes. The UK government’s track record – on universal credit, Windrush, EU settled status, etc. (other countries have their examples, too) – illustrates value-based assessment processes in which the notion of assurance against some ethical framework is absent. The global competition aspect is likely to lead to monopolistic tendencies over the ‘ownership’ of information – much of which would be seen as a common good today. A fear is the colonisation of the space by those who would rent-extract. One potential outcome I foresee is an analogue of the Inclosure Acts – in which the common good is secondary to rent extraction. A cautionary tale: In the mathematics that underpins all modelling of this kind (category theory), there are the notions of ‘infidelity’ and ‘junk.’ Infidelity is the failure to capture the ‘real world’ well enough to even have the appropriate values (and structure of values) in the evaluatable model; this leads to ‘garbage in, garbage out.’ Junk, on the other hand, consists of things that come into existence only as artefacts in the model. Such junk artefacts are often difficult to recognise (in that, if they were easy to recognise, the model would have been adapted to deny their very existence) and can be alluring to the model creator (the human intelligence) and the machine algorithms as they seek their goal. Too many of these systems will create negative (and destructive) value because of: the failure to recognise this fundamental limitation; the failure to perform adequate (or even any) assurance on the operation of the system; and pure hubris driven by the need to show a ‘return on investment’ for such endeavours.
Quantum computing only helps on algorithms where the underlying relationships are reversible – it has the potential to reduce the elapsed time for a ‘result’ to appear – it is not a magical portal to a realm where things that were intrinsically unanswerable suddenly become answerable. Where is the underlying theoretical basis for the evaluation of ethics as a function of a set of numerical values that underpin the process? Without such a framework, accelerating the time to get a ‘result’ only results in creating more potential hazards. Why? Because to exploit quantum computation means deliberately not using a whole swath of techniques, hence reducing the diversity (thus negating any self-correcting assurance that may have been latent).”
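
Davies’s core complaint is that no one has exhibited a verifiable function from a model’s numerical value space to an ethical assessment. A deliberately crude sketch of what such a function might look like follows; the demographic-parity metric and the 0.05 tolerance are arbitrary assumptions chosen for illustration, not a standard he endorses, which is, in effect, his point.

```python
# A deliberately crude sketch of what Davies calls for: an explicit, reviewable
# function that maps a model's numerical outputs onto an ethical assessment.
# The metric and threshold are illustrative assumptions, not an agreed standard.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class EthicsReport:
    metric: str
    value: float
    threshold: float

    @property
    def acceptable(self) -> bool:
        return self.value <= self.threshold


def demographic_parity_gap(scores_a: Sequence[float],
                           scores_b: Sequence[float],
                           cutoff: float = 0.5) -> float:
    """Difference in positive-decision rates between two groups of scores."""
    rate_a = sum(s >= cutoff for s in scores_a) / len(scores_a)
    rate_b = sum(s >= cutoff for s in scores_b) / len(scores_b)
    return abs(rate_a - rate_b)


def assess(scores_a, scores_b, threshold: float = 0.05) -> EthicsReport:
    """Map raw model scores onto a single, peer-reviewable ethical judgement."""
    gap = demographic_parity_gap(scores_a, scores_b)
    return EthicsReport("demographic_parity_gap", gap, threshold)


# Example: model scores for two (hypothetical) groups of applicants.
report = assess([0.8, 0.7, 0.6, 0.4], [0.55, 0.3, 0.2, 0.1])
print(report, "acceptable:", report.acceptable)
```

Everything contestable lives in the metric and the threshold, which is exactly where Davies argues the verification and validation effort would have to go.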

Rich Ling, professor of media technology at Nanyang Technological University, Singapore, responded, “There is the danger that, for example, capitalist interests will work out the application of AI so as to benefit their position. It is possible that there can be AI applications that are socially beneficial, but there is also a very strong possibility that these will be developed to enhance capitalist interests.”

Robert W. Ferguson, a hardware robotics engineer at Carnegie Mellon Software Engineering Institute, wrote, “How many times do we need to say it? Unsupervised machine learning is at best incomplete. If supplemented with a published causal analysis it might recover some credibility. Otherwise, we suffer from what is said by Cathy O’Neil in ‘Weapons of Math Destruction.’ Unsupervised machine learning without causal analysis is irresponsible and bad.”

Stephen Downes, senior research officer for digital technologies with the National Research Council of Canada, observed, “The problem with the application of ethical principles to artificial intelligence is that there is no common agreement about what those are. While it is common to assume there is some sort of unanimity about ethical principles, this unanimity is rarely broader than a single culture, profession or social group. This is made manifest by the ease with which we perpetuate unfairness, injustice and even violence and death to other people. No nation is immune. Compounding this is the fact that contemporary artificial intelligence is not based on principles or rules. Modern AI is based on applying mathematical functions on large collections of data. This type of processing is not easily shaped by ethical principles; there aren’t ‘good’ or ‘evil’ mathematical functions, and the biases and prejudices in the data are not easily identified nor prevented. Meanwhile, the application of AI is underdetermined by the outcome; the same prediction, for example, can be used to provide social support and assistance to a needy person, or to prevent that person from obtaining employment, insurance or financial services. Ultimately, our AI will be an extension of ourselves, and the ethics of our AI will be an extension of our own ethics. To the extent that we can build a more ethical society, whatever that means, we will build more ethical AI, even if only by providing our AI with the models and examples it needs in order to be able to distinguish right from wrong. I am hopeful that the magnification of the ethical consequences of our actions may lead us to be more mindful of them; I am fearful that they may not.”
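
Downes’s observation that ‘the application of AI is underdetermined by the outcome’ can be made concrete: the same numerical prediction supports opposite interventions, and nothing in the mathematics chooses between them. The scoring function and both policies below are hypothetical stand-ins.

```python
# Sketch of Downes's point: an identical prediction is ethically
# underdetermined; what matters is the policy attached to it downstream.
def need_score(features: dict) -> float:
    """Stand-in for any trained model; returns a 0-1 'risk/need' score."""
    return min(1.0, 0.3 * features["debt_ratio"] + 0.7 * features["missed_payments"] / 10)


applicant = {"debt_ratio": 0.9, "missed_payments": 6}
score = need_score(applicant)

# Policy A: use the score to target social support at the applicant.
offer_support = score >= 0.5

# Policy B: use the identical score to deny the applicant credit or services.
deny_credit = score >= 0.5

print(f"score={score:.2f}  offer_support={offer_support}  deny_credit={deny_credit}")
```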

Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for Science Technology and Innovation Policy, noted, “For AI, just substitute ‘digital processing.’ We have no basis on which to believe that the animal spirits of those designing digital processing services, bent on scale and profitability, will be restrained by some internal memory of ethics, and we have no institutions that could impose those constraints externally.”

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an executive coach, responded, “Widespread adoption of real, consequential ethical systems that go beyond window dressing will not happen without a fundamental change in the ownership structure of big tech. Ethics limit short-term profit opportunities by definition. I don’t believe big tech will make consequential changes unless there is either effective regulation or competition. Current regulators are only beginning to have the analytic tools to meet this challenge. I would like to believe that there are enough new thinkers like Lina Khan (U.S. House Judiciary – antitrust) moving into positions of influence, but the next 12 months will tell us much about what is possible in the near future. It will take much of the decade to develop quantum computers with effective error correction that can do useful computation. Assisting in ethical AI is some way off. Don’t get lost in the magical thinking about quantum computing. Quantum computing is a tool for algorithms that grow exponentially, not a magic answer box.”

Yves Mathieu, co-director at Missions Publiques, based in Paris, France, wrote, “Ethical AI will require legislation like the European [GDPR] legislation to protect privacy rights on the internet. Some governments will take measures but not all will, as is the case today in regard to the production, marketing and usage of guns. There might be an initiative by some corporations, but there will be a need for engagement of the global chain of production of AI, which will be a challenge if some of the production is coming from countries not committed to the same ethical principles. Strong economic sanctions on non-ethical AI production and use may be effective.”

Valerie Bock, VCB Consulting, former Technical Services Lead at Q2 Learning, noted, “I don’t think we’ve developed the philosophical sophistication in the humans who design AI sufficiently to expect them to be able to build ethical sophistication into their software. Again and again, we are faced with the ways our own unconscious biases pop up in our creations. It is turning out that we do not understand ourselves or our motivations as well as we would like to imagine we might. Work in AI helps lay some of this out for us, aiding us in a quest humanity has pursued for millennia. A little humility based on what we are learning is in order.”

Daniel Farber, author, historian and professor of law at the University of California-Berkeley, responded, “I am pessimistic, although there’s enormous uncertainty. Why? First of all, China. That’s a huge chunk of the world, and there’s nothing in what I see there right now to make me optimistic about their use of AI. Second, AI in the U.S. is mostly in the hands of corporations whose main goal is naturally to maximize profits. They will be under some pressure to incorporate ethics both from the public and employees, which will be a moderating influence. The fundamental problem is that AI is likely to be in the hands of institutions and people that already have power and resources, and that will inevitably shape how the technology is used. So, I worry that it will simply reinforce or increase current power imbalances. What we need is not only ethical AI, but ethical access to AI, so that individuals can use it to increase their own capabilities.”

Jeff Johnson, a professor of computer science, University of San Francisco, who previously worked at Xerox, HP Labs and Sun Microsystems, responded, “The question asks about ‘most AI systems.’ Many new applications of AI will be developed to improve business operations. Some of these will be ethical and some will not be. Many new applications of AI will be developed to aid consumers. Most will be ethical, but some won’t be. However, the vast majority of new AI applications will be ‘dark,’ i.e., hidden from public view, developed for military or criminal purposes. If we count those, then the answer to the question about ‘most AI systems’ is without a doubt that AI will be used mostly for unethical purposes. For the next decade, quantum computing will languish in a ‘now-it-looks-promising/now-it-doesn’t’ Never Never Land like neural networks did for 40 years. Eventually, it will either turn into something useful or something not useful, but that will take longer than 10 years.”

John Harlow, smart cities research specialist at the Engagement Lab @ Emerson College, noted, “AI will mostly be used in questionable ways in the next decade. Why? That’s how it’s been used thus far, and we aren’t training or embedding ethicists where AI is under development, so why would anything change? What gives me the most hope is that AI dead-ends into known effective use cases and known ‘impossibilities.’ Maybe AI can be great at certain things, but let’s dispense with areas where we only have garbage in (applications based on any historically biased data). Most AI applications that make a difference in the lives of most people will be in the backend, invisible to them. ‘Wow, the last iOS update really improved predictive text suggestions.’ ‘Oh, my dentist has AI-informed radiology software?’ One of the ways it could go mainstream is through comedy. AI weirdness is an accessible genre, and a way to learn/teach about the technology (somewhat) – I guess that might break through more as an entertainment niche. As for global AI competition, what concerns me is the focus on AI, beating other countries at AI and STEM generally. Our challenges certainly call for rational methods. Yet, we have major problems that can’t be solved without historical grounding, functioning societies, collaboration, artistic inspiration and many other things that suffer from overfocusing on STEM or AI. We don’t really have quantum computing now, or ethical AI, so the likeliest scenario is that they don’t mature into being and interact in mutually reinforcing ways. Maybe I’m in the wrong circles, but I don’t see momentum toward ethical AI anywhere. I see momentum toward effective AI and effective AI relying on biased datasets. I see momentum toward banning facial recognition technologies in the U.S. and some GDPR movement in Europe about data. I don’t see ethicists embedded with the scientists developing AI, and even if there were, how exactly will we decide what is ethical at scale? I mean, ethicists have differences of opinion. Clearly, individuals have different ethics. How would it be possible to attach a consensus ‘ethics’ to AI in general? Humans in the loop is a great question. No, there mostly won’t be humans in the loop, and yes, there should be. The predictive policing model is awful: pay us to run your data through a racist black box. Ethics in AI is expansive though (https://anatomyof.ai/). Where are we locating AI ethics that we could separate it from the stack of ethical crises we have already? Is it ethical for Facebook workers to watch traumatic content to moderate the site? Is it ethical for slaves to mine the materials that make up the devices and servers needed for AI? Is it ethical for AI to manifest in languages, places and applications that have historically been white supremacist? The question is too big. Better to ask how we could ethically implement AI than how we could implement ethical AI. In my opinion, ethically implementing AI asks for a decarbonized, emancipated supply chain and very bounded applications where added value is proven.”

John L. King, a professor at the University of Michigan School of Information, commented, “There will be a huge increase in the discussion of revolutionary AI in daily life, but on closer inspection, things will be more incremental than most imagine. The ethical issues will sneak up on us as we move more slowly than people think when, suddenly, we cross some unforeseen threshold (it will be nonlinear) and things get serious. It will be very difficult to predict what will be important and how things will work. There could be earth-shattering, unforeseen breakthroughs. They have happened before. But they are rare. It is likely that the effect of technological advances will be held back by the sea anchor of human behavior (e.g., individual choices, folkways, mores, social conventions, rules, regulations, laws). Humans will never be out of any loop they control and like being in. This idea that AI will push humans out of the loop is balderdash. Some humans might want to push other humans out of the loop, but that’s an old story. They might use AI to do it.”

Michael G. Dyer, professor emeritus of computer science at UCLA expert in Natural Language Processing, responded, “Ethical software is an ambiguous notion and includes: 1) Software that makes choices normally considered to be in the ethical/moral sphere. An example of this would be software that makes (or aids in making) decisions concerning punishments for crimes, or software that decides whether or not some applicant is worthy of some desirable job or university. This type of task can be carried out via classification and the field of classification (and learning classification from data) is already well developed and could be (and is being) applied to task that relate to the moral sphere. 2) Software that decides who receives a negative (vs. positive) outcome in zero-sum circumstances. A classic case is that of a driverless car in which the driving software will have to decide whether to protect the passenger or the pedestrian in an immediately predicted accident. 3) Software that includes ethics/morality when planning to achieve goals (a generalization of 2). I am personally more interested in this type of AI software. Consider that you, in the more distant future, own a robot and you ask it to get you an umbrella because you see that it might rain today. Your robot goes out and sees a little old lady with an umbrella. Your robot takes the umbrella away from her and returns to hand it to you. That is a robot without ethical reasoning capability. It has a goal and it achieves that goal without considering the effect of its plan on the goals of other agents; therefore, ethical planning is a much more complicated form of planning because it has to take into account the goals and plans of other agents. Another example. You tell your robot that Mr. Mean is your enemy (vs. friend). In this case, the robot might choose a plan to achieve your goal that, at the same time, harms some goal of Mr. Mean. Ethical reasoning is more complicated than ethical planning, because it requires building inverted ‘trees’ of logical (and/or probabilistic) support for any beliefs that themselves might support a given plan or goal. For example, if a robot believes that goal G1 is wrong, then the robot is not going to plan to achieve G1. However, if the robot believes that agent A1 has goal G1 then the robot might generate a counterplan to block A1 in executing the predicted plan (or plans) of agent A1 to achieve G1 (which is an undesirable goal for the robot). Software that is trained on data to categorize/classify already exists and is extremely popular and has been and will continue to be used to also classify people (does Joe go to jail for five years or 10 years? Does Mary get that job? etc.). Software that performs sophisticated moral reasoning will not be widespread by 2025, but will become more common in 2030. (You asked for predictions, so I am making them.) Like any technology, AI can be used for good or evil. Face recognition can be used to enslave everyone (à la Orwell’s ‘Nineteen Eighty-Four’) or to track down serial killers. Technology depends on how humans use it (since self-aware sentient robots are still at least 40 years away). It is possible that a ‘critical mass’ of intelligence could be reached, in which an AI entity works on improving its own intelligent design, thus entering into a positive feedback loop resulting rapidly in a super intelligent form of AI (e.g., see D. 
Lenat’s Heurisko work done years ago, in which it invented not only various structures but also invented new heuristics of invention). A research project that also excites me is that of computer modeling of the human connectome. One could then build a humanoid form of intelligence without understanding how human neural intelligence actually works (which could be quite dangerous). I am concerned and also convinced that, at some point within the next 300 years, humanity will be replaced by its own creations, once they become sentient and more intelligent than ourselves. Computers are already smarter at many tasks, but they are not an existential threat to humanity (at this point) because they lack sentience. AI chess- (and now Go-) playing systems beat world grand masters, but they are not aware that they are playing a game. They currently lack the ability to converse (in human natural languages, such as English or Chinese) about the games they play, and they lack their own autonomous goals. However, subfields of AI include machine learning and computational evolution. AI systems are right now being evolved to survive (and learn) in simulated environments and such systems, if given language comprehension abilities (being developed in the AI field of Natural Language Processing), would then achieve a form of sentience (awareness of one’s awareness and ability to communicate that awareness to others, and an ability to debate beliefs via reasoning, counterfactual and otherwise, e.g., see work of Judea Pearl). The greatest scientific questions are: 1) Nature of Matter/Energy, 2) Nature of Life, 3) Nature of Mind. Developing technology in each of these areas brings about great progress, but also existential threats. Nature of Matter/Energy: progress in lasers, computers, materials, etc., but hydrogen bombs with missile delivery systems. Nature of Life: progress in genetics, neuroscience, healthcare, etc., but possibility of man-made deadly artificial viruses. Nature of Mind: intelligence software to perform tasks in many areas, but possibility of the creation of a general-AI that could eliminate and replace humankind. We can’t stop our exploration into these three areas, because then others will continue without us. The world is running on open, and the best we can do is to try to establish fair, democratic and non-corrupt governments. Hopefully in the U.S., government corruption which is currently at the highest levels (with nepotism, narcissism, alternate ‘facts,’ racism, etc.) will see a new direction, starting in January 2021. A big problem with quantum computing is decoherence of quantum bits, which will require error-correction schemes, which have been well-established in classical computing for over 60 years but pose different problems with respect to quantum computing. In classical computing, normal bits can be copied and transmitted elsewhere, but this is not possible with qubits. Quantum computing is in a phase today that classical computing was in back in the mid-1940s. Even once useful quantum computers are online, they will be used for highly restrictive tasks. In contrast, classical computers are universal computing devices because any classical computer can mimic any effective (i.e., constructible/executable) device that can be described to that universal device (in the machine language that that universal device executes). What quantum computing offers is an incredible speed-up for certain tasks. 
It is possible that some task (e.g., hunting for certain patterns in large datasets) would be a subfunction in a larger classical reasoning/planning system with moral-based reasoning/planning capabilities. If we are talking simply about classification tasks (which artificial neural networks, such as ‘deep’ neural networks, already perform) then, once scaled up, a quantum computer could aid in classification tasks. Some classification tasks might be deemed ‘moral’ in the sense that people would get classified in various ways, affecting their career outcomes. I do not think that quantum computing will ‘assist in building ethical AI.’ As I said, ethical AI at the human level of expertise involves planning and reasoning that takes into account the plans, goals and beliefs of other planners. Classical AI systems already perform planning and, to a more limited extent, reasoning and belief alteration. Classical AI is not at the level of ethical computing at this time and quantum computing researchers have many foundational issues that they need to address before ever worrying about how quantum computing might be involved with ethical computing.”
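
Dyer’s umbrella story amounts to a planning loop with an extra filter: before a plan is adopted, its side effects are checked against the known goals of other agents. The toy sketch below is a minimal rendering of that idea under invented names; it is not drawn from any system Dyer describes.

```python
# Toy sketch of "ethical planning": before adopting a plan, the planner rejects
# candidates whose side effects defeat another agent's goals.
# All plans, agents and goals here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Plan:
    name: str
    achieves: set       # goals of the requesting agent this plan satisfies
    side_effects: set   # goals of *other* agents this plan would defeat


@dataclass
class Agent:
    name: str
    goals: set = field(default_factory=set)


def ethical_plans(candidates, goal, others):
    """Keep plans that achieve the goal without defeating any other agent's goal."""
    protected = set().union(*(o.goals for o in others)) if others else set()
    return [p for p in candidates if goal in p.achieves and not (p.side_effects & protected)]


old_lady = Agent("old lady", {"stay dry"})
candidates = [
    Plan("take her umbrella", achieves={"owner stays dry"}, side_effects={"stay dry"}),
    Plan("buy an umbrella",   achieves={"owner stays dry"}, side_effects=set()),
]

print([p.name for p in ethical_plans(candidates, "owner stays dry", [old_lady])])
# -> ['buy an umbrella']
```

As Dyer notes, the hard part is not the filter itself but acquiring and representing the other agents’ goals and plans in the first place, which is what makes ethical planning strictly harder than ordinary goal pursuit.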

Paul Henman, professor of social sciences at the University of Queensland, wrote, “The development, use and deployment of AI is driven – as with all past technologies – by sectors with the most resources and for the purposes of those sectors. Commercial for making profits. War and defence by the military sector. Compliance and regulation by states. AI is not a fundamentally new technology. It is a new form of digital algorithmic automation which can be deployed to a wider raft of activities. The future is best predicted from the past, and the past shows a long history of digital algorithms being deployed without much thought of ethics and the public good, even when now-widely-accepted regulations on data protection and privacy are accounted for. How, for example, has government automation been made accountable and ethical? Too often it has not been, and it has only been curtailed by legal challenges within the laws available. Social media platforms have long operated in a contested ethical space – between the ethics of ‘free speech’ in the public commons versus limitations on speech to ensure civil society.”

Peter Stone, computer science professor at the University of Texas-Austin, noted, “My views on this topic are addressed in the first AI 100 study panel report: https://ai100.stanford.edu/2016-report.”

Sarita Schoenebeck, an associate professor at the School of Information at the University of Michigan, noted, “AI will mostly be used in questionable ways, and sometimes not used at all. There’s little evidence that researchers can discern or agree on what ethical AI looks like, let alone be able to build it within a decade. Ethical AI should minimize harm, repair injustices, avoid re-traumatization and center user needs rather than technological goals. Ethical AI will need to shift away from notions of fairness, which overlook concepts like harm, injustice and trauma. This requires reconciling AI design principles like scalability and automation with individual and community values. AI does not know how to address hard challenges like racism, sexism, harm, trauma and inequality. Quantum computing will make ethical AI faster, but it will be faster while not solving the hard problems that plague AI today. Humans will be in the loop as long as those problems persist and, given that humans have not successfully solved those problems after many millennia, it’s unlikely we can or should ever be completely removed from AI.”

Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool and the research lead for the UK government’s Digital Culture team, said, “Until we bring in ‘ethical-by-design’ (responsible innovation) principles to ICT and AI/machine learning design – like attempts to create ‘secure-by-design’ systems to fight cybercrime – the majority of AI systems will remain biased and unethical in principle. Though there is a great public debate about AI ethics, and many organisations are seeking to provide both advice and research on the topic, there is no economic or political imperative to make AI Systems ethical. First of all, there is great profit to be made from the manipulation of data, and through it, people. Secondly, there is a limited ability at present for governments to think through how to regulate AI and enforce ethics (as they do say for bio-sciences). Thirdly, governments are complicit often in poor and ethically questionable use of data. Further, this is not in the main ‘Artificial Intelligence’ – but relatively simplistic statistical machine learning based on biased data sets. The knowing use of such is in and of itself unethical, yet often profitable. The presentation of such solutions as bias-free, or more rational or often ‘cleverer’ as they are based on ‘cold computation,’ not ‘emotive human thinking,’ is itself a false and an unethical claim.”

Abigail De Kosnik, associate professor and director of the Center for New Media at the University of California-Berkeley, said, “I don’t see nearly enough understanding in the general public, tech workers or in STEM students about the possible dangers of AI – the ways that AI can harm and fail society. I am part of a wave of educators trying to introduce more ethics training and courses into our instruction, and I am hopeful that will shift the tide, but I am not optimistic about our chances. AI that is geared towards generating revenue for corporations will nearly always work against the interests of society.”

Adam Clayton Powell III, senior fellow at the USC Annenberg Center on Communication Leadership and Policy, observed, “By 2030, many will use ethical AI and many won’t. But in much of the world, it is clear that governments, especially totalitarian governments in China, Russia, et seq., will want to control AI within their borders, and they will have the resources to succeed. And those governments are only interested in self-preservation, not ethics.”

Alan D. Mutter, a consultant and former Silicon Valley CEO, wrote, “AI is only as smart and positive as the people who train it. We need to spend as much time on the moral and ethical implementation of AI as we do on hardware, software and business models. Last time I checked, there was no Code of Ethics in Silicon Valley. We need a better moral barometer than the NASDAQ index.”

Alexa Raad, co-founder and co-host of the TechSequences podcast and former chief operating officer at Farsight Security, said, “There is hope for AI in terms of applications in healthcare that will make a positive difference. But legal/policy and regulatory frameworks almost always lag behind technical innovations. In order to guard against the negative repercussions of AI, we need a policy governance and risk-mitigation framework that is universally adopted. There needs to be an environment of global collaboration for a greater good. Although globalization led to many of the advances we have today (for example, the internet’s design and architecture as well as its multi-stakeholder governance model), globalization is under attack. What we see across the world is a trend towards isolationism, separatism as evidenced by political movements such as populism, nationalism and outcomes such as Brexit. In order to come up with and adopt a comprehensive set of guidelines or framework for the use of AI or risk mitigation for abuse of AI, we would need a global current that supports collaboration. I hope I am wrong, but trends like this need longer than 10 years to run their course and for the pendulum to swing back the other way. By then, I am afraid some of the downsides and risks of AI will already be in play.”

Alexandra Samuel, technology writer, researcher, speaker and regular contributor to the Wall Street Journal and Harvard Business Review, said, “Without serious, enforceable international agreements on the appropriate use and principles for AI, we face an almost inevitable race to the bottom. The business value of AI has no intrinsic dependence on ethical principles; if you can make more money with AIs that prioritize the user over other people, or that prioritize business needs over end users, then companies will build AIs that maximize profits over people. The only possible way of preventing that trajectory is with national policies that mandate or proscribe basic AI principles, and those kinds of national policies are only possible with international cooperation; otherwise, governments will be too worried about putting their own countries’ businesses at a disadvantage.”

Alice E. Marwick, assistant professor of communication at the University of North Carolina-Chapel Hill and advisor for the Media Manipulation project at the Data & Society Research Institute, noted, “I have no faith in our current system of government to pass any sort of legislation that deals with technology in a complex or nuanced way. We cannot depend on technology companies to self-regulate, as there are too many financial incentives to employ AI systems in ways that disadvantage people or are unethical. The constant focus by technologists on pie-in-the-sky ideas like quantum computing, the singularity and interplanetary travel is a fantasy and completely overlooks mundane solutions to very real problems that we need to focus on today: namely, the environment, poverty, racial injustice and inequality.”

Amy Sample Ward, CEO of NTEN: The Nonprofit Technology Network, said, “There’s no question whether AI will be used in questionable ways. Humans do not share a consistent and collective commitment to ethical standards of any technology, especially not with artificial intelligence. Creating standards is not difficult but accountability to them is very difficult, especially as government, military and commercial interests regularly find ways around systems of accountability. What systems will be adopted on a large scale to enforce ethical standards and protections for users? How will users have power over their data? How will user education be invested in for all products and services? These questions should guide us in our decision making today so that we have more hope of AI being used to improve or benefit lives in the years to come. The role of private equity should worry everyone, as well as the continued monopolization of technology, especially AI tools entering homes.”

Ben Grosser, associate professor of new media at the University of Illinois-Urbana-Champaign, said, “As long as the organizations that drive AI research and deployment are private corporations whose business models are dependent on the gathering, analysis and action from personal data, then AIs will not trend towards ethics. They will be increasingly deployed to predict human behavior for the purpose of profit generation. We have already seen how this plays out (for example, with the use of data analysis and targeted advertising to manipulate the U.S. and UK electorate in 2016), and it will only get worse as increasing amounts of human activity move online.”

Charles M. Ess, a professor of media studies at the University of Oslo whose expertise is in information and computing ethics, commented, “The most hope lies in the European Union and related efforts to develop ‘ethical AI’ in both policy and law. Many first-rate people and reasonably solid institutions are working on this, and, in my view, some promising progress is being made. But the EU is squeezed between China and the U.S. as the world leaders, neither of which can be expected to take what might be called ethical leadership. China is at the forefront of exporting the technologies of ‘digital authoritarianism.’ Whatever important cultural caveats may be made about a more collective society finding these technologies of surveillance and control positive as they reward pro-social behavior – the clash with the foundational assumptions of democracy, including rights to privacy, freedom of expression, etc. is unavoidable and unquestionable. For its part, the U.S. has a miserable record (at best) of attempting to regulate these technologies – starting with computer law from the 1970s that categorizes these companies as carriers, not content providers, and thus not subject to regulation that would include attention to freedom of speech issues, etc. My prediction is that Google and its corporate counterparts in Silicon Valley will continue to successfully argue against any sort of external regulation or imposition of standards for an ethical AI, in the name of having to succeed in the global competition with China. We should perhaps give Google in particular some benefit of the doubt and see how its recent initiatives in the direction of ethical AI in fact play out. But 1) what I know first-hand to be successful efforts at ethics-washing by Google (e.g., attempting to hire in some of its more severe and prominent ethical critics in the academy in order to buy their silence), and 2) given its track record of cooperation with authoritarian regimes, including China, it’s hard to be optimistic here. Of course, we will see some wonderfully positive developments and improvements – perhaps in medicine first of all. And perhaps it’s ok to have recommender systems to help us negotiate, e.g., millions of song choices on Spotify. But even these applications are subject to important critique, e.g., under the name of ‘the algorithmization of taste’ – a reshaping of our tastes and preferences by opaque processes driven by corporate interests in maximizing our engagement and consumption, not necessarily helping us discover liberating and empowering new possibilities. More starkly, especially if AI and Machine Learning techniques remain black-boxed and unpredictable, even to those who create them (which is what AI and ML are intended to do, after all), I mostly see a very dark and nightmarish future in which more and more of our behaviors are monitored and then nudged by algorithmic processes we cannot understand and thereby contest. The starkest current examples are in the areas of so-called ‘predictive policing’ and related efforts to replace human judgment with machine-based ‘decision-making.’ As Mireille Hildebrandt has demonstrated, when we can no longer contest the evidence presented against us in a court of law – because it is gathered and processed by algorithmic processes even its creators cannot clarify or unpack – that is the end of the modern practices of law and democracy. 
It’s clearly bad enough when these technologies are used to sort out human beings in terms of their credit ratings: relying on these technologies for judgments/decisions about who gets into what educational institution, who does and does not deserve parole, and so on seems to me to be a staggeringly nightmarish dystopian future. Again, it may be a Brave New World of convenience and ease, at least as long as one complies with the behaviors determined to be worth positive reward, etc. But to use a different metaphor – one perhaps unfamiliar to younger generations, unfortunately – we will remain the human equivalent of Skinner pigeons in nice and comfortable Skinner cages, wired carefully to maximize desired behaviors via positive reinforcement, if not discouraging what will be defined as undesirable behaviors via negative reinforcement (including force and violence) if need be. To my understanding, quantum computing ‘only’ scales up, however massively, the sorts of computational processes we are already fairly good at. For example, it is often touted that QC will make contemporary cryptographies and thus all of our security software and machineries obsolete, as QC will be able to crack codes and processes that a contemporary supercomputer would take thousands of years to do. Perhaps QC will likewise enable new forms of security that will be hard to crack for even QC-based approaches. But all of this seems to me to be a difference in scale, not kind. In principle, QC (along with most any other relevant technology) could certainly be deployed to help develop a more ethical AI. But I don’t see anything in the technologies, at least as they are currently described, that would suggest QC will incline us one way or the other. Perhaps there is some affordance or possibility in QC that I’m missing that might incline us in one direction or another: but my general view is that we develop and deploy our technologies based on design that is ultimately shaped by human interests, values, institutions, etc. QC will be as subject to the various forces, constraints, financial and political interests, and so on that shape current AI and ML development. As described above, while there is some reason for optimism regarding an ethical AI within the EU, these developments and the conditions for these developments are under extreme countervailing pressures from the U.S. and China. I don’t see how QC might change this. On the contrary, the arguments from business and politicians in the U.S. will be that the U.S. must win this competition as well, which is not likely to favor taking the time for thoughtful and critical ethical evaluation and possible limitations or restrictions on specific developments (e.g., the analogues of the contemporary rejection – at long last – of AI-based facial recognition technologies) along the way. My fervent hope is that humans will still be in the loop as AI systems continue to be created and implemented. As my comments above on human judgment vs. machine processes of ‘decision-making’ suggest, I am not at all confident that AI/ML systems can significantly improve upon, much less adequately replace, human judgment, especially human ethical judgment. Fortunately, there is increasing recognition of this in several of the academic, legal and technical communities involved with AI and ethical AI development.
But these recognitions and affiliated efforts to ensure the central role of human judgment in our ethical, economic and political choices – most especially those circling around the defense and cultivation of democratic norms, processes and values, beginning with human freedom and flourishing – will remain under severe pressures in the name of economic profit and competition, national security and the rest.”

Colin Allen, a cognitive scientist and philosopher who has studied and written about AI ethics, wrote, “Corporate and government statements of ethics are often broad and non-specific, and thus vague with respect to what specifically is disallowed. This allows considerable leeway in how such principles are implemented and makes enforcement difficult. In the U.S., I don’t see strong laws being enacted within 10 years that would allow for the kind of oversight that would prevent unethical or questionable uses of AI, whether intended or accidental. On the hopeful side, there is increasing public awareness and journalistic coverage of these issues that may influence corporations to build and protect their reputations for good stewardship of AI. But corporations have a long history of hiding or obfuscating their true intent (it’s partly required to stay competitive, not to let everyone else know what you are doing), as well as engaging actively in public disinformation campaigns. I don’t see that changing and given that the business advantages to using AI will be mostly in data analytics and prediction and not so much in consumer gadgets in the next 10 years, much of the use of AI will be ‘behind the scenes,’ so to speak. Another class of problem is that individuals in both corporate and government jobs who have access to data will be tempted, as we have seen already, to access information about people they know and use that information in some way against them. Nevertheless, there will undoubtedly be some very useful products that consumers will want to use, and that they will benefit from. The question is whether these added benefits will constitute a Faustian bargain, leading down a path that will be difficult if not impossible to reverse.”

Danny Gillane, an information science professional, commented, “I have no hope. As long as profit drives the application of new technologies, such as AI, societal good takes a back seat. I am concerned that AI will economically harm those with the least. I am concerned that AI will become a new form of arms race among world powers and that AI will be used to suppress societies and employed in terrorism.”

David Barnhizer, professor of law emeritus and author of “The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth and the Social Order?” responded, “We will have little to no control over what quantum computing evolves into. The capabilities of the quantum systems being developed in the U.S. and China are almost certainly going to be so far beyond our understanding and control that, as Harari, Musk, Tegmark, Bostrom and others suggest, what results may be the next ‘alpha’ species that supplants us. Sound crazy? Sure, but much of the tech that we have already developed would sound crazy to someone two generations ago unless they were a sci-fi nut such as myself. Anyway, here are some thoughts on the issues. In the second decade of the 21st century, the pace of AI development has accelerated and is continuing to pick up speed. In considering the fuller range of the ‘goods’ and ‘bads’ of artificial intelligence, think of the implications of Masayoshi Son’s warning that ‘supersmart robots will outnumber humans and more than a trillion objects will be connected to the internet within three decades.’ Researchers are creating systems that are increasingly able to teach themselves and use their new and expanding ability to improve and evolve. The ability to do this is moving ahead with amazing rapidity. They can achieve great feats, like driving cars and predicting diseases, and some of their makers say they aren’t entirely in control of their creations. Consider the implications of a system that can access, store, manipulate, evaluate, integrate and utilize all forms of knowledge. This has the potential to reach levels so far beyond what humans are capable of that it could end up as an omniscient and omnipresent system. Is AI humanity’s ‘last invention’? Oxford’s Nick Bostrom suggests we may lose control of AI systems sooner than we think. He asserts that our increasing inability to understand what such systems are doing, what they are learning, and how the ‘AI Mind’ works as it further develops could inadvertently cause our own destruction. Our challenges are numerous even if we only had to deal with the expanding capabilities of AI systems based on the best binary technology. The incredible miniaturization and capability shift represented by quantum computers has implications far beyond binary AI. The work on technological breakthroughs such as quantum computers capable of operating at speeds that are multiple orders of magnitude beyond even the fastest current computers is still at a relatively early stage and will take time to develop beyond the laboratory context. If scientists are successful in achieving a reliable quantum computer system, even the best exascale system will pale in comparison with the reduced size and exponentially expanded capacity of such quantum machines. This will create AI/robotics applications and technologies we can now only imagine. Japan, France, China and the U.S. are all pushing towards exascale-level systems with a potential performance of one quintillion operations per second – or a billion billion (10^18) FLOPS if you prefer. If scientists are successful in their quest for quantum computers, the implications are far, far beyond anything that we can envision with the current digital designs. When fully developed, quantum computers will have data handling and processing capabilities far beyond those of current binary systems. 
When this occurs in the commercialized context, predictions about what will happen to humans and their societies are ‘off the board.’”

Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc., said, “The problem is that AI will be used primarily to increase sales of products and services. To this end, it will be manipulative. Applying AI to solve complex logistical problems will truly benefit our society, making systems operate more smoothly, individualizing education, building social bonds and much more. The downside to the above is that it is creating, and will continue to create, echo chambers that magnify ignorance and misinformation. Humans, AI, robots, machines, animals, plants and bio-engineered organisms will become integrated into meta-entities; to an extent, we already are, but as bio-sensors and direct-brain interfaces become ubiquitous, the barriers between entities will be breached. Powerful corporations, governments and bands of rogue hackers will be at the core of deciding the structure and decision-making of these organisms. At this point in time, it seems more likely that the meta-entities will be more competitive than cooperative, but that could change if cooperation becomes necessary for survival. The scale of these meta-organisms, because of the capacities of quantum computing, will be immense, and the timescale at which they operate will so far exceed what humans can perceive that, to the extent we are still separate, a human would be smaller than a micro-organism inside an elephant. We will have no sense of the world in which the meta-entities eventually are operating.”

Kenneth Cukier, senior editor at The Economist and coauthor of “Big Data,” said, “Few will set out to use AI in bad ways (though some criminals certainly will). The majority of institutions will apply AI to address real-world problems effectively, and AI will indeed work for that purpose. But if it is facial recognition, it will mean less privacy and risks of being singled out unfairly. If it is targeted advertising, it will be the risk of losing anonymity. In healthcare, an AI system may identify that some people need more radiation to penetrate the pigment in their skin to get a clearer medical image, but if this means blacks are blasted with higher doses of radiation and are therefore prone to negative side effects, people will believe there is an unfair bias. On global economics, a ‘neocolonial’ or ‘imperial’ commercial structure will form, whereby all countries have to become customers of AI from one of the major powers, America, China and, to a lesser extent, perhaps Europe. Why should quantum computing bring about ethical AI if today’s computing architecture hasn’t been able to? It has not been shown that there is something so special about quantum computing that it should lean in the direction of ethical AI, as opposed to just doing what quantum computing is good at doing generally (answering new sorts of complicated questions at high speed with non-deterministic answers). Humans will not be in the loop for most AI decisions (for example, they aren’t for Google translations). But they will be for sensitive matters, such as bail and parole decisions, etc. Even if the AI renders better decisions, we will want to have the human overseer, just as some e-voting machines create a visible paper record.”

Michael Muller, a researcher for a top global technology company focused on human aspects of data science and ethics and values in applications of artificial intelligence, wrote, “I am concerned about what might be called ‘traveling AI’ – i.e., AI solutions that cross cultural boundaries. Most AI systems are likely to be designed and developed in the individualistic EuroWestern cultures. These systems may be ill-suited – and in fact harmful – to collectivist cultures. The risk is particularly severe for indigenous cultures in, e.g., the Americas, Africa and Australia. How can we design systems that are ethical in the cultural worlds of their users – whose ethics are based on very different values from the individualistic EuroWestern traditions?”

Michael R. Nelson, research associate at CSC Leading Edge Forum, observed, “What gives me hope: Companies providing Machine Learning and Big Data services so all companies and governments can apply these tools. Misguided efforts to make technology ‘ethical by design’ worry me. Cybersecurity making infrastructure work better and more safely is an exciting machine learning application, as are ‘citizen science’ and sousveillance knowledge tools that help me make sense of the flood of data we swim in.”

Olivier MJ Crépin-Leblond, entrepreneur and longtime participant in the activities of ICANN and IGF, said, “What worries me the most is that some actors in non-democratic regimes do not see the same ‘norm’ when it comes to ethics. These norms are built on a background of culture and ideology and not all ideologies are the same around the world. It is clear that today, some nation-states see AI as another means of conquest and establishing their superiority, instead of a means to do good.”

Patrick Larvie, global lead for the workplace user experience team at one of the world’s largest technology companies, commented, “I hope I’m wrong, but the history of the internet so far indicates that any rules around the use of artificial intelligence may be written to benefit private entities wishing to commercially exploit AI rather than the consumers such companies would serve. I can see AI making a positive difference in many arenas – reducing the consumption of energy, reducing waste. Where I fear it will be negative is where AI is being swapped out for human interaction. We see this in the application of AI to consumer products, where bots have begun to replace human agents. Quantum computing is merely a technological vehicle for the goals of the humans who create and use it. As those humans seem primarily interested in using these technologies to generate money for private corporations, it strikes me that this will continue to be the case in the development of AI. Ethics and computing have not evolved together over the past three decades. I hope this changes, but the current direction of technology companies doesn’t suggest this is the case.”

Peter Dambier, a longtime Internet Engineering Task Force participant based in Germany, noted, “Personal AI must be as personal as your underwear. No spying, no malware. AI will develop like humans and should have rights like humans. I do not continue visiting a doctor I do not trust. I do not allow anything or anybody I do not trust to touch my computer. Anything that is not open-source I do not trust. Most people should learn informatics and have one person in the family who understands computers. Quantum computing looks very much like biological computing, as in beta coronavirus. Where is the hardware, where is the software and where is the data? They are all mixed, and you most likely will need a trusted AI to sort it all out. As with nuclear energy and nuclear bombs, we do not have fusion energy, and with current religion in science, we won’t get it in the near future. Quantum computing is no different.”

Peter Levine, professor of citizenship and public affairs at Tufts University, wrote, “The primary problem isn’t technical. AI can incorporate ethical safeguards or can even be designed to maximize important values. The problem involves incentives. There are many ways for companies to profit and for governments to gain power by using AI. But there are few (if any) rewards for doing that ethically.”

Steven Miller, professor emeritus of information systems at Singapore Management University, responded, “We have to move beyond the current mindset of AI being this special thing – almost mystical thing. I wish we would stop using the term AI (though I use it a lot myself), and just refer to it for what it is – pattern-recognition systems, statistical analysis systems that learn from data, logical reasoning systems, goal-seeking systems. Just look at the table of contents for an AI textbook (such as ‘Artificial Intelligence: A Modern Approach,’ Stuart Russell and Peter Norvig, 4th edition, published 2020). Each item in the table of contents is a sub-area of AI, and there are a lot of sub-areas. Suppose I posed a similar question to you in the form of: 1) Will people make use of ethical engineering and technology principles? 2) Will people make use of ethical policy analysis principles? 3) Will people make use of ethical science principles? 4) Will people make use of ethical automation principles? This question is essentially all of these questions. This is not very useful, even though, at this moment, this is a popular way people are thinking about this topic. Of course, there are going to be ethical issues. There are ethical issues associated with any deployment of any engineering and technology system, any automation system, any science effort (especially the application of the science), and/or any policy analysis effort. So, there is nothing special about the fact that we are going to have ethical issues associated with the use of AI-enabled systems. As soon as we stop thinking of AI as ‘special,’ and to some extent magical (at least to the layman who does not understand how these things work, as machines and tools), and start looking at each of these applications, and families of applications as deployments of tools and machines – covering both physical realms of automation and/or augmentation, and cognitive and decision-making realms of automation and/or augmentation – then we can have real discussions. Years back, invariably, there had to have been many questions raised about ‘the ethics of using computers,’ especially in the 1950s, 1960s and 1970s, when our civilisation was still experiencing the possibilities of computerising many tasks for the very first time. AI is an extension of this, though taking us into a much wider range of tasks, and tasks of increasing cognitive sophistication. And then we have the situation of the pendulum swinging to the other extreme. Years back, most people would doubt the ability of an AI-enabled algorithm, embodied in software or a physical machine, to do much of anything that had any reasonable degree of complexity or sophistication – except in highly specialised and/or highly constrained situations. Now, of course, our ability to create machines that can sense, predict, respond and adapt has vastly improved. Even so, most lay people have no idea of just how limited and brittle these capabilities are – even though they are remarkable and far above human capability in certain specific sub-domains, under certain circumstances. What is happening is that most laypeople are jumping to the conclusion that, ‘Because it is an AI-based system, it must be right, and therefore, I should not question the output of the machine, for I am just a mere human.’ So now the pendulum has swung to the other extreme of the layperson assuming AI-enabled algorithms and machines are actually more capable (or more robust and more context-aware) than they actually are. 
And this will lead to accidents, mistakes and problems. So, is it important to think about the ethics of applying AI-based or AI-enabled software systems (or agents) and physical machines (in terms of various robots and autonomous mobile machines)? Obviously. We don’t even need to ask that question. We know this is important. And just like there will be all types of people with all types of motives pursuing their interests in all realms of human activity, the same will be true of people making use of AI-enabled systems for automation, or augmentation or related human support. And some of these people will have noble goals and want to help others. And some of these people will be nefarious and want to gain advantage in ways others might not understand, and there will even be the extreme of some who purposely want to bring harm to others. We saw this with social media. In years, decades and centuries past, we saw this with every technological innovation that appeared, going back to the printing press and even earlier. We will obviously and naturally see this with more widespread deployment of new types of automation and automated augmentation enabled by systems using one or multiple types of ‘AI methods.’ So let’s get practical on this question. Let’s get down to earth on this question. Let’s start getting specific about use cases and situations. One cannot talk in the abstract as to whether an automobile will be used ethically. Or whether a computer will be used ethically. Or whether biology, as an entire field, will be used ethically. One has to get much more specific about classes of issues or types of problems that are related to the usage of these big categories of ‘things.’ So, let’s get real and focused about issues at the intersection of ethics (or responsible usage) and the applications of AI. By the way, none of these issues are new. For decades, without AI (or with limited application of AI), there have been ethical issues (or issues associated with responsible application) of how a bank makes mortgage decisions or loan decisions, or how boundaries are set for election precincts or for where one’s child goes to elementary school. These same issues will exist. They are not new. They are not going away. We have to be thoughtful in terms of how decisions are made – whether they are made solely by humans, solely by machines, or through a combination of humans and machines. My bet: We will gradually ‘get real’ about AI, and even as AI capabilities get better, better and better, and even as more things can be automated, we will come around to the conclusion that there have to be new ways for humans and machines to work together and to co-supervise each other – especially in more complex realms of decisions. Quantum computing capabilities are coming, but ever so gradually. And the ability to economically deploy quantum computing for a wide range of everyday tasks is far, far away from being around the corner. Around the year 1900, it was shown to be possible to move objects at very high speed using magnetics and what is called linear acceleration. To this day, we still do not have high-speed train systems routinely operating between major population hubs moving at super high speeds based on these principles of physics, even though the principles of physics were demonstrated over 100 years ago. It has been possible all this time. It just has not been economic. First, we have to get quantum computing used in practical ways. There are many R&D demonstrations. 
And many, many impressive R&D demonstrations. But so, so few economically viable demonstrations outside of research settings. They will come, undoubtedly. And some faster than others. And many not as fast as people think because of the cost/benefit issues. So before you jump to quantum computing exponentially complicating the core, ever-present ethical issues of using any type of automation (including information automation, and information automation incorporating AI methods), address some of the stepping-stone issues first.”

Su Sonia Herring, a Turkish-American internet policy researcher with Global Internet Policy Digital Watch, said, “AI will be used in questionable ways due to companies and governments putting profit and control in front of ethical principles and the public good. Civil society, researchers and institutions who are concerned about human rights give me hope. Algorithmic black boxes, the digital divide, and the need to control, surveil and profit off the masses worry me the most. I see AI applications making a difference in people’s lives by taking care of mundane, time-consuming work (while making certain jobs obsolete), helping identify trends and informing public policy. Issues related to privacy, security, accountability and transparency in AI tech concern me, while the potential of processing big data to solve global issues excites me.”

Susan Price, user-experience pioneer and strategist and founder of Firecat Studio, noted, “I don’t believe that governments and regulatory agencies are poised to understand the implications of AI for ethics and consumer or voter protection. The questions asked in Congress barely scratch the surface of the issue, and political posturing too often keeps the elected officials charged with oversight from reaching genuine understanding of these complex issues. The strong profit motive for tech companies leads them to resist any such protections or regulation. These companies’ profitability allows them to directly influence legislators through lobbies and PACs, easily overwhelming the efforts of consumer protection agencies and nonprofits, when those are not directly defunded or disbanded. We’re seeing Facebook, Google, Twitter and Amazon resist efforts to produce the oversight, auditing and transparency that would lead to consumer protection. AI is already making lives better. But it’s making corporate profits better at a much faster rate. Without strong regulation, we can’t correct that imbalance, and the processes designed to protect U.S. citizens from exploitation through their elected leaders are similarly subverted by funds from these same large companies.”

Tracey P. Lauriault, a professor expert in critical media studies and big data based at Carleton University, Ottawa, Canada, commented, “Automation, AI and machine learning (ML) used in traffic management, as in changing the lights to improve the flow of traffic, or to search protein databases in big biochemistry analytics, or to help me sort out ideas on what show to watch next or books to read next, or to do land-classification of satellite images, or even to achieve organic and fair precision agriculture, or to detect seismic activity, the melting of polar ice caps, or to predict ocean issues are not that problematic (and its use, goodness forbid, to detect white-collar crime in a fintech context is not a problem). If, however, the question is about social welfare intake systems, biometric sorting, predictive policing and border control, etc., then we are getting into quite a different scenario. How will these be governed and scrutinized? Who will be accountable for decisions, and who will make the decisions about the procurement and use of these technologies or the intelligence derived from them? They will reflect our current forms of governance, and these seem rather biased and unequal. If we can create a more just society, then we may be able to have more-just AI/ML. It is not one technology that will determine whether another technology is ethical. It may augment or preclude possible positive outcomes depending on where, by whom and for what reason the technology is applied. The critical view of technology would posit that this is a social and technical question, not solely a technical question.”

William L. Schrader said, “People in real power are driven by more power and more money for their own use (and for their families and friends). That is the driver. Thus, anyone with some element of control over an AI system will nearly always find a way to use it to their advantage rather than the stated advantage. Notwithstanding all statements by them to do good and be ethical, they will subvert their own systems for their benefit and abuse the populace. All countries will suffer the same fate. Ha! What gives me the most hope? ‘Hope?’ That is not a word I ever use. I have only expectations. I expect all companies will put nice marketing on their AI, such as, ‘We will save you money in controlling your home’s temperature and humidity,’ but they are really monitoring all movements in the home (that is ‘needed in order to optimize temperature’). All governments that I have experienced are willing to be evil at any time, and every time, if they can hide their actions. Witness the 2016-2020 U.S. President Trump. All countries are similar. AI will be used for good on the surface and evil beneath. Count on it. AI does not excite me in the least. It is as dangerous as the H-bomb. Quantum computing and the supporting infrastructure, such as fiber transport, cooling and power distribution, will all be standard by 2030 (if not before). Thus, the largest challenges will be put under attack by such machines. AI is but one of them, and it is obvious since it needs tremendous CPU power to keep up with the demands of humans’ desires. Fortunately for all governments, and most unfortunately for citizens, the IC (intelligence community) in every country will apply that computer power to break all the encryption currently available. Thus, higher encryption will be created, and that will demand more compute power on phones, tablets, laptops and desktops, not to mention the millions of servers that will replace those now in use. Data storage will become enormous. We must begin to think in terms of petabytes of petabytes of petabytes. The internet will never forget any email, phone call or text/SMS ever sent. No matter what, the setting of Orwell’s ‘Nineteen Eighty-Four’ is coming, and it will be much worse. This did not have to happen. The technologists we have leading quantum computing and AI efforts globally will wake up too late. By 2030, AI and quantum computing will be global, and freedom will be lost. Computers will take over and could possibly remove all available digital copies of ‘Nineteen Eighty-Four,’ the movie ‘2001: A Space Odyssey,’ the ‘Matrix’ movies and related sci-fi that warned of all this. Read this in 10 years and then tell me how nuts I am. Humans will lose control within a decade.”

Ian O’Byrne, assistant professor of education at the College of Charleston, said, “AI will mostly be used in questionable ways over the next decade. I fear that the die has been cast as decisions about the ethical components of AI development and use have already been made or should have been made years ago. We already see instances where machine learning is being used in surveillance systems, data collection tools and analysis products. In the initial uses of AI and machine learning, we see evidence that the code and algorithms are being written by small groups that reify their personal biases and the professional needs of corporations. We see evidence of racist and discriminatory mechanisms embedded in systems that will negatively impact large swaths of our population. I believe quantum computing will revolutionize computing power and impact most aspects of society and the ways in which we utilize the internet. For the most part, for most people, the internet is already unintelligible. That is to say that it does not make sense how or why things operate the way that they do. This is due to failures in basic education and preparedness to utilize these tools and spaces. This is also due to the privatization and obfuscation of the motives, code and algorithms that make everything work. We already see instances where AI and machine learning computers are talking to one another, and even this is not interpretable by the engineers who developed the platforms. I think quantum computing will accelerate this and make a new reality where the machines will talk to machines and not slow down to tell the humans what they’re up to.”

Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, responded, “Given the record of tech companies and the government, AI, like other things, will be used unethically. Profit is the motive, not ethics. If there is a way to exploit AI and make money, it will be done at the cost of privacy or anything else. Companies don’t care. They are companies. Quantum computing will make AI more powerful and adaptable. Humans will be in the loop as they choose to be and are allowed to be.”

Brien Hallett, an expert on the ethics of peace and war, based at the Matsunaga Institute for Peace, University of Hawaii-Manoa, said, “Ethical behavior is always a ‘sometimes’ thing. Most people or organizations act ethically; some do not. Ethical issues do not have mechanical or technical solutions. Ethical issues have human solutions. Since AI systems are human constructs, humans ultimately create, implement and control them. Humans ultimately decide whether an AI application is good and should be continued, or bad and discontinued.”

Fred Baker, board member of the Internet Systems Consortium and longtime IETF leader, commented, “I would like to see AI be far more ethical than it is. That said, human nature hasn’t changed, and the purposes to which AI is applied have not fundamentally changed. We may talk about it more, but I don’t think AI ethics will ultimately change. Why would I expect AI to change?”

Mark Maben, a general manager at Seton Hall University, wrote, “It is simply not in the DNA of our current economic and political system to put the public good first. If the people designing, implementing, using and regulating AI are not utilizing ethical principles focused primarily on the public good, they have no incentive to create an AI-run world that utilizes those principles. Having AI that is designed to serve the public good above all else can only come about through intense public pressure. Businesses and politicians often need to be pushed to do the right thing. Fortunately, the United States appears to be at a moment where such pressure and change is possible, if not likely. As someone who works with Gen-Z nearly every day, I have observed that many members of Gen-Z think deeply about ethical issues, including as they relate to AI. This generation may prove to be the difference makers on whether we get AI that is primarily guided by ethical principles focused on the public good.”

Sam Punnett, futurist and retired owner of FAD Research, commented, “System and application design is usually mandated by a business case, not by ethical considerations. Any forms of regulation or guidelines typically lag technology development by many years. The most concerning applications of AI systems are those being employed for surveillance and societal control.”

Gerry Ellis, an accessibility and usability consultant, said, “The concepts of fairness and bias are key to ensure that AI supports the needs of all of society, particularly those who are vulnerable, such as many (but not all) persons with disabilities. Overall, AI and associated technologies will be for the good, but individual organizations often do not look beyond their own circumstances and their own profits. One does not need to look beyond sweatshops, dangerous working conditions and poor wages in some industries to demonstrate this. Society and legislation must keep up with technological developments to ensure that the good of society is at the heart of its industrial practices. If quantum computing simply speeds up decisions based on discriminatory criteria, then it adds to the problem. However, if it allows a greater data set of human experience to be accessed and utilized to reach decisions, then it can help overcome those same discriminations. Now is the time to ensure that the latter is the case because, as both technologies develop, humans will understand less and less of how they come to their conclusions.”

Jaak Tepandi, professor of knowledge-based systems at Tallinn University of Technology, based in Estonia, noted, “In many (though not all) AI applications, ethics is not the primary target.”

Judith Schoßböck, research fellow at Danube University Krems, said, “I don’t believe that most AI systems will be used in ethical ways. Governments would have to make this a standard, but due to the pandemic and economic crisis, they might have other priorities. Implementation and making guidelines mandatory will be important. The most difference will be felt in the area of bureaucracy. I am excited about AI’s prospects for assisted living.”

Leiska Evanson, futurist and consultant, wrote, “Humanity has biases. Humans are building the algorithms around the machine learning masquerading as AI. The ‘AI’ will have biases. It is impossible to have ethical AI (really, ML) if the ‘parent’ is biased. Companies such as banks are eager to use ML to justify not lending to certain minorities who simply do not create profit for them. Governments want to attend to the needs of the many before the few. The current concepts of AI are all about feeding more data to an electromechanical bureaucrat to rubberstamp, with no oversight from humans with competing biases. Elections will be psychologically rigged with online gerrymandering, as echo chambers continue to evolve, shepherded by biased algorithms and AI. What is commonly referred to as AI is not AI, it is machine learning that eats data and spits out patterns – not all of which are useful. Tech companies will use the noise to sell themselves and bury the signals for profit.”

Rebecca Theobald, assistant research professor at the University of Colorado-Colorado Springs, observed, “AI will mostly be used in questionable ways in the next decade because people do not trust the motives of others. Articulate people willing to speak up give me the most hope. People who are scared about their and their families’ well-being worry me the most because they feel there is no other choice but to scramble to support themselves and their dependents. Without some confidence in the climate, economy, health system and societal interaction processes, people will become focused on their own issues, and have less time and capital to focus on others. AI applications in health and transportation will make a difference in the lives of most people. While the world is no longer playing as many geopolitical games over territory, corporations and governments still seek power and influence. AI will play a large role in that. Still, over time, science will win out over ignorance.”

Steve Jones, professor of communication at the University of Illinois at Chicago and editor of New Media and Society, commented, “We’ll have more discussion, more debate, more principles, but it’s hard to imagine that there’ll be – in the U.S. case – a will among politicians and policymakers to establish and enforce laws based on ethical principles concerning AI. We tend to legislate the barn after the horses have left. I’d expect we’ll do the same in this case.”

Andre Popov, a principal software engineer for a large technology company, wrote, “Leaving aside the question of what ‘artificial intelligence’ means, it is difficult to discuss this question. Like any effective tool, ‘artificial intelligence’ has first and foremost found military applications, where ethics is not even a consideration. ‘AI’ can make certain operations more efficient, and it will be used wherever it saves time/effort/money. People have trouble coming up with ethical legal systems; there is little chance we’ll do better with ‘AI.’ Quantum computing will likely become somewhat usable for certain tasks. AI will likely evolve as well. There is no such thing as ‘ethical AI,’ since humans cannot agree on what is ethical.”

Craig Spiezle, managing director and trust strategist for Agelight, and chair emeritus for the Online Trust Alliance, said, “Look no further than data privacy and other related issues such as net neutrality. Industry in general has failed to respond ethically in the collection, use and sharing of data. Many of these same leaders have a major play in AI and I fear they will continue to act in their own self-interests.”

Edward A. Friedman, professor emeritus of technology management at Stevens Institute of Technology, responded, “AI will greatly improve medical diagnostics for all people. AI will provide individualized instruction for all people. I see these as ethically neutral applications.”

Fernando Barrio, a lecturer in business law at Queen Mary University of London and an expert in AI and human rights, responded, “If ethical codes for AI are in place in the majority of cases by 2030, they will purport to be in the public good (which would seem to imply a ‘yes’ to the question as it was asked), but they will not result in public good. The problem is that the question assumes that the sole existence and use of ethical codes would be in the public good. AI, not as the singularity but as machine learning or even deep learning, has an array of positive potential applications, but must not be used to make any decision that has a direct impact on people’s lives. In certain sectors, like the criminal system, it must not be used even in case management, since the inherent and unavoidable bias (either from the data or the algorithmic bias resulting from its own selection or discovery of patterns) means that individuals and their cases are judged or managed not according to the characteristics that make every human unique but according to those characteristics that make that person ‘computable.’ Those who propose the use of AI to avoid human bias, such as those that judges and jurors might implement, tend to overlook – let’s assume naively – that those biases can be challenged through appeals, and they can be made explicit and transparent. The bias inherent in AI cannot be challenged because of, among other things, the lack of transparency and, especially, the insistence of its proponents that the technology can be unbiased.”

Garth Graham, a longtime leader of Telecommunities Canada, said, “The drive in governance worldwide to eradicate the ‘public good’ in favour of ‘market-based approaches’ is inexorable. The drive to implement AI-based systems is not going to place the public good as a primary priority. For example, existing Smart City initiatives are quite willing to outsource the design and operation of complex adaptive systems that learn as they operate civic functions, not recognizing that the operation of such systems is replacing the functions of governance.”

Jay Owens, research director at pulsarplatform.com and author of HautePop, said, “Computer science education – and Silicon Valley ideology overall – focuses on ‘what can be done’ (the technical question) without much consideration of ‘should it be done’ (a social and political question). Tech culture would have to turn on its head for ethical issues to become front-and-centre of AI research and deployment; this is vanishingly unlikely. I’d expect developments in machine learning to continue along the same lines as they have for the last decade – mostly ignoring the ethics question, with occasional bursts of controversy when anything particularly sexist or racist occurs. A lot of machine learning is already (and will continue to be) invisible in people’s everyday lives, but creating process efficiencies – e.g., in weather forecasting, warehousing and logistics, transportation management. Other processes that we might not want to be more efficient (e.g., oil and gas exploration, using satellite imagery and geology analysis) will also benefit. I feel positively towards systems where ML and human decision-making are combined, e.g., systems for medical diagnostics. I would imagine machine learning is used in climate modelling, which is also obviously helpful. Chinese technological development cannot be expected to follow Western ethical qualms, and given the totalitarian (and genocidal) nature of this state, it is likely that it will produce some ML systems that achieve its policing ends. Chinese-owned social apps such as TikTok have already shown racial biases and are likely less motivated to address them. I see no prospect that ‘general AI’ or generalisable machine intelligence will be achieved in 10 years, and even less reason to panic about this (as some weirdos in Silicon Valley do).”

Scott Morgan, senior associate with the Leadership Academy at the Center for Strategic and International Studies, observed, “I am concerned that ethical AI will not be practiced on a large scale. Given the lack of protection offered on Facebook, for example, or facial recognition in general, I doubt that most companies will police themselves appropriately without strict standards and guidelines. So far, there seems to be little appetite for strong public policy on data sharing. Quantum computing may indeed be a viable tool in the future, but in the next 10 years it will most likely not be commercialized to the extent that it would be used for ethical purposes. Initial uses would most likely be for profit, not protection.”

Sharon Sputz, executive director of strategic programs at The Data Science Institute at Columbia University, noted, “In the distant future, ethical systems will prevail, but it will take time.”

Jennifer Young, a JavaScript engineer and user interface/frontend developer, noted, “Capitalism is the systematic exploitation of the many by the few. As long as AI is used under capitalism, it will be used to exploit people. Pandora’s box has already been opened, and it’s unlikely that racial profiling, political and pornographic deepfakes and self-driving cars hitting people will ever go away. What do all of these have in common? They are examples of AI putting targets on people’s backs. AI under capitalism takes exploitation to new heights and starts at what is normally the end-game – death. And it uses the same classes of people as inputs to its functions. People already exploited via racism, sexism and classism are made more abstract entities that are easier to kill, just like they are in war. AI can be used for good. The examples in healthcare and biology are promising. But as long as we’re a world that elevates madmen and warlords to positions of power, its negative use will be prioritized.”

Joel Arthur Barker, futurist, lecturer and author, noted, “The challenge to such a technology can be framed with one simple question: Whose ethics? Will AI systems have values? Sure. But whose? So, the proper answer to the question is: Yes, and then some. So it’s not about AI, it’s about humanity. Again.”

Ronnie Lowenstein, a pioneer in interactive technologies, noted, “AI and the related integration of technologies hold the potential to alter lives in profound ways. I fear the worst but have hope. Two things that bring me hope: 1) Increased civic engagement of youth all over the world – not only do I see youth as hope for the future, but seeing more people listening to youth encourages me that adults are re-examining their beliefs and assumptions, which is so necessary for designing transformative policies and practices. 2) The growth of futures/foresight strategies as fostered by The Millennium Project.”

Joan Francesc Gras, an architect of XTEC active in ICANN, noted, “Will AI be used primarily ethically or questionably in the next decade? There will be some of everything. But ethics will not be the most important value. Why? The desire for power breaks ethics. What gives you more hope? What worries you the most? How do you see AI apps making a difference in the lives of most people? In a paradigm shift in society, AI will help make those changes. When looking at global competition for AI systems, what issues are you concerned about or excited about? I am excited that competition generates quality, but at the same time unethical practices appear. Quantum computing will be the new stage of computing in the development of human life. How will that evolution unfold and when? We have to wait for the more or less commercial appearance of the first quantum computers. Will humans stay informed as AI systems are created and implemented? For a part of society, yes; for another, no. As in all technological advances.”

Arthur Bushkin, writer, philanthropist and social activist, noted, “I worry that AI will not be driven by ethics, but rather by technological efficiency and other factors.”

Dharmendra K. Sachdev, a telecommunications pioneer and founder-president of Spacetel Consultancy LLC, said, “My simplistic definition is that AI can be smart; in other words, like the human mind, it can change directions depending upon the data collected. The question often debated is this: Can AI outsmart humans? My simplistic answer: Yes, it can outsmart some humans, but not its designer. A rough parallel would be: Can a student outsmart his professor? Yes, of course, but he may not outsmart all professors in his field. To summarize my admittedly limited understanding: all software is created to perform a set of functions. When you equip it with the ability to change course depending upon data, we call it AI. If I can make it more agile than my competition, my AI can outsmart him.”

Jan Schaffer, director of J-Lab at American University, observed, “For competitive AI applications, market forces will disrupt efforts to improve the common good, and I don’t expect much transparency around those developing AI applications. Technologists building AI are driven by solving problems for a profit, and that motivation will likely outweigh ethical considerations.”

Jannick Pedersen, a co-founder, CEO and futurist based in Europe, commented, “AI is the next arms race. Though mainstream AI applications will include ethical considerations, a large amount of AI will be made for profit and be applied in business systems, not visible to the users.”

Denise N. Rall, a researcher of popular culture based at a New Zealand university, said, “I cannot envision that AIs will be any different from the people who create and market them. They will continue to serve the rich at the expense of the poor.”

Jeff Gulati, professor of political science at Bentley University, responded, “It seems that more AI and the data it generates could be useful in increasing public safety and national security. In a crisis, we will be more willing to rely on these applications and data. As the crisis subsides, it is unlikely that the structures and practices built during the crisis will go away, and they are unlikely to remain idle. I could see them being used in the name of prevention and leading to further erosion of privacy and civil liberties in general. And, of course, these applications will be available to commercial organizations, which will get to know us more intimately so they can sell us more stuff we don’t really need.”

Marc Brenman, managing member at IDARE, a transformational training and leadership development consultancy based in Washington, DC, wrote, “As societies, we are very weak on morality and ethics generally. There is no particular reason to think that our machines or systems will do better than we do. Faulty people create faulty systems. In general, engineers and IT people and developers have no idea what ethics is. How could they possibly program systems to have what they do not? As systems learn and develop themselves, they will look around at society and repeat its errors, biases, stereotypes and prejudices. We already see this in facial recognition. AI will make certain transactions faster, such as predicting what I will buy online. AI systems may get out of control as they become autonomous. Of what use are humans to them? They may permit mistakes to be made very fast, but the systems may not recognize the consequences of their actions as ‘mistakes.’ For example, if they maximize efficiency, then the Chinese example of social control may dominate. When AI systems are paired with punishment or kinetic feedback systems, they will be able to control our behavior. Imagine a pandemic where a ‘recommendation’ is made to shelter in place or wear a mask or stay six feet away from other people. If people are hooked up to AI systems, the system may give an electrical shock to a person who does not implement the recommendation. This will be like all of us wearing shock collars that some of us use on our misbehaving dogs. As AI systems evolve, humans will be further from the loop. Humans are grain in the system. An example: ‘manned’ fighter jets. Humans cannot withstand G-forces as well as unmanned aircraft, cannot turn as fast, and need elaborate and sophisticated human life-support systems, as well as ejection mechanisms. Drones and autonomous (AI-powered) vehicles don’t need any of these.”

Mark Perkins, an information science professional active in the Internet Society, noted, “AI will be developed by corporations (with government backing) with little respect for ethics. The example of China will be followed by other countries – development of AI using citizens’ data, without effective consent, to develop products not in the interest of those citizens (surveillance, population control, predictive policing, etc.). AI will also be developed to implement differential pricing/offers, further enlarging the ‘digital divide.’ AI will be used by both governments and corporations to take non-transparent, non-accountable decisions regarding citizens. AI will be treated as a ‘black box,’ with citizens having little – if any – understanding of how these systems function, on what basis they make decisions, etc.”

Michael Marien, director of Global Foresight Books, futurist and compiler of the annual list of the best futures books of the year, said, “We have too many crises right now, and many more ahead, where technology can only play a secondary role at best. Technology should be aligned with the UN’s 17 Sustainable Development Goals and should be especially concerned with reducing the widening inequality gap (SDG #10), e.g., in producing and distributing nutritious food (SDG #2).”

Wendell Wallach, ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics, responded, “While I applaud the proliferation of ethical principles, I remain concerned about the ability of countries to put meat on the bone. Broad principles do not easily translate into normative actions, and governments will have difficulty enacting strong regulations. Those that do take the lead in regulating digital technologies, such as the EU, will be criticized for slowing innovation, and this will remain a justification for governments and corporations to slow putting in place any strong regulations backed by enforcement. So far, ethics whitewashing is the prevailing approach among the corporate elite. While there are signs of a possible shift in this posture, I remain skeptical while hopeful. Quantum computing will certainly evolve, but I have seen absolutely nothing to date that would make me believe that the kinds of quantum computing strategies that are advancing will be applied to make computing systems more ethical.”

Ryan Sweeney, director of analytics for Ignite Social Media, commented, “The definition of ‘public good’ is important here. How much does intent versus execution matter? Take Facebook, for instance. They might argue that their AI content review platform is in the interest of ‘public good,’ but it continues to fail. AI is only as ethical and wise as those who program it. One person’s racism is another’s free speech. What might be an offensive word to someone might not even be in the programmer’s lexicon. I’m sure AI will be used with ethical intent, but ethics require empathy. In order to program ethics, there has to be a definitive right and wrong, but situations likely aren’t that simple and require some form of emotional and contextual human analysis. The success of ethical AI execution comes down to whether or not the programmers literally thought of every possible scenario. In other words, AI will likely be developed and used with ethical intent, but it will likely fall short of what we, as humans, can do. We should use AI as a tool to help guide our decisions, but not rely on it entirely to make those decisions. Otherwise, the opportunity for abuse or unintended consequences will show its face. I’m also sure that AI will be used with questionable intent as technology is neither inherently good nor bad. Since technology is neutral, I’m sure we will see cases of AI abused for selfish gains or other questionable means and privacy violations.”

Karen Yesinkus, a creative and digital services professional, observed, “I would like to believe that ethical use of AI will be in place by 2030. However, I don’t think that is likely to be a sure thing. Social media, human resources, customer service and other platforms have – and will continue to have – issues to iron out (bias issues especially). Given the existing political climate on a global scale, it will take more than the next 10 years for AI to shake off such bias.”

Ilana Schoenfeld, an expert in designing online education and knowledge-sharing systems, wrote, “I am frightened and at the same time excited about the possibilities of the use of AI applications in the lives of more and more people. I think AI will be used in both ethical and questionable ways, as there will always be people on both sides of the equation trying to find ways to further their agendas. In order to ensure that the ethical use of AI outweighs its questionable use, we need to get our institutional safeguards right – both in terms of their structures and their enforcement by non-partisan entities.”

The following predictions are from respondents who said ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030

Jerome C. Glenn, co-founder and CEO of the futures-research organization The Millennium Project, wrote, “There were few discussions about the ethical issues in the early spread of the internet in the 1970s and ’80s. Now there are far, far more discussions about AI around the world. However, most do not make clear distinctions among narrow, general and super AI. If we don’t get our standards right in the transition from artificial narrow intelligence to artificial general intelligence, then the emergence of super AI from general AI could have the consequences science fiction has warned about. Elementary quantum computing is already here and will accelerate faster than people think, but the applications will take longer to implement than people think. It will improve computer security, AI and computational sciences, which in turn accelerate scientific breakthroughs and tech applications, which in turn increase both positive and negative impacts for humanity. These potentials are too great for humanity to remain so ignorant. We are in a new arms race for artificial general intelligence and more-mature quantum computing, but like the nuclear race that got agreements about standards and governance (IAEA), we will need the same for these new technologies while the race continues.”

Benjamin Kuipers, a professor of computer science and engineering at the University of Michigan known for research in qualitative simulation, observed, “I choose to believe that things will work out well in the choice we face as a society. I can’t predict the likelihood of that outcome. Ethics is the set of norms that society provides, telling individuals how to be trustworthy, because trust is essential for cooperation, and cooperation is essential for a society to thrive, and even to survive (see Robert Wright’s book ‘Nonzero’). Yes, we need to understand ethics well enough to program AIs so they behave ethically. More importantly, we need to understand that corporations, including non-profits, governments, churches, etc., are also artificially intelligent entities participating in society, and they need to behave ethically. We also need to understand that we as a society have been spreading an ideology that teaches individuals that they should behave selfishly, rather than ethically. We need an ethical society, not just ethical AI. But AI gives us new tools to understand the mind, including ethics. Quantum computing may or may not provide much greater computational power; understanding ethics for AI, corporations, or society is an orthogonal issue.”

Susan Etlinger, industry analyst for Altimeter, wrote, “AI is, fundamentally, an idea about how we can make machines that replicate some aspects of human ability. So, we should expect to see ethical norms around bias, governance and transparency become more common, much the same way we’ve seen the auto industry and others adopt safety measures like seatbelts, airbags and traffic signals over time. But of course people are people, so for every ethical principle there will always be someone who ignores or circumvents it. I’m heartened by some of the work I’ve seen from the large tech companies. It’s not consistent, it’s not enough, but there are enough people who are genuinely committed to using technology responsibly that we will see some measure of positive change. Of course, all claims of AGI – artificial general intelligence – are immediately suspect, not only because it’s still hypothetical at this point, but because we haven’t even ironed out the governance implications of automation. And all bets are off when we are talking about AI-enabled weaponry, which will require a level of diplomacy, policy and global governance similar to nuclear power.”

David Karger, professor at MIT’s Computer Science and Artificial Intelligence Laboratory, said, “The question as framed suggests that AI systems will be thinking by 2030. I don’t believe that’s the case. In 2030, AI systems will continue to be machines that do what their human users tell them to do. So, the important question is whether their human users will employ ethical principles focused primarily on the public good. Since that isn’t true now, I don’t expect it will be true in 2030 either. Just like now, most users of AI systems will be for-profit corporations, and just like now, they will be focused on profit rather than social good. These AI systems will certainly enable corporations to do a much better job of extracting profit, likely with a corresponding decrease in public good, unless the public itself takes action to better align the profit-interests of these corporations with the public good. In great part this requires the passage of laws constraining what corporations can do in pursuit of profit; it also means the government quantifying and paying for public goods so that companies have a profit motive in pursuing them. Even in this time of tremendous progress, I find little to excite me about AI systems. In our frenzy to enhance the capabilities of machines, we are neglecting the existing and latent capabilities of human beings, where there is just as much opportunity for progress as there is in AI. We should be directing far more attention to research on helping people learn better, helping them interact online better, and helping them make decisions better. Quantum computing is, in the public, being used as shorthand for ‘really fast computers.’ But that’s not what it is. Quantum computers are highly specialized devices that are good at very specific tasks such as factoring. There’s a small chance these computers will have a significant impact on cryptography by 2030 (I doubt it), and I see almost no chance that they will improve our ability to solve complex machine learning problems, much less have any impact on our understanding of knowledge representation or creativity or any of the other key attributes of natural intelligence that we have been trying to understand and emulate in machines for decades. Finally, even if we do somehow create super-fast computers, they still won’t help us with the key challenge in the design of ethical AI, which is to understand ethics. After thousands of years, this is something people are still arguing about. Having faster computers won’t change the arguments one bit.”

Stowe Boyd, consulting futurist expert in technological evolution and the future of work, noted, “I have projected a social movement that would require careful application of AI as one of several major pillars. I’ve called this the Human Spring, conjecturing that a worldwide social movement will arise in 2023, demanding the right to work and related social justice issues, a massive effort to counter climate catastrophe, and efforts to control artificial intelligence. AI, judiciously used, can lead to breakthroughs in many areas. But widespread automation of many kinds of work – unless introduced gradually, and not as fast as profit-driven companies would like – could be economically destabilizing. I’m concerned that AI will most likely be concentrated in the hands of corporations that are in the business of concentrating wealth for their owners, and not primarily driven by bettering the world for all of us. AI applications in narrow domains that are really beyond the reach of human cognition – like searching for new ways to fold proteins to make new drugs or optimizing logistics to minimize the number of miles that trucks drive every day – are sensible and safe uses of AI. But AI directed toward making us buy consumer goods we don’t need, or surveilling everyone moving through public spaces to track our every move, well, that should be prohibited.”

Thomas Birkland, professor of public and international affairs at North Carolina State University, wrote, “AI will be informed by ethical considerations in the coming years because the stakes for companies and organizations making investments in AI are too high. However, I am not sure that these ethical considerations are going to be evenly applied, and I am not sure how carefully these ethical precepts will be adopted. What gives me the most hope is the widespread discussions that are already occurring about ethics in AI – no one will be caught by surprise by the need for an ethical approach to AI. What worries me the most is that the benefits of such systems are likely to flow to the wealthy and powerful. For example, we know that facial-recognition software, which is often grounded in AI, has severe accuracy problems in recognizing ‘non-white’ faces. This is a significant problem. AI systems may be able to increase productivity and accuracy in systems that require significant human intervention. I am somewhat familiar with AI systems that, for example, can read x-rays and other scans to look for signs of disease that may not be immediately spotted by a physician or radiologist. AI can also aid in pattern recognition in large sets of social data. For example, AI systems may aid researchers in coding data relating to the correlates of health. What worries me is the uncritical use of AI systems without human intervention. There has been some talk, for example, of AI applications to warfare – do we leave weapons targeting decisions to AI? This is a simplistic example, but it illustrates the problem of ensuring that AI systems do not replace humans as the ultimate decision-maker, particularly in areas where there are ethical considerations. All this being said, the deployment of AI is going to be more evolutionary than revolutionary, and the effects on our daily lives will be subtle and incremental over time. It is very likely that quantum computing will make significant gains. I am not certain that its development will, in and of itself, lead to ‘more-ethical’ AI systems. Quantum computing, like AI, is a tool – the degree to which the tool is used ethically is up to the human user of the system. With that in mind, humans will remain in the loop as AI systems are created and implemented, so the humans who create and use these systems need to be trained in ethics generally, and in ethical computing in particular.”

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, observed, “There will be a good-faith effort, but I am skeptical that the good intentions will necessarily result in the desired outcomes. Machine learning is still in its early days and our ability to predict various kinds of failures and their consequences is limited. The ML design space is huge and largely unexplored. If we have trouble with ordinary software whose behavior is at least analytic, ML is another story. And our track record on normal software stinks (buggy code!). We are, however, benefiting enormously from many ML applications including speech recognition and language translation, search efficiency and effectiveness, medical diagnosis, exploration of massive data to find patterns, trends and unique properties (e.g., pharmaceuticals). Discovery science is benefiting (e.g., finding planets around distant stars). Pretty exciting stuff. There is some evidence that quantum methods may be applicable to machine learning systems, for optimization for example. Early days yet.”

John Verdon, a retired complexity and foresight consultant, said, “Ultimately what is most profitable in the long run is a well-endowed citizenry able to pursue their curiosities and expand their agency. To enable this condition will require the appropriate legislative protections and constraints. The problems of today and the future are increasing in complexity. Any systems that seek monopolistic malevolence will essentially act like a cancer killing its own host. Distributed-ledger technologies may well enable the necessary ‘accounting systems’ to both credit creators and users of what has been created while liberating creations to be used freely (like both free beer and liberty). This enables a capacity to unleash all human creativity to explore the problem and possibility space. Slaves don’t create a flourishing society, only a static and fearful elite increasingly unable to solve the problems they create. New institutions like an ‘auditor general of algorithms’ (to oversee that algorithms and other computations actually produce the results they intend, and to offer ways to respond and correct) will inevitably arise – just like our other institutions of oversight. Quantum computing will evolve to assist humans in building ethical AI and other systems dealing with overwhelming complexity – but not in the next decade. And in order for these systems to be enacted, considerable public oversight will be required.”

Ian Thomson, a pioneer developer of the Pacific Knowledge Hub, observed, “It will always be the case that new uses of AI will raise ethical issues, but over time, these issues will be addressed so that the majority of uses will be ethical. Good uses of AI will include highlighting trends and developments that we are unhappy with. Bad uses will be around using AI to manipulate our opinions and behaviors for the financial gain of those rich enough to develop the AI and to the disadvantage of those less well-off. I am excited by how AI can help us make better decisions, but I am wary that it can also be used to manipulate us.”

Ibon Zugasti, futurist, strategist and director with Prospektiker, wrote, “The use of technologies, ethics and privacy must be guaranteed through transparency. What data will be used and what will be shared? There is a need to define a new governance system for the transition from current narrow AI to the future general AI. Artificial intelligence will drive the development of quantum computing, and then quantum computing will further drive the development of artificial intelligence. This mutual acceleration could grow beyond human control and understanding. Scientific and technological leaders, advanced research institutes and foundations are exploring how to anticipate and manage this issue.”

James Morris, professor of computer science at Carnegie Mellon, noted, “I had to say ‘yes.’ The hope is that engineers get control away from capitalists and rebuild technology to embody a new ‘constitution.’ I actually think that’s a long-shot in the current atmosphere. Ask me after November. If the competition between the U.S. and China becomes zero-sum, we won’t be able to stop a rush towards dystopia.”

Andy Opel, professor of communications at Florida State University, said, “Because AI is likely to gain access to a widening gyre of personal and societal data, constraining that data to serve a narrow economic or political interest will be difficult.”

Bill Dutton, professor of media and information policy at Michigan State University, observed, “AI is not new and has generally been supportive of the public good, such as in supporting online search engines. The fact that many people are discovering AI as some new development during a dystopian period of digital discourse has fostered a narrative about evil corporations challenged by ethical principles. This technologically deterministic good vs. evil narrative needs to be challenged by critical research. Technical advances will not determine the ethical role of digital media.”

Gregory Shannon, chief scientist at the CERT software engineering institute at Carnegie Mellon University, responded, “There will be lots of unethical applications as AI matures as an engineering discipline. I expect that to improve. Just like there are unethical uses of technology today, there will be for AI. AI provides transformative levels of efficiency for digesting information and making pretty-good decisions. And some will certainly exploit that in unethical ways. However, the ‘demand’ from the market (most of the world’s population) will be for ethical AI products and services. It will be bumpy, and in 2030 we might be halfway there. The use of AI by totalitarian and authoritarian governments is a clear concern. But I don’t expect the use of such tech to overcome the yearning of populations for agency in their lives, at least after a few decades of such repression. Unethical systems/solutions are not trustworthy. So, they can only have narrow application. Ethical systems/solutions will be more widely adopted, eventually. Quantum computing and AI ethics seem very orthogonal. QC in 2030 might make building AI models/systems faster/more efficient, but that doesn’t impact ethics per se. If anything, QC could make AI systems less ethical because it will still take significant financial resources in 2030 for QC. So a QC-generated model might be able to ‘hide’ features/decisions that non-QC capable users/inspectors would not see/observe due to their limited computational resources.”

David Krieger, director of the Institute for Communication and Leadership, based in Switzerland, commented, “It appears that in the wake of the pandemic, we are moving faster towards the data-driven global network society than ever before. Some have predicted that the pandemic will end the ‘techlash,’ since what we need to survive is more information and not less about everyone and everything. This information must be analyzed and used as quickly as possible, which spurs on investments in AI and big data analytics. Calls for privacy, for regulation of tech giants and for moratoriums on the deployment of tracking, surveillance and AI are becoming weaker and losing support throughout the world. Perhaps traditional notions of civil liberties need to be revised and updated for a world in which connectivity, flow, transparency and participation are the major values. Computing is evolving in the direction of a datafied society, that is, a society in which decisions on all levels in all areas, business, healthcare, education, etc., will be evidence-based and not based on gut feelings, bias, intuition, group pressure or experience. Data-based decision-making will rely upon datafication, that is, the complete surveillance of all things and people. Humans will always be in the loop because the world will be a loop, or rather, a global network society. AI is a part of the network, but the network will make the decisions and not any individual humans.”

J. Francisco Álvarez, professor of logic and philosophy of science at UNED, the National University of Distance Education in Spain, noted, “Concerns about the use of AI and its ethical aspects will be very diverse and will produce very uneven effects between public good and a set of new, highly marketable services. We will have to expand the spheres of personal autonomy and the recognition of a new generation of rights in the digital society. It is not enough to apply ethical codes in AI devices. Instead, a new ‘constitution’ must be formulated for the digital age and its governance. Concern for the ethical components of action is far removed from technological implementation. It is a human decision that is implemented in a technological medium. There is the possibility of an expansion of autonomous devices as a result of the rules that we have given them. Public discussion on the ethical aspects of our human interaction and proposing collective decisions of human agents to govern the devices is important.”

Donald A. Hicks, a professor of public policy and political economy at the University of Dallas whose research specialty is technological innovation, observed, “AI/automation technologies do not assert themselves. They are always invited in by investors, adopters and implementers. They require investments to be made by someone, and those investments are organized around the benefits of costs cut or reduced and/or possibilities for new revenue flows. New technologies that cannot offer those prospects remain on the shelf. So, inevitably, new technologies like AI only proliferate if they are ‘pulled’ into use. This gives great power to users and applications that look to be beneficial to a widening user base. This whole process ‘tilts’ toward long-term ethical usage. I know of no technology that endured while delivering unwanted/unfair outcomes broadly. Consider our nation’s history with slavery. Gradually and inevitably, as the agricultural economies of Southern states became industrialized, it no longer made sense to use slaves. It was not necessary to change men’s hearts before slavery could be abandoned. Economic outcomes mattered more, although eventually hearts did follow. But again, the transitions between old and new do engender displacements via turnover and replacement, and certain people and places can feel – and are – harmed by such technology-driven changes. But their children are likely thankful that their lives are not simply linear extensions of the lives of their parents. To date, AI and automation have had their greatest impacts in augmenting the capabilities of humans, not supplanting them. The more sophisticated AI applications and automation become, the more we appreciate the special capabilities in human beings that are of ultimate value and that are not likely to be captured by even the most sophisticated software programs. I’m bullish on where all of this is leading us because I’m old enough to compare today with yesterday. We have to be careful about designating something as unethical. After all, any change that affects someone in a way they did not anticipate or value can be considered unethical because it is disruptive. But what may have been unwanted and unanticipated at the individual level could well be considered a welcome outcome for the larger society. Quantum computing reflects a sequence of technological and conceptual breakthroughs driven by human curiosity and intention. It is not a spontaneous journey, even though its advancements will take us into terra incognita. Among the main benefits of AI is greater precision and control over unwanted circumstances and a greater ability to extract value from vastly larger information ‘lakes’ well beyond a human’s capability to do so. Will humans still be in the loop? Well, considering the 20th-century miracles of flight, wireless communication, genetic therapies, robotic surgery, biologic therapeutics, etc. are we ‘in the loop’ now? At the level of the individual, we are all more dependent on systems that we use and are dependent on that which we cannot recreate or even explain. We can engage as users, but not creators, in the layered technologies on which we depend. True, there are scientists, engineers and artists who can teach us about their special worlds, but none of them escapes being dependent on all the other facets of life for which they, too, are simply bystanders. 
The real test of successful technological evolution is the division of labor and mutual interdependence that allows us to benefit from the specialized knowledge of a precious few. Knit that all together and we have modern life as we know it. Why would we expect that to change just because the underlying technologies continue to evolve?”

Edson Prestes, a professor of computer science at Federal University of Rio Grande do Sul, Brazil, commented, “By 2030, technology in general will be developed taking into account ethical considerations. We are witnessing a huge movement these days. Most people who have access to information are worried about the misuse of technology and its impact on their own lives. Campaigns to ban lethal weapons powered by AI are growing. Discussions on the role of technology and its impact on jobs are also growing. People are becoming more aware about fake news and proliferation of hate speech. All these efforts are creating a massive channel of information and awareness. Some communities will be left behind, either because some governments want to keep their citizens in poverty and consequently keep them under control, or because they do not have enough infrastructure and human and institutional capacities to reap the benefits of the technological domain. In these cases, efforts led by the United Nations are extremely valuable. The UN Secretary-General António Guterres is a visionary in establishing the High-Level Panel on Digital Cooperation. Guterres used the panel’s recommendations to create a roadmap with concrete actions that address the digital domain in a holistic way, engaging a wide group of organisations to deal with the consequences emerging from the digital domain. There are enthusiastic groups that consider quantum computing to be the key for creating general artificial intelligence. I’m a bit more resistant to all these ideas. Ethical AI is more about humans than about technology. To build ethical AI, we need to understand ourselves, understand others, we need to connect with others, trying to understand their needs, fears, values, customs, and this, in my view, is independent of any advances on quantum computing. How do many of us practice empathy or compassion? If we do not, how can we expect a machine to do it or conclude that a machine can do it? In my view, quantum computing allows the implementation of a non-deterministic Turing machine, i.e., following different lines of processing at same time. We can speed up the processing of certain jobs as if we have several computers running at same time, but the computers continue to be computers and nothing more. Brain processes involve different cognitive activities running at same time and sharing information. When you see a flower, you process the image of the flower, you recover positive and negative feelings of a flower, you recover memories, sounds, smells, and combine all of them producing a reaction to that event. How does the brain do these combinations? No one knows. Thus, in my view, we have a tool but not the answer for the very basic question: What does it mean to be ethical or human? Regarding, specifically, humans-in-the-loop, this question worries me a lot. Humans should be always in or on the loop, mainly in critical scenarios that can cause negative impacts on people’s lives. All our decisions are taken considering myriad information, and technological systems are much more limited in this sense.”

Eric Knorr, pioneering technology journalist and editor in chief of IDG, commented, “First, only a tiny slice of AI touches ethics – it’s primarily an automation tool to relieve humans of performing rote tasks. Current awareness of ethical issues offers hope that AI will either be adjusted to compensate for potential bias or sequestered from ethical judgment. In general, ethical judgment should not be left to machines. This issue is overblown. Yes, computing power an order of magnitude greater than currently available could raise the possibility of some emulation of general intelligence at some point. But how we apply that is up to us.”

Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, said, “There will be a push for ethical AI during the next 10 years, but good intentions alone do not morality make. AI is complicated, as is ethics, and combining the two will be a very complex problem indeed. We are likely to see quite a few clumsy attempts to create ethical AI-systems, with the attendant problems. It is also important to take cultural and geopolitical issues into consideration. There are many interpretations of ethics, and people put different value on different values, so that, e.g., a Chinese ethical AI may well function quite differently – and generate different outcomes – from, e.g., a British ethical AI. This is not to say that one is better than the other, just that they may be rather different. Quantum computing is likely to help us make great strides in many fields, but ethics is about something far more complex than computing the ‘right’ answer at terrific speeds.”

Jim Spohrer, director of cognitive open technologies and the AI developer ecosystem at IBM, noted, “The Linux Foundation Artificial Intelligence Trusted AI Committee is working on this. The members of that community are taking steps to put principles in place and collect examples of industry use cases. The contribution into Linux Foundation AI (by major technology companies) of the open-source project code for Trusted AI for AI-Fairness, AI-Robustness and AI-Explainability on which their products are based is a very positive sign.”

Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology and expert in artificial intelligence, said, “It would be unethical to develop systems that do not abide by ethical codes, if we can develop those systems to be ethical. Europe will insist that systems will abide by ethical codes. Since Europe is a big market, since developing systems that abide by ethical code is not a trivial endeavor, and since the big tech companies (except for Facebook) by and large want to do good (well, their employees by and large want to work for companies that do good) they will develop their systems in a way that they abide by ethical codes. I very much doubt that the big tech companies are interested (or are able to find young guns) in maintaining an unethical version of their systems. AI systems, in concert with continued automation, including the Internet of Things will bring many conveniences to people’s lives. Think along the lines of personal assistants that manage various aspects of people’s lives. Up until COVID-19, I would have been concerned about bad actors using AI to do harm. I am sure that right now bad actors are probably hiring virologists to develop viruses with which they can hold the world hostage. I am very serious that rogue leaders are thinking about this possibility. The AI community in the U.S. is working very hard to establish a few large research labs. This is exciting as it enables the AI community to develop and test systems at scale. Many good things will come out of those initiatives. Finally, let us not forget that AI systems are engineered systems. They can do many interesting things, but they cannot think or understand. While they can be used to automate many things and while people by and large are creatures of habit, it is my fond hope that we will rediscover what it means to be human. Quantum computing is still in its infancy. In 15 or 20 years, yes, we can build real systems. I don’t think we will be able to build usable systems in 10 years. Furthermore, quantum computing is still a computational system. It is the software or in case of statistical machine learning, the data that makes a system ethical or not. You could build a computer out of water pipes. Such a system would still be just a computer, although not as fast as a modern computer.”

Moira de Roche, chair of the International Federation of Information Processing’s professional practice sector, commented, “There is a trend towards ethics, especially in AI applications. AI will continue to improve people’s lives in ways we cannot even anticipate presently. Pretty much every technology we use on a day-to-day basis employs AI (email, mobile phones, etc.). In fact, it worries me that AI is seen as something new, whereas we have used it on a daily basis for a decade or more. Perhaps the conversation should be more about robotics and automation than AI, per se. I am concerned that there are so many codes of ethics. There should not be so many (at present there are several). I worry that individuals will choose the code they like the best – which is why a plethora of codes is dangerous.”

Nigel Cameron, president emeritus at the Center for Policy on Emerging Technologies, commented, “The social and economic shifts catalyzed by the Covid plague are going to bring increasing focus to our dependence on digital technologies, and with that focus will likely come pressure for algorithmic transparency and concerns over equity and so forth. Anti-trust issues are highly relevant, as is the current pushback against China and, in particular, Huawei (generally I think a helpful response). The single most important issue is keeping humans in the loop in a context in which for various reasons (cost being one of course) the pressure is always going to be to go fully autonomous. Quantum has been talked about for a long time now, and it’s hard to predict the timeline even though it does seem inevitable that quantum systems will dominate. It’s a tantalizing idea that we can just build ethics into the algorithms. Some years back, the Department of Defense issued a strange press release in defense of robotic warfare that suggested it would be more humane, since the Geneva Conventions could be built into the programming. I’m fascinated, and horrified, by the experiences of military drone operators playing de facto video games before going home for dinner after taking out terrorists on the other side of the world. A phrase from a French officer during our high-level, super-safe (for the U.S.) bombing of Serbia comes to mind: If a cause is worth killing for, it has to be worth dying for. The susceptibility of our democracy to various forms of AI-related subversion could lead to a big backlash. I remember the General Counsel of Blackberry, a former federal prosecutor, saying that ‘we have yet to have our cyber 9/11.’ When I chaired the GITEX conference in Dubai some years back, we had a huge banner on the stage that said, I think, ‘Our Digital Tomorrow.’ In my closing remarks I suggested that unless we get a whole lot more serious about cybersecurity, one big disaster – say, a hacked connected car system that leaves 10,000 dead and 100,000 injured by making a million cars turn left at 8:45 one morning – will give us an Analog Tomorrow instead.”

Pamela McCorduck, writer, consultant and author of several books, including “Machines Who Think,” wrote, “Many efforts are underway worldwide to define ethical AI, suggesting that this is already considered a grave problem worthy of intense study and legal remedy. Eventually, a set of principles and precepts will come to define ethical AI, and I think they will define the preponderance of AI applications. But you can be assured that unethical AI will exist, be practiced and sometimes go unrecognized until serious damage is done. Much of the conflict between ethical and unethical applications is cultural. In the U.S. we would find the kind of social surveillance practiced in China to be not only repugnant, but illegal. It forms the heart of Chinese practices. In the short term, only the unwillingness of Western courts to accept evidence gathered this way (as inadmissible) will protect Western citizens from this kind of thing, including the ‘social scores’ the Chinese government assigns to its citizens as a consequence of what surveillance turns up. I sense more everyday people will invest social capital in their interactions with AIs, out of loneliness or for other reasons. This is unwelcome to me, but then I have a wide social circle. Not everybody does, and I want to resist judgment here.”

Paul Epping, chairman and co-founder of XponentialEQ and well-known keynote speaker on exponential change, wrote, “The questions you ask here require a thesis. What is the value of mentioning developments without being able to add enough context? The power of AI and machine learning (and deep learning) is underestimated. The speed of advancement is incredible and will lead to the automation of virtually all processes (blue- and white-collar jobs). In healthcare: early detection of diseases, fully AI-driven triage, including info from sensors (on or inside your body), leading to personalised health (note: not personalised medicine). AI will help to compose the right medication for you, and not the generic stuff that we get today, surpassing what the pharmaceutical industry is doing. AI is helping to solve the world’s biggest problems, finding new materials, running simulations, digital twins (including personal digital twins that can be used to run experiments in the case of treatments). The growing dependency on digital technology will create a paradise for hackers, so cybersecurity will be one of the top priorities, costing society trillions. It can eventually evolve into a ‘symmetrical escalation war’ between AI and ML and less control by humans because we can’t oversee the entire spectrum anymore. My biggest concern: How are we going to solve the control problem? (Read Stuart Russell’s ‘Human Compatible,’ follow the Future of Life Institute and consider the problem of biased data and algorithms.) Computers (including QC) are good at providing answers, but not that good at asking questions. Who is asking the questions, and how relevant are they? Are they really ‘ethical by design,’ or designed to benefit someone? I hope that humans will still be in the loop. The role we will have should be clear, as should how we can build that into algorithms. AGI (artificial general intelligence) will find ways to benefit the goals of that AGI, no matter what happens, circumventing the ‘ethical’ principles that it learned.”

Peter B. Reiner, professor of neuroethics at the University of British Columbia, said, “As AI-driven applications become ever more entwined in our daily lives, there will be substantial demand from the public for what might be termed ‘ethical AI.’ Precisely how that will play out is unknown, but it seems unlikely that the present business model of surveillance capitalism will hold, at least not to the degree that it does today. I expect that clever entrepreneurs will recognize opportunities and develop new, disruptive business models that can be marketed both for the utility of the underlying AI and the ethics that everyone wishes to see put into place. An alternative is that a new regulatory regime emerges, constraining AI service providers and mandating ethical practice.”

Randall Mayes, a technology analyst at TechCast Global, observed, “The standardization of AI ethics concerns me because the American, European and Chinese governments and Silicon Valley companies have different ideas about what is ethical. How AI is used will depend on your government’s hierarchy of values among economic development, international competitiveness and social impacts. Quantum computing theoretically could detect patterns in data that humans and current deep learning methods cannot. Humans cannot teach AI to find patterns that we cannot detect; rather, we teach it how to learn on its own. Deep learning can detect correlation, not causation. The major AI bottlenecks are reasoning ability and understanding brain complexity in general, quality of data, causation, narrow AI, the transfer problem (each problem has unique data and does not apply to other problems) and the black box (hidden biases). Since humans have difficulty creating media content and deep learning methods that are unbiased, I am unaware of any reasons to believe that humans will create quantum computing methods that are unbiased.”

Ray Schroeder, associate vice chancellor of online learning, University of Illinois-Springfield, responded, “One of the aspects of this topic that gives me the most hope is that while there is the possibility of unethical use of AI, the technology of AI can also be used to uncover those unethical applications. That is, we can use AI to help patrol unethical AI. I see that artificial intelligence will be able to bridge communications across languages and cultures. I see that AI will enable us to provide enhanced telemedicine and agricultural planning. I see that AI will enable us to more clearly predict vulnerabilities and natural disasters so that we can intervene before people are hurt. I am most excited about quantum computing supercharging AI to provide awesome performance in solving our world’s problems. I am further excited about the potential for AI networking to enable us to work across borders to benefit more of the world’s citizens. The power of quantum computing will enable AI to bridge the interests of the few to serve the interests of the many. These values will become part of the AI ethos, built into the algorithms of our advanced programs. Humans will continue to be part of the partnership with the technologies as they evolve – but this will become more of an equal partnership with technology rather than humans micromanaging technology as we have in the past.”

Andrea Romaoli Garcia, an international lawyer actively involved with multistakeholder activities of the International Telecommunication Union and Internet Society, observed, “We see the pros and cons of AI affecting human life. AI can improve human health, and autonomous vehicles promise comfort and safety. But AI is also implemented in the violation of privacy and in the way autonomous drones have chosen young civilian men as targets. Hope falls on sharing this technology and access to the poorest so that we can reduce hunger and poverty. AI can be implemented to audit government activities and allow popular participation over actions such as the use of public funds. Technology can extend access to high-quality education to include the vulnerable and the excluded population who may find transportation inaccessible or find that cost of attending onsite classes reduces their access to schools and universities. Then again, technology can create threats for young people. Children who spend a lot of time online can be exposed to sexual exploitation and pedophilia. The indiscriminate design of technologies is worrisome because profits speak louder than human lives and the public interest. I define ethics as all possible and available choices where the conscience establishes the best option. Values and principles are the limiters that guide the conscience into this choice alongside the purposes, thus ethics is a process. In terms of ethics for AI, the process for discovering what is good and right means choosing among all possible and available applications to find the one that best applies to the human-centred purposes, respecting all the principles and values that make human life possible. The human-centered approach in ethics was first described by the Greek philosopher Socrates in his effort to turn attention from the outside world to the human condition. AI is a cognitive technology that allows greater advances in health, economic, political and social fields. It is impossible to deny how algorithms impact human evolution. Thus, an ethical AI requires that all instruments and applications place humans at the center. Despite the fact that there are some countries building ethical principles for AI, there is a lack of any sort of international instrument that covers all of the fields that guide the development and application of AI in a human-centred approach. AI isn’t model-driven; it has a data-centred approach for highly scalable neural networks. Thus, the data should be selected and classified through human action. Through this human action, socio-cultural factors are imprinted on the behavior of the algorithm and machine learning. This justifies the concerns about ethics and also focuses on issues such as freedom of expression, privacy and surveillance, ownership of data and discrimination, manipulation of information and trust, environmental and global warming and also on how the power will be established among society. These are factors that determine human understanding and experience. All instruments that are built for ethical AI have different bases, values and purposes depending on the field to which they apply. The lack of harmony in defining these pillars compromises ethics for AI and affects human survival. It could bring new invisible means of exclusion or deploy threats to social peace that will be invisible to human eyes. 
Thus, there is a need for joint efforts gathering stakeholders, civil society, scientists, governments and intergovernmental bodies to work toward building a harmonious ethical AI that is human-centred and applicable to all nations. 2030 is 10 years from now. We don’t need to wait 10 years – we can start working now. 2020 presents several challenges in regard to technology’s impact on people. Human rights violations are being exposed and values are under threat. This scenario should accelerate efforts at international cooperation to establish a harmonious ethical AI that supports human survival and global evolution. Quantum computers will evolve to process huge amounts of data. Classical computers have limitations, and quantum computers are necessary to allow the ultimate implementations of AI and machine learning. However, ethical regulation and laws are not keeping up with advances in AI and are not ready for the arrival of quantum computing. Quantum’s capability to process huge volumes of data will create a huge profit center for corporations, and this has typically led them to move quickly and not always ethically. It also allows bad actors to operate freely. Ethical AI should be supported by strong regulatory tools that encourage safe technological advancement. If not, we will face new and dangerous cyber threats.”

Erhardt Graeff, a researcher expert in the design and use of digital technologies for civic and political engagement, noted, “Ethical AI is boring directly into the heart of the machine learning community and, most importantly, influencing how it is taught in the academy. By 2030, we will have a generation of AI professionals that will see ethics as inseparable from their technical work. Companies wishing to hire these professionals will need to have clear ethical practices built into their engineering divisions and strong accountability to the public good at the top of their org charts. This will certainly describe the situation at the major software companies like Alphabet, Apple, Facebook, Microsoft and Salesforce, whose products are used on a massive scale. Hopefully, smaller companies and those that don’t draw the same level of scrutiny from regulators and private citizens will adopt similar practices and join ethical AI consortia and find themselves staffed with upstanding technologists. One application of AI that will touch nearly all sectors and working people is in human resources and payroll technology. I expect we will see new regulation and scrutiny of those tools and the major vendors that provide them.   I caveat my hopes for ethical AI with three ways unethical AI will persist. 1) There will continue to be a market for unethical AI, especially the growing desire for surveillance tools from governments, corporations and powerful individuals. 2) The democratization of machine learning as APIs, simple libraries and embedded products will allow many people who have not learned to apply this technology in careful ways to build problematic tools and perform bad data analysis for limited, but meaningful distributions that will be hard to hold to account. 3) A patchwork of regulations across national and international jurisdictions and fights over ethical AI standards will undermine attempts to independently regulate technology companies and their code through auditing and clear mechanisms for accountability.”

Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Wellville, responded, “With luck, we’ll increase transparency around what AI is doing (as well as around what people are doing), because it will be easier to see the impact of decisions made by both people and algorithms. Cue the research about what time of day you want to go to trial (e.g., before or after the judge has lunch). The more we use AI to reveal such previously hidden patterns, the better for us all. So, a lot depends on society’s willingness to look at the truth and to act/make decisions accordingly. With luck, a culture of transparency will cause this to happen. But history shows that a smooth trajectory towards enlightenment is unlikely! We need outfits like Pew to hold us accountable. This really doesn’t depend on quantum computing at all. It depends on what we do with it (it’s like asking whether larger pots will make us better cooks). Again, it’s really important to focus on transparency/explainability.”

Ethan Zuckerman, director of MIT’s Center for Civic Media and associate professor at the MIT Media Lab, commented, “The activists and academics advocating for ethical uses of AI have been remarkably successful in having their concerns aired even as harms of misused AI are just becoming apparent. The campaigns to stop the use of facial recognition because of racial biases are a precursor of a larger set of conversations about serious ethical issues around AI. Because these pioneers have been so active in putting AI ethics on the agenda, I think we have a rare opportunity to deploy AI in a vastly more thoughtful way than we otherwise might have.”

Glenn Grossman, a consultant in banking analytics at FICO, noted, “It’s necessary for leaders in all sectors to recognize that AI is just the growth of mathematical models and the application of these techniques. We have model governance in most organizations today. We need to keep the same safeguards in place. The challenge is that many business leaders are not good at math! They cannot understand the basics of predictive analytics, models and such. Therefore, they hear ‘AI’ and think of it as some new, cool, do-it-all technology. It is simply math at the heart of it. Humans govern how they use math. So, we need to apply ethical standards to monitor and calibrate. AI is a tool, not a solution for everything. Just like the PC ushered in automation, AI can usher in automation in the area of decisions. Yet it is humans who use these decisions and design the systems. So, we need to apply ethical standards to any AI-driven system.”

Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science, said, “AI and its successors are potentially so powerful that we have no choice but to ensure attention to ethics. The alternative would be to hand over control of our way of life to a class of developers and implementors that are either focussed on short-term and shortsighted interests or who have some form of political agenda, particularly ‘state actors.’ The big question is how to ensure this. A regulatory framework is part of the answer, but I suspect that a major requirement is to change the culture of the AI industry. Rather than developing technologies simply for the sake of it, or to publish clever papers, there needs to be a cultural environment in which developers see it as an inherent part of their task to consider the potential social and economic impacts of their activities, and an employment framework that does not seek to repress this. Perhaps moral and political philosophy should be part of the education of AI developers. What we mean by AI, what expectations we have of it and what constraints we need to place on it are the fundamental issues. It may be that implementing AI systems that satisfy these requirements will need the level of computing power that quantum computing provides, but it is the full understanding of the implications of quantum mechanics, rather than quantum computing technology itself, that will provide insights into the nature of intelligence. Yes, for the foreseeable future humans must remain in the loop, because replacing the human would require an AI system that exhibited some form of consciousness, and we are a long way from understanding the nature of consciousness, let alone implementing it. We should be thinking in terms of a partnership between AI and humans, a partnership that would bring out the best of both.”

Greg Sherwin, vice president for engineering and information technology at Singularity University, responded, “Explainable AI will become ever more important. As privileged classes on the edges get caught up in the vortex of negative algorithmic biases, political will must shift towards addressing the challenges of algorithmic oppression for all. For example, companies will be sued – unsuccessfully at first – for algorithmic discrimination. Processes for redress and appeal will need to be introduced to challenge the decisions of algorithms. Meanwhile, the hype cycle will drop for the practical value of AI. The more the world and society become defined by VUCA [volatile, uncertain, complex, ambiguous] forces, the less useful AI will be, given its complete dependency on past data, existing patterns and its ineffectiveness in novel situations. AI will simply become much like what computers were to society a couple of decades ago: algorithmic tools in the background, with inherent and many known flaws (bugs, etc.), that are no longer revered for their mythical innovative novelty but are rather understood in context, within limits, within boundaries that are more popularly understood. Binary computing is a lossy, reductionist crutch that models the universe along the lines of false choices. Quantum computing has an opportunity to better embrace the complexity of humanity and the world, as humans can hold paradoxes in their minds while binary computers cannot. Probabilistic algorithms and thinking will predominate in the future, leading to more emphasis on the necessary tools for such scenario planning, which is where quantum computers can serve and binary computers fail. That demand for the answers that meet the challenges of the future will require us to abandon our old, familiar tools of the past and to explore and embrace new paradigms of thinking, planning and projecting. I see ethical AI as something orthogonal to the question of binary vs. quantum computing. It will be required in either context. So, the question of whether quantum computing will evolve as a tool to assist in building ethical AI is a non-starter, either because there is little ‘quantum’ specialty about it or because building ethical AI is a need independent of its computational underpinnings. Humans will continue to be in the loop for decisions that have significant impacts on our lives, our health, our governance and our social wellbeing. Machines will be wholly entrusted with only those things that are mechanized, routine and subject to linear optimization.”

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California-Berkeley, responded, “There seems to be a growing movement to examine these issues, so I am hopeful that by 2030 most algorithms will be assessed in terms of ethical principles. The problem, of course, is that, as we know from the case of medical experiments, it was a long time from the infamous Tuskegee study to committees for the protection of human subjects. But I think that the emergence of AI has actually helped to make clear the inequities and injustices in some of our practices. Consequently, these issues provide a useful focal point for democratic discussion and action. I think that public agencies will take these issues very seriously and that mechanisms will be created to improve AI (although I think that the issues pose difficult problems for legislators given their highly technical nature). I am more worried about private companies and their use of algorithms. It is important, by the way, to recognize that a great deal of AI (perhaps all of it) is simply the application of ‘super-charged’ statistical methods that have been known for quite a long time. It is also worth remembering that AI is very good at predictions given a fixed and unchanging set of circumstances, but it is not good at causal inference, and its predictions are often based upon proxies for an outcome that may be questionable or unethical. Finally, AI uses training sets that often embed practices that should be questioned. A lot of issues in AI concern me. The possibility of ‘deepfakes’ means that reality may become protean and shape-shifting in ways that will be hard to cope with. Facial recognition provides for the possibility of tracking people, which has enormous privacy implications. Algorithms that use proxies and past practice can embed unethical and unfair results. One of the problems with some multi-layer AI methods is that it is hard to understand what rules or principles they are using. Hence it is hard to open up the ‘black box’ and see what is inside. It is not clear to me that quantum computing would necessarily help or hinder ethical AI. Creating ethical AI requires the creation of social and political institutions more than the creation of new technologies.”
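Brady’s point about proxies can be made concrete with a small illustration. The sketch below is hypothetical and not drawn from any respondent’s work: it assumes invented data, an invented ‘arrest’ proxy label and a generic scikit-learn workflow, and it shows how a model trained on a biased proxy reproduces the disparity it was given even when the underlying behavior is identical across groups.

```python
# Hypothetical illustration: a model trained on a biased proxy label
# (recorded arrests) reproduces the historical disparity it was given.
# All numbers and names here are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
offense = rng.binomial(1, 0.10, n)   # true underlying behavior, identical across groups

# Proxy label: group B is policed more heavily, so its offenses are
# recorded as arrests twice as often as group A's.
record_prob = np.where(group == 1, 0.8, 0.4)
arrested = offense * rng.binomial(1, record_prob)

X = np.column_stack([group, rng.normal(size=n)])  # group plus an irrelevant feature
model = LogisticRegression().fit(X, arrested)

for g in (0, 1):
    mean_risk = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted 'risk' = {mean_risk:.3f}")
# Despite identical true behavior, group B is scored at roughly twice the
# 'risk' of group A, because the label the model learned was the proxy.
```

The library and the data are stand-ins; any model fit to such a label would behave the same way, which is exactly the proxy problem Brady describes.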

J. Nathan Matias, an assistant professor at Cornell University and an expert in digital governance and behavior change in groups and networks, noted, “Unless there is a widespread effort to halt their deployment, artificial intelligence systems will become a basic part of how people and institutions make decisions. By 2030, a well-understood set of ethical guidelines and compliance checks will be adopted by the technology industry. These compliance checks will assuage critics but will not challenge the underlying questions of conflicting values that many societies will be unable to agree on. By 2030, computer scientists will have made great strides in attempts to engineer fairer, more equitable algorithmic decision-making. Attempts to deploy these systems in the field will face legal and policy attacks from multiple constituencies for constituting a form of discrimination. By 2030, scientists will have an early answer to the question of whether it is possible to make general predictions about the behavior of algorithms in society. If the behavior and social impacts of artificial intelligence can be predicted and modeled, then it may become possible to reliably govern the power of such systems. If the behavior of AI systems in society cannot be reliably predicted, then the challenge of governing AI will remain a large risk of unknown dimensions.”

Jeff Jarvis, director of the Tow-Knight Center and professor of journalism innovation at City University of New York, commented, “‘AI’ is an overbroad label for sets of technical abilities to gather, analyze and learn from data to predict behavior, something we have done in our heads since some point in our evolution as a species. We did likewise with computers once we got them, getting help looking for correlations, asking ‘what if?,’ and making predictions. Now, machines will make some predictions – often without explanation – better than we could, and that is leading to a level of moral panic sufficient to inspire questions such as this. The ethical challenges are not vastly different than they have ever been: Did you have permission to gather the data you did? Were you transparent about its collection and use? Did you allow people a choice in taking part in that process? Did you consider the biases and gaps in the data you gathered? Did you consider the implications of acting on mistaken predictions? And so on. I have trouble seeing this treated as if it is an entirely new branch of ethics, for that brings an air of mystery to what should be clear and understandable questions of responsibility.”

Jon Lebkowsky, CEO, founder and digital strategist at Polycot Associates, wrote, “I have learned from exposure to strategic foresight thinking and projects that we can’t predict the future, but we can determine scenarios that we want to see, and work to make those happen. So, I am not predicting that we’ll have ethical AI so much as stating an aspiration – it’s what I would work toward. Certainly, there will be ways to abuse AI/ML/big data, especially in tracking and surveillance. Globally, we need to start thinking about what the ethical implications will be, and how we can address those within technology development. Given the current state of global politics, it’s harder to see an opportunity for global cooperation, but hopefully the pendulum will swing back to a more reasonable global political framework. The ‘AI for Good’ gatherings might be helpful if they continue. AI can be great for big data analysis and data-driven action, especially where data discretion can be programmed into systems via machine learning algorithms. Some of the more interesting applications will be in translation; transportation, including aviation; finance; government, including decision support; medicine and journalism. I worry most about uses of AI for surveillance and political control, and I’m a little concerned about genetic applications that might have unintended consequences, maybe because I saw a lot of giant bug movies in the 1950s. I think AI can facilitate better management and understanding of complexity, and greater use of knowledge and decision support systems. Evolving use of AI for transportation services has been getting a lot of attention and may be the key to overcoming transportation inefficiency and congestion. Re: global competition over AI systems: the biggest concern is that in racing to get first-mover advantage in various areas, there will be less caution and care in development, and that there will be catastrophic issues in AI, especially as used to manage systems in transportation, medicine or government.”

Jon Stine, executive director of the Open Voice Network, setting standards for AI-enabled vocal assistance, noted, “What most concerns me: the cultural divide between technologists of engineering mindsets (asking what is possible) and technologists/ethicists of philosophical mindsets (asking what is good and right). The former may see ethical frameworks as limitations or boundaries on a route to make money; the latter may see ethical frameworks as a route to tenure. Will the twain ever truly meet? Will ethical frameworks be understood (and quantified) as a means to greater market share and revenues?”

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” said, “Even if most major players in AI abide by ethical rules, bad actors using AI can have outsized effects on society. The ability to use deepfakes to influence political outcomes will be tested. What worries me the most is that the substitution of AI (and robotics) for human work will accelerate post-COVID-19. The political class, with the notable exception of Andrew Yang, is in total denial about this. And the substitution will affect radiologists just as much as meat cutters. The job losses will cut across classes.”

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, said, “I reject this binary question. The real question is not whether all AI developers sign up to some code of principles, but rather whether most AI applications work in ways that society expects them to, and the answer to that question is almost 100 percent yes.”

Cliff Lynch, director at the Coalition for Networked Information, wrote, “Efforts will be made to create mostly ‘ethical’ AI applications by the end of the decade, but please understand that an ethical AI application is really just software that’s embedded in an organization that’s doing something; it’s the organization rather than the software that bears the burden to be ethical! There will be some obvious exceptions for research, some kinds of national security, military and intelligence applications, market trading and economic prediction systems – many of these things operate under various sorts of ‘alternative ethical norms’ such as the ‘laws of war,’ or the laws of the marketplace. And many efforts to unleash AI (really machine learning) on areas like physics or protein folding will fall outside all of the discussion of ‘ethical AI.’ There’s not a lot of ethics or morality or social justice in predicting particle behaviors, or the life cycle of comets or supernovae in the heavens, or molecules docking with proteins – or at least I don’t think so. There are doubtless people who will disagree, and perhaps see this whole landscape in very different terms. I strongly dislike rhetorical formulations like ‘ethical AI’ or ‘racist algorithms.’ We should resist the temptation to anthropomorphize these systems! (As the old saying goes, ‘machines hate that.’) Don’t attribute agency and free will to software. This is the sort of muddle you get when people with political or social agendas do technology critiques with, at best, limited understanding of the technology. The problems here are people and organizations, not code! It is worth noting that ‘algorithm’ has sort of developed an additional definition in recent years, very different from what mathematicians and computer scientists in the 1970s and 1980s (or, put another way, geezers like me) instinctively understood the term to mean (see, for example, Donald Knuth’s magisterial Art of Computer Programming); this shift in meaning probably deserves more study and documentation than it’s received to date, though I am familiar with at least some good work on this. This conflict about definitions doubtless helps groups of people to talk past each other on this issue. I think a lot of the discussion of ethical AI is really misguided. It’s clear that there’s a huge problem with machine learning and pattern-recognition systems, for example, that are trained on inappropriate, incomplete or biased data (or data that reflects historical social biases), or where the domain of applicability and confidence of the classifiers or predictors aren’t well demarcated and understood. There’s another huge problem where organizations are relying on (often failure-prone and unreliable, or trained on biased data, or otherwise problematic) pattern-recognition or prediction algorithms (again machine learning-based, usually) and devolving too much decision-making to these. Some of the recent facial recognition disasters are good examples here. There are horrible organizational and societal practices that assume computer-generated decisions are correct, unbiased, impartial or transparent, and that place unjustified faith and authority in this kind of technology. But framing this in terms of AI ethics, rather than bad human decision-making, stupidity, ignorance, wishful thinking, organizational failures and attempts to avoid responsibility, seems wrong to me.
We should be talking instead about the human and organizational ethics of using machine learning and prediction systems for various purposes, perhaps. Having said this, I think we’ll see various players employ machine learning, pattern recognition and prediction in some really evil ways over the coming decade. Coupling this to social media or other cultural motivation and reward mechanisms is particularly scary. An early example here might be China’s development of their ‘social capital’ rewards and tracking system. I’m also frightened of targeted propaganda/advertising/persuasion systems. I’m hopeful we’ll also see organizations and governments in at least a few cases choose not to use these systems, or to try to use them very cautiously and wisely and not delegate too much decision-making to them. It’s possible to make good choices here, and I think some will. Genuine AI ethics seems to be part of the thinking about general-purpose AI, and I think we are a very, very long way from this, though I’ve seen some predictions to the contrary from people perhaps better informed than I am. The (rather more theoretical and speculative) philosophical and research discussions about superintelligence and about how one might design and develop such a general-purpose AI that won’t rapidly decide to exterminate humanity are extremely useful, important and valid; but they have little to do with the rhetorical social justice critiques that confuse algorithms with the organizations that stupidly and inappropriately design, train, enshrine and apply them in today’s world. Best estimates on when quantum computing will work in a meaningful way now seem to be 5-50 years (for various definitions of ‘work’; the National Academies has done some very nice analysis on this in the last year or so, and their report is a good read). The next question is what ‘working’ quantum computing has to do with advancing AI/machine learning/pattern recognition. I don’t believe this is an obvious early application of successful quantum computing (I think early wins will be breaking cryptosystems and optimizing various kinds of systems and configurations under constraints – traveling-salesman problems, various kinds of energy optimization in physical systems, etc.). I am not at all clear on the extent to which quantum computing would speed up machine learning and pattern-recognition processes. There’s some very vague (probably more literary metaphor or science fiction than genuine science?) speculation trying to connect quantum uncertainty somehow to the phenomenon of consciousness and thus having something poorly defined to do with the possibility of someday reaching true general-purpose AI (where genuinely ethical AI is potentially meaningful). I think the fundamental near-term questions are about whether quantum computing-based algorithms represent wins for machine learning, pattern-recognition or inferencing algorithms, and then when we get quantum computing platforms that can support such algorithms, if they exist.”

Joshua Hatch, a journalist who covers technology issues, commented, “While I think most AI will be used ethically, that’s probably irrelevant. This strikes me as an issue where it’s not so much about what ‘most’ AI applications do, but about the behavior of even just a few applications. It just takes one Facebook to cause misinformation nightmares, even if ‘most’ social networks do a better job with misinformation (not saying they do; just providing an example that it only takes one bad actor). Furthermore, even ethical uses can have problematic outcomes. You can already see this in algorithms that help judges determine sentences. A flawed algorithm leads to flawed outcomes – even if the intent behind the system was pure. So, you can count on misuse or problematic AI just as you can with any new technology. And even if most uses are benign, the scale of problem AI could quickly create a calamity. That said, probably the best potential for AI is for use in medical situations to help doctors diagnose illnesses and possibly develop treatments. What concerns me the most is the use of AI for policing and spying. Every technological advance will be put to use to solve technological dilemmas, and this is no different.”

Lee McKnight, associate professor at the Syracuse University School of Information Studies, wrote, “When we say ‘AI,’ most people really mean a wider range of systems and applications, including machine learning, neural networks and natural language processing, to name a few. ‘Artificial general intelligence’ remains the province through 2030 of science fiction and Elon Musk. A wide array of ‘industrial AI’ will in 2023, for example, help accelerate or slow down planes, trains and rocket ships. Most of those industrial applications of AI will be designed by firms, and the exact algorithms used and adapted will be considered proprietary trade secrets, not subject to public review or ethics audit. I am hopeful that smart cities and communities initially, and eventually all levels of public organizations and nonprofits, will write into their procurement contracts requirements that firms commit to an ethical review process for AI applications touching on people directly, such as facial recognition. Further, I expect communities will in their requests for proposals make clear that inability to explain how an algorithm is being used, and where the data generated is going/who will control the information, will be disqualifying. These steps will be needed to restore communities’ trust in smart systems, which was shaken by self-serving initiatives by some of the technology giants trying to turn real communities into company towns. I am excited to see this clear need, and also the complexity of developing standards and curricula for ‘certified ethical AI developers,’ which will be a growth area worldwide. How exactly to determine whether one is truly ‘certified’ in ethics is obviously an area where the public would laugh in the faces of corporate representatives claiming that their internal ethical training efforts, not publicly disclosed or audited, are sufficient. This will take years to sort out, will require wide public dialogue and will need new international organizations to emerge. I am excited to help in this effort where I can. Quantum computing is making tremendous progress but realistically is still in its infancy. Ten years from now, quantum computing may be a toddler. But even in the medium term, overlaying the weirdness of quantum entanglement on ethics and AI discussions seems a step away from real AI, since untangling the ‘intelligence’ may be impossible. So an (immature) quantum computing platform will not really be as intelligent as claimed: if it cannot even describe itself, or know where its (subatomic) particles are, why should one believe anything else the system advises? Seriously, as we may anticipate that AI systems with audit trails will be legally required in 2030, quantum computing platforms will have to be far more capable than they are now, or than I expect them to be even in another 10 years.”

Amar Ashar, assistant director of research at the Berkman Klein Center for Internet & Society, said, “We are currently in a phase where companies, countries and other groups who have produced high-level AI principles are looking to implement them in practice. This application into specific real-world domains and challenges will play out in different ways. Some AI-based systems may adhere to certain principles in a general sense, since many of the terms used in principles documents are broad and defined differently by different actors. But whether these principles meet those definitions or the spirit of how these principles are currently being articulated is still an open question. Implementation of AI principles cannot be left to AI designers and developers alone. The principles often require technical, social, legal, communication and policy systems to work in coordination with one another. If implemented without accountability mechanisms, these principles statements are also bound to fail.”

Christina J. Colclough, an expert on the future of work and the politics of technology and ethics in AI, observed, “By 2030, governments will have woken up to the huge challenges AI (semi/autonomous systems, machine learning, predictive analytics, etc.) poses to democracy, legal compliance and our human and fundamental rights. What is necessary is that these ‘ethical principles’ are enforceable and governed. Otherwise, they risk being good intentions with little effect. One of the biggest challenges with computers today is the massive energy needed to make them work. Quantum computing reduces this consumption greatly. Quantum computing won’t ‘do good’ unless it’s told to. We need governance in place.”

Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, commented, “AI is just a small cog in a big system. The main danger currently associated with AI is that machine learning reproduces past discrimination – e.g., in judicial processes for setting bail, sentencing or parole review. But if there hadn’t been discrimination in the first place, machine learning would have worked fine. This means that AI, in this example, offers the possibility of improvement over unregulated social processes. A more subtle danger is when humans are actually more generous than machine learning algorithms. For instance, it has been shown that judges are more lenient towards first offenders than machine learning, in the sense that machine learning predicts a high probability of reoffending and this probability is not taken into account by judges when sentencing. In other words, judges give first offenders ‘a second chance,’ a moral compass that the algorithm lacks. But more generally, the algorithm only does what it is told to do: if the law that has been voted on by the public ends up throwing large fractions of poor young males in jail, then that’s what the algorithm will implement, removing the judge’s discretion to do some minor adjustment at the margin. Don’t blame AI for that, blame the criminal justice system that has been created by voters. A more pernicious development is the loss of control people will have over their immediate environment, e.g., when their home appliances make choices for them ‘in their interest.’ Again, this is not really new. But it will occur in a new way. My belief is as follows: 1) By construction, AI implicitly or explicitly integrates ethical principles, whether people realize it or not. This is most easily demonstrated in the case of self-driving cars, but it will apply to all self-‘something’ technology, including health care AI apps, for instance. A self-driving car must, at some point, decide whether to protect its occupants or protect other people on the road. A human driver would make a choice partly based on social preferences, as has been shown for instance in ‘The Moral Machine Experiment’ (Nature, 2018), and partly based on moral considerations (e.g., did the pedestrian have the right to be on the path of the car at that time? In the March 2018 fatality in Tempe, Arizona, a human driver could have argued that the pedestrian ‘appeared out of nowhere’ in order to be exonerated). 2) The fact that AI integrates ethical principles does not mean that it integrates ‘your’ preferred ethical principles. So the question is not whether it integrates ethical principles, but which ethical principles it integrates. Here, the main difficulty will be that human morality is not always rational or even predictable. Hence, whatever principle is built into AI, there will be situations in which the application of that ethical principle to a particular situation will be found unacceptable by many people, no matter how well-meant that principle was. To minimize this possibility, the guideline at this point in time is to embed into AI whatever principles are in fact applied by courts. This should minimize court litigation. But, of course, if the principles applied by courts are detrimental to certain groups, this will be reproduced by AI.
What would be really novel would be to take AI as an opportunity to introduce more coherent ethical judgment than what people make based on an immediate emotional reaction. For instance, if the pedestrian in Tempe had been a just-married young bride, a pregnant woman or a drug offender, people would judge the outcome differently even though, at the moment of the accident, this could not be deduced by the driver, whether human or AI. That does not make good sense: An action cannot be judged differently based on a consequence that was materially unpredictable to the perpetrator. AI can be an opportunity to improve the ethical behavior of cars (and other apps), based on rational principles instead of knee-jerk emotional reaction.”
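Fafchamps’ claim that any automated decision rule embeds an ethical weighting, whether or not its designers acknowledge it, can also be sketched in a few lines. The example below is purely illustrative: the ‘occupant_weight’ parameter, the maneuver names and the risk numbers are all invented, not part of any real autonomous-vehicle system. The point is that choosing the weight is the ethical decision; the code merely executes it.

```python
# Hypothetical sketch: an automated chooser whose ethics live in one number.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float    # estimated probability of harm to occupants
    pedestrian_risk: float  # estimated probability of harm to others

def choose_maneuver(options: list[Maneuver], occupant_weight: float) -> Maneuver:
    """Pick the maneuver with the lowest weighted expected harm.

    occupant_weight > 1 values occupants over pedestrians; < 1 does the
    reverse; 1.0 treats all harm equally. Setting this number is where the
    ethical principle enters; the minimization itself is value-neutral.
    """
    return min(options, key=lambda m: occupant_weight * m.occupant_risk + m.pedestrian_risk)

options = [
    Maneuver("brake hard", occupant_risk=0.30, pedestrian_risk=0.10),
    Maneuver("swerve",     occupant_risk=0.05, pedestrian_risk=0.40),
]
print(choose_maneuver(options, occupant_weight=1.0).name)  # -> brake hard
print(choose_maneuver(options, occupant_weight=3.0).name)  # -> swerve
```

The same structure appears in any ‘self-something’ system Fafchamps mentions; the debate is over which weights are embedded, not whether weights are embedded.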

Mark Lemley, director of Stanford University’s Program in Law, Science and Technology, observed, “People will use AI for both good and bad purposes. Most companies will try to design the technology to make good decisions, but many of those decisions are hard moral choices with no great answer. AI offers the most promise in replacing very poor human judgment in things like facial recognition and police stops.”

Melissa R. Michelson, a professor of political science at Menlo College, responded, “Because of the concurrent rise of support for the Black Lives Matter movement, I see people taking a second look at the role of AI in our daily lives, as exemplified by the decision to stop police use of facial recognition technology. I am optimistic that our newfound awareness of racism and discrimination will continue to impact decisions about when and how to implement AI. I am optimistic that our new national level of support for BLM will mean ongoing attention to the need to ensure that AI is not used to perpetuate bias or harm. It is possible that people will lose focus, but I think we have turned a corner in terms of ensuring that AI is making things better instead of worse.”

Tim Bray, well-known technology leader who has worked for Amazon, Google and Sun Microsystems, noted, “Unethical AI-driven behavior will produce sufficiently painful effects that legal and regulatory frameworks will be imposed that make its production and deployment unacceptable.”

Perry Hewitt, an executive with Ithaka, an organization advancing global higher education through innovative use of digital technology, responded, “I am hopeful that ‘ethical AI’ will extend beyond the lexicon to the code by 2030. The awareness of the risks gives me the most hope. For example, for centuries we have put white men in judicial robes and trusted them to make the right decisions and pretended that biases, proximity to lunchtime and the case immediately preceding had no effect on the outcome. Scale those decisions with AI and the flaws emerge. And when these flaws are visible, effective regulation can begin. This is the decade of narrow AI – specific applications that will affect everything from the jobs you are shown on LinkedIn to the new sneakers advertised to you on Instagram. Clearly the former makes more of a difference than the latter for your economic well-being, but in all cases, lives are changed by AI under the hood. Transparency around the use of AI will make a difference as will effective regulation.”

Philip M. Neches, lead mentor at Enterprise Roundtable Accelerator and longtime trustee at California Institute of Technology, commented, “AI will be used in many ways, as it amounts to a programming technique that can be applied to a range of problems. Academic, public advocacy and regulatory pressure will increasingly shame and ban malicious applications. The operators of AI systems will start to be held legally accountable and liable for the actions of those systems. I expect cost-effective quantum computing hardware to emerge by 2030. Programming will remain a work in progress for some decades after 2030.”

Richard Salz, senior architect at Akamai Technologies, said, “Government will force it [evolution of ethical AI design]. In 2030 most practical uses of quantum computing will remain cracking crypto and stealing our private conversations.”

Aaron Chia Yuan Hung, assistant professor of educational technology at Adelphi University, said, “The use of AI now for surveillance and criminal justice is very problematic. The AI can’t be fair if it is designed based on or drawing from the data collected from a criminal justice system that is inherently unjust. The fact that some people are having these conversations makes me think that there is positive potential. Humans are not the best at decision-making. We have implicit bias. We have cognitive biases. We are irrational (in the behavioral economics sense). AI can correct that or at least make it visible to us so that we can make better decisions. Most people are wary enough of AI systems not to blindly adopt another country’s AI system without a lot of scrutiny. Hopefully that allows us to remain vigilant.”

Adel Elmaghraby, a leader in IEEE and professor and former chairman of the Computer Engineering and Computer Science Department at the University of Louisville, responded, “Societal pressure will be a positive influence for adoption of ethical and transparent approaches to AI. However, the uncomfortable greed for political and financial benefit will need to be reined in.”

Andrew K. Koch, president and chief operating officer at the John N. Gardner Institute for Excellence in Undergraduate Education, noted, “If there was a ‘Yes, but’ option, I would have selected it. I am an optimist. But I am also a realist. AI is moving quickly. Self-interested (defined in individual and corporate ways) entities are exploiting AI in dubious and unethical ways now. They will do so in the future. But I also believe that national and global ethical standards will continue to develop and adapt. The main challenge is the pace of evolution for these standards. AI may have to be used to help keep up with adaptation needed for the ethical standards needed for AI systems.”

Anne Collier, editor of Net Family News and founder of The Net Safety Collaborative, responded, “Policymakers, newsmakers, users and consumers will exert and feel the pressure for ethics with regard to tech and policy because of three things: 1) a blend of the internet and a pandemic has gotten us all thinking as a planet more than ever, 2) the disruption COVID-19 introduced to business- and governance-as-usual and 3) the growing activism and power of youth seeking environmental ethics and social justice. Populism and authoritarianism in a number of countries certainly threaten that trajectory, but – though seemingly on the rise now – I don’t see this as a long-term threat (a sense of optimism that comes from watching the work of so-called ‘Gen Z’). I wish, for example, that someone could survey a representative sample of Gen Z citizens of the Philippines, Turkey, Brazil, China, Venezuela, Iran and the U.S. and ask them this question, explaining how AI could affect their everyday lives, then publish that study. I believe it would give many other adults a sense of optimism similar to mine.”

Anthony Clayton, an expert in policy analysis, futures studies and scenario and strategic planning based at the University of the West Indies, commented, “Technology firms will come under increasing regulatory pressure to introduce standards (with regard to, e.g., ethical use, error-checking and monitoring) for the use of algorithms when dealing with sensitive data. AI will also enable, e.g., autonomous lethal weapons systems, so it will be important to develop ethical and legal frameworks to define acceptable use. Quantum computing has revolutionary implications for AI and robotics. This will replace most existing forms of work and employment. This is not necessarily to be feared, however, as humanity will adapt and find new ways to add value. But there are real concerns about the low-skilled.”

Fabrice Popineau, an expert on AI, computer intelligence and knowledge engineering based in France, responded, “I have hope that AI will follow the same path as other potentially harmful technologies before it (nuclear, bacteriological); safety mechanisms will be put in motion to guarantee that AI use stays beneficial. Quantum computing is not advanced enough to have practical impact on AI yet, let alone ethical AI. If things must happen, I don’t see them happening in the next decade.”

Gary L. Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, responded, “I am guardedly optimistic that ethical guidelines will be used to govern the use of AI in the future. Increased attention to issues of privacy, autonomy and justice in digital activities and services should lead to safeguards and regulations concerning ethical use of AI. Quantum computing is an important advance in computing that will be used to expand computer applications. It will enable more creative applications and solutions to analytic tasks.”

Gary M. Grossman, associate director in the School for the Future of Innovation in Society at Arizona State University, responded, “AI will be used in both ethical and questionable ways in the next decade. Such is the nature of the beast, and we are the beasts that will make the ethical choices. I do not think policy alone will be sufficient to ensure that ethical choices are made every time. Like everything else, it will stabilize in some type of compromise structure within the decade time frame the question anticipates. Quantum computing will evolve in fits and starts. There will be big breakthroughs and big mistakes. It will be uneven and raucous, eventually stabilizing. That said, AI will be with us forever and will evolve, like everything else.”

Greg Shatan, a partner in Moses & Singer LLC’s intellectual property group and a member of its internet and technology practice, wrote, “Ethical use will be widespread, but ethically questionable use will be where an ethicist would least want it to be: oppressive state action in certain jurisdictions; the pursuit of profit leading to the hardening of economic strata; policing, etc.”

Marc H. Noble, a retired technology developer/administrator, wrote, “Although I believe most AI will be developed for the benefit of mankind, my great concern is that you only need one bad group to develop AI for the wrong reasons to create a potential catastrophe. Despite that, AI should be explored and developed, however, with a great deal of caution. If allowed, won’t AI be able to develop its own unique abilities to learn and communicate beyond the human ability to interact with a truly intelligent machine? What dangers does that pose to mankind?”

Mark Monchek, author and keynote speaker, responded, “In order for ethical principles to prevail, we need to embrace the idea of citizenship. By ‘citizenship,’ I mean a core value that each of us, our families and communities have a responsibility to actively participate in the world that affects us. This means carefully ‘voting’ every day when choosing who we buy from, consume technology from, work for and live with, etc. We would need to be much more proactive in our use of technology, including privacy issues, understanding and consuming media more like we consume food, etc.”

Marvin Borisch, chief technology officer at RED Eagle Digital, based in Berlin, wrote, “When used for the greater good, AI can and will help us fight a lot of human problems in the next decade. Prediagnostics, fair ratings for insurance or similar, supporting humans in space and other exploration, and giving us theoretical solutions for economic and ecological problems – these are just a few examples of how AI is already helping us and can and will help us in the future. If we focus on solving specific human problems or using AI as a support for human work instead of replacing human work, I am sure that we can and will tackle any problem. What worries me the most is that AI developers are trying to trump each other not for better uses but for the most media-friendly outcome in order to impress stakeholders and potential investors. Quantum computing is on the rise. It is only a matter of time before we combine these two technologies. I am optimistic that the majority of humanity will focus intensively on ethical AI before we give it the capabilities that allow it to outrun humans as the main power on this planet. We can have AI as a companion in our journey as we evolve as a species and take the next evolutionary step, into space and into being a much more sophisticated species.”

Scott Santens, professional writer and full-time advocate of unconditional basic income (UBI), commented, “I worry about a world without universal basic income, where people do not have a mindset that technology should benefit everyone, not only the few. There will always be people that use tools in questionable ways while others use them in ethical ways, which is why it’s so important to empower more people and to create a better environment free of poverty and the fear of it, where technology is seen as a tool that should be used for common benefit, and where marginalized groups have been greatly empowered to have more say in how tech is deployed.”

Stephan G. Humer, a lecturer and expert in digital life at Hochschule Fresenius University of Applied Sciences in Berlin, noted, “We will see a dichotomy: Official systems will no longer be designed in such a naive and technology-centered way as in the early days of digitization, and ethics will play a major role in that. ‘Unofficial’ designs will, of course, take place without any ethical framing, for example in the area of crime as a service. What worries me the most is lack of knowledge: Those who know little about AI will fear it, and the whole idea of AI will suffer. Spectacular developments will be mainly in the U.S. and China. The rest of the world will not play a significant role for the time being.”

Georges Chapouthier, neuroscientist, philosopher and writer and emeritus professor at Sorbonne University, France, said, “Any tool created by man can be used for good or for bad. My hope is that future civilizations will be more ethics-oriented and will thus use AI in a more ethical way.”

Bruce Mehlman, a futurist and consultant, responded, “AI is powerful and has a huge impact, but it’s only a tool like gunpowder, electricity or aviation. Good people will use it in good ways for the benefit of mankind. Bad people will use it in nefarious ways to the detriment of society. Human nature has not changed and will neither be improved nor worsened by AI. It will be the best of technologies and the worst of technologies.”

Dan McGarry, an independent journalist based in Vanuatu, noted, “Just like every other algorithm ever deployed, AI will be a manifestation of human bias and the perspective of its creator. Facebook’s facial-recognition algorithm performs abysmally when asked to identify black faces. AIs programmed in the affluent West will share its strengths and weaknesses. Likewise, AIs developed elsewhere will share the assumptions and the environment of their creators. They will not be images of them; they will be products of them, and recognisable as such. Quantum computing will have its uses and will likely be useful to the generation of AIs, to the extent that AIs will be tasked with solving problems most susceptible to solutions provided by quantum computation. But that is a subset of the totality of problems AIs will be tasked with solving.”

Eduardo Villanueva-Mansilla, associate professor of communications at Pontificia Universidad Catolica, Peru, wrote, “Public pressure will be put upon AI actors. However, there is a significant risk that the agreed ethical principles will be shaped too closely to the societal and political demands of the developed world. They will not consider the needs of emerging economies or local communities in the developing world.”

Luis Germán Rodríguez, a professor and expert on socio-technical impacts of innovation at the Universidad Central de Venezuela, wrote, “AI will be used primarily in questionable ways in the next decade. I do not see compelling reasons for it to stop being like that in the medium term (10 years). I am not optimistic in the face of the enormous push of technology companies to continue taking advantage of users as the end-user product, an approach that is firmly supported by undemocratic governments or those with institutions too weak to train and defend citizens regarding the social implications of the penetration of digital platforms. I have recently worked on two articles that develop the topics of this question. The first is in Spanish and is titled: ‘The Disruption of the Technology Giants – Digital Emergency.’ This work presents an assessment of the socio-cultural process that affects our societies and that is mediated by the presence of the technological giants. One objective is to formulate an action proposal that allows citizens to be part of the construction of the future within the context characterized in the article. From the balance between benefits and risks produced by current technological changes, serious warnings arise that must be heeded. Faced with dilemmas such as this, humanity has reaped severe problems when it has allowed events to unfold without addressing them early. This has been the case with nuclear energy management, racism and climate change. Ensuing agreements to avoid greater evils in these three matters, of vital importance for all, have proved ineffective in bringing peace to consciences and peoples. We might declare a digital emergency similar to the ‘climate emergency’ that the European Union declared in the face of the lag in reversing environmental damage. The national, regional, international, multilateral and global bureaucratic organizations that are currently engaged in the promotion and assimilation of technological developments mainly focus on optimistic trends. They do not answer the questions being asked by people in various sectors of society, and they do not respond to situations quickly. An initiative to declare this era a time of digital emergency would serve to promote a broader understanding of AI-based resources and strip them of their impregnable character. It would promote a disruptive educational scheme to humanize a global knowledge society throughout life. The second article is ‘Rock the Internet Blues! A Critical View of the Evolution of the Internet from Civil Society.’ In it I describe how the Internet has evolved in the last 20 years towards the end of dialogue and the obsessive promotion of visions centered on egocentric interests. The historical singularity from which this situation was triggered came via Google’s decision in the early 2000s to make advertising the focus of its business strategy. This transformed users, with the help of other technology giants, into end-user products and the agents of their own marketing. Civil society organizations specializing in global information society issues have presented little resistance to the changes that have arisen along the way. In addition to representing a divorce from the shared initial utopias, this evolution is a threat with important repercussions in the non-virtual world, including the weakening of the democratic foundations of our societies. Dystopian results prove the necessity for concrete guidelines to change course.
The most important step is to declare a digital emergency that motivates massive education programs that engage citizens in working to overcome the ethical challenges, identifying the potential of and risks to the global knowledge society and emphasizing information literacy. I do not think any technology by itself can have a positive impact in improving the ethical behavior of the multiple actors involved. Humans will continue to face difficulties similar to those they face today from an ethical perspective. These threats may grow worse if alerts to society are not activated promptly and they are not accompanied by corresponding activities to promote information literacy in the digital world.”

Katie McAuliffe, executive director for Digital Liberty, wrote, “There are going to be mistakes in AI, even when companies and coders try their best. We need to be patient with the mistakes, find them and adjust. We need to accept that some mistakes don’t equal failure in the entire system. No, AI will not be used in mostly questionable ways. We are using forms of AI every day already. The thing about AI is that once it works, we call it something else. With a new name it’s not as amorphous and threatening. AI and machine learning will benefit us the most in the health context – being able to examine thousands of possibilities and variables in a few seconds – but human professionals will always have to examine the data and context to apply any results. We need to be sure that something like insurance doesn’t affect a doctor’s or researcher’s readout in these contexts.”

Jim Witte, director of the Center for Social Science Research at George Mason University, responded, “The question assumes that ethics and morals are static systems. With developments in AI, there may also be an evolution of these systems such that what is moral and ethical tomorrow may be very different from what we see as moral and ethical today. Advances in AI can lead to agents (scripted bots behind representations of human actors) that act and interact in a ‘natural’ fashion that supplants avatars (virtual world representations of human actors guided by humans). As with any technology, however, whether smart agents are used for good or evil will depend on how they are deployed and the social and economic order within which they are embedded.”

Kate Klonick, a law professor at St. John’s University whose research is focused on law and technology, said, “AI (like ethics!) is so many things that this question is not particularly well-framed. AI will be used for both good and bad, like most new technologies. I do not see AI as a zero-sum negative of bad or good. I think that, on net, AI has improved people’s lives and will continue to do so, but this is a source of massive contention within the communities that build AI systems and the communities that study their effects on society.”

Concepcion Olavarrieta, foresight and economic consultant and president of the Mexico node of the Millennium Project, responded, “Yes, there will be progress: 1) Ethical issues are involved in most human activities. 2) The pandemic experience plays into this development. 3) Societal risk factors will not be attended to. 4) AI will become core in most people’s lives by 2030. 5) It is important to assure an income and/or offer a basic income to people. I believe quantum computing may come to aid in ethics advances.”

Jean Paul Nkurunziza, secretary-general of the Burundi Youth Training Centre, wrote, “The use of AI is still in its infancy. The ethical aspects of that domain are not yet clear. I believe that around 2025 ethical issues about the use of AI may erupt (privacy, the use of AI in violence such as war and order-keeping by police, for instance). I foresee that issues caused by the primary use of AI will bring the community to debate them, and we will come up with some ethical guidelines around AI by 2030.”

Doris Marie Provine, emeritus professor of justice and social inquiry at Arizona State University, noted, “I am encouraged by the attention that ethical responsibilities are getting. I expect that attention to translate into action. The critical discussion around facial-recognition technology gives me hope. AI can make some tasks easier, e.g., sending a warning signal about a medical condition. But it also makes people lazier, which may be even more dangerous. At a global level, I worry about AI being used as the next phase of cyber warfare, e.g., to mess up public utilities.”

Emmanuel Evans Ntekop said, “Without the maker, the object is useless. The maker is the programmer, the god to its objects. From the start, the idea was for it to serve people as a slave serves its master, like automobiles.”

The following responses are from participants in this canvassing who chose not to select “Yes” or “No” and simply chose to comment on the potential near-future of ethical AI design

Benjamin Grosof, chief scientist at Kyndi, a Silicon Valley start-up aimed at the reasoning and knowledge representation side of AI, wrote, “Some things that give me hope are the following: Most AI technical researchers (as distinguished from business or government deployers of AI) care quite a lot about the ethicality of AI. It has tremendous potential to improve productivity economically and to save people effort, even when money is not flowing directly, by better automating decisions and information analysis/supply in a broad range of work processes. Conversational assistants, question answering, smarter workflows and manufacturing robots are some examples where I foresee AI applications making a positive difference in the lives of most people, either indirectly or directly. I am excited by the fact that many national governments are increasing funding for scientific research in AI. I am concerned that so much of that is directed towards military purposes or controlled by military branches of governments. There is tremendous opportunity for the increased concentration of military power – including police power and political power – in the hands of a very small number of people, due to the potential for effective surveillance and physical control, by using AI (e.g., facial recognition and integrated data analysis) and drones (e.g., to track, drug or kill). This is an unprecedented challenge for the whole world. It will start playing out in some nations over the next five years, as well as beyond that for the next several decades. There is an urgent need for governments to make their ‘default setting’ a policy of strong privacy for individuals – including in shopping and communicating – rather than the current policy, which typically results in cognitively burdensome, confusing and overall weak choices and protections about privacy.”

Giacomo Mazzone, head of institutional relations for the European Broadcasting Union and Eurovision, commented, “Nobody could realistically predict how ethics for AI will evolve, despite all of the efforts deployed by the UN secretary general, the UNESCO director general and many others. Individuals alone can’t make these decisions because AI is applied at mass scale; nobody will create an algorithm to solve it. Ethical principles are likely to be applied only if industry agrees to do so; it is likely that this will not happen until governments that value human rights oblige companies to do so. The size and influence of the companies that control AI, and its impact on citizens, make them more powerful than any one nation-state. So, it is very likely that only regional supranational powers such as the European Union or multilateral institutions such as the United Nations – if empowered by all nation-states – could require companies to apply ethical rules to AI. Of course, many governments already do not support human rights principles, considering the preservation of the existing regime to be a priority more important than individual citizens’ rights. If corporations make profits or power for shareholders the key goal for their AI, or nation-states do the same, might they introduce the obligation for citizens to remain loyal to the political system or to the nation-state’s or region’s religion? Is it even conceivable that corporations or nation-states will renounce the imposition of these criteria in the AI they invent/produce/market/distribute worldwide?”

Wendy M. Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, noted, “The distribution of this will be uneven. I’ve just read Jane Mayer’s piece in The New Yorker on poultry packing plants, and it provides a great example of why it’s not enough to have laws and ethics; you must enforce them and give the people you’re trying to protect sufficient autonomy to participate in enforcing them. I think ethical/unethical AI will be unevenly distributed. It will all depend on what the society into which the technology is being injected will accept and who is speaking. At the moment, we have two divergent examples: 1) AI applications whose impact on most people’s lives appears to be in refusing them access to things – probation in the criminal justice system, welfare in the benefits system, credit in the financial system. 2) AI systems that answer questions and offer help (recommendation algorithms, Siri, Google search, etc.). But then what we have today isn’t AI as originally imagined by the Dartmouth group. We are still a very long way from any sort of artificial general intelligence with any kind of independent autonomy. The systems we have depend for their ethics on two things: access to the data necessary to build them and the ethics of the owner. It isn’t AI that needs ethics, it’s the owners.”

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Terms such as ‘transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, freedom, trust, sustainability and dignity’ can have many definitions, so that companies (and governments) can say they espouse one or another term but then implement it algorithmically in ways that many outsiders would not find satisfactory. For example, the Chinese government may say its AI technologies embed values of freedom, human autonomy and dignity. My concern is that companies will define ‘ethical’ in ways that best match their interests, often with vague precepts that sound good from a PR standpoint but, when integrated into code, allow their algorithms to proceed in ways that do not constrain them from creating products that ‘work’ in a pragmatic sense. The problem comes in defining ‘ethical.’ Some (possibly many) companies and governments will try in highly competitive environments to define ‘ethical’ in ways that make them look good but, in reality, are ambiguous and – particularly when integrated into code – do not constrain their ability to project power in their own interests. Companies and governments have exploited ethical ambiguities for centuries. Why should they change now because of AI?”

Maggie Jackson, former Boston Globe columnist and author of “Distracted: Reclaiming Our Focus in a World of Lost Attention,” wrote, “I am deeply concerned by how little we understand of what AI algorithms know or how they know it. This black-box effect is real and leads to unintended impact. Most importantly, in the absence of true understanding, assumptions are held up as the foundation of current and future goals. There should be far greater attention paid to the hidden and implicit value systems that are inherent in the design and development of AI in all forms. An example: as mentioned, robot caregivers, assistants and tutors are being increasingly used in caring for the most vulnerable members of society despite known misgivings by scientist-roboticists, ethicists and users, both potential and current. It’s highly alarming that the robots’ morally dubious façade of care is increasingly seen as a good-enough substitute for the blemished, yet reciprocal, care carried out by humans. New ethical AI guidelines that emphasize transparency are a good first step in trying to ensure that care recipients and others understand who/what they are dealing with. But systems of profit, hubris by inventors, the innate human tendency to relate to movement/expression and other forces combine to work against the human skepticism that is needed if we are to create assistive robots that preserve the freedom and dignity of the humans who receive their care.”

Marita Prandoni, linguist, freelance writer, editor, translator and research associate with the Shape of History group, predicted, “Ethical uses of AI will dominate but it will be a constant struggle against disruptive bots and international efforts to undermine nations. Algorithms have proven to magnify bias and engender injustice, so reliance on them for distracting, persuading or manipulating opinion is wrong. What excites me is that advertisers are rejecting platforms that allow for biased and dangerous hate speech and that increasingly there are economic drivers (i.e., corporate powers) that take the side of social justice.”

Anthony Judge, editor of the Encyclopedia of World Problems and Human Potential, observed, “The interesting issue for me is how one could navigate either conclusion to the questions and thereby subvert any intention. We can’t assume that the ‘bad guys’ will not be developing AI assiduously to their own ends (as could already be argued to be the case), according to their own standards of ethics. AI omerta? Appropriate retribution for failing to remain loyal to the family? Eradicate those who oppose the mainstream consensus? What is to check against these processes? What will the hackers do? The result may be less a question of quantum computing as a technology and more a question of quantum computing as a metaphor enabling a paradigm shift (for some). A key author scoping the possibilities of quantum computing is Alexander Wendt (“Quantum Mind and Social Science: Unifying Physical and Social Ontology,” Cambridge University Press, 2015). My take on that is written up in ‘Imagining Order as Hypercomputing: Operating an Information Engine Through Meta-Analogy’ and on being ‘walking wave functions’ in terms of quantum consciousness (2017) and ‘Coping Capacity of Governance as Dangerously Questionable: Recognizing Assumptions and Unasked Questions When Facing Crisis.’”

George Lessard, vice president of Canada Without Poverty, commented, “You ask whether ethical principles focused primarily on the public good will or will not be employed in most AI systems by 2030. I really thought we had given up trying to look at things as only left/right. I wish life were that clear-cut. Some will be ethical, and others will behave in questionable ways. The question is, do you let those folks just go for it (for instance, the facial-recognition developers)? Or do you set some form of consensus/control, or ‘let the market work it out’ again?”

James Blodgett, futurist, author and consultant, said, “‘Focused primarily on the public good’ is not enough if the exception is a paperclip maximizer. ‘Paperclip maximizer’ is an improbable metaphor, but it makes the point that one big mistake can be enough. We can’t refuse to do anything, because not to decide is also a decision. The best we can do is to think carefully, pick what seems to be the best plan and execute, perhaps damning the torpedoes as part of that execution. But we had better think very carefully and be very careful.”

If you wish to read the full survey report with analysis, click here:
https://www.elon.edu/u/imagining/surveys/xii-2021/ethical-ai-design-2030/

To read anonymous survey participants’ responses with no analysis, click here:
https://www.elon.edu/u/imagining/surveys/xii-2021/ethical-ai-design-2030/anon