Elon University

Anonymous Responses: The Future of Ethical AI Design

This page holds full anonymous responses to a set of July 2020 research questions aimed at illuminating attitudes about the likely evolution of ethical artificial intelligence design between 2020 and 2030.

Pew Research and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate and public practitioners and other leaders from June 30-July 27, 2020, asking them to share their answer to the following:

The Question – Application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, freedom, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. The question on the future of ethical AI design: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good, Yes, or No?

602 respondents answered the question

  • 32% said YES, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030.
  • 68% said NO, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030.

They were asked to elaborate on their choice with these prompts: Will AI mostly be used in ethical or questionable ways in the next decade? Why? What gives you the most hope? What worries you the most? How do you see AI applications making a difference in the lives of most people? As you look at the global competition over AI systems, what issues concern you or excite you?

We also asked respondents one final question: To consider the evolution of quantum computing (QC) and if it might influence any aspects of this realm. As QC is still in its early development and this query came at the very end of a large set of big questions (including several asking for predictions about what digital life might be in 2025 in the wake of the arrival of COVID-19 – part of an earlier report with details gleaned from this same canvassing), many respondents chose not to weigh in, said very little or replied that they were unsure. Due to it still being in early development, very few QC responses were included in this report, which is an analysis of expert opinions on the likely path of ethical AI design in the next decade, and only a few such responses are included on this page.

 Click here to download the print report

The full report with organized analysis of responses is online here

Among the key themes emerging in the 602 respondents’ overall answers were:

* WORRIES It is difficult to define “ethical” AI: Context matters. There are cultural differences, and the nature and power of the actors in any given scenario are crucial. Norms and standards are currently under discussion, but global consensus may not be likely. In addition, formal ethics training and emphasis is not embedded in the human systems creating AI.  Control of AI is concentrated in the hands of powerful companies and governments driven by motives other than ethical concerns: Over the next decade, AI development will continue to be aimed at finding ever-more-sophisticated ways to exert influence over people’s emotions and beliefs in order to convince them to buy goods, services and ideas.  The AI genie is already out of the bottle, abuses are already occurring, and some are not very visible and hard to remedy: AI applications are already at work in systems that are opaque at best and, at worst, impossible to dissect. How can ethical standards be applied under these conditions? While history has shown that when abuses arise as new tools are introduced societies always adjust and work to find remedies, this time it’s different. AI is a major threat.  Global competition, especially between China and the U.S., will matter more to the development of AI than any ethical issues: There is an arms race between the two tech superpowers that overshadows concerns about ethics. Plus, the two countries define ethics in different ways. The acquisition of techno-power is the real impetus for advancing AI systems. Ethics takes a back seat.

* HOPES  AI advances are inevitable; we will work on fostering ethical AI design: More applications will emerge to help make people’s lives easier and safer. Healthcare breakthroughs are coming that will allow better diagnosis and treatment, some of which will emerge from personalized medicine that radically improves the human condition. All systems can be enhanced by AI; thus, it is likely that support for ethical AI will grow. A consensus around ethical AI is emerging and open-source solutions can help: There has been extensive study and discourse around ethical AI for several years, and it is bearing fruit. Many groups working on this are focusing on the already-established ethics of the biomedical community. Ethics will evolve and progress will come as different fields show the way: No technology endures if it broadly delivers unfair or unwanted outcomes. The market and legal systems will drive out the worst AI systems. Some fields will be faster to the mark in getting ethical AI rules and code in place, and they will point the way for laggards.

News release with nutshell version of report findings is available here

All credited responses on the likely future of ethical AI design by 2030 are here

The full survey report with analysis is posted online here

Download the full report with analysis as a printable document here

Responses from all those preferring to make their remarks anonymous. Some are longer versions of expert responses contained in shorter form in the survey report.

The responses on this page are organized in three sections below. They are sorted by respondents’ choices that: 1) No, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030; that 2) Yes, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030; and 3) Responses from those who did not choose either “Yes” or “No,” choosing only to write an elaboration on the topic and not take a shot at making a binary choice.

Of note: Most of those who replied that “Yes” they hope to see the positive evolution of ethical AI design in the next decade generally also expressed some degree of uncertainty or voiced specific concerns or doubts about a positive trajectory. And some of those who said they doubt ethical AI design will advance much in the next decade also took note of the efforts that were being made toward it in 2020 and expressed hope for a better future for ethical AI design post-2030.

Some people chose not to provide a written elaboration, so there are not 600-plus recorded here. Some of the following are the longer versions of responses that are contained in shorter form in one or more places the survey report. Credited responses are carried on a separate page. These comments were collected in an opt-in invitation to more than 10,000 people that asked them to share their responses to a web-based questionnaire in July 2020.

Predictions from respondents who said who said ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030

An anonymous respondent predicted, “We’ll see incredible growth in AI over the coming decade. I don’t see ethical-vs.-questionable as an attribute particular to the technology. Rather, I see AI as a powerful tool enabling systems that also involve human and social components. Whether the systems function ethically or not is an open question, but one that is largely independent of the tools and technologies involved. I think Skynet is still a way off. I expect that quantum computing will evolve to assist in building AI. The sheer increase in computation capacity will make certain problems tractable that simple wouldn’t be otherwise. However, I don’t know that these improvements will be particularly biased toward ethical AI. I suppose there is some hope that greater computing capacity (and hence lower cost) will allow for the inclusion of factors in models that otherwise would have been considered marginal, making it easier in some sense to do the right thing.”

A futurist and consultant who founded a Canadian strategy network noted, “Both will happen. Science and medicine will benefit, for example. But AI will be used to track and profile people. This will be used in business, but also in justice, and in the relationship between citizens and government. China is using AI to control its population. This could become a trend in other countries too. Progress is continually being made in QC. It is feasible to anticipate breakthroughs in the next decade that would enable very fast neural networks. This would enable AI but not always ethical AI. Facial recognition is an application that will be dramatically improved by quantum computing. Humans may be in the loop much less as self-learning systems evolve. This trend will accelerate in the next decade.”

An anonymous respondent said, “Most of the new AI now being deployed uses statistical and/or neural methods. There is no ‘higher reasoning’ or possibility of self-awareness. Hence, no ethics. Though very useful, it’s probably not even as intelligent as most insects. Most of the older AI are simple rule-based systems that work only in a narrow domain and in a tightly controlled range of operations. Their narrow world obviates ethics. That being said, I do hope people will stop training statistical AIs on historically biased datasets – though some historic bias is inevitable so long as they require training. Ill-defined concept plus ill-defined concept does not equal magical solution – however sexy both may initially sound. Human ethics is an unsolved problem; AI even more so. Quantum computing may prove wonderful at combinatorial problems like code-breaking, but unless 1) God does exist, and 2) quantum computing taps directly into his wisdom, it will not provide a solution to the ethical life of machines. Even if it does, it will have to convince us that it’s not crazy first.”

A journalist and industry analyst expert in AI ethics wrote, “The focus during my long career in Silicon Valley and Sand Hill Road has been the art of the possible (innovation) and fast economic returns. This is not a sustainable model moving forward. Hence, my second full-time job, which is AI ethics (someone has to fight the good fight, even if it is idealistic at this point). We’re being forced to conform to the digitization of everything, which is separating people from each other, enslaving them, facilitating mass manipulation and oppression. Technology is becoming far too intrusive, and we are allowing it to happen. We have already become a surveillance state. Going completely digital is exciting and also dangerous, in my opinion, because it makes people and organizations more vulnerable to attacks. While there have always been divisions in society, people are now being manipulated into them artificially, which only adds to what happens organically. I, for one, am deeply active in AI ethics, however, ‘we’ operate too slowly as groups and there is so much money and power at stake that bad actors will have an advantage for the foreseeable future. Humanity is self-destructing. Only we can save ourselves. The ‘new normal’ is a society that’s more divided than it has ever been in history. Already we’re recording every breath, every step, every heartbeat. Every thought is next. Digital tools amplify everything that we do, but we don’t stop to think about what we’re doing in the bigger picture, and it’s going to be harder to do the right things for society at large when the mindset is individualism. There will not be one ‘new normal,’ but several. Most people on the planet will be disadvantaged, though not to the same degree by automation, economic structures. And downright control mechanisms that favor the few at the expense of the many.”

A technology industry analyst commented, “Technologically, we’ll achieve things we never were able to do before, such as AI-brain connections, nanotechnologies, AR/VR/XR, autonomous robotics, etc. Could all of those things be used for good? Yes. But we must evolve our thinking if the good is to outweigh the bad. Until we value the right things (humanity as a whole and its well-being), technology cannot solve the world’s problems itself. It’s simply a tool. There’s a lot of room for innovation and creativity, and even more room for oppression until we change our thought patterns.”

The director of a major strategic project, recipient of the U.S. National Intelligence Exceptional Achievement Medal, responded, “First, for all the focus on AI ethics – we still haven’t resolved data ethics, on which any AI ethics would rest. Second, on Wednesday, May 27, 2020, the Atlantic Council’s GeoTech Center and Accenture hosted Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School, and Ms. Jana Gooth, legal policy advisor to MEP Alexandra Geese, for the inaugural episode of the jointly presented Data Salon Series, which will host private roundtables preceded by publicly recorded presentations concerning data policymaking and governance. The event was co-hosted by Mr. Steven Tiel, Senior Principle, Responsible Innovation at Accenture and Dr. David Bray, Inaugural Director, GeoTech Center at the Atlantic Council. Dr. King’s presentation began by identifying a source of frustration shared among data scientists and policymakers: that the consent and privacy agreements we have all mindlessly clicked through act as a barrier to helping consumers make informed decisions and are little more than a liability shield for companies. Further complicating the dilemma is the system’s lack of scalability – there are simply too many terms of service agreements updating too often for consumers to be able to meaningfully process that information. It is a process made by and for lawyers that has failed to adjust to consumer concerns about data usage, and King argued that a paradigm shift is required, not just tinkering within an existing framework. King went on to describe several approaches to begin addressing that policy gap, gathered from research and forums. The first pushed for the development of personal user agents, or software that helps users aggregate and coordinate their relationship with their data and its privacy, much like a password manager. Others hoped to expand the knowledge base of policymakers, especially concerning human-computer interactions, through data visualization tools. A third approach hoped to consider data technology in terms of public spaces given its impending ubiquity in the forms of IoT, facial recognition, smart cities and so on – how can we accommodate people in public places who don’t want to be recorded in some way? Similarly, a fourth approach emphasized the importance or proactive efforts to include the interests of marginalized, vulnerable communities not traditionally considered in the tech design process. From a more regulatory perspective, data trusts were identified as a possible way to shift data ownership to the community level and away from the individual, particularly regarding genetic data. In addition, some sought to incentivize companies to use data responsibly rather than to use prohibitions and penalties. Further, many subscribed to the concept of algorithmic explainability – the notion that people providing data should be able to understand exactly what happens to it, who controls it, and what decisions it ultimately guides. Finally, some hoped to legislate limits on widespread public surveillance and develop a system of metrics for the harm associated with certain uses of data through an independent body. King ended her presentation with an appeal to reconsidering the Fair Information Practice Principles as a framework that constrains the ethical dynamics that must be considered in legislating data. In her follow up, Ms. Gooth provided context from the EU perspective that aligned with King’s main points: that the EU’s GDPR and Data Privacy Direction still exist in the traditional notice and consent framework with no provisions regarding design. The closest thing to design-focused policy was a potential requirement for web browsers to default to the highest privacy settings, though the legislation has been stuck for years. https://www.atlanticcouncil.org/blogs/geotech-cues/data-salon-series-episode-1-notice-consent-and-disclosure-in-times-of-crisis/. Leveraging ‘Data Trust for Good’ to combat COVID-19 comprises three key tasks: 1) We must transparently develop frameworks for data access, including both data trusts and other contractual agreements, that will make the needed data available for defined COVID-19 recovery initiatives. The frameworks – data trusts and contracts – will also specify standards for the ethical use of the data and how data governance will be formulated. The operationalization of the data access will have both a normal mode during which COVID-19 recovery continues to improve, and a mode when exigent circumstances require special cooperation and data access among public and private institutions in the region. 2) We must collect data on how well access to data is performing in aiding the region’s recovery, and how fully users are adhering to the ethical standards and governance agreements. To maintain public trust, the system’s performance must be audited, and standards for this evaluation must be developed. Professional auditing organizations will help formulate this portion of the initiative. 3) We must educate all public and private institutions and all people on the benefits and risks that result from the data trusts and other data sharing. These benefits comprise the health and economic recovery, and the independent assessment that privacy, ethics and data ownership standards are adhered to. To establish confidence in the sustainability of this approach, we must evaluate risks that appear during implementation and alternative solutions that should be considered.”

A professor emeritus of social science said, “The algorithms that represent ethics in AI are neither ethical nor intelligent. We are building computer models of social prejudices and structural racism, sexism, ageism, xenophobia and other forms of social inequality. It’s the realization of some of Foucault’s worst nightmares. Quantum computing will be very good at helping us solve some difficult problems. Ethical behavior does not appear likely to be one of them. AI systems may well prevent solving ethical issues by moving the responsibility for ethical choice from people to technologies. It will be interesting to see if legal liability is similarly transferred.”

An ethics expert who served as an advisor on the UK’s report on “AI in Healthcare” responded, “I don’t think the tech companies understand ethics at all. They can only grasp it in algorithmic form, i.e., a kind of automated utilitarianism, or via ‘value alignment,’ which tends to use economists’ techniques around revealed preferences and social choice theory. They cannot think in terms of obligation, responsibility, solidarity, justice or virtue. This means they engineer out much of what is distinctive about humane ethical thought. In a thought I saw attributed to Hannah Arendt recently, though I cannot find the source, ‘It is not that behaviourism is true, it is more that it might become true: that is the problem.’ It would be racist to say that in some parts of the world AI developers care less about ethics than in others; more likely they care about different ethical questions in different ways. But underlying all that is that the machine learning models used are antithetical to humane ethics in their mode of operation. Quantum computing will take an already barely tractable problem (AI explainability) and make it completely intractable. Quantum algorithms will be even less susceptible of description and verification by external parties, in particular laypeople, than current statistical algorithms. And the speed of such systems will be so rapid it is hard to see how humans can be ‘in the loop,’ except as owners or subjects of such systems.”

A principal architect at a technology company noted, “I see no framework or ability for any governing agencies to understand how AI works. Practitioners don’t even know how it works, and they keep the information as proprietary information. Consider how long it took to mandate seat belts or limit workplace smoking, where the cause and effect were so clear; how can we possibly hope to control AI within the next 10 years? My perception is that right now quantum computing cannot even do anything useful. The results are mostly of scientific or mathematical interest. Where AI is difficult to understand, Quantum is more so. Considering this, there is little hope that quantum computing can make AI more ethical.”

A professor of computer science wrote, “The impetus for ethical AI comes mostly from academia and government agencies sponsoring basic research. The big tech companies, to some extent, strive for harm-avoidance but their overriding motivation is profit, and this tends to undermine concerns about ethical design and beneficial use of their technologies. Their resources are far greater than those of the academic research community, and consequently their profit-maximizing subterfuges are likely to have a larger impact on deployed AI systems than academic researchers’ exhortations and prototype systems. And an arms race in AI-endowed weaponry is a serious threat to our future.”

A pioneering internet sociologist said, “As long as capitalism lives, profit and not principles will drive the development and deployment of AI technologies in surveillance capitalism, law enforcement and warfare.”

A vice president at a major global company wrote, “AI is too distributed a technology to be effectively governed. It is too easily accessible to any individual, company or organization with reasonably modest resources. That means that unlike, say, nuclear or bioweapons, it will be almost impossible to govern and there always will be someone willing to develop the technology without regard to ethical consequences.”

A futurist based in Europe, said, “The abilities of AI are overblown. Little care or transparency is there to make sure the input data for algorithms reflect the complexity of the real world.”

A government contractor based in North America wrote, “Use of AI will (in the United States) be driven by the market, and I don’t see a market driver for ethics. The inability for the operators of AI-based decision systems to explain the rationale for decisions they make is likely only to get worse. Absent government regulation, this could become an effective defense against litigation for any number of misdeeds. Quantum computing may have the potential to make AI systems more efficient, but I don’t see any reason it would make AI more ethical. Much the opposite; it could make AI even less likely to be scrutinized.”

A respected professor of engineering and information science on the board of ACM Transactions on Human-Computer Interaction noted, “I think the worries about AI are just business hype and the latest fad. I’ve worked in the field (in natural language processing) since 1989, and have seen this hype over and over. The technology is improving rapidly in some ways for sure, but the conversation is very much driven by people who are interested in making money. Social media today has caused more harm than AI technologies ever will. The attention-addiction economy of social media and mobile apps encourages people to make themselves their own captives. The flocking behavior it encourages has enormous force, and I assume it is the reason we are seeing fast social changes today. Digital privacy in the U.S. has been under assault since the recent Bush administration, and cameras everywhere have compounded that. Our phones and browsers track us; statistical analysis (which is what AI is) makes it a bit more efficient to analyze that data. China was quite successful at having a surveillance state without AI, but granted the technology makes it even easier to be thorough. A fortunate sign in the U.S. is the movement to ban facial recognition in some police districts, but the motivation seems to be only because it is not deployed fairly, as opposed to the view that it should not be deployed at all. In my view, the emphasis on ethics in AI is a cynical stance put forward by industry that has no interest in following it since tracking peoples’ behavior is what earns their money.”

A professor of international security at a major U.S. state university wrote, “Many key AI systems will be developed in and by countries that have little interest in maintaining and upholding human rights, because at least their elites are operating on very different sets of ethical principles than academics in industrial democracies are. Even in industrial democracies, we need to realize that what is being done by the national security state in the names of its citizens is very often starkly at odds with the kinds of ethical principles academics tend to argue for. And public opinion is very often quite supportive of those national security state activities. I think the very concept and category of ‘ethical AI’ is therefore very problematic. People can and do use that framework with very different conceptions of what it means. The notion that there is one way to do ‘ethical AI’ is simply unrealistic because it posits a single conception of what it means to be ‘ethical’ that doesn’t at all match actual human experience. What is typically meant by the term would be better called something like ‘liberal democratic cosmopolitan ethical AI,’ but that doesn’t quite trip off the tongue as easily. Even with fairly rapid technological development, it is likely there will be humans in the loop as quantum-enabled AI is created and employed. But the cost will mean that this will likely be limited to application in the public national security sector, the financial sector and perhaps in a few other similarly high-value sectors. The key question, though, seems to me to be which humans will be involved? From what countries? In what professional arenas? Of what gender and ethnic background? What are the inequalities here? And how do those inequalities create disparate outcomes?”

An internet pioneer and principal architect at a major technology company said, “AI is a lot like new drug development – without rigorous studies and regulations, there will always be the potential for unexpected side effects. Bias is an inherent risk in any AI system that can have major effects on people’s lives. While there is more of an understanding of the challenges of ethical AI, implicit bias is very difficult to avoid because it is hard to detect. For example, you may not discover that a facial-recognition system has excessively high false-recognition rates with some racial or ethnic groups until it has been released – the data to test all the potential problems may not have been available before the product is released. The alternative is to move to a drug development model for AI where very extensive trials with increasingly large populations are required prior to release, with government agencies monitoring progress at each stage. I don’t see that happening because it will slow innovation and tech companies will make the campaign contributions necessary to prevent regulation from becoming that intrusive. The challenge of ethical AI is obtaining enough data to evaluate whether a system is acting appropriately. Quantum computing cannot help with this. Humans will need to be in the loop as AI systems are created and implemented in order to be able to evaluate whether the systems meet ethical guidelines.”

An internet pioneer based in Berkeley, California, wrote, “Although there will be considerable research into ‘ethical AI,’ such systems will be substantially more expensive, so the temptation to bypass these costs will be very high. Most AI is just pattern matching, which makes it perfect for automated surveillance, resulting in a net loss of privacy and civil liberties. That said, it will also provide substantial advantages in, for example, medical diagnosis. It’s still too early to understand the implications of quantum computing. It’s clear that they will be useful for certain kinds of algorithms, but not all, and I don’t think anyone yet knows the implications. I think humans will be in the loop perhaps more than they are now as society becomes more aware of the importance of training ML on fair data, e.g., without biasing the AI against people with dark skin. But it’s going to remain very hard to determine if there is implicit bias in an AI.”

An expert in the regulation of risk and the roles of politics within science and science within politics observed, “In my work I use cost-benefit analysis. It is an elegant model that is generally recognized to ignore many of the most important aspects of decision-making – how to ‘value’ non-monetary benefits, for example. Good CBA analysts tend to be humble about their techniques, noting that they provide a partial view of decision structures. I’ve heard too many AI enthusiasts talk about AI applications with no humility at all. Cathy O’Neil’s book ‘Weapons of Math Destruction’ was perfectly on target: if you can’t count it, it doesn’t exist. The other major problem is widely discussed: the transparency of the algorithms. One problem with AI is that it is self-altering. We almost certainly won’t know what an algorithm has learned, adopted, mal-adopted, etc. This problem already exists, for example, in using AI for hiring decisions. I doubt there will be much hesitancy about grabbing AI as the ‘neutral, objective, fast, cheap’ way to avoid all those messy human-type complications, such as justice, empathy, etc. I doubt very much that we will tolerate humans in the AI loop.”

A cybersecurity team leader at a major university commented, “There is a major lack of transparency in the visibility to how AI products are designed and trained. There is a need for large volumes of data that is not readily available, so product builders are using the data available with no analysis of the built-in bias that this forces on the product. When choices are made, there is no requirement to publish these limitations and to ensure the software is only used for the purpose it was trained. Quantum will provide broad processing speed, but this will not translate into learning. The needed data to train AI still has to be assembled and curated, which involves humans sharing the choices and how they can be made similar to the training of a child to grow into an ethical adult.”

The head of research at a major U.S. wireless communications trade association responded, “I anticipate AI will be applied in a variety of neutral settings and systems, with respect to applications intended to support a variety of logistical and other ‘smart city’-type (or indeed, smart industrial) systems, as well as analytical operations across a number of industries and institutions. I expect that AI may have its most significant impact as deployed for purposes of social/population control and enforcement. Thus, predictive analysis, preventive detention, social ranking and control of rights and privileges will be promoted through the application of AI. Autocratic behavior and outcomes will be strengthened in political, state and non-state institutions. Efforts to incorporate and promote ethical constraints and constructs to govern AI will not be welcomed by those implementing the systems that will have the most significant negative potential for people. It is likely that quantum computing will evolve, and that it might be deployed by those hoping to build ethical AI, but that those responsible for implementing the AI systems will either 1) underrate its importance in the nominally neutral systems being deployed by local governments and private sector institutions, or 2) consider it irrelevant or even hostile to the intended uses of the non-neutral monitoring and control systems being developed for use by state and non-state institutions. Those who may not underrate its importance will not be those with the decision-making power with respect to its implementation. Ethical individuals are essential but will be marginalized by significant decision-makers.”

An expert in the security of large-scale networks commented, “Companies will claim to be using principles for ethical AI, but it will be mostly window-dressing, compliance and public relations. We don’t know how to do a good job of addressing these issues – we simply lack the techniques – and so companies won’t have good options available to them. They’ll do their best, but the technology isn’t there yet, and won’t be there by 2030 – it will take a sustained research effort to develop techniques for embedding ethics into AI. Second prediction: AI won’t be used in a very sophisticated way by 2030 anyway, which will partly reduce the ethical issues.”

An anonymous respondent said, “No. Privacy, security, safety, opportunity, independency, integrity and opportunity all will be jeopardized due to AI. AI will not be used for better performance of life or for troubleshooting systems. AI will be a tool for companies which take the role of governments or worse.”

An architect of practice specializing in AI for a major global technology company wrote, “The EU has the most concrete proposals, and I believe we will see their legislation in place within three years. My hope is that we will see a ripple effect in the U.S. like we did from GDPR – global companies had to comply with GDPR, so some good actions happened in the U.S. as a result. Unfortunately, the GOP is still being bribed by anyone with an interest and lots of money, so I am doubtful of AI ethics legislation at a federal level unless Biden wins and the Democrats have a significant majority in the House and Senate. We may be more likely to see a continuation of individual cities and states imposing their own application-specific laws (e.g., FRT limits in Oakland, Boston, etc.). The reason why I am doubtful that the majority of AI apps will be ethical/benefit the social good is: 1) Even the EU’s proposals are limited in what they will require, 2) China will never limit AI for social benefit over the government’s benefit, and 3) The ability to create a collection of oversight organizations with the budget to audit and truly punish offenders is unlikely. I look at the FDA or NTSB and see how those organizations got too cozy with the companies they were supposed to regulate and see their failures. These organizations are regulating products much less complex than AI, so I have little faith the U.S. government will be up to the task. Again, maybe the EU will be better. I am concerned that quantum computing will result in even-blacker black-box AI. If we cannot understand why the AI is doing what it does, we cannot identify potential bias or harm. Human-in-the-loop systems only work if: 1) Humans are incentivized to ensure the right decision is made, and 2) Humans are given adequate information to validate a decision. As we saw in the 2008 financial crisis, bankers who were supposed to be reviewing foreclosure paperwork were simply rubber-stamping them. They were incentivized to get the work done as fast as possible. I’ve seen other AI systems where a human is supposed to approve a recommendation, but no information is given for the human to know why that recommendation was made or question it if they think it is wrong. I have no reason to believe that quantum computing would change either the incentive structure or information provided. These are culture/institutional and UI issues.”

A professor of digital economy and culture based in the UK responded, “Corporations will design first and train machines without completely thinking about how the ramifications will play out over time. For example, facial recognition technologies are taught the notion of race through facial differences. Risks and ethical slippages in AI applications and their glitches, may it be wrong identification or sending confidential messages from home devices to your contact list, will propel us into a world which will elude common sense and rationality while seemingly making the world more rational through AI. We will enter a phase of extreme experimentation between the human and AI. We will exude the data which will train AI and these will in turn be experimented on us. The fascination with quantum computing means that technology companies will do a lot of work on it without being too concerned about how many of these new inventions will facilitate human life. The emphasis will remain on monetizing this frontier and enabling AI which is less guided by human interventions. In effect, these technologies will be more error-prone, and as such will unleash more ethical concerns as they unravel through time. Its speed of calculation will be matched by glitches that will require human deliberation.”

A professional journalist and writing teacher who specializes in science topics observed, “Reports just yesterday of U.S. federal evasion of requirements for warrants by purchasing surveillance data from private companies illustrates the problem: Our telephones and our automobiles are the most obvious eyes and ears monitoring our lives, but everything from parking lot cameras to highway sensors to video feeds scattered through society, combined with facial recognition software means that we are under scrutiny – or we could be – in an increasingly continuous fashion. Much of this information is in private hands, so we have little to no ready control of it, and our data is already a major commercial project. So no, I don’t have a lot of hope that AI will build civil society. If encryption tools are democratized – as in, if we as individual private citizens have control over the data stream we create, then maybe quantum computing will be a net positive for ethical computing and civic life.”

A professor of criminal justice wrote, “Profit maximization and ethics are not compatible. AI will exist to harvest data and use that data to manipulate idiots. Quantum computing will allow for AI as far as creating computational power to process complex thinking. However, they are interfacing with unpredictable humans who may not be logical or have ulterior motives. Human behavior is too complex.”

An advocate and activist said, “Most of the large AI convenings to date have been dominated by status quo power elites whose sense of risk, harm, threat are distorted. Largely comprised of elite white men with an excessive faith in technical solutions and a disdain for socio-cultural dimensions of risk and remedy. These communities – homogenous, limited experientially, overly confident – are made up of people who fail to see themselves as a risk. As a result, I believe that most dominant outcomes – how ‘ethical’ is defined, how ‘acceptable risk’ is perceived, how ‘optimal solutions’ will be determined – will be limited and almost certainly perpetuate and amplify existing harms. As you can see, I’m all sunshine and joy. I genuinely don’t understand why anyone had faith in AI systems somehow being outside of the people and systems that produce them. So, yes, humans will be involved, even with quantum computing, and no, I am confident that quantum computing will produce miraculously better ethical outcomes.”

A director for the foundation of a major global technology organization said, “AI has been with us for decades. It’s only now that we are experiencing the issues firsthand due to the increased connectivity of our devices, and ultimately, our lives. The fast pace of product development in companies is producing many issues that are not thought of properly ahead of development. Fixing products is not enough. We must fix attitudes, behaviours and policies. And these changes will take a long time. For example, computer scientists and web developers are not aware of the consequences and damage that can be done by many of the systems they create. They must be educated about it from the start. I think a 10-year horizon is too short for this to happen. In addition, I don’t think we’ll completely avoid bad uses of these technologies, for example by governments in the name of ‘cybersecurity.’”

A North American research scientist observed, “AI will mostly be used in questionable ways. The companies that control the majority of the tech do not have benevolent motives, are primarily interested in creating salable products from people’s information, and do not have a mandate to behave in ethical ways. There is such a strong disconnect between peoples’ understanding of how these systems work and how they work that it is difficult to even identify what the ethical issues are. In terms of global competition, all of it drives towards products that are used for surveillance, propaganda and manipulation of people’s thoughts, opinions and desires. The very limited knowledge I have about quantum computing suggests that it will be mature enough to help with AI computational problems in the next decade. I don’t think human-in-the-loop is relevant to quantum in particular, but I think human-in-the-loop for many AI applications will persist for a long time, and possibly forever.”

A telecommunications and internet industry economist, architect and consultant with over 25 years of experience now working as a researcher at one of the world’s foremost technological universities responded, “The law needs to continue to evolve and change and it endogenously causes adaptations to ethics. The same will be true about AI. It is appropriate that the debate is being enjoined but the economic forces and incentives (and the lack of perfect solutions) mean that the success of embedding appropriate norms in AI will be, at best, imperfect. AI will be an important tool and it will be able to be abused. I am potentially most concerned in the way that AI will substitute for human intelligence in formulating decisions. I am excited as an academic in the potential for smart contracts and automation to reveal new patterns and enable new ways to organize economic activity. I think many surprises are in store. Quantum computing will advance, yes, but why does that benefit ethical AI systems? AI systems will, once fully unleashed, have their own biology. I do not think we understand their complex system interaction effects any more than we understand pre-AI economics. All of our models are at best partial.”

An active leader in the Internet Engineering Task Force noted, “AI is an arms race. While there will be some regulatory control over some AI systems (e.g., autonomous vehicles), other uses (probably unknown to the everyday citizen) will not develop with ethical considerations in mind. For most people, the growth in AI will be nearly unnoticed as it will arise in rather innocuous ways (e.g., voice assistants, automation, etc.). However, the rapid growth in AI use in military will operate nearly unregulated unless there is a catastrophic failure leading to disastrous results. And even if nations agree to control the use of AI in military scenarios, the global accessibility of AI toolkits will allow rogue actors to use/abuse the technology in ways that are dangerous to society. While I am interested in seeing how AI can be used to optimize workflows or replace humans in dangerous situations (e.g., nuclear accident response), I am fearful that the evolving use of AI to replace humans in the workplace will lead to an even further widening of the gap between the ‘haves’ and the ‘have-nots.’ I don’t see a large connection between quantum computing and ethical AI at this time due to nascent stage of quantum computers.”

A research professor and director of an AI research institute commented, “Ethical principles are a distraction from questions of power and control. AI systems and related large-scale networked technologies are already profoundly affecting the lives of millions of people, making predictions and determinations that shape their access to resources and opportunities. The global competition around AI is a competition between elites for control and insight. The vast majority of humans on earth, wherever they reside, do not ‘win’ regardless of which national power comes out ‘ahead’ in the race to dominate AI.”

A professor of engineering at a U.S. university wrote, “This seems impossible to determine. This requires prediction of human behavior – and I expect that will not change much. I expect that AI will be used ethically by ethical people and organizations, but it will be exploited for unethical (and illegal) purposes by unethical people. One hopes there are incentives so that it is used ethically, but I’m not sure what those are. We see unethical use of Facebook, Twitter and so on. AI will make that even easier. The promise of quantum computing seems to be mostly in providing significant speed-ups in computation. I guess that could indirectly help with ethical behavior by providing a way for an AI system to better consider the implications of its actions, but it could also allow a system to plan for its behavior to be unethical – but initially appear to be unethical. Overall, I’m not optimistic that quantum computing will have a qualitative impact on AI development for a very long time (if ever).”

A technologist based in Europe said, “Most AI will be used to address a specific need, and the wider questions will mostly be ignored. Quantum computing will evolve to assist faster computing. This has nothing to do with AI as such. It has even less to do with /ethical/ AI.”

A futures consultant and scholar responded, “I don’t think most AIs will be aimed at the public good. Rather, they are more likely to be focused on private goods: benefits for individuals, businesses and nonprofits. Some will work for governmental ends, but administratively, rather than oriented towards the public at large. I am very interested in the ways AIs can surface their own ethical models through emergent behavior. Globally, I am very concerned about nation states mobilizing AIs for geopolitical ends, especially in conflict zones.”

A network administrator wrote, “The term ‘public good’ is ambiguous. AI will be used in the optic of public good. The main objective will be to control people in the details of their lives and influence their way of living.”

A professor of law expert in technology policy commented, “Ethical AI is still too inchoate to be practical; either we won’t have a standard by 2030, or it won’t have a lot of teeth, so even if it’s widely deployed it won’t do much. AI will be used as well and badly as most other tools. In general, it will continue to centralize control in powerful institutions such as governments and large enterprises. On the whole, this will not be good for rights and freedoms. The issues are in figuring out 1) what ethics we want to build in (very, very hard!) and 2) actualizing that (also hard) humans will be in some loops and not others. The issues of greatest concern are in systems that can hurt people directly, e.g., military applications. I’d guess that by 2030 at least one would-be power will deploy some form of ‘killer robot’ (or nanotech) with no meaningful human presence in the loop.”

An active member of the Internet Engineering Task Force observed, “Absent regulation, there is no interest in, or driver for, ethical technology.”

An anonymous designer and technologist said, “Will AI mostly be used in ethical or questionable ways in the next decade? AI will mostly be used to generate practical business advantages which will likely mean deeper dehumanization in their approach. Why? Because it is simply expensive to build AI and requires massive sets of connected and semantic training data. To engineer these costly systems there must be a practical business advantage, not a more ethical system or application. What gives me the most hope? Nature. What worries me the most? I don’t see many decisions in internet companies being driven by ethical or altruistic considerations. Rather most are based on growth. Technology is entangled in capitalism, so you can’t imagine more ethical algorithms based on that condition. How do you see AI applications making a difference in the lives of most people? There will be more magical recommendation engines to allow you to find and explore. There will likely be medical and scientific breakthroughs helping people live healthier and more productive lives (for those that can afford it). There will be an incredible efficiency in mundane tasks (again, for those that can afford it). As you look at the global competition over AI systems, what issues concern you or excite you? Concerning: China and communism gives the Chinese an advantage over almost every other economy considering there is a lack of concern or consideration for individual privacy. What excites me? It could solve really big problems beyond the capacity of human understanding. It may allow our species to reframe and solve the biggest threat of ecological collapse. Future QC will be framed and developed by businesses looking for an edge over competition, not to build a more ethical system. Only with government oversight and regulation could we ensure this evolves in this way. Right now, getting any representative to understand how the internet works is a difficult enough task, let alone the ability to comprehend what AI and quantum computing will unleash. AI systems need humans to create or architect them and implement them in software or applications. But humans will likely not understand the logic, process or awareness the AI has in determining its decisions. This doesn’t require quantum computing, this happens all the time today.”

A pioneer in venture philanthropy commented, “While many will be ethical in the development and deployment of AI/ML, one cannot assume ‘goodness.’ Why will AI/ML be any different than how 1) cell phones enabled Al-Qaeda, 2) ISIS exploited social media, 3) Cambridge Analytica influenced elections, 4) elements of foreign governments who launched service denial attacks or employed digital mercenaries, and on and on. If anything, the potential for misuse and frightening abuse just escalates making the need for a global ethical compact all that more essential.”

An internet pioneer based in the Caribbean said, “The dominant business model of the internet will determine the answer here. If there is no change inevitably AI will be more and more used to finetune and amplify the existing situation of personal data being the engine of business with the risk of amplification of the sphere of persona data capture by the mixed use of Internet of things and neuroscience and the sphere of application with those data with AI applications. The situation today is that the GAFAM [Google, Apple, Facebook, Amazon and Microsoft] are able to forecast users’ practices in order to maximize their advertisement revenues; the evolution will go to a new and terrifying phase in which they will more and more influence and in some cases determine users’ practice. User awareness has become a key factor directly linked to information literacy.”

A distinguished university professor of computer science and engineering said, “Most AI systems, frankly like most pre-AI systems, will be engineered to optimize outputs and perhaps to comply with regulations. I am skeptical that we will reach consensus or mandates around AI ethics in the timeframe indicated. Quantum computing might be helpful in some limited utilitarian ethical evaluations (i.e., pre-evaluating the set of potential outcomes to identify serious failings), but I don’t see most ethical frameworks benefiting from the explore/recognize model of quantum computing.”

A co-founder of an award-winning nonprofit news outlet noted, “The public has almost no control over the development of AI. They and their representatives in government hardly understand it even though it will likely become one of the most potent tools against democracy. This is all about who decides: people or machines. We are sleepwalking into another disaster. These questions show a political naivete. As if the characteristics of this or that technology will prove the day. As if technology has a life of its own. You’re so immersed in the tech narrative/tech determinism you don’t seem to see it.”

A director of global partnerships for a major digital organization commented, “There is no current common understanding or agreement on what is ethical – global differences in culture, political system, beliefs, etc., all effect what society thinks is ethical. Most individuals do not understand the role of AI and the amount of their lives that are already controlled by proprietary algorithms. There is an ethical divide in 2020 evident in the way certain technology companies behave compared to others, just as there are real differences in regulator environments or cultural expectations around concepts like privacy, social norms, the greater good, etc. And there are real disparities of corporate and investor philosophy. These will not be solved in 10 years’ time. Quantum computing will evolve to assist in building AI, but whether that AI will be ethical is a different problem. Ethics and the ethical gap in the technology is not a reflection of computing capacity – it is a reflection of human limitation – unconscious bias, if not intentional discriminatory thinking. Garbage-in, garbage-out. Humans should be in the loop as systems are created and implemented – they will be the progenitors of the systems and the programing that determines how the systems learn and evolve – the inherent bias in the design will self-replicate and amplify without evolution in human thought and review of AI progression.”

A director of research into privacy and security responded, “In a relatively short period of time, we have seen the emergence of ethical principles for AI, and I think we can take this as a positive development. There is at least a public debate underway, and some early attempts at getting ahead of abusive behaviors. At the same time, it is foolish to think that in five or 10 years we will have an ethical code or codes that work for every application of AI. We should expect that there will be difficult-to-anticipate applications of AI that we won’t be ready for in advance. In addition, AI is a technology that isn’t necessarily public facing. It can be difficult to discern whether, and how, it is being used in products or decision-making processes, and that makes it difficult to know whether organizations are using it at all and using it in a way that’s in accord with these ethical principles. It’s going to be difficult to know and to enforce adherence, and we should anticipate that some organizations may take shortcuts and skirt the norms.”

A longtime internet security architect and engineering professor responded, “I am worried about how previous technologies have been rolled out to make money with only tertiary concern (if any) for ethics and human rights. Palantir and Clearview.ai are two examples. Facebook and Twitter continue to be examples in this space as well. The companies working in this space will roll out products that make money. Governments (especially repressive ones) are willing to spend money. The connection is inevitable and quite worrying, Another big concern is these will be put in place to make decisions – loans, bail, etc. – and there will be no way to appeal to humans when the systems malfunction or show bias. Overall, I am very concerned about how these systems will be set up to make money for the few, based on the way the world is now having been structured by the privileged. The AI/ML employed is likely to simply further existing disparities and injustice.”

A distinguished professor emeritus of engineering wrote, “Most companies will claim to be obeying ethical principles, and they will be implementing some basic methods. But companies are not good at considering the broader impacts of their behavior on society. Such considerations are expensive to address during design and testing, as they must involve large (expensive) teams of experts. In short, we don’t know how to scale up good ethical design and deployment. Until our methods can be deployed by tiny firms and also scale to mega-firms, we will be left with only ‘ethics in the small.’ Government has a role too, but integrative thinking is hard and expensive. Quantum computing may enable some AI methods to be more efficient, but the issues with ethics concern the goals and behavior of AI systems, not their efficiency. AI systems are created to achieve human purposes, so of course humans will be essential. A lot depends on your definition of ‘in the loop.’ A trusted computing system is precisely one that can be allowed to run autonomously (with appropriate automated monitoring). So, humans are not ‘in the loop’ in such cases. But the decision to design, implement and deploy such systems is a human one, and humans are ethically responsible for the harms and benefits that such systems create.”

A senior leader for an international digital rights organization commented, “Why would AI be used ethically? You only have to look at the status quo to see that it’s not used ethically. Lots of policymakers don’t understand AI at all. Predictive policing is a buzzword, but most of it is snake oil. Companies will replace workers with AI systems if they can. They’re training biased biometric systems. And we don’t even know in many cases what the algorithm is really doing; we are fighting for transparency and explainability. I expect this inherent opaqueness of AI/ML techs to be a feature for companies (and governments), not a bug. Deepfakes are an example. Do you expect ethical use? Don’t we think about it precisely because we expect unethical, bad-faith use in politics, ‘revenge porn,’ etc.? In a tech-capitalist economy, you have to create and configure the system even to begin to have incentives for ethical behavior. And one basic part of ethics is thinking about who might be harmed by your actions, and maybe even respecting their agency in decisions that are fateful for them. Finally, of course AI has enormous military applications, and U.S. thinking on AI takes place in a realm of conflict with China. That again does not make me feel good. China is leading, or trying to lead, the world in social and political surveillance, so it’s driving FR and biometrics. Presumably, China is trying to do the same in military or defense areas, and the Pentagon is presumably competing like mad. I don’t even know how to talk about ethical AI in the military context.”

A senior research program manager at a major U.S.-based think tank said, “In the U.S., I expect ethical questions to be addressed through tort law – i.e., until a harm can be demonstrated, no legislative action will be taken, and when it is, it will be narrow. The U.S. responds to privacy considerations this way, and I expect a similar response to AI ethics. There is unlikely to be a general approach to dealing with AI, ethics and responsibilities.”

A complex systems researcher based in Australia observed, “As AI systems become more useful, new ways of using them will come into play. Unethical usage will increase as the technology becomes more user friendly and hence make life easier for thieves and other unethical people. This is true for any new technology. Once AI systems can start to self-replicate, then there will be an explosive evolution. I doubt it will become the fabled singularity (where humans are no longer needed), but there will be many changes.”

A computer science professor at a top global technological university said, “Ethics is treated as distinct from AI rather than fundamental to every application. This will perpetuate the status quo and no new regulations will be introduced. In most fields of engineering, there are legal liabilities that fall to individuals involved in design/construction; CS/AI has no equivalent, and so there are no risks to deploying unethical or broken systems that ruin lives. Quantum will have no measurable impact on computing for several decades at a minimum.”

A consulting engineer commented, “The advancement of this subject is dominated by profit-making organizations motivated to take advantage and to manipulate the average member of society. I do not believe that quantum computing will impact the fundamental approach to AI systems; therefore, it cannot yield ‘ethical AI.’”

A data scientist in touch with major societal trends responded, “AI has wonderful applications, but will be used by individuals and entities for their own goals and purposes (reflecting the capitalist system in general). This will end up being similar to the current battle over Facebook and other social media companies: Can and should government attempt to corral and control technologies?”

A director for a national laboratory for public policy based in Mexico wrote, “AI will be used a lot in the market arena. Companies will use it for their own objectives. AI, on the other hand, will be used intensively in public contexts, and that hopefully will have important benefits for all, especially the most vulnerable.”

A director of standards and strategy at a major technology company commented, “I believe that people are mostly good, and that the intention will be to create ethical AI. However, an issue that I have become aware of is the fact that we all have intrinsic biases, unintentional biases, that can be exposed in subtle and yet significant ways. Consider that AI systems are built by people, and so they inherently work according to how the people that built them work. Thus, these intrinsic, unintentional biases are present in these systems. Even learning systems will ‘learn’ in a biased way. So, the interesting research question is whether or not we learn in a way that overcomes our intrinsic biases. In general, our digital future depends on advances in two very broad, very basic areas: bandwidth and computer power. Most generally, I need to be able to complete tasks and I need to be able to move information and generally communicate with others. Quantum computing is one of the promising areas for computing power. As for humans in the loop, this is an ethical question. AI systems are most often imagined as the all-knowing, prescient software system of the future. If you have unlimited resources, in terms of storage, computing and communication, logically this should be achievable. However, given limits, there will always be limits. So, the question is, what limits do we want to impose so we can better manage how things work? One element of those limits is whether or not you want AI to have an ‘opinion’ about human life? In point of fact, it has to have an opinion about such things. Consider a simple example like self-driving cars. There are driving circumstances where it simply is not possible to preserve all human life in the vicinity, i.e., a threatening scenario has developed and no matter what you do one or more persons in the vicinity will die. How do you choose? I suspect that humans will always still be in the loop, i.e., you don’t want AI to decide to ‘take a human life,’ as an extreme example to start a war? So, one guardrail principle that will likely need to be present is that AI may not choose to harm human life. However, an independently functioning AI system will undoubtedly encounter human life choices. So, a second guardrail principle will need to be how those choices are to be made. In my view, these are ethical questions, and we are a long way from understanding them and their consequences.”

A futurist and consultant said, “We need legislation on the ethical use of AI, requirements around reproducibility and data rights or AI will simply accentuate biases and imbalances of power within society. Ethical AI is a decision for humans to make. We have to choose to make systems ethical. Quantum computing gives us new tools to explore the world.”

A futurist/consultant based in North America said, “AI will have both ethical and non-ethical uses. Because technology usually outpaces policy and governance, there will be some outliers that use AI for non-ethical uses, and because the limiters are essentially not in place, these cases will remain unrestrained. AI ethical uses in terms of numbers of systems could be overwhelmed by a single scalable unethical use of AI if it has unlimited resources of a nation-state or multinational corporate sponsor. Statistical-driven analysis of behavior could lock in generational inequalities. Only those whose parents have succeeded are statistically likely to succeed. Therefore, fund those with successful parents. There may come a time when systems responses cannot be validated and have to be taken on trust. Once these types of systems control life safety, there will likely be cases where they err and there are unintended consequences.”

A lawyer and former law school dean who specializes in technology issues wrote, “AI is an exciting new space, but it is unregulated and, at least in early stages, will evolve as investment and monetary considerations direct. It is sufficiently known that there are no acknowledged ethical standards and probably won’t be until beyond the time horizon you mention (2030). During that time, there will be an accumulation of ‘worst-case scenarios,’ major scandals on its use, a growth in pernicious use that will offend common sense and community moral and ethical standards. Those occasions and situations will lead to a gradual and increasing demand for regulation, oversight and ethical policies on use and misuse. But by whom (or what)? Who gets to impose those ethical prescriptions – the industries themselves? The government?”

A member of the Internet Society of Barbados wrote, “Some may find a way to survive at the cost of the public, who may not have much of a choice. I do not see a problem with technology, what I see is a problem with humans who do not know how to safely use technology and are not willing to learn.”

A political science professor and award-winning teacher observed, “AI technology has advanced at a faster pace than our collective ability to employ it responsibly. Even if the United States is able to reach a consensus on principles that allow AI to be used in the public interest, China and other governments are unlikely to follow suit.”

A principal investigator on a project researching the future of human rights wrote, “My biggest concern is about ensuring transparency and democratic decision-making around the development and use of AI. I also worry about the failure to plan for the continued employment dislocation that comes from technological change and the societal divisions that this produces. As I see it, AI lags developments in computing capacity. We still effectively have ‘dumb AI,’ relatively speaking, but lightning speed decision-making and greater connectivity. This will in fact slow the adoption of some AI applications, like driverless cars.”

A professor of data journalism commented, “The notion of sentient AI is foolish. Ethical AI is a fantasy; most companies will choose what is good for the balance sheet, not what is good for the community. Quantum computing is just more powerful machines. Horsepower is not the thing standing in the way of ethical AI; people are, plus fundamental limits.”

A professor of economics and director of a labor and worklife project at a major U.S. university said, “That anything that produces profits will ‘mostly be used in ethical ways’ is ridiculous. Firms will use AI mostly to make money. But, we can constrain what firms do with strong laws and social norms so that they will not use AI mostly in illegal unethical ways. They will surely hire lawyers to find loopholes and to defend them in court, and that will do some good, and enable persons who put ethics first to move firms in a socially responsible way. But the desire to make money will lead to questionable decisions. We must use technology to police firm behavior, at the risk that the government may exploit the technology – somewhere, there is a balance that the U.S. will struggle to find. At least if we avoid economic feudalism and maintain democracy. Quantum computing seems more like fusion at this point, always needing some new thing to keep going. And why should anyone expect the firm/government that has successfully deployed quantum AI to be ethical? If I have a quantum computing machine, my first thought will be how do I use it (possibly skirting laws) to get my profits up or my political power or whatever. It will not be how to use it ethically. If by chance I am ethical, all we need is one non-ethical competitor who makes more than me and that competitor will dominate the market/society. We will need to have a strong system of controls ala Asimov’s three rules for robotics, but way more sophisticated. In a global setting, we probably need global agreements with others. I should be pessimistic – the world has done little to slow climate change, the U.S. has at this time been the greatest world failure in controlling COVID-19, and may be entering a political campaign where the biggest issue on one side will be preserving slave-owner statues and disassociating or being shunned by rest of world. But evolution has built a lot of social concerns into us, and as Lincoln did not say, ‘You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all of the time.’”

A professor of sociology based in Texas wrote, “AI, like any other technology, will be developed and promoted for profit. Acting for the common good may or may not align with profit-seeking. When it does not, profit-seeking nearly always takes priority in a capitalist economy. AI has tremendous potential, such as safer transportation, more efficient use of resources and more advanced health-care devices. These outcomes excite me.”

A project manager active in ICANN said, “Just as we cannot be sure about how responsible people should be with regards to public interest and doing good for mankind, the COVID-19 situation has demonstrated that a lot of people don’t even think about the implications of their own actions, let alone what impacts it will have on others. With AI, while some applications may be developed to help people, we must always be wary of what happens when the technology fails or when unscrupulous people take advantage of the technology for their own purposes. The same goes with nuclear sites and the impacts that they could have on the world environment. Countries use it for control and with so many untrustworthy leaders of the superpowers, one has to wonder how safe the world actually is. We can only hope that AI applications are being developed for the better good of the people of the world and that they will help to make the world a better place.”

A research scientist based at Oxford University observed, “Machine learning will become more and more powerful, and nothing will be done in most of the world to restrain the possibility of using it for bad. Europe might escape in better shape than the rest of the world, since they have the most proactive stance on keeping AI systems in check, but countries like the U.S. and China are going to run roughshod over all attempts to make AI more ethical.”

A research scientist based in North America said, “While there will be a push for ethical practices, the economic drivers will limit the adoption of ethical practices along a delay in policy and law to keep up with changes to help mandate ethical controls.”

A researcher at a center focused on the future of work and employment responded, “Codes of ethics are very easy to skirt in the gray areas at the edges. It will be worse with AI because we won’t be able to fully test for discrimination across large samples, and algorithms don’t describe their rules. It may help AI but will make no difference either way for making AI more ethical.”

A researcher and expert in journalism, culture and community observed, “Simply put: Technological advances in AI are moving quicker than ethical and regulatory debates. AI has incredible potential and there’s real value in its accelerated growth and development. Greater investment is needed into societal function, ethical considerations and regulatory approaches. Main opportunity – and thus also danger – is the potential for repurposing or appropriating AI into fields it was not originally intended for. This can lead to rapid deployment for new use scenarios in completely unrelated fields to the AI origin. These new areas of application may require very different ethical considerations to that of the AI origin, though no time to reflect as AI deployment is instantaneous. In the short term (i.e., the timescales discussed here), this is likely the biggest challenge (as opposed to AGI or ASI).”

A researcher in bioinformatics and computational biology said, “Take into account the actions of the CCP in China. They have been leading the way recently in demonstrating how these tools can be used in unethical ways. And the United States has failed to make strong commitments to ethics in AI, unlike EU nations. AI and the ethics surrounding its use could be one of the major ideological platforms for the incoming next Cold War. I am most concerned about the use of AI to further invade privacy and erode trust in institutions. I also worry about its use to shape policy in non-transparent, non-interpretable and non-reproducible ways. There is also the risk that some of the large datasets that are the fundamental to a lot of decision-making, from facial recognition, to criminal sentencing, to loan applications, being conducted using AI that are critically biased and will continue to produce biased outcomes if they are used without undergoing severe audits – issues with transparency compound these problems. Advances to medical treatment using AI run the risk of not being fairly distributed as well. Quantum computing will only be involved in the assistance of building ethical AI if it can be used in a way to fundamentally preserve privacy (unbreakable encryption) of end users. Otherwise, it is not too different from other means of increasing computational throughput. It will also depend on how quickly quantum computing reaches home computers and mobile devices as the quantum technology gap between private citizens and corporations and governments would likely serve to undermine ethical AI implementation. Humans will have to be in the loop for any type of ethical AI to exist, almost by definition.”

A researcher with a technology policy organization commented, “Fast development is prized over good development, and we don’t know what we don’t know, so it will be difficult to say how companies use the AI they’re developing. As of right now, they were developed without a sense of ethics because we have that sense built in (while AI does not). I hope that researchers can figure out how to effectively implement AI in ways that are not hurtful to minorities.”

A senior fellow at a major California university responded, “Hey, the cat is out of the bag. Where are the basic rules of robotics? Those were simple. AI is already being misunderstood and misused. It is overpromoted in ways that are not applicable and it is increasingly in the wrong hands.”

A sociology professor who specializes in social psychology and social movements said, “Corporations use AI to make money – with that as the primary motivation, I don’t see why they would care about the ethical implications for people. AI in the military realm could be troubling as well.”

A strategic expert in telecommunications and public policy advocacy responded, “The history of consumer-based big tech has been to provide services to the exclusive purpose of monetizing customer information and behavior. This is almost always non-transparent and completely unconcerned with potentially negative societal consequences. Considering the same market actors are driving AI, it would be naive to expect their behavior and ethos to be any different.”

A technology and science researcher based in Namibia observed, “There is limited AI research and investment in some regions of the world, and there is limited participation across the global community. This will lead to those involved catering for those things that are of importance to them. Quantum computing will not evolve to assist in building ethical AI; it will rather serve other purposes.”

A technology developer/administrator based in North America commented, “My concern is: Who defines good? If my model consists of my biases and is against yours or the predominant thought, then I would say you would feel it is unethical. Most models are so complex and fluid that you may not know when the model itself becomes ‘unethical.’ Insertion of unethical or skewed information could adversely affect the model and its conclusions.”

A U.S.-based professor with expertise in workforce development commented, “I worry about algorithms and predictive modeling based on incorrect and biased assumptions, and that ignore the inherent variation in human behavior and preferences.”

A wireless technology analyst commented, “AI is not and will not be auditable, so will continue to be questionable in the next decades. In ETSI standards, crypto-security lacks definitive tests, and so does AI. The low cost of offense vs. the high cost of defense worries me the most; few bother to audit the behavior of ill-trained AI to modify training. I see AI coming into elder care because it costs less than caregivers. I don’t know who will ethically invest in AI or in AI ethics.”

An active leader in the UN-facilitated Internet Governance Forum who is based in Africa said, “AI mostly will not be used in ethical ways in the next decade. AI has no feelings; unlike humans, AI cannot think. I have no hope of AI use being ethical if the laws and regulations are not in place on how to use AI. I am not very confident it will happen, save if it is expected to bring in lots of profit for the technology companies who will win the big contracts from governments.”

An anonymous respondent commented, “AI will be used by businesses to gain advantage over each other and consumers will be caught in the middle. Ethics will take a back seat to profit and domination. Quantum computing will develop slowly and AI will develop in some very different directions than anticipated. Humans will be in the loop, but elements of AI will advance beyond our initial ability to comprehend it.”

An anonymous respondent observed, “Present use of predictive algorithms by Facebook, Google, etc., doesn’t bode well. In the already monetised realm of individually targeted advertising it’s very hit and miss – mainly miss, with an uncanny emphasis on things that have already been purchased. When matched with contactless payment, real-time facial recognition and location mapping and so on, there is ample scope for abuse in the absence of government regulation and/or some mechanism for consumer price signalling (willingness to pay or not) to advertisers. Technology-enabled disintermediation has already adversely affected the conduct of war – and undoubtedly will continue to do so. The Wikileaks release of July 12, 2007, Baghdad airstrikes predates some of the higher tech versions of this, but is pretty chilling nonetheless. If it all works as promised, no doubt there will be benefits, unfortunately history suggests that these will not be unalloyed.”

An advocate, research scientist, futurist and professor said, “Unless the legal and regulatory framework changes with ever-increasing clarity to genuinely protect citizens from the abuse of AI, we are heading towards an uncertain future. It is crucial that such a framework emerges only out of the will of the relevant populations in all countries to affect such a change. One hand alone cannot clap, and policy change can only occur in a meaningful way when societies are moving in the direction of ascertaining their need for policy structures that generate clarity instead of confusion and distrust. Therefore, there is a great need to properly educate the public about AI, constantly clarifying the great benefits of spreading its use to all layers of global populations while emphasizing the drainage and damage if and when AI is employed in abusive, selfish and unethical manners. Quantum can possibly come into play if there is a real will for it. However, from what I have seen so far, I am afraid it will not gather momentum in a timely manner.”

A professor of digital transformation based in Canada responded, “As with most areas of technology development, I believe the speed of development of technology capabilities far outruns the ability of (social science) researchers to study impacts on and implications for policy, laws and ethics. Also, I believe a lot of technology development – because of its speed – happens without thinking about its necessity or benefits for society, but just development for the sake of development.”

A researcher investigating labor and technology said, “Unless we see a real change in society in the coming years, I’m afraid that AI will continue to serve the stronger parties in society. Again, AI can be used to serve society – but for that to happen, other changes have to occur in our culture, politics, the legal system, etc.”

A longtime network technology administrator and leader based in Oceana, said, “There is tremendous potential for AI to make significant improvements in access to high quality diagnosis in healthcare. Other social settings where AI can anticipate needs also hold tremendous promise: cities can be better planned, better managed and made more amenable by enabling transport and optimisation of services. Ethical law enforcement can be possible not only through anticipation of crime, but also the anticipation of social needs so services are delivered effectively and with an evidence basis. My concerns arise where AI further entrenches disadvantage and bias, simply because there are larger and more numerous data sets that reinforce advantage. If steps are taken to redress this disadvantage, by for example doing mass screening in populations that are not currently screened (balancing out the data), then this disadvantage will not be reinforced by AI. The problem remains that balancing bias is difficult to do and costs money. Until those with advantage, power and money realise that a more equitable society leads to a more productive one with overall benefits to security and stability, we face further entrenchment of advantage by tools such as AI. Given the current global economic and social outlook is bleak due to the pandemic, it is unlikely that those with entrenched advantage will do anything other than hang onto it and try to further entrench that advantage. Quantum computing gives us greater computational power to tackle complex problems. It is therefore a simple relationship – if more computational power is available, it will be used to tackle those complex problems that are too difficult to solve today. Humans must remain involved in order to apply the additional lens of ethical decision-making and identifying bias.”

A technology developer/administrator observed, “The only organizations with the resources to deploy AI systems are large organizations, and they want a return on their investment. I cannot imagine Amazon ever helping me, I am either a consumer (not a customer) or a wage slave to be ground up in their warehouses. There is no competition, if it’s not Amazon, then it will be eBay, or some other corporation. My role is unaffected, the only variable is the name on the rent I must pay. Self-driving cars will be rented. The software will require constant updates, of maps, of algorithms. There is no way to just buy it. All of the ‘digital assistants’ work by sending audio files home to their actual master. There is no intelligence at the edge. None can be self-hosted. Quantum computing may be a more efficient way to implement a neural network. That doesn’t change the final result, though. Just as I can compile my C for any architecture, an AI algorithm may be implemented on a different hardware platform. The results will be equivalent, though hopefully faster/cheaper to execute.”

An information security expert at a major technological university wrote, “AI development today is being driven most forcefully by economic forces, and economic forces are amoral. In addition, those economic forces are very powerful, and nothing will stand in the way of AI development unless, and until, there are demonstrable, unquestionable failures in AI systems’ abilities to produce ethical results as generally accepted. I believe this is the scenario we will follow in part because so much of software development has never had to deal with significant real/physical world implications of failure, and is consequently only in the ‘crawl’ phase of understanding how complex the real, physical world is, when one attempts to make decisions with binary logic.”

A global military intelligence expert wrote, “No, AI ethics will not be applied broadly by 2030. This is because competing societies and economies will leverage AI as competitive asymmetries to thwart U.S. and EU hegemonies to the improvement of the competitive outcomes – economic, military, diplomatic, legal, intelligence and information for the selfish aims of those adversary regimes. I expect no big leaps forward in quantum computing [by 2030]. However, it’s developing the economic rationales that underpin use cases where proponents have significant challenges. So on one hand you have the technical development challenges, and on the other, the monetary mechanisms to fund application. Not impossible, but very challenging.”

An anonymous respondent said, “I don’t think computer scientists (and I am one) are within light years of building an AI capable of knowing what ethics is. As long as this is true – which is the indefinite future – the only ethics that can be built in are those explicitly programmed, which means the decision about them will be controlled by huge corporations motivated primarily by greed. If quantum computing succeeds – and I’m not expert enough to say, but it seems increasingly likely – of course it will help in building just about anything computational.”

An anonymous respondent wrote, “Perhaps the best use of AI is for detection of medical diagnoses not easily seen with the naked eye.”

An anonymous respondent commented, “I suspect most of the major U.S. and Western corporations will adopt ethical codes of conduct for AI. However, many other players will be involved and creating and using AI systems and insights with less oversight, guidance or constraints. Quantum computing can evolve to assist in building ethical AI systems, but who is involved in developing the AI systems and their ability to distribute, influence and use AI and quantum computing for other than ethical ends will be just as important.”

An anonymous respondent wrote, “AI will always be only as good and ethical as its original and subsequent creators. When corruption is encouraged from the top of the food chain, you can’t very well expect there to be no one following that lead. How you create consistently ethical people will be key to how you create consistently ethical AI. Regarding quantum computing, most research and development takes 15-20 years just to bring something new to the masses. Then you have the adoption processes to go through. Diffusion of innovation often begins with those most able to afford the novel implementation, and then slowly seeps out to those increasingly less able. Only in rare instances is it quick.”

An anonymous respondent said, “To discuss ethics issues is difficult and slow. To deliver new tech-related changes is easy and quick. Ethics will lag behind.”

An anonymous respondent wrote, “Profit will govern until catastrophic things happen.”

An anonymous respondent based in North America observed, “Whose ethics? There are different value systems – do no harm, greater good, each life matters. If capitalism continues to reign, the goal will be greatest profit, profit loosely defined to include social benefit.”

An anthropologist and writer said, “This is barely a decade away and we’re nowhere near sustained ethical development of even rudimentary AI. Far too much ignores ethical issues when it comes to gender, race, ethnicity and socioeconomic class and I don’t see developers having a decent grasp on these issues and a committed – and mature – development of ethical guidelines for AI development in the next twenty years. That of course doesn’t even begin to touch on political and competitive issues between nation states and it all concerns me. Greatly. AI applications could make a great deal of positive difference in people’s lives, but I have absolutely zero confidence in the ethical development of such.”

An assistant professor in information science at a U.S. university said, “While I hope that ethical principles are incorporated into AI by 2030, I am not optimistic on this front. I think the technology is moving too fast to incorporate ethical principles, and those who are in charge do not want to slow down to think through the ethical consequences of their actions – especially because market leadership/dominance is on the line currently. There are big financial stakes on the line, and thus ethical precepts take a backseat. I think that ethics in AI will eventually become mainstream, but not by 2030.”

An engineer based in Cameroon wrote, “I don’t currently hear much about AI’s fairness and ethical legislation and therefore think a decade may be a little small to define the boundaries of how the machines interact with humans. To quantify the impact AI has on the fundamental structure of our society requires to test and analyse very complex use cases which are often events that happen spontaneously, but to do so in an environment that is deemed safe, yet with humans as the subject. This is a complex issue and a decade in my opinion is not sufficient.”

An entrepreneur from Philadelphia responded, “As an American, I can’t help but feel that certain foreign governments will continue to weaponize tech. AI will be no different.”

An executive with a global entertainment company commented, “Other technologies have not – the ethics are seen through the lens of the ‘customer,’ which are not individuals but users of the data such as advertisers and data brokers. I see no reason to think AI will be different.”

An expert in genetic programming and computer science observed, “In a questionable sense, it will be used to gain a competitive edge. I think that criminals will start to make use of AI soon. In general, AI will increase productivity and efficiency in society. It might give a competitive edge for authoritarian regimes. One issue is that it tends to have increasing returns, such as baking in and amplifying already existing differences in and between societies.”

An expert in marketing strategy based in India noted, “Just as corporations talk about ethics of business, and do not necessarily follow up, in the same way companies will talk about ethical algorithms. We will definitely need regulation and guidelines to make sure that certain ethical principles are indeed followed.”

An expert in the field of communication measurement said, “I’m afraid that AI will be used to help powerful people get more powerful. I’m an optimist and so I think if it can be used in ethical ways and to help people, it will be because we elect better leaders. It all comes down to the culture at the top.”

An information science professor who is an expert in the changing forms of work and organizing said, “In the U.S., all the AI failures will be adjudicated in courts, slowly and with confusing results. There will be continued failures and people will be damaged. There will be more insurance supporting AI failures. But, the power of corporations and beliefs of millions to be made will drive an open-market approach. People will die.”

An internet pioneer formerly active in ICANN said, “AI ethics? You’ve got to be kidding. Who’s going to stand up to the Greedy Ones on that one? We’re doomed. And quantum computing strikes me as similar to fuel cells. Always about five years away, receding at about one year per year. If somebody actually comes up with it, that somebody is likely to work in an organization that doesn’t have our greater good foremost in their mind. We’re doomed.”

The digital minister for a Southeast Asian nation-state said, “Medical AI and support AI is excellent. No matter what happens with quantum computing in the future, the problem with programming anything is the programmer. If they are not ethical their system will not be.”

The founder of a London-based network commented, “I don’t see much hope at all in the use of AI for ethical purposes. AI will be used for nefarious purposes by governments. Tech companies care only about money. They’ll do anything for money. They are amoral.”

A researcher focused on the evolution of digital and political communication noted, “AI, or rather machine learning, has severe flaws and biases that are due to several factors. Be it datasets (that are already the product of a racist/misogynistic/classist/etc., society), assumptions (you don’t know your own biases), or how they are being used (good used for bad). What gives me hope is that big tech companies are showing some hesitancy about working with government. But, realistically, AI/ML will contribute to hyper-surveillance, and while we will undoubtedly get good things from AI (assisted driving, health care, etc.), ethical considerations will always take a backseat when it comes to technological progress. If Western countries/companies don’t do it, others will and their products will be used. There are places for quatum computing (encryption/decryption, data storage, etc.), but I don’t know how this will progress and/or play out over the years.”

A communications researcher based in Europe noted, “There will always be (governmental) organizations that will bend the rules, or try new implementations of AI that are not conforming to these rules, knowingly or unknowingly. Now, with software more user-friendly and more available on platforms such as GitHub, the use of the applications will increase rapidly. People will only see these applications explicitly in consumer goods. But, they will nevertheless be affected – unknowingly – by the applications of large institutions, either by chatbots or monitoring systems. People often will only see the end result of an AI application. As such, it is mostly obscured for the general population. As with all new technology, it depends how people and institutions will use it for good or bad. It’s similar to nuclear technology, a knife, guns. In that respect, tech is sort of neutral. But perceptions and uses create good AI and bad AI. The point, though, is that AI is here to stay; once it’s out, you can’t undo it. Ethical issues are not issues of the technology, but how the technology is implemented. People are responsible for implementing AI in an ethical manner.”

A consultant and expert in technical, regulatory and business issues said, “In the U.S., corporations are subject to ‘agency,’ wherein only the stockholders’ interests matter. Only regulation can prevent abuse, then, and the political forces in the U.S. in particular are unwilling to do their job. Europe may do better; China will abuse the technology even more. Quantum computing will be practical five years from now, this statement is always true. It’s not clear that QC actually will do practical work. Since AI itself is a field of uncertain algorithms, applying it in uncertain technology is even more speculative.”

A founder and CEO of a technology company based in Boston wrote, “Today, AI systems are not used ethically and there are severe privacy issues. I don’t believe that the AI will further much more from pattern recognition, but I am worried about (mis)use of such systems. Quantum computing is in its infancy. If and when it is fully developed, it will provide much more computing power, but it is the software that will have to change and our understanding of human intelligence to build a new generation of AI systems.”

A professor based in Singapore wrote, “As long as organizations are after the rational accumulation of data/resources/profits, acting for the public good will not be a primary concern. Further, ‘ethical’ is a cultural term, and thus what might be defended as ethical from one standpoint may be unethical from another standpoint. Until you have broad democratic knowledge of how computerization and big data work (for people vs for big businesses), there can be no informed consent or informed consensus regarding how ethical behavior should even be defined.”

A professor emeritus of communications studies predicted, “Peoples’ use of technology for convenience will be exploited even more by unscrupulous business interests.”

A professor of information science and human-centered computing researcher noted, “There is limited interest of companies to protect privacy unless there is a business case. See, for example, Facebook. The entire business model is to sell data that individuals did not realize would be sold. I would expect that the lack of privacy will continue.”

A professor of government at one of the world’s leading universities said, “Around the world we see authoritarian regimes using AI to tighten their control. In non-authoritarian countries, we see AI being driven by greater concern for profits than for the public interest. It will take considerable political will, and widespread goodwill, to reverse these tendencies. Ethical efforts will occur in parallel with efforts that are not. The question is not whether quantum computing will assist in building ethical AI but whether it will significantly retard less-favorable developments.”

A professor of international affairs and economics at a Washington, D.C.- area university wrote, “AI tends to be murky in the way it operates and the kinds of outcomes that it obtains. Consequently, it can easily be used to both good and bad ends without much practical oversight. AI, as it is currently implemented, tends to reduce the personal agency of individuals, and instead creates a parallel agent who anticipates and creates needs in accordance with what others think is right. The individual being aided by AI should be able to fully comprehend what it is doing and easily alter how it works to better align with their own preferences. My concerns are allayed to the extent that the operation of AI, and its potential biases and/or manipulation remain unclear to the user. I fear its impact. This, of course, is independent from an additional concern for individual privacy. I want the user to be in control of the technology, not the other way around.”

A professor of sociology responded, “My pessimism stems from thinking about the combination of immense competition among the largely for-profit businesses that produce and market these technologies, along with the inherent and unconscious biases that are built into the AI systems by human programmers and users. That is never a good combination for equitable social access and outcomes. These systems will never be perfect (as they are designed and shaped by humans), but AI systems specifically built with ethics in mind are likely in the future, particularly if there is a nonprofit way to fund them and/or a means for companies to profit from them. Ethical AI most likely will be developed within universities (and via government funding), but subsequently used for a variety of purposes.”

A technology developer/administrator based in Europe responded, “As powerful AI will be mostly in the hands of huge multinational companies, their interest will be the guiding principle of AI. The public interest tends to be neglected if possible for the sake of profit. Quantum computing is over-hyped, and the development of quantum-computing is still very slow. Quantum computing makes things faster, but otherwise the same human influence will be there in development and creation of new technology.”

A technology policy leader from Africa who is based in Europe said, “The good news is the discussion is taking place already about AI and ethics. Whether it is going to influence the technology itself and policymaking linked is really difficult to state in the current circumstances. It’s the market that dictates the rules, and if the uptake of AI is not significant, then the impact will be limited.”

A U.S. professor of sociology commented, “The potential of AI has been oversold. AI systems are created by humans operating in specific social and economics contexts. The question seems misguided.”

A vice president for research and economic development responded, “The AI systems will be created by humans who will always seek to push the edge of what’s legal and morally right; they will have to determine limits on how far algorithms can and will go. The quest of certain individuals to be first and always be in the lead in the tech industry may be a determinant.”

An anonymous respondent wrote, “Hahaha, ethical AI. My answer is: Capitalism. Europe can try to fight it, but the U.S. won’t. And the Chinese will build snooping systems into all of their products.”

An anonymous respondent said, “I doubt ethical considerations will be packed into AI any more than they are into social media. That said, the effect of AI is probably overstated, especially in the short run.”

An anonymous respondent wrote, “Ethics is always last. Despite ethical concerns relating to cloning and gene manipulation, there is no standard of ethics. Quantum computing will build AI. It will not be an ethical creation.”

An anonymous respondent said, “I am not sure that 10 years is enough time to move technology forward to the point that ‘ethics’ will be needed in AI. It may, and if so, we will probably stumble upon it by accident (the need for it), and probably be in some rather dire situations since I doubt we will use forethought and build it in to begin with. And I think that I will be long gone before we really have a ‘quantum computer.’”

An anonymous respondent responded, “Whether ethical, legal, regulatory or technical, protections will be guided by state and global policies around themes such as structured inequality, material growth, property ownership, individualism, and other neoliberal myths. Protections need to embrace systemic change that includes the institutions and structures that direct these values.”

An anonymous respondent said, “Ethics by its very nature can have a number of different meanings. If ethics is defined to mean something like fostering and protecting individual freedoms, then it can be a good thing. But if it is defined so that it protects those who hold power, then it can be very bad for the individual even if it is good for those holding power. The definition of the meaning of ethics is important.”

An anonymous respondent based in Scandinavia said, “AI is a tool. Will hammers be used mostly for ethical or unethical purposes? More-powerful future systems will simply accelerate and build on existing biases. The technology will not influence these biases.”

An executive director of an advocacy group based in the U.S. wrote, “AI will be controlled by powerful companies for profit and therefore have no concern for the public good. AI could make positive changes if somehow deployed for the public good.”

An executive with a North American media group wrote, “There is no regulation of online technology that is across the board internationally. There is no governing authority, so anything goes.”

An expert in epidemiology and biostatistics noted, “In a capitalistic society, it is hard to believe that anything other than economic gain will be the driving force of AI.”

A business executive wrote, “AI will continue to be used in questionable ways in the next decade without regulation. Companies are only invented by the profit motive. Regulation is as close to ethical boundaries as is relevant for a company. Take GDPR as a good example. Without that regulation, many companies would not even consider consumer data privacy and responsibilities that come with handling those assets. With the regulation in place, companies must demonstrate how they account for their data handling practices. It’s an operational pain for the company, but ultimately a beneficial exercise for consumers.”

A business consultant commented, “Our leaders will set out to be good and fair. But as said in a previous answer, the need and want to be first to the table overtakes what is best for humanity. AI has a huge potential to be destructive to the human landscape. China promises to be the biggest offender. Everything being explored right now will have room to evolve over the next decade.”

A chair of computer science and engineering at a major technological university on the U.S. West Coast wrote, “Companies have a large incentive to fake heading towards ethics, and then to ignore those concerns in practice. Government regulation has the potential to make a dent, but it largely hasn’t started engaging, and until that happens it seems likely that the status quo will prevail – expressions of concern with little progress.”

A chief technology officer who works in government commented, “Profit-seeking by private businesses and more control by governments will be the key factors.”

A director at a center for geospatial intelligence wrote, “Even in today’s society, truth is relative. How in the world can we actually expect the ethical use of these technologies broadly when ethics themselves are social constructs, not rules and regulations?”

A former user-experience researcher for Amazon and Microsoft observed, “The moral imagination of technology workers is almost nonexistent. Even the Harvard-educated CEO of Facebook has an absurdly simplistic conception of moral choices. Ethical decision-making is not a step in AI model building, but a cultural norm, which does not exist.”

A futurist and consultant based in the U.S. noted, “In the U.S., the ruling ideology is that only the shareholder matters. Thus, anything ethical that is not demanded by law or regulation is essentially prohibited unless it can be justified to ‘activist shareholders.’ AI without regulation provides more opportunities to intrude on individuals’ lives. And there is insufficient regulation in the U.S., due to regulatory capture and over-partisan Congress.”

A futurist based in Brazil said, “Information warfare is easy, cheap and will be at the reach of just about anyone, while AI is unable to assess what is the real or fake news. There will be chaotic disinformation to the point that it could become a national security issue. With the advent of new technologies, the capabilities of potential terrorists and harmful organizations will increase beyond any entity’s possibilities to control them, despite the increasing surveillance and pre-detection developments. Geopolitical power will change in favor of China, which is increasing its influence around the world and expanding its monopoly over critical resources.”

A globally-based researcher of digital communications issues commented, “They will not be used ethically, and even if they will, ethics is the wrong standard. I am interested in AI that is equitable and just, not ethical.”

A Hong Kong-based researcher and data scientist expert in COVID-19 and political polarization commented, “The fundamental problem is that end-users have limited capacity to ensure that the governments and companies are complying with ethical codes.”

A journalist and industry analyst expert in AI ethics said, “It’s not one or the other, it’s both. If we’re going to survive, ethics has to win in the long run. However, it’s going to take longer than 10 years to change the mindsets of society. AI can be a companion in productive ways. However, since it’s just a tool, it can also be designed or altered to do harm, however subtle. Quantum computing is going to be made available via the cloud because of the cooling requirements. A lot of innovation has already happened, but in the next decade there will be major advancements. It will break cybersecurity as we know it today. Humans need to be in the loop. However, they will likely find themselves out of the loop unless safeguards are built into the system. AI can already do many tasks several orders of magnitude faster than humans. Quantum computing will add yet more orders of speed magnitude.”

A law professor and former dean of a prestigious U.S. law school said, “Surveillance and predictive analytics will penetrate financial and legal decisions and may enter into allocation of medical resources. On the positive side, AI will improve development of pharmaceuticals and other discoveries.”

A leader of Brazil’s networked communications community commented, “I see algorithms governed for profit mostly. Ethics will be a concern just because society claims it to be. 2030 is too early for quantum to be of value.”

A medical nanotechnology innovator responded, “Many AI applications are built and tested on limited datasets which may not have the diversity of the ultimate target population. By its very nature, this is an issue with AI unless it is constantly updated and vetted. Unintended consequences will no doubt occur. With one’s life totally in cyberspace, even ‘independent’ platforms may share data. Using AI, such data aggregators can recreate very detailed personal profiles. A simple unintended consequence. Buy a gift for a family member who shares an account, and it’s hard to hide this. One can use some approaches, but then the gift emerges from suggested purchases or ads. The same holds for services such as Netflix depicting previous habits. Privacy is being intruded upon. I don’t think quantum computing, which I am familiar with, will necessarily have any greater ethical applications than normal digital computing. In fact, privacy could be eroded more by its ability to instantaneously aggregate data from separate databases.”

A network architect for a major technology company responded, “While there will be pressure on commercial entities to use AI according to ethical standards, I believe the greatest threat of non-ethical and harmful AI will be pursued by governments, law enforcement and intelligence agencies. Quantum computing may help to advance AI in general, but I don’t see why it will necessarily help build ethical AI.”

A North American futurist/consultant responded, “Ethical AI will not advance. Too much money to be made by being intrusive and violating privacy.”

A noted academic leader, teacher and author at a leading U.S. university responded, “Powerful forces behind the new digitalized normal will act in their own interests, and even more powerful fixes behind them, albeit under the guise of claims to action in the public interest. The degree of ethical or unethical use of quantum computing will depend on the balance of political power.”

A post-doctoral researcher studying the relationship between governance, public policy and computer systems observed, “At present, there is little understanding of how to move from statements of principle to actual practice in automated systems. I believe firmly that there are opportunities to improve here by thinking of the automation as part of a system that includes people, although I suspect that incentives push the world away from equilibria in which this is the norm. The result is automation run amok, with little in the way of capacity for individuals to contest its actions. Again, the risk is to individual human agency, and the financial gain for ignoring good norms of behavior and simply pursuing what is possible without regard to its morality will likely overcome intentions to do good. At the moment, you see a version of this, where companies talk about their high-minded principles while simultaneously ignoring worker rights and robbing the autonomy of their users.”

A professor of earth science wrote, “Ethics is not a priority of our government at the federal level, and even in organizations such as Facebook/social media. There are legal and ethical violations in both, and yet, no one is held accountable. Nothing can convince me that AI will be ethical about the development or its use.”

A professor of economics at a major U.S. university responded, “Technology consistently appears to move quicker than our ability to manage it. Financial incentives will undercut the public interest. My greatest concern is we will not understand the social implications of AI before it is too late. My hope is that there will be some technological breakthroughs that could be used to better the human condition.”

A professor of law at a major university in the U.S. commented, “Questionable. There is little commercial reason to make these systems ethical, and there are lots of commercial reasons not to. I don’t know that AI is really as advanced as we think it is. Quantum will help, but we need to answer the philosophical questions around what we are trying to build.”

A professor of psychology and expert in how people learn new information, both true and false, observed, “Despite all the talk about ethical AI, I’ve seen very little action. I do not believe that companies will change their practices – especially when it will decrease profits.”

A professor of robotics and mechatronics based in Japan wrote, “Basically, AI can only learn from past data. Therefore, its ability strongly depends on the training data. Justice depends on individuals, religions, nations, and so on. There is no correct answer that everyone can accept in common. If AI gives top priority to the maintenance of the global environment and the sustainability of life, it will lead to the exclusion of people.”

A professor of sociology responded “As AI-based tools become increasingly available to the public, the values encoded into AI will only further reflect existing social inequalities and biases. For example, we’ve already seen AI weaponized with deepfakes, for example. But it is also weaponized by computer scientists and companies creating tools for law enforcement despite a national movement to limit policing.”

A professor of urban planning noted, “Predicting is difficult, especially the future. Already the general ‘co-evolution’ of humanity and technology suggests that humans are not nearly as in control as they think they are of technology’s operations, much less its trajectory. While I am not an enthusiast of singularity speculations, there does seem to be a move toward AI stepping in to save the planet, and humans remain useful to that for a while, maybe in bright perpetuity. With wondrous exceptions of course, humans themselves seem ever less inclined to dwell on the question of what is good, generally, or more specifically, what is good about mindful reflectivity in the face of rampant distraction engineering. While one could worry that humans will unleash AI problems simply because it would be technically possible, perhaps the greater worry is that AI, and lesser technological projects too, have already been shifting the nature of thought and discourse toward conditions where cultural deliberations on more timeless and perennial questions of philosophy have no place. Google is already better at answers. Humans had best cultivate their advantage at questions. But if you are just asking about everyday AI assistance, just look how much AI underlies the autocomplete to a simple search query. Or, gosh, watch the speed and agility of the snippets and IntelliSense amid the keyboard experience of coding. Too bad. How are mere humans supposed to anticipate the impact of quantum computing? [In 2030] they are no longer in charge.”

A program director at the U.S. National Science Foundation responded, “Most AI uses in 2030 will be the same as now: analysis of socio-demographic data for the purposes of marketing. I do not see the companies involved changing their ways from now because I do not think governments will have the appetite, or frankly the technical resources, for effective regulation.”

A public policy expert in poverty studies commented, “The widespread use of AI has started to increase understanding of the need for something like an FDA for algorithms.”

A research professor of international affairs expert in digital trade, corruption, governance and human rights responded, “AI can have tremendous benefits, but only if it is properly regulated and seen as a global public good. The U.S. should not view AI as a proprietary technology but work with researchers from a wide range of nations to build AI and help develop AI in the global public interest. The key to that is effective data governance and rules governing the mixing of personal, public and proprietary data.”

A research scientist said, “The problem I see with that issue is: who defines ‘ethical?’ The current debate around ‘racism’ is ripe with bad actors, and I don’t see a good agreement amongst tech leaders and politicians in regards to what ethical AI even means.”

A research scientist wrote, “Most AI is a black box. I see no incentives for companies to change this or to make their algorithms public when they are often a competitive advantage. Quantum computing does not address the issues of economic incentives and the private interests of corporations.”

A researcher in social media and children’s rights said, “There is a reason for many conflicting interests in the context of AI implementation. Many structures are already set up and have an impact, almost none controlled by politics, law and democratic structures. There is an urgent need for public, law-based and democratic control of AI implementation in all everyday contexts. Adiaphorization (Bauman) is a real problem in the interplay between AI and human beings.”

A senior research advisor in social psychology wrote, “The AI technology will develop too quickly for ethics or law to keep pace and it will not be used ethically. In fact, I think it will be used to silence those who question its uses. What excites me is the idea of what tech can do. What concerns me is people looking to take advantage of others with technology and the lack of education about ethics at a young age. Humans will still be in the loop as AI is created, but it will not unfold evenly.”

A senior research scientist expert in complex systems wrote, “In the case of some forms of AI, such as deep learning, the opaqueness of the system after training makes it difficult to tell if unethical decisions are being made. Given the COVID-19-related damage to the global economy over the next 20-30 years, there will be commercial pressures to cut ethical corners with AI that will ultimately succeed. The increasing use of AI systems will put more and more people out of work, increasing levels of unemployment and driving down wages, thus deepening and extending the economic downturn. Quantum computing, when it finally arrives, will only be useful for a limited range of problems. For most of these, traditional computing systems will outperform quantum computing systems for the foreseeable future.”

A sociology professor, consultant and director of a university center wrote, “I just don’t see how, in today’s political climate, ethics are a priority.”

A technology industry analyst predicted, “AI is a tool for tech firms and will be developed by them in ways 1) that they can envision and 2) so they can make money. You might as well as ask whether Facebook or Google were developed to be used for ‘the public good.’ Note that Google has dropped the ‘Don’t be Evil’ slogan, for good reason. AI can be used to provide us with assistance and answers – ideally becoming smarter about these answers over time. Accuracy answering a voice question is a delighter. Failing and referring users to ‘here’s what I found on the web’ is a failure.”

A technology policy expert noted, “You can’t begin to create ethical systems until you decide what ‘ethical’ means, and that will vary across the board. AI R&D will be driven mostly by profit. AI use for surveillance and sentencing scare me; those are going to be leading deployment use cases. Profit motive is generally the polar opposite of ‘ethical,’ and profit drives the country.”

A telecommunications and networking writer and publisher commented, “Questionable. It will be the same as corporations that have employed humans. If there is profit to be made, but requires unethical behavior, it will occur unless strictly regulated and enforced. It will be like the evolution of humane labor practices. Quantum computing is just faster, better computation. AI is software running on any given platform.”

A telecommunications law professional wrote, “The deployment of AI is all about profit maximization. I don’t see a path to broad acceptance around AI principles for good.”

A vice president for a U.S.-based digital research center commented, “AI, by and large, is trained on existing data. Evil in, evil out. The chances that this will be fixed by profit-driven technology companies seems quite slim to me and certainly shows no sign of happening yet. The potential of quantum to overturn our existing assumptions about what can or cannot be done in the management of information, scientific research discoveries and decision-making is enormous, but I think it is coming somewhat further in the future than is assumed by many enthusiasts. So, in that sense, it has the potential to exacerbate bad policy or amplify good policy. But I do not see that it has any direct influence on the creation of those policies.”

A well-known independent analyst/commentator on national security and foreign policy wrote, “There are no controls over so-called AI. AI is not ‘intelligent.’ AI are simply algorithms written by humans to obtain certain objectives. They collect data and use the data to select from among predetermined options based on values or objectives set by the designer of the algorithm. This is not ‘intelligence,’ it is manipulation. Quantum computing will destroy privacy as we know it.”

A well-known longtime U.S. academic leader and scholar of history, humanities and technology said, “The incentives of the companies that create or deploy AI will work against ethical use and limiting use of the systems. Our politics has favored companies over people for decades, and I do not see that changing. So, I have no confidence that AI will be used in an ethical way. I do not believe that the ethics of AI can rely on computer systems. Such systems are systematic, and no system of thought or rules can be right all the time. That is why the power of pardon has been part of legal systems since the beginning of legal systems. That is why we do not award grades simply on the numbers generated by the performance on a series of assignments. Ethics are uniquely human and require human judgment to work.”

A well-known sociologist expert in the evolution of digital society wrote, “Surveillance. Tailoring of ads.”

A well-known technology researcher and writer commented, “Companies see AI as a way to reduce costs and gain competitive advantage. They are not fundamentally interested in helping society or people. They will only do so if pressure is applied. The problem is that AI is developing so fast it is very hard for society – particularly politicians – to keep up. That makes it hard to regulate or direct. Quantum computing is interesting from a theoretical viewpoint, and will make important contributions in some areas. But the idea that it will help build ethical AI is hopelessly optimistic.”

An anonymous journalist said, “Most AI applications are being developed by entities for their own interests; for example, retailers looking to best target customers and maximize sales. Their interests and the interests of the individual are rarely the same. Healthcare and some financial services may be exceptions in that their interests and the individual’s interests may align. But those cases will not be the majority of use cases.”

An anonymous respondent said, “Of course AI will be used in questionable and also ethical ways in the future. Human greed and selfishness motivate some people to leverage technology to grow rich and powerful. Human dignity and compassion will work to balance those tendencies as they always have, but just as computer security has become a growth industry by balancing the inclinations of creative individuals with good and bad intentions, so will our efforts to control and limit the applications of AI that feed greed. I tend to subscribe to the 1950s-60s notion that technology is value-free, and it is people who use it for good or evil. Quantum computing may give us new ways to reveal the black box of AI-driven outputs, but it also can be used to accelerate even more hidden layers of inference. Humans must always be involved, but they will likely not be fully in the loop – they must be in the beginning of the loop (why are we executing these processes?) and at the outcome (what do the results mean?). In most ways, humans are already out of the full loop, and this will be extended. The danger is turning motivation/initiation and implications/meanings over to AI. I do not want to be in the internal loop of the automatic braking system on my car, but I want to initiate when to brake and understand consequences.”

An anonymous respondent wrote, “The idea of AI is flawed to begin with – so it can never be ethical. Even if data was described and inputted differently – even if AI was trained differently – it would still be flawed. Add to that the fact that higher education does not care about humanities-based teaching/learning (nor does the business world), so the people creating any and all components of AI don’t know enough/care enough about the foundational issues. Ethics cannot be computed. AI is not possible. What we call intelligence is not intelligent – it cannot ‘think,’ so it cannot be done.”

An anonymous respondent wrote, “AI will be used to solidify wealth and power disparity and increase the control and tracking of individuals. Nation-states are already fading in power compared to the influence of transnational entities. These same entities already cache huge quantities of information on individuals and already use AI systems to probe and mine the data with goals specific to the corporation’s needs. We are very unlikely to wrestle better control of political power, the only thing which would hold such borderless entities accountable for their use of data on humanity. Quantum computing is already being pursued by corporate entities either directly or indirectly (patent strings attached) by universities, et cetera. They will want a return on their investments, it’s their M.O. and can see how the leap in computing power would play into their use of AI for fine structure knowledge and predictions about individuals. Knowledge of quantum computing is highly unlikely to become ‘open source’ for the very reason that it will be the source of power and wealth.”

An anonymous respondent said, “The government doesn’t want us to be anonymous, but allows companies to use AI to gather big data, which is then shared with the government to track us. AI is used to help hospitals make decisions without requiring approval from patients. AI will begin to make decisions for people and will not allow for human input. We have to trust in businesses and government to make the right decision, and with the current administration and the drive for profitability, the human aspect is ignored. There is no more trust in the government. My trust in the government all depends on how much damage control it can do after Trump, and if he is reelected. I do not trust him to make decisions that are in the best interest of the American people. The government is running the quantum AI research, and I do not at this time see the government as a whole as ethical. I believe the scientists are most likely ethical, but I do not trust the funding bodies and the government, and do not see them currently as ethical entities. Of note, AI research may not require institutional review board approvals at this time, which is a major barrier to my trust in AI.”

An anonymous respondent wrote, “What worries me most is not AI, but where we’re going politically. Now democratic institutions have been weakened virtually worldwide. Only if we see some reversal are we likely to also see the ethical uses of AI.”

An anonymous respondent said, “Going by what I see in the private sector today, I cannot imagine that AI will mostly be used ethically. Until we see better regulation of the way private companies and government agencies use this technology, we will see greater erosions to social justice and social welfare as a result.”

An anonymous respondent commented, “AI technologies are driven by commercial interests that are antithetical to ideas around ethical practices. Ethics are situated, and therefore don’t scale well, otherwise they become moral regimes baked into technical systems. AI technologies are unlikely to scale down to ethical situations. A general review suggests that quantum computing is likely to inform AI developments, perhaps in the first instance in computational finance capitalism. Depending on the extent of social/consumer/market pressure, ethical design may well feature in AI systems.”

An anonymous respondent responded, “I am most worried by the ultra-capitalist approach, the unrestricted law of the markets. I think this will prevail. The need of making money out of new technology and information, big data, AI tools such as computational statistics, statistical classification, machine learning, or deep learning and automated reasoning will lead to ethical breaches. The erroneous disruption about the concept of cause-effect, brought about by AI techniques, will make people lose their grasp on reasoning and knowledge from evidence derived from cause and effect. Even scientists will lean too much on computational statistics and AI with damage to experimental evidence, design of experiments and learning from controlled observations. Quantum computing is still in its infancy. There is no way that it will compete with current and future state of the art in computing based on semiconductor devices. We will see advances, but not at the required pace as to pose a real threat to current computing.”

An anonymous respondent observed, “My expectations are not at all hopeful, in part because of my concerns about how power will be exercised in the U.S. in the future. I have no basis for seeing a radical transformation in the systems of governance, including governance over technological development, such that its primary purpose will be the advancement of the public interest, rather than the continued capture of even more economic and political power and influence. The global competition that I see, including the battle for technological leadership in which we see China becoming more likely to become a global leader, is not one that is encouraging, given the ways in which surveillance is being used to tighten government control over its populations. These are developments that I see as being incompatible with democratic governance, and I fear that competition with China will be drawn toward control over populations, rather than enhancement of democratic capabilities within populations, including those of partners (however defined). I see little to suggest that the resources needed to develop corporate initiatives that are focused on an individual’s ethics, rather than ethical systems that some strategic analysis at the governmental, or even at the level of a public-private partnership (P3s), sees as being more profitable, will emerge and grow within the kind of global markets we are currently seeing develop. The development of ethical systems, especially those which are personalized and private, will be massively expensive, and the levels of income necessary for such systems to be affordable to the average individual are not on the horizon; indeed, we are rapidly moving in the opposite direction. Instead, what we are likely to get are systems masquerading as personal ethical resources, that are actually advertising/influence vehicles for the third parties that will be subsidizing their acquisition by members of the public (or in the Chinese case, will be required for use in order to continuing participating in the economy at all). I don’t see humans being ‘out of the loop’ at the levels in which decisions about design and distribution will be made, especially in the context of P3-based development of these systems.”

An anonymous respondent said “I believe the special emphasis on ‘AI’ is misguided and unhelpful. Technology will be continually advanced in power and sophistication, and classification of some as ‘AI’ and some ‘not AI’ will be less possible and less meaningful (if ever it was). Some governments (notably in Europe) are engaging in appropriate legislative control of technology, but they are in the minority. It is certainly possible for other governments to follow their example, but I am not optimistic. We are very far from the ‘true AI’ which would allow genuine ethics to be encoded into systems, or developed by those systems. We are also very far from any practical use of quantum computing, let alone from applying it to the complexity of intelligence.”

An anonymous respondent commented, “So long as a business (higher education included) is managing an AI project, profits come before people. While companies may do good and be good at the beginning, profits, greed, ego, etc., will take over and ultimately harm people.”

An assistant vice president at a major American multinational telecommunications company observed, “I don’t view AI as any threat that differs significantly from past technologies.”

An associate professor of education policy studies based in the U.S. observed, “Money talks.”

An Australian writer, philosopher and literary critic responded, “2030 is only 10 years away. These sorts of changes happen more slowly than that.”

An expert and consultant in education policy said, “I am not confident that there will be a shared commitment to these ethical principles. Many will take advantage of this technology, especially with the lack of Americans’ commitment to the common good. Accountability is something I am skeptical will happen.”

An expert in economics and political science noted, “Tech companies are ‘whitewashing’ ethics broadly to make the appearance of ethical AI. There are multiple factors: legacy data, developers/creators of technology who do not understand that they are not the center of the universe (or even typical), lack of end-user orientation in technology/tech creation.”

An expert in the history of U.S. foreign relations and the international human rights movement wrote, “I have no faith in technology companies to police themselves, as they – like all corporations have proven time and again – care little about the common good. Unless there are rigorous and well-enforced government regulations in place, technology companies will not do what is right, they will do what is profitable. AI right now has been used by police to identify and target protesters who are exercising their first amendment rights. I worry AI will be used to profile and blacklist people. Until we reckon with the unethical and racist biases that the humans programming AI have, there is not likely to be ethical AI. Poor inputs lead to poor outputs, regardless of how powerful the computing.”

An expert in information systems and cybersecurity said, “AI is a black box. All results are based on training data being fair. Quality of data will be incredibly hard to maintain. Humans will slowly move out of the loop. Scary thought.”

An expert in media management responded, “There is no historical evidence that ethical considerations are applied when any new technology emerges. Greed and profit will dominate these ventures.”

An expert in network society and digital activism based on Australia noted, “Just like most technology, AI will be weaponised some way. While governments and corporations claim they are following regulations, AI are black boxes with outputs that are not predictable. I think stable room-temperature quantum computing required to build any AI is too far away unless biological quantum effects can be harnessed. There will always be rogue scientists that use biological quantum effects to build something devastating.”

An expert on AI and technological innovation and the future of law observed, “This is an impossibly difficult topic to approach in few words. We are at an important turning point. If the U.S. took the lead in pushing for greater development of ‘ethical AI,’ I would be more hopeful about the future. The EU has already taken more concrete steps than the U.S. has, but the two superpowers together could achieve important global results. However, in the current political atmosphere, the U.S. has lost much of its international influence, which has not been lost on China. What Western democracies consider ‘ethical AI’ and ‘AI for social good’ are far off from China’s or Russia’s visions. What concerns me are the macropolitical shifts that will inevitably pervade technology. I think that in 10 years’ time, humans will still be, by and large, in the loop, but perhaps on the way out if we do not develop clear boundaries about where AI can be deployed.”

An expert on conflict prevention and peace commented, “Geopolitical competition won’t allow for consensus around regulatory regimes of AI. We’re also seeing the rise of techno-nationalist competition and the convergence of economic and security interests around AI.”

The chairman of an investment and strategic advisory firm observed, “AI development is a global phenomenon driven by a combination of state and economic interests. I find it hard to believe that there will be either the will or the capacity to cooperate sufficiently to agree, then apply, then ‘police’ such policies.”

The director of a public policy center responded, “I see a positive future for AI in the areas of health and education. However, there are ethical challenges here too. Will the corporation who access and hold this data use it responsibly? What will be the role of government? Perhaps AI can help the developing water deal with climate change and water resources, but again, I see a real risk in the areas of equitable distribution, justice and privacy protections.”

The former vice president for technology at a major North American company wrote, “We haven’t solved fundamental privacy and security issues in the 40 years that I’ve been in the IT industry, despite solutions being available. How will we ever solve more complex ethical issues? We’re missing fundamental tools for quantum computing. Until they’re available, we won’t be able to build more-complex systems.”

The founder, chair and CEO of a sustainable business commented, “Every single person needs to get involved in positively training AI so the collective intelligence can overcome the forces that are currently in motion as reflected in my prior answer. There must be a process to instill protective measure for humans and all biological sentient beings.”

The manager of a project focused on enhancing digital life said, “I answer ‘No’ because it isn’t clear what it means for ‘principles’ to be ‘employed’ in a system, to say nothing of the challenging question of what constitutes an AI system. I think most if not all AI systems, by any definition, will at least claim to have relied on ethical principles in their design and implementation, – ‘ethics-washing’ – but depending on the principles, that might mean almost nothing, especially without some sort of legal regulation and enforcement, which in turn would have to be based on explainability and transparency. I think AI systems, whether machine learning rooted, or ‘expert systems’ or semi-autonomous devices will be at work in most people’s lives, but largely invisibly in the most important and far reaching ways. Based on current trends, I do not expect these uses to improve most people’s lives in any substantive way, and I think that they will most likely be used to reinforce and exacerbate existing inequality and reduce social mobility. I would say my primary concern is that without effective enforcement of ethical principles and regulation, it will be too easy for individuals, companies and even countries to speak out of both sides of their mouths and rely on illicit ‘non-ethical,’ but possibly more powerful, AI to actually do things while professing ethics in all their dealings. This will be all the more problematic if one or more countries, seeing a competitive advantage, refuse to adhere to global principles.”

The publisher and editorial director of a science magazine said, “Just as there is malicious software, there will be malicious AI, no matter what we do to control it.”

The following predictions are from respondents who said ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030

A research scientist who works at Google commented, “I’m involved in some AI work and I know that we will do the right thing. It will be tedious, expensive and difficult, but we’ll do the right thing. The problem will be that it’s very cheap and easy for a small company to not do the right thing (see the recent example of ClearView, which scraped billions of facial images, violating terms of service, and created a global facial-recognition dataset). This kind of thing will continue. Large companies have incentives to do the right thing, but smaller ones do not (see, e.g., Martin Shkreli and his abuse of pharma patents).”

A research scientist working on AI innovation with Google commented, “There will be a mix. It won’t be wholly questionable or ethical. Mostly, I worry about people pushing ahead on AI advancements without thinking about testing, evaluation, verification and validation of those systems. They will deploy them without requiring the types of assurance we require in other software. For global competition, I worry that U.S. tech companies and workers do not appreciate the national security implications.”

A vice president involved in AI research at a major global technology company said, “The COVID-19 research is really going to push our beliefs on ethics and data sharing. So far, I feel tech companies have done a very good job of managing privacy and scientific progress. This will be one of the most significant tests of AI and ethics. 2030 is just too early for quantum computing. Other things like government and social issues will be more important.”

A well-known cybernetician and emeritus professor of business management commented, “AI will be used to help people who can afford to build and use AI systems. Lawsuits will help to persuade companies what changes are needed. Companies will learn to become sensitive to AI-related issues. Some members of the public will lobby, protest, etc. to ensure that AI systems operate ethically. There will be lots of back and forth.”

A professor of human-centered design and engineering at a major U.S. university said, “We’ll certainly see AI framed as being used for the public good, though whether it is actually for the public good is questionable. Also, we very well could have AI systems that overall benefit society (i.e., a public good) while further marginalizing some groups (i.e., that are oppressive and unethical). Just because something is framed as for the public good doesn’t mean it is good for everyone.”

A leader with an education technology institute based in the U.S. commented, “Society will make the effort to embrace all because the ‘market’ lives to sell to all people. As long as you have money or the potential of getting some, you are ‘in the club,’ so having a system that does not discriminate is one that will be supported by the folks who want to make money by selling folks products or services. Perhaps not the most compassionate answer, but whatever gets us to a society that ensures diminishing returns/penalties for bigotry is heading in the right direction. Quantum is all about controlling the environment (vacuum, absolute-zero temp) and the surgical use of harmonics. So, where might a good place exist to manage both? Space is the place for the quantum boom! So yes, quantum computing is coming, and yes, it will have a huge impact on AI. Indeed, quantum will be AI with all that preceded it no longer recognized as true AI.”

An active leader of an African nation-state’s communications agency wrote, “The abuse of algorithms as forms of power will be reined in by 2030. I expect to see greater awareness regarding the role of hyper objects in human lives post-COVID-19. The three key hyper objects that need to be brought within the span of human control on an ethical basis are global warming, pandemic proliferation and AI/tech. Whether this will be possible lies in if, post-COVID-19, a new social contract is developed within human political and economic systems. If things AI continue along the development paths currently chosen, we will see science fiction become a reality in the form of a horrible dystopia, with all the implications of a QWERTY keyboard being locked in. The future of AI is tied in with the future of that other hyper object, the capitalist system. The analogy is to the 1930s. That is the winky comparator we have. Either we get a New Deal or a revolution. Recent history of unethical abuse by tech developers will result in the ethical regulation of the AI/tech industry, and these measures will govern the development of quantum computing and AI systems based on it. So yes, humans will be in the loop as AI systems are developed. As to how and when that evolution will unfold, and when, is currently unknown. What is known is that unfettered unethical AI/tech experimentation on humans will have to be prevented by the introduction of sophisticated regulatory systems. Is this a naive view? I don’t think so. We are not entirely psychotic, only a bit. Here the analogy is with the regulation of the internet. Regulation was resisted during the UN’s World Summit on the Information Society movement in the 2010s. No global treaty on the internet was put in place. But following the economic crash of 2008, followed by the explosion of unethical algorithmic behavior in social media and other tech companies and governments, there is a case for demanding regulatory oversight of AI.”

A strategy and planning expert responded, “While I say and believe that, yes, ethical boundaries will be put in place for AI by 2030, I also realize that doing this is going to be incredibly difficult. The understanding of what an AI is doing as it builds and adapts its understandings and approaches rather quickly gets to a point where human knowing and keeping up gets left behind. The how and why something was done or recommended can be unknowable. Also, life and the understanding of right and wrong, or good-ish and bad-ish can be fluid for people as things swing to accommodate the impacts on the human existence and condition, as well as livable life on our planet. Setting bounds and limitations has strong value but being able to understand when things are shifting out of areas that are comfortable or have introduced a new realization for a need to correct for unintended consequences is needed. But bounds around bias need to be considered and worked through before setting ethical limitations in place.”

A journalism professor emeritus wrote, “The initial applications, ones easily accepted in society, will be in areas where the public benefit is manifestly apparent. These would include health and medicine, energy management, complex manufacturing and quality control applications. All good and easy to adhere to ethical standards, because they’re either directly helpful to an individual or they make things less expensive and more reliable. But that won’t be the end of it. Unless there are both ethical and legal constraints with real teeth, we’ll find all manner of exploitations in finance, insurance, investing, employment, personal data harvesting, surveillance and dynamic pricing of almost everything from a head of lettuce to real estate. And those who control the AI will always have the advantage – always.  What most excites me beyond applications in health and medicine are applications in materials science, engineering, energy and resource management systems and education. The ability to deploy AI as tutors and learning coaches could be transformative for equalizing opportunities for educational attainment. I am concerned about using AI to write news stories unless the ‘news’ is a sports score, weather report or some other description of data. But I suspect there will be attempts to use AI to write novels, poetry and editorials. I hope not, but I have no doubt it will be tried. And I’d hate to see job applicants screened by an AI interviewer, but I suspect that will be tried too. My greatest fear, not likely in my lifetime, is that AI eventually is deployed as our minders – telling us when to get up, what to eat, when to sleep, how much and how to exercise, how to spend our time and money, where to vacation, who to socialize with, what to watch or read and then secretly rates us for employers or others wanting to size us up. It seems likely that quantum computing would/could become a major asset in the wise application of ethical principles, especially situational ethics. AI has no need for greed, avarice, jealousy, resentment, love, hate or spite. It seems likely that the initial applications may involve a range of solutions for human consideration with some of those solutions informed by ethical principles or ethical implications. It may be that is as far as such applications evolve – to make sure humans are mindful of ethics – rather than the other way around.”

A technology leader affiliated with the International Telecommunication Union wrote, “The discussions of ethics for AI have started in a more timely fashion compared than those for the internet. What worries me is that AI is only a tool. A hammer is also a tool. Any tool can be used for good or bad. AI can make huge differences in many fields. Health is one area that will experience a huge impact. A concern though, tied to AI, is the bias that may be present in datasets used to train the AI solutions. Quantum computing will make AI solutions more common, including for problems that remained non-computational before. Nevertheless, in my opinion, quantum computing is also a tool like AI or a hammer. Their ethical use remains with the people designing or using them.”

A professor at university based in the U.S. Midwest responded, “I anticipate that some ethical principles for AI will be adopted by 2030, however, they will not be strong or transparent. Bottom line: Capitalism incentivizes exploitation of resources, and the development of AI and its exploitation of information is no different than any other industry. AI has great potential, but we need to better differentiate its uses. It can help us understand disease and how to treat it but has already inflicted great harms on individuals. As we have seen, AI has also disproportionately impacted those already marginalized – the COMPAS recidivism algorithm and the use of facial-recognition technology by police agencies are two examples. The concept of general and narrow AI that Meredith Broussard uses is appropriate. Applied in particular areas, AI is hugely important and will better the lives of most. Other applications are nefarious and should be carefully implemented.”

A director with a strategy firm commented, “I chose ‘Yes,’ because I hope so, although I am less than certain. The creators of AI and AI in general are most likely to be used by those in power to keep power. Whether to wage war, or financial war, or manage predicted outcomes most AIs are there to do complex tasks. Unless there is some mechanism to make them for the public benefit, they will further encourage winner-take-all. Regarding lives, let’s take the Tesla example. Their claim is by the end of the year, it will have Level 5 AI in its vehicles. Let’s assume that it takes a couple of years beyond that. The markets are already betting that 1) It will happen, and 2) No one else is in any position to follow. Rapid scaling of production will enable fleets of robocall-taxis, will destroy the current car industry as the costs are radically lower and the same tech will impact on most public transport too in five years. Technology-wise, I love the above scenario. It does mean, however, that only the elite will drive or have a desire to have their own vehicle. Thus, for the majority, this is a utility. Utilities are traditionally for the public good. It’s why in most countries the telephone system or the postal system were originally owned by the government. It’s why public transport is a local government service. We will not be well served by a winner-take-all transportation play! Amazon seems to be doing pretty well with AI. They can predict your purchases. They can see their resellers success and at scale simply replace them. Their delivery network is at scale and expected to also go to autonomous. I can’t live without it, however each purchase kills another small supplier. Because economics eliminate choice – one has to feed oneself. As long as AI can be owned, those that have it or access to it have an advantage. Those that don’t are going to suffer and be disadvantaged. The concern should be that the R&D for quantum systems is no longer housed in universities and other public domains; they are being developed by corporations. There is often no peer-review dialogue involved, and the legal structures don’t exist for them.”

A professor of political science commented, “What gives me the most hope is the resilience of democratic political systems despite its pathologies, and that voices of reason and ethics can prevail. What worries me the most is that AI will be driven predominantly by instrumental needs of private firms, and that government will also exploit AI mostly for reasons of security, rather than for broader wellbeing.”

An anonymous respondent wrote, “It’s an open question. Black Lives Matter and other social justice movements must ‘shame’ and force profit-focused companies to delve into the inherently biased data and information they’re feeding the AI systems – the bots and robots – and try to keep those biased ways of thinking to a minimum. There will need to be checks and balances to ensure the AI systems don’t have the final word, including on hiring, promoting and otherwise rewarding people. I worry that AI systems such as facial recognition will be abused, especially by totalitarian governments, police forces in all countries and even retail stores in who is the ‘best’ and ‘most suspicious’ shopper coming in the door. I worry that AI systems will lull people into being OK with giving up their privacy rights. But I also see artists, actors, movie directors and other creatives using AI to give voice to issues that our country needs to confront. I also hope that AI will somehow ease transportation, education and healthcare inequities. I have faint hope that governments can use AI to spot healthcare problems before they reach pandemic proportions; crime patterns to interrupt urban violence; and to somehow upend deep-seated inequities, especially in underserved communities. The biggest question is whether companies, governments and others can interrupt inherent biased data baked into the system – whether the incentive to do so can ever be strong enough. Yes, I think humans will always have the final word. I hope that we maintain the skepticism that the original ‘Star Trek’ TV show always had – that machines will never replace moral leadership and human compassion and gut instinct.”

The co-founder and coordinator of a digital grassroots organization based in Africa said, “Ethics will be embedded, but that doesn’t mean it will be implemented. It’s not about the policy, it’s about the people, power and who is vulnerable to corruption. The foundation is the human, and the people are ethical. It will evolve that way. I believe there will still be bias though.”

The well-known founding director of a U.S. center for humanities observed, “The Public Interest Technology initiative led by the Ford Foundation gives me most hope. Also, some corporations, in response to Black Lives Matter, are devoting greater attention to technology ethics. Quantum computing technology will eventually evolve, but it is hard so it will take some time – longer than a decade – to have many practical applications.”

An associate programme specialist at UNESCO observed, “Most organizations and governments will say they commit to such principles, but there will be difficulty to audit and little or no enforcement, so likely not fully applied in practice. Again, enforcement is likely to be limited, perhaps to Europe, for example. Questions may go to courts with insufficient technical competence to properly adjudicate, and to policy makers similarly lacking such knowledge. It is likely that humans-in-the-loop may disappear due to cost (already this is partially the case with some AI technologies).”

A futurist and managing principal for a consultancy commented, “AI will mostly be used in ethical or questionable ways in the next decade. However, one must provide for either global leadership on the innovation front to counteract the U.S. juggernaut of lobbied-for privileges or work harder to lobby for balance in these applications. AI offers extremely beneficial opportunities, but only if we actively address the ethical principles and regulate and work towards: 1) Demographically balanced human genome databases, 2) Gender-balanced human genome databases (especially in the area of clinical drug trials where women are severely under tested), 3) Real rules around the impact of algorithms and machine learning developed by poor data collection of availability. We see this in the use of facial recognition, police data, education data, Western bias in humanities collections, etc. AI also has the potential to once again be a job killer but also assist the practice medicine, law enforcement, etc. It can also be a job creator but countries outside the U.S. are making greater progress on building AI hubs. Quantum computing has potential, but fundamentally it’s about speed. Doing something unethical faster should not be the goal. For this to have a positive effect there have to be parallel conversations, policies, and laws in place to avoid overreaching and potential damage to the majority of people (especially women and BIPOC). If humans aren’t in the loop our species is lost and subservient to technology.”

A technology consultant said, “As with all technology, the use depends on the user. The increase of AI will result in mixed uses – being used for both ethical and questionable ways. With the amount of information and with an increase in use of technology (working from home, education, etc.), AI is a necessity as the complexity of sorting and using the ever-increasing amount of information increases. Choosing to focus on the positive, I feel AI will be mostly used ethically, although it also has the potential to corrupt the democratic process in countries worldwide, and to manipulate and control vulnerable individuals. AI and quantum computing will become more ‘human’ over time – incorporating human biological aspects into computing and AI. I believe that combining human biology with computing is the only way that AI can ever reach its full potential. With efforts in this direction, the conversation regarding ethical use of AI and computing will be a major focus. This focus may not be within the next 20 years, but with the pandemic I believe that movements in this direction will be sped along at a faster pace.”

A leader in telecommunications based in New Zealand wrote, “AI will lead to fewer jobs, improvements in healthcare and more-precise treatments. There will be improved public safety and less petty crime in major cities because of AI systems. In-home robotics will eliminate some domestic duties. Transportation can be more efficient and cleaner with driverless and autonomous vehicles. Data breaches and abuses could be magnified under an AI system. However, the weaponisation of misinformation and data could be greater. Humans will need to work out ways to digitally cooperate. Humans will need to devise standards, certifications and regulatory systems to define a rules-based system for AI. Digital divides will exist between those with AI systems and those without. Task creep will be evident. AI systems will become more common, so society’s comfort levels with what we let the system do for us without interacting with it will become greater in scope. There will be less questioning of AI systems deployed for public-good purposes as resources become scarce.”

An expert in learning technologies, digital life and higher education wrote, “AI will be used in ethical and questionable ways in the next decade. With luck, enough people and appropriate governance will be involved to steer AI in ethical directions. The availability of public information about what’s happening with AI will influence ethical behaviors, regardless of organization. AI has the potential to be very helpful in a range of human activities – from government to communications to medicine to learning. Still, the ethics of AI development depends on what we want AI to do. Many folks, including experts, still don’t know what they think about quantum computing and how to think about quantum computing in relation to AI, much less about the possibilities of its assistance with ethical AI. The theoretical musings on the subject cover the waterfront of exploratory communication among experts and amateur experts. Humans will still be in the loop as AI systems are created and implemented, assuming we don’t create our own destruction device (which we are perfectly capable of doing through ignorance, fatigue, lack of care, existing unethical practice, etc.). A crisis can help an evolution unfold because of great need(s), but crisis-driven thinking and feeling are not always rational enough to benefit the changes needed.”

The chief technology officer for a technology strategies and solutions company noted, “I’m an optimist. That does not mean I’d be surprised if the hope for an ethical approach is totally misplaced. This isn’t a technical question. It’s about the people charged with research and development. I hope no one has cause to repeat Robert Oppenheimer’s thought after the first atomic bomb exploded.”

The director of a military center for strategy and technology responded, “Most AI will attempt to embed ethical concerns at some level. It is not clear how ‘unbiased’ AI can be created. Perfectly unbiased training datasets don’t exist, and due to human biases being an inherent part of interactions, such a goal may be unobtainable. As such, we may see gender or racial biases in some training datasets which will spill over into operational AI systems, in spite of our efforts to combat this. Quantum will evolve. The timescale is uncertain, but my gut sense is quantum computing will emerge in a significant way in the early to mid 2030s. How much it will assist in creating AI appears to be dependent on the nature of the AI. Quantum computing may help in complex pattern recognition. I’m less convinced it will be of much use in processing strictly mathematical or numeric patterns/relationships.”

A cybersecurity engineer and speaker commented, “In my opinion, AI won’t be politically palatable if it isn’t embedded in some framework that accounts for basic fairness. It is not obvious to me that quantum computing models would actually be well-suited to the sorts of problems that AI or expert systems would be assigned. Compared to, say, neural networks or genetic tools.”

A professor of cognitive science and artificial intelligence based in New Zealand noted, “My worry is worldwide neoliberalism. I have no confidence this economic system will bring about the important global changes we need in any area, whether environmental or economic or technical. I hope the countries of the world will move away from neoliberalism towards liberal democracy – a more egalitarian variety of capitalism, where the state plays a larger role. If practical QC does emerge, it will certainly lead to big improvements in AI technology. I don’t know if it will emerge.”

A professor of economics expert in systems to support employment, productivity and economic security observed, “There is a chance for ethical regulation of AI because prior technology was also eventually regulated. But there are powerful interests at play, so I am not fully confident. Further, there are no ethical principles that everyone can agree on, and widely agreed upon principles for non-discrimination disagree with each other.”

A public policy entrepreneur and expert in information technology and government said, “Absolutely both. The most concerning possibility is that global economic rivalries lead both governments and companies fearing they will be ‘left behind’ if they do not pursue the most aggressive strategies possible. As a result, minimal restrictions via regulation – or even just internal organizational checks – will be imposed.”

A computer engineer observed, “The systems will be built ethically, but the people using the systems may not be trustworthy to use the information collected ethically. Oversight of corporations will be key.”

A head of digital transformation, leadership and research based in Australia commented, “We recognize the ethical issues, and organizations are working on policies, standards and approaches to ensure ethical AI. Whether this happens in the U.S. and elsewhere depends on government leaders who understand the risks and partner with technologists.”

A professor of communication and political science based in the U.S. Midwest said, “It is too soon to see a change, but I do believe we will see some positive steps and some mishaps. It all depends on leadership. We need to choose our leaders with an eye for the future and think beyond our own wallets. Of course humans will be in the loop. Humans design these systems. They design the infrastructure and at the same time are the infrastructure. The question is not: What do we want AI or quantum to be? The question is: What kind of humans do we want to be? How do we want to evolve as a species?”

A professor of computer science based in Montreal responded, “Developed democracies have laws and enact laws to prevent serious harm. Quantum computing is very, very difficult, and little progress has been made in achieving practical quantum computers. AI is also very, very difficult. Most of the recent advances are really due to advances in hardware speed and multiprocessing. Real AI means general intelligence that can learn independently, and we are nowhere near that.”

A professor of economics and public policy at a major U.S. technological university observed, “AI will allow people to be more independent in their daily lives and will open the possibility for improvements in the ways cities organize, especially regarding traffic-related issues. My concern is the abuse of personal information. Human-computer interaction will become more organic. It will be trial and error: when something is not going well, society will sooner or later make changes to improve individual and social outcomes.”

A vice president with a major U.S. technology company wrote, “I would question the phrase ‘public good.’ I do believe AI systems will be governed by ethics, but they will be equal parts new legislation and litigation avoidance. If your AI rejects my insurance claim, I can sue to find out why, and ensure your learning model wasn’t built on, say, racially biased source data.”

An activist working to create new forms of participatory democracy commented, “Internet standards established by a representative Citizens’ Assembly on Technology and AI will establish guidelines. Or… we’re f***ed.”

The executive director of a digital institute commented, “Although I chose that there will be ethical principles by 2030 this doesn’t mean much, as many are already incorporating ethical principles. But if the ethics research isn’t informative enough then it doesn’t mean much to incorporate ethical principles. In other words, proper implementation of this all depends on whether we will have come to a consensus on ethical use of AI.”

A professor of public affairs expert in organizational science said, “I’m worried about AI-enabled manipulation of news, surveillance and communications. I’m worried about the impingement on humanity, e.g., isolation, intensive measurement, assessments and evaluation that is insufficiently nuanced to support rewarding human experience. I’m excited about potential efficiencies.”

An anonymous manager of technology sourcing and logistics operations wrote, “Artificial intelligence will serve as a mechanism to organize information, and presentation of such information; humans will be the ultimate decision-makers, and governing bodies will judge quality over quantity of the AI system. Human knowledge has incrementally and exponentially increased as a result of digital technology; quantum computing will be a ‘must-have’ to continuously organize and understand real-time data; old data will serve as a repository of past patterns, as part of a larger ongoing effort to understand the changing future.”

An anonymous respondent said, “The political winds will, one way or the other, incentivize builders of AI systems and algorithms to work towards the public good.”

An anonymous respondent noted, “AI is a natural progression of human enterprise. Surely there will be hiccups in the short term. Totalitarian states can use it to suppress dissent. On the other hand, I envision businesses that match customers to their personalized needs and educational institutions that can do the same.”

An anonymous respondent commented, “AI is a tool for smarter decisions, minimizing risk and avoiding mistakes that can negatively impact our lives. The ethical part mainly is where it may replace humans or taking decisions rather than empowering people. This needs to be addressed. Such concerns should not stop advances and research in the field of AI but should be addressed accordingly. Quantum computing is a major advance toward making AI available to everyone to benefit from. It will assist us in building AI models and tools based on individual needs and automate processes more effectively.”

An anonymous respondent observed, “There is much potential for AI to be used unethically and/or implemented in such a way (i.e., quick and shoddy) that it winds up being unethical, if not by intent, then in practice. My hope currently is the people/companies working most aggressively with AI are the same ones that understand most fully the implications – and hopefully with the increased dialogue in the past two years around the above issues, those individuals and companies will do primarily, if not fully, the right thing(s) as they roll out their AI implementations. Most promising for AI is healthcare, in detecting and treating diseases of many kinds and extending access to quality healthcare for more people. It’s my understanding that true quantum computing is still more than a decade away, more in the 2040-2050 range.”

A chief marketing officer noted, “AI will be used as all other technologies. It is not a panacea and will not revert the course of humanity. It will merely help us work less and enjoy more – just like other technologies. But privacy remains a big issue. Governments will want to work with data or block the abuses of using data. We will see. EU vs. China vs. U.S. Quantum and AI will be paired; one affects the other. Computational power is what makes AI possible.”

A managing partner at a financial group commented, “AI is capital-driven, so it will benefit the haves more than the have-nots. Therefore, unfortunately, it will also benefit Caucasians and other majority-race groups and further disadvantage minorities. The ethics of AI for the common good will be under tremendous pressure. The key element to ethics is not quantum computing, it’s who the quantum computer is working for. Those who choose to use it for good will do so and be lauded. Those who choose to use it for personal gain at the expense of others will do so.”

A professor of communications based in the United Kingdom commented, “I hope that ethical frameworks for the application of AI can be agreed and in place by 2020 – but I am not optimistic (there being a difference between hope and optimism!). I believe AI has the potential to drive greater equality of opportunity and to improve the quality of lives across society. I fear it will be used simply to drive profits at the expense of employment and civic good. AI systems will evolve beyond the comprehension of most of the public. Therefore, its development and application will rest in the hands of a minority tech elite. We have to hope they behave responsibly. To date there is little evidence they will.”

An anonymous respondent wrote, “I believe that most people in AI will begin with good intentions and a small group will use it for ill. For those that believe that they are doing good, how long will it take to understand good intentions had bad outcomes? I do not believe society as a whole is mature enough to understand, manage and monitor the potential of AI. There will, as always, be the rush to be the first. But once the genie is out, what will happen then? I believe AI will best serve people when monitored by humans. If left by itself, it can and will cause harm to our lives. Quantum computing has the potential to hurl computing and AI into another level, making computers faster and more powerful. The greater the processing power, the greater the potential.”

An anonymous respondent commented, “I am hopeful, but not convinced, that tech companies will pay attention to ethics. AI already has made a difference in people’s lives, but most are unaware of it. It’s under the surface. People are only aware that some things, such as their favorite product being prominently displayed to them or a chatbot answering their exact question accurately, are the result of AI technologies. Global competition over AI systems does not concern me, as many of the companies developing these have a presence in multiple countries. It’s not a question of nationalism so much as a global awareness of ethical use of AI. If we don’t have humans in the loop, even with quantum computing, we will not have ethical AI.”

An anonymous respondent said, “The growing awareness by governmental groups, the educational community and the community at large will lead to major policy changes in the use of AI. Technological innovations will continue, and quantum computing will be one of the benefactors.”

An anonymous respondent said, “AI will move towards more ethical framing. There have been various domestic and international conversations on the topic and those will continue into 2030. Quantum computing is already becoming the next wave of AI as it improves upon autonomous systems. For AI to get bigger, quantum computing will also have to advance.”

An author and global expert on the future of AI and transhumanism wrote, “Ethics is an unhelpful word as it makes people get moralistic.”

The director of a project exploring collective learning observed, “There is good research on the topic and a better understanding of the intersection between ethics and technology. I am particularly looking forward to the MIT Press book ‘How Humans Judge Machines’ by Cesar Hidalgo. There will also be a generational change that will place Gen X on top.”

A former technology administrator for a global superpower commented, “It will be a mixed bag with some AI uses that will be societally and personally beneficial for its users. I strongly believe that there will be misapplications and misuse of AI for everything from ‘deepfakes’ to commercial fraud to misleading information online that is circulated to more folks who are less discriminating in what they see or hear.”

A director and leader of a major internet ecosystem in Southern Asia observed, “We may expect AI ethics to be adopted, hopefully, seeing the present scenario, however expecting all to follow ethical behaviour may be far-fetched. Concerns: Network effect of giants and countries using these systems to abuse human rights.”

A chief scientist and professor emeritus expert in physics responded, “As with all technologies, the creation of monopolies that are profit-driven do not always serve society because of the greed of corporations. It is not the technology that creates problems, but the way they are deployed.”

A director of a major university’s center on society and digital life noted, “This will depend upon regulations and governance. But I am confident that both can be instituted in ways that will restrict and restrain the least-ethical uses of AI. Development of quantum computing will continue and progress will be made. But its actual use and adoption will be slow and gradual.”

A futurist and consultant based in Malaysia noted, “The internet is the equaliser – we will see AI being shared globally and everyone will be able to benefit from AI. AI will continue to depend on humans as it is needed to support human experience – it will never replace humans.”

A longtime ICANN leader based in Australia responded, “If AI is unethical then there will be a rejection of new AI. The competition to provide ethical AI should force that as a normal expectation, though it is reliant on good transparency and accountability systems. It makes sense to me for quantum computing to be focused on assisting in AI development. Whilst some evolution will unfold without too much human intervention I suspect that humans will, in general, resist being out of the loop completely.”

A vice president for regulatory affairs at a major global telecommunications company commented, “Most AI systems will be designed to operate in ethical ways with consideration given to how the AI system impacts people, equity and human dignity. Trouble areas are likely to be how AI impacts work as jobs formerly done by people can now be done more effectively by AI and machine learning systems. Even more concerning is ensuring that AI is used responsibly in military applications. I don’t think quantum computing will have any significant impact on the development of AI systems (ethical or otherwise) over the next decade.”

An analyst for a major branch of the U.S. government noted, “In the fields of medicine and science, AI will help to provide a technical edge.”

A CEO wrote, “Common sense brings balance to thinking and issues. Capitalism rewards good ideas of benefit and drives out bad ones. Quantum has more important uses of greater value, for example in genome research.”

An anonymous participant in internet governance organizations said, “I hope that AI must be used with respect to ethical behaviour certainly due to the fact that the internet (the spine of all these technologies) governance is managing in a multistakeholderism manner. People will remain at the center of this model of governance.”

An anonymous respondent commented, “Both ethical and questionable ways, depending on the government. Hope: AI for the arts sector (performing arts) and for business meetings. Concern: facial recognition for repression. Global competition for AI systems: concerned about military use for cyberattacks and infringement of privacy/civil liberties.”

An anonymous respondent wrote, “It will be key to process data for vaccine development, etc. Ethics will need to be addressed. AI will be the key to everything. Better decisions overall. People can review.”

An expert in developmental psychology observed, “AI helps people now in many ways and will continue to do so, but it has also been shown to be racially biased, and its potential for use in multiple ethically questionable ways, especially for profit, are well known. In the last two years, machine learning and technology to support it have grown by leaps and bounds, and there is no reason to believe that will stop. Again, there is so much potential to profit: AI can and does help identify treatments in medicine, but if the U.S. continues to put profit before people, only the wealthy will benefit.”

A respondent based at a university in Nigeria noted, “AI will be used in questionable ways, but it won’t and cannot replace human intelligence.”

A professor of mathematics and computer science commented, “The AI community now recognizes that AI will automatically incorporate racism and prejudice that it sees in humans. That means AI can be corrected to reject such prejudice. It’s not so much that AI worries me, but that information about people is collected and sold to corporations so they can take advantage of us. It’s going to take more than 10 years for quantum computing to have any applications like that. It’s still in early stages of research and development.”

A writer, educator and editorial adviser observed, “I think the key term you used is ‘most.’ Governments are so concerned about the misuse of AI that they will mostly regulate them into good use. But there will still be rogue actors, some damaging, who will skirt law or simply break it for their own gain.”

The following responses are from participants in this canvassing who chose not to select “Yes” or “No” and simply chose to comment on the potential near-future of ethical AI design

A CEO and founder of an agency helping people use digital tools responded, “Governments need to be more involved in this conversation, yet most of our politicians do not understand the implications.”

An economist who works in government responded, “Ethical principles will be developed and applied in democratic countries by 2030, focusing on the public good, global competition and cyber breaches. Other less-democratic countries will be focused more on cyberbreaches and global competition. Nongovernmental entities such as private companies will presumably concentrate on innovation and other competitive responses. AI will have a considerable impact on people, especially regarding their jobs and also regarding their ability to impact the functions controlled by AI. This control and the impact of cybercrimes will be of great concern, and innovation will intrigue.”

A researcher and consultant in in the fields of economic sociology and stratification noted, “People are trying to use it for good. But AI doesn’t actually work that well in most settings, so there is a lot of potential for unintended consequences. Also, a lot of AI is devoted to issues of minor importance that can be quickly marketized – like coming up with movie recommendations. Self-driving cars would be attractive, but they still seem a long way off (based on how long they’ve been saying it’s ‘just around the corner’).”

If you wish to read the full survey report with analysis, click here:
https://www.elon.edu/u/imagining/surveys/xii-2021/ethical-ai-design-2030/

To read for-credit survey participants’ responses with no analysis, click here:
https://www.elon.edu/u/imagining/surveys/xii-2021/ethical-ai-design-2030/credit/