Elon University

The 2016 Survey: Algorithm impacts by 2026 (Credited Responses)

Credited responses by those who wrote to explain their response

Internet experts and highly engaged netizens participated in answering a five-question canvassing fielded by the Imagining the Internet Center and the Pew Internet Project from July 1 through August 12, 2016. One of the questions asked respondents to share their answer to the following query:

Algorithms will continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. The hope is that algorithms will help people quickly and fairly execute tasks and get the information, products, and services they want. The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts. Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society? Select from 1) Positives outweigh negatives; 2) Negatives outweigh positives; 3) The overall impact will be about 50-50. Please elaborate on the reasons for your answer.

Among the key themes emerging from 1,302 respondents’ answers were:

– Algorithms will continue to spread everywhere.
– The benefits, visible and invisible, can lead to greater insight into the world.
– The many upsides of algorithms are accompanied by challenges.
– Code processes are being refined; ethics and issues are being worked out.
– Data-driven approaches achieved through thoughtful design are a plus.
– Algorithms don’t have to be perfect; they just have to be better than people.
– In the future, the world may be governed by benevolent AI.
– Humanity and human agency are lost when data and predictive modeling become paramount.
– Programming primarily in pursuit of profits and efficiencies is a threat.
– Algorithms manipulate people and outcomes, and even read our minds.
– All of this will lead to a flawed yet inescapable logic-driven society.
– There will be a loss of complex decision-making capabilities and local intelligence.
– Suggested solutions include embedding respect for the individual.
– Algorithms reflect the biases of programmers and datasets.
– Algorithms depend upon data that is often limited, deficient, or incorrect.
– The disadvantaged are likely to be more so.
– Algorithms create filter bubbles and silos shaped by corporate data collectors.
– Algorithms limit people’s exposure to a wider range of ideas and reliable information and eliminate serendipity.
– Unemployment numbers will rise as smarter, more-efficient algorithms take on many work activities.
– There is a need for a redefined global economic system to support humanity.
– Algorithmic literacy is crucial.
– There should be accountability processes, oversight, and transparency.
– There is pessimism about the prospects for policy rules and oversight.

The non-scientific canvassing found that 38% of these particular respondents predicted that the positive impacts of algorithms will outweigh negatives for individuals and society in general, while 37% said negatives will outweigh positives; 25% said the overall impact of algorithms will be about 50-50, positive-negative.

If you wish to read the full survey report with analysis, click here.

To read anonymous survey participants’ responses with no analysis, click here.

Written elaborations by for-credit respondents

Following are the full responses by study participants who chose to take credit for their remarks in the survey, including only those who provided a written elaboration explaining how they see the near future for the impacts of algorithms. Some of these are the longer versions of expert responses that are contained in shorter form in the official survey report. About half of respondents chose to take credit for their elaboration on the question (anonymous responses are published on a separate page).

These responses were collected through an “opt in” invitation to several thousand people who were identified, through research, as being widely quoted technology builders and analysts, or as having made insightful predictions in response to our previous queries about the future of the Internet.

About 38% of the respondents expect positives to outweigh negatives; about 37% anticipate that the expanding deep dive into algorithm-driven digital systems will have mostly negative impacts; and about 25% expect the impacts of algorithms will be roughly an even split between positive and negative outcomes for individuals and society.

Marc Rotenberg, executive director of the Electronic Privacy Information Center, observed, “The core problem with algorithmic-based decision making is the lack of accountability. Machines have literally become black boxes—even the developers and operators do not fully understand how outputs are produced. The problem is further exacerbated by ‘digital scientism’ (my phrase)—an unwavering faith in the reliability of big data. ‘Algorithmic transparency’ should be established as a fundamental requirement for all AI-based decision-making. There is a larger problem with the increase of algorithm-based outcomes beyond the risk of error or discrimination—the increasing opacity of decision-making and the growing lack of human accountability. We need to confront the reality that power and authority are moving from people to machines. That is why #AlgorithmicTransparency is one of the great challenges of our era.”

Michael Dyer, a computer science professor at the University of California-Los Angeles who specializes in artificial intelligence, commented, “The next 10 years is transitional but within the next 20 years AI software will have replaced workers’ jobs at all levels of education. Hopefully, countries will have responded by implementing forms of minimal guaranteed living wages and free education past K-12; otherwise the brightest will use online resources to rapidly surpass average individuals and the wealthiest will use their economic power to gain more political advantages over the average voter.”

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, commented, “The future effects of algorithms in our lives will shift over time as we master new competencies. The rates of adoption and diffusion will be highly uneven, based on natural variables of geographies, the environment, economies, infrastructure, policies, sociologies, psychology, and—most importantly—education. The growth of human benefits of machine intelligence will be most constrained by our collective competencies to design and interact effectively with machines. At an absolute minimum, we need to learn to form effective questions and tasks for machines, how to interpret responses, and how to simply detect and repair a machine mistake.”

Jeff Jarvis, professor at the City University of New York Graduate School of Journalism, observed, “Larry Lessig famously decreed that code is law. Code, like law, is a substantiation of its creator’s rules and prejudices (prejudicing, one hopes, good behavior). An algorithm is nothing but a formula, a process, a procedure. We have long had such systems—often hidden from view—governing what we do. The only issue with the algorithm is that it is new and written in a strange language that makes it opaque. To demonize the algorithm is a form of moral panic. What matters is that we demand first a statement of principles that govern these algorithms (note that Facebook recently issued such a statement regarding the prioritization of its News Feed) and second some faith that those principles are being followed (and the way to judge that is not to audit the algorithm but instead its results).”

Patrick Tucker, author of The Naked Future and technology editor at Defense One, said, “As I write in The Naked Future: In the next two decades, as a function of machine learning and big data, we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference. That will have an impact in all areas including health care, consumer choice, educational opportunities, etc. The rate by which we can extrapolate meaningful patterns from the data of the present is quickening as rapidly as is the spread of the Internet because the two are inexorably linked. The Internet is turning prediction into an equation. From programs that chart potential flu outbreaks to expensive (yet imperfect) ‘quant’ algorithms that anticipate bursts of stock market volatility, computer-aided prediction is everywhere. But on its most basic level this mechanical prediction process is the same as what plays out when the brain makes a guess about what’s going to happen next. These computer systems analyze ‘sensed’ data in the light of stored or remembered data to extrapolate a pattern. What differs between human predictors and machine predictors is the sensing tools on-hand and the process of remembering. Humans are limited to two eyes, two ears, and a network of nerve endings on our tongues, in our noses, and on our skin. Our sense organs communicate with the brain via chemical exchange, a functional but slow process that demands that the sensor be physically connected to the brain in some way. Why be hopeful about this? We can create laws that protect people who volunteer information, such as the Genetic Information Non-discrimination Act (Pub.L. 110–233, 122 Stat. 881), which ensures people aren’t punished for data that they share that then makes its way into an algorithm. The current suite of encryption products available to consumers shows that we have the technical means to allow consumers to fully control their own data and share it according to their wants and needs, and the entire FBI vs. Apple debate shows that there is strong public interest and support in preserving the ability of individuals to create and share data in a way that they can control. The worst possible move we, as a society, can make right now is to demand that technological progress reverse itself. This is futile and shortsighted. A better solution is to familiarize ourselves with how these tools work, understand how they can be used legitimately in the service of public and consumer empowerment, better living, learning, and loving, and also come to understand how these tools can be abused.”

Terry Langendoen, an expert at the National Science Foundation, replied, “My current job is to support research on algorithms for speech and text processing, and from my perspective, the technological improvements in the past 50 years in such areas as speech-recognition and synthesis, machine translation, and information retrieval have had profound beneficial impacts, and the field is poised to make significant advances in the near future.”

Michael Kleeman, senior fellow at the University of California-San Diego, observed, “The answer is really both negative and positive. Underlying all code at its root are human beings with biases that become integrated into the code at some level. The concept that algorithms are neutral is a sad myth (look at the history of ‘standardized’ testing). Overall I hope the positives outweigh the negatives, but there will be both good and bad. And in the hands of those who would use these tools to control, the results can be painful and harmful.”

David Clark, Internet Hall of Fame member and senior research scientist at MIT, replied, “I see the positive outcomes outweighing the negative, but the issue will be that certain people will suffer negative consequences, perhaps very serious, and society will have to decide how to deal with these outcomes. These outcomes will probably differ in character, and in our ability to understand why they happened, and this reality will make some people fearful. But as we see today that people feel that they must use the internet to be a part of society, even if they are fearful of the consequences, people will accept that they must live with the outcomes of these algorithms, even though they are fearful of the risks.”

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at HPE Labs, Hewlett Packard Enterprise, said, “Algorithms do lead to the creation of filters through which people see the world and are informed about it. This will continue to increase. If the negative aspects eventually overtake the positive ones, people will stop resorting to interactions with institutions, media, etc. People’s lives are going to continue to be affected by the collection of data about them, but I can also see a future where they won’t care as much or will be compensated every time their data is used for money-making purposes.”

Ben Shneiderman, professor of computer science at the University of Maryland, wrote, “When well-designed, algorithms amplify human abilities, but they must be comprehensible, predictable, and controllable. This means they must be designed to be transparent so that users can understand the impacts of their use and they must be subject to continuing evaluation so that critics can assess bias and errors. Every system needs a responsible contact person/organization that maintains/updates the algorithm and a social structure so that the community of users can discuss their experiences.”

David Golumbia, associate professor of digital studies at Virginia Commonwealth University, said, “The putative benefits of algorithmic processing are wildly overstated and the harms are drastically under-appreciated. Algorithmic processing in many ways deprives individuals and groups of the ability to know about, and to manage, their lives and responsibilities. Even when aspects of algorithmic control are exposed to individuals, they typically have nowhere near the knowledge necessary to understand what the consequences are of that control. This is already widely evident in the way credit scoring has been used to shape society for decades, most of which have been extremely harmful despite the credit system having some benefit to individuals and families (although the consistent provision of credit beyond what one’s income can bear remains a persistent and destructive problem). We are going full-bore into territory that we should be approaching hesitantly if at all, and to the degree that they are raised, concerns about these developments are typically dismissed out of hand by those with the most to gain from those developments.”

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, responded, “The choices in this question are too limited. The right answer is, ‘If we use machine learning models rigorously, they will make things better; if we use them to paper over injustice with the veneer of machine empiricism, it will be worse.’ Amazon uses machine learning to optimize its sales strategies. When they make a change, they make a prediction about its likely outcome on sales, then they use sales data from that prediction to refine the model. Predictive sentencing scoring contractors to America’s prison system use machine learning to optimize sentencing recommendations. Their model also makes predictions about likely outcomes (on reoffending), but there is no tracking of whether their model makes good predictions, and no refinement. This frees them to make terrible predictions without consequence. This characteristic of unverified, untracked, unrefined models is present in many places: terrorist watchlists; drone-killing profiling models; modern redlining/Jim Crow systems that limit credit; predictive policing algorithms; etc. If we mandate, or establish normative limits on, practices that correct this sleazy conduct, then we can use empiricism to correct for bias and improve the fairness and impartiality of firms and the state (and public/private partnerships). If, on the other hand, the practice continues as is, it terminates with a kind of Kafkaesque nightmare where we do things ‘because the computer says so’ and we call them fair ‘because the computer says so.’”
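
The contrast Doctorow draws is essentially between closed-loop and open-loop models: one tracks its predictions against observed outcomes and refines itself, the other never checks. A minimal sketch of that verification step, using purely hypothetical scores and outcomes, might look like this:

```python
# Sketch of a verify-and-refine check: compare a model's predicted
# probabilities against observed outcomes, binned by score, and flag
# bins where the model is badly miscalibrated. The data and the 0.2
# tolerance are hypothetical illustrations, not any real system.
from collections import defaultdict

def calibration_report(predictions, outcomes, n_bins=5):
    """For each score bin, yield (mean prediction, observed rate, count)."""
    bins = defaultdict(list)
    for p, y in zip(predictions, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    for b in sorted(bins):
        pairs = bins[b]
        mean_pred = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(y for _, y in pairs) / len(pairs)
        yield mean_pred, observed, len(pairs)

# Hypothetical risk scores and what actually happened (1 = event occurred).
preds = [0.9, 0.8, 0.85, 0.2, 0.1, 0.15, 0.5, 0.55]
actual = [1, 0, 0, 0, 0, 0, 1, 0]

for mean_pred, observed, n in calibration_report(preds, actual):
    flag = ("  <- miscalibrated; refine or retire the model"
            if abs(mean_pred - observed) > 0.2 else "")
    print(f"predicted {mean_pred:.2f}  observed {observed:.2f}  n={n}{flag}")
```

An unverified model simply skips this comparison, which is why, as Doctorow notes, it can keep making terrible predictions without consequence.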

Joe McNamee, executive director at European Digital Rights, commented, “The Cambridge/Stanford studies on Facebook likes, the Facebook mood experiment, Facebook’s election turnout experiment, and the analysis of Google’s ability to influence elections have added to the demands for online companies to become more involved in policing online speech. All raise existential questions for democracy, free speech, and, ultimately, society’s ability to evolve. The range of ‘useful’ benefits is broad and interesting but cannot outweigh this potential cost.”

John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots and senior writer at the New York Times, observed, “I am most concerned about the lack of algorithmic transparency. Increasingly we are a society that takes its life direction from the palm of our hands—our smartphones. Guidance on everything from what is the best Korean BBQ to who to pick for a spouse is algorithmically generated. There is little insight, however, into the values and motives of the designers of these systems.”

Axel Bruns, professor at the Digital Media Research Center at Queensland University of Technology, commented, “Algorithms can be highly beneficial when they are well-designed for the task at hand, when their operation is transparent to all stakeholders, and when their impact on communicative and other processes is well-understood. In day-to-day practice, unfortunately, these three conditions are rarely all met. The algorithms that guide communicative processes online—for example, in social media platforms—may be well-designed, but their inner workings are rarely publicly revealed and therefore their impact on how we communicate is largely unknown. Worse, there are competitive, regulatory, and legal disadvantages that would result from greater transparency on behalf of the platform operator, and so there is an incentive only to further obfuscate the presence and operations of algorithmic shaping of communications processes. This is not to say that such algorithms are inherently ‘bad,’ in the sense that they undermine effective communication; algorithms such as Google’s PageRank clearly do the job that is asked of them, for instance, and overall have made the Web more useful than it would be without them. But without further transparency ordinary users must simply trust that the algorithm does what it is meant to do, and does not inappropriately skew the results it delivers. Such algorithms will continue to be embedded deeply into all aspects of human life, and will also generate increasing volumes of data on their fields. This continues to increase the power that such algorithms already have over how reality is structured, measured, and represented, and the potential impact that any inadvertent or deliberate errors could have on user activities, on society’s understanding of itself, and on corporate and government decisions. More fundamentally, the increasing importance of algorithms to such processes also transfers greater importance to the source data they work with, amplifying the negative impacts of data gaps and exclusions.”

Baratunde Thurston, a director’s Fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, replied, “Main positive changes: 1) The excuse of not knowing things will be reduced greatly as information becomes even more connected and complete. 2) Mistakes that result from errors in human judgment, ‘knowledge,’ or reaction time will be greatly reduced. Let’s call this the ‘robots drive better than people’ principle. Today’s drivers will whine, but in 50 years no one will want to drive when they can use that transportation time to experience a reality-indistinguishable immersive virtual environment filled with a bunch of Beyonce bots. 3) Corruption that exists today as a result of human deception will decline significantly—bribes, graft, nepotism. If the algorithms are built well and robustly, the opportunity to insert this inefficiency (e.g., hiring some idiot because he’s your cousin) should go down. 4) In general, we should achieve a much more efficient distribution of resources, including expensive (in dollars or environmental cost) resources like fossil fuels. Basically, algorithmic insight will start to affect the design of our homes, cities, transportation networks, manufacturing levels, waste management processing, and more. There’s a lot of redundancy in a world where every American has a car she never uses. We should become far more energy efficient once we reduce the redundancy of human-drafted processes. But there will be negative changes: 1) There will be an increased speed of interactions and volume of information processed—everything will get faster. None of the efficiency gains brought about by technology has ever led to more leisure or rest or happiness. We will simply shop more, work more, decide more things because our capacity to do all those will have increased. It’s like adding lanes to the highway as a traffic management solution. When you do that, you just encourage more people to drive. The real trick is to not add more car lanes but build a world in which fewer people need or want to drive. 2) There will be algorithmic and data-centric oppression. Given that these systems will be designed by demonstrably imperfect and biased human beings, we are likely to create new and far less visible forms of discrimination and oppression. The makers of these algorithms and the collectors of the data used to test and prime them have nowhere near a comprehensive understanding of culture, values, and diversity. They will forget to test their image recognition on dark skin or their medical diagnostic tools on Asian women or their transport models during major sporting events under heavy fog. We will assume the machines are smarter, but we will realize they are just as dumb as we are but better at hiding it. 3) Entire groups of people will be excluded and they most likely won’t know about the parallel reality they don’t experience. Every area of life will be affected. Every. Single. One.”

Judith Donath of Harvard University’s Berkman Center for Internet and Society replied, “Algorithms are not inherently good or bad: it all depends on how they are implemented. In theory, having more data available to make decisions should lead to better decisions. But data can be incomplete, or wrong, and algorithms can embed false assumptions. The danger in increased reliance on algorithms is that the decision-making process becomes oracular: opaque yet unarguable. The solution is design. The process should not be a black box into which we feed data and out comes an answer, but a transparent process designed not just to produce a result, but to explain how it came up with that result. The systems should be able to produce clear, legible text and graphics that help the users—readers, editors, doctors, patients, loan applicants, voters, etc.—understand how the decision was made. The systems should be interactive, so that people can examine how changing data, assumptions, rules would change outcomes. The algorithm should not be the new authority; the goal should be to help people question authority.”
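
Donath’s prescription—a process that explains how it reached a result and lets people vary the inputs—can be illustrated with a toy sketch. Everything here (the factors, weights, and threshold) is hypothetical, not any real scoring system:

```python
# Toy illustration of an "explain the result" design: the decision is
# returned together with a per-factor breakdown, so the output can be
# read, questioned, and re-run with different inputs. Factors, weights,
# and the threshold are hypothetical.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.3, "years_employed": 0.2}
THRESHOLD = 0.6

def explainable_decision(applicant):
    contributions = {k: w * applicant[k] for k, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        # The explanation ships with the answer instead of staying hidden.
        "explanation": {k: round(v, 2) for k, v in contributions.items()},
    }

applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}
print(explainable_decision(applicant))   # denied, with reasons attached
# Interactive "what if": change one input and watch the outcome move.
applicant["debt_ratio"] = 0.1
print(explainable_decision(applicant))   # now approved
```

The point is not the arithmetic but the interface: the answer arrives with its reasons attached, which is what turns an oracle back into an argument.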

Steven Waldman, founder and CEO of LifePosts, said, “Algorithms, of course, are not values-neutral. If Twitter thrives on retweets, that seems neutral but it actually means that ideas that provoke are more likely to succeed; if Facebook prunes your news feed to show you things you like, that means you’ll be less exposed to challenging opinions or boring content, etc. As they are businesses, most large internet platforms will have to emphasize content that prompts the strongest reaction, whether it’s true or not, healthy or not. I know firms are concerned about this and working to mitigate that tendency but it seems to be a deep part of the DNA of large-scale social platforms.”

Trevor Owens, senior program officer at the Institute of Museum and Library Services, wrote, “All computational methods and approaches are loaded with assumptions. Algorithms are ways of seeing and interpreting the world. Algorithms all have their own ideologies. As computational methods and data science become more and more a part of every aspect of our lives it is essential that work begin to ensure there is a broader literacy about these techniques and that there is an expansive and deep engagement in the ethical issues surrounding them.”

T. Rob Wyatt, an independent network security consultant, commented, “Algorithms are an expression in code of systemic incentives, and human behavior is driven by incentives. Any overt attempt to manipulate behavior through algorithms is perceived as nefarious, hence the secrecy surrounding AdTech and sousveillance marketing. If they told us what they do with our data we would perceive it as evil. The entire business model is built on data subjects being unaware of the degree of manipulation and privacy invasion. So the yardstick against which we measure the algorithms we do know about is their impartiality. The problem is no matter how impartial the algorithm, our reactions to it are biased. We favor pattern recognition and danger avoidance over logical, reasoned analysis. To the extent the algorithms are impartial, competition among creators of algorithms will necessarily favor the actions that result in the strongest human response, i.e., act on our danger-avoidance and cognitive biases. We would, as a society, have to collectively choose to favor rational analysis over limbic instinctive response to obtain a net positive impact of algorithms, and the probability of doing so at the height of a decades-long anti-intellectual movement is slim to none.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “Like virtually all past technologies, algorithms will create value and cut costs, far in excess of any costs. Moreover, as organizations and society get more experience with use of algorithms there will be natural forces toward improvement and limiting any potential problems.”

Scott Amyx, CEO of Amyx+, commented, “Within the field of artificial intelligence, there has been significant progress on cognitive AI as evidenced by Viv, IBM Watson, Amazon Echo, Alexa, Siri, Cortana, and X.ai. Advancement in cognitive AI will usher in a new era of orchestration, coordination, and automation that will enable humans to focus on human value-add activities (creativity, friendship, perseverance, resolve, hope, etc.) while systems and machines will manage task orientation. More exciting, in my opinion, is the qualitative, empathetic AI—AI that understands our deep human thoughts, desires, and drivers and works to support our psychological, emotional, and physical well-being. To that end, we are kicking off a research consortium that will further explore this area of research and development with emphasis on friend AI, empathetic AI, humorous AI, and confidant AI. To enable hyper-personalization, these neural network AI agents would have to be at the individual level. All of us at some point in the future will have our own ambient AI virtual assistant and friend to help navigate and orchestrate life. It will coordinate with other people, other AI agents, devices and systems on our behalf. Naturally, concerns of strong AI emerge for some. There is active research, private and public, targeted at friendly AI. We will never know for sure if the failsafe measures that we institute could be broken by self-will.”

Brian Behlendorf, executive director of the Hyperledger Project at the Linux Foundation, commented, “The net effect will be positive, but only if data scientists, programmers, and systems architects commit to recognizing and countering the effects of different forms of bias that can emerge in models and systems derived from big data.”

Scott Fahlman, computer science and artificial intelligence research professor at Carnegie Mellon University, wrote, “There are all kinds of ‘algorithms’ but I assume that you are referring to some recent stories about machine-learning systems picking up gender and racial biases from the training data sets. They are doing what they are supposed to do: learn which features correlate with which outcomes. We humans do that all the time, but we have learned that certain generalizations, if applied in an unthinking way, have bad social outcomes—e.g., all the doctors I knew as a child were men and all the nurses were women. If we want our systems not to make certain kinds of generalizations, we will have to put those things in deliberately, either by some sort of over-arching rules or by carefully selecting or manipulating the training data sets. That’s possible; in some cases we should do that, and we will.”
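
One concrete form of the “carefully selecting or manipulating the training data sets” that Fahlman mentions is reweighting: adjusting example weights so that a sensitive attribute carries no information about the outcome before a learner ever sees the data. A minimal sketch, with hypothetical data and field names, following the reweighing idea of Kamiran and Calders:

```python
# Minimal sketch of training-data reweighting: give each
# (sensitive value, label) group a weight so the sensitive attribute
# is statistically independent of the label in the weighted data.
# The data and field names are hypothetical.
from collections import Counter

def reweight(examples, sensitive_key, label_key):
    n = len(examples)
    s_counts = Counter(ex[sensitive_key] for ex in examples)
    y_counts = Counter(ex[label_key] for ex in examples)
    sy_counts = Counter((ex[sensitive_key], ex[label_key]) for ex in examples)
    # weight = (count expected under independence) / (count observed)
    return [
        s_counts[ex[sensitive_key]] * y_counts[ex[label_key]] / n
        / sy_counts[(ex[sensitive_key], ex[label_key])]
        for ex in examples
    ]

data = [
    {"gender": "m", "hired": 1}, {"gender": "m", "hired": 1},
    {"gender": "m", "hired": 0}, {"gender": "f", "hired": 1},
    {"gender": "f", "hired": 0}, {"gender": "f", "hired": 0},
]
for ex, w in zip(data, reweight(data, "gender", "hired")):
    print(ex, round(w, 2))  # under-represented (group, label) pairs get w > 1
```

A learner trained on the weighted examples can no longer profit from the correlation between the sensitive attribute and the label, which is one way to put an “over-arching rule” into the data itself.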

Thomas Claburn, editor-at-large at Information Week, commented,  “Our algorithms, like our laws, need to be open to public scrutiny, to ensure fairness and accuracy.”

Mark Lemley, professor of law at Stanford Law School, said, “Algorithms will make life and markets more efficient, and will lead to significant advances in health. But they will also erode a number of implicit safety nets that the lack of information has made possible. The government will need to step in, either to prevent some uses of information or to compensate for the discrimination that results.”

Jim Warren, longtime technology entrepreneur and activist, responded, “Any sequence of instructions for how to do something (or how a machine that can understand said instructions can do it) is—by definition—an ‘algorithm.’ All sides—great and small, benevolent and malevolent—have always created and exercised such algorithms (recipes for accomplishing a desired function), and always will. Almost all of the ‘good’ that humankind has created—as well as all the harm (sometimes only in the eye of the beholder)—has been from discovering how to do something, and then repeating that process. And more often than not, sharing it with others. Like all powerful but double-edged tools, algorithms are. ;-)”

John Howard, creative director at LOOOK, a mixed-reality design and development studio, commented, “Algorithms will primarily impact humans’ ability to understand and make sense of ever-increasing amounts of data. Within decision-making, humans will put increased focus on intuition and leaps in logic (at least until machines get better at that, too).”

K.G. Schneider, a university administrator, said, “We will have much less privacy, but really smart health systems.”

David Morar, a doctoral student and Google policy fellow at George Mason University, wrote, “The most important thing to understand about algorithms is that they are not value-neutral, in 99% of cases. The reason for that is the fact that humans create these algorithms, just like humans create the systems that gather or scrape the data that will be fed to the algorithms. Humans will inherently have biases, both obvious and inconspicuous, both direct and indirect, and both passive and active. Try as we might, as humans our actions mirror, or at least are tinted by, our inherent biases. Not all biases are bad, and not all are conscious. This makes the choices made in the creation of the algorithms all the more important. The solution is not to try to get rid of them, because that is impossible. The solution in fact is to make sure that both the creators and the users, not to mention the potential policy or market actors involved at a higher level, understand beyond a doubt that simply because they are presented with numbers it doesn’t mean that they have been given cold, hard facts and undeniable truths. Starting from that point, algorithms will have a higher likelihood of producing positive change, even in a world where a whole slew of negative, criminal, or immoral actions are much easier to do through the help of algorithms.”

Joe Mandese, editor-in-chief of MediaPost, wrote, “For the average individual it will most definitely be negative in the short run, because algorithms will replace any manual-labor task that can be done better and more efficiently via an algorithm. In the short term, that means individuals whose work is associated with those tasks will either lose their jobs or will need to be retrained. In the long run, it could be a good thing for individuals by doing away with low-value repetitive tasks and motivating them to perform ones that create higher value.”

Bob Garfield, a journalist, commented, “It is impossible to know whether utopian trumps dystopian. So far, benefits outweigh the risks. But the future risks are ominous.”

Glenn Ricart, Internet Hall of Fame member and founder and CTO of US Ignite, commented, “Algorithms, as well as all decisions people make, have both positive and negative impacts. The danger is that algorithms appear as ‘black boxes’ whose authors have already decided upon the balance of positive and negative impacts—or perhaps have not even thought through all the possible negative impacts. This raises the issue of impact without intention. Am I responsible for all the impacts of the algorithm I invoke, or algorithms invoked on my behalf through my choice of services? How can we achieve algorithm transparency, at least at the level needed for responsible invocation? On the positive side, how can we help everyone better understand the algorithms they choose and use? How can we help people personalize the algorithms they choose and use?”

Jesse Drew, a digital media professor at the University of California-Davis, replied, “Certainly algorithms can make life more efficient, but the disadvantage is the weakening of human thought patterns that rely upon serendipity and creativity.”

M.E. Kabay, professor of computer information systems at Norwich University, said, “On the positive side, better algorithms for adapting output to specific users’ needs/values/priorities may indeed be helpful in speeding up and refining information flow appropriate for specific conditions. However, if the algorithms use pooled data and generalized computations of probabilities, the result may be a suppression of results for specific needs. We may be heading for lowest-common-denominator information flows. Another issue is the possibility of increasingly isolated information bubbles or echo chambers. If the algorithms directing news flow suppress contradictory information—information that challenges the assumptions and values of individuals—we may see increasing extremes of separation in worldviews among rapidly diverging subpopulations. A dictatorship like that in Orwell’s 1984 would love to have control over the algorithms selecting information for the public or for subsectors of the public. If information is power, then information control is supreme power. Warning bells should sound when individualized or group information bubbles generated by the selective algorithms diverge from some definition of reality. Supervisory algorithms should monitor assertions or information flows that deviate from observable reality and documentary evidence; the question remains, however, of whose reality will dominate.”

Karen Blackmore, lecturer in information technology at the University of Newcastle, replied, “The use of algorithms to provide automation to decision-making tasks, particularly in consumer settings, is likely to improve outcomes for individuals and add a layer of convenience. However, unregulated use of algorithms—particularly with regard to the information presented to individuals that forms the basis of developing a framework for social engagement—can, and is, leading to a change in public discourse. As those individuals with a broader knowledge framework attempt to engage with those reliant on algorithm-derived news sources, a mismatch in worldviews arises that stifles dialogue and debate.”

Paul Davis, a director, observed, “The age of the algorithm presents the opportunity to automate bias, and render Labour surplus to requirements in the economic contract with Capital. Modern Western society is built on a societal model whereby Capital is exchanged for Labour to provide economic growth. If Labour is no longer part of that exchange, the ramifications will be immense. So whilst the benefits of algorithms and automation are widespread, it is the underlying social impact that needs to be considered. If Labour is replaced, in a post-growth model, perhaps a ‘Living Wage’ replaces the salary, although this would require Capital to change the social engagement contract.”

Alf Rehn, professor and chair of management and organization at Åbo Akademi University, commented, “New algorithmic thinking will be a great boon for many people. They will make life easier, shopping less arduous, banking a breeze and a hundred other great things besides. But a shaved monkey can see the upsides. The important thing is to realize the threats, major and minor, of a world run by algorithms. They can enhance filter bubbles for both individuals and companies, limit our view of the world, create more passive consumers, and create a new kind of segregation—think algorithmic haves and have-nots. In addition, for an old hacker like me, as algorithmic logics get more and more prevalent in more and more places, they also increase the number of attack vectors for people who want to pervert their logic, for profit, for more nefarious purposes, or just for the lulz.”

Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University, noted, “We already have had early indicators of the difficulties with algorithmic decision-making, namely credit scores. Their computation is opaque and they were then used for all kinds of purposes far removed from making loans, such as employment decisions or segmenting customers for different treatment. They leak lots of private information and are disclosed, by intent or negligence, to entities that do not act in the best interest of the consumer. Correcting data is difficult and time-consuming, and thus unlikely to be available to individuals with limited resources. It is unclear how the proposed algorithms address these well-known problems, given that they are often subject to no regulations whatsoever. In many areas, the input variables are either crude (and often proxies for race), such as home zip code, or extremely invasive, such as monitoring driving behavior minute-by-minute. Given the absence of privacy laws, in general, there is every incentive for entities that can observe our behavior, such as advertising brokers, to monetize behavioral information. At minimum, institutions that have broad societal impact would need to disclose the input variables used, how they influence the outcome and be subject to review, not just individual record corrections. An honest, verifiable cost-benefit analysis, measuring improved efficiency or better outcomes against the loss of privacy or inadvertent discrimination, would avoid the ‘trust us, it will be wonderful and it’s AI!’ decision-making.”

Jonathan Grudin, principal researcher at Microsoft, said, “We are finally reaching a state of symbiosis or partnership with technology. The algorithms are not in control; people create and adjust them. However, positive effects for one person can be negative for another, and tracing causes and effects can be difficult, so we will have to continually work to understand and adjust the balance. Ultimately, most key decisions will be political, and I’m optimistic that a general trend toward positive outcomes will prevail, given the tremendous potential upside to technology use. I’m less worried about bad actors prevailing than I am about unintended and unnoticed negative consequences sneaking up on us.”

Stewart Dickinson, digital sculpture pioneer, said, “Algorithms are written by smart people, but they are biased when they are written for the benefit of competitive organizations. Basic Income will reduce beholdenship to corporations and encourage participation in open-source development for social responsibility. When it is no longer possible to profit from class struggle, social division, and war, then algorithms will benefit people. Dismantle the Military-Industrial Complex and destroy the Conservative Cultural Revolution. Will there always be people whose livingry is that of the con artist or despot? I suppose so. I don’t know the cure for this.”

Stephen Downes, researcher at the National Research Council of Canada, commented, “The sort of discrimination, social engineering, and other societal impacts we have today often have a negative impact because they are based on crude stereotypes and result in inappropriate measures. Their impacts are magnified when deployed by social systems causing harm to individuals based on these crude measures. But new algorithms will have profoundly beneficial effects because they will provide a person an accurate picture of themselves, and not a negative self-image reinforced by media messaging and stereotypes, and prevent other individuals from basing their assessments of us on unreliable intuition, incomplete or inaccurate data, or bias and prejudice. The negative expectations that exist—for example, fears of loss of employment, termination of health insurance, discrimination in housing opportunities, unfair denial of credit, media ‘bubbles’ and tunnel-vision, government surveillance and control, etc.—are all reflective of *today’s* reality. They are not properties inherent in the new technologies; they are things that are done to people every day today, and which new technologies will make less and less likely. Some examples: Banks—today banks provide loans based on very incomplete data. It is true that many people who today qualify for loans would not get them in the future. However, many people—and arguably many more people—will be able to obtain loans in the future, as banks turn away from using such factors as race, socio-economic background, postal code, and the like to assess fit. Moreover, with more data (and with a more interactive relationship between bank and client) banks can reduce their risk, thus providing more loans, while at the same time providing a range of services individually directed to actually help a person’s financial state. Health care providers—health care is a significant and growing expense not because people are becoming less healthy (in fact, society-wide, the opposite is true) but because of the significant overhead required to support increasingly complex systems, including prescriptions, insurance, facilities, and more. New technologies will enable health providers to shift a significant percentage of that load to the individual, who will (with the aid of personal support systems) manage their health better, coordinate and manage their own care, and create less of a burden on the system. As the overall cost of health care declines, it becomes increasingly feasible to provide single-payer health insurance for the entire population, which has known beneficial health outcomes and efficiencies. Retailers—Alvin Toffler predicted an era of mass custom production, where a good is not manufactured until it is ordered. We are on the cusp of providing this today, from sourcing of raw materials on a real-time basis through production and delivery via automated vehicles or drones. Additionally, software provides efficiencies in many industrial systems, from energy production to storage, distribution, and use, resulting in a more environmentally friendly economy. Governments—a significant proportion of government is based on regulation and monitoring, which will no longer be required with the deployment of automated production and transportation systems, along with sensor networks.
This includes many of the daily (and often unpleasant) interactions we have with government today, such as traffic offenses, manifestations of civil discontent, unfair treatment in commercial and legal processes, and the like. A simple example: one of the most persistent political problems in the United States is the gerrymandering of political boundaries to benefit incumbents. Electoral divisions created by an algorithm to a large degree eliminate gerrymandering (and when open and debatable, can be modified to improve on that result).”

David Weinberger, senior researcher at the Harvard Berkman Klein Center for Internet & Society, said, “The positive is that algorithmic analysis at scale can turn up relationships that are predictive and helpful even if they are beyond the human capacity to understand them. This is fine where the stakes are low, such as a book recommendation. Where the stakes are high, such as algorithmically filtering a news feed, we need to be far more careful, especially when the incentives for the creators are not aligned with the interests of the individuals or of the broader social goods. In those latter cases, giving more control to the user seems highly advisable.”

David Lankes, professor and director at The University of South Carolina’s School of Library and Information Science, wrote, “There is simply no doubt that, on aggregate, automation and large-scale application of algorithms have had a net-positive effect. People can be more productive, know more about more topics than ever before, identify trends in massive piles of data, and better understand the world around them. That said, unless there is an increased effort to make true information literacy a part of basic education, there will be a class that can use algorithms and a class used by algorithms.”

Timothy C. Mack, managing principal at AAI Foresight, said, “The shift from search engines to decision advisors will be a significant one, but further moves to foresight modeling are less likely to be effective, if only because of increasing systemic complexity. The most troubling discrimination will be the use of these model outputs for institutional decision making, such as educational admissions or hiring. Such structures are likely to run afoul of litigation at some point. The use of attention analysis on algorithm dynamics will be a possible technique to pierce the wall of black box decisions, and great progress is being made in that arena.”

Vinton Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, commented, “The algorithms are mostly intended to steer people to useful information and I see this as a net positive.”

Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, replied, “My answer to this one is ‘negative.’ The worst thing about judging people by algorithms is that people will be pressured to hand over all the personal data that the algorithms would judge by. The data, once accumulated, will be misused in various ways—by the companies that collect them, by rogue employees, by crackers that steal the data from the company’s site, and by the state via National Security Letters. I have heard that people who refuse to be used by Facebook are discriminated against in some ways. Perhaps soon they will be denied entry to the US, for instance. Even if the US doesn’t actually do that, people will fear that it will. Compare this with China’s social obedience score for internet users.”

Peter Levine, professor and associate dean for research at Tisch College of Civic Life, Tufts University, noted, “What concerns me is the ability of governments and big companies to aggregate information and gain insight into individuals that they can use to influence those individuals in ways that are too subtle to be noticed or countered. The threat is to liberty.”

Jamais Cascio, distinguished fellow at the Institute for the Future, observed, “The 50/50 option isn’t exactly what I mean, but comes closer than the other two. The impact of algorithms in the early transition era will be overall negative, as we (humans, human society and economy) attempt to learn how to integrate these technologies. Bias, error, corruption, and more will make the implementation of algorithmic systems brittle, and make exploiting those failures for malice, political power, or lulz comparatively easy. By the time the transition takes hold—probably a good 20 years, maybe a bit less—many of those problems will be overcome, and the ancillary adaptations (e.g., potential rise of universal basic income) will start to have an overall benefit. In other words, shorter term (this decade) negative, longer term (next decade) positive.”

John Sniadowski, a systems architect, noted, “Predictive modelling will make life more convenient, but conversely it will narrow choices and confine individuals into classes of people from which there is no escape. Predictive modelling is unstoppable because international business already sees massive financial advantages by using such techniques. An example of this is insurance, where risk is now being eliminated in search of profits instead of the original concept of insurance being shared risk. People are now becoming uninsurable either because of their geographic location or social position. Premiums are weighted against individuals based on decisions over which the individual has no control, and therefore they cannot improve their situation. The huge problem with oversight mechanisms is that globalisation by the internet removes many geopolitical barriers of control. International companies have the resources to find ways of implementing methods to circumvent controls. The more controls are put in place, the more the probability of unintended consequences and loophole searching, the net result being more complex oversight that becomes unworkable.”

Bob Frankston, internet pioneer and software innovator, said, “The negatives of algorithms will outweigh the positives. There continues to be magical thinking assuming that if humans don’t intervene the ‘right thing’ will happen. Sort of the modern gold bugs that assume using gold as currency prevents humans from intervening. Algorithms are the new gold, and it’s hard to explain why the average ‘good’ is at odds with the individual ‘good.’”

Cindy Cohn, executive director at the Electronic Frontier Foundation, wrote, “This is such a broad question that it’s hard to sum up. Right now I’m focused a lot on the problems with this sort of analysis because the lack of critical thinking among the people embracing these tools is shocking and can lead to some horrible civil liberties outcomes. The algorithms I think you are referencing are only a subset of all of the algorithms in the world, and chiefly those focused on finding patterns in a set of data that they have been given. Algorithms do many other things, of course. There are several problems. First, algorithms are only one of the factors: in predictive policing and so many other applications of algorithmic learning, a key additional problem is that the training data is biased. Data based on police behavior that is biased in a racist way will find ‘patterns’ and make predictions that are similarly racially biased. The algorithms are also limited in scope based on the things that exist in the training data: they are fundamentally backward-looking and cannot foresee new things that may come along—the ascendancy of Trump seems to have been missed by the data-driven analysis of political behavior, for instance, since nothing like it exists in the data that the political analysts use. So big data analysis is good at helping us see patterns in the data, i.e., a history of Google searches is good at picking out patterns of Google searches but not that good at picking out patterns of actual behavior that is not what the data is about, like whether people actually have the flu when they search for flu-related items. I don’t think it’s possible to give an overall ‘good’ or ‘bad’ to the use of algorithms, honestly. As they say on Facebook, ‘it’s complicated.’”

Ian Peter, an internet pioneer and historian based in Australia, observed, “Media becoming more intrusive without accompanying greater emphasis and protections for privacy is likely to have negative results.”

Richard Adler, distinguished fellow at the Institute for the Future, said, “The big question is who will control the algorithms and for what purposes. If they are mainly shaped by the goals of marketers, their impact will be mixed at best, and potentially negative overall.”

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, observed, “Algorithmic methods have a very long way to go before they can deal with the needs of individuals. So we will all be mistreated as more homogenous than we are.”

Peter Eckart, a survey participant who shared no additional identifying details, commented, “We can create algorithms faster than we can understand or evaluate their impact. The expansion of computer-mediated relationships means that we have less interest in the individual impact of these innovations and more in the aggregate outcomes. So we will interpret the negative individual impact as the necessary collateral damage of ‘progress.’”

Marina Gorbis, executive director at the Institute for the Future, replied, “Main positive impacts: Algorithms will enable each one of us to have a multitude of various types of assistants that would do things on our behalf, amplifying our abilities and reach in ways that we’ve never seen before. Imagine, instead of typing search words and getting a list of articles, pushing a button and getting a narrative paper on a specific topic of interest. It’s the equivalent of each one of us having many research and other assistants. Negatives: While algorithms might perpetuate some discriminatory practices simply because they are based on existing data and a lot of such data reflects existing biases, algorithms also have the potential to uncover current biases in hiring, job descriptions, and other text information. Startups like Unitive and Knack show the potential of this.”

Martin Shelton, Knight-Mozilla OpenNews Fellow at The Coral Project + New York Times, wrote, “The task-automating potential of algorithms represents an enormous opportunity. Algorithms that make decisions about simple tasks, as well as repetitive tasks requiring relatively little creativity, can free up our time in routine work. Think about self-driving cars, or software that automates regulation of industrial systems and manufacturing. Simultaneously, these systems are fundamentally controlled and crafted by people. People make mistakes, and people can undermine the design of these systems both deliberately and by accident. It’s also important to remember that people’s values inform the design of algorithms—what data they will use, and how they will use data. Far too often, we see that algorithms reproduce designers’ biases by reducing complex, creative decisions to simple decisions based on heuristics. Those heuristics do not necessarily favor the person who interacts with them. These decisions typically lead software creators not to optimize for qualitative experiences but instead to optimize for click-through rates, page views, time spent on page, or revenue. These design decisions mean that algorithms use (sometimes quite misplaced) heuristics to decide which news articles we might be interested in; people we should connect with; products we should buy.”

Amy Webb, futurist and CEO at the Future Today Institute, observed, “In order to make our machines think, we humans need to help them learn. Along with other pre-programmed training datasets, our personal data is being used to help machines make decisions. However, there are no standard ethical requirements or mandate for diversity, and as a result we’re already starting to see a more dystopian future unfold in the present. There are too many examples to cite, but I’ll list a few: would-be borrowers turned away from banks, individuals with black-identifying names seeing themselves in advertisements for criminal background searches, people being denied insurance and health care. Most of the time, these problems arise from a limited worldview, not because coders are inherently racist. Algorithms have a nasty habit of doing exactly what we tell them to do. Now, what happens when we’ve instructed our machines to learn from us? And to begin making decisions on their own? The only way to address algorithmic discrimination in the future is to invest in the present. The overwhelming majority of coders are white and male. Corporations must do more than publish transparency reports about their staff—they must actively invest in women and people of color, who will soon be the next generation of workers. And when the day comes, they must choose new hires both for their skills and their worldview. Universities must redouble their efforts not only to recruit a diverse body of students—administrators and faculty must support them through to graduation. And not just students. Universities must diversify their faculties, to ensure that students see themselves reflected in their teachers.”

Lilly Irani, assistant professor at UC San Diego, wrote, “When we talk about algorithms, we sometimes are actually talking about bureaucratic reason embedded in code. The embedding in code, however, powerfully takes the execution of bureaucracy out of specific people’s hands and into a centralized controller—what Aneesh Aneesh has called algocracy. A second issue is that these algorithms produce emergent, probabilistic results that are inappropriate in some domains where we expect accountable decisions, such as jurisprudence. While algorithms have many benefits, their tendency toward centralization needs to be countered with policy.”

Giacomo Mazzone wrote, “Unfortunately, most algorithms produced in the next 10 years will come from global companies looking for immediate profits. This will kill local intelligence, local skills, minority languages, and local entrepreneurship, because most of the available resources will be drained away by the global competitors. By the time a ‘minister for algorithms toward a better living’ is created, it is likely to be too late, unless new forms of a socially shared economy emerge, working on ‘algorithms for happiness.’ But this is likely to take longer than 10 years.”

Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland, commented, “Algorithms are increasingly important because businesses rarely thought of as high tech have learned the lessons of the internet giants’ successes. Following the advice of Jeff Jarvis’s What Would Google Do, they are collecting data from both workers and customers, using algorithmic tools to make decisions, to sort the desirable from the disposable. Companies may be parsing your voice and credit record when you call them, to determine whether you match up to ‘ideal customer’ status, or are simply ‘waste’ who can be treated with disdain. Epagogix advises movie studios on what scripts to buy based on how closely they match past, successful scripts. Even winemakers make algorithmic judgments, based on statistical analyses of the weather and other characteristics of good and bad vintage years. For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit, and education, they deserve more scrutiny. US hospitals are using big data-driven systems to determine which patients are high-risk—and data far outside traditional health records is informing those determinations. IBM now uses algorithmic assessment tools to sort employees worldwide on criteria of cost-effectiveness, but spares top managers the same invasive surveillance and ranking. In government, too, algorithmic assessments of dangerousness can lead to longer sentences for convicts, or no-fly lists for travelers. Credit-scoring drives billions of dollars in lending, but the scorers’ methods remain opaque. The average borrower could lose tens of thousands of dollars over a lifetime, thanks to wrong or unfairly processed data. It took a combination of computational, legal, and social scientific skills to unearth each of the examples discussed above—troubling collection, bad or biased analysis, and discriminatory use. Collaboration among experts in different fields is likely to yield even more important work. Grounded in well-established empirical social science methods, their models can and should inform the regulation of firms and governments using algorithms. Empiricists may be frustrated by the ‘black box’ nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via freedom of information and fair data practices). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis, and use—and to push regulators to crack down on the worst offenders. Researchers are going beyond the analysis of extant data, and joining coalitions of watchdogs, archivists, open data activists, and public interest attorneys, to assure a more balanced set of ‘raw materials’ for analysis, synthesis, and critique. Social scientists and others must commit to the vital, long-term project of assuring that algorithms are producing fair and relevant documentation; otherwise states, banks, insurance companies, and other big, powerful actors will make and own more and more inaccessible data about society and people. Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists, and others. 
It’s an urgent, global cause with committed and mobilized experts looking for support. A conference I helped organize (http://isp.yale.edu/node/6055) gathered some of the thought leaders in the field.”

Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, observed, “I find the question ill-posed as algorithms are embedded in devices, applications, services, etc. Algorithms, per se, change little—it’s how they are used that matters. Some apps are beneficial, some are less so. Some use data to help people, some are stunts to collect data with little value. Overall, as with technology throughout the ages, it will be a mix.”

Jerry Feldman, a respondent who did not share other identifying background, commented, “This is a badly posed question. Algorithms don’t do anything—it is systems, including their social aspects, that do things. Both the hype and the fear of ‘machine learning’ are irrational.”

Eugene H. Spafford, a professor at Purdue University, wrote, “Algorithmic decisions can embody bias and lack of adjustment. The result could be the institutionalization of biased and damaging decisions with the excuse of, ‘The computer made the decision, so we have to accept it.’ If algorithms embody good choices and are based on carefully vetted data, the results could be beneficial. To do that requires time and expense—will the public/customers demand that?”

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied, “Algorithms will capitalize on convenience and profit, thereby discriminating against certain populations but also eroding the experience of everyone else. The goal of algorithms is to fit *some* of our preferences, but not necessarily *all* of them: they essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for *self-actualization,* it will simply be too convenient for people to follow the advice of an algorithm (or too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items.”
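
Knijnenburg’s self-fulfilling-prophecy worry can be simulated in miniature. The starting numbers below are invented and the update rule is deliberately naive, but the sketch shows how a recommender that always serves the current top preference, then treats the resulting consumption as fresh evidence, collapses a mixed taste profile into a caricature:

    # Invented starting tastes for one listener.
    prefs = {"jazz": 0.40, "rock": 0.35, "classical": 0.25}

    for step in range(5):
        pick = max(prefs, key=prefs.get)   # always recommend the current favorite
        prefs[pick] += 0.1                 # consumption reinforces the estimate
        total = sum(prefs.values())
        prefs = {g: v / total for g, v in prefs.items()}  # renormalize
        print(step, {g: round(v, 2) for g, v in prefs.items()})
    # The top genre's share grows every round; the others starve.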

Dana Klisanin, founder and CEO of Evolutionary Guidance Media R&D Inc, commented, “If we want to weight the overall impact of the use of algorithms on individuals and society toward ‘positive outweighs negative,’ the major corporations will need to hold themselves accountable through increasing their corporate social responsibility. Rather than revenue being the only return, they need to hire philosophers, ethicists, and psychologists to help them create algorithms that provide returns that benefit individuals, society, and the planet. Most individuals have never taken a course in ‘Race, Class, and Gender,’ and do not recognize discrimination even when it is rampant and visible. The hidden nature of algorithms means that it will take individuals and society that much longer to demand transparency. Or, to say it another way: we don’t know what we don’t know.”

Louisa Heinrich, founder of Superhuman Limited, observed, “Software is becoming an unintentional normative force in society. Because we write algorithms with goals in mind that are usually focussed on the mainstream masses, those same algorithms routinely exclude outliers. The pressure and pace of the technological business environment does not encourage the kind of big-picture lateral thinking that might address this—too often, the MVP is never ‘finished.’ We can already see the effects of this exclusion in action through the bias of our social news feeds—much has been written about this since the Brexit vote, and there is no question that by setting filtering and curation to give us more of the things we like, social media companies are fragmenting the global online community into factions who are rarely even exposed to one another. We will see negative impact in other critical areas as well. Whether or not a person is ‘healthy’ will be algorithmically determined by the degree to which they conform to a model of what healthy looks like, but whose? The medical community’s understanding of the relationship between lifestyle and health evolves from year to year, and sometimes the things we’ve been told for years are bad for us turn out to be good for at least some of us, after all. But despite the possible flaws in any current model of a healthy lifestyle, those not deemed ‘healthy’ may be excluded from public health services, pay high insurance premiums, even be socially shunned—even though they may in fact be perfectly well. We should be extremely wary of automating models around things that we do not fully understand. Rather, we would do well to use technology as a tool for testing and better understanding these models—as assistive rather than authoritative. There is no need to remove humans from the equation; when artificial and human intelligences work together, we can do amazing things.”

Christopher Mondini, a leader with a major Internet governance organization, wrote, “Algorithms are tools and, like all tools, can be used for constructive or destructive purposes. I believe their risks are counterbalanced by the opportunities they enable. The intentions, behaviors, and motivations of the commercial and political actors deploying algorithms are responsible for the social and economic risks outlined in this question. If the nefarious practices of a few people become mainstream and are copied by others, it is not the technology that is to blame.”

Cornelius Puschmann, Hans-Bredow-Institute for Media Research, Hamburg, said, “It is very difficult to judge whether the overall impact will be positive or negative, partly because as we grow accustomed to a technology we find it difficult to imagine life without it. Very few people seriously contemplate a world without the combustion engine or the PC, although both clearly have negative effects as well as positive ones. Regarding discrimination: while this is a major (!) challenge, it is hardly the case that a world without algorithms is free from discrimination. Algorithms mostly draw our attention to existing as well as new forms of discrimination.”

D. Yvette Wohn, assistant professor at the New Jersey Institute of Technology, noted, “We should not talk about algorithms as if some profound machine is making decisions. Humans decide what algorithms should be. With algorithms becoming a prevalent technology, the question is not if algorithms are good or bad, but if the people making the algorithms and the people being affected by the algorithms are all involved in what those algorithms will look like. Computer scientists will be forced to think about the social and ethical consequences of algorithms and the public will slowly but surely understand what algorithms mean. A growing coalition of scholars, politicians, civilians, and industry who care about the societal impact of algorithms will evolve into a large international body that undertakes educational activities, political lobbying, and research to help increase awareness and constantly examine and evaluate these issues.”

Frank Elavsky, data and policy analyst at Acumen, LLC, said, “Positive changes? Greatly enhanced consumerism. The products and services that will be available to the consumer will only continue to improve in convenience. Negative changes? Identity security. Privacy. Identity formation—people will become more and more shaped by consumption and desire. Excessive consumerism. Loss of basic skills—convenience will slowly replace skillsets such as cooking your own food, replacing a button on your shirt, changing your oil, or childcare services. Racial exclusion in consumer targeting. Gendered exclusion in consumer targeting. Class exclusion in consumer targeting—see Google’s campaign to educate many in Kansas on the need for a fiberoptic infrastructure. Nationalistic exclusion in consumer targeting. Monopoly of choice—large companies may begin to control the algorithms or results that people see. Monopoly of reliable news—already a problem on the internet, but consumer bias will only get worse as algorithms are created to confirm your patterns of interest.”

Dan Ryan, professor of sociology at Mills College in Oakland, CA, wrote, “‘Algorithms’ is just the latest buzzword from tech that social scientists don’t quite understand. They really are more akin to rules than folks recognize. What’s different is the way the rules are embedded in systems—both the opacity and the proliferation this yields. So our focus might be less on algorithms as a new phenomenon and rather on the impacts associated with this style of rules. The worry that algorithms might introduce subtle biases strikes me as much social science ado about very little. No more true than the ways that architecture, cartography, language, organizational rules, credentialing systems, etc., produce these effects.”

Kevin Novak, CEO of 2040 Digital, commented, “Algorithms can lead to filtered results that present biased or limited information to users. This bias and limitation can lead to opinions or understandings that do not reflect the true nature of a topic, issue, or event. Users should have the option to select algorithmic results or natural results.”

Robert Boatright, professor of political science at Clark University, observed, “I’m hardly the first person to say this, but the main problem is that we don’t encounter information that conflicts with our prior beliefs or habits, and we’re rarely prompted to confront radically new information or content—whether in news, music, purchasing, or any of the other sorts of opportunities that we are provided.”

Adam Gismondi, a visiting scholar at Boston College, wrote, “With the convenience of algorithms comes the danger that some things, like news, will be overly custom-tailored to each reader. I am fearful that as users are quarantined into distinct ideological areas, human capacity for empathy may suffer. Brushing up against contrasting viewpoints challenges us, and if we are able to (actively or passively) avoid others with different perspectives, it will negatively impact our society. It will be telling to see what features our major social media companies add in coming years, as they will have tremendous power over the structure of information flow.”

Dave Robertson, professor of political science at University of Missouri-St. Louis, said, “I see increased algorithm power, but also increased complacency about algorithms and increased numbers and seriousness of errors.”

Irina Shklovski, associate professor at the IT University of Copenhagen, observed, “The outcomes here will increasingly depend on which types of algorithms are deployed where and how, what kinds of legislation are in place to govern this shaping and decision-making, and how data are managed and governed. The answer will also increasingly depend on what aspects of life are documented, and how, and what kinds of audit systems are in place. There is nothing ‘inadvertent’ about discrimination, and social engineering is not in itself necessarily a negative thing (social engineering arguably is what laws and policies do already). Discrimination in algorithms comes from implicit biases and unreflective values embedded in implementations of algorithms for data processing and decision-making. There are many possibilities for data-driven task and information-retrieval support, but the expectation that automatic processing will somehow necessarily be more ‘fair’ assumes that implicit biases and values are not part of system design (and these always are). Thus the question is how much agency humans will retain in the systems that will come to define them through data, and how this agency can be actionably implemented to support human rights and values.”

Sam Anderson, coordinator of instructional design at the University of Massachusetts, Amherst, noted, “Algorithms will be human-designed and interpreted by humans. They will be another tool that humans use and misuse, intentionally or not. The real danger is if the prevailing notion that algorithms will obviate our need for interpretation takes hold.”

Tim Norton, chair of Digital Rights Watch, said, “There are positives and negatives to this area. The solution is to ensure that algorithms are anonymised so individuals can receive the benefits of increased smart automation without the dangers of profiling.”

Dmitry Strakovsky, professor of art at the University of Kentucky, wrote, “We will benefit from the speed of information filtration that algorithmic filtering affords us but yes, some of the sorting will create very specific social blindness cases (by class, educational level, gender, etc.). Cool businesses will start throwing in ‘Surprise Me’ buttons to intentionally reshuffle decks. Most will simply create algorithms that promote their specific business goals.”

Tom Vest commented, “Algorithms will have the same general effect on overall wellbeing as most other technological advances have had over the past few decades: they will most benefit the minority of individuals who are consistently ‘preferred’ by algorithms, plus those who are sufficiently technically savvy to understand and manipulate them (usually the same group).”

Ida Brandão, an educator, noted, “I feel a bit divided here, because I like a fast and efficient Internet and algorithms are important. On the other hand I don’t like the idea of the ‘machine’ taking over my privacy and how some powers may misuse/abuse personal information.”

Marti Hearst, a professor at the University of California-Berkeley, said, “For decades computer algorithms have been automating systems in a more-or-less mechanical way for our benefit. For example, a bank customer could set up automated payment for their phone bill. The change we are seeing more recently is that the algorithms are getting increasingly more sophisticated, going from what we might call ‘cut and dried’ decisions like ‘pay the balance of my phone bill’ to much more complex computations resulting in decisions such as ‘show products based on my prior behavior’ or, eventually and menacingly, ‘shut off access to my bank account because of my political posts on social media.’ Every one of these advances is two-sided in terms of potential costs and benefits. The benefits can be truly amazing: automated spoken driving directions that take into account traffic congestion and re-route in real time are stunning—the stuff of science fiction in our own lifetimes. On the other hand, quiet side streets known only to the locals suddenly become full of off-route vehicles from out of town. These new algorithms are successful only because they have access to data about the activity of large numbers of individual people. And the more reliant we become on them, the fewer options anyone has to go ‘off the grid.’ The rush towards ‘big data’ has not built in adequate protections for individuals and society against potential abuses of this reliance. I think the bias issue will be worked out relatively quickly, but the excessive reliance on monitoring of every aspect of life appears unavoidable and irreversible.”

Valerie Bock, VCB Consulting, commented, “Nicholas Negroponte predicted the advent of ‘The Daily Me’ years ago, and it has definitely come to pass: it is now more possible than ever before to curate one’s information sources so that they include only those which strike one as pleasurable. That’s a real danger, whose impact we’re seeing in this time of Brexit and the 2016 US election season. Our society is as polarized as it has ever been. And yet algorithms that learn—from good data—what the most likely diagnosis is, given symptoms and physical findings, promise to take us past our individual physician’s experience into what the world knows about what this stuff can mean. We are going to need to be disciplined about not surrendering to what the robots think we would like to see. I have to regularly re-inform my Facebook feed that I want to see ‘latest news’ and not ‘top stories.’ I’m pretty much appalled that some folks depend on Facebook for their ‘news’ feed. I use Feedly for news, because it allows me to program it with the various sources I need to consult to get a variety of voices into my head. I worry that because it will become a hassle to see stuff we don’t ‘like,’ gradually fewer and fewer people will see that which challenges them. All the same, I’ll be glad when the algorithms that are trying to figure out who I am learn that I am located a good 30 miles from the town where the servers that give me my IP address are, and that the romance novel I ordered was not an indication of interest in a new genre but rather the result of an interest in the locality where the action takes place and in an author who is steeped in the history of that location. And to the extent that the data, looked at by creative individuals with quirky backgrounds, yields new theories of disease causation and wellness strategies, I think we will all enjoy a net win.”

Daniel Berleant, author of The Human Race to the Future, noted, “Algorithms are less subject to hidden agendas than human advisors and managers. Hence the output of these algorithms will be more socially and economically efficient, in the sense that they will be better aligned with their intended goals. Humans are a lot more suspect in their advice and decisions than computers are.”

Jon Lebkowsky, CEO of Polycot Associates, wrote, “I’m personally committed to agile process, through which code is iteratively improved based on practice and feedback. Algorithms can evolve through agile process. So while there may be negative effects from some of the high-impact algorithms we develop, my hope and expectation is that those algorithms will be refined to diminish the negative and enhance the positive impact.”

Marcel Bullinga, trend watcher and keynote speaker of @FutureWatch, commented, “Robots (a symbol for algorithms/AI) will enable us to live in a DIY world where individuals are more powerful—both as consumers and as producers—than ever before. AI will conquer the world, like the internet and the mobile phone once did. It will end the era of apps. Millions of useless apps (because there are way too many for any individual) will become useful on a personal level if they are integrated and handled by AI. For healthy robots/AI, we must have transparent, open source AI. The era of closed is over. If we stick to closed AI, we will see the rise of more and more tech monopolies dominating our world as Facebook and Google and Uber do now. The long-term promise of blockchain is the disappearance of intermediaries and platforms and the 19th-century ways of shareholding they entail. We need to get rid of organizations like Uber and switch to decentralized challengers such as Arcade City. They empower the workers—not the investors.”

Scott McLeod, associate professor of educational leadership at University of Colorado, Denver, noted, “Overall, algorithms are going to reshape almost every aspect of how we live, work, play, and think. While there are dangers in regard to who creates and controls the algorithms, eventually we will evolve mechanisms to give consumers greater control that should result in greater understanding and trust. Right now, however, the technologies are far outpacing our individual and societal abilities to make sense of what’s happening and corporate and government entities are taking advantage of these conceptual and control gaps. The pushback will be inevitable but necessary and will, in the long run, result in balances that are more beneficial for all of us.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “The impact of algorithms will be largely positive—but that does not mean we must not remain alert to the negatives and work to limit the damage. This is, unfortunately, something we as Americans are not particularly good at: acknowledging that every policy creates collateral damage and including in our policies a means to limit that damage. Transparency is the great challenge. As these things exert more and more influence, we want to know how they work, what choices are being made, and who is responsible. The irony is that as the algorithms become more complex, the creators of them increasingly do not know what is going on inside the black box. How, then, can they improve transparency?”

Doc Searls, journalist, speaker, and director of Project VRM at Harvard University’s Berkman Klein Center for Internet and Society, wrote, “Algorithms discriminate. That’s what they do. Obviously, this has many positive results. Judgment calls by algorithms can take many more factors into account than can an individual human mind, and they can be more objective as well. Algorithms can also fail in completely new ways, all of which will also involve discrimination. An algorithm also isn’t intuitive, though it can emulate intuition—just as artificial intelligence isn’t really intelligent in the human sense, though it can do what we call intelligent things. It’s essential to recognize the differences. The biggest issue with algorithms today is the black-box nature of some of the largest and most consequential ones. An example is the one used by Dun & Bradstreet to decide creditworthiness. The methods behind the decisions it makes are completely opaque, not only to those whose credit is judged, but to most of the people running the algorithm as well. Only the programmers are in a position to know for sure what the algorithm does, and even they might not be clear about what’s going on. In some cases there is no way to tell exactly why or how a decision by an algorithm is reached. And even if the responsible parties do know exactly how the algorithm works, they will call it a trade secret and keep it hidden. There is already pushback against the opacity of algorithms and the sometimes vast systems behind them. Many lawmakers and regulators also want to see, for example, Google’s and Facebook’s vast server farms more deeply known and understood. These things have the size, scale, and in some ways the importance of nuclear power plants and oil refineries, yet enjoy almost no regulatory oversight. This will change. At the same time, so will the size of the entities using algorithms. They will get smaller and more numerous, as more responsibility over individual lives moves away from faceless systems more interested in surveillance and advertising than actual service. One reason this will happen is that advertising as it is done today online will collapse. The level of pushback against it by individuals is gargantuan. According to a May 2016 report by PageFair, 419+ million people worldwide block ads on their mobile devices. The company’s August 2015 report said the number of people blocking ads online by May of that year had passed 200 million worldwide, with high rates of increase as well. This is the biggest boycott in human history. When it’s over, the current system, which Shoshana Zuboff calls surveillance capitalism, will fail. In its place will be much more efficient and respectful ways for demand and supply to connect.”

Robert Matney, COO at Polycot Associates, replied, “Algorithms can be measured against success criteria more reliably than humans can. There will be misfires, but they will be corrected.”

Sunil Paul, entrepreneur, investor, and activist at Spring Ventures, said, “This is a nonsense question. Algorithms are just code and are co-developed with society and individuals. You might as well ask if the legal code will have a negative or positive impact on society and individuals (See Lawrence Lessig’s book Code for elaboration on these ideas). Over time, we’ll see algorithms/code as a reflection of society and individuals. Evil society, evil algorithm (Exhibit A: The Great Firewall of China). Good society, good algorithm (Exhibit B: unmonitored email in the US).”

Stowe Boyd, chief researcher at Gigaom, said, “Algorithms and AI will have an enormous impact on the conduct of business as soon as companies wake up to the benefits of having many decisions made by algorithm and AI, instead of by cognitively biased human beings. HR is one enormous area that will be revamped top to bottom by this revolution. Starting at a more fundamental level, education will be recast and AI will be taking a lead role. We will rely on AI to oversee other AIs.”

Galen Hunt, partner research manager at Microsoft Research NExT, wrote, “Algorithms will accelerate in their impact on society. If we guard the core values of civil society (like equality, respect, transparency), the most valuable algorithms will be those that help the greatest numbers of people.”

Jon Hudson, a futurist and principal engineer, wrote, “A positive future requires transparency. We have to understand how the algorithms work and what they do. People only do negative things when they think no one is watching. Everyone is going to have to step up. However, we are already starting to see this happen with open source. Transparency is everything.”

Vance S. Martin, instructional designer at Parkland College, said, “Algorithms save me time when my phone gets a sense for what I will be typing and offers suggestions, or when Amazon or Netflix recommends something based on my history. However, they also close options for me when Google or Facebook determines that I read or watch a certain type of material and then offers me content exclusively from that point of view. This narrows my field of view, my exposure to other points of view. Using history to predict the future can be useful, but it overlooks past reasons, rationales, and biases. For example, in the past the US based immigration quotas on the historical numbers of people who had come before: if in the early 1800s there were large numbers of Scottish immigrants and few Italian immigrants, more Scots and fewer Italians would be allowed in. A historical pattern thus leads to future exclusionary policies. Likewise, if an algorithm determines that I am male, white, middle-class, and educated, I will get different results and opportunities than a lower-class African-American woman would. So ease of life will be increased and time saved, but social inequalities will presumably become reified.”

Megan Browndorf, on the staff at Towson University, wrote, “Algorithms run the danger of fitting people into boxes instead of defining boxes by people: making people into what the algorithm needs them to be. This is incredibly dangerous. However, it is not that different from what the Modernist experiment of the 20th century did to the American worker; it is just a continuation of a trend. It is not a trend that I support, and I think it is making life, in many ways, difficult for those who are not involved in the creation of these algorithms. Safiya Noble has some wonderful work on the discriminatory power of algorithms; I highly suggest it. However, there is a definite possibility for algorithms to increase access to and use of knowledge, to change what is possible, to make our work faster, and to make living better for us. They are a tool. Like all tools, they can be used for evil and for good, depending on the user.”

Tse-Sung Wu, project portfolio manager at Genentech, commented, “This is very similar to the question of self-driving cars, in which an AI system will be making decisions and taking actions in the task of driving a person or cargo from point A to point B. Your question expands to more sophisticated decision-making, where the stakes are higher and the tradeoffs far less clear. Perhaps the biggest peril is the dissolution of accountability, unless we change our laws. Who will be held to account when these decisions are wrong? Right now, it’s a person—the driver—or, in the case of professional services, someone with professional education and/or certification (a doctor making a diagnosis and coming up with a treatment plan; a judge making a ruling; a manager deciding how to allocate resources, etc.). In each of these, there is a person who is the ultimate decision-maker and, at least at a moral level, the person who is accountable (whether they are held to account is a different question). Liability insurance exists in order to manage the risk of poor decision-making by these individuals. How will our legal system of torts deal with technologies that make decisions: will the creator of the algorithm bear ultimate accountability for the tool? Its owner? Who else? The algorithm will be limited by the assumptions, worldview/mental model, and biases of its creator. Will it be easier to tease these out? Will it be harder to hide biases? Perhaps, and that would be a good thing. In the end, while technology steadily improves, society will once again need to catch up. We live in a civilization of tools, but the one thing these tools don’t yet do is make important decisions. The legal concepts around product liability closely define the accountabilities for failure or loss caused by our tools and consumable products. However, once tools enter the realm of decision-making, we will need to update our societal norms (and thus laws) accordingly. Until we come to a societal consensus, we may inhibit the deployment of these new technologies, and suffer from them inadvertently.”

Alexander Halavais, director, MA in social technologies at Arizona State University, observed, “The positive and negative impact of algorithmic approaches to social distribution will depend almost entirely on where you sit. For the society as a whole, algorithmic systems are likely to reinforce (and potentially calcify) existing structures of control. While there will be certain sectors of society that will continue to be able to exploit the move toward algorithmic control, it is more likely that such algorithms will continue to inscribe the existing social structure on the future. What that means for American society is that the structures that make Horatio Alger’s stories so unlikely will make them even less so. Those structures will be ‘naturalized’ as just part of the way in which things work. Avoiding that outcome requires a revolutionary sort of educational effort that is extraordinarily difficult to achieve in today’s America. An education that doesn’t just teach kids to ‘code,’ but to think critically about how social and technological structures shape social change and opportunity.”

Sandi Evans, an assistant professor at California State Polytechnic University, Pomona, said, “We need to ask: How do we evaluate, understand, regulate, improve, make ethical, make fair, build transparency into, etc., algorithms?”

Christine (Malina) Maxwell, entrepreneur and program manager of learning technologies at the University of Texas-Dallas, wrote, “Algorithms help make our lives far more efficient today. For instance, they make our online choices about such things as finding hotels and travel-related bargains, shopping, etc., much quicker and easier. They also help protect users from online fraud, for example. Few members of the public realize that there is no such thing as an unbiased search engine—and therein lies the real danger. Recognizing bias online is becoming harder and harder, not easier and easier.”

Dariusz Jemielniak, professor of management at Kozminski University and Wikimedia Foundation trustee, observed, “There are no incentives in capitalism to fight filter bubbles, profiling, and the negative effects, and governmental/international governance is virtually powerless.”

Adam Nelson, CTO of Factr, said, “As Plato asked: how can we know what is ‘good’? Algorithms will definitely improve outcomes for individual people (health, transport, productivity), but it’s hard to judge how good it all is.”

Ian O’Byrne, co-founder of BadgeChain, replied, “Algorithms and considerations of ethics and trust in code have become more of a talking point over the past year. We’ve seen this happen with the advent of autonomous vehicles and blockchain technologies. These technologies will undergo a learning process as users adapt to them and they (hopefully) adapt to users. The challenge is that in many of the decisions these algorithms make, human lives hang in the balance. With each negative and positive that we endure, it is my belief that the algorithms will become ‘smarter’ and eliminate some of the error in the model.”

Jenny Korn, race and media scholar at the University of Illinois-Chicago, noted, “The discussion of algorithms should be tied to the programmers programming those algorithms. Algorithms reflect human creations of normative values around race, gender, and other areas related to social justice. For example, searching for images of ‘professor’ will produce pictures of white males (including in cartoon format), but to find representations of women or people of color, the search algorithm requires the user to include ‘woman professor’ or ‘Latina professor,’ which reinforces the belief that a ‘real’ professor is white and male. Problematic! So, we should discuss the (lack of) critical race and feminist training of the people behind the algorithm, not just the people using the algorithm.”

Daniel Pimienta, head of the Networks and Development Foundation (FUNREDES), Brazil, said, “Until an obligation of transparency and openness is placed on big-data algorithms, the outcome will clearly be negative. The turnaround could come when enough internet users become educated (which brings us back to the importance of MIL, media and information literacy).”

danah boyd, founder of Data & Society, commented, “An algorithm means nothing by itself. What’s at stake is how a ‘model’ is created and used. A model is composed of a set of data (e.g., training data in a machine learning system) alongside an algorithm. The algorithm is nothing without the data. But the model is also nothing without the use case. The same technology can be used to empower people (e.g., identify people at risk) or to harm them. It all depends on who is using the information to what ends (e.g., social services vs. police). Because of unhealthy power dynamics in our society, I sadly suspect that the outcomes will be far more problematic—mechanisms to limit people’s opportunities, segment and segregate people into unequal buckets, and leverage surveillance to force people into more oppressive situations. But it doesn’t have to be that way. What’s at stake has little to do with the technology; it has everything to do with the organizational, societal, and political climate we’ve constructed.”
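
boyd’s point that “the algorithm is nothing without the data” can be shown with one learning rule and two training sets. The histories and the 0-to-1 scores below are hypothetical, but the point survives: an identical algorithm, fed different pasts, produces models that decide the same case in opposite ways:

    def fit_threshold(examples):
        # One simple learning rule: split the difference between the
        # average accepted score and the average rejected score.
        pos = [x for x, accepted in examples if accepted]
        neg = [x for x, accepted in examples if not accepted]
        return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    history_a = [(0.2, False), (0.3, False), (0.7, True), (0.9, True)]
    history_b = [(0.5, False), (0.7, False), (0.9, True), (0.95, True)]

    applicant = 0.72
    print(applicant > fit_threshold(history_a))  # True: approved under model A
    print(applicant > fit_threshold(history_b))  # False: rejected under model B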

Nigel Cameron, president and CEO of the Center for Policy on Emerging Technologies, observed, “Positives: enormous convenience/cost-savings/blah-blah. Negatives: radically de-humanizing potential, and who writes/judges the algos? In a consensus society all would be well. But we have radically divergent sets of values, political and other, and algos are always rooted in the value-systems of their creators. So the scenario is one of a vast opening of opportunity, economic and otherwise, under the control of either the likes of Zuckerberg or the grey-haired movers of global capital or…”

Adrian Hope-Bailie, standards officer at Ripple, noted, “We will see algorithms having a greater and greater impact for some time until their influence is recognized by public-service organizations who will lobby for them to be regulated. One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering.”

Michael Rogers, author and futurist at Practical Futurist, said, “In a sense, we’re building a powerful nervous system for society. Big data, real-time analytics, and smart software could add great value to our lives and communities. But at the same time they will be powerful levers of social control, many in corporate hands. In today’s market economy, driven by profit and shareholder value, the possibility of widespread abuse is quite high. Hopefully society as a whole will be able to use these tools to advance more humanistic values. But whether that is the case lies not in the technology, but in the economic system and our politics.”

Miles Fidelman, systems architect, policy analyst, and president at the Center for Civic Networking, wrote, “Tools will help us with sorting through information (Google, rating systems associated with shopping sites, etc.). At the same time, lots of tools are coming into play for manipulating opinion and behavior—notably data mining tools and social media marketing. By and large, tools will disproportionately benefit those who have commercial reasons to develop them, as they will have the motivation and resources to develop and deploy tools faster.”

Marcus Foth, professor of interactive and visual design at Queensland University of Technology, noted, “Many colleagues have started to think through these issues. Rather than crowdsourcing your work, why not read (or view) some of these contributions: https://theconversation.com/why-we-should-design-smart-cities-for-getting-lost-56492 http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles http://www.eng.unimelb.edu.au/engage/events/lectures/dourish-2016 http://www.thelateageofprint.org/category/algorithmic-culture/.”

Isto Huvila, a professor at Uppsala University, wrote, “I hate to be a pessimist. The opportunities are vast in most sectors if we can make sure that algorithms will be controlled by society and not by individual, collective, or corporate actors that have their own interests, different from those of society as a whole.”

Christopher Wilkinson, retired senior European Union official, commented, “A few algorithms applied to big data may become beneficial. More generally, algorithms applied to commercial behaviour are potentially intrusive. Algorithms that generate unsolicited advertising should be banned.”

Anil Dash, technologist, said, “The best parts of algorithmic influence will make life better for many people, but the worst excesses will truly harm the most marginalized in unpredictable ways. We’ll need both industry reform within the technology companies creating these systems and far more savvy regulatory regimes to handle the complex challenges that arise.”

Marc Brenman, managing partner at IDARE, wrote, “The algorithms will reflect the biased thinking of people. Garbage in; garbage out. Many dimensions of life will be affected, but few will be helped. Oversight will be very difficult or impossible. People will continue to waste huge amounts of time online, including responding to surveys that change nothing.”

Leah Stokes, an assistant professor at the University of California-Santa Barbara, commented, “The internet exists within a capitalist economy that seeks to connect buyers with sellers for a profit. It’s important to remember that the organizations controlling these algorithms and the internet overall are not usually NGOs or third parties, but vested interests. This is why the idea of net neutrality is important.”

Mary Griffiths, associate professor in media at the University of Adelaide, South Australia, replied, “It is the uptake and implementation by particular agencies, governments, and corporations that will produce diverse, possibly unpredictable, outcomes for individuals and society. The whole concept of smartification depends on predictive modelling, which can create energy efficiencies and improve health planning if metadata is used well. Whatever the potential benefits, the most salient question everyone should be asking is the classical one about accountability—’quis custodiet ipsos custodes?’—who guards the guardians? And, in particular, which ‘guardians’ are doing what, to whom, using the vast collection of information? Who has access to health records? Who is selling predictive insights, based on private information, to third parties unbeknown to the owners of that information? Who decides which citizens do and don’t need additional background checks for a range of activities? Will someone with mental health issues be ‘blocked’ invisibly from employment or promotion? The question I’ve been thinking about, following UK scholar E. Ruppert, is that data is a collective achievement, so how do societies ensure that the collective will benefit? Oversight mechanisms might include stricter access protocols; sign-off on ethical codes for digital management and named stewards of information; online tracking of an individual’s re-use of information; opt-out functions; setting timelines on access; and no third-party sale without consent.”

Laurent Schüpbach, neuropsychologist at University Hospital Zurich, Switzerland, wrote, “It depends a lot on the context. In the medical context, I can imagine amazing progress where machine learning and the quantified self can greatly improve diagnosis (as long as there is still a professional overseeing it, and not just people freaking out at the first outlier data point). A self-driving car is probably much safer than human drivers. But there are a number of cases where algorithms are more problematic. One example is the echo chamber effect that we can see on social media. Another fear is when algorithms become too good and predict something too personal (such as a teen pregnancy predicted by Target). My biggest concern is that at some point we won’t fully understand the algorithms anymore, and fixing a misbehaving algorithm will sometimes be impossible. For instance, what happened with Tay, the youth-oriented chatbot of Microsoft, may be a glimpse of what’s to come.”

Majoki commented, “I’m going with the 50-50, though advertising is the 500-pound gorilla in this equation. If you’ve ever read The Space Merchants by Pohl and Kornbluth, you know what I’m talking about. The more a system knows about you, the better it can serve and meet your needs; or exploit, manipulate or control you. Even well-intentioned systems can feed the latter.”

Dave Howell, a senior program manager in the telecommunications industry, replied, “Algorithms will identify humans using connected equipment. Identity will be confirmed through blockchain by comparison to trusted records of patterns, records kept by the likes of Microsoft, Amazon, and Google. But there are weaknesses to any system, and innovative people will work to game a system. Advertising companies will try to identify persons against their records; blockchains can be compromised (given a decade, someone will…). Government moves too slowly. The Big Five (Microsoft, Google, Apple, Amazon, Facebook) will offer technology for trust and identity; few other companies will be big enough. Scariest to me is Alibaba or China’s state-owned companies with the power to essentially declare who is a legal person able to make purchases or enter contracts. Government does not pay well enough to persevere. I bet society will be stratified by which trust/identity provider one can afford or qualify to go with. The level of privacy and protection will vary. Lois McMaster Bujold’s Jackson’s Whole suddenly seems a little more chillingly realistic.”

Susan Etlinger, industry analyst at Altimeter, said, “We are entering the age of the algorithm, the beginning of an era of truly automated decision-making. That’s good news, because we will become better able to scale routine tasks, but it’s also bad news, because there are both technical and competitive reasons that prevent algorithms and machine learning systems from being truly transparent. Lack of transparency can lead to all sorts of undesirable consequences, from bias and discrimination to bad decision-making at scale. But it doesn’t have to be this way. Much as we increasingly wish to know where and under what conditions our food and clothing are made, we should question how our data and decisions are made as well. What is the supply chain for that information? Is there clear stewardship and an audit trail? Were the assumptions based on partial information, flawed sources, or irrelevant benchmarks? Did we train our models on sufficient data? Were the right stakeholders involved, and did we learn from our mistakes? The upshot of all of this is that our entire way of managing organizations will be upended in the next decade. The power to create and change reality will reside in technology that only a few truly understand. So to ensure that we use algorithms successfully, whether for financial or human benefit or both, we need to have governance and accountability structures in place. Easier said than done, but if there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time.”

John Laprise, founder of the Association of Internet Users, observed, “The biggest problem is that algorithms are a black box decipherable to few. Algorithms will enable smart systems to make choices that would otherwise be made by people, freeing the latter.”

Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN, wrote, “The structure of work in modern society is changing rapidly, but there are still many areas where dull, repetitive tasks can and should be replaced by ‘smart’ machines. As robotic skills increase, many things, e.g., surgery, will also be done better by machines. The limits to human displacement by our own smart machines are not known or very predictable at this point. The broader question is how to redefine and reconstruct global economic systems to provide a decent quality of life for humanity. Currently—as ‘Brexit’ [Britain’s vote to leave the European Union] demonstrates—we are collectively in denial that latter-day capitalism cannot deal with the challenges of machine intelligence.”

Paul Jones, clinical professor at the University of North Carolina and director of iBiblio.org, commented, “Although we have seen the simplest algorithms create the cruelest human conditions—the concept of race creating genocide as the most brutal—the promise of the standardization of best practices into code is a promise of stronger best practices and a hope of larger space for human insight. Code, flexible and open code, can make you free—or at least a bit freer.”

Jason Hong, associate professor at Carnegie Mellon University, noted, “On the whole, algorithms will be a net positive for humanity. Any given individual has a large number of cognitive biases, limited experiences, and limited information for making a decision. In contrast, an algorithm can be trained on millions or even billions of examples, and can be specifically tuned for fairness, efficiency, speed, or other kinds of desired criteria. In practice, an algorithm will be deployed to work autonomously only in cases where the risks are low (e.g., ads, news) or where the certainty is high (e.g., anti-lock brakes, airplane auto-pilot) or good enough (e.g., Uber’s algorithm for allocating passengers to drivers). In most cases, though, it won’t be just a person or just an algorithm, but rather the combination of an expert with an algorithm. For example, rather than just a doctor, it will likely be a doctor working with an AI algorithm that has been trained on millions of electronic health records, their treatments, and their outcomes. However, there are two major risks with algorithms. 1) People will forget that models are only an approximation of reality. The old adage of garbage-in-garbage-out still applies, but the sheer quantity of data and the speed of computers might give the false impression of correctness. As a trivial example, there are stories of people following GPS too closely and ending up driving into a river. As another example, our research group has been studying how to use geo-tagged social media data to understand cities. However, one major bias is that we have little data from poor neighborhoods. We try to caveat all of our models by telling people about this limitation, but it’s easy to imagine lots of similar cases. 2) The other major risk with algorithms is accidental redlining. Originally, redlining referred to banks explicitly drawing red lines around neighborhoods that they refused to loan money in, which essentially meant that they refused to loan money to African-Americans. Today, a data scientist might design a large set of machine learning features and create a ‘fair’ model based on empirical data, not realizing that the data might have human biases in it or that some of the features are proxies for race or other characteristics. That is, the people designing the algorithms might not have explicitly designed them to be discriminatory, but in practice they might end up being so. Overall, though, I am still very positive on algorithms. We have several thousand years of human history showing the severe limitations of human judgment. Data-driven approaches based on careful analysis and thoughtful design can only improve the situation.”
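
The “accidental redlining” Hong describes is easy to reproduce with synthetic data. Nothing below comes from a real lender; the ZIP codes, groups, and approval history are all invented. The learned rule never sees the protected attribute, yet because a proxy feature carries the bias of the historical data, outcomes split along group lines anyway:

    # Invented biased history: ZIP 2 applicants were routinely rejected.
    history = [
        {"zip": 1, "approved": True},  {"zip": 1, "approved": True},
        {"zip": 2, "approved": False}, {"zip": 2, "approved": False},
    ]

    def past_approval_rate(z):
        rows = [r for r in history if r["zip"] == z]
        return sum(r["approved"] for r in rows) / len(rows)

    # "Learned" rule: approve when the applicant's ZIP historically cleared
    # a 50% approval rate. Race/group membership is never an input.
    applicants = [
        {"name": "A", "group": "majority", "zip": 1},
        {"name": "B", "group": "minority", "zip": 2},  # same merit, different ZIP
    ]
    for a in applicants:
        print(a["name"], a["group"], "approved:", past_approval_rate(a["zip"]) > 0.5)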

Michael Wollowski, associate professor of computer science at the Rose-Hulman Institute of Technology, replied, “Algorithms, or the models that you build with them, may be seen as boxing us in. However, I see them as empowering. When I come home from work and my wife and I are too exhausted to plan a meal or make a run to the grocery store, we would love to be on autopilot and have the analytical system that processes our family’s eating habits suggest a recipe and have the ingredients already acquired by calling on an autonomous delivery vehicle. When we have the energy, we would love for the autonomous distribution system to have delivered the ingredients of the recipe that we told it about earlier in the day. To drive this even further, I would love for the system that analyzes our eating habits to compare them to others’ and recommend variations that we may enjoy. Think of it as Netflix for recipes. In this vein, many things in our lives will be automated. Similar to the scenario I described, users will have choices over how much control they let the models have over them, broadly characterized as ‘autopilot,’ ‘semi-autonomous,’ in which we operate the system within parameters set for us, and ‘autonomous,’ which gives us maximum control. It is my fond hope that we remain in control of our data. While discrimination will likely still occur, there will be mechanisms to inform any interested party about acts of discrimination. These can then be fed as input into our models and, among other things, affect consumer choices. Certainly for businesses, this would be a large disincentive to discriminate.”

Erik Johnston, associate professor and director of the Center for Policy Informatics at Arizona State University, replied, “The positives will far outweigh the negatives, but that does not mean we should not always strive to do better: to understand the unintended consequences, to come up with better ways to design with algorithms as part of an overall approach to design, and to be more thoughtful about communicating the role of algorithms in the decision-making process. In health care, we can have mass customization; in journalism, we can present not just the information to the person who wants to see it but also assess and communicate the context they are seeing it in. For example, I use newswhip.com, which shares the news stories that are being shared the most; it reveals many different perspectives that show thoughts outside of my bubble and also reveals how both sides of political discussions use similar tricks to drive readership. Algorithms are a tool that, when used by people with a broad understanding of humanities and social contexts, can be used intentionally and thoughtfully.”

Aaron Chia Yuan Hung, assistant professor at Adelphi University, said, “The positive change will be better and more plentiful information, which can give us more insight into all the data mined by other organizations. Research from fields such as behavioral economics suggests that, as much as we want to trust in human insight and believe that human judgment is innately better than computational algorithms, it isn’t always true. The negative is that those who control the algorithms can potentially control our worldviews. Even assuming that they do not have particular agendas, there are still things that even the best-informed algorithms may miss. Anthropologists studying artificial intelligence (e.g., Diane Forsythe, Lucy Suchman) suggest that there are limitations in these expert programs, too, as problems may be encoded into the data used by an algorithm. As an optimist, I think the positives will outweigh the negatives provided that we don’t blindly accept algorithms and we continue to question their design. The field of technomethodology pioneered by Paul Dourish and Graham Button seems to lead in that direction.”

Peter Morville, president of Semantic Studios, wrote, “Algorithms already help us discover books, movies, and music. I expect future algorithms (in the form of expert systems and weak AI) will help us make better decisions (e.g., better diagnosis than a doctor, better financial planning than a financial advisor). The risk is that if we trust our sources without understanding them (and without realizing the hidden incentives), they can easily take advantage of us. I expect lots of negatives but an overall net positive for society.”

Pamela Rutledge, director of the Media Psychology Research Center, replied, “People have always resisted technological change; what is unknown is frightening. Humans fear giving away power—thus the anxiety over big data and computer-generated decision-making. Access to information and the increase in productivity are the biggest benefits. The danger is the assumption that, in pursuit of better productivity, all decisions can be made irrespective of context and the human condition.”

Demian Perry, director of mobile at NPR, replied, “An algorithm is just a way to apply decision-making at scale. Mass-produced decisions are, if nothing else, more consistent. Depending on the algorithm (and whom you ask), that consistency is either less nuanced or more disciplined than you might expect from a human. In the NPR One app, we have yet to find an algorithm that can be trusted to select the most important news and the most engrossing stories that everyone must hear. At the same time, we rely heavily on algorithms to help us make fast, real-time decisions about what a listener’s behavior tells us about their program preferences and we use these algorithms to pick the best options to present to them at certain points in their listening experience. Thus algorithms are helpmates in the process of curating the news, but they’ll probably never run the show. We believe they will continue to make our drudge work more efficient, so that we have more time to spend on the much more interesting work of telling great stories.”

Lee McKnight, associate professor at Syracuse University’s School of Information Studies, said, “Algorithms coded in smart service systems will have many positive, life-saving and job-creating impacts in the next decade. Social machines will become much better at understanding your needs and attempting to help you meet them. Ethical machines—such as drones—will know to sense and avoid collisions with other drones, planes, birds, or people, recognize restricted air space, and respect privacy law. Algorithmically driven vehicles will similarly learn to better avoid each other. Healthcare smart-service systems will be driven by algorithms to recognize human and machine errors and omissions, improving care and lowering costs. Given the wide-ranging impact on all aspects of people’s lives, software liability law will eventually be recognized to be in need of reform, since right now, literally, coders can get away with murder. Inevitably, regulation of the implementation and operation of complex policy models such as the Dodd-Frank Volcker Rule capital adequacy standards will itself be algorithmically driven. Regulatory algorithms, code, and standards will be—actually, already are—provided as a service. The Law of Unintended Consequences indicates that the increasing layers of societal and technical complexity encoded in algorithms ensure that unforeseen catastrophic events will occur—probably not the ones we were worrying about.”

Michael Whitaker, vice president of emerging solutions at ICF International, wrote, “Over the next 10 years, the increasing use of algorithms will have a substantial net-positive effect for individuals and society. However, in the next 2-4 years, we will have a substantial reckoning related to algorithm accountability. My longer thoughts are contained in this post—https://lnkd.in/bvJKSui—A brief summary: Many organizations that deploy advanced analytics are unaware of their implicit or explicit biases and have not defined acceptable failure modes for those algorithms. Over the next few years, scrutiny over the real-world impacts of algorithms will increase, and organizations will need to defend their application. Many will struggle, and some are likely to be held accountable (through reputation or legal liability). This will lead to increased emphasis on algorithm transparency and bias research. Algorithms are delivering and will continue to deliver significant value to individuals and society. However, we are in for a substantial near- to mid-term backlash (some justified, some not) that will make things a bit bumpy on the way to a more transparent future with enhanced trust and understanding of algorithm impacts.”

David Sarokin, author of Missed Information: Better Information for Building a Wealthier, More Sustainable Future (MIT Press), observed, “Apps/algorithms have a real capability of democratizing information access in important and positive ways (a theme of my book, by the way). Whether these outweigh the negatives that people see in terms of, e.g., invasion of privacy is, in some measure, a matter of personal values and an overall judgment call. For example, phone apps have been developed to collect, collate, and combine reports from citizens on their routine interactions—both positive and negative—with police. In widespread use, these can be an effective ‘report card’ for individual officers as well as overall community policing, and help identify potential problems before they get out of hand.”

David Klann, media industry technology consultant, commented, “The evidence suggests that the positive uses of algorithms will gain more attention and be used far more than the negative. The use of algorithms will enable people to use services they might otherwise be unaware of, or services that might otherwise be unavailable. Happening right now, for example, is the automation of administrative legal services (e.g., for parking tickets, for small-business establishments). Low-level supervision and management is likely another area that will be augmented with algorithms. I’m not very good at predicting the future of discrimination, but I see possibilities in reducing discrimination with the use of algorithms *if* they are vetted with large and diverse test samples. What might be the oversight mechanisms? Crowd-sourced review and monitoring of their effectiveness and applicability.”

Richard Oswald, a writer, said, “As the service industries use these tools more extensively, they will evolve or face discriminating use by consumers. The secret does not lie in government rules for the algorithms themselves but in competition and free choice—allowing consumers to use the best available service and to openly share their experiences.”

Edward Friedman, emeritus professor of technology management at the Stevens Institute of Technology, observed, “As more algorithms enter the interactive digital world, there will be an increase of Yelp-type evaluation sites that guide users in their most constructive use.”

Raymond Plzak, former CEO of a major regional Internet governance organization, replied, “Positives will only outweigh the negatives if the designers carefully consider unintended consequences and thoroughly test them in a robust testing environment.”

Lauren Wagner, a respondent who shared no additional identifying details, wrote, “Algorithms are dependent on the inputs that are provided to the system. While discrimination may exist, it is my hope that we continue to provide inputs that mitigate these effects. One example is loan procurement. We can use inputs that are not captured by a traditional bank to grant loans to individuals who would not be able to secure them through traditional means. The algorithms would ensure that they are reliable borrowers. The negative impact will be in instances where intangibles like emotional intelligence need to be measured or considered but are not accounted for by algorithms. These factors are not less important, but they may be relegated to a less-important place if algorithms are prioritized. Healthcare will be most affected by algorithms, with advances like DNA sequencing that have the potential to detect cancer earlier than ever before and save millions of lives. The dissemination of news will also be strongly impacted. Overall, artificial intelligence holds the most promise and risk in terms of impacting people’s lives through the expanding collection and analysis of data. Oversight bodies like OpenAI are emerging to assess the impact of algorithms. OpenAI is a nonprofit artificial intelligence research company. Its goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Vin Crosbie, adjunct professor of multimedia, photography, and design at Syracuse University, said, “The world is transitioning from the Industrial Era to the Information Era. We’re moving from a 200-or-more-year era of mass producing standardized products and services to a new era in which individuated products and services can be produced on mass scales. These individuated products and services are all based upon algorithmic technologies. This epochal transition between eras will probably take the next 40 to 50 years, but this next decade will already see tangible developments in that direction.”

Avery Holton, an assistant professor and humanities scholar at the University of Utah, said, “In terms of communication across social networks both present and future, algorithms can work quickly to identify our areas of interest as well as others who may share those interests. Yes, this has the potential to create silos and echo chambers, but it also holds the promise of empowerment through engagement encouragement. We can certainly still seek information and relationships by combing through keywords and hashtags, but algorithms can supplement those efforts by showing us not only ‘what’ we might be interested in and ‘what’ we might be missing, but ‘who’ we might be interested in and ‘who’ we might be missing. Further, these algorithms may be able to provide us some insights about others (e.g., their interests, their engagement habits) that help us better approach, develop, and sustain relationships.”

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., replied, “Pedro Domingos, author of The Master Algorithm, wrote, ‘If every algorithm suddenly stopped working, it would be the end of the world as we know it.’ Fact: we have already turned our world over to machine learning and algorithms. The question now is how to better understand and manage what we have done. Algorithms are a useful artifact with which to begin discussing the larger issue of the effects of technology-enabled assists in our lives. Namely: how can we see them at work and assess their assumptions? And—most importantly for those who don’t create algorithms for a living—how do we educate ourselves about the way they work, where they are in operation, what assumptions and biases are inherent in them, and how to keep them transparent so that, like fish in a tank, we can see them swimming around and keep an eye on them? It is especially important to educate ourselves about algorithms that produce probabilities as a means of prediction. Once we use data mining, machine learning, modeling, and artificial intelligence to analyze data and make predictions, quis custodiet ipsos custodes—who will monitor our monitors and predictors? Algorithms are the new arbiters of human decision-making in almost any area we can imagine, from watching a movie (Affectiva emotion recognition) to buying a house (Zillow.com) to self-driving cars (Google). Deloitte Global predicted that more than 80 of the world’s 100 largest enterprise software companies will have cognitive technologies—mediated by algorithms—integrated into their products by the end of 2016. As Brian Christian and Tom Griffiths write in Algorithms to Live By, algorithms provide ‘a better standard against which to compare human cognition itself.’ They are also a goad to consider that same cognition: how are we thinking, and what does it mean to think through algorithms to mediate our world? The main positive result of this is a better understanding of how to make rational decisions, and in this measure a better understanding of ourselves. After all, algorithms are generated by trial and error, by testing, by observing, and by coming to certain mathematical formulae regarding choices that have been made again and again—and this can be used for difficult choices and problems, especially when intuitively we cannot readily see an answer or a way to resolve the problem. The 37% Rule, optimal stopping, and other algorithmic conclusions are evidence-based guides that enable us to use wisdom and mathematically verified steps to make better decisions. The secondary positive result is connectivity. In a technological recapitulation of what spiritual teachers have been saying for centuries, our things are demonstrating that everything is—or can be—connected to everything else. Algorithms with the persistence and ubiquity of insects will automate processes that used to require human manipulation and thinking. These can now manage basic processes of monitoring, measuring, counting, or even seeing. Our car can tell us to slow down. Our televisions can suggest movies to watch. A grocery store can suggest a healthy combination of meats and vegetables for dinner. Siri reminds you it’s your anniversary. The main negative changes come down to a simple but now quite difficult question: how can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions? The rub is this: whose intelligence is it, anyway?
The algo trader may effect a result that would make shareholders shudder. The automated car may slow down but not realize a stalker on foot is closing in on you as you stop at a traffic light. Further, humans design algorithms, and humans often have unexamined biases in their thinking and approaches to problem solving. Once again, we encounter a transparency issue: how do we see bias when it is hidden in process? The overall impact of ubiquitous algorithms is presently incalculable because the presence of algorithms in everyday processes and transactions is now so great, and is mostly hidden from public view. What dimensions of life will be most affected—health care, consumer choice, the dissemination of news, educational opportunities, others?  The simple answer: all dimensions of our lives will be affected. Some, like healthcare and banking, may employ more algorithms, pound for pound, than others. But since every business, government, educational institution or social construct relies on technology to do its day-to-day business, and algorithms are ensconced in that technology, the ubiquity of algorithms affects everything. A great mistake with AI, things that think, the Internet of Things, and the algorithms that fuel those capabilities, would be to adopt a set-it-and-forget-it mentality. All of our extended thinking systems (algorithms fuel the software and connectivity that create extended thinking systems) demand more thinking—not less—and a more global perspective than we have previously managed. Concern and doubts about our ability to manage this new reality, expressed by Stephen Hawking among others, stem, I believe, from a blind faith in the infallibility of our conceptions, and the algorithms that fuel them. On the bright side, a world where choices and experiences are data driven is an evidence-based world, where bias is diminished and facts and true conclusions inform policy and preferences. This is a remarkable and positive development in human history. So the expanding collection and analysis of data, and the resulting application of this information, can cure diseases, decrease poverty, bring timely solutions to people and places where need is greatest, and dispel millennia of prejudice, ill-founded conclusions, inhumane practice, and ignorance of all kinds. Unfortunately, that is not the whole story. Now in a recapitulation of the history of technology and the revenge of unintended consequences, where we create a techno solution to solve a problem that then creates problems equal to or worse than what we originally tried to solve, our algorithms are now redefining what we think, how we think, and what we know. We need to ask them to think about their thinking—to look out for pitfalls and inherent biases before those are baked in and harder to remove. Our systems do not have, and we need to build in, what David Gelernter called ‘topsight,’ the ability to not only create technological solutions but also see and explore their consequences before we build business models, companies and markets on their strengths, and especially on their limitations. Algorithms and machine learning will enable predictive modeling in virtually all areas of life, and many of these will add convenience in our lives. Anytime we choose one thing over another, buying a product for example, predictive modeling will come into play. 
So, from getting a mortgage to buying a car to choosing workout clothing, our choices—and the choices of others the predictive model deems to be like us—will present conveniently suited options. By expanding collection and analysis of data and the resulting application of this information, a layer of intelligence, or thinking manipulation, is added to processes and objects that previously did not have that layer. Prediction possibilities thus follow us around like a pet. The result: as information tools and predictive dynamics are more widely adopted, our lives will be increasingly affected by their inherent conclusions and the narratives they spawn. All algorithms will mirror the biases of their creators, of the data relied upon, or of the local environment or society at large. A Carnegie Mellon study found Google is more likely to advertise executive-level positions to search engine users when it thinks the user is male. Harvard researchers found ads about arrest records were more likely to pop up when a user searched for names thought to belong to a black person rather than a white person. As Leigh Alexander wrote recently, reporting on the work of law professor and sociologist Ifeoma Ajunwa: ‘The work of these researchers points to a problem in the world of big data that doesn’t get discussed often enough: Unless the data itself can be truly said to be ‘fair,’ an algorithm can’t do much more than perpetuate an illusion of fairness in a world that still scores some people higher than others—no matter how ‘unbiased’ we believe a machine to be.’ We are, as humans, quite good at inventing innovative technologies. We are far less aware of—and so, it would seem, less interested in—the broader impact of the technologies we create. To create oversight that would assess the impact of algorithms, first we need to see and understand them in the context for which they were developed. That, by itself, is a tall order that requires impartial experts backtracking through the technology development process to find the models and formulae that originated the algorithms. Then, keeping all that learning at hand, the experts need to soberly assess the benefits and the deficits or risks the algorithms create. Who is prepared to do this? Who has the time, the budget, and the resources to investigate and recommend useful courses of action? This is a 21st-century job description—and market niche—in search of real people and companies. In order to make algorithms more transparent, products and product information circulars might include an outline of algorithmic assumptions, akin to the nutritional sidebar now found on many packaged food products, that would inform users of how algorithms drive intelligence in a given product and a reasonable outline of the implications inherent in those assumptions.”
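
The 37% Rule that Chudakov cites is the classic optimal-stopping result popularized in Algorithms to Live By: review and pass over roughly the first 1/e (about 37%) of options, then commit to the first one that beats everything seen so far. A minimal simulation sketch in Python shows how the rule performs; the random candidate scores and function names here are our own illustration, not from any cited study:

```python
import random

def stops_on_best(n, look_frac=0.37):
    """One trial: does look-then-leap pick the single best of n candidates?"""
    scores = [random.random() for _ in range(n)]
    look = max(1, int(n * look_frac))   # observe-and-reject phase (~37% of candidates)
    benchmark = max(scores[:look])
    for s in scores[look:]:
        if s > benchmark:               # leap: first candidate beating the phase-1 best
            return s == max(scores)
    return scores[-1] == max(scores)    # otherwise forced to settle for the last one

trials = 20_000
wins = sum(stops_on_best(50) for _ in range(trials))
print(f"best candidate chosen in {wins / trials:.1%} of trials")  # converges to ~1/e, i.e. ~37%
```

This is the sense in which such rules are “evidence-based guides”: the success rate is a mathematically verifiable property of the procedure, not a judgment call.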

David Krieger, director of the Institute for Communication & Leadership IKF, observed, “Data-driven, algorithmic cognition and agency will characterize all aspects of society. Humans and non-humans will become partners such that identity(ies) will be distributed and collective. Individualism will become anachronistic. The network is the actor. It is the network that learns, produces, decides, much like the family or clan in collective societies of the past, but now on the basis of big data, AI, and transparency. Algorithmic auditing, accountability, benchmarking procedures in machine learning, etc., will play an important role in network governance frameworks that will replace hierarchical, bureaucratic government. Not government, but governance.”

Micah Altman, director of research at MIT Libraries, replied, “‘Algorithms’ are defined essentially as mathematical tools designed to solve problems. Generally, improvements in problem-solving tools—especially in the mathematical and computational fields—have yielded huge benefits in science, technology, and health, and will most likely continue to do so. The key policy question is: How will we choose to hold government and corporate actors responsible for the choices that they delegate to algorithms? There is increasing understanding that each choice of algorithm embodies a specific set of choices over what criteria are important to ‘solving’ a problem and what can be ignored. Incenting better choices in algorithms will likely require actors using them to provide more transparency and to explicitly design algorithms with privacy and fairness in mind, and will require holding actors who use algorithms meaningfully responsible for their consequences.”

Ansgar Koene, senior research fellow at the Horizon Digital Economy Research Institute, commented, “The impact of algorithms will depend heavily on the level of transparency and interpretability of the algorithmic reasoning processes that is provided to people. The current application of most algorithms is unfortunately very lacking in this respect and therefore threatens to move toward a mostly negative impact. Growing unease about the opaque nature of these systems is however producing a push toward better information provisioning and more critical attitudes toward the outputs of algorithms. In the long run this is likely to lead to more positive results. With greater transparency and auditability the algorithms will make it possible to reveal persistent societal inequalities and facilitate a more proactive move to remedying these problems.”

Ryan Hayes, owner of Fit to Tweet, observed, “There are a lot of ways in which the world is more peaceful and our quality of life better than ever before, but we don’t necessarily feel that way because we’ve been juggling more than ever before, too. For example, when I started my career as a CPA I could do my job using paper and a ten-key calculator, and when I left for the day I could relax knowing I was done, whereas today I have over 300 applications that I utilize for my work and I can be reached every minute of the day through Slack, texts, several email accounts, and a dozen social media accounts. Technology is going to start helping us not just maximize our productivity but shift toward doing those things in ways that make us happier, healthier, less distracted, safer, more peaceful, etc., and that will be a very positive trend. Technology, in other words, will start helping us enjoy being human again rather than burdening us with more abstraction. A negative trend we’ll see more of, though, is the divide between people who are utilizing the cutting-edge tech and those who aren’t. Twenty years ago we talked about the ‘digital divide’ as being between people who had access to a computer at home and those who didn’t, or those who had access to the internet and those who didn’t. And that was a real difference, of course, as computers and the internet were valuable. Ten years from now, though, the life of someone whose capabilities and perception of the world are augmented by sensors and processed with powerful AI and connected to vast amounts of data is going to be vastly different from the life of those who don’t have access to those tools or knowledge of how to utilize them. And that divide will be self-perpetuating, where those with fewer capabilities will be more vulnerable in many ways to those with more. I’m sure there will be an increase in algorithms that audit, oversee, and control other algorithms, but as AI increases in capability it may become difficult if one system isn’t able to comprehend why another system does something (in fact, we already have AI algorithms that make decisions their designers can’t understand). In cases like that we may just need to blindly trust that we’re being led in the right direction, and while that isn’t ideal, it’s not necessarily worse or riskier than trusting the forces and people that have led society to date. I think open-source algorithms will play a large role too, and that should help assure the public that those algorithms are trustworthy.”

John Anderson, director of journalism and media studies at Brooklyn College, wrote, “This is perhaps one of the largest issues facing our media and technological environments today. The key issue is transparency: without adequate public knowledge of just how algorithms actually operate, there can be no basis for the public to determine whether or not they are ‘good’ or ‘bad.’ However, history has shown in many other informational contexts that the greater the opacity, the more negative the consequences.”

Aidan Hall, head of UX at TomTom Sports, commented, “‘Algorithms’ is just the new label for technology—it’s exactly the same question that has been asked again and again over the last 200 years about mechanisation, industrialisation, general-purpose computers, the internet, etc.”

B. Remy Cross, assistant professor of sociology at Webster University, said, “I chose 50/50 even though I think it is more likely to be a net negative. I do have hope from the discussions finally being had about how machines and programs are not simply tools that lack inherent bias, but creations made by people who often build them in reflection of their own biases. Algorithms in particular are prone to a sort of techno-fetishism where they are seen as perfectly unbiased and supremely logical, when they are often nothing of the sort. Any time a machine has to engage with a human system, it has to be taught how to do so by humans, and the humans doing the teaching—in this case engineers—are often at best ignorant of, or at worst downright dismissive of, the kinds of social realities they are programming their creations to engage with.”

Jerry Michalski, founder at REX, commented, “Algorithms are already reshaping—might we say warping?—relationships, citizenship, politics, and more. Almost all the algorithms that affect our lives today are opaque, created by data scientists (or similar) behind multiple curtains of privacy and privilege. Worse, the mindset behind most of these algorithms is one of consumerism: How can we get people to want more, buy more, get more? The people designing the algorithms seldom have citizens’ best interests at heart. And that can’t end well. On the positive side, algorithms may help us improve our behavior on many fronts, offsetting our weaknesses and foibles or reminding us just in time of vital things to do. But on the whole, I’m pessimistic about algorithm culture.”

Chris Showell, an independent health informatics researcher based in Australia, observed, “Algorithms have the potential to simplify complex decision-making and to guide users through a complex mix of related options. However, the reasoning embedded in an algorithm is rarely transparent. In some cases, an algorithm developed through machine learning may follow paths of logic that are opaque even to its developers. This means that the organisation developing the algorithm has significant capacity to influence or moderate the behaviour of those who rely on the algorithm’s output. Two current examples: manipulation of the prices displayed in online marketplaces, and the use of ‘secret’ algorithms in evaluating social welfare recipients (in the UK). There will be many others in years to come. It will be challenging for even well-educated users to understand how an algorithm might assess them or manipulate their behaviour. Disadvantaged and poorly educated users are likely to be left completely unprotected. Algorithms may also fail catastrophically in the face of ‘black swan’ events, as Tesla are now discovering.”

Lisa Heinz, doctoral student at Ohio University, said, “I’ve addressed these concerns in a previous answer, but I will say that those of us who learn and work in human-computer areas of study will need to make sure our concerns about discrimination and the exclusionary nature of the filter bubble are addressed in the oversight mechanisms of algorithm development. This means that all voices, genders, and races need to be incorporated into the development of algorithms to prevent even unintentional bias. Algorithms designed and created only by young white men will always benefit young white men to the exclusion of all others.”

Luis Lach, president of the Sociedad Mexicana de Computación en la Educación, A.C., said, “Algorithms and big data have arrived in education at a very slow pace. The worst scenario is not to be engaged with modern technologies—especially, in this case, with algorithms and every tool able to combine, produce, and manage big amounts of data. Societies that decide to stay outside of these new technological environments will be forced back to feudalism. The other scenarios rest on a technological paradigm. The thing is that the technology by itself is not bad; the real problem is what we do with it. On the negative side we will see huge threats to security and data privacy, and attacks on individuals by governments, private entities, and other social actors. And on the positive side we will have a huge opportunity for collective and massive collaboration across the entire planet. Of course science will rise and we will see marvelous advances. Of course we will have a continuum between positive and negative scenarios. What we do with it depends on individuals, governments, private companies, nonprofits, academia, etc.”

Dan York, senior content strategist for a major nonprofit communications governance organization, commented, “Given the huge deluge of content and information available to us, we *need* algorithms to help us navigate. Algorithms can help us make sense of all the info and find the signals amidst all the noise. However, the issue is the *control* and *transparency* of algorithms. Do we *know* how the algorithms are working? Are we in control of how they work, or is some faceless corporation in control? If so, do they explain what they are doing? Do they allow us to help tweak the algorithm? Those in control of the algorithms that feed us information can become the gatekeepers who prevent us from seeing things, who charge us to see content or to publish content. The algorithm owners can also use algorithms in ways that influence public activity, elections, buying patterns, etc. The possibility for negative manipulation is huge. There’s also a danger that algorithms can create ‘echo chambers’ that wind up reinforcing people’s beliefs and leading to further polarization. If an algorithm decides you really like right-wing news, for instance, it might feed you a diet of only that news, further reinforcing and affirming your own beliefs and never challenging them. You might come to believe more strongly that ‘your way is the only way’ without ever reading or seeing things from other viewpoints. Algorithms will be necessary to help us navigate the future—but the potential for misuse and abuse is quite high.”
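
York’s echo chamber is, mechanically, a feedback loop: a feed that re-serves only what a reader already clicked narrows with every iteration. A toy simulation in Python makes the dynamic concrete; the topic labels and click probabilities below are invented for illustration, and without a deliberate exploration term the simulated feed collapses to a single topic:

```python
import random
from collections import Counter

TOPICS = ["politics-left", "politics-right", "sports", "science", "arts"]

def next_story(clicks, explore):
    """Mostly re-serve whatever the reader clicked before; sometimes explore."""
    if not clicks or random.random() < explore:
        return random.choice(TOPICS)
    topics, weights = zip(*clicks.items())  # exploit: weight by past clicks only
    return random.choices(topics, weights=weights)[0]

def simulate(explore, steps=5000):
    clicks = Counter()
    favorite = "politics-right"  # the reader's mild initial lean
    for _ in range(steps):
        topic = next_story(clicks, explore)
        # Slightly likelier to click the favorite topic than anything else.
        if random.random() < (0.6 if topic == favorite else 0.4):
            clicks[topic] += 1
    return sorted(clicks)  # topics that survive in the feed

for explore in (0.0, 0.05, 0.3):
    print(f"explore={explore}: feed contains {simulate(explore)}")
```

With explore=0.0 the feed locks onto the first topic ever clicked and never escapes; even a small exploration rate keeps a broader mix alive. The mild 0.6-vs-0.4 preference is enough to tilt the lock-in toward the favorite, which is exactly the reinforcement York warns about.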

Will Kent, an e-resources specialist on the staff at Loyola University-Chicago, observed, “Positives—gleaning a more granular understanding of wants, needs, and trends from bigger sets of data. Negatives—trusting others to know what it is that we want. Any amount or type of discrimination could occur. It could be as innocent as a slip-up in the code or a mistranslation. It could be as nefarious as deliberate suppression, obfuscation, or a lie of omission. Working in a library, it’s always nice to reveal to patrons how our search queries work. We can always pop open the hood and say: here is our collection; if you think our search does not work for you, you can search as you see fit. You can’t opt out of algorithms, though. Providing options for users will be essential in the future to avoid discrimination. In some ways avoiding algorithms will be impossible, but oversight of how they work, what they gather, and what they miss, along with more transparency, will be of extreme importance. This topic is the classic confusion of mistaking something for a solution when it’s only a tool. If our tools (algorithms) malfunction, we need to be able to call them out and fix them or replace them. If there is no mechanism or protocol to do that, then we are stuck with broken tools giving us unfair solutions. Besides creating an oversight body, advocating for less business influence in algorithms will be a necessary conversation at some point. Users need results sometimes—not promoted results. Finding a balance between when it is wrong to let capitalism dictate how we find things and simply finding what we need is a conversation we need to have continuously.”

John B. Keller, director of eLearning at the Metropolitan School District of Warren Township, Indiana, wrote, “While I am bullish about the benefits of technology in our lives, we must also be cautious about wholesale abandonment of critical thinking when it comes to the tradeoffs made to realize those benefits. Algorithms have assumptions baked into them, and to the extent that algorithms are blind to important nuances of human decision-making and communication, algorithms will not be 100% reliable. As algorithms become more complex and move from computational-based operations into predictive operations, and perhaps even into decisions requiring moral or ethical judgment, it will become increasingly important that built-in assumptions are transparent to end users and perhaps even configurable. Algorithms are not going to simply use data to make decisions—they are going to make more data about people that will become part of their permanent digital record. We must advocate for the benefits of machine-based processes but remain wary, cautious, and reflective about the long-term consequences of the seemingly innocuous progress of today.”

Mike Warot, machinist at Allied Gear, wrote, “Computing is fairly morality-neutral. There are some large, bad uses, such as the rigging of trading on Wall Street, but there are many more small positives that will balance them out.”

Christopher Owens, a community college professor, said, “It all depends on who owns the algorithms and the legal framework in which they exist. Algorithms are tools, and how they are used is up to their controllers. If the current economic order remains in place, then I do not see the growth of data-driven algorithms providing much benefit to anyone outside of the richest in society.”

Andrew Eisenberg, technical lead at Ganchrow Scientific, wrote, “Based on history, it seems that people are generally willing to give up privacy for convenience. As long as data collection and predictive modelling continues to provide short-term benefit, people will be willing to give up their privacy to take advantage of it.”

David Durant, a business analyst for the UK Government Digital Service, replied, “Algorithms will provide many benefits, especially on the analytical side of healthcare. Advanced imaging plus deep-layered neural networks will lead to much earlier and better diagnosis of disease. Algorithms will also continue to improve the speed of transactions between people, business, and government. However, algorithms have a strong possibility of ‘baking in’ the biases of those who produce them (see many recent papers on ‘algorithmic accountability’). Such decisions include everything from the chance of post-prison recidivism to whether someone should be approved for a loan. Many situations will call for close monitoring and auditing of such algorithms, as well as their publication as open source.”

Joshua Segall, a software engineer, said, “Algorithms are human creations and are subject to the same biases that humans have. In the short run algorithms will have a negative effect, particularly against those who are different from the algorithms’ creators—the fairly wealthy, young, white and Asian males working in the technology sector. This will specifically harm or exclude poor, elderly, female, black, and Hispanic populations. Today we see algorithms that focus on narrow, marginal conveniences for these groups. However, in the long run I’m optimistic these biases will be noted and corrected to have a broader beneficial impact, with algorithms being used to benefit society overall. Collection of data will continue unabated, but the insights gained from it will be limited. Analysis of individuals will be used for inappropriate purposes, targeting them and limiting their potential. But aggregated data will be used to good purpose, looking for trends that can be used to validate or invalidate scientific hypotheses in medicine, sociology, and the analysis of government programs. We already have the statistical tools today to assess the impact of algorithms, and they will be aided by better data collection. However, assessment will continue to be difficult regardless of algorithms and data because of the complexity of the systems we aim to study.”

Stephen Schultz commented, “Algorithms are to the ‘white-collar’ labor force what automation is to the ‘blue-collar’ labor force. Lawyers are especially vulnerable, even more so if those with competency in computer programming start acquiring law degrees and passing legislation and re-writing the syntax of current legal code to be more easily parsed by AI. For the layman, this could mean being able to obtain, online, quick, plain-language explanations of contracts and other legal documents without needing to hire a lawyer. Another profession that might benefit from algorithmic processing of data is nursing. In the United States, floor nursing is one of the most stressful jobs right now, in part because floor RNs are being given higher patient loads (up to 6) and at the same time being required to enter all assessment data into the Electronic Medical Record (EMR), and then creating/revising care plans based on that data, all of which subsequently leave little time for face-to-face patient care. The nursing process consists of five stages: assessment, diagnosis, planning, implementation, and evaluation. Algorithmic information processing would be most helpful in the diagnosis and evaluation stages. In tandem with self-reporting monitoring devices directly feeding into the EMR, this system would allow much more time for the face-to-face interactions needed by both the caretaker and the patient.”

Dudley Irish, software engineer, observed, “Negatives need not intrinsically exist in an algorithm, but the sort of machine learning systems that are usually meant when this topic comes up are heavily dependent on training data. All—let me repeat that, all—of the training data contains biases. Much of it is either racial- or class-related, with a fair sprinkling of simply punishing people for not using a standard dialect of English. To paraphrase Immanuel Kant, out of the crooked timber of these datasets no straight thing was ever made.”
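
Irish’s point, that a model inherits whatever skew its training labels contain, can be shown with a deliberately simple sketch in Python. The data, the dialect feature, and the approval rates below are all made up for illustration; the dialect flag stands in for the kind of proxy variable he describes:

```python
import random

random.seed(0)

def historical_decision():
    """One labeled example from a (made-up) biased historical process."""
    standard_dialect = random.random() < 0.5
    qualified = random.random() < 0.5   # true merit, independent of dialect
    # Past gatekeepers approved qualified people, but penalized non-standard dialect.
    approved = qualified and (standard_dialect or random.random() < 0.3)
    return standard_dialect, approved

data = [historical_decision() for _ in range(100_000)]

# The simplest possible "model": approval rate conditioned on dialect alone.
for flag in (True, False):
    group = [approved for dialect, approved in data if dialect == flag]
    print(f"standard_dialect={flag}: learned approval rate {sum(group)/len(group):.0%}")
# Both groups are equally qualified (50%), yet the learned rates come out
# near 50% vs. 15% -- any model fit to these labels reproduces the gap.
```

No amount of algorithmic cleverness downstream recovers the missing fairness; the bias lives in the labels themselves, which is exactly the crooked timber Irish is paraphrasing Kant about.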

Matt Bates, freelance programmer and concept artist, said, “Better healthcare and health decisions are possible. Ever-improving searchability of all sorts of knowledge and products is possible. Fewer vehicular deaths are achievable, as is more efficient delivery and route planning (perhaps even city planning). People’s employability and insurance qualification can be positively or negatively affected, ditto with issues of criminal justice (an algorithm might well identify wrongful convictions, for example, or it might inaccurately lump offenders into risk groups). I can’t say what dimensions of life will be most affected. All of them? Technology permeates so deeply into our lives, and it changes so rapidly, that it’s difficult to say.”

Don Philip, a retired PhD lecturer, wrote, “I answered that the effects will be about 50-50, but that’s probably wildly optimistic. More jobs will be lost to algorithms than will be created. However, if we manage society correctly, this will create opportunities for people to do useful work that previously went undone because it was too expensive to hire people. If this is improperly managed we will have a massively unemployed underclass and huge social unrest.”

Alan Cain, a respondent who did not share other identifying background, commented, “So. No jobs, growing population, and less need for the average person to function autonomously. Which part of this is warm and fuzzy?”

Helmut Krcmar, professor of information systems at the Technical University of Munich, said, “Think about what we call ‘human error.’ Very often that is deviation from established behaviour, thus increasing variance in outcomes. Algorithms usually do not deviate. However, if you use learning algorithms, shifts may occur.”

Garth Graham, board member at Telecommunities Canada, wrote, “The future positives will only outweigh the negatives if the simulation of myself—the anticipation of my behaviours—that the algorithms make possible is owned by me, regardless of who created it.”

Christa Taylor, CEO of dotTBA, commented, “Algorithms utilizing existing and new data will change our lives. Wearable technologies will provide new insights on our health and eating choices; the Internet of Things will alter our behaviors and provide us insight that we never thought possible. Our fridge will be filled with foods recommended by our wearable technologies. We will have greater insight into the choices we make and their impacts, not just on ourselves but on our society and environment. Discrimination could easily occur based on generalizations of the data, so ensuring the data is handled by experienced professionals will be key to reducing these risks, though it cannot eliminate them.”

Ray Schroeder, associate vice chancellor for online learning at the University of Illinois, Springfield, observed, “Predictive modeling will make for much more efficient service for most of those accessing information and services online. There will always be exceptions where individuals will require customized responses and specialized services. I anticipate that these will be provided, perhaps at extra cost or time delay.”

Paul Dourish, chancellor’s professor of informatics at the University of California-Irvine, noted, “Positives outweigh negatives, but not without requiring that we address the problems of audit and inequity that arise in the current situation. More needs to be done to give people insight into and control over algorithmic processing—which includes having algorithms that work on individuals’ behalf rather than on behalf of corporations.”

Ed Lyell, professor of business and economics at Adams State University, commented, “Disruption is already underway in most every area of life. Most everyone’s life can be improved by using the new AI and information access tools extant and emerging. The training—emotional and cognitive—is the limiting factor, along with overprotection of industries by status quo players. America, and most of the world, currently discriminates based on race as well as income. America’s schools are very different based on zip code and not likely to change in the current model. If we better use the tools of information access and learning it is possible to give low-income (which is also mostly of-color) families ways to get better learning, better access to jobs, and ways to overcome current barriers to success.”

Erik Anderson, a respondent who did not share other identifying background, commented, “There is too much information in the world. You need machines to filter through the information. Small, single-purpose, connected devices and the Internet of Things will become more prevalent in our lives, and this will make small tasks disappear. I expect algorithms to have a high impact on identity and data security. Take a look at calculus. Algorithms can be used to enforce business processes and to properly stop hackers.”

Kjartan Ólafsson, head of the department of social sciences at the University of Akureyri, Iceland, wrote, “Whether the development of algorithms is seen as positive or negative depends on various factors. The negative view might be that algorithms in the end limit people’s choice. The positive view can be that, in a world of overwhelming complexity and choice, algorithms help to structure the choices available. As humans become more experienced in navigating the online world, they will also become more aware of how algorithms are used and more able to navigate the structured online world created through the use of algorithms.”

Julian Hopkins, lecturer in communication at Monash University, Malaysia, said, “Mining big data and using algorithms is likely to assist in many challenges of human society that relate to managing resources and understanding social behaviour in contexts such as epidemics or economics. The most important probable negative outcome will be the ownership and control of these data and the algorithms by private commercial interests. There may come a point where arguments related to the public good may be deployed to open up these data and algorithms to public oversight.”

Eric Keller, retired from the US Army, said, “We already use them. They can be regulated, and predictive modeling—already common in marketing—can be used in healthcare as well as for educational outcomes.”

Marshall Kirkpatrick, co-founder of Little Bird, previously with ReadWriteWeb and TechCrunch, replied, “Most commercial entities will choose to implement algorithms that serve them even at the expense of their constituents. But some will prioritize users and those will be very big. Meeting a fraction of the opportunities that arise will require a tremendous expansion of imagination.”

LT Wilson, a respondent who did not share other identifying background, observed, “Similar to the scientific method, the articulation of algorithms is subject to the same steps of conception, application, and validation. Overall, expansiveness will likely be realized. Algorithms, as a social technology, are prone to the same bilateral uses as all forms of technology: they can be crafted to enhance and enable or to diminish and alienate.”

David Adams, vice president of product at a new startup, said, “All technological revolutions have downsides, and this tech is complicated enough that it will take decades for us to really get it right, but the benefits are potentially awesome. I fear that our legal and political system will struggle with the issues you raise, because of lawmakers’ technological ignorance and the likelihood of regulatory capture and lobbying by tech firms. This could be mitigated by requiring that important algorithms that are government-related or serve the public good be open source. If source code can’t be examined, then trusting it is hazardous. Overreach in intellectual property in general will be a big problem in our future.”

Maria Pranzo, director of development at The Alpha Workshops, wrote, “We just bought my father an Amazon Echo, and it was like talking to the Enterprise’s computer: a glimpse into the future. While there is certainly the strong possibility that discrimination and social engineering can take place through the use of algorithms, that same possibility exists in all media. Perhaps an oversight committee—a new branch of the FCC made up of coders—can monitor new media using algorithms of their own, sussing out suspicious programming—a watchdog group to keep the rest of us safely clicking.”

Kirk Munsell, a Web developer/producer for US science and technology projects, wrote, “As new generations arrive, their perspective of the world starts with an understanding that technology is interwoven into their daily routine. Only the most skeptical will think to question the results of their searches, their social networks, and whether they should share information for their own benefit.”

Travis, US military medical, said, “1) So much time saved. Efficiency. 2) Human manipulation of the algorithms to better only whom they want. 3) Health care, not so much. Consumer choice—I did not even think of this; it could affect it greatly, and it would be terrible. Dissemination of news: same as above. Educational opportunities: no, I don’t see any connection. 4) Generally positively. 5) The same as now, just a new look. 6) Human observation, or even test groups.”

Stephan G. Humer, head of the internet sociology department at Hochschule Fresenius Berlin, commented, “The overall effect of algorithms will be positive, because there are many people—and even societies as a whole—who are interested in shaping a better digital future. Only societies that lack a digital spirit will have to fight with more or less negative outcomes, e.g., Germany. So, it depends on the digital culture a society has. The more digitality is wanted and shaped, the better the net overall effect of algorithms will be.”

Rob Smith, software developer and privacy activist, observed, “My answer is that positives will outweigh negatives, but with a very strong caveat: We’ll all have to become a lot better at understanding the value of our personal data; more technologically and socially savvy when it comes to privacy protection; and devices and software will have to be a whole lot more secure. Some of the major payoffs will likely be in the form of convenience. Algorithms that can help us structure our day and slide in services as and when we need them (whether we know we need them or not) will not immediately revolutionise the world, but they could make it more convenient. It is to be hoped, however, that this will spread from such things as recommendations for shopping and on-the-fly auctions for the best food delivery service or taxi or morning alarm algorithm to address some of society’s more troublesome problems. For example, could an algorithmic approach to service provisioning help the more vulnerable members of society? Could poorer people and society as a whole benefit? There are some areas in which this is already having an impact. Car sharing and food distribution schemes are popping up and might arguably have some long-term benefits (to the disadvantaged and to the environment in these cases). Algorithmic approaches might help us to better target charitable donations (for example, if an algorithm knows about the causes I’ve supported in the past, it might provision services that donate money to related charities or have a policy of protecting certain minorities). Unfortunately, such technologies are at least equally likely to harm as to benefit disadvantaged people. As always, it depends on how they are used and there has to be a pre-existing will to help the disadvantaged before services will arise that do significant good. I’m not particularly optimistic that it will happen. The major downside is that in order for such algorithms to function, they will need to know a great deal about everyone’s personal lives. In an ecosystem of competing services, this will require sharing lots of information with that marketplace, which could be extremely dangerous. I’m confident that we can in time develop ways to mitigate some of the risk, but it would also require a collective acceptance that some of our data is up for grabs if we want to take advantage of the best services. That brings me to perhaps the biggest downside. It may be that in time people are in practical terms unable to opt out of such marketplaces. They might have to pay a premium to contract services the old-fashioned way. In summary, such approaches have a potential to improve matters, at least for relatively rich members of society and possibly for the disadvantaged. But the price is high and there’s a danger that we sleepwalk into things without realising what it has cost us.”

David Wuertele, a software engineer for a major company innovating autonomous vehicles, noted, “I am optimistic that the services engineers build are capable of being free of discrimination, and engineers will try to achieve that ideal. I expect that we will have some spectacular failures as algorithms get blamed for this or that social tragedy, but I believe that we will have an easier time fixing those services than we will have fixing society.”

Theo Armour, coder, commented, “The improvements in algorithms in the last ten years have been astonishing. I can now do in hours what it used to take a cohort weeks to do.”

Eric Marshall, a systems architect, replied, “Algorithms are tweaked or replaced over time. Similar to open source software, the good will outweigh the bad, if the right framework is found.”

Ed Dodds, a digital strategist, wrote, “Algorithms will force persons to be more reflective about their own personal ontologies, fixed taxonomies, etc., regarding how they organize their own digital assets or bookmark the assets of others. AI will extrapolate. Users will then be able to run thought experiments in natural-language queries, such as: ‘OK, show the opposite of those assumptions.’ A freemium model will let users decide whether inputting their own preferred filters is of enough value.”

David Williams, a respondent who did not share other identifying background, said, “In general, the improved, enhanced algorithms will provide better-tuned and more focused information streams. The two dark areas I see are these: First, the profit motive of many information-generating or -disseminating organizations will require them to focus first on their bottom line rather than the common good. Second, there are the feedback loops likely to be created by information filtered by algorithms tuned to individual preference; for example, I’ll see news from a point of view I agree with and little from opposing points of view (which might sway me, if I were more exposed to them).”

Karel Kerstiens, retired from the US Air Force, said, “This statement: ‘The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts,’ indicates an inability of society to self-regulate. The power and collective goodness of the people should never be underestimated.”

Masha Falkov, artist and glassblower, said, “This is a debate as old as the concept of robotics itself. It takes on a different tone when it begins to actually pervade every aspect of our lives. Algorithms are useful because they can help reduce the effort of calculating decisions. However, real life does not always mimic mathematics. Algorithms have a limited number of variables, and often life shows that it needs to factor in extra variables. There should always be feedback in order to develop better variables, and human interaction when someone falls through the cracks of the new normalcy as defined by the latest algorithm. We have relied on simple algorithms before, for example with the SAT test. It was supposed to sort students by effort and intelligence for colleges. What happened was that it became easy for affluent students with access to expensive SAT courses to learn how to figure out patterns within the test, faster than students without access to those courses. So this system isn’t perfect—yet it still isn’t updated, and there is some discrimination by family income as a result. Health care will have the strongest positive and negative results. Positive, because a complex flurry of symptoms can now be funneled into an algorithm with a database of diseases rather than relying on the imperfect memory and intuition of a human being. Negative, because there is still much stigma attached to certain diseases—STDs, drug abuse, mental illness—which a person may not want on their record and will suddenly be exposed for their insurance companies and employers to see. A person may be otherwise a good person in society, but they may be judged for factors over which they do not have any control. For this reason it is important to moderate algorithms with human judgment and compassion. Already we see every day how insurance companies attempt to wrest themselves out of paying for someone’s medical procedure. The entire healthcare system in the US is a madhouse presently moderated by individuals who secretly choose to rebel against its tyranny: doctors who fight for their patients to get the medicine they need, operators within insurance companies who decide not to deny the patient a service, at the risk of their own jobs. Our artificial intelligence is only as good as we can design it. If the systems we are using presently do not evolve with our needs, algorithms will be useless at best, harmful at worst.”

Shawn Otto, organizational executive, speaker, and writer with ScienceDebate.org, commented, “Generally in our history we tend to quickly adopt and commercialize new technologies like these without full investigation or knowledge of the potential consequences. I have documented this in past writings as a seven-stage process. By the time the consequences become clear and we move to regulate, an entrenched vested economic interest has developed based on the now-old technology that often invests in anti-science public relations campaigns to create uncertainties about the new science suggesting regulation, in the hopes of forestalling it. At this point in time, there are some good reasons to expect that this cycle will repeat with the growth of algorithms and AI.”

James McCarthy, manager, commented, “It has already been shown that algorithms tend to be discriminatory. It’s not their fault; they’re just sequences of actions. But the truth is, they’re products of human beings, who are far from the ideal arbiters of justice, fairness, or real-world exigencies. Shutting down my debit card due to fraudulent activity on my account on the day I’m supposed to pay my rent (and triggering a phone call to a number I no longer own and couldn’t use due to my deafness anyway), for example, or allocating insufficient insulin at the local pharmacy for my friend on the week she needs to fill her prescription. Sometimes stuff just happens that can’t be accounted for by even a sufficiently-complex rules set, and I worry that increasing our dependency on algorithmic decision-making will also create an increasingly-reductive view of society and human behavior.”

Andrew Walls, managing vice president at Gartner, wrote, “In specific technical pursuits (e.g., medical diagnoses) algorithmic approaches will yield significant benefit. In less well-defined areas, the benefits will be less tangible and the liabilities will be significant. Algorithms are not amoral or apolitical. They are designed by people and reflect the perceptions and cultural expectations of the designers. As a result, an algorithm will not consider variables and influencers that were outside the worldview of the designer. A simple example is the ongoing discussion of GDP as a measure of an economic system.”

Chris Zwemke, Web developer, observed, “I am starting from the assumption that you mean new algorithms going forward. I can’t imagine exactly what, but I can imagine there are many of life’s little decisions that will be automated in the future. The huge decisions are the ones that worry me. The fact that philosophers have spent hundreds of years without coming to a conclusion on many important life-and-death issues says to me that no computer hacker in the next decade will solve them either. Kill one person to save many? Save the elder or younger relative from the disaster? The impact of how much ice my refrigerator creates on a chilly day is minimal. How my car (or my neighbor’s car) chooses to react in a situation when somebody must be hurt is hugely important. Algorithms have authors; authors have biases. Until an algorithm is created without human intervention (which might be the creation of life) algorithms will have bias.”

Julie Gomoll, a freelancer, wrote, “The overall effect will be positive for some individuals. It will be negative for the poor and the uneducated. As a result, the digital divide and wealth disparity will grow. It will be a net negative for society.”

Jeff Kaluski, a respondent who did not share other identifying background, commented, “New algs will start by being great, then a problem will emerge. The creator will be sued in the US. The alg will be corrected. It won’t be good enough for the marginalized group. Someone else will create a better alg that was ‘written in part by marginalized group’ then we’ll have a worse alg than the original+correction.”

Seti Gershberg, executive producer and creative director at Arizona Studios, wrote, “AI and robots are likely to disrupt the workforce up to a potential 100% human unemployment. They will be smarter, more efficient, and more productive, and will cost less, so it makes sense for corporations and businesses to move in this direction. At first the shift will be a net benefit, but as AI begin to pass the Turing test and potentially become sentient and likely super-intelligent, leading to an intelligence explosion as described by Vernor Vinge, it is impossible to say what they will or will not do. If we can develop a symbiotic relationship with AI, or merge with them to produce a new man-machine species, it is likely humans would survive such an event. However, if we do not create a reason for AI to need humans, they would either ignore us, eliminate us, or use us for a purpose we cannot imagine. Recently, the CEO of Microsoft put forth a list of 10 rules for AI and humans to follow with regard to their programming and behavior as a method to develop a positive outcome for both man and machines in the future. However, if humans themselves cannot follow the rules set forth for good behavior and a positive society (i.e., the 10 Commandments—not in a religious sense, but one of common sense), I would ask the question: why would or should AI follow rules humans impose on them?”

Pete Cranston of Euroforic Services wrote, “Public engagement with the Web will proceed in waves. Smart(er) new apps and platforms will require people to learn how to understand the nature of the new experience, learn how it is guided by software, and learn to interact with the new environment. That has tended to be followed by a catch-up by people who learn then to game the system, as well as navigate it more speedily and reject experiences that don’t meet expectations or needs. The major risk is that less-regular users, especially those who cluster on one or two sites or platforms, won’t develop that navigational and selection facility and will be at a disadvantage. Algorithms designed to provide assistant services—for example, medical, or security research—will be provided by commercial services which will inevitably include a bias toward their own profit and business growth: this will threaten independent, objective advice and consultation.”

Elisabeth Gee, a professor at Arizona State University, commented, “This is a hard call. Algorithms are increasingly necessary in the type of society we’ve created. If we’re happy with this type of society—for example, if we assume large-scale institutions and an ever-growing human population—then algorithms seem to be a useful means of coping at scale. The costs in terms of discrimination or loss of individual choice seem worth the benefits. A different vision of the future than the one we’re facing would make algorithmic approaches to our lives seem backward-thinking. After all, algorithms are based on the past, not an unanticipated future.”

Tom Ryan, CEO of eLearn Institute, Inc., observed, “Regardless of opinion or consequence, the use and collection of data will continue to grow. Certainly those privileged enough to profit from the use of data stand to benefit the most. The use of digital tools has also provided greater access to information on individuals who have betrayed the public trust.”

Dan McGarry, media director at the Vanuatu Daily Post, said, “People will continue to game the system for short-term gain, which will mitigate many of the benefits of AIs and intelligent systems. Eventually, though, humanity is always progressive. People will find a way to benefit more from positive algos than negative. That’s pretty much the essence of Smithian economics.”

Adrian Schofield, an applied research manager based in South Africa, wrote, “It is difficult to find the perfect answer when it comes to using algorithms. When they work, they save enormous amounts of time and they provide a fair outcome based on reasonable standards. However, humans design and test algorithms and inevitably build in their own prejudices and anticipation of the ‘best’ outcome.”

Katharina Anna Zweig, a professor at Kaiserslautern Technological University in Germany, commented, “Positive effects could outweigh negative ones by far—and vice versa. If we do not come up with an Algorithm Ethics and rules on where we want algorithms to make a decision and where not, things will turn out to be very bad. If we do, however, manage a well-founded understanding of when to use which algorithm, our lives might be much more informed by real data instead of subjective experiences. So: The overall impact could be very negative, very positive, or virtually anything in between.”

Ben Railton, professor of English and American studies at Fitchburg State University, wrote, “To me, algorithms are one of the least attractive parts of both our digital culture and 21st century capitalism. They do not allow for individual identity and perspective. They instead rely on the kinds of categorizations and stereotypings we desperately need to move beyond.”

Polina Kolozaridi, researcher at the Higher School of Economics, Moscow, wrote, “It is a big political question whether different institutions will be able to share their power, not knowing—obviously—how to control the algorithms. Plenty of routine work will be automated. That will lead to a decrease in people’s income unless governments elaborate some way of dealing with it. This might be a reason for big social changes—not always a revolution, but in some places a revolution as well. Of course the digital gap will widen, as people who are good at automating their labour will be able to have more benefits, as they will be a kind of slave-host for the machines. Only regular critical discussion involving more people might give us an opportunity to use this power in a proper way (by proper I mean more equal and empowering).”

Justin Reich, executive director at the MIT Teaching Systems Lab, observed, “Technology accelerates societal trends, and algorithms will work no differently. The algorithms will be primarily designed by white and Asian men—with data selected by these same privileged actors—for the benefit of consumers like themselves. Most people in positions of privilege will find these new tools convenient, safe, and useful. The harms of new technology will be most experienced by those already disadvantaged in society, where advertising algorithms offer bail bondsman ads that assume readers are criminals, loan applications penalize people for proxies so correlated with race that they effectively penalize people based on race, and similar issues. The advancing impact of algorithms in our society will require new forms and models of oversight. Some of these will need to involve expanded ethics training in computer science programs to help new programmers better understand the consequences of their decisions in a diverse and pluralistic society. We also need new forms of code review and oversight that respect company trade secrets but don’t allow corporations to invoke secrecy as a rationale for avoiding all forms of public oversight.”
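
Reich’s point about proxies can be illustrated with a minimal, hedged sketch (all data synthetic, all names hypothetical): a rule that never consults the protected attribute can still produce divergent outcomes when one of its inputs is correlated with that attribute.

```python
import random

# Hypothetical sketch of the proxy problem: a loan rule looks only at a
# zip-code feature, never at group membership, but the feature itself is
# skewed by group (e.g., historical segregation baked into the data),
# so approval rates diverge anyway.
random.seed(0)
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed skew: group A's proxy scores center higher than group B's.
    zip_score = random.gauss(0.6 if group == "A" else 0.4, 0.1)
    applicants.append((group, zip_score))

def race_blind_rule(zip_score):
    return zip_score > 0.5  # never consults the group label

for g in ("A", "B"):
    scores = [z for grp, z in applicants if grp == g]
    rate = sum(race_blind_rule(z) for z in scores) / len(scores)
    print(f"group {g}: approval rate {rate:.0%}")
```

Under these invented numbers the “blind” rule approves roughly 84% of one group and 16% of the other, which is exactly the effective penalty-by-proxy Reich describes.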

Noah Grand, a respondent who did not share other identifying background, commented, “Every expert likes to criticize Facebook. It’s like the kiddie pool of social networking sites. Even more importantly, everything that appears when you log in is determined by algorithm. But Facebook is also the most popular social networking site. I don’t think this is an accident. My media research focused on the question ‘what do people consider news?’ Imagine if you had to go out and search for the most newsworthy stories of the day yourself. How could you comb through all the possibilities? It’s impossible. There is too much information. Even professional journalists have to limit their searches. Most people don’t want to spend that much time searching for news or reading it, so Facebook using algorithms to do the search for them actually gives people what they want. Concerns about an echo chamber are certainly valid. Algorithms help create the echo chamber. It doesn’t matter if the algorithm recognizes certain content or not. In politics and news media it is extremely difficult to have facts that everyone agrees on. Audiences may not want facts at all. To borrow from Stephen Colbert, audiences may prefer ‘truthiness’ to ‘truth.’ Algorithms that recognize ‘engagement’—likes, comments, retweets, etc.—appear to reward truthiness instead of truth.”
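
Grand’s closing observation, that engagement-driven ranking has no term for truth, is easy to see in a toy scorer (titles, weights, and numbers are invented for illustration):

```python
# Hypothetical sketch: a feed ranked purely by engagement signals.
# Note that the `accurate` field never enters the score, so a
# false-but-shareable story outranks a true-but-dry one.
posts = [
    {"title": "dry but accurate report", "likes": 120, "shares": 15, "accurate": True},
    {"title": "outrage bait", "likes": 900, "shares": 400, "accurate": False},
]

def engagement_score(post):
    # Assumed weighting: shares count five times as much as likes.
    return post["likes"] + 5 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])
```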

Sam Ladner, a respondent who did not share other identifying background, commented, “Too few social scientists and humanists are involved in algorithm development. Moreover, those who are involved somehow believe that their work is ‘neutral’ or ‘more objective’ than other kinds of categorization.”

David Collier-Brown, a respondent who did not share other identifying background, commented, “As with all new areas, how we start off will be very important: catching bad schemes and nipping them in the bud can be critical. In my case, a bias toward selecting employees based on the social activities of a young staff would tend to disqualify me from my current (dream!) job, since I have older hobbies and social patterns.”

Malcolm Pell, an IT consultant, said, “The costs of developing and supporting high-quality algorithms limit their rollout to systems where the costs can be justified.”

Eduardo Villanueva-Mansilla, associate professor at Pontificia Universidad Católica del Perú, observed, “Transborder algorithms can hinder the autonomy of states and businesses in emerging economies, as they make decisions based on the interests, principles, and customs of their original country/corporation. Only through the recognition of specific risks and incorporation of some governance mechanisms at international, regional, and local levels will the risks mentioned be softened.”

Richard Lachmann, professor of sociology at the University at Albany, wrote, “Algorithms determine the information people see online. Since they are made mainly by for-profit companies to maximize audiences, they have the effect of sending users to popular sites. In that way, diverse and dissident voices are slighted.”

James Hinton, a writer, commented, “On the one hand, the algorithms we already have through such entities as Google allow the internet to tailor things to our particular wants and needs. Google knows, for instance, that my personal hobbies mean a search for the word ‘Berserk’ is aimed at the Japanese manga series rather than Scandinavian warriors. However, at the same time, the fact that the internet can, through algorithms, be used to almost read our minds means those who have access to the algorithms and their databases have a vast opportunity to manipulate large population groups. The much-talked-about ‘experiment’ conducted by Facebook to determine if it could manipulate people emotionally through deliberate tampering with news feeds is but one example of both the power, and the lack of ethics, that can be displayed.”

Peter Brantley, director of online strategy at the University of California-Davis, commented, “The trend toward data-backed predictive analytics and decision-making is inevitable. While hypothetically these could positively impact social conditions, opening up new forms of employment and enhanced access and delivery of services, in practice the negative impacts of dissolution of current employment will be an uncarried social burden. Much as the costs of 1960-80s deindustrialization were externalized to the communities which firms vacated, with no accompanying subvention to support their greater needs, so will technological factors continue to tear at the fabric of our society without effective redress, creating significant unrest and upheaval. Technological innovation is not a challenge well accommodated by the current American capitalist system.”

Chris Kutarna, author of Age of Discovery and fellow at the Oxford Martin School, wrote, “Algorithms are an explicit form of heuristic, a way of routinizing certain choices and decisions so that we are not constantly drinking from a fire hydrant of sensory inputs. That coping strategy has always been co-evolving with humanity, and with the complexity of our social systems and data environments. Becoming explicitly aware of our simplifying assumptions and heuristics is an important site at which our intellects and influence mature. What is different now is the increasing power to program these heuristics explicitly, to perform the simplification outside of the human mind and within the machines and platforms that deliver data to billions of individual lives. It will take us some time to develop the wisdom and the ethics to understand and direct this power. In the meantime, we honestly don’t know how well or safely it is being applied. The first and most important step is to develop better social awareness of who, how, and where it is being applied.”

Rebecca MacKinnon, director of Ranking Digital Rights at New America, commented, “Recent research and reporting shows that algorithms are not neutral: they reflect the conscious or often unconscious biases not only of their creators but of the critical mass of people who generate the datasets they are working with. Algorithms driven by machine learning quickly become opaque even to their creators who no longer understand the logic being followed to make certain decisions or produce certain results. The lack of accountability and complete opacity is frightening. On the other hand, algorithms have revolutionized humans’ relationship with information in ways that have been life-saving and empowering and will continue to do so.”

Paul Lehto, an author, observed, “Unless the algorithms are essentially open source and as such can be modified by user feedback in some fair fashion, the power that likely algorithm-producers (corporations and governments) have to make choices favorable to themselves, whether in internet terms of service or adhesion contracts or political biases, will inject both conscious and unconscious bias into algorithms.”

Randy Albelda, professor of economics at the University of Massachusetts-Boston, replied, “People use algorithms in their heads. They just aren’t as quick in processing tons of data. So in some cases the algorithms might predict better. The issue is how they are used and for whose purpose. If it is all about making money, then the net effect will largely be negative for people. There can be 10 different colors of cars or 20 different kinds of sweetened cereals, and we can have the ‘right’ one marketed to us, but that is not about real choices in our lives. Access to information is remarkable, but that doesn’t mean you can discern what is useful or not. Further, it hasn’t prevented the continued growth in income inequality. Why would it start now? My research is on poor people. I’ve been doing this for a long time (close to 30 years). And no matter how much information, data, and empirical evidence is presented about poor people, we still have horrible anti-poverty policies, remarkable misconceptions about poor people, and lots more poor people. Collecting and analyzing data does not ‘set us free.’ ‘Facts’ are convenient. Political economic forces shape the way we understand and use ‘facts/data’ as well as technology. If we severely underfund health care, or much of our health care dollars get sucked up by insurance companies, algorithms will be used to allocate insufficient dollars to patients. It will not improve health care. And those who can afford it will circumvent the algorithm if it does not match the health care they think they need and just purchase it outright.”

John Bell, architect at Dartmouth College, wrote, “This question is so broad as to be meaningless. Algorithms are too ubiquitous to really say that they will have a net positive or negative impact. It’s like asking if the screwdriver will have a positive or negative impact. They will have a transformative impact, positive in some ways but negative in others, but society with and without either are not directly comparable.”

Amali De Silva Mitchell observed, “Predictive modeling will limit individual self-expression, and hence innovation and development. It will cultivate a spoon-fed population, with those in the elite being the innovators. There will be a loss in the complex decision-making skills of the masses. Kings and serfs will be made, the opportunity for diversification lost, and then perhaps even global innovative solutions lost. The costs of these systems will be too great to overturn if built in at a base level. The current trend toward the uniform will be our undoing; instead we should build platforms that can communicate with everything, so that innovation remains key and people can get the best opportunities. Algorithms are not the issue; the issue is a standard algorithm.”

David Banks, co-editor of Cyborgology, replied, “This question encourages answers that miss the important distinction here: algorithms are the product of human labor and, like all human endeavours, are capable of propagating human biases, often outside of their creator’s explicit intentions. Therefore, algorithms are only as dangerous to human flourishing as we are willing to ignore that they are an extension of human will.”

Amber Tuthill, a survey respondent who shared no additional identifying details, replied, “Algorithms + data = intelligent well-informed analysis. Intelligent well-informed analysis can be used for good and for bad. Powerful entities such as companies and governments use this information to create a custom-tailored market for consumers. It is also possible that powerful forces will attempt to create paradigms in their own self-interest using data and algorithms, such as in the form of a fake mass. Algorithms can be used to feed users targeted media. And though this media is based upon the user’s preferences, it could be cleverly steered in a particular direction using bias in the interest of those who created the algorithms. Ultimately, algorithms’ positive or negative virtue depends upon the hands they are in. If the everyday internet user takes it upon him or herself to create and use algorithms favorably then we will see a positive gain with the use of algorithms. Unfortunately, everyday users of the internet seem more interested in consuming the fruits of the internet than they are in building upon the structures of the internet.”

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “The benefits of algorithms will allow more-tailored information, products, and services, resulting in everyday life efficiencies. In theory, this approach is great. Unfortunately, the technology can be abused or can unintentionally be developed or trained with bias. Every human is different, so an algorithm surrounding health care could tailor a patient’s treatment plan. It could also have the potential to serve the interests of the insurance company over the patient. Facebook is currently receiving a lot of criticism for the bubble it has created for users with its News Feed algorithm. Algorithms surrounding news and information are beneficial when it comes to sorting out the spam that you’re not interested in. The trouble comes when the algorithm thinks an important piece of information is not interesting to you because of past behavior, so it does not serve that information to you, resulting in unintentional censorship. The technology will continue to improve, though I don’t see the potential for negative results going away anytime soon.”
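
The failure mode Sweeney describes can be sketched in a few lines (item names, scores, and the threshold are all invented): any feed that drops items below a predicted-interest cutoff will also silently drop important items the past-behavior model never learned to value.

```python
# Hypothetical sketch: predicted-interest scores from a past-behavior model.
predicted_interest = {
    "celebrity gossip": 0.91,          # user clicked similar items before
    "local election coverage": 0.12,   # user never clicked politics
    "spam giveaway": 0.05,
}
THRESHOLD = 0.3  # assumed cutoff for inclusion in the feed

feed = [item for item, score in predicted_interest.items() if score >= THRESHOLD]
print("shown:", feed)  # the election story is filtered out along with the spam
```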

Jennifer A. Dukarski, attorney at Butzel Long, commented, “In terms of the automotive realm, safety will be significantly improved. With the average age of a vehicle on the road being 11.4 years, as time passes, new algorithms will enhance driver safety. This will be true, even as we learn to address concerns brought by the recent Tesla accident.”

Nick Tredennick, technology analyst, replied, “Positives always outweigh the negatives in a free exchange of information, goods, and services. The alternative is to give more power to designated experts, who invariably cause more problems by making unilateral decisions restricting choices for others.”

Karl M. van Meter, sociological researcher and director of the Bulletin of Methodological Sociology, Ecole Normale Supérieure de Paris, said, “You can’t honestly answer this question Yes, No, or 50-50 since algorithms only do what programmers ask them to do, and programmers only do what their bosses ask them to do. So the question is really, ‘Will the net overall effect of the next decade of bosses be positive for individuals and society or negative for individuals and society?’ Good luck with that one.”

Matt Mathis, a respondent who did not share other identifying background, observed, “By comparing hand-crafted ‘fair’ algorithms to statistically derived automatic algorithms, we will gain diagnostic visibility into social problems. We can, by policy, choose to suppress some of the automatically detected bias.”
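
One hedged reading of Mathis’s proposal, as a sketch (synthetic data; the feature names and thresholds are hypothetical): score the same population with a hand-crafted rule that uses only a job-relevant feature, and with a stand-in for a statistically derived rule that absorbed a biased historical feature, then compare selection rates per group. The divergence between the two rules is the diagnostic signal he describes.

```python
import random

# Hypothetical sketch: "insiders" carry a legacy advantage that a model fit
# to historical outcomes would absorb; the hand-crafted rule ignores it.
random.seed(2)
people = [{"skill": random.random(),
           "group": g,
           "legacy_bias": 0.3 if g == "insider" else 0.0}
          for g in ["insider", "outsider"] for _ in range(500)]

def handcrafted(p):   # fair by construction: job-relevant feature only
    return p["skill"] > 0.5

def learned(p):       # stand-in for a rule derived from biased history
    return p["skill"] + p["legacy_bias"] > 0.65

for g in ("insider", "outsider"):
    rows = [p for p in people if p["group"] == g]
    fair = sum(map(handcrafted, rows)) / len(rows)
    auto = sum(map(learned, rows)) / len(rows)
    print(f"{g}: hand-crafted {fair:.0%}, learned {auto:.0%}, gap {abs(auto - fair):.0%}")
```

Under these invented numbers both groups show a roughly 15-point gap in opposite directions, the kind of automatically detected bias Mathis suggests policy could then choose to suppress.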

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, wrote, “I only picked the ‘positive will outweigh’ answer because the more appropriate one wasn’t listed: We don’t have a firm grasp of the negative externalities that will arise with growing algorithmic dependency, and so a real comparative judgment of commensurate pros and cons can’t be made. The more algorithmic advice, algorithmic decision-making, and algorithmic action that occurs on our behalf, the more we risk losing something fundamental about our humanity. But because being ‘human’ is a contested concept, it’s hard to make a persuasive case for when and how our humanity is actually diminished, and how much harm each diminishment brings. Only when better research into these questions is available can a solid answer be provided as to whether more positive or negative outcomes arise.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, observed, “Even though I have been, and continue to be, a critic of the sorts of decisions shaping the distributions of opportunity that matter to most of us, I do see the benefits as outweighing the harms. That doesn’t mean that we should ignore the harms just because on some balance the total benefits are greater. What will continue to matter, of course, is the distribution of these benefits and harms. The most important or concerning harms are those associated with access to opportunity. Because of the nature of algorithmic assessments, we should expect that the distribution of these harms will tend to reinforce the influence of sources of harm and the social disparities that result. Social theorists these days talk about ‘intersectionality’ as a way of underscoring the connections between all of the sectors of one’s life. This means, of course, that all of the dimensions you identify in asking which will be the most affected are connected in some ways, some of which are, of course, more tightly connected than others. This may mean that our choices with regard to the dimensions of life that will be more affected reflect our own theoretical assumptions about their importance for quality of life, now and in the future, for segments of the population. For me, then, health care, both the care provided and the aspect of well-being associated with one’s own behavior, is likely to be shaped by algorithmic assessments, which will shape the kinds of health-relevant choices that some consumers will face. Concerns about ‘surveillance capitalism’ (see the work of Shoshana Zuboff) lead me to doubt that algorithmic assessments will provide the opportunity, or the appropriate ‘nudges,’ that people will need to make informed choices in this area. I am almost as concerned about the kinds of threats to the democratic process (with regard to your references to making things more ‘convenient for citizens’). I see algorithmic assessments facilitating the strategic targeting of persuasive messages designed to move members of the public toward supporting or opposing policies and candidates that will not often be in their, or their nation’s, best interests. I have suggested that we need to re-authorize something akin to the former US Office of Technology Assessment that would engage in routine audits of the use of algorithms for the delivery of opportunity and resources. One of the primary concerns of these audits should be the distributional impacts (and interaction effects) associated with their use, especially by commercial entities, but in the context of expanded business/government ‘partnerships,’ those uses also should be included in these assessments. We will need to consider the development of systems for delivering compensation to the victims, as well as punishments for those engaged in irresponsible use of this technology.”

Joan Noguera, professor at the University of Valencia (Spain) Institute for Local Development, wrote, “I am convinced that algorithms will have a more positive contribution than potential damage. Of course, for this to occur, a continuous evaluation of their implementation and results should be an integral part of the system. And companies and institutions should have the capacity for early reaction when problems are detected, to analyse their causes and improve the implementation.”

Fredric Litto, emeritus professor of communications at the University of São Paulo, Brazil, said, “In the early 1960s, I was one of the leaders of Sam Hayakawa’s attempt to stop the changeover from the use of telephone number prefixes with local historical significance (i.e., Yu(kon) 5-5555 in San Francisco) to a new all-numeric digital dialing system. The argument was humanistic. My picture came out in the Time magazine article on the subject. But we lost. There will be advantages in greater number and disadvantages to the increase in the use of algorithms—the principal advantage could be the ‘customization’ of an individual’s preferences and tastes—something heretofore granted only to the very well off. If there is, built-in, a manner of over-riding certain classifications into which one falls, that is, if one can opt-out of a ‘software-determined’ classification, then I see no reason for society as a whole not taking advantage of it. On the other hand, I have ethical reservations about the European laws that permit individuals to ‘erase’ ‘inconvenient’ entries in their social media accounts. I leave to the political scientists and jurists (like Richard Posner) the question of how to legislate humanely the protection of both the individual and society in general.”

Dave Kissoondoyal, CEO of KMP Global Ltd., wrote, “Algorithms will definitely continue to influence people and their surroundings. However, the positive impacts will outweigh the negative ones, as algorithms will bring effectiveness, accuracy, and quickness to one’s job.”

David Karger, professor of computer science at MIT, said, “Algorithms are just the latest tools to generate fear as we consider their potential misuse, like the power loom (put manual laborers out of jobs), the car (puts kids beyond the supervision of their parents), and the television (same fears as today’s internet). In all these cases there were downsides, but the upsides were greater. The question of algorithmic fairness and discrimination is an important one, but it is already being considered and I am confident that it can be addressed, though never perfectly. Discrimination predates algorithms; the primary challenge is social (to remove it from society) rather than technological. If we want algorithms that don’t discriminate, we will be able to design algorithms that do not discriminate. Of course there are ethical questions: if we have an algorithm that can very accurately predict whether someone will benefit from a certain expensive medical treatment, is it fair to withhold the treatment from people the algorithm thinks it won’t help? But the issue here is not with the algorithm but with our specification of our ethical principles.”

Hume Winzar, associate professor of business at Macquarie University in Sydney, Australia, wrote, “While we will see things such as health care arriving before we are even aware of a problem, and many other services, the negatives suggested are very real. Banks, governments, insurance companies, and other financial and service providers will use whatever tools they can to focus on who the risks are. It’s all about money and power.”

Susan Price, digital architect and strategist at Continuum Analytics, commented, “Positive changes will include allowing humans to learn more, more quickly, and to understand and adapt to conditions as revealed by data. The data will mirror the flaws and biases in our data sets; the algorithms we embed in our artificial intelligence are created by humans and will likely also mirror those flaws and biases. All such systems require both transparency and regular auditing to ensure biases are discovered and mitigated, on an ongoing basis. The transparent provenance of data and transparent availability of both algorithms and analysis will be crucial to creating the trust and dialog needed to keep these systems fair and relatively free of bias. This necessary transparency is in conflict with the goals of corporations developing unique value in intellectual property and marketing. The biggest challenge is getting humans in alignment on what we collectively hope our data will show in the future—establishing goals that reflect a fair, productive society, and then systems that measure and support those goals. Predictive modeling has promise to alleviate traffic congestion, inform housing and civil engineering projects, food production, and distribution.”
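
The “regular auditing” Price calls for could start as simply as the four-fifths rule long used in US employment-discrimination analysis: compare selection rates across groups and flag any ratio below 0.8 for human review. A minimal sketch, with invented group names and counts:

```python
# Hypothetical sketch of a four-fifths-rule audit. The outcome counts are
# invented; in practice they would come from logs of an ad, loan, or hiring
# system: group -> (selected, total).
outcomes = {
    "group_a": (480, 1000),
    "group_b": (310, 1000),
}

rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

An audit this simple obviously cannot prove fairness, but it is the kind of cheap, repeatable check that could run on every release of a scoring system, in the spirit of the ongoing transparency Price describes.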

David Bernstein, a former research director, wrote, “I can see how using predictive behaviour could lead to discrimination in some strange cases, but to the extent that those doing the analysis and constructing the algorithms are aware of potential biases, it does not seem any more likely to discriminate than what we do with targeted advertising today. I have seen how one search on the internet then shows up in my email, in ads on other search platforms, and so on. It can be frustrating and even annoying, but not discriminating, since it is based on my own behavior. Today’s recommendation engines will hopefully become more fine-tuned. The more important aspect of this technology is for the individual to be able to alter the recommendations, predictive behaviour, etc., as his needs change. If you are suggesting that algorithms may be used to decide what news or entertainment I have access to, that’s a show-stopper. That’s Big Brother, and it must never get to that point. However, that’s not generally the fault of the technology. It is the people who are implementing the technology who will be the culprits. Algorithms could easily help doctors in the ER determine what the best course of action may be by taking into consideration the wealth of information about the individual on the gurney. BUT the doctor still has to make the final decision, just as I will always want the ability to grab the steering wheel in a self-driving auto. If these can help me plan my shopping, vacations, and entertainment choices, warn me before my water heater or AC is about to quit, and so forth, I am very willing to work with them. Just be sure to show me the off button before you leave it with me.”

Christian Dawson, a survey participant who shared no additional identifying details, commented, “On a long enough timeline, the positives will outweigh the negatives, but we will be facing major civil liberties crossroads as well as having to make decisions about how exploitative large companies are allowed to be, while utilizing the data they have in their possession about you. There will be debates on all of these, but the arc of society bends toward justice and we will eventually learn to use these tools ethically and responsibly for the most part.”

Joanna Bryson, senior associate professor at the University of Bath, wrote, “First, there are two major caveats. This is not a good usage of the technical term ‘algorithm,’ though it’s becoming a dominant one so I know what you actually mean. Second, I’m less certain about this, particularly in the near term (< 30 years), given the other societal pressures, e.g., climate change, population growth. We could be in for a bad time in the immediate future. AI being just an extension of human control/governance could in that case also be predominantly used for ill. But on average I expect that given the large number of smart people working on the problems, there will be more good than ill.”

George McKee, a retiree, replied, “Algorithms always have bugs, and as algorithmic organizational structures become more pervasive the impact of those bugs will become greater. The greatest danger is that systems will become more complex than anyone can understand, and it will be impossible to make improvements without introducing more problems than before. Fred Brooks wrote about this in 1975 in The Mythical Man-Month. Many organizations may have passed this complexity threshold already, contributing to the economic stagnation that has economists puzzled.”

Larry Magid, CEO of ConnectSafely.org, said, “Even though it may be difficult to decipher the algorithms themselves, their effect will be heavily scrutinized and there will be enormous pressure from stakeholders, including governments, to make these more positive than negative.”

Manoj, an engineer working in Singapore, replied, “Most positive impact: Enhanced customer reach, more-targeted approach. Negative impact: Alienated or closely monitored feeling of the customer and discrimination regarding change—people hoping for change may be left out.”

Mary K. Pratt, freelance journalist, commented, “Algorithms have the capability to shape individuals’ decisions without them even knowing it, giving those who control the algorithms (in how they’re built and deployed) an unfair position of power. So while this technology can help in so many areas, it does take away individual decision-making without many even realizing it.”

Eelco Herder, senior researcher at the L3S Research Center, based in Germany, observed, “Fear of control by government, artificial intelligence, or robots is nothing new. Following on Brave New World and 1984, the dangers of the loss of privacy and the bad influence of powerful algorithms (or robots) have been covered in books like The Circle. Stephen Hawking specifically warns us of the risks of artificial intelligence. To a certain extent, these fears have their justification, as negative effects are easy to achieve. On the other hand, we as a society decide in which direction we want to go, what is accepted and what is not, and how technology should—or should not—be used. This socio-technological engineering has had and will have positive and negative artifacts. I trust and hope that humankind will find a good balance.”

Dave McAllister, director at Philosophy Talk, said, “We will find ourselves automatically grouped into classes (a caste system) by algorithms. While this may make us more effective at finding the information we need while drowning in a world of big data, it will also limit the scope of synthesis and serendipitous discovery.”

Karen Mulberry, a director, commented, “Algorithms will drive behavior both good and bad. For example, shoppers are being targeted by algorithms, based on purchases and location, with suggestions to purchase products. This type of data gathering is also being used to determine our daily patterns and activities for us. I am not sure that taking the freedom to choose out of the mix, leaving only the ability to choose from a targeted menu of opportunities, should be allowed. It also limits opportunities to only those that are presented based on a preconceived set of data that may not really be applicable to everyone and how they have been cast into particular ‘buckets’ by the algorithm’s design. Plus the design could be flawed, further limiting the value of the output.”

Andrew Nachison, founder at We Media, observed, “The positives will be enormous—better shopping experiences, better medical experience, even better experiences with government agencies. Algorithms could even make ‘bureaucrat’ a friendlier word. But the dark sides of the ‘optimized’ culture will be profound, obscure and difficult to regulate—including pervasive surveillance of individuals and predictive analytics that will do some people great harm (Sorry, you’re pre-disqualified from a loan; sorry, we’re unable to sell you a train ticket at this time). Advances in computing, tracking, and embedded technology will herald a quantified culture that will be ever more efficient, magical, and terrifying.”

Joseph Turow, a communications professor at the University of Pennsylvania, said, “Algorithms will be useful to individuals for helping people perform everyday tasks and navigate perceived complexities of life. A problem is that even as they make some tasks easier for individuals, many algorithms will chip away at their autonomy by using the data from the interactions to profile them, score them, and decide what options and opportunities to present them next based on those conclusions. All this will be carried out by proprietary algorithms that will not be open to proper understanding and oversight even by the individuals who are scored.”

Hilary Swett, a librarian, commented, “Algorithms can certainly have a positive impact on our lives by tailoring our experiences to our needs, expectations and desires. Algorithms cut out the extra stuff (news, info, choices) and leave us with just the stuff we want or expect. The problem is, they cut out that extra stuff, the stuff that makes life varied, unpredictable, interesting, boring, exciting, annoying. In short, they keep us from our humanity. This sounds bad but it doesn’t have to be. It just means that new ways of interacting with the world will have to be created. And it means that we humans will have to work harder to regain our humanity.”

Jan Schaffer, executive director at J-Lab, predicted, “The public will increasingly be creeped out by the non-stop data mining. I certainly am.”

Joel Barker, futurist and author at Infinity Limited, said, “Criminals can write algorithms, too. I believe this is going to be a shoot-out at the OK Corral.”

Daniel Menasce, professor of computer science at George Mason University, wrote, “Algorithms have been around for a long time, even before computers were invented. They are just becoming more ubiquitous, which makes individuals and society at large more aware of their existence in everyday devices and applications. The big concern is the fact that the algorithms embedded in a multitude of devices and applications are opaque to individuals and society. Consider for example the self-driving cars currently being developed. They certainly have collision-avoidance and risk-mitigation algorithms. Suppose a pedestrian crosses in front of your vehicle. The embedded algorithm may decide to hit the pedestrian as opposed to ramming the vehicle into a tree because the first choice may cause less harm to the vehicle’s occupants. How does an individual decide if he or she is OK with the myriad decision rules embedded in algorithms that control one’s life and behavior without knowing what the algorithms will decide? This is a non-trivial problem because many current algorithms are based on machine learning techniques and the rules they use are learned over time. Therefore, even if the source code of the embedded algorithms were made public, it is very unlikely that an individual would know the decisions that would be made at run time. In summary, algorithms in devices and applications have some obvious advantages but pose some serious risks that have to be mitigated.”
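
Menasce’s run-time point can be demonstrated with a toy learner (all data synthetic): the source code below is fully readable, yet the decision rule it applies exists only as numbers learned from whatever data it happened to see, so publishing the code alone reveals little about its decisions.

```python
import random

# Hypothetical sketch: a minimal perceptron. Reading this source tells you
# the training procedure, but not the decision boundary, which depends
# entirely on the (possibly private, possibly shifting) training data.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

random.seed(3)
data = [([random.random(), random.random()], 0) for _ in range(50)] + \
       [([random.random() + 1, random.random() + 1], 1) for _ in range(50)]
w, b = train(data)
# The "rule" is just these learned numbers; different data, different rule.
print("learned weights:", [round(v, 2) for v in w], "bias:", round(b, 2))
```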

Aj Reznor, vulnerability and network researcher at a Fortune 500 company, observed, “While ‘algorithms’ can apply to things like search engines, algorithms will also be used in research—finding methods to best insert invasive advertising into our lives, for example. Previously we’ve seen the obvious: banner ads, popups, interstitials. Rather than rely on human imagination and observation to decide the next hot space to sell advertising real estate, it’s only a matter of time until a method is created to find all avenues into our daily lives to propagate advertising. Of course, as with banner ads, popups, and other past and current methods, communities will appear that develop methods to circumvent or prevent these new methods. Interfering with the algorithms directly would likely prove difficult, as they would be most effective observing societies and life at a very high level, and small anti-advert communities would barely be a blip on the radar, let alone able to seed a larger movement that could create enough ‘bad’ data to taint any output from such an algorithm; the response would have to come in the form of dissent at the consumer endpoint (as with current ad-blocker technology).”

Antero Garcia, assistant professor at Colorado State University, wrote, “The biggest problem with putting a lot of hope into the transformative effects of algorithms is that the fact that algorithms are created by people is largely ignored. For every Google Photos incident that mistakenly identifies black people as gorillas, we need to remember that there are human beings who developed, wrote, and implemented these algorithms. We do not escape bias by having algorithms dictate our lives—rather, the cultural blind spots we have are exacerbated by them. Further, because the general assumption is that computers are politically neutral, these biases get reified as truth.”

If you wish to read the full survey report with analysis, click here.

To read anonymous survey participants’ responses with no analysis, click here.