Code-Dependent: Experts weigh in on potential benefits and threats of algorithm-based advances in modern life

In a new study by the Pew Research Center and Elon's Imagining the Internet Center, more than 1,300 experts share their thoughts on the programs behind the technology systems that are integrated into our lives.

Asked to predict whether the coming age of algorithms will be positive or negative for society, 1,302 technologists, futurists and scholars gave a decidedly mixed and unsettling verdict: About a third said positives would outweigh negatives, another third said the opposite and the rest said the impact would be split about 50-50.

Algorithms are instructions for solving a problem or completing a task, written as step-by-step procedures in code. The internet runs on algorithms, and all online activity is accomplished through them. These computer tools are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, many of these experts worry that even algorithms created with good intentions can lead to unintended consequences.
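
To make the term concrete, here is a minimal, hypothetical sketch of such a tool: a fixed sequence of steps that turns input data into a ranking, much like the code that orders search results. The scoring rule and its weights are invented for illustration and are not drawn from the report or any real system.

```python
# A toy algorithm: a fixed sequence of steps that converts input data
# into an ordered result. The weighting here is invented for illustration.

def rank_results(results, query):
    """Order search results by a simple relevance score."""
    def score(result):
        # Count how often the query appears in the title...
        title_hits = result["title"].lower().count(query.lower())
        # ...and weight title matches more heavily than raw popularity.
        return 2.0 * title_hits + result["popularity"]
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Cat videos", "popularity": 0.9},
    {"title": "Algorithms explained", "popularity": 0.4},
]
print(rank_results(results, "algorithms"))  # the title match ranks first
```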

The expert respondents voiced seven major themes that are detailed in a new report by the Pew Research Center and Elon’s Imagining the Internet Center, an initiative of the university’s School of Communications. Many observed that the advance of algorithms is inevitable and will produce positive results. The two most upbeat themes they sounded are:

Theme 1: Algorithms will continue to spread everywhere: Most respondents agreed that these computing tools can lead to greater human insight into the world, less waste, and major safety benefits.

Theme 2: Good things lie ahead: A share of respondents said data-driven approaches to problem-solving will expand; that code processes will be constantly refined and improved (especially when ethical issues arise); and that algorithms will be effective tools to make up for human shortcomings.

While many agreed upon the growing benefits of algorithms, respondents expressed a set of concerns that can be organized into four themes:

Theme 3: Humanity and human judgment are lost when data and predictive modeling become paramount: These experts argued that algorithms are primarily created in pursuit of profits and efficiencies and that this is a threat; that algorithms can manipulate people and outcomes; that a somewhat flawed yet inescapable “logic-driven society” will emerge; that code will supplant humans in decision-making and, in the process, humans will lose skills and local intelligence; and that respect for individuals could diminish.

Theme 4: Biases exist in algorithmically organized systems: Many in this sampling said algorithms reflect the biases of programmers and that the data sets they use are often limited, deficient or incorrect.

Theme 5: Algorithmic categorizations deepen divides: These experts worry that those who are disadvantaged will be even more so in an algorithm-organized future. They are concerned that algorithms create filter bubbles shaped by corporate data collectors. They worry that algorithms limit people’s exposure to a wider range of ideas and eliminate serendipitous encounters with information.

Theme 6: Unemployment will rise: A number of respondents focused on the loss of jobs as the primary challenge of the algorithm age. They said the spread of artificial intelligence will create significant unemployment, with major social and economic implications.

Finally, a share of these experts urged that there be societal responses to these issues now:

Theme 7: The need grows for algorithmic literacy, transparency and oversight: Some experts called for programs aimed at teaching algorithmic literacy beyond basic digital literacy and they pressed for accountability processes to oversee algorithms and their impact. Still, they expressed pessimism about the prospects for policy rules and oversight.

The report by Pew Research Center and Elon’s Imagining the Internet Center was released on Feb. 8, 2017.

“Two clear patterns emerge from these experts’ answers,” notes Lee Rainie, Director of Internet, Science and Technology research at the Pew Research Center and a co-author of this report. “The first is that, to a person, they believe algorithms will be infused into most of the crannies of human experience. The second is that even the most hopeful among them can describe some knotty problems that will accompany that change.”

Elon University Professor Janna Anderson, co-author of the report, adds: “These experts are excited about the future potential of advances in AI and the Internet of Things but they are also deeply concerned about keeping global good and basic human values at the forefront. They believe humans must act now to create and implement mechanisms to accentuate the positives and reduce the likely negatives of algorithm-based everything.”   

This report is based on a non-random canvassing of experts conducted July 1-August 12, 2016. A total of 1,302 respondents answered this question: Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society? The answer options were: 1) Positives outweigh negatives (38% of these respondents said this); or 2) Negatives outweigh positives (37% of them said this); or 3) The overall impact will be about 50-50 (25% of respondents said this). The survey then asked respondents to explain their answers in an open-ended way. Because the canvassing was non-random, the results are not projectable to any population beyond the experts who responded.

The following is a sample of thoughts shared by experts through this survey:

Vinton Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, said, “Algorithms are mostly intended to steer people to useful information and I see this as a net positive.”

Marc Rotenberg, executive director of the Electronic Privacy Information Center, observed, “The core problem with algorithmic-based decision-making is the lack of accountability. Machines have literally become black boxes – even the developers and operators do not fully understand how outputs are produced. The problem is further exacerbated by ‘digital scientism’ (my phrase) – an unwavering faith in the reliability of big data. ‘Algorithmic transparency’ should be established as a fundamental requirement for all AI-based decision-making.”

David Clark, Internet Hall of Fame member and senior research scientist at MIT, replied, “I see the positive outcomes outweighing the negative, but the issue will be that certain people will suffer negative consequences, perhaps very serious, and society will have to decide how to deal with these outcomes. These outcomes will probably differ in character, and in our ability to understand why they happened, and this reality will make some people fearful. But as we see today, people feel that they must use the internet to be a part of society. Even if they are fearful of the consequences, people will accept that they must live with the outcomes of these algorithms, even though they are fearful of the risks.”

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, responded, “If we use machine learning models rigorously, they will make things better; if we use them to paper over injustice with the veneer of machine empiricism, it will be worse.”

Jonathan Grudin, principal researcher at Microsoft, said, “We are finally reaching a state of symbiosis or partnership with technology. The algorithms are not in control; people create and adjust them. However, positive effects for one person can be negative for another, and tracing causes and effects can be difficult, so we will have to continually work to understand and adjust the balance. Ultimately, most key decisions will be political, and I’m optimistic that a general trend toward positive outcomes will prevail, given the tremendous potential upside to technology use. I’m less worried about bad actors prevailing than I am about unintended and unnoticed negative consequences sneaking up on us.”

danah boyd, founder of Data & Society, commented, “The same technology can be used to empower people (e.g., identify people at risk) or harm them. It all depends on who is using the information to what ends (e.g., social services vs. police). Because of unhealthy power dynamics in our society, I sadly suspect that the outcomes will be far more problematic – mechanisms to limit people’s opportunities, segment and segregate people into unequal buckets, and leverage surveillance to force people into more oppressive situations. But it doesn’t have to be that way.”

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., said, “We have already turned our world over to machine learning and algorithms. The question now is, how to better understand and manage what we have done…. Algorithms are the new arbiters of human decision-making in almost any area we can imagine, from watching a movie (Affectiva emotion recognition) to buying a house (Zillow.com) to self-driving cars (Google)…. Our algorithms are now redefining what we think, how we think and what we know. We need to ask them to think about their thinking – to look out for pitfalls and inherent biases before those are baked in and harder to remove.”

John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots and senior writer at The New York Times, observed, “I am most concerned about the lack of algorithmic transparency. Increasingly we are a society that takes its life direction from the palm of our hands – our smartphones. Guidance on everything from what is the best Korean BBQ to who to pick for a spouse is algorithmically generated. There is little insight, however, into the values and motives of the designers of these systems.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “Like virtually all past technologies, algorithms will create value and cut costs, far in excess of any costs. Moreover, as organizations and society get more experience with use of algorithms there will be natural forces toward improvement and limiting any potential problems.”

Justin Reich, executive director at the MIT Teaching Systems Lab, observed, “The algorithms will be primarily designed by white and Asian men – with data selected by these same privileged actors – for the benefit of consumers like themselves. Most people in positions of privilege will find these new tools convenient, safe and useful. The harms of new technology will be most experienced by those already disadvantaged in society, where advertising algorithms offer bail bondsman ads that assume readers are criminals, loan applications that penalize people for proxies so correlated with race that they effectively penalize people based on race, and similar issues.”

Judith Donath of Harvard University’s Berkman Klein Center for Internet & Society replied, “The solution is design. The process should not be a black box into which we feed data and out comes an answer, but a transparent process designed not just to produce a result, but to explain how it came up with that result. The systems should be able to produce clear, legible text and graphics that help the users – readers, editors, doctors, patients, loan applicants, voters, etc. – understand how the decision was made. The systems should be interactive, so that people can examine how changing data, assumptions, rules would change outcomes. The algorithm should not be the new authority; the goal should be to help people question authority.”
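
As a hypothetical sketch of the design Donath describes, the function below returns its decision together with a legible explanation of how the result was reached, and a user can rerun it with different inputs to see how the outcome would change. The rule and the 0.40 threshold are invented for illustration.

```python
# A transparent decision sketch: the explanation travels with the result.
# The debt-to-income rule and 0.40 cutoff are hypothetical.

def decide_loan(income, debt):
    ratio = debt / income
    approved = ratio < 0.40
    reason = (f"debt-to-income ratio is {ratio:.2f}; "
              f"approval requires a ratio below 0.40")
    return approved, reason

print(decide_loan(income=60000, debt=30000))  # (False, '... ratio is 0.50 ...')
# Users can probe "what if?" scenarios and see why the outcome changes:
print(decide_loan(income=60000, debt=20000))  # (True, '... ratio is 0.33 ...')
```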

Susan Etlinger, industry analyst at Altimeter Group, said, “[O]ur entire way of managing organizations will be upended in the next decade. The power to create and change reality will reside in technology that only a few truly understand. So to ensure that we use algorithms successfully, whether for financial or human benefit or both, we need to have governance and accountability structures in place. Easier said than done, but if there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time.”

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied, “Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else. The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences. My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.”
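
The feedback loop Knijnenburg warns about can be sketched in a few lines. In this invented toy, the recommender always suggests whichever category the user has clicked most; because accepting the suggestion is the convenient choice, the history narrows with every round into exactly the self-fulfilling prophecy he describes.

```python
# A toy recommendation feedback loop, invented for illustration.
from collections import Counter

def recommend(history):
    """Suggest whichever category dominates the user's click history."""
    return Counter(history).most_common(1)[0][0]

history = ["news", "science", "news"]  # a slight initial lean toward news
for _ in range(5):
    suggestion = recommend(history)
    history.append(suggestion)  # the user takes the convenient suggestion

# After five rounds the profile has collapsed to a single category:
print(Counter(history))  # Counter({'news': 7, 'science': 1})
```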

Jamais Cascio, distinguished fellow at the Institute for the Future, observed, “The impact of algorithms in the early transition era will be overall negative, as we (humans, human society and economy) attempt to learn how to integrate these technologies. Bias, error, corruption and more will make the implementation of algorithmic systems brittle, and make exploiting those failures for malice, political power or lulz comparatively easy. By the time the transition takes hold – probably a good 20 years, maybe a bit less – many of those problems will be overcome, and the ancillary adaptations (e.g., potential rise of universal basic income) will start to have an overall benefit. In other words, shorter term (this decade) negative, longer term (next decade) positive.”

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, commented, “The future effects of algorithms in our lives will shift over time as we master new competencies. The rates of adoption and diffusion will be highly uneven, based on natural variables of geographies, the environment, economies, infrastructure, policies, sociologies, psychology, and – most importantly – education. The growth of human benefits of machine intelligence will be most constrained by our collective competencies to design and interact effectively with machines. At an absolute minimum, we need to learn to form effective questions and tasks for machines, how to interpret responses and how to simply detect and repair a machine mistake.”

Amy Webb, futurist and CEO at the Future Today Institute, wrote, “In order to make our machines think, we humans need to help them learn. Along with other pre-programmed training datasets, our personal data is being used to help machines make decisions. However, there are no standard ethical requirements or mandate for diversity, and as a result we’re already starting to see a more dystopian future unfold in the present. There are too many examples to cite, but I’ll list a few: would-be borrowers turned away from banks, individuals with black-identifying names seeing themselves in advertisements for criminal background searches, people being denied insurance and health care. Most of the time, these problems arise from a limited worldview, not because coders are inherently racist. Algorithms have a nasty habit of doing exactly what we tell them to do. Now, what happens when we’ve instructed our machines to learn from us? And to begin making decisions on their own? The only way to address algorithmic discrimination in the future is to invest in the present. The overwhelming majority of coders are white and male. Corporations must do more than publish transparency reports about their staff – they must actively invest in women and people of color, who will soon be the next generation of workers. And when the day comes, they must choose new hires both for their skills and their worldview. Universities must redouble their efforts not only to recruit a diverse body of students – administrators and faculty must support them through to graduation. And not just students. Universities must diversify their faculties, to ensure that students see themselves reflected in their teachers.”
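
A toy sketch of the mechanism Webb describes: when the historical records a system learns from are skewed, a faithfully "trained" rule reproduces the skew. The loan data and the learning rule below are invented for illustration; real systems fail in subtler ways, but the principle is the same.

```python
# Invented "historical" loan decisions: at the same income levels,
# group B applicants were approved less often than group A applicants.
historical = [
    # (income in $1,000s, group, approved)
    (50, "A", True), (50, "B", False),
    (60, "A", True), (60, "B", False),
    (80, "A", True), (80, "B", True),
]

def learned_threshold(records, group):
    """'Learn' the lowest income at which this group was ever approved."""
    return min(inc for inc, g, approved in records if g == group and approved)

# The learned rule faithfully reproduces the bias in its training data:
print(learned_threshold(historical, "A"))  # 50 -- group A approved from $50k
print(learned_threshold(historical, "B"))  # 80 -- group B held to a higher bar
```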

This is the seventh Future of the Internet study conducted together by the Pew Research Center and Elon University’s Imagining the Internet Center.

Read the full report: http://www.pewInternet.org/2017/02/08/code-dependent/ 

Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping America and the world. It does not take policy positions. The Center is a subsidiary of The Pew Charitable Trusts, its primary funder.

The Imagining the Internet Center, an initiative of Elon University’s School of Communications, documents the evolution of digital communication. Students, faculty and alumni cover major forums around the world and survey thousands of technology experts about the future. Their research is included in the Library of Congress and regularly cited in major international media outlets.