Elon University Home


The 2016 Survey: Algorithm impacts by 2026

Anonymous responses by those who wrote to explain their response

Internet experts and highly engaged netizens participated in answering a five-question canvassing fielded by the Imagining the Internet Center and the Pew Internet Project from July 1 through August 12, 2016. One of the questions asked respondents to share their answer to the following query:

Algorithms will continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other. The hope is that algorithms will help people quickly and fairly execute tasks and get the information, products, and services they want. The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts. Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society? Select from 1) Positives outweigh negatives; 2) Negatives outweigh positives; 3) The overall impact will be about 50-50. Please elaborate on the reasons for your answer.

Among the key themes emerging from 1,302 respondents' answers were:
- Algorithms will continue to spread everywhere.
- The benefits, visible and invisible, can lead to greater insight into the world.
- The many upsides of algorithms are accompanied by challenges.
- Code processes are being refined; ethics and issues are being worked out.
- Data-driven approaches achieved through thoughtful design are a plus.
- Algorithms don't have to be perfect; they just have to be better than people.
- In the future, the world may be governed by benevolent AI.
- Humanity and human agency are lost when data and predictive modeling become paramount.
- Programming primarily in pursuit of profits and efficiencies is a threat.
- Algorithms manipulate people and outcomes, and even read our minds.
- All of this will lead to a flawed yet inescapable logic-driven society.
- There will be a loss of complex decision-making capabilities and local intelligence.
- Suggested solutions include embedding respect for the individual.
- Algorithms reflect the biases of programmers and datasets.
- Algorithms depend upon data that is often limited, deficient, or incorrect.
- The disadvantaged are likely to become more so.
- Algorithms create filter bubbles and silos shaped by corporate data collectors.
- Algorithms limit people's exposure to a wider range of ideas and reliable information and eliminate serendipity.
- Unemployment numbers will rise as smarter, more-efficient algorithms take on many work activities.
- There is a need for a redefined global economic system to support humanity.
- Algorithmic literacy is crucial.
- There should be accountability processes, oversight, and transparency.
- There is pessimism about the prospects for policy rules and oversight.

The non-scientific canvassing found that 38% of these particular respondents predicted that the positive impacts of algorithms will outweigh negatives for individuals and society in general, while 37% said negatives will outweigh positives; 25% said the overall impact of algorithms will be about 50-50, positive-negative.

If you wish to read the full survey report with analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/algorithm_impacts.xhtml

To read credited survey participants' responses with no analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/algorithm_impacts_credit.xhtml

Written elaborations by anonymous respondents

Following are the full responses by study participants who chose to remain anonymous. The remarks were shared by those who included a written elaboration explaining how they see the near future for the impacts of algorithms. Some of these are the longer versions of expert responses that are contained in shorter form in the official survey report. About half of respondents chose to take credit for their elaboration on the question (credited responses are published on a separate page).

These responses were collected in an “opt in” invitation to several thousand people, identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions in response to our previous queries about the future of the Internet.

About 38% of the respondents expect positives to outweigh negatives; about 37% anticipate that the expanding deep dive into algorithm-driven, digital systems will have mostly negative impacts; and about 25% of respondents expect the impacts of algorithms will be an even split between positive and negative outcomes for individuals and society.

An anonymous respondent who works for a major global human rights foundation commented, "Algorithms are already put in place to control what we see on social media and how content is flagged on the same platforms. That's dangerous enough—introducing algorithms into policing, health care, educational opportunities can have a much more severe impact on society.”

An anonymous assistant professor of data ethics, law, and policy said, "Algorithmic sorting and categorising is already invisible and is not becoming more accountable. If this trend continues the ability to sort and influence will be increasingly decentralised to those who are not charged with the best interests of the public."

An anonymous senior IT analyst said, "Most people use and will in the future use the algorithms as a facility, not understanding their internals. We are in danger of losing our understanding and then losing the capability to do without. Then anyone in that situation will let the robots decide."

An anonymous professor at MIT observed, "This is the greatest challenge of all. Greatest because tackling it demands not only technical sophistication but an understanding of and interest in societal impacts. The ‘interest in’ is key. Not only does the corporate world have to be interested in effects, but consumers have to be informed, educated, and, indeed, activist in their orientation toward something subtle. This is what computer literacy is about in the 21st century."

An anonymous respondent wrote, “Algorithms constructed and directed by humans are going to have their biases built in, whether inadvertently or deliberately. If we rely too much on algorithmic shaping of the Internet to expedite functions, we'll be giving up control of our own choices further to be directed by increasingly self-serving messages and the goals of commercial and other entities. Profiling will be increased and inverted."

An anonymous futurist said, "This has been going on since the beginning of the industrial revolution. Every time you design a human system optimized for efficiency or profitability you dehumanize the workforce. That dehumanization has now spread to our healthcare and social services. When you remove the humanity from a system where people are included they become victims.”

An anonymous information systems security manager replied, "Privacy and surveillance issues have not been adequately addressed in a manner that maintains individual liberties for a free society."

An anonymous senior research scholar at a major university's digital civil society lab commented, "This is a question of the different paces at which tech (algorithmic) innovation and regulation work. Regulating and governing algorithms lags way behind writing them and setting them loose on ever-growing (already discriminatory) data sets. As deep learning (machine learning) exponentially increases, the differential between algorithmic capacity and regulatory understanding and its inability to manage the unknown will grow vaster."

An anonymous respondent wrote, "Algorithms are neutral tools, but I expect profit decisions will drive us toward easier-to-implement solutions that trend negative.”

An anonymous professor at the University of California-Berkeley observed, "Algorithms are being created and used largely by corporations. The interests of the market economy are not the same as those of the people being subjected to algorithmic decision-making. Costs and the romanticization of technology will drive more and more adoption of algorithms in preference to human, situated decision-making. Some will have positive impacts. But the negatives are potentially huge. And I see no kind of oversight mechanism that could possibly work. Algorithms are, by definition, impersonal and based on gross data and generalized assumptions. The people writing algorithms, even those grounded in data, are a non-representative subset of the population. The result is that algorithms will be biased toward what their designers believe to be ‘normal.’ One simple example is the security questions now used by many online services. E.g., what is your favorite novel? Where did your spouse go to college? What was your first car? What is your favorite vacation spot? What is the name of the street you grew up on?"

An anonymous respondent said, "Businesses will ‘write’ the algorithms and businesses only care about profits (so there will be little incentive to write ‘fairer’ algorithms)."

An anonymous postdoctoral fellow in humanities at a major US university commented, "It is easy to call an algorithm racist, sexist, ageist, etc., but the bias of many, if not most, of the algorithms and databases governing our world are now corporate. The recent debate over whether Facebook's Newsfeed algorithm is biased against conservative news in the US, for example, does little to address the bias Facebook has in presenting news which is likely to keep users on Facebook, using, and producing data for Facebook. A democratic oversight mechanism aimed at addressing the unequal distribution of power between online companies and users could be a system in which algorithms, and the databases they rely upon, are public, legible, and editable by the communities they affect."

An anonymous respondent wrote, "There will be increasing mistrust of algorithm-based approaches to matters of real importance, though great reliance for the more trivial aspects of daily life."

An anonymous principal architect at Microsoft noted, "Currently, algorithms are being widely deployed without adequate review or consideration of consequences. We will probably need to see major civil cases or even criminal prosecutions go forward for this to change, but change it must. Once the proper regulatory framework is in place, I see the impact shifting to more neutral or even positive impact, but that is in the future, not today.”

An anonymous respondent said, "The algorithms will serve the needs of powerful interests, and will work against the less-powerful. We are of course already seeing this start to happen. Today there is a ton of valuable data being generated about people's demographics, behaviours, attitudes, preferences, etc. Access to that data (and its implications) is not evenly distributed. It is owned by corporate and governmental interests, and so it will be put to uses that serve those interests. And so what we see already today is that in practice, stuff like differential pricing does not help the consumer; it helps the company that is selling things, etc."

An anonymous respondent commented, "There is a lot of potential for abuse here that we have already seen in examples such as sentencing for non-violent offences. Less-well-off and minority offenders are more likely to serve sentences or longer sentences than others whose actions were the same. We also see that there is a lot of potential for malicious behaviour similar to the abuses corrected previously when nasty neighbours would spread lies and get their victims reclassified for auto or other insurance rates.”

An anonymous respondent observed, "Algorithms can provide targeted, useful information. However, I'm concerned that they create wider divisions between people than we already have, especially as they apply to the way that people receive and consume news and information."

An anonymous respondent commented, "The impact depends on the industry. For example, the reliance on algorithms in medicine adds to the dehumanization.”

An anonymous respondent wrote, "Much depends on what one means by positive and negative. For instance, an expanded role of the market pleases libertarians and irks socialists. Algorithm-enforced hate speech policies will divide most groups. Answering the question also depends on who owns and operates algorithms. Here I'm biased in favor of individuals, and skeptical of automation run by large states and businesses. Overall, these opposing forces should struggle with each other. Governments will continue to use code to monitor and influence populations, while civil libertarians oppose them (think surveillance versus sousveillance). Marketers will refine their campaigns with algorithms, while consumers persist in doing their own thing. One major question is to what extent will the increased use of algorithms encourage a behaviorist way of thinking of humans as creatures of stimulus and response, capable of being gamed and nudged, rather than as complex entities with imagination and thought? It is possible that a wave of algorithm-ization will trigger new debates about what it means to be a person, and how to treat other people. Philip K. Dick has never been more relevant.”

An anonymous respondent noted, "While there will be some algorithms that help, they won't be tailored and/or specific enough in the next decade to significantly shift the overall value."

An anonymous assistant professor at a state university said, "I do worry that the use of algorithms, while not without its benefits, will do more harm than good by limiting information and opportunities. On the good side, algorithms and big data will improve health care decisions, for example, but they will really hurt us in other ways, such as their potential influence on our exposure to ideas, information, opinions, and the like."

An anonymous doctoral candidate of anthropology wrote, "With ongoing use comes increased dependence that often translates to a naturalized interpretation of the ‘fairness’ of algorithmic decisions."

An anonymous survey participant commented, "The positives are all pretty straightforward, e.g., you get the answer faster, the product is cheaper/better, the outcome fits the needs more closely. Similarly, the negatives are mostly pretty easy to foresee as well, given that it's fundamentally people/organizations in positions of power that will end up defining the algorithms. Profit motives, power accumulation, etc., are real forces that we can't ignore or eliminate. Those who create the algorithms have a stake in the outcome, so they are, by definition, biased. It's not necessarily bad that this bias is present, but it does have dramatic effects on the outputs, available inputs, and various network effects that may be entirely indirect and/or unforeseen by the algorithm developers. An example: XYZ Co. develops a new magic shipping algorithm that allows you to have any product on earth delivered to your house in less than 24 hours cheaply. Sounds great for consumers and companies supplying products into the market. But what about the companies that aren't fully digital yet, the suppliers not near major air freight hubs, or customers that aren't using the service? All of these groups are made worse by the magic algorithm irrespective of their otherwise ‘natural’ competitive abilities. As the interconnectedness of our world increases, accurately predicting the negative consequences gets ever harder, so it doesn't even require a bad actor to create deleterious conditions for groups of people, companies, governments, etc.”

An anonymous respondent observed, "There needs to be checks and balances in place to allow the 50-50 split to occur, which I think are working and coming more and more into existence.”

An anonymous respondent commented, "Oh gosh. Infinitely negative. Algorithms, code, AI, language itself, are only ever reflective of their creators."

An anonymous computer science PhD researcher noted, "Algorithms embed coder bias, but add the appearance of objectivity. They typically lack sufficient empirical foundations, but are given higher trust by users. They are over-sold and deployed in roles beyond their capacity.”

An anonymous respondent said, "It's not that algorithms are the problem; it's that we think that with sufficient data we will have wisdom. We will become reliant upon 'algorithms' and data and this will lead to problematic expectations. Then that's when things will go awry.”

An anonymous respondent wrote, "As algorithms depend on data, there will be even more mining for personalized data.”

An anonymous senior program manager at Microsoft observed, "Algorithms are written by people—and currently this workforce responsible for the creation of algorithms is not a very diverse one. Because of this inherent bias (due to this lack of diversity) many algorithms will not fully reflect the complexity of the problems they are trying to address and solutions will tend to sometimes neglect important factors. Unfortunately, there will be an utter lack of transparency around many important decision algorithms, and it will take time until biases (or simply short-sighted thinking) baked into these algorithms will get detected. By then the American government will have banned innocent people from boarding planes, insurers will have raised premiums for the wrong people, and ‘predictive crime prevention’ will have gotten out of hand.”

An anonymous respondent wrote, "Algorithms certainly make some things easier. You can build infrastructure and do work at a large scale using these algorithms that used to take complex work. An example that I have thought of recently is the Pokemon Go phenomenon. This app uses an algorithm to send users to landmarks and other locations where they can participate in the game by catching pokemon, which gives its users a large number of potential places to play and diverse spaces to discover in their surroundings. However, this algorithm has resulted in some insensitive behavior, such as making the Pokemon Go augmented-reality figures appear on hallowed ground like the 9/11 Memorial and Auschwitz. People get upset by that, yet they value the other locations generated by the algorithm. This example is emblematic of the way that algorithms can 'unintentionally' do something insensitive and yet that is built into their programming; it seems as if the people who make the algorithms cannot envision all of the potential outcomes of their work. A more refined sense of the use of data and algorithms is needed and a critical eye at their outputs to make sure that they are inclusive and relevant to different communities. User testing using different kinds of groups is needed. Furthermore, a more diverse group of creators for these algorithms is needed! If it is all young white men, those who have privilege in this country, then of course the algorithms and data will serve that community. We need awareness of privilege and a more diverse group of creators to be involved.”

An anonymous research communication director said, "Simple, we'll have to pay for the advantages. The important thing is that net users are aware of the negative aspects.”

An anonymous professor wrote, "Algorithms are only as good as the social context in which they operate. We live in a deeply racist, sexist, and classist society. How can we expect algorithms designed to maximize ‘efficiency’ (which is an inherently conservative activity) also to push underlying social reform?”

An anonymous chief scientist observed, "Short-term, the negatives will outweigh the positives, but as we learn and go through various experiences, the balance will eventually go positive. We always need algorithms to be tweakable by humans according to context, creating an environment of IA (intelligent assistants) instead of AI (artificial intelligence)."

An anonymous professor of public policy at a technical university noted, "An algorithm embodies the biases, mistakes, and lack of understanding of its creator. Unlike a creator, it usually doesn't learn.”

An anonymous respondent wrote, "Algorithms are not neutral, and often privilege some people at the expense of those with certain marginalized identities. As data mining and algorithmic living becomes more pervasive, I expect these inequalities will continue."

An anonymous sociologist at the Social Media Research Foundation commented, "Machine decision systems are not impartial. Algorithms reflect the biases of their authors. Algorithms make discrimination more efficient and sanitized. Positive impact will be increased profits for organizations able to avoid risk and costs. Negative impacts will be carried by all deemed by algorithms to be risky or less profitable."

An anonymous policy advisor said, "The impact for society as a whole will be positive in terms of increased productivity and economic growth, but it could cause negative effects for individuals. There is a need for algorithmic literacy, and to critically assess outcomes from, e.g., machine learning, and not least how this relates to biases in the training data. Finding a framework to allow for transparency and assess outcomes will be crucial. Also a need to have a broad understanding of the algorithmic 'value chain' and that data is the key driver and as valuable as the algorithm which it trains."

An anonymous respondent observed, "I expect meta algorithms will be developed to try to counter the negatives of algorithms. Until those have been developed and refined, I can't see there being overall good from this."

An anonymous researcher at the Karlsruhe Institute of Technology said, "Strong and competent institutions are needed to keep algorithms in check and to make them work in a way that benefits society. Although policy-makers become increasingly aware of this, it seems likely that this will be a longer process in which we will also witness many negative effects."

An anonymous respondent commented, "Algorithms in the past have been created by a programmer. In the future they will likely be evolved by intelligent/learning machines. We may not even understand where they came from. This could be positive or negative depending on the application. If machines/programs have autonomy, this will be more negative than positive. Humans will lose their agency in the world."

An anonymous social scientist replied, "We are mostly unaware of our own internal algorithms, which, well, sort of define us but may also limit our tastes, curiosity, and perspectives. I'm not sure I'm eager to see powerful algorithms replace the joy of happenstance. What greater joy is there than to walk the stacks in a graduate library looking for that one book I have to read, but finding one I'd rather? I'm a better person to struggle at getting 5 out of 10 New Yorker cartoons than to have an algorithm deliver 10 they'd know I get. I'm comfortable with my own imperfection; that's part of my humanness. Efficiency and the pleasantness and serotonin that come from prescriptive order are highly overrated. Keeping some chaos in our lives is important."

An anonymous director of business and human rights commented, "The principal risk of algorithms is the lack of oversight. In certain decisions that could have a discriminatory effect, algorithms could have a negative outcome. In such cases, there needs to be human oversight. This could happen in areas related to credit, housing, employment, and other financial services. But it does not seem that there has been enough attention to the privacy implications of such data or the mechanisms to ensure people have recourse if wrongly assessed or impacted by such data analysis."

An anonymous respondent observed, "Algorithmic decision-making will make many parts of life more efficient and convenient. By definition, these squeeze out some potential outcomes in favor of others. In some ways this will be mostly benign—helping you organize your own email account or personal photos, for instance. In other areas, they will be troubling. It's increasingly difficult to get beyond a filter bubble for things like news consumption and Web search, and it seems like that will get worse. But the most potentially threatening outcome is the ways that algorithms will be good for some people at the expense of others. Because algorithms are trained on past outcomes, they'll lock in disparities in areas like employment, criminal justice, and housing in ways that will be largely undetectable. People are aware of these tendencies, and mitigating them will be one of the major challenges as algorithm-based services expand."

An anonymous CEO said, "If a task can be effectively represented by an algorithm, then it can be easily performed by a machine. The negative trend I see here is that—with the rise of the algorithm—humans will be replaced by machines/computers for many jobs/tasks. What will then be the fate of Man?"

An anonymous respondent wrote, "Hopefully it will split into two equal halves, meaning the positive and negative effects will neutralize each other. But, as this depends on many external parameters, one cannot really foresee what the future will hold."

An anonymous user experience manager commented, "It is going to take a long time for the negative side of algorithms to shake out, and we will probably identify areas where they simply cause more discomfort than benefit."

An anonymous Internet social researcher working in higher education replied, "You can't reduce everything humans do to simple or even complex algorithms. Many will want to and/or purposely break out."

An anonymous professor emeritus of history wrote, "Surely, we'll gain from automation, and surely we'll lose privacy."

An anonymous respondent commented, "I suspect there won't be huge changes from most of this for most people, but for some people there will be huge changes in their capacity to act, and for others huge changes in their constraints."

An anonymous professor of digital media at an Australian university replied, "We will see a growth in public demands and tools for transparency. Algorithms will be increasingly subject to regulation and, for example, anti-discrimination legislation. This will create massive headaches for global corporations in terms of compliance."

An anonymous respondent said, "Algorithms enable the reproduction of human social bias at tremendous scale. They polarize and split societies and lead to more frequent incarcerations of lower classes and reproduce more privilege for the few. Studies of predictive systems in policing and of algorithmically produced pricing show they actually reproduce social ills instead of correcting for them. Predictive modeling should be banned and companies like Palantir should be razed to the ground."

Positives outweigh negatives

An anonymous respondent wrote, "Algorithms in general enable people to benefit from the results of the synthesis of large volumes of information where such synthesis was not available in any form before—or at least only to those with significant resources. This will be increasingly positive in terms of enabling better-informed choices. As algorithms scale and become more complex, unintended consequences become harder to predict and harder to fix if they are detected, but the positive benefit above seems so dramatic it should outweigh this effect. Particularly if there are algorithms designed to detect unintended discriminatory or other consequences of other algorithms."

An anonymous respondent commented, "The use of algorithms will create a distance between those who make corporate decisions and the actual decision that gets made. This will result in the plausible deniability that a manager did not actively control the outcome of the algorithm, and as a result, (s)he is not responsible for the outcome when it affects either the public or the employees."

An anonymous political science professor replied, "The first issue is that randomness in a person's life is often wonderfully productive, and the whole purpose of algorithms seems to be to squash those opportunities in exchange for entirely different values (such as security and efficiency). A second, related question is whether algorithms kill experimentation (purposely or not); I don't see how they couldn't, by definition."

An anonymous researcher and software developer commented, "The influence of algorithms will increase but their positive or negative impact will depend on the understanding of policy and decision makers. I expect it will take a while for a new generation of policy and decision makers with algorithmic know-how to stand up but overall I expect a relatively balanced impact."

An anonymous chairman and CEO at a non-profit organization commented, "The potential for good is huge, but the potential for misuse and abuse, intentional and inadvertent, may be greater."

An anonymous respondent replied, "Algorithms aren't going anywhere, so it's important to be cautious to make sure that they do not reinforce stereotypes and discourage upward mobility or stop somebody from receiving the care they deserve. However, it would be a fallacy to say that without algorithms our society would be more fair. We can 'unteach' discrimination in computers more easily than we can in human beings. The more algorithms are capable of mimicking human behavior the more we will need to reconsider the implications of what makes us human and how we interact."

An anonymous respondent said, "It's just technology; how we apply it depends on us, not it."

An anonymous president of a consulting firm observed, "Once people understand which algorithms manipulate them to build corporate revenues without benefiting users, they will be looking for more-honest algorithm systems that share the benefits as fairly as possible. When everyone globally is online, another 4 billion young and poor learners will be coming online. A system could go viral to win trillions in annual revenues based on micropayments due to sheer volume. Example: The Facebook denumerator app removes the manipulative aspects of Facebook, allowing users to return to more typical social behavior. LinkedIn tries to manipulate me to benefit from my contacts' contacts and much more. If everyone is intentionally using or manipulating each other, is it acceptable? We need to see more-honest, trust-building innovations and fewer snarky corporate manipulative design tricks. Someone told me that someday only rich people will not have smartphones, suggesting that buying back the time in our day will soon become the key to quality lifestyles in our age of information overload. At what cost, and with what 'best practices' for the use of our recovered time per day? The overall question is whether good or bad behaviors will predominate globally."

An anonymous respondent wrote, "What the masses like has nothing to do with personal preferences. Maybe people don't want random strangers being able to 'guess what they want.'"

An anonymous respondent observed, "Most algorithms will be created by profit-seeking entities, and they will be pushed to maximize that end. This is often the case in capitalist societies. There will be efforts made by activists and other individuals and groups to counterbalance the impact, resulting in a rough balance."

An anonymous respondent replied, "It all depends on who is creating and troubleshooting the algorithms. They can definitely be designed in a way that prevents discrimination and does not reinforce toxic patterns. In order for predictive algorithms to operate effectively and salubriously they must be approached by a team of interdisciplinary scientists and engineers including social scientists."

An anonymous respondent said, "The positive impacts I see are mainly in areas like epidemiology, where data aggregation and correlation are the main issues. The application of 'algorithms' to daily life I expect to have more issues in the short term than benefits, where short-term includes the next 10-20 years. The only way I believe the likelihood of these could be mitigated is if there was a very strong oversight group created that had full access to the different actors within the field(s), and if it included a truly representative population. Instead I expect a weak oversight group, if any, which will include primarily old, rich, white men, who may or may not directly represent vested interests especially in 'Intellectual Property' groups. I also expect all sorts of subtle manipulation by the actual organizations that operate these 'algorithms' as well as single bad actors within them, to basically accomplish propaganda and market manipulation. As well as a further promulgation of the biases that already exist within the analog system of government and commerce as it has existed for years. Any oversight must have the ability to effectively end any bad actors, by which I mean fully and completely dismantle companies, and to remove all senior and any other related staff of government agencies should they be found to be manipulating the system or encouraging/allowing systemic discrimination. There would need to be strong representation of the actual population of whatever area they represent, from socioeconomic, education, racial, and cultural viewpoints. All of their proceedings should be held within the public eye."

An anonymous respondent observed, "It all depends on who develops the algorithms and for what purposes. The algorithms are themselves no different than any other knowledge technology like printing presses or TV/radio or sample surveys/polling.”

An anonymous writer commented, "From my perspective, we need to step back from the algorithm- and sales-driven Internet. The social-control factor is getting out of control. I miss the Internet 1.0!"

An anonymous respondent wrote, "Over time, algorithms can be adjusted for any shortcomings; initially there are often problems that need to be addressed. Society tends to overcomplicate things in general."

An anonymous professor observed, "The use of extensive data collection and automated processes to analyze and provide services will put a premium on certain skill sets and diminish the value of others. Unless there is public support for education and continued training, as well as wage and public-service support, automation will expand the number of de-skilled and lower-paying positions paired with a set of highly skilled and highly compensated privileged groups. The benefits of increased productivity will need to be examined closely."

An anonymous CEO noted, "The key question to ask about algorithms is 'Who pays the piper?' Is this result based on the needs of an advertiser or data aggregator or does it meet my needs? This decision will need to be made at the corporate or institutional level (not by politics, governments, laws) and the post-Web free searching will be a large niche for those without access to quality resources except through good libraries."

An anonymous research associate said, "Algorithms are only as biased as the people who write them. Anytime an algorithm replaces a person, it is similarly likely to have biases. Those biases may be different, certainly, but they will exist nonetheless."

An anonymous survey participant commented, "The indiscriminate use of algorithms without an accompanying public education on what algorithms are and how they are used risks many personal, social and political/ideological problems. There are few, if any, neutral algorithms—they all reflect a viewpoint.”

An anonymous respondent observed, "The positives will be great, but the negative impacts will be huge to those impacted. We've already seen that poor algorithms in justice systems actually preserve human bias instead of mitigating it. As long as these algorithms are hidden from public view, they can pose a great danger to those affected by them.”

An anonymous chief legal officer commented, "The question presumes less privacy for everyone. That is a concern even if most people do not yet recognize it. Additionally, limiting information, even if it appears to be for a positive purpose, is still a limit on information. That is never good.”

An anonymous respondent noted, "I find this question a bit too abstract. But I will say that access to tech and tech skills is a determining factor in a person's ability to participate in more integrated and centralized tasks. I am wary of the great enthusiasm for big data among researchers right now. While it can be useful to answer some large-scale questions, I don't think it can answer more meaningful questions such as impact on peoples' lives."

An anonymous assistant professor at a major US research university said, "Many of the potential consequences of algorithms, particularly with large datasets, will be hard to predict. With unintended consequences, there will be some negative outcomes that are hard to anticipate."

An anonymous respondent wrote, "I seriously hope we can turn this around somehow, but the entropy in force right now is a strong current worldwide, permeating a diverse cast of players—including a lot of the crude oversight folks. Concentrated oil money is one key part of this, but systems to resist entropy without going Terminator are an unsolved problem."

An anonymous respondent observed, "Algorithms will overestimate the certainty with which people hold convictions. Most people are pretty wishy-washy but algorithms try to define you by estimating feelings/beliefs. If I ‘kind of like’ something I am liable to be grouped with fervent lovers of that thing.”

An anonymous associate professor of communication studies at a public university in Canada wrote, "Algorithm politics should be a more visible topic of public policy and public discussion. They're invisible to most consumers."

An anonymous associate professor observed, "Whether algorithms positively or negatively impact people's lives probably depends on the educational background and technological literacy of the users. I suspect that winners will win big and losers will continue to lose—the Matthew effect. This is likely to occur through access to better, cheaper and more efficient services for those who understand how to use information, while those who don't understand it will fall prey to scams, technological rabbit holes, and technological exclusion."

An anonymous computer scientist wrote, "Whilst algorithms will help us to achieve elevated states of wellbeing and improved lives on average, the bad experiences will become far worse (and dystopian). The tech industry is attuned to computer logic, not feelings or ethical outcomes. The industrial 'productivity' paradigm is running out of utility, and we need a new one that is centered on more human concerns.”

An anonymous respondent commented, "If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities. For example, if you were really willing to use the data that exist right now, we would tell African-American men from certain metro areas that they should not even consider going to college—it won't ‘pay off’ for them because of wage discrimination post-schooling. Is this an ethical position? No. But is it what a computer would determine to be the case based on existing data? Yes."

An anonymous senior strategist observed, "The negative effects are extremely worrisome. I am concerned about loss of privacy and about discrimination that is based on demographic categories.”

An anonymous respondent noted, "An algorithm is only as good as the developer and if s/he failed to foresee a particular corner case, the algorithm won't deal with it. The real problem is that companies/organisations will come to rely on these and reduce the staff dealing with the public and/or give them less discretion. So people will get discriminated against because they don't fit neatly into a particular box."

An anonymous respondent said, "I would hope for positive gains, but it all depends on human nature, agendas, and greed.”

An anonymous computer science professor wrote, "The overall impact will depend on whether or not we develop an ethical and philosophical global (world-level) approach to the use and design of algorithms. In principle, with more or less work or time, algorithms will be able to perform all sorts of tasks and reasoning that may lead to negative impact on society.”

An anonymous Web and mobile developer commented, "As with every technological evolution, access to it is parsed and depends on socioeconomic factors. In emerging countries, even though they have minds as bright as there are in other nations, the sheer number of inhabitants with low incomes will continue to restrict access to equipment able to take advantage of those algorithms."

An anonymous respondent wrote, "This is imitative or indicative."

An anonymous respondent wrote, "The techies act with no social purpose. They rule the algorithmic world. The sooner we control them, the better will be the world.”

An anonymous respondent observed, "People's use of the Internet also shapes algorithms just as algorithms shape people's use. So 50-50."

An anonymous senior software engineer at Microsoft commented, "The majority of people are unaware of and uneducated about the algorithmic influence already exerted by the existing—mostly crude—systems. As the sophistication increases, these algorithms will be harder to control and monitor and become almost impossible to detect. Since these are dynamic systems, major random and unpredictable outcomes are likely."

An anonymous respondent wrote, "The ones with the most control over the algorithms (government, large corporations) do not have individuals'/consumers' rights and interest at heart."

An anonymous engineer at a US government organization commented, "Some work will become easier, but so will profiling. I, personally, am often misidentified as one racial type, political party, etc., by my gender, address, career, etc., and bombarded with advertising and spam for that person. If I had an open social profile, would I even have that luxury—or would everything now 'match' whichever article I most recently read?"

An anonymous respondent noted, "The algorithms are designed by the companies for their own benefit, not for consumer benefit or societal benefit. So they will be focused on getting the highest profit out of most people. So far, the market has not shown that to be good for consumers or done in a way that views people as diverse and nuanced humans. We are treated like the lowest common denominator, and when human judgment is involved we are treated like stereotypes. Technology can't fix humans' values.”

An anonymous respondent wrote, "Algorithms will exacerbate confirmation bias. Also, algorithms are functionally tools created by companies and teams currently composed primarily of white men. I would think it would stand to reason that implicit biases of white men would more comprehensively impact our data-driven interactions.”

An anonymous respondent commented, "Algorithms are not apolitical: they are designed and will exhibit the systemic bias of the designers."

An anonymous respondent observed, "Of course there are benefits to tailoring of goods and services via algorithms. However, large intermediaries seem to be manipulating their algorithms to achieve certain goals. They should be required to adjust their algorithms to minimize illegal activity and, of course, to address objectives consistent with the principles of democracy."

An anonymous respondent said, "An algorithm is only as good as the filter it is put through, and the interpretation put upon it. Too often we take algorithms as the basis of fact, or the same as a statistic, which they are not; they are ways of collecting information into subjects. An over-reliance on this and the misinterpretation of what they are created for shall lead to trouble within the next decade."

An anonymous education director wrote, "There will be great advances, with algorithms becoming smarter and helping to tailor information better. There will also be devastating incidents of technology overstepping its role, dictating rather than supporting its smarter use. As a result we will ebb and flow in our level of trust and use of these tools."

An anonymous respondent commented, "Positive changes include convenience, cutting through information overload to more quickly get what we want/need and being increasingly less bound by geographic barriers. Cons: The owners of content, delivery systems, and technology will disproportionately shape discourse and dissemination and perpetuate bias (unconsciously and consciously). There will be less opportunity for serendipitous discovery and/or being exposed to a variety of experience—and as a result, less tolerance for differences. There will be greater security threats."

An anonymous CEO observed, "So long as the people designing our tech (including both devices and algorithms) reflect only a small portion of the global community, then the products themselves will also have positive impact on a small portion of the community.”

An anonymous director of academic computing noted, "'Algorithms' existed way before computers came on the scene—the question is so vague as to be meaningless."

An anonymous professor said, "Algorithms will do a lot of damage and some good. Most of the egregious damage will be fixed by turning off the algorithms, by people stopping their use, or by fixing them.”

An anonymous respondent wrote, "The main positives I see are those that involve routine business and personal affairs. The downsides to these are any situations that do not fit a standard set of criteria or involve judgment calls—large systems do not handle exceptional situations well and tend to be fairly inflexible and complicated to navigate. I see a great deal of trouble in terms of connections between service providers and the public they serve because of a lack of empathy and basic interaction. It's hard to plan for people's experiences when the lived experience of the people one plans for are alien to one's own experiential paradigm.”

An anonymous respondent commented, "The overall impact will be utopia or the end of the human race; there is no middle ground foreseeable. I suspect utopia given that we have survived at least one existential crisis (nuclear) in the past and that our track record toward peace, although slow, is solid.”

An anonymous systems administrator noted, "Positive: Computers become generally more helpful. Negative: They will know everything about us, and need to in order to be optimally helpful. It will make filter bubbles more pervasive and effective. It will kill consumer choice. Incumbents have invested billions into machine learning and an established body of knowledge (human and machine). Competitors have a very steep barrier to entry when open source machine learning is scarce and not great."

An anonymous respondent said, "The AI overlords will [expletive] us in the ass."

An anonymous technical operations lead replied, "Among the positives: 1) There was a guy who wrote a bot to fight parking tickets. If someone writes bots for all government programs, it would help a lot of people. (It would also make the government programs more efficient, since they would need fewer bureaucrats to ask questions.) 2) Sites that aggregate and amplify knowledge. For example, StackOverflow does a good job elevating the ‘good’ answers, unlike a normal forum, where everyone's answer is considered equal value. Among the negatives: 1) Things like Google search altering the results based on what they ‘think’ you want. Obviously, this lets people view the world through a filter, which may eliminate entire points of view. 2) Companies will always try to game the system, and if they ‘are’ the system, they will try to maximize profits at the expense of the consumers."

An anonymous lead field technician replied, "Predictive modeling is based on statistical analysis, which by its nature ignores edge cases. It will lead to less freedom of choice in products, more subtly coercive advertising and an inability for people to make human mistakes that don't haunt them for long periods or their whole lives.”

An anonymous respondent said, "In the near future, artificial intelligence systems will improve the quality of life for individuals and societies, while over time, artificial intelligence has the potential to evolve into a scenario negative for humanity, like one of those portrayed in science fiction literature."

An anonymous computer programmer wrote, "Who will check the algorithms to make sure they are fair? The advertising agency? Google? The government?"

An anonymous director of evaluation and research at a private university wrote, "The normative factors that come into play will outweigh the possibilities for new and broader choices people have. We'll be ever more susceptible to manipulation and fall victim to advertisers and social pressures until, perhaps, a tipping point is reached. Big data uses lead to more conformity and shallower views of human possibility."

An anonymous respondent said, "While these algorithms will free up time and thought, I don't believe they will necessarily lead to us using the freed resources better. We simply don't seem to have the motivation to use the time gained through increased productivity for qualitatively different activities like helping the indigent, exercising, tutoring kids, or other activities. Instead we watch more videos, play more games and log more hours on mobile devices.”

An anonymous online community consultant wrote, "Talking to a brilliant older writer who always sees ads for cemetery plots and wrinkle creams, it becomes obvious how much hurtful stereotyping goes into thoughtless advertising, and presumably into search results based on age, race, region, and more."

An anonymous Internet security consultant commented, "This radically increases power for those with the most resources (corporations).”

An anonymous software architect observed, "Resistance is futile."

An anonymous systems administrator in municipal government said, "There will be benefits, but there will always be people who abuse any system in place.”

An anonymous data center technician wrote, "Currently there is very little understanding of programming. As that increases the trust we put in algorithms will become more sane."

An anonymous respondent observed, "An algorithm is just a set of instructions, and has been around since humans used tools. Some instructions are poorly written—by humans."

An anonymous respondent said, "Negative: algorithms are defined by people who want to sell you something (goods, services, ideologies) and they will twist the results to favor doing so. Positive: They can reduce search time."

An anonymous programmer and data analyst wrote, "Given the current drive to isolate consumers and extract revenue to the greatest possible degree, any algorithms written will be done from the perspective of maximum income for the creating company. There are few (if any) organizations that actually put customer privacy and value first (real value, not what the corporations determine to be valuable), and I don't expect this to change."

An anonymous respondent observed, "Algorithms are useful, no doubt. Let's just accept that. And they rely on data. Let's accept that too. And for a while, both have been wonderfully useful, as has computing writ large. And computing, data and algorithms would be fine. However, the current systems are designed to emphasize the collection, concentration, and use of data and algorithms by relatively few large institutions that are not accountable to anyone and/or if they are theoretically accountable are so hard to hold accountable that they are practically unaccountable to anyone. This concentration of data and knowledge creates a new form of surveillance and oppression (writ large). It is antithetical to and undermines the entire underlying fabric of the erstwhile social form enshrined in the US constitution and our current political-economic-legal system. Just because people don't *see* it happening, doesn't mean that it's not or that it's not undermining our social structures. It is. It will only get worse because there's no 'crisis' to respond to, and hence, not only no motivation to change, but every reason to keep it going—especially by the powerful interests involved. We are heading for a nightmare."

An anonymous respondent noted, "Everything will be 'custom'-tailored based on the group-think of the algorithms; the destruction of free thought and critical thinking will ensure the next generation is totally subordinate to the ruling class."

An anonymous coordinator of member services at a non-profit association wrote, "Algorithms are mostly controlled by entities that either are based on profit or have a specific agenda. I don't see them improving any time soon. My main concern is that the print media and radio are already consolidated and owned by very few companies. The dissemination of news online is likely to follow the same route and it will be harder to locate different sources of information using regular channels."

An anonymous respondent commented, "The bad guys appear to be way ahead of the good guys."

An anonymous assistant professor at a state university observed, "In my experience, the term 'algorithm' in this context usually means an automated organizational process. The key problem is that using the term 'algorithm' in this way imbues these processes with a kind of mystique that diffuses responsibility. It will be increasingly difficult to hold organizations and institutions accountable for the socially undesirable consequences of unjust policies when these policies are recast as technologies with bugs to be fixed. On balance, however, the transformation of policy into algorithm creates a new opportunity for intervention. The conversation about algorithms and transparency could ultimately lead to greater accountability as organizations are required to document the policy outcomes they expect from their algorithmic systems."

An anonymous respondent said, "Automated decision-making will reduce the perceived need for critical thinking and problem solving. I worry that this will increase trust in authority and make decisions of all kinds more opaque."

An anonymous respondent wrote, "The negative aspects of the future's increased information sharing will, as often happens, receive more attention and more publicity from alarmists. And there may also be a few emergencies happening while people learn how to put up new safeguards, but the reality is that there will be just as much benefit, if not maybe a tiny bit more, as there will be problems to overcome or live with.”

An anonymous respondent commented, "It really depends who is behind the algorithms. In my Midwest hometown our bus system was designed to be most efficient. Which evidently meant that the bus never took poor people to places like business parks, rich people’s neighborhoods, or anywhere they could be seen by the well-to-do population. While local governments are a double-edged sword, systems or algorithms that interface with local-level systems will probably be required to have oversight by local officials who may or may not have the expertise to have such influence."

An anonymous digital coordinator observed, "Tools can be used for good or bad. It depends on how the system is structured. I think so far the Internet has been a net positive, thanks to its open nature."

An anonymous respondent noted, "I am not confident the power of these tools is in the hands of responsible people."

An anonymous marketing specialist wrote, "Computer algo's are currently used to target and discriminate. We've seen this time and again. Why would you think that this will change?"

An anonymous respondent observed, "When software is made it has an intended use. The actual use only happens after people use it."

An anonymous respondent said, "I don't think we understand intersectionality enough to engineer it in an algorithm. As someone who is LGBTQ, and a member of a small indigenous group who speaks a minority language, I have already encountered so many 'blind spots' online—but who do you tell? How do you approach the algorithm? How do you influence it without acquiescing?"

An anonymous IT architect at IBM said, "Companies seek to maximize profit, not maximize societal good. Worse, they repackage profit-seeking as a societal good. We are nearing the crest of a wave the trough side of which is a new ethics of manipulation, marketing, nearly complete lack of privacy. All predictive models, whether used for personal convenience or corporate greed, require large amounts of data. The ways to obtain that are at best gently transformative of culture, and on the low side, destructive of privacy. Corporations' use of big data predicts law enforcement's use of shady techniques (e.g., Stingrays) to invade privacy. People all too quickly view law-enforcement as 'getting the bad guys their due' but plenty of cases show abuse, mistaken identity, human error resulting in police brutality against the innocent, and so on. More data is unlikely to temper the mistakes; instead, it will fuel police overreach, just as it fuels corporate overreach."

An anonymous respondent wrote, "Without constant, active work to prevent it, newly implemented structures that govern the social and political will inevitably reflect and therefore compound existing biases and discrimination."

An anonymous respondent observed, "It's extremely difficult for individual users to perceive subtle bias in content, and the psychological impact is significant. I'm not sure how this can be mitigated. On the other hand, the Web is currently impossible to navigate without algorithms, and they will improve.”

An anonymous respondent noted, "Systemic discrimination will neither increase nor decrease with the use of different tools."

An anonymous respondent wrote, "Filtering via algorithm—that is, letting algorithms decide what stories I might be interested in or what people I might like to 'follow'—will lead to an increase in the silos that eventually cause tribalism.”

An executive manager at a social service NGO said, "The question ignores the quality of the algorithms. Most of the horror stories we hear nowadays are of badly designed or misapplied algorithms. Human beings have always used algorithms, and have always gotten them wrong from time to time. And revised them."

An anonymous respondent commented, "We'll see both good and bad outcomes. We'll spot things in advance that save lives, help us make more informed decisions, etc. We'll see negative side effects too. Denying health coverage over a statistical likelihood of future conditions is a likely first result."

An anonymous respondent observed, "As with all automation events in history, there will be good and bad results. More uniformity, higher quality control, commoditization of high-end technology are pluses, but loss of individuality, attempting to fit one model to more and more situations, and strict and inflexible rules are negatives."

An anonymous respondent wrote, "While some organizations are very transparent about how their algorithms operate, there are equally as many, if not more, that can have impacts on our lives but whose operation is not obvious."

An anonymous respondent working in IT governance observed, "Remember 'Greed is good?' Unless security scorecards go industry-wide, IT will eat its children much like the stock market does.”

An anonymous student and research assistant noted, "As a computer science graduate, I know the truth about algorithms: An algorithm is only as good as the data you put into it. Full stop. What this means is, if you feed biased data to an algorithm, you get a biased algorithm (thus the 'algorithmic cruelties' that have been reported various places). This can be resolved, of course—if training data is known to be biased, weighting can be added to counter it—but the fact that this has to be deliberately added opens up a significant avenue for error and negligence. Overall, the addition of strong predictive algorithms to society will have a negative effect unless they are deployed very carefully—both from biased algorithms working as intended, and unbiased ones that nonetheless screw up (not to mention what can happen if you configure one wrongly). The primary reason is that if an algorithm is biased, this bias tends to reaffirm already extant power structures, locking them in further—and this is likely to happen just by people assuming the algorithms are unbiased (both the designers thereof and the users thereof)."
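The weighting idea this respondent describes can be sketched in a few lines. The following is a minimal illustration, not any particular system's method: each training example receives a weight inversely proportional to its class frequency, so an over-represented outcome no longer dominates learning. The labels and the simple inverse-frequency scheme are hypothetical, chosen only to make the mechanism concrete.

```python
from collections import Counter

def balancing_weights(labels):
    """Inverse-frequency sample weights: every class ends up with the
    same total weight, countering a skewed training set."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return [n / (k * counts[y]) for y in labels]

# A skewed dataset: 8 "approve" outcomes to 2 "deny" outcomes.
labels = ["approve"] * 8 + ["deny"] * 2
weights = balancing_weights(labels)
# Each class now contributes an equal total weight (5.0 apiece here),
# so a learner fed these weights cannot simply parrot the majority class.
```

Weights like these are typically passed to a learner's sample-weight parameter. The respondent's caveat still applies: the correction has to be added deliberately, and recognizing that the data is biased in the first place is the hard part.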

An anonymous respondent commented, "The algorithms I see are approximations of reality. For example, a tool estimating blood pressure via pulse wave transit times has been put on the market but has little positive science behind it. Venture capitalists see profit before any beneficial effect of tech. There are many other examples. This doesn't even begin to address the marketing strategies of companies like Amazon where value is in volume without addressing quality.”

An anonymous scientific editor observed, "The system will win; people will lose. Call it The Selfish Algorithm; algorithms will naturally find and exploit our built-in behavioral compulsions for their own purposes. We're not even consumers any more. As if that wasn't already degrading enough, it's a commonplace to observe that these days people are the product. The increasing use of 'algorithms' will only—very rapidly—accelerate that trend. Web 1.0 was actually pretty exciting. Web 2.0 provides more convenience for citizens who need to get a ride home, but at the same time—and it's naive to think this is a coincidence—it's also a monetized, corporatized, disempowering, cannibalizing harbinger of the End Times. (I exaggerate for effect. But not by much.)"

An anonymous retired programmer noted, "It is not algorithms that are at fault, but the parties that use them becoming more invasive and controlling.”

An anonymous respondent wrote, "Expanding the collection of information may help doctors make better decisions. However, the lack of privacy and the increased leaks of private information could easily lead to health and gender discrimination. Information that had previously been public, such as stating voting preference or filing campaign contributions, is now more easily aggregated and publicly disseminated, sometimes with unintended consequences."

An anonymous respondent commented, "Social class/caste systems will become more entrenched than ever, everywhere. Life will become like a never-ending automated phone tree call."

An anonymous respondent said, "My negativity is maybe a bit overstated because much of human interaction with processes will be made easier with good 'clean' algorithms. But too much of Web use is based on monetizing the experience for advertisers and that will clutter the screen, slow down the user, and probably cost more in actual dollars."

An anonymous software security consultant said, "There will be many positive impacts that aren't even noticed. Having an 'intelligent' routing system for cars may mean most people won't notice when everyone gets to their destination as fast as they used to even with twice the traffic. Automated decisions will indeed have significant impacts upon lots of people, most of the time in ways they won't ever recognize. Already they're being used heavily in financial situations, but most people don't see a significant difference between 'a VP at the bank denied my loan' and 'software at the bank denied my loan' (and in practice, the main difference is an inability to appeal the decision). A major difference will be the lack of appeals processes for decisions made automatically. It's already nearly impossible to correct an incorrect credit report, despite the existence of clear laws requiring support for doing so. It seems unlikely that similar problems will be easy to correct in the future unless significant regulation is added around such systems. I am hopeful the benefits will be significant, but I expect the downsides to be far more obvious and easy to spot than the upsides."

An anonymous respondent wrote, "Algorithms will reduce choice for consumers in order to increase profits for corporations."

An anonymous software engineer commented, "Algorithms on their own have little to do with the positive or negative outcomes of their results. What the engineer uses those algorithms for is what determines the benefit they bring to society. Technology is only as good as the people who use it, and unless we as a society improve in how we consider other people and how we use technology, the technology itself will not improve."

An anonymous respondent observed, "Life will become more convenient, but at the cost of discrimination, information compartmentalization, and social engineering. Everything will be geared to serve the interests of the Corporations and the 1%."

An anonymous technician noted, "Algorithms are just electronic prejudices, just as the big grownup world is just high school writ large. We'll get the same general sense of everything being kind of okay, kind of sucking, and the same daily outrage story, and the same stupid commentary, except algorithms will be the responsible parties, and not just some random schmuck, and artificial intelligences composed of stacks of algorithms will be writing the stories and being outraged."

An anonymous respondent said, "Strictly in terms of convenience and productivity, the algorithms will be positive and consistently improve. For instance, healthcare apps will be normalized. But all this very personal data will no doubt be collected, sorted, sold, and used in ways outside the public's control, perhaps even against the populace based on notions of race, party affiliation, activism, etc. Major and sweeping policy changes (stopping the 'collect it all' mentality, for instance) must be in place to prevent this, including a powerful oversight board."

An anonymous respondent wrote, "These types of algorithms are only as good as their training sets and the biases of their programmers. They tend to magnify existing discrimination. AI used in the criminal justice system for setting bail is a great example of this: racism in equals racism out. So the greatest impact on people's lives will be where discrimination exists today: education and healthcare."

An anonymous respondent commented, "I'm enough of a pessimist to assume that the positive effects will be largely consumer-focused (faster/better user experience with services and products), and the negative ones will be largely economic (new ways for the financial industry to extract wealth without any kind of regulatory oversight)."

An anonymous economist at a private university observed, "Algorithms will allow for efficiency in the provision of services and public goods. However, these same algorithms will concentrate information and power in the hands of monopolists and unaccountable third parties. Without any further information, it is hard to say which side outweighs the other."

An anonymous respondent noted, "There will no doubt be many positive outcomes, but the more power is ceded to algorithms, the less agency humans will experience. This may well have seriously destabilising effects on society."

An anonymous faculty member at a US state university said, "Historically, algorithms are inhumane and dehumanizing. They are also irresistible to those in power. By utilitarian metrics, algorithmic decision-making has no downside; the fact that it results in perpetual injustices toward the very minority classes it creates will be ignored. The Common Good has become a discredited, obsolete relic of The Past."

An anonymous public utility manager commented, "I believe the problem with these algorithms is that they can be easily misled, and those using them lose touch with what's behind them, if they ever knew in the first place. I would rather have better search capabilities than have so much thrown at me."

An anonymous respondent noted, "Individual variations will tend to be ignored by algorithmic approaches. In addition, variability and chance will be more difficult to introduce into activities and systems."

An anonymous respondent said, "I believe the intent of this question is to ask about big data analytics, rather than 'algorithms' generally (which is like asking 'Is math good?'). Generally big data analytics will further empower companies and governments that have the sophistication to run the analytics. How they use the insights is crucially important. The most likely result is that they would be used for profit and social control, which is ultimately bad, not because math is bad, but because existing power imbalances in society will be exacerbated. This is not an inevitable natural result of the technology existing, but of the lack of political will to prevent the consolidation of power in a handful of companies and agencies.”

An anonymous senior software engineer wrote, "Some of the negative changes will be unforeseen—for example Facebook's 'trending news' algorithms already sometimes have unintended consequences, promoting smaller stories whilst leaving out major ones. The biases of programmers may also come out in the algorithms, unconsciously or deliberately."

An anonymous respondent commented, "All algorithms are based on assumptions that may or may not be true. For example, if I research on Amazon for a product I want a friend to know about, I continue to see it recommended to me, or other products recommended because I looked at that one.”

An anonymous respondent observed, "There are certainly areas where algorithms could improve things and areas where they could be detrimental. Positive changes: Better matching of people looking for work with work they could excel at. Better matching of employers with employees. Better education—identifying subjects that need improvement, subjects that are a challenge and subjects that are well-grasped. Better energy and water efficiencies. Better ways to connect with people in person and form communities. Better identification of communities that need financial and social help. Negative changes: 'Garbage In, Garbage Out'—bad or incorrect assumptions can lead to incorrect medical diagnosis. More manipulation of the news—this is already happening. Current online news is terrible. (In the early 2000's, online news was fantastic, varied and interesting. Now whatever algorithm is running Google news seems to think all I want to see is celebrity news. I may be a housewife, but some assumptions are being made that irk me. I'm going to start subscribing to magazines again because all the online news is very shallow. Also, the news is now all converging on sameness, no matter what outlet I get it from.) More manipulation of consumers—already happening. Data allows fine-grained manipulation of what we see that leads us to want things that we do not need and ignore what we really do need. Oversight mechanisms: We'll need more mathematicians to review and vet algorithms to make sure that they are working in a fair and unbiased manner. We may also need new oversight of algorithms, by trusted review organizations, something like the Consumer Reports or ConsumerLabs of algorithms."

An anonymous respondent noted, "I don't think we have a culture of transparency in software design for these things. We need to know how these systems work so everyone knows how to interact with them and what to protest if need be.”

An anonymous senior publisher in residence at a private college commented, "Algorithms are a tool like anything else. The same question would have been asked about cars or sewing machines. They were ultimately positive but could also lead to negative changes (such as early 20th century labor practices, destruction of ecosystems, etc.)”

An anonymous respondent observed, "Unless we can de-bias the assumptions upon which algorithms are built, they will only amplify existing stereotypes and problems. The data collected by this survey is a perfect example of flawed inputs. The phrasing of the questions means you must be well educated to answer. Any algorithm based on these survey responses will be biased."

An anonymous graduate student noted, "It's hard to say, since I don't know where it sits now. Makes it hard to estimate where, or how far, it will go from here.”

An anonymous respondent said, "Google's search algorithms already filter out way too much stuff when I do a search, I don't get to listen to all the music I might want to on Pandora, Facebook determines what I should be looking at: it's already negative!”

An anonymous respondent observed, "I expect this to destroy what little remains of customer service in most businesses. It also will likely mean a narrowing of consumer choices, as businesses focus on only the most popular choices in any given range of products based on predictive algorithms."

An anonymous respondent noted, "So, human nature being what it is—deeply flawed and easily frightened—the overall impact will teeter precariously between good intentions and hate-filled intentions."

An anonymous respondent commented, "Automated systems can never address the complexity of human interaction with the same degree of precision as a person would."

An anonymous respondent said, "Even if you remove malice from the equation, the more complex an algorithm becomes, the more likely it is to accidentally encode human biases (see the current debate about which prisoners are considered 'low-risk' enough to be released early, and that's a pretty simple one). I am also troubled by the way algorithms contribute to the atomization of media through Facebook and the like. We are quite literally losing the discursive framework we need to communicate with people who disagree with us.”

An anonymous respondent commented, "Algorithms value efficiency over correctness or fairness, and over time their evolution will continue the same priorities that initially formulated them. Just as societal mores are hard to break, algorithmic standards will evolve along the lines of original intent, and will be less helpful towards aims of fairness and inclusiveness.”

An anonymous respondent observed, "Who are these algorithms accountable to, once they are out in the world and doing their thing? They don't always behave in the way their creators predicted. Look at the stock market trading algorithms, the ones that have names like The Knife. These things move faster than human agents ever could, and collectively, through their interactions with each other, they create a non-random set of behaviors that cannot necessarily be predicted ahead of time, at time zero. How can we possibly know well enough how these interactions among algorithms will all turn out? Can we understand these interactions well enough to correct problems with algorithms when injustice invariably arises?"

An anonymous IT analyst noted, "Simply search ‘google search grammar matter’ in Google and see the top results for what type of discrimination happens via algorithms. Another example is Facebook trying to only show topics you've previously shown interest in on their platform to show you more of the same. You're far less likely to expand your worldview if you're only seeing the same narrow-minded stuff every day. It's a vast topic to delve into when you consider the circumstances a child is born into and how it will affect individuals' education.”

An anonymous respondent said, "Depends on what the basis of the algorithm is. As it is, I think confirmation bias distorts many current algorithms."

An anonymous teacher commented, "Based on current trends, I foresee algorithms replacing almost all workers with no real options for the replaced humans.”

An anonymous respondent observed, "These are bad/limiting choices. The extent to which algorithms will impact society will be determined by how transparent/accountable they are and whether people are capable enough and care enough to learn how they work."

An anonymous respondent noted, "The positives of algorithmic analysis are largely about convenience for the comfortable; the negatives (inscribing racism and gender bias into systems which most users and lawmakers will treat as impartial) vastly outweigh them in significance."

An anonymous respondent said, "Algorithms will naturally have a resolution that is greater than the individual, so results will tend to reduce the variance in a group and exaggerate that between groups.”

An anonymous respondent commented, "Algorithms will be improved as a reactive response. So negative results of using them will be complained about loudly at first, word-workers will work on them and identify the language that is at issue, and fine-tune them. At some point it will be 50-50. New ones will always have to be fine-tuned, and it will be the complaining that helps us fine-tune them."

An anonymous respondent observed, "It is always hard to judge positive vs. negative outcome. Is capitalism good or bad?"

An anonymous respondent noted, "Algorithms are like any other targeted marketing technique—CRM, micro-targeting, etc. People will adapt, taking advantage of algorithms when they're useful and working around them when they're not."

An anonymous respondent said, "Negative, because no algorithm will be able to consider all of the viewpoints unless it learns."

An anonymous respondent wrote, "Algorithms will aid in group data collection for scientific research, but there should be limits on the data collected and stored on individuals."

An anonymous PhD candidate in aeronautics commented, "Without changes in the economic situation, the massive boosts in productivity due to automation will increase the disparity between workers and owners of capital. The increase in automation/use of algorithms leads to fewer people being employed."

An anonymous respondent observed, "Government is a really good solution to protect marginalized people and groups. Governments may have to take an emerging and ongoing role in regulating algorithms through legislation and other mechanisms.”

An anonymous respondent noted, "Algorithms affect quantitative factors more than relational factors. This has had a huge effect already on our society in terms of careers and in the shadow work that individuals now have to do. Algorithms are too complicated to ever be transparent or to ever be completely safe. These factors will continue to influence the direction of our culture."

An anonymous respondent said, "Algorithms are ultimately created and controlled by people. I don't think using more algorithms will generally change the fairness of decisions people make; however, those decisions may become more efficient."

An anonymous respondent wrote, "Algorithms will continue to make live more convenient for shoppers and news searchers, but potentially more unjust as well, as people's options are preemptively foreclosed for them because options they might have preferred are, based on predictive modeling, never offered. In the absence of regulation this may also lead to expensive insurance penalties for algorithmic correlations and, in an extreme case, even Minority Report-style law enforcement—or at least profiling amounting to discrimination, starting from terrorism fear but inevitably spreading to broader mission creep as all the special terror-fighting powers have."

An anonymous principal research programmer at a private university observed, "Algorithms are just systems designed by people, often with automated data analysis backing design decisions. We have already seen repeatedly the failures of these systems to properly account for the assumptions and inherent biases of the people who make them and the data used to build them. I don't think it unreasonable to assume people will continue failing in the same way in the future."

An anonymous senior design researcher noted, "Algorithms risk entrenching people in their own patterns of thought and like-mindedness, which I see as a potentially negative consequence. However, they also have the ability to deepen our understanding of things we are already seeking awareness of. In some cases, such as health knowledge, this actually isn't always a positive thing (sometimes ignorance is bliss as they say).”

An anonymous respondent wrote, "Do you really think wealthy people will allow their interests to be subverted? People know they are being marketed to. People don't care. It is harder to have privacy. It is harder to have anonymity. It is harder to stop fraud. The working and middle classes are held accountable for these issues instead of the people who should protect them. That is wrong and dumb.”

An anonymous respondent observed, "I really cannot weigh the answer to satisfy the choices. Algorithms are inherently biased because software is made by humans. Software not made by humans can still contain bias because bias is in the training sets used to create AI agents. Algorithms are opaque and not easily subject to critique. People too easily believe that they are scientific. Healthcare—there is not a single study that shows clinical improvement from the use of the electronic health record, and instead of saving costs, it has increased them. Resources going there are resources not going into patient care. Consumer choice—we only see what we are allowed to see in whatever markets we've been segmented into. As that segmentation increases, our choices decrease. Corporate consolidation also decreases choices. Likewise news, opportunities, access. Big data can be helpful—like tracking epidemics, but it can also be devastating because there is a huge gap between individuals and the statistical person. We should not be constructing social policy just on the basis of the statistical average but, instead, with a view of the whole population. So I am inclined to believe that big data may get us to Jupiter and help us cope with climate change, but it will not increase justice, fairness, morality, and so on. Re governance: 1) Let's start with it being mandatory that all training sets be publicly available. In truth, probably only well-qualified people will review them, but at least vested interests will be scrutinized by diverse researchers whom they cannot control. 2) Before any software is deployed it should be thoroughly tested not just for function but for values. 3) No software should be deployed in making decisions that affect benefits to people without a review mechanism and potential to change them if people/patients/students/workers/voters etc. have a legitimate concern. 
4) No lethal software should be deployed without human decision makers in control. 5) There should be a list of disclosures at least about operative defaults so that mere mortals can learn something about what they are dealing with."

An anonymous respondent said, "Algorithmic systems have a place in prediction of resource management, but not in social management. Resources we're pretty good with, but handing off social problems to be handled without human interaction is simply foolish and shortsighted."

An anonymous respondent wrote, "Algorithms purport to be fair, rational and unbiased but just enforce prejudices with no recourse. The increasing migration of health data into the realm of 'big data' has potential for the nightmare scenario of Gattaca writ real."

An anonymous respondent commented, "I'm not sure algorithms can ever approximate the entire human experience (barring a sentient AI).”

An anonymous administrator at a state library observed, "We won't know until we try. So many things have been found to be discriminatory (SATs, IQ tests, etc.). The real issue is to what extent we will respond and change the algorithms to repair the damage."

An anonymous respondent noted, "The algorithms will be essentially created by middle-class white males who'll imbue their algorithms with their own bias."

An anonymous system analyst said, "Algorithms will build upon the response of each one."

An anonymous professor of telecommunications and law commented, "Algorithms have no way of factoring fairness and other non-quantifiable factors.”

An anonymous respondent observed, "Algorithms are not having a greater or lesser effect today than they were 5 or 10 years ago. The difference is the amount of data thrown at them and the lengths companies go through to acquire that data. I expect to see a rapid increase in the loss of privacy in the name of trying to accumulate sufficient data for the algorithms to produce answers that are something other than nonsense."

An anonymous respondent noted, "I know that whether I like it or not, algorithms are going to dominate my life even more than they already do. Just reading the question completely filled me with angst. I do not want to live in a world run by algorithms—predictive modeling, god! The outliers are what make life rich and interesting! I know there are many brilliant minds in this world capable of doing the extraordinary with the design and implementation of algorithmic models, but I would always prefer to interact with living human beings who are kind and genuinely desire to help me, even when that human being is having a bad day. When life experiences, emotions, processes, and chance are all reduced to data points, we tend to miss the mark. We simply can't capture every data element that represents the vastness of a person and that person's needs, wants, hopes, desires. Who is collecting what data points? Do the human beings the data points reflect even know or did they just agree to the terms of service because they had no real choice? Who is making money from the data? How is anyone to know how his/her data is being massaged and for what purposes to justify what ends? There is no transparency, and oversight is a farce. It's all hidden from view. Why exactly is that? I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It's the basic nature of the economic system in which we live. And how many workers in any given workplace does any one algorithm replace? What will be that economic impact? For goodness sake, the bright minds at Google don't even understand how their own complex search algorithm works, and in my mind the answer is 'Not so very well anymore.' It seems related to selling advertising rather than meeting my search needs."

An anonymous respondent said, "I have seen a mixed bag so far.”

An anonymous system administrator wrote, "The positive changes will be on everyday life: algorithms will serve as a tool to unburden attention. They will work as filters or 'purifiers' of digital interactions we don't want to be bothered with. Some content is only interesting under certain circumstances, and the filters will do that job. The discriminations will be automatic. The main problem will be with advertising policies and political restrictions. The commercial and political players will always be interested in manipulating those systems; the interests of the consumer will not always be first. But as we have already seen with inefficient/illegal advertising systems (Myspace/Napster), those will disappear driven by lack of consumers."

An anonymous product specialist commented, "Algorithm creation will always depend on the guidance of people, who will deliberately empower it or weaken it depending on the quality and quantity of variables they are exposed to."

An anonymous business analyst noted, "The outcome will be positive for society on a corporate/governmental basis, and negative on an individual basis.”

An anonymous respondent said, "These are silly questions attempting to get people to be upset about the future. Please consider that life is not all black and white.”

An anonymous respondent commented, "'The fear is that algorithms can purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts.' I 100% agree with these fears. Algorithms can only reflect our society back to us, so in a feedback loop they will also reflect our prejudices and exacerbate inequality. It's very important that they not be used to determine things like job eligibility, credit reports, etc."

An anonymous respondent observed, "The potential positives are strong for medicine, particularly. However, I am pretty confident that these tools will primarily be used for social control, propaganda, and marketing. While I believe full and complete transparency, coupled with extremely strong whistle-blower protections and extremely strong protection over the press, would prevent the negatives these tools will bring, it's extremely unlikely to happen that way. The most likely course is that we are moving into a dark period of history, with a likely resurgence of a capitalist-style totalitarianism in the West."

An anonymous respondent noted, "While this really boils down to implementation, I would assume the overall trend of improperly managing and securing data by corporate interests will inevitably lead to leaks/breaches/misuse continuing for the foreseeable future, especially considering that no regulatory body has made it expensive enough (by way of fines) to dissuade poor practices thus far. That said, it would probably streamline a lot of the customer service side of the equation, so about 50/50."

An anonymous teacher replied, "I fear algorithms advertently discriminate now, such as when I search for a bar online, that bar appears on Google Maps more readily than other bars. The algorithm gives me content I have shown interest in. But will the algorithm not present me with new content of any kind, unless I have shown interest in it previously? I fear I will miss out on up-and-coming content."

An anonymous respondent noted, "Until we begin to measure what we value rather than valuing what we measure, any insights we may gain from algorithms will be canceled out by false positives caused by faulty or incomplete data.”

An anonymous community advocate said, "There are a lot of places where algorithms are beneficial and helpful, but so far, none of them take into account the actual needs of humans. Human resources are an input in a business equation at the moment, not real, thinking, feeling symbiotes in the eyes of business."

An anonymous journalism professor wrote, "Algorithms eliminate transparency, protecting bias and discrimination."

An anonymous technical writer noted, "Data collection and quantification lets us spot, measure, and ideally fix all kinds of problems—when you can measure something, you can think of a way to fix it, and that's great. From traffic patterns to illnesses and health issues and income inequality—all great targets for improving the human condition, if the political will follows the data collection. On the other hand, we've got a major issue of social fragmentation going on and I don't know what would fix it. There is a wealth of media available but we all consume that which agrees with us most. There is a wealth of discussion sites and social networks but we isolate ourselves and then algorithms reify that. We have systems that let completely groundless, hateful crap bubble to the same place in the metaphorical conversation that traffic and weather occupy. I'm not sure how you deal with that. As for predictive modeling, the potential for harm is so much greater right now than the potential for benefit. I saw a news story last week that basically posited a 'pre-crime' prediction division straight out of science fiction, and that kind of nonsense disproportionately affects black and brown, low-income citizens in a way that doesn't help anyone."

An anonymous respondent said, "Our critical-thinking skills and serendipitous discovery are at risk of diminishing.”

An anonymous respondent wrote, "The algorithms need to be publicly reviewed so as to ensure their fairness. In some instances, such as a legal case, the suggestion of the algorithm should be thoroughly vetted before a final decision is made.”

An anonymous assistant manager commented, "Surrounding myself with my social media peers and like-minded individuals does not expand my world view—and neither do algorithms. They only help expand my individual consumer capital; not much else."

An anonymous systems administrator wrote, "If oligarchic tendencies accumulate overall, those who control the algorithms of the future system will call the shots as to who has access to what, when, and why.”

An anonymous respondent noted, "See algorithm researcher Frank Pasquale's work. Bias is inherent in algorithms. This will only function to make humans more mechanical, and those who can rig algorithms to increase inequality and unfairness will, of course, prevail."

An anonymous respondent said, "Despite data mining, machine learning, etc., being presented as unbiased and purely logical, they merely reproduce the biases of their designers. In an economy increasingly dominated by a tiny, very privileged and insulated portion of the population, it will largely reproduce inequality for their benefit. Criticism will be belittled and dismissed because of the veneer of digital 'logic' over the process."

An anonymous respondent wrote, "Algorithms are a tool. How we use them matters."

An anonymous senior account representative commented, "As with so many things: it all depends. If the algorithm is written to benefit the individual, great! But there's no reason it cannot be tailored for corporate or government benefit instead. And there is no way to easily weigh/diagnose algorithmic bias. All you can do is consider the source.”

An anonymous freelance consultant observed, "There will be major benefits to those who fit the model in terms of employability, quality of life, etc. The built-in biases (largely in favour of those born to privilege such as Western Caucasian males, and, to a lesser extent, young south-Asian and east-Asian men) will have profound, largely-unintended negative consequences to the detriment of everybody else: women, especially single-parents, people of colour (any shade of brown or black), the 'olds' over 50, immigrants, Muslims, non-English speakers, etc. This will not end well for most of the people on the planet.”

An anonymous respondent said, "As with so many things: If handled responsibly, algorithms will allow for more efficient and effective deployment of resources in all fields. Used irresponsibly, or if poorly designed, they'll perpetuate and exacerbate existing problems. Fingers crossed for the former."

An anonymous real estate broker observed, "50/50 in short term. This has the potential for some very negative outcomes."

An anonymous respondent noted, "There will be clearer pathways out of the echo chambers, but we take our biases with us wherever we go."

An anonymous respondent said, "I am a computer scientist. All computers are algorithmic processing machines.”

An anonymous respondent wrote, "Algorithms tend to perpetuate an individual's categorization, i.e., it puts an individual in a 'box' and then feeds them news, products, opportunities, etc., that reinforce their previous selections, rather than encouraging expansion into new areas."

An anonymous respondent commented, "Even thoughtfully developed algorithms will prompt algorithmic efforts to subvert them, leading to an arms race that consumes increasing resources for no real public good.”

An anonymous respondent observed, "Algorithms are written by people, and are only as unbiased as the people writing and testing them."

An anonymous respondent said, "Main positives may include more personalized choices, but in exchange for more of our personal data, which in turn will be sold off for more advertising/marketing/tracking."

An anonymous senior software developer wrote, "Smart algorithms can be incredibly useful, but smart algorithms typically lack the black-and-white immediacy that the greedy, stupid, and short-sighted prefer. They prefer stupid, overly broad algorithms with lower success rates and massive side effects because these tend to be much easier to understand. As a result, individual human beings will be herded around like cattle, with predictably destructive results on rule of law, social justice, and economics. For instance, I see algorithmic social data crunching as leading to ‘PreCrime,’ where ordinary, innocent citizens are arrested because they set off one too many flags in a Justice Department data dragnet."

An anonymous respondent observed, "Human-created algorithms that are not properly reviewed will be flawed and full of the same subtle racist and sexist cuts that the creators were raised to believe in."

An anonymous respondent wrote, "Consumer choice and news will be more limited in ways that we will have choice over. This will happen in the 'background' and be controlled by special interests. Mark Zuckerberg is the face of the ambivalence that comes with this—if his politics, personal preferences, personality get woven into the algorithms I probably will never know, but if I did I'd probably find it entirely foreign."

An anonymous manager said, "I don't expect algorithms to be generally beneficial unless they can be examined. Useful algorithms have reached a complexity point where multiple researchers and sources are needed to improve, check, and confirm their usefulness. Without corporate transparency, I don't see them becoming generally useful."

An anonymous respondent wrote, "I personally don't think they work, either for the crowd at large, or myself."

An anonymous respondent commented, "Negatives will outweigh the positives except for those controlling the algorithms."

An anonymous respondent observed, "The risk is always the greed of the few at the expense of the many. Online resources are already causing global improvement in education and information sharing. The trouble will be in the restriction, manipulation, and insertion of biases into the available information.”

An anonymous respondent wrote, "Clearly there are many tasks and decisions that can be better handled by algorithms than by tired, poorly-trained, emotionally-vulnerable people. The big issue in the use of these algorithms is what the function of a 'job' is. If it is to keep a person participating in society and earning a living, then algorithms are deadly; they will inevitably reduce the number of people necessary to do a job. If the purpose is to actually accomplish a task (and possibly free up a human to do more-human things), then algorithms will be a boon to that new world. I worry, though, that too many people are invested in the idea that even arbitrary work is important for showing ‘value’ to a society to let that happen."

An anonymous respondent said, "They will just be used as the new ‘targeted advertising.’ If designed and implemented by altruistic people they have amazing potential, but we know that this will start with Apple, Google, and other for-profit companies and will only be used to promote their bottom lines.”

An anonymous associate professor and research center director wrote, "Unless computer science starts working with the humanities, they will continue to create algorithms that do not solve human problems because they do not take into consideration human issues."

An anonymous executive director for an open source software organization commented, "Most people will simply lose agency as they don't understand how choices are being made for them. This will reinforce social divisions, bias, class, etc."

An anonymous respondent observed, "I don't know if the positives will outweigh the negatives or vice versa. That is not a function of technology, but of social norms and legal/policy design."

An anonymous respondent based in the US said, "The algorithms themselves will be helpful; however, policy and legislation must deal with the potential negative side. I don't see Congress being proactive about that, or keeping up with it at the needed pace."

An anonymous research scientist observed, "Some algorithms are already being flagged for being discriminatory. If this identification continues, if we create this culture, then the positives will outweigh the negatives."

An anonymous respondent replied, "Market forces will outpace regulatory forces with respect to algorithmic influence. Those who are being victimized or otherwise concerned lack the organizational cohesion and financial incentives to apply the brakes of oversight."

An anonymous respondent said, "Negatives are already beginning to outweigh positives. Social engineering is beneficial for large corporations. They've seen the benefit of promoting their brands in such a way that the package appears desirable to the masses. Algorithms allow predictive models to do many positive things, e.g., DARPA has created algorithms that predict traffic-flow patterns, allowing civil engineers to design better highways and intersections. The flip side is that they can use algorithms to predict human nature (to some degree). See this portrayed in the TV series Person of Interest."

An anonymous principal analyst wrote, "Algorithms are designed by people; they have biases and enable companies to take advantage of them. For example, an insurance company algorithm identifies the consumer as age 65+, having had previous health issues and thus eligible for in-home visits which will generate incremental reimbursements for the insurance company and the nurse who visits, entirely independent of an individual's own doctor being involved. Some states have viewed this as fraud (see Florida), some might call it predictive modeling."

An anonymous respondent commented, "Although a more streamlined approach to executing several basic functions could be achieved (in healthcare, education, and retail, for example), the downside is that people will lose their intimate familiarity with the 'inputs' and core variables that make up such algorithms, and will be further alienated from the 'means of production,' to use Marxian terms."

An anonymous respondent wrote, "Seeing these patterns and developing new ones will help develop new breakthroughs in assorted disciplines, however the use of algorithms to manipulate financial markets and human behavior on a massive scale is a bit like letting the genie out of the bottle. Diversity—ethnic, cultural, racial, etc.—will also often be overlooked in the name of using the brute force of normative algorithms. Oversight of such algorithms needs to be performed at assorted levels. Online forums, town halls, expert panels that include individuals from the social sciences, activists, ethicists, as well as the math and computer sciences can help assess the impact."

An anonymous respondent said, "The danger is that people are reduced down to the algorithm rather than the algorithm helping people—and whose intent guides this process?"

An anonymous respondent wrote, "Algorithms = editors. We've always had them under some name."

An anonymous respondent observed, "It really depends on who is writing the algorithms and what kinds of oversight occur. The positives are in regard to speed, but accuracy and human intuition are lost when we rely solely on algorithms. All walks of life will be affected by algorithms (and already are). We need to be careful with how information is obtained, who has access, and what they have access to and for how long. We need to think through all of the impacts and have governmental oversight—this is similar to HIPAA and FERPA. The types of discrimination that occur are vast—anything from people with disabilities who have to use accessible technologies to those who don't use online technologies to those who think differently from the ways the programmers built the algorithms. It's impossible to know all of the ways that an algorithm could be discriminatory.”

An anonymous professor in the social effects of mass communication at a state university said, "People and organizations that design and understand these will benefit. Most people do neither, so will superficially benefit (in the sense that they will benefit in ways visible to them) but in some larger sense not fully benefit (the underlying aggregation is not there primarily to serve the individual, and profits do not accrue to the individual). Small benefits and efficiencies certainly will accrue, but even those not always in perceivable ways."

An anonymous associate research professor wrote, "It's called ‘The Doctrine of Unintended Consequences.’"

An anonymous CTO commented, "AI will help with UI (e.g., voice). Search algorithms with help find results. Other algorithms will help elsewhere."

An anonymous security architect with a national telecommunications provider in Canada wrote, "Firstly, I don't think there's a firm understanding of the degree to which bias (in code, in data collection methods, in analytical questions or methods) plays a role today. We're only just learning about this. In the meantime there is, in fact, useful data being used for novel purposes to beneficial outcomes. There is also a lot of sloppy work negatively impacting people's lives. There is a lot of opportunity for improved public health and safety, for improved interactions (at least for some people), for improved learning and working conditions. There will also be more decisions about people's educational, financial, employment, criminal, and health futures being decided by black boxes that no one can (or alternately is allowed to) describe. I also expect to see movement in this space similar to what we're seeing with search engine optimization, spam, and social interactions to start gaming these systems. This is inevitable. People naturally respond to the pressures of their environment and this will be a constant pressure over the next 10 gigaseconds. The first widely reported instances of gaming of the algorithms might crop up in criminal justice or health care. The clickbait headlines almost write themselves.”

An anonymous respondent commented, "With regard to algorithms used in for-profit businesses, the market will sort out the bad algorithms. As a trivial example, there was the practice of asking for criminal backgrounds for job applicants and automatically weeding them out, but lately there's been a backlash against this practice. And, more practically, good candidates were being inadvertently weeded out, and so the companies that were willing to explore a larger pool of candidates could find good employees that were being rejected by others."

An anonymous technology journalist wrote, "Somebody recently called algorithms 'money laundering for biases' and I think this makes sense. Of course people operate through their biases, it saves time and the effort of judgment. But if you do things quickly with algorithms you don't get to save that time; the workflow will expand, enabling people to do more stuff and also far more complicated stuff. For instance, if legal algorithms make divorce faster and cheaper, people will be tempted to marry and divorce more often, or they might choose to marry in legal venues where they know these algorithms are operating, Las Vegas-style. Also, it's a mistake to frame this as algorithms versus citizens; citizens might well use algorithms against bureaucracies, like, for instance, predictive modeling to aid in smuggling or money-laundering."

An anonymous respondent said, "There is a high risk that the pure commercial approach is limiting the positive potential.”

An anonymous respondent commented, "On the positive side, algorithms help people to save time and bring them selections that they are likely to enjoy. However, that has the potential to kill the joy of discovery and allow for change. In this sense, the chaos of a world without algorithms has the potential to spur innovation and ideas. One option could be to have a mechanism (button, link or other) to opt out of an algorithm—for one visit, or for longer—to give individuals the freedom to choose chaos over order.”

An anonymous senior security engineer observed, "I would prefer to simply say the outcome is unclear. Algorithms have great positive potential, but they also have great negative potential, so what direction this takes depends entirely on the course the people wielding the algorithms would like to follow.”

An anonymous associate professor at a state university noted, "Efficiency can be a positive, but it is often counter-balanced by losses for the already-marginalized. Without careful attention, any benefits created by algorithms will be matched by discrimination and other forms of unfairness."

An anonymous respondent said, "I can't tell when things are chosen by algorithms or sponsored. This gives credence to sponsored items that they might not merit—they are a new kind of ad. The echo chamber perpetuates biases in part because people are exposed to less variety, or perceive disagreement as hostility rather than variety or diversity. We see this polarization in news already. I am sure algorithms can be used to change this, but it seems no one has tried. I am worried about electronic health records because health care situations are where many people experience stigma and discrimination, and so one's podiatrist knowing sexually-transmitted infection or abortion history may not be relevant to care but will affect care, with increased stigma and discrimination. Predictive modeling could possibly help with parking, budgeting, planning public transit services, and other civil engineering tasks. I hope they will be used to improve such tasks.”

An anonymous CEO commented, "This is a bizarre question, meaningless without context. Every piece of computer code necessarily contains algorithms, as do all business processes, whether automated or not."

An anonymous respondent wrote, "The impact will largely follow the power dynamics of general society. There will be some exceptions: e.g., people who are wealthier (or algorithmically identified as likely to be wealthier) may pay more for online purchases, especially if they are not tech-savvy about adopting workarounds."

An anonymous respondent observed, "These algorithms will be self-serving to the various 'stacks' encoding them. The potential for good is there, but it will be co-opted by third-party marketers who provide the main revenue for the Big Five [Amazon, Apple, Facebook, Google, Microsoft].”

An anonymous technical analyst noted, "I'm afraid that exceptional/special-circumstances situations will become increasingly harder to handle, though handling of standard situations is likely to be streamlined."

An anonymous respondent said, "It is impossible to offer a sensible prediction at this level of generality.”

An anonymous respondent observed, "Algorithms are designed by a wildly unrepresentative population, and will only reinforce many societal divides.”

An anonymous principal engineer noted, "Algorithms are only as good as the people who code and operate them. The effect will depend on the situation. In areas where human judgment is required, I foresee negative effects. In areas where human judgment is a hindrance it could be beneficial. For example, I don't see any reason for there to be train accidents (head-on collisions, speeding around a curve) with the correct design of an intelligent train system. Positive and negative effects will also depend on the perception of the person involved. For example, an intelligent road system could relieve congestion and reduce accidents, but also could restrict freedom of people to drive their cars as they wish (e.g., fast). This could be generalized to a reduction in freedom in general which could be beneficial to some, but detrimental to others.”

An anonymous digital manager said, "There are sectors working on 'anti-racist' algorithms, algorithms that flag hateful content for additional review. This is good, but we need to invest more in doing this as soon as possible. People also need to be educated about how much algorithms impact their lives and how they work. I frequently do this with the Facebook algorithm.”

An anonymous network architect wrote, "This all depends on your worldview. I would argue that negatives outweigh positives because using technology and algorithms to 'show people how to live their lives' might save lives, but at the cost of 'humanness' in some way. For some who strongly believe in progressivism, this is the ideal solution—it is progress that is going to 'fix the human problem,' or rather the imperfections found in humans today. We will hopefully discover that relying on algorithms, and 'improving' humans is a dangerous sport with many unintended consequences. If we discover this soon enough, we might learn how to make information more accessible without viewing humans as somehow a collection of bits. The future here is murky.”

An anonymous professor responded, "If lean, efficient global corporations are the definition of success, the future will be mostly positive. If maintaining a middle class with opportunities for success is the criterion by which the algorithms are judged, this will not be likely. It is difficult to imagine that the algorithms will consider societal benefits when they are produced by corporations focused on short-term fiscal outcomes.”

An anonymous associate professor of political science at a major US university said, "Algorithms are the typecasting of technology. They are a snapshot of behavior influenced by contextual factors that give us a very limited view of an individual. Typecasting is a bad way to be regarded by others and it is a bad way to ‘be.’"

An anonymous professor at a US university wrote, "It is up to us to decide the overall effect of algorithms. The decisions we make today, right now, will decide the impact in the future."

An anonymous respondent wrote, "The impact of algorithms will be socially different. Assessing as positive or negative is to be looking backward. The important thing is the values and assumptions behind the algorithm and that these should be transparent. It will be an elite cadre coding algorithms—there must be oversight."

An anonymous professor of media production and theory said, "The negative impacts are likely to be stronger. I love the sophistication of the search engine, but it is a very dangerous thing to have your life defined by algorithms. We are each unique. It is more or less impossible for an algorithmic environment to provide aleatory experiences. While there is starting to be citizen response to algorithms, they tend to be seen as neutral if they are seen at all. Since algorithms are highly proprietary and highly lucrative, they are highly dangerous. With TV, the US developed public television; what kind of public space for ownership of information will be possible? It is the key question for anyone interested in the future of democratic societies."

An anonymous professor of sociology said, "Algorithms function to narrow one's access to information and experience and to reinforce existing tendencies, leading to the online segregation of people and overexposure to views, ideas, products that people already favor.”

An anonymous professor of humanities at a private college wrote, "I would like to say positive but political leaders must truly begin to understand how algorithms work. Policy is required to create limits around the misuse of algorithms and the data they manipulate. Also the public must be educated about the failure of objectivity in algorithms."

An anonymous senior lecturer in computing replied, "While more-complex algorithms will allow tailoring to individuals in areas such as health care and advertisements displayed, news and information dissemination will cater to the owners of the algorithms or those who can pay to influence behaviour with algorithms finding the right ‘buttons’ to press."

An anonymous respondent said, "The rise of unfounded faith in algorithmic neutrality coupled with spread of big data and AI will enable programmer bias to spread and become harder to detect.”

An anonymous respondent said, "Algorithms are today's targeted advertising. Not every ad is great for every person, but they are great for many. My biggest concern is regression toward the mean effect."

An anonymous respondent wrote, "We must view algorithms like any other tool. They can be used to accomplish positive things as well as negative things. The overall effect will be driven by who uses the algorithms and for what purpose.”

An anonymous director of research at a European futures studies organization commented, "We need to think about how to accommodate the displaced labour.”

An anonymous research assistant and instructor at a technical university observed, "It is obvious that you can sift through those people who are more literate than others. If you can give someone a set of instructions or tasks and they are not able to execute them because of a lack of literacy skills, they would be deficient."

An anonymous respondent said, "Both outcomes in the answer will happen: Algorithms will make decisions more quickly and easily for persons, and they will enable discrimination and the like. Overall, my guess is 50/50, although there will clearly be winners (people who generally receive advantages) and losers (people who generally receive disadvantages). A lot of the effect will depend on other societal trends (how we react to those algorithms; after all, they are still created by human beings)."

An anonymous principal and thought leader wrote, "Our technological capabilities outstrip our social structures to provide proper context and boundaries. The continued rapid advancement will always provide incentives for bad actors to game the social/political system faster than they can be reined in. Algorithmic decision making will overtake common sense and discretionary judgment."

An anonymous researcher at a major US research university commented, "It all depends on the quality of the algorithms and how transparent their operations are. High-quality algorithms with transparent structure could diminish discrimination. Low quality, black box style algorithms could increase it."

An anonymous respondent noted, "Sadly, there's going to be a negative effect. The idea of an ‘objective’ algorithm removing admittedly subjective humans from various situations is an extremely seductive one, but it's simplistic, and misses a variety of quite serious underlying concerns. First, what human judgment informs the algorithm? At some point, a human being had to decide what the parameters, the weighting, were. If obscured by the interface, or by the implied objectivity of the algorithm, these still very human decisions are removed from scrutiny or understanding. It's relatively easy to understand what a judge decided, and maybe why, if they explain their reasoning—but a black box sentencing algorithm is a different story. The potential for discrimination is vast, whether it is in the justice system, health care, voting, education, etc. We are at this strange cusp where most everyone acknowledges that humans make faulty judgments, but by running to solve this problem, we've simply moved the subjectivity bottleneck to a new level rather than confronting the problem.”

An anonymous principal engineer said, "The Law of Unintended Consequences is going to rule here, limiting both the positive and negative impacts of the technology.”

An anonymous respondent commented, "This is one area where we might see regulation aimed at preventing discriminatory use of algorithmic operations. If we do, then overall effect should be positive. If not? Either way, surely algorithmic inference can help with the ‘broken democracy’ problem.”

An anonymous respondent said, "Those with power are better positioned to use such algorithms, as these types of algorithms require large amounts of data and computing power. In the West, conservative politics have hamstrung nation-state political power and continue to do so. In the US, the federal government is incapable of rolling out workable computer solutions. Only those with economic power will be able to fully use algorithms in a widespread manner and they will do it for their own gain, behind closed doors, and will claim intellectual property protection whenever anyone asks for an audit or something to that effect."

An anonymous respondent said, "That which is most valuable in terms of quality of life, is precisely that which cannot be automated. In some cases, automation will take away value, as it becomes vulgar. Automated comments have the same value as voicemail systems telling you: We value your business."

An anonymous respondent wrote, "It seems that you are imputing an awful lot of agency into a theoretical concept. Why the focus on ‘algorithms’ rather than users, data, human-computer interaction?”

An anonymous professor commented, "Increased convenience for most, with increased discrimination and diminished access for others. Is that ‘50-50’? I think it's more useful to think in terms of distributional impacts: I expect durable inequality to be the result.”

An anonymous senior fellow at a futures organization studying civil rights observed, "It depends on how they are implemented. There is a need for human reviews to ensure they are fair and not running amuck—after all, people create algorithms. There must be redress procedures since errors will occur."

An anonymous respondent based in a network in Kenya noted, “Positive changes: we are likely to see more customised content for audiences such as Africans, etc. Negative changes: probability of skewed messages, unfair advertising, surveillance on political adversaries, etc.”

An anonymous respondent wrote, "The positives outweigh the negatives, but only if we restructure how society works. For instance, in education, it's no good for everyone to have access to free higher education through massive, open online courses [MOOCs] if all the jobs, from fast food joints, to programmers, to realtors, have all been put out of work by algorithms and everyone is expected to work 40 hours a week or be unable to afford housing and food. We need a societal change that accepts the dwindling availability of traditional work, or we'll have PhDs rioting because they can't afford to eat. Something like Basic Income will need to be implemented if increased automation is going to be a good for humanity."

An anonymous developer replied, "Many white-collar workers are doing jobs that could be better done by algorithms based on the current data."

An anonymous chief scientist wrote, "Whenever algorithms replace illogical human decision-making, the result is likely to be an improvement.”

An anonymous associate professor at MIT observed, "The negative effects will be debugged over time. There is already work in that direction."

An anonymous deputy CEO wrote, "I hope we will finally see evidence-based medicine and integrated planning in the human habitat. The latter should mean cities developed with appropriate service delivery across a range of infrastructures."

An anonymous respondent proposed, "Algorithms initially will be an extension of the 'self' to help individuals maintain and process the overload of information they have to manage on a daily basis. 'How' identities are managed and 'who' develops the algorithms will dictate the degree of usefulness and/or exploitation. Fast-forward 200 years—no governments or individuals hold a position of power. The world is governed by a self-aware, egoless, benevolent AI. A single currency of credit (a la Bitcoin) is earned by individuals and distributed by the AI according to the 'good' you contribute to society. The algorithm governing the global, collective AI will be optimized toward the common good, maximizing health, safety, happiness, conservation, etc."

An anonymous professor of information and history at a state university replied, "Positive changes will include: better-targeted delivery of news, services, and advertising; more evidence-based social science using algorithms to collect data from social media and click trails; improved and more proactive police work, targeting areas where crime can be prevented in advance. Negatives may include: increasingly siloed political news leading to magnified filter bubbles in which people don't hear as much about competing viewpoints and ideas; more massive thefts of credit card numbers and identity data; inadvertent (or deliberate) discrimination in credit checking, leading to segregation in housing."

An anonymous respondent observed, "There's a lot of potential for algorithms to improve life, with the biggest worry at this point being abuse of private data. I know people who are worried about AI getting out of control, and I don't rule that out. But I don't see any reason to view it as particularly likely."

An anonymous professor working at Stanford University wrote, "It is my hope that algorithms plus other technologies such as machine learning will have positive impact. The major issues will be how this can be done without creating the illusion of privacy invasion, intrusive marketing, etc."

An anonymous professor at New York University said, "Weighing good or bad depends on comparing it to what? No non-algorithmic future is possible (since no non-algorithmic present is possible). Automated filtering and management of information and decisions is a move forced on us by complexity. False positives and false negatives will remain a problem, but they will be edge cases, as with problems with search today, the central algorithmic tool of the last two decades."

An anonymous respondent wrote, "In general, there is a self-selection process in algorithm results. Good algorithms produce better results, and in a marketplace that has a balanced ability to see the results, this will tend to optimal results. The problem occurs when marketplaces are biased or corrupt. The ideal environment would be one in which transparency were required, with or without control over the actual algorithms. In places where algorithms are used to create or enforce public policy, both algorithm transparency and data availability are crucial for avoiding public harm."

An anonymous respondent replied, "Positives will win, but only if we work at exposing, understanding, and dealing with the negatives. The programmers alone won't do it."

An anonymous computer security researcher observed, "This is a qualified 'yes.' The algorithms have already been demonstrated to have biases: gender, race, and class. This is already affecting the job postings people are shown, news stories, and products suggested. In the worst case, these algorithms will be used to control and direct public opinion and behavior. For reference, look at how the media was used to get women out of the work force after WWII—TV shows called 'wrappers' directed at women (see the book The Way Things Never Were). That said, algorithms, particularly combined with machine learning and data analysis, could result in products that predict self-defeating behaviors and react and incentivize in ways that could push users far further than they could go by themselves."

An anonymous respondent said, "Positive: predicting diseases. Negative: predictive job-candidate matching."

An anonymous respondent observed, "I hope the positives will outweigh the negatives. For that to happen, personal data must be protected, privacy respected, and free will must be the ultimate determining factor. Predictive modeling that results in better outcomes is good; predictive modeling that usurps choice or penalizes individuals is bad."

An anonymous instructor at a state university observed, "The impact of the algorithms will be positive if we continue to be thinking and discerning consumers of information. One skill I work to impart to my students is to provide them with the background knowledge to know when something is being left out. People need to know that they are not required to click on the first search result. They also need to be savvy about the quality of the source."

An anonymous associate professor of mathematics at the Université Abdou Moumouni said, "I definitively support the positive impacts of usage of algorithms to cope with routine tasks and also to open new ways of making human thinking more insightful. Indeed, we need to bear in mind side effects. It's the way of human history; cars were not invented to facilitate armed robbery."

An anonymous research psychologist wrote, "One benefit of algorithms is that they won't have subjective discrimination. The downside is systematic discrimination. The latter can be studied and qualified and resolved more easily than the former."

An anonymous engineer at Neustar wrote, "I am convinced that the positives of using a much larger quantity of data, coupled with much more complex algorithms ('big data analytics') will radically improve our ability to personalize services to individuals. I'm sure this capability will also lead to unfortunate uses, but I believe that the good will far outweigh the bad."

An anonymous respondent replied, "Positives include measurable and often significant efficiency. Negatives are fear of exposing social rot; that doesn't sound very negative."

An anonymous respondent said, "One can see spin in algorithms. One can't see spin in human decision-making. (See the Internal Revenue Service.) Open-source algorithms are transparency applied to decision-making."

An anonymous respondent commented, "If there is transparency and accountability, there will be a net-positive impact, but there is no incentive or regulatory environment to move us in those directions."

An Internet Hall of Fame member said, "Algorithms are just the coded form of human judgment and decision-making. Their use is getting increased scrutiny from the public."

An anonymous respondent observed, "Algorithms will provide a lot of advantages, with easier interactions and better service. As for the downsides, there will always be downsides. The difference is that with algorithms, you can change it much more easily. Take the discrimination fear. Discrimination can occur with both algorithms and in real life, but it’s much easier to change an algorithm than a person's bias. If there is a problem with the algorithms, whatever they may be, they can be updated, fixed, or totally replaced.”

An anonymous respondent who works for the US government commented, "I expect that the positives will outweigh the negatives because I believe reason favors fairness for groups of people. However, there is a tension between the wishes of individuals and the functions of society. Fairness for individuals comes at the expense of some individual choices. It is hard to know how algorithms will end up on the spectrum between favoring individuals over a functioning society because the trend for algorithms is toward artificial intelligence. AI will likely not work the same way that human intelligence does."

An anonymous technical director wrote, "While there is some traditional privacy that will be lost, the gains in health science, social science, and technology will be able to increase the quality of life for many."

An anonymous respondent said, "Algorithms provide the backbone for any online toolset, and we can already see that the existence of our online tools has done a lot to improve our everyday life.”

An anonymous respondent wrote, "Although the positives outweigh the negatives, there is a significant risk this can have some very negative impacts in society. The positives are that we may be able to use some of the algorithms to make quicker decisions and move more quickly in times of crises. For example, we can more easily pinpoint the spread of infectious diseases and react more quickly to halt the spread. The negative risks are based on who has access to the data and what they plan to use it for. It could be easy to twist data to fit a certain political agenda. For example, if there are a high number of Latinas that are diagnosed with Zika infections, a group might try to argue that no Latinas should be allowed to be pregnant because they have a high number of Zika infections. I think the best way to provide oversight is to allow more people to view the data and be a part of discussions on how to use or track the data. The more open our world is, and the more diverse, the less likely any one group can use this data negatively."

An anonymous respondent wrote, "I'm sure there will be numerous anecdotal travesties, and construably criminal oversights galore, and I'm sure people in positions of power will use the power of math to consolidate theirs. In fact, the more I think about it, the more worrisome it seems. Reducing people to numbers is at the root of nearly all dystopian science fiction, no? That said, our grasp of complex math is in a constant state of refinement, and I believe complex math underlies the entire natural universe. So, I don't know if a decade is a long enough time frame for society to reap the algorithmic method's greatest benefits, but I choose to believe that, in this sort of math, we're pursuing a valuable direction for the benefit of mankind."

An anonymous futurist commented, "We will end up looking at algorithms as extension of our brains—making all and every process easier to replicate. There will be a 'winner takes all' outcome in that better algorithms will outperform others—but on the way there, we will have numerous stray mistakes. Just like an incompetent teacher can harm by spreading incorrect knowledge. Though it may be bumpy, the road is promising.”

An anonymous respondent observed, "Algorithms are a can of worms—not in and of themselves because the underlying principle is reasonably sound—but because the data they collect and use is in the hands, as often as not, of people/businesses with no vested interest in or responsibility for the validity, availability and security of those data. There are three things that need to happen here: 1) A 21st century solution to the prehistoric approach to passwords; 2) A means whereby the individual has ultimate control over and responsibility for their information; and 3) Governance and oversight of the way these algorithms can be used for critical things (like healthcare and finance), coupled with an international (and internationally enforceable) set of laws around their use. Solve these, and the world is your oyster (or, more likely, Google's oyster).”

An anonymous respondent commented, "Picture-archiving and communications systems [PACS] for radiology reports and other medical documents were supposed to revolutionize healthcare, making patient care so much better, easier, cheaper, and more comprehensive by allowing for better data, fewer medical errors, better sharing among practitioners and specialists, etc. Although all hospitals and even most small/private clinics now have some variant of PACS systems, there is still much potential that has not been realized. Self-driving cars are making great progress—and could dramatically reduce the number of accidents we have per year, as well as improve quality of life for most people. Unfortunately, discrimination will be very easy: much like actuarial tables are used by insurance companies, our data and behavior modeling can be used to determine risk for medical and other insurance, housing, purchases, etc. Our legal system and government will have to make a huge push to keep up with the rapid pace of scientific development—something that will be difficult to do in the anti-science society we live in, in which both politicians and individuals fear science and try (in numbers unprecedented for some decades) to return to religion in lieu of science."

An anonymous research officer said, "I'm hopeful that the positives of such efficiencies will outweigh the negatives, but it will be important to build systems of accountability, review, and legitimacy for the public to accept them as fair and efficient."

An anonymous respondent said, "Algorithms can ease the friction in decision-making, purchasing, transportation, and a large number of other behaviors in ways that will ultimately benefit society. There is clearly discrimination built into many algorithms, but a combination of social and market pressure can, over time, help solve some of these ingrained biases. A bigger danger is that a very few companies have the most useful algorithms, making it much harder for upstarts to compete on their merits.”

An anonymous respondent commented, "Algorithms are programs written by humans, not devil creatures from outer space :) The problem today is that a few companies (*they* are the problem) have opaque control over a few algorithms."

A principal consultant at a top consulting firm wrote, "Fear of algorithms is ridiculously overblown. Algorithms don't have to be perfect, they just have to be better than people. Also, people often confuse a biased algorithm for an algorithm that doesn't confirm their biases. If Facebook shows more liberal stories than conservative, that doesn't mean something is wrong. It could be a reflection of their user base, or of their media sources, or just random chance. What is important is to realize that everything has some bias, intentional or not, and to develop the critical thinking skills to process bias."
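
The consultant's "just random chance" point can be made concrete with a small simulation (a hypothetical sketch, not drawn from any respondent's data; the pool size and the 65% "looks slanted" cutoff are assumptions chosen for illustration): even a feed that draws stories uniformly at random from a perfectly balanced pool will, on a sizable fraction of days, show a sample that looks politically skewed.

```python
import random

random.seed(42)

# A story pool that is exactly 50/50 liberal/conservative.
pool = ["liberal"] * 500 + ["conservative"] * 500

skewed_days = 0
for day in range(1000):
    feed = random.sample(pool, 20)              # 20 stories shown per day
    liberal_share = feed.count("liberal") / 20
    # Count days where the feed looks noticeably "slanted" (>= 65% one way),
    # even though the sampler itself has no bias at all.
    if liberal_share >= 0.65 or liberal_share <= 0.35:
        skewed_days += 1

print(f"{skewed_days} of 1000 days looked skewed")
```

With a 20-story sample, roughly a quarter of days land outside the 35–65% band purely by chance, which is the consultant's caution in miniature: an apparent slant on any given day is weak evidence of a biased algorithm.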

An anonymous senior principal engineer said, "The benefits cannot be realized unless (as a separate matter) people know how to achieve happiness.”

An anonymous professor emeritus wrote, "Technology has been enormously helpful for people to be more efficient and productive. It would be a serious mistake to try to slow this down. There will be some people who lag in adapting and we will have to do the best we can to help those people adjust. It is unfortunate for some but overall beneficial to society. Change is always difficult."

An anonymous information privacy researcher said, "Coming from a computer science and social science background, I see that the current way algorithms are being deployed and used is positive with some negative stories here and there. My hope for the future is that companies will continue to perform inclusive user-centered studies to include all portions of the population to design culture- and values-sensitive algorithms.”

An anonymous respondent wrote, "Most questions these days about algorithms assume they exist somehow outside of a human realm and spring up on their own. Not true! As always, it's human values, human approaches, human biases that *determine* the software tasks we want or need help doing. So I see more useful algos than not that relate to specific tasks, and anticipating next steps in workflows and processes. Anything that overtakes stupid workflows and dumb interactions will be good. In theory that should include everything to do with government (local, state, federal), healthcare (ACA, Medicare, etc., more than private plans, probably), job training, job applications, military / veterans services, etc. As for discrimination, it's all in the screening/limiting/funneling process. And that in turn goes to the human creators.”

An anonymous respondent commented, "We'll wind up with versions of the classic tradeoffs of benefits versus privacy that have been percolating around online (and offline, cf. loyalty cards) behavior for a long time now. But there will be enough improvement in individual experiences plus enough novel collective-good type of applications (e.g., health) where the aggregation of data is a win that it will outweigh costs to privacy and risks to individuals around profiling/discrimination or to groups via polarization through personalization (though I believe those will occur as well)."

An anonymous researcher noted, "In general the impact should be very positive, but there is a risk from powerful algorithms with their own goals, should these arise."

An anonymous clinical informaticist said, "Positive—greater ability to understand and work within structure. Negative—lack of ability to work without structure.”

An anonymous information systems research analyst commented, “'Citizen scientists’ have a lot of access to open databases, and they could have a serious voice, being heard in case of a social risk approved after the application of algorithmic intelligence.”

An anonymous respondent noted, "The benefits do outweigh the negatives for all involved: government, business, and individuals. There will be obstacles and setbacks, but we will move forward.”

An anonymous respondent said, "The positives include fewer underdeveloped areas and more international commercial exchanges. The negatives include less local control of economy and politics. All dimensions of life will be affected: health care, consumer choice, dissemination of news, education. It will give us easy access to data. Oversight mechanisms might include a much-improved formulation of rights to access data and information and the build-up of an oversight framework of institutions, at both national and international levels.”

An anonymous graduate student at Harvard University said, "Algorithms can offer efficient analyses of problems, and hopefully move us closer to important solutions."

An anonymous futurist commented, "We will definitely have some unintentional as well as intentional bias built into the machine intelligence of the future. I believe most of these will be screened out in the future as we develop 'scam filters' for this type of bias and deception.”

An anonymous respondent noted, "Bias in algorithms is a reflection of the societies that build those systems. The root cause is not the technology. As people become better educated about how bias persists, then technologies will be better built to minimize these effects.”

An anonymous respondent commented, "Algorithms will find knowledge in an automated way that is much faster than traditionally feasible. Like any knowledge, the knowledge found by algorithms can be applied for good or evil, and often both good and evil. The dimensions of life affected depend on the availability of sensors and data collectors for each dimension. Given the rate of sensor development and network capability growth, it is likely that many aspects of life will be changed by algorithm-generated knowledge. Availability of funding will guide their development: the more funding for good purposes, the more good knowledge will be generated.”

An anonymous respondent noted, "Algorithms are merely the codification of human decision-making, generally representing better decision-making than any individual is capable of. They make all processes more efficient and effective, and their discriminatory biases or opacity can be brought to light.”

An anonymous CTO said, "Machine learning and AI algorithms only recently started picking up traction and are still very nascent, requiring a lot of human intervention. Machine learning and AI, just like any other automation (cars, robotic factories, etc.) will improve society.”

An anonymous respondent with the Internet Engineering Task Force wrote, "Sub-optimization will help many things get done the best known way. It takes a very long time for best practices to permeate through society. (E.g., for best practices in medicine to be assimilated by most doctors.) Algorithms can capture best practices. However it is hard work to systematize all important knowledge. So this only happens incrementally.”

An anonymous professor of media and communications observed, "As long as we can now put the safeguards in place, and recognize the need for humans and affirmative systems to guide and shape the role of algorithms, there are many positives to be realized."

An anonymous respondent said, "The main positives will be customized services and facilitated seeking of products and services. Discrimination will increase, but if the right laws are put in place it can be mitigated."

An anonymous respondent observed, "Algorithms will help diagnose medical issues easier and cheaper, leading to an increase in life expectancy. They will also positively affect education by allowing for smarter adaptable courses and training."

An anonymous IT director said, "The fact that people are already having this conversation about potential negative effects of algorithms gives me hope that future algorithms will be designed to minimize these effects as much as possible. I'm sure there will still be cases of unintended or unforeseen harmful societal impacts, but as long as we as a society continue to have this conversation and use any failures as learning opportunities, I predict the overall net effect will be much more positive than negative."

An anonymous respondent noted, "This question is funny; it doesn't correspond to the well-known definition of ‘algorithm.’"

An anonymous assistant director commented, "As with any new technology, testing will take place and lessons will be learned. The fundamental problem I see with the growth in an internet-connected society is that the amount of information available is out of scale with a person's ability to consume it in a useful way. Given this, we must overcome the scale of information in some way, and using algorithms is an effective way to help bring us manageable amounts of information when we need it. The problem is that when all the information we consume is customized to our taste we lose seeing different vantage points. I already sometimes have to log out of my Google account when doing research in order to get a range of viewpoints rather than information that is filtered for my age, gender, geography, past search history, and a host of other facets. I fear that in a world where all the information you get is targeted for you, we will lose the ability to understand divergent viewpoints and the lives of those outside our reality. (Kind of like trying to have a logical conversation with grandpa who watches Fox News 24/7.) I also worry that this sort of segmenting can intentionally or unintentionally have a propaganda effect, which can be used to make societies do bad things—all while the people believe that, since it came from the internet, they were completely free to choose the information they consumed. Bottom line—there is a way to make algorithms work to our benefit, but it should be an open process to ensure transparency.”

An anonymous respondent wrote, "Our world has gotten much larger with the arrival of the internet and it is no longer possible to search or find everything without an algorithm. Fortunately, the increased size of our data and services pool with the help of complex algorithms can refine our choices to get an extremely tailored result that otherwise would not be possible.”

An anonymous Web developer commented, "There seems to be an irrational fear of some vague 'algorithms' shaping the way we view information on the Web. As we live in constant bombardment by information, there is a need for tools to sift through that information, and display what is relevant to us. It seems these are the ‘algorithms’ we're talking about. The Web is already shaped by the ‘algorithms,’ and their relevance is easily perceivable. There's a fear that the algorithms may bias the data you see, and there is some evident bias, but internet users learn to take that bias into consideration.”

An anonymous digital media archivist noted, "The main negative of this scenario is that data can erode and get lost far more easily than pieces of paper! Though being able to access medical records from across the globe in the case of an emergency can be a life-saving development, there are always hacking and privacy/leak risks. Discriminatory practices may be built into these systems (wittingly or not) because the systems are designed and built by humans who come equipped with discriminatory and categorical minds."

An anonymous IT manager and systems administrator commented, "Algorithms can crunch databases quickly enough to alleviate some of the red tape and bureaucracy that currently slows progress down. For example, complex algorithms could have a dramatic impact on voting processes and efficiency in the US. We could institute vote-by-mail or even online voting nationwide, and reduce barriers to civic engagement. The fears around algorithms have less to do with algorithms themselves, and more to do with adequate security and privacy."

An anonymous respondent noted, "Anything where sorting or recommendations matter will be AI-assisted at some point. Since our society is unfair, any learning engine will gather real-world data, which will leave it biased. To counter this, we must include programming that recognizes this bias and eliminates it from consideration—we don't understand the problem quite well enough, though, and can only know about it if/when algorithms are 'open,' their code and details published. The danger is that a nontechnical public and leadership will not consider this issue.”

An anonymous devops engineer commented, "In general, this trend is positive for the well-off and educated, but it will make a hard life harder for at least the bottom 50%.”

An anonymous respondent said, "Deep-learning algorithms will be widely used to make decisions that were previously made by someone's subjective opinion. The results will be empirically shown to have benefits and some of these algorithms will later be shown to be racist, sexist, etc., and will have to be tweaked by hand to comply with laws and rules that people accept as fair. Eventually deep-learning algorithms will take non-discrimination into account.”
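
The hand-tweaking toward "laws and rules that people accept as fair" that this respondent describes can be illustrated with a minimal post-hoc fairness audit (a hypothetical sketch, not any deployed system; the group labels, toy data, and use of the four-fifths rule from US employment-selection guidelines are illustrative assumptions): compare a model's positive-decision rates across groups and flag any group whose rate falls below 80% of the highest.

```python
# Hypothetical sketch of a post-hoc fairness audit: compare a model's
# positive-decision rates across groups and apply the "four-fifths rule"
# (each group's selection rate should be at least 80% of the highest rate).

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_violations(decisions, threshold=0.8):
    """Return the groups whose selection rate falls below the threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Toy data: group B is approved far less often than group A.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6

print(four_fifths_violations(decisions))  # flags group "B"
```

An audit like this only detects disparate outcomes; deciding whether the disparity is unjustified, and how to adjust the model, is exactly the human "tweaking by hand" the respondent anticipates.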

An anonymous civil engineer working in state government said, "Algorithms can have a good effect but the net effect is negative as even the best algorithm has the built-in inherent bias of its programmer as well as intentional corporate bias baked into it.”

An anonymous analyst programmer commented, "As long as algorithms are self-maintaining, and not unduly influenced by ‘forces of evil’ they should become better for all involved.”

An anonymous systems engineer noted, "Many things will be based on algorithms. There will be self-driving cars. Bots will follow orders to buy your stocks. Digital agents will find the materials you need online, etc.”

An anonymous respondent wrote, "I suspect that, for the vast majority, such algorithms will make marginal improvements in myriad aspects of their lives. So in aggregate, they will be touted as providing great improvements in the lives of citizens. But any benefits to individuals will only be arrived at once corporate profits have been maximized. And there will be some individuals for whom such algorithms will create entirely new means for discrimination, and on these people the effects will likely be devastating. In short, predictive modeling will provide a smidgen more convenience for the masses, while allowing corporations to wring even more profit from them, even further devastating the lives of the most marginalized.”

An anonymous respondent commented, "The great thing about using expert systems is that you don't need to be an expert to gain some really valuable advice. However, there is a range of potential disadvantages, for example people who blindly accept advice and don't understand it enough to critique it relevant to their own context and preferences. If these systems are built with appropriate protections (and disclaimers), they can do enormous good. But there is no such thing as a free lunch, and users must be wary of the inclusion of hidden assumptions (profit is better than community health), etc."

An anonymous chief strategy officer observed, "It will be critical that we move quickly to align governmental policies to address these issues.”

An anonymous respondent noted, "It will require refereeing by the legal system for the foreseeable future."

An anonymous respondent commented, "Algorithms will need something similar to net neutrality's protections to maintain the goal of an open and free internet. The gatekeepers creating algorithms, like Google and Facebook, have a disproportionate amount of control over what people manage to see online. We're already seeing companies create walled gardens of content that they alone manage with no transparency or oversight. I believe the success of these companies will encourage others to do the same and the internet will get more and more closed off until no one strays away from those platforms to go to discrete websites anymore. The internet will become more like TV and radio, all corporate sponsored/approved, full of advertising, controlled by the rich and powerful. I hope algorithms become more transparent, with user settings that you can see and configure yourself, rather than having everything act in the background based on data collection and analysis. Informed consent is missing from the current arrangement, as most consumers barely understand how they're being monitored or how algorithms work. I don't believe companies need or deserve access to all the information they're currently getting."

An anonymous respondent noted, "While the positives outweigh the negatives, people will find themselves in bubbles of information that they will be unable to escape. Algos will make things easier, but we had better be careful; see Minority Report.”

An anonymous respondent said, "The harmful societal impacts mentioned in the question can be mitigated with legislation and/or careful engineering. The positive impact can be summarized as an unprecedented improvement in efficiency in nearly every field from health care to transportation and shipping. We will see less pollution, improved human health, less economic waste, and fewer human jobs (which must be managed by increasing state-funded welfare)."

An anonymous network architect at a major mobile communications corporation commented, "Awareness of the dangers is growing such that the very real dangers will probably be addressed and the net result will be positive. I suspect some areas will be overwhelmingly positive and other domains will end up being poisoned by bad actors."

An anonymous respondent said, "I'm not sure this is a valid premise. Software reflects the talents and prejudices of its designers. The open-source movement will play an important role in keeping those designers honest, so to speak."

An anonymous respondent wrote, "In general, predictive modeling will have the potential to improve many fields via personalization and optimization: improvements in healthcare and education will have the most material impact, while things like personalized product or activity recommendations will make small improvements to daily life. These benefits will accrue disproportionately to the parts of society already doing well—the upper middle class and above. Lower down the socioeconomic ladder, algorithmic policy may have the potential to improve some welfare at the expense of personal freedom: for example, via aggressive automated monitoring of food stamp assistance, or mandatory online training. People in these groups will also be most vulnerable to algorithmic biases, which will largely perpetuate the societal biases present in the training data. Since algorithms are increasingly opaque, it will be hard to provide oversight or prove discrimination—algorithms and data science techniques will need to be used there too, which will be a challenge in the face of closed proprietary data.”
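
This respondent's point that "algorithms and data science techniques will need to be used" for oversight of opaque systems can be sketched as a black-box counterfactual probe (a hypothetical illustration; `opaque_model` is a made-up stand-in for a closed proprietary system, not a real one): submit matched pairs of inputs that differ only in a protected attribute and count how often the decision flips.

```python
# Hypothetical black-box audit: probe an opaque decision function with
# matched pairs of inputs that differ only in a protected attribute.

def opaque_model(applicant):
    # Stand-in for a closed proprietary model; this toy version
    # (wrongly) penalizes group "B" directly.
    score = applicant["income"] / 1000
    if applicant["group"] == "B":
        score -= 5
    return score >= 50

def counterfactual_flips(model, applicants):
    """Count applicants whose decision changes when only 'group' changes."""
    flips = 0
    for a in applicants:
        twin = dict(a, group="A" if a["group"] == "B" else "B")
        if model(a) != model(twin):
            flips += 1
    return flips

applicants = [{"group": g, "income": inc}
              for g in ("A", "B")
              for inc in (48_000, 52_000, 56_000)]

print(counterfactual_flips(opaque_model, applicants))  # 2 decisions flip
```

No access to the model's internals is required, which is why probing of this kind is one of the few oversight tools that works "in the face of closed proprietary data."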

An anonymous learning systems and analytics lead said, "There are extremely high stakes and important issues with the 'algorithmization' of society, but the fact is that there will be huge gains in important areas for human health and well-being. If we can transition to a global economy where humans don't do all jobs, and where we've been able to rein in the issues of rights, discrimination, etc., with increased application of analytics/data science/algorithms, then we'll have some incredible innovations to look forward to."

An anonymous respondent noted, "Positives will outweigh negatives if and only if those algorithms can be reviewed for inadvertent bias, and transparency exists so the public can see what factors are being considered.”

An anonymous respondent wrote, "Good designs equal good code."

An anonymous survey participant replied, "Currently algorithms tell us what will most likely happen. But there is a huge market for them to tell us what should happen. As we store and use more data, it has huge impacts on healthcare. But how much information should be given out, and how much control does the public have?"

An anonymous respondent said, "Algorithms can create much more ease of use in most systems, and can only be updated and refined with time.”

An anonymous computer software sales engineer wrote, "The golden rule: He who owns the gold makes the rules."

An anonymous respondent working in global public policy at a major telecommunications company said, "Data analytics and algorithms will deliver enormous social benefits such as improved health care, smart cities, and productivity gains. However, consumers increasingly will be concerned about profiling and the impact of automated decision-making."

An anonymous senior researcher employed by Microsoft replied, "Yes, algorithms can cause biases to be exacerbated. But they also enable us to search the Web and sequence genomes. These two activities alone dwarf the negatives."

An anonymous respondent wrote, "More and more data is being generated about everyone and everything—algorithms help make sense of that. Technically speaking the algorithm is neutral in identifying patterns or making predictions, but for sure there's a possibility that groups of people (however they're grouped or delineated) get characterized a certain way based on their data."

An anonymous partner at a business firm replied, "Great improvements in efficiency will result from algorithms, and, in the foreseeable future, human beings will still be able to understand the algorithms well enough to adjust behavior accordingly. See how we have adapted how we search on Google. The algorithm adjusts, and how we type in search terms adjusts along with it."

An anonymous respondent wrote, "As we learn more about algorithms, we can fine-tune them to get real projections.”

An anonymous respondent wrote, "As my online ‘character’ is tracked and modeled, I find directed ads on many pages on items I might be researching. The extent that this occurs seems to be more accurate every year and catches my eye more.”

An anonymous respondent wrote, "We've already seen that algorithms can have unintended consequences, such as crime-reduction algorithms which unjustly target Black neighbourhoods. But some of the problems will get ironed out by increasing diversity in tech, and perhaps an increasing understanding of the sorts of problems algorithms are good at solving. It's still human reasoning that's being used, if at one remove.”

An anonymous respondent wrote, "Government and corporate big data-based algorithms have already hit a pretty high level of saturation. While there are still gains to be made, I expect that crowd-sourced and private data efforts to clarify complex procedures (e.g., appealing a parking ticket or buying health insurance) will lead to a measured increase in quality of life."

An anonymous respondent commented, "Anyone found to be using algorithms in a negative way will face a backlash. With so many people familiar with how algorithms work and what they are doing, abusers will get caught.”

An anonymous engineering student said, "Any errors could be corrected. This will mean the algorithms only become more efficient to humanity's desires as time progresses."

An anonymous system administrator commented, "Overall the effect will be positive, though care must be taken as to who sets the rules. We can't allow geeks or libertarians to do it; we need some kind of rainbow coalition to come up with rules to avoid inbuilt bias and groupthink affecting the outcomes."

An anonymous technical worker commented, "There will be many positive outcomes as people are able to better manage their lives. The biggest negatives will come from governments using these same technologies to spy on their own citizens. Some corporations will also try this, but in both cases, we will have Snowden-level events that will keep shifting public opinion to require more and more openness in algorithm usage and capabilities."

An anonymous respondent commented, "I find this very difficult to answer. How are we to compare the reduction of manual labor with the loss of privacy? It's all apples and oranges."

An anonymous professor at a private university observed, "Computer algorithms that are helpful become mainstream. Sorting, virtual memory paging, information retrieval, and natural language processing have become successful.”

An anonymous operations NCO noted, "The most significant negative aspects are those that exist without algorithmic support, just in a heuristic 'human' manner (prejudice, disenfranchisement, etc.). Properly monitored and tested, algorithmic operations just make it amenable to manipulation and correction."

An anonymous respondent wrote, "There's a lot of negative—the echo chambers of Facebook, invasions of privacy for targeted advertising/security, but I cannot let go of the idea of self-driving cars and the freedom of place and freeing of time it can afford everyone with access to it.”

An anonymous devops engineer noted, "Have you ever tried a bad search engine? They demonstrate really nicely how helpful the search algorithms Google has come up with are. Yes, there will be algorithms like what Facebook has been playing with, but even Facebook is trying to tweak its algorithms to be more network-focused and less sponsor-focused. Even when it was more sponsor-focused, at least the algorithms did a bit to break up the homogeneous worldview people tend to trap themselves in. There is great opportunity for those systems to be abused as well, but for the next decade I doubt anybody will come up with something subtle enough to do more harm than good without people noticing and reacting to it."

An anonymous respondent commented, "The main positives will probably be in the fields of commerce and information gathering—both for research and for reporting. Negatives will certainly include mistaking similar variables (socio-economic status and race, for example), with unforeseen outcomes (including discrimination masked as objective data gathering)—not renting to certain applicants, or not hiring certain applicants, because of metrics that become a proxy for race."

An anonymous psychologist and mobile applications lead noted, "Human decision-making is fallible at the 'hardware' level due to evolutionary constraints and needs. Like any tool—for example atomic power—algorithmic enhancements to augment our fallibility could be used both irresponsibly and for the benefit of humankind. To not explore these benefits would be akin to saying, ‘Let’s not research antibiotics to enhance our immune systems because they could one day lead to superbugs that will kill us all!’"

An anonymous business owner said, "The efficiencies of algorithms will lead to more creativity and self-expression."

An anonymous cloud-computing architect commented, "We have always had algorithms. They are part of our social and cultural fabric. A perfect example is ethnic bias—racism. Ethnic bias is a cultural 'shortcut' that intends to simplify social interactions by labeling people dangerous or not dangerous based on ethnic characteristics. The problem is that racism is a very bad algorithm in that it runs counter to modern society's objectives of peace, justice, and equality. It has survived because it exists as an implicit algorithm. Modern algorithms differ from classical, cultural, and social biases in several ways: 1) They are explicit rather than implicit, which makes them more open to scrutiny, discussion, and evaluation. 2) Their impact is measurable. We can objectively evaluate an algorithm's impact on social interactions, and decide if it helps or hurts our society. This gives us the opportunity to design algorithms with feedback loops for continuous improvement for specific outcomes. 3) They are institutional, rather than cultural. Modern algorithms are maintained by institutions with distinct corporate organizations. While algorithms may conflict with each other and larger social goals, we can hold individuals and organizations accountable for outcomes, as well as accountable to change algorithms that result in counterproductive outcomes. 4) They are increasingly complex. Reality is complex. Human interaction is complex. The complexity of algorithms managed in computers reflects the complexity of human experience. This means we can select models that more accurately represent human decisions to improve decision-making and policy, without the dangers of over-simplification. The dangers of algorithms, though, are real. While they may be explicit, if they are proprietary or secret, or if they are only focused on a narrow set of data, then we can't have meaningful public discussion about their effectiveness. 
Closed algorithms in closed organizations can lead to negative outcomes and large-scale failures. If there is not enough oversight and accountability for organizations and how they use their algorithms, it can lead to scenarios where entire institutions fail, leading to widespread collapse. Nowhere is this more apparent than in critical economic institutions. While many of these institutions are considered ‘too big to fail,’ they operate based on highly secretive and increasingly complex rules with outcomes that are focused on only a single factor—short-term economic gains. The consequence is that they can lead to economic disparity, increased long-term financial risk, and larger social collapse. The proper response to this risk, though, is to increase scrutiny into algorithms, make them open, and make institutions accountable for the broader social spectrum of impact from algorithmic decisions.”

An anonymous respondent observed, "Algorithms aren't just about newsfeeds. Take this scenario: My mom needs a new hip. Based on her body scans, an algorithm can create and submit a design to a 3D printer—and a custom hip will be created for her in minutes at a very low cost. Warehouses will be virtually eliminated due to on-demand fulfillment based on algorithms. Any tech can be used for nefarious purposes, but data algorithms will completely change the world for the better."

An anonymous respondent noted, "I'm excited about the evolution of algorithms and how they can improve our lives."

An anonymous respondent commented, "The dangers of algorithms are that they stifle diversity and opportunity by looking for one-size-fits-all solutions and that they can be made to tell us whatever we want them to tell us with a false air of authority. I read about an algorithm that was being used to set bail amounts in parts of the US. Even though race was excluded as a factor considered by the algorithm, it consistently set bail higher for black defendants. Whatever correlated with being black that the algorithm looked at also correlated with higher bail because of the racist system, so the algorithm just perpetuated that. At the same time, people can point at that result and say, ‘see, black people are objectively greater risks’ even though there is nothing objective about it. In the end I think they can help us understand our biases and where those biases come from, but that is going to be a long road."

An anonymous business director wrote, "The greatest risk is probably the accidental training of neural networks with our biases and bigotries.”

An anonymous respondent said, "This question unfortunately mixes two topics that need to be separated. 1) Automation of routine or usually routine tasks. Management by exception applies here—the common case can and will be automated, with exceptional cases immediately escalated to an expert (human) for attention. Banks and governments are examples here. 2) Selection, recommendation and curation of content. There are significant risks here, and a learning curve. For what it’s worth, the current media/press is similar—one can easily find media/press outlets that favor a specific point of view—those who don't like that point of view will look elsewhere. Platforms are more of a risk, and need some sort of societal oversight—where there are alternative platforms, this should be self-correcting as people move on to platforms that aren't slanted, but for natural monopolies (no feasible alternative platform), government regulation of media and monopolies provides precedent for controlling excesses.”

An anonymous technology analyst at Cisco Systems commented, "Algorithms can diminish transportation issues; they can identify congestion and alternative times and paths as I can time-shift work. I wish they could tell me where to move to."

An anonymous company president wrote, "Driverless cars are one major example that will revolutionize the world, after some growing pains and hiccups."

An anonymous respondent said, "The public sector will be dominated by automated decisions. This will be more effective, and it will also make public services more personalized. There will be a need for oversight bodies—someone with expertise who can go in and evaluate the algorithms now and then."

An anonymous open source technologist noted, "If Pinker is to be believed, humans are getting less violent. That surely is a net social positive. I think technology and social justice play a role here, but so do good governance, sustainable and ethical business practices, clean water, lack of corruption, etc. Algorithms are surely helpful but likely insufficient unless combined with human knowledge and political will.”

An anonymous leader at a Silicon Valley think tank said, "This is a strange use of the word ‘algorithms,’ so it's hard to fully understand the question. Most of the tools that will help people do stuff will not be algorithmic. Algorithmic tools are used more by institutions and companies, where indeed there are the risks identified. Still, I think we are going to make our tools better and easier to use, and that will outdo what we screw up."

An anonymous chief marketing officer wrote, "It’s up to us really, but—like any other tools we have used so far—our evolution in digital tech and communications will most probably have a net benefit for humanity across the board in the years to come: health, education, etc."

An anonymous senior security engineer responded, "I would prefer to simply say the outcome is unclear. Algorithms have great positive potential, but they also have great negative potential, so what direction this takes depends entirely on the course the people wielding the algorithms would like to follow.”

An anonymous professor replied, "The fact that algorithms, while mathematical objects, can be discriminatory has already been recognized. So now there is opportunity—and interest—to learn how to control these negative aspects. The solution will undoubtedly involve a complex array of law, policy, and technical incentives and requirements, but in a decade we will see progress from where we are today."

An anonymous professor at the University of Toronto noted, "Algorithms are written and trained by people. People’s behaviours can be influenced by laws and community norms. It's up to us.”

An anonymous online course designer commented, "Relatively simple algorithms can help make better decisions where the data is available via online systems. As an example, in education, results from automated short quizzes can help both students and their instructors identify what they need to work more on. This may be discriminatory where the assumptions of the algorithm do not fit: for example if the student has non-study obligations, for work, religious, cultural, or family reasons, when the quiz is due."

An anonymous professor at a private law school wrote, "Positives: detection of correlations and patterns to improve services. Negatives: opacity, errors that are not detected or corrected, loss of deliberative discourse."

An anonymous program director at the US National Science Foundation commented, "This question could be equivalently asked by replacing the word ‘algorithms’ with the word ‘systems.’ Each is an undefined something that mediates interaction, in a way that is intended for the better. One must often interact with a system without understanding all of the working parts, which leaves some people at a disadvantage, or suspicious of the algorithm/system.”

An anonymous respondent commented, "Algorithms can be regulated and also controlled to establish trust with the public."

An anonymous executive director and vice president said, "I am just being optimistic in choosing the positives outweighing the negatives. It is very hard to tell today which direction this will go."

An anonymous executive director wrote, "There is so much to be gained from machine learning and the increasingly sophisticated algorithms will no doubt offer more positive impact than negative.”

An anonymous respondent commented, "The positives will win out until AI develops its own morality. That could be sooner than we imagine.”

An anonymous senior research director said, "This will speed up some decisions; they could be almost automatic.”

An anonymous chair of the board at a futures studies organization commented, "Algorithms can be made using different approaches, so any negative condition would be eliminated during the phase of design.”

An anonymous CEO responded, "Algorithms are a fact of life, and aren't going away. There are negative impacts, but responsible organizations are correcting them as they find them. Transparency and informed consumers/users are the keys. Crowdsourcing, watchdog organizations, etc. are all part of the ecosystem that will keep organizations honest in this arena by identifying any discriminatory or biased algorithms. I believe the area that will be most impacted is our interaction with social media. We are already seeing this in the filtering of our feeds and in the advertising that is 'selected' for us. Given that millions now get their ‘news’ from social media, this could have a dramatic impact on our goal of creating an informed society.”

An anonymous respondent observed, "Algorithms will move from the few large corporations, i.e., institutions, to other groups. Health care, education, and consumer choices will benefit.”

An anonymous research associate noted, "While we have seen greater awareness of inequities and some efforts to close those gaps, major social inequalities endure across all sectors of society. I don't think algorithms will make these worse, particularly if computer scientists build them so that they're equal.”

An anonymous activist and blogger observed, "As algorithms have been developed to facilitate the control of services and the tracking of users, the wrong use of algorithms is certainly not just a question of application. Google’s algorithm, for example, has grown, and today it is virtually tracking people's choices and preferences. So it certainly threatens the future with a sad compilation of what needs to be controlled and bottled up with proper policy."

An anonymous professor at a large US university wrote, "I am hopeful that increased focus on user needs will guide the development of digital algorithms to structure online interactions. The use of smart, adaptive, and tailored interaction algorithms that are user-guided will help to make these systems more adaptive to the needs of users and less rigid.”

An anonymous professor at a US research university noted, "Overall, they will improve the efficiency of water and energy consumption."

An anonymous senior researcher wrote, "The positive forces will be the increasing centrality of science and mathematics as the core of modern human civilization. Hopefully, this leads to enlightened conversations and breakthroughs that we cannot even imagine. On the negative side, access to so much information could lead to possible misuse, either by bad state actors or political extremists. Additionally, the overload of information might make it hard for older or less-educated people to cope with the new era of data. Overall, though, the positive aspect of using data and algorithms in daily life to make the world a better place will probably win out, even if it is rough along the way.”

An anonymous assistant dean at a large university commented, "While there is negative potential to the use of algorithms, the benefits outweigh the risks because they provide potential access to resources equally. If coupled with AI and learning systems, algorithms have the potential to equalize access to information that may not be possible when interfacing with people in certain situations and in certain parts of the country where personal bias may be a factor in interpersonal interactions.”

If you wish to read the full survey report with analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/algorithm_impacts.xhtml

To read credited survey participants' responses with no analysis, click here:
http://www.elon.edu/e-web/imagining/surveys/2016_survey/algorithm_impacts_credit.xhtml