Data-driven decisions

How data collection is affecting our decision-making in today’s technology-saturated environment.

By Keren Rivas ’04

You are on vacation and in the mood for Chinese food but don’t know of a good restaurant, so you decide to check online reviews. After browsing a couple of entries, you make your choice. But was it really your decision? How do you know the information the search engine provided was accurate and not simply designed to get you to choose a predetermined selection?

“Decision-making in the digital age is ceded to code-driven tools created by commercial interests,” says Janna Anderson, professor of communications and director of Elon’s Imagining the Internet Center, which collects experts’ views on digital trends. The center’s findings show that a major concern among experts today is the surrender of independence and privacy to technology. “Experts say we are giving away our power over choice — that we blindly trust these tools because it’s convenient. When you decide what Chinese restaurant you are going to, that decision is being made by a technology company’s code, and even they sometimes do not understand why it makes the choices it does.”

Whether we like it or not, as society’s dependence on technology increases, our ability to make free and independent decisions is bound to decrease. Somewhere in cyberspace, a profile of you is being compiled using the digital footprint you leave behind. “Every word you type, every word you speak, even every place you go can become part of a permanent collection of data,” Anderson says. The sites you’ve visited. The number of steps you’ve taken in a day. The comments you’ve posted. The queries you’ve searched — or asked Alexa and Siri to search for you. And while it’s no secret that tech companies like Amazon, Apple, Google and Microsoft collect personal data from consumers and sell, trade or use it to target ads and make a profit, consumers know little to nothing about what happens to their personal information. 
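To make the mechanics concrete, here is a deliberately toy sketch in Python of how scattered events might be folded into a single profile. Every field name, event and inference rule below is invented for illustration; real ad-targeting pipelines are far larger and proprietary.

```python
# Toy illustration: folding a scattered digital footprint into one profile.
# All field names, events and inference rules here are hypothetical.
from collections import Counter

events = [
    {"type": "search", "query": "chinese restaurant near me"},
    {"type": "page_view", "url": "reviews.example.com/golden-dragon"},
    {"type": "voice_query", "query": "directions to golden dragon"},
    {"type": "steps", "count": 4200},
]

profile = {"interests": Counter(), "daily_steps": 0}
for event in events:
    if event["type"] in ("search", "voice_query"):
        # Treat each query term as a weak signal of interest.
        for term in event["query"].split():
            profile["interests"][term] += 1
    elif event["type"] == "page_view":
        # The last path segment hints at what was viewed.
        profile["interests"][event["url"].rsplit("/", 1)[-1]] += 1
    elif event["type"] == "steps":
        profile["daily_steps"] += event["count"]

# The handful of strongest signals is what an ad targeter would act on.
print(profile["interests"].most_common(3))
```

Even this crude version hints at why such profiles are valuable: a few days of ordinary activity already suggests what to advertise, and to whom.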

There is also a tremendous amount of data that is generated about people, governments and corporations without people’s knowledge, says David Levine, associate professor of law at Elon University School of Law and affiliate scholar at the Center for Internet and Society at Stanford Law School. “This mining of data has potential social benefits, and is lucrative for companies that control and analyze the data, but it also has downsides as privacy is being eroded.” 

Levine’s research focuses on the flow of information in the lawmaking and regulatory process and the impact of intellectual property law on public and private secrecy, transparency and accountability. “The entire world is struggling with the implications of data mining and artificial intelligence,” he says.

Many experts say today’s accepted business models are harming society and threatening human rights. “We have gone from being worried about privacy to being worried that today’s form of market capitalism may be destroying democracy,” Anderson says. Technology companies have been so busy optimizing for profit and power that they didn’t consider how their programming of newsfeeds could be used to manipulate opinion or enable the misuse of people’s personal data. One example is the suspected Russian influence during the 2016 U.S. presidential election.

The personal data of up to 87 million users was compromised beginning in 2014 through a Facebook app built by an academic researcher hired by Cambridge Analytica, the political consulting and data analysis firm that ran targeted political ads on social media on behalf of then-candidate Donald Trump.

For Elon Professor of Computer Science Megan Squire, whose research has offered insight into the nature of communications and connections among online communities, there is no inherently dark side to data, only irresponsible actors. She relies heavily on data collected from social media companies like Facebook to identify networks, connections and relationships that might not be easy to spot at first glance. “My job is to study data, so I look at everything,” she says. “If there is a question about the world, data can help us answer it.”
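A rough sketch of how that kind of analysis can work in practice, using the open-source networkx library. The accounts and interactions below are entirely made up, and this illustrates a standard graph technique, not Squire’s actual methods.

```python
# Sketch: surfacing hidden bridges in an online community with networkx.
# Accounts and interactions are fabricated for illustration only.
import networkx as nx

# Each pair records that two accounts interacted (commented on the
# same post, shared the same link, and so on).
interactions = [
    ("ann", "bob"), ("ann", "cat"), ("bob", "cat"),   # one cluster
    ("dev", "eve"), ("dev", "fay"), ("eve", "fay"),   # another cluster
    ("cat", "dev"),                                   # the bridge between them
]

G = nx.Graph()
G.add_edges_from(interactions)

# Betweenness centrality scores each account by how often it sits on
# the shortest paths between others. High scorers bridge otherwise
# separate groups -- the sort of connection that is hard to spot by eye.
for account, score in sorted(
    nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]
):
    print(f"{account}: {score:.2f}")
```

Here “cat” and “dev” would top the list, flagging the single link that ties two otherwise separate clusters together.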

While Squire believes technology companies need a watchdog, she laments that Facebook’s efforts to make privacy more robust after the Cambridge Analytica debacle have resulted in researchers like her getting caught in the crossfire. “Companies are taking away the ability for people to study data, but they are not filling the gap themselves,” she says, adding that better partnerships are needed between researchers and social media companies, though she acknowledges those are hard to forge.

“Not only do we have more data being generated, but there are also more techniques for handling the data. Both of those things are increasing,” she says. Add to that the fact that the hardware to process that data — think computer chips — is also getting more powerful, and you have a recipe for problems. “There are a lot of fast-changing, fast-moving companies making mistakes,” Squire says. “There is a struggle because companies don’t know what to do. That’s the place where we are at right now. There are a lot of mistakes and a lot of confusion about what should happen, what’s legal and what’s ethical. We are operating in this gap where the behavior might be legal but is leading to things that might be illegal or unethical. Those holes are where the gray areas are, where the question marks are.”

Levine says governments have historically gathered data, so there tend to be laws dealing with varying levels of access and decision-making processes, depending on the country. In the U.S., for instance, the Freedom of Information Act and other laws put a thumb on the scale in favor of disclosure of information. The opposite is true for private companies, but efforts to change that are already in motion. The European Union’s holistic General Data Protection Regulation, designed to protect users’ privacy by requiring more transparency throughout the data collection process and setting new rules for how companies manage data, took effect in 2018. (The pop-up message you get when you visit a site informing you how it uses cookies to collect data is a result of that legislation.) California adopted its own version of the GDPR last year as well. In June of this year, France made it illegal for entities to use data to predict how judges might render decisions, and in the U.S., bipartisan legislation has been introduced to force the largest companies, those with 100 million or more monthly users, to disclose how much consumers’ personal data is worth to them. More importantly, the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data (DASHBOARD) Act also attempts to give users more control by requiring companies to share what information they are collecting and to give consumers the ability to delete all or parts of that data.

Whether the law will pass is anyone’s guess, but Levine questions the efficacy of such sweeping regulations, pointing to instances in which privacy has been used to squelch commentary in European countries. He warns that as more companies have demanded confidentiality and privacy for new technologies like algorithms for autonomous cars, search engines and social media, the public has been kept in the dark about how these developments affect them. “If an algorithm is opaque, it becomes impossible for the public to understand the rationale behind any particular outcome or determine if, when and how algorithms are misused,” Levine argues in a 2017 paper on the subject. “In order to bring us closer to the ability of understanding how technology is integrated into our lives, we need to recognize the differences between justified and dubious confidentiality and privacy claims.”

While regulation is crucial, Anderson says many experts emphasize the need for better education in digital literacy. She compares it to the effort we put into teaching teens what they need to know before they get behind the wheel of a car, though she acknowledges it’s more complicated to teach how to live in a digital age. “You’re not just taking a highway. There are a lot of people out there trying to get your attention and gain something from you — your information — or to sell you something,” Anderson says. “Expert respondents in our survey work urge that digital literacy should be part of every aspect of every parent interaction, and it should be embedded throughout our education systems, starting in kindergarten.”

The best advice in the meantime: Know the risks associated with using these platforms and be careful with what you share. “No one is forcing you to put things on Facebook and Twitter. Full stop,” Levine says. “If you want to use a proprietary platform that has no good history of respecting the privacy of users, you have to be aware of that history. Be more judicious about what you post and understand that as private entities, all the privacy settings you agreed to are out the window if the company chooses to do so.” 

As Anderson prepares to survey industry experts about the impact of technology on democratic institutions by 2030, she remains optimistic. She says some experts expect that social and civic innovation might eventually ease many of the problems we see today and improve trust in democratic institutions, create kinder social media spaces and allow the most beneficial, fact-based information to rise to the top of the sea of information. “If we can do this and business leaders put people before profit, then that can do a lot to mitigate some of the negative impacts we have seen so far,” she says.