Elon University

Internet Governance Forum, USA 2019 – Artificial Intelligence

Panel – Balancing the Governance of AI: Benefits and Challenges of Possible Approaches

Brief session description:

Thursday, July 25, 2019 – Artificial intelligence is radically transforming industries, governments, economies and civil society. It has great promise, but it also has limitations and can cause harm. This panel was asked to examine current applications and trends and to discuss when and where the governance of artificial intelligence should occur. It was asked to consider how to strike the right balance of innovation and precaution so that society can realize the benefits of AI while limiting its harms.

Moderator – Jessica Fjeld, assistant director, Cyberlaw Clinic, Berkman Klein Center for Internet & Society, Harvard University

Panelists:
Lee McKnight, associate professor, School of Information Studies, Syracuse University
Lynne Parker, assistant director for artificial intelligence, White House Office of Science and Technology Policy
Nicol Turner Lee, fellow in governance studies at the Center for Technology Innovation, Brookings Institution
Kathryn Krolopp, independent analyst and researcher

Details of the session:

Artificial intelligence is becoming increasingly relevant in everyday life – from uses in health care to ad algorithms on social media – but there’s a lack of understanding among typical Internet users about what AI involves, making it difficult to regulate, panelists suggested.

Moderator Jessica Fjeld, assistant director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society at Harvard Law School, said it’s difficult to expect everyone to have the same understanding of artificial intelligence and governance.

“There are two big words in the title [of the panel] that people bring a lot of different perspectives to,” Fjeld said. “One being AI and one being governance.”

Lee McKnight, associate professor in the School of Information Studies at Syracuse University, sorted definitions of AI into four categories: thinking humanly, acting humanly, thinking rationally and acting rationally.

Echoing McKnight’s four-part framing, Fjeld described AI governance at four levels: the technology level, the actor level, the sector level and governance by government.

Lynne Parker, assistant director for artificial intelligence at the White House Office of Science and Technology Policy, said the U.S. government needs to establish meaningful principles for artificial intelligence use.

“We want to encourage innovation of course,” she said. “There are a lot of great uses of AI. We don’t want to make it so heavy-handed that no company will build up on AI innovation.”

In February, President Donald Trump signed an executive order establishing the American AI Initiative, which includes developing regulatory guidance for AI use. With AI’s “innumerable uses,” Parker said, it’s difficult to find the right balance in regulation.

“We don’t want to be in a society where it is a surveillance state where everyone is being watched,” Parker said.

Privacy concerns

Privacy needs to be central to any conversation about AI, according to Nicol Turner Lee, a fellow at the Center for Technology Innovation at the Brookings Institution. Turner Lee recalled visiting a website for an African-American fraternity when pop-up ads appeared, offering to provide arrest records for a cheap price.

“That revealed to me that we were in trouble because here was this innocuous, celebratory site with predatory ads,” she said.

Turner Lee said that, as a consumer, she wants to know when her data is being collected and what data is being collected.

“What you assume about me based on my purchasing profile, my kids, my status, you don’t know quite who I am,” she said. “But you make those inferences. It doesn’t allow me as a consumer to check on a box. I want some marker that says I can trust that this algorithm is not going to take me down that slippery slope or make more consumer harms.”

Kathryn Krolopp, an independent analyst and researcher, described a similar experience. Krolopp said a large international media company she worked for mined users’ data to estimate their net worth and targeted ads based on that information.

“If this kind of marketing becomes the norm, it’s going to be devastating for consumers,” Krolopp said.

McKnight pointed to the recent protests in Hong Kong, where he said an AI startup used a system of cameras and facial recognition to estimate how many people were protesting. He said it’s ironic that the attendees were protesting against a surveillance state while still being recorded.

“Only use AI with the right techniques where it’s appropriate,” he said. “And where it’s not, have some sort of practice or policy for control.”

McKnight offered a stoplight metaphor for judging when it’s appropriate to use data in AI systems: “green light” data is open information, “yellow light” data needs some control or permission to access, and “red light” data must be secured.
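To make the metaphor concrete, here is a minimal sketch in Python of how such a three-tier data classification might be checked in practice. The names (DataLight, may_use_for_ai) and the exact rules are illustrative assumptions for this article, not a system McKnight described:

from enum import Enum

class DataLight(Enum):
    # Hypothetical tiers mirroring the stoplight metaphor
    GREEN = "open"         # open information, freely usable
    YELLOW = "controlled"  # needs some control or permission to access
    RED = "secured"        # must be secured

def may_use_for_ai(tier: DataLight, has_permission: bool = False) -> bool:
    """Decide whether data in a given tier may feed an AI system."""
    if tier is DataLight.GREEN:
        return True                # green light: open data
    if tier is DataLight.YELLOW:
        return has_permission      # yellow light: only with permission
    return False                   # red light: secured, never released

# Yellow-light data is usable only with explicit permission;
# red-light data is off-limits even with permission.
assert may_use_for_ai(DataLight.GREEN)
assert not may_use_for_ai(DataLight.YELLOW)
assert may_use_for_ai(DataLight.YELLOW, has_permission=True)
assert not may_use_for_ai(DataLight.RED, has_permission=True)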

Divide in digital education

McKnight works with the local government in Syracuse on developing AI systems in the city. To him, all people have a right to understand AI and how data is being used.

“There’s certain things where everybody should be worried about cybersecurity things,” McKnight said, pointing specifically to cases where systems are hacked or human life could be harmed.

Krolopp said she believes there is a knowledge gap in how people in the tech industry understand AI versus how people in civil society do.

“People just don’t understand that technology and AI touch our lives every day,” she said. “AI isn’t going to kill you unless somebody really hates you.”

To combat the education gap, Parker said, part of the American AI Initiative involves incorporating AI education into schools from kindergarten through college. She believes this will help level the playing field.

“It’s not just about winners or losers, but helping more people have the opportunity to participate,” she said.

What’s the solution?

Though government efforts like the American AI Initiative may regulate tech companies’ use of AI, Krolopp said she believes tech companies should require ethics training to curb the extraction of sensitive user data.

Turner Lee acknowledged the benefits of AI, including advances in health care and environmental protection. But she said members of civil society should get involved in conversations about AI because they are the ones most affected by its harms.

“I think we need to do something because I honestly don’t want the Internet to infer who I am and put me in a box,” she said.

McKnight agreed that responsibility lies with tech companies – not just the government – to be transparent about how their software is being used.

“Let’s have conversations now and make sure we don’t build these systems that are so intrusive and violate privacy,” he said.

– By Brian Rea


Full Video of “Governing the Balance of Artificial Intelligence” Panel – https://livestream.com/internetsociety/igfusa2019/videos/194354085

The multimedia reporting team for Imagining the Internet at IGF-USA 2019 included the following Elon University School of Communications students, staff and faculty:

Janna Anderson, Maeve Ashbrook, Elisabeth Bachmann, Bryan Baker, Paloma Camacho, Samantha Casamento, Colin Donohue, Abby Gibbs, Jack Haley, Hannah Massen, Grace Morris, Jack Norcross, Maria Ramirez, Brian Rea, Alexandra Roat, Baylor Rodman, Zach Skillings, Ted Thomas, Victoria Traxler, Julia Walter, Courtney Weiner, Mackenzie Wilkes and Cameron Wolfslayer