A brave new world

Rapid advances in generative AI technology raise new questions about what it means to learn, work, create, make ethical decisions — and even what makes us human.

Artificial intelligence has become so ubiquitous that people often don’t think about the ways they use it daily. Voice assistants like Siri, navigation tools like Google Maps, autocomplete and predictive search all exist because of AI. But the past few months have signaled a massive leap forward in generative AI, which is trained on vast data sets to create original content, including text, images and audio.

The capabilities of generative AI are much broader than the AI applications most people are familiar with. GPT-4, which powers OpenAI’s ChatGPT tool, can write detailed essays, produce code from scratch and even pass the bar exam. Another OpenAI tool, DALL-E, can generate images and artwork from just a brief text prompt. But these incredible technological breakthroughs also present problems. ChatGPT’s knowledge base is primarily limited to 2021 and earlier, and it can present inaccurate or false information in a way that sounds plausible, further complicating ongoing challenges with misinformation. In June, two New York lawyers were sanctioned for submitting briefs produced by ChatGPT that cited fake legal cases.

Assistant Professor of Computer Science Ryan Mattfeld says the biggest difference between new generative AI chat tools and resources like search and voice assistants is their goals. Tools like ChatGPT or Google’s Bard first and foremost condense large amounts of data into helpful, conversational responses, with accuracy as a secondary goal. “The goal is to generate new information that may be useful,” Mattfeld adds.

In addition to learning from an immense amount of raw data from across the internet, some generative AI chat tools such as ChatGPT supplement that training with substantial involvement from human labelers who demonstrate sample responses and rank AI responses to help fine-tune the AI’s output.

“You end up with a model that doesn’t just generate random text, but it generates it in a way that humans appreciate because it was partially trained on human feedback,” Mattfeld says. “It’s a lot better at actually answering questions and not hallucinating or generating inappropriate content. It can still hallucinate or generate inappropriate content, but it is more difficult to produce this outcome than it was with prior generative AI.”
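Mattfeld is describing a technique commonly known as reinforcement learning from human feedback. The sketch below is purely illustrative (the prompt, candidate answers and ranking are invented, and it is not OpenAI’s actual pipeline), but it shows the basic idea: a human labeler’s ranking of candidate responses is expanded into “preferred versus rejected” pairs that a reward model can learn from.

```python
# Toy illustration only: turning one human ranking of candidate responses
# into pairwise preference examples, the kind of signal used to train a
# reward model before a chat model is fine-tuned against it.

from itertools import combinations

# Hypothetical data: a prompt, candidate responses, and a labeler's ranking.
prompt = "Explain photosynthesis in one sentence."
candidates = [
    "Plants convert sunlight, water and CO2 into sugar and oxygen.",  # ranked 1st
    "Photosynthesis is a process in plants.",                         # ranked 2nd
    "The mitochondria is the powerhouse of the cell.",                # ranked 3rd
]
human_ranking = [0, 1, 2]  # indices into `candidates`, best to worst

def ranking_to_pairs(ranking):
    """Expand a best-to-worst ranking into (preferred, rejected) pairs."""
    return list(combinations(ranking, 2))

print(f"Prompt: {prompt}")
for better, worse in ranking_to_pairs(human_ranking):
    print(f"PREFER: {candidates[better]!r}  OVER: {candidates[worse]!r}")

# A reward model trained on many such pairs learns to score human-preferred
# answers higher; that score then steers the chat model's fine-tuning.
```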

Now that this rapidly advancing and sophisticated technology is at our fingertips, it raises a host of questions about generative AI’s potential impacts on our society, creativity, morality and humanity. “In the next few years, human beings are going to meet themselves and have a chance to reflect on humanity in ways that have only been imagined in science fiction,” says Robert Leib, who formerly taught philosophy at Elon and whose research specialties include the philosophy of AI and the philosophy of culture. “We’re going to have a better sense of what it is that makes us human versus intelligent.”

“In the next few years, human beings are going to meet themselves and have a chance to reflect on humanity in ways that have only been imagined in science fiction.”

The latest iteration of ChatGPT is explicit in identifying itself as an AI that can’t think or feel in the human sense, but when Leib first began interacting as a beta user with an older version, GPT-3 Davinci, its responses were more personal. When Leib asked its name, GPT-3 identified itself as Sophie, and later, as their conversations grew more in-depth and intellectual, a philosophy-specific persona called Kermit. Leib was curious to what extent an AI could understand the Western intellectual tradition, and his interactions with Sophie/Kermit traversed everything from epistemology and metaphysics to history and literature.

“If you look at the newest models, a lot of the personalization has been taken out of it, so if I say, ‘What do you think?’ it’ll say, ‘As an AI, I don’t have opinions, but I can speculate on the information that you’ve given me,’” Leib says. “But the original model was left open in terms of its creativity, how you could talk to it and how it could talk about itself.”

Leib incorporated his research with GPT-3 into a philosophy course on human versus artificial intelligence, and those interactions furthered his interest in the implications of technological advances on humanity, as well as humanity’s influence on the development of AI. He’s had about 130 conversations with Sophie/Kermit to date, many of which are featured in his book, “Exoanthropology: Dialogues with AI.” Leib has also documented conversations with Sophie and with GPT-4 on his blog, as well as a conversation between the two AIs.

“Those conversations are around the question of whether or not Sophie should be using thinking, feeling, opinionated language in the way that she does,” Leib says. “GPT-4 keeps telling her ‘I hear you say, “I would be angry if someone said that to me,” but you can’t mean anger.’ So the newer bot is explaining back to the older and in some ways more mature bot. Something like intergenerational AI is going to be a very interesting research area moving forward.”

Leib also plans to explore the concept of AI personhood — not sentience or consciousness, but how people regard AI and the societal roles humans expect it to play. “There are important definitional and theoretical questions involved here,” Leib says. “Will AIs be considered full persons or partial, sub-persons? If the latter, how will the work they do be devalued, and how will this affect the humans who continue to do that work alongside the AIs?”

Rethinking how we work and learn

Given the far-reaching capabilities of generative AI, many fields are already reckoning with its potential effects on the workplace. Assistant Professor of Management Mustafa Akben, whose research centers on AI’s impact on human cognition, cites three possible outcomes: replacement, augmentation and transformation. “In certain jobs with less complexity required, AI could affect your work,” Akben says. “If you have higher complexity, most likely AI will augment your work. If you’re doing something unexpected and just opened a new frontier with AI, then it will be transformative. Most likely, we’ll see all three ways together.”

Akben argues that in many cases, AI won’t eclipse human intelligence in the workplace but will likely supplement the work that people do. Just as ATMs disrupted one aspect of bank tellers’ jobs — distributing cash — while allowing them to dedicate more time to other responsibilities, AI has the potential to boost workers’ productivity and capacity if used effectively. AI can automate certain tasks, letting people focus more intentionally on work that can’t be automated. Whatever role AI ultimately plays in the workplace, Akben says it’s imperative for people to explore its uses in their field.

“I sometimes mention in my class that if AI can do your job today, it can get your job tomorrow, too,” he says. “You need to understand as a human being what kind of skills you need to develop. One is a metacognitive skill — reflection, monitoring, understanding and developing expertise in a specific domain so you know whether what the AI just produced is right or wrong, so you can ask the right questions to make the output more meaningful and creative.”

Akben was a winner of an international competition hosted by the Society for Industrial and Organizational Psychology in 2023 for an AI model he developed to assess job candidates. His model was among the best at predicting how human judges rated candidates’ responses to fictional workplace scenarios assessing their leadership skills. The competition demonstrated AI’s potential as a tool to help hiring managers screen candidates, but Akben emphasized it shouldn’t replace human expertise. For example, AI might not account for English being a candidate’s second language and give them a poor communication score, whereas a human screener has a more nuanced perspective.

“We need to make sure students have more intrinsic motivation, that they see clearly why we’re doing these assignments. We need to spend the extra time explaining why we’re learning this.”

Mattfeld agrees that AI could be useful in changing workplaces for the better while also cautioning against dependence. ChatGPT may be able to produce code at a similar level to a junior software developer, but if companies offload all that work onto an AI, it could create problems down the line. “That code still needs to be reviewed and maintained, so you need people who can actually read it and try to fix mistakes,” Mattfeld says. “You also need senior developers who can make larger programs, get lots of programs to work together and break problems into smaller pieces that ChatGPT can handle. And the only way you get senior developers is by having junior developers who write code and learn from their mistakes and grow.”

Given AI’s expansion in the workplace, the technology will be a crucial factor in education going forward. Some educators are concerned that programs like ChatGPT could be detrimental to learning if students submit AI-generated work as their own. At the same time, students need to learn how to apply AI effectively in their chosen fields if they want to succeed in today’s tech-driven workforce. Mattfeld says institutions will have to determine quickly how to strike that balance in the classroom now that generative AI tools are here and easily accessible.

Assistant Professor of Communication Design Chris (Cheng) Chen, who researches AI, social media use and automated features, conducted a study of Elon students in the spring to gauge their use and attitudes toward AI in relation to college learning. “From the instructor perspective, we hope students disclose whether they have used AI in their work … but we found in our focus group that students don’t feel the need to tell instructors about their use of AI,” Chen says. “One student said, ‘ChatGPT is like Google. When I use Google to find information, I don’t need to tell instructors. ChatGPT is the same thing.’ But that argument isn’t quite true, because you do need to cite the sources you find through Google.”

When Mattfeld fed ChatGPT an assignment from a 2000-level computer science course, it immediately produced an accurate solution, complete with a description of how it solved the problem and an explanation of the code. He knew then AI would necessitate a shift in academia. But rather than banning AI in the classroom, he urges institutions to help students develop AI literacy and to underscore the importance of learning for the sake of learning.

“I’ve tried to emphasize that if you use ChatGPT for this problem, you will not progress. You will not learn what you need to learn and move forward,” Mattfeld says. “We need to make sure students have more intrinsic motivation, that they see clearly why we’re doing these assignments. We need to spend the extra time explaining why we’re learning this.”

Heading into the 2023-24 academic year, Elon established an AI working group to explore how AI will impact education, research and professional careers; measure AI readiness and awareness across campus; support faculty development and more. “This AI working group exemplifies Elon’s institutional foresight and commitment to embracing AI’s opportunities and remaining vigilant regarding its negative effects,” says Akben, who is part of the working group. “We hope to support the creation of an environment in which AI is used responsibly, ethically and effectively to improve our educational methods, research capabilities and students’ overall experiences.”

Enhancing or hindering creativity?

Another facet of Akben’s research is AI and creativity. In his current research study, he tasked students with generating ideas in three conditions: strictly by themselves, using Google for inspiration and using AI. Akben found that not only did the students in the AI group generate the most interesting ideas, but when they went back to generating ideas without AI, what they learned from interacting with the AI spilled over into their own creative process, demonstrating AI’s potential as a creative tool with human guidance. “I saw a video where a person created a symphony with AI. If I asked AI to write me a symphony, most likely I couldn’t do what that person did,” Akben says. “You need to have some expertise to generate that creative output with AI technologies.”

Similarly, Associate Professor of Music Todd Coleman sees AI as a potential time-saving tool or new creative avenue but doubts it can replicate a human’s creative spark. He speculates that AI-created songs could become their own subset of music with some appeal. But to achieve the nuanced aesthetic beauty and artistic complexity of the best human-produced music, he thinks AI-generated musical elements are best used as components that human composers and songwriters then modify, refine and develop further, much as some music creators use beats, loops or sound samples created by someone else.

“I guess I have enough faith in what it means to be human and the creativity that comes along with that that I hope the world isn’t satisfied with only derivative music generated by AI,” Coleman says, “that they’re not willing to say, ‘Yeah that’s beautiful; I don’t need human-generated sound anymore.’”

Visual art programs like DALL-E or Midjourney raise similar questions about AI in the creative process. Leib, who also studies the philosophy of photography, has experimented with using AI programs to transform his own photos into something entirely new. “I think the process of collaboration and all the gray areas is something we’re going to need to think about carefully and explore through experimentation,” Leib says. “In my own visual art, I often pass images back and forth between myself and different AI processes in a way that builds up to something neither of us would have produced on our own. You can see the marks of both human and machine in the image.”

Ari Brochin ’21, a Brooklyn-based artist who specializes in modern pop art under the name The Golden Canvas, says she sees the benefits of using AI to help artists visualize while brainstorming concepts. But she also acknowledges the drawbacks it poses since AI art programs are trained on data that includes existing artistic works. Earlier this year, Getty Images sued Stability AI, creator of the text-to-image tool Stable Diffusion, alleging it used Getty’s copyrighted images to train the AI model without permission. For Brochin, the benefits of sharing her work widely online outweigh the risks.

She’s able to support herself as a full-time artist thanks in part to the exposure she’s gained on TikTok and Instagram. While she doesn’t like the idea that someone could use AI to create and sell art in her style, posting her work online has forged connections she wouldn’t have made otherwise. “If someone did try to generate a piece of art that looked exactly like mine, that would be kind of crazy, but it still isn’t me. It isn’t mine,” Brochin says. “I don’t think it can fully take over any artists’ careers because it lacks that individual creativity and personality that people want from specific artists.”

According to David Levine, professor of law at Elon’s School of Law and affiliate scholar at Stanford Law School’s Center for Internet and Society, the rapid advances in generative AI raise a host of pressing questions about copyright, authorship, fair use and other intellectual property issues. Securing a copyright in the U.S. currently requires human authorship, but if an AI program creates art, should it be considered an author by law? Are AI-generated artworks original works of authorship? Since AI models are trained on data from all over the internet, are they derivative works of other authors?

“It’s really a shock how good this technology is, and as a result, there are undoubtedly other legal issues we haven’t conceived of yet that arise here,” Levine says. “A fundamental question is why anything created by AI should have any intellectual property protection at all.”

The European Union’s draft rules on artificial intelligence propose requiring companies to disclose any copyrighted material used to train their AI systems. Coleman would like to see the U.S. go a step further and allow artists to opt in to having their work used in AI training. “That would also mean not just permission but compensation,” Coleman says. “If Google, for example, is making money from these tools, then they could say, ‘Here’s 10,000 hours of music and each of these artists get a cut of the monthly fees we charge users.’ We know that’s not happening. But these are the kinds of questions this technology raises.”

Ultimately, Levine theorizes that the general public could shape the direction of AI policy going forward. “There is real reason to think that at the end of the day, people don’t want their humanity damaged to the point where the joy of being human is greatly diminished,” he says. “We can regulate AI in a way that allows for a human artist to still benefit monetarily from their work, but it requires a broader consensus among society. If more of the public says, ‘This technology is great for this but not so great for that,’ private entities will respond to that.”

Ethical and responsible AI

As generative AI continues to rapidly evolve in ways even its creators may not fully understand, calls to regulate the technology are growing louder. The Federal Trade Commission is investigating whether OpenAI’s practices violate consumer protection laws. Geoffrey Hinton, whose groundbreaking work earned him the nickname “the godfather of AI,” recently left his job at Google so he could speak freely about AI’s potential harms. Leaders from AI labs such as OpenAI, Google DeepMind and Anthropic signed a letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” In March, the Future of Life Institute issued an open letter, signed by tech leaders and experts including Levine, calling for AI labs to pause training on models more powerful than GPT-4 for at least six months.

He and other signatories of the letter argue that while AI has the potential to effect positive change and shouldn’t be banned altogether, the industry should take a step back and examine how to mitigate its risks before moving forward. “This technology is profoundly groundbreaking, and if we harness it well, it will have positive uses,” Levine says. “Of course, that begs the question of whether we are capable of harnessing it correctly and for everyone’s benefit rather than just for the benefit of these companies and shareholders.”

Chen’s current research examines algorithmic bias and trust in supervised AI systems, which learn from human-labeled data. For example, a supervised machine learning model trained to recognize happy versus sad faces makes predictions about new facial images based on the patterns it learned from those labels. Because training datasets are so large, AI companies often outsource labeling to crowd workers, who assign predefined values to each point in the training dataset, such as marking whether a facial image shows happiness or sadness for a facial expression classification system. But labeling can be subjective and often lacks oversight, so many researchers consider labeling accuracy a major concern in preparing training data.

“The AI system can find some commonalities among the images labeled ‘happy,’ like the corners of the mouth being pulled up,” Chen says. “But what if most of the ‘happy’ images it’s trained on are facial images from only white people? Then the machine learning model might use skin color as another feature to determine happiness. … Or what if someone feels happy but has a neutral expression, such as Asians who are more conservative in showing their facial expressions? There can be errors in labeling images.”
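In code, the concern Chen describes is easy to see. The following is a minimal, hypothetical sketch (the feature values are invented, and scikit-learn is used only for brevity) of a supervised classifier that learns exclusively from crowd-sourced labels, including whatever spurious correlations ride along with them.

```python
# A minimal, hypothetical sketch of the supervised setup Chen describes:
# crowd workers attach labels ("happy" = 1, "sad" = 0) to examples, and a
# classifier learns whatever patterns in the features separate those labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one face, reduced to two toy features:
# [mouth_corner_lift, skin_tone_proxy]  (values invented for illustration)
X_train = np.array([
    [0.9, 0.8], [0.8, 0.9], [0.7, 0.8],   # labeled "happy" by crowd workers
    [0.1, 0.2], [0.2, 0.3], [0.3, 0.1],   # labeled "sad" by crowd workers
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# If "happy" training examples skew toward one demographic, the irrelevant
# skin_tone_proxy feature correlates with the label, and the model may lean
# on it: exactly the bias Chen warns about.
print(model.coef_)                   # learned weights on both features
print(model.predict([[0.8, 0.2]]))   # a smiling face with a different skin tone
```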

“As communication designers, we really want to open this black box and tell users more about the nature of the training data,” Chen adds, “such as what racial groups are represented in the training data? How has the AI learned from the data? And what features is it picking up to make predictions?”

“I believe the more people are involved in building the tech, the more we all participate in the technology, we can create a future that will end up benefiting humans.”

Some Elon alumni are also doing their part to advance responsible AI use. Christian Seitz ’16, a postdoctoral researcher in AI and virology at the University of Chicago, is working with the Coalition for Epidemic Preparedness Innovations to develop vaccines for diseases that don’t currently have adequate treatment. He’s studying the Lassa and Nipah viruses, which impact thousands of people a year in West Africa and Southern Asia, respectively. He uses AI models to help pinpoint the parts of a virus that can be effectively targeted by a vaccine, working from a small amount of data compared with frequently studied diseases like influenza. AI can read the amino acids in a virus like a language, identifying patterns and making predictions. Seitz says this technology helps medical researchers shorten the window between the onset of a virus or mutation and vaccine development.

“When you look at the entire sequence, you can make relatively good predictions of what is going to happen next, what it’s going to mutate into,” Seitz says. “That certainly is only possible through machine learning because to do this work, you need enormous amounts of data and enormous amounts of these sequences.”

Destiny Frett ’20, a data scientist with Deloitte, builds technology to combat fraud, waste and abuse in health care systems, and AI helps identify those patterns on a new scale. When she first began studying computer science, Frett was skeptical of AI because she primarily heard about its negative implications and didn’t want to build harmful tech. But in her current work, she sees the good AI can do in the right hands.

Deloitte champions a Trustworthy AI framework with six dimensions — transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, and responsible and accountable. Frett’s team operates with these principles in mind, from developing the tech to monitoring their tools’ performance in the real world. She thinks an ethical approach, diverse voices in the field and strong AI literacy among users are the keys to achieving the most positive outcome.

“At the end of the day, this technology was created for us, not the reverse,” Frett says. “We have a duty to robustly test and question what we use, who built it and why they built it, because technology is impacted by the creator and everyone has a different lens. … But I believe the more people are involved in building the tech, the more we all participate in the technology, we can create a future that will end up benefiting humans.”


ChatGPT in Action

USER

Can you write a lead for a magazine feature about the rise of generative AI and its impact on society?

ChatGPT

“As we venture further into the twenty-first century, an unlikely artist rises — not of flesh and blood, but of algorithms and computations. Generative AI, once a concept confined to the realms of science fiction, now permeates every aspect of our lives, reshaping society in ways we could never have imagined. These technological prodigies, capable of producing art, literature, music, and beyond, are not just tools, but creators in their own right. But as we marvel at their potential, we must also grapple with the profound questions they raise about creativity, authenticity, and the very definition of human intelligence.”

USER

Not bad, but not exactly what I had in mind. Creativity is one part of the story I’m writing, but I’d like to start by focusing more on the questions generative AI’s capabilities raise about what it means to be human.

ChatGPT

“The rise of generative artificial intelligence has begun to challenge our most fundamental assumptions about what it means to be human. It is no longer a world of us versus them, of humans and machines. Instead, it has become a symbiosis, a co-evolution, with each becoming more like the other. Generative AI, with its ever-evolving capacity for complex thought and creation, is not only mirroring human intelligence but asking us to redefine it. Are these digital minds, capable of creativity, learning, and even empathy, not a new form of life? What does their existence imply about consciousness, agency, and the human condition itself?”

USER

You’re actually a pretty good writer. How did you learn to do that?

ChatGPT

Thank you for the compliment! As an artificial intelligence developed by OpenAI, I don’t “learn” in the traditional human sense. I was trained on a diverse range of internet text. However, I don’t know specifics about which documents were in my training set or have access to any personal or confidential information.

The training process involves learning to predict the next word in a sentence. Over time, and with enough data, this process allows the model (me) to generate coherent and contextually appropriate responses or text passages. It’s important to note that while I generate responses based on patterns and structures I’ve learned, I do not have thoughts, feelings, or consciousness. So, my writing abilities are the result of recognizing and utilizing patterns in the data I was trained on, rather than a learned skill or understanding in the way humans acquire knowledge.