This column by President Connie Ledoux Book and Scholar-in-Residence Lee Rainie, focused on how higher education should engage with artificial intelligence, was distributed by the Elon University Writers Syndicate.
By Connie Ledoux Book and Lee Rainie
Scholars around the world are eager to engage with questions about artificial intelligence in higher education. To promote that conversation, we asked faculty members in many countries to help create a set of common principles to guide colleges and universities in developing policies and practices related to AI. Within hours, the responses were pouring in:
“We are facing clearly transdisciplinary challenges that go beyond national borders,” said a professor at the Central University of Venezuela.
“People before machines. Equal opportunities to learn,” responded a professor from Sister Nivedita University in India.
A professor at the College of Charleston wrote that “the urgency to balance the potential of AI with its inherent risks is evident … collaboration is key to creating a future where AI is used as a tool for growth and progress.”
A university president in Virginia said his institution was forming a task force on AI and would “be delighted to connect with other institutions working on these issues.”
As the world marks the first anniversary of the introduction of ChatGPT, higher education is navigating classroom policy issues and teaching students how to be “prompt engineers.” But we are already transitioning to more substantive questions, considering how our institutions can lead the global AI conversation; address complex issues related to ethics, truth, security and privacy; and promote digital literacy – including AI literacy – across all populations and disciplines.
Throughout history, universities have played a dual role in technological innovation. At one level, universities have been engines of scientific research and discovery. At another level, they have also prepared learners to understand and exploit the new tools through a strong core education in the liberal arts. Both roles are essential to humanity.
Now, as the AI revolution expands, the demand to promote new literacies has never been more urgent. That is why we began an initiative to develop a globally sourced set of foundational principles for higher education.
The response has been impressive. More than 140 faculty members, researchers, and higher education leaders and organizations in 46 countries have contributed ideas and added their signatures of support. The power of this collective intelligence is in the simple guidelines that could apply to all institutions, whatever their size, location or mission:
- People, not technology, must be at the center of our work. As we engage with AI, human health, dignity, safety, privacy and security must be our first considerations.
- We should promote digital inclusion within and beyond our institutions. Collaboration with government, the private sector and civil society will enable us to expand outreach to all populations.
- Digital and information literacy is an essential part of a core education. Learners in all disciplines must be prepared to use AI proficiently, safely and ethically, and must understand the basic concepts of computer systems and programming, machine learning and data science.
- AI tools should enhance teaching and learning. AI must enrich and extend the educational experience and advance access and equity. We must also carefully protect the interests of learners and teachers.
- Learning about technologies is an experiential, lifelong process. We must help learners gain the hands-on skills they need to adapt to continual change.
- AI research and development must be done responsibly. We need rigorous ethical standards and failsafe systems as we advance AI research and design.
In line with our goal of fostering a global conversation, we released the principles in October at the United Nations Internet Governance Forum in Kyoto, Japan. IGF is a long-running annual gathering that was sizzling this year with discussion about the myriad issues raised by AI.
Nobel Peace Prize recipient Maria Ressa said there is “insidious manipulation” that takes place when “clones” of our identities are used by AI to influence our behavior and opinions. She also talked with us about unconscious “coded bias” that can be built into AI systems and replicated from one model to another.
Divina Frau-Meigs, a professor at Sorbonne Nouvelle University in France, called for “proper guardrails for teachers and students,” because the fundamental credibility of universities could be undermined in an era of pseudo-science and fake information distributed by “synthetic media.”
Legal researcher Eve Gaumond of the University of Montreal raised concerns about the “datafication of higher education” and the possibility that some students could avoid taking risks or writing about controversial topics because AI systems might use that information to limit their opportunities.
Danielle Smith, a professor of African American Studies at Syracuse University, is concerned about the estimated 2.9 billion people who are not yet connected to the internet. “It’s important, as we think of all the opportunities that AI brings, we also reflect on the challenges of those who could be left behind,” Smith said.
Many of the scholars we talked with said higher education should be a leading force in developing AI policies. Experts in all disciplines can help us understand these complex issues and advocate for society’s best interests. We also consistently heard calls for substantial investments in education, giving schools at all levels the resources they need to teach people how to use digital technologies to serve the common good.
As we continue to develop and share this set of guiding principles for higher education, we’re struck by the range of predictions we’re hearing about the future impact of AI. While some see a new age of enlightenment and growth, others see a potential existential threat to humanity.
Nearly all agree we are at a watershed moment that could transform our institutions in ways we cannot fully foresee, but we can bend towards beneficial outcomes. That is reason enough to clearly define some core values that will guide us through the fog of change.
Views expressed in this column are the authors’ own and not necessarily those of Elon University.