The rapid development of artificial intelligence is igniting exciting opportunities and posing frightening risks to humanity. Technology experts and entrepreneurs are implementing applications that are benefiting human health and well-being, scientific discovery, business practices, media and information systems, education and much more. At the same time, hundreds of AI experts, policymakers and others have signed a statement about the existential risks of this technology, and some have called for a moratorium on the training of AI systems until safety protocols can be developed.
Regulatory bodies around the world are considering ways to govern AI as its impact on modern life unfolds. These actions include the overwhelming approval of the Artificial Intelligence Act by the European Parliament. At stake are the foundational values covered by the Universal Declaration of Human Rights: basic human rights and dignity, health, security, truth, freedom of action and thought, work, education, culture and spiritual development. It is therefore essential that wise and diverse perspectives about AI be voiced at this crucial time, preparing our world for what lies ahead.
We applaud the UNESCO recommendation that educational institutions worldwide should foster AI literacy to empower people and reduce digital access inequalities. Lifelong citizen education, from early childhood onward, has proven to be the most effective strategy that societies use to adapt to new technologies and shape them for human purposes. As concerned educators, leaders and scholars, we urge higher education institutions to accelerate their commitment to serve society’s best interests in the age of AI. We also urge government, business and civil society to provide resources to support these educational efforts. To guide this work, we recommend the following six holistic principles as a framework for action:
PRINCIPLES
- People, not technology, must be at the center of our work
Teaching and learning about AI should begin with the primacy of human health and well-being, dignity, safety, privacy and security. We must educate people about the inviolable value of human oversight and control of AI systems and the paramount importance of safeguards that guarantee AI systems do no harm to people. We should engender an understanding of responsible AI development designed to augment and enhance human capabilities rather than replace them and risk undermining basic human autonomy, agency and dignity.
- We should promote digital inclusion within and beyond our institutions
All people associated with educational institutions should have the opportunity to fully participate in the digital world and interact with AI systems. This includes physical access to digital devices and the internet, as well as the permission and capability to engage effectively and equitably with these technologies. With consideration of institutional mission, we urge universities to collaborate with government, the private sector and civil society entities to expand outreach to all populations, especially those disadvantaged by poverty, disability, geographic isolation, or low levels of income, education and literacy. We should work together to develop global human knowledge and understanding about AI and the way it influences our lives.
- Digital and information literacy is an essential part of a core education
Universities should embrace instruction in foundational skills and knowledge about digital technologies, preparing all learners to use AI proficiently, safely and ethically. Everyone should understand the basic concepts of computer systems, machine learning, data science, algorithms and programming. Since AI is a multidisciplinary field, people should know about AI’s intersections with philosophy and ethics, social sciences, health sciences, business, communications, government and legal studies, creative arts and many other fields. Universities should help learners gain the critical thinking and analysis skills needed to function in an AI-assisted world, including concepts such as social responsibility and citizenship, media and information literacy, the ways AI can reinforce human bias and discrimination, its implications for personal privacy and intellectual property, and the ways it can abet deception through fraud and fakery.
- AI tools should enhance teaching and learning
AI technologies should empower learners, enrich and extend the educational experience and advance access and equity in education. The role of AI should be to augment, not fully replace, the vital human relationships between teachers and learners, or within groups of peer learners. AI systems should never compromise the privacy of students’ personal data, and humans should maintain a primary role in the evaluation of students’ learning progress, behaviors and outcomes. AI systems used in education should be transparent and neutral – they should disclose the positionality of their data and models and should not manipulate learning processes in unethical, deceptive or subliminal ways.
- Learning about technologies is an experiential, lifelong process
Because AI is constantly evolving, institutions of higher learning should emphasize the need for continuous skill development and human adaptability. Learners should be encouraged to apply their knowledge in conjunction with AI tools to solve real-world problems, collaborate with those building AI systems and share their findings. Universities should partner with external enterprises to create internship opportunities, sponsor learning events and lectures, and inspire collaborative projects that provide learners with practical exposure to AI technologies and rapidly changing applications.
- AI research and development must be done responsibly
As an engine of discovery, innovation and societal progress, higher education should adopt rigorous ethical standards and fail-safe systems for AI research and design. Scientists should take all necessary steps to ensure that AI development takes full account of the likely benefits it will produce, the limits that should be imposed on its application, and the risks (known and unknown) and potential negative consequences that might emerge from these technologies. AI’s relationship to the development of new creative works must be clearly understood, along with its impact on intellectual property rights. Administrative accountability and requirements should be in place, similar to laboratory safeguards for dangerous biological agents.
In support of these principles, we call on the higher education community, including those beyond the traditional technology fields, to be proactively and integrally involved in the development of multistakeholder governance mechanisms for AI. Educators in all fields are well suited to provide intellectual and ethical guidance, conduct much-needed independent research, serve as trustworthy watchdogs and be advocates for learners, teachers and society.