Artificial Intelligence (AI) is reshaping industries and jobs at a rapid pace, and education systems in the Middle East are responding with urgency.
The UAE, for example, has witnessed a 344 per cent year-over-year jump in enrolments for Generative AI courses, according to Coursera’s 2025 Global Skills Report, and has introduced AI as a formal subject in all public schools from 2025. These developments underscore a critical imperative: universities must embed AI literacy across curricula so every student is equipped to use AI tools responsibly, verify the technology’s outputs, and recognise biases in AI-generated content.
In a 2025 Higher Education Policy Institute survey, 92 per cent of students reported using AI tools in their academic work, yet 83 per cent of faculty worry that students cannot critically evaluate AI-generated output. This disconnect reveals a fundamental challenge: while students enthusiastically adopt AI, they often lack the frameworks for effective, ethical use. Section’s AI Proficiency Report found only 7 per cent of knowledge workers are proficient in AI tools, underscoring the urgent need for universities to embed structured AI literacy across disciplines.
The evolution from early digitisation to today’s AI revolution highlights how ethics and structured learning must evolve in parallel. A disciplined, ethics-first approach to AI education is essential – one that balances enthusiasm for innovation with “appropriate caution”, teaching students not just how to use AI, but how to question it.
Interdisciplinary integration of AI in education
A key strategy in building AI literacy is integrating AI across all disciplines, not just in computer science or engineering programmes. Students in every field are beginning to use AI in ways relevant to their domain. Architecture students, for example, now use generative AI to explore novel building designs; life sciences students apply machine learning techniques to genomic data; business students experiment with AI for financial forecasting and risk analysis; and liberal arts scholars use AI to mine historical texts or create media content in new ways.
By tailoring AI applications to each field, educators make AI literacy hands-on and relevant for every student. This interdisciplinary approach reflects a broader trend: AI literacy is becoming as essential today as basic computer literacy was a generation ago.
Ethics and critical thinking: The core of AI literacy
AI literacy isn’t just about technical skills; it also demands ethics and critical thinking. Generative AI tools can produce content that sounds convincing but may be false or biased. Large language models sometimes “hallucinate”, fabricating answers and even references, and such polished output can mislead students. Thus, a core tenet of AI literacy is learning to verify sources and facts behind AI-generated content.
Educators weave these lessons into coursework. A common exercise is to have students do a task manually, then use an AI tool to do it and compare the results. This shows where the AI helps and where it fails, reinforcing that AI can assist but not replace human judgment. Instructors also explain how AI models work and how biases in training data can creep in. When Purdue University researchers demonstrated an AI system struggling to recognise faces of people with darker skin tones, participants became more aware and sceptical of AI’s biases.
Incorporating cases like these into coursework sensitises students to issues of fairness and bias in AI – from facial recognition and hiring algorithms to the selection of news articles we see on social media. By instilling habits of checking multiple sources and looking for evidence, educators ensure that students treat AI as a starting point, not the final authority. The outcome is a new kind of digital literacy: one in which a graduate can harness AI tools confidently but will pause to question their output, a crucial reflex in the era of deepfakes.
Collaboration with industry and real-world alignment
To keep pace with advances, universities are partnering with tech companies and industry experts to ensure that classroom AI applications mirror real-world ones and expose students to the ethical standards and challenges faced by industry, from data privacy issues to addressing bias in AI models deployed at scale.
Industry collaboration also helps align curricula with evolving skill needs. In the UAE, tech firms co-develop AI labs with universities to give students hands-on experience with current tools, addressing employers’ concerns that graduates aren’t ready for AI-driven workplaces.
Incorporating industry case studies lets students practise solving real problems and emphasises the responsibility that comes with deploying AI beyond the campus. Additionally, as the UAE positions itself as a global AI hub, nurturing talent that can innovate responsibly is seen as a competitive advantage.
Future-ready, responsible graduates
The ultimate goal of AI literacy is to produce graduates who are as comfortable working with AI as they are working with people. This is crucial in the Middle East, where governments have ambitious digital economy plans. PwC projects that AI will contribute around $320bn to the Middle East’s GDP by 2030. Such growth will demand a workforce adept in AI.
However, it is not enough for tomorrow’s leaders to use AI widely; they must use it wisely. If they adopt AI without understanding its pitfalls, they risk amplifying biases or spreading misinformation, undermining the technology’s benefits. Many students already recognise this risk and are calling for more AI training for both students and faculty, and for a voice in how AI is used on campus. It is time for academia to step up to this challenge.
Middle Eastern universities have an opportunity to lead by example, to ride the AI wave without drowning in it. By implementing structured AI modules, critical thinking exercises, strong ethical guidelines, and industry partnerships, they can ensure graduates are both tech-savvy and ethically grounded. These graduates will enter the workforce ready to innovate with AI while upholding trust and accountability. Such balanced expertise will anchor a sustainable, inclusive digital future for the region.
The message is clear: equip students today with ethical AI skills, and they will drive the Middle East’s AI revolution responsibly tomorrow.
Dr Sudhindra Shamanna is the pro vice chancellor, Manipal Academy of Higher Education (MAHE), Dubai Campus.


