12 Teaching That Transforms: Reimagining Higher Education in the Age of AI
Rachel Fitzgerald
How to cite this chapter:
Fitzgerald, R. (2025). Teaching that transforms: Reimagining higher education in the age of AI. In R. Fitzgerald (Ed.), Inquiry in action: Using AI to reimagine learning and teaching. The University of Queensland. https://doi.org/10.14264/d4aaaf6
Abstract
This concluding chapter brings together the insights and provocations from across the collection to reimagine higher education in the age of artificial intelligence. It argues that meaningful innovation arises from reflection, ethics, and purposeful design. As generative AI becomes a defining feature of academic life, universities must move beyond policy debates to cultivate literacy, leadership, and integrity in practice. The chapter explores how educators are evolving from gatekeepers to guides, reframing teaching as a relational act grounded in care, collaboration, and curiosity. It calls for equity, transparency, and accountability to be embedded as design principles that ensure AI serves human learning rather than replacing it. Finally, it proposes a definition of AI literacy that is contextual, ethical, and creative, and asserts that the future of higher education will be determined not by sophisticated technologies but by the strength of our values and our commitment to teaching that transforms.
Keywords
AI literacy, learner-centred leadership, pedagogy of care, assessment, equity, feedback, SoTL, higher education
Introduction
As this collection draws to a close, it is safe to say that inquiry in action is what sustains meaningful innovation. Across disciplines and contexts, each chapter has shown educators grappling with the same essential question: how to design, teach, and lead when technology reshapes what it means to learn. The stories shared here remind us that transformation comes from reflection, ethics, and courage. Generative AI has intensified this work, challenging us to balance experimentation with care and to ensure that human learning remains at the centre of every learning design decision. This project began as a response to disruption but has since evolved into a collective reimagining of pedagogy, one that invites all of us to act with purpose in shaping the future of higher education.
Our universities need adaptive systems that are responsive to the evolving needs of our learners and society more broadly (Sankey et al., 2023). The university of the future must learn to innovate without losing sight of its purpose, which is to cultivate curiosity, foster connection, and create societal impact. The innovations showcased throughout this book demonstrate how educators are weaving AI into feedback, reflection, simulation, and assessment. The real takeaway lies in the spirit of inquiry that shapes the use of technology. Each case represents design thinking in motion, an exploration of how tools can expand participation, clarify learning, and re-centre the student voice. Collectively, the chapters tell us that pedagogy remains our most powerful form of innovation and that collaborative learning is the most sustainable way of developing our future graduates.
Why Responsible AI Use Matters Now
As AI becomes a defining condition of graduate life, many students continue to encounter it through uncertainty or exclusion rather than purpose (Henderson et al., 2025). Higher education, by its nature, is slow to change, and it will take time for institutions to adapt and prepare learners for this new reality. Yet students are ready for educators to lead this transition. Recent evidence from a national study across four Australian universities reinforces this point: while nearly half of the 6,960 students surveyed sought feedback from generative AI, they consistently rated teacher feedback as more trustworthy and contextually relevant. At the same time, they valued AI feedback for its accessibility, immediacy, and perceived neutrality (Henderson et al., 2025). These findings remind us that literacy, dialogue, and trust are the conditions through which learning systems evolve.
These findings also highlight an urgent need for higher education to cultivate AI-era literacy that includes the ability to evaluate, interpret, and ethically engage with machine-generated feedback. The case studies in this collection begin to address this gap by fostering literacy, dialogue, and trust between educators and students. Responsible AI use is, at its heart, about developing good judgement. Our future professionals must be able to interrogate data sources, detect bias, understand context, and apply ethical reasoning (Fitzgerald & Curtis, 2025). For educators, this means teaching with transparency and humility and having the courage to model learning in real time. It means recognising that AI belongs within the curriculum, not outside it.
The Educator’s Evolving Role
This book calls for a reimagining of the educator’s role as the boundaries between human and machine intelligence continue to blur. As facilitators of learning and responsible practice, educators need to model curiosity, critical thinking, and ethical awareness in ways that help students navigate an increasingly complex landscape. As educators become guides in uncertainty, they help learners build the confidence and judgement to use intelligent tools and technologies with integrity and purpose. Across the chapters in this book, we often see this movement as authors shift from gatekeepers to guides. As in the early days of online learning, when many feared the loss of authority to the screen (Sankey et al., 2023), similar anxieties now surround AI. However, in education, it is increasingly evident that authority is grounded in trust rather than control.
When educators invite students into questioning and co-design, learning becomes shared, visible, authentic, and more engaging for all. This is leadership at its most relational, built on curiosity and the willingness to learn alongside students, with the courage to make mistakes publicly, and the capacity to translate complexity into care. It reflects learner-centred leadership, where educators act as co-learners and collaborators who design for curiosity, integrity, and inclusion (Krause, 2024). In this view, effective AI integration is part of a broader learning ecosystem in which staff and students co-create understanding through experimentation and reflection.
This shift asks us to rethink what it means to teach. Active learning does not happen by chance; it happens when theory meets practice and staff and students are confident enough to take risks (Czaplinski & Huijser, 2023). It requires deliberate choices about tasks, facilitation, and relationships, regardless of context. Many academics begin with content and then look for strategies to fit, consulting learning designers only at specific stages or when problems emerge. Few have mentors for teaching, and sustained collaboration with educational experts remains limited (Sim & Huijser, 2023). As a result, technology is often used to transfer information rather than to promote interaction, and institutional barriers can discourage experimentation. Without adequate time, support, and structures, innovation is easily lost. Yet, across these case studies, we can see how collaboration and space for reflection can initiate new and innovative approaches to learning.
Collaboration is particularly important as many academics now carry a triple role as disciplinary expert, designer of learning, and facilitator of knowledge transfer (Czaplinski & Huijser, 2023). However, expertise in a field does not automatically translate into pedagogical expertise, and cognitive entrenchment can make it difficult to see problems as students do. Institutions that seek robust learning must therefore invest in learning design for both lecturers and tutors, with explicit time, recognition, and practical development to match. Good teaching strengthens both the academic and affective dimensions of learning through care, presence, and interaction, qualities that are essential to active learning and belonging (Kahu & Picton, 2019).
To make this sustainable, the book points to structures that work: ongoing, planned collaboration with learning designers and student partners, and communities of practice where educators test ideas, examine theory together, and bring evidence into everyday design (Fitzgerald et al., 2020; Pleschová et al., 2021). In practical terms, we need to embed learning design capability across roles and align workload with the time it genuinely takes to build active learning. We also need to make collaborative design the norm by resourcing educator–designer–student partnerships throughout the teaching cycle, not only at the start. Finally, we must treat technology choices as pedagogical decisions guided by theory and evidence, with the aim of making participation visible, relational, and connected. These are the conditions that turn classrooms into places where curiosity grows and confidence takes root.
The academic role is no longer confined to content expertise alone; it now encompasses design, facilitation, and the building of learning communities. This relational shift reflects a pedagogy of care, one that values connection and belonging as much as mastery and output. Within this scenario, AI becomes a catalyst for rediscovering the essence of teaching. The educator’s evolving role is, at its core, an ethical one. It calls for courage to unlearn, to question, and to design with humanity at the centre. When we teach with openness and curiosity, we remind students that learning has always been a profoundly human act, something that no algorithm can replicate.
Equity, Transparency, and Accountability
As educators take on these expanded roles, leadership becomes inseparable from ethics. Designing for learning also means designing for fairness. The same curiosity and care that drive innovation must guide how we use AI responsibly and inclusively. We must consider equity, transparency, and accountability as design principles that shape educational decisions. The digital age has already exposed inequities that AI risks magnifying. Students from diverse linguistic, cultural, and economic backgrounds may rely on AI tools more heavily yet also be more exposed to their biases (Pierrès et al., 2024). Inclusion cannot be assumed; it must be intentionally designed into the curriculum, with transparency to match. Students deserve clarity about how AI is used in teaching and assessment, and about the expectations that govern its ethical application. Co-created guidelines and open discussion restore confidence where secrecy breeds mistrust.
Accountability, both personal and institutional, anchors the future of responsible innovation. Assessment in the age of AI is a wicked problem without simple solutions, demanding iterative, values-based responses from educators and institutions alike (Corbin et al., 2025). As AI becomes embedded in feedback, analytics, and administration, universities must evaluate not only the efficiency of such systems but, more importantly, their impact. Accountability begins with owning decisions, listening to students, and aligning technological choices with pedagogical purpose. In this context, leadership means openness, transparency, and inclusive design that advances both equity and excellence.
From Surface to Substance: Redefining AI Literacy
If equity and transparency form the ethical foundation of AI use in higher education, then literacy is the practice that sustains it. Too often, AI literacy is reduced to a functional skill set, a matter of learning prompts or mastering tools. Genuine literacy goes deeper. It requires students and educators to understand how AI systems produce knowledge, whose perspectives are embedded within them, and how their use shapes learning itself. The case studies in this collection show that AI literacy is not about what students can make AI do, but about how they interpret, evaluate, and apply its outputs responsibly. Students may be quick to experiment with generative AI tools, but without guidance they risk mistaking fluency for understanding. True literacy involves developing evaluative and nuanced judgement and the ability to discern when to trust, adapt, or question machine-generated feedback (Corbin et al., 2025).
AI literacy must evolve from surface familiarity to deep engagement. It should be contextual, ethical, and creative. Contextual literacy enables students to recognise how AI functions differently across disciplines. Ethical literacy helps them identify bias, question transparency, and consider the social impact of automated systems. Creative literacy allows experimentation, encouraging students to use AI to extend rather than replace human thought. Such literacy cannot be taught through a single module or policy but should be woven through the curriculum as a living practice cultivated through dialogue, reflection, and shared responsibility to turn AI from a black box into a catalyst for critical inquiry (Fitzgerald & Curtis, 2025).
The Future of Higher Education – Reclaiming the Human
As we look ahead, the lessons of this book point to a simple truth: the future of higher education will be determined not by the sophistication of our systems or the technologies we can use but by the strength of our values. AI may change how we access and apply knowledge, but it cannot define what knowledge, or indeed wisdom, means. That remains our responsibility. Sustainable transformation relies on sensemaking, the collaborative process of interpreting complexity and change (Krause, 2024), and on the capacity to embrace iteration and reflection (Corbin et al., 2025).
Generative AI magnifies this need. Educators must continuously interpret new contexts, engage in dialogue, and anchor their practice in values that prioritise learning. The human relationship at the heart of learning remains irreplaceable. Students value the empathy, contextual understanding, and credibility teachers bring to feedback (Henderson et al., 2025). The challenge is to design learning systems in which human and AI complement each other and serve distinct pedagogical purposes.
This is an education system that teaches students to use intelligent tools and to live and learn thoughtfully in a world shaped by such tools. To ensure that every graduate can engage with AI critically, ethically, and imaginatively, we must remember that education is not content and information delivery but the development of wisdom. The educators who contributed to this book have already begun that transformation. They model what leadership in learning and teaching looks like in this new era: collaborative, reflexive, and grounded in care. Their practices remind us that meaningful change begins in classrooms. In the end, this book is not about technology at all; it is about people and their capacity to question, to connect, and to create meaning in a world transformed by intelligent tools. If we can hold on to that, then the future of higher education will be AI-enabled and humanity-enriched. May we continue to design futures where technology serves humanity, and learning remains our most transformative act.
References
Bax, S. (2011). Normalisation revisited: The effective use of technology in language education. International Journal of Computer-Assisted Language Learning and Teaching, 1(2), 1–15. https://doi.org/10.4018/ijcallt.2011040101
Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education, 1–17. https://doi.org/10.1080/02602938.2025.2553340
Czaplinski, I., & Huijser, H. (2023). The role and application of learning theories in the virtual university. In M. Sankey, H. Huijser, & R. Fitzgerald (Eds.), Technology enhanced learning and the virtual university (pp. 233–244). Springer. https://doi.org/10.1007/978-981-99-4170-4_13
Fitzgerald, R., Huijser, H., Meth, D., & Neilan, K. (2020). Student–staff partnerships in academic development: The course design studio as a model for sustainable course-wide impact. International Journal for Academic Development, 25(2), 134–146. https://doi.org/10.1080/1360144X.2019.1631170
Fitzgerald, R., & Curtis, C. (2025, September 8). AI is now part of our world – uni graduates should know how to use it responsibly. The Conversation. https://theconversation.com/ai-is-now-part-of-our-world-uni-graduates-should-know-how-to-use-it-responsibly-261273
Fitzgerald, R., Roe, J., Roehrer, E., Yang, J., & Kumar, J. A. (2025). The next phase of AI in higher education: An agenda for meaningful research. Journal of University Teaching and Learning Practice, 22(2). https://doi.org/10.53761/jwt7ra63
Goldie, J. G. S. (2016). Connectivism: A knowledge learning theory for the digital age? Medical Teacher, 38(10), 1064–1069. https://doi.org/10.3109/0142159X.2016.1173661
Goodyear, P. (2021). Navigating difficult waters in a digital era: Technology, uncertainty and the objects of informal lifelong learning. British Journal of Educational Technology, 52(6), 2241–2255. https://doi.org/10.1111/bjet.13107
Henderson, M., Bearman, M., Chung, J., Fawns, T., Buckingham Shum, S., Matthews, K. E., & de Mello Heredia, J. (2025). Comparing generative AI and teacher feedback: Student perceptions of usefulness and trustworthiness. Assessment & Evaluation in Higher Education, 50(3), 1–16. https://doi.org/10.1080/02602938.2025.2502582
Kahu, E. R., & Picton, C. (2019). The benefits of good tutor–student relationships in the first year. Student Success, 10(2), 23–33. https://doi.org/10.5204/ssj.v10i2.1293
Krause, K.-L. (2024). Learner-centred leadership in higher education: A practical guide. Routledge.
Pleschová, G., Roxå, T., Thomson, K. E., & Felten, P. (2021). Conversations that make meaningful change in teaching, teachers, and academic development. International Journal for Academic Development, 26(3), 201–209. https://doi.org/10.1080/1360144X.2021.1958446
Sankey, M., Huijser, H., & Fitzgerald, R. (Eds.). (2023). Technology enhanced learning and the virtual university. Springer. https://doi.org/10.1007/978-981-99-4170-4
Sim, K. N., & Huijser, H. (2023). Models of professional development for technology-enhanced learning in the virtual university. In M. D. Sankey, H. Huijser, & R. Fitzgerald (Eds.), Technology enhanced learning and the virtual university. Springer. https://doi.org/10.1007/978-981-99-4170-4_8
Sweller, J., Ayres, P. L., Kalyuga, S., & Chandler, P. A. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. https://doi.org/10.1207/S15326985EP3801_4
Tay, Z. A., Huijser, H., Dart, S., & Cathcart, A. (2023). Learning technology as contested terrain: Insights from teaching academics and learning designers in Australian higher education. Australasian Journal of Educational Technology, 39(1), 56–70. https://doi.org/10.14742/ajet.8179