Introduction
Rachel Fitzgerald
This collection began with a shared vision: to reimagine how educators and students could engage with artificial intelligence (AI) through inquiry, design, and reflection, in ways that embody higher education's core values of curiosity, care, and critical inquiry. The project grew from a recognition that practice, not policy, should lead the conversation about AI in learning. Across disciplines, staff and students came together to make teaching and learning visible and to surface, through the Scholarship of Teaching and Learning (SoTL), how they are experiencing and shaping this moment of technological change. Consistent with SoTL’s commitment to students-as-partners, the project positioned learners as co-designers, co-authors, and co-evaluators throughout the work (Felten, 2013; Cook-Sather et al., 2014).
Emerging from a 2024 Teaching Innovation Grant at the University of Queensland (UQ), Inquiry in Action formed a community of practice: a network of educators and students experimenting, reflecting, and writing together to navigate AI’s uncertainties and explore its potential as a pedagogical partner. Throughout the project, educators approached AI as a catalyst for reflection, creativity, and renewed attention to their teaching. Each contributor began with a practical question, such as how AI might enhance feedback, inclusivity, or engagement. Their classrooms became living laboratories where teachers and students co-designed, tested, and refined new learning experiences. Collectively, these chapters illustrate how SoTL and design-based inquiry can guide responsible innovation that is iterative, evidence-based, and grounded in authentic disciplinary practice (Felten, 2013; Fitzgerald et al., 2025).
The contributors to this volume represent a cross-section of disciplines, from dentistry and dietetics to economics, engineering, and information systems. Each case reflects a unique context, yet all share a common commitment to exploring how AI can enhance the human dimensions of learning. The project created space for educators and students to experiment safely, to document their insights rigorously, and to connect local innovations to broader questions of pedagogical design, ethics, and impact. By situating each case within a shared SoTL framework, this collection transforms individual acts of experimentation into a collective inquiry into how teaching and learning evolve in the age of AI. Together, these cases demonstrate that leadership in learning and teaching rests on fostering the curiosity, courage, and community that enable transformation to take root: a learner-centred leadership stance that aligns institutional purpose with the lived experience of learners through trust, collaboration, and shared responsibility (Krause, 2023).
The chapters are organised to trace a progression, from early experiments that introduce AI as a practical teaching tool to more mature designs that embed it as a partner in inquiry, reflection, and assessment. Each section builds on the last, moving from exploration to integration and from individual innovation to collective learning. Together, they showcase the range of ways educators are making sense of GenAI in context: redesigning learning tasks, scaffolding ethical reasoning, rethinking feedback, and strengthening inclusivity through co-creation with students. Inquiry in Action also honours the people who brought it to life. The educators and students who contributed to this collection model leadership that is relational and grounded in curiosity, evidence, and care. This is SoTL in action: inquiry that values partnership, is grounded in context, and aims to improve learning. In curating this work, the project seeks to create a living record of what responsible, values-led AI engagement looks like across higher education.
Much of the literature on AI in higher education is theoretical, descriptive, or tool-focused (Zawacki-Richter et al., 2019). Inquiry in Action seeks to fill that gap by showing how AI integration unfolds pedagogically, through the relationships, choices, and values that shape learning. Across the book, educators use frameworks such as Technological Pedagogical Content Knowledge (TPACK) and Universal Design for Learning (UDL) to keep design attentive to inclusion and understanding. Several authors embed self-regulated learning principles, reflective cycles, and participatory ethics, positioning students as partners in shaping learning. A commitment to partnership pedagogy underpins the entire collection. Students co-authored, co-tested, and co-evaluated many of these interventions, offering insight into the opportunities and tensions of AI-enhanced learning. This work reflects a philosophy of leadership through learning that recognises innovation as a collective, relational process. By bringing together educators and students across disciplines, Inquiry in Action demonstrates that curiosity, care, and collaboration remain the foundation of transformation in the age of AI. Across the collection, partnership is enacted in concrete ways, from co-writing and co-analysis to student-led evaluation, reflecting the students-as-partners framework (Healey et al., 2014; Cook-Sather et al., 2014).
The early chapters explore the foundations of AI literacy and ethical confidence. Sowmya Shetty and Divya Anantharaman’s Starting Smart with AI describes a large-scale, first-year health course that introduces students to AI through structured, reflective practice. Their design scaffolds prompting, critique, and attribution, showing that ethical confidence begins with transparency and consistency in assessment (see Shetty & Anantharaman, this volume). They make a persuasive case that early, supported exposure to AI, before students encounter it informally, is essential for professional readiness. Building on this, From Anxiety to Agency, by Nantana Taptamat, Marnie Holt, Dominic McGrath, and student partners Tiarna McElligott, Lauren Miller, and Hana Purwanto, charts how uncertainty about AI can be transformed into ethical capability. Co-creating the Science Generative AI Essential Guide with students, they model partnership pedagogy in action, treating students’ lived experiences of confusion and curiosity as design data (see Taptamat et al., this volume). Their work reveals that agency develops when students are invited to shape, critique, and own the norms of responsible use.
Across disciplines, these cases reveal that curiosity about AI was often matched by caution. Many students approached the technology with a blend of fascination and fear, uncertain about its legitimacy, doubtful of its accuracy, and uneasy about authenticity. In Starting Smart with AI and From Anxiety to Agency, educators reframed this hesitancy as a space for learning rather than a limitation. By acknowledging students’ reluctance and designing for transparency, educators helped transform scepticism into ethical awareness and reflective agency. Later in the book, Weerakoon’s and Wright’s health-focused cases also show how trust develops through dialogue and human presence. These moments of hesitation are a reminder that learning begins when uncertainty is named, shared, and explored together.
Melinda Pratt and Iris Zhang’s Leading with Responsible AI provides a philosophical counterpoint. Set within education and creative inquiry, their chapter examines how students distinguish between human and automated creativity. They show that ethical learning involves not only technical and cognitive judgement but also attentiveness, imagination, and moral reflection (see Pratt & Zhang, this volume). In their account, AI becomes a mirror for what it means to learn and create as a human being. Taken together, these chapters move the conversation toward curiosity and awareness and, importantly, from tool-use to ethical engagement.
The middle of the book turns to design: how AI can make learning more inclusive, relevant, and connected, and the pressures and paradoxes of doing so. Arosha Weerakoon’s Designing with AI details a rapid, real-time digital uplift in two dentistry courses, where GenAI was used as a co-creative assistant to streamline design and enhance patient-centred learning. Her story captures both the possibilities and the precariousness of GenAI. As a clinician–academic balancing teaching, research, and professional practice within tight workload constraints, Weerakoon demonstrates how technology can clarify expectations, reduce cognitive load, and support diverse learners when guided by universal design principles and supported by structured success criteria, multimodal resources, and transparent AI use. Her findings also caution that “AI-ness”, such as synthetic voices, perceived artificiality, and questions of authenticity, can undermine trust in the student–staff relationship if not framed by a clear pedagogical purpose (see Weerakoon, this volume).
Olivia Wright and her student partner Shreyasi Baruah continue the theme in Integrating Generative AI to Strengthen Counselling and Communication in Dietetic Education, using AI-powered simulations and chatbots to help students practise empathy and motivational interviewing. Their study illustrates how AI can support clinical communication when introduced sequentially, with scaffolds that balance practicality and reflection (see Wright & Baruah, this volume). This is followed by Reihaneh Bidar’s Paving Paths of Enquiry-Led AI Integration for Future-Ready Learners, which further demonstrates how to bridge inclusion and inquiry. Bidar shows how information systems students progress deliberately from dependency to collaboration and then to integration as they learn to prompt, validate, and critique AI outputs. Her model reflects another focus of this collection: developing technical fluency through thoughtful, critically informed practice (see Bidar, this volume). Across these cases, it is clear that inclusion is achieved not by technology alone but by educators’ capacity to translate feedback, even resistance, into design that re-centres human learning. It is equally clear that when AI design is guided by pedagogy, it can augment the human strengths at the heart of this work.
The chapters that follow deepen the focus on inquiry, reflection, and disciplinary depth. In Bicycles for the Mind, Russell Manfield and students Alice Hawkesby and Nicholas Nucifora test the idea that AI can enhance critical thinking. Their business and innovation class used AI tools transparently to prepare debates, moderate peer resources, and analyse case studies. For the authors, the results affirm Steve Jobs’s metaphor of the computer as “a bicycle for the mind”: when students are asked to engage critically rather than passively, AI can accelerate learning while preserving human judgement (see Manfield, Hawkesby, & Nucifora, this volume). John Raiti’s GenAI in Economics Education complements this by showing how AI supports inquiry-led learning in economics, helping students use data to explore behavioural questions while sustaining reflective awareness of their own reasoning processes (see Raiti, this volume). Both cases position AI within disciplinary epistemologies as a tool for thinking rather than producing.
Aneesha Bakharia and student partner Jasmine Burt’s Teaching with GenAI offers a methodologically rigorous, TPACK-based approach to integrating AI into computing courses. By aligning technology, pedagogy, and content, they demonstrate how governance, structured prompting, and authentic assessment can coexist to build both employability and ethical awareness (see Bakharia & Burt, this volume). Sean Mitchell’s Three Paths to AI Integration brings the section to a close with three distinctive implementations: BBOP, NotABot, and The Lighthouse. Each iteration refines the role of AI, first as a verifier, then as a coach, and finally as a facilitator of reflection, to support cognitive engagement and metacognitive growth. His findings reinforce a message echoed across this collection: thoughtful design, rather than technological sophistication, ultimately determines educational value (see Mitchell, this volume).
What unites these contributions is a shared ethic of inquiry. Each author reminds us that responsible AI integration begins with questions, not answers. They show that design thinking, feedback, and iteration are practices that make learning visible, contextual, and human. The diversity of disciplines represented here, from health and science to business, computing, and education, shows that meaningful innovation has little to do with technical prowess and everything to do with how educators make technology serve learning, not the other way around. Across this book, readers will see leadership in its most authentic form: less about positional authority, more about action grounded in curiosity and reflection. The educators represented here embody what higher education needs: collaborative leaders who build capability across roles and innovate practice grounded in shared values of integrity, inclusion, and impact. This vision aligns closely with the Australian Universities Accord (Department of Education, 2024), which calls for a sector that values people, partnerships, and purpose. These contributors already demonstrate that future by leading through influence, sharing practice, mentoring peers, and creating space for others to grow. Their stories show how inquiry-led teaching can drive systemic change, connecting micro-level classroom design to macro-level reform.
This approach to leadership finds philosophical resonance in Carmen Vallis’s Leading and Practising GenAI Care-fully (see Vallis, this volume). Vallis reframes leadership through SoTL as relational rather than hierarchical, grounded in care, curiosity, and shared inquiry. Her call to “lead care-fully” invites educators to embrace uncertainty, to turn experimentation into scholarship, and to treat collective sense-making as a moral and pedagogical act. In this spirit, Inquiry in Action becomes more than a showcase of AI integration; it reflects a community of practice embodying the very principles Vallis articulates: curiosity over control, connection over compliance, and leadership through care.
AI has become a catalyst for re-examining the purposes of education: it prompts us to ask why we teach, what we value, and how we prepare students for a world where knowledge is co-created with machines. Each chapter shows that AI’s potential depends on the integrity of pedagogy and the humanity of its teachers. This collection is also a testament to the education community. The Teaching Innovation Grant that seeded this project was only the beginning; the curiosity, collaboration, and care of the educators who voluntarily participated made it thrive. Despite heavy workloads, these educators found space to question, experiment, and share. They model leadership that is relational and reflective, the kind that changes culture from within. As higher education navigates AI’s impact, the message here is clear: begin with learning, stay grounded in evidence, and design for inclusion and integrity, because good teaching comes first (Crawford et al., 2023). When educators approach AI as inquiry rather than instruction, they build digital capability alongside resilience, empathy, and ethical judgement, the qualities that remain uniquely human. This is the mission of higher education: to develop knowledgeable, ethical, and confident graduates ready to lead in a world transformed by intelligence, both human and artificial.
References
Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. Jossey-Bass.
Crawford, J., Vallis, C., Yang, J., Fitzgerald, R., O’Dea, C., & Cowling, M. (2023). Editorial: Artificial intelligence is awesome, but good teaching should always come first. Journal of University Teaching & Learning Practice, 20(7). https://doi.org/10.53761/1.20.7.01
Department of Education. (2024). Australian Universities Accord final report – Summary report. Australian Government. https://www.education.gov.au/australian-universities-accord/resources/australian-universities-accord-final-report-summary-report
Felten, P. (2013). Principles of good practice in SoTL. Teaching & Learning Inquiry, 1(1), 121–125. https://doi.org/10.2979/teachlearninqu.1.1.121
Fitzgerald, R., Roe, J., Roehrer, E., Yang, J., & Kumar, J. A. (2025). The next phase of AI in higher education: An agenda for meaningful research. Journal of University Teaching & Learning Practice, 22(2). https://doi.org/10.53761/jwt7ra63
Healey, M., Flint, A., & Harrington, K. (2014). Engagement through partnership: Students as partners in learning and teaching in higher education. Higher Education Academy.
Krause, K.-L. (2023). Learner-centred leadership: Principles and practices for contemporary higher education. Australian Council for Educational Research.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16, Article 39. https://doi.org/10.1186/s41239-019-0171-0