"

Starting Smart with AI: A Practical Literacy Task for First-Year Health Students

Sowmya Shetty and Divya Anantharaman

How to cite this chapter:

Shetty, S., & Anantharaman, D. (2025). Starting smart with AI: A practical literacy task for first-year health students. In R. Fitzgerald (Ed.), Inquiry in action: Using AI to reimagine learning and teaching. The University of Queensland. https://doi.org/10.14264/969bc18

Abstract

Generative Artificial Intelligence (GenAI) is rapidly reshaping healthcare education. This case study reports on the design and implementation of an AI literacy task embedded in HLTH1000, a large first-year cross-disciplinary course with approximately 1,500 students. The task introduced tools such as ChatGPT, Copilot, and Claude to support academic writing and critical thinking while foregrounding ethical, professional, and practical considerations. Drawing on student reflections and course evaluation data, we identify perceived benefits, including efficiency, clarity, and a useful starting point for writing, alongside cautions about reliability, bias, overreliance, and the continuing need for human oversight. We outline the instructional design decisions, scaffolds, and workflow used; report challenges related to tutor alignment, digital literacy variance, and submission logistics; and present actionable refinements. We argue that early, structured, and ethically grounded engagement with AI can build digital confidence and professional judgement, provided it is supported by clear guidance, consistent assessment criteria, educator preparation, and robust governance.

Keywords

Generative AI, AI Literacy, Healthcare Education, Academic Writing, Ethics, Assessment Integrity, First-Year Curriculum, Student Reflections

Practitioner Notes

  1. Introduce AI literacy in first year with staged practice in prompting, critique, and attribution, then revisit across the program.
  2. Require tracked changes, annotated justifications, and a brief AI-use log to centre transparency, evidence, and accountability.
  3. Pre-semester alignment and a living FAQ minimise mixed messages in large cohorts.
  4. Let students either edit an AI draft or write first and then critique, to support different strengths and reduce overreliance.
  5. Embed prompts and exemplars that mirror real healthcare contexts, so students see relevance to future practice.

Introduction

The integration of Generative Artificial Intelligence (GenAI) tools, particularly large language models (LLMs) such as ChatGPT, is reshaping healthcare education. These technologies offer valuable support in developing the academic writing and critical thinking skills essential for clinical practice. GenAI assists students in drafting, editing, and refining healthcare report-style assessments, providing real-time feedback that improves clarity, structure, and coherence. It also helps interpret complex medical terminology and data, making learning more accessible and inclusive.

Thoughtful integration into curricula is vital. GenAI must be used responsibly, guided by ethical principles and supported by institutional frameworks. In healthcare disciplines, where professional integrity is paramount, responsible use is non-negotiable. While GenAI can enhance learning and support clinical reasoning, its implementation requires caution. Issues such as data privacy, ethical implications, and the need for human oversight must be addressed. Transparency and trust are central to AI adoption in healthcare education. Patient confidentiality and strong data governance are critical. Ethical guidelines are needed to reduce bias and promote equitable outcomes. Explainability and accountability help uphold professional standards and ensure safe, effective use of GenAI in clinical settings.

In response to these opportunities and challenges, an AI literacy task was introduced into the curriculum of a large cross-disciplinary course with more than seventeen health and behavioural science disciplines participating. The structured activity encouraged students to explore GenAI tools reflectively, considering their ethical, professional, and practical dimensions. This case study examines how such an intervention could promote informed decision-making, digital responsibility, and critical engagement with AI. By fostering both technical competence and professional integrity, the task aimed to prepare students for the evolving realities of healthcare practice in a digital age.

Background, Evidence and Need

As healthcare education prepares students for the digital realities of clinical practice, GenAI has emerged as both a challenge and a catalyst for change. Tools such as ChatGPT are transforming how learners engage with information, build arguments, and communicate evidence, while opening new pathways for inclusion and engagement. Within a Scholarship of Teaching and Learning (SoTL) lens, this case considers GenAI as a pedagogical development that reshapes how educators cultivate judgement, ethics, and professional identity in future clinicians.

The Promise of GenAI in Learning and Writing

GenAI assists students in drafting, editing, and refining healthcare report-style assessments, improving clarity, structure, and coherence through iterative feedback loops (Peláez-Sánchez et al., 2024). These tools also help interpret complex medical terminology and data, making learning more inclusive. Research shows that GenAI can enhance engagement, enable personalised and adaptive learning, and improve academic productivity (Bakthavatchaalam & Sivasankar, 2024; Kim et al., 2025).

A growing body of literature highlights the value of GenAI in transforming higher education by enabling autonomous, interactive, and adaptive learning experiences (Bakthavatchaalam & Sivasankar, 2024; Peláez-Sánchez et al., 2024). These technologies help students enhance their academic writing, develop critical thinking skills, and engage more deeply with course content. Platforms such as ChatGPT have also been shown to provide personalised feedback, scaffold complex tasks, and facilitate virtual simulations, leading to improved learning outcomes and higher student satisfaction (Kim et al., 2025).

GenAI tools are also recognised for their role in promoting technological fluency among students. By equipping learners with the skills to navigate and critically engage with emerging digital environments, GenAI supports both academic and professional development (Krause et al., 2025).

Their integration into academic writing has demonstrated practical benefits, including improved efficiency, enhanced quality, and greater learner autonomy (Kim et al., 2025). This aligns with emerging pedagogies that prioritise student-centred learning, continuous feedback, and authentic assessment, the key pillars of contemporary learning design (Sankey et al., 2023). Recent systematic reviews further emphasise the pedagogical potential of GenAI in fostering creativity, learning independence, and prompt literacy. However, they also caution against overreliance, which may hinder the development of essential cognitive and metacognitive skills (Qian, 2025). Responsible and strategic implementation is therefore crucial to maximise educational benefits while maintaining academic integrity and inclusivity. To harness its full potential, educators must consider how GenAI transforms the very processes of writing, reflection, and ethical reasoning that underpin healthcare education.

Risks, Responsibilities and Ethical Imperatives

The emergence of Multimodal Large Language Models (MLLMs) marked a significant advancement in healthcare education. These models, which integrate text, image, and other data types, offer real-time feedback, clarify complex medical terminology, interpret large volumes of clinical data, and simulate realistic interactions for training and decision support. For instance, MLLMs can assist clinicians during consultations or procedures by providing instant, context-aware insights, thereby improving diagnostic accuracy and clinical efficiency (Topol, 2019). MLLMs also enhance communication between healthcare providers and patients by translating technical language into accessible terms; in doing so, they support patient understanding and informed consent, contributing to safer and more transparent care (Rajpurkar et al., 2018).

In healthcare disciplines, where ethical standards and professional integrity are paramount, the responsible use of GenAI is essential. Ethical concerns such as algorithmic bias, data privacy, and the lack of transparency in AI decision-making must be addressed to ensure fair and accountable assessment practices. García-López and Trujillo-Liñán (2025) warn of risks including the loss of cognitive autonomy and potential misuse of student data by institutions. They call for inclusive regulatory frameworks and ethically grounded pedagogical models to safeguard student rights and academic integrity. Structured AI ethics education has shown promise: Abuadas et al. (2025) demonstrate that targeted instruction in AI ethics can significantly enhance students’ moral sensitivity and ethical awareness. These findings reinforce the need for curriculum-level interventions, particularly in healthcare education where professional standards are paramount. Researchers further highlight the importance of human oversight in the use of GenAI to uphold academic standards and reduce risks such as overreliance (Peláez-Sánchez et al., 2024), as well as concerns around originality, bias, and data privacy (Bobula, 2024). To support responsible integration, scholars advocate for updated assessment policies and the scaffolded development of AI literacy throughout the curriculum (Bobula, 2024; Almarzouqi et al., 2024).

Educator-led training is also considered essential. It ensures students receive guided support in navigating the complexities of AI use in academic writing while maintaining rigour and integrity (Lazar et al., 2024). Ethical frameworks play a critical role in this process, helping students understand the boundaries and responsibilities associated with AI-assisted learning (Castillo-Martínez et al., 2024; Lazar et al., 2024; Cheng et al., 2025). Thoughtful integration of GenAI into health professional curricula must be guided by ethical principles and supported by robust institutional frameworks (Dempere et al., 2023; Dos, 2025; Li & Li, 2024). As these tools become more embedded in higher education, the need for responsible use intensifies, especially in healthcare disciplines, where professional integrity cannot be compromised. These technologies hold immense promise: they can enhance learning, sharpen clinical reasoning, and support informed decision-making. Yet implementation demands caution. Issues such as data privacy, ethical implications, and the irreplaceable role of human oversight must be carefully considered. Without these safeguards, the risks may outweigh the benefits. Patient confidentiality also remains a major concern, and the need for strong data governance frameworks cannot be overstated (Reddy et al., 2020). Researchers have called for the development of ethical guidelines to support the responsible use of AI, aiming to reduce bias and promote fair, equitable outcomes across diverse populations (Morley et al., 2020). Explainability and accountability are essential to uphold professional standards and ensure AI is used safely and effectively in clinical education (Amann et al., 2020).

As discussed earlier, newer MLLMs are being used to simulate clinical scenarios, allowing students to engage in interactive learning environments that mirror real-world practice. These simulations help scaffold complex tasks and promote adaptive learning, particularly in disciplines such as radiology, dermatology, and critical care (Esteva et al., 2017; Johnson et al., 2016). By enabling learners and practitioners to interact with multimodal data in meaningful ways, MLLMs foster technological fluency and strengthen clinical reasoning skills, ultimately supporting more informed decision making and improved patient outcomes. Together these opportunities and challenges highlight the importance of scholarly inquiry into how ethical frameworks and curriculum design can prepare students to engage critically, reflectively, and responsibly with AI in professional healthcare contexts.

Design Implications for Teaching and Learning

Effective implementation within academic settings requires thoughtful instructional design and comprehensive teacher training to ensure that AI tools align with learning goals and pedagogical intent. Vorobyeva et al. (2025) emphasise that successful integration of AI in education depends not only on technological readiness but also on educators’ preparedness and the development of targeted professional learning programs. The increasing presence of artificial intelligence in healthcare education and practice highlights the importance of early and responsible engagement with its tools. Patient safety, enhanced learning outcomes, and trust in technology are foundational to effective teaching and professional practice. These considerations must be embedded from the very beginning of a student’s academic journey.

As GenAI continues to evolve, its integration into healthcare education must be guided by ethical principles, transparent practices, and a strong commitment to equity. This approach ensures that students benefit from technological advancements while also developing the critical competencies needed to navigate AI-enhanced clinical environments responsibly. By fostering digital literacy, ethical awareness, and professional integrity, educators can prepare future health professionals to use AI thoughtfully and effectively in real-world practice.

Healthcare Education Context

Understanding the learning and teaching environment was essential to interpreting how the AI literacy task unfolded in practice. HLTH1000, a large cross-disciplinary first-year course, offered a distinctive opportunity to explore how students from multiple health fields encounter and interpret emerging technologies. Within this diverse cohort, learners ranged from recent school leavers to mature-age professionals, bringing wide variations in digital literacy and confidence, disciplinary language, and expectations of what academic writing looks like in health. Integrating GenAI into this space was both ambitious and necessary. It provided a rare opportunity to see how students from multiple health fields might engage with artificial intelligence as part of the professional toolkit they will soon rely on. The goal was to empower students to make purposeful, ethical, and informed use of these tools, understanding not only their technical functions but also their implications for professional judgement and scholarly practice.

Challenges in First-Year Healthcare Education

Cognitive readiness and overreliance

First-year students are still developing foundational knowledge and critical thinking skills. Overreliance on GenAI tools may hinder the development of cognitive and metacognitive abilities that are essential for clinical reasoning (Qian, 2025). Without appropriate scaffolding, students may struggle to distinguish between AI-generated content and their own academic contributions, which can undermine learning autonomy and originality (Peláez-Sánchez et al., 2024).

Ethical and professional integrity

Healthcare education requires a strong commitment to ethical standards and professional integrity. The use of GenAI raises concerns about algorithmic bias, data privacy, and the lack of transparency in AI decision-making processes (Bobula, 2024; Almarzouqi et al., 2024). These issues are especially critical when students engage with sensitive health data or simulate clinical scenarios. García-López and Trujillo-Liñán (2025) highlight risks such as cognitive autonomy loss and institutional misuse of student data, which could compromise ethical learning environments.

Instructional design and educator preparedness

Effective integration of GenAI tools depends on thoughtful instructional design and educator training. Vorobyeva et al. (2025) stress that technological readiness must be matched with educator preparedness and professional development. Without clear pedagogical frameworks, AI integration may result in inconsistent learning experiences and misalignment with curriculum goals.

Assessment integrity and transparency

The use of GenAI in academic tasks challenges traditional assessment models. Concerns about plagiarism, authorship, and transparency in AI-assisted writing require updated assessment policies and ethical guidelines (Castillo-Martínez et al., 2024; Cheng et al., 2025). Ensuring fair and transparent evaluation practices is essential to maintain academic standards and student trust.

Pedagogical Design Response

Having identified these opportunities and challenges, the HLTH1000 team sought to design a practical, scalable intervention that could introduce GenAI in a way that was both responsible and engaging. Several alternative and complementary strategies were considered prior to, during, and after the implementation of this AI literacy task:

Scaffolded AI Literacy Tasks

Introducing structured AI literacy tasks early in the curriculum helps students engage critically with GenAI tools while developing ethical awareness and academic integrity (Abuadas et al., 2025). Early, low-stakes activities were key to helping students move from curiosity to confidence.

Multimodal Learning Models

The use of Multimodal Large Language Models (MLLMs) supports interactive simulations that mirror real-world clinical practice. These models enhance technological fluency and clinical reasoning (Topol, 2019; Esteva et al., 2017; Johnson et al., 2016). However, with a diverse and very large first-year cohort such as HLTH1000, this level of simulation was not feasible at scale. Instead, the teaching team incorporated AI-inclusive and digitally supported tools into other learning activities and assessments, including:

RiPPLE: a UQ adaptive learning platform featuring an AI-powered chatbot that provides immediate feedback on student-generated content before it is shared for peer moderation.

Padlet: a collaborative platform used to foster discussion and collective reflection. Its flexible visual design supports multimodal engagement and inclusive participation across large cohorts, while its embedded AI features assist in designing and organising shared spaces, generating ideas, and maintaining a safe, moderated environment.

Educator-Led Training and Ethical Frameworks

Professional development for educators and the integration of ethical frameworks into teaching practice are essential to guide responsible AI use (Lazar et al., 2024; Castillo-Martínez et al., 2024). This step was crucial in building a shared language of trust and transparency across the teaching team.

Hybrid Learning Models

Blending GenAI tools with traditional teaching methods helps strike a balance between technological support and cultivating independent thinking and professional judgement. In this spirit, an AI literacy task was introduced into the curriculum to provide students with a structured, reflective, and academically grounded opportunity to explore GenAI tools in meaningful ways. The activity was designed to familiarise students with the functionality of AI tools and to deepen their understanding of the ethical, professional, and practical dimensions of their use. It invited students to think critically, pose questions, and consider the broader implications of technology in clinical and academic settings. By promoting informed decision-making, digital responsibility, and an appreciation of how AI can complement rather than replace human judgement, the task aimed to strengthen technical competence and professional integrity.

Ultimately, it served as a foundation for preparing future health professionals to navigate the complexities of an increasingly digital practice landscape.

This case study therefore set out to:

  • Build technological confidence by supporting students to develop the knowledge and skills required to use GenAI tools effectively in healthcare report writing, including understanding their benefits, recognising limitations, and reflecting on ethical implications.
  • Embed GenAI into authentic learning by implementing AI use within a first-year report writing assignment as part of a large, multidisciplinary course with approximately 1,500 students, creating a scalable, inclusive approach to digital literacy.
  • Integrate academic tools by introducing a suite of platforms that support writing and research. Students engaged with GenAI platforms alongside traditional word processors and citation managers, building practical digital scholarship skills.
  • Foster AI literacy and ethical awareness by guiding students to make informed choices about appropriate and transparent use of AI, developing foundational writing, critical thinking, and professional responsibility.
  • Evaluate impact and engagement by assessing how AI use influenced student performance, confidence and interaction with course materials to better understand its role in enhancing academic outcomes through active and reflective engagement.

By mirroring familiar academic structures, the task was designed to reduce cognitive load and foster a sense of authenticity and relevance. Students were invited to explore GenAI tools, particularly large language models, within the scope of their report writing at an introductory level. This allowed them to reflect on both the benefits and limitations of these technologies in a guided and supportive environment. Through structured activities and reflection, the task cultivated ethical awareness, digital responsibility, and critical thinking. It also provided a safe space for experimentation. Students were encouraged to engage with AI tools critically and transparently, considering their implications for academic integrity and professional standards. Ultimately, this approach aimed to build confidence in navigating AI-enhanced environments and to prepare students for the evolving realities of healthcare education and practice.

Implementation: From Design to Delivery

In HLTH1000, a large cross-disciplinary first-year course within the Faculty of Health and Behavioural Sciences (now the Faculty of Health, Medicine and Behavioural Sciences), the teaching team took a deliberate and research-informed approach to introducing GenAI. The implementation in 2024 was conceived as a curriculum experiment and a professional learning opportunity for students and staff. Guided by the principles of safe exploration, critical reflection, and transparent practice, the rollout modelled the kind of ethical, evidence-based engagement expected in future clinical contexts. GenAI tools such as Copilot, ChatGPT, and Claude were introduced to enhance learning while embedding foundational AI literacy. The focus was to develop technical proficiency while forming informed, reflective, and responsible users of emerging technologies.

During the semester, students were introduced to GenAI through scaffolded activities designed to encourage critical engagement and professional reflection. They were asked to:

  • Summarise complex healthcare concepts.
  • Generate and critique an AI-generated health report, assessing its use of evidence and applying their own academic rationale to refine it.
  • Explore the ethical implications of AI in healthcare, including issues of bias, data privacy and professional accountability.

The integration of GenAI into healthcare education for first-year students presents distinct pedagogical challenges. These include ensuring responsible use, maintaining academic integrity, and supporting students in developing the skills needed to evaluate and apply AI tools thoughtfully. Many students in HLTH1000 are still building foundational knowledge and clinical reasoning skills. Without careful scaffolding, early exposure to AI risks promoting overreliance, which may hinder the development of cognitive and metacognitive abilities (Qian, 2025). To address this, the teaching team implemented a structured AI literacy task that encouraged students to critically evaluate AI-generated content and distinguish it from their own academic contributions (Peláez-Sánchez et al., 2024). Ethical dimensions were explicitly woven into the reflection component, with students exploring issues such as hallucinations, algorithmic bias, data privacy, and transparency (Bobula, 2024; Almarzouqi et al., 2024). Each student critiqued AI-generated healthcare reports using tracked changes and annotated rationales grounded in scientific evidence, an exercise designed to reinforce ethical reasoning and professional judgement.

Effective integration of GenAI requires deliberate instructional design and educator readiness. To ensure alignment with curriculum goals, the HLTH1000 teaching team embedded GenAI tasks within existing assessment frameworks and professional contexts. While Multimodal Large Language Models (MLLMs) can simulate clinical practice through immersive environments, they are not feasible at scale for this course. Instead, a hybrid learning model was adopted, combining GenAI tools with traditional discussion-based teaching to ensure balanced skill development and to uphold academic integrity.

The introduction of GenAI also prompted a re-examination of traditional assessment models, particularly around authorship, originality, and plagiarism (Castillo-Martínez et al., 2024; Cheng et al., 2025). This invited students to consider the ethical and scholarly dimensions of AI use: how, when, and why to incorporate it responsibly. Clear, transparent marking criteria were also introduced to maintain academic standards and foster trust. Together, these measures positioned AI as a catalyst for critical thinking and academic growth.

The TPACK framework (Technological, Pedagogical and Content Knowledge) guided the design and implementation of this task, integrating three key dimensions:

  • Content Knowledge: understanding what AI is and how it applies to health and behavioural contexts.
  • Pedagogical Knowledge: designing accessible and engaging learning experiences for a first-year cohort with a high proportion of recent school leavers.
  • Technological Knowledge: meaningfully integrating AI and digital tools such as ChatGPT for critical thinking activities, Kahoot for in-class knowledge review, RiPPLE for adaptive peer learning with chatbot feedback, and Padlet for collaborative idea sharing.

This approach also supported culturally and linguistically diverse (CALD) students by offering multimodal learning opportunities and fostering inclusive, participatory learning around AI concepts.

Workflow

Tutorials became the engine of this experiment. Working in small groups of five to eight, within larger workshops of around 30 students assigned to each tutor, learners engaged in open discussions about the responsible use of AI: its promise, its pitfalls, and its place in future professional practice. These smaller tutorial settings provided a more personal environment for exploring ethical dilemmas, sharing examples, and reflecting on the importance of human oversight in healthcare contexts. The goal was to help students understand how AI functions and to build their confidence in using and critically evaluating it within authentic clinical and academic scenarios.

Assessment instructions and marking rubrics were distributed early in the semester (July 2024), with final submissions due in October. Throughout the semester, students were guided through each stage of the task both in class and via a group discussion board on the learning management system. This reinforced academic integrity, ethical reasoning, and digital fluency, while aligning with the broader learning outcomes of HLTH1000. As part of the AI literacy assessment, students were asked to generate a short paragraph using a GenAI tool such as ChatGPT, Copilot, or Claude, responding to prompts related to healthcare topics. They then iteratively refined this output by creating new prompts to adjust tone, clarity, and use of evidence, ensuring the text met word limits and included appropriate references. The process then required students to critique the AI-generated material for clarity, accuracy, bias, and ethical soundness, identifying any factual errors, weak arguments, or unprofessional language.
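To make this refinement loop concrete, the short Python sketch below illustrates the kind of parameter checks a student might apply to a pasted draft before deciding to re-prompt. It is our illustration only, not part of the assessment: the word limit, citation pattern, and function name are hypothetical, and students performed these checks manually within their chosen tool.

import re

WORD_LIMIT = 250  # hypothetical limit; the actual task specified its own

def check_draft(draft: str) -> list:
    """Return a list of issues to address via a follow-up prompt or manual edit."""
    issues = []
    if len(draft.split()) > WORD_LIMIT:
        issues.append(f"Over the {WORD_LIMIT}-word limit: re-prompt to condense.")
    # Very rough test for an APA-style in-text citation such as "(Smith, 2023)"
    if not re.search(r"\([A-Z][A-Za-z]+.*?\d{4}\)", draft):
        issues.append("No in-text citations detected: re-prompt for referenced claims.")
    return issues

# A stand-in for a paragraph pasted from ChatGPT, Copilot, or Claude
draft = "Telehealth can expand access to care in rural communities (Smith, 2023)."
issues = check_draft(draft)
print(issues or "Basic checks pass; accuracy, bias, and tone still need human review.")

In practice, each flagged issue became the basis of a new prompt (for example, asking the tool to condense the text or supply verifiable references), mirroring the iterative cycle described above.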

Using word processing software, students annotated their edited documents with tracked changes and provided concise rationales for each revision, grounded in evidence-based reasoning. In addition, they were encouraged to maintain a brief reflective log documenting when and how GenAI tools were used, what challenges were encountered, and what insights were gained. This scaffolded workflow encouraged students to engage critically, think ethically, and treat AI as a partner in inquiry rather than an answer engine.
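The sketch below suggests what a structured entry in such a reflective AI-use log might capture. HLTH1000 did not prescribe a format, so the field names here are our own illustrative suggestion; an equivalent free-text log would serve the same transparency purpose.

from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseLogEntry:
    date: str            # when the tool was used
    tool: str            # e.g. "ChatGPT", "Copilot", "Claude"
    purpose: str         # why it was used (drafting, summarising, critiquing)
    prompt_summary: str  # what was asked, in brief
    challenges: str      # problems encountered (e.g. unverifiable references)
    insight: str         # what was learned about the tool or the task

entry = AIUseLogEntry(
    date="2024-09-12",
    tool="Copilot",
    purpose="Generate a first draft of a health report paragraph",
    prompt_summary="Asked for a 250-word comparison of two healthcare systems",
    challenges="Two citations could not be verified and were removed",
    insight="Output read fluently, but every reference needed independent checking",
)
print(json.dumps(asdict(entry), indent=2))

Whatever the format, the log's value lies in making AI use visible and accountable, which is the behaviour the assessment sought to reward.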

Implementation Challenges

Rolling out a GenAI-integrated task in a course of this scale presented several practical and pedagogical challenges. Key issues included:

Digital literacy variance

Students entered the course with widely differing levels of comfort and experience with GenAI tools and word processing software. Some students required additional support with prompt engineering, critical evaluation, and the use of tracked changes. These challenges extended to tutors, several of whom were also adapting to new tools and needed guidance in troubleshooting and modelling effective AI use.

Tutor preparedness and workload

Tutors needed to quickly familiarise themselves with GenAI technologies and ethical frameworks, adding complexity to lesson planning and delivery. Consistent communication and shared resources became essential to maintain alignment.

Inconsistencies in AI-generated output

Variability in AI responses made it difficult for students to judge reliability and accuracy. This, however, became a valuable teaching moment, reinforcing the need for human oversight and critical review.

Academic integrity concerns

Students were sometimes uncertain about the boundaries of acceptable AI use. Clear guidance, exemplars and transparent marking criteria were essential in establishing trust and accountability.

Technical limitations

Issues such as platform access, formatting inconsistencies, and compatibility with institutional systems occasionally disrupted workflow and required contingency planning. Submitting assessments with tracked changes enabled through the learning management system presented further complications, with some students encountering formatting errors or uncertainty around submission requirements. These logistical hurdles highlighted the importance of clear instructions, early testing of systems, and accessible technical support when introducing innovative digital assessment at scale.

Student reflections

Student reflections on their experiences with GenAI provided rich insight into how first-year learners conceptualised AI’s role in healthcare education. Through an exploratory thematic analysis of a small, randomly selected sample (n = 28), five themes emerged, revealing how students made sense of efficiency, reliability, collaboration, ethics, and context when engaging with AI. These reflections illuminate students’ developing literacy and inform broader pedagogical questions about responsible innovation in curriculum design.

Theme 1: Perceived Utility and Strengths

Students widely acknowledged the efficiency and speed of AI tools, particularly their ability to organise large volumes of data and produce well-structured, grammatically sound outputs.

Many used AI for summarisation, synthesis, and as a starting point for writing tasks, valuing its clarity and accessibility. Overall, students reported that AI reduced cognitive load, improved the organisation of information, and provided a reliable foundation for academic writing. For educators, these findings highlight AI’s potential to support novice writers through structured modelling and feedback, particularly in large first-year cohorts transitioning to academic communication in professional disciplines.

Theme 2: Limitations and Reliability Issues

While students appreciated the practical benefits of GenAI, they voiced significant concerns about reliability and accuracy. Many noted an inconsistent ability to follow specific task parameters such as word count and referencing, along with a lack of credible sources and factual errors. Some described AI as being in an “infant” stage, lacking creativity and depth. These reflections reinforce the importance of embedding critical evaluation and verification skills into AI literacy tasks, positioning AI as a starting point for inquiry rather than a substitute for scholarly judgement.

Theme 3: Human-AI Collaboration and Boundaries

Students consistently recognised the need for human oversight and professional judgement when working with AI. They viewed ethical reasoning, contextual understanding and critical thinking as inherently human capabilities that AI cannot replicate. For many students, this recognition was shaped through direct interaction with AI-generated content where they observed fluency but also a lack of nuance and insight. Several reflections highlighted how effective use of AI depends not only on what is asked but how it is asked. Prompt design emerged as a form of dialogic practice, one that requires intention, curiosity, and critical awareness. This developing sense of prompt literacy revealed students beginning to see AI as a partner in inquiry and not a source of truth. It was recognised that outputs must be interpreted, verified, and contextualised by human judgment. This theme shows how early exposure to AI can cultivate meta-awareness of the human role in technology use. By positioning students as both users and evaluators, the activity prompted them to consider not only what AI can do, but what it should do, laying the groundwork for ethical, reflective, and profession-ready engagement with emerging technologies.

Theme 4: Ethical and Future Implications

The AI literacy task in HLTH1000 represented an innovative, practice-based exploration of how first-year students engage with emerging technologies in healthcare. It served as a teaching innovation and a research-informed intervention, generating valuable insights into student learning, ethics, and curriculum design. Many students questioned whether it was appropriate to rely on AI for academic tasks, especially assessments, where concerns around plagiarism, authorship and academic integrity were frequently raised. There was a shared view that overuse of AI could undermine critical thinking and reduce opportunities for genuine learning.

Students also reflected on the broader applications of AI and the potential consequences of its use. In general contexts, they saw value in AI’s ability to support efficiency and productivity, particularly for managing information and supporting routine administrative tasks. In the context of healthcare, however, student reflections expressed concern about the risks of using AI-generated content without proper human oversight. They emphasised the continued importance of ethical decision-making and human judgement, particularly when decisions affect patient care or professional integrity. Bias in AI-generated reporting was another key concern. Students recognised that AI outputs can reflect skewed or incomplete perspectives, depending on the data used to train the system. This prompted deeper questioning about fairness, accuracy, and the potential for harm if AI is used uncritically.

Students also looked ahead, reflecting on the future of AI with curiosity and caution. Some expressed optimism about its capacity to transform education and professional practice, while others warned of possible misuse and unintended consequences if ethical safeguards are neglected. Many reflected on the role of prompts and how they influence AI outputs. In doing so, they began to see AI as a collaborator rather than an authoritative source.

Theme 5: Context, Caution and Conditions for Meaningful Use

Student reflections strongly emphasised that the effectiveness of GenAI depends heavily on the context in which it is used. While many recognised clear benefits in tasks such as summarising information, drafting reports, and supporting learning, others stressed that meaningful use requires more than access to the technology. They called for clear guidelines, structured training, and strong ethical frameworks to ensure AI is used responsibly, effectively, and with purpose. In healthcare settings, students were especially cautious. They acknowledged the potential of AI to assist with managing data and streamlining documentation but voiced concerns about the risks when technological errors have real-world impacts on patient care. Accuracy, ethical reasoning, and human judgement were seen as non-negotiable, and students argued that AI must be carefully integrated to support professional expertise rather than replace it.

Across all contexts, participants agreed that the value of AI is maximised when it is applied thoughtfully and with a clear awareness of its limitations. They recognised that effective integration depends as much on the culture of use as on the technology itself. Student reflections point to a future where the success of AI in healthcare education will depend on how well it aligns with human values and the ethical commitment to do no harm.

Reflection on the AI-Inclusive Task in HLTH1000 (2024)

The integration of the AI task into HLTH1000 was a bold and timely pedagogical move, designed to immerse first-year health students in the realities of emerging technologies in professional practice. By embedding GenAI directly into the curriculum, the task gave students a hands-on opportunity to explore how these tools are reshaping the health professions. The purpose was to introduce AI tools and encourage students to think critically about responsible and ethical use. This innovation took place at a moment of uncertainty in higher education, when the use of AI was being cautiously considered and when Turnitin still attached an “AI-use warning” label; indeed, a large share of the cohort’s submissions was flagged as “check AI use”. Through this activity, students were challenged to engage with AI-generated content, evaluate its accuracy, and reflect on its ethical limitations and professional implications. The activity aimed to build essential skills in digital literacy, ethical awareness, and reflective thinking, capabilities increasingly vital in healthcare contexts. It also encouraged students to consider how AI might influence clinical decision-making, patient communication, and professional accountability, particularly as report topics often required comparing healthcare systems across countries.

Formal student feedback collected through course evaluations revealed a mix of enthusiasm and challenge. Many students described the task as “fun,” “eye opening,” and “innovative,” appreciating the relevance of working with technology that was transforming their field. One participant praised the “very creative” design of the AI tasks, while another reflected on how the task helped them understand both the “liability” and “unreliability” of AI in professional settings, an insight that aligns with the task’s intended learning outcomes. These comments also revealed a growing awareness of the need for critical evaluation, especially in contexts where accuracy and accountability matter. For many, the task became more than an introduction to AI; it was a formative experience that deepened their understanding of ethics, reliability, and human oversight in technology use.

However, the rollout also surfaced challenges that shaped student experiences. A recurring theme in feedback was confusion around task instructions. Students were unsure about formatting requirements, whether to include screenshots or AI transcripts, how to manage word counts, and how to annotate revisions effectively. This confusion was compounded by inconsistent advice from tutors and the course coordinator, leading students to describe the task as “tedious”. Others felt it was time-consuming and overly technical, detracting from core disciplinary learning. Some students questioned the practical relevance of editing AI-generated reports. They suggested that a focus on critique rather than revision might be better aligned with authentic professional tasks, allowing them to apply ethical reasoning without getting lost in technical formatting requirements. These reflections highlight a key lesson in curriculum innovation: novel tasks require clear communication, consistent guidance, and strong pedagogical alignment to maximise learning impact.

Students offered thoughtful suggestions for future refinement, many of which have been incorporated into planning for future teaching practice.

Key recommendations

  • Providing step-by-step exemplars and video walkthroughs to clarify expectations (students valued a video made by the teaching team in response to feedback during the semester).
  • Offering alternative formats for students to engage with AI (e.g., critique vs. edit).
  • Strengthening alignment and communication between tutors to ensure consistent messaging.
  • Embedding the AI activity more explicitly within course content to enhance perceived relevance and authenticity.

Overall, the AI task in HLTH1000 represented a valuable experiment in integrating emerging technologies into healthcare education. It successfully introduced students to both the promise and pitfalls of GenAI, encouraging them to reflect on its ethical and professional implications. While issues of clarity and consistency limited its impact for some, the experience offered rich insights into how innovation unfolds in practice and how thoughtful design, communication, and reflection can turn experimental tasks into transformative learning experiences.

Future Directions

Reflecting on the implementation of the HLTH1000 AI task, the next phase focuses on refinement, relevance and co-creation. While the initial design successfully encouraged curiosity and ethical reflection, it also revealed the need for clearer guidance, stronger tutor alignment and closer links between task design and professional practice.

Planned improvements include providing detailed exemplars and clearer formatting instructions, offering students the choice to either edit or critique AI-generated content, and conducting tutor calibration sessions supported by shared marking guides and FAQs. Strengthening communication across the teaching team will help ensure consistent guidance and enhance student confidence. Future iterations should also embed discipline-specific examples to make AI use more authentic and meaningful, for instance by exploring AI in medication safety for Pharmacy, mental health triage for Psychology, or clinical documentation for Nursing and Dentistry. Embedding these contexts will help students connect ethical reflection with real-world professional practice. Ethical and critical AI literacy should be scaffolded throughout the semester, with discussions around bias, misinformation, and the boundaries of human–AI collaboration. This approach aims to cultivate both technical fluency and professional integrity.

Post-project, the teaching team is engaging in a student–staff partnership to analyse outcomes, share findings at teaching and learning forums, and develop a case study for dissemination. Plans include piloting a revised version of the AI task in a smaller, discipline-specific cohort within Dentistry and establishing a student–staff advisory group to guide ongoing AI integration across the curriculum. Ultimately, the next stage builds on lessons learned to create a model for responsible, inclusive, and contextually relevant AI education, one that prepares students to think critically and act ethically in an increasingly digital healthcare landscape.

AI Use Declaration

This chapter was prepared with assistance from Copilot, which supported aspects of writing and editing, including clarity, structure, and citation formatting. All ideas, findings, and interpretations are the authors’ own, and the AI did not generate or analyse primary data or replace academic judgement.

References

Abuadas, M., Albikawi, Z., & Rayani, A. (2025). The impact of an AI-focused ethics education program on nursing students’ ethical awareness, moral sensitivity, attitudes, and generative AI adoption intention: a quasi-experimental study. BMC Nursing, 24(1), Article 720. https://doi.org/10.1186/s12912-025-03458-2

Almarzouqi, A., Aburayya, A., Alfaisal, R., Elbadawi, M. A., & Salloum, S. A. (2024). Ethical implications of using ChatGPT in educational environments: A comprehensive review. In B. Gupta, S. A. Salloum, M. Al-Saidat, A. Al-Marzouqi, & A. Aburayya (Eds.), Artificial intelligence in education: The power and dangers of ChatGPT in the classroom (Vol. 144, pp. 185–199). Springer. https://doi.org/10.1007/978-3-031-52280-2_13

Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), Article 310. https://doi.org/10.1186/s12911-020-01332-6

Bakthavatchaalam, V., & Sivasankar, K. (2024). AI in healthcare education: A systematic review of applications in teaching and learning. In W. Shafik, D. Crowther, R. Singh, & V. Kumar (Eds.), Transforming healthcare sector through artificial intelligence and environmental sustainability (pp. 253–274). Springer Nature Singapore. https://doi.org/10.1007/978-981-97-9555-0_13

Bobula, M. (2024). Generative artificial intelligence (AI) in higher education: A comprehensive review of challenges, opportunities, and implications. Journal of Learning Development in Higher Education, 30. https://doi.org/10.47408/jldhe.vi30.1137

Castillo-Martínez, I. M., Flores-Bueno, D., Gómez-Puente, S. M., & Vite-León, V. O. (2024). AI in higher education: A systematic literature review. Frontiers in Education (Lausanne), 9. https://doi.org/10.3389/feduc.2024.1391485

Cheng, A., Calhoun, A., & Reedy, G. (2025). Artificial intelligence-assisted academic writing: recommendations for ethical use. Advances in Simulation (London), 10(1), Article 22. https://doi.org/10.1186/s41077-025-00350-6

Dempere, J., Modugu, K., Hesham, A., & Ramasamy, L. K. (2023). The impact of ChatGPT on higher education. Frontiers in Education (Lausanne), 8. https://doi.org/10.3389/feduc.2023.1206936

Dos, I. (2025). A systematic review of research on ChatGPT in higher education. The European Educational Researcher, 8(2), 59–76. https://doi.org/10.31757/euer.824

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056

García-López, I. M., & Trujillo-Liñán, L. (2025). Ethical and regulatory challenges of Generative AI in education: A systematic review. Frontiers in Education (Lausanne), 10. https://doi.org/10.3389/feduc.2025.1565938

Johnson, A. E. W., Pollard, T. J., Shen, L., Lehman, L. H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Anthony Celi, L., & Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1), Article 160035. https://doi.org/10.1038/sdata.2016.35

Kim, J., Yu, S., Detrick, R., & Li, N. (2025). Exploring students’ perspectives on Generative AI-assisted academic writing. Education and Information Technologies, 30(1), 1265–1300. https://doi.org/10.1007/s10639-024-12878-7

Krause, S., Dalvi, A., & Zaidi, S. K. (2025). Generative AI in education: Student skills and lecturer roles. https://doi.org/10.48550/arxiv.2504.19673

Lazăr, A. M., Repanovici, A., Popa, D., Ionas, D. G., & Dobrescu, A. I. (2024). Ethical principles in AI use for assessment: Exploring students’ perspectives on ethical principles in academic publishing. Education Sciences, 14(11), 1239. https://doi.org/10.3390/educsci14111239

Li, Y., & Li, J. (2024). Generative artificial intelligence in medical education: Way to solve the problems. Postgraduate Medical Journal, 100(1181), 203–204. https://doi.org/10.1093/postmj/qgad116

Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine (1982), 260, Article 113172. https://doi.org/10.1016/j.socscimed.2020.113172

Peláez-Sánchez, I. C., Velarde-Camaqui, D., & Glasserman-Morales, L. D. (2024). The impact of large language models on higher education: exploring the connection between AI and Education 4.0. Frontiers in Education (Lausanne), 9. https://doi.org/10.3389/feduc.2024.1392091

Qian, Y. (2025). Pedagogical applications of generative AI in higher education: A systematic review of the field. TechTrends. https://doi.org/10.1007/s11528-025-01100-1

Rajpurkar, P., Irvin, J., Ball, R. L., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C. P., Patel, B. N., Yeom, K. W., Shpanskaya, K., Blankenberg, F. G., Seekins, J., Amrhein, T. J., Mong, D. A., Halabi, S. S., Zucker, E. J., … Lungren, M. P. (2018). Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Medicine, 15(11), e1002686. https://doi.org/10.1371/journal.pmed.1002686

Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491–497. https://doi.org/10.1093/jamia/ocz192

Sankey, M. D., Huijser, H., & Fitzgerald, R. (2023). The Virtual University in Practice. In M. D. Sankey, H. Huijser, & R. Fitzgerald (Eds.), Technology-Enhanced Learning and the Virtual University (pp. 619–639). Springer Nature Singapore. https://doi.org/10.1007/978-981-99-4170-4_31

Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7

Vorobyeva, K. I., Belous, S., Savchenko, N. V., Smirnova, L. M., Nikitina, S. A., & Zhdanov, S. P. (2025). Personalized learning through AI: Pedagogical approaches and critical insights. Contemporary Educational Technology, 17(2), ep574. https://doi.org/10.30935/cedtech/16108


About the authors

Sowmya Shetty is a teaching-focussed academic at The University of Queensland. She is currently Senior Lecturer, Discipline Lead (Oral Biosciences), and Director for Teaching & Learning at the School of Dentistry. In 2024, Sowmya was seconded to the Faculty of Health and Behavioural Sciences (HaBS), where she coordinated a large first-year cross-faculty course, HLTH1000 (Professions, People and Healthcare), and led the HaBS faculty-based Interprofessional Education curriculum. Sowmya has a special interest in the development of clinical educator support, transition-to-university and peer-to-peer mentoring initiatives, enhancing the student experience, and work-integrated learning practices.

Divya Anantharaman is a Doctoral student at the at the School of Health and Rehabilitation Sciences, University of Queensland. Her work currently focuses on identifying key leverage points to drive systemic change and improve hearing and vision care outcomes in aged care settings, and using innovative technologies to enhance care for people living with dementia. She specialises in qualitative and implementation research, and co-teaches interprofessional collaborative practice education and research methods at the university.