"

5 Integrating Generative AI to Strengthen Counselling and Communication in Dietetic Education

Olivia Wright and Shreyasi Baruah

How to cite this chapter:

Wright, O. and Baruah, S. (2025). Integrating generative AI to strengthen counselling and communication in dietetic education. In R. Fitzgerald (Ed.), Inquiry in action: Using AI to reimagine learning and teaching. The University of Queensland. https://doi.org/10.14264/6dd4662

Abstract

This case study explores how generative artificial intelligence (GenAI) tools were integrated into DIET7103: Interviewing and Counselling for Dietetics Practice to enhance student learning, clinical preparedness, and professional confidence. Guided by the TPACK framework and action research principles, three tools were implemented to bridge theory and practice in communication and counselling: Julia (an AI voice-based client simulation), Notebook LM (an AI podcast creator), and the Interviewing and Counselling (Tic) Bot (a text-based counselling chatbot). Reflexive thematic analysis revealed that AI tools empowered learning through realism, efficiency, and accessibility, but also introduced cognitive overload, variable engagement outcomes, and equity challenges. Students emphasised the need for structured support and sequential tool introduction. Findings highlight that thoughtful AI integration can enhance learning effectiveness and professional readiness in health education, provided human-centred values and scaffolding are maintained.

Keywords

Generative Artificial Intelligence, Dietetic Education, Counselling Skills, Simulation-Based Learning, AI Literacy, Health Professional Education, TPACK Framework

Practitioner Notes

  1. Use AI-powered simulations to provide safe, repeatable opportunities for communication skill development.
  2. Introduce AI tools sequentially, allowing time for students to build competence and confidence before adding complexity.
  3. Embed AI literacy and ethics training early to support critical evaluation and professional accountability.
  4. Align AI activities with curriculum progression to prevent overload and stage mismatch.
  5. Pair AI integration with structured human support (tutorials, workshops, and reflective feedback) to balance technology with empathy.

Introduction

In DIET7103: Interviewing and Counselling for Dietetics Practice, generative artificial intelligence (GenAI) tools were introduced as a purposeful innovation designed to integrate technology meaningfully with pedagogy and content knowledge, with the aim of strengthening students’ confidence and competence in client-centred counselling. This innovation responded to student feedback highlighting ongoing challenges in translating theory into practice, particularly in developing confidence for real-world client consultations. DIET7103 provides a comprehensive foundation in the theoretical frameworks and practical skills necessary for effective client interaction in dietetics. The course emphasises the development of professional communication techniques and the application of client-centred principles to facilitate lasting behaviour change. The curriculum is structured around four main phases of the counselling process: Involving (Engaging), Exploration-Education (Focusing and Evoking), Resolving (Planning), and Closing.

The course encompasses several critical competency areas that underpin effective dietetic counselling practice:

Core Counselling and Communication: Students develop the fundamental components required for effective counselling relationships, recognising that the therapeutic alliance serves as the guiding force for change. Key areas include understanding the psychological drivers of eating behaviour, applying relationship skills such as rapport building, congruence, openness, and warmth, and mastering effective non-verbal communication techniques including posture, eye contact, and vocal tone.

Assessment and Planning: Students engage with structured nutrition counselling protocols, learning dietary interviewing methods such as the 24-hour recall and “typical day” strategies. Emphasis is placed on establishing measurable and achievable goals using the SMART framework (Specific, Measurable, Attainable, Relevant, and Time-based), fostering competence in both assessment and change planning.

Advanced Therapeutic Models: The course provides deep engagement with evidence-based approaches that strengthen students’ ability to support client behaviour change including:

Motivational Interviewing (MI): A collaborative conversation style that enhances client motivation and commitment to change, grounded in the “Spirit” of MI (Collaboration, Evocation, Compassion, and Acceptance) and operationalised through core skills known as OARS (Open-ended questions, Affirmations, Reflective Listening, and Summaries).

Cognitive Behavioural Therapy (CBT): Students examine how cognitions affect behaviours, exploring links between thoughts, feelings, and actions, identifying cognitive distortions, and practising cognitive restructuring techniques.

Transtheoretical Model (TTM): This model conceptualises behaviour change as a process moving through sequential, often non-linear, stages, supporting the tailoring of interventions to clients’ readiness to change.

The introduction of AI into this context was designed as scholarly enquiry grounded in the Scholarship of Teaching and Learning (SoTL). The design, implementation, and evaluation of each tool followed a cycle of evidence-informed experimentation, reflection, and improvement, aimed at generating insights that could be shared beyond the course context (Felten, 2013).

Theoretical Framework

The design and implementation of this innovation were guided by the Technological, Pedagogical and Content Knowledge (TPACK) framework (Koehler & Mishra, 2009), which emphasises the dynamic interaction between what educators teach (content), how they teach (pedagogy), and the tools that support learning (technology). Applying this framework ensured that each AI tool meaningfully aligned with both disciplinary competencies and pedagogical intent.

Content Knowledge (CK)

The content foundation of DIET7103 encompasses complex nutritional science, client-centred counselling theory and professional ethics. Students are required to integrate biomedical understanding of diet-disease relationships with the interpersonal skills necessary for effective behaviour change interventions.

Pedagogical Knowledge (PK)

Previous course iterations relied primarily on experiential workshops, case-based discussions, and peer role-plays to build counselling competence. These strategies successfully fostered foundational communication skills but provided limited opportunities for iterative practice in realistic scenarios. Whilst students valued role-playing exercises as preparation for client interactions, these simulations inevitably lacked the authenticity and unpredictability of real clinical encounters. Additionally, this approach demanded considerable time investment for both preparation and content revision, limiting opportunities for formative feedback.

Technological Knowledge (TK)

To address these pedagogical challenges, three GenAI tools were selected for strategic integration, each mapped to a distinctive learning objective:

  • Julia (ElevenLabs Conversational Agent): An AI voice-interactive client simulation embodying a busy mother managing iron deficiency. Julia allowed students to practise motivational interviewing, empathetic listening, and client-specific counselling in a low-risk environment (an illustrative persona sketch appears below).
  • Notebook LM (Google): An AI-driven podcast generator used to transform static lecture content into conversational audio. This supported flexible learning and embodied Universal Design for Learning (UDL) principles by addressing diverse learner engagement and accessibility needs.
  • The Interviewing and Counselling (Tic) Bot (Zapier AI Chatbot): A text-based practice environment that guided users through all phases of a nutrition consultation (engaging, exploring, planning, and closing) while offering feedback on questioning technique, rapport building, and reflective listening.

Together, these tools created a multi-modal, interactive learning ecosystem designed to bridge theory and practice, enhance accessibility and engagement and support student-driven skill development.
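To make the simulated-client design more concrete, the sketch below shows the kind of persona and behaviour prompt that could sit behind a voice-based client agent such as Julia. It is a minimal, hypothetical illustration, assuming a plain system-prompt style of configuration; the variable name, wording, and behaviour rules are assumptions for this example, not the actual ElevenLabs agent configuration used in DIET7103.

```python
# Hypothetical sketch only: an illustrative persona prompt for a voice-based
# simulated client such as Julia. The wording and behaviour rules below are
# assumptions for illustration, not the actual ElevenLabs agent configuration
# used in DIET7103.

JULIA_PERSONA_PROMPT = """
You are Julia, a busy mother managing a recent diagnosis of iron deficiency.
You are tired, short on time, and unsure how to change your eating habits.

Behaviour rules:
- Let the student open and lead the consultation; speak only when spoken to.
- Reveal your diet history, blood test results, and lifestyle details only
  when the student asks, and in one or two sentences at a time.
- Show realistic ambivalence about change; soften when the student uses
  empathy, open questions, and reflective listening.
- If the student says they are skipping a step (for example, "assume I have
  taken your diet history"), accept this and move on.
- At the end, if asked, give brief feedback on the student's use of OARS
  (open-ended questions, affirmations, reflections, summaries) and SMART
  goal setting.
"""
```

The behaviour rules are written to keep the onus on the student to elicit information, mirroring the course’s emphasis on client-centred, student-led consultation.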

Innovation Objectives

The AI integration aimed to achieve several key objectives:

  1. Enhance clinical preparedness through adaptive, realistic simulations.
  2. Improve learning efficiency while maintaining comprehension.
  3. Provide multiple means of engagement for diverse learner preferences.
  4. Increase accessibility and flexibility for time-constrained students.
  5. Build professional confidence through repeated low-risk practice.

Literature Review

The Imperative for AI Integration in Health Professional Education

The integration of artificial intelligence in health professional education represents a paradigm shift towards personalised, adaptive learning experiences that address longstanding challenges in traditional educational delivery. The landscape of health professional education is undergoing a significant transformation, driven by technological advances and the subsequent pedagogical frameworks that emphasise experiential learning, personalised instruction, and adaptive assessment methods (Hwang et al., 2020). Within dietetics education, the development of effective communication and counselling skills represents a fundamental competency that directly impacts patient outcomes and the quality of professional practice (Knight et al., 2024).

Effective communication is fundamental to nutrition and dietetics practice, although providing novice practitioners with a variety of experiences within training remains challenging, and is normally limited by professional practice placement availability (Knight et al., 2020). Simulation-based learning is recognised as an opportunity for health professional development, and technology-enhanced simulation for health professions education can significantly improve learning outcomes compared to traditional methods (Cant & Cooper, 2017; Cook et al., 2011). Traditionally, however, simulation methods often required significant resources, yet did not always provide the adaptive, personalised feedback essential for learning.

Evidence for AI-Enhanced Learning Outcomes

Emerging research provides compelling evidence for the effectiveness of AI integration in health professional education. Studies demonstrate that students using AI-powered learning tools show improved clinical reasoning skills, increased confidence in patient interactions, and enhanced knowledge retention compared to traditional learning methods (Holmes et al., 2019). Recent work by Kassem et al. (2025) examined AI’s role in nutrition, identifying applications in dietary assessment, personalised nutrition, disease management, and nutrition education. Their findings indicate that AI can help dietitians and nutrition professionals analyse large datasets and provide customised educational content, enhancing the efficiency of nutrition education and promoting healthier dietary habits.

Virtual Simulated Patients in Professional Education

Virtual simulated patients (VSPs) provide unlimited opportunities for practice through consistent, standardised scenarios that can be repeated and adapted according to educational objectives (Peddle et al., 2016). The evidence base supporting VSP effectiveness in communication skills training is growing, with studies showing that well-designed virtual patient activities enable students to build communication skills in safe and cost-effective learning environments (Cook et al., 2010). A systematic review of virtual patient simulators for medical communication training found that half of the eight comparative studies reported significant improvements in the VSP group (Lee et al., 2020). However, recent research by Fernández-Alcántara et al. (2025) notes that studies vary widely in their methods, technologies, and interaction styles, which makes it difficult to create consistent standards across the field.

AI-Powered Conversational Agents

Modern AI chat systems use large language models to hold natural, realistic conversations. These models, such as OpenAI’s GPT-5, Anthropic’s Claude, and Google’s Gemini, are trained on extensive language datasets and generate complex, human-level output (Kasneci et al., 2023). This allows the creation of realistic patient simulations that help students to apply theoretical knowledge in authentic communicative contexts. Research shows that AI tutors are improving, and students appreciate that they are always available, patient, and non-judgemental (Dai et al., 2023). These characteristics are particularly valuable in dietetics training, where students must develop sensitivity and empathy whilst addressing potentially sensitive topics such as eating behaviours, body image, and lifestyle modifications (Cant & Aroni, 2008).

Content Accessibility and Flexible Learning

The challenge of content accessibility in health professional education has been exacerbated by increasing curriculum demands and student time constraints. AI-powered content summarisation tools help address these pressures by transforming traditional educational materials into more accessible formats. Converting dense textual content into conversational, podcast-style presentations can enhance comprehension and retention, particularly for students who face barriers with text-based materials, including those with neurodiversity, English as a second language, dyslexia, or visual impairments (Buchem & Hamelmann, 2010). Audio formats also enable flexible learning during activities incompatible with reading, such as commuting or exercising, alleviating time constraints for working students. The conversational style may help reduce cognitive load for learners managing complex professional curricula. In this way, AI tools can advance educational equity, an increasingly central focus in higher education policy discussions.

Supporting Personalised Learning

The literature consistently emphasises the importance of personalised learning in health professional education. AI tools excel in providing individualised feedback and adapting to specific student needs. Research demonstrates that AI-powered educational tools can identify individual learning patterns and adjust content presentation accordingly, leading to improved learning outcomes (Chen et al., 2020). These affordances align closely with the principles of Universal Design for Learning (UDL), which advocate multiple means of engagement, representation, and expression to support diverse learners (Meyer et al., 2014). By enabling flexible pathways and responsive feedback, AI can operationalise UDL in practice, ensuring all students are equally supported. This personalisation capability is particularly valuable in dietetic education, where students must develop competence across multiple areas including biochemistry, psychology, communication skills, and cultural humility.

Considerations and Potential Challenges

Discipline-Specific Challenges in Dietetic Education

Implementing AI tools in dietetic education presents unique challenges that require careful consideration. Unlike some academic disciplines, dietetics requires the integration of complex scientific knowledge with nuanced interpersonal skills, cultural sensitivity, and ethical reasoning.

The Complexity of Human-Centred Care

Dietetic practice is fundamentally human-centred, requiring practitioners to navigate psychosocial factors alongside nutritional science. The key concern with AI integration is the potential oversimplification of these human complexities. While AI tools can simulate client interactions, they cannot fully capture the emotional nuance, cultural subtleties, and ethical dilemmas that characterise real-world dietetic practice. Research raises important questions about how effectively AI simulation prepares students for the unpredictability of human behaviour and the ethical complexities of professional practice (Topol, 2019). There is a risk that students may become over-reliant on predictable AI responses, limiting their adaptability when facing genuine human interactions (Topol, 2019).

Information Accuracy and Currency

Dietetic practice depends on up-to-date, evidence-based information. AI tools, particularly those trained on static or incomplete datasets, may provide outdated or inaccurate nutritional information. This concern is amplified by the rapidly evolving nature of nutritional science. For instance, Naja et al. (2024) evaluated ChatGPT’s accuracy in its suggestions for managing Type 2 diabetes and metabolic syndrome and identified errors in weight loss recommendations, energy deficit calculations, and dietary interventions. If students rely on AI-generated information that is incorrect, they may inadvertently provide misleading advice to clients, undermining patient safety and professional credibility. This issue is particularly critical in DIET7103, where students learn to integrate nutritional science with counselling skills. Inaccurate information could compromise the technical accuracy of dietary advice and the trust essential to effective therapeutic relationships.

To mitigate these risks, a multi-layered approach is required, combining pedagogical safeguards, quality assurance, and professional development.

Pedagogical safeguards: Frame AI tools as a communication practice environment, not as an authoritative source of nutritional advice. Require students to cross-reference and critique AI outputs with peer-reviewed sources and practice critical evaluation skills (Chen et al., 2020).

Quality assurance: Implement staff oversight of AI tool outputs, develop approved scenario banks for simulated consultation, and establish institutional AI ethics guidelines tailored for health professional education contexts (Morley et al., 2020).

Professional Development: Prepare students for the responsible use of AI in clinical settings, through training in accountability, ethical practice and appropriate AI use protocols (Wartman & Combs, 2018).

In addition, continuous feedback loops, in which students report questionable AI responses and instructors regularly review tool performance, help ensure that pedagogical approaches evolve alongside the technology (Felten, 2013). Together, these measures ensure AI tools strengthen, rather than compromise, the development of evidence-based practice competencies essential for safe and effective dietetic counselling.

Cognitive Load and Technology Integration

The simultaneous introduction of multiple AI tools can create significant cognitive load, particularly for students already managing complex curriculum demands. Cognitive load theory suggests that learners have limited capacity for processing new information (Sweller, Ayres, & Kalyuga, 2011). When students must simultaneously learn course content and navigate multiple new technological tools, their cognitive resources may become overtaxed, potentially hindering rather than enhancing learning. Sequential introduction (one tool at a time) may therefore be more effective than simultaneous deployment, allowing students to achieve competency with each system before adding complexity (Paas, Renkl, & Sweller, 2003). This also underscores the importance of intentional learning design and the need for extensive orientation and support systems, rather than an assumption that students will naturally adapt to new technologies (Koehler & Mishra, 2009; Paas, Renkl, & Sweller, 2003).

Equity and Access Considerations

Building on these cognitive load considerations, equity and access must also be addressed. While AI tools offer potential for increased accessibility, they may also create new barriers and equity issues. Students with limited technology access, digital literacy challenges, or learning differences may face disadvantages when AI tools become integral to course success. The assumption that all students will engage equally with technology-mediated learning may inadvertently create inequities. Embedding Universal Design for Learning (UDL) principles can mitigate these risks by offering multiple means of engagement (Meyer et al., 2014). Intentional learning design provides multiple pathways to achieve learning outcomes rather than relying on one-size-fits-all technological solutions.

Implementation and Methodology

While a sequential introduction of tools would have best aligned with cognitive load and equity principles, a simultaneous implementation was intentionally chosen as part of the research design to evaluate comparative student engagement and learning outcomes within the intensive six-week course timeframe. This deliberate decision to introduce three AI tools across different modalities aimed to maximise learning opportunities and identify which tools most effectively supported student engagement, learning outcomes and professional preparation.

Additional technological options were considered during the planning phase, including video-enhanced chatbots to increase authenticity of client interactions. However, suitable platforms that combined conversational AI with realistic video personas were not readily available within the project’s technical and budgetary constraints, leading to the selection of voice-based and text-based tools instead. To address potential digital divide issues, several proactive measures were implemented. Additional ElevenLabs subscription credits were purchased to ensure all students could access Julia without usage limitations that might create inequitable learning experiences. Tic Bot was fully supported through institutional funding, whilst Notebook LM remained freely accessible to all users. These decisions prioritised equitable access over cost considerations, recognising that financial barriers could compound existing educational disadvantages.

However, the assumption that technological access equals technological equity proved problematic. Students with varying digital literacy levels, learning differences, and technological comfort zones experienced differential benefits from the tools, highlighting the need for more comprehensive support systems beyond mere access provision. Recognising the risks associated with AI-generated nutritional information, extensive staff oversight mechanisms were implemented throughout the course. Regular monitoring of student interactions with AI tools occurred both during class sessions and through informal check-ins about out-of-class usage. This oversight aimed to identify instances of inaccurate information provision and address misconceptions before they became entrenched in student learning.

Traditional pedagogical methods were maintained alongside AI integration, with in-class simulations and role-playing exercises continuing to provide opportunities for group critique and discussion. In some instances, AI-generated scenarios or information were deliberately used as starting points for critical analysis, enabling students to develop skills in evaluating AI outputs whilst benefiting from peer and instructor feedback.

Research Design and Theoretical Framework

This study employed an action research approach grounded in the Scholarship of Teaching and Learning (SoTL) principles outlined by Felten (2013). The research was designed as an inquiry-focused investigation into student learning experiences: grounded in disciplinary practice, methodologically sound, conducted in partnership with students, and intended for broader community dissemination. The Technology, Pedagogy and Content Knowledge (TPACK) framework provided the theoretical foundation for implementation, ensuring systematic integration of technological, pedagogical, and content knowledge components. This framework guided tool selection, implementation strategies, and evaluation criteria throughout the project.

Implementation Framework

The implementation of AI tools in DIET7103 followed a structured approach based on action research principles. The methodology was designed to ensure systematic integration whilst allowing for iterative refinement based on student feedback and observed outcomes.

Pre-Implementation Planning

Prior to semester commencement, comprehensive planning sessions were conducted to align AI tool capabilities with specific learning objectives within the dietetics curriculum. Each AI tool was mapped to competencies:

  • Julia was aligned with consultation skills development, clinical reasoning preparation, and OSCE readiness
  • Notebook LM was integrated with content review processes, workshop preparation activities, and examination study strategies
  • Tic Bot was designated for supplementary information access, concept clarification, and AI tutoring support

Tool selection criteria included educational alignment, user accessibility, technical reliability, and integration potential with existing course structures.

Staged Implementation Process

The implementation followed a structured six-week timeline designed to gradually introduce students to AI tool capabilities whilst managing cognitive load:

Weeks 1-2, Introduction and Orientation: Students were introduced to the AI innovation project through dedicated orientation sessions. The rationale for AI integration was explained, emphasising how these tools would enhance rather than replace traditional learning methods. Students were provided with access credentials and basic navigation instructions for each tool.

Weeks 3-4, Guided Exploration: Structured activities were designed to familiarise students with each AI tool’s capabilities. Students were assigned specific tasks requiring engagement with different tools, allowing them to experience the unique benefits of each platform. Technical challenges emerged during this phase, including occasional platform connectivity issues and variations in user interface familiarity.

Weeks 5-6, Independent Application and Advanced Integration: Students were encouraged to integrate AI tools into their independent study routines and explore advanced features. Usage patterns were monitored through platform analytics, revealing significant individual variations in tool engagement and preferences.

Support Infrastructure

A comprehensive support infrastructure was established to facilitate successful AI tool adoption:

  • Weekly office hours dedicated specifically to AI tool troubleshooting
  • Regular check-in discussions integrated into twice-weekly class sessions to monitor student progress
  • Peer mentoring opportunities through structured collaborative learning sessions

Data Collection Methods

Comprehensive data collection methods were employed to capture both quantitative usage patterns and qualitative user experiences:

Quantitative Measures

  • Platform engagement analytics tracking frequency, duration, and feature utilisation
  • Academic performance comparisons with previous cohorts at equivalent assessment points
  • Time-on-task measurements for specific learning activities
  • Student Evaluation of Courses and Teachers (SECAT) analysis

Qualitative Measures

  • Focus group discussions facilitated by independent student partners

Data Analysis Framework

The study employed Braun and Clarke’s (2006, 2019) six-phase reflexive thematic analysis approach, providing a systematic yet flexible framework for analysing qualitative data. This methodology was selected for its emphasis on researcher reflexivity and suitability for exploring student experiences with novel educational technologies. The analysis employed an inductive approach, meaning themes emerged organically from student experiences rather than being predetermined by existing theoretical categories. A semantic focus was maintained throughout the analysis, concentrating on explicit participant meanings rather than engaging in extensive interpretive speculation. Student partnership was integrated throughout the analytical process, extending beyond simple member-checking to include students as co-researchers in theme development and validation.

Participant Demographics

  • Total participants: Five students from the 2025 DIET7103 cohort
  • Population: All participants were enrolled in the Master of Dietetics Studies program
  • Participation: Voluntary with informed consent obtained
  • Demographics: Mix of domestic and international students with varied technological backgrounds

Ethical Considerations

  • Ethics approval obtained through university Human Research Ethics Committee (2024/HE001346).
  • Voluntary participation with right to withdraw without penalty
  • Data anonymisation protocols implemented
  • Independent facilitation to minimise power dynamics
  • Clear separation between research participation and academic assessment

Reflection: Successes and Challenges

Major Findings from Thematic Analysis

The reflexive thematic analysis of focus group data identified five major themes characterising student experiences with AI-enhanced learning in dietetic education. “Empowering Learning Through AI” was the most frequently mentioned theme, followed closely by “Call for Structured Support,” suggesting that while students saw significant value in AI tools, they also felt underprepared to use them optimally.

Theme 1: Empowering Learning Through AI

Description: This theme highlights the positive impact AI tools had on students’ learning, representing the most frequently mentioned theme in the analysis. Tools like Julia and Notebook LM were praised for their ability to simulate realistic client interactions and provide accessible summaries of course content, with students feeling more confident and prepared for clinical practice and assessments.

Enhanced Clinical Simulation Capabilities: Students appreciated Julia’s adaptive consultation simulation:

“I genuinely enjoyed Julia. I thought it was good. She was adaptive… I could turn around and say, ‘Julia, I’m just assuming that I’ve taken your diet history, we’re moving on now,’ and she’d be like, ‘Okay’”

(Focus Group Participant)

Dramatic Efficiency Improvements: Notebook LM provided substantial time savings in content preparation:

“Notebook LM helped me cut down three hours of prep to just 15 minutes” 

(Focus Group Participant)

“Rather than sitting on my desk for three hours after my typical day, it only took me like, less than 15 or 10 minutes to upload everything and listen to it” 

(Focus Group Participant)

Professional Confidence Building: Students utilised AI tools strategically for assessment and clinical preparation:

“I did end up using Julia to practise for clinic, because I had a case of a woman who had an iron deficiency… So I wanted to practise that before my first day of clinic” 

(Focus Group Participant)

Students valued how Julia supported learning, particularly the development of consultation and counselling skills:

“I really liked the opportunity to practice that one-on-one consultation… I felt that that side of my practice was inhibited or delayed because of our lack of time in workshops… I definitely found Julia supportive. I would have liked more Julias with more diverse problems.”

(Focus Group participant)

“I find it really helpful… helped me to practice my time management.”

(Focus Group participant)

Students also reflected on the feedback provided by Julia at the end of their consultations:

“I did start asking her for feedback, which was really good… My feedback was that I should have provided more summaries throughout the consultation.”

(Focus Group participant)

Julia’s feedback was seen as aligned with course expectations, but was not always contextually relevant:

“My feedback was that I should have provided more summaries throughout the consultation, but… it wasn’t relevant feedback in that situation.”

(Focus Group participant)

Tic Bot also supported active recall and metacognition:

“I’m really big on learning through teach-back… I would say to Tic Bot, ‘I want to explain this topic. Can you check how well I explained it?’”

(Focus Group participant)

Theme 2: Navigating Overwhelm

Students consistently reported feeling overwhelmed when multiple AI tools were introduced simultaneously, highlighting the need for more staged, sequential implementation approaches.

Initial Technology Shock

Students experienced significant cognitive load when confronted with multiple unfamiliar AI tools at course commencement:

“I tried to access Julia early, but was too overwhelmed, because I was like, I’m not confident in using her right now… early on in the semester” 

(Focus Group Participant)

Preference for Graduated Introduction: Participants articulated clear preferences for sequential tool introduction:

“It would be good to maybe have a simpler one… maybe just one at a time, not all of them at once… maybe… just like the MCQ one, and that was it, and then slowly, gradually increase… more complex AI tools as the semester goes on”

(Focus Group Participant)

Despite this, the design of Julia helped to alleviate some of the potential overwhelm:

“It was also interesting how it’s not too overwhelming at the end, because Julia lets us talk first before Julia starts talking about her problems.”

(Focus Group Participant)

Students mentioned that, for certain types of questions, Tic Bot was a good alternative to a discussion board:

“Tic Bot for me was just a really good way to clarify information… if I had nuanced or silly questions that I didn’t want to ask on Discussion.”

(Focus Group Participant)

Knowledge-Stage Misalignment: Students reported anxiety when AI tools provided information beyond their current learning stage:

“Tic Bot gave me information we hadn’t covered yet, and it made me panic… it wasn’t something that I’ve known about, because it wasn’t in class” 

(Focus Group Participant)

However, some students developed prompting strategies to mitigate this issue:

“Once I started to learn the system… I told Tic Bot, ‘Please stay within the scope.’”

(Focus Group Participant)

Theme 3: Tool-Specific Learning Affordances and Limitations

Julia – Realistic Practice with Interaction Challenges

Students valued Julia’s consultation simulation capabilities but identified significant interaction limitations:

“Sometimes when Julia starts talking, it’s just like, I have this, I have this and this… and it gets a little overwhelming.”

(Focus Group Participant)

Further similar interaction challenges included:

“She would give so much information when I was like, not even there yet… she’ll just be already talking about, do you need my blood test level? Blood test results?” 

“She was a bit impatient… you’d take a moment to think, and she’d go, ‘You there? You there?’”

(Focus Group Participants)

Students also felt that Julia’s behaviour sometimes lacked realism:

“I wanted her to make me work harder for the information… we’re gonna get clients that are just like, ‘Yeah, I’m Sam,’ full stop.”

(Focus Group Participant)

Some students took further control of their learning experience by building their own chatbots, or ElevenLabs agents, and training them to suit their needs, which helped to resolve some of the issues above. Working with their own agents gave these students greater agency and allowed them to personalise their learning:

“I used personality prompts to not speak over me or to provide any information until I ask – that was the game changer.”

(Focus Group Participant)

Students rated Julia’s clinical preparation value between 4-7 out of 10, with higher ratings from more advanced users who learnt to customise interactions.

Notebook LM – Efficiency Gains with Engagement Variability

Notebook LM demonstrated the most dramatic efficiency improvements but with highly variable learning outcomes based on usage approach:

“Rather than sitting on my desk for three hours after my typical day, it only took me like, less than 15 or 10 minutes to upload everything and listen to it” 

(Focus Group Participant)

It also helped students to prepare for their interactive workshops:

“I use it pretty much every single day… it helps me to interact in class, rather than figuring out what the workshop is about.”

(Focus Group Participant)

However, students mentioned they were sometimes distracted by the podcast itself:

“Sometimes the man would say half a sentence, and then the lady would say the other half… I was listening to the change of the voice rather than the actual content.”

(Focus Group Participant)

Students also noticed discrepancies between what was covered in the podcast and what was covered in the workshop:

“After the workshop… I was like, wait a second, this wasn’t covered in the summary in the Notebook LM podcast.”

(Focus Group Participant)

Active engagement (note-taking whilst listening) produced learning ratings of 8/10, whilst passive engagement (multitasking) resulted in ratings of 2-4/10:

“Using it at the gym was not successful… I would zone out and I’d be like, ‘Oh, I think that was really important’”

(Focus Group Participant)

Tic Bot – Information Depth vs. Curriculum Alignment

Tic Bot provided comprehensive information but created curriculum alignment challenges. Students developed prompting strategies to manage information appropriateness:

“What I told Tic Bot was like, ‘Please stay within the scope’… only explain this concept to me, and don’t explain other concepts outside what is not irrelevant” 

(Focus Group Participant)
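The scope-limiting strategy students described can be expressed as a small, reusable prompt wrapper. The sketch below is a hypothetical illustration: the prefix wording and the idea of prepending it to every question are assumptions, not the actual Tic Bot (Zapier) configuration or the students’ exact prompts.

```python
# Hypothetical sketch only: a prompt wrapper of the kind students described
# using to keep a course chatbot within the current curriculum stage. The
# wording is an assumption, not the actual Tic Bot configuration.

SCOPE_PREFIX = (
    "Please stay within the scope of an introductory dietetic counselling "
    "course. Only explain the concept I ask about, at the level covered so "
    "far (engaging, exploring, planning, closing, OARS, SMART goals), and do "
    "not introduce concepts we have not yet covered."
)


def scoped_question(question: str) -> str:
    """Prepend the scope constraint to a student's question before it is sent."""
    return f"{SCOPE_PREFIX}\n\nQuestion: {question}"


print(scoped_question("Can you check how well I explained reflective listening?"))
```

Wrapping each question in this way keeps the constraint consistent across a study session, rather than relying on the student to restate it every time.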

Theme 4: Diverse Engagement and Personalised Learning

Description: Students reported that individual differences in how they engaged with AI tools influenced learning effectiveness. This theme underscores the importance of providing multiple modes of engagement (audio, text, interactive) to accommodate diverse learning needs.

Auditory Engagement Benefits

Students who perceived themselves as auditory learners reported significant benefits from Notebook LM’s podcast format:

“I’m a very visual and auditory learner… I love podcasts, and I listen to podcasts on Spotify all the time. So this was not like, it didn’t feel like extra work… it just felt like, you know, I’m just giving myself information” 

(Focus Group Participant)

“I liked the conversation-style podcast; it felt less like a lecture” 

(Focus Group Participant)

Kinaesthetic Learning Challenges

Students with active engagement preferences found limited value in primarily digital, passive engagement tools:

“I’m a tactile learner—I need to do things to remember” 

(Focus Group Participant)

“I would say I’m more tactile learner… I need to actually do things to kind of be able to remember and learn… I’d use it whilst I was driving, and it was great to be able to listen. I was like, Oh, this is awesome. But then I don’t know, I’d stop and I’d be like, ‘Did I take anything in?’”

(Focus Group Participant)

Adaptive Learning Strategies

More successful students developed personalised approaches to maximise AI tool benefits:

“I actually went on to find the actual website on how Julia was made. And then I actually practised with using my own information and fed case studies to this bot… I had multiple separate bots who had different case studies” 

(Focus Group Participant)

Theme 5: Call for Structured Support

Description: Many students expressed a need for clearer guidance on how to use AI tools effectively, representing the second most frequently mentioned theme in the analysis. A lack of awareness about tool capabilities led to missed learning opportunities, with students requesting dedicated workshops, cheat sheets, and ethical guidelines.

Missed Learning Opportunities: Students recognised they had not fully utilised available AI capabilities:

“I missed out on so many learning opportunities from AI this year”

 (Focus Group Participant)

Need for Comprehensive Training: Students specifically requested structured support systems:

“A tutorial on how to use AI properly would have saved me hours”

 (Focus Group Participant)

Students expressed desires for dedicated workshops on AI tool navigation and optimisation:

“A workshop or something to really see the full potential of these tools… phrases to get around certain things.”

(Focus Group Participant)

Requests for ethical use sessions were also expressed:

“It would also be nice… to do a session on how to use AI properly… not overuse it or breach academic conduct.”

(Focus Group Participant)

Further ideas from students for improving their understanding of and interactions with AI tools included the development of written guides and cheat sheets for quick reference:

“My biggest feedback would be… having like a cheat sheet or something to say, look, you can ask for feedback from Julia, or you can use Notebook LM. I didn’t realise you could do all that stuff… kicking myself, because I would have loved that, but I didn’t really know about it.”

(Focus Group Participant)

Peer sharing sessions to learn from successful usage strategies were also suggested:

“I just wish that an hour of the workshop had been dedicated to the use of these AI content, or even one workshop… go away and make a bot, and then everyone shares their bot, and they share what’s good about it, what’s bad about it.”

(Focus Group Participant)

Recognition of Professional Development Value: Despite initial challenges, students recognised the professional development value of learning to work effectively with AI tools, acknowledging that AI integration in healthcare is inevitable and appreciating the opportunity to develop these competencies within their educational programme.

Tool-Specific Detailed Analysis

Table 1 presents a comprehensive synthesis of the AI tools implementation findings, integrating tool-specific outcomes with cross-cutting implementation challenges. The table is organised to facilitate comparison across the three AI tools (Julia, Notebook LM, and Tic Bot), presenting student ratings alongside the key successes and challenges encountered with each platform. This structure reveals the critical learning factors that distinguished effective from ineffective tool usage across all three technologies.

Table 1 AI Tools Implementation in Health Professional Education: Comprehensive Analysis

Julia (Clinical Simulation)

Student rating: 4–7 / 10 (higher with customisation)

Key successes:

  • Enhanced Professional Confidence: Enabled safe, repeatable practice of challenging conversations without peer judgment.
  • Authentic Role Preparation: Adaptive client simulations felt relevant to future professional roles.
  • Student Innovation: Advanced users explored and created customised case studies.

Primary challenges:

  • Usage Complexity: Required technical proficiency to achieve higher ratings.
  • Learning Curve: Benefits realised mainly by students who learned to customise interactions.

Critical learning factors: Technical proficiency and the ability to customise simulations. Rating variability reflected differences in engagement rather than tool limitations.

Notebook LM (Audio Learning)

Student rating: Active use 8/10; passive use 2–4/10

Key successes:

  • Time Efficiency: Reduced preparation time from three hours to approximately 15 minutes (a 92% reduction).
  • Sustainable Study Practices: Enabled a manageable workload within intensive program demands.

Primary challenges:

  • Engagement-Dependent Outcomes: Passive use (multitasking) yielded minimal benefits.
  • Strategy Required: Effective only when used purposefully and interactively.

Critical learning factors: Active engagement (e.g., note-taking while listening); passive consumption was ineffective without structured learning strategies.

Tic Bot (Information Assistant)

Student rating: Not specified

Key successes:

  • Strategic Thinking Development: Encouraged sophisticated prompting and metacognitive reflection.
  • Adaptive Learning: Supported independent management of information scope.

Primary challenges:

  • Information Management: Sometimes provided content beyond the user’s current knowledge level, creating anxiety.
  • Requires User Intervention: Needed explicit prompting (e.g., “Please stay within scope and only explain this concept”).

Critical learning factors: Sophisticated prompting skills and active regulation of information relevance.

The analysis demonstrates that whilst each tool presented unique characteristics and challenges, a consistent pattern emerged: success was contingent upon active, strategic engagement rather than passive consumption.

Table 2 identifies four implementation issues – cognitive overwhelm, curriculum misalignment, training gaps, and engagement variability. These challenges affected student experiences across all tools regardless of their specific functionalities. This highlights the importance of thoughtful implementation strategies that extend beyond individual tool selection to encompass broader pedagogical and support considerations.

Table 2 AI Tool Implementation Issues

Cognitive Overwhelm

Description: Simultaneous introduction of multiple AI tools early in the semester.

Impact: Created additional stress rather than learning support; particularly problematic while students were adjusting to intensive course content and clinical expectations.

Solution needs: Staggered implementation; reduced initial cognitive load.

Curriculum Misalignment

Description: Tools occasionally provided information beyond students’ current knowledge level.

Impact: Generated anxiety instead of support; students missed learning opportunities due to content mismatch.

Solution needs: Better alignment of AI capabilities with curriculum progression and developmental stages.

Training Gap

Description: Insufficient preparatory support and ongoing guidance.

Impact: Students felt they missed opportunities due to poor understanding of tool capabilities and optimal usage strategies.

Solution needs: Comprehensive initial training plus ongoing support systems.

Engagement Variability

Description: Highly variable outcomes based on individual usage approaches.

Impact: Some students achieved excellent results while others struggled significantly.

Solution needs: Explicit instruction on effective engagement strategies; modelling of best practices.

The synthesis of findings demonstrates that technical proficiency, customisation skills, and strategic prompting abilities were essential determinants of learning outcomes in this case study. This pattern suggests that the effectiveness of AI integration in health professional education depends not merely on tool capabilities, but critically on the alignment between tool functionality, curriculum design, learner preparation, and ongoing pedagogical support.

Implementation Insights

The analysis revealed that successful AI integration requires careful attention to cognitive load management, sequential introduction strategies, diverse engagement, and realistic expectations about clinical preparation outcomes. Students’ adaptive responses and innovative usage strategies suggest that enhanced training and support systems could significantly improve learning outcomes and tool adoption rates.

Future Directions

Immediate Next Steps

Based on the experiences and insights gained from this implementation, several immediate refinements will be implemented in future iterations:

Staged Implementation Approach

Future implementations will adopt a more gradual, scaffolded approach to AI tool introduction. Rather than presenting all tools simultaneously, students will be introduced to one tool at a time, allowing for competency development before adding complexity. The proposed sequence would introduce Notebook LM first for content accessibility, followed by Julia for consultation practice, and finally Tic Bot for advanced information access.

This format will be well supported because the structure of the dietetics program is changing, and all content will no longer need to be taught within six weeks. The transition from a compressed 6-week intensive to a standard 13-week semester directly addresses the key challenges identified in this study, particularly the ‘Navigating Overwhelm’ and ‘Call for Structured Support’ themes that emerged from student feedback. This extended timeframe will enable sequential tool introduction to manage cognitive load, dedicated orientation workshops, comprehensive user training sessions, and regular check-in periods to monitor student progress and address emerging challenges.

The structural change provides the temporal space necessary to implement the scaffolded approach students explicitly requested, while maintaining the innovative benefits of AI-enhanced learning. This evolution demonstrates how action research principles can inform immediate practical improvements, ensuring that technological innovation serves rather than overwhelms educational objectives. The lessons learned from this intensive implementation provide a foundation for more sustainable and equitable AI integration that prepares dietetic students for the AI-enhanced healthcare environments they will encounter in professional practice.

Enhanced Training and Support Systems: Comprehensive training programs will be developed, including:

  • Mandatory orientation workshops with hands-on practice sessions
  • Written guides and video tutorials for self-directed learning
  • Regular check-in sessions to monitor progress and address challenges
  • Peer mentoring programs pairing experienced users with newcomers

Improved Tool Customisation and Content Filtering: Working with AI tool developers, efforts will be made to enhance content filtering capabilities, particularly for Tic Bot. The goal is to ensure that information provided aligns with students’ current learning stage and curriculum progression.

Medium-Term Developments

Assessment Integration Refinement: Future implementations will explore more sophisticated integration of AI tools into assessment strategies, including:

  • Developing rubrics that assess students’ ability to effectively utilise AI tools for professional purposes
  • Creating authentic assessment tasks that require integration of AI-assisted preparation with human interaction skills
  • Establishing criteria for evaluating the quality of AI-mediated learning outcomes

Cross-Course Integration: Plans are underway to extend AI tool integration across multiple courses within the dietetic program. This coordinated approach would allow students to develop AI competencies progressively whilst avoiding redundancy and confusion.

Long-Term Vision

Program-Wide AI Integration Strategy

The goal is development of a comprehensive AI integration strategy across the entire dietetics program, including:

  • Standardised AI competency expectations for graduates
  • Coordinated tool selection and implementation across courses
  • Faculty development programs to ensure consistent, effective AI integration
  • Research initiatives to continuously evaluate and refine AI-enhanced educational approaches

Professional Practice Preparation: Future developments will focus on better preparing students for AI integration in their professional practice, including:

  • Training on ethical considerations in AI use within healthcare contexts
  • Development of critical evaluation skills for AI-generated recommendations
  • Understanding of AI limitations and appropriate use boundaries in dietetic practice

Research and Evaluation Continuation

Longitudinal Impact Studies

Longitudinal studies tracking graduates’ utilisation of AI tools in professional practice and their perceived preparedness for AI-integrated healthcare environments are planned.

Comparative Effectiveness Research

Collaboration with other institutions will enable comparative research examining different approaches to AI integration in health professional education, contributing to evidence-based best practices.

Student Partnership Expansion

The successful student partnership model used for focus group facilitation will be expanded to include students as co-researchers in ongoing evaluation and refinement efforts.

Contribution to Educational Scholarship

This implementation experience has generated valuable insights warranting broader dissemination within the health professional education community. Plans include:

  • Publication of findings in peer-reviewed education journals
  • Presentation at professional conferences focused on health education innovation
  • Development of best practice guidelines for AI integration in dietetic education
  • Collaboration with other institutions implementing similar innovations

Conclusion

The integration of AI tools in DIET7103 represents a significant step forward in health professional education innovation. Whilst challenges emerged, the overall experience demonstrated substantial potential for AI to enhance learning efficiency, build professional confidence, and accommodate diverse learning preferences. The key to successful implementation is in thoughtful integration that respects both technological capabilities and human learning needs. Future developments will focus on refining this balance, ensuring that AI tools enhance rather than replace the human elements that remain central to effective health professional education.

This experience has contributed valuable insights to the growing body of knowledge about AI integration in educational contexts. The lessons learned will inform future implementations both within our institution and, through scholarly dissemination, across the broader health professional education community. The success of this initiative demonstrates that when implemented thoughtfully, AI tools can significantly enhance educational outcomes while preparing students for the AI-integrated healthcare environments they will encounter in their professional careers. The future of health professional education will undoubtedly include AI as a central component, and experiences like this provide crucial guidance for effective integration strategies.

AI Use Declaration

This case study was developed with assistance from Claude (Anthropic), an AI language model, which provided support across multiple stages of the research and writing process.

Claude’s contributions included:

Literature Review Enhancement: The AI assistant identified gaps in the original literature review and suggested additional studies that strengthened the evidence base, particularly regarding global perspectives on AI in nutrition education.

Writing and Editing Support: The AI aided with academic writing conventions, citation formatting, reference list compilation, and clarity improvements. Claude helped refine technical language, suggested alternative phrasings for unclear passages, and ensured consistency in terminology throughout the document.

All substantive content, research findings, student quotes, and interpretations remain the original work of the author. Claude served as research and writing assistant, but did not generate or analyse primary data, nor did it substitute for the author’s professional expertise and judgement in dietetic education.

References

Academy of Nutrition and Dietetics. (2018). Academy of Nutrition and Dietetics: Standards of practice in nutrition care and standards of professional performance for registered dietitian nutritionists. Journal of the Academy of Nutrition and Dietetics, 118(1), 132–140. https://doi.org/10.1016/j.jand.2017.10.003

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. https://doi.org/10.1080/2159676X.2019.1628806

Buchem, I., & Hamelmann, H. (2010). Microlearning: A strategy for ongoing professional development. eLearning Papers, 21(7), 1–15.

Cant, R. P., & Cooper, S. J. (2017). Use of simulation-based learning in undergraduate nurse education: An umbrella systematic review. Nurse Education Today, 49, 63–71. https://doi.org/10.1016/j.nedt.2016.11.015

Cant, R. P., & Aroni, R. A. (2008). Exploring dietitians’ verbal and nonverbal communication skills for effective dietitian-patient communication. Journal of Human Nutrition and Dietetics, 21(5), 502–511. https://doi.org/10.1111/j.1365-277X.2008.00883.x

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510

Cook, D. A., Erwin, P. J., & Triola, M. M. (2010). Computerized virtual patients in health professions education: A systematic review and meta-analysis. Academic Medicine, 85(10), 1589–1602. https://doi.org/10.1097/ACM.0b013e3181edfe13

Cook, D. A., Hatala, R., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., … & Hamstra, S. J. (2011). Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA, 306(9), 978–988. https://doi.org/10.1001/jama.2011.1234

Crompton, H., & Burke, D. (2023). The use of mobile learning in higher education: A systematic review. Computers & Education, 123, 53–64. https://doi.org/10.1016/j.compedu.2018.04.007

Dai, W., Lin, J., Jin, F., Li, T., Tsai, Y. S., Gašević, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT. Proceedings of the IEEE International Conference on Advanced Learning Technologies, 323–327. https://doi.org/10.1109/ICALT58122.2023.00100

Felten, P. (2013). Principles of good practice in SoTL. Teaching and Learning Inquiry, 1(1), 121–125. https://doi.org/10.20343/teachlearninqu.1.1.121

Fernández-Alcántara, M., Escribano, S., Juliá-Sanchis, R., Castillo-López, A., Pérez-Manzano, A., Macur, M., … & Cabañero-Martínez, M. J. (2025). Virtual simulation tools for communication skills training in health care professionals: Literature review. JMIR Medical Education, 11, e63082. https://doi.org/10.2196/63082

Fryer, L. K., & Carpenter, R. (2006). Bots as language learning tools. Language Learning & Technology, 10(3), 8–14. https://doi.org/10.64152/10125/44068

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, 100001. https://doi.org/10.1016/j.caeai.2020.100001

Jeffries, P. R., Rodgers, B., & Adamson, K. (2015). NLN Jeffries simulation theory: Brief narrative description. Nursing Education Perspectives, 36(5), 292–293. https://doi.org/10.5480/1536-5026-36.5.292

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kassem, H., Beevi, A. A., Basheer, S., Lutfi, G., Cheikh Ismail, L., & Papandreou, D. (2025). Investigation and assessment of AI’s role in nutrition: An updated narrative review of the evidence. Nutrients, 17(1), 190. https://doi.org/10.3390/nu17010190

Kavanagh, S., Luxton-Reilly, A., Wuensche, B., & Plimmer, B. (2017). A systematic review of virtual reality in education. Themes in Science and Technology Education, 10(2), 85–119.

Kemmis, S., McTaggart, R., & Nixon, R. (2014). The action research planner: Doing critical participatory action research. Springer. https://doi.org/10.1007/978-981-4560-67-2

Knight, A., Baldwin, C., Reidlinger, D. P., & Whelan, K. (2020). Communication skills teaching for student dietitians using experiential learning and simulated patients. Journal of Human Nutrition and Dietetics, 33(5), 601–613. https://doi.org/10.1111/jhn.12743

Knight, A., Palermo, C., Reedy, G., & Whelan, K. (2024). Communication skills: A scoping review of experiences, perceptions and use in dietetic practice. Journal of the Academy of Nutrition and Dietetics, 124(3), 324–335. https://doi.org/10.1016/j.jand.2023.12.008

Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70.

Kris-Etherton, P. M., Akabas, S. R., Bales, C. W., Bistrian, B., Braun, L., Edwards, M. S., … & Tracy, S. W. (2020). The need to advance nutrition education in the training of health care professionals and recommended research to evaluate implementation and effectiveness. American Journal of Clinical Nutrition, 99(5), 1153S–1166S. https://doi.org/10.3945/ajcn.113.073502

Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28(1), 973–1018. https://doi.org/10.1007/s10639-022-11177-3

Lee, J., Kim, H., Kim, K. H., Jung, D., Jowsey, T., & Webster, C. S. (2020). Effective virtual patient simulators for medical communication training: A systematic review. Medical Education, 54(9), 786–795. https://doi.org/10.1111/medu.14152

Leong, K., Sung, A., Au, D., & Blanchard, C. (2021). A review of the trend of microlearning. Journal of Work-Applied Management, 13(1), 88–102. https://doi.org/10.1108/JWAM-10-2020-0044

Masters, K. (2019). Artificial intelligence in medical education. Medical Teacher, 41(9), 976–980. https://doi.org/10.1080/0142159X.2019.1595557

Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney‐Kennicutt, W., & Davis, T. J. (2014). Effectiveness of virtual reality‐based instruction on students’ learning outcomes in K‐12 and higher education: A meta‐analysis. Computers & Education, 70, 29–40. https://doi.org/10.1016/j.compedu.2013.07.033

Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal design for learning: Theory and practice. CAST Professional Publishing.

Morley, J., Machado, C. C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172. https://doi.org/10.1016/j.socscimed.2020.113172

Motola, I., Devine, L. A., Chung, H. S., Sullivan, J. E., & Issenberg, S. B. (2013). Simulation in healthcare education: A best evidence practical guide. Medical Teacher, 35(10), e1511–e1530. https://doi.org/10.3109/0142159X.2013.818632

Naja, F., Taktouk, M., Matbouli, D., Khaleel, S., Maher, A., Uzun, B., … & Nasreddine, L. (2024). Artificial intelligence chatbots for the nutrition management of diabetes and the metabolic syndrome. European Journal of Clinical Nutrition, 78(10), 887–896. https://doi.org/10.1038/s41430-024-01476-y

Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, 100020. https://doi.org/10.1016/j.caeai.2021.100020

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. https://doi.org/10.1207/S15326985EP3801_1

Peddle, M., Bearman, M., & Nestel, D. (2016). Virtual patients and nontechnical skills in undergraduate health professional education: An integrative review. Clinical Simulation in Nursing, 12(9), 400–410. https://doi.org/10.1016/j.ecns.2016.04.004

Phillips, B. J., Hickey, M. K., & Barnard, J. (2022). Technology-enhanced learning in nutrition education: A systematic review. Journal of Nutrition Education and Behavior, 54(3), 186–200.

Rasheed, R. A., Kamsin, A., & Abdullah, N. A. (2020). Challenges in the online component of blended learning: A systematic review. Computers & Education, 144, 103701. https://doi.org/10.1016/j.compedu.2019.103701

Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887

Sandars, J., & Patel, R. (2023). The challenge of online learning for medical education during the COVID-19 pandemic. International Journal of Medical Education, 11, 169–170. https://doi.org/10.5116/ijme.5f20.55f2

Schwartz, V. S., Rothpletz-Puglia, P., Denmark, R., & Byham-Gray, L. (2015). Comparison of standardized patients and real patients as an experiential teaching strategy in a nutrition counseling course for dietetic students. Patient Education and Counseling, 98(2), 168–173. https://doi.org/10.1016/j.pec.2014.11.009

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer. https://doi.org/10.1007/978-1-4419-8126-4

Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7

Tudor Car, L., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T., Joty, S., Theng, Y. L., & Atun, R. (2020). Conversational agents in health care: Scoping review and conceptual analysis. Journal of Medical Internet Research, 22(8), e17158. https://doi.org/10.2196/17158

Wang, B., Rau, P. L. P., & Yuan, T. (2023). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324–1337. https://doi.org/10.1080/0144929X.2022.2072768

Wartman, S. A., & Combs, C. D. (2018). Medical education must move from the information age to the age of artificial intelligence. Academic Medicine, 93(8), 1107–1109. https://doi.org/10.1097/ACM.0000000000002044

Winkler-Schwartz, A., Bissonnette, V., Mirchi, N., Ponnudurai, N., Yilmaz, R., Ledwos, N., … & Del Maestro, R. F. (2019). Artificial intelligence in medical education: Best practices using machine learning to assess surgical expertise in virtual reality simulation. Journal of Surgical Education, 76(6), 1681–1690. https://doi.org/10.1016/j.jsurg.2019.05.015

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0

Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., … & Li, Y. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity, 2021, Article 8812542. https://doi.org/10.1155/2021/8812542


About the authors

Olivia Wright is a Senior Lecturer in Nutrition and Dietetics and the Director of Teaching and Learning in the School of Human Movement and Nutrition Sciences at The University of Queensland. She leads initiatives that explore the ethical and pedagogical integration of AI in higher education – developing secure, inclusive and culturally responsive assessment models that strengthen academic integrity and student engagement. Olivia has led projects embedding AI feedback systems, digital tutors and AI-based conversational agents to replicate real-world client interactions, enhancing students’ confidence and competence in clinical settings. She also coordinates and teaches DIET7103: Interviewing and Counselling for Dietetics Practice and DIET7107: Food From Science to Systems as part of the Master of Dietetics Studies Program. Alongside her teaching innovation portfolio, Olivia’s research examines how dietary patterns influence microbial and health resilience and how this science can inform education, clinical practice and policy. Her interdisciplinary program connects culinary nutrition, chronic disease prevention, Indigenous food systems and AI-assisted data analysis, with a shared goal of improving health and learning outcomes across communities.

Shreyasi Baruah holds dual undergraduate degrees spanning cultural, exercise, and nutrition sciences and is currently completing a Master of Dietetics. She is passionate about bridging the gap between scientific research and everyday understanding, translating complex health concepts into clear, actionable insights. Her dietetic interests lie in women’s health, First Nations health, nutrition across the lifespan—particularly during maternal and early life stages—and public health. Driven by a commitment to equity in healthcare, Shreyasi aims to ensure that people from all backgrounds have access to reliable, evidence-based information that empowers them to make informed choices for their well-being. Shreyasi is also deeply passionate about education and believes in using knowledge as a tool to inspire and empower future generations to take charge of their health and communities. With hands-on research experience in exercise and nutrition science, Shreyasi is deeply engaged in the academic world and hopes to continue contributing to the evolving field of nutrition science through research that informs practical health strategies. Beyond academia, she finds joy in creative expression through art and poetry, as well as in powerlifting, sprinting, and spending time in nature—activities that ground her, fuel her creativity, and reflect her holistic approach to health and life.