9 Teaching with GenAI: A TPACK-Based Case Study in Web and Mobile Development Education
Aneesha Bakharia and Jasmine Burt
How to cite this chapter:
Bakharia, A., & Burt, J. (2025). Teaching with GenAI: A TPACK-based case study in web and mobile development education. In R. Fitzgerald (Ed.), Inquiry in action: Using AI to reimagine learning and teaching. The University of Queensland. https://doi.org/10.14264/911d013
Abstract
This case study applies a SoTL-informed, TPACK-based framework to examine how Generative AI (GenAI) can be responsibly embedded in web and mobile development education. GenAI was introduced across two courses through governance frameworks, structured prompting examples, API integration, and secure assessment. Data from a focus group and Student Evaluation of Course and Teacher (SECaT) comments were thematically analysed to explore student learning, perceptions, and skill development. Findings show that embedding GenAI within authentic, project-based learning enhanced students’ technical proficiency, prompt literacy, and employability awareness while maintaining integrity through in-person code reviews and exams. The study contributes to SoTL by evidencing how design, pedagogy, and ethics intersect in developing discipline-specific AI literacy, illustrating a model for inquiry-led innovation that aligns with Felten’s (2013) principles of SoTL: focused on learning, grounded in discipline, methodologically sound, and conducted in partnership.
Keywords
Web and Mobile Development Education, AI Literacy, Prompt Literacy, Project-Based Learning, Responsible AI Integration, Authentic Assessment, In-Person Code Review
Practitioner Notes
- GenAI literacy can be embedded across a course through iterative exposure, not a single intervention.
- Linking GenAI use to governance and ethics normalises responsible professional practice.
- Scaffolded prompting examples help students move from curiosity to competence.
- Secure assessments (e.g., code reviews) maintain academic integrity while enabling authentic AI use.
- Embedding AI literacy within project-based learning supports both employability and reflective capability.
Introduction
The embedding of Generative AI (GenAI) in COMP2140 Web/Mobile Programming and INFS3202 Web Information Systems is a course-wide approach designed to build discipline-specific GenAI literacy. The overall objective is to help students learn how to use GenAI tools in thoughtful, responsible and practical ways alongside their development of core technical coding skills. This integrated approach ensures that AI literacy is treated as an evolving professional competence rather than a one-off skill or an add-on content topic. The embedding begins with the introduction of a GenAI governance framework, giving students a foundation for understanding the ethical and governance dimensions of responsible AI. This starting point situates technical learning within broader conversations about data ethics, transparency and accountability. It also encourages students to critically question how AI systems operate, how bias is produced and mitigated, and where human oversight is most essential. By foregrounding these principles early in the semester, students are invited to view GenAI through both technical and ethical lenses, framing their practice within a culture of critical inquiry and responsibility.
To build practical literacy, prompting examples are introduced towards the end of lectures, reinforcing the week’s content and demonstrating how GenAI can be applied in relevant programming contexts. These examples are carefully sequenced to increase in difficulty and depth across the semester, so that students explore multiple aspects of prompting techniques, from generating and debugging code through to producing creative artefacts or design elements. This step-by-step approach builds confidence while also cultivating discernment in when and how to use GenAI effectively. Students come to see prompting as a process of reflection and iterative thinking.
A dedicated lecture extends this work by exploring how GenAI can be integrated into web products via API calls. Students learn how to build chatbots, perform API integrations, and apply Retrieval-Augmented Generation (RAG). The lecture content is supported by a lab that allows structured, hands-on practice, allowing students to experiment with these ideas in authentic, inquiry-based and project-oriented tasks. Within the assessment structure, students are given the option to include GenAI integration as an advanced feature in their projects. This optional approach encourages creativity and experimentation, while also recognising that not every project requires or benefits from GenAI. This offers flexibility and inclusivity, allowing students to engage at their own level of readiness. It also positions GenAI as a creative opportunity rather than a requirement, reinforcing learner agency and autonomy. To ensure academic integrity, both courses also incorporate a secure assessment component through either in-person code reviews or an exam. In these reviews, students must demonstrate and explain their own code, showing that they understand its functionality, structure, and design choices. This ensures that any GenAI-generated content has been critically evaluated and understood by the student, maintaining academic integrity and verifying genuine learning.
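To make the kind of integration covered in this lecture concrete, the sketch below shows a minimal Retrieval-Augmented Generation flow in Python: a naive keyword-overlap retriever selects relevant notes, which are then prepended to the chat messages that would be sent to a GenAI model. The note snippets, message format, and retrieval scoring are illustrative assumptions for this chapter, not the actual course materials.

```python
def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (naive retrieval)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_messages(query: str, docs: list[str]) -> list[dict]:
    """Assemble chat-style messages with retrieved context prepended."""
    context = "\n".join(retrieve(query, docs))
    return [
        {"role": "system",
         "content": "Answer using only the provided context.\n\n" + context},
        {"role": "user", "content": query},
    ]

# Illustrative course notes standing in for a real document store.
notes = [
    "React components re-render when their state or props change.",
    "Django views map URLs to Python functions that return responses.",
    "CSS flexbox arranges items along a main axis and a cross axis.",
]
messages = build_rag_messages("Why does my React component re-render?", notes)
```

In a production system, the keyword scorer would be replaced by embedding-based similarity search, but the structure of the augmented request stays the same.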
The rationale for this holistic embedding is twofold. First, it supports discipline-specific AI literacy development by gradually introducing students to governance, prompting, evaluation, and integration in authentic contexts. Second, it reflects a scholarship of learning and teaching approach where teaching innovation is a form of inquiry into how students learn with and through AI. This case offers an evidence-based exploration of how GenAI can enrich authentic, project-based learning, positioning GenAI as a catalyst for critical engagement and reflective practice. Through this approach, students learn to use GenAI responsibly while developing the capacity to critically integrate it into their emerging professional identities.
Literature Review: Embedding AI in Computing Education
Learning web development has long been recognised as challenging for students because of the breadth and complexity of skills required. Beginners must work across multiple layers of the stack: front-end markup and styling (HTML/CSS), client-side interaction programming (JavaScript), and back-end systems with database integration. This multi-layered nature creates a significant cognitive load. Park and Wiedenbeck (2011) observed that novices often struggle when faced with learning all these layers at once. Helgesson et al. (2021) likewise showed that the expanding ecosystem of software engineering tools compounds this overload for novice developers. Introducing web frameworks like React (taught in COMP2140) or Django (taught in INFS3202) further increases the complexity. Samudio and LaToza (2022) report that students often find it difficult to adapt code snippets to framework conventions, unsure of where in the codebase to add or modify code. This aligns with Xinogalos and Kaskalis (2012), who noted web programming courses demand unusually high levels of conceptual integration compared to other introductory computing subjects. Given these challenges, it is unsurprising that students increasingly turn to Generative AI (GenAI) tools for support.
GenAI tools such as OpenAI’s ChatGPT and GitHub Copilot are now able to generate web applications with increasing sophistication. They can generate boilerplate, suggest code completions, and scaffold front-end and back-end structures. GitHub (2023) reported measurable productivity gains from Copilot, with developers completing tasks significantly faster. Lee and Palmer (2025) found students using prompt engineering with GenAI completed programming tasks up to 55% faster, since the AI handled routine “plumbing” code. Multimodal models even allow design-to-code workflows: Wang et al. (2024), for example, found that GPT-4 was able to reproduce user interfaces from screenshots. Beyond anecdotal benefits, broader reviews reinforce these findings. AI is already reshaping software engineering through automated code generation, debugging support, predictive maintenance, and design assistance (Alenezi & Akour, 2025). These advances accelerate development cycles, but the authors warn of ethical risks, degraded human expertise, and challenges around reliability if AI becomes over-relied upon. GenAI can accelerate learning and lift motivation by reducing tedious work, but it also shifts collaboration patterns, with developers sometimes turning to AI instead of peers, potentially weakening the social learning loops central to agile teams (Ulfsnes et al., 2024).
At the industry level, the Jobs and Skills Australia (2025) Gen AI Transition report outlines the future labour market implications of GenAI, suggesting AI is more likely to augment rather than replace human roles. While adoption is uneven, in all cases the demand for AI literacy and adaptability is rapidly increasing. Graduates will increasingly need to integrate AI into workflows while maintaining critical human capabilities such as evaluation, collaboration and ethical judgement. This reinforces the educational need not only to teach technical programming and web development but also to embed AI literacy and reflective practice as core professional skills.
Despite the clear benefits, the risks are well documented. AI-generated outputs can be unreliable or misleading. Oertel et al. (2024) demonstrated how Copilot often produces subtly flawed solutions, while Tóth et al. (2024) showed that LLM-generated PHP code can replicate vulnerabilities seen in its training data. There is also a worrying trend: students assisted by AI often become overconfident, misjudging their code’s correctness and security (Woo et al., 2024). This highlights a tension: while GenAI can reduce barriers to entry and increase productivity, it cannot replace human review, testing and critical thinking.
To address these issues, it is becoming important for educators to explicitly embed AI literacy within the curriculum. Students must understand AI’s strengths, limitations, and ethical implications (Fitzgerald & Curtis, 2025; UNESCO, 2023). In computing education, this extends to domain-specific AI literacy: evaluating AI-generated code for performance, maintainability, and security in real projects. Prompt engineering is a growing area of focus: explicit instruction in structured prompting strategies improves both student confidence and output quality (Denny et al., 2023; Woo et al., 2024). Lee and Palmer (2025) highlight prompt engineering as an emerging curricular topic, suggesting that teaching students to role-assign, iterate, and refine prompts is itself a key skill for modern software education. Fitzgerald and Curtis (2025) likewise argue that graduates must not only know how to use AI, but how to use it responsibly, as part of developing the ethical, critical, and reflective capacities required in an AI-augmented world. This emphasis on responsible, situated practice connects directly to project-based learning, where students apply technical and ethical reasoning within authentic tasks that mirror professional contexts.
Project-based learning (PBL) remains central to computing education because it mirrors authentic industry practice. Sun and Deng (2024) found that structured use of ChatGPT within Kolb’s experiential learning cycle deepened student engagement, while Murniarti and Siahaan (2025) showed that AI support increased motivation and innovative problem-solving. However, overuse risks hollowing out learning if students outsource too much to AI. This is where secure assessments become essential. Du Plessis (2025) advocated for project-based assessment balanced with checkpoints of student accountability, and Lau and Guo (2023) identified live code reviews and oral demonstrations as “middle path” solutions, allowing students to use AI tools while still requiring them to explain and defend their code. Such models ensure authentic learning with secure assessment while reflecting the realities of professional software development.
Overall, the literature points in a clear direction: web development education should integrate GenAI in a holistic yet critical way, harnessing its benefits while managing its risks. Students need scaffolds for prompt engineering, authentic projects that embed AI integration, and secure assessments to safeguard understanding. AI should be normalised as a professional tool, yet always situated within frameworks of reflection, ethics, and accountability. This approach aligns with broader labour market evidence showing GenAI will augment rather than replace human expertise (Jobs and Skills Australia, 2025). The central challenge for educators is to strike a balance, leveraging AI’s potential to lower barriers and enhance efficiency while still cultivating the deep, transferable skills that underpin sustainable learning, creativity and long-term adaptability.
These disciplinary challenges highlight a broader gap identified in the higher education literature. Fitzgerald et al. (2025) argue that much existing AI-in-education research remains descriptive or tool-focused, lacking pedagogical grounding and critical inquiry into how students learn with and through AI. They call for research that unites ethics, design, and disciplinary context: what they term a pedagogically grounded, ethically aware approach to AI. This case study directly responds to that agenda, demonstrating how structured embedding of GenAI, supported by governance frameworks, scaffolding, and secure assessment, can strengthen both technical skill and reflective capacity in web and mobile development education. This approach also aligns with Felten’s (2013) principles of good practice in the Scholarship of Teaching and Learning, which emphasise inquiry that is focused on student learning, grounded in discipline, methodologically sound, and conducted in partnership.
Balancing Authenticity and Integrity in AI-Enabled Learning
A key challenge in embedding GenAI into COMP2140 and INFS3202 was ensuring that the project-based nature of both courses was preserved while also maintaining secure assessment practices. Allowing students to make use of GenAI tools created valuable opportunities for learning, but it also raised questions of authorship, understanding, and integrity. This required careful design to ensure that AI served as a catalyst for inquiry and reflection rather than a shortcut that undermined learning.
In COMP2140, there is no formal exam. Instead, assessment is built around four major tasks, three of which are substantial projects. To ensure security and fairness, COMP2140 includes two in-person code reviews. These reviews act as a safeguard by requiring students to demonstrate their knowledge of the code they submit, explaining design choices, debugging strategies, and how different parts of their application work. This model ensures that the projects remain authentic while providing a secure checkpoint that validates genuine understanding. In contrast, INFS3202 is structured around a major project, a code review and an exam. The exam functions as a traditional secure assessment component, while the project allows for extended exploration of backend development concepts. Together, these two elements balance the flexibility of project work with the accountability of a controlled assessment environment. Across both courses, secure assessments serve as a pass/fail, identity-verified hurdle assessment, confirming that students genuinely understand and own their work, even when GenAI tools are used to support development. This balance between openness to innovation and commitment to integrity sits at the heart of this design, ensuring that the integration of GenAI strengthens authentic learning.
Methodology: A Collaborative SoTL Inquiry
This study adopts a Scholarship of Teaching and Learning (SoTL) perspective, positioning teaching innovation as a form of inquiry into student learning. To evaluate the effectiveness of this embedded approach, a small-scale qualitative study was conducted across the two courses. Students who completed COMP2140 Web/Mobile Programming and INFS3202 Web Information Systems in Semester 1 and Semester 2 of 2024 were invited to participate in a focus group facilitated by a student partner and member of the teaching team. In addition, student feedback from course evaluations was reviewed to provide a broader perspective on the impact of the innovation, and a small number of individual student interviews were conducted to capture deeper reflections.
This partnership model reflects the principles of collaborative inquiry, recognising students as co-investigators rather than passive participants. This methodological design aligns with the Scholarship of Teaching and Learning (SoTL) commitment to systematic inquiry grounded in classroom practice. It evaluates a teaching innovation to explore how students negotiated new forms of agency, accountability, and professional identity in AI-supported learning environments. The data were analysed using reflexive thematic analysis (Braun & Clarke, 2006).
Six students participated: three had completed both courses, while the others had taken either COMP2140 or INFS3202.
The focus group was structured into four parts:
- Introduction: Students introduced themselves, shared their prior experience with AI, and reflected on their initial thoughts when they learned that GenAI and prompt engineering would be part of the course.
- Understanding the AI Tools: Students discussed the usefulness of the prompting examples covered in lectures and how well the lectures and labs prepared them for assessments. Participants also reflected on whether they attempted an advanced AI feature in their project, and any technical challenges they faced.
- Authenticity and Accountability: Students explored how the combination of project work and secure code reviews shaped their experience of learning with GenAI, including how they navigated authorship and integrity.
- Overall Impact and Future Improvements: Students reflected on how the experience influenced their perception of AI-assisted development, whether they felt more confident about applying AI in future projects, and what improvements they would recommend for embedding GenAI in the curriculum.
In addition to the focus group, qualitative data from the university Student Evaluation of Course and Teacher (SECaT) surveys were analysed to triangulate student perspectives. The focus group was conducted online via Zoom, with participant consent obtained in advance. Recordings were transcribed and validated by facilitators to ensure accuracy. Data from transcripts were then manually coded using thematic analysis to identify patterns and key themes related to learning engagement, confidence, perceived value and ethical use of tools. This study received ethics approval from The University of Queensland Human Research Ethics Committee (BEL LNR Panel), project number 2024/HE001346: AI Inquiry-Led Teaching Innovation.
Findings and Reflection
The focus group discussions and SECaT comments provide rich insights into how students experienced the embedding of GenAI within COMP2140 and INFS3202. Several interconnected themes emerged that reflect cognitive and affective dimensions of learning with AI. Students described how the courses’ openness, structure and critical framing shaped their engagement, their perceptions of employability and their development as independent learners:
Normalising GenAI in Learning
Students consistently remarked that GenAI was openly discussed and embedded into the courses, unlike in other subjects where it was banned or ignored. This transparency “normalised” AI use, reducing stigma and allowing students to view it as a legitimate professional tool rather than something hidden. As one participant noted, in other courses “we were not allowed at all,” whereas in COMP2140 “we learned how to use it correctly”. This distinction was viewed positively, as students recognised the alignment between the course’s approach and industry expectations. They valued the opportunity to develop GenAI proficiency as a workplace-relevant skill.
“I was surprised… about the fact that they’d actually integrated learning how to use AI into the course. Because in that semester I had a course where we were not allowed at all… but in COMP2140 at least, we learned how to use it correctly.”
“Before that most of the courses… just tried to ban it, but most students still used it. So it kind of pushed people under the rug. Now that it’s always talked about, it’s more normal.”
By legitimising GenAI through structured discussion rather than prohibition, the course cultivated trust and responsible engagement.
Relevance to Industry and Employability
Many students emphasised that learning how to prompt effectively and integrate AI into web products was highly relevant for future employment. They described AI as a “required skill for industry” and linked its use to productivity, efficiency, and job prospects.
“Knowing how to use it is useful for future job prospects … people efficient with AI will be able to get roles.”
“I think it’s very relevant… in implementing it into the applications and websites we make, but also just using it as a tool. It’s here to stay.”
“When we go to industry… we all want jobs. So I think it was a good part to use AI and ChatGPT… it saved a lot of time.”
Students recognised that knowing how to use AI responsibly is now part of being work-ready.
Skill Development in Prompting and Evaluation
A strong theme was the improvement of prompt engineering skills. Before the courses, students often provided vague prompts and received poor outputs. Through lecture examples and lab exercises, they learned to be more specific, to assign roles, and to give structured instructions. This resulted in more accurate, efficient outcomes. Students also became more critical, noting the importance of verifying AI output and ensuring they still “knew the basics” themselves.
“Before the course I didn’t provide as good prompts as I should have. I feel like the course taught me how to properly write a prompt that would give the right solution straight away.”
“In lectures we got a really good breakdown of prompt engineering… giving it a role, explaining the objective, giving an example of the output you want.”
“They didn’t just let us do it — they taught us how to, with different structures of a prompt and showing how efficient AI can be if you provide the right information.”
This theme illustrates how students developed both technical prompt engineering skills and the evaluative judgement to appraise AI output. The iterative design of lecture examples and labs encouraged experimentation and reflection, aligning with an experiential learning cycle (Kolb & Kolb, 2017). Students’ growing discernment in knowing when and how to rely on AI signals the development of critical AI literacy: the ability to use, question, and adapt technology within disciplinary practice (Fitzgerald & Curtis, 2025).
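The prompting structure students describe (assign a role, state the objective, give an example of the desired output) can be sketched as a small helper function. The role, objective, and example text below are illustrative, not taken from the course materials.

```python
def build_prompt(role: str, objective: str, example_output: str) -> str:
    """Compose a structured prompt from role, objective, and example output."""
    return (
        f"You are {role}.\n"
        f"Objective: {objective}\n"
        f"Example of the output I want:\n{example_output}"
    )

# Hypothetical task: asking for a debounced search hook in React.
prompt = build_prompt(
    role="an experienced React developer",
    objective="Write a custom hook that debounces a search input by 300ms.",
    example_output="export function useDebounce(value, delay) { ... }",
)
```

Compared with a vague one-line request, a prompt assembled this way gives the model a role, a concrete goal, and a target shape for the answer, which is exactly the shift students reported making across the semester.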
Practical Utility and Time-Saving
Students found GenAI particularly useful for debugging, generating boilerplate code, and speeding up routine tasks. Several students said it “saved a lot of time” and helped them overcome blocks when stuck on coding problems. However, they also recognised its limitations: outputs could be misleading or inefficient if prompts were weak, and some still preferred traditional resources such as documentation or Stack Overflow for reliability. This blend of enthusiasm and caution reflects growing metacognitive awareness of when and how to apply GenAI effectively.
“It helps me save a lot of time… especially when debugging the code. Without AI it would take much longer, but AI can help me find it in a very short time.”
“GenAI saved a lot of time and was useful for debugging when getting stuck.”
“I use it for simple functions… just like create a for loop, what structure is best to have this data.”
Assessment Design and Accountability
Students reflected positively on the secure assessment model of in-person code reviews. This structure ensured they could not rely on AI uncritically: they had to explain, justify and demonstrate understanding of their own code. This approach reassured students that integrity was maintained while still allowing authentic exploration of GenAI within project work.
“Even if you were to copy and paste it, you would have failed if you couldn’t explain it to the tutor. The code reviews were a good way to help understand it.”
“We had to reference it minimally … just saying these lines were developed by AI and include the prompt. I think that was a really good way of handling it.”
Integration of GenAI in Student Assessment Project
Students valued learning how to make API calls to GenAI models and integrate them into their assessment projects. This provided practical experience in treating AI as a feature of their applications rather than just a support tool.
“I had made an artificial marker website where you’d submit your assignment and it would call the API to give a mark and reasoning.”
Two students extended these skills into their research theses on LLMs, showing how course activities transferred into higher-level inquiry.
“Now I’m using AI in my Masters thesis building chatbots.”
“I’m building the AI Chatbot, and the project is about using the chatbot to help beginners debug their code. But I changed the prompt to make the chatbot not give the answers directly, but to guide them step by step.”
Students applied knowledge through active experimentation and reflection, transforming classroom learning into self-directed exploration, a key principle of experiential learning (Kolb & Kolb, 2017).
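A guiding chatbot of the kind the student describes can be approximated with a system prompt that forbids direct answers. The sketch below assembles a chat-completion style request body; the model name, prompt wording, and helper function are illustrative assumptions, not the student's actual implementation.

```python
import json

# Illustrative system prompt steering the model toward Socratic guidance
# rather than handing over corrected code.
SYSTEM_PROMPT = (
    "You are a patient programming tutor. Never give the corrected code "
    "directly. Instead, ask guiding questions and suggest one debugging "
    "step at a time until the student finds the bug themselves."
)

def build_request(student_message: str, model: str = "gpt-4o-mini") -> str:
    """Serialise a chat-completion style request body as JSON."""
    body = {
        "model": model,  # placeholder model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    }
    return json.dumps(body)

request_json = build_request("My for loop never terminates, why?")
```

The pedagogical behaviour lives entirely in the system message, so changing it (as the student did) redirects the same model from answer-giving to step-by-step tutoring without any other code changes.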
Optional vs Compulsory AI Integration in Assessment Projects
While the option to include an advanced AI feature in projects was appreciated, several students suggested making AI integration compulsory in future course iterations. Those who chose not to attempt it expressed regret, feeling they had missed a valuable learning opportunity to develop applied literacy and technical fluency. Students proposed that compulsory integration would ensure equitable engagement and deeper learning across the cohort.
“I didn’t actually implement AI in my assessment… looking back I definitely regret not doing it, because having that skill would really be beneficial. Maybe making it compulsory would be even better.”
“AI integration was optional, but make it compulsory.”
This feedback highlights students’ readiness to embrace AI as a core professional capability, one that bridges technical competence with reflective, authentic learning practice.
Future Improvements and Expansions
Students identified several opportunities to expand the embedding of GenAI in future courses. Suggestions included developing a chatbot tutor to assist with debugging, expanding advanced labs to explore more complex prompting and incorporating design-focused applications such as generating visual assets and design support. Participants also recommended exploring a wider range of GenAI models and tools (e.g. Gemini, Perplexity, Ollama) beyond ChatGPT, and integrating GenAI with institutional platforms like Blackboard to provide consistent support across courses.
“It would be nice to broaden people’s horizons about what you can use it for — I find myself using it the same way over and over, but there are so many interesting applications.”
“Maybe having more about using AI for visual elements like generating assets or colour palettes — that would have been really helpful.”
These insights highlight students’ curiosity and creative engagement, reinforcing the value of iterative curriculum design grounded in feedback and reflection.
Balanced Perspectives
While many students valued GenAI’s efficiency and problem-solving capacity, they also expressed caution. Some worried that over-reliance might erode foundational knowledge, especially in earlier years of study. Others described AI as a “hit and miss tool” that required careful prompting and verification. The consensus was that embedding it alongside secure assessment struck the right balance: students could experiment and innovate while being held accountable for genuine understanding.
“AI is just a tool… sometimes I use it and it works, but I don’t really know how it worked. That happens most of the time, to be honest.”
“It reinforced my perception that it’s a hit and miss tool… keep it very simple and it can be useful, but otherwise you end up going down a wild goose chase.”
“I think I need to know the basics… but if I don’t use AI, I’ll be slower than the rest of the people I work with if they do.”
This balanced perspective reflects the emergence of critical AI literacy where students understand AI as both an accelerator of learning and a subject for ethical and reflective evaluation. Students recognised that success in future professional contexts would depend not on avoiding AI, but on learning to work with it responsibly and intelligently. Taken together, these reflections suggest that effective GenAI integration is an evolving pedagogical practice that must continually balance innovation with rigour, creativity with accountability, and efficiency with critical reflection.
Future Directions: Designing for Lifelong AI Literacy
Students appreciated and valued the holistic embedding of GenAI across the courses, noting that it normalised AI use and connected directly to their professional development. Since the initial embedding in 2024, GenAI has advanced rapidly, with agentic systems now capable of generating almost complete web applications. This progress raises new opportunities but also new challenges for curriculum design.
Two central challenges stand out:
- Balancing Fundamentals with Complexity
It is crucial that students continue to master the fundamentals of web programming. At the same time, they need the capability to design, debug, and evaluate larger, AI-assisted systems. The curriculum must balance foundational knowledge with the realities of modern AI-augmented development.
- Fostering Lifelong Learning
GenAI evolves quickly, meaning that students must complete the course not only with skills in today’s tools, but also with the ability to learn and adapt with AI in the future. This requires explicit attention to learning with AI as an ongoing practice, cultivating habits of critical evaluation, reflection, and adaptation.
Looking ahead, two directions were identified for strengthening the embedding of GenAI:
- Mandatory Integration in Assessments
While GenAI integration was optional in 2024, future iterations could make it a required component of project assessments. This would ensure all students gain hands-on experience but introduces challenges around providing API key access and managing cost implications.
- Refining Prompting into Context Engineering
Prompt engineering remains important, but the focus is shifting toward context engineering: shaping the task, providing examples, and managing input/output flows for more advanced models. Teaching these skills will prepare students for a landscape where GenAI tools are increasingly agentic and system-oriented.
Looking ahead, the challenge is striking a balance between grounding students in the fundamentals of web development and preparing them to adapt to rapidly evolving AI tools. Educators play a vital role in holding these two priorities together, maintaining rigour in the basics while guiding students to continue learning as technologies evolve. This case study demonstrates how structured embedding of GenAI, underpinned by ethical and pedagogical design, can strengthen both technical skill and reflective capacity. Its findings reinforce the need for educators to engage critically with AI, positioning it as a catalyst for inquiry (Fitzgerald et al., 2025), creativity, and lifelong adaptability.
AI Use Declaration
This chapter was prepared with assistance from ChatGPT, which supported aspects of writing and editing, including clarity, structure, and citation formatting. All ideas, findings, and interpretations are the authors’ own, and the AI did not generate or analyse primary data or replace academic judgement.
References
Alenezi, M., & Akour, M. (2025). AI-driven innovations in software engineering: A review of current practices and future directions. Applied Sciences, 15(3), 1344. https://doi.org/10.3390/app15031344
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Denny, P., Leinonen, J., Prather, J., Luxton-Reilly, A., & Albrecht, J. R. (2023). Promptly: Using prompt problems to teach learners how to effectively utilize AI code generators. arXiv preprint arXiv:2307.16364. https://arxiv.org/abs/2307.16364
Du Plessis, E. (2025). Embracing project-based assessments in the age of AI in open distance e-learning. International Journal of Information and Education Technology, 15(2), 142–149. https://doi.org/10.18178/ijiet.2025.15.2.2249
Felten, P. (2013). Principles of good practice in SoTL. Teaching & Learning Inquiry, 1(1), 121–125. https://doi.org/10.2979/teachlearninqu.1.1.121
Fitzgerald, R., & Curtis, C. (2025, September 8). AI is now part of our world – uni graduates should know how to use it responsibly. The Conversation. https://theconversation.com/ai-is-now-part-of-our-world-uni-graduates-should-know-how-to-use-it-responsibly-261273
Fitzgerald, R., Kumar, J. A., Roe, J., Roehrer, E., & Yang, J. (2025). Framing the future: A research agenda for AI in higher education. Journal of University Teaching and Learning Practice, 22(3). https://doi.org/10.53761/jwt7ra63
GitHub. (2023). Measuring the impact of GitHub Copilot. GitHub Resources. https://resources.github.com/learn/pathways/copilot/essentials/measuring-the-impact-of-github-copilot/
Helgesson, D., Broman, D., & Runeson, P. (2021). A grounded theory of cognitive load drivers in novice agile software development teams. Empirical Software Engineering, 26(5), 1–29. https://ar5iv.labs.arxiv.org/html/2107.04254
Jobs and Skills Australia. (2025). Our Gen AI transition—Implications for work and skills. https://www.jobsandskills.gov.au/publications/generative-ai-capacity-study-report
Kolb, A. Y., & Kolb, D. A. (2017). Experiential learning theory as a guide for experiential educators in higher education. Experiential Learning & Teaching in Higher Education, 1(1), Article 7. https://nsuworks.nova.edu/elthe/vol1/iss1/7
Lau, S., & Guo, P. J. (2023, August). From “ban it till we understand it” to “resistance is futile”: How university programming instructors plan to adapt as more students use AI code generation and explanation tools such as ChatGPT and GitHub Copilot. Proceedings of the 2023 ACM Conference on International Computing Education Research (ICER ’23) (pp. 106–121). ACM.
Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(7). https://doi.org/10.1186/s41239-025-00503-7
Murniarti, E., & Siahaan, G. (2025). The synergy between artificial intelligence and experiential learning in enhancing students’ creativity. Frontiers in Education, 10, 1606044. https://doi.org/10.3389/feduc.2025.1606044
Oertel, J., Pfahler, L., & Pretschner, A. (2024). Don’t settle for the first! How many GitHub Copilot solutions should you check? Proceedings of the 2024 ACM Symposium on Software Engineering. https://www.researchgate.net/publication/390604728
Park, T., & Wiedenbeck, S. (2011). Learning web development: Challenges at an earlier stage of computing education. Information and Software Technology, 53(5), 449–461.
Samudio, A., & LaToza, T. (2022). Barriers in front-end web development. Empirical Software Engineering, 27(4), 1–37. https://cs.gmu.edu/~tlatoza/papers/BarriersInFrontendWebDevelopment.pdf
Sun, R., & Deng, X. (2024). Using ChatGPT to enhance experiential learning of college students. 57th Hawaii International Conference on System Sciences (HICSS-57).
Tóth, G., Pástor, A., & Gyimesi, P. (2024). LLMs in web development: Evaluating LLM-generated PHP code – Unveiling vulnerabilities and limitations. arXiv preprint arXiv:2404.14459. https://arxiv.org/abs/2404.14459
UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
Ulfsnes, R., Moe, N. B., Stray, V., & Skarpen, M. (2024). Transforming software development with generative AI: Empirical insights on collaboration and workflow. In A. Nguyen-Duc, P. Abrahamsson, & F. Khomh (Eds.), Generative AI for Effective Software Development (pp. 219–234). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-55642-5_10
Wang, X., Chen, Y., & Huang, J. (2024). A systematic evaluation of large language models for generating programming code. arXiv preprint arXiv:2403.00894. https://arxiv.org/html/2403.00894v1
Woo, D. J., Wang, D., Yung, T., & Guo, K. (2024). Effects of a prompt engineering intervention on undergraduate students’ AI self-efficacy, AI knowledge and prompt engineering ability: A mixed methods study. arXiv preprint arXiv:2408.07302. https://arxiv.org/abs/2408.07302
Xinogalos, S., & Kaskalis, T. (2012). Teaching web programming: Literature review and proposed guidelines. International Journal of Web Information Systems, 8(2), 123–142.