
6 Paving Paths of Inquiry-Led AI Integration for Future-Ready Learners

Reihaneh Bidar

How to cite this chapter:

Bidar, R. (2025). Paving paths of inquiry-led AI integration for future-ready learners. In R. Fitzgerald (Ed.), Inquiry in action: Using AI to reimagine learning and teaching. The University of Queensland. https://doi.org/10.14264/319212f

Abstract

This chapter presents an inquiry-led model for integrating Generative AI (genAI) into information systems education that develops students’ technical competence alongside critical judgement, validation rigour, and collaborative practice. Drawing on two course iterations across undergraduate and postgraduate cohorts, I trace a learning arc from dependency to collaboration to integration, showing how scaffolded prompting, systematic verification, and team-based workflows position AI as a partner rather than a proxy. The design combines early AI-literacy foundations (prompt engineering and validation protocols), structured practice through Google Colab notebooks (manual first, then AI-assisted), and assessments that evidence human oversight, error-checking, and reflective justification. A reflective practitioner methodology is used, supported by student observations, to surface five actionable insights: (1) prompting as a form of technical literacy, (2) validation as a core competency, (3) AI as facilitator of learning, (4) the scaffolding–independence paradox, and (5) AI’s role in team learning. I map these to the University of Queensland graduate attributes and discuss tensions (deskilling risks, integrity expectations, uneven readiness) with pragmatic mitigations. The chapter concludes by offering practical resources, including prompting guidelines, a validation checklist, and a simplified rubric for responsible AI use, and identifies future directions such as earlier scaffolding, stronger alignment between assessment and AI literacies, and mid-course monitoring. Together, these strategies aim to support learners worldwide to build confidence, act ethically, and develop the skills needed for success in an AI-enabled future.

Keywords

generative AI, inquiry-led learning, prompt engineering, validation, assessment design, teamwork, information systems education

Practitioner Notes

  1. Students use AI frequently but lack deeper literacies. Courses must explicitly teach prompting, validation, and reflective judgement.
  2. Provide structured support for prompt engineering and validation at the start of the course to build confidence before tackling complex tasks.
  3. Embedding quality-assurance practices (testing, debugging, cross-checking) develops habits essential for AI-augmented professional work.
  4. Activities should frame AI as a collaborator that enhances, rather than replaces, human problem-solving and teamwork.
  5. Incorporate reflective tasks, evidence of validation, and mini-rubrics that reward ethical, critical, and effective AI integration.

Introduction

As the rapid rise of generative AI (genAI) brings radical change to professional landscapes and entire industries, universities face an urgent imperative to adapt. This imperative is driven primarily by escalating demand for AI competencies across the global job market, evidenced by, for example, a twenty-fold increase in AI-related job postings within the first 10 months of 2023 (Dumas, 2024). A recent Microsoft survey indicates that three quarters of knowledge workers already use AI in their daily tasks. While 79% of business leaders consider AI adoption vital for competitive survival, two thirds are hesitant to hire individuals lacking AI proficiency (Microsoft & LinkedIn, 2024). Only 3% of employers believe that higher education is supplying adequate preparation for an AI-driven future, creating a gulf between industry and our universities[1]. Complicating matters, university students display a significant AI-readiness divide. In a poll of 3,839 students across 16 countries, 86% reported using tools such as ChatGPT, Grammarly, and Microsoft Copilot for coursework, mainly to seek information (69%), check their grammar (42%), create summaries (33%), paraphrase material (28%), and draft text for submission (24%). However, 58% reported lacking sufficient AI knowledge, and 48% did not feel ready for an AI-enabled workplace (Rong & Chun, 2024). This mismatch suggests that, while employing AI tools frequently, students are not gaining the deeper skills and confidence needed to thrive in AI-driven professional settings.

Universities are uniquely positioned to close this AI readiness gap. With student interest in artificial intelligence at an all-time high (Leckrone, 2023), educators have a prime opportunity to weave genAI into the curriculum in ways that sharpen critical-thinking skills and promote responsible, ethical use (Fitzgerald & Curtis, 2025). Information Systems (IS) educators, in particular, play a pivotal role in integrating business and technology skills by empowering students to navigate and contribute to AI-driven professional environments. Effectively weaving genAI into education requires rethinking traditional teaching methods, opening avenues for personalised tutoring, adaptive assessment, and novel interactive learning tools (Walczak & Cellary, 2023).

In my own teaching, I have observed how students engage with genAI in their learning and what kinds of understanding emerge from that experience. By examining these interactions, I have sought to identify effective strategies for integrating genAI into university education so that students learn to use AI and develop the confidence, critical judgement and capability required to contribute meaningfully to the future workforce. This study draws on both undergraduate and postgraduate information systems cohorts to explore how students apply genAI tools, the competencies they develop through this process and the evidence-based lessons educators can use when embedding AI in their teaching.

Literature Review

Historically, by treating technology adoption as a purely technical challenge, educational institutions have often overlooked the broader systemic changes required in organisational and pedagogical approaches to learning. In contrast, those that have embraced new technologies through redesigned learning activities, up-to-date competency frameworks, and restructured education processes have demonstrated improved student outcomes and sustained relevance (Christensen & Eyring, 2011). The transformation represented by artificial intelligence aligns with what Rogers (2003) described in his innovation-diffusion framework as a critical inflection point, where early adoption is essential for survival. Yet success requires more than uptake and technology upgrades: realising genAI’s potential also requires revisiting assumptions about pedagogy, curricula, assessment design, and even the very nature of learning. Without systemic adjustments, institutions risk widening skills gaps, not only from limited technical knowledge but from a deeper misalignment: traditional education tasks do not fully align with the collaborative, AI-augmented work environments that increasingly shape professional practice.

To address these challenges, educators are increasingly turning to 21st-century skills frameworks that position AI literacy within a broader reconfiguration of learning design, structure, and assessment (Partnership for 21st Century Learning, 2015). Agile teaching approaches, with their emphasis on student-centred learning, teamwork, and rapid feedback, along with active-learning and immersive work-integrated course designs, have been effective in fostering adaptable, future-ready graduates (Sharp et al., 2020). However, gaps remain: while curricula encompassing AI have gained traction in postgraduate programmes, undergraduate offerings are still limited (Chen, 2022). Compounding this issue, information-systems programmes are in decline, despite growing demand for IS skills (Bohler et al., 2020). That said, there is encouraging recent evidence that genAI integration personalises learning (Pesovski et al., 2025). Integrating genAI tools into coursework can also enhance experiential learning (Sun & Deng, 2025), foster problem-solving skills (Urbaczewski & Keeling, 2025), and support teaching in complex technical fields (Jiang & Nakatani, 2025; Zhang, 2025).

Information-systems education and AI-related competency development

Research in IS and business-education disciplines reveals both opportunities and complex challenges in integrating AI into these fields’ education, which occupy a unique position at the intersection of technology and business practice. Large language models (LLMs) have demonstrated capacity to improve learning outcomes for routine and foundational tasks, but they may present limitations when applied to more complex analytical work (Storey & Song, 2017), drawing attention to the need for implementation approaches that preserve rigour. Practice-anchored framings, such as integrating AI assistants into learning-management systems (Forment et al., 2024) and using LLMs to generate teaching cases (Lang et al., 2024), offer starting points for IS educators who seek to balance innovation with pedagogical effectiveness. Because IS programmes have long prepared students to straddle multiple socio-technical domains, they can play a special role in managing this balance, synthesising technical proficiency with ethical and business considerations (Chen, 2022). While concerns surrounding academic integrity persist in this arena, studies attest that responsible integration of genAI can mitigate the associated risks while keeping the focus squarely on skills in analysis and critical thinking (Jiang & Nakatani, 2025; Urbaczewski & Keeling, 2025). Yet teaching with AI remains an emerging frontier, one that challenges long-held assumptions about learning design, assessment, and student engagement. As educators, we are collectively learning how to teach with AI rather than around it. Continued research, dialogue, and sharing of pedagogical practice will therefore be essential to refine effective approaches, build an evidence base, and shape the next generation of AI-enabled learning.

Methodology: Integrating AI into the course

This teaching case is grounded in an approach based on reflective practice [2]. The initiative was implemented through two core Business Information Systems courses offered in Semesters 1 and 2 of 2025: the undergraduate-level Managing Business Data (BISM2207) and the postgraduate-level Information Retrieval and Management (BISM7206). Enrolments for BISM2207 were 65 in Semester 1 and 49 in Semester 2, while BISM7206 had 215 students in Semester 1 and 204 in Semester 2. The reflections draw on my first-hand experience as lecturer and coordinator, tutors’ observations of engagement and assessment performance, and students’ informal and formal feedback on the use of genAI. These reflections inform the account below of the techniques employed in the course and the framework for their implementation. The integration of genAI into the database management courses followed a deliberate plan aimed at both enhancing technical learning and stimulating critical awareness of AI’s strengths and limitations. The process, briefly outlined below, unfolded across three dimensions: teaching modules, learning activities, and assessments (an example is provided in Appendix A).

Table 1. genAI integration across course components

Component: Lecture (25% genAI)
Before: Lectures presented static examples of information retrieval and SQL coding.
After: Introduced a dedicated module on generative AI (genAI) in database management, covering the benefits and limitations of genAI in coding, prompt engineering, validation techniques, and ethical considerations. Includes live demonstrations using Gemini and Google Colab to model prompt generation, code execution, and validation in real time.

Component: Tutorial (25% genAI)
Before: Traditional tutorials emphasised manual SQL exercises and instructor-led problem solving, with no AI-assisted elements.
After: Adopted a blended model combining in-class and self-directed Colab-based exercises, where students perform parallel manual and genAI-assisted SQL tasks. Each session includes a guided introduction to prompt engineering followed by independent experimentation.

Component: Assignment Item 2 (40% genAI-enabled, 5% reflection component)
Before: Assessments consisted of database design and information retrieval exercises completed manually, emphasising accuracy and technical implementation without AI support.
After: Team-based assignments now require documentation of genAI tool usage, prompt engineering strategies, and validation methods. A 5% reflection component assesses students’ metacognitive understanding and critical evaluation of genAI’s role in problem solving.

The students’ journey with AI integration began with fundamental building blocks for AI literacy, to equip each learner with both practical skills and critical awareness. Accordingly, the first module introduced students to the benefits that genAI can bring to coding, particularly for SQL development, while also highlighting the limitations and risks of relying on AI-based tools without comprehensive validation. I placed stress on prompt engineering strategies (partly to hone and optimise the outputs) and on validation techniques (to ensure that all AI-generated SQL corresponds with expected results). Ethical considerations, most prominently employing AI responsibly and maintaining academic integrity, were central to the discussion throughout. To model best practice, I conducted live demonstrations of concrete ways to generate and validate queries via genAI tools, thereby illustrating not only correct outputs but also potential errors and pitfalls. I designed these activities to ensure that all students, however much experience they brought to the table, developed a coherent understanding of AI-related concepts and acquired the skills that effective yet responsible use of these tools demands. This design was intended to nurture both technical competence and consistent reflective judgement in students’ use of AI.
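A validation exercise of the kind demonstrated in these sessions can be sketched as follows. The query text stands in for a hypothetical genAI suggestion, and the expected rows are worked out by hand before the query is run; this minimal illustration uses Python’s built-in sqlite3 for self-containment and is not the exact tooling from the course:

```python
import sqlite3

# Build a small in-memory database to test queries against (illustrative schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
cur.executemany(
    "INSERT INTO product (id, name, price) VALUES (?, ?, ?)",
    [(1, "Widget", 9.99), (2, "Gadget", 24.50), (3, "Doohickey", 4.25)],
)

# A query as it might come back from a genAI tool (hypothetical output).
ai_query = "SELECT name FROM product WHERE price > 5 ORDER BY price DESC"

# Hand-derived expected result, worked out from the data BEFORE running the query.
expected = [("Gadget",), ("Widget",)]

actual = cur.execute(ai_query).fetchall()
assert actual == expected, f"AI query disagrees with expectation: {actual}"
print("AI-generated query validated against expected results")
```

The key habit being modelled is deriving the expected answer independently first, so the AI output is checked against human reasoning rather than accepted at face value.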

Other modules built on these foundations. Team projects required the students to document their use of AI tools by specifying their strategies, sample prompts, and validation steps. The activities encouraged them to treat AI as a member of the team, working alongside it but remaining responsible for directing, validating, and critically interpreting its outputs. This dovetailed with reflection in which the learners evaluated how they were applying genAI and considered what worked well and what required adjustment. That reflection – in parallel with my own – encouraged metacognitive awareness, spotlighting the importance of using AI as a complement to, not a replacement for, human skills in manual coding. By combining manual SQL tasks with AI-assisted coding, the assessments too aligned with the learning outcomes articulated in the syllabus while also promoting responsible and skilled AI use.

The course’s structure proved vital to its success. To complement the teaching modules, I restructured multiple learning activities under a self-directed model built on Google Colab notebooks. At the heart of the mechanism is manual completion of each SQL task, after which the same task is repeated with genAI assistance. Hence, the students could readily compare real-world outputs, delve into refining prompts, and build their confidence in both manual and AI-supported work. I scaffolded the learning by guiding them through structured prompt engineering exercises before steering them toward independent experimentation. Resources such as in-class instruction, online notebooks, and externally produced genAI integration guides provided ongoing support for this advanced learning, sustaining a course fuelled by its creators, the larger community, the learners, and ideas from the AI agents themselves.
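The manual-then-AI comparison at the heart of these notebooks can be illustrated with a small sketch. The two queries below stand in for a student’s hand-written solution and a hypothetical AI-assisted alternative, compared on the same data; sqlite3 is used here for self-containment, whereas the course itself worked with MySQL-oriented tooling:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sale (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sale (id, region, amount) VALUES (?, ?, ?)",
    [(1, "North", 120.0), (2, "South", 75.5), (3, "North", 60.0)],
)

# The student's manual solution.
manual = "SELECT region, SUM(amount) FROM sale GROUP BY region ORDER BY region"

# A hypothetical AI-assisted alternative: syntactically different, logically equivalent.
assisted = (
    "SELECT s.region, SUM(s.amount) AS total "
    "FROM sale AS s GROUP BY s.region ORDER BY s.region"
)

manual_rows = cur.execute(manual).fetchall()
assisted_rows = cur.execute(assisted).fetchall()

# The comparison step: equivalent queries must return identical result sets.
assert manual_rows == assisted_rows
print(manual_rows)
```

Running both versions side by side is what lets students judge whether an AI-suggested query is genuinely equivalent to their own, rather than merely plausible-looking.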

Course Coordinator Reflection

Just as the students’ journey with AI integration is a living one with an evolving map, my part in it began as an organic path, an experiment in enhancing the already scaffolded structure of my database management courses. Recognising genAI tools’ suitability for aiding students, especially in crafting and refining SQL queries or pseudocode for data manipulation tasks, I began exploring the opportunity to introduce these tools into the carefully designed learning pathway as systematic support mechanisms for solid content delivery, active learner involvement, and personalised learning. Offering dedicated genAI tutorials and lectures from the fourth week of instruction onward eased students in through structured scaffolding, helping them gain justified confidence in their prompt engineering and validation skills before tackling complex database-related tasks.

Throughout the implementation and growth process, which spanned two cohorts, I observed precisely how students gained experience of both the benefits and the risks of engaging with genAI as a learning partner. Moving through distinct phases in their manner of interacting with AI, they initially relied on it for basic tasks such as provision of clarification or more examples, generation of data, and construction of simple SQL queries. Then, they advanced to applying AI for complex problem-solving and strategic planning. By the end of the term, they were integrating AI-facilitated insight into their original thinking and team-based projects. This development may be characterised overall as a dependency → collaboration → integration trajectory.

Most significantly, the students’ trajectory encompassed far more than learning to use a tool: they were cultivating new literacies and metacognitive habits. For comprehensive understanding of how their new competencies were developing, I collected their reflections, closely observed their learning processes, and reviewed those outputs. Throughout, I maintained an emphasis on judgement, verification, and responsible use, aligned with both academic integrity expectations and professional standards for data-related roles.

What I was thinking and feeling along the way

I began with a tingle of both excitement and trepidation. The educator in me was thrilled by the potential to provide support tailored to each student’s struggles with complex database concepts. Normalisation rules, SQL syntax, and ER modelling often demand extensive time and practice before proficiency emerges. This is all the more true for students whose lack of background in the area leaves them with a steep hill to climb in ‘bootstrapping’ terms. They might not know where to turn first to start learning. While encouraged by the prospect of immediate AI-afforded bespoke assistance, I was concerned about responsible education, related integrity issues, and the risks bound up with users becoming overly dependent on AI tools, from the ease of ‘copy-and-paste’ solutions to complacency developed in tandem with trust in the tools. Would my students still develop the critical thinking skills so crucial for a database professional? Would they grasp the reasoning behind design decisions and the importance of validation, or would they simply accept AI-generated solutions?

As our mutual journey progressed, these doubts took on a more nuanced configuration. I began to regard myself less as the fount of all knowledge and more as a facilitator of learning conversations (McLean & Attardi, 2023), rich dialogue that now incorporated AI as a participant. That shift felt both liberating and challenging. It required me to reconsider fundamental assumptions about teaching and deeply held beliefs about learning in technical fields. At the same time, I had to accept that I needed to engage deeply with AI myself, embracing my vulnerability as fuel for my learning. Doing so could enrich each lecture session, providing further questions and examples that extend beyond my expertise.

What worked well

Through holistic contemplation of the growth observed as the term progressed, I identified four key areas wherein AI integration yielded significant positive outcomes:

(1) Honing of technical literacy, as manifested via prompt engineering

I witnessed a clear learning arc: students advanced from producing vague, task-level prompts to highly specific, context-rich queries. This transition attested not only to growth in technical skill but also to a shift in cognitive framing. The learners began to see crafting each prompt as a problem-decomposition exercise, refining their understanding of the task itself as they broke the problem into readily manageable components. In a telling example, ‘Generate some customer reviews’ evolved into ‘Generate multiple unique customer reviews for all products, each with a varying rating from 1 to 5 and written in varying realistic tones’. Students found this process challenging, which was a key part of their learning. As one student succinctly noted: “Learning how to use effective prompt writing.”

While AI accelerated technical execution, I observed that the most successful teams used it critically, verifying accuracy, comparing multiple solutions, and integrating domain understanding. It served as an effective support tool when balanced with human judgment and teamwork.

(Tutor)
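The prompt refinement illustrated above amounts to making each component of the task explicit. As a hypothetical illustration (not a template used in the course), prompt construction can be treated as problem decomposition by parameterising quantity, scope, rating range, and tone:

```python
def build_review_prompt(products, n_per_product, rating_range=(1, 5)):
    """Compose a context-rich prompt from explicit task components
    (quantity, scope, rating constraints, tone) rather than a vague request."""
    lo, hi = rating_range
    return (
        f"Generate {n_per_product} unique customer reviews for each of these "
        f"products: {', '.join(products)}. Each review should carry a rating "
        f"from {lo} to {hi} and be written in a varied, realistic tone."
    )

# Hypothetical usage: the vague 'Generate some customer reviews' becomes
# a fully specified request once each component is named.
prompt = build_review_prompt(["Widget", "Gadget"], n_per_product=3)
print(prompt)
```

Decomposing the request this way forces the student to articulate exactly what the task requires before any AI tool is involved, which is the cognitive shift the learning arc describes.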

Key insight

This increasing sophistication and detail signal the emergence of a new kind of technical literacy demanded by the digital age: effective prompt engineering competence requires metacognitive-level awareness (e.g., addressing task complexity) in addition to skills (e.g., in problem decomposition). Rather than merely learning to operate tools, these students were developing sophisticated cognitive frameworks for human–AI interaction. Education tuned for the age of AI must explicitly cultivate these new literacies alongside traditional subject-matter expertise.

(2) Validation’s emergence as a core competency

Over time, the students moved beyond surface-level acceptance of AI outputs and, in a well-designed education setting, developed and maintained what I call a quality-assurance mindset. Both cohorts placed strong emphasis on the importance of human oversight and critical evaluation. Sensitised to the immense value of ensuring accuracy and reliability, the students adopted multiple validation strategies: cross-checking SQL queries via MySQL Workbench, reviewing AI-produced content against assignment rubrics, manually testing and debugging AI-suggested material, and collaborating within their teams to verify technical accuracy. Reflective discussions with the teaching team spotlighted these validation practices as they steadily became second nature: I observed them grow interwoven into the habit of learning. As students reflected:

When having errors showing in the database, using GenAI helped me to solve the problem and be able to learn the correct structure of creating codes at the same time.

(Student)

Students use AI tools to generate sample data for their database tables, especially for their assignments. This helps them test queries more efficiently. I find this approach beneficial, because students learn how to evaluate whether the generated data align with their database design and constraints.

(Tutor)

This pattern articulated a key principle underpinning the learning experience envisioned: AI should serve as a partner in the process.
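The data-evaluation habit the tutor describes can be made concrete by letting the schema’s own constraints screen AI-generated rows. A minimal sketch, assuming an illustrative review table and hypothetical generated rows (sqlite3 again for self-containment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Schema with constraints the generated data must satisfy (illustrative).
cur.execute(
    "CREATE TABLE review ("
    " id INTEGER PRIMARY KEY,"
    " product TEXT NOT NULL,"
    " rating INTEGER NOT NULL CHECK (rating BETWEEN 1 AND 5))"
)

# Hypothetical AI-generated sample rows; the last violates the CHECK constraint.
rows = [(1, "Widget", 4), (2, "Gadget", 5), (3, "Doohickey", 7)]

accepted, rejected = [], []
for row in rows:
    try:
        cur.execute("INSERT INTO review VALUES (?, ?, ?)", row)
        accepted.append(row)
    except sqlite3.IntegrityError:
        rejected.append(row)  # constraint violation caught by the database

print(f"accepted {len(accepted)} rows, rejected {len(rejected)}")
```

Here the database itself becomes one layer of the validation protocol: rows that contradict the design are rejected mechanically, prompting the student to ask why the generated data failed.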

Key insight

Developing a stance of systematic scepticism (i.e., cultivating a thoughtfully followed structured habit of questioning and critically evaluating information) and adherence to verification protocols represents a competence foundational to work in any AI-augmented professional environment. As students encountered evidence (sometimes first-hand) that AI-generated results not backed by a strong foundation in database concepts often end up incomplete, misleading, or lacking in contextual relevance, they grew increasingly aware that core knowledge of the subject at hand remains pivotal for effectively prompting AI agents, interpreting the output appropriately, and applying it well. Likewise, students developed heightened awareness of boundaries (between fields of expertise, partners, etc.), learning to identify when AI possesses sufficient domain-specific depth versus where the process requires human judgement and corrective steering.

(3) AI’s strength as a facilitator of learning

The learners consistently positioned AI in supportive roles, as an enabler of efficiency rather than a substitute for independent thinking. Many students portrayed it as a supportive tool that complemented their efforts while the core responsibilities (such as concept-level understanding, design decisions, and production of the final outputs) remained their own. The assignment submissions cast into vivid relief the trajectory along which students progressed: from leaning heavily on AI for basic tasks such as constructing simple queries, through collaborative activities contributing to complex problem-solving, to eventual synthesis of AI-fuelled and human insight. Students also identified the importance of contextual learning when working with AI, discovering that effective AI use required substantial domain knowledge. AI could provide structure or suggestions, but results devoid of grounding in underlying database concepts often ended up incomplete, misleading, or contextually inappropriate. This highlights the irreplaceable nature of foundational subject-matter expertise: a crutch cannot suffice on its own.

I use GenAI to build a foundation for my assignments—then challenge it to find gaps or alternative perspectives. It acts like a second set of eyes, encouraging me to refine my reasoning rather than replace it.

(Student)

Students also report using generative AI to identify and fix errors in their SQL code. They find it helpful and convenient, as they can access quick feedback which is always available instead of waiting for consultation sessions.

(Tutor)
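The error-feedback loop the tutor describes can be sketched in a few lines: run a draft query, capture the database’s exact error message, and fold both into a follow-up prompt. This is a hypothetical illustration of the workflow, not a specific course tool:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

draft_query = "SELECT nme FROM customer"  # column-name typo a student might make

try:
    conn.execute(draft_query)
    follow_up = None  # query ran: no AI assistance needed
except sqlite3.OperationalError as err:
    # Package the exact error with the query so a genAI tool gets full context.
    follow_up = (
        f"My SQL query failed.\nQuery: {draft_query}\n"
        f"Error: {err}\nExplain the cause and suggest a corrected query."
    )

print(follow_up)
```

Supplying the verbatim error alongside the failing query is what turns the AI exchange into a learning moment: the student must read and interpret the error before asking for help with it.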

Key insight

A trajectory toward genuine collaboration with AI constitutes the heart of a roadmap for educators wishing to design an AI-enabled learning experience. In my project, students’ active negotiation of responsible boundaries in human–AI collaboration stemmed not from externally imposed rules but from emergent reflective practice and thoughtfully structured opportunities for experimentation. This implies that fostering academic integrity in the AI era requires solidly scaffolded, experience-oriented learning opportunities rather than reliance solely on formal guidelines and procedures. Intentional, principled scaffolding can supply highly effective guidance from the dependence stage all the way to responsible integration into confidently embraced learning processes.

(4) Elucidation of the scaffolding/independence paradox

Notwithstanding the incremental, systematic nature of the learning, students still needed strong support, even midway through the term, for building their confidence and certain skills vital for using AI effectively. This fact necessitated activities dealing specifically with genAI: I implemented a dedicated genAI lecture in Week 4, along with guided demonstrations and practice activities. While the goal was to foster independent, reflexive engagement, these activities highlighted that true independence develops from a strong base of structured guidance. Early support, in the reliance stage, can pave the way for greater independence later.

AI support boosted their confidence over time in tackling complex database queries and improved analytical thinking.

(Tutor)

Key insight

Providing precisely targeted, structured guidance need not restrict independence. It can promote self-support and accelerate progress toward resilience. In this respect, our experience calls into question the notion that heavy guidance inhibits deep learning: the students who engaged most actively with the lecture and exercises demonstrated greater sophistication than the others as the term progressed. Moving decisively beyond surface-level interactions, they were soon critically evaluating AI’s outputs, iteratively refining prompts, and integrating AI-driven insight into their thoughts and collaborative work. At least in the context of AI literacy, deliberate, early support promotes deeper and more autonomous learning. By building foundational skills and confidence, the students grew better equipped to engage in self-directed exploration and evaluate for themselves how appropriate and accurate a given set of AI outputs might be in increasingly complex tasks.

(5) AI’s position in team-based learning

In addition, introducing AI significantly reshaped how learners operated in teams. It provided shared starting points for group tasks, helped standardise contributions, and streamlined communication. Within their respective teams, the students frequently employed AI for brainstorming possible ideas, clarifying technical matters, planning their project milestones, and distributing tasks more effectively. As students shared:

In group projects, it has helped me communicate better and plan timelines during challenging situations […and helped to] expand my critical thinking and creativity.

(Student)

Students used AI as a “virtual teammate” to brainstorm database structures, check query syntax, and refine report writing. AI acted as a sounding board, helping them iterate quickly and identify conceptual gaps. Teams often used AI collaboratively, discussing whether its suggestions aligned with business logic and course requirements. This fostered peer discussion and improved collective problem solving.

(Tutor)

In several instances, they applied AI tools in jointly devising their initial SQL queries, troubleshooting their handling of complex problems, and creating structured outlines for deliverables. Thus, they rendered the teamwork more efficient and reduced the time spent on routine co-ordination tasks.

Key insight

In team settings, AI quite often functioned as a shared resource to be relied upon – almost like an additional, expert member of the group. It provided consistent support that helped students align their efforts, grapple with uncertainties, and maintain momentum as they carried out collaborative tasks. This team dynamic highlights the need for educators to design activities that exploit AI as a collaboration partner while still guaranteeing the students’ continued critical engagement and reflection throughout their teamwork.

Summary of Insights

Honing of technical literacy

Students progressed from basic tool use to developing nuanced prompt engineering skills that required both technical and metacognitive awareness.

Validation’s emergence as a core competency

Students learned to question AI results and check them against their own knowledge. This helped them understand when to trust AI and when to rely on their own judgement, building strong critical thinking skills for working with AI tools.

AI’s strength as a facilitator of learning

Students moved from relying on AI to working with it as a learning partner, guided by structured activities that encouraged reflection and experimentation. Students learned to use AI responsibly through hands-on experience, which helped build deeper learning.

Elucidation of the scaffolding/independence paradox

Students began with targeted, scaffolded guidance that built their foundational skills and confidence. As they progressed, they moved beyond surface-level use of AI, critically evaluating outputs, refining prompts, and integrating AI insights into their work.

AI’s position in team-based learning

Students began using AI as a helpful tool in team projects, gradually treating it like an expert teammate that supported planning and problem-solving. This collaborative use helped them stay aligned and productive, while also encouraging critical thinking and reflection.

GenAI-linked learning and its mapping to strengths cultivated in graduates

Mapping the five facets of genAI learning-related insights discussed above to the attributes sought in The University of Queensland graduates[3] reveals that AI integration comprehensively develops students’ capabilities. While these capabilities are typically cultivated across an entire program, achieving them even in part within a single course is valuable because it provides students with early, applied experiences that reinforce broader program goals. The positive effects extend across the full spectrum but demonstrate a particularly strong alignment with creating ‘courageous thinkers’ (for which four of the five insight areas show strong relevance) and ‘connected citizens’ (meshing with three of them).

Developing prompt engineering as a form of technical literacy cultivated the scholarly depth expected of ‘accomplished scholars’, while also fostering the critical reasoning and innovative thinking vital in a courageous thinker. Similarly, work on validation competency demonstrated how AI’s integration builds the critical mindset required of such thinkers and the disciplined accuracy expected of an accomplished scholar. The insight into team-based learning further illustrates AI’s power to reshape collaboration dynamics and reinforces graduates’ ability to become influential communicators through clear, solidly structured team interaction. Finally, the scaffolding–independence paradox accentuates the importance of structured, early guidance in enabling deeper autonomy; sensitivity to it advances both courageous thinking and scholarly accomplishment. Although the final attribute considered, that of someone capable of responding flexibly to cultural factors, shows less direct alignment, AI-incorporating collaboration and team activities create opportunities for students to engage with diverse perspectives. These opportunities point to the vast potential of purposeful design in future iterations aimed at richer intercultural engagement.

(1) Accomplished scholars

From several of the perspectives considered, students demonstrated the depth and rigour expected of accomplished scholars. As they honed their prompt engineering, they developed advanced technical literacy, evidenced by breaking down complex tasks to formulate actionable prompts and applying the resulting skills effectively in the domain of database management. Through deep engagement with validation, they showed scholarly discipline: rigorous testing, systematic debugging, and comprehensive evaluation of AI-generated outputs in pursuit of technical accuracy. Attending to the scaffolding–independence issue reinforced the value of structured guidance for the systematic building of high-level technical skills. This sensitivity allowed students to progress toward independent and more critical engagement with AI, equipping them with analytical and evaluative skills that are transferable to complex problem-solving across diverse professional and academic domains.

(2) Courageous thinkers

Critical and innovative thinking was a recurring theme across four of the areas of insight, and the results clearly spotlight the students’ development as courageous thinkers. With their prompt engineering, they engaged in experimentation and iteration, refining their human–AI interaction and embedding the technical skills in contexts of database problem-solving. Through validation, they cultivated systematic scepticism, learning to question outputs critically rather than relying passively on AI. By growing to employ AI as a facilitator of learning, the students developed adaptive problem-solving and reflective thinking during their dependency → collaboration → integration journey. Lastly, attention to the balance of scaffolding and independence attests to how early structured support can empower students to take calculated risks, planting seeds for intellectual curiosity and resilience and fostering a mindset that supports innovative thinking and adaptive decision-making in rapidly evolving digital and organisational environments.

(3) Influential communicators

The integration of AI also strengthened students’ capacity as influential communicators. Solid prompt engineering demanded clear, well-structured communication: students learned to articulate complex ideas in precise, actionable ways that AI could interpret. This iterative process fostered conversational reasoning as they refined prompts, clarified context, and critically assessed responses to ensure clarity and accuracy. Through team-based learning, students honed their negotiation skills and accumulated experience of collaborative dialogue, collectively refining and validating AI-generated outputs in ways that improved the quality and effectiveness of their group work, while also cultivating collaborative communication skills essential for interdisciplinary teamwork and leadership in data-driven contexts.

These graduate attributes reflect the kinds of qualities commonly emphasised across many university programs and institutions that aim to develop well-rounded, socially aware, and adaptable graduates. Collectively, the three attributes show how AI-enabled pedagogy advances globally recognised capabilities (AI literacy, analytical and evaluative skills, adaptive problem-solving, and professional communication), equipping students to contribute confidently to industry.

Student Reflections

What proved to be challenging

One of the main challenges my students and I faced arose from the complexity of engineering prompts for AI agents and from factors that complicate validation efforts. Both cohorts found prompt engineering and quality control more challenging than anticipated. Students initially struggled with the metacognitive demands of crafting effective prompts and developing systematic validation approaches. However, these struggles ultimately deepened their learning, and over time they became more cognisant of how to craft a good prompt (e.g., it swiftly became apparent that asking an AI agent to generate a SQL statement by telling it ‘write an insert statement’ yields overly generic results). They learned from experience that effective prompt design requires specificity and attention to context.
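The specificity lesson can be made concrete with a small sketch contrasting the generic prompt above with a context-rich one. The helper function, table, and wording here are my own illustration, not the course’s actual prompting template.

```python
# A bare request like this leaves the AI guessing at table names, columns,
# and constraints, so the output is inevitably generic.
GENERIC_PROMPT = "write an insert statement"


def build_sql_prompt(table: str, columns: dict, task: str) -> str:
    """Compose a prompt that supplies schema and constraints explicitly.

    Hypothetical helper: the structure (context, schema, task, output
    format) reflects the specificity principle, not a fixed template.
    """
    schema = ", ".join(f"{name} {sqltype}" for name, sqltype in columns.items())
    return (
        "You are helping with a relational database course.\n"
        f"Table: {table} ({schema}).\n"
        f"Task: {task}\n"
        "Return only valid ANSI SQL with explicit column names."
    )


specific_prompt = build_sql_prompt(
    table="orders",
    columns={
        "id": "INTEGER PRIMARY KEY",
        "customer": "TEXT NOT NULL",
        "total": "REAL",
    },
    task="Insert an order of $120.00 for customer 'Ada'.",
)
print(specific_prompt)
```

Because the schema, constraints, and expected output format are stated up front, the AI has far less room to produce the generic boilerplate that a bare ‘write an insert statement’ invites.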

Secondly, I encountered a challenge stemming from the need for prior scaffolding. Notwithstanding the above-mentioned dedicated genAI lectures and tutorial activities, which held great value, both cohorts requested more hands-on skill building earlier in the course. The timing and intensity of the scaffolding proved crucial: more structured support had to be given closer to the beginning of the teaching term, to build confidence before students embarked on complex tasks. They also requested that self-guided learning be incorporated alongside the active participation built into the tutorials.

From my perspective as the instructor, navigating the necessary adaptations to course structure constituted one of the most significant challenges. The course grew more adaptive and student-driven, its structure evolving dynamically as the term progressed. Learning increasingly drew on external tools to supplement the built-in scaffolding, which rendered the content’s coverage and delivery far more fluid than in previous iterations of the course. Rather than strictly follow a pre-ordained ‘one size fits all’ lecture-to-tutorial path, students began to chart their own learning trajectories, independently selecting AI tools to match their needs. For example, in parallel with lectures primarily about SQL query formulation and database design principles, more than a few students waded into related areas not formally covered by the curriculum. Venturing into advanced data cleaning and preprocessing techniques, some of them used genAI to identify anomalies in sample datasets, experiment with automated generation of code for data transformations, and even explore newly released tools for large-scale data handling. This self-directed learning fed the process in several ways: it created valuable opportunities for deeper engagement and innovation, with students bringing the resulting insight into discussions and group projects. While that strengthened the quality of outputs generally and fostered peer-to-peer learning within student teams, it also posed challenges for maintaining coherent progression of learning across the cohort.

Because not all students move at the same pace, I had to adjust instruction strategies ‘on the fly’. For example, I integrated brief instructional videos on targeted topics and made office hours available to bridge knowledge gaps and ensure that foundational learning objectives were still met. Reflecting on this experience highlighted the importance of striking a constant balance: course structures must be flexible yet anchored, giving students room to explore and innovate while still providing clear scaffolding and checkpoints that maintain alignment with the core learning objectives.

Future Directions

My observations and students’ feedback together point to several areas in which a different approach might have enhanced the implementation. Looking forward, future courses can benefit from this real-world insight; more importantly, it can directly inform the direction of future use of AI in teaching and learning.

One key area for improvement relates to more intensive, early scaffolding. As noted above, both cohorts explicitly requested earlier, more hands-on work on their skills in prompt engineering and validation techniques. While I provided a dedicated genAI lecture and, in conjunction with tutorial activities, self-directed learning, a more front-loaded approach with intensive workshops near the beginning of the term might have smoothed students’ path along the dependency → collaboration → integration trajectory. In light of these insights, I plan to adopt a front-loaded scaffolding strategy that emphasises AI-literacy development early in the term. This will encompass hands-on tutorial activities through which students can practise specific manual-coding skills, compare their results with genAI-produced code, and experiment with prompt engineering techniques. This approach should not only build foundational technical skills but also, through dedicated activities, help students critically evaluate AI agents’ outputs from the outset.

Assessment design represents another important area. While the course is not centred on AI, integrating AI as a supportive learning tool has already influenced how students engage with content and complete their tasks. I have augmented traditional assessment methods with portfolio reflections, practical exercises, and group projects that capture how students use AI for problem-solving, validation, and collaboration. To further ensure that the assessment framework reflects these new literacies, I will continue reviewing and refining the assessment design so that it comprehensively reflects the development of learning and accurately captures the depth of students’ engagement with AI. This includes incorporating evaluation log–based exercises, which enhance the reflective components by encouraging critical analysis of students’ AI use. These exercises also support more authentic, real-world anchored evaluation of validation and debugging, and help assessment tasks reflect the evolving balance between AI-augmented learning and independent, conceptually oriented mastery of the field.
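One possible shape for an evaluation-log entry is sketched below. The fields are my own guess at what such a log might record (prompt, output, validation steps, verdict, reflection); they are not the course’s actual template.

```python
from dataclasses import dataclass, field
from datetime import date


# Hypothetical structure for one entry in a student's AI evaluation log.
# Field names and the example values are illustrative assumptions.
@dataclass
class AIUseLogEntry:
    entry_date: date
    prompt: str                  # what the student asked the AI
    ai_output_summary: str       # what came back, in brief
    validation_steps: list = field(default_factory=list)
    verdict: str = "pending"     # e.g. accepted / corrected / rejected
    reflection: str = ""         # why, and what was learned


entry = AIUseLogEntry(
    entry_date=date(2025, 3, 14),
    prompt="Generate a SQL view for monthly sales per region",
    ai_output_summary="CREATE VIEW with GROUP BY region and month",
    validation_steps=["ran against sample data", "checked totals by hand"],
    verdict="corrected",
    reflection="AI omitted NULL regions; added COALESCE before accepting.",
)
print(entry.verdict)
```

A log of this kind makes the student’s oversight visible to an assessor: each entry records not just that AI was used, but what was checked, what was wrong, and what judgement was exercised before the output entered the submitted work.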

Finally, there is a need for more systematic monitoring of how students use AI and of the challenges that arise as the course progresses. Although I tracked students’ advancement along the overall trajectory, my approach did not employ structured mechanisms for monitoring each individual’s development from early on: the reflection activities and portfolio components in place capture their learning well, but they come into their own primarily toward the end of the term. To enhance the opportunities for timely intervention and support, I plan to refine the assessment structure with more systematic mechanisms: earlier checkpoints and ongoing tracking of progress. I will implement mid-term diagnostics to identify students who might be ‘stuck’ in the reliance stage, along with targeted support activities to help them advance. At the same time, the structure should ensure that enrichment and growth opportunities, not just challenges, become visible earlier and can be handled proactively.

Building on these pedagogical insights, this study contributes to the growing discourse on AI in Information Systems education by providing an empirically grounded view of how genAI integration reshapes students’ learning trajectories and assessment design. The next phase of this work will aim to generate more systematic evidence to inform curriculum-level frameworks and institutional strategies for AI-augmented learning. Future research should further explore how to balance automation with higher-order skills, as emphasised by Sun and Deng (2025), who highlight the need to shift focus from syntax-based tasks toward evaluation, prompt engineering, and critical analysis of AI-generated outputs. In parallel, attention must be paid to cultivating students’ adaptive and lifelong learning capabilities, aligning with Walczak and Cellary’s (2023) call for fostering better adaptation skills in AI-supported education. Further research is also needed to strengthen students’ validation capabilities as a cornerstone of responsible and effective AI use in professional and academic settings.

Statement of Authorship and AI Use

Bidar led all aspects of this chapter, from framing and investigation to writing and review. Generative AI was used only to assist with editing; the insights and scholarly reflections presented here are solely the author’s.

References

Bohler, J. A., Larson, B., Peachey, T. A., & Shehane, R. F. (2020). Evaluation of information systems curricula. Journal of Information Systems Education, 31(3), 232–243.

Chen, L. (2022). Current and future artificial intelligence (AI) curriculum in business school: A text-mining analysis. Journal of Information Systems Education, 33(4), 416–426.

Christensen, C. M., & Eyring, H. J. (2011). The innovative university: Changing the DNA of higher education from the inside out. Wiley.

Dumas, A. (2024, May 7). Artificial intelligence is big, but are companies hiring for AI roles too fast? Fox Business. https://www.foxbusiness.com/technology/artificial-intelligence-big-companies-hiring-ai-roles-too-fast

Fitzgerald, R., & Curtis, C. (2025). AI is now part of our world – uni graduates should know how to use it responsibly. The Conversation. https://theconversation.com/ai-is-now-part-of-our-world-uni-graduates-should-know-how-to-use-it-responsibly-261273

Forment, M. A., Pereira, J., García-Peñalvo, F. J., Casañ, M. J., & Cabré, J. (2024). LAMB: An open-source software framework to create artificial-intelligence assistants deployed and integrated into learning management systems. Computer Standards & Interfaces, 92, Article 103940. https://doi.org/10.1016/j.csi.2024.103940

Jiang, Y., & Nakatani, K. (2025). Exploring implementations of GenAI in teaching IS subjects and student perceptions. Journal of Information Systems Education, 36(2), 180–194. https://doi.org/10.62273/WFHO1011

Lang, G., Triantoro, T., & Sharp, J. H. (2024). Large language models as AI-powered educational assistants: Comparing GPT-4 and Gemini for writing teaching cases. Journal of Information Systems Education, 35(3), 390–407. https://doi.org/10.62273/YCIJ6454

Leckrone, B. (2023, December 5). AI skills in high demand from employers: Survey. BestColleges. https://www.bestcolleges.com/news/ai-skills-in-high-demand-from-employers/

McLean, S., & Attardi, S. M. (2023). Sage or guide? Student perceptions of the role of the instructor in a flipped classroom. Active Learning in Higher Education, 24(1), 49–61. https://doi.org/10.1177/1469787418793725

Microsoft, & LinkedIn. (2024). Work trend index annual report: AI at work is here—now comes the hard part. Microsoft. https://marketingassets.microsoft.com/gdc/gdcAev8aq/original

Partnership for 21st Century Learning. (2015, May). P21 framework definitions. Battelle for Kids. https://www.battelleforkids.org/wp-content/uploads/2023/11/P21_Framework_Definitions_New_Logo_2015_9pgs.pdf

Pesovski, I., Jolakoski, P., Trajkovik, V., Kubincova, Z., & Herzog, M. A. (2025). Predicting student achievement through peer-network analysis for timely personalization. Computers and Education: Artificial Intelligence, 8, Article 100430. https://doi.org/10.1016/j.caeai.2025.100430

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Rong, H., & Chun, C. (2024). Digital Education Council global AI student survey 2024. Digital Education Council. https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024

Sharp, J. H., Mitchell, A., & Lang, G. (2020). Agile teaching and learning in information systems education: An analysis and categorization of literature. Journal of Information Systems Education, 31(4), 269–281.

Storey, V. C., & Song, I.-Y. (2017). Big data technologies and management: What conceptual modeling can do. Data & Knowledge Engineering, 108, 50–67. https://doi.org/10.1016/j.datak.2017.01.001

Sun, R., & Deng, X. (2025). Using generative AI to enhance experiential learning: An exploratory study of ChatGPT use by university students. Journal of Information Systems Education, 36(1), 53–64. https://doi.org/10.62273/ZLUM4022

Urbaczewski, A., & Keeling, K. (2025). G(AI)etting with the program: Teaching IS and analytics in the age of GAI. Communications of the Association for Information Systems, 56(1), 1104–1117. https://doi.org/10.17705/1CAIS.05642

Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to generative AI. Economics and Business Review, 9(2), 71–100. https://doi.org/10.18559/ebr.2023.2.743

Zhang, X. (2025). Teaching tip: Incorporating AI tools into database classes. Journal of Information Systems Education, 36(1), 37–52. https://doi.org/10.62273/GKZI2477


  1. See, for instance, https://www.digitaleducationcouncil.com/post/ai-in-the-workplace-2025.
  2. This study received ethics approval from The University of Queensland Human Research Ethics Committee (BEL LNR Panel), project number 2024/HE001346: AI Inquiry-Led Teaching Innovation.
  3. See https://itali.uq.edu.au/files/30084/UQ%20Graduate%20statement%20and%20attributes%20information.pdf

About the author

Reihaneh Bidar is a Lecturer in Business Information Systems at the UQ Business School. Her research centres on how organisations manage the complexities of AI, automation, and digital integration. Reihaneh’s work delves into the impact of emerging technologies like AI on the redesign and organisation of work, while also addressing the challenges of managing their potential negative consequences. Reihaneh is an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society at the University of Queensland.
Reihaneh is also the Business Information Systems (BIS) Major Convenor at UQ and teaches Managing Business Data and Information Retrieval and Management in the undergraduate and postgraduate Information Systems programs. Prior to joining UQ, she developed, coordinated, and taught courses in Business Analytics, Enterprise Architecture, Design of Enterprise IoT Systems, Mobile and Pervasive Systems, and Mobile App Development for both undergraduate and master’s students at the Queensland University of Technology (QUT).