"

4. AI-generated media

AI-generated media is any image, video, audio, or text that is created or heavily edited by an AI tool. Some examples:

  1. You type a prompt like “Create an infographic of a neuron firing, teal and purple, labelled parts,” and the tool generates an image.

  2. You upload a selfie and the tool gives you a professional headshot.

  3. You paste a script and the tool creates a voiceover and talking-head style video, without you recording audio.

  4. You tell a video editor to “remove the background and make this look like a studio.”

These tools don’t just “filter” an image. They can invent new people, new scenes, and new speech. You will also hear terms like:

  • deepfake – AI-generated video or audio that makes it look like a real person said or did something they didn’t

  • voice clone – AI-generated speech that imitates a specific person’s voice

  • AI upscale / enhancement – AI that cleans up, colour-corrects, or sharpens an existing image or clip.

Why responsible use matters

AI tools make it fast and easy to create professional-looking media for assignments, presentations, social media, portfolios, and group work. But using AI to create media does not mean no rules apply. You are still responsible for:

  • legal issues (copyright, defamation)

  • ethical issues (consent, cultural safety)

  • academic integrity (declaring AI use)

  • accessibility (alt text, captions, transcripts).

This chapter will look at these responsibilities.

Can I use AI-generated media in my assignment?

Often you can use AI-generated media in assignments, but only if the assessment specifically allows AI use.

Check your course profile for each assessment item to find out if AI use is allowed. Find out more about UQ rules for using AI in assessment.

If AI use is allowed, you must be transparent about where and how you used it. Always acknowledge and reference any use of AI. If you are not sure whether you’re allowed to use AI-generated media, check with your course coordinator.

Who owns AI-generated media?

This can be complicated, and there isn’t yet legal consensus. Here are some takeaways to be aware of:

“AI made this, so it’s free to use” is not true

Even if a tool generated the image or video for you, that doesn’t mean:

  • you automatically own it,

  • you’re allowed to sell it,

  • or you can publish it anywhere with no conditions.

Different AI tools give different usage rights. Some tools let you reuse the output for study and personal projects. Some allow commercial use. Some don’t.

Always check the tool’s usage terms. If you plan to publish AI-generated media publicly (e.g. on YouTube, LinkedIn, a portfolio, social media), you are responsible for making sure you’re allowed to.

Here are some of the terms and usage rights for a few popular AI tools:

Remember: different tools, even ones created by the same company, may have different rights. The enterprise version of Copilot Chat, for example, has different protections than the free Bing Image Creator.

Training data is still being argued about

Many AI tools are trained on huge collections of art, photographs, video, and voice recordings made by other people. Whether this is legal is currently being challenged around the world: cases and regulations such as Getty v Stability AI, NYT v OpenAI, and the EU AI Act will shape how AI is governed in the future.

This means:

  • You should not claim “I definitely own exclusive copyright on this AI image.”

  • You should not say “no copyright applies to this, because AI made it.”

  • You should not remove watermarks, licence notices, or usage notes the tool gives you.

You still have moral obligations

Even if you can use an AI image, you are still responsible for how it represents people, cultures, and ideas. Visit our Artificial Intelligence module to find out more about the Legal, Ethical and Social Issues with AI.

Consent, likeness, and cultural safety

AI doesn’t just generate generic clip art. It can copy the style, face, and voice of real people, sometimes without their agreement. This can cause harm, even if you didn’t mean to. Even companies like Meta are being pressured to deal with AI-generated images of people made without their consent.

Using someone’s face or voice

You should not:

  • make an AI video or photo that shows a real person (e.g. a classmate, lecturer, celebrity, or community member) doing or saying something they didn’t do

  • create or share an AI voiceover using someone else’s voice without their permission

  • edit someone’s image in a way that could embarrass them, damage their reputation, or misrepresent them.

Why?

  1. It can be harassment, bullying, or image-based abuse.

  2. It can be defamation (harming someone’s reputation by spreading false material).

  3. It can break privacy, workplace, or university policies.

  4. “I didn’t think they’d mind” is not a defence.

Rule of thumb: If the media could make a real person look bad, unprofessional, offensive, or vulnerable, don’t generate it, don’t share it, and definitely don’t upload it publicly without their clear consent.

Indigenous perspectives and cultural safety

AI tools have learned from culturally significant material without the consultation or consent of Indigenous peoples. Some are now challenging companies like Meta over the use of their work for training. This lack of guardrails has made it easy for people to misuse AI to create, and profit from, culturally sensitive material.

Some content is culturally sensitive even if it looks public. For example:

  1. AI-generated “Indigenous-style” art without permission.

  2. Recreating images of Aboriginal and Torres Strait Islander peoples, Elders, ancestors, or cultural material in ways that are disrespectful or taken out of context.

  3. Using Indigenous voices or visual identity to “add authenticity” to a project without actually partnering with Indigenous people.

Why this matters:

  • cultural knowledge and representations are not just aesthetic “themes”

  • using someone else’s culture for style points can be extractive and disrespectful.

Ask yourself: Who is represented here? Did they (or their community) say it’s okay to be represented that way? Am I treating their identity / culture / voice like decoration?

If you’re not sure: don’t include it. Ask for guidance instead.


Accessibility and inclusive practice

When you add media to an assignment, you are responsible for making it accessible. This includes AI-generated media. Visit our Accessibility module for more information on ensuring your work is accessible.

As a general rule, AI can help suggest alt text for you, but:

  • AI guesses. It can describe something incorrectly.

  • You are responsible for fixing it so it’s accurate and useful.

Accessibility checklist

  1. Does every meaningful image have useful alt text?

  2. Do my videos have captions or a transcript that I’ve checked for accuracy?

  3. If I generated the voice/video with AI, did I explain that somewhere (caption, slide note, appendix)?

  4. Have I checked that nothing I included could harm someone’s reputation or misuse their culture, voice, or image?

TL;DR Summary

Do:

  1. Use UQ-approved or trusted tools where possible.

  2. Check the tool’s usage terms before you publish AI-generated media outside your course.

  3. Review the accuracy of any AI outputs.

  4. Respect cultural knowledge, community ownership, and individual consent.

  5. Acknowledge AI use in your assignment.

Don’t:

  1. Put words in a real person’s mouth (voice clone, fake video) without permission.

  2. Generate or post content that could embarrass, harm, or defame someone.

  3. Assume “AI made it” means it’s copyright free.

  4. Hide that you used AI.


Licence


Find and Use Media Copyright © 2023 by The University of Queensland is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.