
4. How to prompt an AI tool

What is prompting?

Prompts are the input instructions for AI tools to perform tasks. They can include text, data (such as tables) or images.

The AI tool converts the input prompt into tokens: numerical representations of the prompt's elements in a form the model can process. The model processes these tokens and converts its output back into words or other formats. Prompt length is limited by the maximum number of tokens the tool accepts as input.
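As a rough illustration of tokenisation, the toy sketch below maps words to numeric IDs using a made-up vocabulary. The vocabulary, the whitespace splitting, and the function names are all illustrative; real tokenisers use far more sophisticated subword schemes such as byte-pair encoding.

```python
# Toy illustration only: a hypothetical vocabulary maps text fragments
# to integer IDs, which is roughly how an LLM "sees" a prompt.
TOY_VOCAB = {"summarise": 1, "this": 2, "article": 3, "in": 4,
             "three": 5, "sentences": 6, ".": 7}

def toy_tokenise(prompt: str) -> list[int]:
    """Split a prompt on whitespace and look each word up in the vocabulary."""
    words = prompt.lower().replace(".", " .").split()
    return [TOY_VOCAB.get(w, 0) for w in words]  # 0 = unknown token

def toy_detokenise(tokens: list[int]) -> str:
    """Convert token IDs back into words, the reverse of tokenisation."""
    reverse = {v: k for k, v in TOY_VOCAB.items()}
    return " ".join(reverse.get(t, "<unk>") for t in tokens)

tokens = toy_tokenise("Summarise this article in three sentences.")
print(tokens)       # → [1, 2, 3, 4, 5, 6, 7]
print(len(tokens))  # prompt length is measured in tokens, not characters
```

Note that the prompt's length in tokens (7 here) differs from its length in characters or words; token limits on real tools work the same way.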

Components of a good prompt

The more specific your prompt details are, the more relevant your output will be. Provide plenty of details, and define the task and the output you want.

 

Prompting tips

  • Use instructional verbs: summarise, classify, write, and compose.
  • Expand on prompts iteratively — your first go doesn’t have to be perfect!
  • Write short but detailed prompts. Break down your tasks into a subtask list if needed.
  • Tell the AI what to do rather than what not to do. (Negative prompts can, however, be useful in image generators.)
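The components of a good prompt can also be assembled programmatically. The sketch below is a minimal Python helper (the function name and component fields are our own, not from any library) that combines a role, an instructional task, optional context, and a desired output format into a single prompt:

```python
def build_prompt(role: str, task: str, context: str = "",
                 output_format: str = "") -> str:
    """Assemble a prompt from the components of a good prompt:
    a role, an instructional verb-led task, context, and output format."""
    parts = [f"You are {role}."] if role else []
    parts.append(task)
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format the output as: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a university librarian",
    task="Summarise the key housing issues facing Australian renters.",
    output_format="a bulleted list of five points",
)
print(prompt)
```

Breaking a prompt into named components like this makes it easy to expand iteratively: refine one component at a time rather than rewriting the whole prompt.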

Read Please Be Polite to ChatGPT on why being polite to your chatbot makes for better responses.

 

Types of easy prompts

Simple Prompt (Zero-shot)

  • Description: a direct request made to the model without any examples or prior context. Relies on the LLM’s pretraining to generate a response.
  • Example: Prompt: “List 10 ideas for a report on housing issues in Australia.” Output: “1. Affordable housing crisis, 2. Urban density...”
  • Key features: fast; good for factual or creative generation; based on the LLM’s general knowledge.

Few-Shot Prompt

  • Description: the prompt includes several examples of a task before asking the model to respond to a similar new input. This structure demonstrates the expected pattern.
  • Example: Prompt: “Here are movie reviews with sentiments...” Task: “Now classify: ‘Amazing effects but boring plot.’” Output: “Sentiment: Mixed”
  • Key features: supports consistency; useful for tasks like classification; pattern recognition.

Chain-of-Thought Prompt

  • Description: encourages the model to explain its reasoning in steps before giving a final answer. Ideal for problem solving or nuanced questions.
  • Example: Prompt: “Would a pear sink in water? Explain step-by-step.” Output: “...Density, air content, comparison to water... → No, it floats.”
  • Key features: increases transparency; supports logic-based learning; encourages critical thinking.

Comparison table: a structured side-by-side comparison of three foundational prompting techniques used with large language models (LLMs). The goal is to help you distinguish between the techniques based on purpose, structure, and cognitive complexity.

Simple prompts

A simple prompt, or zero-shot prompt, relies on the LLM’s training data to answer a question without any examples.

  • Provide a list of ten ideas for a report on housing issues in Australia.
  • Write a poem about The University of Queensland.

Few-shot prompting

Few-shot prompting provides examples of how a task is to be solved, giving the model guidance on the required output. A simple example is sentiment analysis.

Input:

Here are a few examples of movie reviews and what sentiment they have:

  • Review: “This movie was absolutely fantastic! I loved every minute of it.”
  • Sentiment: Positive
  • Review: “The plot was confusing and the acting was terrible. What a waste of time.”
  • Sentiment: Negative
  • Review: “It was okay. Not great, not terrible, just average.”
  • Sentiment: Neutral

Now, classify the sentiment of this review: “The special effects were amazing, but the story was predictable and the characters were flat.”

Output:

  • Review: “The special effects were amazing, but the story was predictable and the characters were flat.”
  • Sentiment: Mixed.
  • Reasoning: The review contains both positive and negative aspects. While it praises the special effects, it criticizes the story and characters. This balance of contrasting opinions classifies it as a mixed sentiment.

[Prompted 29 November 2024, OpenAI (2024), ChatGPT 4.0 response on topic]
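A few-shot prompt like the one above can be generated from a list of labelled examples. The following sketch (the function name and structure are illustrative, not from any library) rebuilds the sentiment prompt from (review, sentiment) pairs:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_review: str) -> str:
    """Build a few-shot sentiment prompt: several labelled examples
    followed by the new input the model should classify."""
    lines = ["Here are a few examples of movie reviews and what sentiment they have:", ""]
    for review, sentiment in examples:
        lines.append(f'Review: "{review}"')
        lines.append(f"Sentiment: {sentiment}")
    lines.append("")
    lines.append(f'Now, classify the sentiment of this review: "{new_review}"')
    return "\n".join(lines)

examples = [
    ("This movie was absolutely fantastic! I loved every minute of it.", "Positive"),
    ("The plot was confusing and the acting was terrible. What a waste of time.", "Negative"),
    ("It was okay. Not great, not terrible, just average.", "Neutral"),
]
result = few_shot_prompt(
    examples,
    "The special effects were amazing, but the story was predictable and the characters were flat.",
)
print(result)
```

Keeping the examples as data rather than hard-coded text makes it easy to swap them out or add more when the model's answers are inconsistent.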

Chain-of-thought prompting

Chain-of-thought prompting is a technique in which the model is encouraged to lay out its reasoning in a series of logical steps before giving a final answer. It can help the model solve problems it would otherwise answer incorrectly, increases the transparency of the answer, and encourages users to apply critical thinking when evaluating responses.

Input:

Yes or no: would a pear sink in water? Provide reasoning for your answer in logical steps before giving a final answer.

Output: 

  1. Density determines floating or sinking:
    An object floats in water if its density is less than that of water (approximately 1 g/cm³). If the object’s density is greater, it sinks.
  2. Examining a pear’s composition:
    A pear is composed of water, sugars, fibers, and air pockets within its cellular structure. The presence of air pockets lowers its overall density compared to water.
  3. Considering variations in pears:
    While most pears have a density less than water, factors like ripeness, water content, or damage can affect buoyancy. Overripe or denser pears might sink.

Answer: No, a pear would not sink in water under normal conditions. It generally floats because its density is slightly less than water.

[Prompted 29 November 2024, OpenAI (2024), ChatGPT 4.0 response on topic]
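One benefit of chain-of-thought output is that each step can be checked independently. The sketch below encodes step 1 of the pear example (the density rule) in Python; the pear density used is an assumed illustrative figure reflecting its water, sugar, and air-pocket composition, not a measured value:

```python
WATER_DENSITY = 1.0  # g/cm^3, approximate density of water (step 1)

def floats_in_water(density: float) -> bool:
    """An object floats when its density is less than that of water;
    if its density is greater, it sinks."""
    return density < WATER_DENSITY

# Illustrative (not measured) density for a typical pear (step 2):
pear_density = 0.96  # g/cm^3, assumed for illustration

print(floats_in_water(pear_density))  # → True: the pear floats
print(floats_in_water(1.1))           # → False: a denser object would sink
```

Verifying the model's intermediate claims like this, rather than only its final answer, is exactly the kind of critical evaluation chain-of-thought prompting supports.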

Choosing the right prompt

This visual guides you through a step-by-step decision pathway to match your task type with an appropriate prompting method.
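One simplified way to express such a decision pathway in code is sketched below; the criteria and function name are our own simplification of the three techniques covered above, not a definitive rule:

```python
def choose_prompt_type(needs_examples: bool, needs_reasoning: bool) -> str:
    """A simplified decision pathway for picking a prompting technique:
    reasoning-heavy tasks get chain-of-thought, pattern-matching tasks
    get few-shot examples, and everything else starts zero-shot."""
    if needs_reasoning:
        return "chain-of-thought"
    if needs_examples:
        return "few-shot"
    return "zero-shot"

print(choose_prompt_type(needs_examples=False, needs_reasoning=False))  # → zero-shot
print(choose_prompt_type(needs_examples=True, needs_reasoning=False))   # → few-shot
print(choose_prompt_type(needs_examples=False, needs_reasoning=True))   # → chain-of-thought
```

In practice you would start with the simplest technique and escalate only if the output falls short.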

Watch Four Methods of Prompt Engineering (YouTube, 12m 41s) by IBM Technology.

Additional resources on prompt engineering

Licence


Artificial Intelligence: Revised version Copyright © 2023 by The University of Queensland is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.