Key Insights from “The Prompt Report” Research Paper

Understanding how to leverage various prompting techniques can be a real unlock in your journey with AI.

It’s not hyperbole to say that simply knowing a few of the more comprehensive prompting methods can transform how you work—across all sorts of tasks and use cases.

In an effort to dive deeper into prompting, I want to highlight a recently published research paper titled “The Prompt Report: A Systematic Survey of Prompting Techniques.”

The 75-page research paper, which offers a structured understanding of prompts by assembling a taxonomy of prompting techniques and analyzing their use, was last revised in July of 2024 (as of this writing).

It’s a substantial read, but it’s worth spending some time on because it’s likely the most comprehensive rollup of prompting methods we’ve seen to date.

 

What is prompting?

For the uninitiated, it’s worth taking a moment to define the term “prompt.”

In this case, a prompt is an input given to a generative AI model (like ChatGPT, Claude, or Gemini) designed to elicit a specific output. Essentially, it’s the instruction you’re giving to the machine.

This input (your prompt) can include text, images, sounds, or other media (meaning it can be “multimodal”), and can be as detailed or minimal as needed to achieve your desired response.

The process of prompting is critical because the way you structure your prompts can significantly impact the quality and relevance of the AI’s outputs.

Put another way, the use of generative AI can altogether succeed or fail based on the prompt you enter.

It’s worth understanding how to craft prompts more effectively.

 

“Shot” means “example”

In the context of prompting, particularly in AI and machine learning, “shot” refers to the number of examples provided to the model within the prompt to guide its response.

In the examples below, you can replace the word “shot” with the word “example,” if you find it easier to understand.

 

Exploring 7 key prompting techniques

Here are seven common prompting techniques that were evaluated in the paper. For most tasks you’d bring to an AI model, one of these techniques will fit.

To bring more clarity to each technique, I’ve included a definition, an example prompt, and a sample AI response.



Zero-Shot Prompting:
No examples are given. In zero-shot prompting, the model is asked to perform a task without any prior demonstration within the prompt. It’s particularly useful when examples are unavailable or when the model needs to apply knowledge to new scenarios.

Prompt: “Write a marketing email for our new hoist that highlights its durability and efficiency.”

AI Output: “Dear valued customer, we are excited to introduce our latest hoist, engineered for unparalleled durability and efficiency…”


 

Few-Shot Prompting:
Multiple examples are provided within the prompt to help the model understand the task better. It’s ideal for tasks where initial examples can significantly improve performance.

Prompt: “Here are some examples of safety protocols. Example 1: ‘Wear a hard hat at all times.’ Example 2: ‘Ensure all scaffolding is properly secured.’ Now, generate a new safety protocol.”

AI Output: “Example 3: ‘Always wear high-visibility vests on site.’”
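The pattern above — numbered examples followed by the task — can be assembled programmatically. Here’s a minimal sketch; the `few_shot_prompt` helper is hypothetical, not from the paper:

```python
def few_shot_prompt(task: str, examples: list[str]) -> str:
    """Build a few-shot prompt: numbered examples followed by the task."""
    lines = [f"Example {i}: '{ex}'" for i, ex in enumerate(examples, start=1)]
    lines.append(f"Now, {task}")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "generate a new safety protocol.",
    ["Wear a hard hat at all times.",
     "Ensure all scaffolding is properly secured."],
)
print(prompt)
```

Keeping the examples in a list like this makes it easy to swap in different demonstrations per task.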



Chain-of-Thought (CoT) Prompting:
CoT encourages the AI to articulate its reasoning process before arriving at a final answer. This is beneficial for complex tasks requiring detailed, step-by-step reasoning.

Prompt: “Calculate the total cost of a construction project if labor costs are $500,000, material costs are $300,000, and equipment costs are $200,000. Think step-by-step.”

AI Output: “First, add the labor costs ($500,000) to the material costs ($300,000), resulting in $800,000. Next, add the equipment costs ($200,000) to get a total of $1,000,000.”



Ensembling:
Using multiple prompts to solve the same problem and aggregating responses to improve accuracy.

Scenario: A factory manager is tasked with optimizing a project timeline for a building project that must start in January. The goal is to ensure the timeline is both realistic and efficient, accounting for potential delays and resource allocation challenges.

Prompt 1: “Generate a project timeline for a building project starting in January that minimizes delays.”

Prompt 2: “Create a project timeline for the same building project that optimizes resource allocation.”

Prompt 3: “Produce a project timeline that includes buffer periods to manage unforeseen delays.”

Ensembling Process: The AI generates separate timelines for each prompt:

AI Output 1: A timeline focused on minimizing delays, prioritizing the fastest completion times.

AI Output 2: A timeline optimized for resource allocation, ensuring that labor and materials are used efficiently.

AI Output 3: A timeline that includes buffer periods, allowing for flexibility in case of unforeseen issues.

Aggregation: Using ensembling, you now aggregate these responses into a final, comprehensive timeline.
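For answers that can be compared directly (like the cost calculation earlier), one simple aggregation strategy is majority voting over the responses. Here’s a minimal sketch; `ask_model` is a hypothetical stand-in for any real LLM API call, with canned answers so the example runs without network access:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; canned answers
    # keep the sketch runnable and deterministic.
    canned = {
        "Prompt A": "Total: $1,000,000",
        "Prompt B": "Total: $1,000,000",
        "Prompt C": "Total: $950,000",
    }
    return canned.get(prompt, "")

def ensemble(prompts: list[str]) -> str:
    """Ask several prompt variants and keep the most common answer."""
    answers = [ask_model(p) for p in prompts]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(ensemble(["Prompt A", "Prompt B", "Prompt C"]))  # the majority answer wins
```

For open-ended outputs like project timelines, you’d instead pass all candidate responses back to the model and ask it to merge them into one plan.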



Self-Criticism:
Here, the AI critiques its own outputs, providing feedback to refine and improve responses.

Prompt: “Generate a response to a customer complaint about delayed delivery, then critique your response.”

AI Output: “Response: ‘We apologize for the delay and are working to expedite your order.’ Critique: ‘The response is polite but lacks a specific resolution timeframe. Adding a concrete follow-up action would improve it.’”

You would then ask the LLM to generate a new response that includes its suggestion.
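The draft-critique-revise loop can be chained in three model calls. A minimal sketch, again using a hypothetical `ask_model` function with canned replies so it runs as-is:

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; canned replies keep this runnable.
    if prompt.startswith("Respond to"):
        return "We apologize for the delay and are working to expedite your order."
    if prompt.startswith("Critique"):
        return "Polite, but lacks a specific resolution timeframe."
    return ("We apologize for the delay; your order will ship "
            "within 2 business days.")

def generate_with_self_criticism(task: str) -> str:
    """Draft a response, critique it, then revise using the critique."""
    draft = ask_model(f"Respond to: {task}")
    critique = ask_model(f"Critique this response: {draft}")
    return ask_model(f"Rewrite the response, applying this critique: {critique}")

print(generate_with_self_criticism("a customer complaint about delayed delivery"))
```

In practice each call would go to the same model, and you could loop until the critique raises no further issues.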



Decomposition:
This method breaks down complex problems into simpler sub-questions, making them easier for the AI to tackle.

Prompt: “Outline the steps for implementing a new safety protocol in a construction company.”

AI Output: “Step 1: Assess current safety measures. Step 2: Identify areas needing improvement. Step 3: Develop new protocols. Step 4: Train staff on new protocols. Step 5: Monitor compliance and effectiveness.”

Prompt: “Now provide detailed actions for each of the outlined steps.”
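The two-prompt pattern above — outline first, then expand each step — can be sketched as a small loop. `ask_model` is a hypothetical stand-in for a real LLM call, stubbed with canned text so the sketch runs:

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; canned replies keep this runnable.
    if prompt.startswith("Outline"):
        return "Assess current safety measures\nIdentify areas needing improvement"
    step = prompt.removeprefix("Detail the step: ")
    return f"Detailed actions for '{step}' ..."

def decompose(problem: str) -> dict[str, str]:
    """Ask for an outline, then expand each step with a follow-up prompt."""
    steps = ask_model(f"Outline the steps for: {problem}").split("\n")
    return {step: ask_model(f"Detail the step: {step}") for step in steps}

details = decompose("implementing a new safety protocol")
for step, actions in details.items():
    print(step, "->", actions)
```

Each sub-question gets the model’s full attention, which is the point of decomposition.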



Multilingual Prompting:
Multilingual prompting produces AI outputs in various languages (you can also prompt in other languages), making it a versatile tool for organizations with global operations.

Prompt: “Describe the features of our new bulldozer in Spanish.”

AI Output: “Nuestro nuevo bulldozer cuenta con características como una mayor durabilidad y eficiencia energética…”



Answer Engineering

While prompt engineering focuses on crafting and refining prompts to guide AI responses effectively, answer engineering is about extracting precise and actionable answers from those responses.

This involves developing methods to ensure that the AI’s outputs are clear, relevant, and directly address the query.

For example, in customer service, answer engineering can help design algorithms that produce concise, accurate responses to customer inquiries, thereby enhancing overall satisfaction and efficiency.

Consider an example from an industrial company where a maintenance technician asks an AI for instructions on troubleshooting a malfunctioning conveyor belt.

Prompt engineering ensures the AI understands the context and specifics of the query.

Answer engineering, however, is typically performed by data scientists and AI specialists during the iterative process of fine-tuning the model. It ensures the response is practical and actionable: “First, turn off the power supply to the conveyor belt to ensure safety. Check for any visible blockages or obstructions and remove them. Inspect the belt for any signs of wear or damage and replace it if necessary. Verify that the belt is properly aligned and adjust the tension if it’s too loose or tight. Finally, turn the power back on and test the conveyor belt to ensure it’s operating correctly.”

This clear and step-by-step guidance helps the technician resolve the issue efficiently, demonstrating the value of well-engineered AI responses in an industrial setting.

 

Practice makes perfect

As with everything, practice makes perfect. Prompting is more art than science at this point, but what this paper shows is that there are certain techniques that work better depending on what you’re trying to achieve.

Create a prompt library, practice prompting across various models, and don’t be afraid to ask the models themselves to refine your prompts. You’ll often be amazed at what they come up with.

If you’re ready to take your prompting to the next level, consider Thrive’s AI Adoption Accelerator program. We’d love to work with you.

 
