Large Language Models (LLMs) like ChatGPT, Gemini and Claude are unpredictable. Ask the same question twice and you’ll often get a different answer.
This unpredictability can be frustrating, but there are approaches you can take to drastically improve the quality of output you receive.
If you’re a programmer, prompt engineer, or vibe coder, these techniques can help you generate accurate, reliable code. They can also help with any other type of prompt.
Zero-shot Prompting
A single, well-phrased request to the AI, with no examples provided. Use it for quick answers when you trust the model’s general knowledge; just write a clear, specific question or instruction.
Write a function in JavaScript that validates if a string is a valid email address using regex.
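For reference, here’s roughly what a correct answer to this prompt looks like, translated to Python (a pragmatic pattern, not full RFC 5322 validation):

```python
import re

# A pragmatic pattern: no whitespace or extra "@", and a dot in the domain.
# Note: a fully RFC 5322-compliant validator is far more involved.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the string looks like a valid email address."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not an email"))      # False
```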
Few-shot Prompting
Provide a few examples of the pattern you want the AI to follow before asking your question. Ideal when you need consistent formatting or want to guide the AI’s approach by showing it what good answers look like.
Convert these JavaScript functions to Python:
JavaScript:
```javascript
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
```
Python:
```python
def calculate_total(items):
    return sum(item["price"] for item in items)
```
JavaScript:
```javascript
function filterActiveUsers(users) {
  return users.filter(user => user.isActive);
}
```
Python:
```python
def filter_active_users(users):
    return [user for user in users if user["is_active"]]
```
JavaScript:
```javascript
function sortByCreationDate(posts) {
  return posts.sort((a, b) => new Date(b.createdAt) - new Date(a.createdAt));
}
```
Python:
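Under the hood, a few-shot prompt is just careful string assembly. Here is a minimal Python sketch of that assembly; the example pair and query are placeholders for your own:

```python
# Build a few-shot prompt from (input, output) example pairs, leaving the
# final query's answer slot open for the model to complete.
examples = [
    ("function double(x) { return x * 2; }", "def double(x):\n    return x * 2"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate example pairs, then the unanswered query."""
    parts = ["Convert these JavaScript functions to Python:"]
    for js, py in examples:
        parts.append(f"JavaScript:\n{js}\nPython:\n{py}")
    parts.append(f"JavaScript:\n{query}\nPython:")  # left open for the model
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(examples, "function triple(x) { return x * 3; }")
print(prompt)
```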
Chain-of-Thought Prompting
Ask the AI to break down its reasoning step by step before providing an answer. Perfect for complex problems where you want to verify the model’s logic or learn how it reaches conclusions.
Explain how to optimize this SQL query step by step and provide the optimized version:
```sql
SELECT users.name, COUNT(orders.id) AS order_count
FROM users
LEFT JOIN orders ON users.id = orders.user_id
WHERE users.created_at > '2023-01-01'
GROUP BY users.id
HAVING COUNT(orders.id) > 5
ORDER BY order_count DESC;
```
Meta Prompting
Frame your prompt with instructions about how the AI should approach answering. Use when you want to guide the AI’s overall strategy or thought process for a specific task.
Approach this request as a senior DevOps engineer with expertise in AWS cloud infrastructure. Explain how to set up a robust CI/CD pipeline for a microservices architecture using GitHub Actions, with emphasis on security best practices and cost optimization.
Self-Consistency
Ask the model to solve a problem multiple ways and find the most consistent answer. Best for quantitative problems or situations where there are multiple paths to a solution and you want the most reliable result.
Solve this algorithm problem using three different approaches and identify which solution has the best time complexity:
Write a function to find the longest substring without repeating characters in a given string. For example, for input "abcabcbb", the answer is "abc" with length 3.
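Self-consistency can also be automated: sample the model several times and take a majority vote. A minimal sketch, with faked model outputs standing in for real API calls:

```python
from collections import Counter
from itertools import cycle

def self_consistent_answer(sample_fn, n=5):
    """Sample the model n times and return the most common answer."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for repeated LLM calls: five fake runs that mostly agree the
# longest substring without repeats in "abcabcbb" has length 3.
fake_runs = cycle([3, 3, 4, 3, 3])
answer = self_consistent_answer(lambda: next(fake_runs), n=5)
print(answer)  # 3
```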
Generate Knowledge Prompting
Request the AI to first produce relevant facts or information before answering a question. Useful for specialized topics where organizing relevant knowledge first leads to better answers.
First, list five key concepts about React hooks and their usage rules. Then, based on those concepts, refactor this class component to use functional components with hooks:
```jsx
class UserProfile extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      user: null,
      loading: true,
      error: null
    };
  }

  componentDidMount() {
    fetch(`/api/users/${this.props.userId}`)
      .then(res => res.json())
      .then(data => this.setState({ user: data, loading: false }))
      .catch(err => this.setState({ error: err, loading: false }));
  }

  render() {
    const { loading, user, error } = this.state;
    if (loading) return <div>Loading...</div>;
    if (error) return <div>Error: {error.message}</div>;
    return (
      <div>
        <h1>{user.name}</h1>
        <p>Email: {user.email}</p>
      </div>
    );
  }
}
```
Prompt Chaining
Break complex tasks into smaller, sequential prompts where the output of one feeds into the next. Effective for multi-step tasks where you want to guide the process more carefully or review intermediate steps.
First task: Design a database schema for a blog application with users, posts, comments, and categories. Include tables, fields, and relationships.
[After receiving schema]
Second task: Using the database schema you just created, write the necessary SQL commands to create all tables with appropriate constraints and relationships.
[After receiving SQL commands]
Third task: Now write a Node.js function using Sequelize ORM that retrieves all posts with their authors and comment counts, sorted by publication date.
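If you’re scripting this rather than chatting, the chain is just a loop where each template consumes the previous output. A sketch with a stubbed model call (`call_llm` is a placeholder for your real client):

```python
# Each step's template consumes the previous step's output via {prev}.
# call_llm is a stub standing in for a real model client.
def call_llm(prompt):
    return f"<model answer to: {prompt[:40]}...>"

steps = [
    "Design a database schema for a blog application.",
    "Using this schema, write SQL CREATE TABLE statements:\n{prev}",
    "Using these tables, write a Sequelize query for posts with authors:\n{prev}",
]

output = ""
for template in steps:
    output = call_llm(template.format(prev=output))
print(output)
```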
Tree of Thoughts
Ask the AI to explore multiple reasoning paths and evaluate them before selecting the best approach.
Ideal for complex decision-making problems or creative tasks with multiple possible directions.
You need to design an authentication system for a web application. Explore three different approaches (JWT-based, session-based, and OAuth integration). For each approach, consider:
1. Security implications
2. Scalability
3. Implementation complexity
4. User experience
After exploring all three approaches, recommend which one would be best for a medium-sized SaaS application with approximately 10,000 users, and explain your reasoning.
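Programmatically, Tree of Thoughts boils down to branch, score, select. A toy sketch of that skeleton (the scores below are invented placeholders, not real assessments of these auth approaches; in practice the model itself rates each path):

```python
# Branch: enumerate candidate reasoning paths.
candidates = ["JWT-based", "session-based", "OAuth integration"]

def score(approach):
    """Placeholder evaluator; in practice, ask the model to rate each path."""
    return {"JWT-based": 8, "session-based": 6, "OAuth integration": 7}[approach]

# Score and select the best-rated path.
ranked = sorted(candidates, key=score, reverse=True)
best = ranked[0]
print(best)
```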
Retrieval Augmented Generation
Provide specific external information for the AI to reference when generating its response. Perfect when you need answers based on specific documents, data, or knowledge that may not be in the model’s training data.
Using the following API documentation:
```
POST /api/v1/transactions
Creates a new transaction
Request Body:
{
  "amount": number (required) - Transaction amount in cents
  "currency": string (required) - 3-letter currency code (e.g., USD)
  "description": string (optional) - Transaction description
  "metadata": object (optional) - Additional transaction metadata
  "customer_id": string (required) - Unique customer identifier
}
Response:
{
  "id": string - Unique transaction ID
  "status": string - Transaction status (pending, completed, failed)
  "created_at": string - ISO timestamp
  "updated_at": string - ISO timestamp
  ... (all request fields are also returned)
}
Error Codes:
400 - Invalid request parameters
401 - Authentication error
402 - Payment required
404 - Customer not found
500 - Internal server error
```
Write a TypeScript function that uses Axios to call this API with proper error handling and typing.
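The retrieval half of RAG can be as simple as picking the snippet with the most word overlap with the question, then stuffing it into the prompt. A naive sketch (the two-document corpus is purely illustrative; production systems use embeddings and a vector store):

```python
# Naive retrieval: choose the doc with the most word overlap, then build
# the prompt around it. The corpus below is purely illustrative.
docs = {
    "transactions": "POST /api/v1/transactions creates a new transaction",
    "customers": "GET /api/v1/customers returns a customer record",
}

def retrieve(question):
    """Return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs.values(), key=lambda d: len(q_words & set(d.lower().split())))

def build_rag_prompt(question):
    return f"Using the following API documentation:\n{retrieve(question)}\n\n{question}"

print(build_rag_prompt("How do I create a new transaction?"))
```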
Automatic Reasoning and Tool-use
Instruct the AI to use specific techniques or “tools” to solve a problem. Best when you want the model to approach a problem with a particular methodology or framework.
Use Big O notation analysis to evaluate the time and space complexity of the following three algorithms for finding duplicate elements in an array. Then recommend which algorithm would be best for large datasets with millions of elements.
Algorithm 1:
```javascript
function findDuplicates1(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j] && !duplicates.includes(arr[i])) {
        duplicates.push(arr[i]);
      }
    }
  }
  return duplicates;
}
```
Algorithm 2:
```javascript
function findDuplicates2(arr) {
  const seen = {};
  const duplicates = [];
  for (const item of arr) {
    if (seen[item]) {
      if (seen[item] === 1) {
        duplicates.push(item);
      }
      seen[item]++;
    } else {
      seen[item] = 1;
    }
  }
  return duplicates;
}
```
Algorithm 3:
```javascript
function findDuplicates3(arr) {
return [...new Set(arr.filter(item => arr.indexOf(item) !== arr.lastIndexOf(item)))];
}
```
Automatic Prompt Engineer
Ask the AI to optimize its own prompts for a specific task. Useful when refining complex queries or when you want to improve results iteratively.
My goal is to get high-quality, optimized TypeScript code for a React component that handles form validation. Create three different prompts that would help achieve this goal, then select the best one and explain why it's likely to produce the most informative and accurate response.
Active-Prompt
Engage the AI in a process where it actively improves its approach based on feedback. Ideal for situations where you want to refine results through iteration and feedback.
I need to write a GitHub Actions workflow for a Python project. The workflow should handle testing, linting, and deployment to AWS Lambda. Provide a first draft, then explain three specific ways I could improve it to make it more efficient and reliable. After that, incorporate those improvements into a revised version.
Directional Stimulus Prompting
Guide the AI toward or away from specific themes or approaches in its response. Use when you want to emphasize certain aspects while avoiding others in complex topics.
Explain how to implement proper error handling in a REST API. Focus primarily on HTTP status codes and structured error responses, while avoiding discussion of specific programming languages. Prioritize real-world examples over theoretical concepts and emphasize security considerations.
Program-Aided Language Models
Ask the AI to approach a task as if it were writing or following a program with defined steps. Excellent for procedural tasks or when you want highly structured, systematic responses.
Function: CreateRESTfulAPI
Inputs:
- Resource type: User management
- Authentication: JWT
- Database: PostgreSQL
- Language/Framework: Node.js/Express
Execute this function to design a RESTful API for user management. For each endpoint, provide:
1. HTTP method and route
2. Required request headers
3. Request body schema (if applicable)
4. Success response schema with status code
5. Possible error responses with status codes
6. A code snippet showing the Express route handler implementation
ReAct
Request the AI to alternate between reasoning about a problem and describing concrete actions to take. Valuable for problem-solving scenarios where both analysis and specific action steps are needed.
You're debugging a performance issue in a web application where page load times have increased significantly. Using the ReAct approach (Reason, then Act), work through the debugging process:
1. Reason about possible causes of slow page loads
2. Describe specific actions to diagnose frontend performance issues
3. Reason about potential backend bottlenecks
4. Describe specific actions to identify database query problems
5. Reason about infrastructure and scaling factors
6. Describe specific actions to optimize and verify improvements
For each step, clearly separate your reasoning from your action steps and include relevant code or commands where appropriate.
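A ReAct loop interleaves free-text reasoning with concrete tool calls. A toy Python sketch with a calculator as the only “tool” (the thoughts and numbers are illustrative placeholders, not real diagnostics):

```python
# Minimal ReAct skeleton: each step records a Thought, optionally runs a
# tool, and records the Action and its result.
tools = {"calculate": lambda expr: eval(expr, {"__builtins__": {}})}

trace = []

def react_step(thought, action=None, arg=None):
    trace.append(f"Thought: {thought}")
    if action:
        result = tools[action](arg)
        trace.append(f"Action: {action}({arg!r}) -> {result}")
        return result

react_step("Page weight may be the issue; estimate total asset size in KB.")
total = react_step("Sum the three largest bundles.", "calculate", "850 + 420 + 310")
print(total)  # 1580
```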
Reflexion
Ask the AI to generate a response, then reflect on its own answer to identify improvements or issues. Best for situations where critical self-assessment of responses would lead to better quality.
Write a function in Python that implements a binary search tree with insert, delete, and search operations. After providing your implementation, critique your own code by identifying any edge cases not handled, performance issues, or areas that could be improved. Then provide an improved version based on your critique.
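Scripted, Reflexion is a draft-critique-revise loop. A skeleton with stubbed model calls (each function here stands in for a separate LLM request):

```python
# Draft -> critique -> revise; each stub represents one model call.
def draft():
    return "def search(node, key): ..."

def critique(code):
    """Stub critique pass; a real one would ask the model to review the code."""
    return ["no handling for empty tree", "missing recursion on right subtree"]

def revise(code, issues):
    return code + "\n# revised to address: " + "; ".join(issues)

attempt = draft()
issues = critique(attempt)
final = revise(attempt, issues)
print(final)
```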
Multimodal CoT
Guide the AI to use multiple formats (text, diagrams, equations) in its reasoning process. Ideal for explaining complex concepts where visual or mathematical representations enhance understanding.
Explain how the Redux state management pattern works in React applications, using a combination of:
1. A textual explanation of key components (store, actions, reducers)
2. A visual representation (described as if drawing a component diagram)
3. A code example showing the flow from action dispatch to state update
4. A real-world analogy that ties these elements together
Walk through this step by step, making sure each format builds upon the others to create a comprehensive understanding.
Graph Prompting
Ask the AI to explore relationships between concepts by building a conceptual graph structure. Excellent for mapping complex systems or exploring interconnections between ideas.
Create a knowledge graph exploring the components and relationships in a modern web application architecture. Identify at least 8 major nodes (e.g., frontend framework, state management, backend API, database, authentication, etc.), then connect them with labeled edges that explain their relationships and data flows. After creating the graph structure, analyze which nodes represent the most critical points for ensuring application security and explain why.
Summary
These advanced prompting techniques can significantly improve your interactions with AI by increasing the reliability, specificity, and quality of the output.
Whether you’re looking to improve code generation quality or take on complex system design, structured approaches like these are the best way to push LLMs to their full potential.
Keep in mind, though, that this list of approaches is going to keep growing rapidly. I highly suggest following the research papers published on arxiv.org if you want to keep up with the latest and greatest. Like most published papers they can be confusing, but you can always drop one into a prompt and ask for a “summary written in 4th grade language” (or layman’s terms, if 4th grade is too embarrassing).
References
- Language Models are Few-Shot Learners (2020)
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022)
- Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022)
- Generated Knowledge Prompting for Commonsense Reasoning (2021)
- AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts (2022)
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023)
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (2020)
- Large Language Models are Zero-Shot Reasoners (2022)
- Large Language Models Are Human-Level Prompt Engineers (2022)
- Active Prompting with Chain-of-Thought for Large Language Models (2023)
- Directional Stimulus Prompting for Text Generation (2023)
- PAL: Program-Aided Language Models (2022)
- ReAct: Synergizing Reasoning and Acting in Language Models (2022)
- Reflexion: Language Agents with Verbal Reinforcement Learning (2023)
- Multimodal Chain-of-Thought Reasoning in Language Models (2023)
- Graph Prompting: Attentive Graph Prompting for Zero-Shot Classification (2023)
- Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding (2023)
- Meta-learning for Few-shot Natural Language Processing: A Survey (2020)