How to Write Better AI Prompts to Get Useful Answers Instead of Generic Ones

You type something into ChatGPT. You get back a response. It’s technically correct. Somewhat relevant. And completely useless for what you actually needed.

That’s not the AI being difficult. That’s a prompting problem.

Most people treat AI like a search engine — short query, hope for the best. “Write me a cover letter.” “Summarise this topic.” “Give me some ideas.” And then they’re genuinely surprised when the output is bland, generic, and reads like it was written for absolutely nobody in particular. Which, to be fair, it was.

The uncomfortable truth is that what you get out of an AI is almost entirely determined by what you put in. Vague in, generic out. But flip that around — a well-built prompt can turn a frustrating AI session into something that saves you an actual hour of work. Here’s exactly how to do it.


Why Generic Prompts Get Generic Answers

Think of it like ordering food at a restaurant. “Give me something good” is technically a prompt. The waiter will bring you something. It won’t be what you wanted. “I’ll have the paneer tikka, medium spice, no onions, extra naan” — that’s a prompt that gets you what you actually came for.

AI works the same way. It isn’t lazy. It isn’t holding back the good stuff. It’s doing exactly what you asked — and if what you asked was vague, it fills in the blanks with the most average, middle-of-the-road answer it can construct. One that offends nobody, helps nobody specifically, and lands in the vast grey zone of “technically fine.”

Writing a good AI prompt is like giving instructions to an incredibly smart person who knows an enormous amount but has zero context about your life, your job, or what you’re actually trying to do. The more you tell them, the better the result. The less you tell them, the more they guess. And guesses are rarely what you needed.


The PCRF Framework — Start Here

Before diving into specific techniques, here’s a simple framework worth memorising. Almost every effective prompt contains the same four elements: Persona, Context, Request, and Format. PCRF.

  • Persona — who should the AI be?
  • Context — what’s the situation, the audience, the constraints?
  • Request — what exactly do you want?
  • Format — how should the answer look?

Not every prompt needs all four. Simple tasks work fine with just Request and Format. But for anything complex, all four working together is the difference between a useful answer and a generic one.

Here’s what that looks like in practice:

Weak: “Write something about marketing.”

Strong: “You are an experienced B2B marketing strategist. Write a 500-word blog post with 3 actionable tactics for SaaS companies trying to reduce churn. Use headers and bullet points. Avoid jargon.”

Same topic. Completely different output. The second one tells the AI who it is, who it’s writing for, what to produce, and how to structure it. The first one is basically asking a stranger to surprise you — and not in a good way.
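If you build prompts in code, the four PCRF pieces are easy to assemble mechanically. A minimal sketch — the `build_prompt` helper and its field names are illustrative, not from any library:

```python
def build_prompt(persona=None, context=None, request="", output_format=None):
    """Assemble a PCRF-style prompt; only the Request is required."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(request)
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    persona="an experienced B2B marketing strategist",
    context="the audience is SaaS companies trying to reduce churn",
    request="Write a 500-word blog post with 3 actionable tactics.",
    output_format="headers and bullet points; avoid jargon",
)
```

Simple tasks can pass only `request`; the helper just skips the missing pieces.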


Technique 1: Give It a Role

This single change will improve your outputs more than anything else on this list.

Instead of: “Explain machine learning to me.” Try: “Act as a data science teacher explaining machine learning to a non-technical marketing professional with no coding background.”

The role does three things simultaneously — it sets the expertise level, the communication style, and the assumed knowledge of the reader. The AI isn’t guessing anymore. It knows exactly what register to use and who it’s talking to.

Works for almost everything. “Act as a strict editor who hates filler words.” “Act as a sceptical investor who pokes holes in business plans.” “Act as a teacher explaining this to a 12-year-old.” The role shapes everything that follows — and the more specific the role, the sharper the output.
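If you talk to a model through a chat API rather than a chat window, the role usually belongs in the system message, which keeps it active for the whole conversation. A sketch using the common messages-list shape (no actual API call is made here):

```python
role = ("Act as a data science teacher explaining machine learning "
        "to a non-technical marketing professional with no coding background.")

# Most chat APIs accept a list of {"role": ..., "content": ...} messages.
# The system message carries the persona; the user message carries the request.
messages = [
    {"role": "system", "content": role},
    {"role": "user", "content": "Explain machine learning to me."},
]
```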


Technique 2: Add Context — It Doesn’t Know Your Life

This is where most people leave the most value on the table. The AI knows a lot about the world. It knows absolutely nothing about your specific situation unless you spell it out.

Bad: “Help me write an email to my boss.”

Good: “Help me write an email to my boss asking for a week off next month. I’ve been working extra hours for two months, we just shipped a major project, and my boss responds better to logic than emotion. Keep it professional but not stiff. Under 150 words.”

The second version gives the AI everything it needs to write something you could actually send without editing. The first version gives it nothing — so it writes something generic enough to apply to any person, any job, any boss, anywhere. Which means it applies perfectly to nobody.

Your situation is specific. Tell it the specifics.


Technique 3: Tell It What You Don’t Want

Most people forget this entirely. Negative instructions are just as powerful as positive ones — sometimes more.

Some genuinely useful don’ts to steal:

  • “Don’t start with ‘Great question!’ or any filler opener.”
  • “Don’t use bullet points — write in flowing prose.”
  • “Don’t hedge everything — give me a direct recommendation.”
  • “Don’t explain what you’re about to do. Just do it.”
  • “Avoid the words ‘leverage,’ ‘synergy,’ and ‘delve.’”

That last one is doing important work. There’s a specific flavour of AI-generated corporate-speak — confident, smooth, completely empty — that has somehow infected every industry simultaneously. Telling it to avoid the worst offenders cuts through a surprising amount of it.

If you’ve ever gotten back a response that opens with “Certainly! Here’s a comprehensive overview…” and immediately wanted to close the tab — this is how you stop that from happening.
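Negative instructions are also easy to enforce after the fact. A small checker, assuming your own list of banned openers and buzzwords:

```python
BANNED = ["great question", "certainly!", "leverage", "synergy", "delve"]

def flag_banned(text, banned=BANNED):
    """Return every banned word or opener that appears in the text (case-insensitive)."""
    lowered = text.lower()
    return [word for word in banned if word in lowered]

response = "Certainly! Here's a comprehensive overview of how to leverage AI."
print(flag_banned(response))  # → ['certainly!', 'leverage']
```

If the checker flags anything, feed the response back with “rewrite without these words” — iteration does the rest.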


Technique 4: Ask It to Think Step by Step

For anything analytical, this addition alone makes a visible difference.

Without it: the AI pattern-matches to the most common answer it’s seen for this type of question and delivers it with confidence.

With it: it actually works through the logic, catches contradictions, and arrives at something more considered and accurate.

“Here are three job offers I’m considering. Think through this step by step, weigh the pros and cons based on my priorities, and give me a clear recommendation.”

Use it for: debugging a problem, evaluating competing options, analysing an argument, reviewing something critically, working through any decision with real stakes. Anywhere reasoning matters more than retrieval.


Technique 5: Show It What Good Looks Like

Instead of describing what you want in abstract terms, just show an example. Paste in a piece of writing you like and say: “Write in this style.” Share a subject line that performed well and say: “Write ten more like this.” Give it a sample and say: “Match this format exactly.”

The AI is extraordinarily good at pattern recognition. Give it a pattern to match and it’ll nail it far more reliably than if you try to describe the pattern in words. Words are ambiguous. An example isn’t.

This is the technique that makes AI actually sound like you — your tone, your rhythm, your preferences — rather than like a press release written by committee at 11pm on a Thursday.
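In chat-API terms this is few-shot prompting: the examples go into the conversation before the real request, as if the model had already produced them. A sketch — the example pairs here are made up for illustration:

```python
# Few-shot: show the pattern in the conversation before asking for more of it.
examples = [
    ("Rewrite: We are pleased to announce...", "Big news:"),
    ("Rewrite: Please find attached...", "Here's the file:"),
]

messages = [{"role": "system", "content": "Match the style of the examples exactly."}]
for before, after in examples:
    messages.append({"role": "user", "content": before})
    messages.append({"role": "assistant", "content": after})
messages.append({"role": "user", "content": "Rewrite: As per my last message..."})
```

The model sees two worked before/after pairs and continues the pattern on the final request.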


Technique 6: Break Big Prompts Into Steps

One massive prompt trying to do research, analysis, writing, and formatting all in one go almost always produces muddled output. Break it into sequential steps instead.

Think of it like cooking a complicated meal. You don’t throw everything in the pot simultaneously and hope it works. You prep first, then cook, then plate.

Same principle. Instead of: “Research the topic, write an outline, draft the article, add statistics, and format with headers” — do it in four separate prompts:

  1. “Give me an outline — 6 sections, each with a one-line summary.”
  2. “Expand Section 1 into 200 words.”
  3. “Add a relevant statistic or real example to each paragraph.”
  4. “Format the whole thing with headers and subheadings.”

Four focused prompts beat one overloaded prompt every single time. The quality difference is significant and you’ll notice it immediately.
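The four-step flow above can be sketched as a loop that feeds each answer into the next prompt. `ask` here is a stub standing in for whatever model call you actually use:

```python
def ask(prompt):
    """Stub for a real model call -- replace with your API of choice."""
    return f"<response to: {prompt.splitlines()[0]}>"

steps = [
    "Give me an outline -- 6 sections, each with a one-line summary.",
    "Expand Section 1 into 200 words.",
    "Add a relevant statistic or real example to each paragraph.",
    "Format the whole thing with headers and subheadings.",
]

result = ""
for step in steps:
    # Each prompt carries the previous output forward as working context.
    prompt = f"{step}\n\nWorking draft so far:\n{result}" if result else step
    result = ask(prompt)
```

The point is the shape, not the stub: every step gets one focused instruction plus the draft so far.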


Technique 7: Ask for Options, Not One Answer

This is massively underused and genuinely one of the most useful habits to build.

Instead of asking for one headline and hoping it’s great, ask for ten and pick the one that lands. Instead of one angle on a topic, ask for five approaches and choose the one that fits. Options cost you nothing — two extra words in the prompt. And they show you a range of possibilities you’d never have thought to ask for individually.

“Give me 8 subject line options — vary the tone from formal to playful.” “Write three versions of this paragraph — short and punchy, detailed and thorough, conversational.”

More options. Better output. Less time convincing yourself the first draft was fine when it clearly wasn’t.


Prompts That Work vs. Ones That Don’t

❌ Generic: “Write a cover letter”
✅ Better: “Write a cover letter for a UX designer with 4 years’ experience applying to a fintech startup. Confident but not arrogant. Under 250 words.”

❌ Generic: “Summarise this article”
✅ Better: “Summarise in 5 bullets for a non-technical audience. Focus on practical implications, not technical details.”

❌ Generic: “Give me business ideas”
✅ Better: “5 low-investment ideas for someone with copywriting skills who wants to work remotely. No team required, no upfront stock.”

❌ Generic: “Help me with this email”
✅ Better: “Rewrite this to sound more direct and less apologetic. Remove unnecessary qualifiers — I tend to over-hedge.”

❌ Generic: “Write a social media post”
✅ Better: “Write 5 LinkedIn post options about [topic]. Vary the hook. No buzzwords. Each under 150 words.”

The One Habit That Separates Good Prompters From Great Ones

Iteration. That’s genuinely it.

Your first prompt rarely produces perfect output. The real skill isn’t writing a flawless prompt on the first attempt — it’s treating the first response as a draft and refining from there. Push back. “Too formal — make it sound like a real person wrote it.” “The second paragraph is weak — rewrite with a stronger example.” “The conclusion is vague — give me a specific call to action.”

The AI doesn’t take criticism personally. It doesn’t get tired of revisions. And unlike an actual freelancer, it won’t send you a passive-aggressive “as per my last message” email when you ask for the fifth round of changes.

Use that. Relentlessly.


The Bottom Line

The gap between a frustrating AI experience and a genuinely useful one isn’t about which tool you’re using or how much you’re paying. It’s almost entirely about how you ask.

Give it a role. Give it context. Tell it what to avoid. Ask for options. Break complex tasks into steps. Treat every first response as a starting point, not a final answer.

You’re not searching. You’re collaborating. And the quality of that collaboration starts entirely with what you type.


Frequently Asked Questions

Q1: Are longer prompts always better? Not even close. A focused 50-word prompt with the right context beats a rambling 300-word prompt every time. Length matters far less than clarity. Include everything relevant, cut everything that isn’t. If your prompt starts to feel like an essay, break it into sequential steps instead.

Q2: Do these techniques work the same across ChatGPT, Claude, and Gemini? Largely yes — the fundamentals transfer across all major models. Claude handles long documents and nuanced analysis particularly well. ChatGPT excels at creative tasks and code. Gemini integrates well with Google Workspace. The prompting skills work everywhere — which model you choose for a specific task is a secondary decision.

Q3: What’s the fastest way to improve right now? Role assignment and negative instructions. These two changes alone will visibly improve your outputs within the first session. Add context about your specific situation, ask for options instead of single answers. Those four habits cover 80% of the gap between generic and useful.

Q4: Should I save prompts that work well? Absolutely — build a personal prompt library. A simple document where you keep prompts that produced great results. Over time this becomes genuinely valuable. Good prompts are reusable, shareable, and get better every time you refine them. Think of it as compound interest for your AI workflow.
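A prompt library can start as nothing fancier than a dict of templates with placeholders you fill in per use. The entries here are illustrative:

```python
# A personal prompt library: reusable templates with named placeholders.
PROMPT_LIBRARY = {
    "cover_letter": (
        "Write a cover letter for a {role} with {years} years' experience "
        "applying to a {company_type}. Confident but not arrogant. Under 250 words."
    ),
    "summary": (
        "Summarise in {n} bullets for a non-technical audience. "
        "Focus on practical implications, not technical details."
    ),
}

prompt = PROMPT_LIBRARY["cover_letter"].format(
    role="UX designer", years=4, company_type="fintech startup"
)
```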

Q5: What do I do when the AI just won’t get what I want after multiple tries? Switch tactics rather than repeating the same prompt louder. Try giving an example of what good looks like. Try breaking the task into smaller pieces. Try a completely different role or frame. Or try a different model — sometimes a task that frustrates one AI lands perfectly on another. And occasionally the honest answer is that the task needs a human. Knowing when AI isn’t the right tool is part of using it well.


