
Prompt Engineering 101 - Think like an Agent

  • Writer: Oliver Nowak
  • 11 minutes ago
  • 3 min read

The first time I started interacting with AI, I treated it as though it was all-knowing - I'd been told it basically was. So I spoke to it as if it lived in my head and understood exactly what I knew. And yet, it fumbled. Badly. It acted strangely, made odd choices, and didn’t produce anything close to what I was hoping for. At first, I blamed the model: “AI just isn’t quite there yet.”


But then I realised something: I was speaking to it the way I wanted to be spoken to. And it doesn't work like that. It's become a running joke in the world of AI that our pleases and thank-yous are costing the likes of OpenAI millions, because every extra word means extra tokens processed unnecessarily. The AI won't be offended if you don't include "please" in your request. It doesn't need politeness; it needs precision.


Then I came across an interesting concept that changed the way I thought about it: think like an agent.


At first I dismissed it as ridiculous, but the more I dug into it, the more sense it made.


Imagine this - you wake up in a strange system with a vague description of a task, a few tools you barely understand, and one chance to act. No memory, no visibility; just the words inside the 10K-token window that contains your prompt.


That's what it's like for AI Agents.


They don't see what you see; they only know what you tell them. And when things go wrong, it's often because we've given them a small fraction of the overall context, written instructions that are confusing and hard to unpick, or made a whole host of assumptions that are obvious to us but totally invisible to them.


So how do we fix this? That's where Prompt Engineering comes in.


Prompt Engineering is the art and science of structuring inputs to guide AI towards accurate, helpful, and consistent outputs.


Whether you’re building a conversational flow, a GenAI use case, or a fully orchestrated AI agent, these five elements are non-negotiable:



1. Role

Give the AI a purpose. “You are a smart, helpful agent supporting IT operations” is far better than hoping it figures that out. Role sets the tone, voice, and focus.


2. Context

This is the agent’s map of the world. What is the task? What are the constraints? What’s the bigger picture? Cram every relevant piece into that context window - but be clear, not cluttered.


3. Tools

If the agent has tools - functions, APIs, plugins - make them discoverable. Describe them well. A vague tool list is like handing someone a Swiss Army knife and forgetting to label the blades.
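As a sketch of what "describe them well" might look like, here is a hypothetical tool definition in the JSON-schema style that many function-calling APIs accept. Every name, field value, and the `restart_service` tool itself are invented for illustration:

```python
# A hypothetical IT-operations tool, described in the JSON-schema style
# many function-calling APIs use. All names here are illustrative.
restart_service_tool = {
    "name": "restart_service",
    "description": (
        "Restart a named service on a managed host. Use this only after "
        "check_service_status reports the service as unhealthy."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "host": {
                "type": "string",
                "description": "Hostname of the target machine, e.g. 'web-01'.",
            },
            "service": {
                "type": "string",
                "description": "Name of the service to restart, e.g. 'nginx'.",
            },
        },
        "required": ["host", "service"],
    },
}
```

The point is the labelling: the description says what the tool does *and* when to reach for it, and each parameter explains its own format - the equivalent of labelling the blades on the Swiss Army knife.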


4. Rules

No essays when you want bullet points. No speculation when accuracy matters. Set boundaries. Good rules reduce randomness and keep things on track.


5. Examples

If you want consistent behaviour, show it. A few examples go further than a paragraph of explanation. Examples act like stabilisers - they keep the output under control.
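To make the five elements concrete, here is a minimal sketch of assembling them into one clearly labelled prompt. The helper, the section headings, and all the content strings are invented for illustration - the idea is simply that each element gets its own explicit, labelled block:

```python
def build_prompt(role: str, context: str, tools: str, rules: str, examples: str) -> str:
    """Join the five elements into a single prompt with labelled sections."""
    sections = [
        ("Role", role),
        ("Context", context),
        ("Tools", tools),
        ("Rules", rules),
        ("Examples", examples),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_prompt(
    role="You are a smart, helpful agent supporting IT operations.",
    context="The user reports the staging web server is returning 502 errors.",
    tools="check_service_status(host, service); restart_service(host, service)",
    rules="Answer in bullet points. Never restart a production host.",
    examples="User: 'web-01 is down' -> Check status first, then report findings.",
)
```

However you format it in practice, the discipline is the same: if one of the five sections would be empty, that is a gap in the agent's map of the world, not a shortcut.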


These days, I test prompts by walking through them from the agent’s perspective. Would I understand what to do if this were all I could see?

Are the tools clearly defined? Would the examples steer me in the right direction?


Sometimes, I even ask the model, “Does this make sense to you?” You’d be shocked how often that works.


When things go wrong, I don’t just blame the model and assume it's not intelligent enough. I start with myself first. Could I have been clearer? Have I made any assumptions that the AI is missing? Does it have the same context that I do? The truth is, great results don’t come from magic - they come from thoughtful design. Think like the agent, and you’re halfway there.



©2020 by The Digital Iceberg
