The Ultimate Guide to Prompt Engineering for ServiceNow

Your quick path to mastering prompts for Now Assist. Practical patterns you can apply immediately in design, build, and operations.

I've been in the ServiceNow trenches since the Aspen days. Back then, we were excited about workflow automation and CMDB reconciliation. We thought we were cutting edge when we built our first Service Portal widgets. And you know what? We were.

Keynote speaker at ServiceNow Knowledge 23, Las Vegas

Then came the Virtual Agent. Remember that? ServiceNow's answer to chatbots. We spent hours building conversation flows, mapping intents, and training the natural language understanding. It was fine for its time. Structured. Predictable. But let's be honest, it was still very much paint-by-numbers. You had to think of every possible question, every variation, every path users might take.

It worked, but it was exhausting.

The Generative AI Transformation. Before ChatGPT, building conversational AI in ServiceNow meant exhaustive intent mapping, utterance training, and rigid conversation flows. The November 2022 revolution changed everything. Now Assist brings natural language understanding, context awareness, and generative responses that make AI feel like a genuine conversation rather than a decision tree.

Then November 2022 happened. ChatGPT dropped, and the world changed overnight.

Suddenly, everyone was talking to AI as if it were a person. No more rigid conversation trees. No more endless intent mapping. Just natural conversation that actually understood the context. The genie was out of the bottle, and there was no putting it back.

ServiceNow, like every other major platform, saw which way the wind was blowing. They had to evolve or get left behind. That's when the real investment in generative AI began. Virtual Agent got smarter. The platform started getting AI injected into every corner. And then came Now Assist.

But Now Assist isn't just another feature release.

This is ServiceNow having actual conversations with your users. This is AI that can read through 300 work notes and tell you what actually matters. This is technology that writes knowledge articles whilst you're making tea.

Sounds brilliant, right? It is. But here's what nobody tells you at those fancy roadmap presentations.

AI is only as good as the instructions you give it.

Think about it. You've spent years building workflows, tuning business rules, and getting incident management running smoothly. But now you're asking an AI to summarise incidents, generate notes, and answer user questions. If your instructions are woolly, you'll get output that's just as messy as rubbish data in a CMDB.

And this is where it gets interesting. Prompts aren't just for the obvious stuff like chatbots. They're everywhere in Now Assist. When do you enable incident summarisation? That's a prompt. Resolution note generation? Prompt. Knowledge article creation? Prompt. Alert analysis in ITOM? You guessed it, prompt. Even when Now Assist suggests code in the Script Editor, there's a prompt behind the scenes that tells it how to help you.

The platform is full of these things. Some you'll see. Most you won't. But they're all doing the heavy lifting.

Here's where your years of ServiceNow experience become your unfair advantage.

You've trained hundreds of agents. You've written documentation that actually makes sense. You've translated technical gibberish into language real humans understand. You know what good looks like because you've seen what bad looks like, and you've fixed it.

That's precisely what prompt engineering is. Taking what you know about clarity, context, and communication, and applying it to AI. It's understanding that the difference between a useless prompt and a brilliant one might be just one well-placed instruction.

Let me show you what I mean.

Without proper prompts, your incident summaries repeat what's already in the fields. Nobody needs AI to tell them "Priority: High, Status: Resolved" because they can already see that on screen. But with good prompts? You get summaries that actually explain what went wrong, what was done, and why it matters. That's useful. That's worth having.
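To make that concrete, here's an illustrative sketch of the difference. These are not Now Assist's actual out-of-the-box prompts, just an example of how one well-scoped instruction changes the output:

```
Vague (invites field regurgitation):
  Summarise this incident.

Specific (tells the model what matters and what to skip):
  Summarise this incident in 3-4 sentences for a service desk manager.
  Focus on: the root cause, the actions taken, and any remaining risk.
  Exclude: field values already visible on the form (priority, state,
  assignment group) and any personal or customer-identifying data.
  If the work notes don't state a root cause, say so; don't guess.
```

Notice that the second version does three things the first doesn't: it sets an audience, it names what to include, and it explicitly rules out guessing. Those are the levers this guide keeps coming back to.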

Your knowledge articles stop reading like they were assembled by a robot having a rough morning. They flow naturally. They answer questions people actually ask. They sound human.

Your virtual agent stops giving those painful "I'm sorry, I didn't understand that" responses every third message. It actually helps.

And the business impact? It's real.

Teams close tickets faster because they're not drowning in noise. Knowledge bases become genuinely helpful instead of graveyards of outdated articles. Employee satisfaction goes up because the AI actually works instead of just being another thing in the way.

I've seen implementations where the only difference between success and failure was prompt quality. Same platform. Same data. Same use cases. But one organisation took prompts seriously, and the other treated them as an afterthought.

Guess which one got the budget for phase two?

But here's the bit that should keep you up at night.

Bad prompts create risk. Your AI outputs go into production. They get sent to customers. They become permanent records. An AI that makes things up because the prompt was vague? That's not just embarrassing, it's dangerous. One that accidentally reveals sensitive information because nobody specified what to exclude? That's a compliance nightmare. One that gives different answers to the same question depending on the time of day? That's just broken.

These aren't horror stories I'm inventing. They're real problems I've helped organisations fix.

Sound prompt engineering isn't optional. It's how you ensure your AI implementation actually delivers what you promised in that business case. It's how you build trust. It's how you scale.

This is the new frontier.

We've gone from manual processes to automated workflows to intelligent systems that genuinely understand language. The platform that started with basic ticketing has evolved into something extraordinary. ServiceNow isn't just managing work anymore. It's understanding it, analysing it, and helping solve it.

That's genuinely exciting.

But with that power comes a new skillset. You need to know what makes prompts work. You need to test, iterate, and optimise just like you would with any other implementation. You need to think about context, clarity, and edge cases.

The brilliant news? You've already been doing this for years. You didn't call it prompt engineering.

This guide shows you how to apply everything you already know to this new world. It's not starting from scratch. It's adding tools to your existing toolkit.

So let's get stuck in.

About Me

I'm Enamul Haque, Director of ServiceNow Strategic Solutions and Architecture at Wipro. I've been working with ServiceNow for 14 years, starting from the Aspen release, and I've watched the platform evolve from basic ticketing to the AI-powered ecosystem it is today.

Beyond my day job, I'm a technology author, educator, and adjunct professor at Bangladesh Maritime University, where I teach AI and Data Science at the Faculty of Ocean and Earth Sciences. I'm passionate about bridging the gap between AI potential and practical application.

This GitBook brings together everything I've learned about making generative AI work in enterprise environments. I hope it helps you on your Now Assist journey.

Connect with me on LinkedIn or through my YouTube channel, Digital Deep Dive.
