Chapter 4: Prompt Engineering Fundamentals
Welcome to Part II. This is where theory meets practice.

In Part I, we covered the strategic foundations. You understand why Now Assist matters, how to plan your implementation, and what the architecture looks like under the hood. That knowledge gives you the framework to make good decisions.
Now we get into the craft itself. Prompt engineering.
I want to be honest with you. When I first started working with generative AI in ServiceNow, I thought prompts were simple. You type a question, you get an answer. How complicated can it be?
Very complicated, as it turns out. The difference between a prompt that works and a prompt that works brilliantly often comes down to subtle choices in how you structure your request. Small changes in wording can produce dramatically different outputs. And in an enterprise context, those differences matter.
What This Chapter Covers
This chapter builds your foundation. We'll explore what prompts actually are, break down their core ingredients, understand what prompt engineering means in practice, and examine why it matters so much for enterprise AI.
Think of it as learning the grammar before writing the novel. You need to understand the building blocks before you can construct something sophisticated.
The four subpages in this chapter take you through the fundamentals step by step.
What is a Prompt? starts with the basics. Not because it's obvious, but because getting the definition right shapes everything that follows. A prompt is more than just a question. Understanding its true nature changes how you approach the entire discipline.
Prompt Ingredients breaks down the four components that make up effective prompts: instruction, context, input data, and output indicator. Like a recipe, getting the proportions right determines whether the result is excellent or disappointing.
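To make those four ingredients concrete, here is a minimal sketch in plain Python (not a ServiceNow API; the function name, field values, and incident number are illustrative) that assembles them into a single prompt string:

```python
def build_prompt(instruction, context, input_data, output_indicator):
    """Assemble the four prompt ingredients into one request string."""
    return "\n\n".join([
        instruction,       # what the model should do
        context,           # background that shapes the response
        input_data,        # the content to operate on
        output_indicator,  # the expected shape of the answer
    ])

prompt = build_prompt(
    instruction="Summarise the following incident for a service desk agent.",
    context="The audience is a level-1 agent who needs the key facts quickly.",
    input_data="Incident INC0010023: User reports VPN drops every 10 minutes.",
    output_indicator="Respond with three short bullet points.",
)
```

Keeping the ingredients as separate arguments, rather than one hand-written blob, makes it easy to experiment with the "proportions": you can vary the context or the output indicator independently and compare results.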
What is Prompt Engineering? moves from the individual prompt to the discipline itself. This is where we explore the iterative process of designing, testing, and refining prompts to achieve consistent, high-quality outputs.
Why Prompt Engineering Matters in Enterprise AI connects the technical to the strategic. In an enterprise environment, prompt engineering isn't just a nice-to-have skill. It directly impacts accuracy, efficiency, cost, and ultimately the success of your AI investment.
The Mindset Shift
Here's something I've learned over the years. Prompt engineering requires a different way of thinking.
When you write code, you give explicit instructions. The machine does exactly what you tell it. There's a predictable relationship between input and output.
With language models, you're communicating with a system that interprets your request. The model makes decisions about how to respond based on patterns it learned during training. Give it the same prompt and the same input, and you might still get slightly different outputs each time.
This probabilistic nature isn't a flaw. It's how language models work. Your job as a prompt engineer is to provide enough structure and clarity that the model consistently produces useful results despite this inherent variability.
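The probabilistic behaviour described above can be illustrated with a toy sampler. This sketch is not how NowLLM is implemented; it is a generic, simplified view of temperature-based token sampling, with made-up scores:

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=random):
    """Sample one token from a score distribution. Lower temperature
    concentrates probability on the highest-scoring token."""
    scaled = [s / temperature for s in scores.values()]
    total = sum(math.exp(s) for s in scaled)
    weights = [math.exp(s) / total for s in scaled]
    return rng.choices(list(scores.keys()), weights=weights, k=1)[0]

# Illustrative scores the model might assign to candidate next words.
scores = {"resolved": 2.0, "closed": 1.5, "pending": 0.5}

# At temperature 1.0, repeated calls with identical input vary.
varied = {sample_next_token(scores, temperature=1.0) for _ in range(50)}

# At a very low temperature, the top-scoring token dominates.
greedy = {sample_next_token(scores, temperature=0.02) for _ in range(50)}
```

The takeaway for prompt engineering: you cannot eliminate this variability, but a well-structured prompt narrows the distribution of plausible responses, so the outputs you do get stay useful.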
That's the skill we're building in this chapter.
ServiceNow Context
Everything in this chapter applies to prompt engineering generally, but I'll ground the concepts in ServiceNow examples throughout. The principles work across any language model, but the specific application to Now Assist skills, NowLLM constraints, and ServiceNow use cases makes the learning immediately practical.
You'll see how these fundamentals apply to incident summarisation, knowledge article generation, case analysis, and other real scenarios you'll encounter in your implementation.
By the end of this chapter, you'll have the vocabulary, the mental models, and the foundational knowledge to move into the Five Principles of Prompting in Chapter 5.
Let's begin.