1: Give Direction: Provide Clear Instructions
This is where everything starts. Without clear direction, your AI is essentially guessing what you want. And in enterprise environments, guessing is not acceptable.
The Core Idea
Give Direction means describing the task you want performed in detail and referencing any relevant information the AI needs to do its job. Think of it like briefing a new team member. You would not just say "handle that incident." You would explain what handling means, what the expected outcome looks like, and what resources are available.
The same principle applies to prompts. The more specific your direction, the more reliable your output.
Why Vague Prompts Fail
Consider this prompt: "Tell me everything about the incident."
Sounds reasonable, right? But think about what you are actually asking for. "Everything" could mean the timeline, the resolution, the impact, the people involved, the related configuration items, the SLA status, or dozens of other details. The AI has no idea which aspects matter to you, so it either tries to cover everything poorly or picks randomly.
Now consider this alternative: "Summarise the incident, focusing on root cause, resolution steps, and impacted services."
This prompt gives direction. The AI knows exactly what to focus on and what to leave out. The output becomes predictable and useful.
The Power of Imperative Language
ServiceNow recommends using imperative language in your prompts. This means direct commands rather than polite suggestions.
Instead of writing "You should analyse this incident log," write "Analyse this incident log."
Instead of "Could you write a summary," write "Write a summary."
This is not about being rude. It is about being clear. Direct verbs like Analyse, Write, Generate, Summarise, Extract, and List leave no room for interpretation. The AI knows exactly what action to take.
Assigning a Persona
One of the most powerful direction techniques is role-based prompting. You tell the AI who it should be when responding.
For example: "You are a customer service representative. Summarise the following ticket."
Or: "You are a Level 3 Network Engineer. Summarise the outage impact for the VP of Operations."
The persona changes everything. A customer service representative writes differently from a network engineer. The vocabulary shifts. The level of technical detail adjusts. The tone adapts to the audience.
In ServiceNow implementations, personas are particularly valuable because they help the AI match the communication style your organisation actually uses. An HR case summary needs professional empathy. An incident resolution note needs technical precision. The persona sets these expectations.
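To see the pattern in code, here is a minimal sketch in Python. The build_prompt helper and the sample ticket text are illustrative, not part of any ServiceNow API; the point is simply that the persona becomes the first line of the assembled prompt.

```python
def build_prompt(persona: str, task: str, source_text: str) -> str:
    """Prefix the task with a persona line, then append the source material."""
    return f"You are {persona}.\n{task}\n\n{source_text}"

# Illustrative source material.
ticket_text = "User reports VPN drops every 30 minutes since Monday."

# Same source, two audiences: swap the persona and the task.
print(build_prompt(
    "a customer service representative",
    "Summarise the following ticket for the customer.",
    ticket_text,
))
print(build_prompt(
    "a Level 3 Network Engineer",
    "Summarise the outage impact for the VP of Operations.",
    ticket_text,
))
```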
Being Specific About Input
Good direction tells the AI not just what to produce but what to use as source material.
If your incident has child records, work notes, and SLA information, you need to specify which parts matter for your output. Otherwise, the AI might focus on the wrong sections or try to include everything, diluting the quality.
ServiceNow's prompt engineering guidance recommends using demarcation tags to identify input sections clearly. For example, if child incidents are wrapped in tags like CHILD_INCIDENT_START and CHILD_INCIDENT_END, your prompt should reference these explicitly: "For generating the child incident summary, only use the information given between the tags CHILD_INCIDENT_START and CHILD_INCIDENT_END."
This level of specificity prevents the AI from wandering into irrelevant content.
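A short sketch of the tagging pattern, assuming the CHILD_INCIDENT_START and CHILD_INCIDENT_END convention above. The tag_section helper and sample notes are hypothetical; what matters is that the instruction names the same tags that wrap the input.

```python
def tag_section(name: str, content: str) -> str:
    """Wrap a block of input in explicit start and end demarcation tags."""
    return f"{name}_START\n{content}\n{name}_END"

# Illustrative child incident notes.
child_notes = "INC0012346: duplicate of parent; user unable to log in."

prompt = (
    "Summarise the child incidents. Only use the information given "
    "between the tags CHILD_INCIDENT_START and CHILD_INCIDENT_END.\n\n"
    + tag_section("CHILD_INCIDENT", child_notes)
)
print(prompt)
```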
Weak vs Strong Prompts: Real Examples
ServiceNow has documented clear patterns of what separates weak prompts from strong ones.
Closing Child Records
Weak prompt: "Close all the child records when done."
Problems: Does not specify which table to act on. Does not define when the process is over. Uses ambiguous phrasing.
Strong prompt: "When the parent incident reaches Resolved state, close all child records in the incident table by setting their state to Closed and adding a work note indicating automatic closure due to parent resolution."
This version specifies the table, defines the trigger condition, and describes the exact actions to take.
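One way to see what the strong version adds is to assemble it from named parts. This sketch is purely illustrative: each variable holds a detail the weak prompt left unstated, which makes an omission immediately visible.

```python
# Each variable is a specific the weak prompt omitted.
table = "incident"
trigger = "the parent incident reaches Resolved state"
action = "setting their state to Closed"
note = "automatic closure due to parent resolution"

prompt = (
    f"When {trigger}, close all child records in the {table} table "
    f"by {action} and adding a work note indicating {note}."
)
print(prompt)
```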
Counting Open Cases
Weak prompt: "Count the open cases."
Problems: Does not indicate which users or groups to query. Does not specify conditions on the Case table. Provides ambiguous instructions.
Strong prompt: "Count all Case records where the assigned_to field matches the current user and the state is not Closed or Cancelled. Return the count using the Case table internal API."
This version identifies the user context, specifies the conditions, and gives clear technical instructions.
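For comparison, here is roughly what that request looks like when expressed directly against ServiceNow's Aggregate API instead of in a prompt. The instance URL, credentials, user sys_id, table name (sn_customerservice_case), and numeric state codes are all placeholders that vary by instance; treat this as a sketch, not a drop-in script.

```python
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance
TABLE = "sn_customerservice_case"             # assumed CSM case table name
USER_SYS_ID = "<current_user_sys_id>"         # placeholder sys_id

# Encoded query: assigned to this user, state neither Closed nor Cancelled.
# The numeric codes 3 and 7 are placeholders; state values vary by instance.
query = f"assigned_to={USER_SYS_ID}^state!=3^state!=7"

resp = requests.get(
    f"{INSTANCE}/api/now/stats/{TABLE}",
    params={"sysparm_query": query, "sysparm_count": "true"},
    auth=("admin", "password"),  # replace with real credentials
    headers={"Accept": "application/json"},
)
count = resp.json()["result"]["stats"]["count"]
print(f"Open cases assigned to user: {count}")
```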
Querying HR Cases
Weak prompt: "Get the recent cases."
Problems: Does not specify whether to look at HR cases or CSM cases. "Recent" is subjective.
Strong prompt: "Retrieve all HR Case records created in the last 30 days where the state is Active or Awaiting Information. Order by created date descending."
This version provides proper context that ensures HR cases are selected rather than other case types.
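Expressed against ServiceNow's Table API, the same specificity becomes visible: "last 30 days" turns into an absolute cutoff date and "recent" turns into an explicit descending sort. The table name sn_hr_core_case, credentials, and state codes below are assumptions that depend on your instance.

```python
import requests
from datetime import datetime, timedelta

INSTANCE = "https://example.service-now.com"  # placeholder instance
TABLE = "sn_hr_core_case"                     # assumed HR case table name

# "Last 30 days" as an absolute cutoff in ServiceNow's datetime format.
cutoff = (datetime.utcnow() - timedelta(days=30)).strftime("%Y-%m-%d %H:%M:%S")
query = (
    f"sys_created_on>={cutoff}"
    "^stateIN1,4"                 # placeholder codes for Active / Awaiting Information
    "^ORDERBYDESCsys_created_on"  # "recent" made explicit: newest first
)

resp = requests.get(
    f"{INSTANCE}/api/now/table/{TABLE}",
    params={"sysparm_query": query, "sysparm_limit": "50"},
    auth=("admin", "password"),   # replace with real credentials
    headers={"Accept": "application/json"},
)
for case in resp.json()["result"]:
    print(case["number"], case["short_description"])
```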
The Virtual Agent Challenge
Direction becomes especially critical in Virtual Agent topic descriptions. The description field tells the LLM what the topic is for and how to match user queries to it.
A vague description like "used for employee lookup" struggles to match real user questions like "Who is James Henning?" or "Whom does James report to?"
A better description reads: "This topic is used to look up employees or users and provide details such as their name, role, manager, department, and location. Sample prompts include: Who is James Henning? What is James' cost centre? Whom does James report to?"
Notice how the improved version leads with action verbs, avoids ambiguous terms, and includes example queries. This gives the AI clear direction on when this topic should activate.
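A lightweight way to keep descriptions in this shape is to assemble them from a stated purpose plus a list of sample prompts, so the example queries are never forgotten. A sketch with illustrative values; the resulting string is what you would paste into the topic's description field.

```python
purpose = (
    "This topic is used to look up employees or users and provide details "
    "such as their name, role, manager, department, and location."
)
sample_prompts = [
    "Who is James Henning?",
    "What is James' cost centre?",
    "Whom does James report to?",
]

# Purpose first, then the examples the LLM can match against.
description = f"{purpose} Sample prompts include: {' '.join(sample_prompts)}"
print(description)
```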
Avoiding Assumptions
A subtle but important aspect of giving direction is avoiding prompts that assume facts which may not exist.
Consider this prompt: "Explain how the customer escalated the issue through the Partner Portal."
This assumes a Partner Portal escalation process exists. If it does not, the AI might fabricate details to satisfy the prompt, leading to hallucinated content.
A safer approach: "Describe the escalation process used by the customer, if mentioned in the case notes. If not available, respond with 'Escalation process not documented.'"
This gives direction whilst protecting against hallucination.
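In code, the guard is a fixed fallback sentinel that the prompt instructs the model to return, which downstream logic can then detect reliably. A minimal sketch; call_llm is a stub standing in for whatever model call your integration actually uses.

```python
FALLBACK = "Escalation process not documented."
case_notes = "Customer phoned support twice; no escalation mentioned."

prompt = (
    "Describe the escalation process used by the customer, if mentioned "
    f"in the case notes. If not available, respond with '{FALLBACK}'\n\n"
    f"CASE_NOTES_START\n{case_notes}\nCASE_NOTES_END"
)

def call_llm(text: str) -> str:
    """Stub for the sketch; a real integration would call the model here."""
    return FALLBACK

response = call_llm(prompt)
if response.strip() == FALLBACK:
    print("No escalation details found; skipping that section.")
```

Because the sentinel is an exact string, the calling code can branch on it instead of guessing whether the model invented an answer.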
Practical Checklist for Clear Direction
When writing prompts, ask yourself these questions:
Have I specified exactly what action the AI should take? Use imperative verbs.
Have I defined what the AI should focus on and what to ignore? Be explicit about scope.
Have I assigned a persona that matches the task? Consider who should be speaking.
Have I referenced the specific input sections to use? Point to the right data.
Have I avoided assumptions that could trigger hallucination? Build in safety clauses.
Have I used precise terminology that the AI can interpret correctly? Avoid internal jargon.
If you can answer yes to all of these, your prompt has clear direction.
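Some of these checks are mechanical enough to automate. Here is a toy linter with an illustrative verb list and deliberately crude heuristics: it cannot judge meaning, but it flags prompts missing the obvious markers of direction.

```python
IMPERATIVE_VERBS = (
    "analyse", "write", "generate", "summarise",
    "extract", "list", "count", "retrieve", "describe",
)

def check_prompt(prompt: str) -> list[str]:
    """Return warnings for prompts lacking mechanical signs of direction."""
    warnings = []
    words = prompt.strip().split()
    first = words[0].lower().rstrip(",.") if words else ""
    if first not in IMPERATIVE_VERBS and not prompt.lower().startswith("you are"):
        warnings.append("Does not open with an imperative verb or a persona.")
    if "_START" not in prompt and "focus" not in prompt.lower():
        warnings.append("No demarcation tags or explicit scope found.")
    if "if not available" not in prompt.lower():
        warnings.append("No fallback clause to guard against hallucination.")
    return warnings

for w in check_prompt("Tell me everything about the incident."):
    print("WARN:", w)
```

Run against the vague prompt from earlier, it prints all three warnings.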
The Foundation for Everything Else
Give Direction is the first principle for good reason. Without clear direction, the other principles cannot compensate. You can specify formats all day long, but if the AI does not know what content to produce, your format will be filled with the wrong information.
Master this principle first. The rest builds from here.