Identifying Use Cases and Success Criteria
Right, you've assessed your readiness. Now comes the critical bit. Which use cases do you actually implement?
This is where most organisations go wrong. They try to do everything at once, pick use cases that sound impressive but deliver minimal value, or choose things that require perfect data when their data is average.
Think of a use case prioritisation matrix: plot potential Now Assist use cases by implementation effort against business impact. Quick Wins sit in the top-left quadrant, high-impact skills like Incident Summarisation that are easy to deploy; start there. Strategic Investments deliver significant value but require more effort, so plan them carefully. Easy Additions, lower-impact items that cost little, can follow your initial wins. Use cases in the bottom-right, where effort outweighs likely returns, deserve a rethink. Use this framework to build a phased implementation roadmap. Let me show you how to pick battles you can actually win.
The Quick Win Framework
Start with use cases that tick three boxes. High value, low complexity, and good data quality.
High value means it solves a real problem. Do agents spend 2 hours a day reading incident histories? That's high value. Does someone occasionally need to summarise a change request? Not so much.
Low complexity means it doesn't need heavy customisation or depend on fixing loads of other things first. Out-of-the-box incident summarisation is low complexity. A custom skill that pulls data from five external systems and applies complex business rules? That's high complexity.
Good data quality means the information is actually there and reasonably clean. If your incident descriptions are detailed and your work notes are comprehensive, you've got good data. If they're empty or single-word entries, you haven't.
Find use cases where all three overlap. That's where you start.
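As a rough illustration of that overlap, here's a minimal sketch in Python. The use case names, 1-to-5 scores, and thresholds are assumptions made up for the example, not anything ServiceNow provides; the point is simply that a quick win has to clear all three bars at once.

```python
# Minimal sketch: find quick wins where value, simplicity, and data quality overlap.
# Scores are illustrative 1-5 workshop ratings; names and thresholds are assumptions.

candidates = {
    "Incident summarisation":       {"value": 5, "complexity": 1, "data_quality": 4},
    "Knowledge article generation": {"value": 4, "complexity": 2, "data_quality": 3},
    "Custom multi-system skill":    {"value": 4, "complexity": 5, "data_quality": 2},
}

def is_quick_win(scores: dict) -> bool:
    """High value, low complexity, and good data quality must all hold at once."""
    return (scores["value"] >= 4
            and scores["complexity"] <= 2
            and scores["data_quality"] >= 3)

quick_wins = [name for name, scores in candidates.items() if is_quick_win(scores)]
print(quick_wins)  # these are where you start
```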
Common Starting Points
Incident summarisation is usually a winner. Most organisations have decent incident data, the skill works out of the box, and agents immediately feel the benefit. You can pilot it with a small team, prove value quickly, and build momentum.
Knowledge article generation works well if your knowledge base isn't a complete disaster. Agents who've just closed incidents can generate draft articles in minutes. That's a real time saving, a visible benefit, and easy to measure.
A virtual agent for common queries is brilliant when you've got clean knowledge content and straightforward catalogue items. Password resets, leave requests, and standard questions. Start there, not with complex workflows.
Resolution note generation helps teams that struggle to document what they actually did. But only if they're entering work notes correctly in the first place. Check that first.
Use Cases to Avoid Initially
Please don't start with anything that requires perfect data unless you've actually got it. Don't begin with heavily customised workflows unless you're prepared for significant configuration work. Don't pick use cases where success is subjective and hard to measure.
AI Agents sound exciting, but they're complex. Leave them until you've got skills working correctly and you understand prompt engineering. Walking before running and all that.
Custom skills for niche requirements can wait. Prove value with standard capabilities first. Then invest time in building custom solutions.
Defining Success Criteria
Vague goals like "improve efficiency" or "enhance user experience" are useless. You need specific, measurable criteria.
For incident summarisation, that might mean reducing the time spent reading case history from 2 minutes to 30 seconds, measured across 100 incidents. Or improving agent satisfaction scores by 15 points within three months.
For knowledge article generation, maybe it's increasing the article creation rate from five per month to 20 per month, or reducing the average creation time from 45 minutes to 10 minutes.
For Virtual Agent, perhaps it's deflecting 200 common queries per week or improving first-contact resolution by 20 percentage points.
Make your criteria specific enough that you'll know whether you've met them. Then track them from day one.
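One way to keep yourself honest is to write each criterion down with a baseline, a target, and a field for the measured result. Here's a minimal sketch using the example targets above; the structure and field names are my own assumptions, not a Now Assist feature:

```python
# Sketch: success criteria as measurable records with a baseline, target, and result.
from dataclasses import dataclass

@dataclass
class SuccessCriterion:
    use_case: str
    metric: str
    baseline: float
    target: float
    measured: float | None = None  # filled in during or after the pilot

    def met(self) -> bool:
        if self.measured is None:
            return False
        # When the target is below the baseline, lower is better (e.g. minutes spent).
        if self.target < self.baseline:
            return self.measured <= self.target
        return self.measured >= self.target

criteria = [
    SuccessCriterion("Incident summarisation", "minutes reading case history", 2.0, 0.5),
    SuccessCriterion("Knowledge generation", "articles created per month", 5, 20),
    SuccessCriterion("Virtual Agent", "common queries deflected per week", 0, 200),
]

criteria[0].measured = 0.6
print(criteria[0].met())  # False: 0.6 minutes is still above the 0.5-minute target
```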
Building Your Use Case Backlog
List all potential use cases. Score each one on value, complexity, and data quality. Start with the highest scoring ones.
Create a proper backlog: use case name, expected benefit, success criteria, dependencies, and estimated effort for each item. Prioritise ruthlessly based on value and feasibility.
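As a sketch of what a scored backlog might look like (the entries, 1-to-5 scales, and weighting are illustrative assumptions, not a prescribed model), you can sort it so value and feasibility drive the order:

```python
# Sketch: a scored use case backlog, prioritised by value and feasibility.
# Entries, scales, and weights are assumptions chosen purely for illustration.

backlog = [
    {"name": "Incident summarisation", "benefit": "Cut triage reading time",
     "value": 5, "complexity": 1, "data_quality": 4, "dependencies": []},
    {"name": "Resolution note generation", "benefit": "Better closure notes",
     "value": 3, "complexity": 2, "data_quality": 2,
     "dependencies": ["Work note hygiene"]},
    {"name": "Custom multi-system skill", "benefit": "Cross-platform answers",
     "value": 4, "complexity": 5, "data_quality": 3,
     "dependencies": ["Integration work", "Data clean-up"]},
]

def priority(item: dict) -> int:
    """Higher value, lower complexity, and better data push an item up the list."""
    feasibility = (6 - item["complexity"]) + item["data_quality"]
    return item["value"] * 2 + feasibility  # weight value most heavily

for item in sorted(backlog, key=priority, reverse=True):
    print(f'{priority(item):>3} {item["name"]} (dependencies: {len(item["dependencies"])})')
```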
Don't try to implement everything in phase one. Pick a maximum of three to five use cases. Do those properly, prove value, learn lessons, then expand.
A design authority helps here. Someone who ensures use cases are built consistently, following governance rules, with proper quality standards. Stops people from building things in isolation that don't integrate well.
The Pilot Approach
Never roll out to everyone immediately. Start with a pilot group. Ten to twenty users, not your entire organisation. People who'll give honest feedback and work with you to refine things.
Run the pilot for four to six weeks. Gather data religiously. What worked? What didn't? What needs changing? Then adjust before scaling up.
Pilots fail safely. Full rollouts fail expensively. Pick your poison.
What Good Looks Like
A well-chosen use case solves a genuine problem, has clear success criteria, needs minimal customisation, and can be piloted quickly. You can measure whether it's working. You can show value within weeks, not months.
Bad use cases sound impressive but need perfect conditions you don't have, depend on fixing ten other things first, and have success criteria so vague you'll never know if you've achieved them.
Pick the former. Avoid the latter. Build momentum with wins, then tackle more complicated problems.
Right, you've got your use cases. Now let's talk about getting people on board.