How to write an automation scope that survives real work

Most automation projects fail because scope was written before anyone watched the work. A pragmatic scoping discipline for care providers.


Most failed automation projects have the same post-mortem. The vendor built what the scope said. The scope said the wrong thing. Everyone signed it off in good faith, and six weeks later a support worker was still printing rosters because nobody had noticed the print button was load-bearing.

Scope documents fail quietly. They look reasonable as a PDF. They pass a review meeting. Then they meet the actual work and fall apart, because the person who wrote them never sat next to the person doing the work.

This is a note on how to write a scope that holds up. It is aimed at operations leads at small care providers who have been burned once and would rather not be burned again.

The scope is a contract about reality, not a list of features

A good scope answers three questions:

  1. What does the work actually look like today, step by step, with names and systems and files?
  2. Which parts of that work are stable and which parts will change in the next twelve months?
  3. What does “done” mean, in language the ops manager can sign off without calling a developer?

If any of these three is missing, the scope is a wish list. Wish lists produce builds that technically meet the spec and operationally fail.

Step one: shadow the work before you describe it

There is no substitute for sitting next to the person doing the task. Watch them do it three times. Do not interview them about it, because interviews produce a cleaned-up version of the process the person thinks you want to hear.

When you shadow, you will find things nobody told you about:

  • A spreadsheet that lives on one person’s desktop and is the only source of truth for client preferences
  • A habit of copying a claim reference from one system, opening Notepad, cleaning it up, and pasting it into another system
  • A rule that support workers submit timesheets on Monday unless it is a public holiday, in which case they submit Tuesday, and the payroll officer has a mental checklist of which workers forget
  • A printout taped to a monitor because a form field does not carry over between two screens

None of these appear in a process diagram. All of them break an automation if you miss them. Shadowing is not a nice-to-have. It is the discovery.

Step two: separate the change surface from the stable core

Every process has parts that will still be true in a year and parts that will not. Good scoping names both, because the stable core is what you build against and the change surface is what you design around.

Stable core examples in a disability support provider:

  • Every client has a support plan with funded categories
  • Every shift produces a timesheet that needs to be reconciled against a roster
  • Every claim needs a reference linking it to a funded line item

Change surface examples in the same provider:

  • The specific fields on the intake form
  • The naming convention for funded categories
  • The third-party software used to submit claims
  • The compliance document templates

Automations built against stable cores age well. Automations built against change surfaces need monthly repair. The scope should name which is which, and the build should isolate change-surface logic behind a single configuration file that an operations lead can update without rewriting anything.
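One way to make that isolation concrete is a single config file the build reads at startup. This is a minimal sketch, not a prescription: the file name, keys, and default values here are all illustrative, standing in for whatever change-surface values your scope names (intake fields, category naming, export formats).

```python
# Sketch: change-surface values isolated in one editable config file.
# Every key and default here is illustrative, not from any real system.
import json

# Defaults cover the change surface named in the scope: intake form
# fields, category naming, and the claims export format. An operations
# lead edits config.json; nobody edits code.
DEFAULT_CONFIG = {
    "intake_fields": ["worker ID", "shift date", "start time",
                      "end time", "client ID", "activity code"],
    "funded_category_prefix": "CORE-",
    "claims_export_format": "csv",
}

def load_config(path="config.json"):
    """Read the change-surface config; fall back to defaults if the
    file is absent, so a missing config never breaks a run."""
    try:
        with open(path) as f:
            return {**DEFAULT_CONFIG, **json.load(f)}
    except FileNotFoundError:
        return dict(DEFAULT_CONFIG)
```

When the provider renames funded categories or switches claims software, the fix is one line in the config file, not a rebuild.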

Step three: name the inputs and outputs precisely

This is where scopes usually go vague. Somebody writes “the system receives timesheet data” and everyone nods, and nobody asks in what format, from which system, with which fields, at what cadence, under whose permission.

A tight scope says:

  • Input: CSV file exported from the rostering system by the operations coordinator every Monday at 9am, containing columns worker ID, shift date, start time, end time, client ID, activity code
  • Output: Validated timesheet summary written to the shared drive at /Payroll/YYYY-MM-DD/ as an Excel file, plus a plain-text exception report listing shifts that failed validation with a reason per line

That is the kind of specification that a developer cannot misread and an operations lead can verify after the build. Vague inputs produce vague outputs, and vague outputs produce arguments about whether the thing works.
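A specification that tight translates almost mechanically into a validation step. The sketch below assumes the column names from the example input above; the Excel output is omitted to keep the example dependency-free (a real build would use a library such as openpyxl for the .xlsx summary).

```python
# Sketch: validating rows against the example input spec above and
# producing per-shift exception reasons. Column names match the spec;
# everything else is illustrative.
REQUIRED = ["worker ID", "shift date", "start time", "end time",
            "client ID", "activity code"]

def validate_shifts(rows):
    """Split parsed CSV rows into valid shifts and (row number, reason)
    exceptions, one reason line per failed shift."""
    valid, exceptions = [], []
    for i, row in enumerate(rows, start=2):  # row 1 is the CSV header
        missing = [c for c in REQUIRED if not row.get(c, "").strip()]
        if missing:
            exceptions.append((i, "missing " + ", ".join(missing)))
        else:
            valid.append(row)
    return valid, exceptions
```

The exception tuples map directly onto the plain-text report the scope promises: one line per failed shift, with a reason the payroll officer can act on.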

Step four: write acceptance criteria for humans, not engineers

Acceptance criteria are the list of statements that must be true for the build to be signed off. The test is: can the ops manager read them, look at a live run, and say yes or no to each line?

Good acceptance criteria look like this:

  • Given a timesheet CSV with 100 shifts, the automation produces a payroll summary in under two minutes
  • If any shift has a missing client ID, the shift appears on the exception report with the message “missing client”
  • The payroll summary file is named in the format payroll-summary-YYYY-MM-DD.xlsx
  • The automation runs on the payroll officer’s machine without requiring admin rights
  • A support worker whose shifts were all valid receives no email; a support worker whose shifts failed validation receives one summary email listing the failures

Every line is observable. No line requires reading code. The ops manager can sit at a desk on payroll day and tick them off. That is the bar.
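Observable criteria also have a useful side effect: some of them can be checked by a script as well as by a person. Here is a sketch of the file-naming criterion above as a check anyone can run after a live build; the naming pattern comes from the example criteria, the function names are illustrative.

```python
# Sketch: the file-naming acceptance criterion as a runnable check.
# Pattern taken from the example criteria; function names are made up.
import re
from datetime import date

def summary_filename(run_date):
    """Expected payroll summary name for a given run date."""
    return f"payroll-summary-{run_date.isoformat()}.xlsx"

def matches_naming_rule(name):
    """Criterion: file is named payroll-summary-YYYY-MM-DD.xlsx."""
    pattern = r"payroll-summary-\d{4}-\d{2}-\d{2}\.xlsx"
    return re.fullmatch(pattern, name) is not None
```

The point is not to automate sign-off. It is that a criterion precise enough to script is also precise enough for the ops manager to tick off by eye.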

Step five: name what is out of scope

The most valuable paragraph in a scope document is the one that lists what the build will not do. Exclusions prevent the midnight Slack message six weeks in: “but I thought it would also handle…”

Good exclusions are specific. Not “anything not listed above” — that is a lawyer’s exclusion, not a useful one. Instead:

  • This build does not submit claims. It prepares a validated file for the claims officer to review and submit.
  • This build does not change the rostering system. It reads exports from it.
  • This build does not handle public holiday shift loading calculations. Those remain manual until a later phase.

Now everyone knows. The argument happens during scoping, not during delivery.

What a survivable scope buys you

A scope written this way is longer than the one-page summary most providers are used to. That is fine. The cost of spending an extra week on scoping is always smaller than the cost of rebuilding an automation in month three.

It also changes the shape of the engagement. A vendor who is willing to shadow work, name stable cores, write input and output specifications, and list exclusions is a vendor who intends to finish the job. A vendor who will not do those things is telling you, in advance, where the overruns are going to come from.

If you are staring at a quote from a previous engagement that went sideways, the first thing to look at is the scope document. Nine times in ten, the answer is there. The work was never written down clearly enough to be built clearly.


If you want a second set of eyes on a scope before you sign anything, the fixed-fee opportunity assessment exists for exactly this. Start at /services/assessment or use the ROI calculator to sanity-check what an automation should be worth to you before scoping it at all.

If this resonated, the next step is one conversation.

Book a free 20-minute process review and we will apply the thinking in this piece to your actual operation.