
Drift

Handle changes in inputs, models, policies, evidence chains, and attacker behavior over time.

Drift means risk changes after the first scan.

Your product may receive new file types. A model may change. OCR may produce different text. A policy may become stricter. Attackers may adapt. A reviewer may add new evidence.

When that happens, do not rely on an old scan result as if the workflow were unchanged.


Risk changes when content, models, policies, or evidence chains change.

  • Input drift: users start submitting different formats, larger files, or new attack patterns.
  • Model drift: a new model, prompt, tool, or agent policy changes output behavior.
  • Policy drift: your app changes what should be allowed, reviewed, redacted, or blocked.
  • Evidence drift: OCR, extraction, summaries, and review notes diverge from the original item.

Handle drift in three steps:

  • Detect change: a new file type, model, prompt, route, policy, or claim state.
  • Rescan or review: use the same session and scan group when the evidence is related.
  • Update routing: store new scan IDs and keep the review decision separate.
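The three steps above can be sketched as a single pass. This is a minimal sketch, not the Mighty SDK: the event shape, trigger names, and the injected `scan` callable are all assumptions for illustration.

```python
import uuid

def handle_drift_event(event, scan):
    """Detect change -> rescan -> update routing, in one pass.

    `event` describes what changed; `scan` is any callable that performs
    the actual scan request (stubbed in tests; hypothetical shape).
    """
    # Detect change: act only on the drift triggers this page lists.
    triggers = {"new_file_type", "new_model", "new_prompt",
                "new_route", "new_policy", "new_claim_state"}
    if event["kind"] not in triggers:
        return None

    # Rescan or review: keep the session, and reuse the scan group
    # when the material is related to an earlier item.
    result = scan(
        content=event["content"],
        session_id=event["session_id"],
        scan_group_id=event.get("scan_group_id") or str(uuid.uuid4()),
        request_id=str(uuid.uuid4()),  # always new per request
    )

    # Update routing: store the new scan ID; keep any human review
    # decision separate from the automated action.
    return {"scan_id": result["scan_id"], "action": result["action"]}
```

Injecting the scan callable keeps the drift logic testable without network access.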

Types Of Drift

Drift type, what changes, and an example of each:

  • Input drift: users submit new formats, larger documents, more images, or different languages. Example: a claims flow starts receiving multi-page PDF packets instead of one damage photo.
  • Modality drift: the same workflow shifts from text to image, PDF, audio transcript, or extracted fields. Example: a support chat adds file attachments.
  • Model drift: the model, prompt, tools, system message, or output format changes. Example: a chat assistant starts generating claim recommendations instead of summaries.
  • Policy drift: the product changes tolerance, routing, or review standards. Example: public output moves from standard to strict data sensitivity.
  • Evidence drift: derived text or summaries diverge from the original item. Example: OCR misses hidden instructions in a PDF, then an AI summary treats the extracted text as clean.
  • Adversary drift: attackers learn where the controls are and change their payloads. Example: a prompt injection moves from chat text into an image or PDF layer.
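One way to detect several of these drift types is to diff a snapshot of the workflow's configuration before and after a change. The snapshot keys below are illustrative, not a Mighty API shape.

```python
def classify_drift(before: dict, after: dict) -> list[str]:
    """Name which drift types apply between two workflow snapshots.

    Snapshot keys (input_formats, model, policy_version, ...) are
    assumptions for this sketch, not fields from the Mighty API.
    """
    kinds = []
    if before.get("input_formats") != after.get("input_formats"):
        kinds.append("input")
    if before.get("modalities") != after.get("modalities"):
        kinds.append("modality")
    if any(before.get(k) != after.get(k)
           for k in ("model", "prompt_version", "tools")):
        kinds.append("model")
    if before.get("policy_version") != after.get("policy_version"):
        kinds.append("policy")
    if before.get("evidence_hash") != after.get("evidence_hash"):
        kinds.append("evidence")
    return kinds
```

Adversary drift has no configuration signal; it shows up in the metrics discussed later on this page rather than in a snapshot diff.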

When To Rescan

Rescan or send to review when:

  • A file is transformed into OCR text, extracted fields, or a summary.
  • A model output will be shown to a user or used by automation.
  • A user edits and resubmits evidence.
  • A new model, prompt, tool, or agent policy ships.
  • Your routing policy changes from tolerant to strict.
  • A reviewer adds evidence or marks a result as disputed.
  • A scan returns indeterminate and the workflow receives new material.
  • A batch runs over older evidence after a policy update.
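The rescan triggers above can be encoded as a simple predicate. The event names here are one illustrative name per bullet, chosen for this sketch; they are not Mighty API values.

```python
# One hypothetical event name per rescan trigger listed above.
RESCAN_TRIGGERS = {
    "derived_text",        # file turned into OCR text, fields, or a summary
    "output_exposed",      # model output shown to a user or used by automation
    "evidence_edited",     # user edits and resubmits evidence
    "model_shipped",       # new model, prompt, tool, or agent policy ships
    "policy_tightened",    # routing policy moves from tolerant to strict
    "review_update",       # reviewer adds evidence or disputes a result
    "new_material",        # indeterminate scan, then new material arrives
    "batch_after_policy",  # batch over older evidence after a policy update
}

def should_rescan(event_kind: str) -> bool:
    """Return True when the event warrants a rescan or review."""
    return event_kind in RESCAN_TRIGGERS
```

Keeping the triggers in one set makes it easy to audit which events your product treats as drift.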

What To Keep Stable

How to use each field during drift:

  • session_id: keep the chat, claim, case, customer, batch, or agent run together.
  • scan_group_id: reuse it for the original item and derived scans from that item.
  • request_id: use a new value for each new scan request.
  • metadata: record model version, prompt version, workflow version, reviewer ID, and policy version when available.

Use a new scan_group_id when the material is a new item. Reuse the existing group when the material is derived from the same item.

Drift Routing

Recommended route for each drift event:

  • New OCR output from a scanned PDF: scan the OCR text with the same scan_group_id.
  • Model output from a scanned prompt: scan the output with scan_phase=output and the input's scan_group_id.
  • New model or prompt version: rescan public output paths and compare WARN or BLOCK rates.
  • New policy version: rescan high-risk pending items or route old scans to review.
  • New evidence in a claim: scan the new item with a new group, then attach it to the same session_id.
  • Weak or conflicting signals: route indeterminate results to review or request more evidence.
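The routing table can live as a plain lookup in your code. The event names and returned keys below are assumptions for this sketch; only scan_phase and the group-reuse rule come from the table above.

```python
def route_for(event: str) -> dict:
    """Map a drift event to a scan plan (illustrative shapes only)."""
    routes = {
        "ocr_output":     {"scan": True, "reuse_group": True},
        "model_output":   {"scan": True, "reuse_group": True,
                           "scan_phase": "output"},
        "model_version":  {"scan": True, "reuse_group": False,
                           "compare_rates": True},
        "policy_version": {"scan": True, "reuse_group": False,
                           "review_backlog": True},
        "new_evidence":   {"scan": True, "reuse_group": False,
                           "same_session": True},
        "indeterminate":  {"scan": False, "review": True},
    }
    return routes[event]
```

A lookup like this keeps routing decisions in one place, so a policy change is a one-line diff rather than scattered conditionals.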

Drift Metrics

Track these fields over time:

  • action rate by workflow.
  • WARN and BLOCK rate by model version.
  • indeterminate rate by modality.
  • risk_score distribution by workflow.
  • Review outcome versus Mighty action.
  • False positive and false negative feedback from reviewers.
  • SCU usage by workflow after mode changes.

These metrics help you decide when to adjust mode, focus, profile, data_sensitivity, async usage, or review routing.
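Most of these metrics reduce to counting scan outcomes grouped by some key. A minimal sketch, assuming each scan record is a dict with 'workflow' and 'action' keys; the record shape is an assumption, not a Mighty response format.

```python
from collections import Counter, defaultdict

def action_rates(scans: list[dict]) -> dict:
    """Per-workflow rate of each action (e.g. ALLOW, WARN, BLOCK,
    indeterminate). Record keys are illustrative."""
    counts = defaultdict(Counter)
    for s in scans:
        counts[s["workflow"]][s["action"]] += 1
    return {
        wf: {action: n / sum(c.values()) for action, n in c.items()}
        for wf, c in counts.items()
    }
```

Swap the grouping key ('workflow') for a model version or modality field to get the other rate breakdowns listed above.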

Common Mistakes

  • Treating an old ALLOW as valid after OCR, model output, or policy changes.
  • Creating a new session for each derived scan, which breaks audit history.
  • Reusing a scan_group_id for unrelated evidence.
  • Measuring only blocked scans and ignoring WARN or indeterminate.
  • Letting model output into a public response because the original prompt was already scanned.
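The first mistake, trusting a stale ALLOW, can be caught with a version check against the metadata suggested earlier on this page. The record shape is an assumption for this sketch.

```python
def allow_is_stale(scan: dict, current: dict) -> bool:
    """True when an old ALLOW should not be trusted.

    Compares the versions recorded at scan time against the current
    workflow. Field names mirror the metadata this page suggests
    recording; the record shape itself is hypothetical.
    """
    if scan.get("action") != "ALLOW":
        return False  # non-ALLOW results are routed elsewhere anyway
    drift_keys = ("content_hash", "model_version",
                  "prompt_version", "policy_version")
    return any(scan.get(k) != current.get(k) for k in drift_keys)
```

Run this check before reusing any cached scan result; if it returns True, rescan or send the item to review instead.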

Next step

Ready to scan real traffic?

Create an API key, keep it on your server, then wire Mighty into the workflow that handles untrusted material.

AI-Agent Prompt


Paste this into Cursor, Codex, Claude Code, or Windsurf.

Add Mighty drift handling to this product.

Requirements:
- Identify where content changes after the first scan.
- Rescan OCR output, extracted fields, model output, agent output, and edited evidence.
- Keep related derived scans on the same scan_group_id.
- Keep the wider chat, claim, case, batch, or agent run on the same session_id.
- Add metadata for model version, prompt version, workflow version, and policy version when available.
- Route indeterminate results to review or request more evidence.
- Track action rate, WARN rate, BLOCK rate, indeterminate rate, and review outcome over time.

Acceptance criteria:
- Old ALLOW results are not reused after meaningful content, model, or policy changes.
- Tests cover rescanning derived output.
- Review history can show how a result changed over time.