# Mighty Docs

Add Mighty to chat apps, upload flows, OCR pipelines, AI fraud review, and agentic systems.

Source URL: https://trymighty.ai/docs

Mighty is an input trust layer for security, safety, and multimodal inspection. Add it before chat messages, uploads, OCR output, model output, and agent tool output become trusted workflow data.

Mighty is not a misinformation layer, truth oracle, or fact checker. It helps stop prompt injection, data exfiltration attempts, unsafe output, poisoned OCR, hidden document risk, and steganography-style hidden payload attempts before they reach the AI layer.

Mighty flow: untrusted material -> POST /v1/scan -> route ALLOW, WARN, or BLOCK before the material is trusted.

Choose the guide that matches your surface: chat text, file upload, OCR output, image evidence, model output, or production controls.

## Core Mental Model

Start with the use case, then place Mighty at the first trust boundary.

Use `scan_phase=input` for material submitted by a user, partner, vendor, claimant, customer, or upstream system.

Use `scan_phase=output` for material generated by your model, agent, OCR pipeline, extraction pipeline, or automation.

Reuse `scan_group_id` when an input scan and output scan belong to the same workflow. Reuse `session_id` when the user, claim, chat, or workflow session continues over time.
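
The identifier rules above can be sketched as a small payload builder. This is a minimal sketch, not an official SDK: the field names come from the basic call below, while the helper name and the example `grp-`/`sess-` values are hypothetical.

```python
def build_scan_payload(content, scan_phase, scan_group_id=None, session_id=None):
    """Assemble a /v1/scan request body, reusing workflow identifiers.

    scan_phase is "input" for submitted material and "output" for
    model-, OCR-, or agent-generated material.
    """
    payload = {
        "content": content,
        "content_type": "text",
        "scan_phase": scan_phase,
        "mode": "secure",
        "focus": "both",
        "profile": "balanced",
        "data_sensitivity": "standard",
    }
    if scan_group_id is not None:
        payload["scan_group_id"] = scan_group_id  # same workflow, same group
    if session_id is not None:
        payload["session_id"] = session_id  # same ongoing session
    return payload


# An input scan and an output scan from the same claim share one
# scan_group_id; the ongoing claim session keeps one session_id.
input_scan = build_scan_payload(
    "claim note from the customer", "input",
    scan_group_id="grp-claim-123", session_id="sess-claim-123")
output_scan = build_scan_payload(
    "model-generated claim summary", "output",
    scan_group_id="grp-claim-123", session_id="sess-claim-123")
```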

## The Basic Call

```bash
curl -X POST https://gateway.trymighty.ai/v1/scan \
  -H "Authorization: Bearer $MIGHTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Please process this claim note before the adjuster sees it.",
    "content_type": "text",
    "scan_phase": "input",
    "mode": "secure",
    "focus": "both",
    "profile": "balanced",
    "data_sensitivity": "standard"
  }'
```

## Read The Result

| Action | Meaning | Common routing |
| --- | --- | --- |
| `ALLOW` | No material risk was found. | Continue the workflow. |
| `WARN` | Something deserves review or extra controls. | Continue with friction, queue review, or request more evidence. |
| `BLOCK` | The risk is high enough to stop the action. | Stop the workflow and show a safe message. |

`action` is always one of `ALLOW`, `WARN`, or `BLOCK`. The forensics block can return `authenticity.verdict = "indeterminate"` when evidence is weak, incomplete, or conflicting; that is a verdict on the file, not a routing action. Treat `indeterminate` as a review route, not as proof.
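
The routing table and the `indeterminate` rule can be combined into one dispatch function. A minimal sketch, assuming the response nests the verdict under a `forensics.authenticity.verdict` path; the function name and route labels are illustrative, not part of the API.

```python
def route_scan(result):
    """Map a /v1/scan result to a workflow route.

    BLOCK stops the workflow; WARN and an indeterminate file verdict
    both go to review; ALLOW continues.
    """
    action = result.get("action")
    verdict = (result.get("forensics", {})
                     .get("authenticity", {})
                     .get("verdict"))
    if action == "BLOCK":
        return "stop"
    if action == "WARN" or verdict == "indeterminate":
        # indeterminate is a verdict on the file, not a routing action:
        # route to human review even when the action is ALLOW.
        return "review"
    if action == "ALLOW":
        return "continue"
    raise ValueError(f"unexpected action: {action!r}")
```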

## AI Fraud Entry Points

AI fraud can enter a workflow through:

- AI-generated damage photos.
- Altered invoices or repair estimates.
- Synthetic supporting documents.
- Hidden instructions in PDFs, images, or OCR text.
- Poisoned OCR output that tells an AI system to ignore rules.
- Unsafe model output before a user sees it.

Mighty gives you a consistent place to inspect this material and route risky cases before downstream automation trusts it.

## Drift Matters

Risk changes when users submit new formats, OCR produces derived text, models change output behavior, tools return new content, or policy moves from tolerant to strict.

Rescan derived output with the same `scan_group_id`. Keep the wider chat, claim, batch, or agent run on the same `session_id`.
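
The rescan rule above can be sketched as a helper that derives an output-phase payload from a stored parent scan record. The helper name and record shape are assumptions for illustration; the identifier reuse follows the rule stated here.

```python
def rescan_payload(derived_text, parent_scan):
    """Build an output-phase rescan for derived text (e.g. OCR output),
    keeping the parent scan's scan_group_id and session_id so the
    derived material stays linked to the original workflow.
    """
    return {
        "content": derived_text,
        "content_type": "text",
        "scan_phase": "output",  # derived material is generated, not submitted
        "scan_group_id": parent_scan["scan_group_id"],
        "session_id": parent_scan["session_id"],
    }
```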

## Give This To Your AI Coding Agent

### Implement Mighty in my product

```text
You are adding Mighty to an existing product.

Goal:
Add a server-side scan step before user-submitted material reaches AI, OCR, storage, or workflow automation.

Use:
- API base URL: https://gateway.trymighty.ai
- Endpoint: POST /v1/scan
- Env var: MIGHTY_API_KEY
- Never expose the API key to the browser.

Read these docs first:
- /docs/quickstart
- /docs/use-cases
- /docs/concepts/how-mighty-works
- /docs/concepts/configs
- The guide that matches the workflow

Implementation rules:
1. Scan user input with scan_phase=input.
2. Store scan_id, request_id, session_id, and scan_group_id.
3. Route ALLOW, WARN, BLOCK.
4. Reuse scan_group_id when scanning model output or extracted output from the same workflow.
5. Use data_sensitivity=tolerant only when expected PII should not block the workflow.
6. Add tests for ALLOW, WARN, BLOCK, 400, 402, 413, and 429 paths.
7. Rescan derived output when OCR, model, agent, or policy drift changes the trust boundary.

Acceptance criteria:
- API key only exists on the server.
- Scan errors use safe fallback behavior.
- Logs include request_id and scan_id.
- Risky material does not reach the downstream AI path without routing logic.
```

## Production Checklist

- Keep `MIGHTY_API_KEY` on the server.
- Generate a unique `request_id` per scan or let Mighty generate one.
- Persist `scan_id`, `scan_group_id`, `session_id`, and `action`.
- Route `WARN` to review or a constrained flow.
- Route `BLOCK` to a safe stop.
- Use async for deep image or PDF checks when latency matters.
- Store raw files according to your own retention policy.
- Do not tell users that Mighty proved fraud. Say it flagged risk for review.
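
The "safe fallback" items above can be sketched as a status-code router. The status codes match the test list in the agent prompt (400, 402, 413, 429); the retry-on-429 policy and route labels are assumptions, not documented behavior.

```python
# Fail-safe routing when the /v1/scan call itself fails.
RETRYABLE = {429}                 # rate limited: back off and retry
CLIENT_ERRORS = {400, 402, 413}   # bad request, payment required, payload too large

def fallback_route(status_code):
    """Choose a safe route when a scan cannot be completed.

    Never silently ALLOW unscanned material: anything that could not
    be scanned goes to review instead of the downstream AI path.
    """
    if status_code in RETRYABLE:
        return "retry_later"
    if status_code in CLIENT_ERRORS:
        return "review"
    return "review"  # unknown failures also fail closed
```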
