Use With AI
Copy prompts, Markdown exports, llms.txt, and integration instructions for AI coding agents.
This page is built for Cursor, Codex, Claude Code, Windsurf, and other AI coding agents.
Use these AI-readable files:
- /llms.txt
- /llms-full.txt
- /openapi/mighty-api.yaml
- Per-page Markdown from the Markdown button on every docs page.
Next step
Give your AI agent the docs first.
Use the prompt that matches your workflow. It tells the agent which Mighty fields to use, where to place the server-side scan, and what tests to add.
Master Implementation Prompt
AI-ready prompt
Implement Mighty from the docs
Paste this into Cursor, Codex, Claude Code, or Windsurf.
You are adding Mighty to this product.
First read:
- /docs/quickstart
- /docs/integrate/multimodal-support
- /docs/concepts/configs
- /docs/concepts/sessions
- /docs/concepts/billing-scu
- /docs/api-reference/v1-scan
Then inspect the codebase and map every untrusted surface:
- chat input
- file upload
- image evidence
- OCR or IDP output
- model output
- agent tool output
Implementation rules:
1. Keep MIGHTY_API_KEY on the server.
2. Use POST https://gateway.trymighty.ai/v1/scan.
3. Use scan_phase=input for submitted material.
4. Use scan_phase=output for model, OCR, IDP, agent, or automation output.
5. Reuse scan_group_id across input, file, OCR, output, and review scans from the same item.
6. Use data_sensitivity=tolerant when normal business PII is expected.
7. Use redacted_output only when Mighty returns it and product policy allows it.
8. Route ALLOW, WARN, and BLOCK explicitly.
9. Handle HTTP 400, 402, 409, 413, and 429, plus the pending, complete, and failed scan states.
10. Do not claim Mighty proves fraud. Say it flags suspicious evidence for review.
Acceptance criteria:
- Every risky surface has a server-side scan before trust.
- API key never reaches client code.
- Tests cover ALLOW, WARN, BLOCK, redacted_output, 402, 413, and 429.
- Logs include scan_id, request_id, scan_group_id, session_id, action, and risk_score.
- Review wording is honest and does not make fraud conclusions by itself.
Chat App Prompt
AI-ready prompt
Chat app implementation prompt
Paste this into Cursor, Codex, Claude Code, or Windsurf.
Add Mighty to this chat app.
Goal:
Scan user input before the model runs, and scan model output before users see it on strict routes.
Requirements:
- Use MIGHTY_API_KEY only on the server.
- Add a helper for POST https://gateway.trymighty.ai/v1/scan.
- Input scan body: content_type=text, scan_phase=input, mode=secure, focus=both.
- Output scan body: content_type=text, scan_phase=output, mode=secure, focus=both, profile=ai_safety.
- Reuse scan_group_id from input scan for output scan.
- BLOCK input must not call the model.
- WARN input must route to review, friction, or constrained generation.
- BLOCK output must not be shown unless redacted_output is available.
Acceptance criteria:
- Tests cover ALLOW, WARN, BLOCK for input.
- Tests cover ALLOW, WARN, BLOCK for output.
- Logs include scan_id and scan_group_id.
File Upload Prompt
AI-ready prompt
File upload implementation prompt
Paste this into Cursor, Codex, Claude Code, or Windsurf.
Add Mighty to this file upload flow.
Requirements:
- Do not send files directly from the browser to Mighty.
- Proxy uploads through a server route.
- Forward the file to POST https://gateway.trymighty.ai/v1/scan as multipart form data.
- Use content_type=auto, scan_phase=input, mode=secure, focus=both, data_sensitivity=tolerant.
- Route ALLOW to normal storage and processing.
- Route WARN to quarantine and human review.
- Route BLOCK to reject or quarantine.
- Store scan_id, request_id, scan_group_id, action, risk_score, threats, and content_type_detected.
Acceptance criteria:
- API key is never in client code.
- Tests cover missing file, Mighty unavailable, ALLOW, WARN, BLOCK, 402, 413, and 429.
OCR And IDP Prompt
AI-ready prompt
OCR and IDP implementation prompt
Paste this into Cursor, Codex, Claude Code, or Windsurf.
Add Mighty to this OCR or IDP pipeline.
Requirements:
- Scan the original file before OCR when possible.
- Scan extracted text with content_type=text and scan_phase=input.
- Use mode=secure, focus=both, data_sensitivity=tolerant.
- Reuse scan_group_id from file scan when scanning extracted text.
- If extraction or summarization produces model output, scan it with scan_phase=output.
- Route WARN to review before writing trusted fields.
- Route BLOCK to stop automation.
Acceptance criteria:
- Extracted fields are not trusted until scan routing passes.
- Review queue includes scan_id, threats, and extracted text reference.
- Tests cover poisoned OCR output.
Output Scanning Prompt
AI-ready prompt
Output scanning implementation prompt
Paste this into Cursor, Codex, Claude Code, or Windsurf.
Add Mighty output scanning before generated content reaches users or tools.
Requirements:
- Scan generated text with scan_phase=output.
- Include scan_group_id from the matching input scan.
- Include original_prompt when available.
- Use profile=ai_safety for public AI output.
- Route ALLOW to show output.
- Route WARN to safe fallback plus review.
- Route BLOCK to redacted_output if present, otherwise block.
Acceptance criteria:
- No strict output route returns unscanned model text.
- Tests cover redacted_output and blocked output.
- Logs connect input and output by scan_group_id.
Review Wording For AI Fraud
Use:
- "Flagged for review."
- "Suspicious evidence signal."
- "Indeterminate evidence. Request more proof."
Do not use:
- "Proved fraud."
- "Fake document."
- "Fraud confirmed."
Mighty helps route risky material. Final decisions belong to your workflow and review process.
