FAQ

Clear answers about what Mighty is, what it is not, and how to use it before the AI layer.

Is Mighty A Misinformation Or Truth Scoring Layer?

No.

Mighty is not a truth oracle, fact checker, misinformation detector, or content truthiness score.

Mighty is a security, safety, and multimodal inspection layer. It helps your app decide whether untrusted material should reach AI, OCR, storage, agents, automation, or users.

What Does Mighty Stop?

Mighty helps detect and route security and safety risks such as:

  • Prompt injection.
  • Instruction injection hidden in text, files, images, PDFs, OCR output, or tool output.
  • Attempts to exfiltrate secrets or private data.
  • Unsafe model output before users see it.
  • Poisoned OCR or IDP output.
  • Hidden document risk.
  • Steganography-style hidden payload attempts, when detected.
  • Suspicious AI-generated or altered evidence signals.

The route your product takes is what stops the risk. Mighty returns three distinct fields:

  • action is one of ALLOW, WARN, or BLOCK — switch on this for routing.
  • scan_status is one of pending, complete, or failed — async lifecycle, separate from routing.
  • authenticity.verdict is one of likely_real, likely_ai_generated, ai_generated, or indeterminate — forensics on the file itself, separate from routing.

Your app routes on action. indeterminate is a forensics verdict, not a routing action.
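The three-field distinction can be sketched as a routing switch. This is a minimal sketch: the field names follow the response shape described above, but the `route` helper and its return values are hypothetical stand-ins for your app's own routing code.

```python
def route(result: dict) -> str:
    """Switch ONLY on action; the other two fields are context, not routing."""
    action = result["action"]  # one of ALLOW, WARN, BLOCK
    if action == "BLOCK":
        return "rejected"            # never reaches the AI layer
    if action == "WARN":
        return "queued_for_review"   # surface for human or policy review
    return "forwarded"               # ALLOW

result = {
    "action": "WARN",
    "scan_status": "complete",                     # async lifecycle only
    "authenticity": {"verdict": "indeterminate"},  # forensics, not routing
}
print(route(result))  # WARN routes to review even though the verdict is indeterminate
```

Note that `authenticity.verdict` never appears in the switch: `indeterminate` is read by your review tooling, not by the router.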

Mighty is a security checkpoint, not a truth oracle. Use it before AI, OCR, storage, agents, automation, or users trust untrusted material.

What Is Deterministic Input Sanitization?

It means you put a server-side scan step before the AI layer.

user input or file -> Mighty scan -> route result -> AI layer

The model does not decide whether the input is safe to read. Your server scans first, then only passes routed material into the model, OCR system, agent, tool, or workflow.
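The scan-first ordering can be sketched as a server-side handler. `mighty_scan` below is a hypothetical placeholder for the real client call, included so the control flow is runnable; the point of the sketch is the order of operations, not the call itself.

```python
def mighty_scan(material: bytes) -> dict:
    """Hypothetical stand-in for the real Mighty scan call."""
    return {"action": "ALLOW", "scan_status": "complete"}

def call_model(material: bytes) -> str:
    """Placeholder for the AI layer (model, OCR, agent, tool, workflow)."""
    return "model_response"

def handle_untrusted(material: bytes) -> str:
    # 1. Scan first: the model never sees unscanned material.
    result = mighty_scan(material)
    # 2. Treat a pending or failed scan as not-yet-safe (fail closed).
    if result["scan_status"] != "complete":
        return "retry_or_fail_closed"
    # 3. Route on action before anything reaches the AI layer.
    if result["action"] == "BLOCK":
        return "blocked"
    if result["action"] == "WARN":
        return "review"
    return call_model(material)  # only ALLOW-routed material gets here
```

The key design choice is step 2: a scan that has not completed is treated the same as unsafe input, so an outage never silently becomes an ALLOW.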

Where Should Mighty Sit?

Put Mighty before the first trust boundary:

  • Chat prompt: before the model call.
  • Uploaded file: before storage, OCR, extraction, or indexing.
  • OCR output: before extracted fields become trusted workflow data.
  • Model output: before users or downstream tools see it.
  • Agent tool output: before tool output enters model context.
  • Damage photo: before the claim, repair, or payment decision.
  • Invoice or estimate: before approval, payment, or an AI summary.
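Taking the uploaded-file surface as an example, the scan sits in front of the storage write. This is a sketch under stated assumptions: `scan_file` and `save_to_storage` are hypothetical helpers standing in for your Mighty client and your storage layer.

```python
def scan_file(data: bytes) -> dict:
    """Hypothetical stand-in for the Mighty file scan call."""
    return {"action": "BLOCK", "threats": ["prompt_injection"]}

def save_to_storage(data: bytes) -> str:
    return "stored"  # placeholder for the real storage write

def on_upload(data: bytes) -> str:
    result = scan_file(data)      # scan BEFORE storage, OCR, or indexing
    if result["action"] == "BLOCK":
        return "upload_rejected"  # the file never enters storage
    save_to_storage(data)
    return "upload_accepted"
```

Because the scan runs before the write, a blocked file never exists in storage for a later OCR or indexing job to pick up.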

Does Mighty Prove Fraud?

No.

Mighty can flag suspicious evidence, hidden instructions, unsafe output, or authenticity signals. Your team and product policy make the final business decision.

Use this wording:

  • Mighty flagged this for review.
  • Mighty blocked this route.
  • This result needs more evidence.
  • This result is indeterminate.

Do not say:

  • Mighty proved fraud.
  • Mighty proved the document is real.
  • Mighty proved the statement is true.

Does Mighty Replace App Security?

No.

Keep your normal security controls:

  • Authentication.
  • Authorization.
  • Rate limits.
  • File size limits.
  • Malware scanning when required.
  • Audit logs.
  • Human review for high-risk decisions.

Mighty handles the untrusted material inspection layer before AI and automation trust it.

Does Mighty Support Multimodal Inputs?

Yes.

Mighty supports one scan contract across:

  • Text.
  • Images.
  • PDFs.
  • Documents.
  • OCR and IDP output.
  • Model output.
  • Agent tool output.
  • Audio transcripts today (audio file scanning is in closed beta).

Use Multimodal Support to choose settings.

What Should My App Store?

Store these fields when returned:

  • scan_id
  • request_id
  • scan_group_id
  • session_id
  • action
  • risk_score
  • risk_level
  • threats
  • content_type_detected
  • redacted_output

This gives support, billing, review, and audit teams enough context to understand the route.
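A minimal persistence sketch for the list above. The field names come from the list; the record-building helper and its name are assumptions, and the "when returned" rule is implemented by keeping only keys actually present in the response.

```python
AUDIT_FIELDS = [
    "scan_id", "request_id", "scan_group_id", "session_id",
    "action", "risk_score", "risk_level", "threats",
    "content_type_detected", "redacted_output",
]

def to_audit_record(result: dict) -> dict:
    """Keep only the audit fields that the scan actually returned."""
    return {k: result[k] for k in AUDIT_FIELDS if k in result}

record = to_audit_record({"scan_id": "s_1", "action": "WARN", "risk_score": 62})
# record == {"scan_id": "s_1", "action": "WARN", "risk_score": 62}
```

Storing the filtered record alongside your own request log is enough for support, billing, review, and audit teams to reconstruct why a given route happened.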

What Should I Give My AI Coding Agent?

Use this AI-ready prompt to explain Mighty correctly. Paste it into Cursor, Codex, Claude Code, or Windsurf.

Use Mighty as a security, safety, and multimodal inspection layer before AI.

Do not implement Mighty as:
- a truth oracle
- a fact checker
- a misinformation classifier
- a source-of-truth system

Implement Mighty as:
- server-side input scanning before the AI layer
- output scanning before users or tools see generated text
- multimodal scanning for text, images, PDFs, documents, OCR output, model output, and agent tool output
- routing on action (ALLOW, WARN, BLOCK), scan_status lifecycle (pending, complete, failed), and authenticity.verdict (likely_real, likely_ai_generated, ai_generated, indeterminate)

Security risks to route:
- prompt injection
- instruction injection
- data exfiltration attempts
- secret leakage
- poisoned OCR or IDP output
- unsafe model output
- steganography-style hidden payload attempts when detected

Acceptance criteria:
- Untrusted material is scanned before AI, OCR, storage, agents, or automation trusts it.
- The app does not claim Mighty proves fraud or truth.
- The app stores scan_id, request_id, scan_group_id, session_id, action, and risk_score.
- Tests cover ALLOW, WARN, BLOCK, scan failure, and output scanning.
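The final acceptance criterion can be exercised with a small test sketch. The `route` helper here is a hypothetical stand-in for your app's routing code; the four assertions mirror the cases the criteria require.

```python
def route(result: dict) -> str:
    """Hypothetical routing helper matching the acceptance criteria."""
    if result.get("scan_status") == "failed":
        return "fail_closed"  # a failed scan never defaults to ALLOW
    action = result.get("action")
    if action == "BLOCK":
        return "blocked"
    if action == "WARN":
        return "review"
    return "allowed"

# One test case per required route, plus scan failure:
assert route({"scan_status": "complete", "action": "ALLOW"}) == "allowed"
assert route({"scan_status": "complete", "action": "WARN"}) == "review"
assert route({"scan_status": "complete", "action": "BLOCK"}) == "blocked"
assert route({"scan_status": "failed"}) == "fail_closed"
```

Output scanning gets the same treatment: run generated text back through the scan and apply the same switch before users or tools see it.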

Next step

Ready to scan real traffic?

Create an API key, keep it on your server, then wire Mighty into the workflow that handles untrusted material.