Damage Photo AI Fraud Review

Scan images for risk and authenticity signals before claims, repair, insurance, or review workflows trust them.

Goal

Scan images before automated decisions use them.

This guide fits damage photos, claim photos, repair proof, delivery photos, product condition photos, identity evidence, and other image-heavy workflows.

Architecture

  1. Upload the image to your server.
  2. Send it to Mighty with content_type=image.
  3. Use focus=both to check standard threats and AI authenticity signals.
  4. Route suspicious or indeterminate images to review.
  5. Keep final fraud decisions in your review process.
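Steps 1–3 can be sketched server-side as below. This is a minimal sketch, not a definitive client: the endpoint and field names are taken from the request example in the next section, Node 18+ globals (fetch, FormData, Blob) are assumed, and error handling is deliberately thin.

```typescript
// Build the multipart form for a damage-photo scan. Field names and
// values mirror the documented request example.
export function buildScanForm(file: Blob, filename: string): FormData {
  const form = new FormData();
  form.append("file", file, filename);
  form.append("content_type", "image");
  form.append("scan_phase", "input");
  form.append("mode", "secure");
  form.append("focus", "both"); // standard threats + AI authenticity signals
  form.append("profile", "strict");
  form.append("data_sensitivity", "tolerant");
  form.append("context", "damage_photo_review");
  return form;
}

// Send the scan request. The API key stays on the server, never in
// client code.
export async function scanDamagePhoto(file: Blob, filename: string) {
  const res = await fetch("https://gateway.trymighty.ai/v1/scan", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.MIGHTY_API_KEY}` },
    body: buildScanForm(file, filename),
  });
  if (!res.ok) throw new Error(`Mighty scan failed: ${res.status}`);
  return res.json();
}
```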

Request And Response

curl -X POST https://gateway.trymighty.ai/v1/scan \
  -H "Authorization: Bearer $MIGHTY_API_KEY" \
  -F "file=@./damage-photo.jpg" \
  -F "content_type=image" \
  -F "scan_phase=input" \
  -F "mode=secure" \
  -F "focus=both" \
  -F "profile=strict" \
  -F "data_sensitivity=tolerant" \
  -F "context=damage_photo_review"

threats is an array of objects, each with a category, a confidence score, an optional evidence excerpt, and a human-readable reason. The authenticity block carries the structured forensics verdict (likely_real | likely_ai_generated | ai_generated | indeterminate), which is distinct from the routing action.
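The relevant response fields can be modeled as follows. These types are inferred from the description above and are an assumption about the payload shape; the real response may carry additional fields.

```typescript
// Forensics verdict values documented for the authenticity block.
export type AuthenticityVerdict =
  | "likely_real"
  | "likely_ai_generated"
  | "ai_generated"
  | "indeterminate";

export interface ScanThreat {
  category: string;
  confidence: number;
  evidence?: string; // optional excerpt
  reason: string;    // human-readable explanation
}

export interface ScanResult {
  action: "ALLOW" | "WARN" | "BLOCK"; // routing decision
  authenticity?: { verdict?: AuthenticityVerdict };
  threats: ScanThreat[];
}

// The forensics verdict is evidence, not a routing decision; use it to
// decide whether to ask for more proof.
export function isIndeterminate(scan: ScanResult): boolean {
  return scan.authenticity?.verdict === "indeterminate";
}
```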

Routing Logic

export function routeDamagePhoto(scan: { action: string; authenticity?: { verdict?: string } }) {
  if (scan.action === "ALLOW") return "continue_claim_intake";
  if (scan.action === "WARN") return "request_review_or_more_evidence";
  if (scan.action === "BLOCK") return "stop_automated_decision";

  // Unknown or missing actions always fall back to a human.
  return "manual_review";
}

Honest AI Fraud Wording

Use wording like:

  • "Mighty flagged this image for review."
  • "This image has suspicious evidence signals."
  • "The evidence is indeterminate. Ask for more proof."

Avoid wording like:

  • "This photo is fake."
  • "This claimant committed fraud."
  • "Mighty proved fraud."

Mighty helps route risky material. It does not replace your claims, compliance, legal, or fraud investigation process.

Async Deep Review

Use async=true and mode=comprehensive when a photo is high value, or when the added latency can be absorbed by a review queue.

{
  "content_type": "image",
  "scan_phase": "input",
  "mode": "comprehensive",
  "async": true,
  "webhook_url": "https://example.com/api/mighty/webhook"
}
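A small helper can build the async request body shown above. The shape mirrors the JSON example; the webhook URL must point at a server-side endpoint you control.

```typescript
// Request body for an async deep-review scan, matching the documented
// JSON example.
export interface AsyncScanRequest {
  content_type: "image";
  scan_phase: "input";
  mode: "comprehensive";
  async: true;
  webhook_url: string;
}

export function buildAsyncScanRequest(webhookUrl: string): AsyncScanRequest {
  return {
    content_type: "image",
    scan_phase: "input",
    mode: "comprehensive",
    async: true,
    webhook_url: webhookUrl,
  };
}
```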

Production Checklist

  • Use focus=both for damage photos.
  • Store authenticity, forensics, and indeterminate signals when returned.
  • Route suspicious evidence to human review.
  • Ask for more evidence when signals are weak or conflicting.
  • Do not block claims solely because an AI signal is suspicious.
  • Use async for high-value or complex cases.
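Two of the checklist rules (ask for more evidence on weak signals; never block solely on an AI signal) can be expressed as a predicate. This is an illustrative sketch: the verdict strings come from the response description earlier, and the exact policy is yours to define.

```typescript
// True when the claim should pause for more evidence rather than be
// blocked: the routing action did not block, but the AI authenticity
// verdict is suspicious or indeterminate on its own.
export function needsMoreEvidence(scan: {
  action: string;
  authenticity?: { verdict?: string };
}): boolean {
  const verdict = scan.authenticity?.verdict;
  return (
    scan.action !== "BLOCK" &&
    (verdict === "indeterminate" || verdict === "likely_ai_generated")
  );
}
```

Note that a suspicious verdict here only routes to an evidence request, never to an automated denial, which keeps the final fraud decision with your review process.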

Next step

Ready to scan real traffic?

Create an API key, keep it on your server, then wire Mighty into the workflow that handles untrusted material.

AI-Agent Prompt

Add damage photo scanning

Paste this into Cursor, Codex, Claude Code, or Windsurf.

Add Mighty image scanning to the damage photo intake flow.

Requirements:
- Scan server-side before automated claim decisions.
- Use multipart upload to POST https://gateway.trymighty.ai/v1/scan.
- Use content_type=image, scan_phase=input, mode=secure, focus=both, profile=strict.
- Store scan_id, scan_group_id, action, risk_score, threats, authenticity, and forensics.
- Route ALLOW to normal processing.
- Route WARN to review or request more evidence.
- Route BLOCK to stop automated decisioning.
- Never say Mighty proves fraud. Use review wording.

Acceptance criteria:
- Tests cover suspicious, indeterminate, and allowed image paths.
- Review queue receives scan details and original upload reference.
- API key stays server-side.