# Damage Photo AI Fraud Review

Scan images for risk and authenticity signals before claims, repair, insurance, or review workflows trust them.

Source URL: https://trymighty.ai/docs/integrate/images-ai-fraud

import {
  CodeBlockTabs,
  CodeBlockTabsList,
  CodeBlockTabsTrigger,
  CodeBlockTab,
} from "fumadocs-ui/components/codeblock";

## Goal

Scan images before automated decisions use them.

This guide fits damage photos, claim photos, repair proof, delivery photos, product condition photos, identity evidence, and other image-heavy workflows.

## Architecture

1. Upload the image to your server.
2. Send it to Mighty with `content_type=image`.
3. Use `focus=both` to check standard threats and AI authenticity signals.
4. Route suspicious or indeterminate images to review.
5. Keep final fraud decisions in your review process.
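Steps 1–3 can be sketched server-side. This is a minimal sketch, not a client library: the field names and endpoint mirror the request example in the next section, and the `buildScanForm` / `scanDamagePhoto` helper names are our own.

```ts
// Builds the multipart form; field names mirror the curl request example.
export function buildScanForm(file: Blob, filename: string): FormData {
  const form = new FormData();
  form.append("file", file, filename);
  form.append("content_type", "image");
  form.append("scan_phase", "input");
  form.append("mode", "secure");
  form.append("focus", "both");
  form.append("profile", "strict");
  form.append("data_sensitivity", "tolerant");
  form.append("context", "damage_photo_review");
  return form;
}

// Runs on your server so the API key never reaches the browser.
export async function scanDamagePhoto(file: Blob, filename: string) {
  const res = await fetch("https://gateway.trymighty.ai/v1/scan", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.MIGHTY_API_KEY}` },
    body: buildScanForm(file, filename),
  });
  if (!res.ok) throw new Error(`Mighty scan failed: ${res.status}`);
  return res.json();
}
```

`Blob`, `FormData`, and `fetch` are global in Node 18+; older runtimes need a fetch polyfill.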

## Request And Response

<CodeBlockTabs defaultValue="request">
  <CodeBlockTabsList>
    <CodeBlockTabsTrigger value="request">Request</CodeBlockTabsTrigger>
    <CodeBlockTabsTrigger value="response">Response</CodeBlockTabsTrigger>
  </CodeBlockTabsList>
  <CodeBlockTab value="request">

```bash
curl -X POST https://gateway.trymighty.ai/v1/scan \
  -H "Authorization: Bearer $MIGHTY_API_KEY" \
  -F "file=@./damage-photo.jpg" \
  -F "content_type=image" \
  -F "scan_phase=input" \
  -F "mode=secure" \
  -F "focus=both" \
  -F "profile=strict" \
  -F "data_sensitivity=tolerant" \
  -F "context=damage_photo_review"
```

  </CodeBlockTab>
  <CodeBlockTab value="response">

```json
{
  "action": "WARN",
  "risk_score": 74,
  "risk_level": "HIGH",
  "threats": [
    {
      "category": "ai_authenticity_signal",
      "confidence": 0.78,
      "reason": "AI involvement is likely based on visual consistency signals."
    },
    {
      "category": "metadata_inconsistency",
      "confidence": 0.62,
      "reason": "Compression and metadata signals do not match a typical camera capture."
    }
  ],
  "content_type_detected": "image",
  "authenticity": {
    "model_family": "authenticity_v9",
    "evidence_modality": "image",
    "ai_involvement": "yes",
    "verdict": "likely_ai_generated",
    "confidence": 0.78
  },
  "forensics": {
    "signals": ["compression_inconsistency", "metadata_missing"]
  },
  "scan_id": "81f47b0a-7a6d-49f2-a0c3-e2c7d735688c",
  "scan_group_id": "3fe06052-baa8-4ae8-8571-d10c9ce4072b",
  "scan_status": "complete"
}
```

  </CodeBlockTab>
</CodeBlockTabs>

`threats` is an array of objects with `category`, `confidence`, an optional `evidence` excerpt, and a human-readable `reason`. The `authenticity` block carries the structured forensics verdict — `verdict` ∈ `likely_real | likely_ai_generated | ai_generated | indeterminate` — distinct from the routing `action`.
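Typing the response makes the verdict-versus-action distinction concrete. The shapes below are derived from the sample response above, not from a published schema, and the `0.5` confidence threshold in `needsMoreEvidence` is an illustrative assumption you should tune.

```ts
export type AuthenticityVerdict =
  | "likely_real"
  | "likely_ai_generated"
  | "ai_generated"
  | "indeterminate";

export interface ScanThreat {
  category: string;
  confidence: number;
  reason: string;
  evidence?: string; // optional excerpt
}

export interface ScanResult {
  action: "ALLOW" | "WARN" | "BLOCK";
  risk_score: number;
  risk_level: string;
  threats: ScanThreat[];
  authenticity?: {
    verdict: AuthenticityVerdict;
    confidence: number;
    ai_involvement?: string;
    model_family?: string;
    evidence_modality?: string;
  };
  forensics?: { signals: string[] };
  scan_id: string;
  scan_group_id: string;
  scan_status: string;
}

// Treats indeterminate or low-confidence verdicts as "ask for more proof".
// The 0.5 cutoff is an assumption for illustration, not an API contract.
export function needsMoreEvidence(scan: ScanResult): boolean {
  const auth = scan.authenticity;
  if (!auth) return false;
  return auth.verdict === "indeterminate" || auth.confidence < 0.5;
}
```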

## Routing Logic

```ts
export function routeDamagePhoto(scan: {
  action: string;
  authenticity?: { verdict?: string };
}) {
  // An indeterminate verdict warrants more evidence even on ALLOW.
  if (scan.authenticity?.verdict === "indeterminate") {
    return "request_review_or_more_evidence";
  }
  if (scan.action === "ALLOW") return "continue_claim_intake";
  if (scan.action === "WARN") return "request_review_or_more_evidence";
  if (scan.action === "BLOCK") return "stop_automated_decision";

  // Unknown actions fall back to human review.
  return "manual_review";
}
```

## Honest AI Fraud Wording

Use wording like:

- "Mighty flagged this image for review."
- "This image has suspicious evidence signals."
- "The evidence is indeterminate. Ask for more proof."

Avoid wording like:

- "This photo is fake."
- "This claimant committed fraud."
- "Mighty proved fraud."

Mighty helps route risky material. It does not replace your claims, compliance, legal, or fraud investigation process.

## Async Deep Review

Use `async=true` and `mode=comprehensive` when a photo is high value or when a review queue can absorb the added latency.

```json
{
  "content_type": "image",
  "scan_phase": "input",
  "mode": "comprehensive",
  "async": true,
  "webhook_url": "https://example.com/api/mighty/webhook"
}
```
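When the async scan completes, Mighty calls your `webhook_url`. As a sketch, the handler below assumes the webhook payload carries the same fields as a synchronous scan response (`scan_id`, `scan_status`, `action`); verify the actual payload shape against your account's webhook settings.

```ts
// Hypothetical webhook handler body: assumes the payload matches the
// synchronous scan response shape, keyed by scan_id.
export function handleMightyWebhook(payload: {
  scan_id: string;
  scan_status: string;
  action?: string;
}) {
  if (payload.scan_status !== "complete") {
    return { status: "pending", scan_id: payload.scan_id };
  }
  switch (payload.action) {
    case "ALLOW":
      return { status: "continue_claim_intake", scan_id: payload.scan_id };
    case "WARN":
      return { status: "request_review_or_more_evidence", scan_id: payload.scan_id };
    case "BLOCK":
      return { status: "stop_automated_decision", scan_id: payload.scan_id };
    default:
      return { status: "manual_review", scan_id: payload.scan_id };
  }
}
```

Mount this behind an authenticated route (for example, the `/api/mighty/webhook` path used above) and verify the request origin before trusting the payload.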

## Production Checklist

- Use `focus=both` for damage photos.
- Store the `authenticity` and `forensics` blocks when returned, including `indeterminate` verdicts.
- Route suspicious evidence to human review.
- Ask for more evidence when signals are weak or conflicting.
- Do not block claims solely because an AI signal is suspicious.
- Use async for high-value or complex cases.
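A minimal persisted record covering the storage and review-queue items above might look like this. The scan fields mirror the response schema; `uploadRef` and `queuedAt` are hypothetical names for your own storage layer.

```ts
// Hypothetical review-queue record; adapt field names to your storage layer.
export interface DamagePhotoReviewRecord {
  uploadRef: string; // pointer to the original image in your storage
  scan_id: string;
  scan_group_id: string;
  action: string;
  risk_score: number;
  threats: unknown[];
  authenticity?: unknown;
  forensics?: unknown;
  queuedAt: string; // ISO timestamp
}

export function toReviewRecord(
  uploadRef: string,
  scan: Record<string, any>
): DamagePhotoReviewRecord {
  return {
    uploadRef,
    scan_id: scan.scan_id,
    scan_group_id: scan.scan_group_id,
    action: scan.action,
    risk_score: scan.risk_score,
    threats: scan.threats ?? [],
    authenticity: scan.authenticity,
    forensics: scan.forensics,
    queuedAt: new Date().toISOString(),
  };
}
```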

## AI-Agent Prompt

### Add damage photo scanning

```text
Add Mighty image scanning to the damage photo intake flow.

Requirements:
- Scan server-side before automated claim decisions.
- Use multipart upload to POST https://gateway.trymighty.ai/v1/scan.
- Use content_type=image, scan_phase=input, mode=secure, focus=both, profile=strict.
- Store scan_id, scan_group_id, action, risk_score, threats, authenticity, and forensics.
- Route ALLOW to normal processing.
- Route WARN to review or request more evidence.
- Route BLOCK to stop automated decisioning.
- Never say Mighty proves fraud. Use review wording.

Acceptance criteria:
- Tests cover suspicious, indeterminate, and allowed image paths.
- Review queue receives scan details and original upload reference.
- API key stays server-side.
```
