One API call returns a normalized 0–1 trust score for any text, image, or video. Ensemble AI detection plus cryptographic C2PA provenance — reconciled into a single answer.
No credit card required. 500 text + 200 image calls free every month.
```http
POST /scan/image/url
X-API-Key: vgk_••••••••

{ "url": "https://example.com/photo.jpg" }
```
```json
{
  "content_type": "image",
  "ai_score": 0.38,
  "deepfake": 0.38,
  "c2pa": null,
  "report": { "verdict": "mixed", "confidence": "low" }
}
```
AI-generated text, synthetic images, and manipulated video are indistinguishable at scale. Teams end up building and maintaining a patchwork of integrations that drift out of sync, return incompatible scores, and leave entire content types unaddressed. Verigin closes all the gaps with one integration.
Text, images, and video share the same endpoint structure and response schema. Build once, apply everywhere.
Checks for signed content credentials on every call. A valid C2PA manifest is the strongest trust signal available — cryptographic, not probabilistic.
Multiple detection methods run in parallel internally. Disagreements are reconciled before the response is returned. One confident score, not raw probabilities to interpret.
Enable verbose mode for a full audit trail: which signals fired, how strongly, whether provenance was present, and how the final score was composed.
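To make the audit trail concrete, here is a minimal sketch of flattening a verbose response into log lines. The field names (`signals`, `provenance`, `ai_score`) and the payload shape are illustrative assumptions for this sketch, not Verigin's documented schema — check the API reference for the real field names.

```python
def audit_lines(report: dict) -> list[str]:
    """Turn an assumed verbose-mode payload into audit-log lines."""
    lines = []
    # One line per detection signal that fired (names/fields assumed).
    for sig in report.get("signals", []):
        lines.append(f"signal {sig['name']}: strength={sig['strength']:.2f}")
    # Record whether cryptographic provenance was present.
    prov = "present" if report.get("provenance") else "absent"
    lines.append(f"c2pa provenance: {prov}")
    # Record how the final score came out.
    lines.append(f"final ai_score: {report['ai_score']:.2f}")
    return lines

example = {
    "signals": [{"name": "synthetic-image", "strength": 0.41}],
    "provenance": None,
    "ai_score": 0.38,
}
```

Stored alongside the content ID and timestamp, lines like these form the compliance record described above.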
The same scale across every content type. Your moderation rules, alert thresholds, and dashboards stay simple regardless of what you're checking.
One API key, clear docs, and a Sandbox tier with no credit card. Average response time under 800ms for text and image. SDKs for Python and Node coming soon.
Every Verigin call runs through two independent verification layers.
Multiple detection methods analyze content in parallel. Each has different strengths — some excel at fully synthetic content, others at subtle edits and human-AI hybrids. Verigin reconciles the signals internally and surfaces one confident output. You don't pick a model or manage thresholds. The reconciliation is done for you.
C2PA (Coalition for Content Provenance and Authenticity) is an open standard backed by Adobe, Google, Microsoft, and the BBC. When content carries a C2PA manifest — a signed, tamper-evident record of its origin and edit history — Verigin reads and validates it. A valid credential is the strongest possible trust signal because it's cryptographic, not probabilistic.
The two layers combine into a single ai_score between 0 and 1. Higher = more likely AI-generated or manipulated. Lower = more likely human origin. C2PA credentials, when present, are returned separately as a provenance object.
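In practice most integrations bucket the 0–1 `ai_score` into a small set of labels. A minimal sketch, with placeholder thresholds — the cutoffs below are illustrative choices for this example, not values Verigin prescribes:

```python
def verdict(ai_score: float) -> str:
    """Map a 0-1 ai_score to a coarse label.

    Thresholds (0.8 / 0.2) are illustrative, not Verigin-defined;
    tune them against your own review data.
    """
    if ai_score >= 0.8:
        return "likely_ai"
    if ai_score <= 0.2:
        return "likely_human"
    return "mixed"
```

Because the scale is the same for text, images, and video, one mapping like this can serve every content type.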
EU AI Act Articles 50 and 52 require organizations to label AI-generated content and maintain transparency for AI systems interacting with the public. Penalties reach €15 million or 3% of global annual turnover. Verigin's API is built for exactly this workflow.
Organizations deploying AI-generated text, audio, or video in public-facing systems must label it. Verigin's detection API provides the labeling infrastructure. Every call returns an auditable record suitable for compliance documentation.
AI systems interacting with users must disclose their AI nature. Verigin's verbose output mode is designed as a compliance artifact — a full audit trail of which signals fired and how the final score was composed.
Organizations in scope cannot defer. Media companies, HR platforms, and social networks operating in the EU need detection and labeling infrastructure now. Enterprise contracts include an EU AI Act compliance guarantee clause.
Flag AI-drafted submissions before they reach the editorial queue. Surface C2PA credentials from verified photojournalists automatically.
Score user-generated content at ingest. Route low-trust content to human review with a full signal breakdown to justify the decision.
Detect AI-generated creative assets before they run. Protect brand safety without slowing down campaign operations.
Maintain an auditable record of every content check. Verbose mode output is designed for storage as a compliance artifact.
Analyze content at scale with consistent scoring across text, image, and video — cross-modal analysis that fragmented tools can't support.
Integrate a trust score into your moderation pipeline. Automate the easy cases and surface hard ones to human reviewers.
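One way that routing might look, sketched with assumed inputs: a scan's `ai_score` plus a boolean for whether a valid C2PA credential was found. The thresholds and tier names are placeholders, not Verigin recommendations.

```python
def route(ai_score: float, c2pa_valid: bool,
          high: float = 0.85, low: float = 0.15) -> str:
    """Illustrative moderation routing; thresholds are placeholders."""
    if c2pa_valid:
        # A valid signed credential is cryptographic, not probabilistic,
        # so it can short-circuit the score-based checks.
        return "auto_approve"
    if ai_score >= high:
        return "auto_flag"      # easy case: confidently synthetic
    if ai_score <= low:
        return "auto_approve"   # easy case: confidently human
    return "human_review"       # hard case: surface to a reviewer
```

Automating the two confident bands and queueing only the middle band is what keeps human-review volume manageable.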
No credit card required to get started.
| Plan | Sandbox | Builder | Scale | Enterprise |
|---|---|---|---|---|
| Price | Free | $49 / mo | $199 / mo | From $1,500 / mo |
| Text calls | 500 / mo | 10,000 / mo | 60,000 / mo | Custom |
| Image calls | 200 / mo | 5,000 / mo | 25,000 / mo | Custom |
| Video | — | — | 120 min / mo | Custom |
| Overage — text | — | $0.006 / call | $0.004 / call | Negotiated |
| Overage — image | — | $0.008 / call | $0.005 / call | Negotiated |
| Verbose breakdown | — | ✓ | ✓ | ✓ |
| C2PA provenance | ✓ | ✓ | ✓ | ✓ |
| Uptime SLA | — | 99.5% | 99.9% | 99.9% |
| Support | Docs only | Email (48h) | Email + Slack (4h) | Dedicated |
| DPA included | — | — | — | ✓ |
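To estimate a monthly bill from the table above, here is a small calculator for the Builder tier, using only the published numbers ($49 base, 10,000 text and 5,000 image calls included, $0.006 and $0.008 per overage call):

```python
def builder_monthly_cost(text_calls: int, image_calls: int) -> float:
    """Estimate a Builder-plan bill from the published pricing table."""
    cost = 49.0                                        # base subscription
    cost += max(0, text_calls - 10_000) * 0.006        # text overage
    cost += max(0, image_calls - 5_000) * 0.008        # image overage
    return round(cost, 2)

# e.g. 12,000 text + 6,000 image calls:
# 49 + 2,000 * 0.006 + 1,000 * 0.008 = 69.00
```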
Clear docs, real examples, and a response schema that doesn't change under you.
```shell
curl -X POST https://api.verigin.com/scan/text \
  -H "X-API-Key: vgk_••••••••" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your article text here..."}'
```
```shell
curl -X POST https://api.verigin.com/scan/image/url \
  -H "X-API-Key: vgk_••••••••" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/image.jpg"}'
```
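Until the official Python SDK ships, the same calls are easy to assemble with any HTTP client. This sketch only builds the request pieces (URL, headers, JSON body) so they can be passed to, say, `requests.post(url, headers=headers, json=payload)`; the key value is a placeholder.

```python
BASE_URL = "https://api.verigin.com"

def build_scan_request(path: str, api_key: str, body: dict):
    """Assemble (url, headers, payload) for a Verigin scan call."""
    url = f"{BASE_URL}{path}"
    headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
    return url, headers, body

url, headers, payload = build_scan_request(
    "/scan/text", "vgk_your_key_here", {"text": "Your article text here..."}
)
```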
Get your free Sandbox key and see a trust score come back. No credit card, no meeting required.