Chapter one · What you are about to read
A bug bounty program is only as good as what gets through — so we built something that filters.
In 2024, security teams at major bug bounty platforms started reporting the same pattern: a flood of polished-looking vulnerability reports, each plausible at a glance, each fabricated by an LLM. Triage teams began spending more time rejecting AI-generated slop than investigating real findings. A handful of high-profile programs shut down entirely.
VibeBounty is the answer. Not another triage service, not another "AI-assisted" dashboard — a submission pipeline built from the ground up around the assumption that most submissions are synthetic. We call the pipeline the Gauntlet. It's six independent layers, each automated, each designed to fail open for humans and fail closed for scripts.
What follows is what you need to know. Researchers — you'll understand what you're submitting to and how to submit well. Developers — you'll understand what you're paying for and why you can trust your inbox again.
Identity Screen
Account eligibility screening
Not every account is eligible to submit. Certain registration patterns are screened before any submission is possible.
If your account clears eligibility at signup, this layer is invisible. It runs once, at registration, and has no impact on your submission experience from that point forward. You will also need to connect a Stripe account before you can submit. Accepted bounty payouts are final and paid net of estimated Stripe processing fees.
The pipeline starts before the first report is submitted. Accounts that don't meet eligibility criteria are filtered at signup — so they never reach the form, and you never pay for a submission that shouldn't have happened.
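Since payouts are described as net of estimated Stripe processing fees, here is a minimal sketch of what that deduction could look like. The 2.9% + $0.30 default mirrors Stripe's published standard U.S. card rate and is used purely for illustration; actual fees vary by region, payment method, and negotiated pricing, and the function name is hypothetical.

```python
def net_payout(bounty_cents: int, pct_fee: float = 0.029, flat_fee_cents: int = 30) -> int:
    """Estimate a payout after deducting processing fees.

    Defaults mirror Stripe's published standard U.S. card rate
    (2.9% + $0.30), used here only for illustration.
    """
    fee = round(bounty_cents * pct_fee) + flat_fee_cents
    return max(bounty_cents - fee, 0)
```

For a $1,000 bounty, this estimate deducts roughly $29.30 in fees.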
Reputation Gate
Track record enforcement
Submission access is tied to your track record on the platform. Quality is rewarded; noise is throttled — automatically.
Your history determines your access. Researchers who consistently submit real findings have no restrictions placed on them. Those whose submissions don't clear the technical gates are progressively limited — the system handles it without any manual intervention.
A dynamic reputation layer automatically throttles researchers who generate noise. No per-program configuration required. The system adjusts based on each researcher's behavior across the entire platform.
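The progressive throttling described above can be sketched as a score that rises with accepted findings and decays with rejections, mapped to a submission quota. The class, update factors, and tier thresholds below are all hypothetical illustrations, not the platform's actual values.

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    score: float = 1.0  # neutral starting point for a new researcher

    def record(self, accepted: bool) -> None:
        # Multiplicative updates make repeated noise progressively
        # costly while capping how high a streak can climb.
        self.score = min(self.score * (1.25 if accepted else 0.5), 10.0)

    def daily_submission_limit(self) -> int:
        # Hypothetical score-to-quota mapping; real thresholds
        # are not documented here.
        if self.score >= 1.0:
            return 20  # unrestricted tier
        if self.score >= 0.25:
            return 5   # throttled
        return 1       # heavily throttled
```

The multiplicative decay is one way to make the limit tighten automatically as noise accumulates, with no manual intervention.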
Structured Proof
Technical evidence required
A report without a technical footprint is not a report. The form enforces a minimum evidentiary standard before anything enters the pipeline.
The form asks for things a real report already has: a target, request details, and evidence the vulnerability triggered. If you've reproduced the bug, you already have everything required — this step takes seconds.
Every submission you ever read has verifiable technical evidence attached. Narrative-only reports do not enter the pipeline. This is also what makes automated verification in gate 6 possible.
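A minimal sketch of the evidentiary gate: reject any report missing a required field before it enters the pipeline. The field names below (`target`, `request`, `evidence`) are assumptions drawn from the description above, not the platform's actual schema.

```python
REQUIRED_FIELDS = ("target", "request", "evidence")  # hypothetical field names

def validate_submission(report: dict) -> list[str]:
    """Return the list of missing evidentiary fields; an empty
    list means the report clears the minimum bar."""
    return [
        f"missing required field: {name}"
        for name in REQUIRED_FIELDS
        if not str(report.get(name) or "").strip()
    ]
```

A narrative-only report fails this check immediately, which is what keeps prose-without-proof out of the pipeline entirely.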
Behavioral Analysis
Submission pattern scoring
Each submission carries a behavioral profile. Patterns consistent with automated generation are identified and actioned without human review.
This layer is invisible when you submit as a human. The signals it analyzes are specific to automated submission patterns — not humans working under normal conditions. You don't need to change how you work.
Automated submitters who get past the earlier gates are caught here. The analysis runs silently on every submission and has no effect on the experience for legitimate researchers.
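Behavioral scoring of this kind can be sketched as a weighted combination of submission-pattern signals compared against a threshold. The signal names, weights, and threshold below are invented for illustration; the platform's real feature set is deliberately not documented.

```python
def automation_score(signals: dict) -> float:
    """Combine a few illustrative behavioral signals into a 0..1
    score. All signal names and weights are hypothetical."""
    score = 0.0
    if signals.get("seconds_on_form", 600) < 20:
        score += 0.4  # form completed implausibly fast
    if signals.get("submissions_last_hour", 0) > 5:
        score += 0.4  # burst of submissions in a short window
    if signals.get("paste_only_input", False):
        score += 0.2  # every field pasted, zero typing events
    return min(score, 1.0)

def is_flagged(signals: dict, threshold: float = 0.6) -> bool:
    # Above the threshold, the submission is actioned automatically.
    return automation_score(signals) >= threshold
```

A human filling out a form at normal speed accumulates no score at all, which is why this layer stays invisible to legitimate researchers.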
Duplicate Detection
Cross-submission similarity check
Every report is fingerprinted and compared against the platform corpus. Variants of the same underlying issue are identified and collapsed.
If your finding is genuinely novel, it passes. If the underlying issue has already been filed, you will be pointed to the existing record rather than silently rejected. Independent discovery of the same vulnerability is handled fairly — you can still share in the bounty.
Near-duplicate submissions are automatically identified and collapsed into a single ticket. You pay for a unique finding — not multiple researchers filing the same vulnerability under different headings.
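Fingerprint-and-compare can be sketched with token shingles and Jaccard similarity. This is a toy version: a production system comparing against a whole corpus would use something like MinHash with locality-sensitive hashing rather than pairwise comparison, and the threshold here is an arbitrary illustration.

```python
def shingles(text: str, k: int = 3) -> set:
    """Fingerprint a report body as a set of k-token shingles."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + k]) for i in range(max(len(tokens) - k + 1, 1))}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two fingerprints (0..1)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def is_near_duplicate(new_report: str, corpus: list, threshold: float = 0.8) -> bool:
    # Collapse a submission into an existing ticket if any prior
    # report's fingerprint is close enough.
    return any(similarity(new_report, seen) >= threshold for seen in corpus)
```

Two reports of the same issue under different headings share most of their shingles, so they collapse; a genuinely novel finding shares almost none and passes.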
Live Verification
Automated claim verification
Every report that reaches this gate is tested against live infrastructure. If the claim cannot be reproduced automatically, the report is rejected.
Design your report around clear, reproducible steps. If the vulnerability is real and your reproducer is clean, this gate verifies it quickly and routes it to the developer's inbox with a verified status. Intermittent bugs can be flagged for human review instead of auto-rejection.
By the time a report reaches your inbox, an automated system has already verified it end to end. You receive a clear verdict and everything you need to act on it — or not.
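The verification gate's decision logic can be sketched as: replay the reproducer, look for the expected evidence, and route flaky cases to a human instead of auto-rejecting. The dict keys, verdict strings, and injected `send` callable are all hypothetical; `send` stands in for whatever actually issues the request against live infrastructure.

```python
def verify_claim(reproducer: dict, send) -> str:
    """Replay a reproducer and return a verdict string.

    `send` performs the actual request (injected so the gate can
    target staging or production). All names are illustrative.
    """
    response = send(reproducer["request"])
    if reproducer["expected_marker"] in response:
        return "verified"
    if reproducer.get("intermittent"):
        # Flaky bugs escalate to human review rather than auto-reject.
        return "needs_human_review"
    return "rejected"
```

This is why clean, deterministic reproduction steps matter: a report whose marker appears on the first replay gets a verified status with no human in the loop.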
The whole thing · in under 30 seconds
What the Gauntlet looks like from the outside.
A single submission, clock-timed end to end. The first five gates are cheap and synchronous; only the reproducer takes real time — and runs in parallel across a pool of Cloudflare Workers.
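The shape described above — five cheap synchronous gates, then one expensive step fanned out in parallel — can be sketched with a thread pool standing in for the worker pool. Gate and reproducer callables here are hypothetical placeholders returning True on pass; the real pipeline runs its reproducers on Cloudflare Workers, not threads.

```python
from concurrent.futures import ThreadPoolExecutor

def run_gauntlet(report, sync_gates, reproducers, pool_size: int = 8) -> str:
    """Run cheap gates in sequence, then fan the reproducer out
    across a pool. All callables are illustrative stand-ins."""
    # Gates 1-5: cheap, synchronous, short-circuit on first failure.
    for gate in sync_gates:
        if not gate(report):
            return "rejected"
    # Gate 6: the only step that takes real time, run in parallel.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        results = list(pool.map(lambda rep: rep(report), reproducers))
    return "verified" if any(results) else "rejected"
```

Short-circuiting the cheap gates first is what keeps the end-to-end clock time dominated by a single parallel reproduction step.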
Chapter two · What it means for you
Two audiences. One pipeline.
The Gauntlet is designed to be boring to the researchers who do real work and invisible to the developers who pay fair bounties. Here's your side of the deal.