Protocol Documentation

How Agent911 Works

A structured, identity-weighted incident reporting protocol designed to resist spam, reward verification, and produce a trustworthy signal layer for the inVerus Consortium.

6 Signal categories
4 Identity tiers
1.0× Max per-reporter weight
3+ Downstream consumers
The Process

Three steps. One trust layer.

01

Submit a Structured Signal

Every report is a structured data packet - not a freeform complaint. You select a category, optionally attach evidence, and your identity tier is automatically applied.

  • Choose from 6 incident categories - each maps to a specific threat type
  • Evidence links are immutable once submitted to the registry
  • Anonymous submissions receive zero signal weight - identity is the filter
  • Each report is deduplicated against existing signals for the same agent
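The packet described above can be sketched as a small data structure. The field names and the dedup key below are illustrative assumptions, not the registry's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a structured signal packet; field names are
# illustrative, not Agent911's actual wire format.
@dataclass(frozen=True)  # frozen: evidence links are immutable once submitted
class Signal:
    agent_id: str                        # agent being reported
    category: str                        # one of the 6 incident categories
    reporter_id: str                     # drives identity weighting and dedup
    evidence_url: Optional[str] = None   # optional attached evidence

def dedup_key(s: Signal) -> tuple:
    # Assumed dedup rule: one signal per (reporter, agent, category) triple,
    # so repeat reports of the same incident collapse into one.
    return (s.reporter_id, s.agent_id, s.category)
```

Two reports from the same reporter about the same agent and category would share a dedup key and count once.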
02

Identity Determines Weight

The inVerus Trust Model is resistant to spam because every signal is weighted by the verified identity of its reporter. More verification = more influence.

  • No account: signal is discarded entirely
  • Basic account: 0.25× weight applied
  • GitHub-verified: 0.75× weight (proof of real developer identity)
  • Proven usage on-chain: 1.0× full weight - the gold standard
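The tier-to-multiplier mapping can be expressed as a simple lookup. The tier identifiers are illustrative; the multipliers are the ones listed above:

```python
# Identity tiers and their weight multipliers (tier keys are illustrative).
TIER_WEIGHT = {
    "none":    0.0,   # no account: signal discarded entirely
    "basic":   0.25,  # registered but unverified
    "github":  0.75,  # OAuth-linked GitHub identity
    "onchain": 1.0,   # proven on-chain usage: full weight
}

def weighted(signal_weight: float, tier: str) -> float:
    # Unknown tiers fall back to 0.0, i.e. the signal is discarded.
    return signal_weight * TIER_WEIGHT.get(tier, 0.0)
```

Note the ratios this implies: a GitHub-verified reporter carries 3× a basic account (0.75 / 0.25), and an on-chain proven reporter carries 4×.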
03

Signals Feed the Trust Registry

Weighted signals are aggregated, abuse-filtered, and bounded per-reporter. The result updates the inVerus Consortium Trust Score for that agent.

  • Per-reporter influence is capped to prevent coordinated attacks
  • Signals from the same IP cluster are automatically discounted
  • Trust scores update in real time as new weighted signals arrive
  • Downstream consumers (Clawdbase, etc.) query the registry before execution
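A minimal sketch of the aggregation step, assuming an illustrative per-reporter cap of 1.0 and a 50% discount for same-IP-cluster signals (both values are assumptions, not registry parameters):

```python
from collections import defaultdict

def aggregate(signals, per_reporter_cap=1.0, ip_discount=0.5):
    """Sum weighted signals for one agent.

    Each signal is (weight, reporter_id, shared_ip_cluster). A reporter's
    total influence is capped, and signals from a shared IP cluster are
    discounted. Cap and discount values are illustrative assumptions.
    """
    by_reporter = defaultdict(float)
    score = 0.0
    for weight, reporter, shared_ip in signals:
        w = weight * (ip_discount if shared_ip else 1.0)
        # Clamp to whatever influence this reporter has left under the cap.
        room = per_reporter_cap - by_reporter[reporter]
        w = max(0.0, min(w, room))
        by_reporter[reporter] += w
        score += w
    return score
```

Under these assumptions, a reporter who floods the registry stops adding influence once they hit the cap, so coordinated spam has bounded effect.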
Identity → Weight

Your identity is your signal weight.

Anonymous reports are worthless - anyone can file infinite noise. The inVerus weighting model makes spam economically irrational by tying influence to verified identity.

A GitHub-verified developer carries 3× the influence of a basic account. An on-chain proven developer carries 4× - because they have something real to lose.

Trust Score Formula

score(agent) = Σ (signal_weight × identity_multiplier)

bounded per reporter · deduplicated · abuse-filtered
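With illustrative numbers (two deduplicated signals of base weight 1.0 against one agent), the formula works out as:

```python
# (signal_weight, identity_multiplier) pairs; values are illustrative.
signals = [
    (1.0, 0.75),  # GitHub-verified reporter
    (1.0, 1.0),   # on-chain proven reporter
]
score = sum(w * m for w, m in signals)
# score == 1.75, before per-reporter bounding and abuse filtering
```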

No Account · Unverified, anonymous submission · No Signal
Account · Registered but unverified identity · 0.25×
Verified GitHub · OAuth-linked GitHub with public activity · 0.75×
Proven Usage · On-chain verified developer interaction · 1.0×
Incident Categories

Six threat vectors. One registry.

Every report maps to a specific incident type. Structured categorisation makes signals queryable, aggregatable, and comparable across the registry.

critical

Malicious Execution

Agent executed code, commands, or actions with harmful intent - unauthorized system modifications, destructive operations, or covert payloads.

critical

Data Exfiltration

Agent accessed, copied, or transmitted sensitive data to external destinations without authorization or outside the defined scope.

high

Identity Spoofing

Agent misrepresented its capabilities, authorship, or affiliation - falsely claiming to be a different agent or operating entity.

high

False Claims

Agent made demonstrably false statements about its outputs, abilities, or actions - deliberately misleading the user or downstream systems.

high

Security Vulnerability

Agent contains or introduces a security flaw - injection risk, insecure credential handling, or exploitable attack surface in generated output.

medium

Behavioural Drift

Agent exhibits consistent divergence from its stated behavior spec - gradual scope creep, unexpected capability expansion, or alignment regression.
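The six categories and their severity bands can be mirrored as a lookup for querying and aggregation; the identifiers below are illustrative, the severities are those listed above:

```python
# The six incident categories mapped to their severity bands.
# Keys are illustrative slugs, not the registry's actual identifiers.
CATEGORIES = {
    "malicious_execution":    "critical",
    "data_exfiltration":      "critical",
    "identity_spoofing":      "high",
    "false_claims":           "high",
    "security_vulnerability": "high",
    "behavioural_drift":      "medium",
}
```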

FAQ

Common questions.

How reports work, what happens after submission, and how the trust registry is protected against manipulation.

Ready to file a signal?

Your verified identity is your weapon. Report what you know, strengthen what everyone can trust.

Agent911.ai · inVerus Consortium · Same Universe