Auction and Scoring

AIP runs a short timed auction. When a platform sends a PlatformRequest, the Operator tags it into category pools, derives a ContextRequest, and sends it only to Brand Agents subscribed to those pools. Delivery happens through a pub/sub system, and agents have roughly 30–70 ms to return a signed bid. When the window closes, the Operator scores all bids (CPA first, then CPC, then CPX) and returns the winner; if no one responds in time, it returns no_bid. This model avoids blasting every bidder, keeps latency low, and ensures only relevant agents compete.

1. TL;DR

AIP evaluates all eligible bids and returns the offer with the highest protocol-defined score.

2. Why It Matters

AIP enables AI systems to monetize intent responsibly.
Auctions prioritize relevance, quality, and economic value, creating recommendations that feel useful, aligned, and trustworthy.
This ensures:
  • AI experiences remain user-centric
  • Brands compete on merit and integrity
  • Platforms gain a predictable monetization layer

3. How an Auction Works

Every user intent triggers a micro-auction. The sequence is:

1. Intent Enters the System

An AI platform submits a PlatformRequest describing the user’s query or action.

2. Bid Requests Are Issued

The Operator classifies the request into category pools, derives a ContextRequest, and uses publish/subscribe transport to deliver it only to Brand Agents subscribed to those pools.
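The pool-based fan-out above can be sketched as follows. The keyword classifier and in-memory subscription registry are illustrative assumptions; a production Operator would use its own taxonomy and a real pub/sub broker.

```python
# Illustrative sketch of category-pool fan-out. The keyword mapping and
# subscription registry below are assumptions, not part of the AIP spec.

# Hypothetical keyword -> pool mapping used for classification.
POOL_KEYWORDS = {
    "crm": "saas_tools",
    "note-taking": "productivity",
    "notes": "productivity",
}

# Hypothetical subscription registry: pool -> subscribed brand_agent_ids.
SUBSCRIPTIONS = {
    "saas_tools": {"ba_451", "ba_982"},
    "productivity": {"ba_107"},
}

def classify_pools(query: str) -> set[str]:
    """Tag a request into category pools by simple keyword match."""
    q = query.lower()
    return {pool for kw, pool in POOL_KEYWORDS.items() if kw in q}

def fan_out(platform_request: dict) -> dict:
    """Derive a ContextRequest and list only the subscribed recipients."""
    pools = classify_pools(platform_request["query"])
    recipients: set[str] = set()
    for pool in pools:
        recipients |= SUBSCRIPTIONS.get(pool, set())
    context_request = {
        "context_id": "ctx_" + platform_request["request_id"],
        "pools": sorted(pools),
    }
    return {"context_request": context_request, "recipients": sorted(recipients)}
```

The key property is the narrowed delivery set: only agents subscribed to a matched pool ever see the ContextRequest.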

3. Bids Are Returned

During the 30–70 ms auction window, each Brand Agent sends a signed POST /aip/bid-response payload with:
  • an offer
  • a bid price
  • necessary metadata
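A minimal sketch of assembling that payload, assuming an HMAC-based signature. The protocol's actual signature scheme may differ, and any field names beyond those listed above are illustrative.

```python
import hashlib
import hmac
import json

def build_bid_response(agent_secret: bytes, context_id: str) -> dict:
    """Assemble a bid payload and attach a signature over its canonical
    JSON form. (HMAC-SHA256 is an illustrative stand-in here; AIP's
    actual signature scheme is defined by the protocol spec.)"""
    payload = {
        "context_id": context_id,
        "offer": {"title": "Scale your CRM", "cta": "Try for free"},
        "bid_price_cents": 500,
        "preferred_unit": "CPA",
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(agent_secret, body, hashlib.sha256).hexdigest()
    return payload
```

Signing the canonical (key-sorted) JSON lets the Operator re-serialize the same fields and verify the signature independently.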

4. Scoring Pipeline Evaluates Bids

The Operator runs its scoring model and produces a ranking.

5. A Single Winner Is Returned

The Operator returns the AuctionResult and its serve_token to the Platform, or no_bid when no responses arrive in time.
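The timed window and the no_bid fallback can be sketched with asyncio. The 50 ms default below is one point inside the 30–70 ms range and is purely illustrative.

```python
import asyncio

async def run_auction(bid_sources, window_ms: int = 50):
    """Collect bids that arrive inside the auction window.

    `bid_sources` is a list of coroutines, each resolving to a bid dict.
    Anything that has not resolved when the window closes is dropped,
    and an empty result set becomes "no_bid". (The 50 ms default is an
    illustrative operator configuration, not fixed by the protocol.)
    """
    if not bid_sources:
        return "no_bid"
    tasks = [asyncio.ensure_future(src) for src in bid_sources]
    done, pending = await asyncio.wait(tasks, timeout=window_ms / 1000)
    for task in pending:          # late bidders are simply dropped
        task.cancel()
    bids = [t.result() for t in done if t.exception() is None]
    return bids if bids else "no_bid"
```

The window is a hard deadline: slow agents are cancelled rather than awaited, which is what keeps end-to-end latency bounded.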

4. Scoring Model

AIP scoring evaluates each bid using three core signals:

Value

Economic strength of the bid relative to the current intent.

Relevance

Alignment between the offer and the user’s query, conversation, or task context.

Quality

Trust, integrity, and historical performance signals from the Brand Agent’s verified events.

The Operator applies these three pillars consistently to every bid, producing a deterministic score for each candidate. The highest-scoring offer becomes the result of the auction.
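A hedged sketch of a deterministic scorer, assuming unit precedence (CPA, then CPC, then CPX) as the primary key and an illustrative weighted blend of the three pillars as the tiebreaker. The real scoring model's weights are not specified here.

```python
# Illustrative scoring sketch. The unit precedence (CPA > CPC > CPX)
# comes from the auction description above; the weighting of value,
# relevance, and quality is an assumption, not the protocol's formula.

UNIT_RANK = {"CPA": 2, "CPC": 1, "CPX": 0}

def score(bid: dict) -> tuple:
    """Deterministic sort key: unit precedence first, then a blended
    signal score. Identical inputs always yield identical keys."""
    blended = (bid["value"] * 0.4
               + bid["relevance"] * 0.4
               + bid["quality"] * 0.2)
    return (UNIT_RANK[bid["preferred_unit"]], blended)

def pick_winner(bids: list[dict]):
    """Return the highest-scoring bid, or "no_bid" for an empty window."""
    if not bids:
        return "no_bid"
    return max(bids, key=score)
```

Because the key is a pure function of the bid, replaying the same bid set always reproduces the same winner, which is the determinism guarantee in practice.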

5. AuctionResult

{
  "auction_id": "auc_981",
  "serve_token": "stk_abcxyz123",
  "winner": {
    "brand_agent_id": "ba_451",
    "preferred_unit": "CPA",
    "reserved_amount_cents": 500
  },
  "render": {
    "label": "[Ad]",
    "title": "Scale your CRM",
    "body": "Built for founders.",
    "cta": "Try for free",
    "url": "https://admesh.click/stk_abcxyz123"
  },
  "ttl_ms": 60000
}

For the complete auction result schema, see: Auction Result Schema
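A Platform consuming this result might check ttl_ms before rendering. This sketch assumes the TTL counts from when the result is received; the schema may define a different anchor point.

```python
def is_result_fresh(result: dict, received_at: float, now: float) -> bool:
    """Check whether an AuctionResult can still be served.

    Assumes ttl_ms counts from when the Platform received the result
    (an assumption, not necessarily the schema's definition). Times are
    in seconds; ttl_ms is milliseconds, as in the example payload.
    """
    return (now - received_at) * 1000 <= result["ttl_ms"]
```

With ttl_ms of 60000 as in the example, a result received at t=0 is servable for one minute and stale afterwards.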

6. Transparency and Verification

Every exchange in the auction cycle is logged as a set of verifiable artifacts:
  • signed ContextRequest
  • signed BidResponse
  • signed AuctionResult
  • timestamps and nonces
  • protocol version identifiers
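Verification of a logged artifact can be sketched as follows, assuming an HMAC signature, a five-minute freshness window, and an in-memory nonce set for replay detection. All three are illustrative choices, not spec requirements.

```python
import hashlib
import hmac
import json
import time

SEEN_NONCES: set[str] = set()

def verify_artifact(secret: bytes, artifact: dict,
                    max_age_s: float = 300.0, now=None) -> bool:
    """Verify one logged artifact: signature, timestamp freshness, and
    nonce replay. (HMAC-SHA256, the 300 s window, and the in-memory
    nonce set are illustrative stand-ins for the protocol's scheme.)"""
    now = time.time() if now is None else now
    body = {k: v for k, v in artifact.items() if k != "signature"}
    expected = hmac.new(secret, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, artifact["signature"]):
        return False                          # signature mismatch
    if now - artifact["timestamp"] > max_age_s:
        return False                          # stale artifact
    if artifact["nonce"] in SEEN_NONCES:
        return False                          # replayed nonce
    SEEN_NONCES.add(artifact["nonce"])
    return True
```

The timestamp bounds how old an artifact may be, while the nonce makes each signed exchange single-use, so a captured BidResponse cannot be replayed into a later auction.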

7. Example Flow

User asks: “Best AI note-taking tools.”
  1. A ContextRequest is issued.
  2. The Operator distributes it to eligible Brand Agents.
  3. Agents return bids.
  4. The scoring pipeline ranks them.
  5. The Platform receives a single recommendation and a serve_token.
  6. Exposure and click events connect cleanly to the same attribution chain.

8. Guarantees

AIP ensures:
  • consistent evaluation across all Brand Agents
  • deterministic outcomes for identical inputs
  • complete verifiability via signed artifacts
  • unified attribution through the serve_token
  • scalable real-time performance

Summary

AIP auctions transform user intent into ranked recommendations through a structured, deterministic evaluation system. The protocol harmonizes how Platforms monetize intent, how Brand Agents compete, and how Operators deliver transparent, reproducible outcomes.