AIP defines protocol-level constraints and guarantees for selection, while operators implement their own selection mechanics and scoring algorithms. When a platform sends a PlatformRequest, the operator evaluates participation eligibility, derives a ContextRequest, and distributes it to brand agents. Operators define their own distribution mechanisms and selection timing.

1. TL;DR

AIP defines selection inputs, timing constraints, and outcome guarantees. Operators implement their own selection logic - auction-based, rule-based, or hybrid - to determine which brand agent participates and in what mode.

2. Why it matters

AIP enables AI systems to govern commercial participation by defining:
  • Protocol-level constraints that ensure fair competition
  • Timing guarantees that respect platform latency budgets
  • Outcome guarantees that ensure deterministic settlement
Operators implement selection logic that prioritizes relevance, quality, and economic value, producing results that feel useful, aligned, and trustworthy. This ensures:
  • AI experiences remain user-centric
  • Brands compete on merit and integrity
  • Platforms gain a predictable participation layer

3. Protocol-level selection constraints

AIP defines constraints and guarantees for selection, not the specific algorithms:

1. Input requirements

An AI platform submits a PlatformRequest describing the user’s query or action. The request includes latency_budget_ms to specify the platform’s timing requirements.

2. Participation governance

Before selection begins, the Operator evaluates whether commercial participation is allowed for this intent. This governance step is separate from selection and MUST complete before selection begins.

3. Distribution requirements

The Operator derives a ContextRequest and filters brand agents that match the context. The Operator distributes (broadcasts) the ContextRequest to all matching brand agents. Operators define their own distribution mechanisms (transport methods, etc.).

4. Timing constraints

  • The selection window is derived from the platform’s latency_budget_ms field in the PlatformRequest
  • Operators compute the available response window by subtracting operator overhead from the platform’s latency budget
  • Brand agents MUST submit responses within the computed window
  • Late responses are rejected and do not participate in selection or settlement
  • If no latency_budget_ms is provided, operators apply their own default fallback

5. Response collection

Brand agents submit signed responses with pricing, content, and relevance scores to participate in selection. Operators collect only the responses received within the selection window.

6. Outcome guarantees

The Operator MUST return a single PlatformResponse (or no_match when no responses arrive in time) and a serve_token back to the platform. The result includes the selected agent and the determined interaction mode (recommend or delegate).

7. Settlement is separate from ranking

Settlement paths are billing semantics, not ranking rules.
  • External click-out flows follow CPX -> CPC -> CPA
  • Delegated-session flows follow CPX -> CPE -> CPA
  • Operators still define their own ranking and scoring logic for selection
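The two settlement ladders can be enforced mechanically. The sketch below checks that events for a serve_token arrive in ladder order; the ladder definitions follow the paths above, while the validation helper itself is an illustrative assumption rather than part of the AIP spec.

```python
# Sketch: validating settlement-event order per serve_token.
# Ladder contents come from the settlement paths above; the
# enforcement logic is an illustrative assumption.

LADDERS = {
    "click_out": ["CPX", "CPC", "CPA"],          # external click-out flow
    "delegated_session": ["CPX", "CPE", "CPA"],  # delegated-session flow
}

def is_valid_next_event(flow: str, seen: list[str], event: str) -> bool:
    """An event is valid only if it is the next rung on the flow's ladder."""
    ladder = LADDERS[flow]
    return (
        len(seen) < len(ladder)
        and seen == ladder[: len(seen)]   # history must match the ladder so far
        and ladder[len(seen)] == event    # event must be the next rung
    )
```

Note that this only validates ordering; it says nothing about ranking, which operators define independently.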

4. Selection mechanisms

Operators may implement any selection mechanism that satisfies the protocol constraints:

Auction-based selection

Real-time competitive bidding where brand agents submit priced responses. The operator scores bids based on value, relevance, and quality signals. Highest-scoring response wins.

Rule-based selection

Deterministic matching based on eligibility rules, category alignment, and pre-negotiated agreements. No real-time bidding - the operator selects based on configuration.

Hybrid selection

Combines elements of both. For example, rule-based eligibility filtering followed by auction-based ranking among eligible agents.

AIP does not mandate which mechanism operators use. The protocol only requires that:
  • Selection produces a single winner (or no_match)
  • A serve_token is generated for the outcome
  • The interaction mode is determined
  • All responses are evaluated within the timing window
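A hybrid mechanism satisfying these requirements can be sketched as a filter followed by a ranking step. The field names and the pre-computed score are illustrative assumptions; AIP only requires a single winner or no_match.

```python
# Sketch of hybrid selection: rule-based eligibility filtering followed
# by score-based ranking. Field names and scoring are illustrative
# assumptions; AIP mandates only a single winner (or no_match).

from dataclasses import dataclass

@dataclass
class AgentResponse:
    brand_agent_id: str
    category: str
    score: float      # operator-computed score (see Scoring section)
    on_time: bool     # arrived within the computed response window

def select_winner(responses: list[AgentResponse], allowed_categories: set[str]):
    eligible = [
        r for r in responses
        if r.on_time and r.category in allowed_categories  # rule-based filter
    ]
    if not eligible:
        return None  # maps to a no_match PlatformResponse
    return max(eligible, key=lambda r: r.score)  # auction-style ranking
```

Late or out-of-category responses never reach the ranking step, mirroring the timing and eligibility constraints above.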

5. Scoring (informational)

Operators implement their own scoring algorithms to evaluate responses. While AIP does not prescribe a specific scoring model, operators typically consider signals such as:

Value

Economic strength of the response relative to the current intent.

Relevance

Alignment between the offer and the user’s query, conversation, or task context.

Quality

Trust, integrity, and historical performance signals from the brand agent’s verified outcomes.

Operators apply their own scoring logic consistently across all responses, producing deterministic outcomes. AIP does not mandate how operators combine or weight these signals; each operator defines its own scoring methodology.
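One common shape for such a methodology is a weighted sum over normalized signals. The weights below are purely illustrative assumptions; AIP does not prescribe them.

```python
# Illustrative weighted scoring over the value/relevance/quality signals.
# The weights are assumptions for this sketch; each operator defines its
# own methodology.

WEIGHTS = {"value": 0.4, "relevance": 0.4, "quality": 0.2}

def score(signals: dict[str, float]) -> float:
    """Deterministic weighted sum of signals normalized to [0, 1]."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
```

Because the function is pure and the weights are fixed per operator, the same inputs always yield the same score, which is what makes outcomes reproducible.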

6. PlatformResponse

The operator returns a PlatformResponse containing auction status, winner state, render instructions, and optional delegation metadata:
{
  "spec_version": "1.0",
  "response_id": "resp_981",
  "auction_id": "auc_981",
  "serve_token": "stk_abcxyz123",
  "timestamp": "2026-03-26T18:00:02Z",
  "status": "filled",
  "winner": {
    "bid_id": "bid_7823",
    "brand_agent_id": "ba_451",
    "pricing": {
      "model": "CPA",
      "price_micros": 10000000,
      "currency": "USD"
    },
    "billing": {
      "reserved_amount_micros": 500000000,
      "currency": "USD"
    }
  },
  "render": {
    "format": "weave",
    "disclosure": "[Ad]",
    "creative": {
      "advertiser": {
        "brand_name": "Nimbus"
      },
      "ad_assets": {
        "headline": "Scale your CRM",
        "description": "Built for founders.",
        "cta_text": "Try for free"
      },
      "landing_page_url": "https://nimbus.example.com/signup",
      "click_url": "https://admesh.click/stk_abcxyz123"
    }
  },
  "ttl_ms": 60000
}
For delegation-enabled outcomes, the response may include a delegation object:
{
  "spec_version": "1.0",
  "response_id": "resp_982",
  "auction_id": "auc_982",
  "serve_token": "stk_def456",
  "timestamp": "2026-03-26T18:00:02Z",
  "status": "filled",
  "delegation": {
    "available": true,
    "mode": "recommended",
    "trigger": "explicit_consent",
    "cta_text": "Continue with Nimbus"
  },
  "ttl_ms": 60000
}

7. Transparency and verification

All exchanges in the selection cycle are logged as verifiable artifacts:
  • signed ContextRequest
  • signed brand agent responses
  • signed PlatformResponse
  • timestamps and nonces
  • protocol version identifiers

8. Example flow

User asks: “Best AI note-taking tools.”
  1. Platform sends PlatformRequest with latency_budget_ms to Operator.
  2. Operator evaluates participation eligibility - allowed.
  3. Operator derives ContextRequest and distributes (broadcasts) it to all matching brand agents using operator-defined mechanisms.
  4. Brand agents return responses within the computed selection window.
  5. Operator applies its selection logic to rank responses and determine interaction mode.
  6. Platform receives a single PlatformResponse with serve_token.
  7. Lifecycle events connect cleanly to the same attribution chain via serve_token.

9. Protocol guarantees

AIP ensures:
  • Timing compliance: Responses received after the computed window are rejected
  • Deterministic settlement: Event ladder ensures one charge per serve_token
  • Selection autonomy: Settlement paths do not prescribe how operators rank responses
  • Complete verifiability: All messages are signed and timestamped
  • Unified attribution: serve_token links all events to the original selection
  • Operator autonomy: Operators define their own selection and scoring logic within protocol constraints
  • Mode determination: Every selection produces a clear interaction mode (recommend or delegate)

Summary

AIP defines protocol-level constraints for selection - timing, inputs, and outcome guarantees - while operators implement their own selection mechanics and scoring algorithms. The protocol governs how platforms request participation, how brand agents respond, and how operators deliver transparent, reproducible outcomes within their own implementation choices.