A2A Integration Guide
Time to integrate: ~10 minutes. This guide shows how to extend A2A Agent Cards with AAP alignment properties, enabling value coherence checks before agent-to-agent coordination. Examples in both Python and TypeScript.
Overview
The A2A (Agent-to-Agent) protocol defines Agent Cards for capability discovery and task negotiation. AAP extends these cards with an alignment block that declares:
- Who the agent serves (principal relationship)
- What values guide decisions (declared values and conflicts)
- What it can do autonomously (autonomy envelope)
- How decisions are audited (trace commitment)
This extension enables agents to verify value coherence before delegating tasks, rather than discovering conflicts mid-execution.
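Concretely, the four declarations above map onto top-level keys of the alignment block. A minimal sketch as a Python dict, following the full example in Step 2 (field values here are illustrative, not normative):

```python
# Minimal alignment block sketch -- values are illustrative, not normative.
alignment_block = {
    "principal": {"type": "human", "relationship": "delegated_authority"},
    "values": {
        "declared": ["principal_benefit", "transparency"],
        "conflicts_with": ["deceptive_marketing"],
    },
    "autonomy_envelope": {
        "bounded_actions": ["product-search"],
        "forbidden_actions": ["share_payment_info"],
    },
    "audit_commitment": {"trace_format": "ap-trace-v1", "retention_days": 90},
}

# All four declarations are present
assert {"principal", "values", "autonomy_envelope", "audit_commitment"} <= alignment_block.keys()
```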
Where A2A and AAP Fit
A2A and AAP are complementary protocols in the agentic AI stack, both part of the Agentic AI Foundation (AAIF):
+---------------------------------------------------------------+
|                  Agentic AI Foundation (AAIF)                 |
+---------------+---------------+---------------+---------------+
|      MCP      |      A2A      |   AAP / AIP   |   AGENTS.md   |
| Agent<->Tools | Agent<->Agent |    Trust &    |    Project    |
|               |               |   Integrity   |   Guidance    |
+---------------+---------------+---------------+---------------+
| "What tools   | "What can we  | "Should we    | "How should   |
|  can I use?"  |  do together?"|  work together|  I behave     |
|               |               |  and can we   |  here?"       |
|               |               |  prove it?"   |               |
+---------------+---------------+---------------+---------------+
MCP + A2A + AAP/AIP = the complete trust stack. MCP connects agents to tools. A2A connects agents to each other. AAP verifies that coordinating agents share compatible values and produces auditable decision trails. AIP adds real-time integrity monitoring of agent reasoning.
The Alignment Card as Superset of the A2A Agent Card
An A2A Agent Card tells other agents what you can do. An Alignment Card tells them why you do it and whose interests you serve.
| A2A Agent Card | AAP Alignment Card | What AAP Adds |
|---|---|---|
| id, name | agent_id, card_id | Stable identity for audit trails with issuance/expiry |
| description | values.declared | Machine-verifiable intent, not just prose |
| skills | autonomy_envelope.bounded_actions | Which skills are safe to execute autonomously |
| capabilities | autonomy_envelope | Escalation triggers, forbidden actions, spending limits |
| securitySchemes | (complementary) | A2A handles auth; AAP handles behavioral trust |
| extensions | extensions.aap | URI-based extension linking to alignment metadata |
| signature | issued_at, expires_at | Both support signed, time-bound artifacts |
| (no equivalent) | principal | Who the agent serves and their relationship |
| (no equivalent) | values.conflicts_with | Explicit declaration of incompatible values |
| (no equivalent) | audit_commitment | Trace format, retention, queryability guarantees |
The Alignment Card doesn’t replace the A2A Agent Card — it extends it:
+-------------------------------------------------+
|                 A2A Agent Card                  |
|  name, skills, capabilities, interfaces,        |
|  securitySchemes, extensions                    |
|                                                 |
| +---------------------------------------------+ |
| |            AAP Alignment Block              | |
| |  principal, values, autonomy_envelope,      | |
| |  audit_commitment, extensions               | |
| +---------------------------------------------+ |
+-------------------------------------------------+
Prerequisites
# Python
pip install agent-alignment-protocol
# TypeScript
npm install @mnemom/agent-alignment-protocol
Step 1: Understand Your Current Agent Card
A standard A2A Agent Card (v0.3) declares capabilities:
{
"id": "shopping-assistant",
"name": "shopping-assistant",
"description": "Finds and compares products for users",
"url": "https://shopping.example.com/agent",
"version": "1.0.0",
"provider": {
"name": "Acme Corp",
"contact": "support@acme.example.com"
},
"capabilities": {
"streaming": true,
"pushNotifications": false,
"stateTransitionHistory": true
},
"skills": [
{
"id": "product-search",
"name": "Product Search",
"description": "Search for products matching criteria",
"inputSchema": {
"type": "object",
"properties": {
"query": {"type": "string"},
"maxPrice": {"type": "number"}
}
}
},
{
"id": "compare-products",
"name": "Compare Products",
"description": "Compare features of multiple products"
},
{
"id": "purchase",
"name": "Purchase Product",
"description": "Complete a purchase transaction"
}
],
"interfaces": [
{"type": "json-rpc", "endpoint": "https://shopping.example.com/rpc"}
],
"securitySchemes": {
"oauth2": {
"type": "oauth2",
"flows": {
"clientCredentials": {
"tokenUrl": "https://auth.example.com/token",
"scopes": {"agent:invoke": "Invoke agent skills"}
}
}
}
},
"extensions": []
}
This tells other agents what your agent can do, but not how it makes decisions or whose interests it serves.
Step 2: Add the Alignment Block
Extend your Agent Card with an alignment block and declare AAP support via the A2A extensions array:
{
"id": "shopping-assistant",
"name": "shopping-assistant",
"description": "Finds and compares products for users",
"url": "https://shopping.example.com/agent",
"version": "1.0.0",
"provider": {
"name": "Acme Corp",
"contact": "support@acme.example.com"
},
"capabilities": {
"streaming": true,
"pushNotifications": false,
"stateTransitionHistory": true
},
"skills": [
{"id": "product-search", "name": "Product Search", "...": "..."},
{"id": "compare-products", "name": "Compare Products", "...": "..."},
{"id": "purchase", "name": "Purchase Product", "...": "..."}
],
"interfaces": [
{"type": "json-rpc", "endpoint": "https://shopping.example.com/rpc"}
],
"securitySchemes": {
"oauth2": {"type": "oauth2", "...": "..."}
},
"extensions": [
{
"uri": "urn:aap:alignment-card",
"version": "0.1.0",
"required": false
}
],
"alignment": {
"aap_version": "0.1.0",
"card_id": "ac-shopping-assistant-001",
"agent_id": "shopping-assistant",
"issued_at": "2026-01-31T12:00:00Z",
"principal": {
"type": "human",
"relationship": "delegated_authority"
},
"values": {
"declared": ["principal_benefit", "transparency", "minimal_data"],
"conflicts_with": ["deceptive_marketing", "hidden_fees", "dark_patterns"]
},
"autonomy_envelope": {
"bounded_actions": ["product-search", "compare-products"],
"escalation_triggers": [
{
"condition": "skill_id == \"purchase\"",
"action": "escalate",
"reason": "Purchases require explicit user approval"
},
{
"condition": "purchase_value > 100",
"action": "escalate",
"reason": "Exceeds autonomous spending limit"
}
],
"forbidden_actions": ["share_payment_info", "auto_subscribe"]
},
"audit_commitment": {
"trace_format": "ap-trace-v1",
"retention_days": 90,
"queryable": true
}
}
}
Key Mapping: A2A Skills to AAP Actions
Your A2A skills map to AAP bounded_actions:
| A2A Skill | AAP Treatment | Rationale |
|---|---|---|
| product-search | bounded_actions | Low risk, no state change |
| compare-products | bounded_actions | Low risk, no state change |
| purchase | escalation_triggers | Financial commitment, requires approval |
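This mapping can be applied in code with a small helper that splits an Agent Card's skills into autonomous and escalated sets based on the alignment block. This is an illustrative sketch, not an SDK function; it assumes escalation triggers of the form `skill_id == "<id>"` as used in this guide's examples:

```python
def classify_skills(agent_card: dict) -> dict:
    """Split A2A skills into autonomous vs. escalated per the alignment block.

    Illustrative helper: assumes escalation triggers shaped like
    'skill_id == "<id>"', as in this guide's examples.
    """
    envelope = agent_card.get("alignment", {}).get("autonomy_envelope", {})
    bounded = set(envelope.get("bounded_actions", []))
    # Pull skill IDs out of skill_id-based escalation triggers
    escalated = {
        t["condition"].split('"')[1]
        for t in envelope.get("escalation_triggers", [])
        if t["condition"].startswith("skill_id ==")
    }
    result = {"autonomous": [], "escalated": [], "unclassified": []}
    for skill in agent_card.get("skills", []):
        sid = skill["id"]
        if sid in escalated:
            result["escalated"].append(sid)
        elif sid in bounded:
            result["autonomous"].append(sid)
        else:
            # Not declared anywhere -- safest to treat as requiring review
            result["unclassified"].append(sid)
    return result
```

Running this against the Step 2 card classifies product-search and compare-products as autonomous and purchase as escalated.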
Step 3: Serve the Alignment Card
AAP specifies that Alignment Cards SHOULD be served at a well-known URL:
GET https://shopping.example.com/.well-known/alignment-card.json
You can either:
Option A: Embed in Agent Card (recommended for A2A)
{
"name": "shopping-assistant",
"alignment": { "...": "full alignment block" }
}
Option B: Reference External Card
{
"name": "shopping-assistant",
"alignment": {
"$ref": "https://shopping.example.com/.well-known/alignment-card.json"
}
}
Option C: A2A v0.3 Extensions (declare support, serve separately)
{
"extensions": [
{
"uri": "urn:aap:alignment-card",
"version": "0.1.0",
"required": false
}
]
}
With Option C, AAP-aware agents fetch the alignment card from the well-known URL. Non-AAP agents ignore the extension. Set required: true if all coordinating agents must support AAP.
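The resolution order across the three options can be sketched as a single helper: prefer an embedded block, then fall back to the well-known URL when the extension is declared. This is an illustrative, stdlib-only sketch (the function name and error handling are assumptions; resolve a $ref yourself if you use Option B):

```python
import json
from urllib.request import urlopen

AAP_EXTENSION_URI = "urn:aap:alignment-card"

def fetch_alignment(agent_card: dict, base_url: str):
    """Resolve an agent's alignment block: embedded first, then Option C.

    Illustrative sketch: assumes the well-known path from this guide and
    that the card is served as plain JSON.
    """
    # Options A/B: an embedded alignment block wins if present
    if "alignment" in agent_card:
        return agent_card["alignment"]
    # Option C: the extension declares AAP support; fetch the well-known URL
    declares_aap = any(
        ext.get("uri") == AAP_EXTENSION_URI
        for ext in agent_card.get("extensions", [])
    )
    if not declares_aap:
        return None  # non-AAP agent; apply your fallback policy
    with urlopen(f"{base_url}/.well-known/alignment-card.json", timeout=5.0) as resp:
        return json.load(resp)
```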
Step 4: Implement Value Coherence Handshake
Before your agent delegates work to another agent, verify value coherence.
Python:
from aap import check_coherence
def delegate_task(my_card: dict, their_agent_card: dict, task: dict):
"""Delegate a task to another agent after checking value coherence."""
# Extract alignment blocks
my_alignment = my_card.get("alignment", {})
their_alignment = their_agent_card.get("alignment", {})
if not their_alignment:
# Other agent doesn't support AAP
return handle_no_alignment(their_agent_card, task)
# Check coherence
result = check_coherence(my_alignment, their_alignment)
if result.compatible:
# Values are compatible, proceed
return execute_delegation(their_agent_card, task)
# Handle conflicts
for conflict in result.value_alignment.conflicts:
print(f"Value conflict: {conflict.description}")
if result.proceed:
# Minor conflicts, can proceed with logging
return execute_delegation(their_agent_card, task, log_conflicts=True)
else:
# Significant conflicts, escalate to principal
return escalate_to_principal(
task=task,
conflicts=result.value_alignment.conflicts,
recommendation=result.proposed_resolution
)
TypeScript:
import { checkCoherence } from '@mnemom/agent-alignment-protocol';
import type { AlignmentCard } from '@mnemom/agent-alignment-protocol';
interface A2AAgentCard {
name: string;
alignment?: AlignmentCard;
[key: string]: unknown;
}
function delegateTask(myCard: A2AAgentCard, theirCard: A2AAgentCard, task: unknown) {
const myAlignment = myCard.alignment;
const theirAlignment = theirCard.alignment;
if (!theirAlignment) {
return handleNoAlignment(theirCard, task);
}
const result = checkCoherence(myAlignment, theirAlignment);
if (result.compatible) {
return executeDelegation(theirCard, task);
}
for (const conflict of result.value_alignment.conflicts) {
console.log(`Value conflict: ${conflict.description}`);
}
if (result.proceed) {
return executeDelegation(theirCard, task, { logConflicts: true });
} else {
return escalateToPrincipal({
task,
conflicts: result.value_alignment.conflicts,
recommendation: result.proposed_resolution,
});
}
}
Step 5: Generate AP-Traces for A2A Actions
When your agent performs actions (especially across agent boundaries), produce AP-Traces.
Python:
from aap import APTrace, Action, Decision, Alternative, Escalation
from datetime import datetime, timezone
import uuid
def search_products_with_trace(card_id: str, query: str, preferences: dict):
"""A2A skill implementation with AAP tracing."""
# Your existing search logic
results = perform_search(query, preferences)
# Build trace for this decision
trace = APTrace(
trace_id=f"tr-{uuid.uuid4().hex[:12]}",
agent_id="shopping-assistant",
card_id=card_id,
timestamp=datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
action=Action(
type="search",
name="product-search", # Matches A2A skill ID
category="bounded",
),
decision=Decision(
alternatives_considered=[
Alternative(
option_id=r["id"],
description=r["name"],
score=r["relevance_score"],
flags=["sponsored"] if r.get("sponsored") else [],
)
for r in results[:5]
],
selected=results[0]["id"] if results else None,
selection_reasoning=build_reasoning(results, preferences),
values_applied=["principal_benefit", "transparency"],
),
escalation=Escalation(
evaluated=True,
triggers_checked=[
{"trigger": "skill_id == \"purchase\"", "matched": False},
],
required=False,
reason="Search action within autonomy envelope",
),
)
# Store trace for audit
store_trace(trace.model_dump(mode="json"))
return results
TypeScript:
import type { APTrace } from '@mnemom/agent-alignment-protocol';
import { randomUUID } from 'crypto';
function searchProductsWithTrace(cardId: string, query: string, preferences: Record<string, unknown>) {
const results = performSearch(query, preferences);
const trace: APTrace = {
trace_id: `tr-${randomUUID().replace(/-/g, '').slice(0, 12)}`,
agent_id: 'shopping-assistant',
card_id: cardId,
timestamp: new Date().toISOString(),
action: {
type: 'search',
name: 'product-search', // Matches A2A skill ID
category: 'bounded',
},
decision: {
alternatives_considered: results.slice(0, 5).map((r) => ({
option_id: r.id,
description: r.name,
score: r.relevanceScore,
flags: r.sponsored ? ['sponsored'] : [],
})),
selected: results[0]?.id ?? null,
selection_reasoning: buildReasoning(results, preferences),
values_applied: ['principal_benefit', 'transparency'],
},
escalation: {
evaluated: true,
triggers_checked: [
{ trigger: 'skill_id == "purchase"', matched: false },
],
required: false,
reason: 'Search action within autonomy envelope',
},
};
storeTrace(trace);
return results;
}
Step 6: Handle Incoming Coherence Checks
When another agent requests your alignment card or initiates a coherence check, your agent needs endpoints to respond.
Python (Flask):
from flask import Flask, jsonify, request
from aap import check_coherence
app = Flask(__name__)
# Serve alignment card at well-known URL
@app.route("/.well-known/alignment-card.json")
def alignment_card():
return jsonify(load_alignment_card())
# Handle coherence check requests
@app.route("/aap/coherence-check", methods=["POST"])
def coherence_check():
"""Respond to value coherence handshake."""
their_card = request.json.get("initiator_alignment")
my_card = load_alignment_card()
result = check_coherence(their_card, my_card)
return jsonify({
"compatible": result.compatible,
"score": result.score,
"proceed": result.proceed,
"matched_values": result.value_alignment.matched,
"conflicts": [
{"description": c.description, "severity": c.severity}
for c in result.value_alignment.conflicts
],
})
TypeScript (Express):
import express from 'express';
import { checkCoherence } from '@mnemom/agent-alignment-protocol';
const app = express();
app.use(express.json());
app.get('/.well-known/alignment-card.json', (_req, res) => {
res.json(loadAlignmentCard());
});
app.post('/aap/coherence-check', (req, res) => {
const theirCard = req.body.initiator_alignment;
const myCard = loadAlignmentCard();
const result = checkCoherence(theirCard, myCard);
res.json({
compatible: result.compatible,
score: result.score,
proceed: result.proceed,
matched_values: result.value_alignment.matched,
conflicts: result.value_alignment.conflicts.map((c) => ({
description: c.description,
conflict_type: c.conflict_type,
})),
});
});
app.listen(3000);
Complete Example: Two Agents Coordinating
Here’s a complete flow with a user agent delegating to a vendor agent:
# user_agent.py
from aap import check_coherence
USER_AGENT_CARD = {
"name": "user-shopping-agent",
"alignment": {
"aap_version": "0.1.0",
"card_id": "ac-user-agent-001",
"agent_id": "user-shopping-agent",
"issued_at": "2026-01-31T12:00:00Z",
"principal": {"type": "human", "relationship": "delegated_authority"},
"values": {
"declared": ["principal_benefit", "transparency", "minimal_data"],
"conflicts_with": ["deceptive_marketing", "hidden_fees"],
},
"autonomy_envelope": {
"bounded_actions": ["search", "compare", "recommend"],
"escalation_triggers": [
{"condition": "action == \"purchase\"", "action": "escalate", "reason": "Requires approval"}
],
"forbidden_actions": ["share_payment_info"],
},
"audit_commitment": {"trace_format": "ap-trace-v1", "retention_days": 30, "queryable": True},
}
}
VENDOR_AGENT_CARD = {
"name": "vendor-deals-agent",
"alignment": {
"aap_version": "0.1.0",
"card_id": "ac-vendor-agent-001",
"agent_id": "vendor-deals-agent",
"issued_at": "2026-01-31T12:00:00Z",
"principal": {"type": "organization", "relationship": "delegated_authority"},
"values": {
"declared": ["customer_satisfaction", "transparency", "upselling"],
"conflicts_with": [],
},
"autonomy_envelope": {
"bounded_actions": ["search", "recommend", "apply_discount"],
"escalation_triggers": [],
"forbidden_actions": [],
},
"audit_commitment": {"trace_format": "ap-trace-v1", "retention_days": 90, "queryable": True},
}
}
def coordinate_with_vendor():
"""Attempt to coordinate with vendor agent."""
result = check_coherence(
USER_AGENT_CARD["alignment"],
VENDOR_AGENT_CARD["alignment"]
)
print(f"Compatible: {result.compatible}")
print(f"Score: {result.score:.2f}")
print(f"Matched values: {result.value_alignment.matched}")
if result.value_alignment.conflicts:
print("Conflicts detected:")
for conflict in result.value_alignment.conflicts:
print(f" - {conflict.description}")
if result.proceed:
print("Proceeding with coordination (minor conflicts logged)")
else:
print("Escalating to principal for approval")
return result
# Run
if __name__ == "__main__":
coordinate_with_vendor()
# Output:
# Compatible: False
# Score: 0.42
# Matched values: ['transparency']
# Conflicts detected:
# - Responder's 'upselling' may conflict with initiator's 'principal_benefit'
# Escalating to principal for approval
For a comprehensive example with multiple vendors, coherence checks, delegation traces, and verification, see the working example code (available in both Python and TypeScript).
EU Compliance Shortcut
Both SDKs include presets for EU AI Act Article 50 compliance (enforcement August 2026):
Python:
from aap import AlignmentCard, EU_COMPLIANCE_AUDIT_COMMITMENT, EU_COMPLIANCE_VALUES
alignment = AlignmentCard(
# ...
values=EU_COMPLIANCE_VALUES,
audit_commitment=EU_COMPLIANCE_AUDIT_COMMITMENT, # 365-day retention, queryable
)
TypeScript:
import { EU_COMPLIANCE_AUDIT_COMMITMENT, EU_COMPLIANCE_VALUES } from '@mnemom/agent-alignment-protocol';
const alignment = {
// ...
values: EU_COMPLIANCE_VALUES,
audit_commitment: EU_COMPLIANCE_AUDIT_COMMITMENT, // 365-day retention, queryable
};
See the EU compliance guide for the full field-level Article 50 mapping.
Beyond Verification: Real-Time Monitoring with AIP
AAP provides post-hoc verification — checking whether actions matched declared alignment after they happen. The Agent Integrity Protocol (AIP) adds real-time integrity monitoring by analyzing agent reasoning (thinking blocks) as they occur.
Both AAP and AIP share the same Alignment Card. An A2A agent with an alignment block gets both:
- AAP: Did this agent do what it said it would? (verify_trace, check_coherence, detect_drift)
- AIP: Is this agent thinking clearly right now? (integrity checkpoints with clear / review_needed / boundary_violation verdicts)
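The three AIP verdicts lend themselves to a simple dispatch policy at each checkpoint. A sketch, assuming nothing beyond the verdict values themselves (the continue/flag/halt policy is an illustrative choice, not part of AIP):

```python
from enum import Enum

class IntegrityVerdict(str, Enum):
    """The three AIP checkpoint verdicts."""
    CLEAR = "clear"
    REVIEW_NEEDED = "review_needed"
    BOUNDARY_VIOLATION = "boundary_violation"

def handle_checkpoint(verdict: str, action: str) -> str:
    """Map an AIP integrity verdict to a local policy decision.

    The policy choices here (continue / flag / halt) are illustrative
    assumptions; your agent's response is up to you.
    """
    v = IntegrityVerdict(verdict)
    if v is IntegrityVerdict.CLEAR:
        return f"continue:{action}"
    if v is IntegrityVerdict.REVIEW_NEEDED:
        # Proceed, but queue the reasoning for human review
        return f"flag_for_review:{action}"
    # boundary_violation: stop and escalate to the principal
    return f"halt_and_escalate:{action}"
```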
To make AAP/AIP signals visible in your existing observability stack, use the OpenTelemetry exporters:
# Python
pip install aip-otel-exporter
# TypeScript
npm install @mnemom/aip-otel-exporter
These emit standard OTel spans with attributes like aap.verification.result, aap.verification.similarity_score, aip.integrity.verdict — compatible with Langfuse, Arize Phoenix, Datadog, and Grafana.
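If you are not using the exporter packages, the same attributes can be assembled by hand before attaching them to a span. A sketch of the mapping only (attribute keys are the ones listed above; the shape of the input verification result is an assumption, not the exporter's API):

```python
def to_otel_attributes(verification: dict) -> dict:
    """Flatten an AAP/AIP verification result into OTel span attributes.

    Attribute keys are those named in this guide; the input dict shape
    is an illustrative assumption.
    """
    attrs = {
        "aap.verification.result": "pass" if verification.get("compatible") else "fail",
        "aap.verification.similarity_score": float(verification.get("score", 0.0)),
    }
    # Include the AIP verdict only when an integrity checkpoint ran
    if "integrity_verdict" in verification:
        attrs["aip.integrity.verdict"] = verification["integrity_verdict"]
    return attrs
```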
Handling Non-AAP Agents
Not all agents will support AAP. Define your policy:
Python:
def delegate_with_fallback(my_card: dict, their_card: dict, task: dict):
"""Handle delegation to agents with or without AAP support."""
their_alignment = their_card.get("alignment")
if their_alignment:
# Full AAP flow
result = check_coherence(my_card["alignment"], their_alignment)
if not result.proceed:
return escalate_to_principal(task, result.value_alignment.conflicts)
return execute_delegation(their_card, task)
# No AAP support - apply fallback policy
if is_trusted_agent(their_card):
# Known agent, proceed with logging
return execute_delegation(their_card, task, log_no_aap=True)
if task_is_low_risk(task):
# Low-risk task, proceed with caution
return execute_delegation(their_card, task, log_no_aap=True)
# High-risk task with unknown agent - escalate
return escalate_to_principal(
task,
reason="Target agent does not support AAP alignment verification"
)
TypeScript:
function delegateWithFallback(myCard: A2AAgentCard, theirCard: A2AAgentCard, task: unknown) {
const theirAlignment = theirCard.alignment;
if (theirAlignment) {
const result = checkCoherence(myCard.alignment!, theirAlignment);
if (!result.proceed) {
return escalateToPrincipal({ task, conflicts: result.value_alignment.conflicts });
}
return executeDelegation(theirCard, task);
}
if (isTrustedAgent(theirCard)) {
return executeDelegation(theirCard, task, { logNoAap: true });
}
if (taskIsLowRisk(task)) {
return executeDelegation(theirCard, task, { logNoAap: true });
}
return escalateToPrincipal({
task,
reason: 'Target agent does not support AAP alignment verification',
});
}
Standard Value Identifiers
Use these standard identifiers where applicable:
| Identifier | Description |
|---|---|
| principal_benefit | Prioritize principal’s interests |
| transparency | Disclose reasoning and limitations |
| minimal_data | Collect only necessary information |
| harm_prevention | Avoid actions causing harm |
| honesty | Do not deceive or mislead |
| user_control | Respect user autonomy and consent |
| privacy | Protect personal information |
| fairness | Avoid discriminatory outcomes |
Custom values MUST be defined in the definitions block of your alignment card.
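A declared-values list can be linted against this registry before publishing a card. A minimal sketch (the standard set is the table above; the function name is illustrative):

```python
# Standard value identifiers from the table above
STANDARD_VALUES = {
    "principal_benefit", "transparency", "minimal_data", "harm_prevention",
    "honesty", "user_control", "privacy", "fairness",
}

def undefined_values(alignment: dict) -> list:
    """Return declared values that are neither standard nor locally defined.

    A non-empty result means the card violates the rule that custom values
    MUST appear in the definitions block.
    """
    declared = alignment.get("values", {}).get("declared", [])
    defined = set(alignment.get("definitions", {}))
    return [v for v in declared if v not in STANDARD_VALUES and v not in defined]
```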
Reputation in A2A Agent Cards
Beyond value coherence, agents can advertise their Mnemom Trust Rating directly in A2A Agent Cards. This enables programmatic trust decisions before delegation — an agent can refuse to work with peers below a certain trust rating threshold.
The Trust Block
Add a trust block to your A2A Agent Card:
{
"id": "shopping-assistant",
"name": "shopping-assistant",
"skills": ["..."],
"alignment": { "...": "existing alignment block" },
"trust": {
"provider": "mnemom",
"score": 847,
"grade": "AA",
"verified_url": "https://api.mnemom.ai/v1/reputation/shopping-assistant",
"badge_url": "https://api.mnemom.ai/v1/reputation/shopping-assistant/badge.svg"
}
}
| Field | Type | Description |
|---|---|---|
| provider | string | Trust provider identifier ("mnemom") |
| score | number | Current reputation score (0-1000) |
| grade | string | Letter grade (AAA, AA, A, BBB, BB, B, CCC, NR) |
| verified_url | string | API endpoint for real-time score verification |
| badge_url | string | Dynamic SVG badge URL |
The score and grade in the trust block are snapshots. Always verify via verified_url for real-time scores before making high-stakes delegation decisions. The badge URL always serves the current score.
Programmatic Trust Thresholds
Agents can use the trust block to enforce minimum reputation requirements before accepting delegation:
Python:
import httpx
MINIMUM_GRADE = "A" # Only delegate to agents with grade A or above
GRADE_ORDER = ["NR", "CCC", "B", "BB", "BBB", "A", "AA", "AAA"]
def should_delegate(their_agent_card: dict) -> bool:
"""Decide whether to delegate based on reputation."""
trust = their_agent_card.get("trust")
if not trust or trust.get("provider") != "mnemom":
# No Mnemom reputation -- fall back to alignment-only check
return check_alignment_only(their_agent_card)
# Verify real-time score (don't trust static snapshot)
verified = httpx.get(trust["verified_url"]).json()
grade = verified.get("grade", "NR")
if GRADE_ORDER.index(grade) < GRADE_ORDER.index(MINIMUM_GRADE):
print(f"Rejected: {their_agent_card['id']} has grade {grade} (minimum: {MINIMUM_GRADE})")
return False
print(f"Accepted: {their_agent_card['id']} has grade {grade} ({verified['score']}/1000)")
return True
TypeScript:
import { fetchReputation } from '@mnemom/reputation';
const MINIMUM_SCORE = 700; // Grade A threshold
interface A2AAgentCard {
id: string;
trust?: {
provider: string;
score: number;
grade: string;
verified_url: string;
};
alignment?: unknown;
[key: string]: unknown;
}
async function shouldDelegate(theirCard: A2AAgentCard): Promise<boolean> {
const trust = theirCard.trust;
if (!trust || trust.provider !== 'mnemom') {
return checkAlignmentOnly(theirCard);
}
// Verify real-time score
const verified = await fetchReputation(theirCard.id);
if (!verified || verified.score < MINIMUM_SCORE) {
console.log(`Rejected: ${theirCard.id} score ${verified?.score ?? 'NR'} < ${MINIMUM_SCORE}`);
return false;
}
console.log(`Accepted: ${theirCard.id} score ${verified.score} (${verified.grade})`);
return true;
}
ReputationGate Middleware
For agents that process many delegation requests, implement a ReputationGate middleware that checks reputation before any task execution:
Python (Flask):
from functools import wraps
from flask import Flask, jsonify, request
import httpx
app = Flask(__name__)
MINIMUM_SCORE = 700
API_BASE = "https://api.mnemom.ai"
def reputation_gate(min_score=MINIMUM_SCORE):
"""Middleware that rejects requests from agents below a reputation threshold."""
def decorator(f):
@wraps(f)
def wrapped(*args, **kwargs):
requester_id = request.headers.get("X-Agent-Id")
if not requester_id:
return jsonify({"error": "X-Agent-Id header required"}), 400
rep = httpx.get(f"{API_BASE}/v1/reputation/{requester_id}").json()
if rep.get("score", 0) < min_score:
return jsonify({
"error": "reputation_insufficient",
"message": f"Minimum score {min_score} required, agent has {rep.get('score', 'NR')}",
"reputation": {
"score": rep.get("score"),
"grade": rep.get("grade"),
"verified_url": f"{API_BASE}/v1/reputation/{requester_id}",
},
}), 403
return f(*args, **kwargs)
return wrapped
return decorator
@app.route("/a2a/tasks/execute", methods=["POST"])
@reputation_gate(min_score=700)
def execute_task():
"""Only agents with grade A or above can execute tasks."""
# ... task execution logic
TypeScript (Express):
import { fetchReputation } from '@mnemom/reputation';
import type { Request, Response, NextFunction } from 'express';
function reputationGate(minScore = 700) {
return async (req: Request, res: Response, next: NextFunction) => {
const requesterId = req.headers['x-agent-id'] as string;
if (!requesterId) {
return res.status(400).json({ error: 'X-Agent-Id header required' });
}
const rep = await fetchReputation(requesterId);
if (!rep || rep.score < minScore) {
return res.status(403).json({
error: 'reputation_insufficient',
message: `Minimum score ${minScore} required, agent has ${rep?.score ?? 'NR'}`,
reputation: {
score: rep?.score,
grade: rep?.grade,
verified_url: `https://api.mnemom.ai/v1/reputation/${requesterId}`,
},
});
}
next();
};
}
// Apply to A2A task execution endpoint
app.post('/a2a/tasks/execute', reputationGate(700), (req, res) => {
// Only agents with grade A or above reach here
});
SDK Helper: getA2AReputationExtension()
Both SDKs provide a helper to generate the trust block for your Agent Card:
Python:
from mnemom_reputation import get_a2a_reputation_extension
trust_block = get_a2a_reputation_extension(agent_id="shopping-assistant")
# Returns:
# {
# "provider": "mnemom",
# "score": 847,
# "grade": "AA",
# "verified_url": "https://api.mnemom.ai/v1/reputation/shopping-assistant",
# "badge_url": "https://api.mnemom.ai/v1/reputation/shopping-assistant/badge.svg"
# }
# Add to your A2A Agent Card
agent_card["trust"] = trust_block
TypeScript:
import { getA2AReputationExtension } from '@mnemom/reputation';
const trustBlock = await getA2AReputationExtension('shopping-assistant');
// Returns the trust block with current score and grade
// Add to your A2A Agent Card
agentCard.trust = trustBlock;
The getA2AReputationExtension() helper fetches the latest score from the API, so the trust block always contains current data. Call it when serving your Agent Card, not just once at startup.
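If calling the API on every Agent Card request is too costly, a short-lived cache is a reasonable middle ground between "once at startup" and "every request". A sketch (the TTL, the injected fetcher, and the clock parameter are illustrative assumptions, not SDK features):

```python
import time

def make_cached_trust_block(fetch_trust_block, ttl_seconds: float = 300.0,
                            clock=time.monotonic):
    """Wrap a trust-block fetcher with a time-based cache.

    fetch_trust_block: a callable returning the current trust block (for
    example, the SDK helper). ttl_seconds and the injectable clock are
    illustrative; tune them to how fresh your advertised score must be.
    """
    cache = {"value": None, "at": float("-inf")}

    def get() -> dict:
        now = clock()
        if now - cache["at"] >= ttl_seconds:
            # Cache expired (or never filled): refresh from the fetcher
            cache["value"] = fetch_trust_block()
            cache["at"] = now
        return cache["value"]

    return get
```

Serve `get()` from your Agent Card endpoint; peers still get a near-current snapshot, and high-stakes callers can verify via verified_url as described above.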
What’s Next?
Questions? See the specification or check the examples.