INSTALLATION
From PyPI

```
pip install acp-sdk
```

From Source (Development)

```
git clone https://github.com/phoenix1803/Agent-Control-Plane.git
cd Agent-Control-Plane
pip install -e .
```

Requirements
- Python 3.9 or higher
- No external dependencies for core functionality
- Optional: openai, anthropic for LLM integrations
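Since the LLM integrations are optional, code that uses them typically guards the import. A minimal sketch of such a guard using only the standard library; the `has_integration` helper is hypothetical, not part of the SDK:

```python
from importlib import util

def has_integration(package: str) -> bool:
    """Return True if an optional integration package is installed."""
    return util.find_spec(package) is not None

# LLM integrations would be enabled only when their package is present
openai_available = has_integration("openai")
anthropic_available = has_integration("anthropic")
```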
INITIALIZATION
Every recording session begins with acp.init(). This creates a new run directory and prepares the trace recorder.
```
import acp

acp.init(
    agent_version="v1.0.0",   # Your agent's version identifier
    llm="gpt-4-turbo",        # Model being used
    seed=42,                  # Random seed for reproducibility
    tools=["search", "calc"]  # List of enabled tool names
)
```

| Parameter | Type | Description |
|---|---|---|
| agent_version | str | Identifier for your agent (e.g., "v1.0.0") |
| llm | str | Model identifier (e.g., "gpt-4", "claude-3") |
| seed | int | Random seed for reproducibility |
| tools | list[str] | List of enabled tool names |
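The seed parameter exists so a run can be replayed deterministically. The idea can be sketched with Python's own random module; how acp applies the seed internally may differ, and `seeded_choices` below is purely illustrative:

```python
import random

def seeded_choices(seed: int, options: list, n: int) -> list:
    """Draw n pseudo-random choices deterministically from a seed."""
    rng = random.Random(seed)  # isolated RNG seeded once, as with acp.init(seed=...)
    return [rng.choice(options) for _ in range(n)]

# Two runs with the same seed reproduce the same decisions
run_a = seeded_choices(42, ["search", "calc"], 5)
run_b = seeded_choices(42, ["search", "calc"], 5)
```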
Strict Mode
For production environments, enable strict mode to catch configuration errors early:
```
acp.init(
    agent_version="v1.0.0",
    llm="gpt-4",
    strict=True  # Raises RuntimeError if steps are recorded without an active run
)
```

DECORATORS
@acp.tool(name=None, retry_policy=0)
Wraps a function to automatically record its inputs, outputs, and execution status as a [TOOL] phase step.
```
import requests

@acp.tool(name="web_search", retry_policy=2)
def search(query: str, max_results: int = 10) -> dict:
    """
    Automatic capture:
    - Input: {"query": "...", "max_results": 10}
    - Output: {"results": [...]}
    - Duration: 1234ms
    - Status: success/error

    With retry_policy=2:
    - Up to 2 retry attempts on failure
    - Each retry recorded as a [RETRY] step
    - Final success/failure recorded as a [TOOL] step
    """
    response = requests.get("https://api.search.com", params={"q": query})
    return response.json()
```

| Parameter | Default | Description |
|---|---|---|
| name | func.__name__ | Custom name for the tool in traces |
| retry_policy | 0 | Number of automatic retries (0 = none) |
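The retry semantics in the table can be illustrated with a plain decorator. This is an independent sketch of the behavior described above, not the SDK's implementation; `with_retries` and `flaky` are hypothetical names:

```python
import functools

def with_retries(retry_policy: int = 0):
    """Re-invoke the wrapped function up to retry_policy extra times on error."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retry_policy + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    # each intermediate failure would be recorded as a [RETRY] step
                    if attempt == retry_policy:
                        raise  # final failure surfaces as the [TOOL] step's error
        return wrapper
    return decorator

calls = {"n": 0}

@with_retries(retry_policy=2)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = flaky()  # fails twice, succeeds on the third attempt
```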
@acp.llm_wrapper
Wraps LLM call functions to record prompts and responses as [REASON] phase steps.
```
import openai

@acp.llm_wrapper
def call_gpt(messages: list, temperature: float = 0.7) -> str:
    """
    Automatic capture:
    - Input: Full message array and parameters
    - Output: Complete response text
    - Token counts (if available)
    - Latency
    """
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        temperature=temperature
    )
    return response.choices[0].message.content

# Usage
thought = call_gpt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like?"}
])
```

STEP CONTEXT
For fine-grained control, use the acp.step() context manager to manually define execution boundaries.
```
with acp.step("reason", input_data={"source": "user_query"}) as ctx:
    # Perform your logic
    thought = analyze_situation()

    # Dynamically set outputs
    ctx.set_output("thought", thought)
    ctx.set_output("confidence", 0.85)

    # Set status based on result
    if thought is None:
        ctx.set_status("failure")
    else:
        ctx.set_status("success")
```

Available Phases
"reason" — LLM thinking"observe" — Environment input"act" — Agent action"memory" — State update"retry" — Retry attempt"terminate" — Run endStepContext Methods
- ctx.set_output(key, value) — Add output data
- ctx.set_status(status) — Set "success", "failure", or "retry"
- ctx.add_metadata(key, value) — Attach custom metadata

STATE MANAGEMENT
Track agent memory state changes with acp.update_memory(). Each update creates a snapshot that's attached to subsequent steps.
```
# Initialize conversation memory
memory = [
    {"role": "system", "content": "You are a helpful assistant."}
]

# Update memory after user input
memory.append({"role": "user", "content": "What's 2+2?"})
acp.update_memory(memory)

# After LLM response
with acp.step("reason") as ctx:
    response = call_llm(memory)
    ctx.set_output("response", response)

# Update memory with assistant response
memory.append({"role": "assistant", "content": response})
acp.update_memory(memory)

# Memory snapshots are automatically saved to the snapshots/ directory
# Each step references its memory snapshot by ID
```

Memory Best Practices
- Call update_memory() after any state change
- Pass the complete memory state, not just deltas
- Memory can be any serializable type (dict, list, string)
- Large memory states are stored as separate snapshot files
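Because the full state is passed on every update, snapshotting can be sketched with the standard json module. The snap_NNN.json naming mirrors the artifact layout described below, but `write_snapshot` is illustrative, not the SDK's writer:

```python
import json
import tempfile
from pathlib import Path

def write_snapshot(snapshot_dir: Path, counter: int, memory) -> Path:
    """Serialize the complete memory state to snap_<counter>.json."""
    path = snapshot_dir / f"snap_{counter:03d}.json"
    path.write_text(json.dumps(memory, indent=2))
    return path

snapshot_dir = Path(tempfile.mkdtemp())
memory = [{"role": "system", "content": "You are a helpful assistant."}]
first = write_snapshot(snapshot_dir, 1, memory)

# Later state change: snapshot the whole list again, not a delta
memory.append({"role": "user", "content": "What's 2+2?"})
second = write_snapshot(snapshot_dir, 2, memory)

restored = json.loads(second.read_text())
```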
ARTIFACT STRUCTURE
Each run generates a directory with the following structure:
```
traces/
└── run_a1b2c3d4/
    ├── meta.json      # Run configuration and status
    ├── steps.jsonl    # Newline-delimited step records
    ├── snapshots/     # Memory state snapshots
    │   ├── snap_001.json
    │   ├── snap_002.json
    │   └── ...
    └── tools/         # Raw tool output logs
        ├── search_001.log
        └── calc_002.log
```

meta.json
```
{
  "run_id": "run_a1b2c3d4",
  "agent_version": "v1.0.0",
  "llm": "gpt-4",
  "seed": 42,
  "tools": ["search", "calc"],
  "status": "success",
  "start_time": "2026-02-11T10:00:00Z",
  "end_time": "2026-02-11T10:00:15Z",
  "step_count": 12,
  "truncated": false
}
```

steps.jsonl (single line example)
{"step_id": 1, "phase": "reason", "status": "success", "timestamp": "2026-02-11T10:00:01Z", "input": {"prompt": "..."}, "output": {"thought": "..."}, "snapshot_id": "snap_001", "duration_ms": 1234}