Deep Agent Adapter
openharness-deepagent (Python)
Sophisticated AI agents with planning, subagents, and file system capabilities.
Status: Available | Version: v0.1.0 | Framework: LangGraph
Installation
pip install openharness-deepagent
# With MCP support
pip install "openharness-deepagent[mcp]"
Note: Deep Agents requires Python 3.11+.
Quick Start
from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig
from openharness.types import ExecuteRequest
# Create adapter with configuration
adapter = DeepAgentAdapter(DeepAgentConfig(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You are an expert researcher and analyst.",
))
# Execute a prompt
result = await adapter.execute(
    ExecuteRequest(message="Research the latest trends in AI agents")
)
print(result.output)
Capabilities
Built on LangGraph
Deep Agents are compiled LangGraph StateGraph objects, providing powerful state management, streaming, and extensibility through the LangChain ecosystem.
Supported
| Domain | Capability | Notes |
|---|---|---|
| Execution | execute() | Single-result (non-streaming) execution |
| Execution | execute_stream() | Async streaming |
| Planning | write_todos | Task decomposition |
| Planning | read_todos | View task list |
| Subagents | task tool | Delegate to specialized agents |
| Files | ls, read_file, write_file | Full file operations |
| Files | edit_file | Exact replacements |
| Files | glob, grep | Search files |
| Execution | execute tool | Sandboxed shell commands |
| Tools | list_tools() | Built-in + custom |
| MCP | Via langchain-mcp-adapters | MCP tool integration |
| Models | Multi-model | Any LangChain model |
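Custom tools are plain Python callables passed to the adapter through the tools option shown in the Configuration section. The sketch below assumes list_tools() is exposed directly on the adapter and returns both built-in and custom tools; the exact return type is not documented here.

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Register the custom tool alongside the built-ins (tools= comes from
# DeepAgentConfig; see the Configuration section below).
adapter = DeepAgentAdapter(DeepAgentConfig(
    tools=[word_count],
))

# list_tools() should report built-in and custom tools together.
print(adapter.list_tools())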
Not Supported
| Domain | Reason | Workaround |
|---|---|---|
| Sessions | Stateless by default | Use file backends for persistence |
| Agents | Per-invocation execution model | Create a separate adapter instance per agent configuration |
| Hooks | Not supported | Use middleware |
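As a sketch of the sessions workaround: point the adapter at a FILESYSTEM backend and have the agent persist its own notes between otherwise independent execute() calls. The paths and prompts below are illustrative.

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig, BackendType
from openharness.types import ExecuteRequest

# The adapter itself holds no conversation state; files written to the
# backend survive across calls and act as lightweight memory.
adapter = DeepAgentAdapter(DeepAgentConfig(
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/path/to/workspace",
))

await adapter.execute(ExecuteRequest(message="Write your findings so far to notes.md"))
await adapter.execute(ExecuteRequest(message="Read notes.md and continue the analysis"))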
Planning & Todos
Deep Agents include built-in planning tools for task decomposition:
from openharness_deepagent.types import DeepAgentConfig
adapter = DeepAgentAdapter(DeepAgentConfig(
    system_prompt="""
    You are a methodical researcher. For complex tasks:
    1. Use write_todos to create a task list
    2. Work through each task systematically
    3. Update todos as you complete them
    """
))
# The agent will automatically use write_todos and read_todos
result = await adapter.execute(
    ExecuteRequest(message="Research and summarize the top 5 AI frameworks")
)
Built-in Planning Tools
| Tool | Description |
|---|---|
| write_todos | Create or update task lists for workflows |
| read_todos | View current task list and status |
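To watch the planning tools in action, you can filter tool_call_start events from the stream. This sketch assumes write_todos and read_todos surface as ordinary tool calls, which is consistent with the event types listed under Streaming below.

from openharness.types import ExecuteRequest

# Print planning activity as it happens; regular text chunks pass through.
async for event in adapter.execute_stream(
    ExecuteRequest(message="Plan and execute a comparison of three vector databases")
):
    if event.type == "tool_call_start" and event.name in ("write_todos", "read_todos"):
        print(f"\n[Planning step: {event.name}]")
    elif event.type == "text":
        print(event.content, end="")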
Subagents
Delegate specialized work to subagents for context isolation:
from openharness_deepagent.types import DeepAgentConfig, SubagentConfig
adapter = DeepAgentAdapter(DeepAgentConfig(
    model="anthropic:claude-sonnet-4-5-20250929",
    subagents=[
        SubagentConfig(
            name="code-reviewer",
            description="Reviews code for bugs and improvements",
            system_prompt="You are an expert code reviewer.",
        ),
        SubagentConfig(
            name="researcher",
            description="Performs in-depth technical research",
            system_prompt="You are an expert researcher.",
            model="openai:gpt-4o",  # Different model
        ),
    ],
))
# Agent can delegate using the task tool
result = await adapter.execute(
    ExecuteRequest(message="Review this code and research best practices")
)
Context Isolation
Subagents run with isolated context, keeping the main agent's context clean and allowing specialized handling of specific tasks.
File System
Deep Agents have comprehensive file system tools:
from openharness_deepagent.types import DeepAgentConfig, BackendType
# Use filesystem backend for real disk access
adapter = DeepAgentAdapter(DeepAgentConfig(
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/path/to/workspace",
))
# Agent can now use file tools
result = await adapter.execute(
    ExecuteRequest(message="Read the README.md and summarize it")
)
Built-in File Tools
| Tool | Description |
|---|---|
| ls | List directory contents |
| read_file | Read file with optional pagination |
| write_file | Create or overwrite files |
| edit_file | Edit with exact string replacements |
| glob | Find files matching patterns |
| grep | Search text patterns in files |
| execute | Run sandboxed shell commands |
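The agent chooses these tools on its own; a prompt only needs to describe the outcome. A small sketch, reusing the FILESYSTEM-backed adapter from the snippet above:

from openharness.types import ExecuteRequest

# The agent is expected to combine glob, grep, and edit_file to satisfy this
# request; the exact tool sequence is up to the model.
result = await adapter.execute(
    ExecuteRequest(
        message="Find every TODO comment in the *.py files and change 'TODO' to 'FIXME'"
    )
)
print(result.output)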
File Backends
| Backend | Description |
|---|---|
| STATE | Ephemeral, in-memory (default) |
| FILESYSTEM | Real disk access |
| STORE | Persistent via LangGraph Store |
| COMPOSITE | Route paths to different backends |
Streaming
from openharness.types import ExecuteRequest
async for event in adapter.execute_stream(
    ExecuteRequest(message="Write a Python script to analyze data")
):
    if event.type == "text":
        print(event.content, end="")
    elif event.type == "thinking":
        print(f"[Thinking: {event.thinking}]")
    elif event.type == "tool_call_start":
        print(f"\n[Tool: {event.name}]")
    elif event.type == "progress":
        print(f"[Progress: {event.step}]")
    elif event.type == "done":
        print("\n[Complete]")
Event Types
| Event | Description |
|---|---|
| text | Text content chunk |
| thinking | Agent reasoning (if enabled) |
| tool_call_start | Tool invocation beginning |
| tool_result | Tool execution result |
| tool_call_end | Tool invocation complete |
| progress | Task completion progress |
| error | Error occurred |
| done | Execution complete |
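The streaming loop above handles only a few of these. A minimal sketch covering the remaining events, using only the attributes documented here (the payload fields of tool_result and error are not shown, so they are left untouched):

from openharness.types import ExecuteRequest

tool_results = 0
async for event in adapter.execute_stream(
    ExecuteRequest(message="Summarize the data in data.csv")
):
    if event.type == "progress":
        print(f"[Progress: {event.step}]")
    elif event.type == "tool_result":
        tool_results += 1  # payload attributes are not documented here
    elif event.type == "error":
        print("[Error during execution]")
    elif event.type == "done":
        break
print(f"\n{tool_results} tool results received")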
Configuration
DeepAgentConfig Options
from openharness_deepagent.types import (
    DeepAgentConfig,
    SubagentConfig,
    BackendType,
    InterruptConfig,
)
config = DeepAgentConfig(
    # Model (default: claude-sonnet-4-5-20250929)
    model="anthropic:claude-sonnet-4-5-20250929",
    # Custom system prompt (appended to the default)
    system_prompt="You are an expert assistant.",
    # Custom tools
    tools=[my_tool_function],
    # Subagents
    subagents=[
        SubagentConfig(
            name="researcher",
            description="Research specialist",
        ),
    ],
    # File backend
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/workspace",
    # Human-in-the-loop interrupts
    interrupt_on=[
        InterruptConfig(
            tool_name="execute",
            allowed_decisions=["approve", "reject"],
        ),
    ],
)
Supported Models
Deep Agents supports any LangChain model:
- anthropic:claude-sonnet-4-5-20250929 (default)
- anthropic:claude-opus-4-20250514
- openai:gpt-4o
- google:gemini-1.5-pro
- ollama:llama3.2 (local)
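Switching models is just a matter of the model string; for example, a locally hosted Ollama model (assuming the corresponding LangChain integration is installed):

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig

# Provider-prefixed model strings follow the LangChain convention.
local_adapter = DeepAgentAdapter(DeepAgentConfig(model="ollama:llama3.2"))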