Installation

pip install openharness-deepagent

# With MCP support
pip install "openharness-deepagent[mcp]"

Note: Deep Agents requires Python 3.11+.

Quick Start

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig
from openharness.types import ExecuteRequest

# Create adapter with configuration
adapter = DeepAgentAdapter(DeepAgentConfig(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You are an expert researcher and analyst.",
))

# Execute a prompt
result = await adapter.execute(
    ExecuteRequest(message="Research the latest trends in AI agents")
)
print(result.output)
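
Both execute() and execute_stream() are async. In a standalone script, drive them with an event loop, e.g. asyncio.run (standard library only; the rest mirrors the Quick Start above):

import asyncio

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig
from openharness.types import ExecuteRequest


async def main() -> None:
    # Same adapter configuration as the Quick Start above
    adapter = DeepAgentAdapter(DeepAgentConfig(
        model="anthropic:claude-sonnet-4-5-20250929",
        system_prompt="You are an expert researcher and analyst.",
    ))
    result = await adapter.execute(
        ExecuteRequest(message="Research the latest trends in AI agents")
    )
    print(result.output)


asyncio.run(main())  # runs the coroutine to completion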

Capabilities

Built on LangGraph

Deep Agents are compiled LangGraph StateGraph objects, providing powerful state management, streaming, and extensibility through the LangChain ecosystem.
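
A minimal sketch of what this enables, assuming the adapter exposes its compiled graph through a hypothetical graph attribute (the attribute name is an assumption, not part of the documented interface; execute() and execute_stream() remain the supported entry points):

# Hypothetical: `adapter.graph` is assumed here, and `adapter` is the instance
# built in the Quick Start above.
graph = adapter.graph

# Standard LangGraph operations on a compiled StateGraph:
print(graph.get_graph().draw_mermaid())  # inspect the node/edge structure
state = graph.invoke(                     # direct invocation, bypassing the adapter
    {"messages": [{"role": "user", "content": "Hello"}]}
)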

Supported

| Domain | Capability | Notes |
| --- | --- | --- |
| Execution | execute() | Single, non-streaming execution |
| Execution | execute_stream() | Async streaming |
| Planning | write_todos | Task decomposition |
| Planning | read_todos | View task list |
| Subagents | task tool | Delegate to specialized agents |
| Files | ls, read_file, write_file | Full file operations |
| Files | edit_file | Exact replacements |
| Files | glob, grep | Search files |
| Execution | execute tool | Sandboxed shell commands |
| Tools | list_tools() | Built-in + custom |
| MCP | Via langchain-mcp-adapters | MCP tool integration |
| Models | Multi-model | Any LangChain model |

Not Supported

| Domain | Reason | Workaround |
| --- | --- | --- |
| Sessions | Stateless by default | Use file backends for persistence (sketch below) |
| Agents | Uses an invocation model | Create separate adapter instances |
| Hooks | Not supported | Use middleware |
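
For example, because the FILESYSTEM backend writes to real disk, two independent runs can hand work off through a shared workspace directory, which approximates session persistence. A sketch using only configuration fields documented later on this page:

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig, BackendType
from openharness.types import ExecuteRequest

# Both adapters share the same workspace, so files written by the first run
# are visible to the second one.
config = DeepAgentConfig(
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/path/to/workspace",
)

first = DeepAgentAdapter(config)
await first.execute(
    ExecuteRequest(message="Research AI agent frameworks and save notes to notes.md")
)

second = DeepAgentAdapter(config)
result = await second.execute(
    ExecuteRequest(message="Continue the research using the notes in notes.md")
)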

Planning & Todos

Deep Agents include built-in planning tools for task decomposition:

from openharness_deepagent.types import DeepAgentConfig

adapter = DeepAgentAdapter(DeepAgentConfig(
    system_prompt="""
    You are a methodical researcher. For complex tasks:
    1. Use write_todos to create a task list
    2. Work through each task systematically
    3. Update todos as you complete them
    """
))

# The agent will automatically use write_todos and read_todos
result = await adapter.execute(
    ExecuteRequest(message="Research and summarize the top 5 AI frameworks")
)

Built-in Planning Tools

| Tool | Description |
| --- | --- |
| write_todos | Create or update task lists for workflows |
| read_todos | View current task list and status |

Subagents

Delegate specialized work to subagents for context isolation:

from openharness_deepagent.types import DeepAgentConfig, SubagentConfig

adapter = DeepAgentAdapter(DeepAgentConfig(
    model="anthropic:claude-sonnet-4-5-20250929",
    subagents=[
        SubagentConfig(
            name="code-reviewer",
            description="Reviews code for bugs and improvements",
            system_prompt="You are an expert code reviewer.",
        ),
        SubagentConfig(
            name="researcher",
            description="Performs in-depth technical research",
            system_prompt="You are an expert researcher.",
            model="openai:gpt-4o",  # Different model
        ),
    ],
))

# Agent can delegate using the task tool
result = await adapter.execute(
    ExecuteRequest(message="Review this code and research best practices")
)

Context Isolation

Subagents run with isolated context, keeping the main agent's context clean and allowing specialized handling of specific tasks.
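
Delegation happens through the built-in task tool, and only the subagent's final result returns to the main agent's context. Naming a subagent in the prompt makes the hand-off explicit (reusing the adapter configured above):

# `adapter` is the instance with the code-reviewer and researcher subagents above.
result = await adapter.execute(
    ExecuteRequest(
        message=(
            "Delegate the review of utils.py to the code-reviewer subagent, "
            "then summarize its findings in three bullet points."
        )
    )
)
print(result.output)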

File System

Deep Agents have comprehensive file system tools:

from openharness_deepagent.types import DeepAgentConfig, BackendType

# Use filesystem backend for real disk access
adapter = DeepAgentAdapter(DeepAgentConfig(
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/path/to/workspace",
))

# Agent can now use file tools
result = await adapter.execute(
    ExecuteRequest(message="Read the README.md and summarize it")
)

Built-in File Tools

| Tool | Description |
| --- | --- |
| ls | List directory contents |
| read_file | Read a file, with optional pagination |
| write_file | Create or overwrite files |
| edit_file | Edit with exact string replacements |
| glob | Find files matching patterns |
| grep | Search text patterns in files |
| execute | Run sandboxed shell commands |

File Backends

| Backend | Description |
| --- | --- |
| STATE | Ephemeral, in-memory (default) |
| FILESYSTEM | Real disk access |
| STORE | Persistent via LangGraph Store |
| COMPOSITE | Route paths to different backends |

Streaming

from openharness.types import ExecuteRequest

async for event in adapter.execute_stream(
    ExecuteRequest(message="Write a Python script to analyze data")
):
    if event.type == "text":
        print(event.content, end="")
    elif event.type == "thinking":
        print(f"[Thinking: {event.thinking}]")
    elif event.type == "tool_call_start":
        print(f"\n[Tool: {event.name}]")
    elif event.type == "progress":
        print(f"[Progress: {event.step}]")
    elif event.type == "done":
        print("\n[Complete]")

Event Types

| Event | Description |
| --- | --- |
| text | Text content chunk |
| thinking | Agent reasoning (if enabled) |
| tool_call_start | Tool invocation beginning |
| tool_result | Tool execution result |
| tool_call_end | Tool invocation complete |
| progress | Task completion progress |
| error | Error occurred |
| done | Execution complete |
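
The stream can also be reduced into a final transcript, for example by collecting text chunks and stopping on a terminal event (only the event attributes used in the streaming example above are assumed):

chunks: list[str] = []

async for event in adapter.execute_stream(
    ExecuteRequest(message="Summarize the repository README")
):
    if event.type == "text":
        chunks.append(event.content)   # same attribute as in the streaming example
    elif event.type in ("error", "done"):
        break                          # terminal events end the stream

print("".join(chunks))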

Configuration

DeepAgentConfig Options

from openharness_deepagent.types import (
    DeepAgentConfig,
    SubagentConfig,
    BackendType,
    InterruptConfig,
)

config = DeepAgentConfig(
    # Model (default: claude-sonnet-4-5-20250929)
    model="anthropic:claude-sonnet-4-5-20250929",

    # Custom system prompt (appends to default)
    system_prompt="You are an expert assistant.",

    # Custom tools (see the sketch after this block)
    tools=[my_tool_function],

    # Subagents
    subagents=[
        SubagentConfig(
            name="researcher",
            description="Research specialist",
        ),
    ],

    # File backend
    backend_type=BackendType.FILESYSTEM,
    backend_root_dir="/workspace",

    # Human-in-the-loop interrupts
    interrupt_on=[
        InterruptConfig(
            tool_name="execute",
            allowed_decisions=["approve", "reject"],
        ),
    ],
)
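
The tools field accepts custom tools. A hedged sketch, assuming plain Python callables with type hints and docstrings are wrapped as tools (as is common in the LangChain ecosystem); get_word_count is purely illustrative:

from openharness_deepagent import DeepAgentAdapter
from openharness_deepagent.types import DeepAgentConfig


def get_word_count(text: str) -> int:
    """Count the whitespace-separated words in a piece of text."""
    # Illustrative custom tool; if the adapter requires langchain_core tools
    # rather than bare callables, wrap this with @tool from langchain_core.tools.
    return len(text.split())


adapter = DeepAgentAdapter(DeepAgentConfig(
    system_prompt="You are a writing assistant.",
    tools=[get_word_count],
))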

Supported Models

Deep Agents supports any LangChain model:

  • anthropic:claude-sonnet-4-5-20250929 (default)
  • anthropic:claude-opus-4-20250514
  • openai:gpt-4o
  • google:gemini-1.5-pro
  • ollama:llama3.2 (local)
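
Switching providers is a one-line change to the model string, assuming the matching LangChain integration package is installed and any required API key is configured. For a local run:

# Assumes a local Ollama server with llama3.2 pulled, plus the corresponding
# LangChain provider package (e.g. langchain-ollama) installed.
local_adapter = DeepAgentAdapter(DeepAgentConfig(
    model="ollama:llama3.2",
    system_prompt="You are a concise local assistant.",
))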