Working with Tools

Tools are functions that perform specific actions on an agent's behalf. They are the core capability that turns a plain LLM into an agent that can act, not just generate text.

Creating Tools

The simplest way to create a tool is to decorate a Python function with the @tool decorator:

from brain.agents.tool import tool

@tool()
def weather(city: str) -> str:
    """Get the weather for a city"""
    # Implementation...
    return f"The weather in {city} is sunny."

The @tool decorator automatically infers the input and output schemas from the type hints in the function signature.
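
Conceptually, the decorator derives schemas equivalent to the following (an illustrative sketch; the generated objects are internal to the framework, and WeatherInput is a hypothetical name):

from pydantic import BaseModel

# Roughly the input schema that @tool() infers from weather's signature:
class WeatherInput(BaseModel):
    city: str

# The return annotation (-> str) likewise becomes the output schema.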

Custom Tool Names and Descriptions

You can customize the tool name and description:

@tool(
    name="get_weather",
    description="Get the current weather conditions for a specified city"
)
def weather(city: str) -> str:
    """Implementation details..."""
    return f"The weather in {city} is sunny."

Using Pydantic Models for Input/Output

For more complex schemas, you can use Pydantic models:

from brain.agents.tool import tool, Output
from pydantic import BaseModel

class WeatherRequest(BaseModel):
    city: str
    country: str = "US"
    units: str = "metric"

class WeatherResponse(Output):
    temperature: float
    conditions: str
    humidity: int

@tool()
def weather(request: WeatherRequest) -> WeatherResponse:
    """Get detailed weather information"""
    # Implementation...
    return WeatherResponse(
        temperature=25.5,
        conditions="Sunny",
        humidity=60
    )
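
To see exactly what schema the LLM receives for these models, you can ask Pydantic for it directly (assuming the framework uses Pydantic's standard schema generation):

import json

# Print the JSON schema derived from the request model; fields with
# defaults, such as country and units, show up as optional.
print(json.dumps(WeatherRequest.model_json_schema(), indent=2))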

Asynchronous Tools

You can create asynchronous tools for non-blocking operations:

import asyncio

@tool()
async def fetch_weather(city: str) -> str:
    """Fetch weather data asynchronously"""
    await asyncio.sleep(0.1)  # stand-in for a real network request
    return f"The weather in {city} is cloudy."

Stateful Tools

Tools can maintain state across calls by setting the stateful flag and providing a state_schema:

from brain.agents.tool import tool, StateContext
from pydantic import BaseModel, Field

class ConversationState(BaseModel):
    conversation_history: list[str] = Field(default_factory=list)

@tool(stateful=True, state_schema=ConversationState)
def chat_memory(message: str, context: StateContext[ConversationState]) -> str:
    """Remember previous messages in the conversation"""
    state = context.state
    state.conversation_history.append(message)
    return f"Remembered. History now has {len(state.conversation_history)} messages."

How Tools Are Used by Agents

When an agent is created with tools, the following happens:

  1. The tool specifications (name, description, input/output schemas) are provided to the LLM

  2. When the LLM decides to use a tool, it generates a structured tool call with inputs

  3. The agent validates the inputs against the tool’s input schema

  4. The agent executes the tool with the validated inputs

  5. The tool’s output is converted to a string and sent back to the LLM

  6. The LLM uses the tool output to generate its next response

This process allows the LLM to use tools effectively without needing to know how they are implemented.
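
A minimal sketch of that loop (illustrative pseudocode; every name here, such as llm.generate and tool.spec, is hypothetical rather than the framework's real API):

def run_agent(llm, tools, user_message):
    tools_by_name = {t.name: t for t in tools}
    messages = [user_message]
    while True:
        # Step 1: tool specs travel with every request to the LLM.
        reply = llm.generate(messages, tool_specs=[t.spec for t in tools])
        if not reply.tool_calls:
            return reply.text  # Step 6: a plain response ends the loop.
        for call in reply.tool_calls:  # Step 2: structured tool calls.
            tool = tools_by_name[call.name]
            inputs = tool.input_schema.model_validate(call.arguments)  # Step 3
            output = tool.run(**inputs.model_dump())                   # Step 4
            messages.append(f"[{call.name}] {output}")  # Step 5: stringified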

Tool Error Handling

If a tool raises an exception during execution, the error is captured and returned to the LLM:

@tool()
def divide(a: float, b: float) -> float:
    """Divide a by b"""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

If the LLM calls this tool with b=0, the agent will catch the error and send the error message back to the LLM, allowing it to try again with different inputs.
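
Conceptually, the agent wraps each execution in the equivalent of the following (the exact error-message format is framework-specific, and calling the decorated function directly is shown only for illustration):

try:
    result = divide(10, 0)
except Exception as exc:
    result = f"Tool error: {exc}"  # e.g. "Tool error: Cannot divide by zero"
# Either way, result is sent back to the LLM as the tool's output.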