Advanced Topics
This section covers advanced usage patterns and features of Malevich Brain.
Tool Composition
Complex tools can be built by composing simpler tools together:
from brain.agents.tool import tool

# Create individual tools
@tool()
def fetch_weather_data(city: str) -> dict:
    """Fetch raw weather data for a city"""
    # Implementation...
    return {"temp": 25, "humidity": 60, "conditions": "Sunny"}

@tool()
def format_weather_report(data: dict) -> str:
    """Format weather data into a human-readable report"""
    # Implementation...
    return f"Weather: {data['conditions']}, Temperature: {data['temp']}°C, Humidity: {data['humidity']}%"

# Compose tools into a higher-level tool
@tool()
def get_weather_report(city: str) -> str:
    """Get a formatted weather report for a city"""
    data = fetch_weather_data(city)
    return format_weather_report(data)
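Only the top-level tool needs to be registered with the agent; the lower-level tools remain plain helper functions. A minimal sketch of registration, assuming an llm instance is already configured (the Agent constructor with llm and tools is shown in the testing example below):

from brain.agents.agent import Agent

# Expose only the composed tool; the helpers stay internal
agent = Agent(llm=llm, tools=[get_weather_report])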
Specialized Input/Output Types
For tools that work with non-textual data, you can use specialized Pydantic models:
from pydantic import BaseModel, Field
from typing import List
import base64

from brain.agents.tool import tool, Output  # assumes Output is exported alongside tool

class ImageInput(BaseModel):
    base64_image: str = Field(..., description="Base64-encoded image data")

class DetectionResult(BaseModel):
    class_name: str
    confidence: float
    bounding_box: List[float]

class ObjectDetectionOutput(Output):
    detections: List[DetectionResult]

@tool()
def detect_objects(input: ImageInput) -> ObjectDetectionOutput:
    """Detect objects in an image"""
    # Decode the image
    image_data = base64.b64decode(input.base64_image)
    # Perform object detection (implementation details omitted)
    detections = [
        DetectionResult(class_name="person", confidence=0.98, bounding_box=[0.1, 0.2, 0.3, 0.4])
    ]
    return ObjectDetectionOutput(detections=detections)
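Building the input model is ordinary Pydantic plus standard-library base64 encoding. For example, with a hypothetical photo.jpg on disk:

with open("photo.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

image_input = ImageInput(base64_image=encoded)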
Error Handling Strategies
Implement more sophisticated error handling in your tools:
from pydantic import BaseModel

from brain.agents.tool import tool, Output  # assumes Output is exported alongside tool

class SearchQuery(BaseModel):
    query: str
    max_results: int = 5

class SearchResult(Output):
    results: list[str]
    error: str | None = None

@tool()
def search(input: SearchQuery) -> SearchResult:
    """Search for information"""
    try:
        # Attempt to perform the search
        if not input.query:
            raise ValueError("Empty search query")
        # Implementation...
        results = ["Result 1", "Result 2", "Result 3"]
        return SearchResult(results=results, error=None)
    except Exception as e:
        # Return a SearchResult with error information
        return SearchResult(
            results=[],
            error=f"Search failed: {str(e)}"
        )
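A common refinement, independent of Malevich Brain, is to catch only the failures you expect and let genuinely unexpected exceptions propagate, so programming errors are not silently converted into tool output. In this sketch, run_search is a hypothetical backend call:

@tool()
def search_strict(input: SearchQuery) -> SearchResult:
    """Search, reporting only expected failures as structured errors"""
    if not input.query:
        return SearchResult(results=[], error="Empty search query")
    try:
        # run_search is a hypothetical function that queries the search backend
        results = run_search(input.query, input.max_results)
    except (TimeoutError, ConnectionError) as e:
        # Expected, recoverable failures become structured errors for the agent
        return SearchResult(results=[], error=f"Search failed: {e}")
    return SearchResult(results=results, error=None)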
Testing Agents and Tools
Malevich Brain includes a TestLLM class for testing agents and tools:
import pytest

from brain.agents.agent import Agent
from brain.agents.tool import tool
from tests.mock_llm import TestLLM

@pytest.mark.asyncio
async def test_calculator_agent():
    # Create a simple calculator tool
    @tool()
    def add(a: int, b: int) -> int:
        """Add two numbers"""
        return a + b

    # Create a mock LLM
    llm = TestLLM()
    llm.support_message_streaming = False

    # Create an agent with the tool
    agent = Agent(llm=llm, tools=[add])

    # Run the agent
    result = await agent.run("Add 2 and 3")

    # Assert the result
    assert "5" in result
Working with Multiple Tools
When using multiple tools, consider organizing them into logical groups:
# Weather tools
@tool(name="get_weather")
def get_weather(city: str) -> str: ...

@tool(name="get_forecast")
def get_forecast(city: str, days: int = 5) -> str: ...

# Calendar tools
@tool(name="get_events")
def get_events(date: str) -> list[str]: ...

@tool(name="add_event")
def add_event(title: str, date: str, time: str) -> str: ...

# Create the agent with all tools
agent = Agent(
    llm=llm,
    tools=[
        # Weather tools
        get_weather,
        get_forecast,
        # Calendar tools
        get_events,
        add_event,
    ],
    instructions="""
    You are an assistant that can help with weather information and calendar management.
    Use the appropriate tools based on what the user is asking for.
    """,
)
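For larger tool sets, one way to keep the grouping explicit is to collect each domain into its own list and splice the lists together when constructing the agent. This is a style choice, not a library requirement:

weather_tools = [get_weather, get_forecast]
calendar_tools = [get_events, add_event]

agent = Agent(
    llm=llm,
    tools=[*weather_tools, *calendar_tools],
    instructions="You are an assistant for weather information and calendar management.",
)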
Performance Considerations
For optimal performance with Malevich Brain:
Minimize Tool Execution Time:
- Use async tools for I/O-bound operations (see the sketch after this list)
- Consider implementing timeouts for external API calls
- Cache results when appropriate

Optimize Message History:
- Limit the number of messages kept in memory
- Implement a message pruning strategy for long conversations

Reduce LLM Token Usage:
- Keep tool descriptions concise
- Be specific in system instructions
- Filter out unnecessary details from tool outputs

Implement Streaming:
- Use streaming responses for better user experience
- Process streamed chunks incrementally
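As a minimal sketch of the first point, an I/O-bound tool can be declared async, wrap its external call in a timeout, and cache repeated lookups in memory. Here fetch_city_weather, the five-second timeout, and the cache policy are illustrative assumptions, not part of Malevich Brain:

import asyncio

from brain.agents.tool import tool

_weather_cache: dict[str, str] = {}

@tool()
async def get_weather_cached(city: str) -> str:
    """Get current weather for a city, with caching and a timeout"""
    if city in _weather_cache:
        return _weather_cache[city]
    try:
        # fetch_city_weather is a hypothetical async call to an external weather API
        report = await asyncio.wait_for(fetch_city_weather(city), timeout=5.0)
    except asyncio.TimeoutError:
        return f"Weather lookup for {city} timed out"
    _weather_cache[city] = report
    return report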