Overview
StateGraph is well suited to building AI agents: systems that can reason, use tools, and make decisions autonomously. An agent workflow typically follows this pattern:
User Input → [LLM Reasoning] → [Tool Calls] → [Tool Results] → [LLM Processing] → Response
Basic Agent Pattern
Here's the simplest agent structure:
from typing import Annotated, List
from typing_extensions import TypedDict
import operator

from upsonic.graphv2 import StateGraph, START, END
from upsonic.models import infer_model
from upsonic.messages import ModelRequest, UserPromptPart, SystemPromptPart, get_text_content
from upsonic.tools import tool

# Define state
class AgentState(TypedDict):
    messages: Annotated[List, operator.add]
    result: str

# Define tools
@tool
def calculator(a: float, b: float, operation: str) -> float:
    """Perform basic math operations. Use this tool for all calculations.

    Args:
        a: First number
        b: Second number
        operation: The operation to perform - must be one of: "add", "multiply", "divide"

    Returns:
        The result of the calculation
    """
    if operation == "add":
        return a + b
    elif operation == "multiply":
        return a * b
    elif operation == "divide":
        return a / b if b != 0 else 0
    return 0

# LLM node
def llm_node(state: AgentState) -> dict:
    """Let the LLM reason and use tools."""
    model = infer_model("openai/gpt-4o-mini")

    # Bind tools to the model
    model_with_tools = model.bind_tools([calculator])

    # Get the last message content
    last_message_content = state["messages"][-1].content if state["messages"] else "Hello"

    # Create a request with a prompt that encourages tool usage
    request = ModelRequest(parts=[
        SystemPromptPart(content="You are a helpful assistant. When asked to perform calculations, you MUST use the calculator tool. Do not calculate in your head - always use the tool for math operations."),
        UserPromptPart(content=last_message_content)
    ])

    # Invoke (Upsonic automatically handles tool execution)
    response = model_with_tools.invoke([request])

    # Extract text content from the response
    text_content = get_text_content(response) or response.text or str(response)

    return {
        "messages": [response],
        "result": text_content
    }

# Build graph
builder = StateGraph(AgentState)
builder.add_node("llm", llm_node)
builder.add_edge(START, "llm")
builder.add_edge("llm", END)
graph = builder.compile()

# Execute
result = graph.invoke({
    "messages": [
        UserPromptPart(content="What is 23 multiplied by 17? Please use the calculator tool to compute this.")
    ],
    "result": ""
})
print(result["result"])
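Note what `Annotated[List, operator.add]` does for `messages`: each node returns only its new messages, and the graph merges each update into existing state by applying the declared reducer rather than overwriting. A minimal pure-Python sketch of that merge behavior, where `apply_update` is a simplified, hypothetical stand-in for what the graph runtime does internally (not an Upsonic API):

```python
import operator
from typing import Annotated, List, get_type_hints
from typing_extensions import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List, operator.add]
    result: str

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into state, honoring Annotated reducers."""
    hints = get_type_hints(AgentState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints.get(key), "__metadata__", ())
        if metadata:                # a reducer was declared, e.g. operator.add
            merged[key] = metadata[0](state[key], value)
        else:                       # no reducer: last write wins
            merged[key] = value
    return merged

state = {"messages": ["hi"], "result": ""}
state = apply_update(state, {"messages": ["response 1"], "result": "done"})
state = apply_update(state, {"messages": ["response 2"]})
print(state["messages"])  # ['hi', 'response 1', 'response 2']
```

This is why `llm_node` can return `{"messages": [response], ...}` with a single-element list: the reducer appends it to the history instead of replacing it.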
Automatic Tool Execution: When you use model.bind_tools(), Upsonic automatically executes tool calls and feeds the results back to the LLM. The final response is already processed.
Agentic Loop with Conditional Exit
Create agents that loop until they complete the task:
from typing import Annotated, List, Literal
from typing_extensions import TypedDict
import operator

from upsonic.graphv2 import StateGraph, START, END, Command
from upsonic.models import infer_model

class LoopingAgentState(TypedDict):
    task: str
    steps_completed: Annotated[List[str], operator.add]
    iterations: Annotated[int, lambda a, b: a + b]
    max_iterations: int
    status: str

def agent_loop(state: LoopingAgentState) -> Command[Literal["agent_loop", END]]:
    """Agent that loops until the task is complete."""
    model = infer_model("openai/gpt-4o-mini")

    # Check whether we should continue
    if state["iterations"] >= state["max_iterations"]:
        return Command(
            update={"status": "max_iterations_reached"},
            goto=END
        )

    # Perform reasoning
    completed = (
        chr(10).join(f"{i+1}. {step}" for i, step in enumerate(state["steps_completed"]))
        if state["steps_completed"] else "None yet"
    )
    prompt = f"""
You are an autonomous agent completing a multi-step task.

TASK: {state['task']}

COMPLETED STEPS:
{completed}

CURRENT ITERATION: {state['iterations'] + 1} of {state['max_iterations']}

INSTRUCTIONS:
- Perform the NEXT concrete action toward completing the task
- Provide actual results, not just plans or proposals
- Be specific and actionable
- When ALL steps are complete (you've done at least 3 iterations), end your response with "TASK_COMPLETE"
- Do NOT ask for user input or approval
- Take your time - don't rush to completion

Your response (the actual work for this step):
"""
    response = model.invoke(prompt)
    response_text = response.text if response and response.text else ""

    if "TASK_COMPLETE" in response_text.upper() and state["iterations"] >= 2:
        return Command(
            update={"status": "completed", "steps_completed": [response_text], "iterations": 1},
            goto=END
        )
    else:
        return Command(
            update={"steps_completed": [response_text], "iterations": 1},
            goto="agent_loop"  # Continue looping
        )

# Build
builder = StateGraph(LoopingAgentState)
builder.add_node("agent_loop", agent_loop)
builder.add_edge(START, "agent_loop")
graph = builder.compile()

result = graph.invoke(
    {
        "task": "Research Python web frameworks and recommend one",
        "steps_completed": [],
        "iterations": 0,
        "max_iterations": 5,
        "status": ""
    },
    config={"recursion_limit": 10}
)
print(f"Status: {result['status']}")
print(f"Steps: {result['steps_completed']}")
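The control flow in `agent_loop` can be isolated and unit-tested without calling a model: the iteration cap is checked before the LLM call, and "TASK_COMPLETE" is only honored once at least two iterations have accumulated (so the agent does real work before declaring victory). A pure-Python sketch of that same decision, where `decide` is a hypothetical helper mirroring the node's branches, not part of Upsonic:

```python
def decide(iterations: int, max_iterations: int, response_text: str) -> str:
    """Mirror agent_loop's control flow: return where the graph goes next."""
    if iterations >= max_iterations:
        return "END:max_iterations_reached"   # cap reached before the model call
    if "TASK_COMPLETE" in response_text.upper() and iterations >= 2:
        return "END:completed"                # done, and not suspiciously early
    return "agent_loop"                       # keep looping

print(decide(5, 5, ""))                     # END:max_iterations_reached
print(decide(1, 5, "ok. TASK_COMPLETE"))    # agent_loop (too early to accept)
print(decide(2, 5, "ok. task_complete"))    # END:completed
```

Keeping this logic pure makes the exit conditions easy to test exhaustively, which matters because a wrong branch here means either premature termination or burning model calls until the recursion limit trips.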
Always set max_iterations and recursion_limit to prevent infinite loops in agentic workflows.
Best Practices
1. Clear System Prompts
SystemPromptPart(content="""
You are a research assistant. Your goal is to:
1. Break down complex questions into sub-questions
2. Use tools when needed
3. Synthesize findings into a clear answer
Always think step-by-step.
""")
2. Limit Tool Sets
Only provide relevant tools:
# ✅ Good - focused tools
if task_type == "math":
    tools = [calculator, statistics_tool]
elif task_type == "research":
    tools = [search_web, summarize_tool]
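If the number of task types grows, an if/elif chain gets unwieldy; a registry dict keeps the mapping in one place. A sketch of that pattern, where the tool functions are hypothetical stubs standing in for real @tool-decorated functions:

```python
from typing import Callable, Dict, List

# Stubs standing in for real @tool-decorated functions
def calculator(a: float, b: float, operation: str) -> float: ...
def statistics_tool(values: List[float]) -> dict: ...
def search_web(query: str) -> str: ...
def summarize_tool(text: str) -> str: ...

# One place to see which task type gets which focused tool set
TOOLSETS: Dict[str, List[Callable]] = {
    "math": [calculator, statistics_tool],
    "research": [search_web, summarize_tool],
}

def tools_for(task_type: str) -> List[Callable]:
    """Return only the tools relevant to the task; none for unknown types."""
    return TOOLSETS.get(task_type, [])

print([t.__name__ for t in tools_for("math")])  # ['calculator', 'statistics_tool']
```

The selected list can then be passed straight to `model.bind_tools(tools_for(task_type))` inside a node.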
3. Set Guardrails
Protect against runaway agents:
class SafeAgentState(TypedDict):
    messages: Annotated[List, operator.add]
    iterations: int
    max_iterations: int
    status: str

def safe_agent(state: SafeAgentState) -> dict:
    if state["iterations"] >= state["max_iterations"]:
        return {"messages": ["Max iterations reached"], "status": "stopped"}
    # Continue normally
    ...
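To see the guard working end to end, here is a self-contained sketch that drives a node until the cap trips. The loop driver and the fake "work" step are illustrative stand-ins (in a real graph the runtime drives the loop and the work is an LLM or tool call), but the guard check is the same shape as in safe_agent above:

```python
def run_with_guard(max_iterations: int) -> dict:
    """Drive an agent step until it hits the iteration cap."""
    state = {"messages": [], "iterations": 0,
             "max_iterations": max_iterations, "status": "running"}
    while True:
        # The guard: same check as safe_agent, evaluated before any work
        if state["iterations"] >= state["max_iterations"]:
            state["messages"].append("Max iterations reached")
            state["status"] = "stopped"
            break
        # Stand-in for real agent work (LLM call, tool use, etc.)
        state["messages"].append(f"work item {state['iterations'] + 1}")
        state["iterations"] += 1
    return state

result = run_with_guard(3)
print(result["status"])         # stopped
print(len(result["messages"]))  # 4 (three work items + the stop notice)
```

Because the guard runs before the work step, the agent never exceeds its budget even if the "task complete" signal never arrives.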
Next Steps
- Advanced Features - Explore Send API and parallel patterns
- Reliability - Add retry and cache policies
- Human-in-Loop - Add human oversight

