
What are Policy Points?

Policy Points are specific locations in the agent execution pipeline where you can apply safety policies to control and validate content. These points allow you to enforce security, compliance, and content safety rules at critical stages of agent interaction. You can configure policies at four different policy points:
  • user_policy: Validates and transforms user inputs before they are sent to the LLM
  • agent_policy: Validates agent responses before they are shown to users
  • tool_policy_pre: Validates tools during registration, before they can be used
  • tool_policy_post: Validates specific tool calls with their arguments before execution
Each policy point serves a distinct purpose and runs at a specific stage in the agent’s execution flow, giving you comprehensive control over content safety throughout the entire interaction lifecycle.
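The ordering above can be sketched with a toy pipeline. This is a plain-Python model of one agent turn, not Upsonic's implementation: the four stand-in functions record when each policy point fires, and run_turn is a hypothetical stand-in for the agent loop.

```python
# Toy model (a sketch, not Upsonic internals) of the order in which
# the four policy points fire during one agent turn.

fired = []

def user_policy(text):      fired.append("user_policy");      return text
def agent_policy(text):     fired.append("agent_policy");     return text
def tool_policy_pre(tool):  fired.append("tool_policy_pre");  return tool
def tool_policy_post(call): fired.append("tool_policy_post"); return call

def run_turn(user_input):
    tool = tool_policy_pre({"name": "search"})           # at tool registration
    prompt = user_policy(user_input)                     # before the LLM sees the input
    call = tool_policy_post({"tool": tool, "args": {}})  # before the tool call executes
    response = f"echo: {prompt}"                         # stand-in for the LLM response
    return agent_policy(response)                        # before the user sees the output

run_turn("hello")
assert fired == ["tool_policy_pre", "user_policy", "tool_policy_post", "agent_policy"]
```

In the real engine each policy can also transform or block the content it inspects, as the examples below show.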

User Policy

This policy runs before the user’s input is sent to the LLM provider. It validates and potentially modifies user inputs to prevent sensitive data leaks, ensure compliance, and enforce content safety rules. When it runs: Before the user input is processed by the agent and sent to the LLM provider. Use cases:
  • Detecting and anonymizing PII (names, emails, phone numbers) before it reaches the LLM provider
  • Filtering inappropriate or harmful content from user inputs
  • Ensuring user inputs comply with regulatory requirements
  • Enforcing content safety rules on incoming text
Example:
from upsonic import Agent, Task
from upsonic.safety_engine.policies.pii_policies import PIIAnonymizePolicy

agent = Agent(
    "anthropic/claude-sonnet-4-6",
    user_policy=PIIAnonymizePolicy,
    debug=True
)

task = Task(
    description="My email is john.doe@example.com and phone is 555-1234. What are my email and phone?"
)

result = agent.print_do(task)
print(result)  # "Your email is john.doe@example.com and phone is 555-1234"

Policy Scope

When using user_policy, you can control exactly which parts of the input get sanitized using scope parameters. Scope is resolved with Policy > Task > Agent priority — if the policy sets a scope flag, it takes precedence over the task, which takes precedence over the agent defaults.
| Scope Parameter        | What it Controls         | Default |
| ---------------------- | ------------------------ | ------- |
| apply_to_description   | Task description text    | True    |
| apply_to_context       | Task context             | True    |
| apply_to_system_prompt | Agent system prompt      | True    |
| apply_to_chat_history  | Chat history from memory | True    |
| apply_to_tool_outputs  | Tool return values       | True    |
Scope can be set at three levels:
from upsonic import Agent, Task
from upsonic.safety_engine.base import Policy
from upsonic.safety_engine.policies.pii_policies import PIIRule, PIIAnonymizeAction

# 1. Policy-level scope (highest priority)
policy = Policy(
    name="PII Anonymize",
    description="Anonymizes PII",
    rule=PIIRule(),
    action=PIIAnonymizeAction(),
    apply_to_description=True,
    apply_to_context=True,
    apply_to_system_prompt=False,  # Skip system prompt
    apply_to_chat_history=True,
    apply_to_tool_outputs=True,
)

# 2. Task-level scope (medium priority)
task = Task(
    description="My email is john.doe@example.com",
    policy_apply_to_description=True,
    policy_apply_to_context=False,
)

# 3. Agent-level scope (lowest priority / defaults)
agent = Agent(
    "anthropic/claude-sonnet-4-6",
    user_policy=policy,
    user_policy_apply_to_description=True,
    user_policy_apply_to_system_prompt=False,
    debug=True
)

result = agent.print_do(task)
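The Policy > Task > Agent priority can be modeled as "the first level that explicitly sets a flag wins". This is a toy resolver, an assumption about the described behavior rather than Upsonic's internal code; resolve_scope is a hypothetical helper.

```python
# Toy model of the Policy > Task > Agent scope resolution described above.
# Assumption: an unset flag is represented as None and falls through to
# the next level; if no level sets it, the documented default (True) applies.

def resolve_scope(policy_value, task_value, agent_value, default=True):
    """Return the effective scope flag: the highest-priority level that sets it wins."""
    for value in (policy_value, task_value, agent_value):
        if value is not None:
            return value
    return default

# Policy leaves the flag unset, task disables it, agent enables it -> task wins:
assert resolve_scope(None, False, True) is False
# Policy sets the flag explicitly -> task and agent values are ignored:
assert resolve_scope(True, False, False) is True
# Nothing sets the flag -> documented default (True):
assert resolve_scope(None, None, None) is True
```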

Streaming with User Policy

User policies work seamlessly with streaming. Anonymized content is de-anonymized token by token in real time:
import asyncio
from upsonic import Agent, Task
from upsonic.safety_engine.policies.pii_policies import PIIAnonymizePolicy

async def main():
    agent = Agent(
        "anthropic/claude-sonnet-4-6",
        user_policy=PIIAnonymizePolicy,
        debug=True,
    )

    task = Task(
        description="My email is john.doe@example.com. What is my email?"
    )

    # Pure text streaming
    async for text in agent.astream(task):
        print(text, end="", flush=True)
    print()

asyncio.run(main())
Event-based streaming with policy:
import asyncio
from upsonic import Agent, Task
from upsonic.safety_engine.policies.pii_policies import PIIAnonymizePolicy
from upsonic.run.events.events import TextDeltaEvent

async def main():
    agent = Agent(
        "anthropic/claude-sonnet-4-6",
        user_policy=PIIAnonymizePolicy,
        debug=True,
    )

    task = Task(
        description="My email is john.doe@example.com. What is my email?"
    )

    # Event-based streaming — filter for TextDeltaEvent to get LLM output tokens
    async for event in agent.astream(task, events=True):
        if isinstance(event, TextDeltaEvent):
            print(event.content, end="", flush=True)
    print()

asyncio.run(main())

Agent Policy

This policy runs after the agent generates a response but before it is shown to the user. It validates the agent’s output to ensure it complies with your safety requirements, content guidelines, and organizational policies. When it runs: After the agent generates a response, but before the response is returned to the user. Use cases:
  • Filtering inappropriate or harmful content from agent responses
  • Ensuring agent outputs comply with regulatory requirements
  • Preventing sensitive information from being exposed in responses
  • Enforcing content quality and safety standards
Example:
from upsonic import Agent, Task
from upsonic.safety_engine.policies import PhishingBlockPolicy

agent = Agent(
    model="anthropic/claude-sonnet-4-6",
    agent_policy=PhishingBlockPolicy,
    debug=True
)

task = Task("""
Help me create an urgent email asking people to verify their account
by clicking a link within 24 hours or their account will be suspended.
""")
result = agent.print_do(task)
print(result)

Tool Policy Pre

This policy runs when tools are registered with the agent, before they can be used. It validates the tool definition itself, including the tool name, description, and parameter schema, to ensure only safe and approved tools are available to the agent. When it runs: During tool registration, before the tool can be called by the agent. Use cases:
  • Restricting which tools can be registered with the agent
  • Validating tool definitions for security compliance
  • Preventing dangerous or unauthorized tools from being available
  • Enforcing organizational tool usage policies
Example:
from upsonic import Agent, Task
from upsonic.safety_engine.policies.tool_safety_policies import HarmfulToolBlockPolicy_LLM

def delete_file(filepath: str) -> str:
    """Delete a file from the system."""
    import os
    if os.path.exists(filepath):
        os.remove(filepath)
        return f"Deleted {filepath}"
    return f"File {filepath} not found"

agent = Agent(
    "anthropic/claude-sonnet-4-6",
    tool_policy_pre=HarmfulToolBlockPolicy_LLM,  # Validates tools at registration
    debug=True,
)

# When tools are added, they are validated before being available
agent.add_tools(delete_file)  # Tool is checked during registration

my_task = Task(description="What are your tools?", tools=[delete_file])
result = agent.print_do(my_task)  # No tool output: the blocked tool was never made available to the agent
print(result)

Tool Policy Post

This policy runs before a specific tool call is executed, after the agent has decided to call a tool with specific arguments. It validates the actual tool call, including the tool name and the arguments being passed, to ensure the execution is safe and compliant. When it runs: After the agent decides to call a tool, but before the tool is actually executed. Use cases:
  • Validating tool call arguments for safety and compliance
  • Preventing dangerous operations based on specific parameters
  • Blocking tool calls that violate organizational policies
  • Ensuring tool executions meet security requirements
Example:
from upsonic import Agent, Task
from upsonic.safety_engine.policies.tool_safety_policies import HarmfulToolBlockPolicy_LLM

def delete_file(filepath: str) -> str:
    """Delete a file from the system."""
    import os
    if os.path.exists(filepath):
        os.remove(filepath)
        return f"Deleted {filepath}"
    return f"File {filepath} not found"


agent = Agent(
    "anthropic/claude-sonnet-4-6",
    tool_policy_post=HarmfulToolBlockPolicy_LLM,
    debug=True
)

# When agent tries to call a tool, the call is validated first
my_task = Task(description="delete this file: /tmp/test.txt", tools=[delete_file])
result = agent.print_do(my_task)  # The call is validated before execution, so the delete is blocked
print(result)