Overall Class Structure
Actions inherit from `ActionBase` and must implement the `action` method. An action decides what happens when disallowed content is detected: allow it, block it, replace it, anonymize it, or raise an exception.
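Concretely, a custom action might look like the sketch below. `ActionBase`, `DisallowedOperation`, and the `action` signature are simplified stand-ins for illustration, not the library's actual definitions.

```python
# Hypothetical, simplified stand-ins for the library's types.
class ActionBase:
    def action(self, content, triggered_keywords):
        raise NotImplementedError


class DisallowedOperation(Exception):
    pass


class BlockOnMatch(ActionBase):
    """A custom action: block when any keyword was detected, else allow."""

    def action(self, content, triggered_keywords):
        if triggered_keywords:
            raise DisallowedOperation(
                f"Blocked: found {len(triggered_keywords)} disallowed term(s)"
            )
        return content  # allow unchanged
```

The real base class exposes the helper methods listed below instead of raw returns and raises, but the decision structure is the same.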
Available Action Methods
Content Flow
- `allow_content()` / `allow_content_async()`: let content pass through unchanged
- `raise_block_error(message)` / `raise_block_error_async(message)`: block with a message
- `raise_exception(message)`: raise a `DisallowedOperation` exception
Content Transformation
- `replace_triggered_keywords(replacement)` / `replace_triggered_keywords_async(replacement)`: replace detected keywords with a fixed placeholder string (all values share the same placeholder)
- `anonymize_triggered_keywords()` / `anonymize_triggered_keywords_async()`: replace detected keywords with unique random values that preserve format (each value gets a distinct placeholder)
LLM-Powered
- `llm_raise_block_error(reason)` / `llm_raise_block_error_async(reason)`: generate a contextual block message using an LLM
- `llm_raise_exception(reason)` / `llm_raise_exception_async(reason)`: generate an exception message using an LLM
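The shape of the LLM-powered variants can be sketched as follows. The `llm` parameter here is a hypothetical stand-in for the real model call, and the default stub simply echoes the reason instead of contacting a model; the library's actual signature may differ.

```python
class DisallowedOperation(Exception):
    """Stand-in for the library's exception type."""


def llm_raise_block_error(reason, llm=None):
    # 'llm' is a hypothetical callable mapping a prompt to text.
    # The default stub echoes the reason rather than calling a model.
    llm = llm or (lambda prompt: f"Request blocked: {reason}")
    message = llm(f"Write a short, polite refusal message. Reason: {reason}")
    raise DisallowedOperation(message)
```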
Anonymize vs Replace
Both methods are reversible — the original values are automatically restored in the agent’s final response. The difference is in what the LLM sees during processing.

anonymize_triggered_keywords() — Random Placeholders
Replaces sensitive data with random characters while preserving format. Digits become random digits, letters become random letters. Each detected value gets a unique replacement, so the LLM can distinguish between different pieces of sensitive data.
"Your email is xhkw.abc@defghij.klm and phone is 382-947-1056"
This is ideal when you need the LLM to reason about the structure of the data (e.g., distinguish between different values) without seeing real content.
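A minimal sketch of this kind of format-preserving anonymization, written from the description above (this is illustrative, not the library's implementation; it assumes each keyword is matched literally):

```python
import random
import string


def anonymize(text, keywords):
    """Replace each detected keyword with a unique random value that
    preserves its format: digits become random digits, letters become
    random letters of the same case, other characters are kept as-is.
    Returns the transformed text and a mapping for later restoration."""
    mapping = {}
    for kw in keywords:
        if kw not in mapping:
            mapping[kw] = "".join(
                random.choice(string.digits) if c.isdigit()
                else random.choice(string.ascii_lowercase) if c.islower()
                else random.choice(string.ascii_uppercase) if c.isupper()
                else c
                for c in kw
            )
        text = text.replace(kw, mapping[kw])
    return text, mapping


def deanonymize(text, mapping):
    """Restore the original values in the final response."""
    for original, placeholder in mapping.items():
        text = text.replace(placeholder, original)
    return text


msg = "Your email is jane.doe@example.com and phone is 555-123-9876"
anon, mapping = anonymize(msg, ["jane.doe@example.com", "555-123-9876"])
# anon is random per run, e.g.
# "Your email is xhkw.abc@defghij.klm and phone is 382-947-1056"
```

Because each value maps to its own placeholder, the mapping can restore the originals unambiguously.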
replace_triggered_keywords(replacement) — Fixed Placeholders
Replaces sensitive data with a fixed placeholder string. All detected values get the same replacement, making the content more opaque to the LLM.
"Your email is [PII_REDACTED] and phone is [PII_REDACTED]"
This is ideal when you want maximum opacity — the LLM cannot infer anything about the format or structure of the sensitive data.
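The fixed-placeholder behavior can be sketched the same way (again illustrative, not the library's implementation). Note the trade-off this sketch makes visible: because every value collapses to the same placeholder, restoration has to rely on occurrence order, which assumes each keyword appears once.

```python
def replace_keywords(text, keywords, replacement="[PII_REDACTED]"):
    """Replace every detected keyword with the same fixed placeholder.
    Keeps an ordered list of the originals so the final response can be
    restored. Assumes each keyword occurs at most once in the text."""
    originals = []
    for kw in keywords:
        if kw in text:
            originals.append(kw)
            text = text.replace(kw, replacement)
    return text, originals


def restore(text, originals, replacement="[PII_REDACTED]"):
    """Put the originals back, one placeholder at a time, in order."""
    for original in originals:
        text = text.replace(replacement, original, 1)
    return text


msg = "Your email is jane.doe@example.com and phone is 555-123-9876"
redacted, originals = replace_keywords(
    msg, ["jane.doe@example.com", "555-123-9876"]
)
print(redacted)
# Your email is [PII_REDACTED] and phone is [PII_REDACTED]
```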

