Attributes

The UEL system is built from a small set of composable components. The following table summarizes the attributes and methods of each:
| Component | Attribute/Method | Type | Default | Description |
|-----------|------------------|------|---------|-------------|
| Runnable (Base) | `invoke(input, config)` | Method | - | Execute runnable synchronously |
| | `ainvoke(input, config)` | Method | - | Execute runnable asynchronously |
| | `__or__(other)` | Method | - | Pipe operator for chaining (`\|`) |
| ChatPromptTemplate | `template` | `str \| None` | `None` | Template string with placeholders |
| | `input_variables` | `list[str]` | `[]` | List of variable names in the template |
| | `messages` | `List[Tuple] \| None` | `None` | List of (role, template) tuples for message-based templates |
| | `is_message_template` | `bool` | `False` | Whether this is a message-based template |
| | `from_template(template: str)` | ClassMethod | - | Create from a template string |
| | `from_messages(messages: List[Tuple])` | ClassMethod | - | Create from message tuples |
| | `invoke(input: dict, config)` | Method | - | Format template with variables |
| | `ainvoke(input: dict, config)` | Method | - | Async format template |
| RunnableSequence | `steps` | `list[Runnable]` | Required | List of runnables to execute in sequence |
| | `invoke(input, config)` | Method | - | Execute all steps in sequence |
| | `ainvoke(input, config)` | Method | - | Async sequential execution |
| | `__or__(other)` | Method | - | Extend sequence with another runnable |
| | `get_graph()` | Method | - | Get graph representation |
| | `get_prompts()` | Method | - | Extract all ChatPromptTemplate instances |
| RunnableParallel | `steps` | `Dict[str, Runnable]` | `{}` | Dictionary of named runnables to execute in parallel |
| | `from_dict(steps: Dict)` | ClassMethod | - | Create from dictionary |
| | `invoke(input, config)` | Method | - | Execute all runnables in parallel |
| | `ainvoke(input, config)` | Method | - | Async parallel execution |
| | `__or__(other)` | Method | - | Chain after parallel execution |
| RunnablePassthrough | `assignments` | `Dict[str, Runnable]` | `{}` | Dictionary of key-runnable pairs to assign |
| | `assign(**kwargs)` | ClassMethod | - | Create with assignments |
| | `invoke(input, config)` | Method | - | Pass through input with optional assignments |
| | `ainvoke(input, config)` | Method | - | Async passthrough |
| RunnableLambda | `func` | `Callable` | Required | Function or coroutine to wrap |
| | `is_coroutine` | `bool` | `False` | Whether the function is a coroutine |
| | `invoke(input, config)` | Method | - | Execute wrapped function |
| | `ainvoke(input, config)` | Method | - | Async execution |
| RunnableBranch | `conditions_and_runnables` | `List[Tuple]` | `[]` | List of (condition, runnable) tuples |
| | `default_runnable` | `Runnable` | Required | Default runnable when no conditions match |
| | `invoke(input, config)` | Method | - | Evaluate conditions and execute matching branch |
| | `ainvoke(input, config)` | Method | - | Async conditional execution |
| @chain Decorator | `func` | `Callable` | Required | Function being decorated |
| | `is_async` | `bool` | `False` | Whether the function is async |
| | `invoke(input, config)` | Method | - | Execute decorated function (auto-invokes returned Runnables) |
| | `ainvoke(input, config)` | Method | - | Async execution |
| RunnableGraph | `root` | `Runnable` | Required | Root runnable of the graph |
| | `nodes` | `Dict[int, RunnableNode]` | `{}` | Dictionary of graph nodes |
| | `node_counter` | `int` | `0` | Counter for node IDs |
| | `print_ascii()` | Method | - | Print ASCII representation of the graph |
| | `to_ascii()` | Method | - | Generate ASCII string representation |
| | `to_mermaid()` | Method | - | Generate Mermaid diagram code |
| | `get_structure_details()` | Method | - | Get detailed structure information |
| Model Integration | `add_memory(history=True, memory=None)` | Method | - | Add conversation history management |
| | `bind_tools(tools, tool_call_limit=5)` | Method | - | Attach tools to model |
| | `with_structured_output(schema)` | Method | - | Configure Pydantic output validation |
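To see how `__or__` turns the `|` operator into a RunnableSequence, here is a self-contained plain-Python sketch of the protocol described in the table above. This is an illustrative reimplementation, not upsonic's actual code; the class and method names simply mirror the table:

```python
class Runnable:
    """Minimal stand-in for the Runnable base protocol."""

    def invoke(self, input, config=None):
        raise NotImplementedError

    def __or__(self, other):
        # The | operator chains two runnables into a sequence
        return RunnableSequence(steps=[self, other])


class RunnableLambda(Runnable):
    """Wraps a plain function so it can participate in a chain."""

    def __init__(self, func):
        self.func = func

    def invoke(self, input, config=None):
        return self.func(input)


class RunnableSequence(Runnable):
    """Runs its steps in order, feeding each output into the next step."""

    def __init__(self, steps):
        self.steps = steps

    def __or__(self, other):
        # Extending an existing sequence appends instead of nesting
        return RunnableSequence(steps=self.steps + [other])

    def invoke(self, input, config=None):
        for step in self.steps:
            input = step.invoke(input, config)
        return input


# Chaining with | builds a sequence; invoke runs it end to end
chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
print(chain.invoke(3))  # 8
```

The same shape explains why a long pipeline like `prompt | model | parser` stays flat: each additional `|` extends the existing sequence rather than wrapping it in another layer.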

Configuration Example

```python
from upsonic.uel import ChatPromptTemplate, RunnableParallel, StrOutputParser
from upsonic.models import infer_model
from operator import itemgetter

# Create model with features
model = infer_model("openai/gpt-4o").add_memory(history=True)

# Create parallel execution that generates joke and fact
# RunnableParallel with explicit passthrough of input variables
parallel = RunnableParallel(
    topic=itemgetter("topic"),  # Pass through the topic
    chat_history=itemgetter("chat_history"),  # Pass through the chat_history
    joke=ChatPromptTemplate.from_template("Tell a joke about {topic}") | model,  # Generate joke in parallel
    fact=ChatPromptTemplate.from_template("Share a fact about {topic}") | model  # Generate fact in parallel
)

# Create prompt template that uses parallel results
synthesis_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. You will receive a joke and a fact about a topic. Combine them into an interesting response."),
    ("placeholder", {"variable_name": "chat_history"}),
    ("human", "Topic: {topic}\n\nJoke: {joke}\n\nFact: {fact}\n\nPlease synthesize this information into an engaging response about {topic}.")
])

# Combine into full chain with parallel processing
# RunnableParallel outputs: {topic, chat_history, joke, fact}
chain = (
    parallel
    | synthesis_prompt
    | model
    | StrOutputParser()
)

# Execute the chain
print("=== Chain with Parallel Processing ===")
result = chain.invoke({
    "topic": "artificial intelligence",
    "chat_history": []
})

print(result)
```
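The fan-out/fan-in shape of the chain above can be sketched without the library: a parallel step hands the same input to every named runnable and collects the results under their keys, and that dict then feeds the next step. The following is a hypothetical, library-free stand-in for RunnableParallel's documented behavior, not upsonic code; the real class executes the steps concurrently, while this sketch runs them one after another:

```python
def run_parallel(steps, input):
    """Mimic RunnableParallel.invoke: give every named callable the same
    input and gather the outputs into a dict keyed by step name."""
    return {name: step(input) for name, step in steps.items()}

steps = {
    "topic": lambda x: x["topic"],                 # passthrough, like itemgetter("topic")
    "joke": lambda x: f"joke about {x['topic']}",  # stands in for prompt | model
    "fact": lambda x: f"fact about {x['topic']}",  # stands in for prompt | model
}

out = run_parallel(steps, {"topic": "AI"})
print(out)  # {'topic': 'AI', 'joke': 'joke about AI', 'fact': 'fact about AI'}
```

This is why the example passes `topic` and `chat_history` through explicitly: only keys produced by the parallel step reach `synthesis_prompt`, so any variable the downstream prompt needs must appear in the output dict.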