Overview

LiteLLM provides a unified, OpenAI-compatible interface to 100+ LLM providers, including OpenAI, Anthropic, Azure, Google, AWS Bedrock, and more. Run it as a proxy server for centralized model management.

Model Class: OpenAIChatModel (OpenAI-compatible proxy)

Authentication

export LITELLM_BASE_URL="http://localhost:4000"
# Optional: API key if proxy requires authentication
export LITELLM_API_KEY="sk-..."

Examples

from upsonic import Agent, Task
from upsonic.models.openai import OpenAIChatModel

# provider="litellm" routes requests through the LiteLLM proxy configured
# via LITELLM_BASE_URL (and LITELLM_API_KEY, if the proxy requires it).
model = OpenAIChatModel(model_name="gpt-4o", provider="litellm")
agent = Agent(model=model)

task = Task("Hello, how are you?")
result = agent.do(task)
print(result)
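Because the proxy speaks the OpenAI wire format, you can also sanity-check it outside Upsonic with a raw HTTP call. A sketch assuming the proxy from the Authentication section is running locally:

```shell
# Direct chat-completions request to the LiteLLM proxy. The Bearer header
# is only needed if the proxy enforces authentication.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello, how are you?"}]
      }'
```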

Parameters

| Parameter | Type | Description | Default | Source |
| --- | --- | --- | --- | --- |
| max_tokens | int | Maximum tokens to generate | Model default | Base |
| temperature | float | Sampling temperature (0.0-2.0) | 1.0 | Base |
| top_p | float | Nucleus sampling | 1.0 | Base |
| seed | int | Random seed | None | Base |
| stop_sequences | list[str] | Stop sequences | None | Base |
| presence_penalty | float | Token presence penalty | 0.0 | Base |
| frequency_penalty | float | Token frequency penalty | 0.0 | Base |
| parallel_tool_calls | bool | Allow parallel tool calls | True | Base |
| timeout | float | Request timeout (seconds) | 600 | Base |
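Since the proxy accepts the OpenAI chat-completions schema, the base parameters above map onto fields of the request body. A sketch of the JSON such a request would carry (field names follow the OpenAI API; the values are illustrative, not defaults):

```python
import json

# Illustrative body for POST /v1/chat/completions on the LiteLLM proxy.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 256,          # max_tokens
    "temperature": 0.7,         # temperature (0.0-2.0)
    "top_p": 1.0,               # top_p
    "seed": 42,                 # seed
    "stop": ["\n\n"],           # stop_sequences map to OpenAI's "stop" field
    "presence_penalty": 0.0,    # presence_penalty
    "frequency_penalty": 0.0,   # frequency_penalty
    # parallel_tool_calls is only sent when tools are attached;
    # timeout is a client-side setting, not part of the request body.
}
print(json.dumps(payload, indent=2))
```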