Custom Workflows SDK

Overview

The Custom Workflows SDK allows you to define and expose your LLM workflows as API endpoints. By decorating your functions with @ag.route, you can create custom routes that integrate with Agenta for playground interactions, evaluations, and deployments.


Decorators

@ag.route Decorator

The @ag.route decorator exposes your function as an API endpoint that Agenta can call.

Syntax

@ag.route(path, config_schema)
def function_name(parameters) -> str:
    # Function implementation

Parameters

  • path (str): The URL path for the route. Use "/" for the main entry point.
  • config_schema (Type[BaseModel]): A Pydantic model defining the configuration schema.
warning

The decorated function must return a str.

Example

from pydantic import BaseModel, Field
import agenta as ag

class MyConfig(BaseModel):
    prompt: str = Field(default="Hello, {{name}}!")

@ag.route("/", config_schema=MyConfig)
def generate_text(input_text: str) -> str:
    config = ag.ConfigManager.get_from_route(schema=MyConfig)
    # Function implementation using config.prompt
    output_text = config.prompt.replace("{{name}}", input_text)
    return output_text

Configuration Types

PromptTemplate

PromptTemplate is the recommended way to define prompts. It bundles messages, model selection, and parameters into a single configurable unit that renders as a rich editor in the playground.

Syntax

from agenta.sdk.types import PromptTemplate, Message, ModelConfig

PromptTemplate(
    messages=[...],
    template_format="curly",
    input_keys=["var1", "var2"],
    llm_config=ModelConfig(...)
)

Parameters

  • messages (List[Message]): List of chat messages
  • template_format (str): Variable syntax. Use "curly" for {{variable}} syntax
  • input_keys (List[str]): Variables expected in the template
  • llm_config (ModelConfig): Model and parameter settings
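The "curly" format is plain double-brace substitution. A minimal stand-in (pure Python, not the SDK's implementation) behaves like this:

```python
# Minimal sketch of "curly" variable substitution, assuming simple
# string-replacement semantics (not the SDK's actual implementation).
def render_curly(template: str, **values: str) -> str:
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

rendered = render_curly("Hello, {{user_input}}!", user_input="Ada")
# rendered == "Hello, Ada!"
```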

Example

from openai import OpenAI
from pydantic import BaseModel, Field
from agenta.sdk.types import PromptTemplate, Message, ModelConfig
import agenta as ag

client = OpenAI()

class Config(BaseModel):
    prompt: PromptTemplate = Field(
        default=PromptTemplate(
            messages=[
                Message(role="system", content="You are a helpful assistant."),
                Message(role="user", content="{{user_input}}")
            ],
            template_format="curly",
            input_keys=["user_input"],
            llm_config=ModelConfig(model="gpt-4o-mini", temperature=0.7)
        )
    )

@ag.route("/", config_schema=Config)
async def generate(user_input: str) -> str:
    config = ag.ConfigManager.get_from_route(schema=Config)

    formatted = config.prompt.format(user_input=user_input)
    response = client.chat.completions.create(**formatted.to_openai_kwargs())

    return response.choices[0].message.content

Methods

  • format(**kwargs): Substitutes variables and returns a new PromptTemplate
  • to_openai_kwargs(): Converts to arguments for the OpenAI client
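To illustrate the shape to_openai_kwargs() produces, a formatted template with one system and one user message would map to something like the dictionary below. The field names follow the OpenAI chat API; this is an assumption about the output shape, not taken from the SDK source:

```python
# Hypothetical sketch of the kwargs passed to
# client.chat.completions.create(...). Field names follow the OpenAI
# chat API; the exact set of keys depends on the SDK's ModelConfig.
kwargs = {
    "model": "gpt-4o-mini",
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Agenta?"},
    ],
}
```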

Message

Represents a single chat message.

from agenta.sdk.types import Message

Message(role="system", content="You are helpful.")
Message(role="user", content="Hello {{name}}")
Message(role="assistant", content="Hi there!")

ModelConfig

Configures the LLM model and parameters.

from agenta.sdk.types import ModelConfig

ModelConfig(
    model="gpt-4o-mini",
    temperature=0.7,
    max_tokens=1000,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0
)

Basic Configuration Types

For simpler configurations, you can use basic Python types:

Accepted Types

  • String (str): Renders as a text area
  • Integer (int): Renders as a slider
  • Float (float): Renders as a slider
  • Boolean (bool): Renders as a checkbox
warning

Each field must have a default value.

Example with Constraints

from pydantic import BaseModel, Field
import agenta as ag

class MyConfig(BaseModel):
    temperature: float = Field(default=1.0, ge=0.0, le=2.0)
    max_tokens: int = Field(default=500, ge=1, le=1000)
    use_cache: bool = Field(default=False)

Constraints like ge (greater than or equal) and le (less than or equal) set the slider range.


Multiple Choice Fields

For dropdown selections, use Annotated with ag.MultipleChoice:

from typing import Annotated
from pydantic import BaseModel, Field
import agenta as ag
from agenta.sdk.assets import supported_llm_models

class MyConfig(BaseModel):
    model: Annotated[str, ag.MultipleChoice(choices=supported_llm_models)] = Field(
        default="gpt-4o-mini"
    )
    language: Annotated[str, ag.MultipleChoice(choices=["English", "Spanish", "French"])] = Field(
        default="English"
    )

Grouped Choices

You can group choices using a dictionary:

from typing import Annotated
from pydantic import BaseModel, Field
import agenta as ag

supported_models = {
    "OpenAI": ["gpt-4o", "gpt-4o-mini", "gpt-4"],
    "Anthropic": ["claude-3-opus", "claude-3-sonnet"]
}

class MyConfig(BaseModel):
    model: Annotated[str, ag.MultipleChoice(choices=supported_models)] = Field(
        default="gpt-4o-mini"
    )
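The group names are display labels only; the selected value is still a plain string. A quick pure-Python check (illustration, not SDK code) that a default belongs to some group:

```python
# Pure-Python illustration (not SDK code): group keys are just labels,
# so checking a default means searching the flattened values.
supported_models = {
    "OpenAI": ["gpt-4o", "gpt-4o-mini", "gpt-4"],
    "Anthropic": ["claude-3-opus", "claude-3-sonnet"],
}

all_choices = [m for group in supported_models.values() for m in group]
# "gpt-4o-mini" is in all_choices, so it is a valid default
```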

Importing Supported Models

Agenta provides a predefined list of supported models:

from agenta.sdk.assets import supported_llm_models

Configuration Management

ag.ConfigManager.get_from_route()

Retrieves the configuration from the current request context.

Syntax

config = ag.ConfigManager.get_from_route(schema=MyConfig)

Parameters

  • schema (Type[BaseModel]): The Pydantic model class for the configuration.

Returns

An instance of the specified schema populated with configuration data from the request.

Example

from pydantic import BaseModel, Field
import agenta as ag

class MyConfig(BaseModel):
    prompt: str = Field(default="Hello!")

@ag.route("/", config_schema=MyConfig)
def generate_text(input_text: str) -> str:
    config = ag.ConfigManager.get_from_route(schema=MyConfig)
    output_text = config.prompt  # Use config.prompt
    return output_text

Initialization

ag.init()

Initializes the Agenta SDK. Call this at the top of your application.

Syntax

ag.init(host=None, api_key=None)

Parameters

  • host (Optional[str]): The Agenta backend URL. Defaults to "https://cloud.agenta.ai".
  • api_key (Optional[str]): Your Agenta API key. Can also be set via AGENTA_API_KEY environment variable.

Example

import agenta as ag

ag.init() # Uses environment variables

Or with explicit values:

import agenta as ag

ag.init(
    host="https://cloud.agenta.ai",
    api_key="your-api-key"
)
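Alternatively, set the credentials in the environment before starting the app. AGENTA_API_KEY is documented above; AGENTA_HOST is assumed here as the corresponding host override variable and should be verified against the SDK documentation:

```shell
# AGENTA_API_KEY is read by ag.init(); AGENTA_HOST is an assumed
# variable name for overriding the default host.
export AGENTA_API_KEY="your-api-key"
export AGENTA_HOST="https://cloud.agenta.ai"
```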

Observability

@ag.instrument() Decorator

Adds tracing to your function for observability in the Agenta UI.

warning

The @ag.instrument() decorator must come after @ag.route().

Example

@ag.route("/", config_schema=Config)
@ag.instrument()
async def generate(input_text: str) -> str:
    # Your code here
    return result

OpenTelemetry Integration

For automatic LLM call tracing, use OpenTelemetry instrumentation:

from opentelemetry.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument()

This captures all OpenAI API calls in your traces.


Complete Example

from openai import OpenAI
from pydantic import BaseModel, Field
from typing import Annotated

import agenta as ag
from agenta.sdk.types import PromptTemplate, Message, ModelConfig
from agenta.sdk.assets import supported_llm_models
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

ag.init()
client = OpenAI()
OpenAIInstrumentor().instrument()


class Config(BaseModel):
    prompt: PromptTemplate = Field(
        default=PromptTemplate(
            messages=[
                Message(role="system", content="You are a helpful assistant."),
                Message(role="user", content="{{user_input}}")
            ],
            template_format="curly",
            input_keys=["user_input"],
            llm_config=ModelConfig(model="gpt-4o-mini", temperature=0.7)
        )
    )
    max_length: int = Field(default=100, ge=10, le=500)
    use_examples: bool = Field(default=False)


@ag.route("/", config_schema=Config)
@ag.instrument()
async def generate(user_input: str) -> str:
    config = ag.ConfigManager.get_from_route(schema=Config)

    formatted = config.prompt.format(user_input=user_input)
    response = client.chat.completions.create(**formatted.to_openai_kwargs())

    return response.choices[0].message.content