API Reference

This page provides the API reference generated from our code. See the extension guides for step-by-step instructions on how to use these APIs.

tool

tool(*, description: str, icon: str = 'handyman', name: str | None = None, render: Callable[[Content], None] | None = None, is_available: Callable[[], bool] | None = None, max_uses: int = 1) -> Callable[[ToolHandler[ToolParams]], ToolHandler[ToolParams]]

Decorator function to create and register a tool.

PARAMETER DESCRIPTION
description

Description of what the tool does

TYPE: str

icon

Icon identifier for the tool

TYPE: str DEFAULT: 'handyman'

render

Optional render function to be called when the content generated by the tool is displayed

TYPE: Callable[[Content], None] | None DEFAULT: None

is_available

Optional function to determine if the tool is available based on the context

TYPE: Callable[[], bool] | None DEFAULT: None

max_uses

Maximum number of times this tool can be used

TYPE: int DEFAULT: 1

Returns: Decorator function that creates a Tool instance

AgentContext dataclass

AgentContext(input: Input, base_prompt: str = '', content: Content = Content(), _history: list[ChatMessage] = list(), _pad_ids: set[str] = set(), _used_pad_ids: set[str] = set(), _file_paths: set[str] = set(), _observed_files: dict[str, str] = dict(), _observations: list[AgentObservation] = list(), _language_model_ids: dict[LanguageModelType, str] = lambda: {'core': id, 'editor': id, 'reasoner': id, 'router': id}(), _tools: Sequence[Tool] = list(), _tool_use_counts: dict[str, int] = dict(), _hashtags: set[str] = set())
METHOD DESCRIPTION
add_file_paths
add_pad_ids
call_tool

Calls the specified tool with the given arguments and yields the generated content.

get_file_paths
get_prompt
observe
observe_file_paths
stream_chunks
stream_step

Calls the 'editor' language model to figure out if it should use a tool or provide a final answer.

stream_structured_output
stream_to_content

Calls the 'core' model to generate a final response if no tools are used.

ATTRIBUTE DESCRIPTION
base_prompt

TYPE: str

content

TYPE: Content

hashtags

TYPE: set[str]

history

TYPE: Sequence[ChatMessage]

input

TYPE: Input

base_prompt class-attribute instance-attribute

base_prompt: str = ''

content class-attribute instance-attribute

content: Content = field(default_factory=Content)

hashtags property

hashtags: set[str]

history property writable

history: Sequence[ChatMessage]

input instance-attribute

input: Input

add_file_paths

add_file_paths(file_paths: list[str] | set[str])

add_pad_ids

add_pad_ids(pad_ids: list[str])

call_tool

call_tool(tool: ToolHandler[ToolParams], *args: args, **kwargs: kwargs) -> Generator[None, None, Any]

Calls the specified tool with the given arguments and yields the generated content.
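The return annotation `Generator[None, None, Any]` means the tool's result arrives as the generator's *return value* (carried by `StopIteration`), so callers typically delegate with `yield from`. The stand-in functions below are assumptions that mimic only this control flow, not the library's behavior:

```python
from typing import Any, Generator

def call_tool_sketch(tool, *args: Any, **kwargs: Any) -> Generator[None, None, Any]:
    """Stand-in: yield progress ticks, return the tool's result."""
    yield None                      # placeholder for streamed progress
    return tool(*args, **kwargs)    # result travels via StopIteration.value

def agent_step() -> Generator[None, None, str]:
    # `yield from` re-yields progress and captures the return value.
    result = yield from call_tool_sketch(lambda x: x.upper(), "done")
    return f"tool said: {result}"

def run(gen) -> Any:
    """Drain a generator and return its final value."""
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        return stop.value
```

Calling `run(agent_step())` drains the progress ticks and hands back the final string.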

get_file_paths

get_file_paths() -> set[str]

get_prompt

get_prompt() -> str

observe

observe(content: str, metadata: dict[str, str] | None = None)

observe_file_paths

observe_file_paths(file_paths: list[str] | set[str])

stream_chunks

stream_chunks(*, content: Content | None = None, input: str | None = None, model_type: LanguageModelType = 'core', system_prompt: str = '', skip_observe_files: bool = False) -> Generator[AgentChunk, None, None]

stream_step

stream_step(tools: Iterable[ToolHandler] | None = None) -> Generator[None, None, AgentStep]

Calls the 'editor' language model to figure out if it should use a tool or provide a final answer. Returns the concatenated text response.

stream_structured_output

stream_structured_output(output_type: type[BaseModelType], *, input: str | None = None, model_type: LanguageModelType = 'router', system_prompt: str = '') -> Generator[BaseModelType, None, None]

stream_to_content

stream_to_content(*, content: Content | None = None, input: str | None = None, system_prompt: str | None = None, model_type: LanguageModelType = 'core') -> Generator[None, None, None]

Calls the 'core' model to generate a final response if no tools are used. Yields the streaming content.
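The two methods above suggest a typical agent loop: attempt a tool step first, and fall back to streaming a direct answer when no tool is chosen. The sketch below is a runnable stand-in for that control flow only; the generator bodies and the "`None` means no tool chosen" convention are assumptions, not the library's API:

```python
# Stand-ins so the control flow is runnable without the library.
def stream_step_sketch():
    yield          # streaming tick
    return None    # None here stands for "no tool chosen"

def stream_to_content_sketch(log: list[str]):
    for piece in ("Hello, ", "world"):
        log.append(piece)
        yield

def respond() -> list[str]:
    """Try a tool step; fall back to streaming a direct answer."""
    log: list[str] = []
    gen = stream_step_sketch()
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        step = stop.value
    if step is None:                      # no tool was used
        for _ in stream_to_content_sketch(log):
            pass
    return log
```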

Content

Bases: BaseModel

METHOD DESCRIPTION
add_child

Add a child to the chat content.

append_chunk

Append a chunk to the chat content.

data_of
from_text

Create a chat content object from text.

get_direct_text

Get the text content of the chat.

get_text

Get the text content of the chat.

set_data
set_text

Set the text content of the chat.

ATTRIBUTE DESCRIPTION
children

TYPE: list[Content]

errors

TYPE: list[ContentError]

internal_checkpoint

TYPE: Checkpoint | None

internal_children

TYPE: list[Content]

internal_data

TYPE: Any | None

internal_tool_render_id

TYPE: ToolId | None

is_loading

TYPE: bool

metadata

TYPE: ContentMetadata

parts

TYPE: list[Part]

step

TYPE: AgentStep

children property

children: list[Content]

errors class-attribute instance-attribute

errors: list[ContentError] = Field(default_factory=list)

internal_checkpoint class-attribute instance-attribute

internal_checkpoint: Checkpoint | None = None

internal_children class-attribute instance-attribute

internal_children: list[Content] = Field(default_factory=list)

internal_data class-attribute instance-attribute

internal_data: Any | None = Field(default=None)

internal_tool_render_id class-attribute instance-attribute

internal_tool_render_id: ToolId | None = None

is_loading property

is_loading: bool

metadata class-attribute instance-attribute

metadata: ContentMetadata = Field(default_factory=ContentMetadata)

parts class-attribute instance-attribute

parts: list[Part] = Field(default_factory=list)

step class-attribute instance-attribute

step: AgentStep = Field(default_factory=DefaultStep)

add_child

add_child(child: Content)

Add a child to the chat content.

append_chunk

append_chunk(chunk: AgentChunk)

Append a chunk to the chat content.

data_of

data_of(typeclass: type[T]) -> T

from_text staticmethod

from_text(text: str) -> Content

Create a chat content object from text.

get_direct_text

get_direct_text() -> str

Get the text content of the chat.

get_text

get_text() -> str

Get the text content of the chat.

set_data

set_data(value: BaseModel) -> None

set_text

set_text(text: str)

Set the text content of the chat.
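`Content` is a tree: each node carries its own text plus `children`. The sketch below is a minimal dataclass stand-in; in particular, the split between `get_direct_text` (this node only) and `get_text` (this node plus descendants) is an assumption inferred from the method names, since both are documented identically above.

```python
from dataclasses import dataclass, field

@dataclass
class ContentSketch:
    """Stand-in for Content: own text plus nested children."""
    text: str = ""
    children: list["ContentSketch"] = field(default_factory=list)

    def add_child(self, child: "ContentSketch") -> None:
        self.children.append(child)

    def get_direct_text(self) -> str:
        # Assumed: only this node's own text.
        return self.text

    def get_text(self) -> str:
        # Assumed: this node's text plus all descendants', depth-first.
        return self.text + "".join(c.get_text() for c in self.children)

root = ContentSketch(text="intro ")
root.add_child(ContentSketch(text="detail"))
```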

AgentChunk module-attribute

AgentChunk = Annotated[TextChunk | ErrorChunk, Field(discriminator='type')]
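`AgentChunk` is a discriminated union keyed on the `type` field, so consumers branch on `chunk.type`. The plain-dataclass stand-ins below illustrate that dispatch; the real `TextChunk` and `ErrorChunk` are frozen Pydantic models, and `render` is a hypothetical consumer:

```python
from dataclasses import dataclass
from typing import Literal, Union

@dataclass(frozen=True)
class TextChunkSketch:
    text: str = ""
    type: Literal["text"] = "text"

@dataclass(frozen=True)
class ErrorChunkSketch:
    message: str
    type: Literal["error"] = "error"

AgentChunkSketch = Union[TextChunkSketch, ErrorChunkSketch]

def render(chunk: AgentChunkSketch) -> str:
    # Dispatch on the discriminator field.
    if chunk.type == "text":
        return chunk.text
    return f"[error] {chunk.message}"
```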

ErrorChunk

Bases: BaseModel

CLASS DESCRIPTION
Config
ATTRIBUTE DESCRIPTION
message

TYPE: str

type

TYPE: Literal['error']

message instance-attribute

message: str

type class-attribute instance-attribute

type: Literal['error'] = 'error'

Config

ATTRIBUTE DESCRIPTION
frozen

frozen class-attribute instance-attribute
frozen = True

TextChunk

Bases: BaseModel

CLASS DESCRIPTION
Config
ATTRIBUTE DESCRIPTION
text

TYPE: str

type

TYPE: Literal['text']

text class-attribute instance-attribute

text: str = ''

type class-attribute instance-attribute

type: Literal['text'] = 'text'

Config

ATTRIBUTE DESCRIPTION
frozen

frozen class-attribute instance-attribute
frozen = True

CompletionMetadataChunk

Bases: BaseModel

Chat usage metadata.

CLASS DESCRIPTION
Config
ATTRIBUTE DESCRIPTION
cached_input_tokens_count

TYPE: int

finish_reason

TYPE: LanguageModelFinishReason

input_tokens_count

TYPE: int

output_tokens_count

TYPE: int

type

TYPE: Literal['completion-metadata']

cached_input_tokens_count class-attribute instance-attribute

cached_input_tokens_count: int = 0

finish_reason class-attribute instance-attribute

input_tokens_count class-attribute instance-attribute

input_tokens_count: int = 0

output_tokens_count class-attribute instance-attribute

output_tokens_count: int = 0

type class-attribute instance-attribute

type: Literal['completion-metadata'] = 'completion-metadata'

Config

ATTRIBUTE DESCRIPTION
frozen

frozen class-attribute instance-attribute
frozen = True

LanguageModelFinishReason

Bases: Enum

Enum for the reason why the language model stopped generating.

ATTRIBUTE DESCRIPTION
MAX_TOKENS

OTHER

STOP

UNKNOWN

MAX_TOKENS class-attribute instance-attribute

MAX_TOKENS = 'max_tokens'

OTHER class-attribute instance-attribute

OTHER = 'other'

STOP class-attribute instance-attribute

STOP = 'stop'

UNKNOWN class-attribute instance-attribute

UNKNOWN = 'unknown'
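Since the members carry string values, the enum can be constructed from a raw value and compared by identity. The stand-in below mirrors the documented members; `was_truncated` is a hypothetical helper showing a common check:

```python
from enum import Enum

class FinishReasonSketch(Enum):
    """Stand-in mirroring the documented members and string values."""
    MAX_TOKENS = "max_tokens"
    OTHER = "other"
    STOP = "stop"
    UNKNOWN = "unknown"

def was_truncated(reason: FinishReasonSketch) -> bool:
    # Did generation hit the token limit rather than stop naturally?
    return reason is FinishReasonSketch.MAX_TOKENS
```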

ChatMessage

Bases: BaseModel

Chat message metadata.

METHOD DESCRIPTION
to_language_model_text
ATTRIBUTE DESCRIPTION
content

TYPE: Content

id

TYPE: str

role

TYPE: Role

content class-attribute instance-attribute

content: Content = Field(default_factory=Content)

id class-attribute instance-attribute

id: str = Field(default_factory=lambda: str(uuid4()))

role class-attribute instance-attribute

role: Role = 'user'

to_language_model_text

to_language_model_text() -> str
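A `ChatMessage` carries a role, an auto-generated `id`, and content, and `to_language_model_text` flattens it into prompt text. The sketch below is a stand-in: the role-prefixed formatting is an assumption, and plain `text` replaces the real `Content` field.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class ChatMessageSketch:
    """Stand-in for ChatMessage; formatting below is assumed."""
    text: str = ""
    role: str = "user"
    id: str = field(default_factory=lambda: str(uuid4()))

    def to_language_model_text(self) -> str:
        # One plausible flattening: "<role>: <content text>".
        return f"{self.role}: {self.text}"

history = [
    ChatMessageSketch(text="What is 2 + 2?"),
    ChatMessageSketch(text="4", role="assistant"),
]
transcript = "\n".join(m.to_language_model_text() for m in history)
```

The `default_factory=lambda: str(uuid4())` pattern matches the documented `id` default, so each message gets a distinct identifier.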

LanguageModelType module-attribute

LanguageModelType = Literal['core', 'editor', 'reasoner', 'router']