API Reference¶
This page provides API references generated from our code. Please see the extension guides for step-by-step instructions on how to use the APIs.
tool
¶
tool(*, description: str, icon: str = 'handyman', name: str | None = None, render: Callable[[Content], None] | None = None, is_available: Callable[[], bool] | None = None, max_uses: int = 1) -> Callable[[ToolHandler[ToolParams]], ToolHandler[ToolParams]]
Decorator function to create and register a tool.
PARAMETER | DESCRIPTION |
---|---|
`description` | Description of what the tool does.<br>**TYPE:** `str` |
`icon` | Icon identifier for the tool.<br>**TYPE:** `str` **DEFAULT:** `'handyman'` |
`render` | Optional render function, called when the content generated by the tool is displayed.<br>**TYPE:** `Callable[[Content], None] \| None` **DEFAULT:** `None` |
`is_available` | Optional function that determines whether the tool is available in the current context.<br>**TYPE:** `Callable[[], bool] \| None` **DEFAULT:** `None` |
`max_uses` | Maximum number of times this tool can be used.<br>**TYPE:** `int` **DEFAULT:** `1` |
**Returns:** Decorator function that creates a `Tool` instance.
AgentContext
dataclass
¶
AgentContext(input: Input, base_prompt: str = '', content: Content = Content(), _history: list[ChatMessage] = list(), _pad_ids: set[str] = set(), _used_pad_ids: set[str] = set(), _file_paths: set[str] = set(), _observed_files: dict[str, str] = dict(), _observations: list[AgentObservation] = list(), _language_model_ids: dict[LanguageModelType, str] = lambda: {'core': id, 'editor': id, 'reasoner': id, 'router': id}(), _tools: Sequence[Tool] = list(), _tool_use_counts: dict[str, int] = dict(), _hashtags: set[str] = set())
METHOD | DESCRIPTION |
---|---|
`add_file_paths` | |
`add_pad_ids` | |
`call_tool` | Calls the specified tool with the given arguments and yields the generated content. |
`get_file_paths` | |
`get_prompt` | |
`observe` | |
`observe_file_paths` | |
`stream_chunks` | |
`stream_step` | Calls the 'editor' language model to figure out if it should use a tool or provide a final answer. |
`stream_structured_output` | |
`stream_to_content` | Calls the 'core' model to generate a final response if no tools are used. |
ATTRIBUTE | DESCRIPTION |
---|---|
`base_prompt` | **TYPE:** `str` |
`content` | **TYPE:** `Content` |
`hashtags` | **TYPE:** `set[str]` |
`history` | **TYPE:** `list[ChatMessage]` |
`input` | **TYPE:** `Input` |
call_tool
¶
call_tool(tool: ToolHandler[ToolParams], *args: args, **kwargs: kwargs) -> Generator[None, None, Any]
Calls the specified tool with the given arguments and yields the generated content.
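The `Generator[None, None, Any]` return type means the tool's result arrives as the generator's *return* value rather than through `yield`. A self-contained sketch of how a caller can retrieve it, with a hypothetical `fake_call_tool` standing in for the real method:

```python
from typing import Any, Generator


def fake_call_tool(x: int) -> Generator[None, None, Any]:
    # Stand-in for call_tool: yields None while content streams out,
    # then delivers its result as the generator's return value.
    for _ in range(3):
        yield None
    return x * 2  # this fills the `Any` slot in Generator[None, None, Any]


def consume(gen: Generator[None, None, Any]) -> Any:
    # The return value travels on StopIteration.value; `yield from`
    # would propagate it the same way inside another generator.
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        return stop.value


result = consume(fake_call_tool(21))
```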
stream_chunks
¶
stream_chunks(*, content: Content | None = None, input: str | None = None, model_type: LanguageModelType = 'core', system_prompt: str = '', skip_observe_files: bool = False) -> Generator[AgentChunk, None, None]
stream_step
¶
Calls the 'editor' language model to figure out if it should use a tool or provide a final answer. Returns the concatenated text response.
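The decide-then-act flow this method describes can be sketched with plain stand-ins; `decide`, `ToolCall`, and `FinalAnswer` below are all hypothetical names for illustration, not part of this API:

```python
from dataclasses import dataclass


@dataclass
class ToolCall:
    name: str
    args: dict


@dataclass
class FinalAnswer:
    text: str


def decide(prompt: str):
    # Stand-in for the 'editor' model's decision: route questions to a
    # tool, answer everything else directly.
    if prompt.endswith("?"):
        return ToolCall(name="search", args={"query": prompt})
    return FinalAnswer(text=prompt.upper())


def step(prompt: str) -> str:
    # One step: either dispatch to a tool or return the final answer text.
    decision = decide(prompt)
    if isinstance(decision, ToolCall):
        return f"using tool {decision.name}"
    return decision.text
```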
stream_structured_output
¶
stream_structured_output(output_type: type[BaseModelType], *, input: str | None = None, model_type: LanguageModelType = 'router', system_prompt: str = '') -> Generator[BaseModelType, None, None]
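A sketch of the streaming-structured-output pattern this signature suggests: the generator yields progressively more complete instances of the requested type. The `Summary` model and the dict-based chunks are illustrative assumptions, not the real parsing logic:

```python
from dataclasses import dataclass
from typing import Generator


@dataclass
class Summary:
    # Hypothetical output type; the real method takes any BaseModelType.
    title: str = ""
    body: str = ""


def stream_structured(chunks: list[dict]) -> Generator[Summary, None, None]:
    partial: dict = {}
    for chunk in chunks:
        # Merge each new fragment and yield the best-effort object so far.
        partial.update(chunk)
        yield Summary(**partial)


results = list(stream_structured([{"title": "Hi"}, {"body": "All good."}]))
```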
stream_to_content
¶
stream_to_content(*, content: Content | None = None, input: str | None = None, system_prompt: str | None = None, model_type: LanguageModelType = 'core') -> Generator[None, None, None]
Calls the 'core' model to generate a final response if no tools are used. Yields the streaming content.
Content
¶
Bases: BaseModel
METHOD | DESCRIPTION |
---|---|
`add_child` | Add a child to the chat content. |
`append_chunk` | Append a chunk to the chat content. |
`data_of` | |
`from_text` | Create a chat content object from text. |
`get_direct_text` | Get the text content of the chat. |
`get_text` | Get the text content of the chat. |
`set_data` | |
`set_text` | Set the text content of the chat. |
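The method table above can be illustrated with a minimal `MiniContent` tree. It is a plain-dataclass stand-in for the real pydantic model, and the recursive `get_text` behavior is an assumption, not taken from the implementation:

```python
from dataclasses import dataclass, field


@dataclass
class MiniContent:
    # Hypothetical miniature of Content: text plus nested children.
    text: str = ""
    children: list["MiniContent"] = field(default_factory=list)

    @classmethod
    def from_text(cls, text: str) -> "MiniContent":
        # Create a content object from text.
        return cls(text=text)

    def add_child(self, child: "MiniContent") -> None:
        # Add a child to the content tree.
        self.children.append(child)

    def get_direct_text(self) -> str:
        # Only this node's own text.
        return self.text

    def get_text(self) -> str:
        # Own text followed by all children's text, depth-first
        # (assumed semantics).
        return self.text + "".join(c.get_text() for c in self.children)


root = MiniContent.from_text("Hello, ")
root.add_child(MiniContent.from_text("world"))
```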
ATTRIBUTE | DESCRIPTION |
---|---|
`children` | |
`errors` | |
`internal_checkpoint` | |
`internal_children` | **TYPE:** `list[Content]` |
`internal_data` | |
`internal_tool_render_id` | |
`is_loading` | |
`metadata` | |
`parts` | |
`step` | |
errors
class-attribute
instance-attribute
¶
internal_checkpoint
class-attribute
instance-attribute
¶
internal_children
class-attribute
instance-attribute
¶
internal_children: list[Content] = Field(default_factory=list)
internal_tool_render_id
class-attribute
instance-attribute
¶
metadata
class-attribute
instance-attribute
¶
AgentChunk
module-attribute
¶
AgentChunk = Annotated[TextChunk | ErrorChunk, Field(discriminator='type')]
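The `Field(discriminator='type')` annotation means a chunk's `type` field decides which model a payload parses into. A minimal sketch of that dispatch using plain dataclasses in place of the pydantic models; the payload shapes are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class TextChunk:
    type: str
    text: str


@dataclass
class ErrorChunk:
    type: str
    message: str


# Map each discriminator value to its chunk class.
_CHUNK_TYPES = {"text": TextChunk, "error": ErrorChunk}


def parse_chunk(payload: dict):
    # Dispatch on the discriminator field, as Field(discriminator='type')
    # does for the real union.
    return _CHUNK_TYPES[payload["type"]](**payload)


chunk = parse_chunk({"type": "text", "text": "hi"})
```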
ErrorChunk
¶
TextChunk
¶
CompletionMetadataChunk
¶
Bases: BaseModel
Chat usage metadata.
CLASS | DESCRIPTION |
---|---|
`Config` | |

ATTRIBUTE | DESCRIPTION |
---|---|
`cached_input_tokens_count` | |
`finish_reason` | **TYPE:** `LanguageModelFinishReason` |
`input_tokens_count` | |
`output_tokens_count` | |
`type` | |
finish_reason
class-attribute
instance-attribute
¶
finish_reason: LanguageModelFinishReason = UNKNOWN
type
class-attribute
instance-attribute
¶
LanguageModelFinishReason
¶
Bases: Enum
Enum for the reason why the language model stopped generating.
ATTRIBUTE | DESCRIPTION |
---|---|
`MAX_TOKENS` | |
`OTHER` | |
`STOP` | |
`UNKNOWN` | |
ChatMessage
¶
Bases: BaseModel
Chat message metadata.
METHOD | DESCRIPTION |
---|---|
`to_language_model_text` | |

ATTRIBUTE | DESCRIPTION |
---|---|
`content` | |
`id` | |
`role` | |