pydantic_ai.capabilities
Toolset
Bases: AbstractCapability[AgentDepsT]
A capability that provides a toolset.
Thinking
Bases: AbstractCapability[Any]
Enables and configures model thinking/reasoning.
Uses the unified thinking setting in
ModelSettings to work portably across providers.
Provider-specific thinking settings (e.g., anthropic_thinking,
openai_reasoning_effort) take precedence when both are set.
The thinking effort level.
- True: Enable thinking with the provider's default effort.
- False: Disable thinking (silently ignored on always-on models).
- 'minimal' / 'low' / 'medium' / 'high' / 'xhigh': Enable thinking at a specific effort level.
Type: ThinkingLevel Default: True
PrepareTools
Bases: AbstractCapability[AgentDepsT]
Capability that filters or modifies tool definitions using a callable.
Wraps a ToolsPrepareFunc as a capability,
allowing it to be composed with other capabilities via the capability system.
from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import PrepareTools
from pydantic_ai.tools import ToolDefinition
async def hide_admin_tools(
    ctx: RunContext[None], tool_defs: list[ToolDefinition]
) -> list[ToolDefinition] | None:
    return [td for td in tool_defs if not td.name.startswith('admin_')]
agent = Agent('openai:gpt-5', capabilities=[PrepareTools(hide_admin_tools)])
WebFetch
Bases: BuiltinOrLocalTool[AgentDepsT]
URL fetching capability.
Uses the model’s builtin URL fetching when available, falling back to a local function tool (markdownify-based fetch by default) when it isn’t.
The local fallback requires the web-fetch optional group:
pip install "pydantic-ai-slim[web-fetch]"
Only fetch from these domains. Enforced locally when builtin is unavailable.
Type: list[str] | None Default: None
Never fetch from these domains. Enforced locally when builtin is unavailable.
Type: list[str] | None Default: None
Maximum number of fetches per run. Requires builtin support.
Type: int | None Default: None
Enable citations for fetched content. Builtin-only; ignored by local tools.
Type: bool | None Default: None
Maximum content length in tokens. Builtin-only; ignored by local tools.
Type: int | None Default: None
WebSearch
Bases: BuiltinOrLocalTool[AgentDepsT]
Web search capability.
Uses the model’s builtin web search when available, falling back to a local function tool (DuckDuckGo by default) when it isn’t.
Controls how much context is retrieved from the web. Builtin-only; ignored by local tools.
Type: Literal['low', 'medium', 'high'] | None Default: None
Localize search results based on user location. Builtin-only; ignored by local tools.
Type: WebSearchUserLocation | None Default: None
Domains to exclude from results. Requires builtin support.
Type: list[str] | None Default: None
Only include results from these domains. Requires builtin support.
Type: list[str] | None Default: None
Maximum number of web searches per run. Requires builtin support.
Type: int | None Default: None
PrefixTools
Bases: WrapperCapability[AgentDepsT]
A capability that wraps another capability and prefixes its tool names.
Only the wrapped capability’s tools are prefixed; other agent tools are unaffected.
from pydantic_ai import Agent
from pydantic_ai.capabilities import PrefixTools, Toolset
from pydantic_ai.toolsets import FunctionToolset
toolset = FunctionToolset()
agent = Agent(
    'openai:gpt-5',
    capabilities=[
        PrefixTools(
            wrapped=Toolset(toolset),
            prefix='ns',
        ),
    ],
)
@classmethod
def from_spec(cls, prefix: str, capability: CapabilitySpec) -> PrefixTools[Any]
Create from spec with a nested capability specification.
PrefixTools[Any]
prefix : str
The prefix to add to tool names (e.g. 'mcp' turns 'search' into 'mcp_search').
capability : CapabilitySpec
A capability spec (same format as entries in the capabilities list).
ThreadExecutor
Bases: AbstractCapability[Any]
Use a custom executor for running sync functions in threads.
By default, sync tool functions and other sync callbacks are run in threads using
anyio.to_thread.run_sync, which creates ephemeral threads.
In long-running servers (e.g. FastAPI), this can lead to thread accumulation under sustained load.
This capability provides a bounded ThreadPoolExecutor
(or any Executor) to use instead, scoped to agent runs:
from concurrent.futures import ThreadPoolExecutor
from pydantic_ai import Agent
from pydantic_ai.capabilities import ThreadExecutor
executor = ThreadPoolExecutor(max_workers=16, thread_name_prefix='agent-worker')
agent = Agent('openai:gpt-5.2', capabilities=[ThreadExecutor(executor)])
To set an executor for all agents globally, use
Agent.using_thread_executor().
The executor to use for running sync functions.
Type: Executor
BuiltinOrLocalTool
Bases: AbstractCapability[AgentDepsT]
Capability that pairs a provider builtin tool with a local fallback.
When the model supports the builtin natively, the local fallback is removed. When the model doesn’t support the builtin, the builtin is removed and the local tool stays.
Can be used directly:
from pydantic_ai.builtin_tools import WebSearchTool
from pydantic_ai.capabilities import BuiltinOrLocalTool

cap = BuiltinOrLocalTool(builtin=WebSearchTool(), local=my_search_func)
Or subclassed to set defaults by overriding _default_builtin, _default_local,
and _requires_builtin.
The built-in WebSearch,
WebFetch, and
ImageGeneration capabilities
are all subclasses.
Configure the provider builtin tool.
- True (default): use the default builtin tool configuration (subclasses only).
- False: disable the builtin; always use the local tool.
- An AbstractBuiltinTool instance: use this specific configuration.
- A callable (BuiltinToolFunc): dynamically create the builtin per run via RunContext.
Type: AgentBuiltinTool[AgentDepsT] | bool Default: True
Configure the local fallback tool.
- None (default): auto-detect a local fallback via _default_local.
- False: disable the local fallback; only use the builtin.
- A Tool or AbstractToolset instance: use this specific local tool.
- A bare callable: automatically wrapped in a Tool.
Type: Tool[AgentDepsT] | Callable[..., Any] | AbstractToolset[AgentDepsT] | Literal[False] | None Default: None
BuiltinTool
Bases: AbstractCapability[AgentDepsT]
A capability that registers a builtin tool with the agent.
Wraps a single AgentBuiltinTool — either a static
AbstractBuiltinTool instance or a callable
that dynamically produces one.
When builtin_tools is passed to Agent.__init__, each item is
automatically wrapped in a BuiltinTool capability.
@classmethod
def from_spec(
    cls,
    tool: AbstractBuiltinTool | None = None,
    kwargs: Any = {},
) -> BuiltinTool[Any]
Create from spec.
Supports two YAML forms:
- Flat: {BuiltinTool: {kind: web_search, search_context_size: high}}
- Explicit: {BuiltinTool: {tool: {kind: web_search}}}
BuiltinTool[Any]
ImageGeneration
Bases: BuiltinOrLocalTool[AgentDepsT]
Image generation capability.
Uses the model’s builtin image generation when available. When the model doesn’t
support it and fallback_model is provided, falls back to a local tool that
delegates to a subagent running the specified image-capable model.
Image generation settings (quality, size, etc.) are forwarded to the
ImageGenerationTool used by
both the builtin and the local fallback subagent. When passing a custom builtin
instance, its settings are also used for the fallback subagent; capability-level
fields override any builtin instance settings.
Model to use for image generation when the agent’s model doesn’t support it natively.
Must be a model that supports image generation via the
ImageGenerationTool builtin.
This requires a conversational model with image generation support, not a dedicated
image-only API. Examples:
- 'openai-responses:gpt-5.4': OpenAI model with image generation support
- 'google-gla:gemini-3-pro-image-preview': Google image generation model
Can be a model name string, Model instance, or a callable taking RunContext
that returns a Model instance.
Type: ImageGenerationFallbackModel | None Default: None
Background type for the generated image.
Supported by: OpenAI Responses. 'transparent' only supported for 'png' and 'webp'.
Type: Literal['transparent', 'opaque', 'auto'] | None Default: None
Input fidelity for matching style/features of input images.
Supported by: OpenAI Responses. Default: 'low'.
Type: Literal['high', 'low'] | None Default: None
Moderation level for the generated image.
Supported by: OpenAI Responses.
Type: Literal['auto', 'low'] | None Default: None
Compression level for the output image.
Supported by: OpenAI Responses (jpeg/webp, default: 100), Google Vertex AI (jpeg, default: 75).
Type: int | None Default: None
Output format of the generated image.
Supported by: OpenAI Responses (default: 'png'), Google Vertex AI.
Type: Literal['png', 'webp', 'jpeg'] | None Default: None
Quality of the generated image.
Supported by: OpenAI Responses.
Type: Literal['low', 'medium', 'high', 'auto'] | None Default: None
Size of the generated image.
Supported by: OpenAI Responses ('auto', '1024x1024', '1024x1536', '1536x1024'),
Google ('512', '1K', '2K', '4K').
Type: Literal['auto', '1024x1024', '1024x1536', '1536x1024', '512', '1K', '2K', '4K'] | None Default: None
Aspect ratio for generated images.
Supported by: Google (Gemini), OpenAI Responses (maps '1:1', '2:3', '3:2' to sizes).
Type: ImageAspectRatio | None Default: None
MCP
Bases: BuiltinOrLocalTool[AgentDepsT]
MCP server capability.
Uses the model’s builtin MCP server support when available, connecting directly via HTTP when it isn’t.
The URL of the MCP server.
Type: str
Unique identifier for the MCP server. Defaults to a slug derived from the URL.
Authorization header value for MCP server requests. Passed to both builtin and local.
Type: str | None Default: None
HTTP headers for MCP server requests. Passed to both builtin and local.
Type: dict[str, str] | None Default: None
Filter to only these tools. Applied to both builtin and local.
Type: list[str] | None Default: None
Description of the MCP server. Builtin-only; ignored by local tools.
Type: str | None Default: None
HistoryProcessor
Bases: AbstractCapability[AgentDepsT]
A capability that processes message history before model requests.
Bases: AbstractCapability[AgentDepsT]
A capability that combines multiple capabilities.
WrapperCapability
Bases: AbstractCapability[AgentDepsT]
A capability that wraps another capability and delegates all methods.
Analogous to WrapperToolset for toolsets.
Subclass and override specific methods to modify behavior while delegating the rest.
AbstractCapability
Bases: ABC, Generic[AgentDepsT]
Abstract base class for agent capabilities.
A capability is a reusable, composable unit of agent behavior that can provide instructions, model settings, tools, and request/response hooks.
Lifecycle: capabilities are passed to an Agent at construction time, where
most get_* methods are called to collect static configuration (instructions, model
settings, toolsets, builtin tools). The exception is
get_wrapper_toolset,
which is called per-run during toolset assembly. Then, on each model request during a
run, the before_model_request
and after_model_request
hooks are called to allow dynamic adjustments.
See the capabilities documentation for built-in capabilities.
get_serialization_name
and from_spec support
YAML/JSON specs (via Agent.from_spec); they have
sensible defaults and typically don’t need to be overridden.
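A minimal custom capability can be sketched as follows. This is an illustrative, self-contained sketch: AbstractCapability is stubbed out here so the example runs standalone, and the subclass name and instruction text are invented for the example.

```python
class AbstractCapability:
    """Stand-in for pydantic_ai.capabilities.AbstractCapability."""

    def get_instructions(self):
        # Default: contribute no instructions.
        return None


class PirateVoice(AbstractCapability):
    """Hypothetical capability contributing static instructions."""

    def get_instructions(self):
        # Collected once at agent construction time.
        return 'Answer in the voice of a pirate.'


print(PirateVoice().get_instructions())
```

A real capability would subclass the actual base from pydantic_ai.capabilities and could additionally override the tool, settings, and hook methods documented below.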
Whether this capability (or any sub-capability) overrides wrap_node_run.
Type: bool
@classmethod
def get_serialization_name(cls) -> str | None
Return the name used for spec serialization (CamelCase class name by default).
Return None to opt out of spec-based construction.
@classmethod
def from_spec(cls, args: Any = (), kwargs: Any = {}) -> AbstractCapability[Any]
Create from spec arguments. Default: cls(*args, **kwargs).
Override when __init__ takes non-serializable types.
AbstractCapability[Any]
@async
def for_run(ctx: RunContext[AgentDepsT]) -> AbstractCapability[AgentDepsT]
Return the capability instance to use for this agent run.
Called once per run, before get_*() re-extraction and before any hooks fire.
Override to return a fresh instance for per-run state isolation.
Default: return self (shared across runs).
AbstractCapability[AgentDepsT]
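The per-run isolation pattern can be sketched with a plain dataclass; the class name is invented and the RunContext argument is omitted for brevity.

```python
import dataclasses


@dataclasses.dataclass
class CounterCapability:
    """Stand-in for a capability holding mutable per-run state."""

    calls: int = 0

    def for_run(self):
        # Return a fresh copy so per-run mutations don't leak
        # into other runs or into the shared instance.
        return dataclasses.replace(self)


shared = CounterCapability()
run1 = shared.for_run()
run2 = shared.for_run()
run1.calls += 5
print(run1.calls, run2.calls, shared.calls)
```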
def get_instructions() -> AgentInstructions[AgentDepsT] | None
Return instructions to include in the system prompt, or None.
This method is called once at agent construction time. To get dynamic
per-request behavior, return a callable that receives
RunContext or a
TemplateStr — not a dynamic string.
AgentInstructions[AgentDepsT] | None
def get_model_settings() -> AgentModelSettings[AgentDepsT] | None
Return model settings to merge into the agent’s defaults, or None.
This method is called once at agent construction time. Return a static
ModelSettings dict when the settings don’t change between requests.
Return a callable that receives RunContext
when settings need to vary per step (e.g. based on ctx.run_step or ctx.deps).
When the callable is invoked, ctx.model_settings contains the merged
result of all layers resolved before this capability (model defaults and
agent-level settings). The returned dict is merged on top of that.
AgentModelSettings[AgentDepsT] | None
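The layering described above can be sketched as a plain dict merge; the setting values are illustrative only.

```python
# ctx.model_settings as seen by the callable: model defaults plus
# agent-level settings, already merged.
base = {'temperature': 0.7, 'max_tokens': 1024}

# The dict this capability's callable returns for the current step.
override = {'temperature': 0.2}

# The returned dict is merged on top of the earlier layers,
# so its keys win while untouched keys pass through.
merged = {**base, **override}
print(merged)
```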
def get_toolset() -> AgentToolset[AgentDepsT] | None
Return a toolset to register with the agent, or None.
AgentToolset[AgentDepsT] | None
def get_builtin_tools() -> Sequence[AgentBuiltinTool[AgentDepsT]]
Return builtin tools to register with the agent.
Sequence[AgentBuiltinTool[AgentDepsT]]
def get_wrapper_toolset(
    toolset: AbstractToolset[AgentDepsT],
) -> AbstractToolset[AgentDepsT] | None
Wrap the agent’s assembled toolset, or return None to leave it unchanged.
Called per-run with the combined non-output toolset (after agent-level
prepare_tools wrapping).
Output tools are added separately and are not included.
Unlike the other get_* methods which are called once at agent construction,
this is called each run (after for_run).
When multiple capabilities provide wrappers, each receives the already-wrapped
toolset from earlier capabilities (first capability wraps innermost).
Use this to apply cross-cutting toolset wrappers like
PreparedToolset,
FilteredToolset,
or custom WrapperToolset subclasses.
AbstractToolset[AgentDepsT] | None
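The wrapping order can be sketched with plain functions standing in for toolset wrappers (strings stand in for toolsets; names are illustrative):

```python
def wrap_logging(toolset):
    # First capability's wrapper.
    return f'Logging({toolset})'


def wrap_filtering(toolset):
    # Second capability's wrapper; receives the already-wrapped toolset.
    return f'Filtering({toolset})'


toolset = 'base'
for wrapper in (wrap_logging, wrap_filtering):  # declaration order
    toolset = wrapper(toolset)

print(toolset)  # the first capability's wrapper ends up innermost
```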
@async
def prepare_tools(
    ctx: RunContext[AgentDepsT],
    tool_defs: list[ToolDefinition],
) -> list[ToolDefinition]
Filter or modify tool definitions visible to the model for this step.
The list contains all tool kinds (function, output, unapproved) distinguished
by tool_def.kind. Return a filtered
or modified list. Called after the agent-level
prepare_tools has already run.
@async
def before_run(ctx: RunContext[AgentDepsT]) -> None
Called before the agent run starts. Observe-only; use wrap_run for modification.
@async
def after_run(
    ctx: RunContext[AgentDepsT],
    result: AgentRunResult[Any],
) -> AgentRunResult[Any]
Called after the agent run completes. Can modify the result.
@async
def wrap_run(
    ctx: RunContext[AgentDepsT],
    handler: WrapRunHandler,
) -> AgentRunResult[Any]
Wraps the entire agent run. handler() executes the run.
If handler() raises and this method catches the exception and
returns a result instead, the error is suppressed and the recovery
result is used.
If this method does not call handler() (short-circuit), the run
is skipped and the returned result is used directly.
Note: if the caller cancels the run (e.g. by breaking out of an
iter() loop), this method receives an asyncio.CancelledError.
Implementations that hold resources should handle cleanup accordingly.
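The recovery and short-circuit semantics can be sketched with plain coroutines; the handler and result values here are stand-ins, not the real API types.

```python
import asyncio


async def failing_handler():
    # Stand-in for handler(): executing the run, which fails.
    raise RuntimeError('run failed')


async def recovering_wrap_run(handler):
    try:
        return await handler()
    except RuntimeError:
        # Catching the error and returning a result suppresses it.
        return 'recovered'


async def short_circuit_wrap_run(handler):
    # handler() is never called: the run is skipped and this
    # result is used directly.
    return 'cached'


print(asyncio.run(recovering_wrap_run(failing_handler)))
print(asyncio.run(short_circuit_wrap_run(failing_handler)))
```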
@async
def on_run_error(
    ctx: RunContext[AgentDepsT],
    error: BaseException,
) -> AgentRunResult[Any]
Called when the agent run fails with an exception.
This is the error counterpart to
after_run:
while after_run is called on success, on_run_error is called on
failure (after wrap_run
has had its chance to recover).
Raise the original error (or a different exception) to propagate it.
Return an AgentRunResult to suppress
the error and recover the run.
Not called for GeneratorExit or KeyboardInterrupt.
@async
def before_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
) -> AgentNode[AgentDepsT]
Called before each graph node executes. Can observe or replace the node.
AgentNode[AgentDepsT]
@async
def after_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    result: NodeResult[AgentDepsT],
) -> NodeResult[AgentDepsT]
Called after each graph node succeeds. Can modify the result (next node or End).
NodeResult[AgentDepsT]
@async
def wrap_node_run(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    handler: WrapNodeRunHandler[AgentDepsT],
) -> NodeResult[AgentDepsT]
Wraps execution of each agent graph node (run step).
Called for every node in the agent graph (UserPromptNode,
ModelRequestNode, CallToolsNode). handler(node) executes
the node and returns the next node (or End).
Override to inspect or modify nodes before execution, inspect or modify
the returned next node, call handler multiple times (retry), or
return a different node to redirect graph progression.
Note: this hook fires when using agent.run(),
agent.run_stream(), and when manually driving
an agent.iter() run with
next(), but it does not fire when
iterating over the run with bare async for (which yields stream events, not
node results).
When using agent.run() with event_stream_handler, the handler wraps both
streaming and graph advancement (i.e. the model call happens inside the wrapper).
When using agent.run_stream(), the handler wraps only graph advancement — streaming
happens before the wrapper because run_stream() must yield the stream to the caller
while the stream context is still open, which cannot happen from inside a callback.
NodeResult[AgentDepsT]
@async
def on_node_run_error(
    ctx: RunContext[AgentDepsT],
    node: AgentNode[AgentDepsT],
    error: Exception,
) -> NodeResult[AgentDepsT]
Called when a graph node fails with an exception.
This is the error counterpart to
after_node_run.
Raise the original error (or a different exception) to propagate it.
Return a next node or End to recover and continue the graph.
Useful for recovering from
UnexpectedModelBehavior
by redirecting to a different node (e.g. retry with different model settings).
NodeResult[AgentDepsT]
@async
def wrap_run_event_stream(
    ctx: RunContext[AgentDepsT],
    stream: AsyncIterable[AgentStreamEvent],
) -> AsyncIterable[AgentStreamEvent]
Wraps the event stream for a streamed node. Can observe or transform events.
AsyncIterable[AgentStreamEvent]
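An event-stream transform of this shape can be sketched with async generators; plain strings stand in for AgentStreamEvent objects, and the event names are invented for the example.

```python
import asyncio


async def source():
    # Stand-in for the streamed node's event stream.
    for event in ('part_start', 'part_delta', 'part_end'):
        yield event


async def wrap_run_event_stream(stream):
    # Observe, filter, or transform events as they pass through.
    async for event in stream:
        if event != 'part_delta':
            yield event


async def main():
    return [event async for event in wrap_run_event_stream(source())]


print(asyncio.run(main()))
```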
@async
def before_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
) -> ModelRequestContext
Called before each model request. Can modify messages, settings, and parameters.
ModelRequestContext
@async
def after_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    response: ModelResponse,
) -> ModelResponse
Called after each model response. Can modify the response before further processing.
Raise ModelRetry to reject the response and
ask the model to try again. The original response is still appended to message history
so the model can see what it said. Retries count against max_result_retries.
@async
def wrap_model_request(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    handler: WrapModelRequestHandler,
) -> ModelResponse
Wraps the model request. handler() calls the model.
Raise ModelRetry to skip on_model_request_error
and directly retry the model request with a retry prompt. If the handler was called,
the model response is preserved in history for context (same as after_model_request).
@async
def on_model_request_error(
    ctx: RunContext[AgentDepsT],
    request_context: ModelRequestContext,
    error: Exception,
) -> ModelResponse
Called when a model request fails with an exception.
This is the error counterpart to
after_model_request.
Raise the original error (or a different exception) to propagate it.
Return a ModelResponse to suppress
the error and use the response as if the model call succeeded.
Raise ModelRetry to retry the model request
with a retry prompt instead of recovering or propagating.
Not called for SkipModelRequest
or ModelRetry.
@async
def before_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
) -> RawToolArgs
Modify raw args before validation.
Raise ModelRetry to skip validation and
ask the model to redo the tool call.
RawToolArgs
@async
def after_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
) -> ValidatedToolArgs
Modify validated args. Called only on successful validation.
Raise ModelRetry to reject the validated args
and ask the model to redo the tool call.
ValidatedToolArgs
@async
def wrap_tool_validate(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
    handler: WrapToolValidateHandler,
) -> ValidatedToolArgs
Wraps tool argument validation. handler() runs the validation.
ValidatedToolArgs
@async
def on_tool_validate_error(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: RawToolArgs,
    error: ValidationError | ModelRetry,
) -> ValidatedToolArgs
Called when tool argument validation fails.
This is the error counterpart to
after_tool_validate.
Fires for ValidationError (schema mismatch) and
ModelRetry (custom validator rejection).
Raise the original error (or a different exception) to propagate it.
Return validated args to suppress the error and continue as if validation passed.
Not called for SkipToolValidation.
ValidatedToolArgs
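A recovery of the kind this hook allows can be sketched as follows; the signature is abbreviated to just the raw args, and the lenient re-parse is an invented example policy.

```python
import json


def recover_args(raw):
    """Return validated args for raw input a strict validator rejected."""
    if isinstance(raw, dict):
        # Already structured; continue as if validation passed.
        return raw
    # Tolerate surrounding whitespace in raw string args.
    return json.loads(raw.strip())


print(recover_args(' {"query": "weather"} '))
```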
@async
def before_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
) -> ValidatedToolArgs
Modify validated args before execution.
Raise ModelRetry to skip execution and
ask the model to redo the tool call.
ValidatedToolArgs
@async
def after_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    result: Any,
) -> Any
Modify result after execution.
Raise ModelRetry to reject the tool result
and ask the model to redo the tool call.
@async
def wrap_tool_execute(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    handler: WrapToolExecuteHandler,
) -> Any
Wraps tool execution. handler() runs the tool.
@async
def on_tool_execute_error(
    ctx: RunContext[AgentDepsT],
    call: ToolCallPart,
    tool_def: ToolDefinition,
    args: ValidatedToolArgs,
    error: Exception,
) -> Any
Called when tool execution fails with an exception.
This is the error counterpart to
after_tool_execute.
Raise the original error (or a different exception) to propagate it.
Return any value to suppress the error and use it as the tool result.
Raise ModelRetry to ask the model to
redo the tool call instead of recovering or propagating.
Not called for control flow exceptions
(SkipToolExecution,
CallDeferred,
ApprovalRequired)
or retry signals (ToolRetryError
from ModelRetry).
Use wrap_tool_execute
to intercept retries.
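A wrap_tool_execute-style retry loop for transient failures can be sketched as follows; the real hook also receives ctx, call, and tool_def, and the flaky tool here is a stand-in.

```python
import asyncio

attempts = {'count': 0}


async def flaky_tool(args):
    # A tool that fails twice before succeeding.
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise TimeoutError('transient failure')
    return {'ok': True}


async def wrap_tool_execute(args, handler, max_attempts=3):
    # Retry the handler on transient errors; propagate the last one.
    for attempt in range(max_attempts):
        try:
            return await handler(args)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise


result = asyncio.run(wrap_tool_execute({}, flaky_tool))
print(result, attempts['count'])
```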
def prefix_tools(prefix: str) -> PrefixTools[AgentDepsT]
Returns a new capability that wraps this one and prefixes its tool names.
Only this capability’s tools are prefixed; other agent tools are unaffected.
PrefixTools[AgentDepsT]
Bases: TimeoutError
Raised when a hook function exceeds its configured timeout.
Hooks
Bases: AbstractCapability[AgentDepsT]
Register hook functions via decorators or constructor kwargs.
For extension developers building reusable capabilities, subclass
AbstractCapability directly. For application code that needs
a few hooks without the ceremony of a subclass, use Hooks.
Example using decorators:
hooks = Hooks()

@hooks.on.before_model_request
async def log_request(ctx, request_context):
    print(f'Request: {request_context}')
    return request_context

agent = Agent('openai:gpt-5', capabilities=[hooks])
Example using constructor kwargs:
agent = Agent('openai:gpt-5', capabilities=[Hooks(before_model_request=log_request)])
Decorator namespace for registering hook functions.
Type: _HookRegistration[AgentDepsT]
AgentNode
Type alias for an agent graph node (UserPromptNode, ModelRequestNode, CallToolsNode).
Type: TypeAlias Default: '_agent_graph.AgentNode[AgentDepsT, Any]'
NodeResult
Type alias for the result of executing an agent graph node: either the next node or End.
Type: TypeAlias Default: '_agent_graph.AgentNode[AgentDepsT, Any] | End[FinalResult[Any]]'
Registry of all capability types that have a serialization name, mapping name to class.
Type: dict[str, type[AbstractCapability[Any]]] Default: {name: cls for cls in (BuiltinTool, HistoryProcessor, ImageGeneration, MCP, PrefixTools, PrepareTools, Thinking, Toolset, WebFetch, WebSearch) if (name := cls.get_serialization_name()) is not None}
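The registry comprehension above follows this pattern; the stand-in classes below are invented for the example, replacing the real capability classes.

```python
class WebSearch:
    @classmethod
    def get_serialization_name(cls):
        # Default behavior: the CamelCase class name.
        return cls.__name__


class InternalOnly:
    @classmethod
    def get_serialization_name(cls):
        # Returning None opts out of spec-based construction.
        return None


registry = {
    name: cls
    for cls in (WebSearch, InternalOnly)
    if (name := cls.get_serialization_name()) is not None
}
print(sorted(registry))
```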
WrapRunHandler
Handler type for wrap_run.
Type: TypeAlias Default: 'Callable[[], Awaitable[AgentRunResult[Any]]]'
WrapNodeRunHandler
Handler type for wrap_node_run.
Type: TypeAlias Default: 'Callable[[_agent_graph.AgentNode[AgentDepsT, Any]], Awaitable[_agent_graph.AgentNode[AgentDepsT, Any] | End[FinalResult[Any]]]]'
WrapModelRequestHandler
Handler type for wrap_model_request.
Type: TypeAlias Default: 'Callable[[ModelRequestContext], Awaitable[ModelResponse]]'
RawToolArgs
Type alias for raw (pre-validation) tool arguments.
Type: TypeAlias Default: 'str | dict[str, Any]'
ValidatedToolArgs
Type alias for validated tool arguments.
Type: TypeAlias Default: 'dict[str, Any]'
WrapToolValidateHandler
Handler type for wrap_tool_validate.
Type: TypeAlias Default: 'Callable[[str | dict[str, Any]], Awaitable[dict[str, Any]]]'
WrapToolExecuteHandler
Handler type for wrap_tool_execute.
Type: TypeAlias Default: 'Callable[[dict[str, Any]], Awaitable[Any]]'