Feature request: Add ToolSpec support for Bedrock #6267
Comments
Thanks for opening your first issue here! We'll come back to you as soon as we can.
Hi @brnaba-aws!
Hi @brnaba-aws, thanks for opening this feature request. While @anafalcao researches the topic, I'd like to understand the use case a bit better, and which parts of the experience you think we could improve.

The part that generates a JSON Schema for tools has been on our radar for a while. I did a PoC a couple of months ago and the use case seems compelling enough. However, I'm curious whether you have taken a look at PydanticAI, which does something very similar:

```python
import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    'some-model-id',
    deps_type=str,
    system_prompt=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)

@agent.tool_plain
def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))

@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps

dice_result = agent.run_sync('My guess is 4', deps='Anne')
```

This is just one example of a library that does this; in other ecosystems the patterns might look slightly different due to language differences, but in most cases they all rely on some kind of parsing library (Pydantic, Zod, etc.) and then use it to introspect the spec to create a JSON Schema - which is clever.

With this in mind, I think offering only this type of experience would probably be kind of redundant, and I would rather we think about some kind of value add that Powertools for AWS can bring to this, and answer the question: why should I not just use these other tools? For example, maybe we could lean on other Powertools for AWS features and improve the experience further by integrating with things like logger/tracer/metrics to give operators visibility over the agent's inner workings, or we could work with Idempotency to make sure tool calls have the same result even if called multiple times, given the same inputs.

Regarding the naming: canonically we have used the notion of "resolver" for things that take in a request payload as it arrives to your AWS Lambda function and "route" the payload to some other portion of your code, based on rules that you have defined using things like decorators. In this case, unless we're talking about MCP servers (which can be a bit hard to work with in Lambda until this becomes a thing), I think what this "resolver" does is actually very different and is instead a way to basically "converse" with the LLM and execute tools when the LLM returns a `toolUse` request.
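For illustration, here's a minimal sketch of that "converse and execute tools" loop using the boto3 Converse API; the `roll_die` tool and the hand-written ToolSpec are placeholders, and generating that spec from the function is exactly the part this issue is about:

```python
import random

import boto3

bedrock = boto3.client("bedrock-runtime")

def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))

# Hand-written ToolSpec; the feature discussed here would derive this from the function
TOOL_CONFIG = {
    "tools": [
        {
            "toolSpec": {
                "name": "roll_die",
                "description": "Roll a six-sided die and return the result.",
                "inputSchema": {"json": {"type": "object", "properties": {}}},
            }
        }
    ]
}
TOOLS = {"roll_die": roll_die}

messages = [{"role": "user", "content": [{"text": "Roll the die, my guess is 4"}]}]

while True:
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any Converse-capable model
        messages=messages,
        toolConfig=TOOL_CONFIG,
    )
    messages.append(response["output"]["message"])

    if response["stopReason"] != "tool_use":
        break  # the model produced its final answer

    # Execute each toolUse block and send the results back as toolResult blocks
    results = []
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            tool_use = block["toolUse"]
            output = TOOLS[tool_use["name"]](**tool_use.get("input", {}))
            results.append(
                {
                    "toolResult": {
                        "toolUseId": tool_use["toolUseId"],
                        "content": [{"text": output}],
                    }
                }
            )
    messages.append({"role": "user", "content": results})
```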
Thanks for the response.
Because it means bringing another library into Lambda. I would much prefer to use AWS-maintained libraries rather than another external package that we don't control.
Don't pay too much attention to the name, I didn't mean it to be taken as is. It is, yes, a way to return a toolConfig.
Thanks for replying.
While I value this argument, if the only reason for us to do this is a case of Not Invented Here syndrome, then I think we should keep thinking about and developing this idea/use case. Besides, if we're talking about dependency cost, our implementation would also add bytes; if instead we're talking about supply chain, I purposefully linked a Pydantic-adjacent tool because we're already using Pydantic through Powertools, so adding this other package might not significantly worsen our footprint.

With that said, this is an area where I'd like to see Powertools for AWS more active, but I'd like to avoid us just lifting an existing pattern and slapping our name on it without adding any value/flavor to it. I think there's an opportunity to do better. Given your background, body of work, and the fact that you opened the issue, I'd like to ask you to please work with us and expand on the requirements/purpose of the feature.

Again, I don't think just transforming a model/dataclass to a JSON Schema and using functools to call a function is enough to have a full utility - this is more like a helper. As you know, there's more to function calling/tool use than just generating the spec: there's advancing the conversation, validating inputs/outputs to and from the LLM, observing the entire thing - and probably more. I'd like us to take the conversation in this direction rather than just take the request at face value and open a PR to add this.
Use case
I would like to see Powertools for AWS Lambda being able to produce the ToolSpec definition from a method, to then be used with Bedrock.
Here is a ToolSpec example:
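An illustrative ToolSpec in the shape the Converse API expects, expressed as a Python dict (the `get_weather` tool here is a made-up example):

```python
# Illustrative ToolSpec for the Bedrock Converse API
tool_spec = {
    "toolSpec": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "Name of the city"}
                },
                "required": ["city"],
            }
        },
    }
}
```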
The Python method would look like this:
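For example (names and types are placeholders matching the spec above):

```python
def get_weather(city: str) -> dict:
    """Get the current weather for a given city."""
    return {"city": city, "temperature_c": 22}
```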
Solution/User Experience
A developer would decorate their method like this:
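For example, with a hypothetical decorator (the import path and decorator name below are placeholders, not an existing Powertools API):

```python
# Hypothetical API: neither this import path nor the decorator exist today
from aws_lambda_powertools.utilities.bedrock import bedrock_tool

@bedrock_tool  # would introspect the signature and docstring to generate the ToolSpec
def get_weather(city: str) -> dict:
    """Get the current weather for a given city."""
    return {"city": city, "temperature_c": 22}
```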
Then they could use it with the Converse API like this:
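For example (the `tool_spec` attribute exposed by the hypothetical decorator is a placeholder; `converse` is the real boto3 Bedrock Runtime call):

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# get_weather.tool_spec is hypothetical: the decorator would expose the generated ToolSpec
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": "What's the weather in Lisbon?"}]}],
    toolConfig={"tools": [get_weather.tool_spec]},
)
```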
Alternative solutions
Acknowledgment