Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-03 18:00:36 +00:00
feat: unify MCP authentication across Responses and Tool Runtime APIs
- Add authorization parameter to Tool Runtime API signatures (list_runtime_tools, invoke_tool)
- Update MCP provider implementation to use authorization from the request body instead of provider-data
- Deprecate mcp_authorization and mcp_headers from provider-data (MCPProviderDataValidator now empty)
- Update all Tool Runtime tests to pass authorization as a request body parameter
- Responses API already uses request body authorization (no changes needed)

This provides a single, consistent way to pass MCP authentication tokens across both APIs, addressing reviewer feedback about avoiding multiple configuration paths.
This commit is contained in:
parent 893e186d5c
commit 84baa5c406
6 changed files with 87 additions and 134 deletions
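For illustration, a minimal caller-side sketch of the unified flow described above. This is a hypothetical example, not code from this commit: the base URL, token, tool group id, tool name, and arguments are placeholders, and the authorization keyword on invoke_tool is inferred from the commit message.

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # placeholder endpoint
AUTH_TOKEN = "my-mcp-token"  # plain token; no "Bearer " prefix

# List MCP tools, authenticating via the request-body parameter
# rather than the deprecated X-LlamaStack-Provider-Data header.
tools = client.tool_runtime.list_tools(
    tool_group_id="mcp::example_tools",  # hypothetical tool group
    authorization=AUTH_TOKEN,
)

# Invoke a tool with the same request-body token
# (authorization keyword assumed per the commit message).
result = client.tool_runtime.invoke_tool(
    tool_name="example_tool",  # hypothetical tool
    kwargs={"query": "hello"},
    authorization=AUTH_TOKEN,
)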
@@ -193,15 +193,15 @@ class TestMCPToolsInChatCompletion:
             mcp_endpoint=dict(uri=uri),
         )
 
-        provider_data = {"mcp_authorization": {uri: AUTH_TOKEN}}  # Token without "Bearer " prefix
-        auth_headers = {
-            "X-LlamaStack-Provider-Data": json.dumps(provider_data),
-        }
+        # Authorization now passed as request body parameter
+        # Removed auth_headers - using authorization parameter instead
+        # (no longer needed)
 
         # Get the tools from MCP
         tools_response = llama_stack_client.tool_runtime.list_tools(
             tool_group_id=test_toolgroup_id,
-            extra_headers=auth_headers,
+            authorization=AUTH_TOKEN,
         )
 
         # Convert to OpenAI format for inference
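At the API-signature level, the change the commit message describes for list_runtime_tools and invoke_tool might look roughly like the sketch below. This is a non-authoritative outline: the surrounding types are simplified to Any, and the parameter defaults are assumptions based on the names appearing in the diff above.

from typing import Any, Protocol


class ToolRuntime(Protocol):
    async def list_runtime_tools(
        self,
        tool_group_id: str | None = None,
        mcp_endpoint: Any | None = None,
        authorization: str | None = None,  # new: token now travels in the request body
    ) -> Any:
        ...

    async def invoke_tool(
        self,
        tool_name: str,
        kwargs: dict[str, Any],
        authorization: str | None = None,  # new: replaces provider-data mcp_authorization/mcp_headers
    ) -> Any:
        ...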