docs: clarify security mechanism comments in get_headers_from_request

Based on user feedback, improved comments to distinguish between
the two security layers:

1. PRIMARY: Line 89 - Architectural prevention
   - get_request_provider_data() only reads from request body
   - Never accesses HTTP Authorization header
   - This is what actually prevents inference token leakage

2. SECONDARY: Lines 97-104 - Validation prevention
   - Rejects Authorization in mcp_headers dict
   - Enforces using dedicated mcp_authorization field
   - Prevents users from misusing the API

The previous comment was misleading: it suggested the validation
prevented inference token leakage, when the architecture already
provides that isolation.
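The secondary validation layer described above can be sketched as a small standalone check. This is a hedged illustration, not the actual implementation: the helper name `validate_mcp_headers` is hypothetical, though the `mcp_headers` shape and the error it raises mirror the commit's description.

```python
def validate_mcp_headers(mcp_headers: dict[str, dict[str, str]]) -> None:
    """Reject any Authorization key inside per-server MCP header dicts.

    This is the SECONDARY layer: the PRIMARY isolation comes from
    get_request_provider_data() only ever reading the request body,
    never the HTTP Authorization header.
    """
    for server_url, values in mcp_headers.items():
        for key in values:
            if key.lower() == "authorization":
                # Enforce the dedicated mcp_authorization field instead
                raise ValueError(
                    f"Authorization header is not allowed in mcp_headers "
                    f"for {server_url}; use the mcp_authorization field"
                )
```

A caller passing `{"https://mcp.example.com": {"Authorization": "Bearer t"}}` gets a `ValueError`, while ordinary custom headers pass through untouched.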
Omar Abdelwahab 2025-11-07 14:05:48 -08:00
parent 2295a1aad5
commit 9e972cf20c


@@ -86,6 +86,9 @@ class ModelContextProtocolToolRuntimeImpl(ToolGroupsProtocolPrivate, ToolRuntime
headers = {}
authorization = None
# PRIMARY SECURITY: This line prevents inference token leakage
# provider_data only contains X-LlamaStack-Provider-Data (request body),
# never the HTTP Authorization header (which contains the inference token)
provider_data = self.get_request_provider_data()
if provider_data:
# Extract headers (excluding Authorization)
@@ -95,7 +98,8 @@ class ModelContextProtocolToolRuntimeImpl(ToolGroupsProtocolPrivate, ToolRuntime
continue
# Security check: reject Authorization header in mcp_headers
# This prevents accidentally passing inference tokens to MCP servers
# This enforces using the dedicated mcp_authorization field for auth tokens
# Note: Inference tokens are already isolated by line 89 (provider_data only contains request body)
for key in values.keys():
if key.lower() == "authorization":
raise ValueError(