llama-stack-mirror/src/llama_stack/providers/remote
Omar Abdelwahab 9e972cf20c docs: clarify security mechanism comments in get_headers_from_request
Based on user feedback, improved comments to distinguish between
the two security layers:

1. PRIMARY: Line 89 - Architectural prevention
   - get_request_provider_data() only reads from request body
   - Never accesses HTTP Authorization header
   - This is what actually prevents inference token leakage

2. SECONDARY: Lines 97-104 - Validation-based prevention
   - Rejects Authorization in mcp_headers dict
   - Enforces using dedicated mcp_authorization field
   - Prevents users from misusing the API

The previous comment was misleading: it suggested that the validation
is what prevents inference token leakage, when the architecture
already guarantees that isolation. A sketch of both layers follows.
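
For reference, the interplay of the two layers could be sketched as
follows. This is a minimal illustration, not the actual implementation:
the dict-based signatures, the "provider_data" body field, and the
Bearer prefix are assumptions drawn from the description above.

    def get_request_provider_data(request_body: dict) -> dict:
        # PRIMARY layer: provider data is read from the parsed request
        # body only. Nothing here touches the HTTP Authorization header,
        # so the inference client's token cannot flow into MCP headers
        # by construction.
        return request_body.get("provider_data") or {}


    def get_headers_from_request(request_body: dict) -> dict:
        provider_data = get_request_provider_data(request_body)
        mcp_headers = dict(provider_data.get("mcp_headers") or {})

        # SECONDARY layer: reject a raw Authorization entry in
        # mcp_headers and steer callers onto the dedicated
        # mcp_authorization field instead.
        for key in mcp_headers:
            if key.lower() == "authorization":
                raise ValueError(
                    "Authorization is not allowed in mcp_headers; "
                    "use the mcp_authorization field instead."
                )

        token = provider_data.get("mcp_authorization")
        if token:
            # Assumption: the server formats the token as a Bearer header.
            mcp_headers["Authorization"] = f"Bearer {token}"
        return mcp_headers


    # Usage: the dedicated field is accepted, the raw header is rejected.
    ok = get_headers_from_request(
        {"provider_data": {"mcp_authorization": "tok-123"}}
    )  # -> {"Authorization": "Bearer tok-123"}

    try:
        get_headers_from_request(
            {"provider_data": {"mcp_headers": {"Authorization": "Bearer x"}}}
        )
    except ValueError as err:
        print(err)  # the SECONDARY layer rejects the raw header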
2025-11-07 14:05:48 -08:00
agents chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00
datasetio chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00
eval chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00
files feat: openai files provider (#3946) 2025-10-28 16:25:03 -07:00
inference fix: Avoid model_limits KeyError (#4060) 2025-11-05 10:34:40 -08:00
post_training chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00
safety chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00
tool_runtime docs: clarify security mechanism comments in get_headers_from_request 2025-11-07 14:05:48 -08:00
vector_io chore!: BREAKING CHANGE: vector_db_id -> vector_store_id (#3923) 2025-10-27 14:26:06 -07:00
__init__.py chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00