llama-stack-mirror/llama_stack
Ashwin Bharambe 4fec49dfdb
feat(responses): add include parameter (#3115)
Well, our Responses tests use it, so we'd better include it in the API, no?

I discovered it because I want to make sure `llama-stack-client` can always be
used instead of `openai-python` as the client (we do want to be
_truly_ compatible).
2025-08-12 10:24:01 -07:00
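
A minimal sketch of what this commit enables, assuming a Llama Stack server running locally with its OpenAI-compatible Responses endpoint. The base URL, model id, and `include` value below are illustrative assumptions, not taken from the commit itself.

```python
# Sketch: passing the new `include` parameter to the Responses API.
# Assumptions (not from this commit): the local server address, the
# OpenAI-compat path, the model id, and the particular include value.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed Llama Stack OpenAI-compat path
    api_key="not-needed-locally",                   # placeholder; a local server may not require auth
)

response = client.responses.create(
    model="meta-llama/Llama-3.1-8B-Instruct",       # assumed model id
    input="What is the capital of France?",
    include=["message.output_text.logprobs"],       # request extra fields in the response
)
print(response.output_text)
```

Per the commit message, the same call should also work with `llama-stack-client` swapped in for `openai-python`, which is exactly the compatibility the Responses tests exercise.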
| Name | Last commit | Date |
|---|---|---|
| `apis` | feat(responses): add include parameter (#3115) | 2025-08-12 10:24:01 -07:00 |
| `cli` | chore: rename templates to distributions (#3035) | 2025-08-04 11:34:17 -07:00 |
| `core` | refactor: standardize InferenceRouter model handling (#2965) | 2025-08-12 04:20:39 -06:00 |
| `distributions` | feat: Add Google Vertex AI inference provider support (#2841) | 2025-08-11 08:22:04 -04:00 |
| `models` | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| `providers` | feat(responses): add include parameter (#3115) | 2025-08-12 10:24:01 -07:00 |
| `strong_typing` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `testing` | fix(recording): endpoint resolution (#3013) | 2025-08-01 16:23:54 -07:00 |
| `ui` | chore: Updating UI Sidebar (#3081) | 2025-08-11 07:39:52 -07:00 |
| `__init__.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | chore(logging)!: use comma as a delimiter (#3095) | 2025-08-11 11:51:43 -07:00 |
| `schema_utils.py` | feat(auth): API access control (#2822) | 2025-07-24 15:30:48 -07:00 |