# llama-stack-mirror/llama_stack/providers
Latest commit: 4fec49dfdb by Ashwin Bharambe, 2025-08-12 10:24:01 -07:00

feat(responses): add include parameter (#3115)

Well our Responses tests use it so we better include it in the API, no?

I discovered it because I want to make sure `llama-stack-client` can always be used instead of `openai-python` as the client (we do want to be _truly_ compatible).
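Not part of the commit itself, but as a sketch of what exposing `include` enables: an OpenAI-compatible client pointed at a Llama Stack server can pass `include` on a Responses call to request extra fields. The base URL, model id, and include value below are assumptions for illustration, not taken from the commit.

```python
# Minimal sketch, not from the commit: exercises the `include` parameter
# through the stock openai-python client against a Llama Stack server.
# The base_url (default port 8321, OpenAI-compat path), model id, and
# include value are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed Llama Stack OpenAI-compat endpoint
    api_key="none",  # placeholder; a local Llama Stack server may not check it
)

response = client.responses.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # hypothetical registered model
    input="What is the capital of France?",
    include=["message.output_text.logprobs"],  # one include value defined by the OpenAI Responses API
)
print(response.output_text)
```

If the compatibility goal in the commit message holds, the same call should work unchanged with `llama-stack-client`'s OpenAI-compatible surface; whether a given include value is supported depends on the server.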
| Name         | Last commit                                                                  | Date                       |
|--------------|------------------------------------------------------------------------------|----------------------------|
| inline       | feat(responses): add include parameter (#3115)                               | 2025-08-12 10:24:01 -07:00 |
| registry     | feat: Add Google Vertex AI inference provider support (#2841)                | 2025-08-11 08:22:04 -04:00 |
| remote       | refactor: standardize InferenceRouter model handling (#2965)                 | 2025-08-12 04:20:39 -06:00 |
| utils        | fix(dep): update to openai >= 1.99.6 and use new Function location (#3087)   | 2025-08-12 08:40:32 -07:00 |
| __init__.py  | API Updates (#73)                                                            | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: create unregister shield API endpoint in Llama Stack (#2853)           | 2025-08-05 07:33:46 -07:00 |