llama-stack-mirror/llama_stack/providers/inline
feat(responses): add include parameter (#3115)
Ashwin Bharambe · 4fec49dfdb · 2025-08-12 10:24:01 -07:00

Well, our Responses tests use it, so we had better include it in the API, no?

I discovered it because I want to make sure `llama-stack-client` can always be
used instead of `openai-python` as the client (we do want to be _truly_ compatible).
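For context on what the `include` parameter does, here is a minimal sketch of exercising it through `openai-python` pointed at a Llama Stack server; since the stack aims to be OpenAI-compatible, the same call should eventually work through `llama-stack-client` as well. The base URL, model id, and the specific include value below are illustrative assumptions, not taken from the PR:

```python
# Sketch: calling the Responses API with `include`, which asks the server
# to return extra fields alongside the response (here, file-search results,
# mirroring the OpenAI Responses API). Base URL, model id, and include
# value are assumptions for illustration, not taken from PR #3115.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed local Llama Stack endpoint
    api_key="none",  # placeholder; local servers typically don't check this
)

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed model id
    input="Summarize the latest uploaded report.",
    include=["file_search_call.results"],
)
print(response.output_text)
```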
| Directory | Last commit message | Date |
| --- | --- | --- |
| agents | feat(responses): add include parameter (#3115) | 2025-08-12 10:24:01 -07:00 |
| datasetio | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| eval | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| files/localfs | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| inference | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| safety | fix: Implement missing run_moderation method in PromptGuardSafetyImpl (#3101) | 2025-08-12 08:32:52 -07:00 |
| scoring | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| telemetry | fix: telemetry fixes (inference and core telemetry) (#2733) | 2025-08-06 13:37:40 -07:00 |
| tool_runtime | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |
| vector_io | docs: Add unsupported search mode info about FAISS (#3089) | 2025-08-10 17:34:34 -06:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |