llama-stack/llama_stack/providers/inline

Last commit: c9e5578151 by Ashwin Bharambe, 2025-01-22 10:17:59 -08:00

[memory refactor][5/n] Migrate all vector_io providers (#835)

See https://github.com/meta-llama/llama-stack/issues/827 for the broader design. This PR finishes off all the stragglers and migrates everything to the new naming.
Name           Last commit                                                                            Date
agents         [memory refactor][5/n] Migrate all vector_io providers (#835)                          2025-01-22 10:17:59 -08:00
datasetio      Add persistence for localfs datasets (#557)                                            2025-01-09 17:34:18 -08:00
eval           rebase eval test w/ tool_runtime fixtures (#773)                                       2025-01-15 12:55:19 -08:00
inference      meta reference inference fixes (#797)                                                  2025-01-16 18:17:46 -08:00
ios/inference  impls -> inline, adapters -> remote (#381)                                             2024-11-06 14:54:05 -08:00
post_training  More idiomatic REST API (#765)                                                         2025-01-15 13:20:09 -08:00
safety         [bugfix] fix llama guard parsing ContentDelta (#772)                                   2025-01-15 11:20:23 -08:00
scoring        Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735)          2025-01-09 11:51:36 -08:00
telemetry      optional api dependencies (#793)                                                       2025-01-17 15:26:53 -08:00
tool_runtime   [memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832)  2025-01-22 10:04:16 -08:00
vector_io      [memory refactor][5/n] Migrate all vector_io providers (#835)                          2025-01-22 10:17:59 -08:00
__init__.py    impls -> inline, adapters -> remote (#381)                                             2024-11-06 14:54:05 -08:00