llama-stack-mirror/llama_stack/providers/inline

Latest commit: a385e0d95e "more better" by Ashwin Bharambe, 2025-10-05 21:55:44 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| agents | more better | 2025-10-05 21:55:44 -07:00 |
| batches | more better | 2025-10-05 21:55:44 -07:00 |
| datasetio | feat!: providers use unified 'persistence' field | 2025-10-05 20:33:03 -07:00 |
| eval | feat!: providers use unified 'persistence' field | 2025-10-05 20:33:03 -07:00 |
| files/localfs | feat!: providers use unified 'persistence' field | 2025-10-05 20:33:03 -07:00 |
| inference | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |
| ios/inference | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| post_training | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| safety | feat: use /v1/chat/completions for safety model inference (#3591) | 2025-09-30 11:01:44 -07:00 |
| scoring | chore: use openai_chat_completion for llm as a judge scoring (#3635) | 2025-10-01 09:44:31 -04:00 |
| telemetry | chore: Remove debug logging from telemetry adapter (#3643) | 2025-10-01 15:16:23 -07:00 |
| tool_runtime | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| vector_io | feat!: providers use unified 'persistence' field | 2025-10-05 20:33:03 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |