llama-stack-mirror/llama_stack/providers/inline (last commit: 2025-10-01 20:48:17 -07:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| agents | moar recordings and test fixes | 2025-10-01 20:48:17 -07:00 |
| batches | feat(batches, completions): add /v1/completions support to /v1/batches (#3309) | 2025-09-05 11:59:57 -07:00 |
| datasetio | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| eval | feat: update eval runner to use openai endpoints (#3588) | 2025-09-29 13:13:53 -07:00 |
| files/localfs | fix(expires_after): make sure multipart/form-data is properly parsed (#3612) | 2025-09-30 16:14:03 -04:00 |
| inference | chore(api): remove batch inference (#3261) | 2025-09-26 14:35:34 -07:00 |
| ios/inference | fixes | 2025-09-30 21:06:40 -07:00 |
| post_training | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| safety | feat: use /v1/chat/completions for safety model inference (#3591) | 2025-09-30 11:01:44 -07:00 |
| scoring | feat: create HTTP DELETE API endpoints to unregister ScoringFn and Benchmark resources in Llama Stack (#3371) | 2025-09-15 12:43:38 -07:00 |
| telemetry | fix(logging): disable console telemetry sink by default (#3623) | 2025-09-30 14:58:05 -07:00 |
| tool_runtime | more substantial cleanup of Tool vs. ToolDef crap | 2025-10-01 15:54:14 -07:00 |
| vector_io | refactor: use generic WeightedInMemoryAggregator for hybrid search in SQLiteVecIndex (#3303) | 2025-09-02 10:38:35 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |