llama-stack-mirror/llama_stack
commit 7e30b5a466
Author: Sébastien Han <seb@redhat.com>
Date:   2025-06-03 18:00:27 +02:00

    fix: remove sentence-transformers from remote vllm

    vLLM itself can generate embeddings, so we don't need this extra
    provider.

    Signed-off-by: Sébastien Han <seb@redhat.com>
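The change works because a vLLM server launched with an embedding model already exposes an OpenAI-compatible /v1/embeddings endpoint, so the remote vLLM setup no longer needs a separate sentence-transformers provider just for embeddings. Below is a minimal client-side sketch of that idea; the base URL, API key, and model name are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch: request embeddings directly from a vLLM server through its
# OpenAI-compatible API. Assumes vLLM was started with an embedding model,
# e.g. `vllm serve intfloat/e5-mistral-7b-instruct` (hypothetical choice).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM endpoint
    api_key="EMPTY",                      # vLLM accepts any key by default
)

resp = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",  # hypothetical embedding model
    input=["Llama Stack can delegate embedding generation to vLLM"],
)
print(len(resp.data[0].embedding))  # dimensionality of the returned vector
```

Since embeddings come from the same server that already handles completions, the extra inline provider (and its sentence-transformers dependency) can be dropped from the remote vLLM template.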
Name              Last commit                                                                            Date
apis              feat(responses): implement full multi-turn support (#2295)                            2025-06-02 15:35:49 -07:00
cli               refactor: remove container from list of run image types (#2178)                       2025-06-02 09:57:55 +02:00
distribution      feat: reference implementation for files API (#2330)                                  2025-06-02 21:54:24 -07:00
models            chore: remove usage of load_tiktoken_bpe (#2276)                                      2025-06-02 07:33:37 -07:00
providers         feat: reference implementation for files API (#2330)                                  2025-06-02 21:54:24 -07:00
strong_typing     chore: enable pyupgrade fixes (#1806)                                                 2025-05-01 14:23:50 -07:00
templates         fix: remove sentence-transformers from remote vllm                                    2025-06-03 18:00:27 +02:00
ui                chore: revert llama-stack-client dep (#2342)                                          2025-06-02 16:05:21 -07:00
__init__.py       export LibraryClient                                                                  2024-12-13 12:08:00 -08:00
env.py            refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)  2025-03-04 14:53:47 -08:00
log.py            chore: make cprint write to stderr (#2250)                                            2025-05-24 23:39:57 -07:00
schema_utils.py   chore: enable pyupgrade fixes (#1806)                                                 2025-05-01 14:23:50 -07:00