llama-stack-mirror/llama_stack/providers/utils
Latest commit: add embedding model by default to distribution templates (#617)
Dinesh Yeduguru, 516e1a3e59, 2024-12-13 12:48:00 -08:00
# What does this PR do?
Adds the sentence-transformers provider and the `all-MiniLM-L6-v2`
embedding model to the default models registered in the run.yaml of
all distribution templates.
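
For context, here is a minimal sketch of the kind of entry this change would add to a template's run.yaml. The exact field names (`model_id`, `provider_id`, `model_type`, `metadata`) are assumptions based on the llama-stack model registry schema, not copied from this diff.

```yaml
# Hypothetical excerpt from a generated run.yaml; field names are assumed, not taken from this PR.
models:
- model_id: all-MiniLM-L6-v2          # default embedding model added by this change
  provider_id: sentence-transformers  # the newly wired-in embedding provider
  model_type: embedding
  metadata:
    embedding_dimension: 384          # all-MiniLM-L6-v2 produces 384-dimensional vectors
```

With such an entry present in the template, the embedding model would be registered automatically at stack startup rather than requiring a manual registration call.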

## Test Plan
```bash
llama stack build --template together --image-type conda
llama stack run ~/.llama/distributions/llamastack-together/together-run.yaml
```
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| datasetio | [Evals API][11/n] huggingface dataset provider + mmlu scoring fn (#392) | 2024-11-11 14:49:50 -05:00 |
| inference | add embedding model by default to distribution templates (#617) | 2024-12-13 12:48:00 -08:00 |
| kvstore | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| memory | Make embedding generation go through inference (#606) | 2024-12-12 11:47:50 -08:00 |
| scoring | [/scoring] add ability to define aggregation functions for scoring functions & refactors (#597) | 2024-12-11 10:03:42 -08:00 |
| telemetry | add tracing back to the lib cli (#595) | 2024-12-11 08:44:20 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |