We need to support:
- asymmetric embedding models (#934)
- truncation policies (#933)
- varying dimensional output (#932)

## Test Plan

```bash
$ cd llama_stack/providers/tests/inference
$ pytest -s -v -k fireworks test_embeddings.py \
    --inference-model nomic-ai/nomic-embed-text-v1.5 --env EMBEDDING_DIMENSION=784
$ pytest -s -v -k together test_embeddings.py \
    --inference-model togethercomputer/m2-bert-80M-8k-retrieval --env EMBEDDING_DIMENSION=784
$ pytest -s -v -k ollama test_embeddings.py \
    --inference-model all-minilm:latest --env EMBEDDING_DIMENSION=784
```
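To illustrate what the three features mean at the call site, here is a minimal, self-contained sketch. The parameter names (`task_type`, `truncation`, `output_dimension`) and the toy hashing "embedder" are illustrative assumptions, not the actual llama-stack provider API; the point is only the behavior each feature adds.

```python
from typing import List, Optional

def embed(
    texts: List[str],
    task_type: Optional[str] = None,        # "query" vs "document" for asymmetric models
    truncation: str = "end",                # truncation policy: "none", "start", or "end"
    output_dimension: Optional[int] = None, # varying dimensional output
    max_tokens: int = 4,                    # toy context window, in whitespace tokens
    full_dim: int = 8,                      # toy model's native embedding size
) -> List[List[float]]:
    """Toy embedder demonstrating the three feature knobs (hypothetical API)."""
    vectors = []
    for text in texts:
        tokens = text.split()
        if len(tokens) > max_tokens:
            # Truncation policy: fail loudly, keep the tail, or keep the head.
            if truncation == "none":
                raise ValueError(f"input exceeds {max_tokens} tokens")
            tokens = tokens[-max_tokens:] if truncation == "start" else tokens[:max_tokens]
        # Asymmetric models prepend a task prefix so queries and documents
        # are encoded differently but remain comparable.
        prefix = f"{task_type}: " if task_type else ""
        seed = hash(prefix + " ".join(tokens)) % (2**32)
        vec = [((seed >> i) & 0xFF) / 255.0 for i in range(full_dim)]
        # Varying dimensional output: return only the first N components
        # (Matryoshka-style models are trained so this remains meaningful).
        if output_dimension is not None:
            vec = vec[:output_dimension]
        vectors.append(vec)
    return vectors
```

For example, embedding the same text as a `"query"` versus a `"document"` yields different vectors, `truncation="none"` raises on over-long input, and `output_dimension=4` returns 4-dimensional vectors instead of the model's native 8.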