llama-stack/llama_stack/providers/remote/inference
Matthew Farrellee e28cedd833
feat: add nvidia embedding implementation for new signature, task_type, output_dimension, text_truncation (#1213)
# What does this PR do?

Updates the NVIDIA inference provider's embedding implementation to use the new signature.

Adds support for the `task_type`, `output_dimension`, and `text_truncation` parameters.
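
For illustration, here is a minimal sketch of exercising the new parameters through `llama-stack-client` against a local server. The call shape follows the updated `embeddings` signature; the specific values passed for `task_type`, `output_dimension`, and `text_truncation` are assumptions for illustration, not taken from this PR.

```python
# Minimal sketch, assuming a llama-stack server on localhost:8321 with the
# nvidia provider registered and baai/bge-m3 served as an embedding model.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.embeddings(
    model_id="baai/bge-m3",            # embedding model from the test plan below
    contents=["What is the capital of France?"],
    task_type="query",                 # assumed value: query- vs. document-style embedding
    output_dimension=384,              # assumed value: request a reduced vector size
    text_truncation="end",             # assumed value: truncate inputs that exceed the context
)
print(len(response.embeddings[0]))     # dimensionality of the returned vector
```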

## Test Plan

`LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/inference/test_embedding.py --embedding-model baai/bge-m3`
2025-02-27 16:58:11 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| cerebras | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| databricks | fix: resolve type hint issues and import dependencies (#1176) | 2025-02-25 11:06:47 -08:00 |
| fireworks | feat: add (openai, anthropic, gemini) providers via litellm (#1267) | 2025-02-25 22:07:33 -08:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | feat: add nvidia embedding implementation for new signature, task_type, output_dimension, text_truncation (#1213) | 2025-02-27 16:58:11 -08:00 |
| ollama | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| runpod | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| sambanova | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| sample | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| tgi | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| together | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| vllm | fix: Get builtin tool calling working in remote-vllm (#1236) | 2025-02-26 15:25:47 -05:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |