llama-stack-mirror/llama_stack/providers/remote/inference
Nathan Weinberg 0e5574cf9d feat: allow ollama to use 'latest' if available but not specified
ollama's CLI supports running models via commands such as 'ollama run llama3.2',
but this syntax does not work with the llama-stack INFERENCE_MODEL variable,
which currently requires an explicit tag such as 'latest'

this commit checks whether the 'latest' tag of the requested model is available
in ollama and uses it when a user passes a model name without a tag (see the
sketch below)

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-04-10 20:12:09 -04:00
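
The check the commit describes can be illustrated with a minimal sketch, shown below. This is an illustration under stated assumptions, not the actual patch: it assumes the `ollama` Python client's async `list()` call, which reports the locally pulled models, and `resolve_model_id` is a hypothetical helper name introduced here.

```python
# Minimal sketch of the ':latest' fallback described in the commit message
# above; not the actual patch. Assumes the `ollama` Python client's async
# `list()` call; the exact response shape may vary across client versions.
from ollama import AsyncClient


async def resolve_model_id(client: AsyncClient, model: str) -> str:
    """Return `model` unchanged if it carries a tag; otherwise fall back
    to `<model>:latest` when that tag is pulled locally in ollama."""
    if ":" in model:
        # an explicit tag (e.g. 'llama3.2:1b') was given; respect it
        return model
    response = await client.list()
    local_models = {m["model"] for m in response["models"]}
    candidate = f"{model}:latest"
    if candidate in local_models:
        # untagged request, and ':latest' is available locally; use it
        return candidate
    # no ':latest' variant pulled; leave the name as-is and let the
    # provider's normal model-not-found handling apply
    return model
```

In llama-stack, a check like this would naturally sit in the ollama provider's model-resolution path, where the requested identifier is compared against the models ollama reports as available.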
anthropic feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
bedrock refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
cerebras refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
cerebras_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
databricks refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
fireworks test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
fireworks_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
gemini feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
groq test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
groq_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
nvidia refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
ollama feat: allow ollama to use 'latest' if available but not specified 2025-04-10 20:12:09 -04:00
openai feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
passthrough fix: passthrough impl response.content.text (#1665) 2025-03-17 13:42:08 -07:00
runpod test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
sambanova test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
sambanova_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
tgi chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
together test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
together_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
vllm chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00