llama-stack-mirror/llama_stack/providers/remote/inference
Ihar Hrachyshka 66d6c2580e
chore: more mypy checks (ollama, vllm, ...) (#1777)
# What does this PR do?

- **chore: mypy for strong_typing**
- **chore: mypy for remote::vllm**
- **chore: mypy for remote::ollama**
- **chore: mypy for providers.datatype**

---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-04-01 17:12:39 +02:00
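
As a minimal sketch of the kind of check PR #1777 enables, the snippet below runs mypy programmatically against two of the provider packages in this listing via mypy's public `api` module. The package paths mirror the directories shown below, but the repo's actual mypy configuration and flags are an assumption, not reproduced here.

```python
# Minimal sketch: type-check provider packages with mypy's programmatic
# API (requires mypy to be installed). Paths mirror this directory
# listing; the repo's real mypy settings are assumed, not shown.
from mypy import api

# api.run returns (normal_report, error_report, exit_status)
report, errors, exit_status = api.run(
    [
        "llama_stack/providers/remote/inference/ollama",
        "llama_stack/providers/remote/inference/vllm",
    ]
)
print(report)
if exit_status != 0:
    raise SystemExit(f"mypy reported type errors (exit status {exit_status})")
```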
| Name | Last commit | Date |
|------|-------------|------|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| cerebras | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| databricks | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| fireworks | fix: remove Llama-3.2-1B-Instruct for fireworks (#1558) | 2025-03-11 11:19:29 -07:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | fix: NVIDIA embedding results in InternalServerError (#1851) | 2025-04-01 13:31:29 +02:00 |
| ollama | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | fix: passthrough impl response.content.text (#1665) | 2025-03-17 13:42:08 -07:00 |
| runpod | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| sambanova | fix: Updating ToolCall.arguments to allow for json strings that can be decoded on client side (#1685) | 2025-03-19 10:36:19 -07:00 |
| tgi | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| together | feat: Add open benchmark template codegen (#1579) | 2025-03-12 11:12:08 -07:00 |
| vllm | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |