llama-stack/llama_stack/providers/remote/inference
Latest commit: 996f27a308 — fix: add logging import (#1174)
Author: Rashmi Pawar
Date: 2025-02-20 11:26:47 -05:00

# What does this PR do?
Fixes the logging import and the logger instance creation.

cc: @dglogo
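The fix above concerns a missing `logging` import and logger instance creation. As a minimal sketch of the standard-library idiom involved (the actual module path, logger name, and call sites in the NVIDIA provider are not shown here and are assumptions):

```python
# Sketch of the module-level logging pattern the fix refers to.
# The function below is a hypothetical placeholder, not provider code.
import logging

# Conventional module-level logger, named after the importing module.
logger = logging.getLogger(__name__)


def complete(prompt: str) -> str:
    """Hypothetical inference entry point used only to show logger usage."""
    logger.debug("Received completion request of length %d", len(prompt))
    return prompt.upper()  # stands in for a real inference call
```

Without the `import logging` line, the `logging.getLogger(__name__)` call raises a `NameError` at import time, which is the class of failure this fix addresses.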
| Directory   | Last commit                                                                                   | Date                       |
|-------------|-----------------------------------------------------------------------------------------------|----------------------------|
| bedrock     | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| cerebras    | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| databricks  | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| fireworks   | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| groq        | chore: move all Llama Stack types from llama-models to llama-stack (#1098)                     | 2025-02-14 09:10:59 -08:00 |
| nvidia      | fix: add logging import (#1174)                                                                | 2025-02-20 11:26:47 -05:00 |
| ollama      | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| passthrough | feat: inference passthrough provider (#1166)                                                   | 2025-02-19 21:47:00 -08:00 |
| runpod      | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| sambanova   | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| sample      | build: format codebase imports using ruff linter (#1028)                                       | 2025-02-13 10:06:21 -08:00 |
| tgi         | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| together    | chore: remove llama_models.llama3.api imports from providers (#1107)                           | 2025-02-19 19:01:29 -08:00 |
| vllm        | fix: More robust handling of the arguments in tool call response in remote::vllm (#1169)       | 2025-02-19 22:27:02 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381)                                                     | 2024-11-06 14:54:05 -08:00 |