llama-stack/llama_stack/providers/remote/inference
Latest commit: eddef0b2ae "chore: slight renaming of model alias stuff (#1181)" by Ashwin Bharambe, 2025-02-20 11:48:46 -08:00

Quick test by running:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk
```
| Directory | Last commit | Date |
| --- | --- | --- |
| bedrock | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| cerebras | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| databricks | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| fireworks | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| groq | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| nvidia | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| ollama | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| passthrough | feat: inference passthrough provider (#1166) | 2025-02-19 21:47:00 -08:00 |
| runpod | chore: remove llama_models.llama3.api imports from providers (#1107) | 2025-02-19 19:01:29 -08:00 |
| sambanova | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| sample | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| tgi | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| together | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| vllm | chore: slight renaming of model alias stuff (#1181) | 2025-02-20 11:48:46 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
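Each subdirectory above implements a remote inference adapter that a Llama Stack distribution can route requests to (for example `remote::fireworks`, which the quick-test command above exercises). As a rough illustration of how such a provider is reached from the client side, here is a minimal sketch using the llama-stack-client Python SDK; the base URL, port, and model identifier are illustrative assumptions, not values taken from this listing:

```
# Minimal sketch: call a running Llama Stack server whose inference provider is one
# of the remote adapters listed above. URL and model id are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server address

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # assumed model registered with the provider
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```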