llama-stack/llama_stack/providers/remote/inference
Latest commit: Tgi fixture (#519) by Dinesh Yeduguru (de7af28756)
# What does this PR do?

* Add a test fixture for TGI (a hedged sketch follows below)
* Fix the logic so the correct Llama model identifier is passed through for chat completion

Fixes #514
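
For concreteness, here is a minimal sketch of what a session-scoped TGI fixture along these lines might look like, assuming it reads `TGI_URL` and `TGI_API_TOKEN` from the environment as in the test plan below. The fixture name and config shape are illustrative assumptions, not the repo's actual API:

```python
# Hypothetical sketch of a pytest fixture for TGI-backed inference tests.
# The env-var names match the test plan; everything else is assumed.
import os

import pytest


@pytest.fixture(scope="session")
def tgi_config():
    """Build TGI connection settings from TGI_URL / TGI_API_TOKEN."""
    url = os.environ.get("TGI_URL")
    if not url:
        pytest.skip("TGI_URL not set; skipping TGI inference tests")
    return {
        "url": url,
        "api_token": os.environ.get("TGI_API_TOKEN"),  # optional for local TGI
    }
```

And a hedged illustration of the second bullet: resolving the registered Llama model id to the name the provider expects before building the chat-completion request. The alias table and helper below are assumptions for illustration only:

```python
# Hypothetical mapping from llama-stack model ids to provider-side names;
# the actual fix lives in the TGI adapter, this only illustrates the idea.
LLAMA_MODEL_ALIASES = {
    "Llama3.1-8B-Instruct": "meta-llama/Llama-3.1-8B-Instruct",  # assumed entry
}


def resolve_llama_model(registered_id: str) -> str:
    """Return the provider-side model name for a registered Llama model."""
    try:
        return LLAMA_MODEL_ALIASES[registered_id]
    except KeyError:
        raise ValueError(f"Unknown Llama model: {registered_id}") from None
```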

## Test Plan

```
pytest -k "tgi" \
  llama_stack/providers/tests/inference/test_text_inference.py \
  --env TGI_URL=http://localhost:$INFERENCE_PORT \
  --env TGI_API_TOKEN=$HF_TOKEN
```
Committed: 2024-11-25 13:17:02 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| `bedrock` | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| `databricks` | Inference to use provider resource id to register and validate (#428) | 2024-11-12 20:02:00 -08:00 |
| `fireworks` | fix 3.2-1b fireworks | 2024-11-19 14:20:07 -08:00 |
| `nvidia` | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| `ollama` | Update Ollama supported llama model list (#483) | 2024-11-22 21:56:43 -08:00 |
| `sample` | migrate model to Resource and new registration signature (#410) | 2024-11-08 16:12:57 -08:00 |
| `tgi` | Tgi fixture (#519) | 2024-11-25 13:17:02 -08:00 |
| `together` | fix llama stack build for together & llama stack build from templates (#479) | 2024-11-18 22:29:16 -08:00 |
| `vllm` | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |