llama-stack/llama_stack/providers/tests/inference
Dinesh Yeduguru de7af28756
Tgi fixture (#519)
# What does this PR do?

* Add a test fixture for TGI
* Fix the logic to correctly pass the Llama model for chat completion

Fixes #514

## Test Plan

```shell
pytest -k "tgi" \
  llama_stack/providers/tests/inference/test_text_inference.py \
  --env TGI_URL=http://localhost:$INFERENCE_PORT \
  --env TGI_API_TOKEN=$HF_TOKEN
```
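A TGI test fixture along these lines typically reads its endpoint and credentials from the environment, matching the `--env` flags in the Test Plan. The sketch below is a minimal, hypothetical illustration of that pattern — the function name, config shape, and default port are assumptions, not the actual contents of `fixtures.py` (in the real suite this helper would be wrapped in a `@pytest.fixture`):

```python
import os


def tgi_config_from_env() -> dict:
    """Build a TGI inference config from environment variables.

    The variable names mirror the --env flags used in the Test Plan
    (TGI_URL, TGI_API_TOKEN); the default URL is an assumption.
    """
    return {
        "url": os.environ.get("TGI_URL", "http://localhost:8080"),
        # The API token is optional for a locally hosted TGI server.
        "api_token": os.environ.get("TGI_API_TOKEN"),
    }
```

Keeping the env parsing in a plain function (and only wrapping it in a session-scoped `pytest` fixture at the edge) makes the config logic trivially testable without a running TGI server.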
2024-11-25 13:17:02 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| conftest.py | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| fixtures.py | Tgi fixture (#519) | 2024-11-25 13:17:02 -08:00 |
| pasta.jpeg | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |
| test_model_registration.py | Since we are pushing for HF repos, we should accept them in inference configs (#497) | 2024-11-20 16:14:37 -08:00 |
| test_prompt_adapter.py | Added tests for persistence (#274) | 2024-10-22 19:41:46 -07:00 |
| test_text_inference.py | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| test_vision_inference.py | Don't skip meta-reference for the tests | 2024-11-21 13:29:53 -08:00 |
| utils.py | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |