llama-stack/llama_stack/providers/tests/inference
Commit e84d4436b5 by Ashwin Bharambe, 2024-11-20 16:14:37 -08:00
Since we are pushing for HF repos, we should accept them in inference configs (#497)
# What does this PR do?

As the title says: inference configs should accept HuggingFace repo ids for models, not only the Llama model descriptors.

## Test Plan

This needs commit 8752149f58 to also land; the next package release (0.0.54) will then make this work properly.

The test is:

```bash
pytest -v -s -m "llama_3b and meta_reference" test_model_registration.py
```
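Since `conftest.py` now takes HF ids for `--inference-model` (see the file listing below), the same test can be pointed at a specific model by its HF repo id. A minimal sketch of such an invocation, where the exact repo id is an illustrative assumption rather than something taken from this PR:

```bash
# Hypothetical invocation: select the model by its HF repo id instead of
# a Llama descriptor. The repo id below is illustrative only.
pytest -v -s -m "llama_3b and meta_reference" test_model_registration.py \
  --inference-model meta-llama/Llama-3.2-3B-Instruct
```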
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `conftest.py` | update tests --inference-model to hf id | 2024-11-18 17:36:58 -08:00 |
| `fixtures.py` | Kill "remote" providers and fix testing with a remote stack properly (#435) | 2024-11-12 21:51:29 -08:00 |
| `pasta.jpeg` | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |
| `test_model_registration.py` | Since we are pushing for HF repos, we should accept them in inference configs (#497) | 2024-11-20 16:14:37 -08:00 |
| `test_prompt_adapter.py` | Added tests for persistence (#274) | 2024-10-22 19:41:46 -07:00 |
| `test_text_inference.py` | Update condition in tests to handle llama-3.1 vs llama3.1 (HF names) | 2024-11-19 13:25:36 -08:00 |
| `test_vision_inference.py` | Kill "remote" providers and fix testing with a remote stack properly (#435) | 2024-11-12 21:51:29 -08:00 |
| `utils.py` | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |