llama-stack-mirror/llama_stack/providers/tests/inference
Dinesh Yeduguru 96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) Adds the ability to generate embeddings in all supported inference
providers (a usage sketch follows below).
2) Moves all the memory providers to use the inference API, and improves
the memory tests to set up the inference stack correctly and use the
embedding models.

This is a merge of #589 and #598
2024-12-12 11:47:50 -08:00
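A minimal sketch of routing embedding generation through the inference API. The import path, the `embeddings()` signature, and the model id are assumptions based on the llama-stack Inference protocol of this era, not details taken from #606:

```python
# Minimal sketch, assuming the Inference protocol exposes an async
# `embeddings()` method that takes a model id plus a list of contents and
# returns an EmbeddingsResponse carrying one float vector per input.
# Exact import paths, names, and signatures are assumptions.
from llama_stack.apis.inference import EmbeddingsResponse, Inference


async def embed_texts(inference: Inference, texts: list[str]) -> list[list[float]]:
    # Route embedding generation through the unified inference API rather
    # than a memory-provider-specific embedding path.
    response: EmbeddingsResponse = await inference.embeddings(
        model_id="all-MiniLM-L6-v2",  # any registered embedding model id
        contents=texts,
    )
    return response.embeddings
```

Centralizing embeddings behind the inference API lets the memory providers drop per-provider embedding code and reuse whichever inference backend is configured.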
__init__.py Remove "routing_table" and "routing_key" concepts for the user (#201) 2024-10-10 10:24:13 -07:00
conftest.py Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
fixtures.py Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
pasta.jpeg Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) 2024-11-05 16:22:33 -08:00
test_embeddings.py Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00 (a test sketch follows this listing)
test_model_registration.py Since we are pushing for HF repos, we should accept them in inference configs (#497) 2024-11-20 16:14:37 -08:00
test_prompt_adapter.py Added tests for persistence (#274) 2024-10-22 19:41:46 -07:00
test_text_inference.py add completion api support to nvidia inference provider (#533) 2024-12-11 10:08:38 -08:00
test_vision_inference.py Don't skip meta-reference for the tests 2024-11-21 13:29:53 -08:00
utils.py Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) 2024-11-05 16:22:33 -08:00
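For test_embeddings.py, a hypothetical sketch of the kind of test this change adds. The `inference_stack` and `inference_model` fixture names (presumably wired up in conftest.py and fixtures.py) are assumptions:

```python
# Hypothetical pytest sketch in the style of this suite; fixture names and
# the response shape are assumptions, not the actual contents of the file.
import pytest


class TestEmbeddings:
    @pytest.mark.asyncio
    async def test_embeddings(self, inference_stack, inference_model):
        inference_impl, _ = inference_stack
        response = await inference_impl.embeddings(
            model_id=inference_model,
            contents=["Hello, world!", "A second sentence."],
        )
        # One vector per input, and every vector has the same dimensionality.
        assert len(response.embeddings) == 2
        assert len({len(v) for v in response.embeddings}) == 1
```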