llama-stack/llama_stack/providers/utils/inference
Dinesh Yeduguru fdff24e77a
Inference to use provider resource id to register and validate (#428)
This PR changes how a model id gets translated into the final model name
that is passed to the provider.
Major changes include:
1) Providers are responsible for registering an object and, as part of
registration, returning the object with the correct provider-specific
model name set as provider_resource_id.
2) To help with looking up models under their different names, a new
ModelLookup class is created.



Tested all inference providers including together, fireworks, vllm,
ollama, meta reference and bedrock
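
The registration flow described above can be sketched roughly as follows. This is a hedged illustration, not the actual llama-stack code: the class and method names (ModelLookup aside, which the commit message names) and the Model shape are assumptions made for the example.

```python
# Sketch of the registration flow from the commit message: a provider
# resolves the user-facing model id to its provider-specific name and
# returns the object with provider_resource_id filled in.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Model:
    # Hypothetical minimal model object for illustration.
    identifier: str
    provider_resource_id: Optional[str] = None


class ModelLookup:
    """Maps user-facing model ids/aliases to provider-specific names."""

    def __init__(self, alias_to_provider_id: Dict[str, str]):
        self.alias_to_provider_id = alias_to_provider_id

    def get_provider_model_id(self, identifier: str) -> Optional[str]:
        return self.alias_to_provider_id.get(identifier)


class InferenceProvider:
    # Hypothetical provider base; real providers (together, fireworks,
    # vllm, ollama, ...) would each supply their own alias table.
    def __init__(self, lookup: ModelLookup):
        self.lookup = lookup

    def register_model(self, model: Model) -> Model:
        provider_id = self.lookup.get_provider_model_id(model.identifier)
        if provider_id is None:
            raise ValueError(f"Model {model.identifier!r} not supported")
        # The provider returns the object with the provider-specific name.
        model.provider_resource_id = provider_id
        return model
```

A provider registered with an alias table would then resolve, e.g., a generic `Llama-3.1-8B` id to whatever name its backend expects.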
2024-11-12 20:02:00 -08:00
__init__.py Use inference APIs for executing Llama Guard (#121) 2024-09-28 15:40:06 -07:00
model_registry.py Inference to use provider resource id to register and validate (#428) 2024-11-12 20:02:00 -08:00
openai_compat.py Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) 2024-11-05 16:22:33 -08:00
prompt_adapter.py Inference to use provider resource id to register and validate (#428) 2024-11-12 20:02:00 -08:00