llama-stack-mirror/llama_stack/providers/remote/inference/gemini
Charlie Doern 41431d8bdd refactor: convert providers to be installed via package
Currently, providers have a `pip_package` list. Rather than build our own form of Python dependency management, we should use a `pyproject.toml` file in each provider, declaring its dependencies in a more trackable manner.
Each provider can then be installed using the `module` field already present in the ProviderSpec, which points to the directory the provider lives in.
We can then simply `uv pip install` that directory instead of installing the dependencies one by one.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-09-22 09:23:50 -04:00
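A minimal sketch of what such a provider-level `pyproject.toml` might look like; the package name, version, dependency pins, and build backend below are illustrative assumptions, not the actual contents of the file added in this commit:

```toml
# Hypothetical pyproject.toml for the Gemini remote inference provider.
# All names and versions here are assumptions for illustration only.
[project]
name = "llama-stack-provider-gemini"        # assumed package name
version = "0.1.0"
description = "Gemini remote inference provider for Llama Stack"
requires-python = ">=3.10"
dependencies = [
    "litellm",   # assumed: config.py references the litellm model registry
    "openai",    # assumed: gemini.py uses openai-python for openai-compat functions
]

[build-system]
requires = ["hatchling"]                    # assumed build backend
build-backend = "hatchling.build"
```

With a file like this in place, the whole provider directory can be installed in one step, e.g. `uv pip install llama_stack/providers/remote/inference/gemini`, and its dependencies resolve through standard Python packaging instead of a hand-maintained `pip_package` list.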
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: remove duplicate OpenAI and Gemini data validators (#3513) | 2025-09-22 13:53:17 +02:00 |
| `config.py` | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| `gemini.py` | chore: update the gemini inference impl to use openai-python for openai-compat functions (#3351) | 2025-09-06 12:22:20 -07:00 |
| `models.py` | feat: Flash-Lite 2.0 and 2.5 models added to Gemini inference provider (#3058) | 2025-08-08 13:48:15 -07:00 |
| `pyproject.toml` | refactor: convert providers to be installed via package | 2025-09-22 09:23:50 -04:00 |