llama-stack/llama_stack
Matthew Farrellee 706b4ca651
feat: support nvidia hosted vision models (llama 3.2 11b/90b) (#1278)
# What does this PR do?

Support the NVIDIA-hosted Llama 3.2 11B/90B vision models. They are not served from the common https://integrate.api.nvidia.com/v1 endpoint; each model is hosted at its own individual URL.
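The per-model routing described above could be sketched roughly as follows: fall back to the common endpoint unless a model has its own URL. This is a minimal illustration, not the PR's actual implementation, and the override URLs shown are hypothetical placeholders, not real NVIDIA endpoints.

```python
# Default endpoint shared by most NVIDIA-hosted models.
DEFAULT_BASE_URL = "https://integrate.api.nvidia.com/v1"

# Hypothetical overrides for models that are not served from the
# common endpoint (placeholder URLs, for illustration only).
MODEL_BASE_URLS = {
    "meta/llama-3.2-11b-vision-instruct": "https://example.invalid/llama-3.2-11b-vision/v1",
    "meta/llama-3.2-90b-vision-instruct": "https://example.invalid/llama-3.2-90b-vision/v1",
}


def base_url_for(model_id: str) -> str:
    """Return the base URL to use for a given model id,
    falling back to the shared endpoint."""
    return MODEL_BASE_URLS.get(model_id, DEFAULT_BASE_URL)
```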

## Test Plan

`LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -s -v
tests/client-sdk/inference/test_vision_inference.py
--inference-model=meta/llama-3.2-11b-vision-instruct -k image`
2025-03-18 11:54:10 -07:00
| Path | Last commit | Date |
| --- | --- | --- |
| apis | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| cli | refactor: simplify command execution and remove PTY handling (#1641) | 2025-03-17 15:03:14 -07:00 |
| distribution | feat: Created Playground Containerfile and Image Workflow (#1256) | 2025-03-18 09:26:49 -07:00 |
| models/llama | refactor: move all datetime.now() calls to UTC (#1589) | 2025-03-13 15:34:53 -07:00 |
| providers | feat: support nvidia hosted vision models (llama 3.2 11b/90b) (#1278) | 2025-03-18 11:54:10 -07:00 |
| strong_typing | Ensure that deprecations for fields follow through to OpenAPI | 2025-02-19 13:54:04 -08:00 |
| templates | fix: Add the option to not verify SSL at remote-vllm provider (#1585) | 2025-03-18 09:33:35 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | feat: add support for logging config in the run.yaml (#1408) | 2025-03-14 12:36:25 -07:00 |
| schema_utils.py | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |