Dennis Kennetz 209a78b618
feat: add oci genai service as chat inference provider (#3876)
# What does this PR do?
Adds OCI GenAI PaaS models as a chat inference provider for the OpenAI-compatible chat completion endpoints.

## Test Plan
In an OCI tenancy with access to GenAI PaaS, perform the following
steps:

1. Ensure you have IAM policies in place to use the service (see the docs
included in this PR).
2. For local development, [set up the OCI
CLI](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm)
and configure it with your region, tenancy, and auth as described
[here](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliconfigure.htm).
3. Once configured, go through the llama-stack setup and run llama-stack
with config-file auth (a sketch of what that auth resolves to follows the
command), like:
```bash
OCI_AUTH_TYPE=config_file \
OCI_CLI_PROFILE=CHICAGO \
OCI_REGION=us-chicago-1 \
OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..aaaaaaaa5...5a \
llama stack run oci
```
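
With `OCI_AUTH_TYPE=config_file`, the provider authenticates from the same
profile the CLI wrote in step 2. A minimal sketch of what that resolves to,
using the `oci` Python SDK (assumes the SDK is installed and a `CHICAGO`
profile exists in `~/.oci/config`):
```python
import oci

# Load the profile the CLI configured; config-file auth reads these
# same keys (user, tenancy, fingerprint, key_file, region).
config = oci.config.from_file(profile_name="CHICAGO")
oci.config.validate_config(config)  # raises if required keys are missing
print(config["region"])  # e.g. us-chicago-1
```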
4. Once the server is running, hit the `models` endpoint to list the
available models (a Python equivalent follows the curl example):
```bash
curl http://localhost:8321/v1/models | jq
...
{
  "identifier": "meta.llama-4-scout-17b-16e-instruct",
  "provider_resource_id": "ocid1.generativeaimodel.oc1.us-chicago-1.am...q",
  "provider_id": "oci",
  "type": "model",
  "metadata": {
    "display_name": "meta.llama-4-scout-17b-16e-instruct",
    "capabilities": [
      "CHAT"
    ],
    "oci_model_id": "ocid1.generativeaimodel.oc1.us-chicago-1.a...q"
  },
  "model_type": "llm"
},
...
```
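
The same listing can be done programmatically. A quick sketch using the
`openai` Python client pointed at the server's OpenAI-compatible base URL
(the API key value is a placeholder for local runs without auth):
```python
from openai import OpenAI

# Point the standard OpenAI client at the llama-stack server.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

for model in client.models.list():
    print(model.id)  # e.g. meta.llama-4-scout-17b-16e-instruct
```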
5. Pass the `display_name` value as the `model` in a `/chat/completions`
request (a Python equivalent follows the curl examples):
```bash
# Streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": true,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'

# Non-streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": false,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'
```
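
The same streaming request from Python, assuming the `openai` client
pointed at the local server as above:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Streaming chat completion against the OCI-backed model.
stream = client.chat.completions.create(
    model="meta.llama-4-scout-17b-16e-instruct",
    stream=True,
    temperature=0.9,
    messages=[
        {"role": "system", "content": "You are a funny comedian. You can be crass."},
        {"role": "user", "content": "Tell me a funny joke about programming."},
    ],
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```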
6. Try out other models from the `/models` endpoint.