feat: add oci genai service as chat inference provider (#3876)
# What does this PR do?
Adds OCI GenAI PaaS models as a chat inference provider for the OpenAI-compatible chat completion endpoints.

## Test Plan
In an OCI tenancy with access to GenAI PaaS, perform the following
steps:

1. Ensure you have IAM policies in place to use the service (see the docs
included in this PR); a sketch of one such policy follows this step.
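A minimal policy statement might look like the following; the group and
compartment names are placeholders, and the linked docs remain the authority
on which resource types your tenancy actually needs:
```
Allow group GenAIUsers to use generative-ai-family in compartment my-genai-compartment
```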
2. For local development, [set up the OCI
CLI](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm)
and configure it with your region, tenancy, and auth as described
[here](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliconfigure.htm).
A sketch of the resulting config file follows this step.
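For reference, a rough sketch of an `~/.oci/config` with a named `CHICAGO`
profile, assuming API-key (config file) auth; every OCID, the fingerprint,
and the key path are placeholders:
```ini
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-ashburn-1

[CHICAGO]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-chicago-1
```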
3. Once configured, go through the llama-stack setup and run llama-stack
(which uses config-file-based auth) like:
```bash
OCI_AUTH_TYPE=config_file \
OCI_CLI_PROFILE=CHICAGO \
OCI_REGION=us-chicago-1 \
OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..aaaaaaaa5...5a \
llama stack run oci
```
4. After the server is running, hit the `models` endpoint to list models:
```bash
curl http://localhost:8321/v1/models | jq
...
    {
      "identifier": "meta.llama-4-scout-17b-16e-instruct",
      "provider_resource_id": "ocid1.generativeaimodel.oc1.us-chicago-1.am...q",
      "provider_id": "oci",
      "type": "model",
      "metadata": {
        "display_name": "meta.llama-4-scout-17b-16e-instruct",
        "capabilities": [
          "CHAT"
        ],
        "oci_model_id": "ocid1.generativeaimodel.oc1.us-chicago-1.a...q"
      },
      "model_type": "llm"
    },
    ...
```
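Equivalently, a quick sketch using the `openai` Python client pointed at the
local server; the base URL assumes the default port above, and since
llama-stack may not check the API key here, a dummy value is passed:
```python
from openai import OpenAI

# Point the client at the local Llama Stack server's OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Print the identifiers of the models registered by the OCI provider.
for model in client.models.list():
    print(model.id)
```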
5. Use the `display_name` field as the model name in a
`/chat/completions` request:
```bash
# Streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": true,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'

# Non-streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": false,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'
```
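The same streaming request via the `openai` Python client, as a rough sketch;
the model name and server URL come from the steps above, and the dummy API
key assumption is as before:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Streaming chat completion against the OCI-backed model.
stream = client.chat.completions.create(
    model="meta.llama-4-scout-17b-16e-instruct",
    stream=True,
    temperature=0.9,
    messages=[
        {"role": "system", "content": "You are a funny comedian. You can be crass."},
        {"role": "user", "content": "Tell me a funny joke about programming."},
    ],
)

# Print deltas as they arrive; some chunks may carry no content.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```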
6. Try out other models from the `/models` endpoint.