feat: add oci genai service as chat inference provider (#3876)

# What does this PR do?
Adds the Oracle Cloud Infrastructure (OCI) Generative AI service as a remote chat inference provider, exposing its PaaS models through the OpenAI-compatible chat completion endpoints.

## Test Plan
In an OCI tenancy with access to GenAI PaaS, perform the following
steps:

1. Ensure you have IAM policies in place to use the service (see the docs
included in this PR)
2. For local development, [set up the OCI
CLI](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm)
and [configure it](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliconfigure.htm)
with your region, tenancy, and auth (a sample profile is sketched below)
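A typical profile in `~/.oci/config` looks like the following (all values here are illustrative placeholders; see the OCI docs linked above):
```ini
[CHICAGO]
user=ocid1.user.oc1..<unique_id>
fingerprint=<api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_id>
region=us-chicago-1
```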
3. Once configured, go through the llama-stack setup and run llama-stack
(using config-file-based auth) like:
```bash
OCI_AUTH_TYPE=config_file \
OCI_CLI_PROFILE=CHICAGO \
OCI_REGION=us-chicago-1 \
OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..aaaaaaaa5...5a \
llama stack run oci
```
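When running on an OCI compute instance, instance-principal auth (the default in the provider config added by this PR) can be used instead, which drops the config-file variables; a sketch, assuming the instance's dynamic group has policies granting GenAI access:
```bash
# Sketch: instance-principal auth (no OCI config file needed on the instance)
OCI_AUTH_TYPE=instance_principal \
OCI_REGION=us-chicago-1 \
OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..aaaaaaaa5...5a \
llama stack run oci
```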
4. After the server is running, hit the `models` endpoint to list models:
```bash
curl http://localhost:8321/v1/models | jq
...
    {
      "identifier": "meta.llama-4-scout-17b-16e-instruct",
      "provider_resource_id": "ocid1.generativeaimodel.oc1.us-chicago-1.am...q",
      "provider_id": "oci",
      "type": "model",
      "metadata": {
        "display_name": "meta.llama-4-scout-17b-16e-instruct",
        "capabilities": [
          "CHAT"
        ],
        "oci_model_id": "ocid1.generativeaimodel.oc1.us-chicago-1.a...q"
      },
      "model_type": "llm"
    },
   ...
```
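To pull just the usable model names out of that listing, a jq filter works; a sketch, assuming the response wraps the models in a top-level `data` array:
```bash
# Extract only the model identifiers from the /v1/models response
curl -s http://localhost:8321/v1/models | jq -r '.data[].identifier'
```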
5. Use the `display_name` field as the `model` in a
`/chat/completions` request:
```bash
# Streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": true,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'

# Non-streaming result
curl -X POST http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta.llama-4-scout-17b-16e-instruct",
    "stream": false,
    "temperature": 0.9,
    "messages": [
      {
        "role": "system",
        "content": "You are a funny comedian. You can be crass."
      },
      {
        "role": "user",
        "content": "Tell me a funny joke about programming."
      }
    ]
  }'
```
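Because these are standard OpenAI-compatible endpoints, an OpenAI client pointed at the server also works; a minimal sketch with the official Python client, assuming no API key is enforced locally:
```python
from openai import OpenAI

# Point the client at the local llama-stack server's OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.chat.completions.create(
    model="meta.llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "Tell me a funny joke about programming."}],
)
print(resp.choices[0].message.content)
```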
6. Try out other models from the `/models` endpoint.
New file (+75 lines):
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
import os
from typing import Any

from pydantic import BaseModel, Field

from llama_stack.providers.utils.inference.model_registry import RemoteInferenceProviderConfig
from llama_stack.schema_utils import json_schema_type


class OCIProviderDataValidator(BaseModel):
    oci_auth_type: str = Field(
        description="OCI authentication type (must be one of: instance_principal, config_file)",
    )
    oci_region: str = Field(
        description="OCI region (e.g., us-ashburn-1)",
    )
    oci_compartment_id: str = Field(
        description="OCI compartment ID for the Generative AI service",
    )
    oci_config_file_path: str | None = Field(
        default="~/.oci/config",
        description="OCI config file path (required if oci_auth_type is config_file)",
    )
    oci_config_profile: str | None = Field(
        default="DEFAULT",
        description="OCI config profile (required if oci_auth_type is config_file)",
    )


@json_schema_type
class OCIConfig(RemoteInferenceProviderConfig):
    oci_auth_type: str = Field(
        description="OCI authentication type (must be one of: instance_principal, config_file)",
        default_factory=lambda: os.getenv("OCI_AUTH_TYPE", "instance_principal"),
    )
    oci_region: str = Field(
        default_factory=lambda: os.getenv("OCI_REGION", "us-ashburn-1"),
        description="OCI region (e.g., us-ashburn-1)",
    )
    oci_compartment_id: str = Field(
        default_factory=lambda: os.getenv("OCI_COMPARTMENT_OCID", ""),
        description="OCI compartment ID for the Generative AI service",
    )
    oci_config_file_path: str = Field(
        default_factory=lambda: os.getenv("OCI_CONFIG_FILE_PATH", "~/.oci/config"),
        description="OCI config file path (required if oci_auth_type is config_file)",
    )
    oci_config_profile: str = Field(
        default_factory=lambda: os.getenv("OCI_CLI_PROFILE", "DEFAULT"),
        description="OCI config profile (required if oci_auth_type is config_file)",
    )

    @classmethod
    def sample_run_config(
        cls,
        oci_auth_type: str = "${env.OCI_AUTH_TYPE:=instance_principal}",
        oci_config_file_path: str = "${env.OCI_CONFIG_FILE_PATH:=~/.oci/config}",
        oci_config_profile: str = "${env.OCI_CLI_PROFILE:=DEFAULT}",
        oci_region: str = "${env.OCI_REGION:=us-ashburn-1}",
        oci_compartment_id: str = "${env.OCI_COMPARTMENT_OCID:=}",
        **kwargs,
    ) -> dict[str, Any]:
        return {
            "oci_auth_type": oci_auth_type,
            "oci_config_file_path": oci_config_file_path,
            "oci_config_profile": oci_config_profile,
            "oci_region": oci_region,
            "oci_compartment_id": oci_compartment_id,
        }
```
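For reference, the `default_factory` lambdas above resolve environment variables at instantiation time, so the provider can be configured entirely from the environment; a small sketch (the import path is assumed for illustration, not confirmed by this diff):
```python
import os

# Hypothetical import path for the OCIConfig class shown above.
from llama_stack.providers.remote.inference.oci.config import OCIConfig

os.environ["OCI_AUTH_TYPE"] = "config_file"
os.environ["OCI_CLI_PROFILE"] = "CHICAGO"
os.environ["OCI_REGION"] = "us-chicago-1"
os.environ["OCI_COMPARTMENT_OCID"] = "ocid1.compartment.oc1..example"

cfg = OCIConfig()  # each default_factory reads the env vars set above
print(cfg.oci_auth_type, cfg.oci_config_profile, cfg.oci_region)
# -> config_file CHICAGO us-chicago-1
```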