Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-03 19:57:35 +00:00)
feat(openai_movement)!: Change URL structures to kill /openai/v1 (part 2) (#3605)
Parent commit: 3a09f00cdb
This commit: 56b625d18a
10 changed files with 3 additions and 2834 deletions
```diff
@@ -7,7 +7,7 @@ sidebar_position: 1

 ### Server path

-Llama Stack exposes an OpenAI-compatible API endpoint at `/v1/openai/v1`. So, for a Llama Stack server running locally on port `8321`, the full url to the OpenAI-compatible API endpoint is `http://localhost:8321/v1/openai/v1`.
+Llama Stack exposes OpenAI-compatible API endpoints at `/v1`. So, for a Llama Stack server running locally on port `8321`, the full url to the OpenAI-compatible API endpoint is `http://localhost:8321/v1`.

 ### Clients
```
````diff
@@ -25,12 +25,12 @@ client = LlamaStackClient(base_url="http://localhost:8321")

 #### OpenAI Client

-When using an OpenAI client, set the `base_url` to the `/v1/openai/v1` path on your Llama Stack server.
+When using an OpenAI client, set the `base_url` to the `/v1` path on your Llama Stack server.

 ```python
 from openai import OpenAI

-client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")
+client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")
 ```

 Regardless of the client you choose, the following code examples should all work the same.
````
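For existing client code, this change amounts to rewriting the base URL. The following sketch shows one way to normalize old-style base URLs to the new form; the `migrate_base_url` helper is hypothetical and not part of Llama Stack, written here only to illustrate the path change.

```python
from urllib.parse import urlsplit, urlunsplit

# Old and new OpenAI-compatible path suffixes from this commit.
OLD_SUFFIX = "/v1/openai/v1"
NEW_SUFFIX = "/v1"


def migrate_base_url(base_url: str) -> str:
    """Rewrite an old-style `/v1/openai/v1` base URL to the new `/v1` form.

    URLs that already use the new form are returned unchanged.
    """
    scheme, netloc, path, query, frag = urlsplit(base_url)
    path = path.rstrip("/")
    if path.endswith(OLD_SUFFIX):
        path = path[: -len(OLD_SUFFIX)] + NEW_SUFFIX
    return urlunsplit((scheme, netloc, path, query, frag))


print(migrate_base_url("http://localhost:8321/v1/openai/v1"))
# → http://localhost:8321/v1
```

The resulting URL can then be passed as `base_url` when constructing an OpenAI client, as in the diff above.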
docs/static/llama-stack-spec.html (vendored): 1593 lines changed. File diff suppressed because it is too large.
docs/static/llama-stack-spec.yaml (vendored): 1188 lines changed. File diff suppressed because it is too large.