# What does this PR do?
The OpenAPI spec currently advertises both the streaming and non-streaming response schemas under a single `text/event-stream` media type, so clients cannot tell the two apart from the `Content-Type` header. We need to change
```yaml
/v1/inference/chat-completion:
  post:
    responses:
      '200':
        description: >-
          If stream=False, returns a ChatCompletionResponse with the full completion.
          If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
        content:
          text/event-stream:
            schema:
              oneOf:
                - $ref: '#/components/schemas/ChatCompletionResponse'
                - $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
```
into
```yaml
/v1/inference/chat-completion:
  post:
    responses:
      '200':
        description: >-
          If stream=False, returns a ChatCompletionResponse with the full completion.
          If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
        content:
          text/event-stream:
            schema:
              $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
          application/json:
            schema:
              $ref: '#/components/schemas/ChatCompletionResponse'
```
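With the split, a client can dispatch on the response `Content-Type` header instead of probing the payload shape. Below is a minimal sketch of that dispatch, assuming a locally running server at `http://localhost:5000`; the endpoint path comes from the spec above, while the model id and request body fields are placeholders for illustration:

```python
import json
import requests

def chat_completion(stream: bool):
    # Hypothetical base URL and request body, for illustration only.
    resp = requests.post(
        "http://localhost:5000/v1/inference/chat-completion",
        json={
            "model_id": "example-model",  # placeholder model id
            "messages": [{"role": "user", "content": "Hello"}],
            "stream": stream,
        },
        stream=stream,
    )
    resp.raise_for_status()

    content_type = resp.headers.get("Content-Type", "")
    if content_type.startswith("text/event-stream"):
        # Streaming: each SSE "data:" line carries a ChatCompletionResponseStreamChunk.
        for line in resp.iter_lines():
            if line.startswith(b"data: "):
                yield json.loads(line[len(b"data: "):])
    else:
        # Non-streaming: the body is a single ChatCompletionResponse (application/json).
        yield resp.json()
```

Before this change, both schemas were advertised under `text/event-stream`, so generated SDK clients could not key the parse path off the media type like this.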
## Test Plan
**Python**
- tested in SDK sync:
https://github.com/meta-llama/llama-stack-client-python/pull/108
**Node**
- tested with https://gist.github.com/yanxi0830/b782f4b91e21dcccdfef8898ce55157e (SDK update follow-up)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.