forked from phoenix-oss/llama-stack-mirror
align with CompletionResponseStreamChunk.delta as str (instead of TextDelta) (#900)
# What does this PR do?

Fix a type mismatch in `/v1/inference/completion`.

## Test Plan

`llama stack run ./llama_stack/templates/nvidia/run.yaml`

`LLAMA_STACK_BASE_URL="http://localhost:8321" pytest -v tests/client-sdk/inference/test_inference.py`

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
This commit is contained in:
parent 9f709387e2
commit 1a5c17a92f
1 changed file with 1 addition and 1 deletion
```diff
@@ -632,7 +632,7 @@ async def convert_openai_completion_stream(
     async for chunk in stream:
         choice = chunk.choices[0]
         yield CompletionResponseStreamChunk(
-            delta=TextDelta(text=choice.text),
+            delta=choice.text,
             stop_reason=_convert_openai_finish_reason(choice.finish_reason),
             logprobs=_convert_openai_completion_logprobs(choice.logprobs),
         )
```