llama-stack/llama_stack/providers/impls/vllm
Ashwin Bharambe 2089427d60
Make all methods async def again; add completion() for meta-reference (#270)
PR #201 made several changes while trying to get the stream=False branches of the inference and agents APIs working. As part of this, it made a change that was slightly gratuitous: namely, turning chat_completion() and its brethren into plain "def" methods instead of "async def".

The rationale was that this allowed the user (within llama-stack) of this to use it as:

```
async for chunk in api.chat_completion(params)
```

However, this caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```
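The difference between the two calling conventions comes down to what invoking the method returns. A minimal self-contained sketch (the `Api` class and its `chat_completion` signature here are hypothetical stand-ins, not the actual llama-stack definitions): with `async def`, the call itself returns a coroutine that must be awaited to obtain the async iterator, hence the extra `await`.

```python
import asyncio
from typing import AsyncIterator

class Api:
    # Hypothetical stand-in for the inference API, for illustration only.
    async def chat_completion(self, params: dict) -> AsyncIterator[str]:
        async def stream() -> AsyncIterator[str]:
            for token in ["hello", " ", "world"]:
                yield token
        # Because this method is `async def`, calling it yields a coroutine;
        # the caller must `await` it to get the async iterator back.
        return stream()

async def main() -> list[str]:
    api = Api()
    chunks = []
    # Note the `await` before iterating -- the post-revert calling convention.
    async for chunk in await api.chat_completion({"stream": True}):
        chunks.append(chunk)
    return chunks

print(asyncio.run(main()))
```

Had `chat_completion` been a plain `def` returning the iterator directly, the inner `await` would be dropped, which is exactly the shorthand PR #201 was after.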

Bonus: Added a completion() implementation for the meta-reference provider. Technically this should have been a separate PR :)
2024-10-18 20:50:59 -07:00
__init__.py Fix incorrect completion() signature for Databricks provider (#236) 2024-10-11 08:47:57 -07:00
config.py Inline vLLM inference provider (#181) 2024-10-05 23:34:16 -07:00
vllm.py Make all methods async def again; add completion() for meta-reference (#270) 2024-10-18 20:50:59 -07:00