llama-stack-mirror/tests/client-sdk/inference
Ashwin Bharambe 928a39d17b
feat(providers): Groq now uses LiteLLM openai-compat (#1303)
Groq has never supported raw completions anyway, so switching it to LiteLLM is straightforward. Our entire test suite passes.

I also updated all the openai-compat providers so they accept API keys passed via request headers (`provider_data`).
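As a minimal sketch of the header-based credential flow: Llama Stack reads per-request provider data from the `X-LlamaStack-Provider-Data` header as a JSON object. The exact key name used below (`groq_api_key`) is an assumption for illustration.

```python
import json

def provider_data_headers(api_key: str) -> dict:
    """Build the per-request provider-data header.

    Sketch only: assumes the server parses X-LlamaStack-Provider-Data
    as JSON and looks up a provider-specific key such as "groq_api_key".
    """
    return {"X-LlamaStack-Provider-Data": json.dumps({"groq_api_key": api_key})}

headers = provider_data_headers("gsk_example")
print(headers["X-LlamaStack-Provider-Data"])
```

A client would merge these headers into each inference request instead of configuring the key server-side.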

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
   --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers. No regressions.
2025-02-27 13:16:50 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| dog.png | fix vllm base64 image inference (#815) | 2025-01-17 17:07:28 -08:00 |
| test_embedding.py | fix: remove list of list tests, no longer relevant after #1161 (#1205) | 2025-02-21 08:07:35 -08:00 |
| test_text_inference.py | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| test_vision_inference.py | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |