llama-stack-mirror/llama_stack/providers/remote/inference
Aidan Do 39c34dd25f
[#432] Groq Provider tool call tweaks (#811)
# What does this PR do?

Follow-up to @ashwinb's comments in
https://github.com/meta-llama/llama-stack/pull/630

- [x] Contributes to issue (#432)


## Test Plan
<details>
<summary>Environment</summary>

```shell
export GROQ_API_KEY=<api-key>

# Create the conda environment if it doesn't already exist
conda create --name llamastack-groq python=3.10
conda activate llamastack-groq

wget https://raw.githubusercontent.com/aidando73/llama-stack/9165502582cd7cb178bc1dcf89955b45768ab6c1/build.yaml
wget https://raw.githubusercontent.com/meta-llama/llama-stack/918172c7fa92522c9ebc586bdb4f386b1d9ea224/run.yaml

# Build
pip install -e . && llama stack build --config ./build.yaml --image-type conda

# Activate built environment
conda activate llamastack-groq

# Test deps
pip install pytest pytest_html pytest_asyncio
```
</details>



<details>
<summary>Unit tests</summary>

```shell
# Setup
conda activate llamastack-groq
pytest llama_stack/providers/tests/inference/groq/test_groq_utils.py -vv -k groq -s

# Result
llama_stack/providers/tests/inference/groq/test_groq_utils.py .......................

========================================= 23 passed, 11 warnings in 0.06s =========================================
```
</details>
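
For context, the unit tests above exercise the Groq request/response conversion helpers under test in `test_groq_utils.py`. Below is a minimal sketch of the kind of check they perform; the `GroqToolCall` stand-in type and the `convert_tool_call` helper are hypothetical names for illustration, not the provider's actual code:

```python
import json
from dataclasses import dataclass


@dataclass
class GroqToolCall:
    """Stand-in for the Groq SDK's tool-call object (assumption)."""

    id: str
    name: str
    arguments: str  # JSON-encoded string, per the OpenAI-style wire format


def convert_tool_call(tc: GroqToolCall) -> dict:
    """Map a Groq-style tool call onto a plain llama-stack-like dict."""
    return {
        "call_id": tc.id,
        "tool_name": tc.name,
        "arguments": json.loads(tc.arguments),
    }


def test_convert_tool_call():
    tc = GroqToolCall(
        id="call_0",
        name="get_weather",
        arguments='{"location": "San Francisco, CA"}',
    )
    converted = convert_tool_call(tc)
    assert converted["tool_name"] == "get_weather"
    assert converted["arguments"] == {"location": "San Francisco, CA"}
```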

<details>
<summary>Integration tests</summary>

```shell
# Tests
pytest llama_stack/providers/tests/inference/test_text_inference.py -k groq -s

# Results
___________________________ TestInference.test_chat_completion_with_tool_calling[-groq] ___________________________
llama_stack/providers/tests/inference/test_text_inference.py:403: in test_chat_completion_with_tool_calling
    assert len(message.tool_calls) > 0
E   assert 0 > 0
E    +  where 0 = len([])
E    +    where [] = CompletionMessage(role='assistant', content='<function=get_weather>{"location": "San Francisco, CA"}', stop_reason=<StopReason.end_of_turn: 'end_of_turn'>, tool_calls=[]).tool_calls
============================================= short test summary info =============================================
FAILED llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[-groq] - assert 0 > 0
======================== 1 failed, 3 passed, 5 skipped, 99 deselected, 7 warnings in 2.13s ========================
```

(The one failure is expected from Llama 3.2 3B: as the assertion output above shows, the model emitted the tool call as raw text in `content` while `tool_calls` stayed empty - re:
https://github.com/meta-llama/llama-stack/pull/630#discussion_r1914056503)
</details>
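
To make the failure mode concrete: the assistant message carried `<function=get_weather>{"location": "San Francisco, CA"}` as plain `content`, so the test found no structured tool calls. A hypothetical parser (for illustration only, not llama-stack code) shows what a structured call would have to recover from that raw string:

```python
import json
import re

# The raw assistant content from the failing test above:
RAW_CALL = '<function=get_weather>{"location": "San Francisco, CA"}'


def parse_raw_tool_call(text: str):
    """Extract (tool_name, arguments) from '<function=name>{json}' text."""
    m = re.match(r"<function=(\w+)>(\{.*\})", text)
    if m is None:
        return None
    return m.group(1), json.loads(m.group(2))


print(parse_raw_tool_call(RAW_CALL))
# ('get_weather', {'location': 'San Francisco, CA'})
```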



## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-01-29 12:02:12 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| bedrock | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| cerebras | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| databricks | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| fireworks | Fix fireworks client sdk chat completion with images (#840) | 2025-01-22 11:25:10 -08:00 |
| groq | [#432] Groq Provider tool call tweaks (#811) | 2025-01-29 12:02:12 -08:00 |
| nvidia | align with CompletionResponseStreamChunk.delta as str (instead of TextDelta) (#900) | 2025-01-29 09:25:50 -08:00 |
| ollama | [ez] structured output for /completion ollama & enable tests (#822) | 2025-01-21 21:10:24 -08:00 |
| runpod | Move runpod provider to the correct directory | 2025-01-23 12:25:12 -08:00 |
| sambanova | Sambanova - LlamaGuard (#886) | 2025-01-27 15:46:30 -08:00 |
| sample | [remove import \*] clean up import \*'s (#689) | 2024-12-27 15:45:44 -08:00 |
| tgi | Fix tgi adapter (#796) | 2025-01-16 17:44:12 -08:00 |
| together | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| vllm | Add vLLM raw completions API (#823) | 2025-01-22 22:58:27 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |