llama-stack-mirror/llama_stack/providers/remote/inference
Aidan Do 485476c29a
Fix Groq invalid self.config reference (#719)
# What does this PR do?

Contributes towards: #432

RE: https://github.com/meta-llama/llama-stack/pull/609

I missed this one while refactoring. Fixes:

```python
Traceback (most recent call last):
  File "/Users/aidand/dev/llama-stack/llama_stack/distribution/server/server.py", line 191, in endpoint
    return await maybe_await(value)
  File "/Users/aidand/dev/llama-stack/llama_stack/distribution/server/server.py", line 155, in maybe_await
    return await value
  File "/Users/aidand/dev/llama-stack/llama_stack/providers/utils/telemetry/trace_protocol.py", line 101, in async_wrapper
    result = await method(self, *args, **kwargs)
  File "/Users/aidand/dev/llama-stack/llama_stack/distribution/routers/routers.py", line 156, in chat_completion
    return await provider.chat_completion(**params)
  File "/Users/aidand/dev/llama-stack/llama_stack/providers/utils/telemetry/trace_protocol.py", line 101, in async_wrapper
    result = await method(self, *args, **kwargs)
  File "/Users/aidand/dev/llama-stack/llama_stack/providers/remote/inference/groq/groq.py", line 127, in chat_completion
    response = self._get_client().chat.completions.create(**request)
  File "/Users/aidand/dev/llama-stack/llama_stack/providers/remote/inference/groq/groq.py", line 143, in _get_client
    return Groq(api_key=self.config.api_key)
AttributeError: 'GroqInferenceAdapter' object has no attribute 'config'. Did you mean: '_config'?
```
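
The fix implied by the traceback is a one-line change: the adapter stores its config on the private `_config` attribute, so the stale `self.config` lookup in `_get_client` has to follow suit. Roughly (a sketch of the corrected method, not the full adapter):

```python
from groq import Groq  # official Groq Python SDK

class GroqInferenceAdapter:
    # ...rest of the adapter elided...

    def _get_client(self) -> Groq:
        # `self.config` no longer exists after the #609 refactor;
        # the config now lives on the private `_config` attribute.
        return Groq(api_key=self._config.api_key)
```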


## Test Plan

Environment:

```shell
export GROQ_API_KEY=<api-key>

# Fetch the build.yaml and run.yaml files
wget https://raw.githubusercontent.com/aidando73/llama-stack/9165502582cd7cb178bc1dcf89955b45768ab6c1/build.yaml
wget https://raw.githubusercontent.com/aidando73/llama-stack/9165502582cd7cb178bc1dcf89955b45768ab6c1/run.yaml

# Create the conda environment if it doesn't exist yet
conda create --prefix ./envs python=3.10
conda activate ./envs

# Build
pip install -e . && llama stack build --config ./build.yaml --image-type conda

# Activate built environment
conda activate llamastack-groq
```
<details>
<summary>Manual</summary>

```bash
llama stack run ./run.yaml --port 5001
```

Via this Jupyter notebook:
9165502582/hello.ipynb
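
If reproducing without the notebook, a minimal smoke test against the running server could look like the sketch below. This assumes the `llama_stack_client` Python package and uses a hypothetical model id; substitute whichever Groq model your run.yaml registers:

```python
from llama_stack_client import LlamaStackClient

# Point the client at the server started above with `llama stack run`.
client = LlamaStackClient(base_url="http://localhost:5001")

# "Llama3.1-8B-Instruct" is a placeholder; use a model registered in run.yaml.
response = client.inference.chat_completion(
    model_id="Llama3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.completion_message.content)
```

Before the fix, this call raised the `AttributeError` above; after it, the request reaches Groq.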
</details>


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [x] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-03 15:47:10 -08:00
bedrock [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
cerebras Redact sensitive information from configs when printing, etc. 2025-01-02 13:54:02 -08:00
databricks [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
fireworks Redact sensitive information from configs when printing, etc. 2025-01-02 13:54:02 -08:00
groq Fix Groq invalid self.config reference (#719) 2025-01-03 15:47:10 -08:00
nvidia Redact sensitive information from configs when printing, etc. 2025-01-02 13:54:02 -08:00
ollama Add JSON structured outputs to Ollama Provider (#680) 2025-01-02 09:05:51 -08:00
sample [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
tgi Redact sensitive information from configs when printing, etc. 2025-01-02 13:54:02 -08:00
together Redact sensitive information from configs when printing, etc. 2025-01-02 13:54:02 -08:00
vllm Fix assert message and call to completion_request_to_prompt in remote:vllm (#709) 2025-01-03 13:44:49 -08:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00