Groq has never supported raw completions anyhow, so this makes it easy to switch it over to LiteLLM. The whole test suite passes. I also updated all the openai-compat providers so they work with API keys passed from headers via `provider_data`.

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
  pytest -s -v tests/client-sdk/inference/test_text_inference.py \
  --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers; no regressions.
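To show what "API keys passed from headers" means in practice, here is a minimal client-side sketch, assuming `provider_data` is sent as JSON in an `X-LlamaStack-Provider-Data` header and that the Groq key field is named `groq_api_key` (both are assumptions for illustration, not confirmed by this PR):

```python
# Hypothetical sketch: pass a per-request Groq API key via provider_data.
# The header name "X-LlamaStack-Provider-Data", the "groq_api_key" field,
# and the endpoint URL are illustrative assumptions.
import json
import os

import requests

provider_data = {"groq_api_key": os.environ["GROQ_API_KEY"]}

resp = requests.post(
    "http://localhost:5000/inference/chat_completion",  # hypothetical endpoint
    headers={"X-LlamaStack-Provider-Data": json.dumps(provider_data)},
    json={
        "model": "groq/llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```

The provider's entry-point module is a thin wrapper: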
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from llama_stack.apis.inference import Inference

from .config import GroqConfig


async def get_adapter_impl(config: GroqConfig, _deps) -> Inference:
    # import dynamically so the import is used only when it is needed
    from .groq import GroqInferenceAdapter

    adapter = GroqInferenceAdapter(config)
    return adapter
```
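`get_adapter_impl` is what the stack calls to wire up the provider. Inside the adapter itself, the header-based keys would then take precedence over any statically configured key. A minimal, self-contained sketch of that lookup order, where `get_request_provider_data()` and the class names are illustrative stand-ins rather than code from this PR:

```python
# Hypothetical sketch of per-request API key resolution inside an adapter.
# get_request_provider_data() and the "groq_api_key" field are illustrative
# assumptions, not code from this PR.
from typing import Optional


def get_request_provider_data() -> Optional[dict]:
    # Placeholder: in the real stack this would return the provider_data
    # parsed from the current request's headers, if any was sent.
    return None


class GroqConfigSketch:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key  # static key from the run config, if any


class GroqAdapterSketch:
    def __init__(self, config: GroqConfigSketch):
        self.config = config

    def get_api_key(self) -> str:
        # Prefer a per-request key supplied through provider_data headers,
        # then fall back to the key baked into the provider config.
        provider_data = get_request_provider_data()
        if provider_data and provider_data.get("groq_api_key"):
            return provider_data["groq_api_key"]
        if self.config.api_key:
            return self.config.api_key
        raise ValueError("no Groq API key in provider_data or config")
```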