mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-12-03 18:00:36 +00:00
# What does this PR do?

Add/enable the Databricks inference adapter. The Databricks inference adapter was broken; closes #3486.

- remove deprecated completion / chat_completion endpoints
- enable dynamic model listing w/o refresh; listing is not async
- use SecretStr instead of str for the token
- backward-incompatible change: for consistency with the Databricks docs, env `DATABRICKS_URL` -> `DATABRICKS_HOST` and `DATABRICKS_API_TOKEN` -> `DATABRICKS_TOKEN`
- Databricks URLs are custom per user/org, so add special recorder handling for Databricks URLs
- add integration test `--setup databricks`
- enable chat completions tests
- enable embeddings tests
- disable n > 1 tests
- disable embeddings base64 tests
- disable embeddings dimensions tests

Note: reasoning models, e.g. gpt-oss, fail because Databricks has a custom, incompatible response format.

## Test Plan

CI and

```
./scripts/integration-tests.sh --stack-config server:ci-tests --setup databricks --subdirs inference --pattern openai
```

Note: Databricks needs to be manually added to the ci-tests distro for replay testing.
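The motivation for the SecretStr change can be illustrated with a minimal, dependency-free stand-in (in the adapter itself this is pydantic's `SecretStr`; the class and field names below are assumptions for illustration, not the adapter's actual code):

```python
import os
from dataclasses import dataclass, field


@dataclass
class Secret:
    """Minimal stand-in for pydantic's SecretStr: masks the value in repr/str."""

    _value: str = field(repr=False)

    def __repr__(self) -> str:
        return "Secret('**********')"

    __str__ = __repr__

    def get_secret_value(self) -> str:
        return self._value


@dataclass
class DatabricksConfig:
    # Per-workspace URL, e.g. https://<workspace>.cloud.databricks.com
    url: str
    api_token: Secret


def load_config() -> DatabricksConfig:
    # New env var names introduced by this PR: DATABRICKS_HOST / DATABRICKS_TOKEN
    return DatabricksConfig(
        url=os.environ["DATABRICKS_HOST"],
        api_token=Secret(os.environ["DATABRICKS_TOKEN"]),
    )


if __name__ == "__main__":
    os.environ.setdefault("DATABRICKS_HOST", "https://example.cloud.databricks.com")
    os.environ.setdefault("DATABRICKS_TOKEN", "dapi-example-token")
    cfg = load_config()
    print(cfg)  # the token never appears in the repr
    print(cfg.api_token.get_secret_value() == os.environ["DATABRICKS_TOKEN"])
```

The point of the wrapper type is that config objects routinely end up in logs and error messages; a plain `str` token leaks there, while a `SecretStr`-style type must be unwrapped explicitly via `get_secret_value()`.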
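Synchronous dynamic model listing against an OpenAI-compatible endpoint can be sketched as below. The endpoint path on the per-workspace Databricks host and the response shape (the OpenAI `/v1/models` convention of `{"data": [{"id": ...}, ...]}`) are assumptions for illustration, not the adapter's actual implementation:

```python
import json
import urllib.request


def parse_model_ids(payload: dict) -> list[str]:
    # OpenAI-compatible /v1/models responses look like {"data": [{"id": ...}, ...]}
    return [m["id"] for m in payload.get("data", [])]


def list_models(host: str, token: str) -> list[str]:
    # Synchronous listing (the PR notes listing is not async).
    # The path below is a hypothetical OpenAI-compatible endpoint on the
    # per-workspace Databricks host configured via DATABRICKS_HOST.
    req = urllib.request.Request(
        f"{host}/serving-endpoints/v1/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))


if __name__ == "__main__":
    sample = {"data": [{"id": "model-a"}, {"id": "model-b"}]}
    print(parse_model_ids(sample))  # ['model-a', 'model-b']
```

Because each workspace URL is unique per user/org, recorded test traffic cannot match on a fixed hostname, which is why the PR adds special recorder handling for Databricks URLs.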