Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-04 20:14:13 +00:00
The Databricks inference adapter was broken and would not start (see #3486). This change:

- removes the deprecated completion / chat_completion endpoints
- enables dynamic model listing without refresh; listing is not async
- uses SecretStr instead of str for the token
- backward-incompatible change: for consistency with the Databricks docs, the env vars are renamed DATABRICKS_URL -> DATABRICKS_HOST and DATABRICKS_API_TOKEN -> DATABRICKS_TOKEN
- Databricks URLs are custom per user/org, so special recorder handling is added for Databricks URLs
- adds an integration test --setup databricks
- enables chat completions tests
- enables embeddings tests
- disables n > 1 tests
- disables embeddings base64 tests
- disables embeddings dimensions tests

Note: reasoning models, e.g. gpt-oss, fail because Databricks has a custom, incompatible response format.

Test with:

./scripts/integration-tests.sh --stack-config server:ci-tests --setup databricks --subdirs inference --pattern openai

Note: Databricks needs to be manually added to the ci-tests distro for replay testing.
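The "special recorder handling" is needed because every Databricks workspace has its own hostname, so recorded requests keyed on the full URL would never match during replay against a different workspace. A minimal sketch of that idea, using only the standard library — the function name, the placeholder host, and the hostname suffix check are all illustrative assumptions, not the adapter's actual code:

```python
from urllib.parse import urlsplit, urlunsplit

# Assumed placeholder; the real recorder may normalize differently.
PLACEHOLDER_HOST = "databricks.example.invalid"

def normalize_databricks_url(url: str) -> str:
    """Replace a per-workspace Databricks hostname with a fixed
    placeholder so recorded request keys are stable across workspaces."""
    parts = urlsplit(url)
    if parts.hostname and parts.hostname.endswith(".cloud.databricks.com"):
        parts = parts._replace(netloc=PLACEHOLDER_HOST)
    return urlunsplit(parts)

# Per-workspace URL collapses to the placeholder; other URLs pass through.
print(normalize_databricks_url(
    "https://dbc-1234abcd-5678.cloud.databricks.com/serving-endpoints/chat/completions"
))
print(normalize_databricks_url("https://api.openai.com/v1/chat/completions"))
```

With a normalization step like this, a recording made against one workspace can be replayed in CI regardless of which workspace the test config points at.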
Provider docs in this directory:

- index.md
- inline_meta-reference.md
- inline_sentence-transformers.md
- remote_anthropic.md
- remote_azure.md
- remote_bedrock.md
- remote_cerebras.md
- remote_databricks.md
- remote_fireworks.md
- remote_gemini.md
- remote_groq.md
- remote_hf_endpoint.md
- remote_hf_serverless.md
- remote_llama-openai-compat.md
- remote_nvidia.md
- remote_ollama.md
- remote_openai.md
- remote_passthrough.md
- remote_runpod.md
- remote_sambanova-openai-compat.md
- remote_sambanova.md
- remote_tgi.md
- remote_together.md
- remote_vertexai.md
- remote_vllm.md
- remote_watsonx.md