Mirror of https://github.com/meta-llama/llama-stack.git
Synced 2025-12-24 05:28:04 +00:00

feat: pre-commit results

This commit is contained in:
parent a562d81825
commit f65a260cdd

3 changed files with 104 additions and 0 deletions
```diff
@@ -18,6 +18,7 @@ This section contains documentation for all available providers for the **infere
 - [remote::hf::endpoint](remote_hf_endpoint.md)
 - [remote::hf::serverless](remote_hf_serverless.md)
 - [remote::llama-openai-compat](remote_llama-openai-compat.md)
+- [remote::llamacpp](remote_llamacpp.md)
 - [remote::nvidia](remote_nvidia.md)
 - [remote::ollama](remote_ollama.md)
 - [remote::openai](remote_openai.md)
```
docs/source/providers/inference/remote_llamacpp.md (new file, 17 additions)
```diff
@@ -0,0 +1,17 @@
+# remote::llamacpp
+
+## Configuration
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `api_key` | `str \| None` | No | | The llama.cpp server API key (optional for local servers) |
+| `openai_compat_api_base` | `<class 'str'>` | No | http://localhost:8080/v1 | The URL for the llama.cpp server with OpenAI-compatible API |
+
+## Sample Configuration
+
+```yaml
+openai_compat_api_base: ${env.LLAMACPP_URL:http://localhost:8080}/v1
+api_key: ${env.LLAMACPP_API_KEY:}
+
+```
```
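The sample configuration above relies on `${env.NAME:default}` placeholders, which are resolved from environment variables at load time, falling back to the default after the colon when the variable is unset. As a rough illustration of that syntax (a standalone sketch — `resolve_env_placeholders` is a hypothetical helper, not the actual llama-stack resolver, whose behavior may differ):

```python
import os
import re

# Matches ${env.NAME:default}; NAME is an env-var identifier, default is
# everything up to the closing brace (so defaults may contain ':' or '/').
_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*):([^}]*)\}")

def resolve_env_placeholders(value: str) -> str:
    """Replace each ${env.NAME:default} with os.environ[NAME], or the default."""
    def _sub(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        return os.environ.get(name, default)
    return _ENV_PATTERN.sub(_sub, value)

if __name__ == "__main__":
    os.environ.pop("LLAMACPP_URL", None)  # ensure the fallback path is taken
    print(resolve_env_placeholders("${env.LLAMACPP_URL:http://localhost:8080}/v1"))
    # with LLAMACPP_URL unset, this yields the default base URL plus "/v1"
```

Note that `${env.LLAMACPP_API_KEY:}` has an empty default, so with no environment variable set the resolved `api_key` is an empty string, consistent with the field being optional for local servers.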