docs: Add OpenAI, Anthropic, Gemini to inference API providers table (#1622)
# What does this PR do?

Forgot to update this page as well as part of https://github.com/meta-llama/llama-stack/pull/1617.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
This commit is contained in:
parent a062723d03
commit b906bad238

1 changed file with 4 additions and 0 deletions
@@ -61,6 +61,10 @@ A number of "adapters" are available for some popular Inference and Vector Store
 | Groq | Hosted |
 | SambaNova | Hosted |
 | PyTorch ExecuTorch | On-device iOS, Android |
+| OpenAI | Hosted |
+| Anthropic | Hosted |
+| Gemini | Hosted |

 **Vector IO API**

 | **Provider** | **Environments** |
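As a quick illustration (not part of this diff): once a distribution is configured with one of the hosted providers added above, inference goes through the same Llama Stack client API regardless of which provider backs the model. A minimal sketch using the `llama-stack-client` Python SDK, assuming a server running on the default local port; the `openai/gpt-4o` model identifier is illustrative and depends on what the configured provider actually registers:

```python
from llama_stack_client import LlamaStackClient

# Assumes a Llama Stack server is running locally on the default port.
client = LlamaStackClient(base_url="http://localhost:8321")

# List the models registered by the configured providers.
for model in client.models.list():
    print(model.identifier)

# Run a chat completion against one of them. The model identifier below
# is a placeholder; use one printed by the loop above.
response = client.inference.chat_completion(
    model_id="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```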