LiteLLM Minor Fixes & Improvements (10/24/2024) (#6421)
* fix(utils.py): support passing dynamic api base to validate_environment. Returns True if just api base is required and api base is passed
* fix(litellm_pre_call_utils.py): feature flag sending client headers to llm api. Fixes https://github.com/BerriAI/litellm/issues/6410
* fix(anthropic/chat/transformation.py): return correct error message
* fix(http_handler.py): add error response text in places where we expect it
* fix(factory.py): handle base case of no non-system messages to bedrock. Fixes https://github.com/BerriAI/litellm/issues/6411
* feat(cohere/embed): support Cohere image embeddings. Closes https://github.com/BerriAI/litellm/issues/6413
* fix(__init__.py): fix linting error
* docs(supported_embedding.md): add image embedding example to docs
* feat(cohere/embed): use Cohere embedding returned usage for cost calc
* build(model_prices_and_context_window.json): add embed-english-v3.0 details (image cost + 'supports_image_input' flag)
* fix(cohere_transformation.py): fix linting error
* test(test_proxy_server.py): cleanup test
* test: cleanup test
* fix: fix linting errors
This commit is contained in: parent 38708a355a, commit c03e5da41f

23 changed files with 417 additions and 150 deletions
@@ -84,6 +84,60 @@ print(query_result[:5])

</TabItem>
</Tabs>

## Image Embeddings

For models that support image embeddings, you can pass a base64-encoded image string to the `input` param.

<Tabs>
<TabItem value="sdk" label="SDK">

```python
from litellm import embedding
import os

# set your api key
os.environ["COHERE_API_KEY"] = ""

response = embedding(model="cohere/embed-english-v3.0", input=["<base64 encoded image>"])
```
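The `<base64 encoded image>` placeholder above stands for a plain base64 string. A minimal sketch of producing one (the `to_base64` helper and the `photo.png` path are illustrative, not part of the LiteLLM API):

```python
import base64

def to_base64(data: bytes) -> str:
    # Encode raw image bytes as a base64 string for the `input` param
    return base64.b64encode(data).decode("utf-8")

# encoded = to_base64(open("photo.png", "rb").read())  # hypothetical local file
# response = embedding(model="cohere/embed-english-v3.0", input=[encoded])
```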

</TabItem>
<TabItem value="proxy" label="PROXY">

1. Setup config.yaml

```yaml
model_list:
  - model_name: cohere-embed
    litellm_params:
      model: cohere/embed-english-v3.0
      api_key: os.environ/COHERE_API_KEY
```

2. Start proxy

```bash
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

3. Test it!

```bash
curl -X POST 'http://0.0.0.0:4000/v1/embeddings' \
-H 'Authorization: Bearer sk-54d77cd67b9febbb' \
-H 'Content-Type: application/json' \
-d '{
  "model": "cohere/embed-english-v3.0",
  "input": ["<base64 encoded image>"]
}'
```
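The same request can be made with Python's standard library; a sketch, assuming the proxy from step 2 is running locally (the URL and the bearer key are the placeholders from the curl example):

```python
import json
from urllib import request

payload = {
    "model": "cohere/embed-english-v3.0",
    "input": ["<base64 encoded image>"],
}
req = request.Request(
    "http://0.0.0.0:4000/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-54d77cd67b9febbb",  # placeholder proxy key
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = json.load(request.urlopen(req))  # requires a running proxy
```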

</TabItem>
</Tabs>

## Input Params for `litellm.embedding()`

@@ -814,6 +814,7 @@ general_settings:

| pass_through_endpoints | List[Dict[str, Any]] | Define the pass through endpoints. [Docs](./pass_through) |
| enable_oauth2_proxy_auth | boolean | (Enterprise Feature) If true, enables oauth2.0 authentication |
| forward_openai_org_id | boolean | If true, forwards the OpenAI Organization ID to the backend LLM call (if it's OpenAI). |
| forward_client_headers_to_llm_api | boolean | If true, forwards the client headers (any `x-` headers) to the backend LLM call |
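For example, enabling the header-forwarding flag from the table above might look like this in `config.yaml` (a sketch; only `forward_client_headers_to_llm_api` is taken from the table):

```yaml
general_settings:
  forward_client_headers_to_llm_api: true
```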

### router_settings - Reference