* fix(pattern_matching_router.py): update model name using correct function
* fix(langfuse.py): metadata deepcopy can cause unhandled error (#6563)
Co-authored-by: seva <seva@inita.com>
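A minimal sketch of the guard this fix implies (the helper name is hypothetical; the real patch lives in langfuse.py): caller-supplied metadata can hold non-picklable values, so `copy.deepcopy` must not be allowed to raise.
```python
import copy

def safe_copy_metadata(metadata: dict) -> dict:
    # Hypothetical sketch: user metadata may contain non-picklable objects
    # (clients, locks), so deepcopy can raise. Fall back to a shallow copy
    # rather than failing the logging callback.
    try:
        return copy.deepcopy(metadata)
    except Exception:
        return dict(metadata)
```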
* fix(stream_chunk_builder_utils.py): correctly set prompt tokens + log correct streaming usage
Closes https://github.com/BerriAI/litellm/issues/6488
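A hedged sketch of the idea behind this fix (helper name is illustrative, not the actual stream_chunk_builder_utils.py code): streamed chunks only carry completion deltas, so prompt tokens have to be counted from the input messages rather than from the chunks.
```python
import litellm

def rebuild_usage(model: str, messages: list, chunks: list) -> dict:
    # Completion text is stitched together from the streamed deltas.
    completion_text = "".join(
        chunk.choices[0].delta.content or ""
        for chunk in chunks
        if chunk.choices
    )
    # Prompt tokens come from the original messages, not the chunks.
    return {
        "prompt_tokens": litellm.token_counter(model=model, messages=messages),
        "completion_tokens": litellm.token_counter(model=model, text=completion_text),
    }
```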
* build(deps): bump cookie and express in /docs/my-website (#6566)
Bumps [cookie](https://github.com/jshttp/cookie) and [express](https://github.com/expressjs/express). These dependencies needed to be updated together.
Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1)
Updates `express` from 4.20.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.20.0...4.21.1)
---
updated-dependencies:
- dependency-name: cookie
dependency-type: indirect
- dependency-name: express
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(virtual_keys.md): update Dockerfile reference (#6554)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* (proxy fix) - call connect on prisma client when running setup (#6534)
* critical fix - call connect on prisma client when running setup
* fix test_proxy_server_prisma_setup
* fix test_proxy_server_prisma_setup
* Add 3.5 haiku (#6588)
* feat: add claude-3-5-haiku-20241022 entries
* feat: add claude-3-5-haiku-20241022 and vertex_ai/claude-3-5-haiku@20241022 models
* add missing entries, remove vision
* remove image token costs
* Litellm perf improvements 3 (#6573)
* perf: move writing key to cache, to background task
* perf(litellm_pre_call_utils.py): add otel tracing for pre-call utils
adds 200ms on calls with pgdb connected
* fix(litellm_pre_call_utils.py): rename call_type to actual call used
* perf(proxy_server.py): remove db logic from _get_config_from_file
was causing db calls to occur on every llm request, if team_id was set on key
* fix(auth_checks.py): add check for reducing db calls if user/team id does not exist in db
reduces latency/call by ~100ms
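A rough sketch of the caching pattern this commit describes (names and the Prisma model are illustrative, not the auth_checks.py internals): remember which ids have already been resolved, including misses, so repeated requests skip the DB round trip.
```python
# Illustrative sketch only - not the actual auth_checks.py code.
_known_ids: dict[str, bool] = {}

async def id_exists_in_db(id: str, prisma_client) -> bool:
    if id in _known_ids:
        return _known_ids[id]  # cache hit: no DB call
    row = await prisma_client.db.litellm_usertable.find_unique(
        where={"user_id": id}
    )
    _known_ids[id] = row is not None  # remember misses too
    return _known_ids[id]
```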
* fix(proxy_server.py): minor fix on existing_settings not including alerting
* fix(exception_mapping_utils.py): map databricks exception string
* fix(auth_checks.py): fix auth check logic
* test: correctly mark flaky test
* fix(utils.py): handle auth token error for tokenizers.from_pretrained
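A sketch of the fallback this commit suggests (assuming the Hugging Face `tokenizers` package; the fallback behavior is illustrative):
```python
from tokenizers import Tokenizer

def load_hf_tokenizer(name: str):
    try:
        return Tokenizer.from_pretrained(name)
    except Exception:
        # Gated/private repos raise without an auth token; let the caller
        # fall back to the default tokenizer instead of surfacing the error.
        return None
```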
* build: fix map
* build: fix map
* build: fix json for model map
* fix ImageObject conversion (#6584)
* (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546)
* unit test test_huggingface_text_completion_logprobs
* fix return TextCompletionHandler convert_chat_to_text_completion
* fix hf rest api
* fix test_huggingface_text_completion_logprobs
* fix linting errors
* fix importLiteLLMResponseObjectHandler
* fix test for LiteLLMResponseObjectHandler
* fix test text completion
* fix allow using 15 seconds for premium license check
* testing fix bedrock deprecated cohere.command-text-v14
* (feat) add `Predicted Outputs` for OpenAI (#6594)
* bump openai to openai==1.54.0
* add 'prediction' param
* testing fix bedrock deprecated cohere.command-text-v14
* test test_openai_prediction_param.py
* test_openai_prediction_param_with_caching
* doc Predicted Outputs
* doc Predicted Output
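Predicted Outputs usage, roughly as documented by OpenAI (the model choice here is illustrative): pass the expected output as a `prediction` so the provider can skip regenerating unchanged tokens.
```python
import litellm

code = "def add(a, b):\n    return a + b\n"

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": f"Rename `add` to `sum_two` in:\n{code}"}
    ],
    # Most of the output is expected to match the original code.
    prediction={"type": "content", "content": code},
)
```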
* (fix) Vertex Improve Performance when using `image_url` (#6593)
* fix transformation vertex
* test test_process_gemini_image
* test_image_completion_request
* testing fix - bedrock has deprecated cohere.command-text-v14
* fix vertex pdf
* bump: version 1.51.5 → 1.52.0
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577)
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check
* fix(lowest_tpm_rpm_v2.py): return headers in correct format
* test: update test
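The bug class behind this fix, sketched (illustrative, not the router's actual code): under parallel requests, a read-then-write limit check lets several callers pass at once, so the check-and-increment must be atomic.
```python
import asyncio

class RpmLimiter:
    def __init__(self, rpm_limit: int):
        self.rpm_limit = rpm_limit
        self._count = 0
        self._lock = asyncio.Lock()

    async def acquire(self) -> bool:
        # Check and increment under one lock; doing the read and the
        # write as separate steps is exactly the race the fix removes.
        async with self._lock:
            if self._count >= self.rpm_limit:
                return False
            self._count += 1
            return True
```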
* test: remove eol model
* fix(proxy_server.py): fix db config loading logic
* fix(proxy_server.py): fix order of config / db updates, to ensure fields not overwritten
* test: skip test if required env var is missing
* test: fix test
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
* test: mark flaky test
* test: handle anthropic api instability
* test(test_proxy_utils.py): add testing for db config update logic
* Update setuptools in docker and fastapi to latest version, in order to upgrade starlette version (#6597)
* Update setuptools in docker and fastapi to latest version, in order to upgrade starlette version
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
* fix(langfuse.py): fix linting errors
* fix: fix linting errors
* fix: fix casting error
* fix: fix typing error
* fix: add more tests
* fix(utils.py): fix return_processed_chunk_logic
* Revert "Update setuptools in docker and fastapi to latest verison, in order t…" (#6615)
This reverts commit 1a7f7bdfb7.
* docs fix clarify team_id on team based logging
* doc fix team based logging with langfuse
* fix flake8 checks
* test: bump sleep time
* refactor: replace claude-instant-1.2 with haiku in testing
* fix(proxy_server.py): move to using sl payload in track_cost_callback
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fallback to kwargs(response_cost) if given
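The fallback order that commit describes, sketched (the surrounding helper is hypothetical): use the cost already present in kwargs when given, and only recompute from the response otherwise.
```python
import litellm

def resolve_response_cost(kwargs: dict, response) -> float:
    cost = kwargs.get("response_cost")
    if cost is not None:
        return cost  # trust the pre-computed cost when given
    # Illustrative fallback: recompute from the completion response.
    return litellm.completion_cost(completion_response=response)
```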
* test: remove claude-instant-1 from tests
* test: fix claude test
* build: remove lint.yml
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Vsevolod Karvetskiy <56288164+karvetskiy@users.noreply.github.com>
Co-authored-by: seva <seva@inita.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt P Suorra <Jacobh2@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
#### What this tests ####
# This tests setting provider specific configs across providers
# There are 2 types of tests - changing config dynamically or by setting class variables

import os
import sys
import traceback

import pytest

sys.path.insert(
    0, os.path.abspath("../..")
)  # Adds the parent directory to the system path
from unittest.mock import AsyncMock, MagicMock, patch

import litellm
from litellm import RateLimitError, completion

# Huggingface - Expensive to deploy models and keep them running. Maybe we can try doing this via baseten??
# def hf_test_completion_tgi():
#     litellm.HuggingfaceConfig(max_new_tokens=200)
#     litellm.set_verbose=True
#     try:
#         # OVERRIDE WITH DYNAMIC MAX TOKENS
#         response_1 = litellm.completion(
#             model="huggingface/mistralai/Mistral-7B-Instruct-v0.1",
#             messages=[{ "content": "Hello, how are you?","role": "user"}],
#             api_base="https://n9ox93a8sv5ihsow.us-east-1.aws.endpoints.huggingface.cloud",
#             max_tokens=10
#         )
#         # Add any assertions here to check the response
#         print(response_1)
#         response_1_text = response_1.choices[0].message.content

#         # USE CONFIG TOKENS
#         response_2 = litellm.completion(
#             model="huggingface/mistralai/Mistral-7B-Instruct-v0.1",
#             messages=[{ "content": "Hello, how are you?","role": "user"}],
#             api_base="https://n9ox93a8sv5ihsow.us-east-1.aws.endpoints.huggingface.cloud",
#         )
#         # Add any assertions here to check the response
#         print(response_2)
#         response_2_text = response_2.choices[0].message.content

#         assert len(response_2_text) > len(response_1_text)
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")

# hf_test_completion_tgi()

# Anthropic


def claude_test_completion():
    litellm.AnthropicConfig(max_tokens_to_sample=200)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="claude-3-haiku-20240307",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            max_tokens=10,
        )
        # Add any assertions here to check the response
        print(response_1)
        response_1_text = response_1.choices[0].message.content

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="claude-3-haiku-20240307",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
        )
        # Add any assertions here to check the response
        print(response_2)
        response_2_text = response_2.choices[0].message.content

        assert len(response_2_text) > len(response_1_text)

        try:
            response_3 = litellm.completion(
                model="claude-3-5-haiku-20241022",
                messages=[{"content": "Hello, how are you?", "role": "user"}],
                n=2,
            )

        except Exception as e:
            print(e)
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# claude_test_completion()

# Replicate


def replicate_test_completion():
    litellm.ReplicateConfig(max_new_tokens=200)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            max_tokens=10,
        )
        # Add any assertions here to check the response
        print(response_1)
        response_1_text = response_1.choices[0].message.content

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
        )
        # Add any assertions here to check the response
        print(response_2)
        response_2_text = response_2.choices[0].message.content

        assert len(response_2_text) > len(response_1_text)
        try:
            response_3 = litellm.completion(
                model="meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
                messages=[{"content": "Hello, how are you?", "role": "user"}],
                n=2,
            )
        except Exception:
            pass
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# replicate_test_completion()

# Cohere


def cohere_test_completion():
    # litellm.CohereConfig(max_tokens=200)
    litellm.set_verbose = True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="command-nightly",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            max_tokens=10,
        )
        response_1_text = response_1.choices[0].message.content

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="command-nightly",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
        )
        response_2_text = response_2.choices[0].message.content

        assert len(response_2_text) > len(response_1_text)

        response_3 = litellm.completion(
            model="command-nightly",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            n=2,
        )
        assert len(response_3.choices) > 1
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# cohere_test_completion()

# AI21


def ai21_test_completion():
    litellm.AI21Config(maxTokens=10)
    litellm.set_verbose = True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="j2-mid",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="j2-mid",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)

        response_3 = litellm.completion(
            model="j2-light",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            n=2,
        )
        assert len(response_3.choices) > 1
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# ai21_test_completion()

# TogetherAI


def togetherai_test_completion():
    litellm.TogetherAIConfig(max_tokens=10)
    litellm.set_verbose = True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="together_ai/togethercomputer/llama-2-70b-chat",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="together_ai/togethercomputer/llama-2-70b-chat",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)

        try:
            response_3 = litellm.completion(
                model="together_ai/togethercomputer/llama-2-70b-chat",
                messages=[{"content": "Hello, how are you?", "role": "user"}],
                n=2,
            )
            pytest.fail("Error not raised when n=2 passed to provider")
        except Exception:
            pass
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# togetherai_test_completion()

# Palm


# palm_test_completion()

# NLP Cloud


def nlp_cloud_test_completion():
    litellm.NLPCloudConfig(max_length=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="dolphin",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="dolphin",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)

        try:
            response_3 = litellm.completion(
                model="dolphin",
                messages=[{"content": "Hello, how are you?", "role": "user"}],
                n=2,
            )
            pytest.fail("Error not raised when n=2 passed to provider")
        except Exception:
            pass
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# nlp_cloud_test_completion()

# AlephAlpha


def aleph_alpha_test_completion():
    litellm.AlephAlphaConfig(maximum_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="luminous-base",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="luminous-base",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)

        response_3 = litellm.completion(
            model="luminous-base",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            n=2,
        )

        assert len(response_3.choices) > 1
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# aleph_alpha_test_completion()

# Petals - calls are too slow, will cause circle ci to fail due to delay. Test locally.
# def petals_completion():
#     litellm.PetalsConfig(max_new_tokens=10)
#     # litellm.set_verbose=True
#     try:
#         # OVERRIDE WITH DYNAMIC MAX TOKENS
#         response_1 = litellm.completion(
#             model="petals/petals-team/StableBeluga2",
#             messages=[{ "content": "Hello, how are you? Be as verbose as possible","role": "user"}],
#             api_base="https://chat.petals.dev/api/v1/generate",
#             max_tokens=100
#         )
#         response_1_text = response_1.choices[0].message.content
#         print(f"response_1_text: {response_1_text}")

#         # USE CONFIG TOKENS
#         response_2 = litellm.completion(
#             model="petals/petals-team/StableBeluga2",
#             api_base="https://chat.petals.dev/api/v1/generate",
#             messages=[{ "content": "Hello, how are you? Be as verbose as possible","role": "user"}],
#         )
#         response_2_text = response_2.choices[0].message.content
#         print(f"response_2_text: {response_2_text}")

#         assert len(response_2_text) < len(response_1_text)
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")

# petals_completion()

# VertexAI
# We don't have vertex ai configured for circle ci yet -- need to figure this out.
# def vertex_ai_test_completion():
#     litellm.VertexAIConfig(max_output_tokens=10)
#     # litellm.set_verbose=True
#     try:
#         # OVERRIDE WITH DYNAMIC MAX TOKENS
#         response_1 = litellm.completion(
#             model="chat-bison",
#             messages=[{ "content": "Hello, how are you? Be as verbose as possible","role": "user"}],
#             max_tokens=100
#         )
#         response_1_text = response_1.choices[0].message.content
#         print(f"response_1_text: {response_1_text}")

#         # USE CONFIG TOKENS
#         response_2 = litellm.completion(
#             model="chat-bison",
#             messages=[{ "content": "Hello, how are you? Be as verbose as possible","role": "user"}],
#         )
#         response_2_text = response_2.choices[0].message.content
#         print(f"response_2_text: {response_2_text}")

#         assert len(response_2_text) < len(response_1_text)
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")

# vertex_ai_test_completion()

# Sagemaker


@pytest.mark.skip(reason="AWS Suspended Account")
def sagemaker_test_completion():
    litellm.SagemakerConfig(max_new_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="sagemaker/berri-benchmarking-Llama-2-70b-chat-hf-4",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="sagemaker/berri-benchmarking-Llama-2-70b-chat-hf-4",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# sagemaker_test_completion()


def test_sagemaker_default_region():
    """
    If no regions are specified in config or in environment, the default region is us-west-2
    """
    mock_response = MagicMock()

    def return_val():
        return {
            "generated_text": "This is a mock response from SageMaker.",
            "id": "cmpl-mockid",
            "object": "text_completion",
            "created": 1629800000,
            "model": "sagemaker/jumpstart-dft-hf-textgeneration1-mp-20240815-185614",
            "choices": [
                {
                    "text": "This is a mock response from SageMaker.",
                    "index": 0,
                    "logprobs": None,
                    "finish_reason": "length",
                }
            ],
            "usage": {"prompt_tokens": 1, "completion_tokens": 8, "total_tokens": 9},
        }

    mock_response.json = return_val
    mock_response.status_code = 200

    with patch(
        "litellm.llms.custom_httpx.http_handler.HTTPHandler.post",
        return_value=mock_response,
    ) as mock_post:
        response = litellm.completion(
            model="sagemaker/mock-endpoint",
            messages=[{"content": "Hello, world!", "role": "user"}],
        )
        mock_post.assert_called_once()
        _, kwargs = mock_post.call_args
        args_to_sagemaker = kwargs["json"]
        print("Arguments passed to sagemaker=", args_to_sagemaker)
        print("url=", kwargs["url"])

        assert (
            kwargs["url"]
            == "https://runtime.sagemaker.us-west-2.amazonaws.com/endpoints/mock-endpoint/invocations"
        )


# test_sagemaker_default_region()


def test_sagemaker_environment_region():
    """
    If a region is specified in the environment, use that region instead of us-west-2
    """
    expected_region = "us-east-1"
    os.environ["AWS_REGION_NAME"] = expected_region
    mock_response = MagicMock()

    def return_val():
        return {
            "generated_text": "This is a mock response from SageMaker.",
            "id": "cmpl-mockid",
            "object": "text_completion",
            "created": 1629800000,
            "model": "sagemaker/jumpstart-dft-hf-textgeneration1-mp-20240815-185614",
            "choices": [
                {
                    "text": "This is a mock response from SageMaker.",
                    "index": 0,
                    "logprobs": None,
                    "finish_reason": "length",
                }
            ],
            "usage": {"prompt_tokens": 1, "completion_tokens": 8, "total_tokens": 9},
        }

    mock_response.json = return_val
    mock_response.status_code = 200

    with patch(
        "litellm.llms.custom_httpx.http_handler.HTTPHandler.post",
        return_value=mock_response,
    ) as mock_post:
        response = litellm.completion(
            model="sagemaker/mock-endpoint",
            messages=[{"content": "Hello, world!", "role": "user"}],
        )
        mock_post.assert_called_once()
        _, kwargs = mock_post.call_args
        args_to_sagemaker = kwargs["json"]
        print("Arguments passed to sagemaker=", args_to_sagemaker)
        print("url=", kwargs["url"])

        assert (
            kwargs["url"]
            == f"https://runtime.sagemaker.{expected_region}.amazonaws.com/endpoints/mock-endpoint/invocations"
        )

    del os.environ["AWS_REGION_NAME"]  # cleanup


# test_sagemaker_environment_region()


def test_sagemaker_config_region():
    """
    If a region is specified as part of the optional parameters of the completion, including as
    part of the config file, then use that region instead of us-west-2
    """
    expected_region = "us-east-1"
    mock_response = MagicMock()

    def return_val():
        return {
            "generated_text": "This is a mock response from SageMaker.",
            "id": "cmpl-mockid",
            "object": "text_completion",
            "created": 1629800000,
            "model": "sagemaker/jumpstart-dft-hf-textgeneration1-mp-20240815-185614",
            "choices": [
                {
                    "text": "This is a mock response from SageMaker.",
                    "index": 0,
                    "logprobs": None,
                    "finish_reason": "length",
                }
            ],
            "usage": {"prompt_tokens": 1, "completion_tokens": 8, "total_tokens": 9},
        }

    mock_response.json = return_val
    mock_response.status_code = 200

    with patch(
        "litellm.llms.custom_httpx.http_handler.HTTPHandler.post",
        return_value=mock_response,
    ) as mock_post:

        response = litellm.completion(
            model="sagemaker/mock-endpoint",
            messages=[{"content": "Hello, world!", "role": "user"}],
            aws_region_name=expected_region,
        )

        mock_post.assert_called_once()
        _, kwargs = mock_post.call_args
        args_to_sagemaker = kwargs["json"]
        print("Arguments passed to sagemaker=", args_to_sagemaker)
        print("url=", kwargs["url"])

        assert (
            kwargs["url"]
            == f"https://runtime.sagemaker.{expected_region}.amazonaws.com/endpoints/mock-endpoint/invocations"
        )


# test_sagemaker_config_region()


# test_sagemaker_config_and_environment_region()


# Bedrock


def bedrock_test_completion():
    litellm.AmazonCohereConfig(max_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="bedrock/cohere.command-text-v14",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="bedrock/cohere.command-text-v14",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)
    except RateLimitError:
        pass
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# bedrock_test_completion()


# OpenAI Chat Completion
def openai_test_completion():
    litellm.OpenAIConfig(max_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="gpt-3.5-turbo",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="gpt-3.5-turbo",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# openai_test_completion()


# OpenAI Text Completion
def openai_text_completion_test():
    litellm.OpenAITextCompletionConfig(max_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="gpt-3.5-turbo-instruct",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="gpt-3.5-turbo-instruct",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)

        response_3 = litellm.completion(
            model="gpt-3.5-turbo-instruct",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            n=2,
        )
        assert len(response_3.choices) > 1
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# openai_text_completion_test()


# Azure OpenAI
def azure_openai_test_completion():
    litellm.AzureOpenAIConfig(max_tokens=10)
    # litellm.set_verbose=True
    try:
        # OVERRIDE WITH DYNAMIC MAX TOKENS
        response_1 = litellm.completion(
            model="azure/chatgpt-v-2",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
            max_tokens=100,
        )
        response_1_text = response_1.choices[0].message.content
        print(f"response_1_text: {response_1_text}")

        # USE CONFIG TOKENS
        response_2 = litellm.completion(
            model="azure/chatgpt-v-2",
            messages=[
                {
                    "content": "Hello, how are you? Be as verbose as possible",
                    "role": "user",
                }
            ],
        )
        response_2_text = response_2.choices[0].message.content
        print(f"response_2_text: {response_2_text}")

        assert len(response_2_text) < len(response_1_text)
    except Exception as e:
        pytest.fail(f"Error occurred: {e}")


# azure_openai_test_completion()