* fix(pattern_matching_router.py): update model name using correct function
* fix(langfuse.py): metadata deepcopy can cause unhandled error (#6563)
Co-authored-by: seva <seva@inita.com>
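The deepcopy fix above guards against metadata values that cannot be deep-copied. A minimal sketch of that pattern (illustrative only, not the exact litellm implementation):

```python
import copy

def safe_copy_metadata(metadata: dict) -> dict:
    """Copy request metadata without letting non-copyable values
    (locks, clients, coroutines, ...) raise inside the logger."""
    try:
        return copy.deepcopy(metadata)
    except Exception:
        # Fall back to a per-key copy, keeping uncopyable values by reference.
        copied = {}
        for key, value in metadata.items():
            try:
                copied[key] = copy.deepcopy(value)
            except Exception:
                copied[key] = value
        return copied
```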
* fix(stream_chunk_builder_utils.py): correctly set prompt tokens + log correct streaming usage
Closes https://github.com/BerriAI/litellm/issues/6488
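The key idea behind the streaming-usage fix is that prompt tokens must be counted from the input messages rather than the streamed chunks, which only carry completion text. A rough sketch using litellm's public token counter (the helper name here is hypothetical):

```python
import litellm

def usage_for_stream(model: str, messages: list, completion_text: str) -> dict:
    # Prompt tokens come from the *input* messages; deriving them from
    # chunks would under-count, since chunks only hold completion deltas.
    prompt_tokens = litellm.token_counter(model=model, messages=messages)
    completion_tokens = litellm.token_counter(model=model, text=completion_text)
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
    }
```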
* build(deps): bump cookie and express in /docs/my-website (#6566)
Bumps [cookie](https://github.com/jshttp/cookie) and [express](https://github.com/expressjs/express). These dependencies needed to be updated together.
Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1)
Updates `express` from 4.20.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.20.0...4.21.1)
---
updated-dependencies:
- dependency-name: cookie
dependency-type: indirect
- dependency-name: express
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(virtual_keys.md): update Dockerfile reference (#6554)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* (proxy fix) - call connect on prisma client when running setup (#6534)
* critical fix - call connect on prisma client when running setup
* fix test_proxy_server_prisma_setup
* fix test_proxy_server_prisma_setup
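A minimal sketch of the prisma fix above, assuming prisma-client-py (`setup_prisma` is a hypothetical helper name):

```python
from prisma import Prisma  # prisma-client-py

async def setup_prisma() -> Prisma:
    db = Prisma()
    # The fix: connect explicitly during setup instead of assuming a live
    # connection exists by the time the first query runs.
    await db.connect()
    return db
```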
* Add 3.5 haiku (#6588)
* feat: add claude-3-5-haiku-20241022 entries
* feat: add claude-3-5-haiku-20241022 and vertex_ai/claude-3-5-haiku@20241022 models
* add missing entries, remove vision
* remove image token costs
* Litellm perf improvements 3 (#6573)
* perf: move writing key to cache, to background task
* perf(litellm_pre_call_utils.py): add otel tracing for pre-call utils
adds 200ms on calls with pgdb connected
* fix(litellm_pre_call_utils.py): rename call_type to actual call used
* perf(proxy_server.py): remove db logic from _get_config_from_file
was causing db calls to occur on every llm request when team_id was set on the key
* fix(auth_checks.py): add check for reducing db calls if user/team id does not exist in db
reduces latency/call by ~100ms
* fix(proxy_server.py): minor fix for existing_settings not including alerting
* fix(exception_mapping_utils.py): map databricks exception string
* fix(auth_checks.py): fix auth check logic
* test: correctly mark flaky test
* fix(utils.py): handle auth token error for tokenizers.from_pretrained
* build: fix map
* build: fix map
* build: fix json for model map
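The first perf change in #6573 moves cache writes off the request path. A generic sketch of that fire-and-forget pattern, to be called from an async handler (`async_set_cache` is assumed to be the cache's async setter):

```python
import asyncio

_background_tasks: set = set()  # keep references so tasks aren't GC'd mid-flight

async def _write_key_to_cache(cache, key: str, value: dict) -> None:
    await cache.async_set_cache(key, value)  # assumed async cache setter

def schedule_cache_write(cache, key: str, value: dict) -> None:
    # Instead of awaiting the cache write inside the request handler,
    # schedule it and return immediately.
    task = asyncio.create_task(_write_key_to_cache(cache, key, value))
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)
```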
* fix ImageObject conversion (#6584)
* (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546)
* unit test test_huggingface_text_completion_logprobs
* fix return TextCompletionHandler convert_chat_to_text_completion
* fix hf rest api
* fix test_huggingface_text_completion_logprobs
* fix linting errors
* fix importLiteLLMResponseObjectHandler
* fix test for LiteLLMResponseObjectHandler
* fix test text completion
* fix: allow up to 15 seconds for the premium license check
* testing fix: bedrock has deprecated cohere.command-text-v14
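A usage sketch of the simple `litellm.text_completion` path covered by #6546 (the model choice here is illustrative):

```python
import litellm

# The "simple usage" path from the fix above: text_completion with just
# a model and a prompt should succeed without raising.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="good morning",
)
print(response.choices[0].text)
```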
* (feat) add `Predicted Outputs` for OpenAI (#6594)
* bump openai to openai==1.54.0
* add 'prediction' param
* testing fix: bedrock has deprecated cohere.command-text-v14
* test test_openai_prediction_param.py
* test_openai_prediction_param_with_caching
* doc Predicted Outputs
* doc Predicted Output
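A usage sketch for the new `prediction` param, following the OpenAI Predicted Outputs parameter shape:

```python
import litellm

# Predicted Outputs: pass the expected output so the provider can speed up
# generation when most of the response is already known (e.g. code edits).
code = "def sum(a, b):\n    return a + b"
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": f"Rename the function to 'add':\n{code}"}
    ],
    prediction={"type": "content", "content": code},
)
print(response.choices[0].message.content)
```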
* (fix) Vertex Improve Performance when using `image_url` (#6593)
* fix transformation vertex
* test test_process_gemini_image
* test_image_completion_request
* testing fix - bedrock has deprecated cohere.command-text-v14
* fix vertex pdf
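The Vertex perf fix (#6593) avoids downloading and re-encoding images the backend can fetch itself. A hedged sketch of that idea (field names follow Gemini's file_data/inline_data parts; the mime type here is a placeholder, and this is not litellm's exact transformation code):

```python
def process_image_for_gemini(image_url: str) -> dict:
    # Pass URLs the backend can read directly (gs:// or https://) as
    # file_data instead of downloading the bytes and base64-encoding
    # them on every request.
    if image_url.startswith(("gs://", "https://")):
        return {"file_data": {"file_uri": image_url, "mime_type": "image/jpeg"}}
    # Anything else (e.g. data: URLs) still goes through inline data.
    return {"inline_data": {"data": image_url, "mime_type": "image/jpeg"}}
```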
* bump: version 1.51.5 → 1.52.0
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577)
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check
* fix(lowest_tpm_rpm_v2.py): return headers in correct format
* test: update test
* test: remove eol model
* fix(proxy_server.py): fix db config loading logic
* fix(proxy_server.py): fix order of config / db updates, to ensure fields not overwritten
* test: skip test if required env var is missing
* test: fix test
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
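The parallel rate-limit fix (#6577) above addresses concurrent requests racing past the limit between read and write. A generic sketch of a parallel-safe check, using an in-process lock (litellm's router tracks usage in its cache layer instead; this only illustrates the increment-then-compare idea):

```python
import asyncio

class RpmLimiter:
    """Increment-then-compare under a lock, so concurrent requests
    can't all read the same stale counter and each slip through."""

    def __init__(self, rpm_limit: int):
        self.rpm_limit = rpm_limit
        self._count = 0
        self._lock = asyncio.Lock()

    async def try_acquire(self) -> bool:
        async with self._lock:
            if self._count >= self.rpm_limit:
                return False
            self._count += 1
            return True
```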
* test: mark flaky test
* test: handle anthropic api instability
* test(test_proxy_utils.py): add testing for db config update logic
* Update setuptools in docker and fastapi to latest version, in order to upgrade starlette version (#6597)
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
* fix(langfuse.py): fix linting errors
* fix: fix linting errors
* fix: fix casting error
* fix: fix typing error
* fix: add more tests
* fix(utils.py): fix return_processed_chunk_logic
* Revert "Update setuptools in docker and fastapi to latest verison, in order t…" (#6615)
This reverts commit 1a7f7bdfb7.
* docs: clarify team_id on team-based logging
* docs: fix team-based logging with langfuse
* fix flake8 checks
* test: bump sleep time
* refactor: replace claude-instant-1.2 with haiku in testing
* fix(proxy_server.py): move to using sl payload in track_cost_callback
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fallback to kwargs(response_cost) if given
* test: remove claude-instant-1 from tests
* test: fix claude test
* build: remove lint.yml
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Vsevolod Karvetskiy <56288164+karvetskiy@users.noreply.github.com>
Co-authored-by: seva <seva@inita.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt P Suorra <Jacobh2@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
# #### What this tests ####
# # This tests error logging (with custom user functions) for the raw `completion` + `embedding` endpoints

# # Test Scenarios (test across completion, streaming, embedding)
# ## 1: Pre-API-Call
# ## 2: Post-API-Call
# ## 3: On LiteLLM Call success
# ## 4: On LiteLLM Call failure
# import sys, os, io
# import traceback, logging

# import pytest
# import dotenv

# dotenv.load_dotenv()

# # Create logger
# logger = logging.getLogger(__name__)
# logger.setLevel(logging.DEBUG)

# # Create a stream handler
# stream_handler = logging.StreamHandler(sys.stdout)
# logger.addHandler(stream_handler)


# # Create a function to log information
# def logger_fn(message):
#     logger.info(message)


# sys.path.insert(
#     0, os.path.abspath("../..")
# )  # Adds the parent directory to the system path
# import litellm
# from litellm import embedding, completion
# from openai import AuthenticationError  # in openai>=1.0 this moved out of openai.error

# litellm.set_verbose = True

# score = 0

# user_message = "Hello, how are you?"
# messages = [{"content": user_message, "role": "user"}]
# # 1. On Call Success
# # normal completion
# # test on openai completion call
# def test_logging_success_completion():
#     global score
#     try:
#         # Redirect stdout
#         old_stdout = sys.stdout
#         sys.stdout = new_stdout = io.StringIO()

#         response = completion(model="gpt-3.5-turbo", messages=messages)
#         # Restore stdout
#         sys.stdout = old_stdout
#         output = new_stdout.getvalue().strip()

#         if "Logging Details Pre-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details Post-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details LiteLLM-Success Call" not in output:
#             raise Exception("Required log message not found!")
#         score += 1
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")
#         pass
# # ## test on non-openai completion call
# # def test_logging_success_completion_non_openai():
# #     global score
# #     try:
# #         # Redirect stdout
# #         old_stdout = sys.stdout
# #         sys.stdout = new_stdout = io.StringIO()

# #         response = completion(model="claude-3-5-haiku-20241022", messages=messages)

# #         # Restore stdout
# #         sys.stdout = old_stdout
# #         output = new_stdout.getvalue().strip()

# #         if "Logging Details Pre-API Call" not in output:
# #             raise Exception("Required log message not found!")
# #         elif "Logging Details Post-API Call" not in output:
# #             raise Exception("Required log message not found!")
# #         elif "Logging Details LiteLLM-Success Call" not in output:
# #             raise Exception("Required log message not found!")
# #         score += 1
# #     except Exception as e:
# #         pytest.fail(f"Error occurred: {e}")
# #         pass
# # streaming completion
# ## test on openai completion call
# def test_logging_success_streaming_openai():
#     global score
#     try:
#         # litellm.set_verbose = False
#         def custom_callback(
#             kwargs,  # kwargs to completion
#             completion_response,  # response from completion
#             start_time, end_time  # start/end time
#         ):
#             if "complete_streaming_response" in kwargs:
#                 print(f"Complete Streaming Response: {kwargs['complete_streaming_response']}")

#         # Assign the custom callback function
#         litellm.success_callback = [custom_callback]

#         # Redirect stdout
#         old_stdout = sys.stdout
#         sys.stdout = new_stdout = io.StringIO()

#         response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
#         for chunk in response:
#             pass

#         # Restore stdout
#         sys.stdout = old_stdout
#         output = new_stdout.getvalue().strip()

#         if "Logging Details Pre-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details Post-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details LiteLLM-Success Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Complete Streaming Response:" not in output:
#             raise Exception("Required log message not found!")
#         score += 1
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")
#         pass


# # test_logging_success_streaming_openai()
# ## test on non-openai completion call
# def test_logging_success_streaming_non_openai():
#     global score
#     try:
#         # litellm.set_verbose = False
#         def custom_callback(
#             kwargs,  # kwargs to completion
#             completion_response,  # response from completion
#             start_time, end_time  # start/end time
#         ):
#             # print(f"streaming response: {completion_response}")
#             if "complete_streaming_response" in kwargs:
#                 print(f"Complete Streaming Response: {kwargs['complete_streaming_response']}")

#         # Assign the custom callback function
#         litellm.success_callback = [custom_callback]

#         # Redirect stdout
#         old_stdout = sys.stdout
#         sys.stdout = new_stdout = io.StringIO()

#         response = completion(model="claude-3-5-haiku-20241022", messages=messages, stream=True)
#         for idx, chunk in enumerate(response):
#             pass

#         # Restore stdout
#         sys.stdout = old_stdout
#         output = new_stdout.getvalue().strip()

#         if "Logging Details Pre-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details Post-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details LiteLLM-Success Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Complete Streaming Response:" not in output:
#             raise Exception(f"Required log message not found! {output}")
#         score += 1
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")
#         pass


# # test_logging_success_streaming_non_openai()
# # embedding


# def test_logging_success_embedding_openai():
#     try:
#         # Redirect stdout
#         old_stdout = sys.stdout
#         sys.stdout = new_stdout = io.StringIO()

#         response = embedding(model="text-embedding-ada-002", input=["good morning from litellm"])

#         # Restore stdout
#         sys.stdout = old_stdout
#         output = new_stdout.getvalue().strip()

#         if "Logging Details Pre-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details Post-API Call" not in output:
#             raise Exception("Required log message not found!")
#         elif "Logging Details LiteLLM-Success Call" not in output:
#             raise Exception("Required log message not found!")
#     except Exception as e:
#         pytest.fail(f"Error occurred: {e}")
# # ## 2. On LiteLLM Call failure
# # ## TEST BAD KEY

# # # normal completion
# # ## test on openai completion call
# # try:
# #     temporary_oai_key = os.environ["OPENAI_API_KEY"]
# #     os.environ["OPENAI_API_KEY"] = "bad-key"

# #     temporary_anthropic_key = os.environ["ANTHROPIC_API_KEY"]
# #     os.environ["ANTHROPIC_API_KEY"] = "bad-key"

# #     # Redirect stdout
# #     old_stdout = sys.stdout
# #     sys.stdout = new_stdout = io.StringIO()

# #     try:
# #         response = completion(model="gpt-3.5-turbo", messages=messages)
# #     except AuthenticationError:
# #         print(f"raised auth error")
# #         pass
# #     # Restore stdout
# #     sys.stdout = old_stdout
# #     output = new_stdout.getvalue().strip()

# #     print(output)

# #     if "Logging Details Pre-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details Post-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details LiteLLM-Failure Call" not in output:
# #         raise Exception("Required log message not found!")

# #     os.environ["OPENAI_API_KEY"] = temporary_oai_key
# #     os.environ["ANTHROPIC_API_KEY"] = temporary_anthropic_key

# #     score += 1
# # except Exception as e:
# #     print(f"exception type: {type(e).__name__}")
# #     pytest.fail(f"Error occurred: {e}")
# #     pass

# # ## test on non-openai completion call
# # try:
# #     temporary_oai_key = os.environ["OPENAI_API_KEY"]
# #     os.environ["OPENAI_API_KEY"] = "bad-key"

# #     temporary_anthropic_key = os.environ["ANTHROPIC_API_KEY"]
# #     os.environ["ANTHROPIC_API_KEY"] = "bad-key"
# #     # Redirect stdout
# #     old_stdout = sys.stdout
# #     sys.stdout = new_stdout = io.StringIO()

# #     try:
# #         response = completion(model="claude-3-5-haiku-20241022", messages=messages)
# #     except AuthenticationError:
# #         pass

# #     if "Logging Details Pre-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details Post-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details LiteLLM-Failure Call" not in output:
# #         raise Exception("Required log message not found!")
# #     os.environ["OPENAI_API_KEY"] = temporary_oai_key
# #     os.environ["ANTHROPIC_API_KEY"] = temporary_anthropic_key
# #     score += 1
# # except Exception as e:
# #     print(f"exception type: {type(e).__name__}")
# #     # Restore stdout
# #     sys.stdout = old_stdout
# #     output = new_stdout.getvalue().strip()

# #     print(output)
# #     pytest.fail(f"Error occurred: {e}")


# # # streaming completion
# # ## test on openai completion call
# # try:
# #     temporary_oai_key = os.environ["OPENAI_API_KEY"]
# #     os.environ["OPENAI_API_KEY"] = "bad-key"

# #     temporary_anthropic_key = os.environ["ANTHROPIC_API_KEY"]
# #     os.environ["ANTHROPIC_API_KEY"] = "bad-key"
# #     # Redirect stdout
# #     old_stdout = sys.stdout
# #     sys.stdout = new_stdout = io.StringIO()

# #     try:
# #         response = completion(model="gpt-3.5-turbo", messages=messages)
# #     except AuthenticationError:
# #         pass

# #     # Restore stdout
# #     sys.stdout = old_stdout
# #     output = new_stdout.getvalue().strip()

# #     print(output)

# #     if "Logging Details Pre-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details Post-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details LiteLLM-Failure Call" not in output:
# #         raise Exception("Required log message not found!")

# #     os.environ["OPENAI_API_KEY"] = temporary_oai_key
# #     os.environ["ANTHROPIC_API_KEY"] = temporary_anthropic_key
# #     score += 1
# # except Exception as e:
# #     print(f"exception type: {type(e).__name__}")
# #     pytest.fail(f"Error occurred: {e}")

# # ## test on non-openai completion call
# # try:
# #     temporary_oai_key = os.environ["OPENAI_API_KEY"]
# #     os.environ["OPENAI_API_KEY"] = "bad-key"

# #     temporary_anthropic_key = os.environ["ANTHROPIC_API_KEY"]
# #     os.environ["ANTHROPIC_API_KEY"] = "bad-key"
# #     # Redirect stdout
# #     old_stdout = sys.stdout
# #     sys.stdout = new_stdout = io.StringIO()

# #     try:
# #         response = completion(model="claude-3-5-haiku-20241022", messages=messages)
# #     except AuthenticationError:
# #         pass

# #     # Restore stdout
# #     sys.stdout = old_stdout
# #     output = new_stdout.getvalue().strip()

# #     print(output)

# #     if "Logging Details Pre-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details Post-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details LiteLLM-Failure Call" not in output:
# #         raise Exception("Required log message not found!")
# #     score += 1
# # except Exception as e:
# #     print(f"exception type: {type(e).__name__}")
# #     pytest.fail(f"Error occurred: {e}")

# # # embedding

# # try:
# #     temporary_oai_key = os.environ["OPENAI_API_KEY"]
# #     os.environ["OPENAI_API_KEY"] = "bad-key"

# #     temporary_anthropic_key = os.environ["ANTHROPIC_API_KEY"]
# #     os.environ["ANTHROPIC_API_KEY"] = "bad-key"
# #     # Redirect stdout
# #     old_stdout = sys.stdout
# #     sys.stdout = new_stdout = io.StringIO()

# #     try:
# #         response = embedding(model="text-embedding-ada-002", input=["good morning from litellm"])
# #     except AuthenticationError:
# #         pass

# #     # Restore stdout
# #     sys.stdout = old_stdout
# #     output = new_stdout.getvalue().strip()

# #     print(output)

# #     if "Logging Details Pre-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details Post-API Call" not in output:
# #         raise Exception("Required log message not found!")
# #     elif "Logging Details LiteLLM-Failure Call" not in output:
# #         raise Exception("Required log message not found!")
# # except Exception as e:
# #     print(f"exception type: {type(e).__name__}")
# #     pytest.fail(f"Error occurred: {e}")
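For context, the custom-callback hookup the commented-out tests above exercise looks roughly like this, a minimal live sketch using the public `litellm.success_callback` list with the same callback signature as in the tests:

```python
import litellm

def custom_callback(kwargs, completion_response, start_time, end_time):
    # kwargs carries the original request params; for streaming calls,
    # litellm also injects the assembled "complete_streaming_response".
    print(f"model={kwargs.get('model')}, latency={end_time - start_time}")

litellm.success_callback = [custom_callback]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
```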