Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-27 03:34:10 +00:00
Litellm dev 12 30 2024 p1 (#7480)
* test(azure_openai_o1.py): initial commit with testing for azure openai o1 preview model
* fix(base_llm_unit_tests.py): handle azure o1 preview response format tests - skip, as o1 on azure doesn't support tool calling yet
* fix: initial commit of azure o1 handler using the openai caller - simplifies calling and allows the fake streaming logic already implemented for openai to just work
* feat(azure/o1_handler.py): fake o1 streaming for azure o1 models, since azure does not currently support streaming for o1 (see the sketch below)
* feat(o1_transformation.py): support overriding 'should_fake_stream' on azure/o1 via the 'supports_native_streaming' param on model info - lets users toggle it on once azure allows o1 streaming, without needing to bump versions
* style(router.py): remove 'give feedback/get help' messaging when the router is used - prevents noisy messaging. Closes https://github.com/BerriAI/litellm/issues/5942
* test: fix azure o1 test
* test: fix tests
* fix: fix test
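A minimal sketch of the "fake streaming" idea the commit describes: when a provider doesn't support streaming, make one blocking completion call and re-emit the finished text as small chunks so callers that expect an iterator still work. The helper name `fake_stream_chunks`, the `chunk_size` parameter, and the sample text below are illustrative assumptions, not litellm's actual internals.

```python
from typing import Iterator


def fake_stream_chunks(full_text: str, chunk_size: int = 20) -> Iterator[str]:
    """Yield an already-complete response text in small pieces,
    mimicking a streamed response for providers without streaming."""
    for start in range(0, len(full_text), chunk_size):
        yield full_text[start:start + chunk_size]


# Usage: pretend `response_text` came back from a blocking (non-streaming) call.
response_text = "The answer is 42, because the model said so."
for chunk in fake_stream_chunks(response_text):
    print(chunk, end="", flush=True)
```

The 'supports_native_streaming' override mentioned above is the escape hatch for this behavior: once the provider gains real streaming, the fake-streaming path can be turned off via model info rather than a version bump.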
Parent: f0ed02d3ee
Commit: 0178e75cd9
17 changed files with 273 additions and 141 deletions
@@ -307,6 +307,9 @@ async def test_langfuse_logging_audio_transcriptions(langfuse_client):

@pytest.mark.asyncio
@pytest.mark.skip(
    reason="langfuse now takes 5-10 mins to get this trace. Need to figure out how to test this"
)
async def test_langfuse_masked_input_output(langfuse_client):
    """
    Test that creates a trace with masked input and output
|
Loading…
Add table
Add a link
Reference in a new issue