- benchmark
- codellama-server
- community-resources
- litellm-ollama-docker-image
- litellm_proxy_server
- litellm_router
- litellm_router_load_test
- logging_observability
- misc
- Benchmarking_LLMs_by_use_case.ipynb
- Claude_(Anthropic)_with_Streaming_liteLLM_Examples.ipynb
- Evaluating_LLMs.ipynb
- liteLLM_A121_Jurrasic_example.ipynb
- LiteLLM_Azure_and_OpenAI_example.ipynb
- liteLLM_Baseten.ipynb
- LiteLLM_batch_completion.ipynb
- LiteLLM_Bedrock.ipynb
- liteLLM_clarifai_Demo.ipynb
- LiteLLM_Comparing_LLMs.ipynb
- LiteLLM_Completion_Cost.ipynb
- liteLLM_function_calling.ipynb
- liteLLM_Getting_Started.ipynb
- LiteLLM_HuggingFace.ipynb
- liteLLM_IBM_Watsonx.ipynb
- liteLLM_Langchain_Demo.ipynb
- litellm_model_fallback.ipynb
- liteLLM_Ollama.ipynb
- LiteLLM_OpenRouter.ipynb
- LiteLLM_Petals.ipynb
- LiteLLM_PromptLayer.ipynb
- liteLLM_Replicate_Demo.ipynb
- liteLLM_Streaming_Demo.ipynb
- litellm_test_multiple_llm_demo.ipynb
- litellm_Test_Multiple_Providers.ipynb
- LiteLLM_User_Based_Rate_Limits.ipynb
- liteLLM_VertextAI_Example.ipynb
- Migrating_to_LiteLLM_Proxy_from_OpenAI_Azure_OpenAI.ipynb
- mlflow_langchain_tracing_litellm_proxy.ipynb
- Parallel_function_calling.ipynb
- Proxy_Batch_Users.ipynb
- result.html
- TogetherAI_liteLLM.ipynb
- Using_Nemo_Guardrails_with_LiteLLM_Server.ipynb
- VLLM_Model_Testing.ipynb