litellm/cookbook
benchmark
codellama-server
community-resources
litellm-ollama-docker-image
litellm_router
logging_observability
proxy-server
Benchmarking_LLMs_by_use_case.ipynb
Claude_(Anthropic)_with_Streaming_liteLLM_Examples.ipynb
Evaluating_LLMs.ipynb
liteLLM_A121_Jurrasic_example.ipynb
LiteLLM_Azure_and_OpenAI_example.ipynb
liteLLM_Baseten.ipynb
LiteLLM_batch_completion.ipynb
LiteLLM_Bedrock.ipynb
LiteLLM_Comparing_LLMs.ipynb
LiteLLM_Completion_Cost.ipynb
liteLLM_function_calling.ipynb
liteLLM_Getting_Started.ipynb
LiteLLM_HuggingFace.ipynb
liteLLM_Langchain_Demo.ipynb
litellm_model_fallback.ipynb
liteLLM_Ollama.ipynb
LiteLLM_OpenRouter.ipynb
LiteLLM_Petals.ipynb
LiteLLM_PromptLayer.ipynb
liteLLM_Replicate_Demo.ipynb
liteLLM_Streaming_Demo.ipynb
litellm_test_multiple_llm_demo.ipynb
litellm_Test_Multiple_Providers.ipynb
LiteLLM_User_Based_Rate_Limits.ipynb
liteLLM_VertextAI_Example.ipynb
Parallel_function_calling.ipynb
result.html
TogetherAI_liteLLM.ipynb
Using_Nemo_Guardrails_with_LiteLLM_Server.ipynb
VLLM_Model_Testing.ipynb