Ishaan Jaff
e5051a93a8
(docs) add benchmarks on 1K RPS ( #6704 )
...
* docs litellm proxy benchmarks
* docs GCS bucket
* doc fix - reduce clutter on logging doc title
2024-11-11 19:25:53 -08:00
Ishaan Jaff
c047d51cc8
(feat) add Predicted Outputs for OpenAI ( #6594 )
...
* bump openai to openai==1.54.0
* add 'prediction' param
* testing fix bedrock deprecated cohere.command-text-v14
* test test_openai_prediction_param.py
* test_openai_prediction_param_with_caching
* doc Predicted Outputs
* doc Predicted Output
2024-11-04 21:16:57 -08:00
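A minimal usage sketch for the Predicted Outputs support added in #6594, assuming the OpenAI-style `prediction` payload is passed straight through `litellm.completion`; the model choice and payload contents here are illustrative, not taken from the PR:

```python
# Sketch only: assumes litellm forwards the `prediction` param (added in #6594)
# to OpenAI unchanged, using the OpenAI predicted-outputs payload shape.
import litellm

existing_code = "def hello():\n    print('hello world')\n"

response = litellm.completion(
    model="gpt-4o-mini",  # illustrative; any OpenAI model with predicted-output support
    messages=[{"role": "user", "content": "Rename the function to greet()."}],
    prediction={"type": "content", "content": existing_code},
)
print(response.choices[0].message.content)
```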
Krrish Dholakia
246052fe23
docs(lm_studio.md): add doc on lm studio support
2024-11-02 02:12:35 +05:30
Ishaan Jaff
5652c375b3
(feat) add XAI ChatCompletion Support ( #6373 )
...
* init commit for XAI
* add full logic for xai chat completion
* test_completion_xai
* docs xAI
* add xai/grok-beta
* test_xai_chat_config_get_openai_compatible_provider_info
* test_xai_chat_config_map_openai_params
* add xai streaming test
2024-11-01 20:37:09 +05:30
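A minimal sketch of the xAI chat completion support from #6373, assuming the API key is read from an `XAI_API_KEY` environment variable and that `xai/grok-beta` (added in the PR) is the routed model name:

```python
# Sketch only: assumes litellm reads the key from XAI_API_KEY and that
# "xai/grok-beta" routes to the xAI chat completions API.
import os
import litellm

os.environ["XAI_API_KEY"] = "xai-..."  # assumed env var name

response = litellm.completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "Say hello from Grok."}],
)
print(response.choices[0].message.content)
```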
Krrish Dholakia
e6e518ad58
docs(sidebars.js): add jina ai to left nav
2024-10-21 21:48:06 -07:00
Ishaan Jaff
4cbdad9fc5
doc - using gpt-4o-audio-preview ( #6326 )
...
* doc on audio models
* doc supports vision
* doc audio input / output
2024-10-19 09:34:56 +05:30
Krrish Dholakia
4f5ff65882
docs(argilla.md): add doc on argilla logging
2024-10-17 22:51:55 -07:00
Krish Dholakia
54ebdbf7ce
LiteLLM Minor Fixes & Improvements (10/15/2024) ( #6242 )
...
* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are dire… (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI (#6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
* fix(allowed_model_region): allow us as allowed region (#6234 )
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure
Closes https://github.com/BerriAI/litellm/issues/6241
* feat(guardrails_ai.py): support guardrails ai integration
Adds support for on-prem guardrails via guardrails ai
* fix(proxy/utils.py): prevent sql injection attack
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
2024-10-16 07:32:06 -07:00
Willy Douhard
8b00d2a25f
Add literalai in the sidebar observability category ( #6163 )
...
* fix: add literalai in the sidebar
* fix: typo
2024-10-11 19:18:47 +05:30
Jacques Verré
4064bfc6dd
[Feat] Observability integration - Opik by Comet ( #6062 )
...
* Added Opik logging and evaluation
* Updated doc examples
* Default tags should be [] in case of appending
* WIP
* Work in progress
* Opik integration
* Opik integration
* Revert changes on litellm_logging.py
* Updated Opik integration for synchronous API calls
* Updated Opik documentation
---------
Co-authored-by: Douglas Blank <doug@comet.com>
Co-authored-by: Doug Blank <doug.blank@gmail.com>
2024-10-10 18:27:50 +05:30
Ishaan Jaff
0e83a68a69
doc - move rbac under auth
2024-10-09 15:27:32 +05:30
Ishaan Jaff
1fd437e263
(feat proxy) [beta] add support for organization role based access controls ( #6112 )
...
* track LiteLLM_OrganizationMembership
* add add_internal_user_to_organization
* add org membership to schema
* read organization membership when reading user info in auth checks
* add check for valid organization_id
* add test for test_create_new_user_in_organization
* test test_create_new_user_in_organization
* add new ADMIN role
* add test for org admins creating teams
* add test for test_org_admin_create_user_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* fix organization_role_based_access_check
* fix getting user members
* fix TeamBase
* fix types used for user role
* fix type checks
* sync prisma schema
* docs - organization admins
* fix use organization_endpoints for /organization management
* add types for org member endpoints
* fix role name for org admin
* add type for member add response
* add organization/member_add
* add error handling for adding members to an org
* add nice doc string for organization/member_add
* fix test_create_new_user_in_organization
* linting fix
* use simple route changes
* fix types
* add organization member roles
* add org admin auth checks
* add auth checks for orgs
* test for creating teams as org admin
* simplify org id usage
* fix typo
* test test_org_admin_create_user_team_wrong_org_permissions
* fix type check issue
* code quality fix
* fix schema.prisma
2024-10-09 15:18:18 +05:30
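A hedged sketch of calling the proxy's `/organization/member_add` route added in #6112; the request fields shown (`organization_id`, `member.role`, `member.user_id`), the role value, and the admin key are illustrative assumptions rather than a confirmed schema:

```python
# Hedged sketch: POST to the /organization/member_add route added in #6112.
# The payload fields and role value below are assumptions for illustration;
# check the proxy's generated API docs for the actual schema.
import requests

PROXY_BASE_URL = "http://localhost:4000"  # assumed local litellm proxy address
ADMIN_KEY = "sk-1234"                     # assumed proxy admin virtual key

resp = requests.post(
    f"{PROXY_BASE_URL}/organization/member_add",
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    json={
        "organization_id": "org-123",                             # assumed field
        "member": {"role": "org_admin", "user_id": "user-456"},   # assumed shape
    },
)
print(resp.status_code, resp.json())
```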
Krish Dholakia
2e5c46ef6d
LiteLLM Minor Fixes & Improvements (10/04/2024) ( #6064 )
...
* fix(litellm_logging.py): ensure cache hits are scrubbed if 'turn_off_message_logging' is enabled
* fix(sagemaker.py): fix streaming to raise error immediately
Fixes https://github.com/BerriAI/litellm/issues/6054
* (fixes) gcs bucket key based logging (#6044 )
* fixes for gcs bucket logging
* fix StandardCallbackDynamicParams
* fix - gcs logging when payload is not serializable
* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket
* working success callbacks
* linting fixes
* fix linting error
* add type hints to functions
* fixes for dynamic success and failure logging
* fix for test_async_chat_openai_stream
* fix handle case when key based logging vars are set as os.environ/ vars
* fix prometheus track cooldown events on custom logger (#6060 )
* (docs) add 1k rps load test doc (#6059 )
* docs 1k rps load test
* docs load testing
* docs load testing litellm
* docs load testing
* clean up load test doc
* docs prom metrics for load testing
* docs using prometheus on load testing
* doc load testing with prometheus
* (fixes) docs + qa - gcs key based logging (#6061 )
* fixes for required values for gcs bucket
* docs gcs bucket logging
* bump: version 1.48.12 → 1.48.13
* ci/cd run again
* bump: version 1.48.13 → 1.48.14
* update load test doc
* (docs) router settings - on litellm config (#6037 )
* add yaml with all router settings
* add docs for router settings
* docs router settings litellm settings
* (feat) OpenAI prompt caching models to model cost map (#6063 )
* add prompt caching for latest models
* add cache_read_input_token_cost for prompt caching models
* fix(litellm_logging.py): check if param is iterable
Fixes https://github.com/BerriAI/litellm/issues/6025#issuecomment-2393929946
* fix(factory.py): support passing an 'assistant_continue_message' to prevent bedrock error
Fixes https://github.com/BerriAI/litellm/issues/6053
* fix(databricks/chat): handle streaming responses
* fix(factory.py): fix linting error
* fix(utils.py): unify anthropic + deepseek prompt caching information to openai format
Fixes https://github.com/BerriAI/litellm/issues/6069
* test: fix test
* fix(types/utils.py): support all openai roles
Fixes https://github.com/BerriAI/litellm/issues/6052
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-04 21:28:53 -04:00
Ishaan Jaff
2449d258cf
(docs) add 1k rps load test doc ( #6059 )
...
* docs 1k rps load test
* docs load testing
* docs load testing litellm
* docs load testing
* clean up load test doc
* docs prom metrics for load testing
* docs using prometheus on load testing
* doc load testing with prometheus
2024-10-04 16:56:34 +05:30
Krrish Dholakia
793593e735
docs(realtime.md): add new /v1/realtime endpoint
2024-10-03 22:44:02 -04:00
Krrish Dholakia
121b493fe8
docs(code_quality.md): add doc on litellm code qa
2024-10-02 11:20:15 -04:00
Krrish Dholakia
7630680690
docs(response_headers.md): add response headers to docs
2024-09-28 23:33:50 -07:00
Ishaan Jaff
b817974c8e
docs clean up langfuse.md
2024-09-28 18:59:02 -07:00
Ishaan Jaff
bbf4db79c1
docs - show correct rpm -> tpm conversion for Azure
2024-09-27 17:18:55 -07:00
Ishaan Jaff
4bdeefd7e4
docs service accounts ( #5900 )
2024-09-25 15:46:13 -07:00
Ishaan Jaff
4ec4d02474
[Feat-Router] Allow setting which environment to use a model on ( #5892 )
...
* add check deployment_is_active_for_environment
* add test for test_init_router_with_supported_environments
* show good example config for environments
* docs clean up config.yaml
* docs cleanup
* docs configs
* docs specific env
2024-09-25 10:12:06 -07:00
Krrish Dholakia
16c8549b77
docs(virtual_keys.md): add enable/disable virtual keys to docs + refactor sidebar
2024-09-21 22:20:39 -07:00
Ishaan Jaff
cf7dcd9168
[Feat-Proxy] Allow using custom sso handler ( #5809 )
...
* update internal user doc string
* add readme on location of /sso routes
* add custom_sso_handler
* docs custom sso
* use secure=True for cookies
2024-09-20 19:14:33 -07:00
Krish Dholakia
98c335acd0
LiteLLM Minor Fixes & Improvements (09/17/2024) ( #5742 )
...
* fix(proxy_server.py): use default azure credentials to support azure non-client secret kms
* fix(langsmith.py): raise error if credentials missing
* feat(langsmith.py): support error logging for langsmith + standard logging payload
Fixes https://github.com/BerriAI/litellm/issues/5738
* Fix hardcoding of schema in view check (#5749 )
* fix - deal with case when check view exists returns None (#5740 )
* Revert "fix - deal with case when check view exists returns None (#5740 )" (#5741 )
This reverts commit 535228159b.
* test(test_router_debug_logs.py): move to mock response
* Fix hardcoding of schema
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* fix(proxy_server.py): allow admin to disable ui via `DISABLE_ADMIN_UI` flag
* fix(router.py): fix default model name value
Fixes 55db19a1e4 (r1763712148)
* fix(utils.py): fix unbound variable error
* feat(rerank/main.py): add azure ai rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/5667
* feat(secret_detection.py): Allow configuring secret detection params
Allows admin to control what plugins to run for secret detection. Prevents overzealous secret detection.
* docs(secret_detection.md): add secret detection guardrail docs
* fix: fix linting errors
* fix - deal with case when check view exists returns None (#5740 )
* Revert "fix - deal with case when check view exists returns None (#5740 )" (#5741 )
This reverts commit 535228159b.
* Litellm fix router testing (#5748 )
* test: fix testing - azure changed content policy error logic
* test: fix tests to use mock responses
* test(test_image_generation.py): handle api instability
* test(test_image_generation.py): handle azure api instability
* fix(utils.py): fix unbounded variable error
* fix(utils.py): fix unbounded variable error
* test: refactor test to use mock response
* test: mark flaky azure tests
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard (#5753 )
Bumps [next](https://github.com/vercel/next.js ) from 14.1.1 to 14.2.10.
- [Release notes](https://github.com/vercel/next.js/releases )
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js )
- [Commits](https://github.com/vercel/next.js/compare/v14.1.1...v14.2.10 )
---
updated-dependencies:
- dependency-name: next
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` (#5754 )
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog (#5750 )
* dd - start tracking redis status on dd
* add async_service_success_hook / failure hook in custom logger
* add async_service_failure_hook
* log service failures on dd
* fix import error
* add test for redis errors / warning
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on (#5745 )
* fix tag routing - raise correct error when no model with tag based routing
* fix error string from tag based routing
* test router tag based routing
* raise 401 error when no tags available for deployment
* linting fix
* [Feat] Log Request metadata on gcs bucket logging (#5743 )
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* fix(litellm_logging.py): fix logging message
* fix(rerank_api/main.py): fix linting errors
* fix(custom_guardrails.py): maintain backwards compatibility for older guardrails
* fix(rerank_api/main.py): fix cost tracking for rerank endpoints
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: steffen-sbt <148480574+steffen-sbt@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-17 23:00:04 -07:00
Krish Dholakia
234185ec13
LiteLLM Minor Fixes & Improvements (09/16/2024) ( #5723 ) ( #5731 )
...
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723 )
* coverage (#5713 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Move (#5714 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix(litellm_logging.py): fix logging client re-init (#5710 )
Fixes https://github.com/BerriAI/litellm/issues/5695
* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config
Fixes https://github.com/BerriAI/litellm/issues/5682
* feat(o1_handler.py): fake streaming for openai o1 models
Fixes https://github.com/BerriAI/litellm/issues/5694
* docs: deprecated traceloop integration in favor of native otel (#5249 )
* fix: fix linting errors
* fix: fix linting errors
* fix(main.py): fix o1 import
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view (#5730 )
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view
Supports having `MonthlyGlobalSpend` view be a material view, and exposes an endpoint to refresh it
* fix(custom_logger.py): reset calltype
* fix: fix linting errors
* fix: fix linting error
* fix: fix import
* test(test_databricks.py): fix databricks tests
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
2024-09-17 08:05:52 -07:00
Ishaan Jaff
7c2ddba6c6
sambanova support ( #5547 ) ( #5703 )
...
* add sambanova support
* sambanova support
* updated api endpoint for sambanova
---------
Co-authored-by: Venu Anuganti <venu@venublog.com>
Co-authored-by: Venu Anuganti <venu@vairmac2020>
2024-09-14 17:23:04 -07:00
Ishaan Jaff
54db564529
add arch diagram
2024-09-07 15:49:51 -07:00
Ishaan Jaff
05505903b2
docs better sidebar
2024-09-07 11:31:07 -07:00
Ishaan Jaff
3984b9080c
docs cleanup
2024-09-07 11:23:44 -07:00
Ishaan Jaff
2cf0714b0d
docs organize sidebar
2024-09-07 11:23:06 -07:00
Ishaan Jaff
808ba36b55
ui cleanup
2024-09-07 11:20:07 -07:00
Ishaan Jaff
6c30f18f8c
docs new presidio language controls
2024-09-04 13:04:19 -07:00
Krish Dholakia
be3c7b401e
LiteLLM Minor fixes + improvements (08/03/2024) ( #5488 )
...
* fix(internal_user_endpoints.py): set budget_reset_at for /user/update
* fix(vertex_and_google_ai_studio_gemini.py): handle accumulated json
Fixes https://github.com/BerriAI/litellm/issues/5479
* fix(vertex_ai_and_gemini.py): fix assistant message function call when content is not None
Fixes https://github.com/BerriAI/litellm/issues/5490
* fix(proxy_server.py): generic state uuid for okta sso
* fix(lago.py): improve debug logs
Debugging for https://github.com/BerriAI/litellm/issues/5477
* docs(bedrock.md): add bedrock cross-region inferencing to docs
* fix(azure.py): return azure response headers on aembedding call
* feat(azure.py): return azure response headers for `/audio/transcription`
* fix(types/utils.py): standardize deepseek / anthropic prompt caching usage information
Closes https://github.com/BerriAI/litellm/issues/5285
* docs(usage.md): add docs on litellm usage object
* test(test_completion.py): mark flaky test
2024-09-03 21:21:34 -07:00
Krrish Dholakia
9aa006d353
docs(bedrock.md): add multimodal embedding support to docs
2024-09-03 08:14:10 -07:00
Ishaan Jaff
fd4157cf71
docs add cerebras
2024-08-31 14:57:12 -07:00
Krrish Dholakia
601945d114
docs(docker_quick_start.md): add new quick start doc for litellm proxy
2024-08-29 15:35:39 -07:00
Ishaan Jaff
e396895288
update doc on palm provider
2024-08-27 21:11:24 -07:00
Ishaan Jaff
a5a17c120b
docs add rerank api to docs
2024-08-27 18:06:59 -07:00
Ishaan Jaff
68bb735b3b
docs use litellm proxy with litellm python sdk
2024-08-26 10:50:24 -07:00
Ishaan Jaff
d10430c881
doc custom guardrail
2024-08-23 09:41:54 -07:00
Ishaan Jaff
b054dd0e45
docs fix
2024-08-22 19:04:14 -07:00
Ishaan Jaff
25609a94ad
docs moderation
2024-08-22 18:57:54 -07:00
Ishaan Jaff
b865993b34
docs move pass thru endpoints
2024-08-22 18:49:26 -07:00
Krrish Dholakia
11c7e92b58
docs(sidebars.js): refactor docs
2024-08-22 18:22:50 -07:00
Ishaan Jaff
70f9e41ed9
Merge branch 'main' into litellm_add_bedrock_guardrails
2024-08-22 17:28:49 -07:00
Ishaan Jaff
499b6b3368
doc bedrock guardrails
2024-08-22 16:25:22 -07:00
Michał Pstrąg
a37f004c1d
Merge branch 'main' into docs-dbally
2024-08-22 23:25:57 +02:00
Michał Pstrąg
62df7c755b
add dbally project
2024-08-22 23:21:40 +02:00
Ishaan Jaff
e7ecb2fe3a
fix qdrant litellm on proxy
2024-08-21 12:52:29 -07:00
Ishaan Jaff
30da63bd4f
docs move lakera to free
2024-08-20 16:38:37 -07:00