Commit graph

18313 commits

Author SHA1 Message Date
Ishaan Jaff
74ae7deee3
ui fixes for default team (#6134) 2024-10-09 16:02:08 +05:30
Ishaan Jaff
4b4bb9296f bump: version 1.48.20 → 1.49.0 2024-10-09 15:45:39 +05:30
Ishaan Jaff
005846316d fix get_all_team_memberships 2024-10-09 15:43:32 +05:30
Ishaan Jaff
54d8d46a3b remove unused file from root 2024-10-09 15:28:36 +05:30
Ishaan Jaff
0e83a68a69 doc - move rbac under auth 2024-10-09 15:27:32 +05:30
Ishaan Jaff
8a9bb51f4e fix schema.prisma change 2024-10-09 15:25:27 +05:30
Ishaan Jaff
a0bebc3413 fix literal ai typing errors 2024-10-09 15:23:39 +05:30
Ishaan Jaff
1fd437e263
(feat proxy) [beta] add support for organization role based access controls (#6112)
* track LiteLLM_OrganizationMembership

* add add_internal_user_to_organization

* add org membership to schema

* read organization membership when reading user info in auth checks

* add check for valid organization_id

* add test for test_create_new_user_in_organization

* test test_create_new_user_in_organization

* add new ADMIN role

* add test for org admins creating teams

* add test for test_org_admin_create_user_permissions

* test_org_admin_create_user_team_wrong_org_permissions

* test_org_admin_create_user_team_wrong_org_permissions

* fix organization_role_based_access_check

* fix getting user members

* fix TeamBase

* fix types used for user role

* fix type checks

* sync prisma schema

* docs - organization admins

* fix use organization_endpoints for /organization management

* add types for org member endpoints

* fix role name for org admin

* add type for member add response

* add organization/member_add

* add error handling for adding members to an org

* add nice doc string for organization/member_add

* fix test_create_new_user_in_organization

* linting fix

* use simple route changes

* fix types

* add organization member roles

* add org admin auth checks

* add auth checks for orgs

* test for creating teams as org admin

* simplify org id usage

* fix typo

* test test_org_admin_create_user_team_wrong_org_permissions

* fix type check issue

* code quality fix

* fix schema.prisma
2024-10-09 15:18:18 +05:30
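The PR above introduces a `/organization/member_add` endpoint alongside the new organization admin role. Below is a minimal sketch of calling that endpoint against a running proxy; the request-body field names and the role value are assumptions modeled on LiteLLM's existing team-member endpoints, and the URL and key are placeholders.

```python
# Hedged sketch: add a user to an organization via the /organization/member_add
# endpoint added in #6112. Field names and the role value are assumptions based
# on the proxy's similar /team/member_add payloads, not confirmed by this PR.
import requests

PROXY_BASE = "http://localhost:4000"   # placeholder proxy URL
MASTER_KEY = "sk-1234"                 # placeholder admin key

resp = requests.post(
    f"{PROXY_BASE}/organization/member_add",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={
        "organization_id": "my-org-id",     # placeholder
        "member": {
            "user_id": "new-user-id",       # placeholder
            "role": "org_admin",            # assumed role name
        },
    },
)
resp.raise_for_status()
print(resp.json())  # expected to include the created membership record
```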
Krrish Dholakia
945267a511 build: bump version 2024-10-08 22:10:14 -07:00
Krish Dholakia
9695c1af10
LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119)
* refactor(cost_calculator.py): move error line to debug - https://github.com/BerriAI/litellm/issues/5683#issuecomment-2398599498

* fix(migrate-hidden-params-to-read-from-standard-logging-payload): Fixes https://github.com/BerriAI/litellm/issues/5546#issuecomment-2399994026

* fix(types/utils.py): mark weight as a litellm param

Fixes https://github.com/BerriAI/litellm/issues/5781

* feat(internal_user_endpoints.py): fix /user/info + show user max budget as default max budget

Fixes https://github.com/BerriAI/litellm/issues/6117

* feat: support returning team member budget in `/user/info`

Sets user max budget in team as max budget on ui

Closes https://github.com/BerriAI/litellm/issues/6117

* bug fix for optional parameter passing to replicate (#6067)

Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>

* fix(o1_transformation.py): handle o1 temperature=0

o1 doesn't support temp=0, allow admin to drop this param

* test: fix test

---------

Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
Co-authored-by: Mandana Vaziri <mvaziri@us.ibm.com>
2024-10-08 21:57:03 -07:00
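One of the fixes above drops `temperature=0` for o1 models, since o1 rejects that value. Below is a minimal sketch of how a caller might lean on parameter dropping; it assumes LiteLLM's `drop_params` flag covers this case, and the model name is a placeholder.

```python
# Hedged sketch: ask litellm to drop unsupported params (here temperature=0
# for an o1 model) instead of erroring. Assumes drop_params applies to this
# case as described in the commit above; the model name is a placeholder.
import litellm

response = litellm.completion(
    model="o1-preview",                      # placeholder o1 model
    messages=[{"role": "user", "content": "Summarize prompt caching."}],
    temperature=0,                           # not supported by o1
    drop_params=True,                        # drop rather than fail
)
print(response.choices[0].message.content)
```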
Willy Douhard
ac6fb0cbef
Fix: Literal AI llm completion logging (#6096)
* fix: log llm output

* chore: rename var
2024-10-08 08:33:32 -07:00
Kyrylo Yefimenko
b68fee48a6
(fix) Fix Groq pricing for llama3.1 (#6114)
* Adjust ollama models to chat instead of completions

* Fix Groq prices for llama3.1
2024-10-08 20:20:58 +05:30
Ishaan Jaff
92a1924112 trigger ci/cd run 2024-10-08 20:16:37 +05:30
Ishaan Jaff
d1760b1b04
(fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110)
* fix move docker files to docker folders

* move check file length

* fix docker hub deploy

* fix clean up root

* fix circle ci config
2024-10-08 11:34:43 +05:30
Krrish Dholakia
cc960da4b6 docs(azure.md): add o1 model support to config 2024-10-07 22:37:49 -07:00
Krrish Dholakia
9ee1a3ff8c bump: version 1.48.18 → 1.48.19 2024-10-07 22:22:02 -07:00
Krish Dholakia
6729c9ca7f
LiteLLM Minor Fixes & Improvements (10/07/2024) (#6101)
* fix(utils.py): support dropping temperature param for azure o1 models

* fix(main.py): handle azure o1 streaming requests

o1 doesn't support streaming, fake it to ensure code works as expected

* feat(utils.py): expose `hosted_vllm/` endpoint, with tool handling for vllm

Fixes https://github.com/BerriAI/litellm/issues/6088

* refactor(internal_user_endpoints.py): cleanup unused params + update docstring

Closes https://github.com/BerriAI/litellm/issues/6100

* fix(main.py): expose custom image generation api support

Fixes https://github.com/BerriAI/litellm/issues/6097

* fix: fix linting errors

* docs(custom_llm_server.md): add docs on custom api for image gen calls

* fix(types/utils.py): handle dict type

* fix(types/utils.py): fix linting errors
2024-10-07 22:17:22 -07:00
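The `hosted_vllm/` prefix exposed above routes requests to a self-hosted vLLM server through its OpenAI-compatible interface. A minimal sketch follows; the model name and `api_base` are placeholders for your own deployment.

```python
# Hedged sketch: call a self-hosted vLLM server via the hosted_vllm/ prefix
# from the commit above. The model name and api_base are placeholders.
import litellm

response = litellm.completion(
    model="hosted_vllm/facebook/opt-125m",    # placeholder vLLM-served model
    api_base="http://localhost:8000/v1",      # placeholder vLLM endpoint
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
)
print(response.choices[0].message.content)
```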
Ishaan Jaff
5de69cb1b2 fix using Dockerfile 2024-10-08 08:45:40 +05:30
Ishaan Jaff
59b247ab23 fix config.yml 2024-10-08 08:36:03 +05:30
Ishaan Jaff
d742e8cb43
(clean up) move docker files from root to docker folder (#6109)
* fix move docker files to docker folders

* move check file length

* fix docker hub deploy
2024-10-08 08:23:52 +05:30
Ishaan Jaff
ef815f3a84
(docs) add remaining litellm settings on configs.md doc (#6108)
* docs add litellm settings configs

* docs langfuse tags on config
2024-10-08 07:57:04 +05:30
Ishaan Jaff
2b370f8e9e
(docs) key based callbacks (#6107) 2024-10-08 07:12:01 +05:30
Pradyumna Singh Rathore
b7ba558b74
fix links due to broken list (#6103) 2024-10-07 15:47:29 -04:00
Ishaan Jaff
5afc45d411 bump: version 1.48.17 → 1.48.18 2024-10-07 18:22:21 +05:30
Ishaan Jaff
b1e9d344b2
Update readme.md 2024-10-07 18:15:15 +05:30
Ishaan Jaff
a0cbf31fcf
Update readme.md 2024-10-07 18:12:43 +05:30
Ishaan Jaff
1bafbf8382
(feat proxy) add v2 maintained LiteLLM grafana dashboard (#6098)
* add new grafana dashboard litellm

* add v2 grafana dashboard
2024-10-07 18:11:20 +05:30
Ishaan Jaff
2c8bba293f
(bug fix) TTL not being set for embedding caching requests (#6095)
* fix ttl for cache pipeline settings

* add test for caching

* add test for setting ttls on redis caching
2024-10-07 15:53:18 +05:30
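The fix above makes the configured TTL apply to cached embedding requests. Below is a minimal sketch of Redis-backed caching with a TTL; the `ttl` keyword on `Cache`, the connection details, and the model name are assumptions or placeholders, not confirmed by this commit.

```python
# Hedged sketch: enable Redis caching with a TTL so cached embedding responses
# expire, per the fix above. The ttl kwarg and connection values are
# assumptions/placeholders.
import litellm
from litellm.caching import Cache

litellm.cache = Cache(
    type="redis",
    host="localhost",      # placeholder
    port=6379,             # placeholder
    ttl=600,               # assumed per-entry TTL in seconds
)

emb = litellm.embedding(
    model="text-embedding-3-small",           # placeholder model
    input=["litellm caches embeddings too"],
    caching=True,
)
print(emb.usage)
```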
Ishaan Jaff
285b589095 ui new build 2024-10-07 13:01:19 +05:30
Ishaan Jaff
51af0d5d94
(proxy ui sso flow) - fix invite user sso flow (#6093)
* return if sso setup on ui_settings

* use helper to get invite link
2024-10-07 12:32:08 +05:30
Ishaan Jaff
a7628317cd
(proxy ui) - fix view user pagination (#6094)
* ui - fix view user pagination

* add new internal user test
2024-10-07 12:31:55 +05:30
Ishaan Jaff
abe8059713 ui - fix view user pagination 2024-10-07 12:15:29 +05:30
kvadros
e007bb65b5
Proxy: include customer budget in responses (#5977) 2024-10-07 10:05:28 +05:30
Ishaan Jaff
b2fbee3923 docs key logging 2024-10-06 13:49:27 +05:30
Ishaan Jaff
fd7014a326 correct use of healthy / unhealthy 2024-10-06 13:48:30 +05:30
Krish Dholakia
49d8b2be46
fix(utils.py): fix pydantic obj to schema creation for vertex endpoints (#6071)
* fix(utils.py): fix pydantic obj to schema creation for vertex endpoints

Fixes https://github.com/BerriAI/litellm/issues/6027

* test(test_completion.py): skip test - avoid hitting gemini rate limits

* fix(common_utils.py): fix ruff linting error
2024-10-06 00:25:55 -04:00
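The fix above concerns converting a Pydantic object into a JSON schema for Vertex AI structured-output requests. Below is a minimal sketch of the calling pattern that code path supports; the model name is a placeholder and the exact behavior is an assumption based on the commit description.

```python
# Hedged sketch: pass a Pydantic model as response_format so litellm converts
# it to a JSON schema for a Vertex AI (Gemini) endpoint, the conversion the
# fix above touches. Model name is a placeholder.
import litellm
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",      # placeholder Vertex model
    messages=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
    response_format=CalendarEvent,         # pydantic obj -> JSON schema
)
print(response.choices[0].message.content)
```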
Krrish Dholakia
29da2d49d6 bump: version 1.48.16 → 1.48.17 2024-10-05 21:27:22 -04:00
Krish Dholakia
04e5963b65
Litellm expose disable schema update flag (#6085)
* fix: enable new 'disable_prisma_schema_update' flag

* build(config.yml): remove setup remote docker step

* ci(config.yml): give container time to start up

* ci(config.yml): update test

* build(config.yml): actually start docker

* build(config.yml): simplify grep check

* fix(prisma_client.py): support reading disable_schema_update via env vars

* ci(config.yml): add test to check if all general settings are documented

* build(test_General_settings.py): check available dir

* ci: check ../ repo path

* build: check ./

* build: fix test
2024-10-05 21:26:51 -04:00
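The PR above adds a `disable_prisma_schema_update` setting and lets it be read from environment variables. Below is a minimal sketch of how a deployment might toggle such a flag from the environment; the variable name `DISABLE_SCHEMA_UPDATE` and the truthy-value handling are assumptions, since the PR only states that the flag exists and can come from env vars.

```python
# Hedged sketch: read a schema-update toggle from the environment before
# deciding whether to push prisma schema changes on startup. The env var name
# DISABLE_SCHEMA_UPDATE is an assumption, not confirmed by this PR.
import os

def should_update_schema() -> bool:
    # Treat "true"/"1"/"yes" (any case) as disabling the update.
    raw = os.getenv("DISABLE_SCHEMA_UPDATE", "false").strip().lower()
    return raw not in {"true", "1", "yes"}

if should_update_schema():
    print("running prisma schema update")
else:
    print("skipping prisma schema update (disabled via env var)")
```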
Krish Dholakia
f2c0a31e3c
LiteLLM Minor Fixes & Improvements (10/05/2024) (#6083)
* docs(prompt_caching.md): add prompt caching cost calc example to docs

* docs(prompt_caching.md): add proxy examples to docs

* feat(utils.py): expose new helper `supports_prompt_caching()` to check if a model supports prompt caching

* docs(prompt_caching.md): add docs on checking model support for prompt caching

* build: fix invalid json
2024-10-05 18:59:11 -04:00
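The new `supports_prompt_caching()` helper exposed above checks whether a model is marked as supporting prompt caching. A minimal usage sketch follows; the signature is assumed to mirror LiteLLM's other `supports_*` helpers, and the model names are only examples.

```python
# Hedged sketch: check prompt-caching support before relying on cached-token
# pricing. Assumes supports_prompt_caching(model=...) -> bool, mirroring
# litellm's other supports_* helpers; model names are examples.
import litellm

for model in ["gpt-4o-2024-08-06", "gpt-3.5-turbo"]:
    if litellm.supports_prompt_caching(model=model):
        print(f"{model}: prompt caching supported")
    else:
        print(f"{model}: no prompt caching support")
```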
Krish Dholakia
fac3b2ee42
Add pyright to ci/cd + Fix remaining type-checking errors (#6082)
* fix: fix type-checking errors

* fix: fix additional type-checking errors

* fix: additional type-checking error fixes

* fix: fix additional type-checking errors

* fix: additional type-check fixes

* fix: fix all type-checking errors + add pyright to ci/cd

* fix: fix incorrect import

* ci(config.yml): use mypy on ci/cd

* fix: fix type-checking errors in utils.py

* fix: fix all type-checking errors on main.py

* fix: fix mypy linting errors

* fix(anthropic/cost_calculator.py): fix linting errors

* fix: fix mypy linting errors

* fix: fix linting errors
2024-10-05 17:04:00 -04:00
Ishaan Jaff
f7ce1173f3 bump: version 1.48.15 → 1.48.16 2024-10-05 16:59:16 +05:30
Ishaan Jaff
3cb04480fb
(code clean up) use a folder for gcs bucket logging + add readme in folder (#6080)
* refactor gcs bucket

* add readme
2024-10-05 16:58:10 +05:30
Ishaan Jaff
6e6d38841f docs fix 2024-10-05 15:25:25 +05:30
GTonehour
d533acd24a
openrouter/openai's litellm_provider should be openrouter, not openai (#6079)
In model_prices_and_context_window.json, openrouter/* models all have litellm_provider set as "openrouter", except for four openrouter/openai/* models, which were set to "openai".
They presumably need to be set to "openrouter" so that callers know to use the OpenRouter API for these models.
2024-10-05 15:20:44 +05:30
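The fix above corrects the `litellm_provider` field for the `openrouter/openai/*` entries in `model_prices_and_context_window.json`. Below is a hedged sketch of a corrected entry, written as a Python dict; the model key and numeric values are illustrative placeholders, and only the field names and the provider value come from the commit.

```python
# Hedged sketch of a corrected cost-map entry: litellm_provider must be
# "openrouter" so routing uses the OpenRouter API. The model key and numeric
# values below are placeholders, not real pricing.
corrected_entry = {
    "openrouter/openai/gpt-4o": {           # placeholder model key
        "max_tokens": 4096,                 # placeholder
        "input_cost_per_token": 2.5e-06,    # placeholder
        "output_cost_per_token": 1.0e-05,   # placeholder
        "litellm_provider": "openrouter",   # was "openai" before this fix
        "mode": "chat",
    }
}
print(corrected_entry["openrouter/openai/gpt-4o"]["litellm_provider"])
```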
Ishaan Jaff
ab0b536143
(feat) add azure openai cost tracking for prompt caching (#6077)
* add azure o1 models to model cost map

* add azure o1 cost tracking

* fix azure cost calc

* add get llm provider test
2024-10-05 15:04:18 +05:30
Ishaan Jaff
7267852511 linting error fix 2024-10-05 15:03:39 +05:30
Ishaan Jaff
5ee1342d37
(docs) reference router settings general settings etc (#6078) 2024-10-05 15:01:28 +05:30
Ishaan Jaff
d2f17cf97c docs routing config table 2024-10-05 14:40:07 +05:30
Ishaan Jaff
530915da51 add o1 to Azure docs 2024-10-05 14:23:54 +05:30
Ishaan Jaff
3682f661d8
(feat) add cost tracking for OpenAI prompt caching (#6055)
* add cache_read_input_token_cost for prompt caching models

* add prompt caching for latest models

* add openai cost calculator

* add openai prompt caching test

* fix lint check

* add note on how usage._cache_read_input_tokens is used

* fix cost calc whisper openai

* use output_cost_per_second

* add input_cost_per_second
2024-10-05 14:20:15 +05:30
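The PR above adds `cache_read_input_token_cost` to prompt-caching models and wires it into the OpenAI cost calculator. Below is a hedged sketch of the resulting arithmetic; the per-token prices and token counts are made-up placeholders, and only the field name and the idea of billing cached prompt tokens at the cheaper rate come from the commit.

```python
# Hedged sketch of prompt-caching cost arithmetic: cached prompt tokens are
# billed at cache_read_input_token_cost instead of the normal input rate.
# All prices and token counts below are placeholders, not real model pricing.
input_cost_per_token = 2.5e-06            # placeholder normal input price
cache_read_input_token_cost = 1.25e-06    # placeholder cached-read price
output_cost_per_token = 1.0e-05           # placeholder output price

prompt_tokens = 1200         # total prompt tokens reported by the API
cached_tokens = 1000         # portion served from the prompt cache
completion_tokens = 300

cost = (
    (prompt_tokens - cached_tokens) * input_cost_per_token
    + cached_tokens * cache_read_input_token_cost
    + completion_tokens * output_cost_per_token
)
print(f"estimated request cost: ${cost:.6f}")
```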