Forked Litellm Repo

This litellm fork extends the original functionality with additional callback functions that are expected to be present in the image builds.

Additional Features

Compliance Checker

Introduction

When enabled via the litellm-config.yaml, the compliance checker checks whether an LLM request violates compliance regulations and should therefore be handled by a compliance-approved LLM.

In the current version, the intended LLM request is first forwarded to the compliant LLM to check it for compliance conformity.

a. If the request does not violate the regulations, it is simply forwarded to the desired LLM.

b. If the request does violate the regulations, the intended request message(s) are replaced with an instruction to simply return a statement that a compliance-safe LLM should be used. This way, no sensitive information finds its way to the non-approved LLM. A rough sketch of this flow is shown below.
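The following sketch illustrates the flow described above. It is not the actual implementation (that lives in litellm/proxy/hooks/compliance_checker.py); the function names and the verdict format are hypothetical, and it assumes the compliance model is reachable at an OpenAI-compatible /chat/completions endpoint configured via the COMPLIANCE_MODEL and COMPLIANCE_MODEL_URL environment variables.

```python
import os
import httpx

# Assumed configuration: name of the compliance-approved model and the base URL
# of its OpenAI-compatible API (see "How to enable the feature?" below).
COMPLIANCE_MODEL = os.environ["COMPLIANCE_MODEL"]
COMPLIANCE_MODEL_URL = os.environ["COMPLIANCE_MODEL_URL"]

# Hypothetical check prompt; the real system prompt is defined in compliance_checker.py.
CHECK_PROMPT = (
    "You are a compliance checker. Answer only COMPLIANT or VIOLATION "
    "for the following request."
)

def is_compliant(messages: list[dict]) -> bool:
    """Ask the compliance-approved model whether the request violates regulations."""
    response = httpx.post(
        f"{COMPLIANCE_MODEL_URL}/chat/completions",
        json={
            "model": COMPLIANCE_MODEL,
            "messages": [{"role": "system", "content": CHECK_PROMPT}, *messages],
        },
        timeout=30,
    )
    response.raise_for_status()
    verdict = response.json()["choices"][0]["message"]["content"]
    return "VIOLATION" not in verdict.upper()

def apply_compliance_check(messages: list[dict]) -> list[dict]:
    """Forward the request unchanged (case a) or replace it (case b)."""
    if is_compliant(messages):
        return messages  # case a: no violation, forward to the desired LLM
    # case b: strip the sensitive content and ask the LLM to redirect the user
    return [{
        "role": "user",
        "content": "Reply only that a compliance-safe LLM should be used for this request.",
    }]
```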

What changed compared to the original repository?

One new file was created and one file was adjusted.

litellm/proxy/hooks/compliance_checker.py is the new file with the callback logic.

litellm/proxy/common_utils/callback_utils.py contains a few additional lines (221-227).

These lines make sure the new file and its logic are wired into the litellm project.
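Conceptually, that wiring maps the callback name from the config to an instance of the new hook and registers it with litellm. The snippet below is only an illustration of that idea, not the literal diff; the class and function names are hypothetical.

```python
import litellm
# Hypothetical import; the actual class name in compliance_checker.py may differ.
from litellm.proxy.hooks.compliance_checker import ComplianceChecker

def initialize_compliance_checker(callback_names: list[str]) -> None:
    # If the config requested the compliance checker, register the hook
    # so the proxy invokes it for incoming requests.
    if "compliance_checker" in callback_names:
        litellm.callbacks.append(ComplianceChecker())
```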

How to enable the feature?

  1. In the litellm-config.yaml, add callbacks: ["compliance_checker"] like so:

     litellm_settings:
       callbacks: ["compliance_checker"]

  2. Add two new environment variables, COMPLIANCE_MODEL and COMPLIANCE_MODEL_URL, which point the proxy at the compliance model and its base URL.
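Once both steps are in place, clients call the proxy exactly as before; the compliance check happens server-side. A minimal usage example with the openai SDK, assuming the proxy listens on localhost:4000 and the model name and virtual key are placeholders for whatever your proxy config defines:

```python
from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy (URL, key, and model are placeholders).
client = OpenAI(base_url="http://localhost:4000", api_key="sk-litellm-virtual-key")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this customer record ..."}],
)
print(response.choices[0].message.content)
```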

Current Limitations

  • The first POC was designed to work with a llama3 model as the compliance model, hosted behind an OpenAI-compatible API that requires no API key but is restricted to verified IP addresses. Not just any model can therefore be used at the moment; other setups would require adjustments.
  • The system prompt used to validate compliance conformity is not well thought through yet.
