
| | |
|---|---|
| Documentation | 📘 Full Docs |
| Powered By | George Kour |
Prompt Sentinel is a Python library designed to protect sensitive data during interactions with language models (LLMs). It automatically detects and sanitizes confidential information—such as passwords, tokens, and secrets—before sending input to the LLM, reducing the risk of accidental exposure. Once a response is received, the original values are seamlessly restored. This makes the masking process transparent to the client, enabling smooth and uninterrupted interactions, including function calling.
Prompt Sentinel focuses on simplicity: it works out of the box without requiring complex proxy-based LLM deployments or infrastructure changes. It offers flexible detection options, ranging from regular expressions to trusted local or external LLMs, enabling accurate identification of sensitive data such as personally identifiable information (PII) and secrets. Both detection and restoration are easy to set up and highly configurable to suit a wide range of applications. To avoid redundant detection calls, especially when relying on LLM-based detection, Prompt Sentinel includes a smart caching mechanism that boosts efficiency and significantly reduces overhead.
Below are examples of how to use Prompt Sentinel in different LLM pipelines. For more detailed examples, please refer to the `examples` directory in the repository.
```python
@sentinel(detector=LLMSecretDetector(...))
def call_llm(messages):
    # Call the LLM with sanitized messages
    return response
```
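Concretely, an end-to-end decorated call might look like the following sketch. The import paths and the OpenAI-style client call are illustrative assumptions; only `@sentinel` and `LLMSecretDetector` come from Prompt Sentinel itself.

```python
# Sketch only: import paths and the OpenAI client usage are assumptions.
from openai import OpenAI
from prompt_sentinel import sentinel, LLMSecretDetector  # module layout assumed

client = OpenAI()

@sentinel(detector=LLMSecretDetector(...))  # detector configuration omitted
def call_llm(messages):
    # `messages` arrives here already sanitized: secrets are replaced by mask
    # tokens such as __SECRET_1__, and the return value is decoded for the caller.
    completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return completion.choices[0].message.content

reply = call_llm([{"role": "user", "content": "My DB password is hunter2, please rotate it."}])
```

Alternatively, an entire model class can be instrumented so that the listed methods are wrapped automatically: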
```python
InstrumentedClass = instrument_model_class(
    BaseChatModel,
    detector=LLMSecretDetector(...),
    methods_to_wrap=['invoke', 'ainvoke', 'stream', 'astream'],
)
llm = InstrumentedClass()
response = llm.invoke(messages)
```
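Here `BaseChatModel` stands in for whichever chat model class you actually use. A possible concrete setup with LangChain's `ChatOpenAI` (an illustrative assumption, not part of Prompt Sentinel's documented examples) could look like this:

```python
# Sketch: instrumenting a concrete LangChain chat model (class and constructor
# arguments are assumptions for illustration).
from langchain_openai import ChatOpenAI

InstrumentedChatOpenAI = instrument_model_class(
    ChatOpenAI,                           # any concrete chat model class
    detector=LLMSecretDetector(...),      # detector configuration omitted
    methods_to_wrap=['invoke', 'ainvoke', 'stream', 'astream'],
)

llm = InstrumentedChatOpenAI(model="gpt-4o-mini")
messages = [("user", "Here is my API key sk-live-12345; is it well formed?")]
# Secrets are masked before each wrapped method runs and restored in the reply.
response = llm.invoke(messages)
```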
- **Sensitive Data Detection:** Use detectors like `LLMSecretDetector` and `PythonStringDataDetector` to identify sensitive or private data in your text.
- **Automatic Sanitization through Secret Masking:** Detected secrets are replaced with unique mask tokens (e.g., `__SECRET_1__`) so that the LLM operates on sanitized input. Once the LLM returns its output, the response is decoded to reinstate the original secrets.
- **Decorator Integration:** Easily integrate secret sanitization into your LLM calling pipeline using the `@sentinel` decorator: preprocess your messages before they reach the LLM and post-process the responses to decode the tokens.
- **Caching:** Repeated detections on the same text are cached to reduce redundant API calls and improve performance.
- **Async LLM Calls:** Async LLM-calling functions and methods can be decorated as well (see the sketch after this list).
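For async pipelines, decoration presumably mirrors the synchronous case; the sketch below assumes that, and the inner client call is a placeholder.

```python
import asyncio

# Assumption: @sentinel wraps coroutine functions the same way as regular ones.
@sentinel(detector=LLMSecretDetector(...))
async def call_llm_async(messages):
    # `messages` is sanitized before the coroutine body executes.
    response = await some_async_llm_client.chat(messages)  # hypothetical client
    return response  # decoded back to the original secrets for the caller

# asyncio.run(call_llm_async([{"role": "user", "content": "token: ghp_abc123"}]))
```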
Install the package using pip (or include it in your project as needed):
```bash
pip install prompt-sentinel
```
Note: This package requires Python 3.7 or higher.
Step-by-step flow:
1. **User Input:** The user submits a prompt containing potential secrets.
2. **Sanitize Input via `@sentinel`:** The decorator intercepts the prompt before it reaches the LLM.
3. **Detect Secrets with a `SecretDetector`:** A detector scans the prompt for sensitive information like passwords, keys, or tokens.
4. **Replace Secrets with Tokens (e.g., `__SECRET_1__`):** Each secret is replaced by a unique placeholder token and stored in a mapping.
5. **Send Sanitized Input to the LLM:** The modified, tokenized prompt is passed to the language model.
6. **LLM Generates a Response with Tokens:** The response from the model may include those placeholder tokens.
7. **Decode the LLM Output Using the Secret Mapping:** Tokens are replaced with their original secrets using the stored mapping.
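Conceptually, the mask-and-restore cycle behaves like the toy sketch below. This is not Prompt Sentinel's internal code, just an illustration of steps 4 and 7 using plain string replacement.

```python
# Toy illustration of mask/restore (not the library's implementation).
prompt = "Connect with password hunter2 and token ghp_abc123."
detected = ["hunter2", "ghp_abc123"]  # step 3: example detector output

# Step 4: replace each secret with a unique token and remember the mapping.
mapping = {}
sanitized = prompt
for i, secret in enumerate(detected, start=1):
    token = f"__SECRET_{i}__"
    mapping[token] = secret
    sanitized = sanitized.replace(secret, token)
# sanitized == "Connect with password __SECRET_1__ and token __SECRET_2__."

# Steps 5-6: the sanitized prompt goes to the LLM; its reply may echo tokens.
llm_reply = "I stored __SECRET_1__ securely."

# Step 7: decode the reply using the stored mapping.
for token, secret in mapping.items():
    llm_reply = llm_reply.replace(token, secret)
# llm_reply == "I stored hunter2 securely."
```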
- **Detectors:** You can implement your own secret detectors by extending the `SecretDetector` abstract base class. Check out the provided implementations in the `sentinel_detectors` module for guidance (a sketch follows this list).
- **Context Management:** Internally, a singleton context is used to persist secret mappings during LLM interaction and tool invocation. This ensures secrets encoded in the LLM prompt are automatically decoded before tool execution.
- **Caching:** The detectors can use caching to avoid redundant API calls. In the provided implementation of `LLMSecretDetector`, caching is handled via an instance variable (`_detect_cache`).
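As a rough illustration of extending the base class, a regex-based detector could look like the sketch below. The import path and the `detect` method name and signature are assumptions; consult the `SecretDetector` ABC in the `sentinel_detectors` module for the real interface.

```python
import re
from typing import List

from sentinel_detectors import SecretDetector  # import path assumed

class GitHubTokenDetector(SecretDetector):
    """Toy detector that flags GitHub-style personal access tokens."""

    _PATTERN = re.compile(r"ghp_[A-Za-z0-9]{36}")

    def detect(self, text: str) -> List[str]:  # method name and signature assumed
        return self._PATTERN.findall(text)
```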
Contributions are welcome! Feel free to submit issues or pull requests on GitHub. When contributing, please follow the guidelines in our CONTRIBUTING.md.
This project is licensed under the MIT License. See the LICENSE file for details.