✨ Masking sensitive information in lang/emb requests #258

Open
@roma-glushko

Description

Requirements

  • Mask PHI, PII, and PCI information in lang/embedding requests
  • Unmask all masked values before returning LLM chat responses to the client
  • Unmask masked tokens in streaming chat responses
  • Allow enabling the functionality per router
  • Consider a middleware-like approach to the architecture, keeping in mind that more pre/post-processing logic may need to plug into the same chain in the future (e.g. ✨ [Cost] Incoming Token Compression #259); a rough interface sketch follows this list

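A minimal sketch of what such a middleware-like hook could look like, assuming a hypothetical `RequestProcessor` interface and toy regex-based detectors (none of these names exist in the codebase today, and real PII/PHI/PCI detection would need proper classifiers):

```go
package masking

import (
	"fmt"
	"regexp"
	"strings"
)

// RequestProcessor is a hypothetical middleware hook: implementations may
// rewrite the prompt before it is sent to a provider and rewrite the
// completion before it is returned to the client.
type RequestProcessor interface {
	ProcessRequest(prompt string) (string, error)
	ProcessResponse(completion string) (string, error)
}

// Masker replaces sensitive values (PII/PHI/PCI) with placeholders on the
// way in and restores them on the way out.
type Masker struct {
	patterns     []*regexp.Regexp  // detectors for sensitive values
	placeholders map[string]string // placeholder -> original value
	counter      int
}

func NewMasker() *Masker {
	return &Masker{
		patterns: []*regexp.Regexp{
			// Toy detectors for illustration only: emails and card-like numbers.
			regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`),
			regexp.MustCompile(`\b(?:\d[ -]?){13,16}\b`),
		},
		placeholders: map[string]string{},
	}
}

// ProcessRequest masks sensitive values before the prompt reaches the provider.
func (m *Masker) ProcessRequest(prompt string) (string, error) {
	for _, p := range m.patterns {
		prompt = p.ReplaceAllStringFunc(prompt, func(match string) string {
			m.counter++
			placeholder := fmt.Sprintf("<MASKED_%d>", m.counter)
			m.placeholders[placeholder] = match
			return placeholder
		})
	}
	return prompt, nil
}

// ProcessResponse restores any placeholders the model echoed back.
func (m *Masker) ProcessResponse(completion string) (string, error) {
	for placeholder, original := range m.placeholders {
		completion = strings.ReplaceAll(completion, placeholder, original)
	}
	return completion, nil
}
```

Keeping the masking stage behind a generic pre/post-processing interface like this would let other features (e.g. #259) register in the same chain, and the per-router toggle becomes a matter of which processors a router is configured with.
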
Use Case

  • I want to be able to mask sensitive information (PHI, PII, and PCI) before sending it to LLM providers, so I'm not leaking it. If the model returns masked placeholders, I want them to be unmasked automatically and seamlessly (see the streaming sketch below).

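For the streaming case, a placeholder can be split across two chunks, so the unmasking step would need to hold back a potentially incomplete placeholder at the end of each chunk. A rough sketch, again with hypothetical names:

```go
package masking

import "strings"

// StreamUnmasker restores masked placeholders in a streamed chat response.
// Because a placeholder like "<MASKED_3>" may be split across two chunks,
// it holds back any unterminated "<..." tail until the next chunk arrives.
type StreamUnmasker struct {
	placeholders map[string]string // placeholder -> original value
	buf          string            // pending, possibly incomplete placeholder
}

// Feed consumes the next chunk and returns the text that is safe to emit.
func (u *StreamUnmasker) Feed(chunk string) string {
	text := u.buf + chunk
	u.buf = ""

	// Hold back a trailing "<..." that has no closing ">" yet.
	if i := strings.LastIndex(text, "<"); i != -1 && !strings.Contains(text[i:], ">") {
		u.buf = text[i:]
		text = text[:i]
	}

	for placeholder, original := range u.placeholders {
		text = strings.ReplaceAll(text, placeholder, original)
	}
	return text
}

// Flush emits whatever is still buffered once the stream ends.
func (u *StreamUnmasker) Flush() string {
	out := u.buf
	u.buf = ""
	return out
}
```

The placeholder map would have to be scoped per request so that concurrent requests on the same router never share or leak each other's masked values.
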