Add defense against prompt injection attack #181
Replies: 2 comments 1 reply
-
You seem confused over what MCP is and can do. |
Beta Was this translation helpful? Give feedback.
-
Hey @zhilongwang, building on the other comment, I wanted to help clarify where something like this would probably fit. MCP is primarily a communication protocol. It defines how LLMs connect to data sources and tools, but not how they process the content internally. Similar to how HTTP handles connections and data transfer on the web, but doesn't define how websites protect against things like XSS attacks. Prompt injection defenses would typically live within the LLM service itself or in the application using the LLM. Or, this could take the shape of specialized security libraries that work alongside MCP. Or maybe even an MCP server that offers prompt hardening as a service. |
Beta Was this translation helpful? Give feedback.
-
Pre-submission Checklist
Your Idea
Prompt injection is a significant security concern in any LLM application that processes user input.
I noticed that MCP has not deployed any prompt injection defenses in its framework.
I am researching the protection of LLM applications and have an idea: adopting randomization in prompts to mitigate prompt injection attacks. This approach is similar to ASLR in traditional software protection and introduces almost zero runtime overhead.
I am interested in contributing to the implementation of this feature and would like to hear your thoughts on it.
Scope
Beta Was this translation helpful? Give feedback.
All reactions