Update system prompt #835
Conversation
## What I Did

Referred to the leaked system prompts of tools like Cursor, Windsurf, Trae, and other agentic IDEs, and crafted a specialized system prompt. This new prompt is designed to make the model strictly adhere to both agentic-mode instructions and the user's intent, improving the overall developer experience.

---

## What It Changes

The default system prompt isn't strict or directive enough for smaller, local, or non-agent-aware models. Such models often ask multiple clarifying questions (5–6 on average) before beginning the task.

This new prompt was designed and tuned specifically for:

- OpenAI's `O` series (e.g., GPT-4o)
- Locally hosted models

With this change, the model is capable of initiating **full project generation** from a single, high-level prompt, rather than requesting further clarification.

---

## Key Improvements

- Enables models to **actually modify files** instead of only generating code blocks in responses.
- Makes the system prompt more actionable, reducing friction for developers and enabling real-time coding experiences.

---

## My Opinion

Compared to the default system prompt, this enhanced version is a major step forward — especially for locally hosted or prompt-sensitive models. While it still isn't on par with Cursor or Windsurf's prompt-engineering depth, it's a substantial improvement.

👉 Please review and consider integrating this updated system prompt in the next release.
@adyanthm Can I ask one thing about this change? Reading the PR, a lot of the prompts are updated for the agent, but I found this:

```typescript
// system message
private _generateChatMessagesSystemMessage = async (chatMode: ChatMode, specialToolFormat: 'openai-style' | 'anthropic-style' | 'gemini-style' | undefined) => {
	...
	const systemMessage = chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
	return systemMessage
}
```

Shouldn't it be:

```typescript
const systemMessage = chatMode === 'agent'
	? agentSystemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, mcpTools, includeXMLToolDefinitions })
	: chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```

If I am wrong, please let me know.
Oh yes, sorry! I mentioned that my prompt is specifically modded for agent mode; it's from my modded fork. I only have agent mode in my fork, so I just set the system prompt without checking for agent mode. Can you fix it?
It seems that I couldn't add a commit to your PR, so I'm just sharing the change.

```typescript
// before
const systemMessage = chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```

```typescript
// after
import { ..., agentSystemMessage } from "../common/prompt/prompts.js";
...
const systemMessage = chatMode === 'agent'
	? agentSystemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, mcpTools, includeXMLToolDefinitions })
	: chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```

You can check.
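For reference, a minimal, self-contained sketch of this conditional dispatch — the stub implementations and the `ChatMode` values below are hypothetical placeholders for illustration, not the project's real functions (those are imported from `../common/prompt/prompts.js`):

```typescript
// Hypothetical stand-in for the project's ChatMode union type.
type ChatMode = 'agent' | 'normal';

// Placeholder stubs standing in for the real prompt builders.
function agentSystemMessage(): string {
  return 'You are an autonomous coding agent...'; // placeholder prompt text
}

function chat_systemMessage(chatMode: ChatMode): string {
  return `You are a helpful assistant (mode: ${chatMode})...`; // placeholder
}

// Dispatch on chatMode, mirroring the change proposed above:
// the agent-specific prompt is only used when chatMode === 'agent'.
function generateSystemMessage(chatMode: ChatMode): string {
  return chatMode === 'agent'
    ? agentSystemMessage()
    : chat_systemMessage(chatMode);
}
```

This keeps the non-agent chat modes on the original prompt while routing only agent mode to the new one.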
I am busy with another project at the moment. I gave you collaborator access to my forked repo. Please commit the changes there if you can.
- import `agentSystemMessage` from prompts.js
- conditionally set `systemMessage` based on 'agent' chat mode
Applied it.
@andrewpareles Look into this and merge this asap. |