diff --git a/tims-elements-prompting.md b/tims-elements-prompting.md
index e26de4a..354158c 100644
--- a/tims-elements-prompting.md
+++ b/tims-elements-prompting.md
@@ -1,18 +1,41 @@
 # Tim's Elements of Prompt Engineering
 
-Maintain at least 2 "daily driver" LLMs at a paid tier for A/B testing (fault tolerance and groundedness)
-Never provide personal or confidential information to public/free AIs—ensure privacy by understanding chat storage, usage stats, and licensing policies
-Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities
-Apply a stream-of-consciousness technique to generate prompts, even with rough spelling/grammar, including key information like who, what, when, where, why, and how
-Think procedurally and in a step-by-step manner to help the AI break down complex topics
-Optimize custom instructions and prompts ("meta prompting"), including asking the AI to summarize or focus its responses
-Use system prompts and meta prompts to direct and focus the LLM's capabilities
-Be aware of potential signs of amnesia or hallucination in AI responses; have a backup plan (such as testing with multiple LLMs)
-Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing
-Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons
-Remember to attribute AI-enriched content where relevant
-Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions
-"LLM Pillar Jumping": Use insights from one LLM session to support or refine another
-Consider "A/B testing" LLMs against each other for more grounded and reliable answers
-Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response
-Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
\ No newline at end of file
+## Core Principles
+
+* Maintain at least 2 "daily driver" LLMs at a paid tier for A/B testing (fault tolerance and groundedness)
+
+* Never provide personal or confidential information to public/free AIs—ensure privacy by understanding chat storage, usage stats, and licensing policies
+
+* Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities
+
+## Prompt Creation Techniques
+
+* Apply a stream-of-consciousness technique to generate prompts, even with rough spelling/grammar, including key information like who, what, when, where, why, and how
+
+* Think procedurally and in a step-by-step manner to help the AI break down complex topics
+
+* Optimize custom instructions and prompts ("meta prompting"), including asking the AI to summarize or focus its responses
+
+* Use system prompts and meta prompts to direct and focus the LLM's capabilities
+
+## Best Practices
+
+* Be aware of potential signs of amnesia or hallucination in AI responses; have a backup plan (such as testing with multiple LLMs)
+
+* Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing
+
+* Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons
+
+* Remember to attribute AI-enriched content where relevant
+
+## Advanced Strategies
+
+* Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions
+
+* "LLM Pillar Jumping": Use insights from one LLM session to support or refine another
+
+* Consider "A/B testing" LLMs against each other for more grounded and reliable answers
+
+* Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response
+
+* Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
\ No newline at end of file