# Tim's Elements of Prompt Engineering

## Core Principles

* Maintain at least two "daily driver" LLMs at a paid tier so you can A/B test them; redundancy gives you fault tolerance, and cross-checking keeps answers grounded

* Never provide personal or confidential information to public/free AIs—ensure privacy by understanding chat storage, usage stats, and licensing policies

* Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities

## Prompt Creation Techniques

* Apply a stream-of-consciousness technique to generate prompts; rough spelling and grammar are fine as long as you capture the key information: who, what, when, where, why, and how

* Think procedurally and in a step-by-step manner to help the AI break down complex topics

* Optimize custom instructions and prompts ("meta prompting"), including asking the AI to summarize or focus its responses

* Use system prompts and meta prompts to direct and focus the LLM's capabilities
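The techniques above can be sketched in code. This is a minimal illustration, not a real API: the `build_prompt` helper, the system-prompt text, and the 5W1H field list are all made up for the example. It shows a system prompt focusing the model and a user prompt assembled stream-of-consciousness style around the who/what/when/why facts.

```python
# Illustrative sketch of combining a system prompt with a structured
# user prompt. None of these names come from a real LLM library.

SYSTEM_PROMPT = (
    "You are a concise technical assistant. "
    "Summarize long answers and flag any uncertainty."
)

FIVE_W_ONE_H = ("who", "what", "when", "where", "why", "how")


def build_prompt(task: str, **facts: str) -> list[dict]:
    """Assemble a chat-style message list from a task plus 5W1H facts."""
    lines = [task, ""]
    for key in FIVE_W_ONE_H:
        if key in facts:
            lines.append(f"{key.capitalize()}: {facts[key]}")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "\n".join(lines)},
    ]


messages = build_prompt(
    "Draft an email announcing our release.",
    who="the platform team",
    what="version 2.0 of the internal CLI",
    when="next Monday",
    why="it fixes the auth timeout bug",
)
print(messages[1]["content"])
```

The same helper doubles as a meta-prompting starting point: paste its output into an LLM and ask it to tighten the system message or reorder the facts.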

## Best Practices

* Watch for signs of amnesia (lost context) or hallucination in AI responses; have a backup plan, such as re-running the same prompt in another LLM

* Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing

* Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons

* Remember to attribute AI-enriched content where relevant

## Advanced Strategies

* Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions

* "LLM Pillar Jumping": Use insights from one LLM session to support or refine another

* Consider "A/B testing" LLMs against each other for more grounded and reliable answers

* Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response

* Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
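The A/B testing idea above can be sketched as a tiny harness. This is an assumption-laden toy: `model_a` and `model_b` are stand-in functions, not real vendor clients, and the `agree_on` keyword check is a deliberately crude proxy for groundedness. The point is the shape: one prompt fans out to several models, and their answers are compared side by side.

```python
# Toy A/B-testing harness. The two "models" are stubs standing in for
# calls to two different paid-tier LLMs.
from typing import Callable


def model_a(prompt: str) -> str:  # stand-in for LLM #1
    return "Paris is the capital of France."


def model_b(prompt: str) -> str:  # stand-in for LLM #2
    return "The capital of France is Paris."


def ab_test(prompt: str, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Send one prompt to every model and collect the answers by name."""
    return {name: fn(prompt) for name, fn in models.items()}


def agree_on(keyword: str, answers: dict[str, str]) -> bool:
    """Crude groundedness check: do all answers mention the same key fact?"""
    return all(keyword.lower() in a.lower() for a in answers.values())


answers = ab_test("What is the capital of France?", {"A": model_a, "B": model_b})
print(agree_on("Paris", answers))
```

"LLM pillar jumping" falls out of the same structure: feed one model's answer back into another model's prompt instead of only comparing them.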