Best practices for agent design, token efficiency, and backlog prioritisation use cases #2846
Unanswered · p-nicolaou asked this question in Q&A
Hi team,
First off — really enjoying working with Oh My OpenAgent. I’m exploring how to use agents more effectively in production workflows and had a few questions around best practices, especially when scaling usage.
Best practices for designing agents
• Should we split logic across multiple agents, or keep everything within one agent that has tools?
• Any guidance on orchestration vs delegation between agents?
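To make the orchestration-vs-delegation question concrete, here is a minimal sketch of the delegation pattern I have in mind. Everything here (`Agent`, `Orchestrator`, `register`, `delegate`) is hypothetical naming for illustration, not part of the Oh My OpenAgent API; the lambdas stand in for real LLM-backed agents:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    # A specialised sub-agent: takes task text, returns result text.
    name: str
    handle: Callable[[str], str]

class Orchestrator:
    """Routes each task to the sub-agent registered for its type."""
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, task_type: str, agent: Agent) -> None:
        self.agents[task_type] = agent

    def delegate(self, task_type: str, task: str) -> str:
        # Delegation: the orchestrator holds no task logic itself.
        return self.agents[task_type].handle(task)

orch = Orchestrator()
orch.register("summarise", Agent("summariser", lambda t: t[:40]))
orch.register("classify", Agent("classifier",
                                lambda t: "bug" if "error" in t else "feature"))
print(orch.delegate("classify", "error on login page"))  # → bug
```

The question, then, is whether this kind of explicit routing layer is the recommended pattern, or whether a single agent with all tools attached tends to work better in practice.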
Token efficiency becomes critical at scale, so I'm curious: are there recommended strategies for:
• Reducing prompt size without losing performance?
• Managing conversation memory effectively?
Should we prefer:
• Short system prompts + external context injection?
• Or richer, self-contained prompts?
And are there any built-in features or patterns in this repo that help control token usage?
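For context, the kind of memory management I'm imagining is a sliding window over recent turns with a token budget. This is a hypothetical sketch, not a feature of this repo; the 4-characters-per-token estimate is a rough heuristic standing in for a real tokenizer:

```python
from collections import deque

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token (assumption, not a tokenizer).
    return max(1, len(text) // 4)

class SlidingMemory:
    """Keeps only the most recent turns that fit a token budget."""
    def __init__(self, max_tokens: int) -> None:
        self.max_tokens = max_tokens
        self.turns: deque = deque()

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Evict oldest turns until the window fits the budget again.
        while sum(estimate_tokens(t) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def context(self) -> str:
        # What would be injected into the next prompt.
        return "\n".join(self.turns)
```

Is something along these lines (or summarisation of evicted turns) the recommended approach here?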
Use case: Analysing & prioritising a Jira backlog
I’m particularly interested in using agents for product/engineering workflows.
Example scenario:
• Input: A Jira backlog with ~100 tickets
• Each ticket includes:
  • Title
  • Description (some unclear)
  • Labels
Goal:
• Cluster similar tickets
• Identify high-impact vs low-impact work
• Suggest prioritisation order
• Highlight quick wins vs complex tasks
Would you recommend:
• A single agent handling the full pipeline?
• Or multiple specialised agents (e.g. classifier → scorer → prioritiser)?
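To illustrate the multi-agent option, here is a toy version of the classifier → scorer → prioritiser pipeline. All names are illustrative, and the keyword rules are placeholders for the LLM calls each stage would actually make:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Ticket:
    title: str
    labels: List[str] = field(default_factory=list)
    category: str = ""
    impact: int = 0   # 1 (low) .. 3 (high)
    effort: int = 0   # 1 (quick win) .. 3 (complex)

def classify(t: Ticket) -> Ticket:
    # Stage 1 (classifier agent): cluster/label each ticket.
    t.category = "bug" if "crash" in t.title.lower() else "feature"
    return t

def score(t: Ticket) -> Ticket:
    # Stage 2 (scorer agent): estimate impact and effort.
    t.impact = 3 if "customer-facing" in t.labels else 1
    t.effort = 1 if "small" in t.labels else 2
    return t

def prioritise(tickets: List[Ticket]) -> List[Ticket]:
    # Stage 3 (prioritiser agent): highest impact first; lower effort
    # breaks ties, which also surfaces quick wins near the top.
    return sorted(tickets, key=lambda t: (-t.impact, t.effort))

backlog = [
    Ticket("Refactor settings page"),
    Ticket("App crash on startup", labels=["customer-facing", "small"]),
]
ordered = prioritise([score(classify(t)) for t in backlog])
```

Each stage has a narrow contract, so in principle each could run with a short, focused prompt. Is that decomposition worth the extra orchestration overhead, or does one agent with a richer prompt do just as well?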
Any examples or patterns for:
• Batch processing large datasets like this?
• Avoiding excessive token usage when analysing many items?
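The batching approach I have in mind is map-reduce style: analyse fixed-size chunks of the backlog separately, then merge the per-batch summaries, so no single prompt has to hold all ~100 tickets. A sketch (batch size of 10 is an arbitrary assumption, and `analyse_batch` stands in for one LLM call):

```python
from typing import Iterator, List

def batches(items: List[str], size: int = 10) -> Iterator[List[str]]:
    """Yield successive fixed-size chunks of the backlog."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def analyse_batch(batch: List[str]) -> str:
    # Placeholder for one LLM call that summarises a batch of tickets.
    return f"{len(batch)} tickets analysed"

tickets = [f"TICKET-{n}" for n in range(1, 101)]
# Map: one bounded-size call per batch; reduce: merge the 10 summaries
# in a final pass instead of sending the whole backlog at once.
summaries = [analyse_batch(b) for b in batches(tickets)]
```

Is there a built-in or recommended pattern for this kind of chunked processing, or guidance on picking batch sizes to balance token cost against cross-ticket context?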