- Set up project structure and dependencies
- Implement logging system
- Set up configuration management
- Create basic CLI interface
- Create repository parser foundation
- Implement code file parsing
- Add documentation and README parsing
- Develop technical feature identification
- Implement "interview mode"
- Create content review interface
- Build approval workflow
- Build content generation foundation
- Implement platform-specific content adaptation
- Create content type templates
- Add AI enhancement capabilities
- Implement MCP integration
- Add publication functionality
- Create scheduling system
- Build error handling and retry logic
- Unit and integration testing
- End-to-end testing
- Performance optimization
- Documentation and final polish
# Prompt 1: Project Structure Setup
Create a basic Python project structure for a Code Repository Content Marketing System with the following requirements:
- Python 3.12
- LiteLLM for AI calls
- MCP for publication (Model Context Protocol)
- Loguru for logging
- Pytest for testing
- Asyncio for asynchronous operations
The project should include:
1. A proper directory structure
2. A requirements.txt file with appropriate dependencies and versions
3. A setup.py file for packaging
4. A simple README.md with project description
5. A basic entry point script
Focus on creating a clean, maintainable structure that follows Python best practices and sets us up for the additions we'll make later.
# Prompt 2: Logging and Configuration System
Extend our project by adding:
1. A robust logging system using Loguru with:
- Appropriate log levels
- Formatted output
- File rotation
- Error capturing
2. A configuration system that:
- Loads from YAML/JSON files
- Has sensible defaults
- Includes validation
- Supports environment variable overrides
- Stores user preferences for content tone, platforms, approval workflow
Ensure the configuration system covers all options mentioned in the spec, including platform selection, approval workflows, and content preferences.
Build upon the existing project structure and make sure components are properly connected.
# Prompt 3: Command Line Interface
Create a command-line interface for our application using Click or Typer that provides:
1. A main entry point with subcommands
2. Commands for:
- Analyzing a repository
- Generating content
- Publishing content
- Configuring settings
- Viewing logs
3. Appropriate help text and usage examples
4. Error handling for invalid inputs
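As a rough sketch of the subcommand layout, here is the shape using stdlib `argparse` as a stand-in; the real CLI would use Click or Typer, and the command and option names are illustrative:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Subcommand layout; Click or Typer would replace this in the real CLI."""
    parser = argparse.ArgumentParser(prog="repo-marketer")
    sub = parser.add_subparsers(dest="command", required=True)
    analyze = sub.add_parser("analyze", help="Analyze a repository")
    analyze.add_argument("path", help="Path to the local repository")
    generate = sub.add_parser("generate", help="Generate content")
    generate.add_argument("--platform", choices=["devto", "twitter", "linkedin"])
    sub.add_parser("publish", help="Publish approved content")
    sub.add_parser("config", help="View or edit settings")
    sub.add_parser("logs", help="View logs")
    return parser
```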
Integrate with our existing logging and configuration systems. Make sure the CLI is user-friendly with clear instructions and feedback. This should be a foundation we can extend as we add more functionality.
# Prompt 4: Repository Parser Foundation
Create a repository parser foundation that can:
1. Accept a local repository path
2. Identify repository type (git, etc.)
3. Extract basic repository metadata (name, description, branches)
4. Scan directory structure and build a file tree
5. Categorize files by type (code, documentation, configuration, etc.)
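A minimal sketch of the scan-and-categorize step, assuming an extension-based category map (illustrative and easily extended):

```python
from pathlib import Path

# Illustrative mapping; the real system would cover many more extensions.
CATEGORIES = {
    ".py": "code", ".js": "code", ".ts": "code",
    ".md": "documentation", ".rst": "documentation",
    ".yaml": "configuration", ".yml": "configuration", ".toml": "configuration",
}


def scan_repository(root: str) -> dict:
    """Walk a local repository and categorize its files by extension."""
    root_path = Path(root)
    if not root_path.is_dir():
        raise ValueError(f"not a directory: {root}")
    info = {
        "name": root_path.name,
        "is_git": (root_path / ".git").exists(),
        "files": {},
    }
    for path in root_path.rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            category = CATEGORIES.get(path.suffix, "other")
            info["files"].setdefault(category, []).append(
                str(path.relative_to(root_path)))
    return info
```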
Include proper error handling for invalid repositories and integrate with our logging system. This should be extensible for future enhancements like remote repository support. Make it a modular component that other parts of the system can use.
Build upon our existing project structure and ensure it works with our CLI.
# Prompt 5: Code File Parsing
Extend our repository parser to analyze code files:
1. Create language detection functionality
2. Implement language-specific parsers for common languages (start with Python, JavaScript, and one more)
3. Extract:
- Function and class definitions
- Comments and docstrings
- Import statements to identify dependencies
- Basic code metrics (size, complexity)
4. Organize the extracted data into a structured format for later analysis
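For Python sources, the stdlib `ast` module can drive the extraction; a sketch of step 3:

```python
import ast


def analyze_python_source(source: str) -> dict:
    """Extract definitions, docstrings, and imports from Python source."""
    tree = ast.parse(source)
    result = {
        "functions": [],
        "classes": [],
        "imports": [],
        "docstring": ast.get_docstring(tree),
    }
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            result["functions"].append(
                {"name": node.name, "docstring": ast.get_docstring(node)})
        elif isinstance(node, ast.ClassDef):
            result["classes"].append(node.name)
        elif isinstance(node, ast.Import):
            result["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            result["imports"].append(node.module or "")
    return result
```

Other languages would need their own parsers (e.g. tree-sitter-based), hidden behind a common interface.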
Make the system extensible for adding more language parsers in the future. Focus on creating clean abstractions and proper error handling. Integrate with our existing logging system and ensure it works with previously built components.
# Prompt 6: Documentation and README Parsing
Enhance our repository parser to handle documentation:
1. Implement Markdown and reStructuredText parsing
2. Extract key sections from README files:
- Project descriptions
- Features lists
- Installation instructions
- Usage examples
- API documentation
3. Parse other documentation files to identify:
- Architecture descriptions
- Technical decisions
- API references
- Tutorials
4. Organize extracted information into a structured format for content generation
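A minimal sketch of heading-based section extraction for Markdown (reStructuredText would need its own parser):

```python
import re


def extract_markdown_sections(text: str) -> dict[str, str]:
    """Map each Markdown heading to the body text that follows it."""
    sections: dict[str, str] = {}
    current = "_preamble"
    buffer: list[str] = []
    for line in text.splitlines():
        match = re.match(r"#{1,6}\s+(.*)", line)
        if match:
            sections[current] = "\n".join(buffer).strip()
            current = match.group(1).strip()
            buffer = []
        else:
            buffer.append(line)
    sections[current] = "\n".join(buffer).strip()
    return sections
```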
Make sure to handle different documentation styles and formats gracefully. Integrate with our existing code and maintain the same error handling and logging patterns.
# Prompt 7: Technical Feature Identification
Create an analysis module that processes the parsed repository data to identify noteworthy technical elements:
1. Implement detection for:
- Architectural patterns (MVC, microservices, etc.)
- Technology stack components
- Unique implementation approaches
- Performance optimizations
- Testing strategies
- Developer experience considerations
2. Add scoring system to prioritize features based on:
- Uniqueness
- Complexity
- Potential interest to target audience
3. Create a structured output format that can feed into content generation
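The scoring in step 2 might start as a simple weighted sum; the weights below are illustrative assumptions, not from the spec:

```python
def score_feature(uniqueness: float, complexity: float, interest: float,
                  weights: tuple[float, float, float] = (0.4, 0.2, 0.4)) -> float:
    """Weighted score in [0, 1]; the default weights are illustrative."""
    for value in (uniqueness, complexity, interest):
        if not 0.0 <= value <= 1.0:
            raise ValueError("feature scores must be in [0, 1]")
    w_u, w_c, w_i = weights
    return round(w_u * uniqueness + w_c * complexity + w_i * interest, 3)


def rank_features(features: list[dict]) -> list[dict]:
    """Sort detected features by score, highest first."""
    return sorted(features, key=lambda f: f["score"], reverse=True)
```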
Include confidence levels for detections and ensure proper error handling. Integrate with our existing code and maintain consistent logging patterns. This module should work with the data extracted by our repository parser.
# Prompt 8: Content Generation Foundation
Create a content generation system foundation:
1. Implement a base content generator that:
- Takes repository analysis data as input
- Uses LiteLLM to generate content
- Supports different content templates
- Handles prompt engineering and context management
2. Design a modular system for:
- Different platforms (Dev.to, Twitter/X, LinkedIn)
- Different content types (announcements, deep-dives, etc.)
- User customization options
3. Create an initial prompt template system that can be extended
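A sketch of the prompt template idea using `string.Template`; the rendered prompt would then be sent to the model through LiteLLM (e.g. `litellm.completion`), which is omitted here. The template wording is illustrative:

```python
from string import Template

# Illustrative template; the real system would hold one per content type.
TEMPLATES = {
    "announcement": Template(
        "Write a $tone announcement for the project '$name'.\n"
        "Key features: $features\n"
        "Target platform: $platform"
    ),
}


def build_prompt(template_name: str, analysis: dict,
                 tone: str, platform: str) -> str:
    """Render a prompt from repository analysis data."""
    template = TEMPLATES[template_name]
    return template.substitute(
        tone=tone,
        name=analysis["name"],
        features=", ".join(analysis.get("features", [])),
        platform=platform,
    )
```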
Focus on creating a clean architecture that separates concerns and allows for easy extension. Ensure proper error handling and logging. This foundation should work with our existing code and provide a basis for the platform-specific generators.
# Prompt 9: Platform-Specific Content Adaptation
Extend our content generation system with platform-specific adapters:
1. Implement specialized generators for:
- Dev.to (technical articles with code samples, headers, images)
- Twitter/X (short messages with appropriate hashtags, threading support)
- LinkedIn (professional content with business value focus)
2. For each platform, handle:
- Character/size limitations
- Formatting requirements
- Platform-specific features (hashtags, mentions, etc.)
- Content tone adjustments based on platform norms
3. Create specialized prompt templates for each platform
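The common-interface idea might look like this; the character limits and the Dev.to front-matter shape are illustrative assumptions:

```python
from abc import ABC, abstractmethod


class PlatformAdapter(ABC):
    """Common interface; each platform supplies its own constraints."""
    max_length: int

    @abstractmethod
    def format(self, text: str, tags: list[str]) -> str: ...

    def truncate(self, text: str) -> str:
        if len(text) <= self.max_length:
            return text
        return text[: self.max_length - 1] + "…"


class TwitterAdapter(PlatformAdapter):
    max_length = 280

    def format(self, text: str, tags: list[str]) -> str:
        hashtags = " ".join("#" + tag for tag in tags)
        return self.truncate(f"{text} {hashtags}".strip())


class DevToAdapter(PlatformAdapter):
    max_length = 100_000  # effectively unconstrained for articles

    def format(self, text: str, tags: list[str]) -> str:
        # Front-matter shape is illustrative, not the verified Dev.to format.
        front_matter = f"---\ntags: {', '.join(tags)}\n---\n"
        return front_matter + text
```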
Make sure each adapter follows a common interface but implements platform-specific logic. Integrate with our existing content generation foundation and maintain consistent error handling and logging patterns.
# Prompt 10: Content Type Templates
Enhance our content generation system with templates for different content types:
1. Create specialized templates for:
- Project introductions/announcements
- Technical deep-dives
- Architecture breakdowns
- Feature highlights
- Release notes transformations
2. For each template, implement:
- Specific prompt engineering strategies
- Structure guidelines
- AI instructions for tone and focus
- Appropriate content selection logic
3. Add a template selection system based on repository analysis and user preferences
These should integrate with our platform-specific adapters, allowing different content types to be formatted appropriately for each platform. Maintain consistent error handling and logging patterns and ensure compatibility with existing code.
# Prompt 11: AI Enhancement Capabilities
Add AI enhancement capabilities to our content generation system:
1. Implement specialized modules for:
- Technical explanation generation
- Code sample extraction and explanation
- Architectural diagram description
- Performance benefit quantification
- Business value articulation
2. Create a system for dynamically adjusting AI prompts based on:
- Repository analysis results
- User preferences
- Target platform
- Previous generation results
3. Add quality checking for generated content:
- Technical accuracy validation
- Readability scoring
- Engagement potential assessment
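Readability scoring in step 3 could start as a simple heuristic like the one below (illustrative; a real implementation might use an established metric such as Flesch-Kincaid):

```python
import re


def readability_score(text: str, max_words_per_sentence: int = 25) -> dict:
    """Crude readability check that flags overly long sentences.

    The threshold and metric are illustrative, not a standard measure.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    long_sentences = sum(1 for n in lengths if n > max_words_per_sentence)
    avg = sum(lengths) / len(lengths) if lengths else 0.0
    return {"sentences": len(sentences), "avg_words": round(avg, 1),
            "long_sentences": long_sentences}
```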
These enhancements should work with our existing content generation system and follow the same patterns for error handling and logging. Focus on making the AI outputs more valuable and targeted for the specific use cases.
# Prompt 12: Interview Mode Implementation
Create an interactive "interview mode" that can gather additional context from users:
1. Implement a system that:
- Identifies missing information from repository analysis
- Generates targeted questions for the user
- Processes and incorporates user responses
- Adapts questions based on previous answers
2. Add support for different question types:
- Open-ended questions for context
- Specific questions about technical decisions
- Clarification questions for ambiguous findings
- Preference questions for content focus
3. Create a user-friendly CLI interface for the interview process
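Step 1's gap detection can be as simple as comparing the analysis against a set of required fields; the fields and question wording below are illustrative:

```python
# Illustrative required fields and question phrasings.
REQUIRED_FIELDS = {
    "description": "How would you describe this project in one sentence?",
    "audience": "Who is the target audience for this content?",
    "motivation": "What problem motivated you to build this?",
}


def generate_questions(analysis: dict) -> list[str]:
    """Ask only about fields the repository analysis could not fill in."""
    return [question for field, question in REQUIRED_FIELDS.items()
            if not analysis.get(field)]
```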
This should integrate with our existing repository analysis and content generation systems. Maintain consistent error handling and logging patterns, and ensure the interface is intuitive and helpful.
# Prompt 13: Content Review Interface
Create a content review interface that allows users to:
1. View generated content with:
- Platform-specific previews
- Formatting as it would appear on the target platform
- Highlighting of key points and technical elements
2. Edit content directly with:
- Simple text editing capabilities
- Format validation for platform constraints
- Suggestions for improvements
3. Compare different versions or variations of content
Make the interface user-friendly and responsive. It should integrate with our existing content generation system and support all the platforms we're targeting. Ensure consistent error handling and logging, and provide helpful feedback to users.
# Prompt 14: Approval Workflow
Implement a configurable approval workflow system:
1. Create a workflow manager that supports:
- Automatic publishing based on conditions
- Manual approval requirements
- Approval routing (different approvers for different platforms)
- Notification mechanisms for pending approvals
2. Implement approval states:
- Draft
- Pending approval
- Approved
- Rejected (with feedback)
- Published
- Failed
3. Add persistence for approval status and history
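The approval states in step 2 form a small state machine; a sketch with the transitions as an explicit table (the exact allowed transitions are an assumption):

```python
from enum import Enum


class State(Enum):
    DRAFT = "draft"
    PENDING = "pending_approval"
    APPROVED = "approved"
    REJECTED = "rejected"
    PUBLISHED = "published"
    FAILED = "failed"


# Assumed transition table; adjust to the configured workflow.
TRANSITIONS = {
    State.DRAFT: {State.PENDING},
    State.PENDING: {State.APPROVED, State.REJECTED},
    State.APPROVED: {State.PUBLISHED, State.FAILED},
    State.REJECTED: {State.DRAFT},   # revise after feedback
    State.FAILED: {State.APPROVED},  # retry publication
    State.PUBLISHED: set(),
}


def advance(current: State, target: State) -> State:
    """Move content to a new state, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current.value} to {target.value}")
    return target
```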
This should integrate with our existing content generation and review systems. The workflow should be configurable through our configuration system and accessible through our CLI. Maintain consistent error handling and logging patterns.
# Prompt 15: MCP Integration
Implement integration with the Model Context Protocol (MCP):
1. Set up MCP client functionality:
- Client configuration and initialization
- Authentication handling
- Request/response management
- Error handling and retries
2. Implement specific MCP tool interactions:
- Context7 for technology information
- Any other relevant MCP tools
3. Create abstractions that hide MCP complexity from the rest of the system
This should follow MCP best practices and integrate cleanly with our existing code. Maintain consistent error handling and logging patterns, and ensure the integration is robust and fault-tolerant.
# Prompt 16: Publication Functionality
Extend our MCP integration to implement publication functionality:
1. Create platform-specific publishers for:
- Dev.to
- Twitter/X
- LinkedIn
2. For each platform, implement:
- Authentication
- Content formatting for API requirements
- Metadata handling (tags, categories, etc.)
- Publication status tracking
- Error handling and retries
3. Add publication verification and confirmation
This should leverage our MCP integration and work with our content generation and approval systems. Maintain consistent error handling and logging patterns, and ensure robust error recovery for publication failures.
# Prompt 17: Scheduling System
Implement a content scheduling system:
1. Create a scheduler that supports:
- One-time scheduled publications
- Recurring publication patterns
- Platform-specific optimal timing
- Queue management for multiple scheduled items
2. Add persistence for scheduled items
3. Implement a background worker for handling scheduled publications
4. Create CLI commands for managing scheduled content
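The queue in step 1 maps naturally onto a min-heap ordered by publish time; a minimal in-memory sketch (persistence and the background worker are omitted):

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class ScheduledItem:
    publish_at: float  # Unix timestamp; only this field is compared
    content_id: str = field(compare=False)


class ScheduleQueue:
    """Priority queue of pending publications, earliest first."""

    def __init__(self) -> None:
        self._heap: list[ScheduledItem] = []

    def add(self, content_id: str, publish_at: float) -> None:
        heapq.heappush(self._heap, ScheduledItem(publish_at, content_id))

    def due(self, now: float) -> list[str]:
        """Pop every item whose publish time has arrived."""
        ready = []
        while self._heap and self._heap[0].publish_at <= now:
            ready.append(heapq.heappop(self._heap).content_id)
        return ready
```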
This should integrate with our publication functionality and approval workflow. Ensure the system is reliable and can recover from interruptions. Maintain consistent error handling and logging patterns.
# Prompt 18: Error Handling and Retry Logic
Enhance our system with robust error handling and retry logic:
1. Implement a centralized error handling system:
- Error categorization (transient vs. permanent)
- Appropriate recovery strategies
- User notification for critical errors
2. Add retry mechanisms with:
- Exponential backoff
- Maximum retry limits
- Failure circuit breakers
3. Implement queue systems for operations that might fail:
- Content generation retries
- Publication retries
- API rate limit handling
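The exponential-backoff retry in step 2 can be sketched as a small helper; an asyncio variant would use `await asyncio.sleep` instead. The attempt count, delays, and retriable exception types are illustrative:

```python
import time


def retry(func, *, attempts: int = 3, base_delay: float = 0.01,
          retriable: tuple = (ConnectionError, TimeoutError)):
    """Call func, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except retriable:
            if attempt == attempts - 1:
                raise  # permanent failure after the last attempt
            time.sleep(base_delay * (2 ** attempt))
```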
This should be applied consistently across our existing code, with special focus on external integrations and long-running processes. Ensure that errors are properly logged and that the system can recover gracefully from failures.
# Prompt 19: Unit and Integration Testing
Implement comprehensive testing for our system:
1. Create unit tests for core components:
- Repository parsers
- Content generators
- Publication integrations
- User interaction components
2. Implement integration tests for:
- End-to-end content generation workflows
- API interactions with mocked responses
- Configuration handling and validation
3. Add test fixtures and mocks for:
- Sample repositories
- AI responses
- Platform API responses
Use pytest and follow testing best practices. Aim for good test coverage, particularly for critical components. The tests should be integrated with our existing codebase and structured to support CI/CD in the future.
# Prompt 20: End-to-End Testing
Create end-to-end tests for our system:
1. Implement test scenarios for:
- Complete workflows from repository analysis to publication
- Error recovery and retry paths
- Configuration changes and their effects
- User interaction flows
2. Add testing with actual repositories of varying complexity
3. Create sandbox testing for platform integrations
These tests should validate that the entire system works together as expected. Focus on realistic scenarios and edge cases. Ensure the tests are automated and can be run as part of a CI/CD pipeline.
# Prompt 21: Performance Optimization
Optimize the performance of our system:
1. Identify and address performance bottlenecks:
- Repository parsing for large repositories
- AI generation latency
- Parallelization opportunities
2. Implement caching for:
- Repository analysis results
- Common AI queries
- API responses
3. Add monitoring for performance metrics
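Caching repository analysis results (step 2) needs a cache key that changes whenever the repository changes; a sketch keyed on a hash of (path, mtime) pairs (keying on the git commit hash is another option):

```python
import hashlib
import json


class AnalysisCache:
    """In-memory cache keyed by a fingerprint of the repository state.

    A real implementation might persist to disk and key on the commit hash.
    """

    def __init__(self) -> None:
        self._store: dict[str, dict] = {}

    @staticmethod
    def fingerprint(file_listing: list[tuple[str, float]]) -> str:
        """Hash (path, mtime) pairs so any file change invalidates the entry."""
        payload = json.dumps(sorted(file_listing)).encode()
        return hashlib.sha256(payload).hexdigest()

    def get_or_compute(self, key: str, compute):
        if key not in self._store:
            self._store[key] = compute()
        return self._store[key]
```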
Focus on optimizations that provide significant improvements for real-world usage. Ensure that optimizations don't compromise reliability or correctness. Maintain consistent error handling and logging patterns.
# Prompt 22: Documentation and Final Polish
Complete the project with comprehensive documentation and final polish:
1. Create user documentation:
- Installation and setup guide
- Usage instructions
- Configuration options
- Troubleshooting guide
2. Write developer documentation:
- Architecture overview
- Component descriptions
- Extension points
- Contributing guidelines
3. Add final polish:
- Code cleanup and consistency
- User experience improvements
- Error message refinement
- Log output optimization
This should tie everything together and ensure the system is ready for use. Focus on making the documentation clear, comprehensive, and helpful for both users and developers.
For each step:
- Start with a clear understanding of the requirements
- Break down complex tasks into smaller subtasks
- Build incrementally on previous work
- Ensure thorough testing before moving to the next step
- Maintain consistent patterns for:
- Error handling
- Logging
- Configuration
- API design
- Documentation
Each prompt should result in code that:
- Works independently
- Integrates with previous components
- Follows Python best practices
- Is well-tested and robust
- Has clear documentation
The final system should provide a seamless experience from repository analysis to content publication, with all components working together harmoniously. Users should be able to easily:
- Analyze their repositories
- Generate tailored content for multiple platforms
- Review and approve content
- Schedule and publish content
- Monitor results and handle errors
The architecture should be modular and extensible to support future enhancements as outlined in the specification.