refactor(backend/blocks): Add retry function to replicate models & fix AI Music Generator block #11493
base: dev
Conversation
…unction

### Changes 🏗️

- Introduced a new helper function `run_replicate_with_retry` to handle retries for model execution across multiple blocks, improving error handling and reducing code duplication.
- Updated `AIImageCustomizerBlock`, `AIImageGeneratorBlock`, `AIMusicGeneratorBlock`, `AIImageEditorBlock`, `ReplicateFluxAdvancedModelBlock`, and `ReplicateModelBlock` to utilize the new helper function for running models.
Important: Review skipped. Auto reviews are disabled on this repository; please check the settings in the CodeRabbit UI. You can disable this status message via the CodeRabbit configuration.

Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

A new retry wrapper function is introduced in the Replicate helper module with exponential backoff and failure-status handling, and it is integrated across multiple image and music generation blocks to replace direct API calls. Additionally, the music generator gains support for the Minimax Music 1.5 model with lyrics input and PCM audio format.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
participant Block as Block Code
participant Helper as run_replicate_with_retry
participant Client as Replicate Client
participant API as Replicate API
Block->>Helper: call with model, input_params
loop Retry Loop (up to max_retries)
Helper->>Client: async_run(model, input_params)
Client->>API: HTTP Request
API-->>Client: Response
Client-->>Helper: Output
alt Success (no exception)
Helper->>Helper: Check status field
alt status == "failed"
Helper->>Helper: Raise RuntimeError with details
else Status OK
Helper-->>Block: Return output
Note over Block,Helper: Success path
end
else Exception Thrown
Helper->>Helper: Track error, log warning
Helper->>Helper: Exponential backoff delay
Note over Helper: Retry...
end
end
alt All retries exhausted
Helper->>Helper: Log final error
Helper-->>Block: Raise last exception
end
```
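For orientation, here is a minimal sketch of what such a retry wrapper could look like, assembled from the signature and defaults quoted later in this review. The client import path, logging calls, and exact status handling are assumptions, not the code committed in the PR.

```python
import asyncio
import logging
from typing import Any

# Assumed import path; the PR's helper receives an already-constructed client.
from replicate.client import Client as ReplicateClient

logger = logging.getLogger(__name__)


async def run_replicate_with_retry(
    client: ReplicateClient,
    model: str,
    input_params: dict[str, Any],
    wait: bool = False,
    max_retries: int = 3,
    **kwargs: Any,
) -> Any:
    """Run a Replicate model, retrying transient failures with exponential backoff."""
    last_error: Exception | None = None
    retry_delay = 2  # seconds; doubled after every failed attempt

    for attempt in range(max_retries):
        try:
            output = await client.async_run(model, input=input_params, wait=wait, **kwargs)

            # Replicate occasionally reports a failed prediction instead of raising.
            status = (
                output.get("status")
                if isinstance(output, dict)
                else getattr(output, "status", None)
            )
            if status == "failed":
                raise RuntimeError(f"Replicate model {model} reported status 'failed'")

            return output
        except Exception as exc:  # retry on any failure, per the diagram above
            last_error = exc
            logger.warning(
                "Replicate call failed (attempt %d/%d): %s", attempt + 1, max_retries, exc
            )
            if attempt < max_retries - 1:
                await asyncio.sleep(retry_delay)
                retry_delay *= 2  # exponential backoff

    logger.error("All %d attempts failed for Replicate model %s", max_retries, model)
    raise last_error if last_error else RuntimeError("Replicate call failed")
```

A block then calls `await run_replicate_with_retry(client, model_name, input_params)` in place of `client.async_run(...)`, which is the substitution the per-block comments below describe.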
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
This PR targets the … branch. Automatically setting the base branch to dev.
✅ Deploy Preview for auto-gpt-docs canceled.
…ctions-with-retries-in-all
✅ Deploy Preview for auto-gpt-docs-dev canceled.
Here's the code health analysis summary for the commits in this PR.

Analysis Summary
@claude please review this PR
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
1-95: Formatting required per CI failure. The pipeline indicates the formatter would modify this file. Run `poetry run format` (Black + isort) before merging.

autogpt_platform/backend/backend/blocks/ai_music_generator.py (1)
223-236: Based on my verification, I found the following key findings:

- Replicate API documentation: the MusicGen model expects `output_format` and `normalization_strategy` as strings ("mp3", "wav", "loudness", "clip", "peak", "rms").
- Enum definitions: both `AudioFormat` and `NormalizationStrategy` inherit from `(str, Enum)`, making them str-like but still Enum objects.
- Inconsistency confirmed:
  - The MINIMAX branch (line 220) explicitly uses `.value` for `output_format`.
  - The non-MINIMAX branch (lines 232-233) passes enum objects directly.
- Risk: while str Enums may serialize to strings implicitly in some contexts, this is not guaranteed and creates inconsistency across the codebase.

The issue is real and should be fixed for consistency and robustness.

Ensure parameter format consistency for the Replicate API.

The MINIMAX branch correctly uses `output_format.value` (line 220), while the non-MINIMAX branch passes enum objects directly (lines 232-233). The Replicate API expects string values for `output_format` ("mp3"/"wav") and `normalization_strategy` ("loudness"/"clip"/"peak"/"rms"). Although `AudioFormat` and `NormalizationStrategy` inherit from `(str, Enum)`, explicitly use `.value` for both branches to ensure consistent, explicit serialization:

```python
"output_format": output_format.value,
"normalization_strategy": normalization_strategy.value,
```
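To illustrate the serialization concern, here is a standalone snippet with a cut-down copy of the enum (illustrative, not the block's actual code):

```python
from enum import Enum


class AudioFormat(str, Enum):  # simplified stand-in for the block's enum
    MP3 = "mp3"
    WAV = "wav"


fmt = AudioFormat.MP3
print(fmt == "mp3")  # True: the str mixin compares equal to the raw value
print(fmt.value)     # "mp3": explicit and unambiguous for an API payload
print(f"{fmt}")      # may print "AudioFormat.MP3" on newer Python versions,
                     # which is why passing .value explicitly is safer
```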
🧹 Nitpick comments (3)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
45-95: Retry logic looks solid, but consider an edge case with `max_retries=0`.

The implementation correctly uses exponential backoff and handles both dict and object response types. However, if `max_retries=0` is passed, the loop never executes and the function implicitly returns `None` without making any API call or raising an error.

Consider adding a guard:

```diff
 async def run_replicate_with_retry(
     client: ReplicateClient,
     model: str,
     input_params: dict[str, Any],
     wait: bool = False,
     max_retries: int = 3,
     **kwargs: Any,
 ) -> Any:
+    if max_retries < 1:
+        raise ValueError("max_retries must be at least 1")
     last_error = None
     retry_delay = 2  # seconds
```

autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py (1)
197-197: Nit: unnecessary f-string.

`f"{model_name}"` is equivalent to just `model_name` since no formatting is applied.

```diff
     output: ReplicateOutputs = await run_replicate_with_retry(  # type: ignore
         # This is because they changed the return type, and didn't update the type hint!
         # It should be overloaded depending on the value of `use_file_output` to
         # `FileOutput | list[FileOutput]` but it's `Any | Iterator[Any]`
         client,
-        f"{model_name}",
+        model_name,
         input_params={
```

autogpt_platform/backend/backend/blocks/ai_music_generator.py (1)
246-256: Reduce code duplication by using the existing helper.

This output handling logic duplicates functionality already available in the `extract_result` helper function from `backend.blocks.replicate._helper`. Using the helper would provide more robust handling (including dict outputs) and better error logging.

Apply this diff to use the existing helper:

```diff
-from backend.blocks.replicate._helper import run_replicate_with_retry
+from backend.blocks.replicate._helper import extract_result, run_replicate_with_retry
```

Then replace the output handling:

```diff
         # Handle the output
-        if isinstance(output, list) and len(output) > 0:
-            result_url = output[0]  # If output is a list, get the first element
-        elif isinstance(output, str):
-            result_url = output  # If output is a string, use it directly
-        elif isinstance(output, FileOutput):
-            result_url = output.url
-        else:
-            result_url = (
-                "No output received"  # Fallback message if output is not as expected
-            )
-
-        return result_url
+        return extract_result(output)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- autogpt_platform/backend/backend/blocks/ai_image_customizer.py (2 hunks)
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py (2 hunks)
- autogpt_platform/backend/backend/blocks/ai_music_generator.py (7 hunks)
- autogpt_platform/backend/backend/blocks/flux_kontext.py (2 hunks)
- autogpt_platform/backend/backend/blocks/replicate/_helper.py (2 hunks)
- autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py (2 hunks)
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Always run backend setup commands in order: poetry install, poetry run prisma migrate dev, poetry run prisma generate before backend development
Always run poetry run format (Black + isort) before poetry run lint (ruff) for backend code
Use Python 3.10-3.13, with Python 3.11 required for development (managed by Poetry via pyproject.toml)
Run linting and formatting: use `poetry run format` (Black + isort) to auto-fix, and `poetry run lint` (ruff) to check remaining errors
Files:
- autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py
- autogpt_platform/backend/backend/blocks/replicate/_helper.py
- autogpt_platform/backend/backend/blocks/flux_kontext.py
- autogpt_platform/backend/backend/blocks/ai_music_generator.py
- autogpt_platform/backend/backend/blocks/ai_image_customizer.py
autogpt_platform/backend/backend/blocks/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Agent blocks in backend/blocks/ must include: block definition with input/output schemas, execution logic with proper error handling, and tests validating functionality. Blocks inherit from Block base class with input/output schemas, implement run method, use uuid.uuid4() for block UUID, and be registered in block registry
Files:
- autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py
- autogpt_platform/backend/backend/blocks/replicate/_helper.py
- autogpt_platform/backend/backend/blocks/flux_kontext.py
- autogpt_platform/backend/backend/blocks/ai_music_generator.py
- autogpt_platform/backend/backend/blocks/ai_image_customizer.py
autogpt_platform/{backend,autogpt_libs}/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Format Python code with
poetry run format
Files:
- autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py
- autogpt_platform/backend/backend/blocks/replicate/_helper.py
- autogpt_platform/backend/backend/blocks/flux_kontext.py
- autogpt_platform/backend/backend/blocks/ai_music_generator.py
- autogpt_platform/backend/backend/blocks/ai_image_customizer.py
autogpt_platform/backend/**
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
autogpt_platform/backend/**: Install dependencies for backend using `poetry install`
Run database migrations using `poetry run prisma migrate dev`
Files:
- autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py
- autogpt_platform/backend/backend/blocks/replicate/_helper.py
- autogpt_platform/backend/backend/blocks/flux_kontext.py
- autogpt_platform/backend/backend/blocks/ai_music_generator.py
- autogpt_platform/backend/backend/blocks/ai_image_customizer.py
🧠 Learnings (1)
📚 Learning: 2025-11-25T08:49:03.562Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.562Z
Learning: Applies to autogpt_platform/backend/blocks/**/*.py : Implement async `run` method in block classes for backend
Applied to files:
- autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
- autogpt_platform/backend/backend/blocks/replicate/replicate_block.py
🧬 Code graph analysis (7)
autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (2)
extract_result (13-42), run_replicate_with_retry (45-95)
autogpt_platform/backend/backend/blocks/ai_image_generator_block.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
run_replicate_with_retry(45-95)
autogpt_platform/backend/backend/blocks/replicate/replicate_block.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (2)
extract_result (13-42), run_replicate_with_retry (45-95)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
autogpt_platform/backend/backend/util/logging.py (2)
error (49-51), warning (45-47)
autogpt_platform/backend/backend/blocks/flux_kontext.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
run_replicate_with_retry(45-95)
autogpt_platform/backend/backend/blocks/ai_music_generator.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
run_replicate_with_retry(45-95)
autogpt_platform/backend/backend/blocks/ai_image_customizer.py (1)
autogpt_platform/backend/backend/blocks/replicate/_helper.py (1)
run_replicate_with_retry(45-95)
🪛 GitHub Actions: AutoGPT Platform - Backend CI
autogpt_platform/backend/backend/blocks/replicate/_helper.py
[error] 1-1: Formatter would modify this file during lint/format checks. 1 file would be reformatted, 489 files would be left unchanged.
autogpt_platform/backend/backend/blocks/ai_music_generator.py
[error] 1-1: Imports are incorrectly sorted and/or formatted. Linting failed during 'poetry run lint'. 1 file would be reformatted; run 'poetry run format' to fix issues.
🔇 Additional comments (10)
autogpt_platform/backend/backend/blocks/ai_image_generator_block.py (1)
8-8: LGTM! Clean integration of the retry wrapper. The call signature correctly passes `client`, `model_name`, and `input_params` as positional arguments matching the helper function's signature.

Also applies to: 185-187
autogpt_platform/backend/backend/blocks/ai_image_customizer.py (1)
9-9: LGTM! Correct integration of the retry wrapper with appropriate type annotation and output handling.

Also applies to: 187-192
autogpt_platform/backend/backend/blocks/flux_kontext.py (1)
8-8: LGTM! Correct integration of the retry wrapper. The use of `input_params=input_params` as a keyword argument is functionally equivalent to the positional style used in other blocks.

Also applies to: 177-182
autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py (1)
12-16: LGTM! Clean integration of the retry wrapper with proper use of `extract_result` for output processing. The type annotation comment helpfully documents the Replicate API typing inconsistency.

Also applies to: 195-210
autogpt_platform/backend/backend/blocks/replicate/replicate_block.py (2)
12-16: LGTM! Import changes support retry functionality. The addition of `run_replicate_with_retry` to the imports is correct and necessary for the new retry logic integration.
136-138: LGTM! Clean integration of retry logic. The change correctly replaces the direct `client.async_run` call with `run_replicate_with_retry`, which will automatically retry up to 3 times with exponential backoff on failures. The parameter mapping is correct: `model_inputs` is passed as `input_params`, and `wait=False` maintains the existing behavior.
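As a rough illustration of that parameter mapping (the `model_ref` variable name is assumed for the example; only `model_inputs` and `wait=False` come from the comment above):

```python
# Before: direct call without retries (illustrative)
# output = await client.async_run(model_ref, input=model_inputs)

# After: wrapped in the retry helper (illustrative)
output = await run_replicate_with_retry(
    client,
    model_ref,                    # assumed name for the model reference string
    input_params=model_inputs,
    wait=False,
)
```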
autogpt_platform/backend/backend/blocks/ai_music_generator.py (4)

49-49: LGTM! New model version added. The addition of the `MINIMAX_MUSIC_1_5` model version enables support for the new Minimax Music 1.5 model. Note that this value includes the full model path format unlike other enum values, which is intentional for the special handling required by this model.
56-56: LGTM! New audio format added. The addition of the `PCM` audio format correctly extends the supported output formats.
80-87: LGTM! Lyrics field properly documented. The new optional `lyrics` field is correctly implemented with clear documentation indicating it is required for the Minimax Music 1.5 model. The field description helpfully explains the supported format, including line breaks and tags.
9-11: Fix import formatting to resolve the pipeline failure. The imports are incorrectly formatted, causing the CI pipeline to fail. Per the pipeline error, run `poetry run format` to automatically fix the import ordering and formatting.

```bash
#!/bin/bash
# Run the format command to fix import issues
cd autogpt_platform/backend
poetry run format
```

⛔ Skipped due to learnings
- Learnt from: CR, Repo: Significant-Gravitas/AutoGPT, PR: 0, File: autogpt_platform/CLAUDE.md:0-0, Timestamp: 2025-11-25T08:49:03.562Z. Learning: Applies to autogpt_platform/backend/**/*.py: Run linting and formatting: use `poetry run format` (Black + isort) to auto-fix, and `poetry run lint` (ruff) to check remaining errors
- Learnt from: CR, Repo: Significant-Gravitas/AutoGPT, PR: 0, File: .github/copilot-instructions.md:0-0, Timestamp: 2025-11-25T08:48:33.234Z. Learning: Applies to autogpt_platform/backend/**/*.py: Always run poetry run format (Black + isort) before poetry run lint (ruff) for backend code
- Learnt from: CR, Repo: Significant-Gravitas/AutoGPT, PR: 0, File: AGENTS.md:0-0, Timestamp: 2025-11-25T08:48:55.833Z. Learning: Applies to autogpt_platform/{backend,autogpt_libs}/**/*.py: Format Python code with `poetry run format`
- Learnt from: CR, Repo: Significant-Gravitas/AutoGPT, PR: 0, File: autogpt_platform/CLAUDE.md:0-0, Timestamp: 2025-11-25T08:49:03.562Z. Learning: Applies to autogpt_platform/backend/**/*_test.py: Always review snapshot changes before committing using `git diff` when creating or updating test snapshots
```python
        if music_gen_model_version == MusicGenModelVersion.MINIMAX_MUSIC_1_5:
            if not lyrics:
                raise ValueError("Lyrics are required for Minimax Music 1.5 model")

            # Validate prompt length (10-300 chars)
            if len(prompt) < 10:
                prompt = prompt.ljust(10, ".")
            elif len(prompt) > 300:
                prompt = prompt[:300]

            input_params = {
                "prompt": prompt,
                "lyrics": lyrics,
                "audio_format": output_format.value,
            }
            model_name = "minimax/music-1.5"
```
🧩 Analysis chain
🌐 Web query:
Minimax Music 1.5 API prompt length requirements documentation
💡 Result:
Summary of documented prompt-length limits for Minimax Music 1.5
- AI/ML API (Minimax/music-1.5): prompt = 10–300 characters; lyrics = 10–3000 characters. [1]
- Replicate README (Minimax/music-1.5): lyrics recommended/limited to 600 characters (notes state max output 240s and “max lyrics 600 characters”). [2]
- FAL.ai wrapper docs: prompt = 10–300 characters; lyrics/text-to-music input max = 600 characters (other fields show lyrics 10–3000 in alternate schema). [3]
Sources
[1] AI/ML API docs — minimax/music-1.5 (Generate music sample).
[2] Replicate — minimax/music-1.5 README.
[3] FAL.ai Minimax Music v1.5 API docs.
🏁 Script executed:
#!/bin/bash
# Search codebase for context about prompt validation and Minimax API usage
rg -n "minimax|music-1.5" --type py -B 3 -A 3

Repository: Significant-Gravitas/AutoGPT
Length of output: 1368
🏁 Script executed:
#!/bin/bash
# Check if there are any comments or documentation about prompt validation
rg -n "prompt.*length|10.*300|minimax.*prompt" --type py -B 2 -A 2

Repository: Significant-Gravitas/AutoGPT
Length of output: 4921
🏁 Script executed:
#!/bin/bash
# Check lyrics validation for consistency
sed -n '207,243p' autogpt_platform/backend/backend/blocks/ai_music_generator.py

Repository: Significant-Gravitas/AutoGPT
Length of output: 1465
🏁 Script executed:
#!/bin/bash
# Check if there are tests for prompt validation behavior
fd "test" -e py --exec grep -l "ai_music_generator\|prompt.*minimax" {} \;

Repository: Significant-Gravitas/AutoGPT
Length of output: 54
Replace silent prompt padding/truncation with proper validation that raises an error.
The code silently modifies prompts outside the 10–300 character API requirement by padding short prompts with dots and truncating long prompts. This contradicts the code comment's intent to "Validate" and creates an inconsistency: lyrics validation raises ValueError for missing required fields, while prompt validation silently auto-fixes.
According to the Minimax Music 1.5 API documentation, the prompt must be strictly 10–300 characters. Users should be informed when their input violates this constraint rather than having it silently modified, which can produce unexpected results and masks the actual problem.
Replace lines 212–215 with validation that raises ValueError if the prompt is out of range, matching the pattern used for lyrics validation on line 209.
🤖 Prompt for AI Agents
In autogpt_platform/backend/backend/blocks/ai_music_generator.py around lines
207 to 222, the prompt is currently silently padded or truncated to meet the
10–300 character requirement; replace that behavior with strict validation: if
len(prompt) < 10 or len(prompt) > 300 raise a ValueError with a clear message
(e.g. "Prompt must be between 10 and 300 characters") following the same
validation pattern used for lyrics, and remove the ljust/truncate logic (i.e.,
replace lines 212–215 with the ValueError checks).
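A minimal sketch of that suggested replacement (illustrative; the exact error message wording is an assumption):

```python
# Replace the silent ljust/truncate logic with strict validation,
# mirroring the existing lyrics check in the same branch.
if not (10 <= len(prompt) <= 300):
    raise ValueError(
        f"Prompt must be between 10 and 300 characters (got {len(prompt)})"
    )
```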
ntindle left a comment
approved but not tested
…ctions-with-retries-in-all
This PR is being reviewed by Cursor Bugbot
```python
            result_url = output[0]  # If output is a list, get the first element
        elif isinstance(output, str):
            result_url = output  # If output is a string, use it directly
        elif isinstance(output, FileOutput):
```
Bug: FileOutput in list not extracted to URL string
When `output` is a list, the code assigns `output[0]` directly to `result_url` without checking whether it is a `FileOutput` object. If the model returns `list[FileOutput]`, `result_url` becomes a `FileOutput` object instead of a URL string. This causes the subsequent `isinstance(result, str)` check in the `run()` method to fail, incorrectly yielding an error. Other blocks like `ai_image_generator_block.py` and the `extract_result` helper correctly check `isinstance(output[0], FileOutput)` and extract the `.url` property.
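One possible shape of the fix, following the `extract_result` pattern the comment references (illustrative sketch, not the committed code; assumes `FileOutput` is the replicate SDK's file type):

```python
if isinstance(output, list) and len(output) > 0:
    first = output[0]
    result_url = first.url if isinstance(first, FileOutput) else first
elif isinstance(output, str):
    result_url = output
elif isinstance(output, FileOutput):
    result_url = output.url
else:
    result_url = "No output received"
```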
…ctions-with-retries-in-all
@cursor run the linter and update the pr please
Unable to authenticate your request. Please make sure to connect your GitHub account to Cursor.
@cursor run the linter and update the pr please
Ran the backend lint/typecheck hooks on just the files touched by this PR:

    from pydantic import SecretStr

Let me know if you'd like me to commit these or rerun with a different hook set.
Co-authored-by: nicholas.tindle <[email protected]>
The purpose of this PR is to handle a situation where Replicate models occasionally return a failure. This is an issue on Replicate's side, and this pull request handles it by retrying up to three times when such a failure occurs. Since the failure happens quite rarely, this practically nullifies its effect.
Test results of all changed blocks:

Whilst testing this pull request, I noticed that the AI Music Generator block is not working in production, so I've also fixed the block in this PR. I also added one new model to the block since the existing ones are behind the curve.
Test Results of all music block models:

[x] I have tested all changes in my local environment according to the following plan:
[x] Build an agent using all changed blocks.
[x] Build an agent using all Music Block models to confirm bug is fixed.
[x] Run both agents several times and confirm the outputs are as expected
Summary by CodeRabbit
New Features
Improvements
Note
Adds a shared Replicate retry helper and migrates image/edit blocks to it; upgrades AI Music Generator with Minimax Music 1.5 support, lyrics input, PCM format, and improved output handling.
- Added `run_replicate_with_retry` in `backend/blocks/replicate/_helper.py` with exponential backoff and failure detection, replacing direct `client.async_run` calls in:
  - `backend/blocks/ai_image_customizer.py`
  - `backend/blocks/ai_image_generator_block.py`
  - `backend/blocks/flux_kontext.py`
  - `backend/blocks/replicate/flux_advanced.py`
  - `backend/blocks/replicate/replicate_block.py`
- AI Music Generator (`backend/blocks/ai_music_generator.py`): adds `MINIMAX_MUSIC_1_5` with a required `lyrics` input and prompt length validation; adds `AudioFormat.PCM`; includes `FileOutput` handling and a `wait=True` execution path.

Written by Cursor Bugbot for commit 1faf903. This will update automatically on new commits. Configure here.