Replies: 9 comments 2 replies
-
This appears to be a known regression in v1.13.0. Recommended fixes (without downgrading):
Workaround: if the above doesn't work, you can try adding a Template or Variable Assigner node between the Knowledge Retrieval and LLM nodes to explicitly pass the file metadata.

This issue is related to PR #29396, which changed attachment metadata handling between versions. The fix in PR #30210 also addressed some multimodal retrieval issues, but it may not cover your specific case.
-
Thanks. Since I'm a beginner and don't understand it well, I'll downgrade to 1.11.4 and use it.
-
This is a common issue when upgrading RAG systems: multimodal handling often breaks due to embedding dimension mismatches or index schema changes. Quick troubleshooting steps:

We've seen similar issues in production RAG deployments at RevolutionAI. The fix is usually re-indexing with the new embedding config. If you're dealing with large datasets, consider incremental re-indexing to minimize downtime. Happy to share more specific debugging steps if you can share your embedding model config!
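To illustrate the incremental re-indexing idea mentioned above, here is a minimal Python sketch. The `embed` callable and the dict-based index are hypothetical stand-ins for illustration only, not Dify or vector-store APIs:

```python
def reindex_incrementally(doc_ids, embed, index, batch_size=100):
    """Re-embed documents in small batches so the index is rebuilt
    gradually instead of in one long blocking pass (hypothetical helpers)."""
    for start in range(0, len(doc_ids), batch_size):
        batch = doc_ids[start:start + batch_size]
        vectors = [embed(doc_id) for doc_id in batch]
        # Upsert semantics: already-migrated documents are simply overwritten.
        index.update(dict(zip(batch, vectors)))

# Toy usage with a plain dict as the "index" and a fake embedder.
index = {}
reindex_incrementally(["a", "b", "c"],
                      embed=lambda d: [0.0, 1.0],
                      index=index,
                      batch_size=2)
print(sorted(index))  # ['a', 'b', 'c']
```

Batching keeps the old vectors queryable while migration runs, which is what minimizes downtime on large datasets.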
-
This looks like a breaking change in how Dify handles multimodal context between versions. A few things that might help:
We've debugged similar multimodal RAG issues at Revolution AI — often it's a matter of explicit variable mapping after framework upgrades. Let us know if the workaround helps!
-
This looks like a regression in how context files are passed to the LLM node. The diff:
Workaround until fixed:
For the maintainers: we encounter similar multimodal RAG issues at Revolution AI — definitely worth filing as a bug if not already tracked.
-
Multimodal RAG breakage after upgrade is frustrating! At RevolutionAI (https://revolutionai.io) we manage Dify deployments. Quick checks:

```shell
# Check whether the new release added env vars you haven't set
diff .env.example .env

# Apply any pending database migrations
docker exec dify-api flask db upgrade

# Clear cached state that may reference the old schema
docker exec dify-redis redis-cli FLUSHALL
```

Rollback if needed:

```shell
git checkout v1.12.0
docker-compose up -d --build
```

What error message are you seeing?
-
This is a breaking change in 1.13.0.

Root cause:

Workaround without downgrading:

1. Use a Code node to extract files:

```python
import re

def extract_images(context):
    # Extract markdown image URLs from the retrieved context
    pattern = r"!\[.*?\]\((https?://.*?)\)"
    urls = re.findall(pattern, context)
    return {
        "context": context,
        "image_urls": urls
    }
```

2. Manually pass the extracted images to the LLM node.
3. Use an HTTP Request node to fetch and attach the images.
4. Check the release notes for migration steps.

Bug report suggestion: we run multimodal RAG at Revolution AI — the Code node extraction is the quickest workaround until this is patched.
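For reference, this is how the `extract_images` helper from the workaround behaves on a sample retrieved chunk. The function is restated so the snippet runs standalone; the sample URL is illustrative, not from a real deployment:

```python
import re

def extract_images(context):
    # Extract markdown image URLs from the retrieved context
    pattern = r"!\[.*?\]\((https?://.*?)\)"
    urls = re.findall(pattern, context)
    return {"context": context, "image_urls": urls}

chunk = "Intro text ![chart](https://example.com/chart.png) more text"
result = extract_images(chunk)
print(result["image_urls"])  # ['https://example.com/chart.png']
```

The original context string is passed through unchanged, so the LLM node still receives the full text alongside the extracted URLs.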
-
Thank you for your comments and suggestions.
-
#32006 has been closed.
-
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
After upgrading Dify from version 1.11.4 to 1.13.0, when I pass context containing markdown-formatted images retrieved from Knowledge Retrieval to an LLM node, the `#context_files#` input, which was previously present, disappears, and the LLM can no longer recognize the images. Reverting to version 1.11.4 resolves this issue.
Is there a way to fix it without downgrading to 1.11.4?
2. Additional context or comments
v1.11.4

Knowledge Retrieval node's OUTPUT:

```json
{
  "result": [
    {
      "metadata": {
        "_source": "knowledge",
        "dataset_id": "b2b9c03d-4795-468f-85fe-37d671751d69",
        "dataset_name": "multimodal",
        "document_id": "df9fa22b-da86-44d7-8df7-4387f4043612",
        "document_name": "socrader.txt",
        "data_source_type": "upload_file",
        "segment_id": "25ea6193-80e0-4e25-8955-a328e57da94b",
        "retriever_from": "workflow",
        "score": 0.4501942992210388,
        "child_chunks": [
          {
            "id": "b6a8fbb3-efce-4fc6-ab79-73470a0003eb",
            "content": "",
            "position": 1,
            "score": 0.4501942992210388
          }
        ],
        "segment_hit_count": 3,
        "segment_word_count": 168,
        "segment_position": 1,
        "segment_index_node_hash": "b1e4f8311e26f1406913c08cb5b0623453b7034eeede7377818d5434a27a2c9e",
        "doc_metadata": null,
        "position": 1
      },
      "title": "socrader.txt",
      "files": null,
      "content": ""
    }
  ]
}
```

LLM node's INPUT:

```json
{
  "#context#": "",
  "#context_files#": [
    {
      "dify_model_identity": "__dify__file__",
      "id": "a6bf44a7-7a24-4ba0-88ca-d9f5cd648fe7",
      "tenant_id": "d1fef7e3-22db-4c60-bbd8-39ff46930f5c",
      "type": "image",
      "transfer_method": "local_file",
      "remote_url": "http://api:5001/files/a6bf44a7-7a24-4ba0-88ca-d9f5cd648fe7/file-preview?timestamp=1770949654&nonce=a6ca4d8475d1a208fe968c00937287ae&sign=ufzMJFGpGQzjawpvWbuzs_2xvVRMxxiD-aOnIaSLZKI%3D",
      "related_id": "a6bf44a7-7a24-4ba0-88ca-d9f5cd648fe7",
      "filename": "dark-web-profile-safepay-ransomware.jpg.webp",
      "extension": ".webp",
      "mime_type": "image/webp",
      "size": 73004
    }
  ]
}
```

v1.13.0

Knowledge Retrieval node's OUTPUT:

```json
{
  "result": [
    {
      "metadata": {
        "source": "knowledge",
        "dataset_id": "b2b9c03d-4795-468f-85fe-37d671751d69",
        "dataset_name": "multimodal",
        "document_id": "df9fa22b-da86-44d7-8df7-4387f4043612",
        "document_name": "socrader.txt",
        "data_source_type": "upload_file",
        "segment_id": "25ea6193-80e0-4e25-8955-a328e57da94b",
        "retriever_from": "workflow",
        "score": 0.45014816522598267,
        "child_chunks": [
          {
            "id": "b6a8fbb3-efce-4fc6-ab79-73470a0003eb",
            "content": "",
            "position": 1,
            "score": 0.45014816522598267
          }
        ],
        "segment_hit_count": 7,
        "segment_word_count": 168,
        "segment_position": 1,
        "segment_index_node_hash": "b1e4f8311e26f1406913c08cb5b0623453b7034eeede7377818d5434a27a2c9e",
        "doc_metadata": null,
        "position": 1
      },
      "title": "socrader.txt",
      "files": null,
      "content": "",
      "summary": null
    }
  ]
}
```

LLM node's INPUT:

```json
{
  "#context#": ""
}
```
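Note that between the two payloads the metadata key `_source` was also renamed to `source`. A quick way to confirm whether a given run is affected is to check the LLM node's input for the `#context_files#` key; a minimal sketch using abbreviated versions of the payloads above:

```python
import json

# Abbreviated LLM node inputs from the two versions quoted in this report.
llm_input_v1_11_4 = json.loads(
    '{"#context#": "", "#context_files#": [{"type": "image"}]}'
)
llm_input_v1_13_0 = json.loads('{"#context#": ""}')

def has_context_files(llm_input):
    # True when the LLM node actually received file attachments.
    return bool(llm_input.get("#context_files#"))

print(has_context_files(llm_input_v1_11_4))  # True
print(has_context_files(llm_input_v1_13_0))  # False
```

If the key is absent (or present but empty), the LLM has nothing to attach, which matches the "can no longer recognize the images" symptom.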