fix: limit resolve_extracted_nodes context to prevent max_tokens overflow #1276
Open
rafaelreis-r wants to merge 1 commit into getzep:main from
Conversation
When resolving deduplicated nodes, the LLM prompt included the full candidate.attributes (summaries, descriptions, etc.) for every candidate returned by similarity search. With 10–15 extracted nodes and up to 10 candidates each, this produces ~150 verbose candidates in a single context, routinely exceeding the max_tokens limit (16384) and causing the JSON output to be truncated mid-response.

Two surgical changes:

1. Remove candidate.attributes from the resolution context; only name and entity_types are needed for identity matching.
2. Cap the candidate list at MAX_RESOLVE_CANDIDATES = 50 to bound worst-case context growth regardless of search result count.

Closes getzep#1275
Member

I have read the CLA Document and I hereby sign the CLA

Rafael Reis seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Problem
When deduplicating extracted nodes in `resolve_extracted_nodes`, the LLM prompt includes the full `candidate.attributes` dict for every candidate returned by the similarity search. In production workloads with 10–15 extracted nodes per episode and up to 10 candidates each, this produces ~100–150 candidates, each carrying verbose attributes (summaries, descriptions, relationship lists, etc.). The resulting prompt regularly exceeds 17k tokens, pushing the JSON output past the `max_tokens=16384` limit and causing truncated/invalid responses.

Reported in issue #1275.
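A rough back-of-envelope check makes the overflow concrete. The per-candidate token count below is an illustrative assumption, not a measurement from the codebase:

```python
# Illustrative estimate of resolution-context size before the fix.
# tokens_per_candidate is an assumed average, not a measured value.
extracted_nodes = 15        # nodes extracted from one episode
candidates_per_node = 10    # similarity-search hits per extracted node
tokens_per_candidate = 120  # name + entity_types + verbose attributes

total_candidates = extracted_nodes * candidates_per_node
prompt_tokens = total_candidates * tokens_per_candidate

print(total_candidates)           # 150 candidates in a single prompt
print(prompt_tokens)              # 18000 tokens
print(prompt_tokens > 16384)      # True: already past the max_tokens limit
```

Even with conservative assumptions, the candidate payload alone can consume the entire output budget, leaving the model to truncate its JSON mid-response.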
Fix
Two surgical changes to `graphiti_core/utils/maintenance/node_operations.py`:

1. Remove `candidate.attributes` from the resolution context. Only `name` and `entity_types` are needed for identity-level deduplication; the full attribute payload adds thousands of tokens without improving match quality.

2. Add a `MAX_RESOLVE_CANDIDATES = 50` constant. This caps the candidate list to bound worst-case context size regardless of search result count, and is placed alongside the existing `MAX_NODES` constant.

Impact
- `MAX_RESOLVE_CANDIDATES = 50` provides a safety ceiling for extreme cases
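A minimal sketch of what the two changes amount to. The candidate shape (`name`, `entity_types`, `attributes` keys) is assumed from the PR description; the actual code in `node_operations.py` may structure its context differently:

```python
# Sketch of the trimmed resolution context described in this PR.
# The candidate dict shape here is an assumption based on the PR text,
# not the exact graphiti_core data model.
MAX_RESOLVE_CANDIDATES = 50  # bounds worst-case context growth


def build_resolution_context(candidates: list[dict]) -> list[dict]:
    """Keep only identity-level fields and cap the candidate list."""
    trimmed = []
    for candidate in candidates[:MAX_RESOLVE_CANDIDATES]:
        trimmed.append({
            "name": candidate["name"],
            "entity_types": candidate["entity_types"],
            # candidate["attributes"] is intentionally dropped: summaries
            # and descriptions add tokens without improving identity matching.
        })
    return trimmed


# Example: 150 verbose candidates shrink to 50 slim ones.
raw = [
    {
        "name": f"Entity {i}",
        "entity_types": ["Person"],
        "attributes": {"summary": "long text " * 50},
    }
    for i in range(150)
]
context = build_resolution_context(raw)
print(len(context))                # 50
print("attributes" in context[0])  # False
```

The cap and the field trim are independent safeguards: the trim shrinks each candidate, while the cap bounds how many candidates can appear no matter how many the similarity search returns.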