Conversation

@kov kov commented Jan 20, 2026

The bash tool and the inline command execution were accumulating command output using output += chunk.toString(), which creates a new string for every chunk and copies all previous content.

For commands producing large output (like meson in weston), this caused catastrophic O(n²) memory behavior, as the GC did not have time to keep up.

Large output was eventually written to disk and truncated in memory, but that was only done at the end, potentially leading to running out of memory.
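To illustrate the pattern being fixed, here is a minimal sketch (hypothetical code, not the actual opencode implementation) of quadratic accumulation versus a linear alternative; `accumulateNaive` and `accumulateLinear` are illustrative names:

```typescript
// Hypothetical sketch of the quadratic pattern described above. Each `+=`
// can force a new string allocation that copies everything accumulated so
// far, so n chunks of size c copy on the order of c * n^2 / 2 bytes total.
export function accumulateNaive(chunks: Buffer[]): string {
  let output = ""
  for (const chunk of chunks) {
    output += chunk.toString() // recopies all previous content
  }
  return output
}

// Collecting chunks and joining once copies each byte only a constant
// number of times, keeping total work linear in the output size.
export function accumulateLinear(chunks: Buffer[]): string {
  const parts: string[] = []
  for (const chunk of chunks) parts.push(chunk.toString())
  return parts.join("")
}
```

Both produce the same string; only the allocation behavior differs.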

Fix by streaming output to disk when it exceeds 50KB threshold:

  • Output < 50KB: stays in memory (no file created)
  • Output ≥ 50KB: streams to file, 50KB sliding window in memory

Also adds line count tracking to match original Truncate.MAX_LINES behavior and proper cleanup on error/abort.

The session/prompt.ts InteractiveCommand uses the same pattern.
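The thresholded approach above can be sketched roughly as follows. This is a minimal sketch, not opencode's actual code: `OutputBuffer`, `THRESHOLD`, and the field names are illustrative assumptions.

```typescript
import * as fs from "node:fs"

const THRESHOLD = 50 * 1024 // 50KB

// Illustrative sketch of the threshold + sliding-window behavior.
class OutputBuffer {
  private window = "" // at most THRESHOLD bytes kept in memory
  private stream?: fs.WriteStream
  lines = 0 // line count tracking, mirroring Truncate.MAX_LINES behavior

  constructor(private filePath: string) {}

  push(chunk: Buffer) {
    const text = chunk.toString()
    this.lines += (text.match(/\n/g) ?? []).length
    if (!this.stream && this.window.length + text.length > THRESHOLD) {
      // Crossing the threshold: start streaming to disk, first flushing
      // what is already buffered so the file holds the full output.
      this.stream = fs.createWriteStream(this.filePath)
      this.stream.write(this.window)
    }
    this.stream?.write(text)
    // Keep only the last THRESHOLD bytes in memory (sliding window).
    this.window = (this.window + text).slice(-THRESHOLD)
  }

  get tail() {
    return this.window
  }

  // Cleanup on error/abort: close the stream and remove the partial file.
  abort() {
    if (this.stream) {
      this.stream.destroy()
      fs.rmSync(this.filePath, { force: true })
    }
  }
}
```

Below the threshold, no file is ever created; once it is crossed, every chunk goes to disk and memory stays bounded by the window size.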

What does this PR do?

Avoids a pathological GC case caused by repeated string concatenation, which led to the OOM killer being engaged on my 16GB RAM laptop when opencode ran a large meson build with verbose output. I was hitting the issue with the bash tool, but the problem also exists on the interactive command codepath, so I changed that as well.

The fix is to stop accumulating the whole output as one big string through repeated concatenation (with Truncate.output() handling it after the tool has finished) and instead handle truncation ourselves. We only accumulate up to 50KB in memory (this could be improved to a sliding window of 50KB if we prefer), and anything beyond that is streamed to a file. The final outcome is the same as before, but without the explosive memory usage.

How did you verify your code works?

I ran the tests, typechecked, and ran a build of opencode that includes the fix. opencode itself tried using the tool and succeeded, and I was also able to look at the whole file afterwards. I did not observe the explosive memory usage I was seeing before, so I assume it is now fixed.

A good way to test this is to run !find / and click to expand the preview. Be careful, as that may cause your machine to grind to a halt; you can use something else that produces long output instead. With this PR applied you should see the same text at the end, but the memory usage should not explode.
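If `find /` feels too risky, any command that produces a large volume of line-oriented output works as a stand-in; for example (a hypothetical repro, assuming a POSIX shell):

```shell
# Emit roughly 100MB of output without scanning the filesystem:
# 2 million copies of a ~50-byte line. Running this through the bash
# tool should exercise the streaming path without freezing the machine.
yes "this is a reasonably long line of benchmark output" | head -n 2000000
```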

@github-actions (Contributor)

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions (Contributor)

The following comment was made by an LLM; it may be inaccurate:

Based on my search, I found one potentially related PR that addresses similar memory concerns:

Related PR:

The other search results (like #7049 about output pruning) are related to memory management but focus on different aspects.

```typescript
import { Truncate } from "./truncation"

const MAX_METADATA_LENGTH = 30_000
import { Log } from "../util/log"
```


keep that at 6 (now 9) just to reduce the diff?

@LucasArruda

Could you review this, please, @rekram1-node?

I have reviewed it myself and I think the code looks good.
