Aborted() error spamming the console with vips.Image.arrayjoin() #92
If I halve the image size to 40x40px, the number of error logs decreases significantly
Are you able to provide a complete, standalone code sample that allows someone else to reproduce this issue? I could not reproduce this using this playground link.

I think the relevant constraint is at lines 111 to 112 and line 131 (commit 88df6ba). Given this constraint, an image size of 4800 x 80 ought to be fine.
Oh, that’s interesting. I ran into it every time when testing the code within my project. I can’t reproduce with 40px in a clean script, but I can with 80px. I made a simple reproduction in Node.js (v22.14.0):

```ts
import Vips from "wasm-vips";
import fs from "fs/promises";

function bufferToUint8Array(buffer: Buffer): Uint8Array {
  return new Uint8Array(
    buffer.buffer,
    buffer.byteOffset,
    buffer.byteLength / Uint8Array.BYTES_PER_ELEMENT
  );
}

export type Cleanup = () => void;

let _cleanup: Cleanup;

function cleanup() {
  if (typeof _cleanup === "function") {
    _cleanup();
  }
}

const vips = await Vips({
  preRun(module) {
    // cleanup https://github.com/kleisauke/wasm-vips/issues/13#issuecomment-1073246828
    module.setAutoDeleteLater(true);
    module.setDelayFunction((fn: Cleanup) => {
      _cleanup = fn;
    });
  },
});

const length = 60;
let vipsImages: Vips.ArrayImage = [];
for (let index = 0; index < length; index++) {
  // rather than 60 different sample images, load the same one 60 times
  const webp = bufferToUint8Array(await fs.readFile("src/assets/80px.jpg"));
  const vipsImage = vips.Image.newFromBuffer(webp);
  vipsImages.push(vipsImage);
}

const image = vips.Image.arrayjoin(vipsImages);
const webp = image.writeToBuffer(`.webp`);
await fs.writeFile("src/generated/sprite.webp", webp);

cleanup();
```

And the attached image:
Could it be an issue with the automatic memory management? Most of my code is synchronous, so perhaps garbage collection doesn’t have opportunities to run. That may also explain why it occurs earlier in my project, since I do a couple of operations with vips beforehand. I always call `cleanup()`.

EDIT: Just for the fun of it, in my project, I decided to call
It would be a bit painful to go back through and change all my wasm-vips code to call `.delete()`. With manual memory management, would I need to call `.delete()` on every intermediate image?

```ts
const imageVips = vips.Image.newFromBuffer(image);
// imageVipsTrimmed comes from an earlier (elided) trim step
const imageVipsResize = imageVipsTrimmed.resize(scale, {
  kernel: vips.Kernel.lanczos3,
});
const flattenedImage = imageVipsResize.flatten({
  background: [255, 255, 255],
});
const jpeg = flattenedImage.writeToBuffer(`.jpeg`);

imageVips.delete();

// vs

imageVips.delete();
imageVipsResize.delete();
flattenedImage.delete();
```

EDIT: after going back and reading #13 (comment), it looks like avoiding chains is the best way to clean memory manually.
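To make the "avoiding chains" point concrete, here is a rough sketch (not from the thread; the file names and operations are illustrative): in a chained call the intermediate images have no handles left to `.delete()`, whereas the unchained form keeps one handle per step.

```ts
import Vips from "wasm-vips";
import fs from "fs/promises";

const vips = await Vips();
const input = new Uint8Array(await fs.readFile("input.jpg"));

// Chained: the result of resize() is an unnamed intermediate, so there is
// no reference left on which .delete() could ever be called.
const chained = vips.Image.newFromBuffer(input).resize(0.5).flatten({
  background: [255, 255, 255],
});
chained.delete(); // only the final image can be freed here

// Unchained: every intermediate has its own handle and can be released.
const source = vips.Image.newFromBuffer(input);
const resized = source.resize(0.5);
const flattened = resized.flatten({ background: [255, 255, 255] });
const jpeg = flattened.writeToBuffer(".jpeg");
await fs.writeFile("out.jpeg", jpeg);
source.delete();
resized.delete();
flattened.delete();
```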
It seems manual memory management is also inconsistent and sometimes fails.

```ts
import Vips from "wasm-vips";
import fs from "fs/promises";

function bufferToUint8Array(buffer: Buffer): Uint8Array {
  return new Uint8Array(
    buffer.buffer,
    buffer.byteOffset,
    buffer.byteLength / Uint8Array.BYTES_PER_ELEMENT
  );
}

const vips = await Vips();

const length = 60;
let vipsImages: Vips.ArrayImage = [];
for (let index = 0; index < length; index++) {
  const webp = bufferToUint8Array(await fs.readFile("src/assets/80px.jpg"));
  const vipsImage = vips.Image.newFromBuffer(webp);
  vipsImages.push(vipsImage);
}

const image = vips.Image.arrayjoin(vipsImages);
const webp = image.writeToBuffer(`.webp`);

for (let index = 0; index < length; index++) {
  vipsImages[index]?.delete();
}
image.delete();

await fs.writeFile("src/generated/duck.sprite.webp", webp);
console.log(webp.byteLength);
```
I wasn't able to reproduce this on my AMD Ryzen 9 7900 workstation, but I did notice a possible thread oversubscription issue with that sample. Could you try this patch?

```diff
--- a/script.ts
+++ b/script.ts
@@ -21,6 +21,10 @@ function cleanup() {
 const vips = await Vips({
   preRun(module) {
+    // Handy for debugging.
+    //module.ENV.VIPS_INFO = 1;
+    module.ENV.VIPS_LEAK = 1;
+
     // cleanup https://github.com/kleisauke/wasm-vips/issues/13#issuecomment-1073246828
     module.setAutoDeleteLater(true);
     module.setDelayFunction((fn: Cleanup) => {
@@ -36,7 +40,9 @@ let vipsImages: Vips.ArrayImage = [];
 for (let index = 0; index < length; index++) {
   // rather than 60 different sample images, load the same one 60 times
   const webp = bufferToUint8Array(await fs.readFile("src/assets/80px.jpg"));
-  const vipsImage = vips.Image.newFromBuffer(webp);
+  const vipsImage = vips.Image.newFromBuffer(webp, "", {
+    access: vips.Access.sequential // "sequential"
+  });
   vipsImages.push(vipsImage);
 }
@@ -46,3 +52,5 @@ await fs.writeFile("src/generated/sprite.webp", webp);
 cleanup();
+// In order to make the leak checker work.
+vips.shutdown();
```
When opening the images in sequential access mode with the `VIPS_LEAK` leak checker enabled, I see:

```
vips_threadset_free: peak of 9 threads
memory: high-water mark 10.00 MB
```

However, in random access mode, I see:

```
vips_threadset_free: peak of 74 threads
memory: high-water mark 5.78 MB
```

So, significantly more threads are required when sequential access isn't used. If random access mode is required, you might also try to lower the concurrency to 1, i.e. by calling the function defined at lines 68 to 74 in 88df6ba.
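As an illustration only (not from the thread), lowering the concurrency might look roughly like this; the `VIPS_CONCURRENCY` environment variable and a runtime `vips.concurrency()` setter are assumptions based on how stock libvips behaves, not confirmed here:

```ts
import Vips from "wasm-vips";

// Assumption: VIPS_CONCURRENCY caps the libvips worker pool as it does in
// native builds, and needs to be set before libvips initialises.
const vips = await Vips({
  preRun(module) {
    module.ENV.VIPS_CONCURRENCY = 1;
  },
});

// Assumption: wasm-vips also exposes a runtime setter mirroring
// vips_concurrency_set(); 1 means a single worker thread per pipeline.
vips.concurrency(1);
```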
You can also use explicit resource management as of wasm-vips v0.0.12. I added a type definition for this in commit 92dbf92.
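For illustration (a sketch, not from the thread), explicit resource management with a TC39 `using` declaration might look like this, assuming wasm-vips images implement `Symbol.dispose` as of v0.0.12 and TypeScript 5.2+ is used:

```ts
import Vips from "wasm-vips";
import fs from "fs/promises";

const vips = await Vips();
const input = new Uint8Array(await fs.readFile("src/assets/80px.jpg"));

{
  // Assumption: Vips.Image implements [Symbol.dispose], so `using` releases
  // the underlying Wasm memory when this block exits.
  using image = vips.Image.newFromBuffer(input);
  const webp = image.writeToBuffer(".webp");
  await fs.writeFile("out.webp", webp);
} // image is disposed here; no manual .delete() needed
```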
I have good news and bad news! Unfortunately, I can no longer reproduce the example after my machine restarted. I’ve been waiting in anticipation of the error occurring again. However, when I was attempting various workarounds before reporting, I can say with confidence that I did try

I did not try

It’s great to see the
Oh, the type definitions for `using` weren’t added in
Are you able to provide the output of the following?

```console
$ npx envinfo --binaries --system
$ node -p "require('os').cpus().length"
$ node -p "require('os').availableParallelism()"
```

This will help me understand the concurrency settings under which this is reproducible. Also, were you able to reproduce this on v0.0.10? libvips' threadpool was slightly refactored in version 8.15.5, so it could be due to that.

The interesting thing is that this possible thread oversubscription issue doesn't occur in Python. For example, this program:

```python
#!/usr/bin/env python3
# Usage: VIPS_LEAK=1 ./vips.py
import pyvips
from pathlib import Path
import logging

logging.basicConfig(level=logging.INFO)

bytes = Path('80px.jpg').read_bytes()

images = []
for i in range(60):
    images.append(pyvips.Image.new_from_buffer(bytes, ''))

joined = pyvips.Image.arrayjoin(images)
joined.write_to_file('x.webp')

del images
del joined

pyvips.shutdown()
```

outputs:

```
vips_threadset_free: peak of 14 threads
memory: high-water mark 5.78 MB
```

This makes me wonder whether this is also needed on Node.js (lines 15 to 21 in 92dbf92). See: libvips/libvips#3963.

In some cases, it's not always obvious. For example,
Indeed, it's more or less a workaround, but it could be useful to check if threading is the culprit here.
That might work, but it's probably only useful for operations that trigger a pixel loop. Most operations just add a node to the computation graph, since libvips is "lazy".
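A rough sketch of what that laziness means in practice (illustrative file names; nothing here is from the thread): operations only describe the pipeline, and pixels are computed when the result is written out.

```ts
import Vips from "wasm-vips";
import fs from "fs/promises";

const vips = await Vips();
const input = new Uint8Array(await fs.readFile("input.jpg"));

// These calls only add nodes to the computation graph; no pixel loop runs yet.
const image = vips.Image.newFromBuffer(input);
const resized = image.resize(0.5);

// Evaluation happens here: writeToBuffer() pulls pixels through the whole
// pipeline, which is when worker threads actually get used.
const webp = resized.writeToBuffer(".webp");
await fs.writeFile("output.webp", webp);
```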
The type definition was the only missing part in
Track waiting threads explicitly instead of relying on GAsyncQueue's internal counter. This ensures the queue doesn't fill up between the time a thread is spawned and when it pops a task off the queue, where this counter was previously updated. This issue likely only affected Wasm, as thread spawning is usually instant in other environments. See: kleisauke/wasm-vips#92.
Track the number of threads that haven't reached their entry point to prevent thread oversubscription, which could occur on Wasm. This issue likely did not affect other environments, where thread spawning is usually instant. See: kleisauke/wasm-vips#92.
With (draft) PR libvips/libvips#4436, I now see this with random access mode:

```
vips_threadset_free: peak of 17 threads
memory: high-water mark 5.77 MB
```

So, that significantly reduced the number of threads. However, I'm unsure if this is the root cause, as you mentioned reproducing the issue with sequential access, which shouldn't be an issue with that sample (as far as I know).
I was writing and testing the reproductions using v0.0.11, but haven’t reproduced them since (I’ll be working on that project more in the next couple of weeks, and will keep an eye out for it)
Hmm, I run wasm-vips in Node.js on the main thread (mainly for scripting). Would restricting concurrency and threads to the web defaults significantly reduce performance?
Thank you! Understanding more about the access mode is very helpful
Great to see! I think that’ll improve the wasm version regardless.
I have multiple images sized 80x80px, each loaded with `vips.Image.newFromBuffer(webpUint8Array)`.

`vips.Image.arrayjoin(vipsImages)` works fine with a total of 10 images (800 × 80px).

`vips.Image.arrayjoin(vipsImages)` logs errors on a larger workload with a total of 60 images (4800 × 80px).

The funny thing is, although slower and logging errors, the arrayjoin function does eventually complete successfully.
Is there a different or more optimized function I should be using? Is there a limit to the size or number of images? It feels like 4800 x 80px should be doable.
The logs aren’t very helpful: