Stream big uploaded files directly to the file system #399
martinkirch started this conversation in Ideas
I'm writing an application that lets a user upload big files: more than 10 MB, often more than 100 MB, at most 1 GB.
I noticed that using `request.files` puts the whole file in memory. Not only does this consume a lot of memory, it also blocks the worker while it writes the files to disk. The `quart-uploads` extension does not provide a better method, as it relies on `await request.files` too. My expectation for an async framework was that it should be able to write such big files directly to disk while they're being uploaded; FastAPI has `UploadFile`, for example.

Using another multipart parser, I came up with the following solution. It essentially relies on `async for chunk in quart.request.body` to write each `chunk` to disk if it belongs to an uploaded file (that happens at `current_file.write(result)`). (Excerpt from the complete app)
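The original excerpt isn't reproduced above, so here is a minimal sketch of the approach, assuming the `multipart` package's `PushMultipartParser` (the route, `UPLOAD_DIR`, and the error handling are illustrative, not the author's exact code):

```python
from pathlib import Path

from multipart import MultipartSegment, PushMultipartParser, parse_options_header
from quart import Quart, request
from werkzeug.utils import secure_filename

app = Quart(__name__)
UPLOAD_DIR = Path("/tmp/uploads")  # hypothetical target directory


@app.post("/upload")
async def upload():
    # Pull the multipart boundary out of the Content-Type header.
    _, options = parse_options_header(request.headers["Content-Type"])
    boundary = options["boundary"]

    current_file = None
    with PushMultipartParser(boundary) as parser:
        # Iterate over the raw body as it arrives: the request is parsed
        # chunk by chunk and never buffered in memory as a whole.
        async for chunk in request.body:
            for result in parser.parse(chunk):
                if isinstance(result, MultipartSegment):
                    # Start of a new part: open a target file if it is an upload.
                    if result.filename:
                        target_path = UPLOAD_DIR / secure_filename(result.filename)
                        current_file = target_path.open("wb")
                elif result:
                    # A non-empty bytearray: body data for the current part.
                    if current_file is not None:
                        current_file.write(result)
                elif current_file is not None:
                    # None marks the end of the current part.
                    current_file.close()
                    current_file = None

    return {"status": "ok"}
```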
Using this function, even while receiving an 800 MB upload the worker could still serve about 75 read-update DB requests per second on another endpoint (versus ~80 req/s otherwise), while consuming less than 100 MB of memory.
This function could be made more generic by letting the caller provide a function to compute `target_path` (sketched below). I'm wondering whether this could be an interesting addition to Quart itself? A new extension? Or is it just too specific?

In any case: thanks for Quart! It's the only framework that lets me implement a very weird architecture :)
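For illustration, such a generic version might let the caller supply a callable that maps each multipart segment to its destination (`stream_uploads` and its signature are hypothetical, not an API proposal from the original post):

```python
from pathlib import Path
from typing import AsyncIterable, Callable

from multipart import MultipartSegment, PushMultipartParser


async def stream_uploads(
    body: AsyncIterable[bytes],
    boundary: str,
    target_path: Callable[[MultipartSegment], Path],
) -> None:
    """Write every uploaded file in `body` to the path chosen by the caller."""
    current_file = None
    with PushMultipartParser(boundary) as parser:
        async for chunk in body:
            for result in parser.parse(chunk):
                if isinstance(result, MultipartSegment):
                    if result.filename:  # only file parts are written to disk
                        current_file = target_path(result).open("wb")
                elif result:
                    if current_file is not None:
                        current_file.write(result)
                elif current_file is not None:  # None: end of the current part
                    current_file.close()
                    current_file = None


# e.g. route uploads into a per-user directory (user_dir is illustrative):
# await stream_uploads(request.body, boundary,
#                      lambda seg: user_dir / secure_filename(seg.filename))
```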