[JS SDK] uploadFile uses chunked transfer encoding, causing 501 NotImplemented on S3 presigned PUT URLs #1243
Bug Report
Summary
When building a template using the JS SDK in a self-hosted deployment, uploading files via .copy() fails with 501 NotImplemented from S3. The root
cause is that uploadFile sends the tar stream with Transfer-Encoding: chunked, which S3 presigned PUT URLs do not support.
Environment
- e2b JS SDK: v2.18.0 (latest)
- Node.js: v22
- Storage: Self-hosted with S3-compatible storage (presigned PUT URL returned directly to SDK)
Error
501 NotImplemented
A header you provided implies functionality that is not implemented
Root Cause
In dist/index.js, uploadFile passes a Node.js Readable stream directly as the fetch body:
```js
const res = await fetch(url, {
  method: "PUT",
  body: uploadStream, // Node.js Readable, no Content-Length
  duplex: "half"
});
```
Since Content-Length is unknown, Node.js undici falls back to Transfer-Encoding: chunked. S3 presigned PUT URLs reject chunked requests with 501
NotImplemented — this is an S3 protocol-level restriction that cannot be resolved via IAM or bucket policy.
Comparison with Python SDK
The Python SDK (v2.19.0) already handles this correctly — it buffers the tar archive into io.BytesIO first, then uploads bytes via httpx, which sets
Content-Length automatically:
```python
tar_buffer = tar_file_stream(...)
response = client.put(url, content=tar_buffer.getvalue())  # bytes, not a stream
```
|                   | JS SDK             | Python SDK        |
|-------------------|--------------------|-------------------|
| Body type         | ReadableStream     | bytes             |
| Content-Length    | Not set            | Set automatically |
| Transfer-Encoding | chunked            | fixed-length      |
| S3 presigned PUT  | 501 NotImplemented | Works             |
Suggested Fix
Buffer the stream into memory before uploading, consistent with how the Python SDK works:
```js
// Before
const res = await fetch(url, {
  method: "PUT",
  body: uploadStream,
  duplex: "half"
});

// After
const buf = await new Promise((resolve, reject) => {
  const chunks = [];
  uploadStream.on("data", (chunk) => chunks.push(Buffer.from(chunk)));
  uploadStream.on("end", () => resolve(Buffer.concat(chunks)));
  uploadStream.on("error", reject);
});
const res = await fetch(url, {
  method: "PUT",
  body: buf,
  headers: { "Content-Length": String(buf.length) }
});
```
This is a small, localized change that aligns the JS SDK's behavior with the Python SDK's. Happy to submit a PR if helpful.