package Job failed with error: Unsupported value for canned acl 'public-read' #150
First of all, thank you for joining the GH sponsors program! It's things like this that keep me going.

I haven't used B2 before, but going through the docs (https://www.backblaze.com/docs/cloud-storage-s3-compatible-api) I noticed the following: "The call succeeds only when the specified ACL matches the ACL of the bucket." The difference between B2 and S3 is that with B2 the ACL is defined at the bucket level, contrary to S3's per-object ACL. By default, each object synced back to storage is uploaded with the "public-read" canned ACL set in superstreamer/packages/artisan/src/lib/s3.ts, lines 45 to 47 at 027f3fc.
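(For reference, a minimal sketch of the kind of per-object ACL upload being discussed; whether s3.ts uses an @aws-sdk/client-s3 call like this is an assumption, and the referenced lines aren't reproduced here.)

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Endpoint, region, and credentials are assumed to come from configuration elsewhere.
const client = new S3Client({});

// A hardcoded canned ACL on every uploaded object: fine on plain S3 or
// DigitalOcean Spaces, but B2 only accepts it when it matches the bucket-level ACL.
async function putPublicObject(bucket: string, key: string, body: string) {
  await client.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: body,
      ACL: "public-read",
    }),
  );
}
```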
I'm leaning towards dropping the per-object ACL from the code, as most (if not all) users should put a CDN in front rather than have their S3-compatible storage be public by default. Would that work for you? Superstreamer would then sync back with each file being private (or rather, the provider default, as it's no longer defined by Superstreamer), but it's up to you to define the proper permissions on the CDN. If necessary, I could always include a flag to mark an upload as "public" in the package job's input payload.

Are you using Docker to run the project? If so, would you be willing to run the alpha build? See the docker-compose.yml file in the repo. So much has changed underneath along the way that I'd like to be sure we're as close to the main branch as possible. If you run the project from source, I'd be happy to provide a commit that adds compatibility with B2. Feel free to join the Discord group for some quick back-and-forth chatter too.
Thank you for the insight! I would not have been able to connect all the dots so quickly. That totally makes sense; that would explain it.
That's correct, my B2 bucket is marked as private and the files are publicly accessed via CDN.
That would absolutely work for me!
Yes, I'm using Docker. I tried the alpha build: no errors whatsoever! I'm able to transcode, package, and upload to the B2 bucket without any issues. Looking through the files uploaded by the alpha build of superstreamer, I realized that the earlier (failed) transcoded/packaged video was actually there! It uploaded even though there was an error displayed in Superstreamer. But yeah, the
I spoke too soon. superstreamer created a I played around with the
I deployed f4d5a2d on the
There's a version indicator at the bottom left of the dashboard; make sure it's at

As you stated, you're writing this issue as progress in order to help others, so let me provide some extra info too:
Let me know if the commit above works out for you.
What a legend, making a patch on a weekend; your work is very much appreciated! Running
We got a success, let's go!!!!
This was with a modified superstreamer built from source. I changed the line @matvp91 mentioned (superstreamer/packages/artisan/src/lib/s3.ts, lines 45 to 47 at 027f3fc) to the following:

ACL: options?.public ? "public-read" : "private",

I changed this line because of the docs @matvp91 mentioned.
I think that means the ACL must either match the bucket's ACL or be left unset. I'll try again tomorrow.
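(For illustration, a standalone sketch of the B2 behavior described above, assuming @aws-sdk/client-s3 pointed at B2's S3-compatible endpoint; the region, endpoint, bucket name, and credentials are placeholders.)

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Placeholder client for a B2 bucket whose bucket-level ACL is private.
const b2 = new S3Client({
  region: "us-west-004",
  endpoint: "https://s3.us-west-004.backblazeb2.com",
  credentials: { accessKeyId: "<keyId>", secretAccessKey: "<applicationKey>" },
});

async function demo() {
  const base = { Bucket: "my-private-bucket", Key: "acl-test.txt", Body: "hello" };

  // Matches the bucket's ACL, so B2 accepts the call.
  await b2.send(new PutObjectCommand({ ...base, ACL: "private" }));

  // Does not match the (private) bucket's ACL, so B2 rejects it with
  // "Unsupported value for canned acl 'public-read'".
  await b2.send(new PutObjectCommand({ ...base, ACL: "public-read" }));
}

demo().catch(console.error);
```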
Yes, B2 requires the object ACL to match the bucket ACL, which is slightly different from S3, which has no concept of an overarching bucket ACL. In the commit above (f4d5a2d) I dropped support for setting the ACL, as I do think it's best to keep your S3 storage private and have it accessed only by a CDN put in front.

The original reason for the ACL was that I went with DigitalOcean Spaces as my S3 provider, which comes with its own CDN that is tightly integrated with per-object ACLs. Thus, in order for Spaces to expose an HLS playlist, the playlist must have the "public-read" permission set. I made an enhancement ticket (#152) where we can mark an object public when a package job is scheduled, in case people use something like Spaces.
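(A hypothetical shape for that flag, purely for illustration; the field names below are guesses, not what #152 settled on.)

```ts
// Hypothetical package-job input: `public` would translate into ACL: "public-read"
// on the uploaded objects, for providers like DigitalOcean Spaces that rely on it.
const packageJobInput = {
  assetId: "example-asset-id", // assumed field name, for illustration only
  public: true,                // assumed flag name; not part of the current API
};
```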
It seems leaving ACL undefined works fine (it'll fall back to the S3 provider default), as you tested with the latest

Shall we close this one and continue in #151, which is a different issue?
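(And a sketch of what "leaving ACL undefined" can look like in upload code, assuming a PutObjectCommand-style call; the optional isPublic flag here stands in for the hypothetical #152 option.)

```ts
import { PutObjectCommand } from "@aws-sdk/client-s3";

// When isPublic isn't set, no ACL is sent at all, so the provider's bucket-level
// default applies (e.g. a private B2 bucket stays private).
function buildPutCommand(bucket: string, key: string, body: string, isPublic?: boolean) {
  return new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: body,
    ...(isPublic ? { ACL: "public-read" as const } : {}),
  });
}
```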
Hi, firstly I want to say thanks for superstreamer. I have been looking for something like this for a while, and I'm excited to set up an integration and see if superstreamer is right for my project.
I have a few more things I can try before I'm stuck. I'm pre-emptively writing this issue report to document progress and hoping it helps someone in the future. I'll update it if I find any solutions.
Describe the bug
A package job fails on step 4 (Uploading) with an "Unsupported value for canned acl" error.
I was able to successfully run my first transcode job.
But I haven't been able to successfully run a package job. In the package job which was automatically invoked after the test2 job, I see the following error.
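(For context, a sketch of the transcode job input this refers to; only the packageAfter flag mentioned below is taken from this report, the surrounding shape is assumed for illustration.)

```ts
// Assumed shape for illustration only; `packageAfter` is the field from this report.
const transcodeJobInput = {
  // ...input files and stream settings omitted...
  packageAfter: true, // schedule a package job automatically once transcoding finishes
};
```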
I'm using Backblaze as my S3 provider. I see lots of my S3 bucket's files listed at http://localhost:52002/storage, so I can see it's partially working. I think this might be a permissions issue, and there might be some setting I can change to get past this error.

To Reproduce
"packageAfter": true
Expected behavior
I was expecting the package job to succeed.
Screenshots
n/a
Desktop (please complete the following information):
n/a
Smartphone (please complete the following information):
n/a
Additional context
Here are the package job logs