package Job failed with error: Unsupported value for canned acl 'public-read' #150

Open
insanity54 opened this issue Jan 3, 2025 · 7 comments
Labels: bug (Something isn't working)

@insanity54

Hi, firstly I want to say thanks for superstreamer. I have been looking for something like this for a while, and I'm excited to set up an integration and see if superstreamer is right for my project.

I have a few more things I can try before I'm stuck. I'm pre-emptively writing this issue report to document progress and hoping it helps someone in the future. I'll update it if I find any solutions.

Describe the bug

A package job fails on step 4 (Uploading) with an error about an unsupported value.

I was able to successfully run my first transcode job.

{
  "assetId": "test2",
  "inputs": [
    {
      "type": "video",
      "path": "s3://2024-06-04.mp4",
      "height": 1080
    }
  ],
  "streams": [
    {
      "type": "video",
      "codec": "h264",
      "height": 360
    }
  ],
  "segmentSize": 10,
  "packageAfter": true,
  "tag": "test",
  "step": 3
}

But I haven't been able to successfully run a package job. In the package job that was automatically invoked after the test2 job, I see the following error.

Failed
Unsupported value for canned acl 'public-read'

I'm using Backblaze as my S3 provider. I see lots of my S3 bucket's files listed at http://localhost:52002/storage, so it's at least partially working. I think this might be a permissions issue, and there might be some setting I can change to get past this error.

To Reproduce

Expected behavior

I was expecting the package job to succeed.

Screenshots

n/a

Desktop (please complete the following information):

n/a

Smartphone (please complete the following information):

n/a

Additional context

Here are the package job logs.

Logs

1
Synced folder in /tmp/superstreamer-75f1ce15-0e29-48d8-8e84-01c7b916254eA5GFNi

2
Got meta: "{"version":1,"streams":{"video_360_400000_h264.m4v":{"type":"video","codec":"h264","height":360,"bitrate":400000,"framerate":60}},"segmentSize":10}"

3
in=/tmp/superstreamer-75f1ce15-0e29-48d8-8e84-01c7b916254eA5GFNi/video_360_400000_h264.m4v,stream=video,init_segment=video_360_400000_h264/init.mp4,segment_template=video_360_400000_h264/$Number$.m4s,playlist_name=video_360_400000_h264/playlist.m3u8,iframe_playlist_name=video_360_400000_h264/iframe.m3u8 --segment_duration 10 --fragment_duration 10 --hls_master_playlist_output master.m3u8

4
Uploading to package/test2/hls
@matvp91 (Collaborator) commented Jan 3, 2025

First of all, thank you for joining the GH sponsors program! It's things like this that keep me going.

I haven't used B2 before, but going through the docs (https://www.backblaze.com/docs/cloud-storage-s3-compatible-api), I noticed the following: "The call succeeds only when the specified ACL matches the ACL of the bucket"

The difference between B2 and S3 is that with B2, the ACL is defined at the bucket level, whereas S3 defines the ACL per object.

By default, each object synced back to storage is public-read, but I assume you wouldn't want that, considering you have a CDN in front.

return {
  ContentType: contentType,
  ACL: options?.public ? "public-read" : undefined,

I'm leaning towards dropping the per-object ACL from the code, as most (if not all) users should put a CDN in front rather than have S3-compatible storage be public by default. Would that work for you? Superstreamer would then sync back with each file being private (or rather, the provider default, as it's not defined by Superstreamer), but it would be up to you to define the proper permissions on the CDN. If necessary, I could always include a flag to mark an upload as "public" in the package job's input payload.
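
For what it's worth, a minimal sketch of what dropping the per-object ACL could look like, assuming an upload helper built on the AWS SDK v3 client (the helper name and shape here are hypothetical, not the actual Superstreamer code):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({}); // endpoint/credentials come from env or config

// Hypothetical helper: with no ACL set on the request, each object simply
// inherits the bucket/provider default, which on a private B2 bucket is private.
async function uploadFile(
  bucket: string,
  key: string,
  body: string | Uint8Array,
  contentType: string,
): Promise<void> {
  await client.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: body,
      ContentType: contentType,
      // ACL intentionally omitted
    }),
  );
}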

Are you using Docker to run the project? If so, would you be willing to run the alpha build? See the docker-compose.yml file in the repo. So much has changed underneath along the way that I'd like to be sure we're as close to the main branch as possible.

If you run the project from source, I'd be happy to provide a commit that adds compatibility to B2.

Feel free to join the Discord group for some quick back and forth chatter too.

@insanity54 (Author) commented Jan 4, 2025

Thank you for the insight! I would not have been able to connect all the dots so quickly. That totally makes sense and would explain it.

By default, each object synced back to storage is public-read, but I assume you wouldn't want that, considering you have a CDN in front.

That's correct, my B2 bucket is marked as private and the files are publicly accessed via CDN.

I'm leaning towards dropping the per-object ACL from the code, as most (if not all) users should put a CDN in front rather than have S3-compatible storage be public by default. Would that work for you?

That would absolutely work for me!

Are you using Docker to run the project? If so, would you be willing to run the alpha build?

Yes, I'm using Docker. I tried the alpha build. No errors whatsoever! I'm able to transcode, package, and upload to the B2 bucket without any issues.

I was looking through the files uploaded by the alpha build of superstreamer and I realized that the earlier (failed) transcoded/packaged video was actually there! It uploaded even though there was an error displayed in Superstreamer.

But yeah, the :alpha Docker tag addressed every issue I had. Thank you!

@insanity54 (Author) commented Jan 4, 2025

I spoke too soon. Superstreamer created a transcode directory, which I thought was where the packaged output ends up. After reading more of the docs, I see that there's supposed to be a package directory, which is missing from my bucket.

I played around with the POST /pipeline endpoint and I'm seeing transcode Jobs succeed, but package jobs are failing.

Unsupported value for canned acl 'public-read'
Logs
1
Synced folder in /tmp/superstreamer-cc81dbd2-4cf7-4432-8d1f-9c5c385cdb3biuit3f
2
Got meta: "{"version":1,"streams":{"video_360_400000_h264.m4v":{"type":"video","codec":"h264","height":360,"bitrate":400000,"framerate":60},"video_144_108000_h264.m4v":{"type":"video","codec":"h264","height":144,"bitrate":108000,"framerate":60},"audio_eng_128000_aac.m4a":{"type":"audio","codec":"aac","language":"eng","bitrate":128000,"channels":2}},"segmentSize":2.24}"
3
in=/tmp/superstreamer-cc81dbd2-4cf7-4432-8d1f-9c5c385cdb3biuit3f/video_360_400000_h264.m4v,stream=video,init_segment=video_360_400000_h264/init.mp4,segment_template=video_360_400000_h264/$Number$.m4s,playlist_name=video_360_400000_h264/playlist.m3u8,iframe_playlist_name=video_360_400000_h264/iframe.m3u8 in=/tmp/superstreamer-cc81dbd2-4cf7-4432-8d1f-9c5c385cdb3biuit3f/video_144_108000_h264.m4v,stream=video,init_segment=video_144_108000_h264/init.mp4,segment_template=video_144_108000_h264/$Number$.m4s,playlist_name=video_144_108000_h264/playlist.m3u8,iframe_playlist_name=video_144_108000_h264/iframe.m3u8 in=/tmp/superstreamer-cc81dbd2-4cf7-4432-8d1f-9c5c385cdb3biuit3f/audio_eng_128000_aac.m4a,stream=audio,init_segment=audio_eng_128000_aac/init.mp4,segment_template=audio_eng_128000_aac/$Number$.m4a,playlist_name=audio_eng_128000_aac/playlist.m3u8,hls_group_id=audio_aac,hls_name=eng_aac,language=eng --segment_duration 2.24 --fragment_duration 2.24 --hls_master_playlist_output master.m3u8
4
Uploading to package/c70ec7a2-dbb0-476d-b9c6-c3d083f602f7/hls

@matvp91 (Collaborator) commented Jan 4, 2025

I deployed f4d5a2d on the alpha tag, which should theoretically fix your issue by not enforcing an ACL on each object sync.

docker compose pull
docker compose up -d

There's a version indicator at the bottom left of the dashboard; make sure it's at alpha 2025-01-04 07:37.

Since you said you're writing this issue to document progress and help others, let me provide some extra info too:

  • I hope I can release v1.2.0 in January, which will be close to what you're seeing with the alpha build. I'll spend most of this month writing the docs, as I'd love to get those spot on. Once that's out, you can safely move to the versioned, or latest, container.
  • The packageAfter flag is removed; transcode and package are isolated jobs now. As you've seen, there's a pipeline job that orchestrates transcoding and packaging.
  • The idea behind separate transcode and package jobs is that transcoding is time-consuming and resource-intensive, while packaging is pretty much instant. A transcode result (which you'll see in storage/transcode) contains separate tracks, and each track has proper keyframes inserted for packaging/splicing purposes (chunking an m4v into a set of CMAF-compliant segments). You can then package the same transcode result as many times as you'd like, with a variety of options (e.g. an encrypted and a plain stream, or a different set of segment sizes, ...). This approach theoretically opens the door to real-time packaging: I have a rough idea where I'd construct an HLS playlist on the fly that references segments by byte ranges directly from the transcode result, skipping a package job altogether as a "preparation" step (see the illustrative playlist after this list). Wishful thinking here.
  • There's the concept of assets now, each uniquely identified by a UUID. In your first sample, you provided test2 as the assetId; that's no longer possible. Each job contributes something to an asset (e.g. a transcode result, a package result, and in later stages a set of thumbnails, auto-generated subtitles, etc.).
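
To make the byte-range idea concrete, here's a hand-written sketch of what such an on-the-fly HLS media playlist could look like, referencing one fMP4 track from the transcode result directly. All byte offsets and lengths below are made up purely for illustration.

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:10
#EXT-X-MAP:URI="video_360_400000_h264.m4v",BYTERANGE="800@0"
#EXTINF:10.0,
#EXT-X-BYTERANGE:524288@800
video_360_400000_h264.m4v
#EXTINF:10.0,
#EXT-X-BYTERANGE:498760@525088
video_360_400000_h264.m4v
#EXT-X-ENDLIST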

Let me know if the commit above works out for you.

@insanity54 (Author) commented Jan 4, 2025

What a legend, making a patch on a weekend; your work is very appreciated!

Running alpha 2025-01-04 07:37, the good news is that I don't see any error about the S3 ACL. Bad news: we have a new error! I opened a new issue for that: #151

@insanity54 (Author) commented Jan 5, 2025

We got a success, let's go!!!!

[Screenshot from 2025-01-04 21-44-24]

Logs

1
Synced folder in /tmp/superstreamer-8c2e2f96-19ce-458f-8e9f-cfe2fedb8f6ftU2Kjf

2
Got meta: "{"version":1,"streams":{"video_144_108000_h264.m4v":{"type":"video","codec":"h264","height":144,"bitrate":108000,"framerate":60},"audio_eng_128000_aac.m4a":{"type":"audio","codec":"aac","language":"eng","bitrate":128000,"channels":2}},"segmentSize":2.24}"
3
in=/tmp/superstreamer-8c2e2f96-19ce-458f-8e9f-cfe2fedb8f6ftU2Kjf/video_144_108000_h264.m4v,stream=video,init_segment=video_144_108000_h264/init.mp4,segment_template=video_144_108000_h264/$Number$.m4s,playlist_name=video_144_108000_h264/playlist.m3u8,iframe_playlist_name=video_144_108000_h264/iframe.m3u8 in=/tmp/superstreamer-8c2e2f96-19ce-458f-8e9f-cfe2fedb8f6ftU2Kjf/audio_eng_128000_aac.m4a,stream=audio,init_segment=audio_eng_128000_aac/init.mp4,segment_template=audio_eng_128000_aac/$Number$.m4a,playlist_name=audio_eng_128000_aac/playlist.m3u8,hls_group_id=audio_aac,hls_name=eng_aac,language=eng --segment_duration 2.24 --fragment_duration 2.24 --hls_master_playlist_output master.m3u8
4
Uploading to package/35a0a0da-b182-40de-97fd-6b0178c9eabe/hls. BTW this is a custom built version of superstreamer to test compatibility with Backblaze B2

This was with a modified Superstreamer built from source. I changed the line @matvp91 mentioned.

return {
  ContentType: contentType,
  ACL: options?.public ? "public-read" : undefined,

changed to the following.

  ACL: options?.public ? "public-read" : "private",

I changed this line because of the docs @matvp91 mentioned.

"The call succeeds only when the specified ACL matches the ACL of the bucket"

I think that means the ACL must be either "public-read" or "private". I suspect undefined won't work, but I'm not completely sure. In my testing, I ran into #151 again and again, so I wasn't able to get a second success. I'm running Superstreamer on Kubernetes with Tilt auto-reloading, and I was changing a lot of variables, so I wasn't sure if the pod was actually running the most up-to-date patch. Because of this, I can't say for certain which ACL setting actually led to success.
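
One way to pin down the ACL behavior in isolation, for anyone hitting this later: probe the bucket directly with the AWS SDK v3 client. This is a rough sketch assuming B2's S3-compatible endpoint; the endpoint, credentials, and bucket name are placeholders, not values from this setup.

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Placeholder endpoint/credentials; for B2, the region is embedded in the endpoint.
const client = new S3Client({
  endpoint: "https://s3.us-west-004.backblazeb2.com",
  region: "us-west-004",
  credentials: { accessKeyId: "<keyId>", secretAccessKey: "<applicationKey>" },
});

async function probeAcl(acl?: "private" | "public-read"): Promise<void> {
  try {
    await client.send(
      new PutObjectCommand({
        Bucket: "my-private-bucket", // placeholder
        Key: "acl-probe.txt",
        Body: "probe",
        ACL: acl,
      }),
    );
    console.log(`ACL ${acl ?? "(unset)"}: accepted`);
  } catch (err) {
    console.log(`ACL ${acl ?? "(unset)"}: rejected (${(err as Error).message})`);
  }
}

// Run as an ES module (top-level await). Against a private B2 bucket,
// only "public-read" should be rejected.
for (const acl of [undefined, "private", "public-read"] as const) {
  await probeAcl(acl);
}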

I'll try again tomorrow.

@matvp91 (Collaborator) commented Jan 5, 2025

Yes, B2 requires the object ACL to match the bucket ACL, which is slightly different from S3, which has no concept of an overarching bucket ACL.

In the commit above (f4d5a2d), I dropped support for setting the ACL, as I do think it's best to keep your S3 storage private and have it accessed only through a CDN in front.

The original reason for setting the ACL was that I went with DigitalOcean Spaces as S3 storage, which comes with its own CDN that is tightly integrated with object ACLs. Thus, in order for Spaces to expose an HLS playlist, the objects must have the "public-read" permission set.

I made an enhancement ticket (#152) where we can mark objects public when a package job is scheduled, in case people use something like Spaces.

I think that means the ACL must be either "public-read" or "private". I suspect undefined won't work, but I'm not completely sure.

It seems leaving the ACL undefined works fine (it'll fall back to the S3 provider's default), as you tested with the latest alpha container, which does not set an ACL at all. Setting an ACL is optional in the AWS S3 SDK, which makes me suspect it's supposed to work this way.

Shall we close this one and continue in #151, which is a different issue?

@matvp91 added the bug (Something isn't working) label on Jan 5, 2025