If Data Prepper fails to read the S3 object, it keeps the SQS message in the SQS queue. This is intentional. Users should configure an SQS redrive policy that targets an SQS dead-letter queue (DLQ); after a configured number of failed attempts (say, 5), SQS will automatically move the message to the DLQ.
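For reference, the redrive policy lives on the queue itself rather than in the Data Prepper pipeline. A minimal boto3 sketch, where the queue URL and DLQ ARN are placeholders:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder resources -- substitute your own queue and DLQ.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/data-prepper-s3-events"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:data-prepper-s3-events-dlq"

# After maxReceiveCount failed receives, SQS moves the message to the DLQ
# instead of making it visible on the source queue again.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```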
I think we can do a few things to help with this:
1. When a failure like this occurs, change the visibility timeout to some reasonable value (say, 5 minutes) so that the message does not become available again too soon (see the sketch after this list).
2. Improve the documentation to direct users to configure an SQS redrive policy and DLQ.
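A minimal sketch of item 1. Data Prepper itself is Java; boto3 is used here only to show the SQS call involved, and the queue URL, helper name, and 5-minute default are illustrative:

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/data-prepper-s3-events"

def back_off_message(receipt_handle: str, timeout_seconds: int = 300) -> None:
    """Delay redelivery of a message whose S3 object could not be read."""
    # Raising the visibility timeout keeps the failed message hidden for
    # `timeout_seconds`, so it does not become available again too soon.
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=receipt_handle,
        VisibilityTimeout=timeout_seconds,
    )
```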
Another question: in this use case, is it intentional that the user does not have access to Bucket 2? If the user will never have access, we could add configuration options to ignore certain buckets and/or key prefixes.
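If such options were added, the filtering might look roughly like this sketch; `IGNORED_BUCKETS` and `IGNORED_KEY_PREFIXES` are hypothetical names, not existing Data Prepper configurations:

```python
import json

# Hypothetical ignore lists -- Data Prepper does not expose these today;
# they only illustrate the proposed configuration.
IGNORED_BUCKETS = {"bucket-2"}
IGNORED_KEY_PREFIXES = ("tmp/",)

def records_to_process(message_body: str) -> list:
    """Keep only S3 event records not from ignored buckets/prefixes."""
    kept = []
    for record in json.loads(message_body).get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if bucket in IGNORED_BUCKETS or key.startswith(IGNORED_KEY_PREFIXES):
            continue  # skip events the pipeline can never read
        kept.append(record)
    return kept
```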
**Describe the bug**
S3-SQS source pipelines can get stuck when the SQS queue receives events from multiple buckets and one bucket has permission issues.
**To Reproduce**
Steps to reproduce the behavior:
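A minimal boto3 sketch of the failure mode, assuming two buckets publish event notifications to the same queue and the consumer's role is denied access to one of them (all names are placeholders):

```python
import json

import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

# Placeholder queue fed by notifications from bucket-1 (readable)
# and bucket-2 (access denied for the consumer's role).
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/data-prepper-s3-events"

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
for message in resp.get("Messages", []):
    for record in json.loads(message["Body"]).get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        try:
            s3.get_object(Bucket=bucket, Key=key)
        except ClientError:
            # AccessDenied on bucket-2: the message is never deleted, so it
            # becomes visible again after the visibility timeout, the same
            # failure repeats, and the queue backs up.
            break
    else:
        # All objects were read successfully; only then delete the message.
        sqs.delete_message(
            QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )
```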
**Expected behavior**
If a pipeline hits access-denied errors on one bucket, it should still process objects from the other bucket.