AccessDenied error #4

Open
bluecanarybe opened this issue Jul 8, 2020 · 2 comments

bluecanarybe commented Jul 8, 2020

I'm running into an authentication error. I used the same environment profile when applying the Terraform and when running the Docker container, and I'm certain I used the correct output values in my docker run command.

2020-07-08 20:27:49,513 - 1 - [INFO] Found credentials in environment variables.
2020-07-08 20:27:50,367 - 1 - [INFO] Running against AWS account REDACTED
2020-07-08 20:27:51,279 - 1 - [INFO] Configured input queue is https://eu-west-3.queue.amazonaws.com/REDACTED/perimeterator-scanner
2020-07-08 20:27:51,279 - 1 - [INFO] Configured output queue is https://eu-west-3.queue.amazonaws.com/REDACTED/perimeterator-enumerator
2020-07-08 20:27:51,283 - 1 - [INFO] Starting message polling loop
Traceback (most recent call last):
  File "/opt/perimeterator/src/scanner.py", line 143, in <module>
    main()
  File "/opt/perimeterator/src/scanner.py", line 57, in main
    MessageAttributeNames=["All"],
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ReceiveMessage operation: Access to the resource https://eu-west-3.queue.amazonaws.com/ is denied.
@darkarnium (Owner)

Hey there!

I'm sorry for the delay on this one, apparently I wasn't watching my own repository (?!).

I've only encountered this sort of error - outside of missing SQS permissions - when an SQS queue is configured to use KMS encryption but the appropriate IAM policies do not permit access to the relevant KMS keys and operations. That said, if the environment was deployed using only the Terraform in this repository, this should not be the case.
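
If you want to rule the KMS case out, something like the following (an untested sketch, reusing the queue URL from your log) should show whether the queue has a key configured:

# Queue URL copied from the log above - replace with your own
aws sqs get-queue-attributes \
    --queue-url https://eu-west-3.queue.amazonaws.com/REDACTED/perimeterator-scanner \
    --attribute-names KmsMasterKeyId \
    --region eu-west-3

If that returns a KmsMasterKeyId, the credentials the scanner runs with also need kms:Decrypt on that key to receive messages (and kms:GenerateDataKey to send), on top of the usual SQS permissions.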

I'll see if I can get a repro on this one today.

Cheers,
Peter

@darkarnium (Owner)

Hey there,

I've just redeployed this into us-west-2 in a clean environment, with no existing fixtures or data, kicked off a build of the Docker image, and launched it on a fresh machine. I'm seeing the scans proceed without error:

[darkarnium::io perimeterator][0]$ docker run \
>     -e AWS_ACCESS_KEY_ID=REDACTED \
>     -e AWS_SECRET_ACCESS_KEY=REDACTED \
>     -e SCANNER_SQS_QUEUE=arn:aws:sqs:us-west-2:REDACTED:perimeterator-scanner \
>     -e ENUMERATOR_SQS_QUEUE=arn:aws:sqs:us-west-2:REDACTED:perimeterator-enumerator \
>     perimeterator-scanner:master
2020-11-07 15:36:19,615 - 1 - [INFO] Found credentials in environment variables.
2020-11-07 15:36:20,112 - 1 - [INFO] Running against AWS account REDACTED
2020-11-07 15:36:21,629 - 1 - [INFO] Configured input queue is https://us-west-2.queue.amazonaws.com/REDACTED/perimeterator-enumerator
2020-11-07 15:36:21,630 - 1 - [INFO] Configured output queue is https://us-west-2.queue.amazonaws.com/REDACTED/perimeterator-scanner
2020-11-07 15:36:21,638 - 1 - [INFO] Starting message polling loop
2020-11-07 15:36:22,323 - 1 - [INFO] Got 1 messages from the queue
2020-11-07 15:36:22,323 - 1 - [INFO] [23ce367d-cd54-4d17-af1d-3dbf2f49f7ad] Processing message body
2020-11-07 15:36:22,324 - 1 - [INFO] [23ce367d-cd54-4d17-af1d-3dbf2f49f7ad] Starting scan of resource arn:aws:ec2:us-west-2:REDACTED:instance/i-0c2f140a19054fd62
2020-11-07 15:36:42,562 - 1 - [INFO] Enqueued scan results for resource arn:aws:ec2:us-west-2:REDACTED:instance/i-0c2f140a19054fd62 as c4fe2cdd-89e7-4806-8428-e974d759344f
2020-11-07 15:36:42,562 - 1 - [INFO] [23ce367d-cd54-4d17-af1d-3dbf2f49f7ad] Message processed successfully
2020-11-07 15:36:42,952 - 1 - [INFO] Got 1 messages from the queue
2020-11-07 15:36:42,952 - 1 - [INFO] [7b45886b-4eb6-42bb-938d-f9aa97ff0821] Processing message body
2020-11-07 15:36:42,953 - 1 - [INFO] [7b45886b-4eb6-42bb-938d-f9aa97ff0821] Starting scan of resource arn:aws:ec2:eu-west-2:REDACTED:instance/i-0ef7527bb085d040a
2020-11-07 15:36:53,025 - 1 - [INFO] Enqueued scan results for resource arn:aws:ec2:eu-west-2:REDACTED:instance/i-0ef7527bb085d040a as e3268bef-c387-4562-bbcf-7a2f4c57f8c5
2020-11-07 15:36:53,026 - 1 - [INFO] [7b45886b-4eb6-42bb-938d-f9aa97ff0821] Message processed successfully
2020-11-07 15:36:53,371 - 1 - [INFO] Got 1 messages from the queue
2020-11-07 15:36:53,371 - 1 - [INFO] [e4971f30-84cf-4cf5-8f87-e60756df9740] Processing message body
2020-11-07 15:36:53,372 - 1 - [INFO] [e4971f30-84cf-4cf5-8f87-e60756df9740] Starting scan of resource arn:aws:ec2:eu-west-2:REDACTED:instance/i-0ef7527bb085d040a
2020-11-07 15:36:57,942 - 1 - [INFO] Enqueued scan results for resource arn:aws:ec2:eu-west-2:REDACTED:instance/i-0ef7527bb085d040a as fd61426a-6b41-4b31-b990-7233c7db03fa
2020-11-07 15:36:57,942 - 1 - [INFO] [e4971f30-84cf-4cf5-8f87-e60756df9740] Message processed successfully
2020-11-07 15:36:58,310 - 1 - [INFO] Got 1 messages from the queue
2020-11-07 15:36:58,310 - 1 - [INFO] [dd33e8fd-da4e-4c13-9778-c5882dca4c6f] Processing message body
2020-11-07 15:36:58,311 - 1 - [INFO] [dd33e8fd-da4e-4c13-9778-c5882dca4c6f] Starting scan of resource arn:aws:ec2:us-west-2:REDACTED:instance/i-0c2f140a19054fd62
2020-11-07 15:37:18,677 - 1 - [INFO] Enqueued scan results for resource arn:aws:ec2:us-west-2:REDACTED:instance/i-0c2f140a19054fd62 as b83f7ba2-7557-4e49-b1dc-09d88b77d28e
2020-11-07 15:37:18,677 - 1 - [INFO] [dd33e8fd-da4e-4c13-9778-c5882dca4c6f] Message processed successfully

This is definitely a bit odd. If you're still encountering this one - again, sorry for such a long delay on the update - can you try using the AWS CLI to inspect the queue and see whether any additional error messages are provided that might help to track down the cause?
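
For example - just a sketch, using the same credentials you pass to the container and the queue URL from your earlier log:

# Confirm which principal the credentials actually resolve to
aws sts get-caller-identity

# Dump the queue attributes, including the resource policy and any KMS key
aws sqs get-queue-attributes \
    --queue-url https://eu-west-3.queue.amazonaws.com/REDACTED/perimeterator-scanner \
    --attribute-names All \
    --region eu-west-3

# Attempt the same ReceiveMessage call the scanner makes
aws sqs receive-message \
    --queue-url https://eu-west-3.queue.amazonaws.com/REDACTED/perimeterator-scanner \
    --message-attribute-names All \
    --region eu-west-3

If receive-message fails the same way from the CLI, the full error should at least confirm whether it's the queue policy, IAM, or KMS getting in the way.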

Cheers,
Peter
