mirror of
https://github.com/HackTricks-wiki/hacktricks-cloud.git
synced 2026-01-01 07:25:51 -08:00
229 lines
12 KiB
Markdown
# AWS - S3 Unauthenticated Enum

{% hint style="success" %}
Learn & practice AWS Hacking:<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">\
Learn & practice GCP Hacking: <img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">[**HackTricks Training GCP Red Team Expert (GRTE)**<img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">](https://training.hacktricks.xyz/courses/grte)

<details>

<summary>Support HackTricks</summary>

* Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks_live)**.**
* **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.

</details>
{% endhint %}

## S3 Public Buckets

A bucket is considered **“public”** if **any user can list the contents** of the bucket, and **“private”** if the bucket's contents can **only be listed or written by certain users**.

Companies might have **bucket permissions misconfigured**, giving access either to everything or to everyone authenticated in AWS in any account (so to anyone). Note that even with such misconfigurations, some actions might still not be possible because buckets might have their own access control lists (ACLs).

**Learn about AWS-S3 misconfiguration here:** [**http://flaws.cloud**](http://flaws.cloud/) **and** [**http://flaws2.cloud/**](http://flaws2.cloud)

### Finding AWS Buckets

Different methods to find when a webpage is using AWS S3 to store some resources:

#### Enumeration & OSINT:

* Using the **wappalyzer** browser plugin
* Using burp (**spidering** the web) or by manually navigating through the page; all the **resources loaded** will be saved in the History.
* **Check for resources** in domains like:

```
http://s3.amazonaws.com/[bucket_name]/
http://[bucket_name].s3.amazonaws.com/
```
* Check for **CNAMES**, as `resources.domain.com` might have the CNAME `bucket.s3.amazonaws.com`
* Check [https://buckets.grayhatwarfare.com](https://buckets.grayhatwarfare.com/), a web with already **discovered open buckets**.
* The **bucket name** and the **bucket domain name** need to be **the same.**
* **flaws.cloud** is at **IP** 52.92.181.107 and if you go there it redirects you to [https://aws.amazon.com/s3/](https://aws.amazon.com/s3/). Also, `dig -x 52.92.181.107` gives `s3-website-us-west-2.amazonaws.com`.
* To check it's a bucket you can also **visit** [https://flaws.cloud.s3.amazonaws.com/](https://flaws.cloud.s3.amazonaws.com/).

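Probing these URL forms can be scripted. A minimal sketch (hypothetical helper names, plain stdlib HTTP): on S3, a 200 response to an unauthenticated GET means the bucket is listable, a 403 means the bucket exists but is private, and a 404 means no bucket with that name exists:

```python
import urllib.request
import urllib.error

# Build the two common S3 URL forms for a candidate bucket name.
def s3_candidates(bucket: str) -> list:
    return [
        f"http://s3.amazonaws.com/{bucket}/",
        f"http://{bucket}.s3.amazonaws.com/",
    ]

# Probe one URL unauthenticated and classify the bucket.
def probe(url: str) -> str:
    try:
        urllib.request.urlopen(url, timeout=10)
        return "listable"
    except urllib.error.HTTPError as err:
        return "private" if err.code == 403 else "no bucket"
```

To scan a wordlist, just loop `probe` over `s3_candidates(name)` for every candidate name.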
#### Brute-Force

You can find buckets by **brute-forcing names** related to the company you are pentesting:

* [https://github.com/sa7mon/S3Scanner](https://github.com/sa7mon/S3Scanner)
* [https://github.com/clario-tech/s3-inspector](https://github.com/clario-tech/s3-inspector)
* [https://github.com/jordanpotti/AWSBucketDump](https://github.com/jordanpotti/AWSBucketDump) (Contains a list with potential bucket names)
* [https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets](https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets)
* [https://github.com/smaranchand/bucky](https://github.com/smaranchand/bucky)
* [https://github.com/tomdev/teh\_s3\_bucketeers](https://github.com/tomdev/teh_s3_bucketeers)
* [https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3](https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3)
* [https://github.com/Eilonh/s3crets\_scanner](https://github.com/Eilonh/s3crets_scanner)
* [https://github.com/belane/CloudHunter](https://github.com/belane/CloudHunter)

```bash
# Generate a wordlist to create permutations
curl -s https://raw.githubusercontent.com/cujanovic/goaltdns/master/words.txt > /tmp/words-s3.txt.temp
curl -s https://raw.githubusercontent.com/jordanpotti/AWSBucketDump/master/BucketNames.txt >> /tmp/words-s3.txt.temp
cat /tmp/words-s3.txt.temp | sort -u > /tmp/words-s3.txt

# Generate a wordlist based on the domains and subdomains to test
## Write those domains and subdomains in subdomains.txt
cat subdomains.txt > /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "-" >> /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "\n" | sort -u >> /tmp/words-hosts-s3.txt

# Create permutations based on a list with the domains and subdomains to attack
goaltdns -l /tmp/words-hosts-s3.txt -w /tmp/words-s3.txt -o /tmp/final-words-s3.txt.temp
## The previous tool is specialized in creating permutations for subdomains, let's filter that list
### Remove lines ending with "."
cat /tmp/final-words-s3.txt.temp | grep -Ev "\.$" > /tmp/final-words-s3.txt.temp2
### Create list without TLD
cat /tmp/final-words-s3.txt.temp2 | sed -E 's/\.[a-zA-Z0-9]+$//' > /tmp/final-words-s3.txt.temp3
### Create list without dots
cat /tmp/final-words-s3.txt.temp3 | tr -d "." > /tmp/final-words-s3.txt.temp4
### Create list with dots replaced by hyphens
cat /tmp/final-words-s3.txt.temp3 | tr "." "-" > /tmp/final-words-s3.txt.temp5

## Generate the final wordlist
cat /tmp/final-words-s3.txt.temp2 /tmp/final-words-s3.txt.temp3 /tmp/final-words-s3.txt.temp4 /tmp/final-words-s3.txt.temp5 | grep -v -- "-\." | awk '{print tolower($0)}' | sort -u > /tmp/final-words-s3.txt

## Call s3scanner
s3scanner --threads 100 scan --buckets-file /tmp/final-words-s3.txt | grep bucket_exists
```

#### Loot S3 Buckets

Given S3 open buckets, [**BucketLoot**](https://github.com/redhuntlabs/BucketLoot) can automatically **search for interesting information**.

### Find the Region

You can find all the regions supported by AWS at [**https://docs.aws.amazon.com/general/latest/gr/s3.html**](https://docs.aws.amazon.com/general/latest/gr/s3.html)

#### By DNS

You can get the region of a bucket with **`dig`** and **`nslookup`** by doing a **DNS request of the discovered IP**:

```bash
dig flaws.cloud
;; ANSWER SECTION:
flaws.cloud.    5    IN    A    52.218.192.11

nslookup 52.218.192.11
Non-authoritative answer:
11.192.218.52.in-addr.arpa    name = s3-website-us-west-2.amazonaws.com.
```

Check that the resolved domain has the word "website".\
You can access the static website by going to: `flaws.cloud.s3-website-us-west-2.amazonaws.com`\
or you can access the bucket by visiting: `flaws.cloud.s3-us-west-2.amazonaws.com`

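This lookup is easy to script. A minimal sketch (hypothetical helper names) that resolves the IP's PTR record and pulls the region out of names like `s3-website-us-west-2.amazonaws.com`:

```python
import re
import socket

# Extract the region from an S3 reverse-DNS name,
# e.g. "s3-website-us-west-2.amazonaws.com" -> "us-west-2".
def region_from_ptr(ptr_name: str):
    m = re.match(r"s3-website[.-]([a-z0-9-]+)\.amazonaws\.com\.?$", ptr_name)
    return m.group(1) if m else None

# Resolve a bucket IP back to its PTR name and parse the region (needs DNS).
def region_of_ip(ip: str):
    return region_from_ptr(socket.gethostbyaddr(ip)[0])
```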
#### By Trying

If you try to access a bucket but in the **domain name you specify another region** (for example the bucket is in `bucket.s3.amazonaws.com` but you try to access `bucket.s3-website-us-west-2.amazonaws.com`), then you will be **redirected to the correct location**:

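That redirect can also be queried directly: responses from the wrong (or global) endpoint carry an `x-amz-bucket-region` header naming the real region. A hedged sketch (hypothetical helper names), assuming the virtual-host-style bucket URL:

```python
import urllib.request
import urllib.error

# Pull the region out of an S3 response's headers (works on a plain dict too).
def region_from_headers(headers):
    return headers.get("x-amz-bucket-region")

# HEAD the global endpoint; redirect/denied responses still carry the header.
def bucket_region(bucket: str):
    req = urllib.request.Request(f"https://{bucket}.s3.amazonaws.com/", method="HEAD")
    try:
        resp = urllib.request.urlopen(req, timeout=10)
    except urllib.error.HTTPError as err:
        resp = err
    return region_from_headers(resp.headers)
```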
### Enumerating the bucket

To test the openness of a bucket a user can just enter its URL in their web browser. A private bucket will respond with "Access Denied". A public bucket will list the first 1,000 objects that have been stored.

You can also check this with the cli:

```bash
# Use --no-sign-request to check Everyone's permissions
# Use --profile <PROFILE_NAME> to indicate the AWS profile (keys) that you want to use: check for "Any Authenticated AWS User" permissions
# Use --recursive if you want to list recursively
# Optionally you can select the region if you know it
aws s3 ls s3://flaws.cloud/ [--no-sign-request] [--profile <PROFILE_NAME>] [--recursive] [--region us-west-2]
```

If the bucket doesn't have a domain name, when trying to enumerate it, **only put the bucket name** and not the whole AWS S3 domain. Example: `s3://<BUCKETNAME>`

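The same check works without the CLI, since a public listing is just an unauthenticated HTTP GET that returns `ListBucketResult` XML. A minimal sketch (hypothetical helper names; key extraction shown offline):

```python
import urllib.request
import xml.etree.ElementTree as ET

S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

# Pull the object keys out of a ListBucketResult XML document.
def keys_from_listing(xml_text: str):
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(S3_NS + "Key")]

# Unauthenticated listing of a public bucket (first 1,000 keys).
def list_public_bucket(bucket: str):
    with urllib.request.urlopen(f"https://{bucket}.s3.amazonaws.com/", timeout=10) as resp:
        return keys_from_listing(resp.read().decode())
```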
### Public URL template

```
https://{user_provided}.s3.amazonaws.com
```

### Get Account ID from public Bucket

It's possible to determine the AWS account a bucket belongs to by taking advantage of the **`s3:ResourceAccount`** **Policy Condition Key**. This condition **restricts access based on the account the S3 bucket is in** (other account-based policies restrict based on the account the requesting principal is in).\
And because the policy can contain **wildcards**, it's possible to find the account number **one digit at a time**.

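The digit-by-digit discovery reduces to a simple search loop. In the sketch below `can_access` is a hypothetical oracle: in the real attack it assumes a role whose session policy carries an `s3:ResourceAccount` condition of `"<prefix>*"` and checks whether the bucket is still reachable:

```python
# Recover a 12-digit AWS account ID one digit at a time, given an oracle that
# says whether access still works under an s3:ResourceAccount condition
# matching the wildcard pattern.
def find_account_id(can_access):
    prefix = ""
    while len(prefix) < 12:  # AWS account IDs are 12 digits
        for digit in "0123456789":
            if can_access(prefix + digit + "*"):
                prefix += digit
                break
        else:
            raise RuntimeError("no digit matched; oracle is inconsistent")
    return prefix
```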
This tool automates the process:

```bash
# Installation
pipx install s3-account-search
pip install s3-account-search
# With a bucket
s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket
# With an object
s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket/path/to/object.ext
```

This technique also works with API Gateway URLs, Lambda URLs, Data Exchange data sets and even to get the value of tags (if you know the tag key). You can find more information in the [**original research**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) and the tool [**conditional-love**](https://github.com/plerionhq/conditional-love/) to automate this exploitation.

### Confirming a bucket belongs to an AWS account

As explained in [**this blog post**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/)**, if you have permissions to list a bucket**, it's possible to confirm the account ID the bucket belongs to by sending a request like:

```bash
curl -X GET "https://[bucketname].s3.amazonaws.com/" \
  -H "x-amz-expected-bucket-owner: [correct-account-id]"

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">...</ListBucketResult>
```

If the error is an “Access Denied” it means that the account ID was wrong.
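Since a wrong ID yields “Access Denied” while the right one returns the listing, looping over candidate IDs confirms ownership. A hedged sketch (hypothetical helper names; it needs list permission on the bucket):

```python
import urllib.request
import urllib.error

# S3 returns 403 when the x-amz-expected-bucket-owner header doesn't match
# the bucket's real owner, and the normal listing when it does.
def owner_matches(bucket: str, account_id: str) -> bool:
    req = urllib.request.Request(
        f"https://{bucket}.s3.amazonaws.com/",
        headers={"x-amz-expected-bucket-owner": account_id},
    )
    try:
        urllib.request.urlopen(req, timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 403:
            return False
        raise

# Walk a candidate list and return the first confirmed owner (check defaults
# to the live request above, but is injectable so the loop is testable offline).
def confirm_owner(bucket, candidates, check=owner_matches):
    return next((c for c in candidates if check(bucket, c)), None)
```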

### Root account email enumeration

As explained in [**this blog post**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/), it's possible to check if an email address is related to any AWS account by **trying to grant that email permissions** over an S3 bucket via ACLs. If this doesn't trigger an error, it means that the email is the root user of some AWS account:

```python
import boto3

s3_client = boto3.client("s3")
bucket_name = "a-bucket-you-control"  # placeholder: any bucket whose ACL you can edit

s3_client.put_bucket_acl(
    Bucket=bucket_name,
    AccessControlPolicy={
        'Grants': [
            {
                'Grantee': {
                    'EmailAddress': 'some@emailtotest.com',
                    'Type': 'AmazonCustomerByEmail',
                },
                'Permission': 'READ'
            },
        ],
        'Owner': {
            'DisplayName': 'Whatever',
            'ID': 'c3d78ab5093a9ab8a5184de715d409c2ab5a0e2da66f08c2f6cc5c0bdeadbeef'
        }
    }
)
```

## References

* [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ)
* [https://cloudar.be/awsblog/finding-the-account-id-of-any-public-s3-bucket/](https://cloudar.be/awsblog/finding-the-account-id-of-any-public-s3-bucket/)

{% hint style="success" %}
Learn & practice AWS Hacking:<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">\
Learn & practice GCP Hacking: <img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">[**HackTricks Training GCP Red Team Expert (GRTE)**<img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">](https://training.hacktricks.xyz/courses/grte)

<details>

<summary>Support HackTricks</summary>

* Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks_live)**.**
* **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.

</details>
{% endhint %}