# GCP - Storage Enum

{% hint style="success" %}
Learn & practice AWS Hacking:<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">\
Learn & practice GCP Hacking: <img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">[**HackTricks Training GCP Red Team Expert (GRTE)**<img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">](https://training.hacktricks.xyz/courses/grte)

<details>

<summary>Support HackTricks</summary>

* Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks_live)**.**
* **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.

</details>
{% endhint %}
## Storage

Google Cloud Platform (GCP) Storage is a **cloud-based storage solution** that provides highly durable and available object storage for unstructured data. It offers **various storage classes** based on performance, availability, and cost, including Standard, Nearline, Coldline, and Archive. GCP Storage also provides advanced features such as **lifecycle policies, versioning, and access control** to manage and secure data effectively.

A bucket can be stored in a single region, in a dual-region, or in a **multi-region (the default)**.

### Storage Types
* **Standard Storage**: This is the default storage option that **offers high-performance, low-latency access to frequently accessed data**. It is suitable for a wide range of use cases, including serving website content, streaming media, and hosting data analytics pipelines.
* **Nearline Storage**: This storage class offers **lower storage costs** and **slightly higher access costs** than Standard Storage. It is optimized for infrequently accessed data, with a minimum storage duration of 30 days. It is ideal for backup and archival purposes.
* **Coldline Storage**: This storage class is optimized for **long-term storage of infrequently accessed data**, with a minimum storage duration of 90 days. It offers **lower storage costs** than Nearline Storage, but with **higher access costs**.
* **Archive Storage**: This storage class is designed for cold data that is accessed **very infrequently**, with a minimum storage duration of 365 days. It offers the **lowest storage costs of all GCP storage options** but with the **highest access costs**. It is suitable for long-term retention of data that needs to be stored for compliance or regulatory reasons.
* **Autoclass**: If you **don't know how often you are going to access** the data, you can select Autoclass and GCP will **automatically change the storage class for you to minimize costs**.
### Access Control

By **default** it's **recommended** to control access via **IAM** (uniform bucket-level access), but it's also possible to **enable the use of ACLs** (fine-grained access).\
If you choose to only use IAM (the default) and **90 days pass**, you **won't be able to enable ACLs** for the bucket anymore.
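To check which access model a bucket uses, the following `gsutil` commands (with `bucket-name` as a placeholder for a bucket you can reach) show whether uniform bucket-level access is enforced and, when ACLs are enabled, dump them:

```bash
# Check whether uniform bucket-level access (IAM only) is enforced
gsutil uniformbucketlevelaccess get gs://bucket-name

# If ACLs are enabled (fine-grained access), dump bucket and object ACLs
gsutil acl get gs://bucket-name
gsutil acl get gs://bucket-name/object
```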
### Versioning

It's possible to enable versioning; this will **save old versions of objects inside the bucket**. It's possible to configure the **number of versions you want to keep** and even **how long** you want **noncurrent** (old) versions to live. **7 days** is the recommendation for the **Standard** class.

The **metadata of a noncurrent version is kept**. Moreover, **ACLs of noncurrent versions are also kept**, so older versions might have different ACLs from the current version.

Learn more in the [**docs**](https://cloud.google.com/storage/docs/object-versioning).
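From an attacker's perspective, a couple of `gsutil` commands check whether versioning is enabled and enumerate noncurrent versions (bucket name is a placeholder):

```bash
# Check whether versioning is enabled on the bucket
gsutil versioning get gs://bucket-name

# List ALL versions of every object (noncurrent versions are suffixed with #<generation>)
gsutil ls -a gs://bucket-name/
```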
### Retention Policy

Indicate how **long** you want to **forbid the deletion of objects inside the bucket** (very useful for compliance, at least).\
Only one of **versioning or retention policy can be enabled at the same time**.
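To check whether a bucket has a retention policy (and whether it's locked), assuming you have access via `gsutil` (bucket name is a placeholder):

```bash
# Show the bucket's retention policy: duration and whether it is locked
gsutil retention get gs://bucket-name
```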
### Encryption

By default, objects are **encrypted using Google-managed keys**, but you can also use a **key from KMS** (a customer-managed encryption key).
### Public Access

It's possible to give **external users** (logged in to GCP or not) **access to bucket content**.\
By default, when a bucket is created, the option to **expose the bucket publicly is disabled**, but with enough permissions this can be changed.

The **format of a URL** to access a bucket is **`https://storage.googleapis.com/<bucket-name>`** or **`https://<bucket-name>.storage.googleapis.com`** (both are valid).
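A minimal sketch to probe whether a bucket is exposed publicly, requesting both URL forms from above without any credentials (the bucket name is a placeholder; the status-code interpretation is an assumption: 200 usually means a public listing, 403 that the bucket exists but isn't public, 404 that it doesn't exist):

```python
import urllib.error
import urllib.request

def bucket_urls(bucket_name):
    # Both URL forms are valid for the same bucket
    return [
        f"https://storage.googleapis.com/{bucket_name}",
        f"https://{bucket_name}.storage.googleapis.com",
    ]

def check_public_bucket(bucket_name):
    for url in bucket_urls(bucket_name):
        try:
            status = urllib.request.urlopen(url).status
        except urllib.error.HTTPError as e:
            status = e.code  # 403: exists but not public, 404: no such bucket
        print(url, status)
```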
### HMAC Keys

An HMAC key is a type of _credential_ and can be **associated with a service account or a user account in Cloud Storage**. You use an HMAC key to create _signatures_ which are then included in requests to Cloud Storage. Signatures show that a **given request is authorized by the user or service account**.

HMAC keys have two primary pieces: an _access ID_ and a _secret_.

* **Access ID**: An alphanumeric string linked to a specific service or user account. When linked to a service account, the string is 61 characters long, and when linked to a user account, it is 24 characters long. An example of an access ID:

  `GOOGTS7C7FUP3AIRVJTE2BCDKINBTES3HC2GY5CBFJDCQ2SYHV6A6XXVTJFSA`
* **Secret**: A 40-character Base-64 encoded string that is linked to a specific access ID. A secret is a preshared key that only you and Cloud Storage know. You use your secret to create signatures as part of the authentication process. An example of a secret:

  `bGoa+V7g/yqDXvKRqq+JTFn4uQZbPiQJo4pf9RzJ`

Both the **access ID and secret uniquely identify an HMAC key**, but the secret is much more sensitive information, because it's used to **create signatures**.
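As an illustrative sketch of how the two pieces are used together (a simplified legacy V2-style string-to-sign for the XML API; the exact header scheme and canonicalization shown here are assumptions, not the authoritative signing spec):

```python
import base64
import hashlib
import hmac

def sign_request(access_id, secret, method, bucket, object_name, date):
    # Simplified V2-style string-to-sign (no extra headers, no content type)
    string_to_sign = f"{method}\n\n\n{date}\n/{bucket}/{object_name}"
    # The secret is the HMAC key; the signature proves knowledge of it
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    # The access ID identifies the key; this value would go in the Authorization header
    return f"GOOG1 {access_id}:{signature}"
```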
### Enumeration

```bash
# List all storage buckets in the project
gsutil ls

# Get each bucket's configuration (protections, ACLs, times, configs...)
gsutil ls -L

# List contents of a specific bucket
gsutil ls gs://bucket-name/
gsutil ls -r gs://bucket-name/ # Recursive
gsutil ls -a gs://bucket-name/ # Get ALL versions of objects

# Cat the content of a file without copying it locally
gsutil cat 'gs://bucket-name/folder/object'
gsutil cat 'gs://bucket-name/folder/object#<num>' # cat a specific version

# Copy an object from the bucket to your local storage for review
gsutil cp gs://bucket-name/folder/object ~/

# List using a raw OAuth token
## Useful because "CLOUDSDK_AUTH_ACCESS_TOKEN" and "gcloud config set auth/access_token_file" don't work with gsutil
curl -H "Authorization: Bearer $TOKEN" "https://storage.googleapis.com/storage/v1/b/<storage-name>/o"
# Download file content from a bucket
curl -H "Authorization: Bearer $TOKEN" "https://storage.googleapis.com/storage/v1/b/supportstorage-58249/o/flag.txt?alt=media" --output -

# Enumerate HMAC keys
gsutil hmac list

# Get permissions
gcloud storage buckets get-iam-policy gs://bucket-name/
gcloud storage objects get-iam-policy gs://bucket-name/folder/object
```
If you get a permission denied error while listing buckets, you may still have access to their content. So, now that you know the naming convention of the buckets, you can generate a list of possible names and try to access them:

```bash
for i in $(cat wordlist.txt); do gsutil ls -r gs://"$i"; done
```
With the permissions `storage.objects.list` and `storage.objects.get`, you should be able to enumerate all folders and files from the bucket in order to download them. You can achieve that with this Python script:

```python
import requests
import xml.etree.ElementTree as ET

def list_bucket_objects(bucket_name, prefix='', marker=None):
    url = f"https://storage.googleapis.com/{bucket_name}?prefix={prefix}"
    if marker:
        url += f"&marker={marker}"
    response = requests.get(url)
    root = ET.fromstring(response.content)
    ns = {'ns': 'http://doc.s3.amazonaws.com/2006-03-01'}
    for contents in root.findall('.//ns:Contents', namespaces=ns):
        key = contents.find('ns:Key', namespaces=ns).text
        print(key)
    next_marker = root.find('ns:NextMarker', namespaces=ns)
    if next_marker is not None:
        # Paginate through the rest of the listing
        list_bucket_objects(bucket_name, prefix, next_marker.text)

list_bucket_objects('<storage-name>')
```
### Privilege Escalation

In the following page you can check how to **abuse storage permissions to escalate privileges**:

{% content-ref url="../gcp-privilege-escalation/gcp-storage-privesc.md" %}
[gcp-storage-privesc.md](../gcp-privilege-escalation/gcp-storage-privesc.md)
{% endcontent-ref %}

### Unauthenticated Enum

{% content-ref url="../gcp-unauthenticated-enum-and-access/gcp-storage-unauthenticated-enum/" %}
[gcp-storage-unauthenticated-enum](../gcp-unauthenticated-enum-and-access/gcp-storage-unauthenticated-enum/)
{% endcontent-ref %}

### Post Exploitation

{% content-ref url="../gcp-post-exploitation/gcp-storage-post-exploitation.md" %}
[gcp-storage-post-exploitation.md](../gcp-post-exploitation/gcp-storage-post-exploitation.md)
{% endcontent-ref %}

### Persistence

{% content-ref url="../gcp-persistence/gcp-storage-persistence.md" %}
[gcp-storage-persistence.md](../gcp-persistence/gcp-storage-persistence.md)
{% endcontent-ref %}
{% hint style="success" %}
Learn & practice AWS Hacking:<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" data-size="line">\
Learn & practice GCP Hacking: <img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">[**HackTricks Training GCP Red Team Expert (GRTE)**<img src="../../../.gitbook/assets/image (2) (1).png" alt="" data-size="line">](https://training.hacktricks.xyz/courses/grte)

<details>

<summary>Support HackTricks</summary>

* Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks_live)**.**
* **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.

</details>
{% endhint %}